AI Is Not A Black Box
- Dan Greenberg

- Nov 5
Organizational and business leaders are struggling with how to build for the future. This makes sense because we live in a world where the promise of AI is very real and apparent, but the current output is less reliable and transcendent than the promise. In this article, I am going to focus on revenue organizations, but this logic can be applied across functions.
Business leaders across industries are embracing AI, partially because they want to and partially because they have to. Imagine being a CEO and telling your board that you aren’t leveraging AI tools; that is an expensive ticket to the land of severance and headhunter phone calls. So, the correct answer for any business leader is, “Yes, we are leveraging AI tools and looking toward a future where we can build more automated workflows, and achieve more efficiency through AI usage”. This is a great answer, the only problem with it is that it inevitably leads to the next logical question from the board, the boss, or any other investor. And that question is, “How is it affecting the bottom line?”
The only real way for AI usage to affect the bottom line is for tools to replace humans. Tools cost less than the employees they replace, so costs come down without affecting revenue, and, voilà! An improved bottom line.
The only slight hitch with the picture painted above is that it doesn't work yet. Certainly, there are functional business areas and specific workflows that can be automated with AI tools. We can see that across businesses, and it is evident from recent large layoffs at companies like Amazon. However, at scale, the outputs generated by AI tools cannot replace the work humans do, not yet anyway. So, what we see is a lot of stagnation. This is evident in the economic numbers, and the reason it is happening is that business leaders still need people, but they have already told their investors that the AI is doing the work. So, they can't get rid of most people, but they also can't hire. Stagnation.
Looking at it through a technological lens, what we are saying is that the promise of AI has not yet been met by the operational usability of AI tools for most functional business areas. This mismatch between the technology and the outputs exists for two reasons. The first is that the technology is so new that companies are building tools quickly and getting them in market, but the market is not yet mature enough to create a cohesive product feedback loop. So we are left with exciting capabilities and usability that is wanting. This issue is somewhat transitory and will improve over time.
The second reason is more enduring: AI tends toward generic outputs. Large Language Models, the foundation for most AI outputs, work on numerical representations of text, called vectors. Given an input, they generate the most statistically probable continuation and serve it up to the user. Most of the time these answers are right, and often they are very helpful, save us much time, and allow us to get much more done. However, they are generic, and that means they still need a thinking human to apply them to the specific situation being solved for. This is especially true in revenue organizations when dealing with prospects and clients, where it is not just about the right answer; it is about resonating with someone who is reluctant to commit time to listen to what you have to say. I am not saying anything radical here, but the result of all of this is that we cannot yet rely on AI, and we cannot yet pass off entire tasks to AI across the majority of business functions. AI is still a tool to be used by humans, not a replacement.
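That pull toward the statistically likely answer can be illustrated with a toy decoding step. This is a minimal sketch, not a real model: the candidate words and scores below are invented for illustration, and real LLMs score tens of thousands of tokens at each step.

```python
import math

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-word scores a model might assign after
# "Thanks for your". Higher score = more common in training data.
candidates = ["time", "patience", "consideration", "candor", "moxie"]
logits = [4.0, 3.1, 2.5, 0.8, -1.0]

probs = softmax(logits)

# Greedy decoding: always pick the single most probable continuation.
best_word, best_prob = max(zip(candidates, probs), key=lambda p: p[1])
print(best_word)  # the safest, most generic word wins every time
```

The distinctive word ("candor", "moxie") is never chosen under greedy decoding, no matter how often you run it, which is exactly the flattening toward the generic described above.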
We see this across industries, but I'll give one example. The media industry has been using algorithmic models to match ads with media inventory and serve them to people for 25 years. The industry now mostly uses AI tools. The tools themselves are incredible feats of technology, but the improvements are modest because the algorithms were already good, and because the output from the AI tools can't be leveraged by humans in a complementary way that allows revolutionary technology to translate into revolutionary results.
The technology and the promise of AI are a revolutionary improvement on what we have had in the past. However, the output and results of AI tools are only incrementally better than what we have had in the past.
There is a fundamental barrier preventing revolutionary results, and that is the lack of compatibility between AI output and the human ability to leverage it. In other words, the AI knows the perfect answer, but it cannot express that answer in a way humans can understand and use, because the answer can only be expressed with a massive amount of data. At the same time, that perfect answer is generic and lacks nuance. This means that in order for the AI to be usable by a human, the output must be simplified into something less accurate, and in order for it to be highly useful, it must be augmented by human nuance. Until AI agents can act effectively on the analysis done by AI, we need to contend with the issue of compatibility between the AI tools doing the analysis and the humans taking the action.
If you inspect the conclusions drawn above, you will see that the outputs from AI are much more powerful than before, and the promise is even more so, but the results are not revolutionary, and therefore AI is not a black box. We have seen these types of incremental output improvements before, and we have a sense of what to do with them. So, let's return to the original problem posed above: how do you build an organizational structure for 'now' that supports the use of AI tools and is prepared for future advancements in AI technology and outputs? The answer is that AI tools finally give us the ability to flatten our organizations and empower our top performers, something we have wanted to do for some time but have not been able to do effectively.
I'll be spending more time on organizational structure and process in a future article, but here is the CliffsNotes version. The traditional structure of an organization is pyramid shaped. It is this way because we need centralized direction and process to flow down through the levels so that multiple people operate using the same procedures. This is necessary in order to attain scale and measure success. However, because AI tools can replace human research, data manipulation, and data analysis, and because these tools can help us personalize, we now have the ability to transfer those functions to our tools. Much more importantly, we have the ability to decentralize and localize process based on the needs of individual books of business. High performing sellers can work with tools to make localized decisions on how to allocate time and resources and how to address their individual book of business. High performing marketers can work with tools to make localized decisions on how to construct and distribute content, and on where to target time and resources from a cohort perspective. We certainly still need some layer of leadership, because strategy and direction must be set at levels above process decisions, but we don't need as many layers as we have had, and we can distribute that decision-making power to our top performers.
All that may seem logical, but here is the key to making it work. We spent significant time above discussing the present incompatibility of humans and AI tools, so we still need a function that sits between the high performing person and the AI tool. Your high performing sellers, marketers, and relationship builders are not the people who are going to learn to use and interact with the AI tools. And even if some will, that is not where organizations want their high performers spending their time. The connecting function has to be human, and it has to be a person who will translate the massive amount of data from the tools into the important pieces for the high performers, so that they can add the human touch and nuance of an expert. That person also has to input messy real-world data into the AI tools in an efficient and accurate way. In other words, we need a new kind of expert: one who can smooth over all of the problems generated by the incompatibility of our human brains and the AI tools. When we have that in place, we can create a working functional pod that includes the right tools, the high performer who understands the market and the human interactions, and the expert who can liaise between the two.
The idea is that we are not solving for how to use AI tools. We are solving for how to make current outputs more compatible with human uses. Once we orient the problem in that way it becomes a lot easier to see how to harness AI now, and into the future.