- Discover why critics believe our current approach to AI is fundamentally flawed, highlighting the need to prioritize decision quality and trustworthy guidance over raw productivity.
Usually, when we discuss AI, we focus on a single metric: productivity. Virtually every technology announcement since the dawn of the modern tech era has leaned on that measure.
Rewinding to my early days as an external tech analyst, during the run-up to the Windows 95 launch, the claim was that the product would boost productivity enough to pay for itself within a year of purchase. In practice, it broke so many things in its first year that its initial impact on productivity was negative, not positive.
AI’s return on investment may be far worse, and oddly, a large portion of our problems this century have had less to do with performance or productivity than with inadequate decision support.
I attended a Computex prep event last week. Watching the talks, I noticed a familiar emphasis on efficiency. I continue to worry that we may not survive if we greatly increase speed without simultaneously improving the quality of the decisions made at that speed.
This week, let’s talk about that. Then I’ll close with my Product of the Week: the airline I recently flew to Taiwan. It was so much better than United, which I generally use for overseas flights, that I figured I’d share why so many non-US airlines outclass US carriers.
Quality versus Productivity
I used to work at IBM and was one of a select few employees who completed IBM’s executive training program during my time there. One of the values instilled in every employee was the importance of quality.
Yet the course I remember best came not from IBM but from the Society of Competitive Intelligence Professionals (SCIP). It centered on direction versus speed. The lecturer argued that when it comes to new processes and technologies, most businesses prioritize speed above all else.
According to him, if direction is not your primary concern, you will ultimately find yourself traveling faster and faster in the wrong direction. Speed won’t help you if you don’t first focus on identifying the objective; it will only make things worse.
As a competitive analyst at both Siemens and IBM, I had the unpleasant experience of providing decision support and having our recommendations not only disregarded but actively resisted. The result was disastrous losses and the demise of several organizations I was involved with.
The cause was that executives preferred to seem right rather than be right. My unit was eventually disbanded (a trend that swept the entire industry) because executives didn’t want to be called out, after a disastrous failure, for disregarding sound advice when their “gut” had told them their predetermined course must be better, even though it often wasn’t.
When I left corporate life to work as an external analyst, I was astounded to discover that executives were more likely to heed my recommendations because they no longer perceived my being right as a threat to their careers.
Inside the company, I had been viewed as a career risk; from the outside, I wasn’t. Because they didn’t feel they were competing with me, they were more open to listening and to adopting a different approach.
Executives should be able to make better decisions than ever because they have access to vast volumes of data. Yet I still see far too many people making ill-informed choices with disastrous consequences.
Thus, AI’s primary purpose should be to help businesses make better decisions; productivity and performance should come second. If you prioritize speed over making sure the decision guiding you is the right one, you are simply more likely to move in the wrong direction much faster, which leads to more frequent and more costly errors.
Difficulties in Making Decisions
AI lets us make judgments more quickly in both our personal and professional lives, but the quality of those decisions is declining. Look back and you’ll see that for much of their histories, especially this century, Microsoft and Intel, two of the main proponents of the current wave of AI technology, made poor choices that cost each of them one or more CEOs.
My longtime friend Steve Ballmer seemed doomed to make one terrible decision after another, and I still believe this was more a function of the people around him than of anything intrinsic to the man.
He is possibly the smartest person I’ve ever met and was at the top of his class at Harvard. He is credited with the Xbox’s success, and he managed Microsoft’s financial performance admirably. However, his failures with the Zune, Windows Phone, and Yahoo severely damaged Microsoft’s valuation and ultimately led to his termination.
I was initially tasked, along with a few other analysts, with helping him make better decisions. But even though I wrote email after email warning that he would be dismissed if he did not choose more wisely, we were all ignored almost instantly. Sadly, my attempts merely made him angrier. His failure still feels like my own, and it will always trouble me.
This issue mirrors what happened to IBM’s John Akers, who was surrounded by people who withheld information coming from those of us closer to the problems. Although my efforts at IBM to address the company’s troubles were rewarded, Akers lost his job because the input of people like me, and there were many of us, was so little valued. It wasn’t that he was unintelligent or unresponsive. It was that the executives who had his ear blocked us because they didn’t want to lose the status that came with that access.
As a result, the CEOs of both companies were denied information they needed to succeed by the very people they trusted, people whose priorities were status and access rather than the success of the companies they worked for.
The Dual Nature of the AI Decision Problem
First, despite their amazing potential, we know that AI efforts often produce badly erroneous or incomplete results. In an evaluation of the leading AI products, The Wall Street Journal found that Microsoft’s Copilot and Google’s Gemini, despite being the most popular, were for the most part the lowest in quality.
Second, as I mentioned earlier, CEOs can choose to trust their instincts over any information a system provides, even when, given their track record, the system’s results would be far more accurate. This may lessen the impact of these products’ quality problems, but the net effect is a system that cannot, or will not, be trusted.
Even if AI’s quality problems are fixed, it will still fall short of its potential to improve corporate and government outcomes, because AI’s current shortcomings legitimize and reinforce the poor behavior that existed before this generation of the technology.
Concluding
At the moment, we need speed (productivity, performance) far less than we need the technology delivering it to be reliable and worthy of our trust. Even if we solved that problem, however, argumentative theory suggests we still would not use the technology to support better decisions, since we generally cannot regard internal advice as anything other than a threat to our reputation, status, and careers.
There is some validity to this fear, since people may eventually decide you are unnecessary if they discover your decisions depend on AI assistance.
To avoid being inundated with poor choices and bad guidance at machine speeds, we must shift our attention from productivity-focused AI to significantly higher quality and better decision support.
Then, to move forward effectively at machine speeds instead of being buried by poor decisions at the same pace, we must actively train people to accept sound counsel. And rather than making people fear that using AI will jeopardize their jobs and careers, we should commend them for using it well.
AI has the potential to improve the world, but only if it produces high-quality results that people can use to guide their decisions.