Our world at present seems positively addicted to apocalyptic predictions, and the launch of ChatGPT has ignited a torrent of articles about the dire consequences of sentient Artificial Intelligence. Google will be dead in a year; massive unemployment looms in journalism, in consultancy, in education, in all information-based or communications-based industries; soon mankind will groan beneath the metal heels of robot overlords (that is, assuming mankind is not directly obliterated by next Thursday); etc.
No, I am not exaggerating. Eliezer Yudkowsky, the iconic AI scientist who leads the Machine Intelligence Research Institute, observed in Time magazine at the end of March, “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”
Yudkowsky’s concerns are widely shared: whole swaths of our thought leadership, from Elon Musk to Steve Wozniak, along with some 15,000 others, have signed an open letter calling for a six-month pause in the development of AIs more powerful than the current models. And then there is Google’s former “Godfather of Artificial Intelligence,” Geoffrey Hinton, who spoke this week about the risks posed by AI.
I confess to a certain déjà vu concerning all this. In my novel Chaotic Butterfly, one of the key themes was the development and application of artificial intelligence to business. My fictional chief developer, Edward Cioran, aimed to engineer and deploy an AI program that promised to make better business decisions than the human personnel in the firm—with the ultimate goal of turning businesses into stand-alone sentient AI entities operating on their own. No need for inefficient meat puppets like Boards of Directors, C-level executives, or supply chain consultants! (Things do not end well for poor Edward, but those interested in more detail should read the book.)
Fiction and narrative are sometimes better predictors of the future than fact. What my story predicted was that while AI certainly has valuable business applications, expectations of apocalyptic disaster, like intoxication over possible sci-fi utopian acceleration, would be much overblown.
So with ChatGPT. Like most everyone, I have been surprised, intrigued, impressed and somewhat unnerved by ChatGPT. But, like the pandemic, like Trump, like hyperinflation, like Ukraine, while there’s certainly cause for attention and concern, there’s no call for panic. Yes, ChatGPT is here now, and over a million people so far are interacting with it. But the program is far, far from anything like fully developed AI.
What is ChatGPT, exactly? Essentially, a website—chat.openai.com—that manages to give a lucid and plausible-sounding reply to whatever question you ask it. Not a reply that is necessarily true, but almost always a reply that sounds cogent and reasonable and well-informed and adult. In short, it is a search engine—a search engine that uses artificial intelligence to combine highly sophisticated natural language processing with lightning-speed access to the internet and databases, answering you the way an intelligent human being would.
This is why it’s said to pose a threat to Google and to all conventional search engines. Ask Google a question and it presents several gazillion websites that in some way may contain the answer, or an approach to an answer. You have to click on those websites, search for yourself, pull together bits of information from several sources, and draw your own conclusions.
ChatGPT does what Google does, but instead of providing you with a sprawling buffet of information beyond your capacity to ever digest, it gives you a smooth-sounding reply that suavely packages five or six leading responses in an almost conversational form as ‘the’ answer to your question. Ask Google “What time is it?” and you will get 15,080,000,000 results. (I know because I just sent that request to Google.) Ask ChatGPT the same question, and it will say something like, “Greenwich Mean Time reports that it is currently 14:47 UTC (Coordinated Universal Time), but that at your present location in Lausanne, the time is 16:47 CET. Is that helpful?”
“Yes, it is,” you respond.
“I’m glad,” says ChatGPT with a twinkle in its algorithmic dimple. “Is there any other question I can help you with?”
The potential impact of this should not be underestimated. In the high-pressure context of gaining competitive business advantage, we want clear reasonable actionable answers, and we want them fast. ChatGPT serves them up in mere seconds.
But are they good answers? Are they trustworthy?
Yes and no.
The sophistication of ChatGPT is such that I have no doubt that second-tier supply chain consultants will use it to generate content for reports and summaries, so as to give their documents the appearance of thoughtful, well-researched input. That may impress the client—at least until the client receives an equally thoughtful and well-researched proposal in the same style from another ChatGPT pseudo-consultant supporting perfectly opposed actions.
But I doubt that either will prove a threat to really top-tier consultants—quite the contrary.
Ultimately what drives decision makers is trust in the person giving them the data and the advice they need to move forward. That trust will never be given to a machine-generated response, particularly when an equally plausible and completely opposite response can be generated at the push of a button. That’s why we put our investment dollars not into cutting-edge AI investment software, but into the age-spotted hands of Warren Buffett. We like Mr. Buffett. We know of his decades of experience and success. We trust him.
AI has no life experience. After all, it’s not alive. The responses it gives are fundamentally driven by access to digital data, and much of what informs the judgments of truly sterling consultants is not that. There is critical information that never reaches the official databases. To find out what’s really happening in a business and its processes, sometimes the key is just to sit down over a private coffee in the break room with a tired production manager, or chat over a sandwich with an assembly line worker taking a break. ChatGPT can’t do that. No lips.
When business leaders make decisions, they know very well that the survival of the business, and their own career survival, may rest on those decisions.
They want more than program-generated text suggesting they take one action when they know full well that the same program can generate equally persuasive text arguing for a completely different action.
They want something—someone—who can provide arguments carrying the weight of personal commitment and personal judgment.
ChatGPT may be able to provide a professional veneer to a report, but when the supposed writer of that report has to present it and discuss it in person, the veneer will crack. Like the student trying to pass off ChatGPT-written essays to the teacher, the attempted shortcut will cut both ways.
After all, unlike human consultants, ChatGPT has no stake in the game. It does not have to make the right call; it doesn’t care. It’s just a set of algorithms. Intangibles—like personal commitment, like intuition, like inspiration, like flashes of insight or even genius—are not part of its program.
That said, ChatGPT certainly has vast potential as a tool for supply chain consultants and analysts, not to mention suppliers and companies at almost any stage of the supply chain process.
ChatGPT can help automate routine tasks such as answering customer queries or providing product information. It can help predict demand and forecast sales. Ask ChatGPT to identify inefficiencies and recommend alternative solutions and it will. Ask it to analyze transportation routes and suggest more efficient ways to move goods and it will do that too. It can help develop contingency plans to mitigate risks in unstable times and to ensure continuity of operations.
It can spark good thinking about supply chain operations and possibilities—in partnership with the human element. In that spirit I certainly intend to look into it further.
And yet… I have to admit, I have a nagging discomfort about the overall impact of ChatGPT and its sister programs. Smart supply chain analysts will look at such tools coolly and critically and with a certain grain of salt. They will become expert in their uses, and remain well aware of their limitations.
But will the public?
In his book Understanding Media, the great media analyst and philosopher Marshall McLuhan made a striking observation: “The wheel,” he wrote, “is an extension of the foot. The book is an extension of the eye. Clothing, an extension of the skin. Electric circuitry, an extension of the central nervous system.”
He was making the point that our tools extend our natural abilities—but at the same time they sap them, for “Every extension is also an amputation”: we drive, so we no longer walk; and our legs grow weaker. Staring into our televisions and computers and smartphones, we watch; so we no longer live.
If every extension is an amputation, and if ChatGPT-style technologies are extending our capacity to articulate, to summarize, to reason, then what are we amputating?
Possibly our ability to think. For if ChatGPT can seemingly answer all our questions, why bother to work any answers out for ourselves?
Perhaps the danger is not, as my character Edward Cioran suggested, that over time companies may become increasingly sentient.
The danger is that human beings will become less so.