That’s no longer the question.
The real question is…
Actually, it is not a single question.
There are questions:
– What to AI?
– When to AI?
– Where to AI?
– Who will AI?
– How much to AI?
These are reasonable questions, but we are not there yet.
When we experience a technological leap, people get divided into believers and non-believers. Embracers and rejecters. For and against. This is completely natural.[1] It is similar to being thrown into a war. A state of uncertainty.
Once we come to terms with such a shock, we begin to ask better questions. Not for a philosophical discussion, but to understand what we should do with this new situation and how we fit into the new context.
So naturally, when ChatGPT (running on GPT-3.5) burst noisily onto the scene in November 2022, the world was divided into two groups: a doomsday group and a rosy-glassed group. Some people in both groups got seriously radicalized.
It is important that we briefly talk about what these zealots did, are still doing, and will keep doing.
On one side, the ROSY-GLASSED zealots began dreaming and conjuring up things that AI could do, but they never asked, “Should it do that?” Take Microsoft’s much-maligned AI feature Recall.[2] Or take LinkedIn’s AI writing feature and its “contribute expertise” harassment. These are mass experiments on the world, run because a mad scientist was absorbed by his invention.
On the other side, the DOOMSDAY zealots began attacking the technology and the tech bros with some of the oldest tricks in the book: regulations, taxes, strikes, and lawfare. For example, many of the most technologically savvy universities have begun regulating the use of AI in their classrooms, in some cases going as far as banning it. At the government level, every major economy has begun enacting laws or “frameworks” to regulate AI and “protect” its citizens.[3]
So that’s why public opinion still vacillates in response to the question “To AI or Not To AI?” One faction of the public shouts “yessss!” while another cries “noooo!”
This is not new. Historically, public opinion has been shaped by journalists—a category that no longer keeps a journal. Most journalists are now trained to be newsmakers, so it is difficult to expect intellectually honest opinion-making from them. For example, when newspapers printed that the increase in the short-term capital gains tax from 15% to 20% was a rise of 5% (it is a rise of five percentage points, or roughly a 33% increase in the rate), we have to understand that the newsmakers have stopped thinking. And if they can’t get basic arithmetic right, we can’t expect them to understand how AI systems, tokenization, or models work and communicate it to the public—a public with an average IQ of 100.
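To make the arithmetic concrete, here is a minimal sketch of the distinction the newsmakers missed (the tax rates are the ones from the example above):

```python
# A jump from 15% to 20% is 5 percentage POINTS,
# but the RELATIVE increase in the tax rate is much larger.
old_rate = 0.15
new_rate = 0.20

point_change = (new_rate - old_rate) * 100          # change in percentage points
relative_change = (new_rate - old_rate) / old_rate  # relative (percent) increase

print(f"{point_change:.0f} percentage points")      # 5 percentage points
print(f"{relative_change:.1%} relative increase")   # 33.3% relative increase
```

Reporting the first number as “a rise of 5%” conflates percentage points with percent, which is exactly the kind of sloppiness the essay is pointing at.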
And that leads us to the current dilemma of the tech business founders: “to AI or not to AI?”
Businessmen[4] live among the public; they’re not isolated. Being pulled by the two extreme poles of public opinion doesn’t help them form a useful opinion about which way to move ahead. So they move ahead in doubt, whether they choose to do AI or not to do AI.
Generally speaking, if you think about it, doubts are of two types. In the first type, an individual doubts whether something will work out in the end. In the second type, the individual doubts whether they should do something at all. The second type of doubt is a dilemma, and that’s the main problem.
Because when we are afflicted with this second type of doubt about any new technology, it blurs our vision and makes us indecisive. This type of doubt is the main obstacle to fulfilling our potential, serving our customers, and contributing to the world.[5]
The first type of doubt is net positive. In reality, it is often the fuel that pushes the ship of progress forward. When determined people are filled with the first type of doubt, they become more determined to overcome it by doing something about it.
This also means that when tech businessmen are filled with doubt about whether AI integration will work for their product, they will figure out the answer by doing things, and eventually they will find one. Surprisingly, the answer will not be BLACK AND WHITE. In fact, there will not be one answer; there will be answers. Nuanced ones. They will answer the questions we posed at the beginning of this essay.
Take, for example, “What to AI?”
After months of struggle, a businessman’s tech team may find that their image-editing software doesn’t require generative AI, or a non-profit founder’s tech team may report that their beneficiaries prefer chatting with real humans over an AI bot. That’s real-world feedback. It’s not coming from a journalist’s ideological utopia that doesn’t exist.
The real world may also answer the “When to AI?” question by telling us when in the customer journey we should involve AI, or when in its growth journey a business should think about doing AI: when it hits 5 years, 50,000 customers, or $200k in revenue? The real world may answer “Where to AI?” by telling us which parts of the product to AI and which parts to leave untouched.
The most important question, and the one we think about least, is “How much to AI?” It forces us not to go to either extreme and to stay within reason.
AI technology is not a deadly airborne virus that forces us to ask, “Should I wear a mask or not?” It’s just another technology, like nuclear power, that can be used for progress.[6]
Let’s hope that we get out of this rut of choosing between “to AI or not to AI” soon and make some real contribution to the world.
FOOTNOTES:
[1] I have a theory about this natural reaction. Or what I’m calling a natural reaction. Evolution has instilled the priority of survival in our brains. When we face something absolutely novel, our first question requires a binary answer: to flee or to freeze. To stay or to leave?
[2] https://learn.microsoft.com/en-us/windows/ai/apis/recall Recall utilizes Windows Copilot Runtime to help you find anything you’ve seen on your PC. Search using any clues you remember or use the timeline to scroll through your past activity, including apps, documents, and websites.
[3] I have used double quotes to communicate the other meaning that goes along with these words: framework and protect. Recent COVID-era government behaviour across the world has shown the true nature of governments’ hunger to hold onto and amass power through censorship, propaganda, and force. In the case of AI, governments have realized its infinite potential for their own use: to surveil, dox, and ultimately control every aspect of the common man’s life. Before the recent AI breakthrough, this wasn’t possible with such efficiency.
[4] No, that’s not a mistake. Over 90% of AI tech business founders are men. Statistics aside, the main reason for using “businessmen” is my wish to write what I want to write.
[5] To be clear, I’m not implying that choosing “not to do AI” is equal to “not making progress or not serving our customers.” When the choice of “not to do AI” is backed with reason in the immediate context, it is an appreciable decision.
[6] That reminds me of a recent research paper on how, after the 1986 Chernobyl nuclear disaster, global regulations led by the US substantially increased the cost of nuclear energy, and construction of new Nuclear Power Plants (NPPs) stagnated. Thanks to global oil lobbyists and targeted propaganda that got the public on their side. Nuclear power remains the cleanest and most reliable power source to date, but public opinion has been changed through years of false narrative. Find the paper here: https://conference.nber.org/conf_papers/f205791.pdf