ChatGPT Is Changing the Game in AI
OpenAI’s large language model, now used by more than a million people, is quite capable of writing convincing prose and code; professionals are processing its implications
By John P. Desmond, Editor, AI in Business

The ChatGPT model that OpenAI recently made publicly available is very capable of answering questions like a person, chatting, creating content, writing code, taking tests and manipulating data. Professionals in a range of industries are working to process the implications.
Educators, for example, are wondering how they will be able to distinguish original writing from an essay generated by the algorithm.
At least one observer believes it was not responsible of OpenAI to release ChatGPT. Economist, venture capitalist and MIT fellow Paul Kedrosky tweeted after the release, “[S]hame on OpenAI for launching this pocket nuclear bomb without restrictions into an unprepared society. I obviously feel ChatGPT (and its ilk) should be withdrawn immediately. And, if ever re-introduced, only with tight restrictions,” according to a recent account in TechCrunch.
Kedrosky heard from a colleague at UCLA who is perplexed about what to do with essays at the end of the current term. “They have no idea anymore what’s fake and what’s not,” he stated of the faculty.
He also sees the release of ChatGPT having legal implications. “We’re going to see new energy for the ongoing lawsuit against Microsoft and OpenAI over copyright infringement in the context of training machine learning algorithms,” Kedrosky stated.
(That lawsuit, originating in California, is a class action suit against Microsoft, GitHub and OpenAI that claims the AI-powered Copilot programming tool infringes on copyright by using millions of lines of human-written code without proper attribution.)
Asked whether it would be plagiarism for a student to use ChatGPT to write a paper, Kedrosky stated, “The purpose of writing an essay is to prove that you can think, so this short circuits the process and defeats the purpose.”
ChatGPT is Free in the Research Preview Stage
OpenAI released ChatGPT on November 30. In a press release, OpenAI stated, “We are excited to introduce ChatGPT to get users’ feedback and learn about its strengths and weaknesses. During the research preview, usage of ChatGPT is free.” A link was given for free access; when the link is tried and the sign-up screening is passed, the user gets a message stating, “This is a free research project. Our goal is to get external feedback in order to improve our systems and make them safer. While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.”
Then under a heading of “How we collect data,” it states, “Conversations may be reviewed by our AI trainers to improve our systems. Please don’t share any sensitive information in your conversations.”
Then, “The system is optimized for dialogue. Let us know if a particular response was good or unhelpful. Share your feedback in our Discord server,” which is linked. If you get in and ask the question, “How is ChatGPT being received?,” the response is, “I’m sorry, but I am not able to browse the internet or access information about current events. I am a large language model trained by OpenAI, and my knowledge is limited to the text I have been trained on. My knowledge cutoff is September 2021, so I don’t have any information about ChatGPT or how it is being received. Can I help you with something else?” (Checking in with ChatGPT a few days later, the system was unavailable; a message said it was at capacity and that OpenAI was working to support more users.)
One Observer Sees ChatGPT Going After Google, Bing Search Engines
For a recent account in The New York Times, author Cade Metz, a technology correspondent, interviewed scientists who built ChatGPT and many users of it. He stated in part, “After the release of ChatGPT — which has been used by more than a million people — many experts believe these new chatbots are poised to reinvent or even replace internet search engines like Google and Bing.”
Aaron Levie, chief executive of Box, a company offering cloud-based content management, stated, “You now have a computer that can answer any question in a way that makes sense to a human.” He has been exploring ways the new chatbot will change things. He stated further, “It can extrapolate and take ideas from different contexts and merge them together.”
Naturally, many like Kedrosky are concerned about the potential for abuse. While chatbots such as ChatGPT provide answers with what seems like complete confidence, “They do not always tell the truth. Sometimes, they even fail at simple arithmetic. They blend fact with fiction,” Metz wrote.
In a problem AI scientists are calling “hallucination,” advanced chatbots “have a way of taking what they have learned and reshaping it into something new — with no regard for whether it is true,” Metz stated.
People testing the system have been asked to rate its performance. The ratings are used to more carefully define what the system does and does not do well, using the AI technique of reinforcement learning. “This allows us to get to the point where the model can interact with you and admit when it’s wrong,” stated Mira Murati, OpenAI’s chief technology officer. “It can reject something that is inappropriate, and it can challenge a question or a premise that is incorrect.”
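The rating-driven approach Murati describes is often implemented with a pairwise-preference loss: human raters pick the better of two responses, and a reward model is fit so preferred responses score higher. The toy sketch below illustrates that idea only; the feature vectors, preference pairs and linear reward model are invented for the example and are not OpenAI’s actual implementation.

```python
import math

# Hypothetical feature vectors for three candidate responses
# (e.g., a factuality signal and a helpfulness signal).
responses = {
    "a": [0.9, 0.2],
    "b": [0.1, 0.8],
    "c": [0.5, 0.5],
}

# Human rater judgments as (preferred, rejected) pairs.
preferences = [("b", "a"), ("b", "c"), ("c", "a")]

w = [0.0, 0.0]  # reward-model weights, learned from the ratings

def reward(resp):
    """Scalar reward score for a response under the current weights."""
    return sum(wi * xi for wi, xi in zip(w, responses[resp]))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Minimize the pairwise logistic loss -log(sigmoid(r_pref - r_rej))
# by gradient descent, so preferred responses earn higher reward.
lr = 0.5
for _ in range(200):
    for pref, rej in preferences:
        p = sigmoid(reward(pref) - reward(rej))
        for i in range(len(w)):
            w[i] += lr * (1 - p) * (responses[pref][i] - responses[rej][i])

# The trained reward model now ranks responses as the raters did.
ranked = sorted(responses, key=reward, reverse=True)
print(ranked)  # → ['b', 'c', 'a']
```

In full RLHF pipelines this learned reward model is then used to fine-tune the language model itself; the sketch stops at the reward-fitting step, which is where the human ratings enter.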
Figure Out How to Adapt, University of Toronto Academic Suggests
Now that ChatGPT is released, it is incumbent on knowledge workers to start to gain experience with it and figure out how to use it, suggests one academic. Writing in the Harvard Business Review recently, coauthor Ajay Agrawal of the University of Toronto’s Rotman School of Management, stated, “The question isn’t whether AI will be good enough to take on more cognitive tasks but rather how we’ll adapt.” Agrawal is also the founder of the Creative Destruction Lab and a coauthor of the book, “Power and Prediction: The Disruptive Economics of Artificial Intelligence,” (Harvard Business Review Press, 2022).
In the book, Agrawal discusses “system solutions” as important to the AI adaptation process for humans, as in “a system solution changes a workflow, disrupting old ways of doing things and generating new ones.”
Using the example of maps, he noted how the AI of mapping advises on the most efficient route, taking into account road closures, current traffic conditions, accidents, slowdowns and so on. The taxi industry used the automation to improve its routing and dispatch. Then Uber and Lyft came into being and built on top of the mapping tools to create a new system of ride hailing.
“With reliable driving directions and ubiquitous mobile devices, they realized that anyone could provide ride services,” the authors state. The number of professional taxi and limo drivers in the US five years ago was 200,000. Today, more than ten times that number, 3.5 million in the US, drive for Uber alone.
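The routing behavior described above can be thought of as a shortest-path search over a road graph whose edge weights are travel times adjusted for live conditions. The sketch below is a minimal illustration of that idea using Dijkstra’s algorithm; the road graph and traffic multipliers are invented for the example, not drawn from any real mapping service.

```python
import heapq

# Base travel times in minutes between intersections (a tiny made-up graph).
roads = {
    "A": {"B": 5, "C": 10},
    "B": {"C": 3, "D": 9},
    "C": {"D": 2},
    "D": {},
}

# Live traffic multipliers: 1.0 means free-flowing; heavy congestion on B->C.
# A road closure could be modeled as float("inf").
traffic = {
    ("A", "B"): 1.0, ("A", "C"): 1.0,
    ("B", "C"): 4.0, ("B", "D"): 1.0,
    ("C", "D"): 1.0,
}

def fastest_route(start, goal):
    """Dijkstra's algorithm over traffic-adjusted travel times."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, base in roads[node].items():
            t = base * traffic.get((node, nxt), 1.0)
            heapq.heappush(pq, (cost + t, nxt, path + [nxt]))
    return float("inf"), []

# Congestion on B->C makes the nominally longer A->C->D route the fastest.
print(fastest_route("A", "D"))  # → (12.0, ['A', 'C', 'D'])
```

Updating the `traffic` multipliers as conditions change and re-running the search is, in miniature, the adaptive rerouting that made the mapping tools valuable to dispatchers and, later, to ride-hailing platforms.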
Not everyone will be a winner. “These recent advances in AI will surely usher in a period of hardship and economic pain for some whose jobs are directly impacted and who find it hard to adapt — what economists euphemistically call ‘adjustment costs,’” state the authors.
We hope effective leaders in the AI community will work on applying new tools like ChatGPT to create systems and workflows that benefit society overall.
Read the source articles and information in TechCrunch, in The New York Times, in a press release from OpenAI and in the Harvard Business Review.
(Write to the editor here.)