ChatGPT, one year on
2024 may rein in the chatbot that opened the floodgates of artificial intelligence
Good morning! The OpenAI drama last month sent everyone, including investors, into a tailspin. Which raises the question: if there’s chaos *now*, what does the future hold for the world’s most famous AI company? Sam Altman & Co. may be back, but today’s story foresees that 2024 will be a grounding year for AI as regulators take stock and countries go to elections in an environment of increasing misinformation. Also in this edition: our picks of the week’s best longreads.
ChatGPT was launched on Nov. 30, 2022, ushering in what many have called artificial intelligence’s breakout year. Within days of its release, ChatGPT went viral. Screenshots of conversations snowballed across social media, and the use of ChatGPT skyrocketed to an extent that seems to have surprised even its maker, OpenAI. By January, ChatGPT was seeing 13 million unique visitors each day, setting a record for the fastest-growing user base of a consumer application.
Throughout this breakout year, ChatGPT has revealed the power of a good interface and the perils of hype, and it has sown the seeds of a new set of human behaviors. As a researcher who studies technology and human information behavior, I find that ChatGPT’s influence in society comes as much from how people view and use it as from the technology itself.
Generative AI systems like ChatGPT are becoming pervasive. Since ChatGPT’s release, some mention of AI has seemed obligatory in presentations, conversations and articles. Today, OpenAI claims 100 million people use ChatGPT every week.
Besides people interacting with ChatGPT at home, employees at all levels up to the C-suite in businesses are using the AI chatbot. In tech, generative AI is being called the biggest platform since the iPhone, which debuted in 2007. All the major players are making AI bets, and venture funding in AI startups is booming.
Along the way, ChatGPT has raised numerous concerns, such as its implications for disinformation, fraud, intellectual property issues and discrimination. In my world of higher education, much of the discussion has surrounded cheating, which has become a focus of my own research this year.
Lessons from ChatGPT’s first year
The success of ChatGPT speaks foremost to the power of a good interface. AI has already been part of countless everyday products for well over a decade, from Spotify and Netflix to Facebook and Google Maps. The first version of GPT, the AI model that powers ChatGPT, dates back to 2018. And even OpenAI’s other products, such as DALL-E, did not make the waves that ChatGPT did immediately upon its release. It was the chat-based interface that set off AI’s breakout year.
There is something uniquely beguiling about chat. Humans are endowed with language, and conversation is a primary way people interact with each other and infer intelligence. A chat-based interface is a natural mode for interaction and a way for people to experience the “intelligence” of an AI system. The phenomenal success of ChatGPT shows again that user interfaces drive widespread adoption of technology, from the Macintosh to web browsers and the iPhone. Design makes the difference.
At the same time, one of the technology’s principal strengths – generating convincing language – makes it well suited for producing false or misleading information. ChatGPT and other generative AI systems make it easier for criminals and propagandists to prey on human vulnerabilities. The potential of the technology to boost fraud and misinformation is one of the key rationales for regulating AI.
Amid the real promises and perils of generative AI, the technology has also provided another case study in the power of hype. This year has brought no shortage of articles on how AI is going to transform every aspect of society and how the proliferation of the technology is inevitable.
ChatGPT is not the first technology to be hyped as “the next big thing,” but it is perhaps unique in simultaneously being hyped as an existential risk. Numerous tech titans and even some AI researchers have warned about the risk of superintelligent AI systems emerging and wiping out humanity, though I believe that these fears are far-fetched.
The media environment favors hype, and the current venture funding climate further fuels AI hype in particular. Playing to people’s hopes and fears is a recipe for anxiety with none of the ingredients for wise decision making.
What the future may hold
The AI floodgates opened in 2023, but the next year may bring a slowdown. AI development is likely to meet technical limitations and encounter infrastructural hurdles such as chip manufacturing and server capacity. Simultaneously, AI regulation is likely to be on the way.
This slowdown should give space for norms in human behavior to form, both in terms of etiquette, as in when and where using ChatGPT is socially acceptable, and effectiveness, like when and where ChatGPT is most useful.
ChatGPT and other generative AI systems will settle into people’s workflows, allowing workers to accomplish some tasks faster and with fewer errors. In the same way that people learned “to google” for information, humans will need to learn new practices for working with generative AI tools.
But the outlook for 2024 isn’t completely rosy. It is shaping up to be a historic year for elections around the world, and AI-generated content will almost certainly be used to influence public opinion and stoke division. Meta may have banned the use of generative AI in political advertising, but this isn’t likely to stop ChatGPT and similar tools from being used to create and spread false or misleading content.
Political misinformation spread across social media in 2016 as well as in 2020, and it is virtually certain that generative AI will be used to continue those efforts in 2024. Even outside social media, conversations with ChatGPT and similar products can be sources of misinformation on their own.
As a result, another lesson that everyone – users of ChatGPT or not – will have to learn in the blockbuster technology’s second year is to be vigilant when it comes to digital media of all kinds.
Tim Gorichanaz is Assistant Teaching Professor of Information Science, Drexel University.
This article is republished from https://theconversation.com under a Creative Commons licence. Read the original article at https://theconversation.com/chatgpt-turns-1-ai-chatbots-success-says-as-much-about-humans-as-technology-218704
When in Rome…: Plans paved with good intentions seldom hold strong. When Foxconn announced its plans to make the latest iPhone 15 in India, the move was heralded as a global shift in the world’s manufacturing ecosystem. Since then, the company has been caught in a classic case of being a cultural misfit. Chinese supervisors are frustrated with their Indian counterparts’ slow speed, multiple tea breaks, and need for too many holidays. They’re also puzzled to find Indians unwilling to work longer hours for bonuses. Clearly, China’s infamous work culture of intense competition, cheekily nicknamed ‘neijuan’ or involution, is finding no takers here. Despite that, efficiencies are increasing, and workers are warming to Foxconn’s relentless production pursuits and the challenges they entail. To know more about this fascinating world of Foxconn in India, read this insightful piece in Rest of World.
First there was sportswashing, now there’s…: …greenwashing. West Asia’s oil-rich states are snapping up sports clubs and leagues with sovereign fund (read: oil) money, and one of them—specifically, the UAE—seems to be applying that playbook to climate change too. The appointment of Adnoc (Abu Dhabi National Oil Company) chief Sultan al-Jaber as COP28 president was contentious as it was. The Financial Times now reports that al-Jaber is pretty much using the climate conference for dealmaking. And it’s not just energy transition projects across Asia, Africa, and South America. The UAE is the eighth-largest oil producer and one of the world’s largest producers of hydrocarbons, yet al-Jaber hasn’t specified any deadline for a phase-down. Could Adnoc setting aside $150 billion for a five-year expansion plan have anything to do with it? Your guess is as good as ours.
Cloak & dagger in research: In his stellar work on ecological collapse, The Nutmeg’s Curse, writer Amitav Ghosh describes how responsibility for climate impact was consciously shifted onto the individual through sustained advertising efforts. Energy company British Petroleum (BP) spent over $100 million annually on campaigns that also deeply embedded the perception that climate change was not a present reality but a future threat. The energy industry was way ahead in understanding what was in store for the world and did its best to shift responsibility elsewhere.
An investigation by Europe’s clean transport campaign group, Transport and Environment, suggests that Big Oil executives helped set up a research group on air pollution. Concawe, or the Conservation of Clean Air and Water in Europe, was set up by the industry and allegedly masqueraded as an advocacy organisation while in reality using its own research papers to combat studies and opinions unfavourable to the industry’s products. It specifically tried to discredit research linking benzene pollution to cancer.
The man behind the machines: Artificial intelligence (AI) as we know it today wouldn’t have existed without Nvidia. And Nvidia wouldn’t exist without CEO Jensen Huang. In one of the best longform profiles in months, The New Yorker walks us through the evolution of the company that came into being as NVision (a name that was chucked soon after Huang and Co. learnt it was the name of a toilet paper manufacturer). Huang, a Taiwanese immigrant, was schooled in a religious reform institution where kids literally fought for their lives, married his high school sweetheart, and started his Silicon Valley career as a microchip designer. In 2013, he had the foresight to know that AI would be the next big thing. That foresight has turned Nvidia into a company with a market cap of more than $1 trillion. You’d think Huang would be a flagbearer for autonomous-everything, but he isn’t. If anything, he’s bullish on the “omniverse”. Also consider this longread a crash course on GPUs and CUDA.
A portrait of contradictions: Ammon Bundy and his family are willing to lay down their lives for freedom. And so are the thousands of other members of Bundy’s People’s Rights Network, a loose collection of right-leaning and libertarian groups and people in the US, characterised by their deep distrust of the state, science, and the ‘system’. But Bundy, whose father Cliven provided the spark for the movement by refusing to yield grazing land to the government, is not an all-American hero. Instead, this profile in The Atlantic finds a man of deep contradictions. He is an ordinary suburban dad whose followers storm a hospital threatening violence at his command. He is bombastic about fighting the system and refusing to yield to its rules, but a stint in solitary confinement breaks his resolve, pushing him to post bail when he’s arrested a second time. Bundy’s politics are also contradictory—he doesn’t like Donald Trump, for instance—and his years of leading standoffs and doxxing people he deems instruments of the state have left him financially ruined. Read the story to understand the origins of the US’ culture wars and domestic terrorism groups, as well as the mystery of how Ammon Bundy suddenly disappeared, to the relief of his victims.
How China made advanced chips: In August 2023, when Chinese electronics company Huawei unveiled the sleek Mate 60 smartphone series, it set alarm bells ringing in the West, particularly the US. It was as if all the restrictions imposed on China to prevent it from accessing high technology had been futile. The Mate 60 is powered by Charlotte, an advanced 7nm chip. Until then, it was believed that China did not have the capability to make such an advanced semiconductor. After the US imposed crippling sanctions, Huawei took a risky wager on the Semiconductor Manufacturing International Corporation (SMIC), which claimed it could make advanced chips using equipment it already had. It would be expensive, but the job could be done. The collaboration produced Charlotte, or the Kirin 9000S, whose performance is only marginally lower than that of Qualcomm’s semiconductors. The Financial Times pieced together the story of how Beijing did in two years what appeared to be impossible, or at least improbable. Its next target: taking on Nvidia with AI chips.