2023 has been a unique and turbulent year for AI. It has seen countless product launches, power transitions within companies, intense policy debates about AI disasters, and a race to find the next big innovation. However, we have also seen the introduction of concrete tools and policies designed to make the AI industry more responsible and hold powerful players accountable. All of this brings great hope for the future development of AI.
Here are the main insights from AI in 2023:
1. The direction of generative AI is still unclear
The year started with big tech companies investing heavily in generative AI. OpenAI’s ChatGPT was a huge success, prompting every major tech company to release a model of its own: Meta’s LLaMA 2, Google’s Bard chatbot and Gemini, Baidu’s Ernie Bot, OpenAI’s GPT-4, and more. Yet we have not seen any AI application become an overnight success, and the AI-driven search features launched by Microsoft and Google did not become the killer applications many expected.
2. Our understanding of language models has deepened, but there are still many unknowns
Although technology companies are rapidly launching large language model products, we still know very little about how they work. These models often fabricate information and carry serious gender and racial biases. Research this year also found that different language models hold different political biases, and that models can be exploited to extract people's private information.
3. AI doomsday theory becomes a mainstream discussion topic
Discussions about the existential risks that AI may pose to humanity became common this year. From deep learning pioneers Geoffrey Hinton and Yoshua Bengio to the CEOs of leading AI companies, such as Sam Altman and Demis Hassabis, as well as scientists, business leaders, and policymakers, including California Congressman Ted Lieu and former Estonian President Kersti Kaljulaid, many prominent figures joined the debate.
4. The end of the “Wild West” era of AI
Thanks to ChatGPT, AI policy and regulation were discussed everywhere from the US Senate to the G7 this year. European lawmakers reached agreement on the AI Act, which will introduce binding rules and standards for developing riskier AI more responsibly while prohibiting certain "unacceptable" AI applications.
One specific policy proposal that has received a lot of attention is watermarking — invisible signals in text and images that can be detected by computers to mark AI-generated content. Watermarking can be used to track plagiarism or help combat misinformation, and this year we have seen research successfully applying watermarking to AI-generated text and images.
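The core idea behind text watermarking can be illustrated with a toy sketch. The scheme below is a deliberately simplified, hypothetical illustration (loosely inspired by "green list" approaches from recent research, not any production system): a hash of the previous word deterministically marks half of all candidate words as "green"; a watermarking generator prefers green words, and a detector flags text whose green fraction is far above the 50% expected by chance. All function names are invented for this example.

```python
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    # Hash the (previous word, candidate word) pair; roughly half of
    # all candidates are "green" for any given previous word.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def pick_word(prev_word: str, candidates: list[str]) -> str:
    # A watermarking generator biases its choice toward green words.
    for w in candidates:
        if is_green(prev_word, w):
            return w
    return candidates[0]  # fall back if no candidate is green

def green_fraction(words: list[str]) -> float:
    # Fraction of words that are green given their predecessor.
    # Unwatermarked text should hover near 0.5; watermarked text
    # should score much higher.
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(words[i - 1], words[i]) for i in range(1, len(words)))
    return hits / (len(words) - 1)

def looks_watermarked(text: str, threshold: float = 0.75) -> bool:
    # The detector only needs the text itself, not the model.
    return green_fraction(text.split()) >= threshold

if __name__ == "__main__":
    pool = ["cat", "dog", "sat", "ran", "fast", "slow", "here", "there"]
    words = ["the"]
    for _ in range(20):
        words.append(pick_word(words[-1], pool))
    print(green_fraction(words), looks_watermarked(" ".join(words)))
```

Real schemes work at the level of model token probabilities rather than whole words, and use proper statistical tests instead of a fixed threshold, but the detection principle is the same: the generator leaves a statistical signal that a detector can check without seeing the model.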
It’s not just lawmakers who are busy; lawyers are too. This year saw a record number of lawsuits filed by artists and writers who argue that AI companies are using their intellectual property without consent and without compensation.
In an exciting counterattack, researchers at the University of Chicago developed a new data-poisoning tool called Nightshade, which lets artists fight back against generative AI by corrupting training data, wreaking havoc on image-generating AI models. This rebellion is brewing, and we can expect more grassroots efforts to shift the balance of power in technology next year.
As 2023 comes to an end, we are excited about the future of AI. Despite many challenges, this year has given us a deeper understanding of AI and more thoughts on how to better use this technology. The coming year will be critical in determining the true value of generative AI.