The 2024 Artificial Intelligence Index, released by the Stanford University Institute for Human-Centered AI (HAI), reveals eight major trends in artificial intelligence in the business field, covering key issues such as human advantages, costs, regulation, investment growth, and improved work efficiency, and providing important reference points for enterprises and decision makers.
1. Humans are still superior to AI in many tasks
Studies have shown that advanced AI is still inferior to humans at complex tasks such as mathematical problem solving, visual commonsense reasoning, and planning. To reach this conclusion, the researchers compared models to human baselines across many different business functions, including coding, agent-based behavior, reasoning, and reinforcement learning.
While AI has indeed surpassed human capabilities in image classification, visual reasoning, and English comprehension, the results suggest a risk that businesses will use AI for tasks that human employees actually perform better. Many businesses have begun to worry about the consequences of over-reliance on AI products.
2. Advanced AI models are becoming increasingly expensive
The AI Index reports that OpenAI's GPT-4 and Google's Gemini Ultra cost an estimated $78 million and $191 million, respectively, to train in 2023. "At current growth rates, cutting-edge AI models will cost about $5 billion to $10 billion by 2026, and few companies will be able to afford these training costs by then," data scientist Rahman told TechRepublic in an email.
In October 2023, the Wall Street Journal reported that Google, Microsoft, and other large tech companies are struggling to monetize their generative AI products because they are expensive to run. If these technologies become so expensive that only the largest companies can afford them, their advantage over small and medium-sized enterprises may grow disproportionately, a concern the World Economic Forum pointed out as early as 2018.
However, Rahman stressed that many AI models are open source and therefore available to businesses of all budgets, so the technology shouldn't widen any gaps. "Open-source and closed-source AI models are growing at the same rate," he told TechRepublic. "Meta, one of the biggest tech companies in the world, is open-sourcing all of its models so that those who can't afford to train their own can download them."
3. AI improves productivity and work quality
By evaluating a number of existing studies, Stanford researchers concluded that AI enables workers to complete tasks faster and improve the quality of their output. The professions observed included computer programmers (of whom 32.8% reported an increase in productivity), consultants, support agents, and recruiters.
In the case of consultants, the use of GPT-4 bridged the gap between low-skilled and high-skilled professionals, with the low-skilled group seeing a greater performance boost. Other research has also shown that generative AI in particular can act as an equalizer, as less experienced, less skilled workers gain more from it.
However, other studies do suggest that “the use of AI without appropriate supervision may result in degraded performance,” the researchers wrote. For example, there are widespread reports of hallucinations in large language models performing legal tasks. Other studies have found that we may not realize the full potential of productivity gains from AI for another decade, as suboptimal output, complex guidelines, and a lack of proficiency continue to hold workers back.
4. US AI regulation continues to strengthen
The AI Index report found that in 2023 there were 25 active AI-related regulations in the United States, compared to just one in 2016. The growth has not been steady: the total number of AI-related regulations rose 56.3% from 2022 to 2023 alone. Over time, these regulations have also shifted from being expansionary to restrictive toward AI development, with the most prevalent themes being foreign trade and international finance.
AI-related legislation is also growing in the EU, with 46 new pieces of legislation adopted in 2021, 22 in 2022, and 32 in 2023. In the region, regulation tends to take a broader approach, often covering science, technology, and communications.
See: NIST launches AI safety alliance
For businesses interested in AI, it is imperative to stay up to date with the regulations that affect them, otherwise they risk severe non-compliance penalties and reputational damage. Research published in March 2024 found that only 2% of large companies in the UK and EU were aware of the upcoming EU AI legislation.
5. Generative AI investments continue to increase
Funding for generative AI products that generate content based on prompts increased nearly eightfold from 2022 to 2023, to $25.2 billion. OpenAI, Anthropic, Hugging Face, and Inflection, among others, have received significant funding.
The building of generative AI capabilities is likely to meet the needs of enterprises that want to incorporate it into their processes. In 2023, 19.7% of all Fortune 500 earnings calls mentioned generative AI, and a McKinsey report shows that 55% of organizations now use AI, including generative AI, in at least one business unit or function.
Awareness of generative AI has grown rapidly since the launch of ChatGPT on November 30, 2022, and organizations have been racing to incorporate its capabilities into their products or services ever since. A recent survey of 300 global enterprises by MIT Technology Review Insights in partnership with Telstra International found that respondents expect the number of functions in which they deploy generative AI to more than double by 2024.
However, according to AI authority Gary Marcus, there is some evidence that the boom in generative AI "may end soon," and companies should be wary. This is mainly due to the limitations of current technology, such as potential bias, copyright issues, and inaccuracies. According to the Stanford report, the limited amount of online data available for training models may exacerbate existing problems, limiting improvements and scalability. The report states that AI companies may run out of high-quality language data in 2026, low-quality language data within 20 years, and image data between the late 2030s and mid-2040s.
6. The basis for LLM liability varies widely
According to the report, there are significant differences in the benchmarks that tech companies use to assess the trustworthiness or accountability of their LLMs. This "makes it difficult to systematically compare the risks and limitations of top AI models," the report says. These risks include biased outputs and the leakage of private information from training datasets and conversation histories.
"There are currently no reporting requirements, and we have no reliable assessments that would allow us to confidently say that if a model passes these assessments, then it is safe," Ruel, a PhD student in Stanford University's Intelligent Systems Laboratory, told TechRepublic in an email.Firstname."
Without standardization in this area, there is an increased risk that some untrustworthy AI models could slip through the net and be integrated by enterprises. “Developers may selectively report on benchmarks that positively highlight their model’s performance,” the report added.
"There are multiple reasons why harmful models slip through the cracks," Reuel told TechRepublic. "First, there are no standardized or required assessments, so it's hard to compare models and their (relative) risk; second, there are no reliable assessments, especially of the underlying models, to verify the models' performance."absoluteA solid, comprehensive understanding of risk.”
7. Employees are nervous and concerned about AI
The report also tracks how attitudes toward AI are changing as awareness grows. One survey found that 52% of people are nervous about AI products and services, a figure that has risen by 13 percentage points in 18 months. The same study found that only 54% of adults believe the benefits of AI products and services outweigh the disadvantages, while 36% of adults worry that AI may take away their jobs in the next five years.
Other surveys cited in the AI Index report found that a majority of Americans are currently more concerned than excited about AI, with their most common concern being the impact of AI on jobs. As AI technology is integrated into organizations, such concerns may take a particular toll on employees' mental health, and company leaders should monitor this proactively.
8. Today's most popular LLMs were created in the United States and China
Ben Abbott of TechRepublic covered this trend in the Stanford report in his article on building AI infrastructure models in the Asia-Pacific region. He wrote in part:
“The United States’ dominance in AI continued throughout 2023. Stanford University’s 2024 AI Index report found that the United States released 61 notable models in 2023, ahead of China’s 15 new models and France’s eight, the largest contribution from Europe (Figure 1). The UK and the EU as a region produced 25 notable models, beating China for the first time since 2019, while Singapore, with three models, was the only other producer of notable large language models in Asia-Pacific.”