Apple researchers say their on-device model ReALM outperforms GPT-4 and can significantly improve Siri's intelligence

Although Siri can currently attempt to describe images in messages, the results are inconsistent. Apple, however, has not given up on exploring artificial intelligence. In a recent research paper, Apple's AI team describes a model, called ReALM, that could significantly improve Siri's intelligence, and in their tests it outperformed OpenAI's well-known language model GPT-4.


What makes ReALM special is that it can understand both what is on the user's screen and what actions the user is taking at the same time. The paper divides this information into three types (a minimal sketch of how these types could be modeled follows the list below):

  • On-screen entities: content currently displayed on the user's screen.

  • Conversational entities: content related to the ongoing conversation. For example, if the user says "call mom," mom's contact information is a conversational entity.

  • Background entities: content that may not be directly related to the user's current action or to what is shown on screen, such as music that is playing or an alarm that is about to go off.
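
To make the distinction concrete, here is a minimal sketch in Swift of how these three entity types could be represented; the names and structure are illustrative assumptions, not Apple's actual implementation or API.

```swift
// A minimal sketch (illustrative only, not Apple's actual code or API) of how the
// three entity types described in the paper could be modeled as a data structure.

/// Hypothetical representation of a referenceable entity, grouped as the paper describes.
enum ReferenceEntity {
    /// Content currently displayed on the user's screen (e.g. a phone number in view).
    case onScreen(text: String)
    /// Content relevant to the ongoing conversation (e.g. "mom" resolving to a contact).
    case conversational(text: String)
    /// Background activity not tied to the screen or the dialogue (e.g. playing music, a pending alarm).
    case background(text: String)
}

// Example: candidate entities a resolver might weigh for the request "call mom".
let candidates: [ReferenceEntity] = [
    .conversational(text: "Mom: +1 555 0100"),
    .onScreen(text: "Pizza Palace: +1 555 0199"),
    .background(text: "Alarm set for 7:00 AM")
]

for entity in candidates {
    print(entity)
}
```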

If it works as intended, ReALM should make Siri noticeably smarter and more useful. The researchers compared ReALM's performance with OpenAI's GPT-3.5 and GPT-4:

“We tested both the GPT-3.5 and GPT-4 models provided by OpenAI, giving them contextual information and asking them to predict a set of possible entities. GPT-3.5 accepts only text input, so we provided text-only prompts. GPT-4 can also understand image information, so we supplied screenshots, which significantly improved its on-screen entity recognition performance.”

So how does Apple's ReALM perform?

“Our models have made significant progress in recognizing different types of entities. Even the smallest model improves on-screen entity recognition accuracy by more than 5% over the original system. Compared with GPT-3.5 and GPT-4, our smallest model performs on par with GPT-4, while the larger models significantly outperform it.”

One of the paper's conclusions is that even with far fewer parameters than GPT-4, ReALM can match its performance and does better when handling domain-specific user instructions, which makes ReALM a practical and efficient entity recognition system that can run on-device.

For Apple, the key question appears to be how to bring this technology to its devices without hurting performance. With the WWDC 2024 developer conference set for June 10, the industry widely expects Apple to show more of its artificial intelligence work in new systems such as iOS 18.
