Reuters has revealed that OpenAI, the maker of the ChatGPT chatbot, is secretly working on a new approach to its artificial intelligence technology.
In a report published on Friday, the agency said the company, led by Sam Altman, is developing new reasoning technology for so-called "large language models" (LLMs), the general-purpose systems characterized by their ability to understand and generate language. The project bears the code name "Strawberry."
Through this project, the Microsoft-backed company is trying to significantly improve the reasoning capabilities of its models.
How the Strawberry model works is “top secret” even within OpenAI itself, a person familiar with the project told Reuters.
The source said the project involves a "specialized method" of processing an AI model after it has been pre-trained on large-scale datasets.
The source explained that the goal is to enable the AI not only to generate answers to queries, but to plan far enough ahead to carry out what is called "deep research," navigating the internet autonomously and reliably.
Reuters said it had reviewed an internal OpenAI document detailing how the US company plans to deploy the Strawberry model for research.
However, the agency noted that it was unable to say when the technology would become publicly available. The source described the project as a "work in progress."
"We want our AI models to see and understand the world as we humans do," an OpenAI spokesperson told Reuters. "Continuous research into new AI capabilities is common practice in the industry, with a shared belief that these systems will get better at reasoning over time." The spokesperson did not directly address the Strawberry model in the response.
According to the report, today's AI-based large language models can summarize huge amounts of text and compose coherent prose faster than humans, but they often struggle to reach solutions that seem intuitive to people. When they fall short, they frequently present false or misleading information as fact.