ChatGPT maker secretly developing new type of AI – Reuters


OpenAI is looking to substantially increase the reasoning capacity of its models, the agency has said

OpenAI, the creator of virtual assistant ChatGPT, is working on a novel approach to its artificial intelligence technology, Reuters has reported.

As part of the project, code-named ‘Strawberry,’ the Microsoft-backed firm is trying to drastically improve the reasoning capabilities of its models, the agency said in an article on Friday.

The way Strawberry works is “a tightly kept secret” even within OpenAI itself, a person familiar with the matter told Reuters.

The source said the project involves a “specialized way” of processing an AI model after it has been pre-trained on extensive datasets. Its aim is to enable artificial intelligence to not just generate answers to queries, but to plan ahead sufficiently to conduct so-called “deep research,” by navigating the internet autonomously and reliably, the source explained.

Reuters said it had reviewed an internal OpenAI document detailing a plan for how the US firm could deploy Strawberry to perform research. However, the agency said it was not able to establish when the technology will become available to the public. The source described the project as a “work in progress.”


Asked about the report, an OpenAI spokesperson told Reuters: “We want our AI models to see and understand the world more like we [humans] do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time.” The spokesperson did not address Strawberry directly in the response.

Current AI large language models can summarize vast amounts of text and compose coherent prose more quickly than humans can, but they typically struggle with common-sense problems that are intuitive to people. When this happens, the models often “hallucinate,” presenting false or misleading information as fact.

Researchers who spoke to Reuters said that reasoning, which has so far eluded AI models, is key to artificial intelligence reaching human or superhuman levels.

Last week, Yoshua Bengio, one of the world's leading experts in artificial intelligence and a pioneer of deep learning, again warned of the “many risks,” including the possible “extinction of humanity,” posed by private corporations racing to develop human-level AI and beyond.

“Entities that are smarter than humans and that have their own goals: are we sure they will act towards our well-being?” the University of Montreal professor and scientific director of the Montreal Institute for Learning Algorithms (MILA) said in an article on his website.


Bengio urged the scientific community and society as a whole to make “a massive collective effort” to figure out ways to keep advanced AI in check.
