OpenAI Says It Has Begun Training a New Flagship A.I. Model

OpenAI said on Tuesday that it had begun training a new flagship artificial intelligence model that would succeed the GPT-4 technology that drives its popular online chatbot, ChatGPT.

The San Francisco start-up, which is one of the world’s leading A.I. companies, said in a blog post that it expected the new model to bring “the next level of capabilities” as it strives to build “artificial general intelligence,” or A.G.I., a machine that can do anything the human brain can do. The new model would be an engine for A.I. products including chatbots, digital assistants akin to Apple’s Siri, search engines and image generators.

OpenAI also said it was creating a new Safety and Security Committee to explore how it should handle the risks posed by the new model and future technologies.

“While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” the company said.

OpenAI is aiming to move A.I. technology forward faster than its rivals, while also appeasing critics who say the technology is becoming increasingly dangerous, helping to spread disinformation, replace jobs and even threaten humanity. Experts disagree on when tech companies will reach artificial general intelligence, but companies including OpenAI, Google, Meta and Microsoft have steadily increased the power of A.I. technologies for more than a decade, demonstrating a noticeable leap roughly every two to three years.

OpenAI’s GPT-4, which was released in March 2023, enables chatbots and other software apps to answer questions, write emails, generate term papers and analyze data. An updated version of the technology, which was unveiled this month and is not yet widely available, can also generate images and respond to questions and commands in a highly conversational voice.

Days after OpenAI showed the updated version — called GPT-4o — the actress Scarlett Johansson said it used a voice that sounded “eerily similar to mine.” She said she had declined efforts by OpenAI’s chief executive, Sam Altman, to license her voice for the product and that she had hired a lawyer and asked OpenAI to stop using the voice. The company said that the voice was not Ms. Johansson’s.

Technologies like GPT-4o learn their skills by analyzing vast amounts of digital data, including sounds, photos, videos, Wikipedia articles, books and news stories. The New York Times sued OpenAI and Microsoft in December, claiming copyright infringement of news content related to A.I. systems.

Digital “training” of A.I. models can take months or even years. Once the training is completed, A.I. companies typically spend several more months testing the technology and fine-tuning it for public use.

That could mean that OpenAI’s next model will not arrive for another nine months to a year or more.

As OpenAI trains its new model, its new Safety and Security Committee will work to hone policies and processes for safeguarding the technology, the company said. The committee includes Mr. Altman, as well as OpenAI board members Bret Taylor, Adam D’Angelo and Nicole Seligman. The company said that the new policies could be in place in the late summer or fall.

Earlier this month, OpenAI said Ilya Sutskever, a co-founder and one of the leaders of its safety efforts, was leaving the company. His departure caused concern that OpenAI was not grappling seriously enough with the dangers posed by A.I.

Dr. Sutskever had joined three other board members in November to remove Mr. Altman from OpenAI, saying Mr. Altman could no longer be trusted with the company’s plan to create artificial general intelligence for the good of humanity. After a lobbying campaign by Mr. Altman’s allies, he was reinstated five days later and has since reasserted control over the company.

Dr. Sutskever led what OpenAI called its Superalignment team, which explored ways of ensuring that future A.I. models would not do harm. Like others in the field, he had grown increasingly concerned that A.I. posed a threat to humanity.

Jan Leike, who ran the Superalignment team with Dr. Sutskever, resigned from the company this month, leaving the team’s future in doubt.

OpenAI has folded its long-term safety research into its larger efforts to ensure that its technologies are safe. That work will be led by John Schulman, another co-founder, who previously headed the team that created ChatGPT. The new safety committee will oversee Dr. Schulman’s research and provide guidance for how the company will address technological risks.
