
3 AI research projects Meta is working on this year

Meta is working on concepts like universal speech translation, AI that can learn like a human, and a more conversational AI assistant.

Facebook parent company Meta has revealed where its AI technology research will focus for the coming year as it works to build its metaverse concept.

In a statement released yesterday (February 23), the tech giant said the Metaverse is the “most ambitious long-term project” the company has attempted and will require major advances in “nearly every technology with which we work”, including AI breakthroughs.

CEO Mark Zuckerberg revealed late last year that the company had a strong focus on developing an AR and VR universe, which he described at the time as the “successor to the mobile internet”.

Speaking at an online event yesterday, Zuckerberg shared some of the company’s ambitious AI projects and said they would “provide the highest levels of privacy”.

“As we build for the Metaverse, we will need AI to do much of the heavy lifting that makes next-generation computing experiences possible,” the company said in a statement.

“This means continuing to innovate in areas such as self-supervised learning, so that we are not dependent on limited labeled datasets, and true multimodal AI, so that we can accurately interpret and predict the kinds of interactions that will take place in persistent 3D virtual spaces with many participants.”

Meta has been investing in AI research for some time. The company recently said its AI research team has been working for years on a supercomputer that could be the “biggest and fastest” in the world when fully built.

However, the research projects outlined yesterday did not go into detail on the use of AI for moderation, a challenge facing online social spaces.

Yesterday the National Society for the Prevention of Cruelty to Children said there was an urgent need to improve online safety. The call came in response to a BBC investigation in which a researcher posing as a 13-year-old girl witnessed grooming, sexual material and rape threats on a virtual reality app that can be downloaded from an app store on the Meta Quest headset.

Here are some of the major AI research projects announced by Meta:

Universal speech translation

The tech giant said billions of people cannot access information on the internet in their native language due to the limitations of machine translation (MT) systems.

Some languages, such as English, Spanish and Mandarin, are well served by modern machine translation systems, but these systems can struggle with languages that lack available training data or a standardized writing system.

In order to solve this problem, Meta is working on two long-term projects. The first is called No Language Left Behind, an AI model that could learn from languages with fewer examples to train from, with the goal of translating hundreds of languages.

The second idea is a Universal Speech Translator, which aims to use new approaches to translate speech from one language to another in real time, in a way that could better support languages that don’t have a standardized writing system.
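For a concrete sense of what text translation for lower-resource languages looks like in practice, here is a minimal sketch using the Hugging Face transformers library with a publicly available checkpoint from the No Language Left Behind line of work. The specific checkpoint name, the library choice and the FLORES-style language codes are assumptions for illustration and are not part of Meta’s announcement.

```python
# Minimal sketch: translating into a lower-resource language with an
# openly available NLLB checkpoint via Hugging Face transformers.
# Checkpoint name and language codes are illustrative assumptions.
from transformers import pipeline

# FLORES-style language codes: eng_Latn = English, gle_Latn = Irish
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="gle_Latn",
)

result = translator(
    "Billions of people cannot access information on the internet in their native language."
)
print(result[0]["translation_text"])
```

Swapping the `tgt_lang` code is all that is needed to target a different language pair, which is the point of a single massively multilingual model rather than one system per language pair.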

“It will take a lot more work to provide truly universal translation tools to everyone in the world,” the company said. “But we believe the efforts described here are an important step forward.”

The tech giant also said it plans to work on universal translation “responsibly”, looking for ways to mitigate bias and “maintain cultural sensitivities” as information moves from one language to another.

Last year, documents shared by Facebook whistleblower Frances Haugen indicated that Meta was ill-equipped to tackle problems such as hate speech and misinformation in languages other than English. Speaking at a meeting of the Oireachtas joint committee yesterday, Haugen also said Facebook research indicated that using AI or MT systems to address issues such as hate speech is limited because “the language is nuanced”.

An AI that can learn like humans and animals

As part of Meta’s presentation yesterday, its chief AI scientist, Yann LeCun, highlighted current problems with AI’s ability to learn, saying a human can learn to drive in roughly 20 hours, while autonomous driving systems require huge amounts of data and testing in virtual environments.

In order to improve the learning ability of AI, LeCun is investigating new ways to develop “human-level AI” by mimicking the way animals learn.

“Human and non-human animals seem able to learn enormous amounts of background knowledge about how the world works through observation and through an incomprehensibly small amount of interaction, in a task-independent, unsupervised manner,” LeCun said.

“It can be hypothesized that this accumulated knowledge may form the basis of what is often called common sense.”
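As a toy illustration of the kind of task-independent, unsupervised learning LeCun describes, the short PyTorch sketch below trains a small network to predict a held-out value in an unlabelled sequence from its context. Everything here, from the synthetic data to the model, is a hypothetical example for readers rather than anything Meta has built.

```python
# Toy sketch of self-supervised learning: predict a masked element of a
# sequence from its context, with no human-provided labels.
# Entirely illustrative; not Meta's method.
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample_batch(batch_size=64, length=8):
    # Unlabelled "observations": smooth sine-wave snippets of the given length
    start = torch.rand(batch_size, 1) * 6.28
    t = torch.arange(length).float().unsqueeze(0)
    return torch.sin(start + 0.5 * t)  # shape (batch_size, length)

model = nn.Sequential(nn.Linear(7, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(500):
    batch = sample_batch()
    context = batch[:, :-1]   # first 7 values act as the observed context
    target = batch[:, -1:]    # the model must predict the masked final value
    loss = nn.functional.mse_loss(model(context), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final prediction error: {loss.item():.4f}")
```

The supervision signal comes entirely from the data itself, which is the defining feature of the self-supervised approach the company says it will continue to invest in.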

The company noted in a blog post that developing machines that can gather information like humans is a long-term endeavor “with no guarantee of success.”

“But we are confident that basic research will continue to yield deeper understanding of minds and machines, and lead to advances that will benefit everyone who uses AI,” Meta said.

Conversational AI

Meta is also looking to build better AI assistants that can be more conversational and natural when dealing with users.

The company said it has developed an end-to-end neural model that can power “more personal and contextual AI conversations”. The model has been deployed on Portal, Meta’s video-calling device, for testing purposes.

“Even in this first test, we believe the model outperforms standard approaches,” Meta said in a post.

“On Portal, we saw a significant improvement over our existing approach in evaluations on the reminders domain, measured by the success rate of completing a set of reminder goals while maintaining an equal number of turns.”
