

Meta's “Self-Taught Evaluator” marks a breakthrough in autonomous AI development

Meta Platforms (META) has unveiled several new AI models from its research division, including a groundbreaking “Self-Taught Evaluator” designed to reduce human involvement in AI development. By letting one AI model evaluate and improve other AI models, this approach could transform how AI systems are built. Using a large language model (LLM) as a judge, the Self-Taught Evaluator generates contrasting model outputs and evaluates them with step-by-step reasoning chains. Unlike current approaches such as reinforcement learning from human feedback, this iterative self-improvement technique aims to boost AI performance without relying on human annotations.
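To make that loop concrete, here is a minimal, hypothetical Python sketch of an iterative LLM-as-a-judge self-training cycle of the kind described above. All helper names (sample_responses, judge_with_reasoning, fine_tune) are illustrative placeholders, not Meta's actual implementation or API, and the agreement-based filter stands in for whatever label-free filtering the real system uses.

```python
# Hypothetical sketch: iteratively train a judge model on its own
# reasoning traces, with no human-labeled preference data anywhere.
import random
from dataclasses import dataclass


@dataclass
class JudgedExample:
    prompt: str
    response_a: str
    response_b: str
    reasoning: str  # chain-of-thought written by the judge
    verdict: str    # "A" or "B"


def sample_responses(target_model: str, prompt: str, n: int = 2) -> list[str]:
    """Placeholder: sample n contrasting outputs from the model being judged."""
    return [f"{prompt} :: candidate {i} from {target_model}" for i in range(n)]


def judge_with_reasoning(judge_model: str, prompt: str, a: str, b: str):
    """Placeholder: the judge writes a reasoning chain, then picks a winner."""
    reasoning = f"[{judge_model}] compares the two candidates step by step..."
    verdict = random.choice(["A", "B"])
    return reasoning, verdict


def fine_tune(judge_model: str, examples: list[JudgedExample]) -> str:
    """Placeholder: fine-tune the judge on its own (prompt, reasoning, verdict) traces."""
    return f"{judge_model}+iter"


def self_taught_evaluator(judge_model: str, target_model: str,
                          prompts: list[str], iterations: int = 3) -> str:
    for _ in range(iterations):
        training_data = []
        for prompt in prompts:
            a, b = sample_responses(target_model, prompt, n=2)
            reasoning, verdict = judge_with_reasoning(judge_model, prompt, a, b)
            # Keep only judgments the loop can trust without human labels,
            # here approximated by checking that a second sampled judgment agrees.
            _, verdict_check = judge_with_reasoning(judge_model, prompt, a, b)
            if verdict == verdict_check:
                training_data.append(JudgedExample(prompt, a, b, reasoning, verdict))
        # The judge improves itself on the filtered traces, then repeats.
        judge_model = fine_tune(judge_model, training_data)
    return judge_model


if __name__ == "__main__":
    final_judge = self_taught_evaluator("judge-v0", "target-model", ["Explain gravity."])
    print(final_judge)
```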

The release follows Meta's earlier work on the model, first introduced in an August paper, and relies on the same “chain of thought” technique used by OpenAI's o1 models, which improves the AI's ability to produce consistent judgments about other models' outputs. The ability of AI models to evaluate themselves and learn from their mistakes paves the way toward fully autonomous AI systems, and Meta researchers believe the method could significantly reduce the need for expensive human expertise in training models. Meta has also updated its Segment Anything Model 2 (SAM 2.1), adding new tools for developers and improved image and video segmentation capabilities.

This article first appeared on GuruFocus.
