Noti Group
Monday, April 27, 2026

Meta gets caught gaming AI benchmarks with Llama 4

in Technology

Over the weekend, Meta dropped two new Llama 4 models: a smaller model named Scout, and Maverick, a mid-size model that the company claims can beat GPT-4o and Gemini 2.0 Flash “across a broad range of widely reported benchmarks.”

Maverick quickly secured the number-two spot on LMArena, the AI benchmark site where humans compare outputs from different systems and vote on the best one. In Meta’s press release, the company highlighted Maverick’s Elo score of 1417, which placed it above OpenAI’s GPT-4o and just under Gemini 2.5 Pro. (A higher Elo score means the model wins its head-to-head matchups in the arena more often.)
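LMArena’s leaderboard uses an Elo-style rating, in which each human vote nudges the two models’ scores based on how surprising the result was. A minimal sketch of how such ratings behave — the K-factor of 32 and the sample ratings below are illustrative assumptions, not LMArena’s actual parameters:

```python
# Illustrative Elo-style rating update, as used in head-to-head arenas.
# The K-factor and ratings here are arbitrary examples, not LMArena's
# real configuration.

def expected_score(rating_a, rating_b):
    """Probability that model A beats model B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a, rating_b, a_won, k=32):
    """Return both models' new ratings after one head-to-head vote."""
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - exp_a)
    new_b = rating_b + k * ((1 - score_a) - (1 - exp_a))
    return new_a, new_b

# A 1417-rated model vs. a 1400-rated one: the higher-rated model is
# expected to win only slightly more than half the time.
print(round(expected_score(1417, 1400), 3))  # → 0.524
```

The takeaway: a small rating gap corresponds to only a slight per-matchup edge, which is why tuning a variant specifically for human preference can plausibly move a model several places up the leaderboard.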

The achievement seemed to position Meta’s open-weight Llama 4 as a serious challenger to the state-of-the-art, closed models from OpenAI, Anthropic, and Google. Then, AI researchers digging through Meta’s documentation discovered something unusual.

In fine print, Meta acknowledges that the version of Maverick tested on LMArena isn’t the same as what’s available to the public. According to Meta’s own materials, it deployed an “experimental chat version” of Maverick to LMArena that was specifically “optimized for conversationality.”

“Meta’s interpretation of our policy did not match what we expect from model providers,” LMArena posted on X two days after the model’s release. “Meta should have made it clearer that ‘Llama-4-Maverick-03-26-Experimental’ was a customized model to optimize for human preference. As a result of that, we are updating our leaderboard policies to reinforce our commitment to fair, reproducible evaluations so this confusion doesn’t occur in the future.”

A spokesperson for Meta didn’t have a response to LMArena’s statement in time for publication.

While what Meta did with Maverick isn’t explicitly against LMArena’s rules, the site has shared concerns about gaming the system and has taken steps to “prevent overfitting and benchmark leakage.” When companies can submit specially tuned versions of their models for testing while releasing different versions to the public, benchmark rankings like LMArena’s become less meaningful as indicators of real-world performance.

“It’s the most widely respected general benchmark because all of the other ones suck,” independent AI researcher Simon Willison tells noti.group. “When Llama 4 came out, the fact that it came second in the arena, just after Gemini 2.5 Pro — that really impressed me, and I’m kicking myself for not reading the small print.”

Shortly after Meta released Maverick and Scout, a rumor began circulating in the AI community that Meta had also trained its Llama 4 models to perform better on benchmarks while hiding their real limitations. Meta’s VP of generative AI, Ahmad Al-Dahle, addressed the accusations in a post on X: “We’ve also heard claims that we trained on test sets — that’s simply not true and we would never do that. Our best understanding is that the variable quality people are seeing is due to needing to stabilize implementations.”


Some also noticed that Llama 4 was released at an odd time. Saturday doesn’t tend to be when big AI news drops. After someone on Threads asked why Llama 4 was released over the weekend, Meta CEO Mark Zuckerberg replied: “That’s when it was ready.”

“It’s a very confusing release generally,” says Willison, who closely follows and documents AI models. “The model score that we got there is completely worthless to me. I can’t even use the model that they got a high score on.”

Meta’s path to releasing Llama 4 wasn’t exactly smooth. According to a recent report from The Information, the company repeatedly pushed back the launch due to the model failing to meet internal expectations. Those expectations are especially high after DeepSeek, an open-source AI startup from China, released an open-weight model that generated a ton of buzz.

Ultimately, using an optimized model on LMArena puts developers in a difficult position. When selecting models like Llama 4 for their applications, they naturally look to benchmarks for guidance. But as Maverick shows, those benchmarks can reflect capabilities that aren’t actually present in the models the public can access.

As AI development accelerates, this episode shows how benchmarks are becoming battlegrounds. It also shows how Meta is eager to be seen as an AI leader, even if that means gaming the system.

[Notigroup Newsroom in collaboration with other media outlets, with information from the following sources]

Tags: AI, Meta, Tech
© 2025 - noti.group - All rights reserved - noti.group runs on 100% green energy.
