
Meta releases Llama 4, a new crop of flagship AI models

Meta has released a new collection of AI models, Llama 4, in its Llama family — on a Saturday, no less.

There are three new models in total: Llama 4 Scout, Llama 4 Maverick, and Llama 4 Behemoth. All were trained on “large amounts of unlabeled text, image, and video data” to give them “broad visual understanding,” Meta says.

The success of open models from Chinese AI lab DeepSeek, which perform on par or better than Meta’s previous flagship Llama models, reportedly kicked Llama development into overdrive. Meta is said to have scrambled war rooms to decipher how DeepSeek lowered the cost of running and deploying models like R1 and V3.

Scout and Maverick are openly available on Llama.com and from Meta’s partners, including the AI dev platform Hugging Face, while Behemoth is still in training. Meta says that Meta AI, its AI-powered assistant across apps including WhatsApp, Messenger, and Instagram, has been updated to use Llama 4 in 40 countries. Multimodal features are limited to the U.S. in English for now.
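
For developers outside the restricted regions (more on the license below), pulling the open weights should work much like any other Hugging Face checkpoint. The snippet below is a minimal sketch, not an official example: the repository id and pipeline task are assumptions, so check the model card for the exact loading code, and gated access requires accepting Meta's license terms first.

```python
# Minimal loading sketch using Hugging Face transformers. The repository id
# below is an assumption — check the meta-llama organization on Hugging Face
# for the exact name — and access requires accepting Meta's Llama 4 license.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # assumed repo id
    device_map="auto",  # spread the weights across available GPUs
)
print(chat("Summarize the Llama 4 release in one sentence.", max_new_tokens=64))
```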

Some developers may take issue with the Llama 4 license.

Users and companies “domiciled” or with a “principal place of business” in the EU are prohibited from using or distributing the models, likely the result of governance requirements imposed by the region’s AI and data privacy laws. (In the past, Meta has decried these laws as overly burdensome.) In addition, as with previous Llama releases, companies with more than 700 million monthly active users must request a special license from Meta, which Meta can grant or deny at its sole discretion.

“These Llama 4 models mark the beginning of a new era for the Llama ecosystem,” Meta wrote in a blog post. “This is just the beginning for the Llama 4 collection.”

[Image: Meta Llama 4. Image Credits: Meta]

Meta says that Llama 4 is its first cohort of models to use a mixture of experts (MoE) architecture, which is more computationally efficient for training and answering queries. MoE architectures basically break down data processing tasks into subtasks and then delegate them to smaller, specialized “expert” models. 

Maverick, for example, has 400 billion total parameters, but only 17 billion active parameters across 128 “experts.” (Parameters roughly correspond to a model’s problem-solving skills.) Scout has 17 billion active parameters, 16 experts, and 109 billion total parameters.
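
To make the “active parameters” idea concrete, here is a toy sketch of top-k expert routing in PyTorch. Everything about it — the layer sizes, the number of experts, the routing scheme — is an illustrative assumption, orders of magnitude smaller than Llama 4, but it shows how a router can send each token to only a couple of experts, so that just a slice of the total parameters does work on any given query.

```python
# Toy top-k mixture-of-experts layer (illustrative sketch only — sizes and
# routing details are assumptions, not Llama 4's actual architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_hidden=128, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Each "expert" is a small feed-forward network.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])
        # The router scores every expert for each token.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, x):                      # x: (batch, seq, d_model)
        scores = self.router(x)                # (batch, seq, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the top-k chosen experts run for each token, so just a
        # fraction of the layer's total parameters is "active" per query.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

layer = TinyMoELayer()
fake_tokens = torch.randn(2, 16, 64)   # a pretend batch of token embeddings
print(layer(fake_tokens).shape)        # torch.Size([2, 16, 64])
```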

According to Meta’s internal testing, Maverick, which the company says is best for “general assistant and chat” use cases like creative writing, exceeds models such as OpenAI’s GPT-4o and Google’s Gemini 2.0 on certain coding, reasoning, multilingual, long-context, and image benchmarks. However, Maverick doesn’t quite measure up to more capable recent models like Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and OpenAI’s GPT-4.5.

Scout’s strengths lie in tasks like document summarization and reasoning over large codebases. Uniquely, it has a very large context window: 10 million tokens. (“Tokens” represent bits of raw text — e.g. the word “fantastic” split into “fan,” “tas” and “tic.”) In plain English, Scout can take in images and up to millions of words, allowing it to process and work with extremely lengthy documents.
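
As a rough back-of-the-envelope illustration (the words-per-token ratio is a common rule of thumb for English text, not a figure from Meta):

```python
# Back-of-the-envelope only: the words-per-token ratio is a rough rule of
# thumb for English text and varies by tokenizer and content.
context_tokens = 10_000_000
words_per_token = 0.75
print(f"~{int(context_tokens * words_per_token):,} words")  # ~7,500,000 words
```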

Scout can run on a single Nvidia H100 GPU, while Maverick requires an Nvidia H100 DGX system or equivalent, according to Meta’s calculations.

Meta’s unreleased Behemoth will need even beefier hardware. According to the company, Behemoth has 288 billion active parameters, 16 experts, and nearly two trillion total parameters. Meta’s internal benchmarking has Behemoth outperforming GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Pro (but not 2.5 Pro) on several evaluations measuring STEM skills like math problem solving.

Of note, none of the Llama 4 models is a proper “reasoning” model along the lines of OpenAI’s o1 and o3-mini. Reasoning models fact-check their answers and generally respond to questions more reliably, but as a consequence take longer than traditional, “non-reasoning” models to deliver answers.

[Image: Meta Llama 4. Image Credits: Meta]

Interestingly, Meta says that it tuned all of its Llama 4 models to refuse to answer “contentious” questions less often. According to the company, Llama 4 responds to “debated” political and social topics that the previous crop of Llama models wouldn’t. In addition, the company says, Llama 4 is “dramatically more balanced” with which prompts it flat-out won’t entertain.

“[Y]ou can count on [Llama 4] to provide helpful, factual responses without judgment,” a Meta spokesperson told TechCrunch. “[W]e’re continuing to make Llama more responsive so that it answers more questions, can respond to a variety of different viewpoints […] and doesn’t favor some views over others.”

Those tweaks come as some White House allies accuse AI chatbots of being too politically “woke.”

Many of President Donald Trump’s close confidants, including billionaire Elon Musk and crypto and AI “czar” David Sacks, have alleged that popular AI chatbots censor conservative views. Sacks has historically singled out OpenAI’s ChatGPT as “programmed to be woke” and untruthful about political subject matter.

In actuality, bias in AI is an intractable technical problem. Musk’s own AI company, xAI, has struggled to create a chatbot that doesn’t endorse some political views over others.

That hasn’t stopped companies including OpenAI from adjusting their AI models to answer more questions than they would have previously, in particular questions relating to controversial subjects.
