Meta officially unveiled its latest open-source large model, Llama 4, yesterday, introducing two Mixture of Experts (MoE) variants: Scout (109 billion total parameters, 17 billion active) and Maverick (400 billion total parameters, 17 billion active). Llama 4 natively supports both text and image inputs, and the Scout variant offers a context window of up to 10 million tokens, positioning the family, on paper, as a cutting-edge release. Nevertheless, early evaluation results suggest that its real-world performance may fall short of expectations.
