Within just 36 hours of its release, Meta's latest foundation model, Llama 4, was met with an avalanche of negative reviews in the comments. Users complained of subpar performance across a range of tests, singling out its weak coding abilities in particular. Although the official evaluation scores painted a positive picture, Llama 4 placed last in third-party benchmarks, raising suspicions of benchmark overfitting or vote manipulation. Adding to the turmoil, the head of Meta's AI research abruptly resigned, and insiders disclosed that Llama 4's training had lagged significantly behind DeepSeek-V3's, plunging the team into chaos and anxiety.
