Meta recently introduced its new family of large language models (LLMs), Llama 3. The release includes pre-trained and instruction-tuned models at 8B and 70B parameters.
Here is a summary of the key technical details of Llama 3:
Notably, Llama 3 8B (instruction-tuned) outperforms Gemma 7B and Mistral 7B Instruct. Llama 3 70B broadly outperforms Gemini Pro 1.5 and Claude 3 Sonnet, though it falls slightly behind Gemini Pro 1.5 on the MATH benchmark.
(Figure: instruction-tuned model benchmark comparison. Source: Meta AI)
The pre-trained models also outperform other models on several benchmarks, including AGIEval (English), MMLU, and BIG-Bench Hard.
(Figure: pre-trained model benchmark comparison. Source: Meta AI)
Meta also reported that a 400B-parameter model is still in training and will be released soon. Efforts around multimodal support, multilingual capabilities, and longer context windows are also in the pipeline. The current Llama 3 400B checkpoint (as of April 15, 2024) produces the following results on common benchmarks such as MMLU and BIG-Bench Hard:
(Figure: Llama 3 400B checkpoint benchmark results. Source: Meta AI)
The licensing information for the Llama 3 models can be found on the model card.
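As a quick illustration of how the instruction-tuned checkpoints are prompted, here is a minimal sketch of the single-turn Llama 3 Instruct chat format, assuming the special tokens documented on the model card (`<|begin_of_text|>`, `<|start_header_id|>`, `<|end_header_id|>`, `<|eot_id|>`); the helper function name is our own:

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 3 Instruct chat format.

    Each turn is wrapped in role headers, and the prompt ends with an open
    assistant header so the model generates the reply. Hypothetical helper
    for illustration; in practice the tokenizer's chat template handles this.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("You are a helpful assistant.", "What is Llama 3?")
```

In practice, libraries such as Hugging Face `transformers` apply this template automatically via the tokenizer's `apply_chat_template` method, so you rarely need to build the string by hand.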
Here is a longer review of Llama 3: