Meta-Llama-3-70B-Instruct
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes.
Developer Portal: https://api.market/store/bridgeml/meta-llama3-70b
This cheap LLM API provides access to the 70B instruction-tuned model from that family. The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many of the available open-source chat models on common industry benchmarks.
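To get started quickly, the sketch below shows a minimal request against the hosted API. The endpoint URL, header name, and payload shape are placeholders and assumptions, not confirmed values; check the developer portal listing above for the actual endpoint and authentication details.

```python
# Minimal sketch of calling the hosted Llama 3 70B Instruct API.
# ASSUMPTIONS: the URL, auth header, and payload shape below are
# placeholders -- consult the api.market listing for the real values.
import requests

API_URL = "https://api.market/store/bridgeml/meta-llama3-70b"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"  # issued when you subscribe on api.market

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize Llama 3 in one sentence."},
    ],
    "max_tokens": 256,
}

resp = requests.post(API_URL, json=payload, headers={"x-api-key": API_KEY})
resp.raise_for_status()
print(resp.json())
```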
Input: Models input text only.
Output: Models generate text and code only.
Model Architecture: Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
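If you prefer to run the instruction-tuned weights yourself rather than use the hosted API, the following is a minimal sketch using the Hugging Face `transformers` pipeline, assuming you have accepted Meta's license on the Hub and have enough GPU memory (the 70B model requires multiple high-memory GPUs even in 16-bit precision).

```python
# Sketch: running Meta-Llama-3-70B-Instruct locally via Hugging Face
# transformers, as an alternative to the hosted API. Assumes license
# access on the Hub and sufficient GPU memory for a 70B model.
import torch
import transformers

pipe = transformers.pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-70B-Instruct",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",  # shard across available GPUs
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain RLHF in two sentences."},
]
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])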
| Params | Context length | Token count | Knowledge cutoff |
|---|---|---|---|
| 70B | 8K | 15T+ | December 2023 |
Intended Use Cases: Llama 3 is intended for commercial and research use in English. Instruction-tuned models are intended for assistant-like chat, whereas pre-trained models can be adapted for a variety of natural language generation tasks. This is an easy-to-use, low-cost LLM API priced at $1.20 per million tokens.
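A quick back-of-the-envelope cost estimate at that rate, assuming a flat price over input plus output tokens (check the listing for how input and output tokens are actually billed):

```python
# Cost estimate at the listed $1.20 per million tokens.
# ASSUMPTION: a single flat rate applies to input + output tokens.
PRICE_PER_MILLION = 1.20  # USD

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens + output_tokens) / 1_000_000 * PRICE_PER_MILLION

# e.g. a chat turn with a 1,500-token prompt and a 500-token reply:
print(f"${cost_usd(1_500, 500):.4f}")  # $0.0024
```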
Carbon Footprint: Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program. The 70B model alone accounted for:
| Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
|---|---|---|
| 6.4M | 700 | 1900 |
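As a rough sanity check on these figures, multiplying GPU time by TDP gives the energy budget, and dividing the reported emissions by that energy gives the implied grid carbon intensity. The sketch below is illustrative only: real power draw differs from TDP, and Meta's reported figure follows its own methodology rather than this back-of-the-envelope math.

```python
# Sanity-check of the table above: GPU-hours x TDP -> energy,
# then reported emissions / energy -> implied carbon intensity.
# Illustrative only; actual draw differs from TDP and excludes
# cooling/datacenter overhead.
gpu_hours = 6.4e6
tdp_kw = 0.700          # 700 W per H100-80GB
emissions_t = 1900      # reported tCO2eq for the 70B run

energy_kwh = gpu_hours * tdp_kw              # = 4.48e6 kWh
intensity = emissions_t * 1000 / energy_kwh  # kg CO2eq per kWh
print(f"{energy_kwh:.2e} kWh, ~{intensity:.2f} kg CO2eq/kWh implied")
```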
Overview: Llama 3 was pre-trained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
Data Freshness: The pretraining data has a cutoff of March 2023 for the 8B model and December 2023 for the 70B model.
You can try this cheap, easy-to-use LLM API at https://api.market/store/bridgeml/meta-llama3-70b