Meta-Llama-3-8B-Instruct

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes.

Developer Portal: https://api.market/store/bridgeml/meta-llama3-8b

Llama 3

This cheap LLM API serves Meta's Llama 3, a family of pre-trained and instruction-tuned generative text models released in 8B and 70B sizes. The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many of the available open-source chat models on common industry benchmarks.

Input: Models input text only.

Output: Models generate text and code only.

Model Architecture: Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
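For readers who want to see the auto-regressive, instruction-tuned behavior directly, here is a minimal local-inference sketch using the Hugging Face transformers library. The model ID is Meta's public checkpoint; gated-repo access and enough GPU memory for an 8B model are assumed, and this is separate from the hosted API described below.

```python
# Minimal sketch of auto-regressive generation with the instruction-tuned
# checkpoint. Assumes access to the gated meta-llama repo on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Explain RLHF in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generation is auto-regressive: each new token is predicted conditioned on
# the prompt plus all tokens generated so far.
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```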

| Params | Context length | Token count | Knowledge cutoff |
| ------ | -------------- | ----------- | ---------------- |
| 8B     | 8K             | 15T+        | March, 2023      |

Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction-tuned models are intended for assistant-like chat, whereas pre-trained models can be adapted for a variety of natural language generation tasks. This is an easy-to-use and cheap LLM API, priced at $0.18 per million tokens.
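As a quick illustration of that pricing (this assumes input and output tokens are billed at the same flat rate, which the page does not spell out):

```python
# Back-of-the-envelope cost at the quoted $0.18 per million tokens.
PRICE_PER_MILLION_USD = 0.18

def cost_usd(total_tokens: int) -> float:
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_USD

# Example: a chat turn with a 1,500-token prompt and a 500-token reply.
print(f"${cost_usd(1_500 + 500):.6f}")  # -> $0.000360
```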

Carbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. The figures below are for the 8B model alone.

| Model      | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
| ---------- | ---------------- | --------------------- | ----------------------- |
| Llama 3 8B | 1.3M             | 700                   | 390                     |
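A quick back-of-the-envelope check, derived only from the numbers in the table:

```python
# Energy implied by 1.3M GPU-hours at a 700 W TDP, and the emission factor
# implied by the reported 390 tCO2eq (before offsets).
gpu_hours = 1.3e6
tdp_kw = 0.7  # 700 W

energy_mwh = gpu_hours * tdp_kw / 1_000  # kWh -> MWh
print(energy_mwh)        # 910.0 MWh
print(390 / energy_mwh)  # ~0.43 tCO2eq per MWh, i.e. ~0.43 kg per kWh
```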

Training Data

Overview Llama 3 was pre-trained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

Data Freshness The pretraining data has a cutoff of March 2023 for the 8B model and December 2023 for the 70B model, respectively.

Request and Response

Request
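A minimal request sketch in Python. The endpoint URL and API-key header below are illustrative placeholders, not documented values; the exact endpoint, header name, and payload schema are listed on the developer portal. The payload assumes an OpenAI-style chat-completions format, which is a common convention for hosted chat models but is an assumption here.

```python
import requests

API_KEY = "YOUR_API_KEY"  # issued via api.market
# Hypothetical endpoint -- check the developer portal for the real path.
URL = "https://api.example.com/bridgeml/meta-llama3-8b/chat/completions"

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize Llama 3 in two sentences."},
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

response = requests.post(
    URL,
    json=payload,
    headers={"x-api-key": API_KEY},  # header name assumed
    timeout=60,
)
response.raise_for_status()
```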

Response
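Continuing the sketch above, and again assuming an OpenAI-style response body rather than the documented schema, the reply text can be pulled out like this:

```python
data = response.json()
# A typical OpenAI-style body looks roughly like:
# {
#   "choices": [{"message": {"role": "assistant", "content": "..."}}],
#   "usage": {"prompt_tokens": 28, "completion_tokens": 57, "total_tokens": 85}
# }
print(data["choices"][0]["message"]["content"])
```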

This is a cheap and easy-to-use LLM API, which you can try out at https://api.market/store/bridgeml/meta-llama3-8b
