The following models are available to use with our fine-tuning API.
Training Precision Type indicates the numeric precision used to train each model.
bf16 (bfloat16): All weights are stored in bf16. Some large models on our platform use full bf16 training for better memory efficiency and training speed.
Pricing for fine-tuning is based on model size, the number of training tokens, the number of validation tokens, the number of evaluations, and the number of epochs. The total number of training tokens processed in a job is n_epochs * n_tokens_per_dataset.
For example, if you start a "meta-llama/Meta-Llama-3-8B-Instruct" fine-tuning job with 1M tokens and 1 epoch, the cost will be ¥3.47.
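As a rough sketch of the arithmetic, the Python snippet below estimates a job's cost from a per-million-token rate. The ¥3.47/1M rate is inferred from the example above for illustration only and may not reflect current pricing:

```python
# Illustrative cost estimate only. PRICE_PER_M_TOKENS is a hypothetical
# rate inferred from the ¥3.47 example above, not an official price list.
PRICE_PER_M_TOKENS = 3.47  # ¥ per 1M tokens (assumed, Meta-Llama-3-8B-Instruct)

def estimate_cost(n_tokens_per_dataset: int, n_epochs: int) -> float:
    """Estimate fine-tuning cost: total tokens = n_epochs * n_tokens_per_dataset."""
    total_tokens = n_epochs * n_tokens_per_dataset
    return total_tokens / 1_000_000 * PRICE_PER_M_TOKENS

# 1M training tokens for 1 epoch -> ¥3.47, matching the example above.
print(f"¥{estimate_cost(1_000_000, 1):.2f}")
```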