OpenAI Fine-Tuner

OpenAI Fine-Tune Uploader

[Embedded tool interface: a connection status indicator (“Not Connected”), an Upload Fine-Tune section listing uploaded files (ID, Filename, Purpose, Created At, Actions), a Your Fine-Tunes table (ID, Model, Status, Created At, Actions), and a Create Fine-Tune button.]

Key

model:

A model is like a computer program that can learn from data and make predictions. It’s like a brain that can be taught new things.

When we talk about the “base model,” we’re referring to the starting point for training. It isn’t a blank canvas, though: the base model has already learned from a huge amount of text, and fine-tuning adds your own details on top of that.

Fine-tuning the base model makes it better at doing specific things.

For example, if we want the program to be really good at writing stories, we might choose a base model called “curie” and fine-tune it to be even better at storytelling.

There are a few different base models to choose from, including “ada,” “babbage,” “curie,” and “davinci.” And, if someone has created a customized model after April 21st, 2022, we can use that too!

If we want to learn more about these different models and how they work, we can look at the Models documentation.
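
To make this concrete, here is a minimal sketch of choosing a base model in code. It assumes the legacy (pre-1.0) openai Python library, since the ada/babbage/curie/davinci bases described above belong to that era’s fine-tunes endpoint; the API key and file ID are placeholders.

```python
import openai

openai.api_key = "sk-..."  # placeholder; use your own key

# "model" selects the base model that fine-tuning starts from.
fine_tune = openai.FineTune.create(
    training_file="file-abc123",  # placeholder ID of an uploaded JSONL file
    model="curie",                # or "ada", "babbage", "davinci"
)
print(fine_tune.id, fine_tune.status)
```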

n_epochs:

n_epochs is a setting that controls how many times the model will go through the training data to learn from it. An epoch is one full pass, where the model looks at each example in the training data once and learns from it. Think of it like a student studying for a test – the more times they review the material, the better they understand it. Similarly, more epochs give the model more chances to learn from the data, though too many passes can make it memorize the training examples instead of learning patterns that generalize (overfitting).

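As a sketch of how this looks in practice (same legacy pre-1.0 openai library and placeholder file ID as above), the number of epochs is just another argument when the fine-tune is created; the legacy endpoint defaulted to 4:

```python
import openai

# More epochs = more passes over the same data. Raising n_epochs can help
# on small datasets, but too many passes risks overfitting.
fine_tune = openai.FineTune.create(
    training_file="file-abc123",  # placeholder
    model="curie",
    n_epochs=4,  # the legacy default
)
```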

batch_size:

Batch size is the number of examples that a machine learning algorithm looks at during one cycle of training.

For example, let’s say we want to teach a machine learning model to recognize different animals in pictures. We might show the model a bunch of pictures of animals and tell it what each animal is.

The batch size is how many pictures the model looks at during each round of training. If we set the batch size to 10, the model will look at 10 pictures at a time and try to learn from them.

The default batch size is around 0.2% of the total number of examples in the training set, capped at 256. So if we have a lot of pictures to train on, the default batch size will be larger and the model will learn from more examples at each step.

In general, larger batch sizes tend to work well for larger datasets, but it’s important to find the right balance: batches that are too large mean fewer, coarser updates per epoch, and the model can stop learning effectively.
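
The 0.2% rule above is easy to work out by hand: for 10,000 training examples the default is about 20, and the 256 cap only matters for very large sets. A small illustrative helper (the function name is ours, not part of the API):

```python
def default_batch_size(n_examples: int) -> int:
    """Approximate the legacy default: ~0.2% of the training set, capped at 256."""
    return max(1, min(256, round(n_examples * 0.002)))

print(default_batch_size(10_000))   # -> 20
print(default_batch_size(500_000))  # -> 256 (hits the cap)

# An explicit value can also be passed when creating the fine-tune:
# openai.FineTune.create(training_file="file-abc123", model="curie", batch_size=8)
```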

prompt_loss_weight:

Prompt loss weight is a setting that controls how much the model focuses on learning from the instructions given to it (called the “prompt”) compared to the text it generates (called the “completion”).

The default value for prompt_loss_weight is 0.01. At this value the prompt tokens count for only 1% as much as the completion tokens (which always have a weight of 1.0), so the model focuses almost entirely on learning to produce the completion rather than on reproducing the prompt.

Giving the prompt a small, non-zero weight can help stabilize the training process when the completions are short. It’s like giving the model a little extra help to understand the instructions better so it can generate more accurate and useful results.
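
Putting all four settings together, here is an end-to-end sketch using the same legacy (pre-1.0) openai library; the file name, key, and hyperparameter values are illustrative, not recommendations:

```python
import openai

openai.api_key = "sk-..."  # placeholder

# Upload training data: one {"prompt": ..., "completion": ...} object per line.
training = openai.File.create(file=open("stories.jsonl", "rb"), purpose="fine-tune")

# Create the fine-tune with explicit hyperparameters.
fine_tune = openai.FineTune.create(
    training_file=training.id,
    model="curie",            # base model
    n_epochs=4,               # passes over the training data
    batch_size=8,             # examples per training step
    prompt_loss_weight=0.01,  # weight on prompt tokens (completions weigh 1.0)
)

# Check on the job; once it finishes, fine_tuned_model holds the new model name.
job = openai.FineTune.retrieve(id=fine_tune.id)
print(job.status, job.fine_tuned_model)
```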
