OpenAI recently issued a press release announcing improvements to its fine-tuning API and a further expansion of its Custom Models program.
The press release describes the following improvements to the fine-tuning API:
Epoch-based Checkpoint Creation
A complete checkpoint of the fine-tuned model is now automatically generated at the end of each training epoch (one full pass over every example in the training dataset). This reduces the need for subsequent retraining, especially in cases of overfitting.
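The value of per-epoch checkpoints can be illustrated with a small sketch. The code below is a toy stand-in, not OpenAI's implementation: the "model" is a single parameter and the validation losses are hard-coded so that the model overfits after epoch 2, letting us recover the best checkpoint without retraining.

```python
import copy

def train_with_epoch_checkpoints(model, train_step, evaluate, epochs):
    """Save a full model checkpoint after every epoch, mirroring the
    behaviour the fine-tuning API now provides automatically."""
    checkpoints = []
    for epoch in range(epochs):
        train_step(model)                     # one full pass over the data
        val_loss = evaluate(model)
        checkpoints.append((epoch, val_loss, copy.deepcopy(model)))
    # If later epochs overfit, roll back to the checkpoint with the best
    # validation loss instead of retraining from scratch.
    best_epoch, _, best_model = min(checkpoints, key=lambda c: c[1])
    return best_epoch, best_model

# Toy demo: validation loss improves, then degrades (overfitting) after
# epoch 2; the recovered checkpoint is the one saved at epoch 2.
losses = iter([0.9, 0.5, 0.3, 0.4, 0.6])
model = {"w": 0.0}

def train_step(m):
    m["w"] += 1.0

def evaluate(m):
    return next(losses)

best_epoch, best_model = train_with_epoch_checkpoints(model, train_step, evaluate, 5)
print(best_epoch, best_model["w"])  # → 2 3.0
```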
Comparative Playground
A new side-by-side playground UI for comparing model quality and performance, allowing human evaluation of the outputs of multiple models or fine-tuning snapshots against a single prompt.
Third-party Integrations
Support for integrations with third-party platforms (starting this week with Weights and Biases) lets developers share detailed fine-tuning data with the rest of their stack.
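A hedged sketch of what such a request might look like: the snippet below only builds the request payload as a plain dictionary (no network call), the file ID and project name are made up, and the exact field names should be checked against the current API reference.

```python
# Hypothetical fine-tuning job request with a Weights and Biases
# integration attached; IDs and project names here are placeholders.
job_request = {
    "model": "gpt-3.5-turbo",
    "training_file": "file-abc123",            # hypothetical uploaded-file ID
    "integrations": [
        {
            "type": "wandb",                   # enable the W&B integration
            "wandb": {
                "project": "my-finetune-runs", # hypothetical W&B project name
                "tags": ["fine-tuning"],
            },
        }
    ],
}

# With the official Python SDK, a dict like this would map onto keyword
# arguments of a job-creation call, e.g.:
#   client.fine_tuning.jobs.create(**job_request)
print(job_request["integrations"][0]["type"])  # → wandb
```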
More Comprehensive Validation Metrics
The ability to compute metrics such as loss and accuracy over the entire validation dataset (rather than a sampled batch), providing better insight into model quality.
Hyperparameter Configuration
The ability to configure the available hyperparameters from the dashboard (rather than only through the API or SDK).
Improved fine-tuning control panel
The ability to configure hyperparameters, view more detailed training metrics, and rerun jobs from previous configurations.
Expanding the Custom Model Program
To further expand the Custom Models program, OpenAI has also launched an assisted fine-tuning service. Developers can work with OpenAI's technical teams to train and optimize models for specific domains, including setting additional hyperparameters and applying various parameter-efficient fine-tuning (PEFT) methods.
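The core idea behind PEFT methods such as LoRA can be sketched in a few lines. This is an illustrative toy, not OpenAI's assisted fine-tuning pipeline: instead of updating a full weight matrix W, one trains a low-rank correction B @ A with far fewer parameters, and the dimensions below are deliberately tiny.

```python
# Minimal low-rank adapter sketch using plain nested lists.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def add(X, Y):
    """Element-wise sum of two same-shaped matrices."""
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

d = 4          # model dimension (tiny for illustration)
r = 1          # adapter rank, r << d

# Frozen pretrained weight (d x d); identity here for clarity.
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]

# Trainable low-rank factors: B is d x r, A is r x d.
B = [[0.1] for _ in range(d)]
A = [[1.0, 2.0, 3.0, 4.0]]

# Effective fine-tuned weight: W + B @ A.
W_eff = add(W, matmul(B, A))

full_params = d * d          # parameters a full fine-tune would update
peft_params = d * r + r * d  # parameters the adapter updates instead
print(full_params, peft_params)  # → 16 8
```

Even in this toy, the adapter trains half as many parameters as the full matrix; for realistic model dimensions with r much smaller than d, the savings are dramatic.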