Welcome to Facet AI
This quickstart guide will walk you through creating your first fine-tuned model using Facet AI. You’ll go from data upload to model deployment in just a few steps.

Prerequisites: You’ll need a Facet AI account. If you don’t have one, sign up here first.
Step 1: Create Your Account and Access the Platform
1. Sign up for Facet AI
- Visit Facet AI
- Click “Get Started” and create your account
- Verify your email address if required
You should see the Facet AI dashboard after successful signup.
2. Navigate to the Dashboard
Once logged in, you’ll see the main dashboard with sections for:
- Datasets: Manage your training data
- Training: Create and monitor fine-tuning jobs
- Models: View your trained models
- Exports: Download models in various formats
Step 2: Prepare Your Dataset
1. Upload Your Data
You have two options for getting your training data:
Option 1: Import from Hugging Face
- Go to Datasets → Create Dataset
- Choose “Import from Hugging Face”
- Enter the dataset name (e.g., huggingface/datasets)
- Select the specific split you want to use

Option 2: Upload custom data
- Go to Datasets → Create Dataset
- Choose “Upload from file”
- Upload your data file (CSV, JSON, or TXT)
- Give your dataset a descriptive name
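If you are assembling a custom data file, a small script can turn raw rows into a clean upload. The sketch below converts a two-column CSV of prompt/response pairs into a JSON file using only the standard library. The field names (`prompt`, `response`) and the sample rows are illustrative placeholders, not a documented Facet AI schema.

```python
import csv
import io
import json

# Illustrative CSV with a header row; in practice you would read your own file.
csv_text = """prompt,response
What is fine-tuning?,Adapting a pretrained model to a narrower task.
Name a small Gemma model.,Gemma 3 270M.
"""

# Parse the CSV into dictionaries keyed by the header row.
rows = list(csv.DictReader(io.StringIO(csv_text)))

# Reshape into a list of records; adjust field names to whatever
# schema your dataset actually uses.
records = [{"prompt": r["prompt"], "response": r["response"]} for r in rows]

# Write a pretty-printed JSON file ready to upload.
with open("my_dataset.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)

print(len(records))  # number of training examples
```

Keeping the conversion in a script makes it easy to regenerate the upload file whenever the source data changes.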
2. Configure Dataset Processing
After upload, configure your dataset:
- Task Type: Choose from Language Modeling, Preference Tuning, or Multimodal
- Format: Convert your dataset into conversational format for training
- Augmentation: Set up data augmentation if desired
- Click “Process Dataset”
Processing typically takes 1-5 minutes depending on dataset size.
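To make the “conversational format” step concrete, here is one training example in the widely used chat-messages layout (a list of `role`/`content` turns, stored as one JSON object per line in a JSONL file). This is a common convention, not Facet AI’s documented internal format, so the exact field names are an assumption.

```python
import json

# One conversational training example: a user turn followed by the
# assistant reply the model should learn to produce.
example = {
    "messages": [
        {"role": "user", "content": "Summarize: Facet AI fine-tunes Gemma models."},
        {"role": "assistant", "content": "Facet AI is a platform for fine-tuning Gemma models."},
    ]
}

# JSONL stores one JSON object per line; round-trip to verify it is valid.
line = json.dumps(example, ensure_ascii=False)
parsed = json.loads(line)
print(parsed["messages"][0]["role"])
```

Each line of the file holds one complete conversation, which keeps large datasets streamable.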
Step 3: Start Your First Training Job
1. Create Training Configuration
- Go to Training → New Job
- Select your processed dataset from the dropdown
- Choose your model:
- Gemma 3 270M: Fastest, good for experimentation
- Gemma 3 1B: Balanced performance and speed
- Gemma 3 4B: Better quality, longer training time
- Gemma 3 12B: High quality, requires more resources
For your first model, we recommend starting with Gemma 3 270M/1B to get quick results.
2. Configure Training Parameters
Set your training parameters (or use defaults):
- Learning Rate: Start with default (0.0001)
- Batch Size: Use default (4) for most cases
- Epochs: Generally 1-3 epochs suffice; for a quick test run, limit training to 100-500 steps
- Training Method: Select between SFT, DPO, or GRPO based on your task
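Epochs, batch size, and dataset size together determine the total step count, so it is worth checking that your settings land in the suggested 100-500-step range before launching a test run. A quick back-of-envelope calculation (the dataset size below is illustrative):

```python
import math

dataset_size = 1000  # training examples (plug in your own dataset size)
batch_size = 4       # the guide's default
epochs = 2

# Each epoch makes one pass over the data, one optimizer step per batch.
steps_per_epoch = math.ceil(dataset_size / batch_size)
total_steps = steps_per_epoch * epochs
print(total_steps)  # 500
```

If the total comes out far above 500, either cap the step count directly or drop to a single epoch for the trial run.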
3. Launch Training
- Review your configuration
- Give your training job a descriptive name
- Click “Start Training”
Training will begin immediately. You can monitor progress in the Training section.
Step 4: Monitor and Evaluate Your Model
1. Track Training Progress
- Go to Training to see your active jobs
- Click on your training job to view detailed progress
- Monitor metrics like loss, learning rate, and training time
Training time varies: 270M models train in ~30 minutes, while 12B models can take several hours.
2. Test Your Model
Once training completes:
- Go to Models section
- Find your newly trained model
- Click “Test Model” to run inference
- Try different prompts to evaluate performance
Step 5: Export and Deploy Your Model
1. Export Your Model
- Go to Exports → Create Export
- Select your trained model
- Choose export format:
- GGUF: For local deployment with llama.cpp
- Adapter: For Hugging Face transformers
- Merged: Complete model ready for deployment
- Select quantization level (4-bit, 8-bit, or 16-bit)
- Click “Create Export”
Export typically takes 5-15 minutes depending on model size and format.
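The quantization level largely determines the download size: weights dominate the file, at roughly params × bits / 8 bytes. The estimates below are ballpark figures only, since real export files add metadata and may mix precisions.

```python
# Approximate parameter counts for the model sizes listed above.
PARAMS = {
    "Gemma 3 270M": 270e6,
    "Gemma 3 1B": 1e9,
    "Gemma 3 4B": 4e9,
    "Gemma 3 12B": 12e9,
}

def approx_size_gb(n_params: float, bits: int) -> float:
    """Estimate file size in GB, counting only the quantized weights."""
    return n_params * bits / 8 / 1e9

for name, n in PARAMS.items():
    sizes = {bits: round(approx_size_gb(n, bits), 2) for bits in (4, 8, 16)}
    print(name, sizes)
```

As a rule of thumb, 4-bit gives the smallest files at some quality cost, while 16-bit preserves full precision at four times the size.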
2. Download Your Model
- Once export completes, click “Download”
- Save the model file to your local machine
- Your model is now ready for deployment!
Next Steps
Congratulations! You’ve successfully fine-tuned your first Gemma model. Here’s what to explore next:

Advanced Training
Learn about DPO, GRPO, and advanced training techniques for better model performance.
Model Deployment
Deploy your models to production using Google Cloud Run or other platforms.
Evaluation Techniques
Learn comprehensive evaluation methods to assess your model’s quality.
Dataset Best Practices
Master dataset preparation for optimal fine-tuning results.
Troubleshooting
Training fails to start
- Check your dataset is properly processed
- Ensure you have sufficient credits/quota
- Verify your training parameters are valid
Poor model performance
- Try a larger model size (1B → 4B → 12B)
- Increase training steps
- Check your dataset quality and size
- Consider data augmentation
Export issues
- Ensure training completed successfully
- Try a different export format
- Check your available storage quota
Need more help? Check out our comprehensive tutorials or
contact support at facet.gemma@gmail.com.