The LLM training platform
A desktop app and CLI for the complete LLM training lifecycle. 13 training methods, 7 evaluation benchmarks, interpretability tools, and HuggingFace Hub integration, all running on your own compute: local GPUs, Slurm clusters, Kubernetes, or AWS.
The problem
Training an LLM end to end means stitching together a separate tool for data prep, training, evaluation, and export. Each tool has its own interface, config format, and way of tracking state. The result: wasted GPU hours, no reproducibility, and no way to trace how a model was trained. Whether you're a two-person research lab or a 500-person enterprise, the workflow is the same: fragmented, manual, and slow.
The solution
Pull data from local files, S3, or HuggingFace. Parse Parquet, JSONL, and CSV. Automatic deduplication, language detection, and perplexity-based quality scoring, with full lineage tracking.
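To make the data-prep step concrete, here is a minimal sketch of the kind of exact deduplication and quality gating a pipeline like this automates. The `quality_score` function and the `max_score` threshold are placeholder stand-ins, not the platform's actual scorer; a real pipeline would use a reference language model's perplexity.

```python
import hashlib

# Placeholder quality score; a real pipeline would use a small reference
# LM's perplexity. Like perplexity, lower means "more fluent" here.
def quality_score(text: str) -> float:
    words = text.split()
    if not words:
        return float("inf")
    # Crude proxy: heavy repetition -> low type/token ratio -> worse score.
    return len(words) / len(set(words))

def clean(records, max_score=2.0):
    """Exact-dedup by content hash, then drop low-quality records."""
    seen, kept = set(), []
    for rec in records:
        digest = hashlib.sha256(rec["text"].encode()).hexdigest()
        if digest in seen:
            continue  # exact duplicate, skip
        seen.add(digest)
        if quality_score(rec["text"]) <= max_score:
            kept.append(rec)
    return kept

raw = [
    {"text": "The cat sat on the mat."},
    {"text": "The cat sat on the mat."},        # exact duplicate
    {"text": "spam spam spam spam spam spam"},  # repetitive junk
    {"text": "A short, fluent sentence."},
]
print(len(clean(raw)))  # → 2: duplicate and repetitive record removed
```

Near-duplicate detection (e.g. MinHash) and language ID slot into the same loop; the lineage-tracking point is that each drop decision is recorded per record.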
13 training methods: SFT, LoRA, QLoRA, DPO, RLHF, KTO, ORPO, GRPO, RLVR, distillation, domain adaptation, multimodal, and pretraining. Distributed across multiple GPUs and nodes on Slurm, Kubernetes, or AWS.
Run 7 industry-standard benchmarks: MMLU, HellaSwag, ARC, WinoGrande, GSM8K, TruthfulQA, and HumanEval. Compare multiple models on the same evaluations side by side.
Talk to your model interactively. Then go deeper with 6 interpretability tools: logit lens, activation PCA, activation patching, linear probes, SAE analysis, and steering vectors.
A/B test two models side by side. Rate responses. Export your preferences as training data for DPO, KTO, or ORPO.
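The export step maps naturally onto the JSONL shape that DPO-style trainers commonly expect: one record per prompt with a chosen and a rejected response. A minimal sketch, where the input `ratings` structure is a hypothetical shape for illustration, not the platform's internal format:

```python
import json

def ratings_to_dpo(ratings):
    """Turn A/B ratings into DPO-style preference pairs.

    Each rating is {"prompt", "response_a", "response_b", "winner"},
    where "winner" is "a" or "b" (a hypothetical input shape).
    """
    pairs = []
    for r in ratings:
        chosen = r["response_a"] if r["winner"] == "a" else r["response_b"]
        rejected = r["response_b"] if r["winner"] == "a" else r["response_a"]
        pairs.append({"prompt": r["prompt"],
                      "chosen": chosen,
                      "rejected": rejected})
    return pairs

ratings = [{"prompt": "Explain LoRA.",
            "response_a": "LoRA trains low-rank adapters.",
            "response_b": "LoRA is a radio protocol.",
            "winner": "a"}]
# One JSONL line per preference pair, ready for a DPO/KTO/ORPO trainer.
print(json.dumps(ratings_to_dpo(ratings)[0]))
```

KTO takes unpaired thumbs-up/thumbs-down labels rather than pairs, so its export is even simpler: one record per rated response with a boolean label.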
Ship your model in the format you need. ONNX, SafeTensors, GGUF, or HuggingFace. One click from the UI or one command from the CLI.
Feed preference data back into alignment training. Or build a full pipeline that automates the entire loop. The cycle closes.
Who it's for
Graduate students and academic labs with university Slurm clusters. Submit jobs, track experiments, and run interpretability analysis without writing sbatch scripts or YAML configs.
Startups and small teams fine-tuning their own models. 13 training methods, 7 benchmarks, and HuggingFace Hub in one platform instead of six open-source tools duct-taped together.
Enterprises with data that can't leave the building. Train on your own data without sending it to a third party. Connect your Kubernetes clusters or AWS instances. Full lineage tracking, reproducibility bundles, and multi-format export.
We're onboarding teams now. Tell us about your use case and we'll be in touch.
Get in touch