LiteRunner
LiteRunner streamlines the execution and tracking of generative model runs: parameters, outputs, and metrics are declared once in a Python script, and LiteRunner handles CLI argument parsing, interactive prompts for missing values, subprocess management, metric extraction from stdout, and experiment logging to Weights & Biases. It targets ML researchers and practitioners who need reproducible, tracked experiments without boilerplate code for logging and file uploads.
- ✓ Excellent declarative API with Param and Metric classes that cleanly separate configuration from execution logic
- ✓ Comprehensive integration with Weights & Biases, including automatic code snapshots, git metadata, and file uploads based on parameter types
- ✓ Sophisticated interactive TUI that prompts for missing parameters while supporting both interactive and non-interactive modes for different use cases
- → Add comprehensive test coverage and error handling for edge cases such as network failures, invalid file paths, or malformed regex patterns in metrics
- → Implement configuration file support (YAML/TOML) to reduce repetitive parameter definitions across similar experiments and enable parameter inheritance