Module Documentation
Detailed documentation of voria's core modules and their APIs.
Python Core Modules
llm/ - LLM Provider Integration
Purpose: Abstract interface to multiple LLM providers.
Key Classes:
- BaseLLMProvider - Abstract base class
- ModelDiscovery - Runtime model discovery
- ProviderSetup - Interactive configuration
- ModelInfo - Model metadata
- Provider implementations:
  - ModalProvider - Modal Z.ai backend
  - OpenAIProvider - OpenAI API
  - GeminiProvider - Google Gemini
  - ClaudeProvider - Anthropic Claude
Main Methods:
```python
# Discover available models
models = await LLMProviderFactory.discover_models("openai", api_key)

# Create provider instance
provider = LLMProviderFactory.create("openai", api_key, "gpt-5.4")

# Use provider
await provider.plan("Issue description")
await provider.generate_patch(issue_context, plan)
await provider.analyze_test_failure(test_output, code)
```
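Because every backend implements the same BaseLLMProvider interface, a new provider can be added by subclassing it. A minimal sketch, assuming the abstract method set matches the calls shown above and that the base class lives under voria.core.llm (check the module for the real import path and signatures):

```python
from voria.core.llm import BaseLLMProvider, LLMResponse  # import path assumed

class MyProvider(BaseLLMProvider):
    """Hypothetical custom backend; method names taken from the docs above."""

    async def plan(self, issue_description: str) -> LLMResponse:
        # Call your backend's API and wrap the result in an LLMResponse
        ...

    async def generate_patch(self, issue_context: str, plan: str) -> LLMResponse:
        ...

    async def analyze_test_failure(self, test_output: str, code: str) -> LLMResponse:
        ...
```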
Configuration:
- Stored in ~/.voria/providers.json (illustrative example below)
- Supports environment variable fallback
- Interactive setup via python3 -m voria.core.setup
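The on-disk schema of providers.json isn't specified here. Purely as an illustration, a file keyed by provider name holding the fields the setup flow collects might be read like this (the key names are assumptions; the real schema is whatever ProviderSetup writes):

```python
import json
from pathlib import Path

# Hypothetical shape: {"openai": {"api_key": "sk-...", "model": "gpt-5.4"}}
config = json.loads(Path("~/.voria/providers.json").expanduser().read_text())
print(config.get("openai", {}).get("model"))
```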
Token Tracking:
```python
response = await provider.call_llm(prompt)
print(response.token_usage)  # {"used": 1000, "max": 4000}
```
patcher/ - Code Patching
Purpose: Parse and apply unified diffs.
Key Classes:
- UnifiedDiffParser - Parse diff format
- PatchHunk - Individual hunk data
- CodePatcher - Apply patches with rollback
Main Methods:
```python
# Parse diff
hunks = UnifiedDiffParser.parse(unified_diff_string)

# Create patcher
patcher = CodePatcher(repo_path)

# Apply patch
result = await patcher.apply_patch(diff_content, strategy="fuzzy")

# Rollback if needed
await patcher.rollback_patch(file_path, backup_path)

# Cleanup old backups
await patcher.cleanup_backups(keep_count=10)
```
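For reference, the parser's input is standard unified-diff text. A minimal example of the format it consumes (the file path and change are made up for illustration):

```python
# A made-up one-hunk diff in standard unified format
unified_diff_string = """\
--- a/src/parser.py
+++ b/src/parser.py
@@ -10,2 +10,2 @@
 def parse(line):
-    return line.split(",")
+    return [part.strip() for part in line.split(",")]
"""

hunks = UnifiedDiffParser.parse(unified_diff_string)
```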
Features:
- Auto-backup before applying
- Strict/fuzzy matching strategies
- Automatic rollback on failure
- Backup retention management
- Backups located in ~/.voria/backups/
executor/ - Test Execution
Purpose: Detect and run test suites.
Key Classes:
- TestExecutor - Main coordinator
- TestStatus - Result enum
- TestResult - Individual test result
- TestSuiteResult - Full suite results
- Framework parsers:
  - PytestParser - Python pytest
  - JestParser - JavaScript Jest
  - (extensible for others)
Main Methods:
```python
# Create executor
executor = TestExecutor(repo_path)

# Detect framework
framework = await executor.detect_framework()  # "pytest" | "jest" | None

# Run tests
result = await executor.run_tests()

# Format results
output = executor.format_results(result)
```
Results Structure:
```python
TestSuiteResult(
    framework="pytest",
    total=25,
    passed=24,
    failed=1,
    skipped=0,
    duration=2.5,
    results=[
        TestResult(name="test_api", status=TestStatus.PASSED, duration=0.1),
        TestResult(name="test_db", status=TestStatus.FAILED, message="timeout"),
    ],
)
```
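When a run fails, the per-test results make it easy to pull out just the failures, for example to feed them into the provider's analyze_test_failure step. A short sketch using only the fields documented above:

```python
# Collect failed tests from a suite run
failures = [r for r in result.results if r.status == TestStatus.FAILED]
for test in failures:
    print(f"{test.name}: {test.message}")
```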
agent/ - Orchestration
Purpose: Main agent loop for issue fixing.
Key Classes:
- AgentLoop - Core orchestrator
- LoopState - State tracking
- LoopAction - Action enum
Main Methods:
```python
# Create agent
loop = AgentLoop(
    provider_name="openai",
    api_key="sk-...",
    repo_path="/repo",
)

# Initialize (setup provider)
await loop.initialize("gpt-5.4")

# Run full loop
result = await loop.run(
    issue_id=42,
    issue_description="Fix bug in parser",
)
```
Loop Stages (sketched below):
- _step_plan() - Generate fix strategy
- _step_patch() - Generate diff
- _step_apply() - Apply changes
- _step_test() - Run tests
- _analyze_failure() - Analyze if failed, then loop back or succeed
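Read together, the stages form a plan → patch → apply → test cycle that repeats until tests pass or a cap is hit. A rough control-flow sketch; the iteration limit, the private methods' return values, and the chaining shown here are assumptions, not the actual implementation:

```python
# Illustrative only: how the stages above might chain inside AgentLoop.run()
async def run_sketch(loop, issue_id, issue_description, max_iterations=5):
    for iteration in range(max_iterations):
        plan = await loop._step_plan()          # generate fix strategy
        patch = await loop._step_patch()        # generate diff
        await loop._step_apply()                # apply changes
        test_result = await loop._step_test()   # run tests
        if test_result.failed == 0:
            return {"status": "success", "iterations": iteration + 1}
        await loop._analyze_failure()           # feed failures back, loop again
    return {"status": "timeout", "iterations": max_iterations}
```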
Result Structure:
python{ "status": "success"|"failure"|"timeout", "iterations": 3, "plan": "Generated plan...", "patch": "Generated diff...", "test_results": {...}, "errors": [] }
github/ - GitHub Integration
Purpose: Fetch issues and create PRs.
Key Classes:
- GitHubClient - Main client
- GitHub operations:
  - fetch_issue(id) - Get issue details
  - create_pr(head, base, title, body) - Create PR
  - add_comment(issue_id, text) - Add issue comment
  - list_issues() - Get issues
Usage:
```python
github = GitHubClient(token="ghp_...")
issue = await github.fetch_issue(42)
pr = await github.create_pr(
    head="voria-fix-42",
    base="main",
    title="Fix issue #42",
    body="Automatic fix by voria",
)
```
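The remaining operations follow the same pattern; for instance, listing issues and leaving a status comment on the one just fixed (the comment text is only an example):

```python
# Enumerate open issues, then report back on the fixed one
issues = await github.list_issues()
await github.add_comment(42, "voria opened an automatic fix PR for this issue.")
```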
token_manager/ - Cost Tracking
Purpose: Track LLM spending and enforce budgets.
Key Classes:
- TokenManager - Main tracker
- TokenBudget - Per-provider budgets
Usage:
```python
manager = TokenManager()

# Log usage
manager.log_usage(
    provider="openai",
    tokens_used=1000,
    cost=0.05,
)

# Check budget
if not manager.within_budget("openai"):
    raise BudgetExceededError()

# Get stats
stats = manager.get_stats()  # Total cost today, etc.
```
Budget Defaults:
```
modal:  $0.00/hr (free until Apr 30)
openai: $5.00/hr
gemini: $1.00/hr
claude: $3.00/hr
```
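A typical pattern is to guard each provider call with a budget check so a runaway loop can't overspend. A sketch built from the TokenManager API shown above; the wrapper function, the cost placeholder, and the inline exception definition are assumptions for a self-contained example:

```python
class BudgetExceededError(Exception):
    """Defined here only to keep the sketch self-contained."""

async def call_with_budget(manager, provider, provider_name, prompt):
    # Refuse the call if this provider's hourly budget is exhausted
    if not manager.within_budget(provider_name):
        raise BudgetExceededError(f"{provider_name} budget exhausted")
    response = await provider.call_llm(prompt)
    manager.log_usage(
        provider=provider_name,
        tokens_used=response.token_usage["used"],
        cost=0.0,  # cost computation is provider-specific; placeholder only
    )
    return response
```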
setup/ - Configuration
Purpose: Interactive provider setup.
Key Classes:
- ProviderSetup - Configuration manager
Usage:
```python
setup = ProviderSetup()

# Interactive flow
config = await setup.setup_provider()
# → Choose provider
# → Enter API key
# → Select model
# → Save to ~/.voria/providers.json

# Get saved config
cfg = setup.get_provider_config("openai")

# List configured providers
providers = setup.list_configured()
```
Rust Core Modules
main.rs - Entry Point
Responsibilities:
- Parse CLI arguments
- Initialize logging
- Dispatch to subcommands
- Exit code handling
Key Functions:
```rust
async fn main() -> Result<()>
async fn handle_plan(issue_id: u32) -> Result<()>
async fn handle_issue(issue_id: u32) -> Result<()>
async fn handle_apply(plan_id: &str) -> Result<()>
```
cli/mod.rs - Command Dispatch
Responsibilities:
- Parse subcommands (plan, issue, apply)
- Validate arguments
- Route to handlers
Subcommands:
- plan <issue_id> - Plan a fix
- issue <issue_id> - Full automation
- apply <plan_id> - Apply saved plan
ipc/mod.rs - NDJSON Protocol
Responsibilities:
- Spawn Python subprocess
- Send NDJSON requests
- Receive NDJSON responses
- Timeout detection
Key Structs:
```rust
pub struct ProcessManager {
    child: Child,
    stdin: ChildStdin,
    stdout: BufReader<ChildStdout>,
}

impl ProcessManager {
    async fn send_request(&mut self, req: &Value) -> Result<()>
    async fn read_response(&mut self) -> Result<Value>
    async fn with_timeout(&mut self, req: &Value, secs: u64) -> Result<Value>
}
```
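On the other end of the pipe, the Python process reads one JSON object per line from stdin and writes one per line to stdout. A minimal sketch of that side of the protocol; the message fields shown are assumptions, as the real schema lives in the Python entry point:

```python
import json
import sys

# Hypothetical NDJSON request loop: one JSON object per line, in and out
for line in sys.stdin:
    request = json.loads(line)
    response = {"id": request.get("id"), "result": "ok"}  # dispatch would go here
    sys.stdout.write(json.dumps(response) + "\n")
    sys.stdout.flush()  # flush each line so the Rust side isn't blocked by buffering
```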
orchestrator/mod.rs - Coordination
Responsibilities:
- Coordinate Rust-Python workflow
- Handle multi-step commands
- Error recovery
config/mod.rs - Configuration
Responsibilities:
- Load config files
- Override with CLI flags
- Merge environment variables
ui/mod.rs - Terminal UI
Responsibilities:
- Colored output (Blue/Green/Red)
- Progress display
- Error formatting
Key Functions:
```rust
fn print_info(msg: &str)     // Blue [i]
fn print_success(msg: &str)  // Green [✓]
fn print_error(msg: &str)    // Red [✗]
fn print_warning(msg: &str)  // Yellow [!]
```
Plugin Architecture
Language Plugins
Location: python/voria/plugins/
Plugin Structure:
```python
class PythonPlugin:
    async def parse_code(self, source: str) -> AST
    async def run_tests(self, path: str) -> TestResult
    async def format_code(self, source: str) -> str
```
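How a plugin gets selected for a given file isn't spelled out here. One plausible scheme, shown purely as an assumption, is dispatch by file extension; the registry and the JavaScriptPlugin name are hypothetical:

```python
# Hypothetical registry mapping file extensions to plugin instances
PLUGINS = {
    ".py": PythonPlugin(),
    ".js": JavaScriptPlugin(),  # assumed name for the Jest-backed plugin
}

def plugin_for(path: str):
    # Pick the plugin whose extension matches the file being processed
    for ext, plugin in PLUGINS.items():
        if path.endswith(ext):
            return plugin
    raise ValueError(f"No plugin registered for {path}")
```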
Supported Languages:
- Python (pytest)
- JavaScript/TypeScript (Jest)
- (Extensible)
Data Structures
LLM Response
```python
class LLMResponse:
    content: str              # Response text
    tokens_used: int          # Tokens consumed
    finish_reason: str        # "stop" | "length" | "error"
    metadata: Dict[str, Any]  # Provider-specific
```
Patch Hunk
```python
@dataclass
class PatchHunk:
    old_file: str    # File path (before)
    new_file: str    # File path (after)
    old_start: int   # Starting line (1-indexed)
    old_count: int   # Num lines before
    new_start: int   # Starting line after
    new_count: int   # Num lines after
    lines: List[str] # Diff lines
```
Test Result
```python
@dataclass
class TestResult:
    name: str                  # Test identifier
    status: TestStatus         # PASSED | FAILED | SKIPPED | ERROR
    duration: float            # Seconds
    message: str               # Fail message (if any)
    error_type: Optional[str]  # Exception type
    stacktrace: Optional[str]  # Full trace
```
Usage Patterns
Using an LLM Provider
```python
# 1. Create provider
provider = LLMProviderFactory.create("openai", api_key, "gpt-5.4")

# 2. Call methods
plan = await provider.plan(issue_description)
patch = await provider.generate_patch(context, plan)

# 3. Check tokens
print(f"Used: {plan.tokens_used} tokens")
```
Using the Agent Loop
```python
# 1. Create loop
loop = AgentLoop("openai", api_key, repo_path="/repo")

# 2. Initialize
await loop.initialize("gpt-5.4")

# 3. Run
result = await loop.run(issue_id=42, issue_description="...")

# 4. Check result
if result["status"] == "success":
    print("Issue fixed!")
else:
    print(f"Failed: {result['errors']}")
```
Using Patching & Testing
```python
# 1. Create patcher
patcher = CodePatcher("/repo")

# 2. Apply patch
result = await patcher.apply_patch(diff_text)

# 3. Create executor
executor = TestExecutor("/repo")

# 4. Run tests
test_result = await executor.run_tests()

# 5. Check results
if test_result.passed == test_result.total:
    print("All tests pass!")
```
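Since CodePatcher keeps backups, the natural extension of this pattern is rolling back when the suite regresses. A sketch combining the two modules; the file_path and backup_path attributes on apply_patch's result are assumptions about its return value:

```python
# Hypothetical: undo the patch if the test run reports failures
result = await patcher.apply_patch(diff_text)
test_result = await executor.run_tests()
if test_result.failed > 0:
    # backup_path plumbing assumed; see rollback_patch for the real arguments
    await patcher.rollback_patch(result.file_path, result.backup_path)
```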
Dependencies
Python
- httpx 0.24.0 - Async HTTP
- aiofiles 23.0 - Async file I/O
- pytest - Testing
Rust
- tokio 1.51 - Async runtime
- serde_json - JSON
- colored - Terminal colors
- clap - CLI args
See Also:
- ARCHITECTURE.md - System design
- DESIGN_DECISIONS.md - Why these choices