# Macha Autonomous System - Configuration Examples

Macha is a standalone NixOS flake that can be imported into other systems. This provides:

- Independent versioning
- Easier reusability
- Cleaner separation of concerns
- A better development workflow

It includes:

- The complete autonomous system code
- A NixOS module with full configuration options
- A queue-based architecture with a priority system
- Chunked map-reduce for large outputs
- A ChromaDB knowledge base
- A tool-calling system
- Multi-host SSH management
- Gotify notification integration

All capabilities from DESIGN.md are preserved.
## Basic Configurations

### Conservative (Recommended for Start)

```nix
services.macha-autonomous = {
  enable = true;
  autonomyLevel = "suggest"; # Require approval for all actions
  checkInterval = 300;       # Check every 5 minutes
  model = "llama3.1:70b";    # Most capable model
};
```
### Moderate Autonomy

```nix
services.macha-autonomous = {
  enable = true;
  autonomyLevel = "auto-safe"; # Auto-fix safe issues
  checkInterval = 180;         # Check every 3 minutes
  model = "llama3.1:70b";
};
```
### High Autonomy (Experimental)

```nix
services.macha-autonomous = {
  enable = true;
  autonomyLevel = "auto-full"; # Full autonomy
  checkInterval = 300;
  model = "llama3.1:70b";
};
```
### Monitoring Only

```nix
services.macha-autonomous = {
  enable = true;
  autonomyLevel = "observe"; # No actions, just watch
  checkInterval = 60;        # Check every minute
  model = "qwen3:8b-fp16";   # A lighter model is fine for observation
};
```
## Advanced Scenarios

### Using a Smaller Model (Faster, Less Capable)

```nix
services.macha-autonomous = {
  enable = true;
  autonomyLevel = "auto-safe";
  checkInterval = 120;
  model = "qwen3:8b-fp16"; # Faster inference, less reasoning depth
  # or:
  # model = "llama3.1:8b"; # Also good for simple tasks
};
```
### High-Frequency Monitoring

```nix
services.macha-autonomous = {
  enable = true;
  autonomyLevel = "auto-safe";
  checkInterval = 60; # Check every minute
  model = "qwen3:4b-instruct-2507-fp16"; # Lightweight model
};
```
### Remote Ollama (if running Ollama elsewhere)

```nix
services.macha-autonomous = {
  enable = true;
  autonomyLevel = "suggest";
  checkInterval = 300;
  ollamaHost = "http://192.168.1.100:11434"; # Remote Ollama instance
  model = "llama3.1:70b";
};
```
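Before enabling the service against a remote instance, it's worth confirming the Ollama API is reachable and the configured model has already been pulled there (the IP below is the example address from the config above):

```bash
# List the models available on the remote Ollama instance;
# the model named in the config should appear in the output.
curl -s http://192.168.1.100:11434/api/tags | jq '.models[].name'
```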
## Manual Testing Workflow

1. Test with a one-shot run:

   ```bash
   # Run once in observe mode
   macha-check

   # Review what it detected
   tail -n 1 /var/lib/macha-autonomous/decisions.jsonl | jq .
   ```

2. Enable in suggest mode:

   ```nix
   services.macha-autonomous = {
     enable = true;
     autonomyLevel = "suggest";
     checkInterval = 300;
     model = "llama3.1:70b";
   };
   ```

3. Rebuild and start:

   ```bash
   sudo nixos-rebuild switch --flake .#macha
   sudo systemctl status macha-autonomous
   ```

4. Monitor for a while:

   ```bash
   # Watch the logs
   journalctl -u macha-autonomous -f

   # Or use the helper
   macha-logs service
   ```

5. Review proposed actions:

   ```bash
   macha-approve list
   ```

6. Graduate to auto-safe when comfortable:

   ```nix
   services.macha-autonomous.autonomyLevel = "auto-safe";
   ```
## Scenario-Based Examples

### Media Server (let it auto-restart services)

```nix
services.macha-autonomous = {
  enable = true;
  autonomyLevel = "auto-safe"; # Auto-restart failed arr apps
  checkInterval = 180;
  model = "llama3.1:70b";
};
```
### Development Machine (observe only, you keep control)

```nix
services.macha-autonomous = {
  enable = true;
  autonomyLevel = "observe";
  checkInterval = 600;   # Check less frequently
  model = "llama3.1:8b"; # Lighter model
};
```
### Critical Production (suggest only, manual approval)

```nix
services.macha-autonomous = {
  enable = true;
  autonomyLevel = "suggest";
  checkInterval = 120;    # More frequent monitoring
  model = "llama3.1:70b"; # Best reasoning
};
```
### Experimental/Learning (full autonomy)

```nix
services.macha-autonomous = {
  enable = true;
  autonomyLevel = "auto-full";
  checkInterval = 300;
  model = "llama3.1:70b";
};
```
## Customizing Behavior

The config file lives at `/etc/macha-autonomous/config.json` (auto-generated from the NixOS configuration).
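To see what the module actually generated, inspect the file directly. The key name in the second command is an assumption about the generated schema; check your own output first:

```bash
# Pretty-print the generated configuration
jq . /etc/macha-autonomous/config.json

# Pull out a single setting, e.g. the autonomy level
# (assuming the module writes it under this key)
jq '.autonomyLevel' /etc/macha-autonomous/config.json
```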
To modify the AI prompts, edit the Python files in `systems/macha-configs/autonomous/`:

- `agent.py` - AI analysis and decision prompts
- `monitor.py` - what data to collect
- `executor.py` - safety rules and action execution
- `orchestrator.py` - main control flow
After editing, rebuild:

```bash
sudo nixos-rebuild switch --flake .#macha
sudo systemctl restart macha-autonomous
```
## Integration with Other Services

### Example: Auto-restart specific services

The system automatically detects failed services and proposes restarting them.

### Example: Disk cleanup when space is low

The monitor detects low disk space, the AI proposes a cleanup, and the executor runs `nix-collect-garbage`, as sketched below.
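A hand-runnable sketch of the first link in that chain. The 90% threshold and the 7-day retention are illustrative values, not the module's actual settings:

```bash
# Check root filesystem usage the way the monitor might.
usage=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')

if [ "$usage" -ge 90 ]; then
  echo "Disk ${usage}% full; a cleanup action would be proposed, e.g.:"
  echo "  sudo nix-collect-garbage --delete-older-than 7d"
fi
```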
### Example: Log analysis

The AI analyzes recent error logs and can propose fixes based on error patterns.
## Debugging

See what the monitor sees:

```bash
sudo -u macha-autonomous python3 /nix/store/.../monitor.py
```

Test the AI agent:

```bash
sudo -u macha-autonomous python3 /nix/store/.../agent.py test
```

View all snapshots:

```bash
ls -lh /var/lib/macha-autonomous/snapshot_*.json

# Pretty-print the most recent snapshot
jq . "$(ls -t /var/lib/macha-autonomous/snapshot_*.json | head -n 1)"
```
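To see what changed between the two most recent checks, diffing key-sorted JSON works well and makes no assumptions about the snapshot schema:

```bash
# Grab the two newest snapshots and diff their normalized JSON.
latest=$(ls -t /var/lib/macha-autonomous/snapshot_*.json | head -n 1)
previous=$(ls -t /var/lib/macha-autonomous/snapshot_*.json | sed -n 2p)

diff <(jq -S . "$previous") <(jq -S . "$latest")
```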
Check the approval queue:

```bash
jq . /var/lib/macha-autonomous/approval_queue.json
```
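If the queue grows long, jq can narrow it down. The `.status` field name here is an assumption about the queue schema, so inspect an entry first and adjust to match:

```bash
# Show only entries still awaiting approval
# (field names are illustrative; confirm them against the file)
jq '.[] | select(.status == "pending")' /var/lib/macha-autonomous/approval_queue.json
```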
## Performance Tuning

**Model Choice Impact:**
| Model | Speed | Capability | RAM Usage | Best For |
|---|---|---|---|---|
| llama3.1:70b | Slow (~30s) | Excellent | ~40GB | Complex reasoning |
| llama3.1:8b | Fast (~3s) | Good | ~5GB | General use |
| qwen3:8b-fp16 | Fast (~2s) | Good | ~16GB | General use |
| qwen3:4b | Very Fast (~1s) | Moderate | ~8GB | Simple tasks |
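Whichever row you pick, make sure the model is actually present on the Ollama host before enabling the service:

```bash
# Pull the chosen model ahead of time so the first check
# doesn't stall on a multi-gigabyte download.
ollama pull llama3.1:70b

# Confirm it is available
ollama list
```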
**Check Interval Impact:**
- 60s: High responsiveness, more resource usage
- 300s (default): Good balance
- 600s: Low overhead, slower detection
**Memory Usage:**
- Monitor: ~50MB
- Agent (per query): Depends on model (see above)
- Executor: ~30MB
- Orchestrator: ~20MB
Total continuous overhead is ~100MB, plus model memory while inference is running (see the table above).
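These figures can be checked against the live service using systemd's own accounting:

```bash
# Current memory footprint of the service cgroup
systemctl show macha-autonomous -p MemoryCurrent

# Or watch it interactively alongside other units
systemd-cgtop
```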
## Security Considerations

The autonomous user has sudo access to:

- `systemctl restart`/`systemctl status` - restart services and check their state
- `journalctl` - read logs
- `nix-collect-garbage` - clean up the Nix store
It CANNOT:
- Modify arbitrary files
- Access user home directories (`ProtectHome=true`)
- Disable protected services (SSH, networking)
- Make changes without logging
**Audit trail:** all actions are logged in `/var/lib/macha-autonomous/actions.jsonl`.
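Since the log is JSON Lines, jq makes review straightforward. The `.action` field name in the second command is an assumption about the entry schema:

```bash
# Show the last five actions taken
tail -n 5 /var/lib/macha-autonomous/actions.jsonl | jq .

# Filter by action type (the field name is illustrative;
# check a real entry first to see the actual schema)
jq 'select(.action == "restart_service")' /var/lib/macha-autonomous/actions.jsonl
```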
**To revoke access:** set `enable = false` and rebuild, or stop the service.
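To stop it immediately, without waiting for a rebuild:

```bash
# Stop the service right away; the enable = false option
# change makes the removal permanent on the next rebuild.
sudo systemctl stop macha-autonomous
```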
## Future: MCP Integration

You already have MCP servers installed:

- `mcp-nixos` - NixOS-specific tools
- `gitea-mcp-server` - Git integration
- `emcee` - general MCP orchestration
Future versions could integrate these for:
- Better NixOS config manipulation
- Git-based config versioning
- More sophisticated tooling
Stay tuned!