Every product we build starts here.
So could yours.
Model manager with GPU metrics
How it works
From zero to running model in four steps
Launch Hermes
Start the app. Hardware is detected and capacity calculated automatically.
Browse models
See compatible models with memory requirements and use cases, filtered for your hardware.
Start a model
One click to download and launch. Configure context length and hardware allocation.
Connect your tools
Hermes announces itself on your network. All Athena AI tools discover it automatically.
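The discovery step above can be sketched from the client side. This is a hypothetical illustration, not the actual Hermes protocol: it assumes Hermes broadcasts a small JSON beacon over UDP, and the port and field names are invented for the example (real discovery might use mDNS/DNS-SD instead).

```python
# Hypothetical sketch of how a tool could discover a Hermes instance.
# The UDP beacon format, port, and field names are assumptions for
# illustration; the real announcement protocol may differ.
import json
import socket

DISCOVERY_PORT = 52525  # invented port for this example


def parse_beacon(payload: bytes):
    """Return (host, port) if the payload looks like a Hermes beacon."""
    try:
        msg = json.loads(payload)
    except ValueError:
        return None
    if msg.get("service") != "hermes":
        return None
    return msg["host"], msg["port"]


def wait_for_hermes(timeout: float = 5.0):
    """Listen for one announcement broadcast on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.bind(("", DISCOVERY_PORT))
        payload, _addr = sock.recvfrom(4096)
        return parse_beacon(payload)
```

A tool would call `wait_for_hermes()` once at startup and then talk to the returned host and port directly.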
Capabilities
Everything you need to manage local AI
Browse models, manage instances, and monitor hardware from your browser.
Model registry
39+ models with compatibility info, memory requirements, and use cases. Filtered for your hardware.
Start, stop, and monitor models
Control model servers from the dashboard or API. Each runs independently.
Real-time GPU metrics
Live utilisation, temperature, and memory usage for every GPU, without leaving the dashboard.
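On NVIDIA hardware, metrics like these are commonly collected by polling `nvidia-smi`; whether Hermes does this internally is an assumption, but a minimal sketch of the approach looks like this:

```python
# Sketch: polling nvidia-smi for utilisation, temperature, and memory.
# That Hermes collects metrics this way is an assumption for illustration.
import subprocess

QUERY = ["nvidia-smi",
         "--query-gpu=utilization.gpu,temperature.gpu,memory.used",
         "--format=csv,noheader"]


def parse_gpu_line(line: str) -> dict:
    """Parse one CSV row such as '42 %, 61, 5120 MiB'."""
    util, temp, mem = [field.strip() for field in line.split(",")]
    return {
        "utilization_pct": int(util.split()[0]),
        "temperature_c": int(temp),
        "memory_used_mib": int(mem.split()[0]),
    }


def read_gpu_metrics() -> list[dict]:
    """One row per GPU; requires nvidia-smi on the PATH."""
    out = subprocess.check_output(QUERY, text=True)
    return [parse_gpu_line(row) for row in out.strip().splitlines()]
```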
Automatic network discovery
Running models announce themselves on your network. Every Athena AI tool connects automatically.
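To make the idea of "announcing" concrete, here is a hypothetical sketch of the server side. The beacon format and port are invented for the example; real deployments often use mDNS/DNS-SD for this kind of zero-configuration discovery.

```python
# Hypothetical sketch of the announcement side of discovery. The beacon
# format and port are invented, not the real Hermes protocol.
import json
import socket

ANNOUNCE_PORT = 52525  # invented port for this example


def make_beacon(name: str, host: str, port: int) -> bytes:
    """JSON payload a running model server might broadcast."""
    return json.dumps({"service": "hermes", "name": name,
                       "host": host, "port": port}).encode()


def announce_once(beacon: bytes) -> None:
    """Broadcast the beacon once; a server would repeat this on a timer."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(beacon, ("255.255.255.255", ANNOUNCE_PORT))
```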
Simple authentication
Shared password with secure hashing. Straightforward security for a trusted network.
REST API
Full programmatic access. Start models, check status, and read metrics from any language.
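As an illustration of "from any language", here is how a start-model call might look from Python's standard library alone. The base URL, endpoint path, and JSON fields are assumptions for the example, not the documented Hermes API.

```python
# Hypothetical REST usage. The base URL, endpoint path, and JSON fields
# are assumptions for illustration, not the documented Hermes API.
import json
import urllib.request

BASE_URL = "http://hermes.local:8080"  # assumed address


def start_model(model_id: str, context_length: int = 4096) -> urllib.request.Request:
    """Build (but do not send) a request that starts a model instance."""
    return urllib.request.Request(
        f"{BASE_URL}/api/models/{model_id}/start",
        data=json.dumps({"context_length": context_length}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it would be: urllib.request.urlopen(start_model("llama-3-8b"))
```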
Hermes is currently in development
We are actively building Hermes. Book a demo to learn more about our product roadmap and early access opportunities.
Have feedback on Hermes?