The foundation of Athena AI

Every product we build starts here.
So could yours.


Model manager with GPU metrics

How it works

From zero to running model in four steps

01

Launch Hermes

Start the app. Hardware is detected and capacity calculated automatically.

02

Browse models

See compatible models with memory requirements and use cases, filtered for your hardware.

03

Start a model

One click to download and launch. Configure context length and hardware allocation.

04

Connect your tools

Hermes announces itself on your network. All Athena AI tools discover it automatically.

Capabilities

Everything you need to manage local AI

Browse models, manage instances, and monitor hardware from your browser.

Model registry

39+ models with compatibility info, memory requirements, and use cases. Filtered for your hardware.

Start, stop, and monitor models

Control model servers from the dashboard or API. Each runs independently.

Real-time GPU metrics

Live utilisation, temperature, and memory usage, updated continuously while models run.

Automatic network discovery

Running models announce themselves on your network. Every Athena AI tool connects automatically.
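In practice, "announcing on the network" means a running model server broadcasts a small discovery message that client tools listen for. The sketch below shows the general idea over plain UDP; the port number, JSON payload shape, and use of the loopback address are illustrative assumptions, not Hermes's actual protocol.

```python
# Minimal sketch of announce/discover over UDP.
# DISCOVERY_ADDR, the payload fields, and loopback-only delivery are
# assumptions for illustration -- not Hermes's actual protocol.
import json
import socket

DISCOVERY_ADDR = ("127.0.0.1", 47800)  # hypothetical discovery port

def make_listener(timeout: float = 2.0) -> socket.socket:
    """Socket a client tool would use to hear announcements."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(DISCOVERY_ADDR)
    sock.settimeout(timeout)
    return sock

def announce(model_name: str, api_port: int) -> None:
    """A running model server sends one small JSON datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        payload = json.dumps({"service": model_name, "port": api_port})
        sock.sendto(payload.encode(), DISCOVERY_ADDR)

def receive_announcement(listener: socket.socket) -> dict:
    """Block until one announcement arrives, then decode it."""
    data, _addr = listener.recvfrom(1024)
    return json.loads(data)
```

A real deployment would broadcast on the local subnet rather than loopback, but the announce-then-listen pattern is the same.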

Simple authentication

Shared password with secure hashing. Straightforward security for a trusted network.
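"Shared password with secure hashing" generally means the server stores only a salted hash and compares candidates in constant time. Here is a minimal sketch using PBKDF2 from Python's standard library; the algorithm and iteration count are assumptions, since the page does not specify Hermes's actual scheme.

```python
# Sketch of salted password hashing and verification.
# PBKDF2-SHA256 and the iteration count are illustrative choices;
# the page does not say which scheme Hermes actually uses.
import hashlib
import hmac
import os

ITERATIONS = 200_000

def hash_password(password: str, salt: bytes = b"") -> tuple[bytes, bytes]:
    """Return (salt, digest); store both, never the plaintext."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Constant-time comparison against the stored digest."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```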

REST API

Full programmatic access. Start models, check status, and read metrics from any language.
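With full programmatic access, a thin client is a few lines in any language. Below is a hypothetical Python sketch using only the standard library; the endpoint paths, bearer-style auth header, and request body fields are illustrative guesses, not Hermes's documented API.

```python
# Hypothetical client sketch. Endpoint paths, the auth header, and
# body fields are assumptions -- Hermes's real API may differ.
import json
import urllib.request

class HermesClient:
    def __init__(self, base_url: str, password: str):
        self.base_url = base_url.rstrip("/")
        self.password = password

    def _request(self, method, path, body=None) -> urllib.request.Request:
        """Build an authenticated JSON request without sending it."""
        data = json.dumps(body).encode() if body is not None else None
        return urllib.request.Request(
            self.base_url + path,
            data=data,
            method=method,
            headers={
                "Authorization": f"Bearer {self.password}",
                "Content-Type": "application/json",
            },
        )

    def start_model(self, model_id: str, context_length: int = 4096) -> dict:
        """POST a start request and return the decoded JSON response."""
        req = self._request("POST", f"/api/models/{model_id}/start",
                            {"context_length": context_length})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def list_models(self) -> dict:
        """GET the model registry as JSON."""
        with urllib.request.urlopen(self._request("GET", "/api/models")) as resp:
            return json.load(resp)
```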

39+
Models in registry
<2s
Network discovery time
0
Cloud dependencies
1-click
Model launch
Coming Soon

Hermes is currently in development

We are actively building Hermes. Book a demo to learn more about our product roadmap and early access opportunities.

Have feedback on Hermes?