Building a Live Stats Page for My Homelab
I wanted a page on my blog showing live stats from my homelab - uptime, storage health, network traffic per VLAN, DNS queries. The blog runs on Cloudflare Workers (Astro with the Cloudflare adapter), and the homelab has Prometheus with all this data. Here’s how I connected them.
The Problem
The homelab runs on a NixOS server behind a dynamic IP. The blog is hosted on Cloudflare Workers. I needed a way to:
- Expose a subset of Prometheus metrics publicly (without exposing Prometheus itself)
- Authenticate requests so random scrapers can’t hammer the API
- Handle the dynamic IP situation
Architecture
The key pieces:
- homed: A Go service meant to evolve into a home daemon with custom logic - currently just queries Prometheus
- SigV4 auth: Requests to homed are signed with AWS-style signatures (using aws4fetch in the worker)
- Public VM: A tiny VPS with a stable IP that forwards traffic over Tailscale to the homelab (sketched below)
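That last piece is what makes the dynamic IP a non-issue: the worker always talks to the VPS's stable address, and the VPS reaches the homelab by its Tailscale hostname, which never changes. In my setup that hop is a Caddy route, but conceptually it's just a reverse proxy over the tailnet - roughly this Go sketch, with an illustrative MagicDNS hostname:

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The homelab's Tailscale hostname stays stable even when the home IP
	// changes. "homelab.tailnet.ts.net" is illustrative, not my actual tailnet.
	origin, err := url.Parse("http://homelab.tailnet.ts.net:8081")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(origin)
	log.Fatal(http.ListenAndServe(":8080", proxy)) // TLS termination is Caddy's job in practice
}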
The Go Service
homed is minimal - it just queries Prometheus and formats the response:
func handleStats(cfg *Config) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		stats, err := queryPrometheus(cfg.PrometheusURL)
		if err != nil {
			http.Error(w, "failed to fetch stats", http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(stats)
	}
}
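queryPrometheus itself is just a call against Prometheus's instant-query HTTP API. The sketch below shows the shape; the "up" query and the flat map it returns are illustrative stand-ins for the real dashboard queries:

// Illustrative sketch of hitting /api/v1/query (uses net/http, net/url,
// encoding/json). The real service asks for uptime, pool health, and so on.
func queryPrometheus(promURL string) (map[string]string, error) {
	resp, err := http.Get(promURL + "/api/v1/query?query=" + url.QueryEscape("up"))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	// Prometheus wraps results as:
	// {"status":"success","data":{"result":[{"metric":{...},"value":[ts,"1"]}]}}
	var body struct {
		Data struct {
			Result []struct {
				Metric map[string]string `json:"metric"`
				Value  []any             `json:"value"` // [timestamp, "value"]
			} `json:"result"`
		} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return nil, err
	}

	stats := make(map[string]string)
	for _, r := range body.Data.Result {
		if len(r.Value) == 2 {
			if v, ok := r.Value[1].(string); ok {
				stats[r.Metric["instance"]] = v
			}
		}
	}
	return stats, nil
}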
The interesting part is the SigV4 middleware. I’m using allaboutapps/aws4 to validate signatures:
func SigV4Middleware(next http.Handler, creds Credentials, skipPaths []string) http.Handler {
signer := aws4.NewSignerWithStaticCredentials(
creds.AccessKeyID,
creds.SecretAccessKey(),
"",
)
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
for _, p := range skipPaths {
if strings.HasPrefix(r.URL.Path, p) {
next.ServeHTTP(w, r)
return
}
}
if _, err := signer.Validate(r); err != nil {
http.Error(w, "Forbidden", http.StatusForbidden)
return
}
next.ServeHTTP(w, r)
})
}
Health and metrics endpoints skip auth (Prometheus needs to scrape metrics, and the load balancer needs health checks).
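Wiring it together is the usual mux-plus-middleware pattern; roughly this, where loadConfig and the cfg fields are illustrative:

// Illustrative wiring: everything goes through the SigV4 middleware except
// the paths Prometheus and the health checker need to reach unsigned.
func main() {
	cfg := loadConfig() // hypothetical helper standing in for the YAML config loading
	mux := http.NewServeMux()
	mux.HandleFunc("/stats", handleStats(cfg))
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	handler := SigV4Middleware(mux, cfg.Credentials, []string{"/healthz", "/metrics"})
	log.Fatal(http.ListenAndServe(cfg.Listen, handler))
}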
Cloudflare Worker
The worker signs requests using aws4fetch and caches responses at the edge:
import { AwsClient } from 'aws4fetch';
const CACHE_TTL = 30; // seconds
export async function handleStats(request: Request, env: Env) {
// Check edge cache first
const cache = caches.default;
const cached = await cache.match(request);
if (cached) return cached;
// Sign and fetch from origin
const client = new AwsClient({
accessKeyId: env.HOMED_ACCESS_KEY,
secretAccessKey: env.HOMED_SECRET_KEY,
region: 'home',
service: 'api',
});
const response = await client.fetch('https://api.home.iodev.org/stats');
// Cache successful responses
if (response.ok) {
const headers = new Headers(response.headers);
headers.set('Cache-Control', `public, max-age=${CACHE_TTL}`);
const toCache = new Response(response.clone().body, { status: response.status, headers });
await cache.put(request, toCache);
}
return response;
}
Edge caching means all visitors hitting the same Cloudflare location share one cached response for 30 seconds, so the homelab sees at most one request per 30 seconds per location, regardless of traffic.
NixOS Integration
The service is defined declaratively:
{ config, pkgs, ... }:
let
homedPkg = pkgs.callPackage ../../../pkgs/homed { };
configFile = pkgs.writeText "homed-config.yaml" ''
listen: ":8081"
metrics: ":9199"
credentials:
access_key_id: "HOMED001"
secret_access_key_env: "HOMED_SECRET"
'';
in {
sops.secrets."homed/secret" = { };
systemd.services.homed = {
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = "${homedPkg}/bin/homed -config ${configFile}";
DynamicUser = true;
LoadCredential = "secret:${config.sops.secrets."homed/secret".path}";
};
};
}
The secret key is managed with sops-nix and loaded via systemd’s LoadCredential.
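One detail that's easy to miss: LoadCredential doesn't set an environment variable. systemd exposes the secret as a file under $CREDENTIALS_DIRECTORY inside the unit's sandbox, so the service reads it from disk at startup - roughly this sketch (helper name illustrative; uses os, errors, path/filepath, strings):

// LoadCredential=secret:<path> surfaces the secret at
// $CREDENTIALS_DIRECTORY/secret for the lifetime of the unit.
func loadSecret() (string, error) {
	dir := os.Getenv("CREDENTIALS_DIRECTORY")
	if dir == "" {
		return "", errors.New("CREDENTIALS_DIRECTORY not set (not running under systemd?)")
	}
	b, err := os.ReadFile(filepath.Join(dir, "secret"))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}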
The Frontend
The Svelte component just polls every 30 seconds - caching is handled at the edge:
<script lang="ts">
  import { onMount } from 'svelte';

  let stats: Record<string, unknown> | null = null;
  let refreshTimer: ReturnType<typeof setInterval>;

  async function fetchStats() {
    const response = await fetch('/api/homed/stats');
    stats = await response.json();
  }

  onMount(() => {
    fetchStats();
    refreshTimer = setInterval(fetchStats, 30000);
    return () => clearInterval(refreshTimer);
  });
</script>
The stats render in a network topology diagram and cards showing CPU, storage, DNS queries, and per-VLAN traffic.
Result
The /homelab page now shows live data: uptime, ZFS pool health, traffic per VLAN, DNS cache hit rates, connected clients. It refreshes every 30 seconds, and the edge cache means even a traffic spike won’t overwhelm the homelab - it just sees steady, predictable load.
Once you have a proper Nix flake set up, everything converges nicely. The Go service, systemd unit, secrets, and Caddy routes are all defined in the same repo. Deploying to the homelab is one nixos-rebuild switch command. The blog updates via GitHub push. SigV4 is probably overkill for this use case, but I wanted to learn the signing flow and might reuse it for other internal APIs later.