<aside>
✋🏽
Full-time position, remote work.
</aside>
The Role
We’re hiring a Python/ML engineer to own SN21 validator infrastructure and SN21/SN24 evaluation. You’ll fix scoring, ship dashboards, run ablations, and keep systems fast, observable, and reliable. Expect high autonomy and a fast-shipping culture.
What you’ll do
- Own SN21 validator/scoring infrastructure: scoring flow, API endpoints, subnet codebase.
- Build and maintain dashboards (SN21.ai, leaderboards, visualizations).
- Raise reliability and observability: alerts, tracing, rollbacks, CI/CD; uphold SLOs.
- Design and run evals/ablations and offline→online correlation studies for SN21/SN24.
- Curate datasets; build data loaders/pipelines; prototype model/routing improvements.
- Define robust metrics (e.g., WER/CER, BLEU/ROUGE, CLAP, CIDEr/SPICE, R@k, temporal grounding accuracy) and instrument them end-to-end.
- Track success: uptime, p95 latency, deploy frequency, MTTR, scoring correctness, eval stability, and correlation to live outcomes.
Our stack
- Backend/Infra: Python, FastAPI, Celery/Redis, PostgreSQL, Docker, CI/CD, Grafana/Prometheus, GCP/AWS.
- ML: PyTorch/Lightning, Hugging Face, Weights & Biases.
- Plus (nice): Bittensor/TAO rails, Triton/CUDA kernels, “full-stack-lite” for dashboards.
What makes you a fit