What We’re Building
Dispatcher is an a16z-backed startup creating the Agentic Interface for Drones and Robotics—a command system where an operator can say “secure this perimeter and send an alert if there are intruders” and robotic systems determine and execute the best approach.
We’re bridging an ecosystem of APIs, SDKs, and services, from LLM reasoning and video streaming to mission logging and authentication. Under the hood, we’re building a distributed platform that connects language models, robotic hardware, and human operators through real-time telemetry and WebRTC video.
As a founding engineer, you’ll be designing the infrastructure that powers those interactions, from authentication flows and flight-log ingestion to CI/CD, database design, and session replay.
Responsibilities
• Build and maintain core backend services for the Dispatcher Control Platform, including APIs, logging pipelines, and mission session handling
• Implement authentication, role-based access, and secure data management for operators and enterprises
• Design and optimize real-time streaming infrastructure (WebRTC, RTMP, or WebSocket) to support live telemetry and video
• Develop and test integrations with external APIs (OpenAI, Vertex AI, LiveKit, DJI SDK, Pipecat)
• Design scalable storage solutions for structured and unstructured robotics data (telemetry, chat, mission reports, images)
• Build and maintain CI/CD pipelines, automated tests, and observability tools
• Collaborate with frontend engineers to shape user-facing dashboards and chat interfaces that visualize real-world data
• Help define system architecture as we scale from prototype to production
Minimum Qualifications
• 2+ years of hands-on experience building production-grade full-stack or backend systems
• Understanding of relational databases (Postgres preferred) and data modeling
• Experience with cloud platforms (GCP or AWS), especially serverless, IAM, storage, and CI/CD setup
• Familiarity with WebSockets, WebRTC, or other real-time data streaming paradigms
• Strong skills integrating complex third-party APIs and authentication flows
• Working knowledge of frontend systems (React, TypeScript) to collaborate effectively across the stack
• Excellent debugging, testing, and documentation skills
Preferred Experience
• Robotics or drone-related systems, especially integrating SDKs or controlling hardware
• Video streaming and telemetry ingestion (FFmpeg, RTMP, SRT, etc.)
• Experience with LLMs or “agentic” architectures (OpenAI function calling, Vertex AI, LangChain)
• Hands-on work with CI/CD pipelines (GitHub Actions, Cloud Build, etc.)
• Side projects that show curiosity and drive—the kind you can’t help but build
Why Join
You’ll be part of the team turning natural language into physical action and shaping the software layer that allows AI to operate in the real world.
This is deep, technical work at the edge of what’s possible, with equal parts LLM orchestration, robotics integration, and interface development.
If you want to help invent how humans interact with machines—not just build another SaaS app—you’ll fit right in.
