Founding Software Engineer, Data Infrastructure
Airweave
We're looking for a founding engineer to own Airweave's data and infrastructure layer: the systems that make our distributed search and data pipelines scalable, reliable, and observable.
At Airweave, you'll build and operate the platform that thousands of AI agents depend on. That means distributed sync pipelines pulling data from dozens of sources, vector databases powering LLM search, and the orchestration layer that keeps it all running. You'll work closely with the product team, but your focus is on the foundation: making sure data flows reliably at scale, LLM inference stays fast, and the whole system holds up under real production load.
This is early-stage infrastructure work. The architecture is still being shaped, and your decisions will define how we scale.
What you'll work on
- Design and scale distributed data pipelines that sync hundreds of millions of documents from dozens of sources into advanced search indexes
- Build and improve Temporal workflows for parallel sync orchestration: retries, backpressure, and failure recovery across workers
- Own our Kubernetes deployments with Helm charts: autoscaling and resource management for bursty search, sync, and LLM workloads
- Scale PostgreSQL for high throughput: connection pooling, read replicas, and partitioning (we ask a lot from this database)
- Manage vector database (Vespa) infrastructure: sharding, replication, and backup strategies for large-scale agentic search
- Orchestrate and optimize LLM inference pipelines: batching, caching, provider failover
- Build monitoring and alerting with Prometheus, Grafana, and custom instrumentation for cluster health
- Manage our infrastructure as code with Terraform
You might be a fit if
- You've built or operated data pipelines at scale: ETL, event processing, streaming, or sync infrastructure
- You're comfortable with Kubernetes, Terraform, and infrastructure as code
- You've scaled databases and understand the tradeoffs (pooling, replication, sharding)
- You have experience with distributed systems: workflow orchestration, message queues, eventual consistency
- You're interested in LLM infrastructure: embeddings, vector search, inference optimization
- You like building reliable systems and have opinions about observability
- You're drawn to early-stage environments where you own the whole problem
Bonus points:
- Experience with Temporal, Airflow, or similar workflow engines
- Background in scaling search (Elastic, Qdrant, Pinecone, Weaviate)
- Familiarity with LLM inference
What we offer
- Customers including one of the world's leading AI labs
- Competitive salary ($120K–$160K) with meaningful equity (0.25%–1.00%)
- Health, dental, and vision coverage
- Work in person in San Francisco with a highly skilled, technical team
- Direct impact on architecture and infrastructure decisions from the first week