Technical Architect
Mode Analytics
About the Role
We are looking for a Staff Engineer & Technical Architect to serve as the foundational leader of our Agentic Platform Team. This is a high-impact "player-coach" role designed for a seasoned expert who can navigate the ambiguity of the evolving AI landscape while maintaining a rigorous focus on production-grade engineering.
As a technical North Star, you will bridge the gap between high-level product vision and low-level system execution. You will be responsible for the "brain" and "nervous system" of our platform—architecting how AI agents reason, remember, and securely access enterprise data. If you have a decade of experience building distributed systems and are now obsessed with the intricacies of agentic workflows and RAG at scale, this is your next challenge.
What You’ll Do
- Architect Agentic Infrastructure: Lead the design of high-performance vector database architectures and long-term agent memory systems to power efficient, high-context AI reasoning.
- Scale Production RAG: Build a multi-use-case Retrieval-Augmented Generation system. You will define chunking strategies, embedding pipelines, and retrieval ranking logic that ensure accuracy and freshness across thousands of customers.
- Build the Connectivity Layer: Design an extensible, developer-friendly Enterprise Connectors Platform to ingest real-time data from Slack, Jira, and Workday, ensuring secure multi-tenant isolation and robust error recovery.
- Modernize Platform Delivery: Lead the transition to GitOps-driven deployments using Argo CD and Kubernetes, ensuring our AI services are as reliable as they are innovative.
- Define AI Observability: Implement trace-level visibility and benchmarking frameworks using tools like Langfuse to monitor agent performance, planning, and tool-use interactions.
- Provide Technical Direction: Act as the final decision-maker for complex design trade-offs, providing mentorship to the team while remaining deeply hands-on in the codebase.
What You Have
- 10+ Years of Experience: A proven track record of designing and implementing high-volume distributed systems or consumer-grade AI platforms.
- AI/LLM Specialization: Deep familiarity with agentic patterns (memory, planning, tool-use) and orchestration frameworks like LangChain or LlamaIndex.
- Infrastructure Mastery: Expertise in Kubernetes and Argo CD, with the ability to manage sophisticated CI/CD pipelines in a cloud-native environment.
- Data Engineering Prowess: Deep understanding of vector data management (indexing, similarity search) and event-driven ingestion pipelines (Kafka, Kinesis).
- Security & Multi-tenancy Mindset: Experience building for high-growth SaaS environments where data privacy, rate limiting, and tenant isolation are non-negotiable.
- Exceptional Coding Skills: Strong proficiency in modern backend languages such as Go, Python, Java, or C++.
- Academic Foundation: A Bachelor’s in Computer Science is required; a Master’s or PhD in CS, Machine Learning, or a related field is highly preferred.