AI Systems & Inference Frameworks Engineer
Adaption
Location: San Francisco, New York, United States, Canada
Employment Type: Full time
Location Type: Hybrid
Department: Platform
Deadline to Apply: February 28, 2026 at 2:00 AM EST
About Us
Most AI is frozen in place: it doesn't adapt to the world. We think that's backwards. Our mandate is to build efficient intelligence that evolves in real time. Our vision is AI systems that are flexible, personalized, and accessible to everyone. We believe efficiency is what makes this possible; it's how we expand access and ensure innovation benefits the many, not the few. We believe in talent density: bringing together the best and most driven individuals to push the boundaries of continual adaptation. We're looking for builders and creative thinkers ready to shape the next era of intelligence.
The Role
You’ll work directly with our founders to design and build the inference and optimization systems that power our core product. This role bridges research and production, combining deep exploration of inference techniques with hands-on ownership of scalable, high-performance serving infrastructure. You’ll own the full lifecycle of LLM inference, from experimentation and performance analysis to deployment and iteration in production. You’ll thrive in a zero-to-one environment and help define the technical foundations of our inference stack.
Responsibilities
Inference Research & Systems: design and build our LLM inference stack from zero to one, exploring and implementing advanced techniques for low-latency, high-throughput serving of language and multimodal models.
Frameworks & Optimization: develop and optimize inference using modern frameworks (e.g., vLLM, SGLang, TensorRT-LLM), experimenting with batching strategies, KV-cache management, parallelism, and GPU utilization to push performance and cost efficiency.
Software–Hardware Co-Design: collaborate closely with founders and model developers to analyze bottlenecks across the stack, co-optimizing model execution, infrastructure, and deployment pipelines.
Qualifications
Strong experience building and optimizing LLM inference systems in production or research environments
Hands-on expertise with inference frameworks such as vLLM, SGLang, TensorRT-LLM, or similar
Deep performance mindset with experience in GPU-backed systems, latency/throughput optimization, and resource efficiency
Solid understanding of transformer inference, serving architectures, and KV-cache–based execution
Strong programming skills in Python; experience with CUDA, Triton, or C++ a plus
Comfort working in ambiguous, zero-to-one environments and driving research ideas into production systems
Nice to have: experience with model quantization or pruning, speculative decoding, multimodal inference, open-source contributions, or prior work in systems or ML research labs
Above all, we're looking for great teammates who make work feel lighter and aren't afraid to go out on a limb with bold ideas. You don't need to be perfect, but you do need to be adaptable. We encourage you to apply, even if you don't check every box.
Benefits
Flexible work: In-person collaboration in the Bay Area, a distributed global-first team, and quarterly offsites.
Adaption Passport: Annual travel stipend to explore a country you've never visited. We're building intelligence that evolves alongside you, so we encourage you to keep expanding your horizons.
Lunch Stipend: Weekly meal allowance for take-out or grocery delivery.
Well-Being: Comprehensive medical benefits and generous paid time off.