Machine Learning Engineer: Perception
Bedrock Robotics
Location
San Francisco, CA
Employment Type
Full time
Location Type
Hybrid
Department
Engineering
Join the team bringing advanced autonomy to the built world
At Bedrock, we’re moving AI out of the lab and into the real world. Our team is composed of industry veterans who helped launch Waymo, scaled Segment to a $3.2B acquisition, and grew Uber Freight to $5B in revenue. Today, we’re deploying autonomous systems on heavy construction machinery across the country, accelerating schedules on billion-dollar infrastructure projects and improving safety on job sites. Backed by $350M in funding, we’re working quickly to close the gap between America's surging demand for housing, data centers, and manufacturing hubs and the construction industry's growing labor shortage.
This is where algorithms meet steel-toed boots. You’ll collaborate with construction veterans and world-class engineers to solve physical-world problems that simulations can’t touch. If you're ready to apply cutting-edge technology to solve meaningful problems alongside a talented team—we'd love to have you join us.
Machine Learning Engineer: Perception
Bedrock is bringing autonomy to the construction industry! We’re a group of veterans from the autonomous vehicle industry who are passionate about bringing the benefits of automation to areas of construction currently underserved by the market.
We are looking for engineers with expertise in shipping production 3D perception systems at scale. Successful candidates have architected these systems end to end, trained models from scratch, and understand the full stack: clustering, detection, classification, and tracking. We use both computer-vision and lidar-based approaches, so knowledge of either or both is key. Models are just part of the system: you understand data and have good intuition about why models fail. You know how to evaluate corner cases, manage or build data pipelines, decide when to use autolabels (and when not to), and have a strong understanding of the statistical properties of these systems.
What You’ll Do:
Design Early Fusion Architectures: Develop and train state-of-the-art models (e.g., BEV-based transformers) that fuse raw lidar and camera data for object detection and semantic segmentation.
Tackle "Messy" Physics: Build perception systems robust enough to handle dynamic occlusion (seeing the robot’s own arm/bucket), particulates (dust, snow, rain), and high-vibration conditions.
Deploy to the Edge: Optimize models for inference on embedded hardware. You will debug system-level issues, such as sensor calibration drift and latency bottlenecks.
Collaborate Across Teams: Work with other teams to create state-of-the-art representations for downstream use cases.
What we're looking for:
Production ML Experience: 3+ years of experience taking deep learning models from research to real-world production using PyTorch.
3D Geometry & Calibration: You have a deep understanding of SE(3) transformations, homogeneous coordinates, and intrinsic/extrinsic sensor calibration. You understand the math required to project a 3D lidar point onto a 2D image pixel accurately.
Early Fusion Expertise: Practical experience with architectures that fuse modalities at the feature level (e.g., BEVFusion, TransFuser, PointPainting) rather than just fusing final bounding boxes.
SOTA Object Detection: Experience with modern transformer-based architectures (e.g., DETR, PETR) and their temporal extensions (e.g., PETRv2, StreamPETR).
Systems Fluency: You are an expert in Python, but you are also comfortable reading and writing systems code in C++ or Rust. You understand memory management and real-time constraints.
Data Intuition: You understand that in robotics, better data alignment often beats a bigger model. You are willing to dig into the data infrastructure to ensure ground truth quality.
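As a concrete illustration of the lidar-to-pixel projection in the 3D Geometry & Calibration requirement, here is a minimal NumPy sketch: apply the SE(3) extrinsic in homogeneous coordinates, then the pinhole intrinsics. The function name and calibration values are hypothetical, not Bedrock's actual pipeline.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
    """Project lidar points into pixel coordinates.

    points_lidar: (N, 3) points in the lidar frame.
    T_cam_from_lidar: (4, 4) SE(3) extrinsic mapping lidar -> camera frame.
    K: (3, 3) camera intrinsic matrix.
    Returns (M, 2) pixel coordinates for points in front of the camera.
    """
    n = points_lidar.shape[0]
    # Homogeneous coordinates: append a column of ones.
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    # Rigid transform into the camera frame, drop the homogeneous row.
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]
    # Keep only points with positive depth (in front of the camera).
    pts_cam = pts_cam[pts_cam[:, 2] > 1e-6]
    # Pinhole projection: apply intrinsics, then divide by depth.
    uv_h = (K @ pts_cam.T).T
    return uv_h[:, :2] / uv_h[:, 2:3]
```

With an identity extrinsic and intrinsics K = [[f, 0, cx], [0, f, cy], [0, 0, 1]], a point on the optical axis lands at (cx, cy), and miscalibration in either matrix shows up directly as pixel-level misalignment between the two modalities.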
Ways to stand out:
Voxel/Occupancy Experience: Experience with occupancy grids, NeRFs, or voxel-based representations for terrain mapping.
Top-Tier Research: Published work in conferences such as ICRA, IROS, CVPR, ECCV, ICCV, CoRL, or RSS.
Our roles are often flexible. If you don't fit all the criteria, or you're in another location (especially one where we have an office, like SF or NY), please apply anyway! We'd love to consider you.