Vision experience: VIO/SLAM, camera models, optical flow/feature tracking; comfort with deep learning for detection/segmentation/pose (PyTorch) and deployment on edge hardware
Ship and fly: Proven research-to-production delivery and field testing on real platforms
BS/MS/PhD in Computer Science, Robotics, Electrical/Aerospace Engineering, or related field, or equivalent practical experience
Preferred Qualifications
Experience with CUDA/TensorRT/ONNX Runtime; NVIDIA Jetson pipelines
Exposure to ROS 2, PX4/ArduPilot integration
Strong data practices: data validation in CI, SQL/Parquet, reproducible datasets
Experience in contested/denied RF, low-light/night, high-vibration environments
Rust for systems tooling; Docker for reproducibility
Responsibilities
Prototype and productionize vision-based navigation and targeting features end-to-end, from simulation to hardware-in-the-loop (HITL) to flight, in production C++
Turn detections (EO/IR/RF/radar) into well-posed measurement models with explicit latencies and covariances; make the estimator decision-aware without corrupting state
Stabilize GNSS-to-VIO handover (adaptive covariances, gating, hysteresis, reset-less alignment) to eliminate position jumps and estimator resets
Build and optimize real-time software on Linux/embedded targets; profile CPU/GPU and vectorize hot paths; optionally use CUDA/TensorRT on Jetson hardware
Own calibration and time-sync across IMU/cameras/radar/LiDAR/GNSS; validate in flight
Create evaluation pipelines and dashboards for drift, handover stability, relocalization, track quality
Implement fault detection and graceful degradation for harsh conditions (motion blur, low light, vibration, RF denial)
Integrate global aids (maps, magnetics, radar) for long-term consistency and loop-closure robustness