As a Perception Engineering Intern / Apprentice at 10xConstruction, you will help our autonomous drywall-finishing robots 'see' the job-site. You'll design and deploy perception pipelines—camera + LiDAR fusion, deep-learning vision models, and point-cloud geometry—to give the robot the awareness it needs.
Key Responsibilities
- Build ROS 2 nodes for 3-D point-cloud ingestion, filtering, voxelisation and wall-plane extraction (PCL / Open3D); sketch 1 after this list illustrates the plane-extraction step
- Train and integrate CNN / Transformer models for surface-defect detection and semantic segmentation (sketch 2 below)
- Implement RANSAC-based pose, plane and key-point estimation; refine with ICP or Kalman / EKF loops (sketch 3 below)
- Fuse LiDAR, depth camera, IMU and wheel odometry data for robust SLAM and obstacle avoidance
- Optimise and benchmark models on Jetson-class edge devices with TensorRT / ONNX Runtime (sketch 4 below)
- Collect, label and augment real & synthetic datasets; automate experiment tracking (Weights & Biases, MLflow)
- Collaborate with manipulation, navigation and cloud teams to ship end-to-end, production-ready perception stacks
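Sketch 1 gives applicants a concrete feel for the point-cloud work: a minimal, illustrative example of voxel-grid downsampling followed by RANSAC wall-plane extraction with Open3D. The file name, voxel size and RANSAC thresholds are assumptions, not a production spec.

```python
import open3d as o3d

# Load a LiDAR / depth-camera scan of a wall section (file name is illustrative).
pcd = o3d.io.read_point_cloud("wall_scan.pcd")

# Voxel-grid downsampling keeps the plane fit fast and less noisy.
pcd = pcd.voxel_down_sample(voxel_size=0.02)  # 2 cm voxels (assumed)

# RANSAC plane segmentation: the dominant plane approximates the wall.
plane_model, inliers = pcd.segment_plane(distance_threshold=0.01,
                                         ransac_n=3,
                                         num_iterations=1000)
a, b, c, d = plane_model  # plane equation ax + by + cz + d = 0
wall = pcd.select_by_index(inliers)
clutter = pcd.select_by_index(inliers, invert=True)
print(f"Wall plane: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0 "
      f"({len(inliers)} inlier points)")
```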
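Sketch 2 hints at the semantic-segmentation side of the role by running a stand-in torchvision DeepLabV3 model on a dummy frame. The class count, input size and backbone choice are assumptions for illustration only, not the team's actual model.

```python
import torch
import torchvision

NUM_CLASSES = 3  # e.g. background / joint tape / surface defect (assumed taxonomy)

# Stand-in backbone; real weights would come from training on labelled job-site data.
model = torchvision.models.segmentation.deeplabv3_resnet50(
    weights=None, num_classes=NUM_CLASSES)
model.eval()

# One normalised RGB frame in NCHW layout, as it might arrive from a camera node.
frame = torch.rand(1, 3, 480, 640)

with torch.no_grad():
    logits = model(frame)["out"]   # (1, NUM_CLASSES, 480, 640)
    mask = logits.argmax(dim=1)    # per-pixel class labels
print(mask.shape)
```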
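Sketch 3 shows the ICP-refinement half of the pose-estimation bullet: two overlapping scans aligned with Open3D's point-to-plane ICP, starting from an identity guess. In practice a RANSAC-based global registration or an odometry prior would seed the initial transform; file names and thresholds here are assumptions.

```python
import numpy as np
import open3d as o3d

# Two overlapping scans of the same wall section (file names are illustrative).
source = o3d.io.read_point_cloud("scan_t0.pcd")
target = o3d.io.read_point_cloud("scan_t1.pcd")

# Point-to-plane ICP needs surface normals; estimate them on both clouds.
for pc in (source, target):
    pc.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

init = np.eye(4)  # identity guess; a RANSAC global fit or odometry would seed this
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,
    init=init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())

print("fitness:", result.fitness)
print("refined pose:\n", result.transformation)
```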
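Sketch 4 illustrates the edge-deployment bullet: loading an exported ONNX model with ONNX Runtime, requesting the TensorRT execution provider with CUDA and CPU fallbacks, and timing inference. The model file name and input shape are assumptions.

```python
import time
import numpy as np
import onnxruntime as ort

# Load an exported model (file name is illustrative); providers are tried in order,
# so the session falls back to CUDA or CPU if TensorRT is unavailable.
session = ort.InferenceSession(
    "defect_segmenter.onnx",
    providers=["TensorrtExecutionProvider",
               "CUDAExecutionProvider",
               "CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
frame = np.random.rand(1, 3, 480, 640).astype(np.float32)  # assumed input shape

for _ in range(5):  # warm-up runs
    session.run(None, {input_name: frame})

runs = 50
t0 = time.perf_counter()
for _ in range(runs):
    session.run(None, {input_name: frame})
print(f"mean latency: {(time.perf_counter() - t0) / runs * 1000:.1f} ms")
```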
Qualifications & Skills
- Solid grasp of linear algebra, probability and geometry; coursework or projects in CV or robotics perception
- Proficient in Python 3.x and C++17 / 20; comfortable with git and CI workflows
- Experience with ROS 2 (rclcpp / rclpy) and custom message / launch setups
- Familiarity with deep-learning vision (PyTorch or TensorFlow): classification, detection or segmentation
- Hands-on work with point-cloud processing (PCL, Open3D); know when to apply voxel grids, KD-trees, RANSAC or ICP
- Bonus: exposure to camera-LiDAR calibration, or real-time optimisation libraries (Ceres, GTSAM)
Why Join Us
- Work side-by-side with founders and senior engineers to redefine robotics in construction
- Build tech that replaces dangerous, repetitive wall-finishing labor with intelligent autonomous systems
- Help shape not just a product, but an entire company, and see your code on real robots at active job-sites
Requirements
Python 3.x, C++17 / 20, ROS 2, PyTorch, Open3D, RANSAC
Skills Required
PyTorch, PCL