In contrast, humans and animals—using only vision—navigate complex spaces with ease. Compound Eye simulates nature’s depth perception capabilities to enable robots to do the same, using RGB cameras and advanced software.
Compound Eye’s system points two or more ordinary RGB cameras at a scene, estimates distance using both parallax and semantic cues, and fuses the results into accurate depth at every pixel—all in real time.
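The parallax cue above is classic stereo triangulation: objects that shift more between the two camera views are closer. As a rough illustration only (the camera parameters below are hypothetical, not Compound Eye's actual calibration), depth can be recovered from a per-pixel disparity map as focal length times baseline divided by disparity:

```python
import numpy as np

# Hypothetical stereo rig parameters (illustration only):
FOCAL_PX = 800.0    # focal length, in pixels
BASELINE_M = 0.12   # distance between the two cameras, in meters

def depth_from_disparity(disparity: np.ndarray) -> np.ndarray:
    """Parallax cue: depth = focal * baseline / disparity.

    Pixels with zero disparity (no measurable parallax) get infinite depth.
    """
    with np.errstate(divide="ignore"):
        return np.where(disparity > 0,
                        FOCAL_PX * BASELINE_M / disparity,
                        np.inf)

# A toy 2x2 disparity map (pixels): larger disparity = closer surface.
disparity = np.array([[16.0, 8.0],
                      [4.0, 0.0]])
depth = depth_from_disparity(disparity)
# depth[0, 0] = 800 * 0.12 / 16 = 6.0 meters
```

In textureless or ambiguous regions, disparity alone is unreliable, which is why the system also leans on semantic cues (what kind of object a pixel belongs to) before fusing both into a dense depth map.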
• Dense Depth + Per-Pixel Semantics + Optical Flow
• Operates in real-world, unstructured environments
• Embedded, low-latency, vehicle-agnostic system
• Developer access via the VIDAS SDK
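Dense per-pixel depth is what makes downstream outputs like point clouds possible: each pixel, combined with camera intrinsics, back-projects to a 3D point. A minimal sketch of that back-projection, assuming a simple pinhole model with made-up intrinsics (not values from the VIDAS SDK):

```python
import numpy as np

# Hypothetical pinhole intrinsics (illustration only):
FX = FY = 500.0          # focal lengths, in pixels
CX, CY = 320.0, 240.0    # principal point, in pixels

def depth_to_points(depth: np.ndarray) -> np.ndarray:
    """Back-project an HxW metric depth map into an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.full((480, 640), 2.0)   # a flat wall 2 m from the camera
points = depth_to_points(depth)
# Every point has z = 2.0; the pixel at the principal point maps to (0, 0, 2).
```

Per-pixel semantics attach naturally to the same grid, so each 3D point can carry a class label alongside its position.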
Compound Eye’s custom-built annotation tools supported the rapid development of its 3D perception models, allowing internal teams to generate training data efficiently and accurately—without relying on external vendors.
Depth / point cloud editing UI
Cuboid creation and annotation UI
"After researching more than 200 commercially available annotation tools, the team found that most were built for sparse 3D datasets. Instead of buying off the shelf, they decided to build a tool to power their state-of-the-art perception platform. But even with this valuable resource, the company’s small team was still constrained by in-house capacity. And they didn’t want to spend time on tedious annotation tasks; they wanted to focus on the company’s mission of building a full 3D perception solution using cameras. Compound Eye tried to outsource the annotation work to other vendors but, due to poor quality, high costs, and restrictive tooling, abandoned that approach." - CloudFactory case study