The world is filled with depth-sensing camera technology now: in 3D body scanners, drones, AR headsets such as HoloLens and Magic Leap, and iPhones. But Structure Core, the first fully integrated depth-sensing camera from Occipital, based in Boulder, Colorado, aims to be a standalone component that lets researchers and makers quickly build their own solutions.
The module works with Windows, Linux, Android and Mac (sorry, iOS developers). It also offers key improvements over the original 2013 Structure Sensor, a version of which clipped onto an iPad and gave an early preview of where AR technology was headed.
Occipital developed that original Structure Sensor with PrimeSense, the Israeli company whose technology also powered the original Microsoft Kinect. Apple acquired PrimeSense in 2013, which eventually led to the TrueDepth camera behind Face ID on iPhones and the iPad Pro. According to Occipital, the original Structure Sensor has sold nearly 100,000 units to nearly 9,000 developers, resulting in 120 applications for the platform.
Structure Core is available for pre-order, both as a self-contained version and as a module designed to be embedded in other electronics, such as an AR headset or a prototype robot. Occipital will sell the sensor in waves: an early-access batch arriving in "the next two weeks" will cost $599; a second wave shipping in January will cost $499. The main release, scheduled for March of next year, will cost $399.
Up-close and personal with Structure Core
I got a quick demonstration of what Structure Core can do from Occipital's Vikas Reddy and Adam Rodnitzky. Compared with the first Structure camera, the new Structure Core tracks motion much better and retrieves depth information more quickly.
Structure Core detects 3D detail and depth up to about 5 meters (16 feet) away via infrared projection, and can sense even farther with its other cameras. It has two infrared cameras, a 160-degree black-and-white camera and an 85-degree color camera, plus standalone six-degrees-of-freedom motion tracking via a built-in six-axis IMU. That makes the new sensor, essentially, a self-contained computer vision camera with motion tracking.
In a few moments, co-founder Vikas Reddy connected Structure Core and quickly swept it around our office lobby, creating 3D scans of the floor, walls and furniture, and even building a map of the room's essential features.
Occipital imagines Structure Core being used in drones, where the USB 3-connected camera is light and fast enough to be part of a rapid navigation system. Or in robots: Misty, the home robotics company spun out of Sphero (and also based in Boulder), uses a version of Structure Core in the head of its Misty II robot. Occipital is already working with farmers who are exploring the camera for robotic berry picking, and with drones creating aerial scans of hard-to-reach locations such as repair sites.
For those who dream of building a mixed-reality headset, robots, autonomous vehicles or drones, or for researchers seeking an advanced computer vision camera, a module like Structure Core could prove a useful and fascinating tool.
And, ultimately, it will need to be. Competition in the space is intensifying: Google (owner of Lytro's light field camera technology and the current leader in mainstream AI-driven cameras), Apple (with the iPhone's TrueDepth camera and advances in ARKit), Microsoft (amid HoloLens 2 rumors) and smaller companies such as Skydio (maker of autonomous consumer drones) are all building increasingly impressive computer vision products as the calendar turns toward an even more competitive 2019.