AR-VR construction technology disruption is inevitable
AR-VR construction technology may very soon no longer be the exclusive pastime of gamers and designers: it may be put to far more practical use.
Last weekend, I had the opportunity to road-test a Google Tango tablet with Lucas Stephane, PhD, Assistant Professor at the Florida Institute of Technology's Human Centered Design Institute, using some of the early area-learning and mapping freeware developers have uploaded to the Play Store, as well as Google's own APIs. The apps we tried (not in depth) were:
Project Tango Explorer (by Project Tango)
Space Sketchr (by Left Field Labs)
3D Scanner for Project Tango (by Voxxlr)
Project Tango Constructor (by Project Tango)
RoomScanner for Project Tango (by 7 Inch Worlds)
My interest was in how Tango could be used to enhance and facilitate AR-VR construction technology. To begin with, it helps to distinguish between area-learning and motion-tracking capabilities. Google states:
“Using area learning, a Project Tango device can remember the visual features of the area it has visited and use them to correct errors in its understanding of its position, orientation, and movement. This differs from motion tracking alone, which has no memory of the environment. This memory allows the system to perform drift corrections, also called loop closures. When the device comes back to a place it has already visited, it realizes it has traveled in a loop and adjusts its path to be consistent with its previous observations. These corrections can be used to adjust the device’s position and trajectory within your application.”
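To make the loop-closure idea concrete, here is a minimal, hypothetical sketch (not Tango's actual algorithm): when the device recognizes a previously visited spot, the accumulated drift is redistributed back along the trajectory so the path closes consistently.

```python
def close_loop(trajectory, revisit_index):
    """Linearly redistribute the drift between the pose at
    `revisit_index` and the final pose over the intervening poses."""
    start = trajectory[revisit_index]
    end = trajectory[-1]
    # Total position error accumulated while traveling the loop.
    drift = (end[0] - start[0], end[1] - start[1])
    n = len(trajectory) - 1 - revisit_index
    corrected = trajectory[:revisit_index + 1]
    for i, (x, y) in enumerate(trajectory[revisit_index + 1:], start=1):
        frac = i / n  # later poses absorb more of the correction
        corrected.append((x - drift[0] * frac, y - drift[1] * frac))
    return corrected

# A square walk that should end where it started but drifted to (0.4, 0.2):
path = [(0, 0), (1, 0), (1, 1), (0, 1), (0.4, 0.2)]
fixed = close_loop(path, 0)
print(fixed[-1])  # -> (0.0, 0.0): the final pose snaps back to the start
```

Real systems solve this as a pose-graph optimization over many constraints; the linear smearing above is just the simplest way to see what "adjusts its path to be consistent" means.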
Additionally, I cite another excellent resource for developers who want to understand the dynamics and specifications of Tango’s many on-board sensors: “CHISEL: Real Time Large Scale 3D Reconstruction Onboard a Mobile Device using Spatially-Hashed Signed Distance Fields.” The authors’ well-researched CHISEL system provides real-time, on-board 3D reconstruction at 2–3 cm resolution; but those authors may be moving on to less mundane things, such as flying robots – we’re talking Uber Gigantor, Iron Man, and Optimus Prime. A chart of the CHISEL system architecture is provided in the frontispiece of this post.
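The "spatially-hashed signed distance field" at the heart of CHISEL is less exotic than it sounds. Here is a toy sketch of the idea using a plain dict as the hash map; the names, constants, and update rule are illustrative, not CHISEL's actual implementation.

```python
VOXEL_SIZE = 0.02   # 2 cm resolution, in the range reported for CHISEL
TRUNCATION = 0.06   # distances are clamped to +/- 6 cm (assumed value)

tsdf = {}  # (i, j, k) voxel index -> (signed distance, weight)

def integrate(point_z, voxel_z, index):
    """Fold one depth observation into the voxel at `index`.

    `point_z` is the measured surface depth along the sensor ray and
    `voxel_z` is the voxel center's depth along the same ray."""
    sdf = max(-TRUNCATION, min(TRUNCATION, point_z - voxel_z))
    d, w = tsdf.get(index, (0.0, 0.0))
    # A weighted running average smooths out the noisy depth readings.
    tsdf[index] = ((d * w + sdf) / (w + 1.0), w + 1.0)

# Two noisy observations of a surface ~1 m away, seen from a voxel
# whose center lies at 0.98 m along the ray:
integrate(1.00, 0.98, (0, 0, 49))
integrate(1.02, 0.98, (0, 0, 49))
print(tsdf[(0, 0, 49)])  # distance averages to ~0.03, weight 2.0
```

The hashing buys sparsity: only voxels near observed surfaces ever get allocated, which is what makes on-device reconstruction at this resolution feasible.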
AR-VR construction technology mastery
The Tango tablet is packaged with 4 GB of RAM, a quad-core CPU, an Nvidia Tegra K1 GPU, a motion-tracking camera identical to the phone device’s, a projective depth sensor that refreshes at 3 Hz, and a 4-megapixel color sensor that refreshes at 30 Hz. As impressive as the Tango is, its depth sensor is far noisier than other commercially available depth cameras.
Tango Explorer is a good place to start learning how the tablet deploys area learning and multiple point-cloud tools, as it handily demonstrates the core functions of the device. Explorer will output its scans into exportable ADFs (Area Description Files) or work with other APIs. AR-VR construction technology apps will evolve using this platform.
Space Sketchr utilizes the tablet’s motion-sensing and depth-perception tools (its laser projector and four cameras) to generate a 3D point-cloud model of the space it travels through.
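The step from a depth reading to a point in a cloud is a back-projection through a pinhole camera model. The sketch below shows the geometry; the intrinsics are made-up illustration values, not the Tango tablet's actual calibration.

```python
FX, FY = 520.0, 520.0   # focal lengths in pixels (assumed)
CX, CY = 320.0, 240.0   # principal point (assumed)

def deproject(u, v, depth):
    """Turn pixel (u, v) with a depth reading in meters into a 3D point
    in the camera frame."""
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return (x, y, depth)

# A pixel at the image center maps straight onto the optical axis:
print(deproject(320, 240, 1.5))  # -> (0.0, 0.0, 1.5)
```

Run over every pixel of a 3 Hz depth frame, and transformed by the motion-tracked device pose, these points are what accumulate into the model Space Sketchr draws.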
Take a look at the tablet in action:
In earlier point-cloud modeling apps, the number of points in a scan is limited, and the reading is subject to drift – two limitations the Left Field Labs team has overcome.
3D Scanner is a voxel-building (think Rubik’s cube deconstructed) area-mapper. The app has on-screen interfaces to set various parameters – System/Resolution/Camera/Scan – which is quite a bit of operability. In the test drive, it seemed to work nicely with the tablet, slowly building surfaces and angles from the data it receives from the laser and cameras. It did build some impressive voxel models – which I may add to this post. However, a visit to the developer’s site showed signs of mass extinction only last summer.
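Voxel building itself is easy to picture: snap each scanned point to a cube on a fixed grid, so that many noisy points collapse into one "Rubik's cube" cell. A toy illustration, with an arbitrary grid size:

```python
from math import floor

CUBE = 0.05  # 5 cm voxels (illustrative; apps expose this as "Resolution")

def voxelize(points):
    """Return the set of occupied voxel indices for a point cloud."""
    return {(floor(x / CUBE), floor(y / CUBE), floor(z / CUBE))
            for (x, y, z) in points}

# Three nearby points land in the same cell; a distant one does not:
cells = voxelize([(0.01, 0.01, 0.01), (0.02, 0.04, 0.03),
                  (0.04, 0.02, 0.02), (1.01, 1.01, 1.01)])
print(len(cells))  # -> 2
```

This is why voxel models stay compact and smooth-looking even when the underlying depth data is noisy: the grid acts as both a spatial index and a denoiser.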
Project Tango Constructor uses the device’s core tools to generate mesh renderings of area-mapping input. These renderings can be output and exported for integration with other tools. These were similar to the Voxxlr models.
RoomScanner is another point-cloud area-mapping tool. It combines depth perception and motion tracking into a single application, and it has an error-correction feature that controls drift and refines accuracy. At present, RoomScanner can read up to 1M points and save up to 500K in the cloud.
The market is ripe for AR-VR construction technology disruption
However, RoomScanner, like other motion-tracking apps, puts burdens on the hardware and requires special handling, lest it freeze up or drain your lithium.
Some of the ‘holiday’ (omitted) areas are less than accurate, but I sense that accuracy will come in time, as will range dimensioning enabled by enhanced lasers. This may seem rather primitive; however, it’s only a matter of time before mobile GPUs have greater capacities. Developers will also need to optimize their SLAM techniques before this can happen.
SLAM that Scan
Heretofore, SLAM (simultaneous localization and mapping) was the M.O. of motorized area-mapping robots. The process of assimilating external sensor information is the same in mobile devices, and is described in Blas and Riisgaard.
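The predict-correct cycle at the heart of SLAM can be boiled down to one dimension: odometry predicts the pose and grows the uncertainty; re-observing a mapped landmark shrinks it again. The sketch below is a deliberately stripped-down, hypothetical robot-in-a-hallway example in the spirit of the EKF steps in the Blas and Riisgaard tutorial, minus all the matrix bookkeeping.

```python
def predict(pose, variance, motion, motion_noise):
    """Odometry step: move, and become less certain of where you are."""
    return pose + motion, variance + motion_noise

def correct(pose, variance, measured, meas_noise):
    """Measurement step: fuse an observation of a known landmark."""
    gain = variance / (variance + meas_noise)  # Kalman gain
    return pose + gain * (measured - pose), (1 - gain) * variance

pose, var = 0.0, 0.0
pose, var = predict(pose, var, 1.0, 0.5)   # drive 1 m; drift creeps in
pose, var = correct(pose, var, 1.2, 0.5)   # a landmark says we're at 1.2 m
print(round(pose, 2), var)  # -> 1.1 0.25
```

Full SLAM does this jointly over the robot pose and every landmark position, which is exactly the workload that taxes a tablet's CPU, GPU, and battery.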
A significant challenge to fine calibration of area mapping and depth perception lies in arming the Tango with sensing devices that can perform as well as, say, a LIDAR unit, a Trimble scanner, or another photogrammetry camera. Anything on the market that can is far too big to mount on a tablet. The other challenge is the latency of real-time modeling that relies on a server pipeline.
For the device to reach its full potential, both hardware and software will have to undergo many refinements and upgrades before early applications for diverse uses can be developed for Android smartphones. Early on, the expectation is, of course, that most VC money will be thrown at AR/VR gaming apps. However, there will be myriad practical uses for the technology.
Tango’s visual-inertial odometry, or “localization,” and the ability to model that data in real time have great and far-reaching potential across industries such as transportation, space, and energy.
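Odometry-based localization amounts to composing small frame-to-frame motions into a global pose. Here is a 2D sketch of that composition; the motion increments are invented for illustration, not real Tango output.

```python
from math import cos, sin, pi

def compose(pose, delta):
    """Apply a body-frame increment (dx, dy, dtheta) to a global
    pose (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * cos(th) - dy * sin(th),
            y + dx * sin(th) + dy * cos(th),
            th + dth)

pose = (0.0, 0.0, 0.0)
# Step forward 1 m while turning 90 degrees, then step forward 1 m:
for step in [(1.0, 0.0, pi / 2), (1.0, 0.0, 0.0)]:
    pose = compose(pose, step)
print(round(pose[0], 6), round(pose[1], 6))  # -> 1.0 1.0
```

Each increment also carries a little error, which is why the loop-closure and error-correction machinery described earlier matters so much for anything longer than a short walk.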
To explore such AR/VR construction technology possibilities, consider a few scenarios:
- Exploratory activities, such as spelunking, deep diving, and mountain climbing, could be a lot safer under adverse conditions like cold, darkness, and loss of bearings. Complicated routes would be easily retrievable and could be shared with others in real time, or on another occasion.
- Visually impaired and legally blind people could benefit from a library of models that guided them through the routes they desired to take.
- More precise remote robotic operations could take place when conditions dictate.
- Miners equipped with the device could track the paths of their movements as a safety precaution in the event of a collapse. Search and rescue operations would be greatly enhanced if responders had an idea of where people were, and how to get to them (and they had a wireless signal – not always a gimme after an accident).
- HAZMAT teams could arm a rover with the device to model affected areas, so as to discern the safest and most expeditious course of action.
- Tracking in three dimensions could be effected, creating a wider avenue for monitoring and surveillance.
Applications like these will require much bigger point-cloud capacities and/or more on-board real-time processing torque, GPU power, and system memory in order to get out into the real world, i.e., beyond a screen. That’s going to take a lot of hard work. Thus, practical-application developers may as well let the AR/VR gamers work out the kinks and do the heavy lifting before they even think about developing life-critical deployments. Otherwise, they’ll be spinning their wheels.
AR-VR construction technology will ride the backend of disruption, as it always has
It will be interesting to see how the technology develops. Mr. Stephane has already put some of the technology to practical use, and in theoretical applications as critical as cruise-ship building, nuclear plant maintenance, and catastrophe modeling (the New Orleans levees). The immediate rewards are there for the taking by the most dedicated and resourceful developers.