To enter immersive 3D worlds (Virtual Reality) or to overlay virtual objects onto the user’s field of view (Augmented Reality), devices and applications are needed that combine the user’s position in space with the display of virtual objects. To understand the technical challenges of combining position data with rendered content, I challenged myself to implement an Augmented Reality phone app.
I am using React Native for the application, in combination with the Expo framework. Through the framework, I can access the phone’s sensors, such as location (GPS), motion (gyroscope), and orientation (accelerometer and magnetometer). These sensor readings are then combined to place and orient the camera object of a 3D rendered scene.
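As an illustration of how raw sensor readings can feed a camera orientation, the sketch below (not the app’s actual code) derives pitch and roll from an accelerometer sample. It assumes the device is roughly at rest, so the reading is dominated by gravity, and that values are in g units with the z axis pointing out of the screen — both assumptions for this example:

```typescript
// Sketch: derive camera pitch and roll (in radians) from an accelerometer
// reading. Assumes the reading is dominated by gravity (device roughly at
// rest), values in g units, z axis pointing out of the screen.
interface Accel { x: number; y: number; z: number; }

function pitchRollFromAccel({ x, y, z }: Accel): { pitch: number; roll: number } {
  // Pitch: how far the device's x axis is tilted away from horizontal.
  const pitch = Math.atan2(-x, Math.sqrt(y * y + z * z));
  // Roll: tilt around the device's long axis, from the y/z components.
  const roll = Math.atan2(y, z);
  return { pitch, roll };
}
```

Yaw (compass heading) would additionally need the magnetometer, since gravity alone cannot distinguish rotations around the vertical axis.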
For the rendering, I am using the THREE.js library, which lets me place objects in the virtual scene that are displayed to the user when they look in the direction of the objects. As the coordinate grid of the 3D scene is independent of the real world, I link the two by converting GPS coordinates into grid coordinates. That way I can render objects at fixed positions in the real world.
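Converting GPS coordinates into scene coordinates can be done with a flat-earth (equirectangular) approximation, which is accurate enough over the few hundred metres an AR scene covers. The sketch below follows that approach; mapping east onto the scene’s x axis and north onto −z (since a THREE.js camera looks down −z) is a convention chosen for illustration:

```typescript
// Sketch: map a GPS coordinate to local scene coordinates (in metres)
// relative to an origin, using an equirectangular approximation.
const EARTH_RADIUS_M = 6371000;

interface LatLon { lat: number; lon: number; } // degrees

function gpsToGrid(origin: LatLon, point: LatLon): { x: number; z: number } {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(point.lat - origin.lat);
  const dLon = toRad(point.lon - origin.lon);
  // East-west distance per degree of longitude shrinks with latitude.
  const east = dLon * Math.cos(toRad(origin.lat)) * EARTH_RADIUS_M;
  const north = dLat * EARTH_RADIUS_M;
  // Convention: scene x = east, scene -z = north.
  return { x: east, z: -north };
}
```

With the user’s start position as the origin, every object’s GPS coordinate maps to a fixed point in the scene grid.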
Users of the app can see the objects when getting close to them in the real world (and, in future extensions, interact with them).
The biggest challenge of this project was stabilizing the rendered objects while the phone is held still: the phone’s sensors returned quite noisy values, resulting in an unstable AR view. To reduce the jitter, I added a simple low-pass filter to remove the spikes in the sensor values.
The result was much more stable, and the rendered objects stayed better aligned with the user’s view.
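Such a low-pass filter is essentially an exponential moving average: each new sample only moves the output by a fraction, so short spikes are damped while sustained changes still pass through. A minimal sketch (the smoothing factor 0.1 is an illustrative value, not necessarily the one the app uses):

```typescript
// Sketch: exponential-moving-average low-pass filter for a noisy
// sensor channel. Smaller alpha = stronger smoothing, more lag.
function makeLowPass(alpha: number): (sample: number) => number {
  let state: number | null = null;
  return (sample: number) => {
    // First sample initializes the filter; afterwards blend new into old.
    state = state === null ? sample : alpha * sample + (1 - alpha) * state;
    return state;
  };
}

const smooth = makeLowPass(0.1);
// A single spike of 10 in an otherwise steady signal only nudges the
// output to 1, instead of jumping the camera by the full spike.
[0, 0, 0, 10, 0].map(smooth);
```

In the app, one filter instance per sensor axis would be fed each incoming reading before the value reaches the camera.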
As GPS was not accurate enough to track a user inside buildings, I implemented a simple step detection logic to detect when users move and to update their position in the virtual grid accordingly. The position is re-anchored once GPS data is available again. This technique (called Dead Reckoning) helps overcome position inaccuracies.
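A minimal version of such step detection watches the magnitude of the acceleration vector and counts upward threshold crossings; each detected step then advances the position along the current heading by an assumed stride length. The threshold (1.2 g) and stride (0.7 m) below are illustrative assumptions, not values from the app:

```typescript
// Sketch: threshold-based step detection on accelerometer magnitude
// (in g units; roughly 1.0 at rest), plus a dead-reckoning position update.
function countSteps(magnitudes: number[], threshold = 1.2): number {
  let steps = 0;
  let above = false;
  for (const m of magnitudes) {
    if (!above && m > threshold) { steps++; above = true; } // rising edge = one step
    else if (above && m < threshold) { above = false; }     // reset once the peak passes
  }
  return steps;
}

// Advance a 2D grid position by one stride along the heading
// (radians, 0 = north, matching the x = east / -z = north convention).
function deadReckon(pos: { x: number; z: number }, heading: number, stride = 0.7) {
  return {
    x: pos.x + Math.sin(heading) * stride,
    z: pos.z - Math.cos(heading) * stride,
  };
}
```

Errors accumulate with every estimated step, which is why the position should be snapped back to GPS whenever an accurate fix is available.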
The project can be found on Github: https://github.com/brakid/AugmentedReality