
These are the possibilities of Apple’s ARKit in iOS 11

On June 5th, Apple announced ARKit for iOS 11 during its annual Worldwide Developers Conference (WWDC). What is ARKit, and what does this technology make possible? I'll explain in this blog post.

Apple's ARKit is a framework that lets us easily implement Augmented Reality (AR) in our apps. ARKit not only makes AR easier to implement, it also significantly improves how the technology is displayed. Before diving into ARKit, I'll first explain a bit more about AR.

What is augmented reality?

Augmented reality adds digital information to a user's environment in real time. Through the camera of your smartphone or tablet you see the actual environment, overlaid with additional digital information. This combination creates the illusion that the content is placed in the real world. It's a technology with many possibilities, and it is being applied in more and more apps.

One of the best-known examples is Pokémon Go. This app lets users place Pokémon in the real world through their smartphone. More and more companies are also choosing to implement AR. Take IKEA, for example: with its app, users can place furniture in their living room and walk around the virtual items to see exactly how they'll look from different angles. This way, users can check whether a particular color or style fits their home.

[Image: AR in the IKEA app and Pokémon Go]

The possibilities of Augmented Reality

As you may have guessed, the possibilities of Augmented Reality extend beyond placing Pokémon and furniture. The examples below show the added value of AR.

A first example of this added value is navigation. In the video below, a user follows a blue line that is placed in reality through his phone. With AR, a line or even a series of arrows can guide a user to his destination - Starbucks, in this case. Through GPS or WiFi, the user's location can be determined, so the remaining distance to the destination can be displayed as well.



Using this technique would also be a great solution for Indoor Wayfinding. Visitors to large venues such as hospitals and airports could easily find their department or gate. This form of Indoor Wayfinding is even more personal than the well-known blue line on a map that we currently see in Apple Maps and Google Maps.

By combining AR and location determination, you can easily find a store or a person. This opens the door to even more functionality. Take an exhibition stand at a big conference, for example: users searching for a specific stand only have to point their camera at the exhibition hall to see where it is. This technique is shown in the image above, where a user can easily see where important buildings are and how far away they are.

Augmented Reality before ARKit

Augmented Reality is basically nothing new under the sun. Prior to the arrival of ARKit, we already saw this functionality implemented in many applications. However, the possibilities were limited and the implementation was a lot more complicated.

Previously, smartphones needed so-called markers to make use of AR. A marker is a recognition point - such as a QR code or a logo - that determines where a 3D object should be displayed on the screen. As soon as the marker is no longer visible or recognized, the position of the 3D object can no longer be determined. This is detrimental to the user experience, because a user may have to point his camera at the marker again and again.

In addition, AR was not natively supported on Apple devices; we had to use external libraries to support it. A 'library' is a piece of code that provides specific features (such as AR) that we app developers can use within our own projects. Developing our own library for AR would take a lot of time, so we had to rely on external libraries. Not an ideal solution, as we want to have everything in-house and don't want to depend on third parties.

[Image: marker-based AR]

How does ARKit work?

Thanks to Apple's ARKit, using and implementing Augmented Reality becomes a lot more accessible to developers. The reason is that ARKit is native and already supported by many devices (see the list below in this blog post). We no longer need external libraries - and thus third-party dependencies - to implement Augmented Reality in our apps.

ARKit uses Visual Inertial Odometry (VIO). With this technique, the smartphone tracks its surroundings precisely, following distinctive surface features from frame to frame, so markers are no longer needed. VIO combines the camera's image data with data from the phone's motion sensors. The result is an accurate model of the position and movement of the device within its environment. In other words, the 3D object gets an even more accurate position on your screen, making it look like it's actually placed in reality.
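To give an idea of how little code this takes, here is a minimal sketch of starting a world-tracking (VIO) session with ARKit. It assumes a SceneKit-based `ARSCNView`; the class name `ARViewController` is just an illustrative choice.

```swift
import UIKit
import ARKit

final class ARViewController: UIViewController {
    // An ARSCNView renders SceneKit content on top of the live camera feed.
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // World tracking is only supported on devices with an A9 chip or newer.
        guard ARWorldTrackingConfiguration.isSupported else {
            print("ARKit world tracking is not supported on this device")
            return
        }

        // Running the session starts the camera and motion-sensor fusion (VIO).
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
```

Everything else - the feature tracking and sensor fusion described above - happens inside ARKit once the session is running.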

Interpretation of the environment

When using Augmented Reality, the interpretation of the environment is very important: if your smartphone can't recognize a specific environment, it can't virtually place objects in it either. ARKit uses a technique called world tracking, which allows the phone to recognize horizontal surfaces, such as a table or floor. These surfaces are then used as anchor points to determine where an object should be placed and displayed.
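In code, this comes down to enabling plane detection on the configuration and listening for the anchors ARKit creates. A minimal sketch, assuming the surrounding class is set as the `ARSCNView`'s delegate:

```swift
import ARKit

// Enable plane detection on the world-tracking configuration.
// In iOS 11, ARKit only detects horizontal planes (floors, tables).
func makePlaneDetectionConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = .horizontal
    return configuration
}

// ARSCNViewDelegate callback: ARKit calls this when it has found a flat
// surface and added an ARPlaneAnchor for it to the session.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    // extent holds the estimated width (x) and depth (z) of the plane, in meters.
    print("Detected plane of \(planeAnchor.extent.x) x \(planeAnchor.extent.z) meters")
}
```

The detected `ARPlaneAnchor` is exactly the anchor point mentioned above: you attach your virtual object to it, and ARKit keeps it in place as the estimate of the surface improves.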

Finally, ARKit also estimates the lighting conditions within a room or environment. As a result, augmented objects can be darkened or brightened so they better match their surroundings, and thus reality.
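You can read this estimate from each camera frame and apply it to your scene's lighting. A minimal sketch, assuming a running session and a SceneKit light node (`updateLighting` is an illustrative name):

```swift
import ARKit

// Light estimation is enabled by default (isLightEstimationEnabled) for
// world-tracking configurations.
func updateLighting(for session: ARSession, lightNode: SCNNode) {
    guard let estimate = session.currentFrame?.lightEstimate else { return }
    // ambientIntensity is expressed in lumens; around 1000 is a well-lit scene.
    lightNode.light?.intensity = estimate.ambientIntensity
    // ambientColorTemperature is in Kelvin; around 6500 corresponds to daylight.
    lightNode.light?.temperature = estimate.ambientColorTemperature
}
```

Calling this for every frame keeps the virtual object's shading in step with the real-world lighting.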

The limitations of ARKit

The arrival of ARKit is obviously accompanied by a number of limitations. One of them is that your device has to calibrate before it can recognize a flat surface. This doesn't always go smoothly, and sometimes you even have to move your smartphone around for a while before it has found enough anchor points. These are not the only limitations. ARKit can't detect surfaces without enough visual detail: a smooth, uniform floor won't work, but a wooden table will, as long as there are enough recognition points. ARKit also doesn't detect vertical surfaces such as walls. This so-called vertical plane detection would be an improvement and would allow even more possibilities.

Finally, ARKit has no object recognition. For example, when you place a real object in front of your augmented object, the real object disappears behind the augmented one instead of hiding it. Even if you place an AR object on a table and then point the camera under the table, you'll still see the AR object. In other words, the table is not recognized as an actual object.

Which devices are compatible with ARKit?

For good performance, ARKit requires quite a bit of computing power. According to Apple, iOS devices need an A9, A10 or A11 chip in order to use ARKit. Not all iPhones and iPads meet that criterion, but there are still plenty of devices that do:

  • iPhone X
  • iPhone 8 (Plus)
  • iPhone 7 (Plus)
  • iPhone 6s (Plus)
  • iPhone SE
  • iPad (2017)
  • iPad Pro

Stay up-to-date

Within the iOS team at M2mobi, we are continuously exploring new possibilities with ARKit. Want to know more about Augmented Reality and ARKit, or stay informed about this new technology? Then check our blog or contact us.
