Google I/O is Google’s annual developer conference, held this year at the Shoreline Amphitheatre in Mountain View, California. The three-day event showcased some of Google’s technical advancements and roadmaps for the company’s different platforms, products, and open-source projects.
Here are some of the highlights.
Android O
Android O is the next version of Google’s Android platform. Here are some of the new features that will reach new phones over the summer:
- Picture-in-picture support lets users minimise a video into a floating window and keep browsing other apps while the video and sound continue to play. The feature has been available on the iPad since iOS 9, and offering it on Android should help increase video engagement.
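On Android O, an activity opts into picture-in-picture in its manifest. A minimal sketch, assuming a hypothetical `VideoActivity`:

```xml
<!-- AndroidManifest.xml: opt a video activity into picture-in-picture.
     "VideoActivity" is a placeholder name, not from the announcement. -->
<activity
    android:name=".VideoActivity"
    android:supportsPictureInPicture="true"
    android:resizeableActivity="true"
    android:configChanges="screenSize|smallestScreenSize|screenLayout|orientation" />
```

At runtime the activity then calls `Activity#enterPictureInPictureMode()` to shrink itself into the floating window.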
- Autofill with Google makes it easier to set up a new device, log in to apps, and fill in credit card details. It complements the Google Smart Lock feature, which remembers passwords and logs users in automatically. The autofill API/SDK will be available for app developers to use in their apps.
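From the app side, adopting autofill mostly means telling the framework what each field holds. A minimal layout sketch, with illustrative view IDs:

```xml
<!-- Android O autofill: the autofillHints attribute tells the framework
     what kind of data each field expects. IDs here are illustrative. -->
<EditText
    android:id="@+id/username"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:autofillHints="username" />

<EditText
    android:id="@+id/password"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:inputType="textPassword"
    android:autofillHints="password" />
```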
- One of the challenges with developing for Android has been the slow adoption rate of the latest operating system versions by device manufacturers. It sometimes takes years before the majority of users are running the latest release. Google’s project Treble attempts to improve this: “We’re re-architecting Android to make it easier, faster, and less costly for manufacturers to update devices to a new version of Android.” If the company succeeds, users may start adopting new versions of the operating system much sooner, which in turn lets you offer the latest OS features sooner.
- Notification dots/badges, long available on iOS devices, are coming to Android O. A dot on the app icon indicates some form of status, for example that something new is waiting inside your app. Users can long-press an app icon to open a contextual menu and glance at the notifications.
- Android Instant Apps functionality is now open to all developers. Instant Apps run instantly, without installation, and can be launched from any URL, including search results, social media, messaging, and other deep links.
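Instant Apps are reached through web URLs, so the entry activity declares a verified https intent filter (an App Link). A sketch with a placeholder domain, path, and activity name:

```xml
<!-- AndroidManifest.xml: route https://example.com/products/* to this activity.
     Domain, path, and activity name are placeholders for illustration. -->
<activity android:name=".ProductActivity">
    <intent-filter android:autoVerify="true">
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <data android:scheme="https"
              android:host="example.com"
              android:pathPrefix="/products" />
    </intent-filter>
</activity>
```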
- Android Vitals is a project focused on battery life, boot time, graphics rendering, and stability. The developer tools have been improved to give developers insight into how their apps perform in these areas.
Virtual Reality (VR)
Google announced the next version of its Daydream VR headset, called Euphrates. The new standalone headset is built specifically for VR and requires no cables, phone, or PC; everything is built into the headset.
The device’s portability, and the fact that its hardware and software are tailored specifically to deliver the ultimate VR experience, make it a convenient and user-friendly VR headset. This may help increase the adoption rate of VR headsets and make the hardware more accessible to users.
One of the killer features of this new device will be the ability to share what the headset user is seeing on a TV using Chromecast.
Augmented/Mixed Reality (AR)
Google has also worked on advancing AR capabilities. New mobile devices supporting Tango, Google’s AR computing platform, are being released this year. Tango devices use sensors, GPS, and machine learning to try to understand the space around them in relation to the user.
One of the examples given at the keynote was using a combination of Google Maps and computer vision to offer indoor navigation, which is being called a visual positioning service (VPS).
Artificial Intelligence (AI)/Machine Learning
AI and machine learning advancements were highlighted throughout all of Google’s announcements. Product and platform improvements support Google’s shift from a “mobile first” to an “AI first” strategy, creating services and interfaces that allow humans to interact with computers in more natural and immersive ways.
Some of the highlights included:
- Continued improvement in voice recognition software. Computers are getting much better at understanding speech, and products are evolving beyond keyboard and mouse input to use voice and computer vision. Google Assistant and Google Home are spearheading this evolution in interaction interfaces, and the Google Assistant will also be available for iPhone users. An update to Google Home and Chromecast allows users to see visual responses to their Google Home conversations on their TVs.
- The company also showcased some of its computer vision technology through Google Lens, which will arrive first in Google Assistant and Google Photos. It uses machine learning to understand the world around you: the examples given included pointing your camera at a flower to identify it, recognising and understanding text in images, and translating that text when needed. Soon you will be able to have a conversation with Google Assistant about what the camera sees. The Google Assistant SDK is available for developers to use in their own devices.
- TensorFlow, Google’s open-source machine learning library, is coming to mobile apps: a lightweight version called TensorFlow Lite will let developers build lean deep-learning models that run entirely within their Android apps.
As tech giants like Google offer smarter, more advanced products and services built on AI and machine learning, users will come to expect the same level of interaction and personalisation from content publishers. In many narrow tasks, algorithms now outperform humans.
For more information about Google’s upcoming plans for 2017, check out the Google developer channel on YouTube and the keynote presentation.