
Google I/O 2018 - Why we should be scared

AI is taking over!

Google I/O 2018 is here! Watching the keynote made me think a lot about what our future is going to look like. Why? I must say I was impressed with the AI demos and the conversation with the Assistant. However, until I test it in person, I will stay sceptical. Still, it sounds promising.
One of the demos was making an appointment at a hairdresser. The Assistant made the call, and it sounded like a proper conversation. For the user it all looked like a simple instruction, but behind the scenes a lot happened. It sounded just like a real assistant calling to make the appointment for you. The reactions were quick and, most importantly, very smart.

[Image: a real person at the hair salon vs. the Google Assistant]

That makes you think. Don’t we want to talk to people anymore? Will we allow AI to take over? Watching advancements like this makes me miss that red button we could press to stop AI before it takes over. I don’t think we are there yet, since most of the algorithms still learn from large databases. However, humans as social beings are already at risk. I do not know about you, but the thought of us not having conversations anymore scares me a lot.
Leaving all the decisions to the AI, is that smart? I know it helps, but we must take care of our culture, and I don’t think that AI should be part of it. Social media without AI already made us very isolated; with AI, how can we even know whether there is a person talking back at the other end?

No more, “Hey, Google!”

Google is working hard on making the AI as natural as possible, and this is why “Hey, Google” had to go, at least partially. You still have to wake up the device with "Hey, Google!", but the rest of the conversation flows by simply talking further with the Assistant. This is great, since we no longer have to break up the conversation. It feels more human-like, asking follow-up questions and giving new instructions based on the current context.

The conversation has also improved with smart context and sentence-structure detection, so we can instruct the Assistant to do more than one thing at a time.
At the end of the conversation presentation there was an experiment that was just crazy, and I liked it a lot. Google added opening times for businesses a long time ago, but sometimes these opening times just aren't accurate and you end up standing in front of a closed restaurant. With the improved speech, AI and conversation, it is now possible for Google to call all these businesses, with the AI doing all the work, and simply ask a real person what the opening hours are. Just imagine how many things Google can index using this feature. If you are a business, be prepared for a call from Google. :)

 

Intelligent maps and augmented reality

They did not emphasise this a lot, but what impressed me was that Google Maps, in combination with AI, can predict where new buildings are and guess the new address. We are probably still a little bit away from this being exact, but not that far either. How, you might ask? Google compares satellite pictures over time, spots the difference when a building is going up, and takes that as the indication that there is a new address. The ability to predict new buildings is excellent, because Maps can prepare everything for a new business in advance, having indexed it before it even opens.
Maps also have a few great new features, and “For You” is one of the things I look forward to. It is supposed to be a gathering point for new information around your location. If a new restaurant pops up, you know it. New offers in your favourite restaurant? Boom, you get a notification. :)
We are so used to smart maps for everyday navigation that nobody knows where the streets are anymore. But do you remember that time you were late for a meeting, staring at the map to figure out whether to turn left or right? No more! You now have the option to use augmented reality, with arrows showing you the way. Moreover, they are already working on an experiment where you get a “tourist guide” in the form of a cute animal.
Google Lens is leaping into reality with search for real objects: you point your phone's camera at something and it recognises it, even putting an overlay on top of it, so scanning a music album cover or a poster brings up YouTube videos. Search also offers options; for example, while you are looking at a beautiful dress, it can show you colour variants to choose from.

 

Pedestrians are safer with Waymo

Waymo is one of the most exciting companies, since it combines all the knowledge Google possesses, and the breakthroughs are visible here as well. Waymo needs a large database of samples to make its AI work, so its cars are driving around like crazy. Six million miles on public roads gets you a lot of good examples that you could never predict on your own. Even more amazing are the simulated journeys, with 5 billion miles driven so far.
The results of such a big database are great, and traffic will be safer in the future. Having analysed more data, Waymo has reduced the error rate for detecting pedestrians by a factor of 100. With this improvement many lives will be saved, even before self-driving cars are all around us. When have human drivers ever improved their driving skill by that factor?

 

You must code for our Assistant (use Actions and Slices)

Assistant, Actions, Slices… The number of places where you can show parts of your application is growing by the minute, and Google is doing a lot to make you rewrite your apps to leverage these features. The only thing that makes me think twice is Android P: many manufacturers are still lagging behind with the previous OS version, and leveraging all the great new features is useless if you do not have enough users on them. The Assistant is mentioned so many times that I already feel assisted :D We are getting new ways of interacting with our apps. Action links are a great example: you now have a way of providing a link that takes the user directly to your action, and it shows up where it is supposed to. Maps, home screen, dashboard…
For developers, more built-in intents are coming for us to work with in the following months. Routines are now the way to make more complex intents come to life. Imagine having context for what your user is doing: if you are making a scheduling app, you can play back the timetable while the user goes through the daily routine of brushing teeth and getting dressed. Traffic apps can already use routines; switching between walking and driving, for example, is already recognised as a routine, and more routines are on their way.
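To make Slices a little more concrete, here is a minimal sketch of what exposing a piece of a scheduling app as a Slice could look like, using the AndroidX slice-builders KTX DSL (still in alpha at the time). The class name, slice path and texts are invented for illustration, and a real Slice would also set a primaryAction (a SliceAction wrapping a PendingIntent), which is left out here for brevity.

```kotlin
import android.net.Uri
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder
import androidx.slice.builders.list
import androidx.slice.builders.row

// Hypothetical provider exposing a small piece of a scheduling app as a Slice.
class TimetableSliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    override fun onBindSlice(sliceUri: Uri): Slice? {
        val context = context ?: return null
        return when (sliceUri.path) {
            // "/next-lesson" is an invented path for this example.
            "/next-lesson" -> list(context, sliceUri, ListBuilder.INFINITY) {
                row {
                    title = "Next lesson: Kotlin basics"
                    // A real Slice would also set subtitle and primaryAction here.
                }
            }
            else -> null
        }
    }
}
```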

 

We will all want to use Material Design

When I was researching what’s new before I/O 2018 and what the changes could be once the keynote started, I had one thing in mind: Material lacks themes! I still miss the ability to split the development of functionality and design, and I think that with Material Theming we are getting a step closer to that. Google has now given us Material Components (more components to use, so we do not have to search GitHub for libraries), and it is great that you can implement most of the app using one style and use the theme to roll out the final design. I love the step towards a better conversation between the designer and the developer, with the Material Gallery as a collaborative tool for uploading design work and getting feedback. Implementing a design is not straightforward, and you need to communicate with the designer a lot, so any help is welcome. Zeplin now has to step up if they want to compete with Google, and that is great for us, because more features mean an easier implementation of the design.

Prototypes are a thing of the past (Android Jetpack)

I do love the incremental building of apps, and Android Jetpack promises us a faster start when building an application. Jetpack is a set of libraries that help us develop apps in a better way. They are woven into normal development, and from time to time you will not even know that you are using Jetpack.
Just imagine being able to start with a mock application, then add functionalities and polish all the glitches without throwing away the whole code base.
The reality is different, but every step towards saving time is very welcome. Google is promising backward compatibility as well, so we do not have to worry about older devices and can use the new technologies straight away.
We can eliminate a lot of boilerplate code, and best of all, the code splits into more meaningful parts, which makes it more testable, more robust and easier to maintain.
A great example would be the ability to build a showcase of the app with static data, work out the best user experience, and only then add calls to REST APIs or cached data for offline mode, all without changing the core of the app. This way the app grows with every piece of feedback.
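To illustrate that idea, here is a rough sketch (not code from the keynote) of a Jetpack ViewModel sitting behind a small repository interface: the UI only depends on the interface, so the static showcase implementation can later be swapped for a REST- or cache-backed one without touching the rest of the app. All names here are hypothetical.

```kotlin
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData
import androidx.lifecycle.ViewModel

data class Talk(val title: String, val speaker: String)

// The UI layer only ever talks to this interface.
interface TalkRepository {
    fun loadTalks(): LiveData<List<Talk>>
}

// Day one: a static, in-memory implementation, good enough for a UX showcase.
class FakeTalkRepository : TalkRepository {
    override fun loadTalks(): LiveData<List<Talk>> =
        MutableLiveData(listOf(Talk("Opening keynote", "Google")))
}

// Later: a REST- or cache-backed implementation can replace the fake one
// behind the same interface (only sketched here).
// class NetworkTalkRepository(private val api: ScheduleApi) : TalkRepository { ... }

// The ViewModel survives configuration changes and never knows which
// implementation it was given.
class ScheduleViewModel(repository: TalkRepository) : ViewModel() {
    val talks: LiveData<List<Talk>> = repository.loadTalks()
}
```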

 

Android Studio is even better

Android Studio is growing fast, with every version adding neat new features. This time they have sped up the emulator with snapshots: starting a complex emulator from a snapshot is just too fast if you are used to getting a coffee while your emulator boots. You may need to use your old computer now :D
The Navigation Editor is a new way of designing the flow within an app, giving us visual control over destinations, conditional paths and all.
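The editor itself is visual, but at runtime the flow it draws is driven by the Navigation component. As a small sketch (the fragment class and action ID below are invented and assume a matching navigation graph), moving along one of those paths from a Fragment could look like this:

```kotlin
import androidx.fragment.app.Fragment
import androidx.navigation.fragment.findNavController

class HomeFragment : Fragment() {

    private fun openDetails() {
        // action_home_to_details would be an action defined on the nav graph
        // drawn in the Navigation Editor; the ID is hypothetical.
        findNavController().navigate(R.id.action_home_to_details)
    }
}
```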
Kotlin is getting a boost with KTX, with new improvements and even less boilerplate code. Kotlin code is already shorter than Java, but KTX makes even more cuts. Below you can see just a small sample for SharedPreferences; for comparison, check our Java blog about SharedPreferences.
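A representative example, writing a single flag the plain way (the preference file and key names are made up):

```kotlin
import android.content.Context

// Plain Kotlin, no KTX: edit(), put the value, then remember to call apply().
fun markOnboardingShown(context: Context) {
    context.getSharedPreferences("settings", Context.MODE_PRIVATE)
        .edit()
        .putBoolean("onboarding_shown", true)
        .apply()
}
```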

vs.
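With the edit extension from Core KTX, the same write shrinks to a single block, and apply() is called for us:

```kotlin
import android.content.Context
import androidx.core.content.edit

// Core KTX: the edit { } extension wraps edit()/apply() for us.
fun markOnboardingShown(context: Context) {
    context.getSharedPreferences("settings", Context.MODE_PRIVATE).edit {
        putBoolean("onboarding_shown", true)
    }
}
```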

Look for more examples of Kotlin KTX on their page.
With the new Android Studio, you are getting even more control over app performance with the Network Profiler, Layout Inspector and a better CPU profiler.




So this was the first day. We will be adding more articles as we go, pointing out what is new and what is great about Google I/O 2018.