Solstice & AI: A Look Ahead with Google

As we turn the page on a new year, it's a good time to look back at what AI promised in 2018 and where it's heading in 2019. Although several AI visionaries are aiming to make an impact, one organization that has cut through the noise and made AI real is Google.

To kick this year off, I had the opportunity to sit down for a quick chat with Greg Mikels, Machine Learning Engineer at Google, to discuss the impact of AI in 2019 and take a retrospective look at 2018. Here are the top highlights from our Q&A session.

Ryan Maguire: As you look ahead into 2019, what resolutions have you or Google made around AI and machine learning?

Greg Mikels: I’d like to get hands-on with an up-and-coming method in machine learning called reinforcement learning. A new TensorFlow-based framework called Dopamine was introduced last year, and it’s meant to provide flexibility and reproducibility for reinforcement learning. A good way to think about how it works is to consider teaching a program to play a game. We define rules that reward or penalize the player and simulate the game many times, so that an algorithm gradually learns which moves are best. Reinforcement learning was used to create a new version of AlphaGo called AlphaGo Zero. The system starts off with a neural network that knows nothing about the game of Go and plays the game against itself, tuning the neural network so that it is able to predict moves, as well as the winner of the game.
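
To make the reward-and-penalty idea concrete, here is a minimal tabular Q-learning sketch in plain Python. It does not use Dopamine's API; the toy one-dimensional game, the step function, and the hyperparameter values are all assumptions chosen purely for illustration.

```python
import random

# A toy 1-D "game": the agent starts at position 0 and wins by reaching position 4.
# Actions: 0 = move left, 1 = move right. Reaching the goal earns +1; other steps cost -0.01.
N_STATES, GOAL = 5, 4
ACTIONS = [0, 1]

def step(state, action):
    """Hypothetical environment: returns (next_state, reward, done)."""
    next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    if next_state == GOAL:
        return next_state, 1.0, True
    return next_state, -0.01, False

# Tabular Q-learning: a table of expected rewards stands in for the neural network.
q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

for episode in range(500):  # simulate the game many times
    state, done = 0, False
    while not done:
        # Explore occasionally; otherwise pick the move that currently looks best.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[state][a])
        next_state, reward, done = step(state, action)
        # Nudge the estimate toward reward plus the discounted value of the best next move.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

# The learned policy should be "move right" from every non-goal position.
print([max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES)])
```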

RM: That will be interesting. So, as you think about 2019 trends, what factors do you think are going to contribute to the enterprise success and growth of AI?

GM: I meet many analysts and application developers who are interested in machine learning and want to leverage it in their applications but don’t necessarily know where to get started. BigQuery Machine Learning, AutoML, and our AI and machine learning APIs are great tools that make it easier to get off the ground quickly. I know a lot of people are looking to see what new AI products might be released in 2019. Traditionally, with a problem like image classification, you might need hundreds of thousands of labeled images to build an accurate model from scratch. But with AutoML, which leverages transfer learning behind the scenes, you can get started with as few as a few hundred. Another of our products, BigQuery ML, is a great way for programmers familiar with SQL to get started with ML. Currently, you can train linear regression and logistic regression models using SQL directly in BigQuery.
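
As a rough illustration of that BigQuery ML workflow, here is a sketch that trains and queries a logistic regression model through the google-cloud-bigquery Python client. The dataset, table, and column names are hypothetical placeholders.

```python
from google.cloud import bigquery  # assumes the google-cloud-bigquery client library is installed

client = bigquery.Client()  # uses your default GCP project and credentials

# Train a logistic regression model directly in BigQuery with standard SQL.
# `mydataset.churn_events` and its columns are hypothetical placeholders.
train_sql = """
CREATE OR REPLACE MODEL `mydataset.churn_model`
OPTIONS (model_type = 'logistic_reg') AS
SELECT
  plan_type,
  monthly_spend,
  support_tickets,
  churned AS label
FROM `mydataset.churn_events`
"""
client.query(train_sql).result()  # blocks until the training job completes

# Score new rows with ML.PREDICT once the model exists.
predict_sql = """
SELECT *
FROM ML.PREDICT(MODEL `mydataset.churn_model`,
                (SELECT plan_type, monthly_spend, support_tickets
                 FROM `mydataset.new_customers`))
"""
for row in client.query(predict_sql).result():
    print(dict(row))
```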

RM: Speaking of industry trends, I came across an article in Wired stating that investor enthusiasm for AI has waned with the first big failures, "And it will be up to the industry to redefine the problem it's trying to solve." If venture capitalists and investors are becoming cautious about AI, how do you see this affecting enterprise decisions about how to engage with AI in 2019?

GM: I think there's always a level of hype that will wane, but there is definitely promise in what ML and AI can do. It's just a matter of time before businesses learn the best path forward for adoption. We are developing industry solutions for AI and machine learning that include call center modernization, document understanding, and smarter recommendation engines. Solutions like these can be a path forward for enterprises that have become more cautious.

RM: I came across another interesting article in Forbes that talked about trusting AI in 2019 the same way you trust your doctor. Do you feel that enterprises and customers are ready to "trust" AI? How do you see that trust developing, or not, over the coming year?

GM: I think customers will continue to gain trust as more tools are developed around model explainability and fairness. We have folks at Google working in this area to ensure that models are explainable and free from bias that could reinforce unfair treatment or prejudice. But, it is important to remember that not all AI and ML workloads have the same accuracy requirements, and in some cases, the AI or ML application will be a tool that requires human intervention at times.

RM: According to a recent study cited in Forbes, among the top challenges to AI adoption in 2018, 43% of the companies interviewed lacked a clear AI strategy and 42% lacked AI talent. At the same time, 58% of those businesses said that less than one-tenth of their company's digital budget went toward AI in 2018, and 71% expect their AI investments to increase in 2019. Looking at that, how are enterprises going to bridge the gaps in talent and strategy to support that upward AI investment?

GM: For many use cases, Google offers APIs and AutoML solutions that can help bridge the gap by making it easier for traditional application development teams to leverage machine learning without getting into the weeds. As an organization becomes more adept with machine learning and acquires talent, there is often a desire to develop more custom models and deploy production pipelines to maintain and serve these models. When creating a custom ML pipeline, you may have a data engineer responsible for the ETL and preprocessing of data, a data scientist who is researching the best model to use to achieve the needed accuracy level for your workload, and a machine learning engineer involved in getting the system into production.
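
For a sense of how little code a traditional application development team needs with one of these pretrained APIs, here is a sketch that calls the Cloud Vision API for image labeling via the google-cloud-vision Python client. The image path is a placeholder, and the exact import path can vary by client-library version.

```python
from google.cloud import vision  # assumes the google-cloud-vision client library is installed

client = vision.ImageAnnotatorClient()  # uses default GCP credentials

# Read a local image; "photo.jpg" is a hypothetical placeholder path.
with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Ask the pretrained model for labels -- no training data or model tuning required.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```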

RM: So when Google thinks about 2019, what is one thing that Google is planning on doing in that space that you're most excited about?

GM: I’m really excited about the roadmap for a couple new products, AI Hub and Kubeflow Pipelines. AI Hub is a catalog of plug-and-play AI components, including end-to-end pipelines and out-of-the-box algorithms. You can think of it like the Google Play Store for AI and machine learning. AI Hub also provides enterprises with the ability to privately host their AI content and foster reuse and collaboration. Kubeflow Pipelines is an open-source product that allows enterprises to create portable ML pipelines that run on top of Kubernetes. Kubernetes runs everywhere, and I really like that I can develop a pipeline that doesn’t rely on any given cloud provider. Kubeflow Pipelines can also be used to deploy ML pipelines in hybrid cloud environments.
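
As a rough sketch of what a pipeline definition looks like with the Kubeflow Pipelines SDK (the v1-style kfp API), here is a two-step example. The container images and step names are hypothetical, and the SDK surface has evolved across versions, so treat this as illustrative rather than canonical.

```python
import kfp
from kfp import dsl  # Kubeflow Pipelines SDK (v1-style API)

@dsl.pipeline(name="train-and-deploy", description="Hypothetical two-step ML pipeline")
def train_and_deploy_pipeline():
    # Each step runs as a container on Kubernetes; the images below are placeholders.
    preprocess = dsl.ContainerOp(
        name="preprocess",
        image="gcr.io/my-project/preprocess:latest",
        arguments=["--output", "/data/clean.csv"],
    )
    train = dsl.ContainerOp(
        name="train",
        image="gcr.io/my-project/train:latest",
        arguments=["--input", "/data/clean.csv"],
    )
    train.after(preprocess)  # run training only after preprocessing finishes

if __name__ == "__main__":
    # Compile to an archive that can be uploaded to any Kubeflow Pipelines installation.
    kfp.compiler.Compiler().compile(train_and_deploy_pipeline, "pipeline.tar.gz")
```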

RM: Thank you, Greg, it's been a very eye-opening discussion, and I agree with many of your points. If I could leave you with one last question: what do you think will be the three keys to enterprises gaining an advantage from AI in 2019?

GM: The cloud is making it so much easier to manage large datasets and train machine learning models. Organizations can leverage cloud resources like GPUs and Google TPUs for training machine learning models without making big hardware commitments. Google offers a Deep Learning VM that easily connects to GPUs and TPUs and can be a great resource for data science teams looking to speed up model development and training times. I also think it's important to know the capabilities of out-of-the-box ML solutions like Google's Machine Learning APIs, AutoML, and BigQuery ML. You can do a lot with these toolsets, and it can be helpful to think of them as building blocks. Lastly, for data science teams who may be getting started with machine learning and seeing exciting results, there are a lot of great serverless options for deploying production models, like Google Cloud Machine Learning Engine, which makes it easier to get models into production with model management, monitoring, and highly available API endpoints.
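
To show what serverless deployment looks like from the client side, here is a sketch that requests an online prediction from a model hosted on Cloud Machine Learning Engine using the google-api-python-client library. The project, model, and feature names are hypothetical placeholders.

```python
from googleapiclient import discovery  # assumes the google-api-python-client library is installed

# Build a client for the Cloud ML Engine (v1) online prediction service.
service = discovery.build("ml", "v1")

# Project, model, and feature values below are hypothetical placeholders.
name = "projects/my-project/models/churn_model"
body = {"instances": [{"plan_type": "basic", "monthly_spend": 42.0, "support_tickets": 1}]}

# The deployed model serves predictions behind a managed, highly available endpoint.
response = service.projects().predict(name=name, body=body).execute()
print(response.get("predictions"))
```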

RM: This chat gives me great hope for 2019 and for where AI will take us, from automating ML workloads and deployments to model explainability and frameworks like Google’s Dopamine and ML at the edge. For more ideas on how to navigate your AI journey, click here.