At the Google I/O conference held May 17–19, 2017, CEO Sundar Pichai called AutoML the flag-bearer of an “AI Inception.” Google also announced that it would apply its artificial intelligence research across all of its products, which complements Pichai’s claim.
AutoML creates layers of artificial intelligence (AI) that interact with one another to produce a better AI system. The interaction relies on deep learning: data is passed through layers of neural networks, which are loosely inspired by biology. Programming neural network layers is a complicated and time-consuming mathematical task. AutoML automates part of this work by generating multi-layered neural networks and the algorithms for performing a particular task. Deep learning also underpins image, facial, and speech recognition software.
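The idea of data passing through stacked layers can be sketched in a few lines. This is a minimal illustration, not Google's code; the layer sizes, weights, and function names are all arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common nonlinearity applied after each layer.
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass an input vector through each layer (weights + bias) in turn."""
    for w, b in layers:
        x = relu(w @ x + b)
    return x

# Three layers mapping 4 -> 8 -> 8 -> 2 (sizes chosen only for illustration).
sizes = [4, 8, 8, 2]
layers = [(rng.standard_normal((out, inp)) * 0.1, np.zeros(out))
          for inp, out in zip(sizes, sizes[1:])]

output = forward(rng.standard_normal(4), layers)
print(output.shape)  # (2,)
```

Designing such stacks by hand means choosing the number of layers, their sizes, and their connections, which is exactly the search that AutoML automates.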
“In our approach (‘AutoML’), a controller neural net can propose a ‘child’ model architecture, which can then be trained and evaluated for quality on a particular task,” explains the Google Research Blog. “That feedback is then used to inform the controller how to improve its proposals for the next round. We repeat this process thousands of times — generating new architectures, testing them, and giving that feedback to the controller to learn from.”
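The propose–evaluate–feedback loop described above can be sketched as follows. In the real system the controller is itself a neural network trained with reinforcement learning; in this toy stand-in, random mutation of the best architecture found so far plays the controller's role, and a mock scoring function replaces the expensive step of training each child model. All names and numbers here are illustrative assumptions.

```python
import random

random.seed(0)

def evaluate(arch):
    # Placeholder for "train the child model and measure its quality";
    # this mock score simply prefers layer widths near 32.
    return -sum((width - 32) ** 2 for width in arch)

def propose(best):
    # "Controller" step: perturb the best-known architecture slightly.
    return [max(1, w + random.choice([-8, -4, 4, 8])) for w in best]

best_arch = [8, 8, 8]             # initial child architecture (layer widths)
best_score = evaluate(best_arch)

for _ in range(200):              # the real loop runs "thousands of times"
    candidate = propose(best_arch)
    score = evaluate(candidate)
    if score > best_score:        # feedback informs the next proposals
        best_arch, best_score = candidate, score

print(best_arch, best_score)
```

The essential structure matches the quoted description: generate an architecture, score it on the task, and use that score to steer the next round of proposals.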
Google is also taking steps towards the democratization of its AI research including studies, tools, and applied AI to invite researchers, hobbyists, and techs to build an ecosystem. Google.ai is a platform used specifically for this purpose.
Neural networks are rapidly transforming technology, all the way down to the hardware. Unlike conventional software, neural networks must be trained, and traditional CPUs consume too much time and power to process the huge amounts of data that training requires.
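What "training" means here can be shown with a deliberately tiny example: instead of a programmer fixing the model's behavior in advance, a parameter is adjusted repeatedly from data. This sketch fits one weight to the rule y = 3x by gradient descent; real networks do the same over millions of parameters, which is why the workload is so heavy.

```python
# Toy training loop: learn w in the model y_hat = w * x from example data.
data = [(x, 3.0 * x) for x in range(1, 6)]   # inputs with target y = 3x
w = 0.0                                       # the trainable parameter
lr = 0.01                                     # learning rate

for _ in range(100):                          # many passes over the data
    for x, y in data:
        grad = 2 * (w * x - y) * x            # gradient of the squared error
        w -= lr * grad                        # update step

print(round(w, 3))  # w approaches 3.0
```

Each pass over the data involves computing predictions, errors, and updates, and at scale that arithmetic is exactly what specialized chips accelerate.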
Google addressed this problem by launching its second-generation tensor processing unit, dubbed TPU 2.0. Unlike the first generation, it can also train neural networks, not just run them once they are trained. TPU 2.0 is also designed specifically for TensorFlow, Google’s open-source software for running neural networks.
This breakthrough also brings opportunities for the Internet of Things (IoT). Google is leading the way in interconnecting devices while making them smarter.
Image recognition will give cameras search-engine expertise, and facial recognition will make them better at tracking. Google Lens incorporates both features and offers a way to search the internet using a smartphone camera. “It is a set of vision-based computing capabilities that understands what you are looking at,” explained Pichai at the launch.
Google Assistant and Google Photos will gain capabilities with the inclusion of Google Lens and the neural network system.
Google is also launching a neural-network-powered platform for job seekers. “Google for Jobs” will help companies find talented and skilled professionals.
Google Home also gets a dose of AI: along with playing music, it now offers proactive assistance, hands-free calling, and visual responses.
AI systems can be better architects and designers than humans. If AutoML succeeds, it will open the way for new kinds of neural networks tailored to particular needs. AI has the potential to affect every aspect of our lives, including healthcare, finance, banking, manufacturing, robotics, clean-energy production, data analytics, privacy, and cybersecurity. AutoML could not only ease the machine-learning skill shortage but also accelerate progress in making computers smarter.
Alphabet’s other AI research company, DeepMind, and Elon Musk’s OpenAI are also working along the same lines.