Google Brain AMA 2017 TL;DR

The Google Brain team did an AMA (Ask Me Anything) on Reddit. This is the TL;DR:

  • They think PyTorch (made by people at Facebook) is great and that its creators did a good job with it. They also think it is healthy that many groups build machine learning libraries, since library developers learn from each other's work.
  • Some of the hurdles in machine learning are making deep networks train stably, and the fact that many newer areas such as GANs and deep RL have yet to have their ‘batch normalization’ moment (that one idea that makes everything work without having to fight it; see the batch-normalization sketch after this list). Moving away from supervised learning will also be difficult, as will building systems that solve many problems instead of just one.
  • Geoffrey Hinton’s capsules are coming along fine; they have a NIPS paper on them.
  • They talked openly about failures and ideas that hadn’t worked out.
  • Their work days involve a lot of reading papers.
  • They recommend using the highest-level API that solves your problem; that way you get best practices for free (see the high-level-API sketch after this list).
  • The line between AI engineer and research scientist is blurry.
  • Give researchers access to more computational power and they will accomplish more.
  • PhD scientists go through the same interview pipeline as all other developers.
  • Robotics will benefit from the fact that we now have working machine perception.
  • A good way to learn is to read papers and re-implement them. If you want to learn a variety of ML topics, pick papers that cover different areas such as image classification, language modeling, and GANs. If you want to become an expert in one subfield, pick a bunch of related papers.
  • People are excited about: efficient large-scale optimization, building a theoretical foundation for deep learning, human/AI interaction, bridging the gap between the real world and simulation, imitation learning, generating long structured documents with long-term dependencies in them, and tools.
  • The Brain Residency program (g.co/brainresidency) accepts people from many different backgrounds, provided that you have an interest in AI/ML.
  • Learning tips: the TensorFlow tutorials, Geoff Hinton’s Coursera course, Vincent Vanhoucke’s Udacity course, Kaggle (a great site with lots of ML competitions), and Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
  • You should probably use a GAN if you want to generate samples of continuous-valued data or do semi-supervised learning, and a VAE or FVBN if you want to model discrete data or estimate likelihoods (see the GAN-loss sketch after this list).
  • In biology and genomics, they are involved in a variety of research projects, such as predicting diabetic retinopathy status from fundus images, identifying cancerous cells in pathology images, and using deep learning to call genetic variants in next-generation DNA sequencing data. They even have a recently created Genomics team focused on applying TensorFlow, and extending it where necessary, to genomics problems. Other teams around Google and Alphabet, such as Google Accelerated Science, Verily Life Sciences, and Calico, also apply deep learning techniques to biological data.
  • They like fast.ai and would complement it with the Deep Learning textbook, The Elements of Statistical Learning, Hugo Larochelle’s online course, the Deep Learning summer school series, and blog posts like distill.pub and Sebastian Ruder’s blog.
  • You’re welcome for TensorFlow.
  • They keep up with what’s happening in the field via: papers published at top ML conferences, Arxiv Sanity, the “My Updates” feature on Google Scholar, research colleagues pointing out and discussing interesting work, and interesting-sounding work discussed on Hacker News or the r/MachineLearning subreddit (where the AMA took place).
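
On the ‘batch normalization’ moment: as a hedged illustration (mine, not the team’s), here is roughly what that one idea looks like in practice with tf.keras, where a single layer normalizes activations per mini-batch so deep networks train stably:

```python
import tensorflow as tf

# A minimal sketch: BatchNormalization between a linear layer and its
# nonlinearity rescales activations per mini-batch, the trick that made
# very deep networks train stably without constant hand-tuning.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, use_bias=False, input_shape=(784,)),
    tf.keras.layers.BatchNormalization(),  # normalize pre-activations
    tf.keras.layers.Activation("relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```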
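On the highest-level-API advice: a minimal sketch (the choice of tf.keras here is my illustration, not something the team prescribed) showing how a high-level API hands you batching, metrics, and a sane training loop for free:

```python
import tensorflow as tf

# Load and flatten MNIST; scaling inputs to [0, 1] is one of the small
# best practices the high-level workflow makes routine.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# One call replaces a hand-written training loop, batching, and logging.
model.fit(x_train, y_train, epochs=2, batch_size=32)
```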
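And on the GAN-vs-VAE bullet: a generic sketch of the standard GAN objective (not code from the team), assuming `generator` and `discriminator` are tf.keras models you have defined elsewhere. The key point is that nothing here is a likelihood, which is why GANs suit sampling continuous data while VAEs and FVBNs suit likelihood estimation:

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_logits, fake_logits):
    # The discriminator should output 1 on real samples and 0 on fakes.
    return (bce(tf.ones_like(real_logits), real_logits)
            + bce(tf.zeros_like(fake_logits), fake_logits))

def generator_loss(fake_logits):
    # Non-saturating trick: the generator pushes the discriminator
    # toward saying 1 on generated samples.
    return bce(tf.ones_like(fake_logits), fake_logits)

def train_step(real_batch, generator, discriminator, g_opt, d_opt, noise_dim=100):
    # One alternating update of both networks on a batch of real data.
    noise = tf.random.normal([tf.shape(real_batch)[0], noise_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fakes = generator(noise, training=True)
        real_logits = discriminator(real_batch, training=True)
        fake_logits = discriminator(fakes, training=True)
        d_loss = discriminator_loss(real_logits, fake_logits)
        g_loss = generator_loss(fake_logits)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return g_loss, d_loss
```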
