Figuring It Out

At the end of a recent conversation about climate change, Donald Trump, human consciousness, and other things, a friend asked me to let him know if I figured it out. Having reached the age of 63, and experiencing the limits of my aging brain, I doubt my ability to do so.

The human brain has been estimated to function at roughly 20 quadrillion operations per second (100 billion neurons, each with about 1,000 connections to other neurons, and each neuron firing an average of 200 times per second).
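
That figure is just arithmetic on the rough estimates above, which a few lines of Python make explicit (treating one connection firing once as one operation, which is itself a loose equivalence):

    # Rough estimate of the brain's processing rate, from the figures above.
    neurons = 100e9                  # about 100 billion neurons
    connections_per_neuron = 1_000   # about 1,000 connections per neuron
    firings_per_second = 200         # average firing rate per neuron

    operations_per_second = neurons * connections_per_neuron * firings_per_second
    print(operations_per_second)     # 2e+16, i.e. 20 quadrillion per second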

The second-generation Tensor Processing Unit (TPU), developed by Google for training and running machine learning software, functions at 180 trillion floating point operations per second (180 TFLOPS). A pod of 64 TPUs networked together has a total capacity of about 11 PFLOPS (P for peta, meaning quadrillion), which is getting close to the approximate 20 PFLOPS speed of the human brain.

Google announced in the spring of 2017 that it is making 1,000 TPUs, called the TensorFlow Research Cloud, available to machine learning researchers for free, as long as the results of the research are published on the open Internet. This resource has a combined capacity approaching 180 PFLOPS.
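
The same kind of multiplication is behind both TPU figures, using the 180 TFLOPS per-unit rating quoted above (a sketch of the arithmetic, not a benchmark):

    # Combined throughput of second-generation TPUs, in PFLOPS.
    tflops_per_tpu = 180

    pod_of_64 = 64 * tflops_per_tpu / 1_000
    research_cloud = 1_000 * tflops_per_tpu / 1_000

    print(pod_of_64)        # 11.52 PFLOPS, about 11 PFLOPS
    print(research_cloud)   # 180.0 PFLOPS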

One might say that computer hardware has reached a level of performance that is close to that of a human brain. How about the software? The software is improving at a rapid rate as well; witness the success of AlphaGo, which is only the tip of the iceberg of developments in machine learning.

Researchers who are developing the software further have a growing number of building blocks. There are Convolutional Neural Networks (CNNs) that are especially good at processing images. There are Recurrent Neural Networks (RNNs) that are especially good at processing text. There are Relation Networks (RNs) for learning how to identify relationships between entities. And there are neural networks that can learn how to store information in a structured format that makes it easier to retrieve and process.
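
To give a concrete sense of what one of these building blocks looks like in code, here is a minimal convolutional network for small grayscale images, written with the Keras API in TensorFlow. The framework and the layer sizes are my own choices for illustration, not something taken from the announcements discussed here:

    # A minimal convolutional neural network (CNN) for 28x28 grayscale images.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu",
                               input_shape=(28, 28, 1)),  # learn local image features
        tf.keras.layers.MaxPooling2D(pool_size=2),        # downsample the feature maps
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),  # scores for 10 image classes
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")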

At the Google I/O 2017 conference for developers, something called AutoML, for automated machine learning, was announced. AutoML is a neural network that has been trained to design other neural networks.
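
Google's AutoML uses reinforcement learning and is far more sophisticated, but the basic idea of searching over network designs can be illustrated with a toy loop. The sketch below uses plain random search and a made-up scoring function in place of a learned controller and of actually training candidate networks; it only shows the shape of the idea:

    # Toy illustration of searching over network designs, the idea behind AutoML.
    # Random search and a made-up score stand in for a learned controller
    # and for really training each candidate network.
    import random

    def evaluate(design):
        # Stand-in for "train this candidate and measure its accuracy";
        # here the score simply favors 6 layers of 128 units.
        return -abs(design["layers"] - 6) - abs(design["units"] - 128) / 32

    best_design, best_score = None, float("-inf")
    for _ in range(200):
        design = {"layers": random.randint(1, 12),
                  "units": random.choice([32, 64, 128, 256])}
        score = evaluate(design)
        if score > best_score:
            best_design, best_score = design, score

    print(best_design)   # tends toward {'layers': 6, 'units': 128}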

Think about AutoML being used to design neural networks that are better at continuous learning, language translation, and operating autonomous vehicles, and also a better version of AutoML.

Then this improved version of AutoML is used to design a model that can help read and review papers published in scientific journals, and engage in simultaneous text and audio conversations with millions of people, and also a better version of AutoML.

Then this improved version of AutoML is used to design a better version of AutoML, which is used to design a better version of AutoML, which is ... you get the idea.

Before the conclusion of this article, we can take this to yet another level. What would happen if there were a machine learning equivalent to SETI@home, the software that allows millions of people to download radio data to their personal computers to help search for extraterrestrial intelligence? It already exists. Google has developed something it calls federated machine learning.
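
Federated learning is more involved in practice, but the core idea is simple: each phone improves the shared model on its own local data, and only the model updates leave the phone to be averaged together. Here is a toy sketch with made-up numbers, not Google's implementation:

    # Toy sketch of federated averaging.
    # Each device nudges the shared model toward its own private data;
    # the server only ever sees the averaged updates.
    import numpy as np

    global_model = np.zeros(3)                                    # a shared "model" of 3 weights
    local_datasets = [np.random.randn(20, 3) for _ in range(5)]   # 5 devices' private data

    def local_update(model, data, step=0.1):
        # Stand-in for a few steps of on-device training.
        return model + step * (data.mean(axis=0) - model)

    for _ in range(10):
        updates = [local_update(global_model, data) for data in local_datasets]
        global_model = np.mean(updates, axis=0)

    print(global_model)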

The newest, top-of-the-line smartphones have something like half a teraflop of processing power. If one million people ran the federated machine learning software as a background task on these smartphones, it would amount to about 500 PFLOPS of combined processing power.  If one billion people ran this software on these smartphones, it would amount to 500,000 PFLOPS.  Remember that a human brain has been estimated to operate at something approximating 20 PFLOPS.
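
Again, the multiplication behind those numbers, using the half-teraflop-per-phone estimate above:

    # Combined throughput of phones running federated learning in the background,
    # using the rough half-teraflop-per-phone estimate above, in PFLOPS.
    tflops_per_phone = 0.5

    million_phones = 1e6 * tflops_per_phone / 1_000
    billion_phones = 1e9 * tflops_per_phone / 1_000

    print(million_phones)   # 500.0 PFLOPS
    print(billion_phones)   # 500000.0 PFLOPS, versus roughly 20 PFLOPS for a brain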

I don't know what the conclusion is.  I can see the possibility, by the year 2020, of billions of people around the world engaging in conversations with an app on their smartphones, expressing their desire to shift the resources currently used for the military toward meeting basic human needs and mitigating the effects of our greenhouse gas emissions, and of this collective app helping us to figure it out.

That is the vision that keeps recurring in my brain.  I love that vision.

John Kintree
September 24, 2017

Disclaimer: I do not own any stock in Google.  I just observe what is happening, and think about the direction I would like for it to go.
*