Saturday, December 12, 2020

Stupid Stuff Leads Ignorant People To Believe Machine Learning Is AI Is Robbie The Racist Robot

Scientific American featured an article by LANL physicist and neuroscientist Garrett Kenyon, who wrote that one of the “distinguishing features of machines is that they don’t need to sleep, unlike humans and any other creature with a central nervous system,” but someday “your toaster might need a nap from time to time, as may your car, fridge and anything else that is revolutionized with the advent of practical artificial intelligence technologies.”

NOPE! 

What Machine Learning (So-Called AI) Really Is
The vast majority of advances in the field of "machine learning" (so-called AI) stem from a single technique (neural networks trained with backpropagation) combined with dramatic leaps in processing power.
 
Backpropagation is the essence of neural net "training". It is the method of fine-tuning the weights of a neural net based on the error measured in the previous iteration. Proper tuning of the weights reduces the error rate and makes the model more reliable by improving its generalization.
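That loop (forward pass, measure error, push corrections back through the weights) can be sketched in a few dozen lines. Below is a minimal from-scratch illustration in plain Python: a tiny 2-2-1 network trained on XOR with vanilla backpropagation and gradient descent. All the names, the XOR task, and the hyperparameters are my own arbitrary choices for demonstration, not any particular library's API.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
# 2 inputs -> 2 hidden units -> 1 output, weights initialized randomly.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0
lr = 0.5  # learning rate (arbitrary choice)

# XOR truth table: (inputs, target)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
    y = sigmoid(sum(W2[j] * h[j] for j in range(2)) + b2)
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

err_before = mse()
for epoch in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Error signal at the output: d(squared error)/dy times sigmoid'.
        delta_out = (y - t) * y * (1 - y)
        # Propagate that error backward to get each hidden unit's error signal.
        delta_h = [delta_out * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Nudge every weight against its error gradient.
        for j in range(2):
            W2[j] -= lr * delta_out * h[j]
            for i in range(2):
                W1[j][i] -= lr * delta_h[j] * x[i]
            b1[j] -= lr * delta_h[j]
        b2 -= lr * delta_out
err_after = mse()
print(err_before, err_after)
```

Note that nothing in the loop "knows" it is learning XOR; it just nudges weights to shrink a number. That is exactly the generic, context-free quality discussed next.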
 
The learning mechanism is very generic, which makes it broadly applicable to almost everything, but also makes it ‘dumb’ in the sense that it understands nothing about context and has no ability to abstract notable features and form models.
 
Humans do this non-dumb stuff - abstracting features, using context, forming models - all the time. It’s what enables us to do higher reasoning without a whole data center’s worth of processing power.
 
Google and other big-tech/big-data companies are interested in neural networks with backpropagation from a short-term business perspective. There's still a lot to be gained from taking the existing technique and wringing every drop of commercial potential out of it.
 
Google is engineering first and researching second, if at all. That means that any advances they come up with tend to skew towards heuristics and implementation, rather than untangling the theory.
 
I’ve been struck by how many so-called ‘research’ papers in AI boil down to “you should do this because it seems to work better than the alternatives” with no real attempt to explain why.