If you’re a data scientist thinking about expanding your career options into AI, you’ve got a forest-and-trees problem. There’s a lot going on in deep learning and reinforcement learning, but do these areas hold the best future job prospects, or do we need to look a little further ahead?
To try to answer that question, we’ll have to get out of the weeds of current development and take a higher-level perspective on where this is all headed.
The roots of AI actually lie in the behavioral sciences, eventually migrating into biology and neurology. Since the earliest imaginings of what AI might be like, those thoughts have focused on machines that could behave and make decisions like humans.
To those early thinkers that meant only one thing: the computer must be made to work in the same way the human brain does, not simply do human-like things, but actually approach sentience. If it works like a brain, it must be a brain, right?
From this school come all the great robot fantasies, from Rosie the Robot to the Terminator. It might be more accurate to call this AI not ‘artificial intelligence’ but ‘anthropomorphic intelligence’: think like a human. In fact, this school of thought is so pervasive in the history of AI that we call it the ‘Strong’ theory of AI, versus the ‘Weak’ school of AI, in which we are satisfied if our robots can do human-like things even if they aren’t modeled on human thought.
We wrote recently about what a relatively dark period the study of AI passed through during most of the 20th century. This period was dominated by the ‘Strong’ school of AI: the behaviorists and the biologists trying to duplicate the wiring of neurons with the relatively crude silicon switches of their day. As exciting as AI was to contemplate in science fiction, the impact on academic research was the reverse: AI was a dead end that would sink the careers of the bright researchers needed to make it happen. And so it did not happen.
It’s worth a short digression to examine exactly what we mean by ‘intelligence’, and whether what we really want is Anthropomorphic (human-like) Intelligence or merely Artificial Intelligence.
Of course, we want ‘intelligent’ solutions, which more correctly should be labeled ‘rational’. Anthropomorphic Intelligence, decisions that mimic those made by humans, blends what is rational with what is feasible. It is possible to imagine decisions that would be judged ‘intelligent’ but not ‘feasible’, and therefore not ‘rational’. So let’s keep in mind that human-like AI, whether achieved by Strong or Weak methods, has to be ‘intelligent’ in the human sense of both rational and feasible. This might be akin to your self-driving car deciding to drive over the top of the cars ahead to escape a bottleneck: intelligent, perhaps, but not feasible.
It should not be controversial to say that our modern data science grew out of the confluence of statistics and computer science. Yes, we have innovated new forms of algorithms, but as much or more of the credit goes to Moore’s Law and the ever-faster, ever-cheaper computational resources that make data science possible. Equal credit goes to Hadoop and massively parallel processing, which opened up previously unavailable data volumes, types, and speeds.
So the pragmatic weak AI solutions, not modeled on brain function but ‘simple’ workarounds that mimic the same results, have become increasingly successful and increasingly define mainstream AI. These solutions have gone from ‘narrow’ AI (Nest thermostats and Roomba vacuums) to ‘broad’ AI (general-purpose image, text, and speech platforms).
Broad AI is today largely defined by Deep Learning (mostly convolutional neural nets) and Reinforcement Learning. Particularly with Deep Learning, in the space of fewer than 24 months we went from capabilities that were barely acceptable (how many ways did we make fun of Siri?) to 99% accuracy, and now into a battle over whose generalized open source AI platform will dominate the market.
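Since convolutional neural nets carry so much of that weight, it may help to see the operation they are named for. Below is a minimal, framework-free sketch of a 2D convolution, the basic building block of a convnet; the function name and the edge-detecting filter are illustrative choices, not any particular library’s API.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid (no padding) 2D convolution as used in convnet layers:
    a small filter slides across the input, producing a feature map."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Response at (i, j) = sum of the patch weighted by the filter
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to an image with one sharp vertical edge.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1, 1],
                        [-1, 1]], dtype=float)
feature_map = conv2d(image, edge_kernel)
print(feature_map)  # responds only where the edge is: [[0. 2. 0.] [0. 2. 0.]]
```

In a real deep net, many such filters are learned from data and stacked in layers, but the sliding-window arithmetic is exactly this.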
Now that the ‘Weak’ school has won the battle with human-analog ‘Strong’ AI, it’s probably time for a name change that will promote better understanding. Given our cultural bias for things labeled ‘strong’ and our antipathy to things labeled ‘weak’, we should recognize this pragmatic winning approach with a new name. Something like ‘Engineered’ AI or ‘Designed’ AI would be more appropriate.
What’s interesting about Engineered AI versus Strong AI is that Strong relies on data and rules, while Engineered relies on human understanding and interpretation of those rules. That may seem backward, but the Engineered approach leans more heavily on human insight.
When applied to algorithms such as convolutional neural nets, we may not be able to fully understand how the algorithm reached its conclusion, but we can verify that its conclusion is correct.
To come back to our original question of opportunity and direction: Engineered AI has prevailed. If you are interested in working with AI, are there growth opportunities in Deep Learning or Reinforcement Learning, or should you focus on whatever is coming down the road?
Deep Learning (DL) has made great strides in the last 24 months, such that it dominates most of the AI literature and has huge backing from the majors like Google, Microsoft, Facebook, and Amazon. Especially with the release of almost all of this IP into open source since just the beginning of this year, this is a technology ripe for exploitation. By this we mean that while capabilities will continue to evolve in both academia and industry, the platforms are only now sufficiently industrial-strength to begin rolling out across industries and down-market to smaller users.
For data scientists and start-up entrepreneurs, this means the beginning of a golden age of exploitation. This is no longer a research project dependent on the R&D budgets of major players, but a capability ready for rollout. The size of this opportunity, both in market scope and in duration, is an open question, but surely it will go on for some years.
The other leg of the Engineered AI stool is Reinforcement Learning (RL). RL is the central data science tool in self-driving cars, autonomous aircraft, and all types of process control and optimization (including gameplay like winning at chess). As an implementable tool, it is not nearly as far along as DL: there are not yet any standard RL platforms, and RL algorithms are still largely custom developments. (The best-known generalized algorithm in RL is Q-Learning; also look at PyBrain.)
Given the critical importance of RL applications, there is a huge effort in academia and among a few large corporations (and surely some startups as well) to capitalize on it. Keep in mind that the key characteristic of RL is that it does not require training data, labeled or unlabeled; it learns from its environment as it goes along. This really turns our historical thinking about analytics on its head, since it’s been assumed all along that predictive analytics could only learn by example. Not so for RL.
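To make the ‘learns from its environment’ point concrete, here is a minimal sketch of tabular Q-Learning, the generalized RL algorithm mentioned above, run on a hypothetical toy environment invented for illustration (a five-state corridor with a reward at the far end; none of this is from a standard library). Notice that no training data appears anywhere: the agent improves purely by trial and error.

```python
import random

# Toy corridor: states 0..4, reward +1 for reaching state 4 (the goal).
N_STATES = 5
ACTIONS = [-1, +1]  # move left or move right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    if nxt == N_STATES - 1:
        return nxt, 1.0, True  # goal reached
    return nxt, 0.0, False

# Q-table and the classic update:
#   Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
Q = {(s, a): 0.0 for s in range(N_STATES) for a in range(len(ACTIONS))}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda a: Q[(s, a)])
        s2, r, done = step(s, ACTIONS[a])
        best_next = max(Q[(s2, a2)] for a2 in range(len(ACTIONS)))
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy should move right toward the goal in every state.
policy = [max(range(len(ACTIONS)), key=lambda a: Q[(s, a)])
          for s in range(N_STATES - 1)]
print(policy)  # expect [1, 1, 1, 1], i.e. always "move right"
```

Production RL systems replace the lookup table with a function approximator and the toy corridor with a real environment, but the learn-by-interacting loop is the same.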
My sense is that if you are a research-oriented data scientist, or an aspiring one just entering a Master’s or Ph.D. program, RL will be hot within 12 to 24 months and then, like DL, enjoy an extended rollout.
© 2019 AdvanceTalk, Inc. All rights reserved