These three categories—expert systems, traditional machine learning, and frontier machine learning—are organized along this spectrum according to two distinguishing attributes:
- Their autonomy as assessed by the degree of human guidance they require to function
- Their explainability—meaning the degree to which humans can examine how an algorithm arrives at a particular prediction or output
These attributes are inversely related: more autonomous, fine-tuned algorithms require less human guidance, but it is harder to understand what the computer is doing and why.
Don’t assume “moving to the right” on the spectrum is optimal. More advanced algorithms aren’t always better, and most companies should assess and use a variety of techniques. For example, Amino, a San Francisco-based company whose online platform provides healthcare provider recommendations, cost transparency, and appointment booking, constantly tests algorithms to find the optimal mix of techniques. The company recently tested deep learning algorithms to surface trends in physician specialties and evaluated the approach against two guiding questions:
- What degree of accuracy is necessary to make the product successful?
- What is the incremental improvement from using a more expensive, sophisticated method? Is a simpler technique available?
Against this framework, Amino decided the added specificity from the deep learning techniques was not worth the added cost in development time and computing resources. Every company using AI/ML should apply an iterative, flexible, yet rigorous mindset: seek the desired level of predictive power using the simplest, most affordable techniques available. Investors, enterprise leaders, and others evaluating AI/ML-powered startups can use the Spectrum of Algorithms to guide conversations about the techniques each startup is using, and the utility and intent behind those particular algorithms.
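The two guiding questions above can be sketched as a concrete comparison. This is a minimal illustration, not Amino's actual method: the synthetic dataset, the accuracy target, and the choice of logistic regression versus gradient boosting as the "simple" and "sophisticated" candidates are all assumptions made for demonstration.

```python
# Hypothetical sketch: compare a simple baseline against a more
# sophisticated model, weighing accuracy gain against training cost.
# Models and data are illustrative assumptions, not a real pipeline.
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def evaluate(model):
    """Return (test accuracy, training seconds) for one candidate model."""
    start = time.perf_counter()
    model.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - start
    return model.score(X_te, y_te), elapsed

simple_acc, simple_cost = evaluate(LogisticRegression(max_iter=1000))
complex_acc, complex_cost = evaluate(GradientBoostingClassifier())

# Question 1: does the simple model already meet the accuracy target?
target = 0.85  # assumed product requirement for this sketch
# Question 2: is the incremental improvement worth the extra cost?
gain = complex_acc - simple_acc
extra_cost = complex_cost - simple_cost

print(f"simple:  acc={simple_acc:.3f}, train time={simple_cost:.2f}s")
print(f"complex: acc={complex_acc:.3f}, train time={complex_cost:.2f}s")
print(f"gain={gain:+.3f} for {extra_cost:+.2f}s extra training; "
      f"simple meets target: {simple_acc >= target}")
```

If the simple model already clears the product's accuracy bar, the comparison makes the case for stopping there; if not, the measured gain-versus-cost numbers give a basis for deciding whether the heavier technique earns its keep.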