What Is Artificial Intelligence?
source: forbes.com | image: pexels.com
Artificial intelligence (AI) has become a red-hot topic, with record levels of investment in “AI” companies and promises of capabilities that will revolutionize our lives. Many are puzzling through how AI can add value, and a growing number of vendors claim to be “AI-powered.” Given the buzz and rush to wrap the mantle of AI around any new technology, it makes sense to ask the basic question, “What exactly is AI?”
Start with the practical definition that artificial intelligence is any technology that tries to replicate some broader aspect of human intelligence. I emphasize “broader,” as that’s where a fair amount of confusion emerges. Think, for example, of the ability to perform arithmetic. Most people would agree that this is an element of human intelligence. But I doubt anyone would conclude that a calculator is built on artificial intelligence.
It’s the broad capabilities involving higher-order, compound intelligence that make the difference. Karen Hao’s freehand flow chart offers a great visual of the distinction that emerges around these more complex cognitive capabilities. One of the key elements is that AI does more than just capture data and perform a predetermined task. A camera can capture an image but cannot identify the object in it, and optical character recognition (OCR) can extract characters but cannot read a document for meaning. AI goes a step beyond this by applying some degree of “intelligence” to interpret the data and offer a next level of insight. But even that falls short of the far more complex and elusive elements of human intelligence.
Establishing Realistic AI Parameters
Human intelligence enables us to capture knowledge, apply it in context and across relationships (the causal reasoning of “this causes that”) and learn in an iterative loop. This is the big prize of artificial “general” intelligence and the promise that powers the hopes (and fears) of a true revolution in how machines change our lives.
But the current state of AI still represents what is known as Moravec’s paradox: What’s easy for machines is hard for people — and vice versa. Many current AI techniques are strong at identifying patterns in huge volumes of numerical data to predict outcomes. Machine learning — a subset of artificial intelligence and the most dominant current practice — is built on this pattern detection capability using various sophisticated mathematical algorithms. It can predict optimal pricing for Uber rides during rush hour in Manhattan or recommend movies that someone might enjoy on Netflix based on what they previously watched.
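To make the idea of pattern detection concrete, here is a deliberately tiny sketch: a least-squares line fit to a handful of invented (demand, price) pairs, which then extrapolates a price for unseen demand. The numbers and the pricing scenario are hypothetical illustrations, not Uber’s actual method; real systems use far richer models and data.

```python
# Toy illustration of machine learning as pattern detection:
# fit a least-squares line y = a*x + b to historical pairs,
# then use the learned pattern to predict an unseen case.

def fit_line(xs, ys):
    """Ordinary least squares for a one-variable linear model."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var          # slope learned from the data
    b = mean_y - a * mean_x  # intercept learned from the data
    return a, b

# Hypothetical history: ride-demand index vs. surge price multiplier.
demand = [10, 20, 30, 40, 50]
price = [1.0, 1.5, 2.0, 2.5, 3.0]

a, b = fit_line(demand, price)

def predict(x):
    return a * x + b

print(round(predict(60), 2))  # extrapolates the learned pattern -> 3.5
```

The point is not the arithmetic but the workflow: the “knowledge” in the model is nothing more than a pattern extracted from representative examples.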
But ML is still brittle when it comes to everyday examples of human intelligence, like identifying objects (ask Tesla about that) or understanding language. The context, variability and ambiguity inherent in these tasks challenge even the most sophisticated ML techniques. Again, machine learning is not synonymous with AI, and the good news is there are other AI approaches that do address tasks like language understanding more effectively. More about that later.
The Knowledge Problem
There’s a fundamental element of AI that is critical to understand. AI can replicate some of the elements of human intelligence but, as with humans, needs relevant external information to form a point-of-view. In this sense, all AI is knowledge-dependent, and the way in which it handles this dependency is important. For example, ML is “trained” on representative data, building and refining solutions over time with more data. That makes solutions sensitive to the type, availability and quality of the data and to any changes that occur in the real world.
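A minimal sketch can show this data dependency directly. The toy “model” below learns a single decision threshold from invented training data and then faces a world where the data distribution has shifted; its accuracy degrades even though nothing about the model changed. All numbers are made up for illustration.

```python
# Toy illustration of sensitivity to real-world change (data drift):
# a model trained on one distribution degrades when that
# distribution shifts, because its "knowledge" is only the data.

def train_threshold(values, labels):
    """Learn a cutoff halfway between the two class means."""
    lo = [v for v, lab in zip(values, labels) if lab == 0]
    hi = [v for v, lab in zip(values, labels) if lab == 1]
    return (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2

def accuracy(threshold, values, labels):
    preds = [1 if v > threshold else 0 for v in values]
    return sum(p == lab for p, lab in zip(preds, labels)) / len(labels)

# "Before" world: class 0 clusters near 2, class 1 near 8.
train_vals = [1, 2, 3, 7, 8, 9]
train_labels = [0, 0, 0, 1, 1, 1]
t = train_threshold(train_vals, train_labels)  # cutoff at 5.0

# The world shifts: both classes drift upward, so class 0 now
# straddles the old decision boundary.
shifted_vals = [5, 6, 7, 11, 12, 13]
shifted_labels = [0, 0, 0, 1, 1, 1]

print(accuracy(t, train_vals, train_labels))      # perfect on the old world
print(accuracy(t, shifted_vals, shifted_labels))  # degrades after the shift
```

Retraining on fresh data would restore performance, which is exactly why shock events like a pandemic, where fresh representative data does not yet exist, are so hard for ML systems.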
Shock events like Covid-19 are particularly challenging. Existing machine learning models trained on data from a pandemic-free world struggled to adapt. And attempts to use ML to assist in Covid-19 diagnoses and treatment failed because of the poor quality of the data used.
As noted earlier, there are other approaches (generally called symbolic or knowledge-based AI) that address this problem explicitly by embedding knowledge within the system. Dominant in the early days of AI, symbolic and knowledge-based approaches are the focus of renewed interest. Many leading researchers in AI see them as a critical component of the AI toolkit, particularly for use cases like language understanding.
Natural Intelligence And The Path Forward
AI is a broad category, encompassing different techniques, each with its own strengths and weaknesses. Fortunately, there’s a growing and healthy realism about the limitations of some of AI’s dominant trends across elements of performance, cost, transparency and complexity.
Many have realized the importance of addressing the knowledge dependency inherent in delivering practical and scalable AI solutions. A recent Google AI research paper, “Everyone wants to do the model work, not the data work,” highlighted the need for this balance. It points to the critical requirement of matching the problem at hand with the most appropriate AI approach — or combination of approaches.
At the most fundamental level, this is where our intelligence drives the real-world likelihood of AI making a difference. We are the ones defining the problems to solve, the data to use and the approach to take. In some cases, that means knowing when a simpler and often cheaper approach might work better, even if it’s not AI. In others, it means knowing how to ensure an AI solution complements our own capabilities, leveraging our strengths (the things we know and are good at) and compensating for our weaknesses (the insights we’re missing and the mistakes we make).
We’re the ones who live with the consequences of how we deploy AI, from basic requirements of fairness and transparency to more profound ones about whether we should even be using AI. We’re accountable for any technology deployed, particularly one that is both as complex and ambitious in its reach as AI. The human is, at least for the foreseeable future, the key to successfully applying AI. As The Economist noted in its recent summary about AI trends: “AI can do a lot. But it works best when humans are there to hold its hand.”