(Gary Weingarden, Privacy Officer & Director IT Security | Published February 2024)
It seems like everything comes with a fancy “AI” feature these days. I get lots of questions! This post will explain some of the basics; later we’ll explore some of the risks, challenges, and really cool features of AI and related systems.
What is Artificial Intelligence? What is Generative Artificial Intelligence?
AI has been around for a long time; ChatGPT and other Generative AI tools are only the latest, trendiest examples. A popular definition of AI is: a computer system “that can perform tasks typically requiring human intelligence, such as problem-solving, decision-making, language understanding, and perception.” There are more elaborate definitions, and definitions that are more technical (see the discussion of Russell and Norvig), but AI is a broad category that, depending on the definition, can include everything from the Antikythera mechanism, to an ATM, to Google’s autocomplete, to Deep Blue (which defeated the chess world champion Garry Kasparov back in 1997). Frankly, the concept of AI is often unhelpful because of how much it covers.
Relationship among various categories of AI. Source: Lance Spitzner
These days, when people say “AI,” they usually mean “Machine Learning,” one kind of AI that “learns” by forming inferences and making predictions based on its consumption of large amounts of data. A popular kind of machine learning is deep learning, which uses layers of interconnected nodes loosely modeled on the structure of the human brain (an approach called a “neural network,” among other things).
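To make “learning from data” a little more concrete, here’s a minimal sketch in Python. It fits a straight line to a handful of made-up example points by repeatedly nudging two numbers (a weight and a bias) to reduce prediction error. Deep learning does essentially this with millions or billions of such numbers arranged in layers. The data points and learning rate below are illustrative assumptions, not taken from any real system.

```python
# Tiny example of "learning from data": fit y = w*x + b to example points
# by gradient descent. The data is made up and roughly follows y = 2x + 1.
data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]

w, b = 0.0, 0.0        # start knowing nothing
learning_rate = 0.01

for step in range(5000):
    # Measure how wrong the current predictions are on the examples,
    # then nudge w and b in the direction that reduces the error.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned y = {w:.2f}x + {b:.2f}")  # ends up close to y = 2x + 1
```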
As you can see from the infographic above, Generative AI is a subcategory of AI that is trained on large datasets and responds to prompts with “new” content. As the International Association of Privacy Professionals puts it, “Generative AI makes predictions on existing data rather than new data. These models are capable of generating novel outputs based on input data or user prompts.”
Source: Wikimedia. https://creativecommons.org/licenses/by-sa/3.0/deed.en
Does that mean AI “thinks” like a human?
Nope. There are really two strands of Machine Learning research. One strand tries to get computers to produce results that humans can (or maybe can’t) produce; the IBM Watson DeepQA system that defeated two champions on Jeopardy is an example of this kind of research. The other strand tries to teach computers to think the way a human does, and it has made little progress.
But ChatGPT seems to understand my questions and produce original responses.
Well, it’s not magic, but it’s also nothing like a live person reading your prompt, performing research, and authoring a response. Believe it or not, it’s all math. ChatGPT and most other “AIs” work by prediction. ChatGPT was trained on a massive data set: anything it could access on the Internet. The training produced a system that identifies the “best” answer by calculating probabilities; in effect, it asks, “what’s the most likely text to follow this prompt?” Because the model adds some randomness, the same prompt can produce different outputs. And because “what’s true?” is a different question from “what’s the most likely response?”, ChatGPT sometimes “hallucinates.”
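Here’s a toy illustration of that idea in Python. It uses a deliberately tiny word-level model with made-up counts (real systems like ChatGPT use enormous neural networks over subword tokens, not lookup tables), but it shows the two mechanisms described above: scoring every possible next word by probability, and sampling with a dash of randomness so the same prompt can yield different text.

```python
import random

# Made-up counts of which word followed which in our pretend "training data".
# A real model learns billions of parameters instead of a small table.
counts = {
    "the": {"cat": 4, "dog": 3, "weather": 1},
    "cat": {"sat": 5, "ran": 2},
    "dog": {"ran": 4, "barked": 3},
}

def next_word(prev, temperature=0.8):
    """Sample the next word given the previous one.

    Lower temperature sharpens the distribution (more predictable output);
    higher temperature flattens it (more surprising output). The random
    draw is why identical prompts can produce different responses.
    """
    options = counts[prev]
    # Temperature-scaled scores, then a weighted random draw.
    scores = {w: c ** (1.0 / temperature) for w, c in options.items()}
    total = sum(scores.values())
    r = random.uniform(0, total)
    for word, score in scores.items():
        r -= score
        if r <= 0:
            return word
    return word  # floating-point edge case: fall back to the last option

print("the", next_word("the"))  # "the cat" on some runs, "the dog" on others
```

Note that nothing in this process checks whether the output is true; it only checks whether it is likely, which is exactly why hallucinations happen.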
Is AI going to take over?
Most AI is only good at a narrow set of tasks. AI still does some weird stuff and makes obvious mistakes, and it is unlikely to become superintelligent in the immediate future. While we wait, though, we’ll have to deal with imperfect AI and imperfect humans: would you rather deal with artificial intelligence, or artificial stupidity? Each poses its own set of risks. We’ll talk about those next time.