What is generative AI and how does it work?
AI, which stands for artificial intelligence, is a broad term encompassing a wide range of technologies, from Google’s autocomplete to an ATM to Deep Blue, the chess-playing computer that defeated a world champion. The kind of AI most talked about these days is called generative AI.
Generative AI can create many different kinds of content, including text, images, and code, so it has many applications in the workplace, in education, and in many other fields.
Tufts Guidelines for Use of Generative AI Tools
What's the technology behind generative AI?
Any given generative AI draws from a vast set of text and images—known as a data set—to generate new content.
It does this with a whole lot of predictive math, performed by a very large computer network known as an artificial neural network (ANN).
Artificial neural networks are modeled loosely after the human brain, which can store, act on, and transmit billions of pieces of data in myriad patterns.
When a generative AI is referred to as a Large Language Model (LLM), that’s because the computer network is modeling a certain kind of mathematically driven, predictive activity that human and animal brains do, called pattern recognition.
In the case of generative AI, what is being modeled is the comprehension and creation of language.
Brain scan adapted from "Self Reflected" by Will Drinker and Greg Dunn. Artificial Neural Network adapted from "But what is a Neural Network?" by Grant Sanderson.
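To make the “predictive math” a little more concrete, here is a minimal, illustrative Python sketch of the kind of arithmetic an artificial neural network performs: inputs are multiplied by weights, summed, and passed through simple functions. The numbers and weights below are made up for illustration only; a real generative AI learns billions of weights from its data set rather than having them written by hand.

```python
import math

# A single artificial "neuron": multiply inputs by weights,
# add them up, and squash the result into a 0-1 range.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid activation

# A tiny "network": two neurons whose outputs feed one output neuron.
# Real LLMs chain billions of these weighted sums, with weights
# learned from training data (all values here are invented).
def tiny_network(inputs):
    hidden = [
        neuron(inputs, [0.5, -0.2, 0.1], 0.0),
        neuron(inputs, [-0.3, 0.8, 0.4], 0.1),
    ]
    return neuron(hidden, [1.2, -0.7], 0.05)

# The made-up input numbers stand in for an encoded piece of text.
print(tiny_network([0.2, 0.9, 0.4]))   # a single prediction score
```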
What is generative AI good at?
Generative AI is good at recognizing and replicating patterns in the various languages contained within its data set.
Some AIs, such as ChatGPT, have been trained on an enormous data set that contains many different kinds of languages, including, for example, human languages like English, Spanish, and Hindi; computer languages like Python and C++; and visual languages, like x-rays and sports photos.
Because ChatGPT has been trained with data that includes all these languages, it develops the ability to generate content in all of those languages too.
Why can these AIs generate text, code, and images?
Pattern recognition is something we do constantly: we listen for patterns in what we hear, such as word usage and sentence structure, and look for patterns in what we see, such as repeated shapes, colors, and textures.
Our brains can do this, in part, by having a very large store of examples of how sentences are built, or how things look. So, for example, you may have heard sentences like “the weather is rainy today” many thousands of times. All those comments about the weather are stored as data in your brain, including individual words, and words grouped into phrases. When you say something about the weather you reference your own stored data of weather words and phrases, and can form a coherent sentence.
Similarly, a generative AI acquires the ability to detect and replicate meaningful patterns when it is given a very large set of text and images to reference, and is provided with a very large amount of computational power.
The drawing on the left was made by a human looking at an apple and referencing thousands of memories of actual apples and images of apples. The drawing on the right was made by an AI referencing millions of drawings and images of apples. (Drawings courtesy of TTS staff and DALL-E 3).
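As a rough illustration of this idea (not how a real LLM is actually built), here is a toy Python sketch that “remembers” a few weather sentences and predicts each next word from what it has seen before. The example sentences are invented for illustration; real generative AI uses a neural network and vastly more data, but the underlying move, predicting what plausibly comes next based on stored examples, is similar.

```python
from collections import Counter, defaultdict

# A handful of "remembered" sentences, standing in for the vast
# store of examples an AI (or a brain) draws on.
examples = [
    "the weather is rainy today",
    "the weather is sunny today",
    "the weather is rainy again",
    "the weather is cold today",
]

# Count which word tends to follow each word in the examples.
next_word_counts = defaultdict(Counter)
for sentence in examples:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

# "Generate" a comment about the weather by repeatedly picking
# the most common next word seen in the stored examples.
word, sentence = "the", ["the"]
while word in next_word_counts:
    word = next_word_counts[word].most_common(1)[0][0]
    sentence.append(word)
print(" ".join(sentence))   # e.g. "the weather is rainy today"
```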
What are some key strengths and weaknesses of generative AI?
👍 Brainstorming: Because generative AI draws from huge data sets, it is often good at juxtaposing and mixing and matching concepts and scenarios. This makes it useful for brainstorming and generating ideas.
👎 Accuracy: Because generative AI lacks the ability to fact-check the content it creates, it is often inconsistent, inaccurate, or flat-out wrong. The only way to ensure that AI-generated content is accurate is to have it vetted and revised by an expert human.
👎 Reliability: A model’s behavior can change over time, and hackers are constantly finding ways to get a GPT to do something it wasn’t designed to do. So be skeptical of whatever it produces.
See: Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds (Fortune)
A word of caution about generative AI security
No private data should be input into a generative AI.
AIs are a new and experimental technology. Even AIs which claim to be secure may have security vulnerabilities which have not yet been discovered or addressed. For this reason, we strongly recommend that you only share content with an AI that you would be willing to share publicly online. Never share confidential data about individuals or organizations, including names, email addresses, or other identifying or sensitive information.
Before you use AI for work-related activities, see: Tufts Guidelines for Use of Generative AI Tools.