(Gary Weingarden, Privacy Officer & Director IT Security Compliance | Published May 2024)
As promised in the Quick Primer on AI, this post is about AI costs and risks. While AI offers a lot of benefits, it also comes with costs and risks. The costs and risks vary among use cases and types of AI. Check some of the links at the end of this post for more resources.
Costs and Risks
- Death and Dismemberment. If an AI is embodied, it can interact with the physical world—for better or for worse. From a chess robot breaking a child’s finger , to a robo-taxi dragging a pedestrian who was hit by another (human driven) vehicle, to fatalities caused by industrial robots, to surgical robots that can kill, to deadly driving advice, and beyond, the ways to die or suffer physical injury from AI are just starting to be counted along with other harms. You might reply that rates for non-AI versions of the same injuries aren’t much to write home about, and you’d be right. Nevertheless, AI can present physical risks; just wanted to get that out of the way.
Upshot: Robots and autonomous vehicles and drones, oh my!
- Environmental impact. AI consumes water and electricity. Some statistics:
- Training ChatGPT-3 used 185,000 liters of water
- Every ChatGPT session (5-50 prompts) uses 500 ml of water, the volume of a bottle of water
- It takes the same amount of electricity for an AI to write you a poem as it does to charge your cell phone
- Training a generative AI uses roughly the same amount of electricity as 130 households; it also generates almost five times the lifetime carbon emissions of an average US car.
Upshot: AI requires a lot of storage and compute, which has an oversized environmental impact.
- Economic impact. Much of the early work refining training data for well-known generative AIs was done by underpaid “ghost” workers like those on Amazon’s Mechanical Turk and elsewhere. Also, generative AI may eliminate jobs. And though it may create new jobs, it’s not clear that those who lose their jobs will be selected for the new ones.
Upshot: It’s not clear whether AI will represent an economic win or loss, but it will leave winners and losers in its wake.
- Privacy.
- Many AIs are trained from data that is scraped from the internet, including personal information about most of us. You could say that AI is based on massive intellectual property and privacy violations.
- AI outputs can leak information. For example, there’s a classic story about Target’s pregnancy score generating pregnancy-related advertising, which supposedly resulted in Target sending her pregnancy- and baby-related coupons, which revealed her pregnancy to her father. And there’s the time Facebook shared a user’s purchase of a diamond engagement ring. In the case of Generative AI, a leak of your prompts could be just as bad.
- AI can also create deepfakes and spread mis- and dis-information. In one case, a generative AI falsely accused a professor of being involved in a harassment episode that never occurred.
This article reviews the full spectrum of privacy risks posed by AI.
Upshot: AI challenges a lot of privacy norms.
- Off-label use or misuse. In the pharmacy world, an “off-label” prescription is one that orders an approved drug for an unapproved use. Using an AI for tasks for which it was not trained and tested can have unexpected results. It’s important to read the documentation and terms of service for whatever AI you’re using so you can understand what uses were designed and tested for, interpret the output, and follow any instructions.
Upshot: Avoid off-label AI uses, especially if the results can have significant impact.
- Hallucinations. Generative AI is not good at distinguishing fact from fiction. For example, several lawyers have been disciplined by courts for submitting briefs that included case citations that they got from an AI—cases that didn’t exist. Hallucination appear to be a permanent feature of AI Chatbots and Generative AI. And it has implications for the future of search engines and the internet. This, in turn, may cause AIs that are trained on AI content to deteriorate.
Upshot: Review the output like you found it in the back of an Uber.
- Bias. Bias in AI refers to AI that either treats individuals differently because of a characteristic that’s not relevant to the task or produces results that result in an unfair distribution of outcomes to different groups. Biased algorithms can cost people their freedom, their money, and their entitlement to benefits. Bias can also be built into sensors or facial recognition software or other kinds of access points.
Upshot: Be on the lookout for bias and its sources.
- Security. There are many ways generative AI can be exploited or misused, both by well-intentioned users and those with bad intent. These can include:
- Prompt injections where hackers create prompts which circumvent guardrails built into the AI.
- Data breaches where hackers access content in the original data set.
- Membership attacks where a hacker uses available data to determine whether someone’s personal data was included in an AIs training data.
- Data set poisoning where data in the training set is corrupted.
There is a lot to AI security risk, and this is a small sample. You can read a lot more about it here and here.
Upshot: The AI security risk profile involves existing risks of the technology (for example, cloud computing and mobile) plus AI-specific risks.
- Intellectual Property. Works produced by AI are not protected by intellectual property laws. In some cases, the output of AI may infringe copyrights and patents held by others.
Upshot: Courts are just starting to sort out the legalities and much is unknown.
- Personification/Excessive trust. People can form relationships with a generative AI, sometimes forgetting it’s just a computer program. There’s also a tendency to place too much trust in technology, called “automation bias,” which can be a big problem when the AI is making predictions based on linguistic probability rather than on a genuine understanding of the real world.
Upshot: Remember, AI is far from perfect. If you don’t read the output and check the facts, you may regret it.
- AI output as input. As more and more content is created by generative AI and distributed online, this content will end up in some training data sets. Some experts predict this could cause “model collapse,” resulting in AI models losing capabilities they currently have. For example there are already reports that github’s codebase is “downward pressure” on code quality attributed to the increased use of Github’s Copilot feature. Github’s Copilot seems to replicate vulnerabilities and insecure code and possibly worse.
Upshot: AI may not thrive on a diet of AI-generated data
- Frequency Becomes a Proxy for Value. In The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do (2022) Erik Larsen explains the "Frequency Assumption," which is "more is better." In other words, AI detects patterns and patterns are based on frequency. This assumption has some unfortunate side-effects, including not recognizing, or processing well, languages that lack a written component or for which limited examples exist on the internet. In addition to cultural unfairness, this has led to vulnerabilities in Generative AIs, and as Kyle Chayka tells us in Filterworld: How Algorithms Flattened Our Culture (2024), AI recommendation systems flatten our culture by making it less likely that the system will recommend anything that's not already popular; and everything starts to look the same.
Upshot: Hipsters beware! It's going to get more difficult to discover things before they're popular, and in the AI future, chicken may taste like everything, or vice versa.
For more information. You can find out more at the following resources:
-
- Collections of AI-related Incidents, Risks, and Harms
- AI laws: Global AI law and Policy Tracker (IAPP)
- AI Risk Frameworks and Principles
- Environmental
- AI Privacy Risks
- AI Security Risks
- AI Futuristics