Generative AI Guidelines, Costs and Risks

Generative AI comes with its own risk and cost profile. Keep in mind that AI is not magic, nor is it an excuse or a legal defense. In most cases, the same content creation rules apply whether you use AI or not.

It’s important to consider what content creation task you’re trying to accomplish and determine which requirements apply to it, and only then think about how your approach might change because it involves AI.

Guidelines for Use of Generative AI

Other Tufts resources:

Procurement of Generative AI Tools

It is important that you connect with TTS before procuring generative AI tools to ensure that tools procured on behalf of Tufts have the appropriate privacy and security protections and provide the best use of Tufts funds. 

  1. If you have procured or are considering procuring generative AI tools or have questions, please contact TTS using these instructions.  
  2. Vendor generative AI tools must be approved by TTS as described here.

Costs and Risks

Here are some of the costs and risks to be aware of as you consider using generative AI for work tasks:

  • Off-label use. In the pharmacy world, off-label use means using a drug for a purpose different from the one for which it was approved. Similarly, using an AI for tasks it was not designed for can produce unexpected results. Read the documentation and terms of service for whatever AI you’re using, and understand what it has been designed and tested for. Don’t use generative AI for purposes that weren’t considered during design and testing.

  • Hallucinations. Generative AI is not good at distinguishing fact from fiction. For example, several lawyers have been disciplined by courts for submitting briefs that included case citations they got from an AI, but the cited cases didn’t exist. Best bet: review the output as if you found it in the back of an Uber.

  • Bias. Bias is a legal problem embedded in generative AIs, especially those that do grading or scoring, or that make important decisions about people (like visual profiling at a venue). A significant risk is that generative AI may produce results that omit or misrepresent diversity, or that ignore works appearing less frequently in its training data.

  • Privacy.

    • Most generative AIs were (and are) trained on data scraped from the internet, including social media and other places where your information was, and probably still is, available.

    • Some generative AIs may also access additional data about you, or use your prompts and results to improve future models.

    • This may result in exposure of information about you if the AI fails in a specific way.

    • AI can also create deepfakes and spread mis- and dis-information. In one case, a generative AI falsely accused a professor of being involved in a harassment episode that never occurred.

  • Security. There are many ways generative AI can be exploited or misused, both by well-intentioned users and those with bad intent. These can include:

    • Prompt injection, where attackers craft prompts that circumvent guardrails built into the AI

    • Data breaches, where hackers access unfiltered content in the original data set

    • Membership inference attacks, where a hacker uses available data to determine whether someone’s personal data was included in an AI’s training data

    • Data set poisoning, where data in the training set is corrupted

  • Intellectual Property. Works produced solely by AI are generally not protected by intellectual property laws. In some cases, AI output may infringe copyrights and patents held by others. Courts are just starting to sort out the legalities, and much remains unknown.

  • Contractual Limitations. If you don’t have a negotiated contract, you’re subject to whatever the online Terms of Service are for the generative AI service. If anything goes wrong, you’re probably on your own.

  • Personification/Excessive trust. People can form relationships with a generative AI, sometimes forgetting it’s just a computer program. There’s also a tendency to place too much trust in technology, which can be a big problem when the AI is making predictions based on linguistic probability rather than on a genuine understanding of the real world. Remember, AI is far from perfect. If you don’t read the output and check the facts, you may regret it.

  • AI output as input. As more content created by generative AI is distributed online, it will end up in some training data sets. Some experts predict this could cause “model collapse,” in which AI models lose capabilities they currently have. For example, there are already reports that GitHub’s codebase is subject to “downward pressure” due to a rapid influx of poorly written code attributed to the increased use of Copilot there.

  • Economic impact. Much of the early work refining training data for well-known generative AIs was done by underpaid “ghost” workers like those on Amazon’s Mechanical Turk. Generative AI may also eliminate jobs, and though it may create new ones, it’s not clear that those who lose their jobs will be hired for the new roles.

  • Environmental impact. Generative AI consumes water and electricity. Some statistics:

    • Training GPT-3 used an estimated 185,000 gallons (about 700,000 liters) of water

    • A ChatGPT session (5–50 prompts) uses about 500 ml of water, the volume of a bottle of water.

    • It takes about the same amount of electricity for an AI to write you a poem as it does to charge your cell phone.

    • Training a generative AI uses roughly the same amount of electricity as 130 households do in a year.

See: How Much Electricity Does AI Consume?