Ethical Storytelling
Harnessing AI for Nonprofits: Risks & Responsibilities
No, you aren’t imagining it. Artificial Intelligence (AI) is everywhere, and new applications seem to emerge every day. AI tools allow organizations to work more efficiently and effectively, and they are already reshaping how nonprofits can advance their missions.
AI tools offer nonprofits unprecedented opportunities to save time, reduce costs, and improve their outreach. From writing copy to editing videos, these tools enhance efficiency and free up staff time for mission-critical tasks. However, successful adoption of AI requires awareness of its potential risks. In this article, we’ll explore the risks and responsibilities to consider when using AI for nonprofits.
The Risks of Using AI for Nonprofits
Privacy Concerns
AI tools work by allowing the user to input data and receive a corresponding output. Let’s say you need help drafting a caption for an Instagram post. You can input your draft caption into ChatGPT and ask it to refine the wording. AI is able to perform tasks typically thought to require human intelligence by analyzing and learning from the data that users input. That includes your data. Before using any AI tool, you need to know what happens with your input data. This means reading the terms and conditions for every AI tool that you use.
Intellectual Property Ownership
If AI is helping you create content, it is important to understand who owns the rights to that content. Let’s say you use AI to edit a video. Before using the tool, you need to make sure that the organization is the owner of that edited video.
Inaccurate and Biased Information
An AI tool is only as good as the information provided to it. If the data it learns from is inaccurate, the tool can produce inaccurate output. Likewise, because AI learns from human-generated inputs, it can reproduce and amplify human biases.
For example, in a 2023 study, Bloomberg used a text-to-image AI tool to generate images related to job titles. The study found that the images generated for high-paying jobs were of people with lighter skin tones while the images generated for lower-paying jobs were of people with darker skin tones. While biases like these cannot be avoided in full, make sure your team is aware that they exist.
How to Use AI Responsibly for Nonprofits
While AI comes with risks, it is still a highly valuable tool that nonprofits can and should utilize, provided the right safeguards are in place. AI needs to be a part of your organization’s risk management process. To learn more about risk management for nonprofits, check out my earlier post breaking it down.
To manage the risks associated with AI, consider creating an “AI Use Policy” at your organization and provide AI use training to all employees.
Craft An AI Use Policy
An AI Use Policy should establish clear rules and guidelines for nonprofit employees regarding the use of AI tools. Here are the three most important things your policy should include:
- The purposes for which AI can and can’t be used.
As mentioned, it’s likely that your nonprofit will find great success using AI for certain tasks, while others must be done manually by staff members. Make a list of “acceptable” and “unacceptable” uses for AI. Here are a few examples of each:
Acceptable Uses of AI for Nonprofits
- Generating drafts for social media posts, newsletters, or blog articles
- Editing videos or enhancing images for promotional materials
- Analyzing trends in donor behavior or campaign performance
Unacceptable Uses of AI for Nonprofits
- Making decisions about beneficiary eligibility without human oversight
- Publishing AI-generated content without verifying its accuracy and appropriateness
- Using AI to generate outreach materials with sensitive or personal data
- What information absolutely cannot be used as an input.
Nonprofits should never input sensitive information into AI tools, including:
- Donor information (e.g., names, addresses, donation amounts)
- Personally identifiable information (PII) of beneficiaries or clients
- Financial records or confidential internal data
These types of data could be stored or used by the AI tool in ways that compromise privacy and security. Make sure your staff is aware of the risks associated with sharing these inputs and give space for them to ask clarifying questions.
- A requirement that all output be reviewed before use.
Never copy, paste, and post directly from AI. Because of inherent biases and the potential for inaccurate information, it is absolutely essential that all of your AI-generated content be reviewed by someone at your organization.
Take AI Training
All employees should receive training on the use of AI tools, including the limitations of such tools. Because the technology is rapidly evolving, designate someone on your team to monitor new developments and deliver ongoing training to your staff.
With the proper safeguards in place, you can effectively manage the risks associated with AI and incorporate it into your work. When used responsibly, AI is a powerful tool that can help your nonprofit save time, reduce costs, and amplify your impact. By understanding the risks of AI for nonprofits, and creating a strategy for use, you can harness its potential to advance your mission while maintaining ethical and operational integrity.
About the Author
Allie Levene
Attorney, Levene Legal
Allie Levene is an attorney and the founder of Levene Legal, which specializes in providing affordable and accessible legal services to small businesses and nonprofits. Before founding Levene Legal, Allie served as in-house and outside counsel for a number of Connecticut businesses. In those roles, she represented clients in litigation and administrative proceedings related to employment, civil rights, and general liability. Connect with her!