
Why does it seem so frightening now?
I recently had the opportunity to work with board members and leaders in New Buffalo, Michigan, where I shared some information on technology and data in the nonprofit sector. When I asked participants what came to mind when they heard “AI,” the responses were all over the map, ranging from “‘B’ careful” (get it?) and “scary!” to “endless possibilities” and “streamlining processes.”
The truth is that all those reactions can be valid when we’re talking about something that feels new. But one participant asked a question that stuck with me: Why does it seem so frightening now?
When we think about AI, it feels brand new. In reality, AI has been quietly helping systems—and people—for quite some time by recognizing patterns, learning from data, and making predictions. Think about autocorrect finishing your sentences, a streaming service like Netflix suggesting your next binge-watch, or your GPS rerouting you around traffic. Those are all everyday examples of AI working behind the scenes. Many of us have been benefiting from it for years without even realizing it.
What’s different now is that AI has shifted from being a passive tool to something we actively interact with. Instead of simply running in the background, AI is something we now engage with directly: asking questions, generating content, analyzing information, and shaping how work gets done. As a result, people are becoming more aware of its presence in our lives and more curious (and cautious) about what it can do. That awareness is also increasing the need for thoughtful, responsible engagement.
For nonprofits, the possibilities are significant. Organizations large and small can use AI to assess donor behavior, optimize email marketing, enhance social media engagement, detect potential fraud, assist with grant writing, provide website chat support, and much more. Tools like these can save time on repetitive tasks, analyze information quickly, improve communication, and surface insights hidden in data, freeing up more time for the work that matters most: our missions.
Of course, the concerns are real too. AI systems can perpetuate bias, generate misinformation, create privacy risks, and encourage overreliance if we’re not careful. For some organizations, there are also valid questions about workforce disruption and how teams adapt to new ways of working.
That’s why the conversation around AI in the nonprofit sector shouldn’t be about fear or blind enthusiasm; it should be about intention. By establishing clear guidelines and boundaries for how we use AI, we ensure that human judgment, ethics, and oversight remain at the center of decision-making. AI can help us move faster, analyze smarter, and communicate better, but it is still our values, our relationships, and our commitment to community that guide the work. If we approach it together with curiosity, responsibility, and a little courage, AI won’t replace the human side of our work; it can help strengthen it.