Learn how nonprofits are creating safe, practical AI policies with real examples, expert tips, and a free AI Readiness Guide.
More and more YMCAs and nonprofits are asking the same question: How can we use AI in a way that’s smart, safe, and aligned with our mission? To help answer that, we brought together a panel of experts who’ve been on the front lines of AI policy - from legal, nonprofit, and tech perspectives.
Catherine Lake, Partner at Dorsey & Whitney LLP, brought clarity from the legal side, helping organizations understand risk, compliance, and what AI policies should cover.
John Merritt, Senior Vice President & CIO of the YMCA of San Diego County, shared real-world lessons from implementing AI policy at one of the largest Ys in the country.
Jonny Power, Chief Technology Officer at Traction Rec, offered practical advice to help nonprofit teams use AI confidently and responsibly.
Here’s what we covered:
How to spot and support existing AI use on your team.
The importance of guidance surrounding AI use and policy implementation.
Steps to get started with building your AI policy, including how to vet AI vendors.
Lastly, we’ve created an AI Readiness Guide to help your team assess where you are and start building your AI policy. Reach out and we’ll send it right over!
Your staff is already using AI - let’s help them do it safely
AI tools have quietly made their way into nonprofit workflows. Whether it’s ChatGPT or other generative tools, many staff are already using AI, sometimes without leadership even realizing it. At a recent conference of 150 nonprofit organizations, over 60% of attendees said they were using AI in some form, but when asked how many had a policy in place... not a single hand went up. That’s a gap with real implications. Without a clear framework, organizations risk missteps around data privacy, misinformation, and trust with their community members. That’s why the YMCA of San Diego created a clear AI policy that names approved AI tools and explains how to use them safely in daily workflows.
<hr>
“About 6 months ago it became really clear that AI was really starting to make it into everyday workflows [...] the concern became that, that was happening without any kind of formal approval. You know, no guidelines or guardrails.”
- John Merritt, SVP & CIO, YMCA of San Diego County
<hr>
Policy and guidance: The perfect pair to build confident AI use in your organization
With AI rapidly evolving, a one-and-done policy won’t cut it. As Catherine Lake noted, the real work begins after the policy is written - through ongoing communication, staff support, and practical application. John Merritt echoed this sentiment, pointing out that it’s not about limiting staff; it’s about giving them the tools to use AI safely and confidently in their daily work. Outlining which tools staff can use lets them explore the different ways AI can support their work without fear of the unknown, empowering them to innovate and ultimately serve their community even better.
Some of the most effective ways organizations are bringing policy to life include:
Framing policy as empowerment, not restriction
Using real-world scenarios, like writing job descriptions, newsletters, or donor messages
Setting clear do’s and don’ts
A simple way to help staff know what’s safe is a clear model like Traction Rec’s three-tiered data rule:
Public – Free to share (e.g., marketing copy)
Restricted – Approved vendors only (e.g., meeting notes)
Confidential – Highly sensitive, never shared (e.g., donor or client info)
<hr>
“And these are tools. They're not replacements, which again, I think is helpful to giving people comfort, that they're not taking over everyone's job. They're just adding, you know, hopefully, a tool so that you can spend more time surfing or whatever you want to be doing.”
- Catherine Lake, Partner, Dorsey & Whitney LLP
<hr>
Start small, start now: What every nonprofit can do today
Let’s be honest... AI and AI policy can feel like a lot. But creating a safe, responsible foundation doesn’t mean writing a 10-page document today (or ever). Start simple. Sit down for a coffee chat with a staff member you know is already using AI and ask: Which AI tools are they currently using at work? Where is AI actually helping them day-to-day? Are there other AI tools they think would benefit them and their team?
This kind of open conversation creates space for safe experimentation and gives you clarity on what’s already happening in your organization. From there, it’s about building a lightweight framework that balances innovation and trust. Traction Rec and the YMCA of San Diego County use a trusted-tools approach that encourages the use of vetted vendors like Salesforce’s Agentforce or ChatGPT with a commercial license, which helps keep your data protected.
<hr>
“As a business, it’s impossible to use a tool if they’re going to use your data for training.”
- Jonny Power, CTO, Traction Rec
<hr>
Tips for vetting AI tools at your nonprofit:
Check if the tool uses your data to train its models
Use tools with commercial agreements and clear data protections
Vet your vendors - if your partners don’t have a strong AI policy, your data could still be at risk
<hr></hr>
Start small. Talk to your staff. Understand what’s already in use. Then build a policy that protects your people, your data, and your mission.
Want help getting started?
Traction Rec has created an AI Readiness Guide to help nonprofit leaders build practical policies, identify risks, and create a foundation for safe experimentation. Reach out and we'll send it right over!
Watch the complete webinar
<hr></hr>
Meet our panelists
Catherine Lake
Partner, Dorsey & Whitney LLP
John Merritt
Senior Vice President & CIO, YMCA of San Diego County
Jonny Power
Chief Technology Officer, Traction Rec
“We can't be reacting anymore. We need to be innovating — getting ahead of the issues. With Traction Rec, I can dream and then we can build it and make it happen.”