Local news organizations are moving quickly to adopt artificial intelligence, but audiences and advertisers are moving just as quickly to ask hard questions about trust, transparency and accountability.
That tension was the focus of a recent Local Media Association webinar featuring leaders from the Alliance for Audited Media (AAM), who outlined a practical framework for responsible AI use in journalism and media operations.
The session centered on a core idea: AI is no longer optional in newsrooms, but trust remains the currency that determines whether innovation strengthens or undermines local journalism.
Why ethical AI has become a local media imperative
Speakers emphasized that AI adoption is accelerating across newsrooms, from workflow automation to content summarization and audience engagement. At the same time, pressure is increasing from multiple directions:
- Audiences want clarity on when and how AI is used in journalism
- Advertisers want safeguards against errors, bias, and brand risk
- Regulators are introducing a growing patchwork of AI-related laws
Research cited during the webinar included findings from a recent Local Media Association and Trusting News study showing strong public demand for human oversight and clear explanations of AI’s role in news production.
The takeaway: openness to AI exists, but only when guardrails are visible and enforced.
The AAM ethical AI framework: Eight pillars for responsible use
AAM introduced its Ethical AI Framework, developed with input from media companies, advertisers, and industry partners. The framework is built around eight core pillars designed to help organizations manage risk while maintaining credibility.
The eight pillars are:
- Ethical AI policies
- Transparency and disclosures
- Rights and permissions
- Accountability and human oversight
- Bias, balance, and fairness
- Privacy and data protection
- Training and education
- Risk management and continuous review
Together, the pillars provide a scalable structure that can apply to local publishers, national media organizations, agencies and technology platforms.
Real-world examples from news organizations
Throughout the presentation, speakers highlighted how media organizations are already applying these principles in practice:
- Associated Press publishes public standards outlining how and why AI is used, with an emphasis on human oversight
- The Guardian has developed internal working groups focused on rights, permissions and fair use
- The Wall Street Journal clearly labels AI-assisted summaries and explains how they are reviewed by editors
- Financial Times uses internal checklists to assess bias, data integrity, and risk
- Graham Media Group has built internal AI tools to reduce reliance on third-party platforms
- Radio-Canada has implemented company-wide AI training and cross-functional collaboration
The examples underscored a consistent theme: responsible AI use is less about tools and more about governance.
What advertisers are asking — and why it matters
Advertiser trust was a major focus of the discussion. Research from the Interactive Advertising Bureau shows that AI-related incidents — such as hallucinations, biased outputs or off-brand content — are already common.
Despite that risk, only a minority of brands, agencies, and publishers have formal AI governance in place.
For local media organizations, speakers noted, ethical AI practices are increasingly becoming part of the sales conversation — not just an internal newsroom concern.
A practical starting point for local media
The webinar concluded with an action-oriented roadmap for organizations at any stage of AI adoption:
- Form a cross-functional AI committee that includes editorial, legal, tech, and leadership
- Establish clear policies defining acceptable AI use
- Develop consistent disclosure practices for audiences
- Keep humans in the loop for review and accountability
- Train staff regularly as tools and standards evolve
- Revisit risk assessments as laws and technology change
AAM positioned its Ethical AI Certification as one option for organizations seeking third-party verification, ongoing guidance and a standardized way to demonstrate accountability to audiences and advertisers.
The big picture
AI will continue to reshape how journalism is produced, distributed and monetized. What remains constant, speakers emphasized, is the need for transparency, human judgment and trust.
For local media organizations, ethical AI practices are no longer just a defensive measure; they are an opportunity to lead, differentiate and reinforce credibility at a moment when trust matters more than ever.
The webinar recording is available here: The eight pillars of ethical AI: A roadmap for responsible AI implementation
(Passcode: 9Crv48.4)
Previous webinars:
- How hyperlocal newsrooms can become the most trusted voice in their communities
- 3 social media moves local publishers can make now to grow traffic in 2026
- Building strong business operations for local media: Key takeaways from TAPinto founder Mike Shapiro
- What is GEO for SEO — and how newsrooms can create content for it
- 10 prompts every sales team should be using today
- From pageviews to loyalty — the KPIs that grow audience and revenue
- Back-to-school strategies that grow audience and revenue
- Publishers turn to AI to reclaim local ad market from big tech
- From brainstorm to optimization: How local media can use AI to save time, scale smarter and delight advertisers
- How Make It Free helps publishers unlock new revenue from non-subscribers
- Strategies to boost newsletter engagement and revenue
- 5 important ways to use AI to empower your editorial teams
- Instagram Reels revenue opportunities for publishers
- Election Fact-Checking Tools and Best Practices
- Unlocking subscription success
- Make a media kit that sells
- The power of AI in storytelling
- Local media’s advantages when selling digital
Editor’s note: Artificial Intelligence was used to transcribe and create an initial summary of this article, which was then edited by LMA staff.
