Lede AI was founded in 2018 inside a small local newsroom as an experiment. Reporters, editors, and product designers worked with developers to build a tool that didn’t yet exist. The goal was to cover more stories, faster and more accurately than was possible before AI.

Jay Allred, co-founder of Lede AI, spoke recently at the LMA Local News Summit about the keys to creating a culture of innovation around artificial intelligence. In this Q&A, Allred shares what he’s learned about how to leverage the potential of AI for news, while managing the threats.


How can news leaders best cultivate a culture of AI innovation? Can it be done top-down? To what degree does it need to be ‘bottom-up’?

Jay Allred: The most important thing leaders can do to create a company where experimentation and innovation are woven into the culture is to set the objective and the desired outcomes. As a leader, it’s important to be explicit about the outcome you want. Each organization will have different goals to shoot for, but clarity around where to aim is critical.

Once your aim is set, make experimentation safe, transparent, and reportable. There are lots of ways to do this. Little things like a Slack channel for AI experiments, discoveries, and pro tips from early adopters can go a long way. One of our teammates pitched an idea we’re going to try: AI experimentation days, or an all-hands that’s a two-hour hackathon where employees try to solve a problem using the AI tools at their disposal.

What are some of the ways you’ve created and rewarded innovation around AI in your own team?

Allred: We try to break the whole “AI thing” into a series of small steps that feel less intimidating and way more actionable. In real life, that means giving people time to experiment, calling out their work and pointing to the results, and, when we see success, quickly implementing what we’re learning.

We also work to “pass the mic” and encourage team members to take the lead in areas where they’ve developed some knowledge or found a great hack.

AI creates many opportunities, but there are also perils. How do you manage the risks – and build in guardrails for AI use – without stifling innovation?

Allred: We’re not experts in policy-making in our company. What we tend to do is fall back on established ethical norms and use those as the guardrails: Tell the truth. Be trustworthy. Tell great stories. Applying the values and rules we already have in place to AI experimentation has served us well.

In addition, we talk about it with one another. We ask a lot of questions and aren’t afraid to ask for help. Doing that takes decision-making out of a vacuum and makes it easier to find the best path.

Culture is often reinforced more in moments of failure than of success. How have you used AI ‘failures’ as teaching moments to spur further AI innovation?

Allred: One way we’ve done that is to have folks try a particular tool or process and then report back on what they found … no matter what that was. The PG version of the saying is “fiddle around and find out.”

The objective we set was “fiddle around.” Success came from “finding out.” Finding out that what we tried didn’t work wasn’t a failure. It was a win.

For leaders concerned about being ‘too early’ or ‘getting AI wrong,’ how would you advise them to approach their AI strategy?

Allred: If you are worried about being too early in 2024, you’re late. If you’re worried about getting AI wrong, I would say to let your existing norms and ethics be the guideposts.

With those things in mind, my advice is to start as soon as possible and start small. Build a little at a time. Integrate frequent, low-friction feedback loops all over the place. Lead by example. Make it all about learning and teaching one another up, down, and across the org chart.

Make adoption easy by paying for a “pro license” for one generalist tool like ChatGPT, DALL-E, or Google Gemini. Give everybody the login and go. We do this, and the shared login has had an unintended benefit: when you log in, you can see how others are using the tool, which makes it easy to share techniques.