By Jay Small • LMA chief operating officer

The NYC Media Lab and Knight Foundation brought together 140 media, academic and technology leaders last week (Jan. 24-25) to contemplate the intersection of artificial intelligence and local news. Neither starry-eyed optimism nor apocalyptic “Skynet” fears ruled here; instead, important conversations developed about the positives and negatives of the technology.

Steven Rosenbaum, NYC Media Lab (Photo: Jo Chiang / NYC Media Lab)

NewsLab ’20, held Friday and Saturday in new NYC Media Lab facilities in Brooklyn, combined scene-setting panel discussions on the current state of the art in artificial intelligence with small-group brainstorming exercises to identify possible applications for AI in news media, and to suggest collaborators who might help make those ideas real over time.

“This is not a typical crowd for a NYC Media Lab event,” said Steven Rosenbaum, managing director of the lab. “Everyone in this room is extraordinarily interesting. We want you to be optimistic. There’s plenty of doom and gloom and dark stories, but we’re here to solve problems.”

Paul Cheung, Knight Foundation (Photo: Jo Chiang / NYC Media Lab)

Paul Cheung, director of journalism and technology innovation at Knight Foundation, described organizers’ aspirations for the event: “We’re interested in collaboration between local media, technologists and academics. We should ask ourselves the question, ‘How can we use AI for good, rethink the production of journalism, and find new expressions of journalism?’”

And Justin Hendrix, executive director of NYC Media Lab, offered the group a tempered definition of AI.

“We try not to be hypesters but talk reasonably,” Hendrix said. “When you look at the ambitions of the big tech companies, the money they’re spending on R&D, what are they working on? Certainly AI, natural language processing, machine learning, computer vision, these are all key things these companies are putting effort into.

“But we’re not here to talk about crazy killer robots or [artificial] general intelligence,” he said. “Estimates on general intelligence are still pretty far off.”

Justin Hendrix, NYC Media Lab, discusses R&D spending at technology companies.

If that’s not it, what is meant by “AI”?

Definitions

These three related terms rang out early and often at NewsLab ’20: artificial intelligence, machine learning and natural language processing. Many definitions exist for these terms; here are some that include practical examples:

Artificial intelligence: The ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from experience. (Adapted from an Encyclopedia Britannica entry written by B. Jack Copeland, professor of philosophy and director of the Turing Archive for the History of Computing, University of Canterbury, New Zealand.)

Machine learning: A subset of AI that uses statistical techniques to build intelligent computer systems that learn from the data available to them. (Adapted from an article written by data scientist Mandeep Kaur, which includes several examples of real-world machine learning applications in medical diagnosis, image and speech recognition, and prediction systems.)
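
To make that definition concrete, here is a minimal sketch in Python with scikit-learn (a tooling choice made for illustration here, not one endorsed at NewsLab ’20); the task and the data are invented. The point is that the model infers a decision rule from labeled examples instead of being programmed with the rule directly:

```python
# Toy machine learning example: learn a rule from labeled examples
# rather than hand-coding it. Task and data are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Each example: [local place names mentioned, article word count]
X = [[0, 400], [1, 350], [5, 600], [8, 550], [2, 300], [7, 800]]
y = [0, 0, 1, 1, 0, 1]  # 1 = article drew a mostly local audience

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[6, 500]]))  # likely [1]: predicted local interest
```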

Natural language processing: Another subset of AI that helps computers understand, interpret and manipulate human language. NLP draws from many disciplines, including computer science and computational linguistics, to fill the gap between human communication and computer understanding. (Adapted from an introduction to NLP by SAS, the data science/business intelligence company. Tableau, the well-known data visualization platform, provides some everyday examples of NLP: think email filters and smart assistants, for starters.)
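
The email-filter example maps onto a classic NLP pipeline: turn text into word counts, then learn which words signal spam. Another minimal sketch, again with scikit-learn and invented training data:

```python
# Bag-of-words text classification: a simplified version of how early
# email spam filters worked. Training data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now", "limited offer click here",
    "meeting agenda for tuesday", "your invoice is attached",
]
labels = ["spam", "spam", "ham", "ham"]

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(emails, labels)

print(spam_filter.predict(["click here for a free offer"]))  # ['spam']
```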

Journalism practitioners already rely on many instances of artificial intelligence, machine learning and natural language processing in their work. These applications include programs that streamline workflows, automate routine tasks, parse large data sets, detect or reduce the occurrence of falsified news reports, and more. Corinna Underwood provides examples of each in an article written late last year for Emerj.
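
Among those applications, automating routine tasks often means filling a story template from structured data, the pattern behind automated earnings and sports briefs. A minimal sketch with hypothetical figures:

```python
# Template-driven story generation from structured data, the pattern
# behind automated earnings and sports recaps. Data is hypothetical.
def earnings_brief(company: str, quarter: str,
                   revenue: float, prior_revenue: float) -> str:
    change = (revenue - prior_revenue) / prior_revenue * 100
    direction = "rose" if change >= 0 else "fell"
    return (f"{company} reported {quarter} revenue of ${revenue:,.0f}, "
            f"which {direction} {abs(change):.1f}% from the prior quarter.")

print(earnings_brief("Example Corp", "Q4", 1_250_000, 1_100_000))
# Example Corp reported Q4 revenue of $1,250,000, which rose 13.6% ...
```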

That list leaves a lot of needs unfilled, and room for new applications. What should the priorities be?

Applying AI in news and media

“Start by breaking it out in terms of using technology to do journalism, or AI to do journalism, vs. using AI to make money,” said Meredith Broussard, data journalism professor at New York University. “I use AI for investigative reporting. … That’s very different from the way you would use AI on the business side of an organization.

“AI in a paywall, that’s a fantastic use with a business goal to help make more money by understanding the audience better,” she said. “But it’s not necessarily tied to the content side of the organization.”
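
A paywall application of the kind Broussard describes is commonly built as a propensity model: score each reader’s likelihood of subscribing, then adjust the meter or the offer. A rough sketch under that assumption, with invented features and data; production systems draw on far richer behavioral signals:

```python
# Subscription-propensity sketch for a dynamic paywall.
# Features and data are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Per reader: [visits in last 30 days, articles read, newsletter (0/1)]
X = [[2, 3, 0], [15, 40, 1], [1, 1, 0], [20, 35, 1], [8, 12, 0], [12, 30, 1]]
y = [0, 1, 0, 1, 0, 1]  # 1 = reader eventually subscribed

model = LogisticRegression().fit(X, y)

# A high probability might trigger a tighter meter or a discount offer.
print(model.predict_proba([[10, 25, 1]])[0][1])
```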

Day 1 panelists, from left: Meredith Broussard, John Keefe, Mark Hansen, Jennifer Choi, and moderator Justin Hendrix.

Broussard, author of Artificial Unintelligence: How Computers Misunderstand the World, spoke as part of an opening panel discussion on AI in local news. Others on that panel focused on how different aspects of AI affect journalists’ day-to-day work.

“I come from the perspective of using machine learning and AI-type technologies inside the newsroom for journalists doing reporting,” said John Keefe, investigations editor at Quartz. “There’s a long tradition of sharing tools and tech resources in newsrooms, through NICAR [National Institute for Computer-Assisted Reporting, part of Investigative Reporters and Editors] and other organizations.

“Unlike something like when online maps first came out, and everyone started using them, machine learning technology and initiatives in investigative reporting come up less often,” Keefe said. “We’re not doing machine learning investigations every couple of weeks.”

“Beyond authorship, reporting on, or reporting with, there’s kind of a ‘black-boxification’ that can happen with these tools and the shiny, mysterious ways they do things for you,” said Mark Hansen, professor, Columbia University. “I think we need not lose our journalistic sense when reporting using these tools. Treat them as a source, and maybe one we don’t completely understand. Trust what is there, understand what isn’t there, and know there could be big chunks of a story left behind, some important element tucked in a corner somewhere.”

The panelists pondered the uneven distribution of AI-related technologies and tools across news organizations at different scales.

“We find computation resources unevenly spread around the network,” one panelist said. “We found newsrooms with five people, two of whom are data journalists, and others with 100 people but only one very tired person who is the data journalist.”

Jennifer Choi, managing director of the CUNY News Integrity Initiative, noted the impact that disparity can have in collaborations among news organizations. “If you are looking at an organization that has capacity, vs. a smaller organization serving a smaller community, understand what the negotiation is. It’s not just the big organization ‘bigfooting’ the smaller organization; the smaller organization brings resources of its own, including a trust relationship with the community it serves.”

Fitting AI into the framework of journalism ethics

The opening panelists, as well as those on a Day 2 panel titled “Complicating Questions,” delved into ethical considerations of AI and related technologies as applied in journalists’ work.

“[How do we] avoid some of the ethical traps,” Keefe said, “when we have to take a part of the world and reduce it to a line in a data set? Reductions have subjectivity in them, [and we] need a human component guarding against some of those things so we are not repeating some of the values problems we are accusing the big platforms of.”

Broussard agreed. “We need humans in the loop. We can’t just turn over responsibility for journalism to computers. Look at civic engagement, and the structural properties of humans to create positive civic engagement.”

The Day 2 panel, from left: Noelle Silver, Shweta Jain, Irwin Chen, Chris Wiggins, moderator Steven Rosenbaum.

Where do news organizations find these humans? On the Day 2 panel, Noelle Silver, vice president of digital technology for National Public Radio, said it’s most likely from inside.

“NPR has a bunch of developers trying to solve the problems we surfaced here,” she said. “The problem is these are not home-grown, longtime machine learning people. We have to train them.

“And ethics is not a required subject for any of these people,” Silver continued. “More importantly, we’re not going to hire new people. We’re going to train these people. It’s not even academia that is going to handle this. It is a company. How many of us in executive levels of leadership are realizing we are going to have to train these people?”

Advanced computing technologies in the practice of journalism come with other potential threats, including algorithmic biases and unethical uses by bad actors.

“Algorithmic bias is being addressed. No more is it the elephant in the room — everybody’s looking at it,” said Shweta Jain, associate professor, John Jay College of Criminal Justice. “As an academic I can tell you, if we present something it goes through peer review and we will be asked about [bias]. It’s not solved but it is being addressed, a first step.

“On the other side of that, we build a thing, and I do it all the time, not knowing what I’m building,” Jain said. “We solve a problem and are very excited about it, but we have to take a step back and ask, ‘How are the bad guys going to use it?’”

To the job at hand: Applying design thinking to AI applications in news

The two panel discussions set the stage for the core work of NewsLab ’20: Organizers put people of disparate backgrounds and experience together in 10 small groups, each instructed to follow a design-thinking framework that narrows to a “how might we …?” question and then to propose answers that may rely on advanced computing technologies.

“Our goals together are to generate new ideas for novel collaborations,” Hendrix said. “And we must consider the implications for local and regional organizations that have fewer resources.

“We’re looking for not just technology solutions. We see a handful of technologies coming down the pike that are going to change the way we do journalism and the business of news media,” Hendrix said. “How do we create collaborative structures so that we can better take advantage of those opportunities?”

A board full of sticky notes — result of one NewsLab ’20 group’s design thinking exercise to develop a “how might we …?” question. (Photo: Jo Chiang / NYC Media Lab)

With those instructions, each group spent the bulk of both days working through:

  • System mapping the local media landscape, with a potential primary beneficiary for AI applications in the center
  • Developing a problem or opportunity to address (the “how might we …?” question)
  • Describing and sketching possible solutions, and potential collaborators toward the solutions
Members of one of the small groups review their systems map work. (Photo: Jo Chiang / NYC Media Lab)

Themes that emerged

The NewsLab ’20 experts panel offers feedback on the group presentations. From left, Jed Williams, Ben Monnie, Mark Hansen, Noelle Silver. (Photo: Jo Chiang / NYC Media Lab)

At the end of NewsLab ’20, all 10 groups reconvened and presented their ideas to a panel of industry leaders (Cheung; Hansen; Silver; Ben Monnie, director, Google News Initiative; and Jed Williams, chief strategy officer of LMA). Rather than adopting a Shark Tank-like “invest-or-out” posture, the feedback panel considered and debated possible permutations and implications of the ideas as the groups shared them.

Eight of the 10 groups focused on journalists and news organizations as the main beneficiaries or users of the ideas they devised, including:

  • Using AI to surface information or story possibilities that news organizations might otherwise miss
  • Improving diversity and representation in the sources used for news reporting
  • Using AI to enrich background information and context for news stories
  • Creating trust certifications for AI tools and the news organizations using them
  • Forming collaborative consortiums for economy of scale and better governance. Several groups converged on concepts such as collaborative development of large sets of local data, and providing and sharing advanced news technology resources and consulting.

The other two groups focused on information consumers:

  • Using play experiences (toy- or game-like) enabled by AI to deepen engagement in local news, and improve critical thinking skills
  • Forming communities across industries and interests, centered on data journalism tools and using libraries as the “town halls”

What comes of these ideas?

“Our hope is that, after this, we will continue to try to cultivate some of these,” Hendrix said. “Some of you will finish today and say, ‘That was nice. Nice to meet you.’ Some may want to see what becomes of this, and pursue the process further.”

Jed Williams, LMA chief strategy officer, contributed to this report.