Facebook says it is using artificial intelligence to help combat terrorists’ use of its platform. The social media giant is also asking the public for help in debating controversial subjects.

The company’s announcement comes as it faces growing pressure from government leaders to identify and prevent the spread of content from terrorist groups on its massive social network.

Facebook officials said in a blog post Thursday that the company uses AI to find and remove “terrorist content” immediately, before users see it. This is a departure from Facebook’s usual practice of reviewing suspect content only after users report it.

The company also says that when it receives reports of potential “terrorism posts,” it reviews them urgently. In addition, it says that in the rare cases when it uncovers evidence of imminent harm, it promptly informs authorities.

Earlier, Elliot Schrage, Vice President for Public Policy and Communications at Facebook, published a blog post seeking feedback from the public about how to deal with terrorism and what he called “hard questions.”

“As more and more of our lives extend online, and digital technologies transform how we live, we all face challenging new questions — everything from how best to safeguard personal privacy online to the meaning of free expression to the future of journalism worldwide,” he wrote.

“We debate these questions fiercely and freely inside Facebook every day — and with experts from around the world whom we consult for guidance.”

Schrage added that Facebook is “starting a new effort to talk more openly about some complex subjects. We hope this will be a place not only to explain some of our choices but also explore hard questions.”

The “hard questions” include:

  • How should platforms approach keeping terrorists from spreading propaganda online?
  • After a person dies, what should happen to their online identity?
  • How aggressively should social media companies monitor and remove controversial posts and images from their platforms? Who gets to decide what’s controversial, especially in a global community with a multitude of cultural norms?
  • Who gets to define what’s false news — and what’s simply controversial political speech?
  • Is social media good for democracy?
  • How can we use data for everyone’s benefit, without undermining people’s trust?
  • How should young internet users be introduced to new ways to express themselves in a safe environment?

Send your suggestions to hardquestions@fb.com.