Live streamed murders. Terrorists recruiting new members. Hate groups organizing. Liberals and conservatives sealing themselves off in echo chambers.

With nearly 2 billion people around the world checking in monthly, it makes sense that Facebook is dealing with some very sticky issues. The social network is facing increasing pressure to address them head on.

On Thursday, CEO Mark Zuckerberg announced a new vision for the company. He’s shifting its focus from connecting individuals to building communities, namely by getting people to join more Facebook groups.

The change is summed up in the company’s new mission statement: “Give people the power to build community and bring the world closer together.”

But how much of the company’s problems can really be fixed by the new direction?

  • Filter bubbles

Facebook has been accused of contributing to filter bubbles – where people see only news and opinions that reinforce their existing beliefs and biases. It’s not just Facebook’s algorithms at work: the bubbles are also created by the friends we choose to have, the people we decide to mute, and the stories we click.

Pushing people to get more involved in groups could exacerbate the problem. Users could end up spending more time in groups organized around a shared political view or belief.

Zuckerberg has denied that filter bubbles are widespread. He also believes membership in groups will expose people to more opinions, not fewer, by helping “people meet new people and get new perspectives and broaden their horizons.”

  • Terrorism recruiting

On Facebook, groups can be set to Secret, meaning users don’t see them in search results. There are good reasons for secrecy – namely safety and privacy – but dangerous organizations can also use the groups as bases for recruiting new members.

Facebook recently outlined its plans to combat terrorism on the social network. It’s using artificial intelligence to scan images, posts and profiles to identify and remove bad actors. The company also employs 150 people focused on counter-terrorism.

“Terrorist recruiting. That is something that we want zero of. We try to make it as difficult as possible,” Zuckerberg said. “Even if no one reports it, we have systems that go out and try to flag that content for our community [monitors] … we’ll do more and more of that over time, as AI gets better.”

  • Fake news

The now-overused phrase “fake news” originally referred to made-up news stories that circulated on Facebook. The company has taken multiple steps to crack down on questionable stories: working with fact-checking organizations, hiding spammy links, and using AI to identify fake accounts spreading propaganda.

The move to a more groups-based experience could mean people get fewer articles from their news feed, where many publishers post directly. They might see less news overall, including fake news, or a more curated selection of stories from their groups. Facebook has not said whether, or how, its tools for fighting fake news carry over to stories posted in groups.

  • Hate speech

The focus on groups as a positive tool with the power to change the world overlooks how people use them for negative causes. Hate groups, like white power organizations, use Facebook groups openly and are unlikely to disappear. Zuckerberg has said he values free speech on the platform and that Facebook interferes only when something goes “way over the line,” like bullying or the threat of real-world violence. Facebook often relies on regular users flagging objectionable content, but that’s less likely to happen inside closed groups.

  • Murder, violence and self-harm

In April, a Cleveland man used Facebook to share a video of himself shooting a 74-year-old man. The video was viewable for two hours before it was taken down.

People have used Facebook Live, the company’s live video streaming tool, and regular uploads to share videos of murders, beatings, police violence and suicide. It’s a tricky issue for the company, especially when the site is used to document potential civil rights violations.

Facebook plans to use artificial intelligence to identify violent videos early. It is also deploying 7,500 human content moderators to review videos as they’re flagged. As with hate speech, videos shared in private groups might evade Facebook’s moderators for longer than those posted in a news feed.