I was recently invited, along with other local news publishers and editors, to participate in a round of Beer & Politics at Liberation Brewing. The topic: The fourth estate in the age of doubt. We talked about our practices as media organizations, and what it’s like to work in the news at a time when the media is so often the subject of ire in national discourse.

At the event, I emphasized that social media is both a boon and a burden for news outlets. While it provides a platform to reach greater audiences than ever before, it is also frustratingly siloed. Social media algorithms feed users the sort of information they are most likely to click on, based upon what they’ve clicked on or mentioned in the past. Content they’re already interested in, and more prone to agree with, is what they see first. Thus, social media echo chambers are born. And we each live in our own, largely through the social media apps on our phones, day by day.
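To make that mechanism concrete, here is a minimal, purely hypothetical sketch – not Facebook’s code, and not any real ranking formula, just the general shape of a feed that orders posts by how often a user has already clicked on each topic:

```python
# Toy illustration of click-history personalization (hypothetical, not any real feed).
from collections import Counter

def rank_feed(candidate_posts, click_history):
    """Order posts so topics the user has clicked before come first.

    candidate_posts: list of (post_id, topic) tuples
    click_history:   list of topics the user previously clicked
    """
    affinity = Counter(click_history)  # more past clicks on a topic -> higher weight
    return sorted(candidate_posts, key=lambda post: affinity[post[1]], reverse=True)

# A user who mostly clicks one political slant sees more of it first;
# the unfamiliar viewpoint sinks to the bottom of the feed.
feed = rank_feed(
    [("a", "politics-left"), ("b", "politics-right"), ("c", "gardening")],
    ["politics-left", "politics-left", "gardening"],
)
print([post_id for post_id, _ in feed])  # ['a', 'c', 'b']
```

Run that loop over months of clicks and the feed converges on what the user already agrees with – the echo chamber, in a dozen lines.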

Last week, as I watched the two-part Frontline documentary on Facebook, I felt a twinge of nausea when Frontline’s reporters confirmed, to a dizzying degree, that Facebook’s algorithms do in fact create “social media echo chambers,” and that these siloed information pools, coupled with the interference of malevolent actors, have been directly linked to (if not flat-out created) massive cultural divisions and even violence in multiple countries.

Wael Ghonim, an Egyptian who used Facebook as a tool to help spur an uprising in his country in 2011, told Frontline that soon after the success of the uprising, the tool he had used to drive it backfired. According to Ghonim and detailed reporting by Frontline, Facebook engendered the rapid spread of fake news and inflammatory rhetoric in Egypt, which ultimately led to mass violence.

Frontline also linked Facebook to the inflammation of cultural divisions, antagonistic rhetoric and, ultimately, violence in Myanmar. The platform was used to spread hate speech and incite violence – violations of its terms of service – against the country’s Muslim Rohingya minority. The company was not unaware of its role in what would become a human rights and refugee crisis: Frontline interviewed a tech specialist living in Myanmar who, in 2015, made a presentation to the company about its role in the spreading crisis.

Maria Ressa, head of the Philippine news outlet Rappler Media, told Frontline she had been warning Facebook since 2016 that its platform was being used by the country’s controversial leader, Rodrigo Duterte, to spread misinformation in support of his war on drugs. The government’s fight against drugs has resulted in the deaths of thousands of Filipinos – many of them considered extrajudicial killings, according to the report.

As the Frontline report documented, Facebook’s news feed algorithm – the code that determines what content each user sees – has been shown to facilitate the spread of content expressing extreme viewpoints, because that is the content that provokes the strongest reactions and, as a result, is shared the most.
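That dynamic is easy to caricature in code. Here is another hypothetical sketch – again, not an actual news feed formula, and the weights are invented – of a score that counts shares and strong reactions far above quiet approval, so the most inflammatory post wins the top slot:

```python
# Hypothetical "engagement-first" scoring, for illustration only.
def engagement_score(post):
    # Shares and strong reactions count far more than quiet likes,
    # so posts that provoke outrage tend to outscore measured ones.
    return 3.0 * post["shares"] + 2.0 * post["reactions"] + 1.0 * post["likes"]

posts = [
    {"text": "measured local-news report", "likes": 120, "reactions": 10, "shares": 5},
    {"text": "inflammatory rumor", "likes": 40, "reactions": 90, "shares": 60},
]
posts.sort(key=engagement_score, reverse=True)
print(posts[0]["text"])  # the rumor takes the top of the feed
```

The numbers are made up, but the incentive they describe is the one Frontline documented: what angers us spreads fastest.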

Russia has gained notoriety for taking advantage of this phenomenon – first by spreading fake news and sowing divisions to facilitate its power play against Ukraine and its eventual takeover of Crimea, and then by attempting to influence the 2016 U.S. presidential election.

To respond to Frontline’s questioning on these matters, Facebook trotted out executives with titles like “vice president of social good,” who all repeated the same canned lines – and were called out for it in a simultaneously funny and perturbing montage by the documentary’s editors. Essentially, Facebook’s leadership was too focused on the firm’s potential positive impact on society to see its possible negative implications – it was “too slow” to act, they all said.

But, between watching the Frontline documentary and reading last week’s extensive New York Times piece, “Delay, Deny and Deflect: How Facebook’s Leaders Fought Through Crisis,” it’s quite easy to detect that these executives are, well, full of it.

The gist of both reports is this: You thought you were disturbed by the company’s data-sharing practices? Those should be the least of your worries.

Most Americans use Facebook, and most of them do so daily, according to a March report by the Pew Research Center. Sixty-eight percent of U.S. adults use the application, roughly three-quarters of those users log in daily, and about 51% check it multiple times per day.

According to Statista, which aggregates data from thousands of reports, about 58% of all Facebook users are aged 18 to 34. By our definition, drawn from a multitude of sources, Millennials are typically considered (in the widest range) to be between the ages of 20 and 38. Statista has included a bit of Generation Z in its mix, but if you remove those users and add the tail end of Millennials it’s missing, I’m guessing the share would balance out to about the same percentage.

It was a Millennial – Mark Zuckerberg, now 34 – who invented Facebook. And he did so for others in his age group – college students, specifically. Facebook was born just before I went to college. Shortly thereafter, Zuckerberg opened up the application to all ages. But even as his social media empire expanded beyond anyone’s wildest dreams – it now boasts more than two billion users worldwide – the core group of users remains the Millennial generation.

What the Frontline report revealed about Facebook affects everyone who uses it – and, due to its reach, those who don’t. But because most of its users are Millennials, and because we adopted the app at such a formative time in our lives that it is now ingrained in our day-to-day routines, I worry the most for us, and for the younger generations to follow. Not to downplay the impact on older generations, but the issue here is that we’re going to be around a lot longer (no offense, folks) – and we’re not likely to stop using Facebook along the way.

Of course Facebook isn’t all bad. It’s full of family and friends, and videos of cute dogs and silly babies, and fundraisers for charity, and stories of human triumph. But its capacity for facilitating cultural divisiveness to the detriment of society is proven, real, and powerful.

The question we need to ask ourselves is this: are we going to continue using a platform that has been shown to encourage and inflame the worst aspects of humanity – false gossip, tribalism, and, ultimately, hate speech and violence – in the same way we do now? Or are we going to demand change and, in the meantime, try to change our own online behavior? Are we going to keep feeding the beast? Or are we going to put it on a leash?