In some cases, Facebook's automated systems did a good job finding and flagging content before users could report it.
The increased transparency comes as the Menlo Park, California, company tries to make amends for a privacy scandal triggered by loose policies that allowed a data-mining company with ties to President Donald Trump's 2016 campaign to harvest personal information on as many as 87 million users.
Facebook also increased the amount of content taken down by using new AI-based tools to find and moderate content without needing individual users to flag it as suspicious.
The social network estimates that it found and flagged 85% of that content before users saw and reported it, a higher share than previously, which it attributes to technological advances.
Adult nudity and sexual activity: Facebook says 0.07% to 0.09% of views contained such content in Q1, up from 0.06% to 0.08% in Q4.
Facebook says the number of views of terrorist propaganda on its platform from organisations including ISIS, al-Qaeda and their affiliates is extremely low.
Tuesday's self-assessment - Facebook's first breakdown of how much material it removes - came three weeks after Facebook tried to give a clearer explanation of the kinds of posts that it won't tolerate.
The amount of such content taken down was up by three quarters from 1.1 million pieces during the previous quarter because of improvements in Facebook's ability to find it using photo-detection technology. The company found and flagged 95.8% of such content before users reported it.
According to the numbers, which cover the six-month period from October 2017 to March 2018, Facebook's automated systems quickly remove millions of pieces of spam, pornography, graphic violence and fake accounts - but hate-speech content, including terrorist propaganda, still requires extensive manual review to identify.
Guy Rosen, Facebook's vice president of product management, even specifically admitted that "for hate speech, our technology still doesn't work that well", so Facebook employs human review teams to fill the gaps - an arrangement that feeds into wider concerns that the company is over-censoring.
Facebook's new Community Standards Enforcement Report "is very much a work in progress and we will likely improve our methodology over time", Chris Sonderby, VP and deputy general counsel, wrote in a blog post about the report.
Facebook took action on 2.5 million pieces of content over hate speech, but doesn't have view numbers for it, as the company is still "developing measurement methods for this violation type". "It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important," Rosen said, adding that the inaugural report was meant to "help our teams understand what is happening" on the site.
Facebook shares slid as much as 2% Tuesday morning after it announced it had disabled 583 million fake accounts over the last three months.
Spam: Facebook says it took action on 837 million pieces of spam content in Q1, up 15% from 727 million in Q4.