In a blog post published Thursday, Facebook described how an artificial-intelligence system would, over time, teach itself to identify key phrases that were previously flagged for being used to bolster a known terrorist group.
The same system, the company wrote, could learn to identify Facebook users who associate with clusters of pages or groups that promote extremist content, or who return to the site again and again, creating fake accounts to spread such content online.
“Ideally, one day our technology will address everything,” Ms. Bickert said. “It’s in development right now.” But human moderators, she added, are still needed to review content for context.
Brian Fishman, Facebook’s lead policy manager for counterterrorism, said the company had a team of 150 specialists working in 30 languages doing such reviews.