As the U.S. presidential election looms, Facebook announced this week that it will remove manipulated media designed to trick the average user, but stressed that the policy will not extend to content intended as parody or satire.

While experts say deepfakes are still largely dominated by pornography, analysis suggests they have already been used to destabilize politics.

Monika Bickert, Facebook’s vice president of global policy management, said the company wanted to get ahead of the issue now, with the 2020 election on the horizon.

“While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases,” she wrote.

Often created with “deep learning” technology, deepfakes convincingly tamper with video and audio to dupe viewers of the doctored material. In many cases the results spread quickly online, and in recent years the tools behind them have become markedly more user-friendly.
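To sketch how this works: the technique popularized by early face-swap tools trains two autoencoders that share a single encoder, so the faces of both people map into one common latent space; swapping in the other person’s decoder at inference time renders their face with the original subject’s pose and expression. Below is a minimal, hypothetical PyTorch illustration of that shared-encoder, dual-decoder design; every module name and dimension is invented for clarity and does not correspond to any specific tool’s code.

```python
# Minimal sketch of the shared-encoder, dual-decoder autoencoder design
# behind early face-swap tools. All names and sizes are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Compress a 64x64 RGB face crop into a 256-dim latent vector.
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
            nn.Linear(1024, 256),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Reconstruct a face crop from the shared latent space.
        self.net = nn.Sequential(
            nn.Linear(256, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()                          # shared by both identities
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per face

# Training (sketch): reconstruct each person's faces through the shared
# encoder and their own decoder, e.g.
#   loss_a = mse(decoder_a(encoder(faces_a)), faces_a)
#   loss_b = mse(decoder_b(encoder(faces_b)), faces_b)

# The swap: encode a frame of person A, decode with B's decoder,
# yielding B's face carrying A's pose and expression.
frame_a = torch.rand(1, 3, 64, 64)
swapped = decoder_b(encoder(frame_a))
```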

In one instance, face-swap technology was integrated into a Chinese mobile application that later went viral across social media. Another project tested the limits of Facebook’s policy by making a deepfake of Mark Zuckerberg. Podcaster Joe Rogan’s voice was also realistically mimicked.

The full political ramifications have yet to be seen, but House Intelligence Committee chair Adam Schiff previously warned that the technology could soon be abused to fuel misinformation and confusion.

“Thinking ahead to 2020 and beyond, one does not need any great imagination to envision even more nightmarish scenarios that would leave the government, media, and the public struggling to discern what is real and what is fake,” the top Democrat said last June, Politico reported.

There is already precedent for covertly edited video affecting U.S. politics. As noted in a study on deepfakes published last year by cybersecurity firm Deeptrace, House Speaker Nancy Pelosi featured in a video that had been slowed down in a way that made her words appear slurred. The manipulated clip was shared by the Twitter account of President Donald Trump.
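Notably, the Pelosi clip involved no artificial intelligence at all, just a change in playback speed, an edit that is trivial to reproduce with off-the-shelf software. The following hypothetical sketch uses the Python library moviepy; the file names are placeholders:

```python
# Slowing a clip to 75% speed -- the kind of crude edit seen in the
# Pelosi video -- requires no machine learning.
from moviepy.editor import VideoFileClip
import moviepy.video.fx.all as vfx

clip = VideoFileClip("speech.mp4")           # placeholder input file
# A speed factor below 1.0 slows both the video and its audio track,
# dragging out the speaker's words.
slowed = clip.fx(vfx.speedx, 0.75)
slowed.write_videofile("speech_slowed.mp4")  # placeholder output file
```

Because no AI is involved, a clip doctored this way would appear to fall outside the removal criteria Facebook announced, landing instead with fact-checkers.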

“Without defensive countermeasures, the integrity of democracies around the world are at risk,” Deeptrace founder and CEO Giorgio Patrini noted in the September report’s foreword.

Facebook said it will remove misleading content if it meets both of the following criteria:

It has been edited or synthesized—beyond adjustments for clarity or quality—in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.

It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
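Read together, the two conditions form a conjunction: a video must be both misleadingly edited and AI-generated, and not parody or satire, before it qualifies for removal. A hypothetical encoding of the rule as stated, with field names invented purely for illustration:

```python
from dataclasses import dataclass

# Hypothetical model of a reviewed video; all field names are invented.
@dataclass
class ReviewedVideo:
    misleadingly_edited: bool  # criterion 1: edits an average person wouldn't spot
    ai_synthesized: bool       # criterion 2: AI/ML merged or superimposed content
    parody_or_satire: bool     # the stated exemption

def qualifies_for_removal(video: ReviewedVideo) -> bool:
    # Both criteria must hold, and the parody/satire carve-out must not apply.
    return (video.misleadingly_edited
            and video.ai_synthesized
            and not video.parody_or_satire)
```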

Videos that don’t meet the criteria for outright removal will still be reviewed by third-party fact-checkers, but the penalties may be more nuanced, Facebook said.

Bickert explained: “If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. People who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.

“This approach is critical to our strategy and one we heard… from our conversations with experts. If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false we’re providing people with important information and context.”