One in four people think there is nothing wrong with creating and sharing sexual deepfakes, or they feel neutral about it, even when the person depicted has not consented, according to a police-commissioned survey.
Unfortunate, given that the point of the survey was never to find out whether this is a real problem, but, like so many other worthless studies, to provide cover for what the activists want to do anyway.
The survey of 1,700 people commissioned by the office of the police chief scientific adviser found 13% felt there was nothing wrong with creating and sharing sexual or intimate deepfakes – digitally altered content made using AI without consent. A further 12% felt neutral about the moral and legal acceptability of making and sharing such deepfakes.
What to do, what to do? The answer's obvious – push on regardless.
Det Ch Supt Claire Hammond, from the national centre for VAWG and public protection, reminded the public that “sharing intimate images of someone without their consent, whether they are real images or not, is deeply violating”.
How very dare you ignorant peasants disagree! I’ll tell you worthless subjects what you should think!
Commenting on the survey findings, she said: “The rise of AI technology is accelerating the epidemic of violence against women and girls across the world. Technology companies are complicit in this abuse and have made creating and sharing abusive material as simple as clicking a button, and they have to act now to stop it.”
Blaming technology companies for men harassing women is like blaming car companies for enabling getaway drivers.
She urged victims of deepfakes to report any images to the police. Hammond said: “This is a serious crime, and we will support you. No one should suffer in silence or shame.”
But if you mention a racial or homophobic slur in your texts, don’t expect us not to notice – after all, a hate crime is a tick in the box!