Deepfake porn videos of American pop singer Taylor Swift have spread on social media. The controversy began when the videos circulating on X (formerly Twitter) drew more than 45 million views and 24,000 reposts within 17 hours. Many fans voiced their concern, and more such posts soon followed.
Taylor Swift. Image Credit: Facebook/Taylor Swift
Fans and political leaders have demanded that legislation be fast-tracked to criminalize the creation and promotion of deepfake videos. Last November, a deepfake video of actress Rashmika caused a similar controversy in India. Delhi Police registered a case and arrested four people who had circulated the video. Deepfakes then became a subject of wide debate, and the central government intervened in the matter.
One of those arrested, Naveen, stated that he is a big fan of Rashmika and runs a fan page in her name on social media. The deepfake video was first posted on this page. It was created using a video of Sara Patel, a British-Indian social media influencer, and went viral within minutes of being posted.
Behind the deepfake
Deepfakes are made with machine learning models trained on a large dataset of a person's images and videos. Once such a model is properly trained, it can generate images and videos that are nearly indistinguishable from the real thing. The process of creating a deepfake usually involves two main components.
Image Credit: Rifrazione_foto/shutterstock
Autoencoders: These are neural networks trained on a large dataset of images or audio of a target person. The encoder part learns to compress the target’s data into a hidden representation, while the decoder learns to reconstruct the data from the hidden representation.
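As a rough illustration of that encoder–decoder idea, here is a minimal PyTorch sketch. The layer sizes, the 64x64 face-crop assumption and the training step are illustrative choices for this article, not details of any real deepfake tool.

```python
# Minimal autoencoder sketch: compress a face crop to a hidden
# (latent) vector, then reconstruct it. Sizes are illustrative.
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        # Encoder: flattened 64x64 RGB face crop -> latent vector
        self.encoder = nn.Sequential(
            nn.Linear(64 * 64 * 3, 1024),
            nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )
        # Decoder: latent vector -> reconstructed face crop
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, 64 * 64 * 3),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training minimizes reconstruction error on the target person's faces.
model = FaceAutoencoder()
faces = torch.rand(8, 64 * 64 * 3)  # placeholder batch of face crops
loss = nn.functional.mse_loss(model(faces), faces)
loss.backward()
```

In face-swap deepfakes, a shared encoder is typically paired with a separate decoder per identity, so a face encoded from one person can be decoded as another.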
Generative Adversarial Networks (GANs): These are two competing neural networks, a generator and a discriminator. The generator tries to produce realistic fake content, while the discriminator tries to distinguish real content from fake. The two networks improve each other iteratively, leading to ever more realistic deepfakes.
Credit: RapidEye/iStock
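Below is a minimal sketch of that generator-versus-discriminator loop, again in PyTorch. The network sizes, learning rates and the placeholder data are assumptions made for illustration only.

```python
# Toy GAN loop: the discriminator learns to tell real from fake,
# the generator learns to fool it. Hyperparameters are illustrative.
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 64 * 64 * 3

generator = nn.Sequential(
    nn.Linear(latent_dim, 1024), nn.ReLU(),
    nn.Linear(1024, img_dim), nn.Tanh(),   # fake image from random noise
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 1),                    # real/fake score (logit)
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_images = torch.rand(16, img_dim) * 2 - 1  # placeholder real batch
for step in range(100):
    # 1) Discriminator step: separate real images from generated ones.
    fake_images = generator(torch.randn(16, latent_dim)).detach()
    d_loss = (bce(discriminator(real_images), torch.ones(16, 1)) +
              bce(discriminator(fake_images), torch.zeros(16, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Generator step: make the discriminator call fakes "real".
    fake_images = generator(torch.randn(16, latent_dim))
    g_loss = bce(discriminator(fake_images), torch.ones(16, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```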
What is it used for?
The reality is that this technology is misused most often for porn videos. AI firm DeepTrace found about 15,000 deepfake videos online in September 2019, nearly double the number from nine months earlier. While spotting deepfakes can be challenging, there are some signs to look out for:
∙ Facial discrepancies: Small differences may be visible around the face, especially in the hair, ears and jawline. Deepfakes may struggle to keep highlights and shadows consistent as the face moves and speaks.
∙ Eyes that don't blink: Real people blink naturally and frequently, while deepfakes may keep the eyes open for unnaturally long periods (a rough blink-rate check is sketched after this list).
∙ If you see a suspicious video, search the internet to see whether there is any information about it.
∙ Check whether reliable media outlets have reported the news or events mentioned in it.
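As a rough sketch of the blink-rate idea, the "eye aspect ratio" (EAR) is a well-known measure from facial-landmark research: it drops when the eye closes. The code below assumes eye landmarks have already been extracted (for example with a facial-landmark library, not shown here); the threshold and layout follow the common 68-point landmark scheme and are illustrative.

```python
# Rough blink-rate check using the eye aspect ratio (EAR).
# Landmark extraction is assumed to have been done elsewhere.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of 6 (x, y) landmarks around one eye, ordered as in
    the common 68-point facial-landmark scheme."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def blinks_per_minute(ear_series, fps, threshold=0.2):
    """Count open-to-closed transitions (EAR dipping below the
    threshold) and convert to a per-minute rate. An unusually low
    blink rate is only one weak signal, not proof of a deepfake."""
    closed = np.asarray(ear_series) < threshold
    blink_count = np.count_nonzero(closed[1:] & ~closed[:-1])
    minutes = len(ear_series) / fps / 60.0
    return blink_count / minutes if minutes > 0 else 0.0
```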
(Representative image by magann/istockphoto)
Are deepfakes always dangerous?
No. Many are entertaining and some are genuinely helpful. Voice-cloning deepfakes can restore a voice to people who have lost theirs to disease, and deepfake videos can liven up galleries and museums.