The most common malicious uses of deepfakes are non-consensual explicit content (deepfake pornography), financial fraud and scams, and political misinformation.
According to Statista (2024), 46% of fraud experts have encountered synthetic identity fraud, 37% voice deepfakes, and 29% video deepfakes. In January 2024, fraudsters used deepfake technology to impersonate a company's CFO on a video call, tricking an employee into transferring $25 million.
Deepfake texts may exhibit unnatural syntax, repetitive phrasing, or inconsistent tone. One example is overly formal or robotic language: a fake news article might use overly complex vocabulary in an unnatural way (e.g., "The meteorological conditions precipitated a cataclysmic event").
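To make the "repetitive phrasing" signal concrete, here is a minimal heuristic sketch in Python that measures what fraction of word trigrams repeat within a text. The n-gram size and any threshold you apply are arbitrary choices, and no simple statistic like this reliably detects AI-generated text on its own.

```python
from collections import Counter

def repetition_score(text: str, n: int = 3) -> float:
    """Fraction of word n-grams that occur more than once.

    Higher values suggest repetitive phrasing, one weak signal of
    machine-generated text. A heuristic only, not a detector.
    """
    words = text.lower().split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams)

# Highly repetitive text scores near 1.0; varied prose scores near 0.
sample = "The meteorological conditions precipitated a cataclysmic event. " * 4
print(f"repetition score: {repetition_score(sample):.2f}")
```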
Some deepfaked images contain clear spatial and visual inconsistencies, such as differences in noise patterns, or colour differences between edited and unedited portions. Video and audio deepfakes, meanwhile, can be given away by time-based inconsistencies, such as mismatches between speech and mouth movements.
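One classic way to surface those noise and compression differences is error level analysis (ELA): recompress the image and look at what changed. The sketch below, using the Pillow library, is a minimal illustration of the idea; the file name is hypothetical, and the output map still needs human interpretation.

```python
import io

from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Recompress an image as JPEG and return the per-pixel difference.

    Edited regions often recompress differently from the untouched
    parts, so they can stand out in the difference map. A classic
    forensic heuristic, not proof of manipulation on its own.
    """
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)
    diff = ImageChops.difference(original, recompressed)
    # Amplify the (usually faint) differences so they are visible.
    return diff.point(lambda value: min(255, value * 10))

# Usage (hypothetical file name):
# error_level_analysis("suspect.jpg").save("suspect_ela.png")
```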
Yes, creating and sharing deepfakes is increasingly illegal, especially when they are used for sexual exploitation, fraud, harassment, or to impersonate real people without consent. Many jurisdictions (such as Australia's New South Wales) have enacted specific laws carrying jail time and fines, though laws vary by location and context. General deepfakes may fall under existing laws such as defamation or fraud, but AI-specific deepfake laws target non-consensual intimate images and other harmful uses, and penalties are rising.
While existing image-based offences in the Summary Offences Act already cover deepfakes created by editing or manipulating real images, the new laws will ensure that deepfake images generated entirely by AI or other digital technology are also captured as an offence.
Most Consumers Can't Identify AI-Generated Fakes
The study tested 2,000 UK and US consumers, exposing them to a mix of real and deepfake content. The results are alarming: only 0.1% of participants could accurately distinguish real from fake across all stimuli, which included both images and video.
Your Professor Can Detect AI Writing
Even if AI detection tools don't flag your work, professors often recognize sudden shifts in writing style, generic arguments, or fabricated citations—clear signs of AI-generated work. Bottom line: Relying on AI to write your assignments doesn't guarantee you'll get away with it.
Deepfakes have signs like mismatched lips, robotic voices or blurry visuals. Messaging that feels rushed or asks for odd payment methods may also be a warning sign. If you come across a deepfake or AI scam, take steps to protect yourself. Report the scam, review your privacy settings and freeze your credit if needed.
Tips for Spotting AI-Generated Images
Unusual or Inconsistent Details: AI-generated images often contain small but noticeable detail errors. Encourage students to look for abnormalities like asymmetrical facial features, odd finger placement, or objects with strange proportions.
A deepfake is a form of artificial intelligence (AI) output used to create convincing hoax images, sounds, and videos. The term "deepfake" combines "deep learning" with "fake". Deepfake software compiles hoaxed images and sounds and stitches them together using machine learning algorithms.
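For readers curious what that "stitching" looks like under the hood, the classic face-swap design uses one shared encoder and one decoder per identity; swapping decoders at inference time renders one person's expressions with another person's face. The PyTorch sketch below is purely illustrative: the layer sizes, 64x64 input, and all other details are invented, not taken from any real tool.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: learns identity-agnostic face structure."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 256),  # assumes 64x64 input faces
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs one specific person's face."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 128, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

face_a = torch.rand(1, 3, 64, 64)       # a frame of person A
swapped = decoder_b(encoder(face_a))    # rendered with person B's decoder
print(swapped.shape)                    # torch.Size([1, 3, 64, 64])
```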
What are the Types of AI-Driven Cyberattacks?
In these situations, follow these three steps:
Frequently asked questions about fake social media profiles
A low number of posts, lack of interaction, the absence of a profile picture, a questionable bio, or even a large number of bogus followers could mean you're dealing with a fake account.
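Those red flags lend themselves to a simple checklist. The Python sketch below counts them for a hypothetical scraped profile; the field names and thresholds are assumptions for illustration, not any platform's actual API or calibrated values.

```python
def fake_profile_red_flags(profile: dict) -> int:
    """Count common fake-account warning signs in a profile dict."""
    flags = [
        profile.get("post_count", 0) < 5,               # very few posts
        profile.get("interactions", 0) == 0,            # no replies or likes
        not profile.get("has_profile_picture", False),  # missing avatar
        len(profile.get("bio", "")) < 10,               # questionable bio
        profile.get("followers", 0) > 10_000
        and profile.get("post_count", 0) < 5,           # bogus followers?
    ]
    return sum(flags)

suspect = {"post_count": 2, "interactions": 0, "bio": "", "followers": 50_000}
print(f"red flags: {fake_profile_red_flags(suspect)}/5")  # prints 5/5
```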
Limit your online footprint: Be cautious about what you share online. The less personal information and images available, the harder it is for someone to create a deepfake of you.
The different types of cyber security threats include malware, phishing, ransomware, denial-of-service (DoS) attacks, man-in-the-middle (MitM) attacks, SQL injection, cross-site scripting (XSS), social engineering, password attacks, and insider threats. These threats can have serious effects such as financial loss, data breaches, identity theft, and disruption of services.
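Of the threats listed, SQL injection is the simplest to demonstrate in a few lines. The sketch below uses Python's built-in sqlite3 module to contrast a vulnerable string-built query with a parameterized one; the table and payload are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")
conn.execute("INSERT INTO users VALUES ('bob', 'bob@example.com')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: user input is spliced into the SQL string, so the
# payload rewrites the query logic and every row comes back.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()
print("string-built query returned:", rows)

# Safe: a parameterized query treats the input as data, not SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)  # no match, nothing leaked
```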
How to spot a deepfake
A deepfake usually refers to a highly realistic but fake image, video, or audio of a person saying or doing something they never actually said or did. Deepfakes were around long before generative AI tools became popular.
The 10-20-70 rule for AI, popularized by Boston Consulting Group (BCG), suggests that successful AI adoption is 10% about algorithms, 20% about underlying technology and data, and 70% about people and business processes. It emphasizes change management, role redesign, and workforce training to integrate AI effectively and drive real value, rather than just technical solutions.
Take screenshots of your browser history when searching for references; you can provide these as evidence that you researched the assessment yourself. If you make handwritten notes, or notes in hard-copy books or articles, while completing your assessment, take photos of them before you get rid of them.
Jobs AI can't easily replace involve high emotional intelligence, complex human interaction, creativity, strategic judgment, and physical dexterity. They are found in healthcare (nurses, therapists), skilled trades (electricians, plumbers), education (teachers), emergency services (firefighters, police), creative arts (artists, musicians), and leadership roles (C-suite, HR), where human empathy, nuanced decision-making, and hands-on skills are essential.
Look out for telltale signs of a deepfake video
Be wary of videos that have uneven resolution around the facial features or inconsistencies between the audio and visuals; in some instances, the lip-sync is off. Blurred or distorted areas in the video's background can also be a sign of an altered or deepfake video.
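Blur, at least, is straightforward to quantify. The sketch below uses OpenCV's variance-of-Laplacian measure to score sampled frames; the file name is hypothetical, and a low score only indicates blur, not fakery, so scores are best compared across regions or against known-real footage.

```python
import cv2

def frame_blur_scores(video_path: str, every_n: int = 30) -> list[float]:
    """Variance of the Laplacian for every n-th frame.

    Low variance means little edge detail, i.e. a blurry frame.
    A face region consistently blurrier (or sharper) than its
    background is one of the artifacts described above.
    """
    scores = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())
        index += 1
    cap.release()
    return scores

# Usage (hypothetical file): print(frame_blur_scores("suspect.mp4")[:5])
```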
Be wary of perfection
Another sure-fire way of identifying an AI image is to check whether it looks a little too perfect. AI images often lack the fine detail found in real photographs, giving them an 'airbrushed' look.
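One rough way to quantify that 'airbrushed' smoothness is to check how much of an image's spectral energy sits outside the low-frequency band, since heavily smoothed images concentrate energy at low frequencies. The NumPy sketch below does this; the cutoff is arbitrary, and the ratio is only meaningful relative to known-real photos of similar content.

```python
import numpy as np
from PIL import Image

def high_frequency_ratio(path: str, cutoff: float = 0.25) -> float:
    """Share of spectral energy outside the central low-frequency band.

    An unusually low ratio can hint at missing fine detail, but real
    photos vary widely, so compare against known-real references.
    """
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    ry, rx = int(h * cutoff / 2), int(w * cutoff / 2)
    low_band = spectrum[cy - ry:cy + ry, cx - rx:cx + rx].sum()
    return 1.0 - low_band / spectrum.sum()

# Usage (hypothetical file):
# print(f"high-frequency ratio: {high_frequency_ratio('photo.jpg'):.3f}")
```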
McAfee® Deepfake Detector uses advanced AI detection techniques, including transformer-based Deep Neural Network (DNN) models. The DNN models are expertly trained to detect and notify you when audio in a video is likely generated or manipulated by AI.
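McAfee has not published its model internals, but the general shape of a transformer-based audio detector can be sketched. The toy PyTorch example below embeds spectrogram frames, runs them through a small transformer encoder, and classifies the clip as real or generated; this is emphatically not McAfee's model, and every size and layer choice is invented for illustration.

```python
import torch
import torch.nn as nn

class AudioDeepfakeClassifier(nn.Module):
    """Toy spectrogram-in, real/generated-out classifier.

    NOT McAfee's architecture; sizes are invented for illustration.
    """
    def __init__(self, n_mels: int = 80, dim: int = 128):
        super().__init__()
        self.embed = nn.Linear(n_mels, dim)  # per-frame embedding
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=4, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, 2)        # real vs. generated

    def forward(self, spectrogram):          # (batch, frames, n_mels)
        x = self.embed(spectrogram)
        x = self.encoder(x)
        return self.head(x.mean(dim=1))      # pool over time

model = AudioDeepfakeClassifier()
clip = torch.rand(1, 200, 80)  # 200 mel-spectrogram frames, 80 bins
print(model(clip).softmax(dim=-1))  # untrained: roughly [0.5, 0.5]
```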