It’s time to talk about deepfake technology. You know, those AI tricks that fabricate videos or audio, making it seem like someone said or did things they never did? It’s blurring reality so badly, you have to ask: “What if tomorrow’s viral clip of a celeb or politician is total BS?” Or scarier: what if it’s targeting you or your family?
This tech is evolving at breakneck speed, and it’s shockingly easy to misuse. What used to need pro-level gear and $100K worth of GPU hardware now happens in minutes with free apps and AI models. Deepfake content jumped from about 500,000 files in 2023 to a projected 8 million in 2025, and the number of deepfakes detected globally across all industries increased 10x in 2023 (eftsure.com). Open-source tools and cheap computing mean anyone—hackers, trolls, or kids—can harness it. This is a recipe for problems.
Misuse is rampant, starting with fraud that’s hammering Canadians. Canadians have lost $103 million to deepfake scams in 2025 (Mitchell Dubros), and North American cases are up 1,740%. Meanwhile, 95% of Canadian companies say deepfakes have increased their fraud risk. Deepfakes now account for 6.5% of all fraud attacks, a 2,137% increase since 2022. One example? A firm lost $25 million to a deepfake CEO scam. Would you catch a deepfaked loved one asking for cash?
But the real horror is this: none of this conduct is, in itself, a criminal offence.
Then there’s harassment, especially against young people, and it’s leading to tragedy. Deepfakes fuel bullying, extortion, and non-consensual porn—mostly targeting women and girls. In Canada, a pharmacist was linked to the world’s most notorious deepfake porn site, and Alberta police are warning about kids sharing AI fakes. The receipts: a student in Faridabad died by suicide over deepfake blackmail. Twitch streamer QTCinderella (Blaire) faced public humiliation in 2023, and Q1 2025 alone saw 179 deepfake incidents, up 19% from all of 2024 (keepnetlabs.com). Sextortion using deepfakes has driven victims to suicide amid blackmail and isolation. If you’re a parent, ask yourself: how protected are our kids from this digital nightmare? The law offers no protection against this conduct – at least not directly.
Bill C-63 in Canada targets PLATFORM OPERATORS – it stops short of the essential step of making non-consensual deepfakes of others a crime. If someone deepfakes your kid and plasters it on TikTok, it’s too late: the harm is done. We need to prevent this in the first place, with severe consequences under law.
Deepfakes erode trust—in elections (think of the fake videos of Canadian politicians), in relationships, in everything. Youth, who are always online, suffer most from amplified cyberbullying that can end in depression or worse. Europol warns that deepfakes fuel harassment, extortion, and fraud, and that detection lags behind the tech.
Call To Action
My plea: governments must legislate now to shield the public, especially young people. Canada is lagging behind the US on this. We’ve got piecemeal measures like the Elections Act’s rules on campaign fakes and Bill C-63’s start on online harms, but no comprehensive law tackles everyday non-consensual deepfakes. We need it classified as a serious criminal offence under the Criminal Code of Canada—not a slap on the wrist, but hefty fines and jail time for creating deepfakes of anyone without consent.
I am calling on our Canadian Government to enact changes to the Criminal Code of Canada to specifically make non-consensual deepfakes a serious criminal offence, with stiff fines and jail time.
Act now: contact your MP and demand this change. Share this, educate your circle, and teach kids to spot fakes. Platforms must remove deepfakes fast, and schools need education programs. Why risk more lives? Let’s make deepfakes a crime that bites back—before it’s too late. What are you waiting for?
My Proposed Legislation
Non-consensual deepfakes
162.3 (1) In this section,
deepfake means a visual or audio recording that is created or altered using artificial intelligence or other technology in a manner that would cause a reasonable person to believe it depicts a person engaging in conduct or speech that did not occur, and includes any synthetic representation that is realistic in appearance or sound;
distribute includes to transmit, sell, advertise, make available or possess for the purpose of distribution.
(2) Everyone commits an offence who, without the express consent of the person depicted, knowingly creates, distributes or possesses a deepfake of that person if
(a) the deepfake depicts the person in an intimate context, including nudity, exposure of genitals, or explicit sexual activity; or
(b) the creation or distribution is intended to cause harm, including emotional distress, reputational damage, or incitement to violence against the person.
(3) For the purposes of subsection (2), consent must be informed, voluntary and specific to the creation and use of the deepfake, and may be withdrawn at any time; however, no consent is obtained where agreement is procured through the abuse of a position of trust or authority, or where the person is incapable of consenting.
(4) Subsection (2) does not apply to
(a) a deepfake created with the informed consent of the person depicted, where that consent has not been revoked.
Punishment
(5) Everyone who commits an offence under subsection (2) is guilty of
(a) an indictable offence and liable to imprisonment for a term of not more than five years; or
(b) an offence punishable on summary conviction and liable to a fine of not less than $1,000 and not more than $25,000, or to imprisonment for a term of not more than two years less a day, or to both.
(6) In determining the sentence, the court shall consider as aggravating factors
(a) whether the offence involved a minor or vulnerable person;
(b) the extent of harm caused to the victim; and
(c) whether the offender profited from the offence.