US President Donald Trump recently stirred (yet more) controversy by posting an AI-generated video that ends with his predecessor, Barack Obama, being jailed. Politics aside, the most disturbing part is the lifelike quality of the “arrest” sequence. Trouble of this kind was expected once videos began showcasing the capabilities of Google’s AI video tools. Pair this with ever-evolving deepfake videos and images, and you can understand why CISOs are getting increasingly stressed and SecOps teams look increasingly like zombies as the weekend nears. It is time to add an N to WYSIWYG: What You See Isn’t Necessarily What You Get.
Welcome to the (even denser) jungle
It would not be the least bit surprising if security specialists are somewhat unhappy with these technological advancements. Life was much simpler when phishing and spoofing emails gave themselves away with sloppy writing, typos, and (compared to the present) easier-to-detect spoofing attempts. Sure, AI has been very useful for detecting and acting against various types of threats, but the same technology has made access management a harder task.
Why the dense jungle analogy? A jungle is teeming with threats from both air and ground. The terrain can be treacherous, and your path is lined with hazards, from venomous snakes to attacks by startled wild animals. You have to stay alert to every danger, even with years of training and experience. The same is true of organizational security. AI can now recreate a human voice from a sample as short as three seconds, tools like ElevenLabs’ voice cloning are widely available, and the aforementioned new iteration of Google Veo lets bad actors combine video and audio generation to create near-believable deepfakes. The audio recreation part is especially scary: it enables videos of people saying anything at all, which can fuel disinformation campaigns as readily as fraud. All of this points to a terrifying reality: creating deepfakes no longer requires the coding expertise of black hats. Anyone can create and scale deception. A new attack surface has emerged: trust and a sense of reliability.
“Reality is an illusion”
While Albert Einstein was most likely referring to his theory of relativity, the quote acquires new, sinister meaning when it comes to the exploitation of information security through means like deepfakes and near-lifelike AI videos. As stated above, these threats directly exploit the most critical weak point in the cyber kill chain: people. The danger multiplies when the person addressing the mark is someone familiar, and someone holding a position of greater power. Familiarity with the person being faked strikes at the basic firewall of human trust: skepticism. And how many people can really be expected NOT to comply with instructions from someone with more authority than them? Another factor is that, in Lewis Carroll’s words, “everything’s so confusing and upside-down!” Doubt about any video’s authenticity now cuts both ways, and that suspicion is itself a weakness attackers can capitalize on.
A call to action
So what steps can be taken immediately to protect yourself?
- Since such content manipulates employee trust, update the organization’s security awareness programs so employees can spot deepfakes: train them to look for inconsistencies in audio, behavior, and message timing. Also integrate deepfake detection tools into your security stack as a fail-safe (the first sketch after this list shows one way to wire that in).
- Plan for contingencies. Don’t wait for a deepfake video about your company to pop up; decide the likely response to such an event in advance and build a playbook around it.
- Treat identity verification as a security surface. Implement multi-channel authentication for access to important data and the workflows around it (the second sketch after this list illustrates the idea).
- Wargame. Have your security team simulate deepfake-driven attacks to test which mitigation steps should be taken, and how quickly they can be executed.
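
To make the detection-tool item concrete, here is a minimal sketch of routing inbound media through a deepfake detector before it reaches employees. The `DeepfakeDetector` class, its `score()` method, and the 0.7 threshold are hypothetical placeholders; any real product will have its own API and calibration.

```python
# Hypothetical sketch: routing inbound media through a deepfake detector.
# DeepfakeDetector, its score() method, and the thresholds are
# illustrative assumptions, not a real product's API.
from dataclasses import dataclass


@dataclass
class MediaMessage:
    sender: str
    channel: str          # e.g. "email", "slack", "voicemail"
    media_path: str       # path to the attached audio/video file


class DeepfakeDetector:
    """Stand-in for a commercial or open-source detection tool."""

    def score(self, media_path: str) -> float:
        # A real detector would analyze the file; this stub returns
        # a fixed, middling likelihood for demonstration.
        return 0.5


def triage(message: MediaMessage, detector: DeepfakeDetector,
           threshold: float = 0.7) -> str:
    """Return a routing decision for an inbound audio/video message."""
    likelihood = detector.score(message.media_path)
    if likelihood >= threshold:
        # High-likelihood fakes go to the SOC, and the recipient is
        # warned before they can act on the message's instructions.
        return "quarantine_and_alert_soc"
    if likelihood >= threshold / 2:
        # Ambiguous media gets human review rather than silent delivery.
        return "flag_for_review"
    return "deliver"


if __name__ == "__main__":
    msg = MediaMessage(sender="ceo@example.com", channel="email",
                       media_path="/tmp/urgent_request.mp4")
    print(triage(msg, DeepfakeDetector()))  # -> "flag_for_review"
```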
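
And a minimal sketch of the multi-channel verification idea: a high-risk request, say a wire transfer asked for on a video call, is only approved after confirmation over an independent, pre-registered channel. The channel registry, challenge transport, and risk rules below are illustrative assumptions, not any specific product’s workflow.

```python
# Hypothetical sketch: out-of-band confirmation for high-risk requests.
# The registry, challenge transport, and risk rules are illustrative
# assumptions, not a specific product's workflow.
import secrets

# Pre-registered out-of-band contacts, maintained by IT and never taken
# from the request itself (an attacker controls the inbound channel).
REGISTERED_CHANNELS = {
    "alice": {"sms": "+1-555-0100"},
}

HIGH_RISK_ACTIONS = {"wire_transfer", "credential_reset", "vendor_change"}


def requires_out_of_band(action: str) -> bool:
    return action in HIGH_RISK_ACTIONS


def send_challenge(user: str) -> str:
    """Send a one-time code over the user's registered channel (stubbed)."""
    code = secrets.token_hex(3)
    contact = REGISTERED_CHANNELS[user]["sms"]
    print(f"[stub] sending code to {contact} via SMS")
    return code


def approve(action: str, confirm_code: str, sent_code: str) -> bool:
    """Approve only if the code from the independent channel matches."""
    if not requires_out_of_band(action):
        return True
    return secrets.compare_digest(confirm_code, sent_code)


if __name__ == "__main__":
    code = send_challenge("alice")
    # The requester must read the code back from the second channel;
    # a deepfaked video call alone can no longer authorize the transfer.
    print(approve("wire_transfer", confirm_code=code, sent_code=code))
```

The crucial design choice is that the confirmation channel comes from a directory the attacker does not control, never from contact details supplied in the request itself.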
Trust is the new attack vector
Deepfakes are a particularly insidious threat because they strike at a deeply vulnerable part of humanity: trust in structures and people. It is time to start treating trust as a new perimeter. QKS Group’s security wizard Sanket Kadam puts it best:
“Deepfakes undermine the very foundation of identity-based security by exploiting human perception and authority structures. Shifting toward adaptive, risk-aware identity architectures can help avoid the dangers. Integrating behavioral biometrics, multi-channel identity verification, and real-time context-aware access within IAM platforms is no longer optional; it’s critical. In this deceptive landscape, trust must be verified, not presumed.”
