Deepfakes are a growing threat due to accessible computing power, generative AI algorithms, and mobile apps that create convincing audio-visual human likenesses, often used for fraud and misinformation. Forrester recommends IT and security teams enhance media authentication and biometric verification with additional layers like behavioral biometrics and digital fraud management to combat these threats.
Detecting and stopping deepfake threats
- Deepfakes are a rapidly growing threat, driven by easy access to inexpensive computing power, generative AI algorithms—such as generative adversarial networks (GANs) and autoencoders—and the rising popularity of mobile apps that can transform a person’s image. These technologies aim to create audibly and visually convincing likenesses of real people. Deepfakes are often used to create synthetic identities for committing fraud, executing ransomware attacks, and causing data and intellectual property (IP) loss. According to Forrester, deepfakes have also been employed to manipulate stock prices, damage reputations and brands, degrade employee and customer experiences, and amplify misinformation. Detecting and stopping deepfake threats requires algorithms capable of identifying audio and image manipulation.
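As a minimal illustration of what image-manipulation detection can look for, the sketch below scores how much of an image's spectral energy sits in high frequencies, since GAN upsampling is known to leave periodic high-frequency artifacts. This is a simplified heuristic for illustration only, not a production detector; the function name, the radius cutoff, and any threshold you would compare against are assumptions.

```python
import numpy as np

def spectral_artifact_score(image: np.ndarray) -> float:
    """Fraction of total spectral energy in high frequencies.

    GAN upsampling often leaves periodic high-frequency artifacts, so an
    unusually high score (relative to a baseline calibrated on known-real
    media) is one weak signal of possible manipulation. Illustrative
    heuristic only; real detectors combine many such signals.
    """
    # Collapse color channels to grayscale if present.
    gray = image.mean(axis=2) if image.ndim == 3 else image
    # 2D FFT magnitude, shifted so the zero frequency sits at the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)
    # Energy beyond a quarter-of-min-dimension radius counts as "high".
    high = spectrum[radius > min(h, w) / 4].sum()
    return float(high / spectrum.sum())

# Demo inputs: random noise is high-frequency-heavy, a smooth gradient
# concentrates its energy at low frequencies.
rng = np.random.default_rng(0)
noisy = rng.random((64, 64))
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
```

In practice such a score would be calibrated against a corpus of known-authentic media and used only as one feature among many, alongside learned classifiers and audio-manipulation checks.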
- Forrester recommends that IT and security teams focus on controlling the source of media by using authenticator apps and enhancing facial and voice biometrics with additional verification and protection layers. These layers include behavioral biometrics, device ID fingerprinting and reputation, bot management and detection, digital fraud management, and passwordless authentication.
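The layered approach above can be sketched as a risk-scoring decision that blends independent signals, so that a deepfake convincing enough to pass facial and voice biometrics can still be flagged by the behavioral, device, or bot-detection layers. All names, weights, and thresholds below are illustrative assumptions, not any vendor's actual scoring model.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    """Hypothetical per-layer trust scores in [0, 1]; higher = more trustworthy."""
    face_match: float         # facial biometric match confidence
    voice_match: float        # voice biometric match confidence
    behavior: float           # behavioral biometrics (typing cadence, navigation)
    device_reputation: float  # device ID fingerprint and reputation
    bot_score: float          # 1.0 = almost certainly a human, not a bot

def layered_risk_decision(s: VerificationSignals, threshold: float = 0.6) -> str:
    """Blend layers into one score, with a per-layer floor.

    The floor matters: a deepfake can max out face/voice scores, so a very
    low score on ANY single layer forces step-up review regardless of the
    blended total. Weights and thresholds are illustrative.
    """
    score = (0.25 * s.face_match + 0.20 * s.voice_match +
             0.20 * s.behavior + 0.20 * s.device_reputation +
             0.15 * s.bot_score)
    if min(s.face_match, s.voice_match, s.behavior,
           s.device_reputation, s.bot_score) < 0.2:
        return "step_up_review"
    return "allow" if score >= threshold else "step_up_review"

# A deepfake may pass both biometric layers yet fail behavior and device checks:
spoof = VerificationSignals(face_match=0.95, voice_match=0.90,
                            behavior=0.10, device_reputation=0.30, bot_score=0.40)
legit = VerificationSignals(face_match=0.90, voice_match=0.90,
                            behavior=0.90, device_reputation=0.90, bot_score=0.90)
```

The design choice here is defense in depth: each layer is cheap to fool in isolation, but forging all of them simultaneously (biometrics, established device reputation, and humanlike behavior) raises the attacker's cost substantially.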