Whether it’s a deepfake video of actor Tom Cruise discovering gum in a lollipop or a robocall of President Joe Biden discouraging people from voting, you’ve likely come across a deepfake video, photo or audio recording.

Over the years, deepfakes have increased in number and sophistication, making it increasingly difficult to distinguish fact from fiction. For businesses, the rise of deepfake technology can be a real nightmare.

“Deepfakes can cause many risks to businesses,” says Siwei Lyu, a University at Buffalo computer scientist and deepfake expert. “A falsified audio recording of a high-level executive commenting on her company’s financial situation could send the stock market awry. A voice call from a CEO requesting an employee to wire transfer funds to an offshore bank account could lead to actual financial losses to the company.”

Most recently, a finance worker at a multinational firm was deceived into shelling out $25 million to criminals who used deepfake technology to impersonate the company’s chief financial officer in a video conference call.

“Deepfakes are a new, advanced, and convincing mechanism of action fraudsters and bad actors are utilizing for the most sophisticated social engineering attacks of our time, and even the more cautious and astute are vulnerable,” says Ben Colman, co-founder and CEO of Reality Defender, a deepfake and AI-generated media detection platform.

What are deepfakes?

The term “deepfake” describes both the technology and the fabricated content it produces.

“Deepfakes are videos, audios, or images that are manipulated using deep learning algorithms to show someone doing or saying something that they didn't actually do or say,” says Dmitry Anikin, senior data scientist at Kaspersky, a cybersecurity company. “This content is created by training a deep neural network on a large dataset of images and videos of the person, which allows the algorithm to generate new videos that look and sound like the person.”
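For readers curious about the mechanics Anikin describes, below is a minimal, illustrative sketch of the shared-encoder, two-decoder autoencoder design behind many early face-swap deepfakes. The layer sizes, image resolution and swap step are toy assumptions for illustration, not a description of any real tool.

```python
# Toy sketch of the classic face-swap autoencoder idea: one shared
# encoder learns facial structure from photos of both people, while a
# separate decoder per person reconstructs that person's face. The
# "swap" is simply decoding person A's latent code with B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # compact latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

face_a = torch.rand(1, 3, 64, 64)     # stand-in for a real 64x64 face crop
recon_a = decoder_a(encoder(face_a))  # training objective: reconstruct A
swapped = decoder_b(encoder(face_a))  # the deepfake: A's pose, B's face
```

In practice, production tools train on thousands of aligned face crops and add adversarial losses and blending steps, which is what makes the results so convincing.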

Deepfakes and voice clones have been created for extortion, imposter scams and financial fraud, and can damage the reputation and trust of individuals and entire companies. Then there are the costs that come with these kinds of attacks, such as lawsuits, higher insurance rates and, of course, bad press, which can be a serious hit to a company.

“Deepfakes can be weaponized and pose significant threats across multiple societal dimensions, including personal security, democratic processes, financial sectors and the integrity of digital media,” says Lyu.

According to data from the Federal Bureau of Investigation and the International Monetary Fund, the annual cost of cybercrime worldwide is expected to skyrocket from $8.4 trillion in 2022 to more than $23 trillion in 2027.

“Deepfakes are being used by everyone,” says Edward Delp, an electrical and computer engineering professor at Purdue University and an expert in deepfakes.

He says there is now no barrier to creating deepfake video and audio, noting that OpenAI offers tools for creating them.

“They are being used for creating child pornography, political ads, fake news announcements, reputation abuse and crimes like fraud and deception,” says Delp.

Delp is working on deepfake detection research and is leading one of the teams in the Semantic Forensics (SemaFor) program created by the Defense Advanced Research Projects Agency for the U.S. Department of Defense.

“We are developing methods to detect whether an image, video, audio, or text media element has been generated or altered,” Delp says.
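Delp did not detail his team’s techniques, but a common baseline in the research literature is to fine-tune a pretrained image classifier to label video frames as real or manipulated. The sketch below uses PyTorch and torchvision to show that idea; it is an assumption-laden illustration, not the SemaFor program’s actual method.

```python
# Minimal sketch of a frame-level deepfake detector: fine-tune a
# pretrained ResNet-18 to output two classes (real, fake), then
# average per-frame fake probabilities over a clip.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace head: real vs. fake

criterion = nn.CrossEntropyLoss()                           # standard setup;
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # training loop elided

@torch.no_grad()
def clip_fake_score(frames: torch.Tensor) -> float:
    """frames: (N, 3, 224, 224) batch of preprocessed video frames."""
    model.eval()
    probs = torch.softmax(model(frames), dim=1)[:, 1]  # P(fake) per frame
    return probs.mean().item()  # clip-level score in [0, 1]
```

Real detection systems layer many such signals, including audio analysis and physiological cues, because any single classifier can be evaded.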

Deepfake risks to businesses

Meanwhile, deepfakes continue to pose many risks to businesses. These include tricking employees into providing sensitive information or transferring money to bad actors, and scamming a business’s customers through false representations of company representatives.

“Businesses are at risk from deepfakes because these fraudulent synthetic media files can be used in social engineering attacks, such as phishing and spear phishing, to deceive individuals and gain access to sensitive information,” says Matt Miller, principal of cyber security services at KPMG.

Phishing and spear phishing con victims into providing sensitive details like account information, passwords and Social Security numbers. Spear phishing is the more sophisticated of the two, targeting one or more specific individuals or organizations.

“The prevalence of deepfakes is growing and cyber criminals are increasingly using AI techniques to create highly realistic fake content,” says Miller.

That means businesses, particularly in healthcare, finance and tech, “face heightened risks from deepfake attacks targeting biometric security,” says Anikin.

One example involves companies that use facial recognition systems. Anikin says a cybercriminal can create a deepfake video or image of an actual employee’s face and use it to gain unauthorized access to secure areas or sensitive information.

“Similarly, in systems relying on voice authentication, deepfake technology can be used to generate synthetic voice recordings that mimic the speech patterns and tone of authorized individuals, allowing attackers to fool the system into granting access,” Anikin says.
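One mitigation suggested by Anikin’s voice-authentication scenario is a challenge-response liveness check: the caller must read back a freshly generated random phrase, so a pre-recorded or pre-synthesized clip fails. Here is a minimal sketch; transcribe() and verify_speaker() are hypothetical stubs standing in for a real speech-to-text service and a real voice-biometric model.

```python
# Sketch of a challenge-response liveness check for voice auth.
# A cloned recording of a fixed passphrase is rejected because the
# caller must speak a phrase generated moments earlier.
import secrets

WORDS = ["ledger", "harbor", "quartz", "meadow", "signal", "copper"]

def issue_challenge(n: int = 3) -> str:
    """Generate a fresh random phrase the caller must read aloud."""
    return " ".join(secrets.choice(WORDS) for _ in range(n))

def transcribe(audio: bytes) -> str:
    raise NotImplementedError("hypothetical: call a speech-to-text service")

def verify_speaker(audio: bytes, claimed_user: str) -> bool:
    raise NotImplementedError("hypothetical: score against enrolled voiceprint")

def liveness_check(audio: bytes, challenge: str, claimed_user: str) -> bool:
    # A replayed or pre-generated clip won't contain the fresh phrase.
    if transcribe(audio).strip().lower() != challenge:
        return False
    # Only then score the voice against the claimed user's enrollment.
    return verify_speaker(audio, claimed_user)
```

Security researchers caution that real-time voice cloning can still defeat such checks, so they raise the bar for attackers rather than eliminating the risk.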

A recent report by the Identity Defined Security Alliance (IDSA) found that 90 percent of organizations with more than 1,000 employees experienced at least one identity-related security incident. Only 49 percent said their leadership teams actually understood identity and security risks and proactively invested in protection.

“A major challenge with deepfakes is the speed at which content can be developed, distributed to its target, amplified and acted upon,” says Shamla Naidoo, head of cloud strategy and innovation at global cybersecurity firm, Netskope.

She says attackers’ common goals are to influence and extort victims.

“For businesses this often means anything from reputational risk, financial risks from fraud and intellectual property theft, to employee manipulation and blackmail,” says Naidoo.

How to protect your business from deepfakes

In 2023, the Federal Trade Commission received more than 330,000 reports of business impersonation scams and nearly 160,000 reports of government impersonation scams. Reported losses to impersonation scams topped $1.1 billion in 2023.

While the risks will never disappear entirely, there are steps businesses can take to protect themselves.

“Firstly, they should develop a robust cybersecurity culture and hygiene throughout their operations, prioritizing employee engagement and education on secure behaviors,” says Miller. “Strengthening identity confirmation processes, such as implementing multi-factor authentication and behavioral biometrics, can help counter sophisticated deepfakes.”
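Multi-factor authentication, which Miller mentions, works because a deepfaked face or voice alone cannot produce the rotating code from a user’s enrolled device. As a minimal sketch, the open-source pyotp library implements time-based one-time passwords (TOTP); the account names below are placeholders.

```python
# Minimal TOTP sketch with pyotp: enroll a secret once, then require
# the current six-digit code as a second factor on sensitive actions.
import pyotp

secret = pyotp.random_base32()  # generated and stored server-side at enrollment
totp = pyotp.TOTP(secret)

# Shown to the user once, typically as a QR code for an authenticator app.
# (Placeholder account and issuer names.)
uri = totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp")

def second_factor_ok(submitted_code: str) -> bool:
    # valid_window=1 tolerates one 30-second step of clock drift
    return totp.verify(submitted_code, valid_window=1)
```

Behavioral biometrics, Miller’s other example, go further by scoring how a user types or moves, signals that are much harder for a deepfake to reproduce.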

He says organizations should also apply identity scrutiny to user onboarding and Know-Your-Customer processes to assess vulnerabilities, and stresses the importance of improving organizational risk intelligence and investing in technology for better prediction, detection and response.

“Companies can also adopt a secure-by-design approach, embedding security throughout product lifecycles,” says Miller.

Naidoo says in most cases, deepfakes are a way of “getting into one's psyche to ultimately confuse or lure the victim.” Netskope helps organizations apply zero trust principles and artificial intelligence and machine learning innovations to protect data and defend against cyber threats.

She says it is crucial that companies take the same education-focused approach to disinformation campaigns that they take with cybersecurity awareness training.

“This kind of employee awareness program should be designed to help employees understand what to look for to spot a deepfake, such as image inconsistencies, video misalignment and, ultimately, the nature of the request, even when the attempts do not involve their colleagues and others they interact with daily,” says Naidoo.

Naidoo says Netskope encourages implementing a zero trust policy to ensure verification before trusting anything.

“If one takes the time to ensure the validity, accuracy and consistency of information, they can likely avoid falling victim to deepfakes, ultimately protecting the business,” says Naidoo.

But cybersecurity needs to be part of a business’s culture, stresses Laurel Cook, associate professor of marketing at West Virginia University.

“I’ve often seen it treated as the responsibility of the IT department, and this perspective is no longer appropriate for businesses of any size, especially for vulnerable SMBs,” says Cook.

Apart from cybersecurity training, she says companies can protect themselves in much the same ways the average consumer can. She says businesses can use a triangulation-based strategy to verify content, which means finding three sources to validate information.
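Cook’s triangulation strategy translates directly into process: treat a claim as verified only when several independent channels confirm it. The sketch below is an illustrative toy, with made-up channel names, showing a three-source rule applied to the kind of wire-transfer request used in the $25 million scam described earlier.

```python
# Toy sketch of triangulation: a claim passes only if at least
# `required` independent channels confirm it. Channel names are
# illustrative placeholders.
def triangulate(claim: str, confirmations: dict[str, bool], required: int = 3) -> bool:
    confirmed = [source for source, ok in confirmations.items() if ok]
    return len(confirmed) >= required

# Example: an urgent wire request "from the CFO" fails the check.
approved = triangulate(
    "CFO requested an urgent wire transfer",
    {
        "callback to the CFO's known number": False,
        "request logged in the internal ticketing system": False,
        "confirmation from the CFO's assistant": True,
    },
)
print("approve transfer:", approved)  # approve transfer: False
```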

“I would also advise businesses to consider cyber insurance, a new development in the insurance industry,” says Cook.

Colman says that even with all these measures in place, people will still be susceptible to deepfakes because of how advanced deepfakes and generative AI-driven media have become.

Reality Defender uses AI to catch AI, an approach Colman believes is the only reliable way to be proactive. He says the company’s clients have implemented deepfake detection in sectors like finance, media and government, and adds that laws requiring such measures don’t exist and probably won’t for some time.

“To be honest companies cannot do much to protect themselves at this time except try to watch out for deepfakes and be prepared to respond publicly to them,” says Delp. “The issue is that the tools now are so good most people cannot recognize a deepfake.”

Ana Durrani is a journalist and "Jill of all trades." She’s a regular contributor to U.S. News & World Report, Forbes and more and has written for Realtor.com, EB-5 Investors Magazine, Military Officer Magazine, American Scholar Magazine, California Lawyer Magazine and many others. She thrives on tackling a very wide range of topics.
