As university researchers, big tech companies like Facebook and Microsoft, and even the Defense Department push efforts to detect and combat the spread of deepfakes, a handful of startups are embracing the technology behind these videos and trying to find ways to commercialize it.

These aren't clandestine operations hiding in the dark corners of the internet, manipulating political opinion, or scamming people for money. Instead, they want to harness the controversial AI-powered video technology for use by advertisers, sales representatives, and more.

One such company is Tel Aviv-based Canny AI, the startup behind the infamous deepfake that appeared to show Facebook CEO Mark Zuckerberg giving a speech. First reported by Motherboard, the video went viral in June.

Canny's cofounder, Omer Ben-Ami, explains that its technology, called video dialogue replacement, functions as an artificial-intelligence-powered form of dubbing. The company's AI "trains" on the face of an intended speaker — such as Mark Zuckerberg — essentially "studying" their facial movements and speaking style. It also trains on another video, one with new dialogue from another speaker (in the Zuckerberg deepfake's case, a voice actor).

Once its AI is "fluent" in both faces and dialogues, Canny can translate between them, enabling the startup to replace the speech in videos of high-profile figures like Zuckerberg or Kim Kardashian with dialogue that's entirely new.

This ability means that instead of re-filming a clip in every language, the same video could be dubbed, using voice actors, with previously unachievable realism.
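Neither Canny nor Synthesia has published its models, but the "train on both faces, then transfer" idea Ben-Ami describes resembles the widely circulated open-source deepfake recipe: a shared encoder with one decoder per identity. The PyTorch sketch below is purely illustrative; the architecture, image size, and training details are assumptions, not either company's actual system.

```python
# Minimal sketch (not Canny's or Synthesia's actual model): the classic
# open-source deepfake recipe trains one shared encoder and a separate decoder
# per identity. Each decoder learns to reconstruct its own face; at inference,
# frames of speaker B are pushed through decoder A, so A's face "performs"
# B's dialogue.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # reconstructs the target face (e.g. the public figure)
decoder_b = Decoder()  # reconstructs the driving speaker (e.g. the voice actor)
optimizer = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    """One optimization step on aligned 64x64 face crops from both videos."""
    optimizer.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy run with random tensors standing in for real face crops.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)
print(train_step(faces_a, faces_b))

# Inference ("swap"): feed the driving speaker's frames through the target's decoder.
with torch.no_grad():
    swapped = decoder_a(encoder(faces_b))
```

In practice, production systems add face detection and alignment, much larger networks, and audio-driven lip synchronization, but the two-decoder structure captures why the AI must first become "fluent" in both faces before it can translate between them.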

Ben-Ami won't share how many clients he has, though he says they're primarily advertising and production companies. To demonstrate its technology to the public, Canny released a video of world leaders, including Donald Trump, Kim Jong-un, and Vladimir Putin, all "singing" John Lennon's "Imagine" — an aspirational clip that hints at the strange unrealities Ben-Ami says we should expect.

Victor Riparbelli, the co-founder and CEO of U.K.-based synthetic video firm Synthesia AI, explains that a single ad could easily be localized to every country. "Basically, you're taking one advertisement and then creating many versions of it where you're slightly changing the script," he says. Synthesia has already worked with the BBC, Accenture, and even the Dallas Mavericks (the team's billionaire owner, Mark Cuban, is reportedly an investor in Synthesia).

An example of Synthesia's tech is its own feel-good video: an ad that aimed to raise global awareness of malaria, featuring British celebrity David Beckham "speaking" languages the soccer legend is clearly not fluent in.

Riparbelli adds that another potential deepfake customer base is large multinational companies, which could use the editing technology to easily produce the same corporate communication video in multiple languages.

While Synthesia and Canny, and deepfakes more broadly, have focused on building photorealistic video, startups like Modulate AI and Dessa are working on artificial-intelligence-powered tech for creating convincing synthetic voices, which could presumably be combined with Ben-Ami's and Riparbelli's video tech.

But synthetic video technology hasn't seen the warmest of welcomes, with many wondering whether the emergence of deepfakes could make us even less inclined to trust one another online, especially as the technology has become more openly available through online applications. Notably, deepfake apps have allowed online users to create doctored images of naked women, contributing to the proliferation of "revenge porn" and the online harassment of women.

Some politicians are considering how to curb, or even ban, the technology over concerns that deepfakes will simply inflame the scourge of fake news. One example: a deepfake PSA developed by Jordan Peele and BuzzFeed features the director appearing to "take over" former President Barack Obama's face and voice to call President Donald Trump a nasty name and share other opinions the former commander-in-chief would be unlikely to articulate publicly.

That segment became one of several deepfakes of politicians that have raised concerns as to how these videos might be used to mislead the public, especially during an election season.

"The scariest real-world scenario is that on the eve of the election, a candidate is portrayed saying or doing something very embarrassing or illegal — or what-have-you — and there's no way to correct the record fast enough that voters would understand that this AI-driven false video is indeed not true," Paul Barrett, the deputy director of NYU Stern Center for Business and Human Rights, told Cheddar earlier this month.

But the startups aren't deterred (and neither provides its technology to the general public). Ben-Ami argues that synthetic video technology is inevitable, and that "negative" applications shouldn't outweigh the positive. He likens unease over the emerging technology to concerns over the 3D-printing of homemade guns: "I don't think it makes sense just to disregard 3D-printing and all the good it can do just because someone misused it."

"The truth is you're surrounded by synthetic media already today," adds Synthesia's Riparbelli, pointing to Snapchat filters and greenscreens. Instead, he says concern should center on consent. "What you're more interested in is not if it's synthetic but if it's consensual or not." Synthesia has a policy that someone will not be re-enacted without their permission.

Both Ben-Ami and Riparbelli acknowledge that the public needs to better understand, and know how to spot, deepfakes, and both companies have staff involved in building detection technologies.

Meanwhile, brand-reputation monitoring services note that deepfakes will only worsen the threat fake news poses to companies. After all, had that fake video of Mark Zuckerberg been believed, it could have — at least temporarily — damaged the social media giant's already strained public reputation.

And the Zuckerberg video was not the first deepfake that featured a corporate leader. For instance, in May, Ad Age reported on how one creative professional made a deepfake imitating executives in an effort to land a job.

"Damaged reputation often results in decreased sales. Consumers who believe something about the company, that happens not to be true, are very likely not to do business with that company any more," explains William Comcowich, the acting CEO of the brand-reputation management service Glean.info, which also offers a fake news-tracking service. He warns that the technology could be "a significant risk to publicly traded companies," and hazards that deepfakes could be used to manipulate the news in order to short-sell.

Jean-Claude Goldenstein, the head of the social intelligence firm CREOpoint, says that concern over doctored videos is gradually growing, pointing to aerospace brands that have grown increasingly nervous about fake videos claiming to have been shot aboard the Ethiopian Airlines flight that crashed in March, killing 157 people.

This summer, CREOpoint announced that it was granted a patent for a new method of monitoring online discussions about fake news — and impacted companies and leaders — that relies on the integration of natural language processing and a network of human experts. The company says it could help track the proliferation of discussions about a deepfake video to indicate how "truthful" the content might be. Goldenstein's point: limiting the impacts of deepfakes will be less about finding the person who created the augmented video, and more about the people in a social media system who are spreading misinformation, deliberately or not.
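CREOpoint hasn't published the details of the patented method, so the Python sketch below is only a hypothetical illustration of the general idea: weight each mention of a suspect clip by how far it has spread, blend a crude automated text signal with any verdicts from human experts, and roll the result into a single credibility score. Every name, threshold, and weighting choice here is an assumption, not CREOpoint's system.

```python
# Hypothetical sketch only: illustrates combining an automated NLP-style signal
# with human-expert verdicts to rate a spreading clip, weighting mentions by reach.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Mention:
    text: str                              # a post or comment discussing the video
    shares: int                            # how widely this mention was reshared
    expert_verdict: Optional[bool] = None  # True = authentic, False = fake, None = unreviewed

DOUBT_WORDS = {"fake", "doctored", "deepfake", "manipulated", "hoax"}

def doubt_signal(text: str) -> float:
    """Crude stand-in for NLP: fraction of doubt-related words in the mention."""
    words = text.lower().split()
    return sum(w.strip(".,!?") in DOUBT_WORDS for w in words) / max(len(words), 1)

def credibility_score(mentions: List[Mention]) -> float:
    """Blend expert verdicts (weighted heavily) with the automated doubt signal,
    weighting each mention by its spread. Returns 0 (likely fake) to 1 (likely real)."""
    weighted, total = 0.0, 0.0
    for m in mentions:
        weight = 1 + m.shares
        if m.expert_verdict is not None:
            signal = 1.0 if m.expert_verdict else 0.0
            weight *= 5  # expert-reviewed mentions count more
        else:
            signal = 1.0 - doubt_signal(m.text)
        weighted += weight * signal
        total += weight
    return weighted / total if total else 0.5

mentions = [
    Mention("This clip looks doctored, classic deepfake", shares=1200),
    Mention("Shocking video of the CEO!", shares=300),
    Mention("Forensics team confirms the video is fake", shares=50, expert_verdict=False),
]
print(round(credibility_score(mentions), 2))
```

The point of such a scorer matches Goldenstein's framing: the useful signal lives in how a clip propagates through a network of people, not just in the pixels of the original video.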

But developing methods of monitoring these augmented videos could be a race against time should the technology proliferate as the startups anticipate. Riparbelli says the tech available now is only "a very small glimpse into what the future of content creation is going to look like," and that, eventually, synthetic video will be "massively democratized," similar to how services like GarageBand helped make music production easier for the average user.

Both founders emphasize that many of these new videos will be made in collaboration with executives and brand ambassadors. Some could even be interactive. Ben-Ami predicts that "to some extent, you're going to have a chatbot that looks like Kim Kardashian — and that really answers and will respond — I think that's going to be in the near future."

He says that might even work for those hoping to bring a dead celebrity back to life. That's if, Ben-Ami says, "they have the IP and it's legal."
