How close are deepfakes to being used in big-budget films and TV shows? Pretty damn close, if a new demo from Disney is anything to go by. In a video and paper being presented at a computer graphics conference this week, researchers from the House of Mouse show off what they say is the first photo-realistic deepfake at a megapixel resolution.
And the results are... pretty good! They’re not mind-blowing, certainly, and not good enough to be used in the next Marvel movie, but it’s a solid step up from deepfakes we’ve seen in the past.
As the researchers suggest, what’s new here is the megapixel resolution. Megapixels may no longer be the byword for high-quality images that they used to be. (The camera on your phone probably has a double-digit megapixel count for a start.) But so far, deepfake tech has focused on smooth facial transfers rather than amping up the pixel count.
The deepfakes you’ve probably seen to date may look impressive on your phone, but their flaws would be much more apparent on a larger screen. As an example, Disney’s researchers note that the maximum-resolution videos they could create with the popular open-source deepfake model DeepFaceLab were just 256 x 256 pixels. By comparison, their model can produce video at a 1024 x 1024 resolution, a sixteenfold increase in pixel count.
Apart from this, the functionality of Disney’s deepfake model is fairly conventional: it’s able to swap the appearances of two individuals while maintaining the target’s facial expressions. If you watch the video, though, note how technically constrained the output seems to be. It only produces deepfakes of well-lit individuals looking more or less straight at the camera. Challenging angles and lighting are still not on the agenda for this tech.

Comparisons between Disney’s output (columns three and four) with deepfakes from earlier models show clear improvements. Image: Disney Research
As the researchers note, though, we are getting closer to creating deepfakes good enough for commercial projects. Right now, when a company like Disney wants to do some face-swapping, it will use traditional VFX, as the studio did when it created virtual models of deceased actors Peter Cushing and Carrie Fisher for the Star Wars film Rogue One.
“While those results are impressive, they are expensive to produce and typically take many months of work to achieve mere seconds of footage,” write the researchers. Deepfakes, by comparison, require far less oversight once the original model has been constructed, and can produce video in a matter of hours (given the right budget for computing power).
Sooner or later, deepfakes are going to stop being a research project and start being a viable option for big studios. Indeed, some would argue they’re already there.