How many times have you heard somebody say “enhance that!” in various cop shows over the years, as the good guys huddle over a screen looking at grainy footage, struggling to ID the bad guys?
Well, now that ability is just a neural network away, and ironically, those shows themselves are about to be enhanced!
Speaking of television, 90% of all video content ever created is now becoming obsolete by the standards of modern television. Netflix, Amazon Prime, Hulu and network broadcasters are struggling to find or create 4K content to keep up with demand. Meanwhile, we as consumers have invested our hard-earned dollars in ultra-HD TVs, so our expectations are high!
As an example, The Lion King was Disney’s first movie to leverage computers and a digital workflow. The lines for each frame were still sketched by hand, but the drawings were scanned and then painted digitally using a computer. As a result, there’s no original source material that can be rescanned at 4K, which is how this type of upgrade has traditionally been done. Certainly, back in the early ’90s, 2K seemed like a very future-proof resolution to the decision makers at Disney. Oh, how times have changed.
The movie industry is littered with similar examples dating back 30 years or more, where live-action analog film has been fused with digitally created visual effects in a symbiotic relationship that makes it impossible to simply rescan old content.
If we switch focus to the world of animation, rendering accounts for a large part of the time and cost that goes into an HD production. Since a 4K frame has four times the pixels of an HD frame, the recent spike in demand for 4K content effectively quadruples the average $500k rendering budget for a 90-minute HD feature.
Animated television series aren’t just 90 minutes long; a typical season runs 11 minutes times 52 episodes, or 572 minutes. With distributors like Netflix and Amazon making 4K a requirement, this means that either profits go down or rendering quality suffers.
How can neural networks fix these problems? Let’s walk through this process with a real world example. Here’s an input image:
The standard industry approach today is to use bicubic interpolation to increase the resolution of the image, and then sharpen it using signal processing. The improvement is very subtle, however, because signal processing can’t add new detail to the image; it can only emphasize the details that are already there.
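As a rough illustration, here is a minimal sketch of that classical pipeline using NumPy and SciPy (the function name and parameters are ours, not any particular studio’s tooling): a cubic-spline upscale approximating bicubic interpolation, followed by an unsharp mask.

```python
import numpy as np
from scipy import ndimage

def upscale_and_sharpen(image, scale=2, amount=1.0, sigma=1.0):
    """Upscale a grayscale image, then sharpen it with an unsharp mask.

    Note that neither step invents new detail: interpolation smoothly
    fills in pixels, and sharpening only amplifies edges that are
    already present in the low-resolution source.
    """
    # Cubic-spline interpolation (order=3) approximates bicubic upscaling.
    upscaled = ndimage.zoom(image, scale, order=3)
    # Unsharp mask: add back the difference between the image and a blur.
    blurred = ndimage.gaussian_filter(upscaled, sigma=sigma)
    sharpened = upscaled + amount * (upscaled - blurred)
    return np.clip(sharpened, 0.0, 1.0)

# A tiny synthetic "image": a bright square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
out = upscale_and_sharpen(img, scale=2)
print(out.shape)  # four times the pixel count, but no new detail
```

The unsharp mask makes edges pop, which is exactly the “subtle improvement” described above: the high frequencies are boosted, but nothing is added that wasn’t in the original signal.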
This is the reason that video remastering is largely a manual process today, rarely performed and reserved primarily for high-value content. Ideally, we could easily and effectively up-res and remaster both old and new video content; it just hasn’t been technically possible. Until now, that is…
Deep Learning is the latest and most successful wave of Artificial Intelligence (AI). As the name suggests, artificial neural networks are inspired by mathematical models of the brain and visual cortex. They are composed of layers of neurons that transmit signals based on their connections. Shown enough examples, these networks act as universal function approximators, able to learn complex mappings from input to output.
This new Deep Learning approach achieves something that was science fiction just a few years ago: neural networks can effectively exercise their own learned “imagination,” hallucinating plausible new details that didn’t exist in the original image, and the difference is pretty striking.
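Artomatix’s own network is proprietary, but the general idea can be sketched with a published architecture: SRCNN (Dong et al.) first upscales with bicubic interpolation, then lets a small stack of convolutions learn to restore detail on top of that base. Below is a minimal, untrained PyTorch sketch following SRCNN’s 9-1-5 layer design; it is illustrative only, not the actual production model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNNSketch(nn.Module):
    """SRCNN-style super-resolution: bicubic upscale + learned refinement."""

    def __init__(self, channels=3):
        super().__init__()
        # Classic SRCNN layout: 9x9 feature extraction, 1x1 nonlinear
        # mapping, 5x5 reconstruction. Padding keeps the spatial size.
        self.extract = nn.Conv2d(channels, 64, kernel_size=9, padding=4)
        self.mapping = nn.Conv2d(64, 32, kernel_size=1)
        self.reconstruct = nn.Conv2d(32, channels, kernel_size=5, padding=2)

    def forward(self, x, scale=2):
        # The bicubic upscale supplies the low-frequency base image...
        x = F.interpolate(x, scale_factor=scale, mode="bicubic",
                          align_corners=False)
        # ...and the learned layers add plausible high-frequency detail.
        h = F.relu(self.extract(x))
        h = F.relu(self.mapping(h))
        return self.reconstruct(h)

# Untrained forward pass, just to show the shapes involved.
net = SRCNNSketch()
low_res = torch.rand(1, 3, 32, 32)
high_res = net(low_res, scale=2)
print(high_res.shape)  # doubled in height and width
```

The crucial difference from the classical pipeline is that the convolutional layers are trained on real image pairs, so the detail they add is statistically plausible rather than merely an amplification of what was already there.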
The results are impressive, and thanks to AI, this capability is now available to the wider market and not just large movie producers. The price point is well within any production budget for e-commerce, advertising or other short-form pieces. Meanwhile, because the work has been automated in software and does not depend on human labor, turnaround times have collapsed, making this far more practical.
To learn more about how Artomatix has pioneered the use of Deep Learning to remaster old content or up-res more modern footage, please contact us at [email protected]