Researchers at Disney have devised an AI system to make an actor look younger or older on screen. Artists will have the option to fine-tune the effect by hand, but the AI tool may be able to do the bulk of the work: the researchers state that it needs only about five seconds to age a single frame.

Re-aging an actor is costly and time-consuming: artists go through a sequence frame by frame, painstakingly altering the character's appearance to make them look older. There have been prior attempts to automate this process with neural networks and machine learning, but as Disney's researchers put it, existing algorithms "may function well for still photos" yet "often suffer from facial identity loss, poor quality, and unpredictable results across succeeding video frames." They claim their technology is "the first viable, fully-automatic, production-ready method for re-aging faces in video footage."

Disney's latest AI-powered approach addresses many of the issues that plagued prior efforts: each frame takes less than five seconds to generate, and the results are high-resolution and believable.

Although techniques for aging and de-aging actors by facial imaging already exist, the researchers admit in the abstract that they are largely unusable in practice. While some can successfully age or de-age a face, the resulting image often looks very different from the original person, and low resolution combined with erratic frame-to-frame output makes them appear unnatural.

The researchers' first major insight is that, for a significant number of real people, it is nearly impossible to obtain longitudinal training data for learning to re-age faces over longer periods of time. However, despite its shortcomings on real-world photos, the present state-of-the-art in facial re-aging does produce photoreal re-aging results on synthetic faces, as the authors demonstrate.
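The workaround this insight suggests, building longitudinal training pairs from synthetic faces rather than real photos, can be sketched roughly as follows. `synthesize_face` and `reage_still` are hypothetical stand-ins here for a generative face model and an existing still-photo re-aging method; the article does not name the specific models Disney used.

```python
import random

def synthesize_face(seed):
    # Stand-in for a generative face model; here it just returns a
    # dummy "image" keyed by the identity seed.
    return {"identity": seed}

def reage_still(image, source_age, target_age):
    # Stand-in for an existing still-photo re-aging method, which the
    # researchers note works photorealistically on synthetic faces.
    return {"identity": image["identity"], "age": target_age}

def build_synthetic_pairs(n_identities, ages=tuple(range(20, 81, 5))):
    """Build (input_face, input_age, target_face, target_age) training
    tuples spanning a range of target ages for each synthetic identity,
    giving the longitudinal coverage real photo collections lack."""
    pairs = []
    for seed in range(n_identities):
        src_age = random.choice(ages)
        face = synthesize_face(seed)
        for tgt_age in ages:
            aged = reage_still(face, src_age, tgt_age)
            pairs.append((face, src_age, aged, tgt_age))
    return pairs

pairs = build_synthetic_pairs(3)
```

The point of the sketch is the data shape: every synthetic identity appears at many ages, which is exactly what is missing from photo archives of real people.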


Using this artificial data, the team writes, "our second significant breakthrough is to conceptualise facial re-aging as a feasible image-to-image translation problem that can be achieved by training a well-understood U-Net architecture, without the need for more complicated network designs." They show that the unexpected simplicity of the U-Net enables extraordinary temporal stability and retention of facial identity over a wide range of expressions, angles, and lighting conditions when re-aging real faces in video.
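To make the image-to-image framing concrete, here is a minimal PyTorch sketch of a U-Net that takes a frame plus maps encoding the input and target ages, and predicts a per-pixel correction. This is an illustrative toy, not Disney's network: the channel layout, the delta-image formulation, and all sizes are assumptions chosen to show the general shape of the idea.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy U-Net: 5 input channels (RGB frame + input-age map + target-age
    map), 3 output channels interpreted as an RGB delta on the frame."""
    def __init__(self, in_ch=5, out_ch=3, base=16):
        super().__init__()
        def block(i, o):
            return nn.Sequential(nn.Conv2d(i, o, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(o, o, 3, padding=1), nn.ReLU())
        self.enc1 = block(in_ch, base)
        self.enc2 = block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                  # full resolution
        e2 = self.enc2(self.pool(e1))                      # 1/2 resolution
        b = self.bottleneck(self.pool(e2))                 # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# One forward pass: re-age a (random) 64x64 frame from age 35 toward 70.
frame = torch.rand(1, 3, 64, 64)
age_in = torch.full((1, 1, 64, 64), 35 / 100)   # normalized current age
age_out = torch.full((1, 1, 64, 64), 70 / 100)  # normalized target age
net = TinyUNet()
delta = net(torch.cat([frame, age_in, age_out], dim=1))
aged = (frame + delta).clamp(0, 1)
```

The skip connections (`torch.cat` of encoder and decoder features) are what make the U-Net attractive here: the output stays pixel-aligned with the input frame, which helps preserve identity and frame-to-frame stability.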

