Software created by researchers at the Massachusetts Institute of Technology amplifies variations in video that are imperceptible to the naked eye, making it possible to exaggerate tiny motions. More notably for advertisers, it could lend greater credibility to product demonstrations.
MIT Computer Science and Artificial Intelligence Laboratory graduate student Michael Rubinstein designed the software along with recent alumni Hao-Yu Wu and Eugene Shih and professors William Freeman, Frédo Durand and John Guttag. The researchers initially intended it to amplify color changes, but in their experiments found that it amplified motion as well. The software makes visible the vibrations of individual guitar strings and reveals a person's pulse as the skin reddens and pales with the flow of blood. The technology could help online agency executives create more attention-grabbing advertising built on MIT's amplified-motion research.
Eric Gulino, an ad executive at Skiver Advertising, said the ability to show real change without computer-generated graphics could increase the credibility of products. He said it is not likely to revolutionize the way agencies create content, but it will open the door to demonstrating things that are not easily communicated and give consumers a whole new appreciation for products.
Take a car manufacturer, for example. "Maybe the manufacturer had trouble communicating the effectiveness of technology in the car that takes over just before a crash," Gulino said. "The seatbelts might tighten, the brakes are applied and the airbag is deployed. It's difficult to show in real time. It just looks like a crash."
Until now, Gulino said, simulation was the only way to demonstrate a high-speed crash. With this technology, the manufacturer could show the event as it happens rather than as a simulation, giving the brand more credibility.
"This technology opens up new avenues for creative folks. It removes the fog from a consumer's perspective."
In one set of experiments, the software amplified the movement of shadows in a street scene photographed only twice, at an interval of about 15 seconds. Amplifying motion rather than color requires different processing, and the smaller the motion, the better it works.
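To make that distinction concrete, here is a minimal sketch in Python of the temporal-filtering idea behind the color-amplification case: band-pass filter a single pixel's intensity over time around plausible heart rates, then amplify only the filtered variation. It is written from the article's description, not from MIT's released code, and the frame rate, band edges and amplification factor are assumed values.

    import numpy as np
    from scipy.signal import butter, filtfilt

    fps = 30.0                     # assumed video frame rate
    t = np.arange(0, 10, 1 / fps)  # ten seconds of frames

    # Synthetic pixel intensity: a constant skin tone plus a 1.2 Hz (72 bpm)
    # fluctuation far too small to see directly.
    pixel = 0.6 + 0.001 * np.sin(2 * np.pi * 1.2 * t)

    # Temporal band-pass around typical resting heart rates (0.8-2.0 Hz).
    low, high = 0.8, 2.0
    b, a = butter(2, [low / (fps / 2), high / (fps / 2)], btype="band")
    variation = filtfilt(b, a, pixel)

    # Amplify only the filtered variation and add it back to the signal.
    alpha = 100.0
    magnified = pixel + alpha * variation
    print(f"original swing: {np.ptp(pixel):.4f}, magnified swing: {np.ptp(magnified):.4f}")

Motion amplification applies the same kind of temporal filter not to raw pixel values but to spatially decomposed versions of the frames, which is why it needs the different processing the article mentions.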
Most of the research to date has focused on imaging and monitoring medical conditions. In a home video baby monitor, for example, the respiration of a sleeping infant would be clearly visible.
Agencies rarely work with universities like MIT. GroupM has begun to look into the idea but does not yet work with researchers at the university. The ad industry already has a host of tools for creating special effects, but even Needham analyst Laura Martin agrees that "academic research demonstrates that diversity maximizes economic value."
While Martin refers to the TV ecosystem, her report on "The Future of TV: The Invisible Hand" highlights the need for more creativity in TV and video content.
COMMENTARY: MIT's technical abstract describes the new video amplification technology as follows:
"Our goal is to reveal temporal variations in videos that are difficult or impossible to see with the naked eye and display them in an indicative manner. Our method, which we call Eulerian Video Magnification, takes a standard video sequence as input, and applies spatial decomposition, followed by temporal filtering to the frames. The resulting signal is then amplified to reveal hidden information. Using our method, we are able to visualize the flow of blood as it fills the face and also to amplify and reveal small motions. Our technique can run in real time to show phenomena occurring at temporal frequencies selected by the user."
A PDF file of the technical whitepaper describing MIT's Eulerian Video Magnification technology is available for download.
Courtesy of an article dated June 22, 2012 appearing in MediaPost Publications Online Media Daily