
A Dance With Algorithms

Published on June 30, 2016 by Makeda Easter

For its upcoming feature film, Paramount Pictures cast Brad Pitt in the lead role. After months of filming in exotic locales, on green screens, and around Los Angeles, the production team realizes it is missing a few action scenes with Pitt. But because the megastar is busy filming other projects, the producers worry they will blow their budget reshooting the scenes.

While hypothetical, this is a challenge many filmmakers face, and technology is primed to fill in the gaps. Philippe Pasquier, a professor of artificial intelligence and researcher at Simon Fraser University, is merging art and science to create systems that can understand and produce human-quality movement, meaning there would be no need to get Pitt back into the studio.

"Today, the main use of computers is for creative tasks: tasks where there is no optimal solution, like choreography for dance, animation, composition, drawing, and graphic design," Pasquier said. "With this project, we're building systems that can analyze and learn how humans move and generate new movement."

Pasquier's project, Movement Style Machine, uses deep learning techniques and relies on collaboration between graduate students and artists to create new algorithms. The principal investigator of the project is Guy Garnett of the University of Illinois at Urbana-Champaign. Garnett heads the Illinois eDream Institute, which is dedicated to exploring and expressing human creativity through emergent digital technologies.

Pasquier, one of his collaborators, is supported by funding from the Social Sciences and Humanities Research Council (SSHRC) and movingstories, a collaborative research partnership to develop digital tools for movement.

After recording movement from actors and dancers in a motion capture studio, Pasquier feeds this information into an algorithm, which then builds a model based on how a person moves. From there, Pasquier can ask the algorithm for movement that has not been recorded.
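The pipeline described here, record motion capture, fit a model of how a person moves, then ask the model for movement that was never recorded, can be sketched in miniature. The code below is purely illustrative and is not the Metacreation Lab's actual system: it stands in a simple linear autoregressive model (fit with NumPy least squares) for the lab's deep learning models, and synthetic sinusoidal joint angles for real mocap data.

```python
import numpy as np

def fit_autoregressive(frames):
    """Fit a one-step linear predictor: next_frame ~ frame @ W.

    frames: (T, D) array of poses, e.g. D = number of joint-angle channels.
    Returns W of shape (D, D), found by least squares.
    """
    X, Y = frames[:-1], frames[1:]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def generate(W, seed_frame, n_frames):
    """Roll the fitted model forward to produce movement never recorded."""
    frames = [seed_frame]
    for _ in range(n_frames - 1):
        frames.append(frames[-1] @ W)
    return np.stack(frames)

# Toy "mocap" data: smooth sinusoidal joint angles standing in for a walk cycle.
t = np.linspace(0, 4 * np.pi, 200)
mocap = np.stack([np.sin(t + p) for p in np.linspace(0, 1, 6)], axis=1)  # (200, 6)

W = fit_autoregressive(mocap)
new_motion = generate(W, mocap[-1], n_frames=50)
print(new_motion.shape)  # (50, 6): 50 generated frames, 6 joint channels
```

A real system would replace the linear predictor with a deep network and the toy array with recorded skeleton data, but the train-then-sample structure is the same.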

Movement is one of the most complex types of information to interpret. With hundreds of joints and limbs interacting simultaneously, and a person's imagination guiding them, there are countless ways the body can move. The first prototype of Movement Style Machine, WalkNet, is currently trained for locomotion (walking), but in principle the approach is not limited to any particular movement.

Advanced computing resources are required to process this type of information. To run their algorithms, Pasquier and his team use resources provided through the Extreme Science and Engineering Discovery Environment (XSEDE). With XSEDE, they run their algorithms on the Texas Advanced Computing Center's (TACC) Stampede supercomputer, the 10th most powerful in the world.

Philippe Pasquier, professor in artificial intelligence and researcher at Simon Fraser University
"On a single computer, running our algorithms would take years; on medium-sized resources, months. But using XSEDE, we can train some of the most complex models within 24 hours," Pasquier said. "Students don't have to wait three months to know if the system works or not."

To validate their algorithms, Pasquier runs experiments in which people watch both human-generated and algorithm-generated movement. If viewers can't tell the two apart, the system is human-competitive.
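This validation is essentially a discrimination experiment: judges guess which clips are human-made, and if their accuracy is statistically indistinguishable from chance, the generator passes. A minimal sketch of that decision rule follows; the judge counts are made up, and the normal-approximation binomial test is an assumption standing in for whatever statistics the researchers actually use.

```python
import math

def human_competitive(n_correct, n_trials, alpha=0.05):
    """Judges guess which of two clips is human-made. If their accuracy is
    statistically indistinguishable from chance (p = 0.5), the generated
    movement passes. Uses a two-sided z-test (normal approximation)."""
    p_hat = n_correct / n_trials
    se = math.sqrt(0.25 / n_trials)       # standard error under the null p = 0.5
    z = (p_hat - 0.5) / se
    # Convert |z| to a two-sided p-value via the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_value > alpha               # True -> can't reject chance-level guessing

# 52 of 100 correct guesses: indistinguishable from coin-flipping.
print(human_competitive(52, 100))   # True
# 70 of 100 correct: judges can tell, so the generator fails.
print(human_competitive(70, 100))   # False
```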

According to Pasquier, there are numerous applications for such human-competitive systems. One is in the dance realm, where dancers have an individualized style of movement.

"We teach our systems a dancer's style, and the system can show the types of movements that they do not do," Pasquier said. "Sometimes it's good for a dancer to be reminded of the types of movement they don't use so they can expand their movement vocabulary."

Dancers can create new works through Pasquier's platform iDanceForm. The software helps choreographers create movement when there are space and financial limitations.

Expression and movement often go hand in hand, so the researchers are also training systems to detect mood from movement. They first asked people whether they could read emotions such as sadness, joy, and anger in recorded movement, then tested whether machines could do the same. Pasquier found that their systems could both generate movement conveying different emotions and detect the emotions a mover intended.
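Detecting intended emotion from movement can be framed as ordinary classification over movement features. Here is a deliberately tiny sketch, not the lab's actual method: the two features (mean joint speed and mean posture height) and all the numbers are invented, and a nearest-centroid rule stands in for a trained model.

```python
import numpy as np

# Hypothetical training clips: (mean joint speed, mean posture height),
# labeled with the performer's intended emotion.
training = {
    "sadness": np.array([[0.20, 0.30], [0.25, 0.35]]),  # slow, slumped
    "joy":     np.array([[0.80, 0.90], [0.75, 0.85]]),  # fast, upright
    "anger":   np.array([[0.90, 0.50], [0.85, 0.45]]),  # fast, tense
}

# One centroid per emotion: the average feature vector of its clips.
centroids = {label: clips.mean(axis=0) for label, clips in training.items()}

def detect_emotion(clip_features):
    """Classify a clip by its nearest emotion centroid (Euclidean distance)."""
    return min(centroids,
               key=lambda lbl: np.linalg.norm(clip_features - centroids[lbl]))

print(detect_emotion(np.array([0.22, 0.31])))  # sadness
print(detect_emotion(np.array([0.82, 0.88])))  # joy
```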

The combination of generating new movement from old patterns and expressive cues is applicable to the film and gaming industries. In the case of Pitt, there would be no need to bring him back into the studio, because the systems could complete the scene. In fact, using this technology, studios could create scenes with several characters interacting without any of the actors present.

"As far as we know, neither the film industry nor the game industry uses this type of technology yet. These are fresh results from our Metacreation Lab," Pasquier said. "There are other approaches to computer-assisted animation, but ours is new, and more powerful and versatile than the existing ones."

Static rendering of some of the movement generated from algorithms. Credit: Omid Alemi
But if computers can complete creative tasks, will there still be a need for human art? It's a concern Pasquier hears often. The fear of advancing technology, however, has recurred throughout history.

Take the example of the sewing machine.

"When the sewing machine was popularized in London at the end of the 19th century, there were huge riots," Pasquier said. "People who were sewing by hand worried that machines would take their jobs. But a few years later they agreed the machines were better. People could concentrate on creative tasks, and fashion blossomed."

Pasquier feels the same is true in this case. New technologies are empowering artists to spend less time on tedious tasks and more time being creative.

In a typical Pixar film, a team of artists can work for years perfecting each animation, frame by frame, to achieve the finished product. The gaming industry is even bigger than television and film, and in nonlinear video games like Mass Effect, where there is no single fixed storyline, there are far too many situations to prepare animations for by hand.

"The system will learn the way a character, animal, or monster moves, and then we can generate new movement as we need them," Pasquier said.

For now, Pasquier and his team of researchers are continuing to research these systems and build more accurate algorithms.

Demonstration of Pasquier's WalkNet, an interactive, affect-expressive movement generation system

"In 10 years, I expect these types of systems to be integrated in games, where real-time generation of new animation is needed, as well as in animation tools," Pasquier said. "The idea is not to replace human animators, but to give them power tools that facilitate their task, so that they can concentrate on the more creative parts and be more efficient. This will just be a new evolution in computer-aided animation."



Story Highlights

Philippe Pasquier, a professor at Simon Fraser University, is merging art and science to create systems that can understand and produce human-quality movement.

Pasquier's project, Movement Style Machine, uses deep learning techniques and relies on the collaboration of graduate students with artists to create new algorithms.

Through XSEDE, the researchers run their algorithms on the Stampede supercomputer.

There are numerous applications for this technology, including dance, film, art, and the video game industry.


Contact

Faith Singer-Villalobos

Communications Manager
faith@tacc.utexas.edu | 512-232-5771

Aaron Dubrow

Science and Technology Writer
aarondubrow@tacc.utexas.edu

Jorge Salazar

Technical Writer/Editor
jorge@tacc.utexas.edu | 512-475-9411