TECH TUESDAY: Computer-generated images see more effective use in movies
The 88th annual Academy Awards were held on Sunday, and people have been talking about the results ever since. Notably, a large number of the films nominated for awards used computerized visual effects, including “The Revenant,” “Ex Machina” and “The Martian.”
Visual effects have been a part of motion pictures for decades, first used to create illusions that deceived the audience. Later, they were used to create entirely new objects without the use of physical props, according to PBS.
Today, many types of visual effects and techniques are in use, including forced perspective, stop motion and motion control, according to PBS.
Computer-generated imagery (CGI) is one type of visual effect and has been the focus of critical attention for years. Movies such as the “Star Wars” and “The Lord of the Rings” series have attracted attention for their exceptional use of CGI.
Despite the prevalence of CGI in modern film, the history of the process behind its production is not widely known. Computer-generated graphics originated in the 1960s, when Ivan Sutherland, a computer scientist at the Massachusetts Institute of Technology, demonstrated their use on a computer, said Herbert Freeman in “Interactive Computer Graphics.”
“In his now classic thesis, he showed how a computer could be employed for interactive design of line drawings using a simple cathode-ray tube display and a few auxiliary input controls,” he said. “It was not until Sutherland developed his system for man-machine interactive picture generation that people became aware of the full potential offered by computer graphics.”
Once Sutherland made it possible to interact with generated pictures, people began to experiment with the technology. They found, however, that the process was computationally taxing and extremely complex, he said.
As time progressed, computers became more powerful, and computer scientists came to understand the intricacies of making better, more realistic pictures. Over this time, the focus of CGI production shifted toward creating better algorithms rather than relying on more powerful machines, he said.
Today, CGI has advanced to the point of creating characters that look extremely realistic in the same shot as human characters. The process involved is extremely long and complex, but the results are impressive.
In animation, computer animators use rapid sequences of slightly different still images to create the illusion of motion. This is comparable to how roll films work, with each shot representing a single frame. Flipbooks also employ this method, while relying on the viewer to flip the pages quickly enough for the images to blend into movement.
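The frame-by-frame idea can be sketched in code. This is an illustrative toy, not a production renderer: the 24 frames-per-second rate comes from traditional film, while the moving "object" (a single position value per frame) is an assumption made for brevity.

```python
FPS = 24  # traditional film runs at 24 frames per second

def render_frames(duration_s, speed=1.0):
    """Render one complete snapshot per frame for an object moving
    at `speed` units per second (illustrative values only)."""
    n_frames = int(duration_s * FPS)
    # Each entry stands in for a full image: a complete picture of
    # the scene at that instant, just like one photograph on a reel.
    return [round(speed * (i / FPS), 4) for i in range(n_frames)]

frames = render_frames(2.0)  # a 2-second shot
print(len(frames))           # 48 frames
```

Played back in order at 24 frames per second, the small differences between consecutive snapshots read as continuous motion, exactly as with film or a flipbook.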
Animations with depth have a similar production process. Instead of flat 2D images, three-dimensional (3D) “models” can be modified in smaller pieces, allowing the whole image to change slightly from frame to frame, according to makeuseof.org.
For example, 2D animation processes rely on redrawing larger sections of a character to show movement, while 3D animation processes only need smaller regions to be changed.
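This piecewise approach can be illustrated with a toy example. The model, its two named parts, and the rotation are all hypothetical; the point is that posing a new frame only recomputes the "arm" vertices, while the rest of the character is left untouched.

```python
import math

def rotate_y(vertex, angle_rad):
    """Rotate a 3D point around the y-axis."""
    x, y, z = vertex
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * x + s * z, y, -s * x + c * z)

# A hypothetical character, stored as named groups of 3D vertices.
model = {
    "torso": [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)],  # stays fixed
    "arm":   [(1.0, 1.0, 0.0), (2.0, 1.0, 0.0)],  # the piece we move
}

# Pose the next frame by transforming only the arm, 90 degrees around y.
model["arm"] = [rotate_y(v, math.pi / 2) for v in model["arm"]]
print(model["arm"])
```

A 2D pipeline would instead have to redraw the arm (and everything it overlaps) as new flat artwork for every pose; here the same model data is simply re-transformed.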
The drawback of 3D animation is that these images require much more computing power than 2D images, so there is a limit on how many complex 3D images can be used. This becomes less of a problem as computers grow more powerful, but computing limits remain.
Three-dimensional images are extremely complex because they can be thought of as layered 2D images. When attempting to produce realistic 3D images, it becomes a challenge to account for how each layer changes and reacts to the other changing layers, according to makeuseof.org.
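The layering idea is closely related to alpha compositing, where stacked 2D layers are blended back to front with the standard "over" operator. Below is a minimal single-pixel sketch; the color values are illustrative, and real renderers apply this per pixel across whole images.

```python
def over(fg, bg):
    """Composite one RGBA pixel over another (channels in 0.0-1.0)."""
    fr, fg_g, fb, fa = fg
    br, bg_g, bb, ba = bg
    out_a = fa + ba * (1 - fa)  # combined coverage of both layers
    if out_a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda f, b: (f * fa + b * ba * (1 - fa)) / out_a
    return (blend(fr, br), blend(fg_g, bg_g), blend(fb, bb), out_a)

# Stack layers back to front: an opaque blue background, then a
# half-transparent red layer in front of it.
background = (0.0, 0.0, 1.0, 1.0)
foreground = (1.0, 0.0, 0.0, 0.5)
print(over(foreground, background))  # blends to purple
```

When any one layer changes between frames, every pixel it covers must be re-blended against the layers behind it, which is one reason realistic layered scenes are so costly to compute.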
Computer-generated images can be used in a multitude of ways, ranging from 2D images in a 3D world, as in "Mary Poppins" from 1964, to creating entire worlds and species, as in James Cameron’s "Avatar." Ultimately, as technology progresses, the only limiting factor is the creator’s imagination.