Introduction
One of the first scientists to explore the idea of an interactive computer graphics program was Ivan Sutherland, often called the “grandfather” of computer graphics (3). His programs allowed computer graphics to be integrated more readily into engineering and design. One such program, Sketchpad, published in 1963, represented a major breakthrough in the field. With it came a pivotal demonstration of the potential for human interaction in the creation of computer graphics, as well as of the potential for graphics to serve both scientific and artistic purposes. This breakthrough set the tone and pace for future innovation (3).
Some of the most popular products currently used in computer graphics, at both the professional and amateur level, are Maya and the Cintiq. Maya is a widely used modeling package that artists use to create and manipulate meshes. In computer graphics, a “mesh” is a representation of a shape’s surface as a collection of connected polygons, defined by vertices, edges, and faces. The Cintiq, developed by Wacom, is a display-tablet hybrid that lets the user draw directly on the screen with a pressure-sensitive stylus. These products not only facilitate the creation and manipulation of graphics but are also accessible to a wide variety of users. Both are highly customizable, so each provides a personalized platform for artistic expression. Together they exemplify today’s basic tools in computer graphics.
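To make the notion of a mesh concrete, here is a minimal sketch of how one might be stored in code. The names and layout are illustrative only; packages such as Maya or Blender use far richer representations.

```python
# Illustrative sketch: a tiny mesh stored as vertex positions plus faces
# that index into the vertex list. Layout is hypothetical.

# Four corners of a unit square in the z = 0 plane.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (1.0, 1.0, 0.0),
    (0.0, 1.0, 0.0),
]

# Two triangles, each a triple of indices into the vertex list.
faces = [(0, 1, 2), (0, 2, 3)]

# Moving one vertex ("editing the mesh") moves every face that references it.
vertices[2] = (1.2, 1.1, 0.3)
```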
Although computer-generated films and the incredibly complex artwork they showcase often leave audiences in wonderment, awe and praise are rarely directed toward the technology that brings this art form to life. One of the developers behind the technology that facilitates artistic expression in a digital medium is Dartmouth’s Jon Denning. The primary focus of Denning’s research is computer graphics. Through his studies, he seeks to support artistic expression by removing the obstacles created by the technology digital artists use to generate computer graphics. The construction of polygonal meshes, for example, is incredibly complex and involves “tens of thousands of individual operations.” Denning’s research, especially through the project MeshFlow, focuses on the interactive visualization of mesh construction sequences (2). The researchers hope to streamline the artistic process through technological innovation, as Denning describes below in an interview with the DUJS.
Q: When did you first get started in computer graphics?
A: Officially I started four years ago, in 2009, but I’ve been writing computer graphics programs for years and years, probably 20 years at least.
Q: So when you started writing those graphics programs, were you creating them from scratch?
A: Yes, computers have come a long way.
Q: How would you compare these programs to the ones Pixar and other animation houses are using?
A: Well, RenderMan and its predecessors were your typical ray-tracing renderers: they cast rays from the camera into the scene and bounce them around to determine what the camera is able to see. What I was typically writing was more like a game engine. We would use similar techniques in that we would cast rays out.
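As a rough illustration of the ray-casting idea described above, the following sketch fires one ray per pixel from a pinhole camera and tests it against a single sphere. The scene, dimensions, and names are simplified assumptions, not code from RenderMan or any production renderer.

```python
# Minimal ray-casting sketch: one ray per pixel from a camera at the origin,
# intersected with a single hard-coded sphere. Purely illustrative.
import math

WIDTH, HEIGHT = 16, 8
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, -3.0), 1.0

def hit_sphere(origin, direction):
    # Solve |origin + t*direction - center|^2 = radius^2 for t; a hit exists
    # whenever the quadratic's discriminant is non-negative.
    ox, oy, oz = (origin[i] - SPHERE_CENTER[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - SPHERE_RADIUS ** 2
    return b * b - 4.0 * a * c >= 0.0

for row in range(HEIGHT):
    line = ""
    for col in range(WIDTH):
        # Map the pixel to a point on an image plane one unit in front of the camera.
        x = (col + 0.5) / WIDTH * 2.0 - 1.0
        y = 1.0 - (row + 0.5) / HEIGHT * 2.0
        length = math.sqrt(x * x + y * y + 1.0)
        ray = (x / length, y / length, -1.0 / length)
        line += "#" if hit_sphere((0.0, 0.0, 0.0), ray) else "."
    print(line)
```

A full ray tracer extends this loop by shading each hit point and recursively bouncing rays for reflections and shadows.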
Q: What kind of research are you doing right now?
A: Currently I am working with artists to record what they are doing, and developing tools to visualize what they do and then summarize it in a way that enables us to understand how an artist does what he or she does. What tools do they use, and what patterns do they follow? Do they work on this part of the model or that part of the model, or do they hop around a lot? So I’m working on developing tools to visualize and watch how an artist works.
Q: So would it be translating the visual art into the visualized graphic?
A: No, there are already tools for that, such as Blender, which is a free and open-source program for 3D work, sort of like Maya, the industry standard. What is nice about Blender is that we are able to hook onto it and write code that will spit out what the artist is doing while they work. These artists are used to working in Blender, and nothing changes on their end; it just saves snapshots of their data as they go. What my program does is take those snapshots, compress them, and plop them into a viewer that allows you to explore the construction sequence.
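The instrumentation Denning describes is not reproduced here, but a minimal sketch of the idea, a Blender Python hook that logs the active mesh’s vertices whenever the scene updates, might look like the following. The handler and attribute names assume a recent Blender API, and the output path and JSON-lines format are hypothetical.

```python
# Illustrative sketch of snapshot instrumentation in Blender's Python API.
# Handler and attribute names assume a recent Blender release; the actual
# recording code discussed in the interview is not shown in this article.
import json
import bpy
from bpy.app.handlers import persistent

SNAPSHOT_LOG = "/tmp/mesh_snapshots.jsonl"  # hypothetical output file

@persistent
def record_snapshot(*_args):
    obj = bpy.context.active_object
    if obj is None or obj.type != 'MESH':
        return
    # Serialize the current vertex positions of the active mesh as one log line.
    snapshot = {
        "object": obj.name,
        "vertices": [list(v.co) for v in obj.data.vertices],
    }
    with open(SNAPSHOT_LOG, "a") as log:
        log.write(json.dumps(snapshot) + "\n")

# Record a snapshot after every dependency-graph update, i.e., every edit.
bpy.app.handlers.depsgraph_update_post.append(record_snapshot)
```

A real recorder would also need to skip unchanged snapshots and capture tool and camera state, as described in the interview.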
Q: What would be an example of this?
A: For instance, in one project we looked at digital sculpting with a digital block of clay, and we watched how the artist turned it into a face. We know where the virtual camera that lets the artist spin the virtual object is located, and we know which brushes the artist was using. It’s like the Cintiq, which even tracks the pressure of strokes. We are just beginning to see how artists work from a scientific point of view.
Q: What are you going to do with this information?
A: Presently, we are just trying to tease out some patterns with it. Some future directions include being able to compare the workflow of two artists. If they both start at the same spot and each end up with a product that is supposed to be fairly similar to that of the other, how did the two artists get there and what techniques did one artist use compared to the other?
Q: What would be an application of this?
A: Well, if you have a teacher and a student, or a master and a novice, and the master produces a tutorial of how to do something for the student to follow, then the program would constantly be comparing the novice’s work with the master’s work. Feedback would be the immediate application.
Q: What would you like to see done with this data?
A: Because no one has really studied this data yet, it is hard to imagine the kinds of things you’d be able to pull out of it. One thing I would like to be able to do is find artists who are using a particular tool in a particular way but don’t use some other tool whose utility in the situation is not immediately obvious, though it becomes obvious through the data. Maybe there is some other tool that is able to pull those two tools together in a way that would allow the artists to do their jobs faster.
Q: Where did these projects start?
A: One of our first projects was called MeshFlow, in which we recorded a bunch of artists. We took tutorials that were online and basically replicated them on a computer. Then we showed the tutorials to digital art students as an experiment in getting information from an online tutorial. We varied how summarized the information was: rather than having to click a vertex repeatedly to move it here, here, and here, the instruction was condensed to show all of those moves in one shot. We ran that experiment, and many of the subjects considered it a good replacement for the static tutorial system. Compared to a document or a video, MeshFlow is a little more interactive, so whether the artists need the extra level of detail to know exactly which vertices to push or just the top-level view, that’s okay. In MeshFlow they can tailor the information to fit what they need. So that was kind of a starting point for the direction we’re heading.
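The condensing described above, collapsing many repeated low-level edits into one summarized step, can be illustrated roughly by grouping consecutive operations that use the same tool on the same element. The operation format below is invented for illustration and is not MeshFlow’s actual data model.

```python
# Illustrative sketch: collapse runs of identical (tool, vertex) operations
# into single summarized steps. The tuple format is invented for this example.
from itertools import groupby

ops = [
    ("translate", 12), ("translate", 12), ("translate", 12),
    ("translate", 40),
    ("extrude", 7), ("extrude", 7),
]

summary = [
    {"tool": tool, "vertex": vertex, "count": len(list(run))}
    for (tool, vertex), run in groupby(ops)
]

print(summary)
# [{'tool': 'translate', 'vertex': 12, 'count': 3},
#  {'tool': 'translate', 'vertex': 40, 'count': 1},
#  {'tool': 'extrude', 'vertex': 7, 'count': 2}]
```

A viewer can then expand or collapse these summarized steps on demand, which is the kind of level-of-detail control the interview describes.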
Q: When you’re doing this research, do you feel that your ability to answer the question is limited by the technology that exists or are you constantly trying to make new technology to facilitate your project? Are there any limitations in the field right now?
A: Well, one of the limitations that we’re, in a sense, solving through these projects is that we want to know what the artists do, but there are very limited ways of getting at that information. So we ended up using Blender and writing instrumentation code for it to spit out snapshots as the artist is working. Once we had all of this data we had to figure out what to do with it, so we had to build a viewer on top of that. In comparing two different artists who are performing the same task, there are presently a few ways to determine that one artist is working on a part of the mesh that roughly corresponds to the part the other is working on, but we needed a lot more than this fuzzy correspondence. We needed more than “this part roughly matches that other part”; we needed something much more discrete, like “this vertex matches that one,” so that we know how the movements differ when Subject A moves a vertex and when Subject B moves the same vertex.
I guess the limitation that we’re running into is that no one has worked in this area yet, so we’re having to develop a lot of infrastructure as we go. If it continues to go as well as it has, it will be very interesting in about five years, after all of this infrastructure has been established and people start to look at the data generated.
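As a crude illustration of the discrete vertex correspondence mentioned above, the sketch below matches each vertex of one artist’s mesh to the closest vertex of the other’s. This nearest-neighbor toy is a stand-in for exposition, not the method used in the research.

```python
# Illustrative nearest-neighbor sketch of vertex correspondence between two
# meshes given as lists of (x, y, z) positions. A toy stand-in only.
def closest_vertex_map(mesh_a, mesh_b):
    def dist2(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    # For each vertex index in mesh_a, find the index of the nearest vertex in mesh_b.
    return {
        i: min(range(len(mesh_b)), key=lambda j: dist2(v, mesh_b[j]))
        for i, v in enumerate(mesh_a)
    }

subject_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
subject_b = [(0.1, 0.0, 0.0), (0.9, 0.1, 0.0), (0.0, 1.1, 0.0)]
print(closest_vertex_map(subject_a, subject_b))  # {0: 0, 1: 1, 2: 2}
```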
Q: Are these more intuitive tools or control features for the artists?
A: Pixar was one of the first companies to put the animator inside the technology to make the art come alive, but it was only able to do that because the artists worked closely with the technical staff. The techs were able to give the animators the tools they needed to express themselves and to tell the story they imagined. So in my view, there could be a tool that helps artists in a more intuitive, natural way, unhindered by the limitations of present technology.
There are some new hardware technologies coming out that could start working toward this idea, like Microsoft’s Kinect, which is basically a sophisticated camera. There is a new Kinect-like device called the Leap that doesn’t scan the entire room but scans a small area above your desk. You could have a 3D area of input for manipulating the environment around you, much like the transition from the computer mouse or trackpad to full multi-touch screens. It’s the same idea, but up in space, manipulating the data with your hands.
Q: Like a hologram?
A: Not necessarily, because it’s not projecting anything; it senses where your hands are, which allows the artist to manipulate the project.
Q: What else are you working on?
A: The Cintiq is basically a screen that allows you to use a stylus more like an actual pen. It detects which end of the pen you’re using and measures the pressure of your stroke. With the pen, you have a more intuitive input device. It’s like your paper-and-pen type of input, but it gives you much more control.
The future of computer graphics is accelerating thanks to the work of researchers like Denning. By analyzing the artistic process, Denning and his colleagues aim to make digital tools better support artistic expression, work that will likely shape the future of graphics. These efforts at Dartmouth are mirrored in the work of many attendees of the annual SIGGRAPH conference, where featured presentations showcase advancements that hold promise for the field. Of note in forthcoming feature films will be the increased realism of complex human hair, aided by thermal imaging, a preview of which can be seen in Pixar’s “Brave.” This is just one small example of what lies ahead, for the pace of technological advancement quickens with each new innovation. With the increased availability of graphics software such as Maya, and with researchers such as Denning working to make these programs more conducive to artistic expression, the average computer user may one day be as adept at graphics generation as at word processing.
References
1. A. O. Tokuta, Computer Graphics (n.d.). Available at http://www.accessscience.com/content.aspx?id=153900 (22 April 2013).
2. J. D. Denning, W. B. Kerr, F. Pellacini, ACM Trans. Graph. 30, DOI: 10.1145/1964921 (2011).
3. A Critical History of Computer Graphics and Animation (n.d.). Available at http://design.osu.edu/carlson/history/lesson2.html (22 April 2013).