Nieuwe Instituut

The Fragility of Life

A conversation between Femke Snelting, Jara Rocha and Simone Niquille during the Possible Bodies working session at Schloss Solitude in Stuttgart in May 2017, following a screening of process material of Niquille's film _The Fragility of Life_.

13 July 2017

CAESAR database used as a training set in the research towards a parametric three-dimensional body model for animation. “Method for providing a three-dimensional body model,” Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V., 2015.

Jara Rocha: In the process of developing Possible Bodies, one of the excursions we made was to the Royal Belgian Institute of Natural Sciences' 3D reproduction workshop in Brussels, where they were working on reproductions of hominids. Another visitor asked: "How do you know how many hairs a monkey like this should have?" The person working on the 3D reproduction replied, "It is not a monkey." You could see that he had an empathetic connection to the on-screen model he was working on, being of the same species. I would like to ask you about norms and embedded norms in software. Talking about objective truth and parametric representation in the example you refer to, there is a huge norm that worries me: that of species, of unquestioned humanness. When we talk about bodies, we can push certain limits because of the hegemony of the species. In court, the norm is anthropocentric, but when it comes to representation…

Femke Snelting: This is the subject of Kritios They?

Simone Niquille: Kritios They is a character in The Fragility of Life, a result of the research project The Contents. While The Contents is based on the assumption that we as humans possess and create content, living in our daily networked space of appearance that is used for or against us, I became interested in the corporeal fragility exposed and created through this data, or that the data itself possesses. In the film, the decimation scene questions this quite bluntly: when does a form stop being human, when do we lose empathy towards the representation? Merely reducing the 3D mesh's resolution, decreasing its information density, can affect the viewer's empathy. Suddenly the mesh might no longer be perceived as human and is revealed as a simple geometric construct: a plain surface onto which any and all interpretation can be projected. The contemporary accelerating frenzy of collecting as much data as possible on one single individual to achieve maximum transparency and construct a 'fleshed out' profile is a fragile endeavour. More information does not necessarily lead to a more defined image. In the case of Kritios They, I was interested in character creation software and the parameters embedded in its interfaces. The parameters come with limitations: an arm can only be this long, skin colour is represented within a specified spectrum, and so on. How were these decisions made and these parameters determined?
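[A minimal sketch of the kind of decimation the scene works with: reducing a mesh's information density until the form stops reading as a body. The snippet below uses vertex clustering, one common simplification approach; it is an illustration, not the specific tool used in the film.]

```python
# Minimal sketch of mesh decimation by vertex clustering (illustrative only).
# Vertices are snapped to a coarse grid and merged, so the mesh loses
# information density the coarser the grid becomes.
import numpy as np

def decimate(vertices, faces, cell_size):
    """Cluster vertices into grid cells of `cell_size`, collapse each cell to
    its centroid, and drop faces whose corners merged together."""
    cells = np.floor(vertices / cell_size).astype(int)
    # Map every occupied cell to one new vertex index.
    unique_cells, inverse = np.unique(cells, axis=0, return_inverse=True)
    new_vertices = np.zeros((len(unique_cells), 3))
    counts = np.zeros(len(unique_cells))
    for cell_idx, v in zip(inverse, vertices):
        new_vertices[cell_idx] += v
        counts[cell_idx] += 1
    new_vertices /= counts[:, None]
    # Remap faces to the merged vertices and drop degenerate ones.
    remapped = inverse[faces]
    keep = (remapped[:, 0] != remapped[:, 1]) & \
           (remapped[:, 1] != remapped[:, 2]) & \
           (remapped[:, 0] != remapped[:, 2])
    return new_vertices, remapped[keep]

# The larger the cell size, the fewer vertices survive and the less "human"
# the resulting surface reads.
```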

"Looking at design history and the field's striving to create a standardised body to better cater to the human form, I found similarities of intent and problematics."

Alphonse Bertillon, Anthropometric data sheet and Identification Card, 1896.

Humanscale 7b: Seated at Work Selector, Henry Dreyfuss Associates, MIT Press, 1981. collection.cooperhewitt.org/objects/51689299

Anthropometric efforts ranging from Da Vinci's Vitruvian Man, to Le Corbusier's Modulor, to Alphonse Bertillon's Signaletic Instructions and invention of the mug shot, to Henry Dreyfuss's Humanscale… What these projects share is an attempt to translate the human body into numbers. Be it for the sake of comparison, efficiency, policing…
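[As an illustration of that translation into numbers: the sketch below reduces a population of measured bodies to a few design percentiles, the kind of summary an anthropometric survey produces. All figures are invented stand-ins.]

```python
# Sketch of the reduction an anthropometric survey performs: a population of
# measured bodies becomes a handful of design percentiles. The numbers below
# are invented stand-ins, not values from Humanscale, CAESAR or any survey.
import numpy as np

rng = np.random.default_rng(1)
stature_mm = rng.normal(loc=1700, scale=90, size=2000)  # invented "measured" statures

p5, p50, p95 = np.percentile(stature_mm, [5, 50, 95])
print(f"designed-for range: {p5:.0f}-{p95:.0f} mm, 'average' body: {p50:.0f} mm")
# Anyone outside the 5th-95th percentile band falls outside the body the
# design will be made to fit.
```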

In a Washington Post article from 1999 on newly developed voice-mimicking technology, Daniel T. Kuehl, chairman of the Information Operations department at the National Defense University in Washington (the military's school for information warfare), is quoted as saying: "Once you can take any kind of information and reduce it into ones and zeroes, you can do some pretty interesting things."

To create the Kritios They character I used a program called Fuse. It was recently acquired by Adobe and is in the process of being integrated into their Creative Cloud services. It originated as assembly-based 3D modelling research carried out at Stanford University. The Fuse interface segments the body into Frankenstein-like parts to be assembled by the user. However, the seemingly restriction-free Lego-character-design interface is littered with limitations. Not all body parts mix as well as others; some create uncanny folds and seams when assembled. The torso has to be a certain length and the legs positioned in a certain way, and when I try to adapt these elements the automatic rigging process doesn't work because the mesh won't be recognised as a body.

A lot of these processes and workflows demand content that is very specific to their definition of the human form in order to function. As a result, they don't account for anything that diverges from that norm, establishing a parametric truth that is biased and discriminatory. This raises the question of what that norm is and how, by whom and for whom it has been defined.
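[To make that parametric truth concrete, the sketch below shows how a character creator's parameter ranges can silently enforce a norm. The parameter names and bounds are invented for illustration; they are not Fuse's actual values.]

```python
# Hypothetical illustration of how a character creator's parameter ranges can
# encode a norm. The parameter names and bounds are invented for this example;
# they are not Fuse's actual values.
PARAMETER_RANGES = {
    "arm_length": (0.85, 1.15),    # relative to a reference arm
    "torso_length": (0.90, 1.10),
    "skin_tone": (0.20, 0.95),     # position on a predefined gradient
}

def apply_parameters(requested):
    """Silently clamp every requested value to the permitted range: whatever
    falls outside the built-in spectrum simply cannot be produced."""
    applied = {}
    for name, value in requested.items():
        low, high = PARAMETER_RANGES[name]
        applied[name] = min(max(value, low), high)
    return applied

# A body that diverges from the norm is pulled back into it:
print(apply_parameters({"arm_length": 1.40, "skin_tone": 0.05}))
# -> {'arm_length': 1.15, 'skin_tone': 0.2}
```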

FS: Could you say something about the notion of 'parametric truth' that you used?

SN: Realising the existence of a built-in anthropometric standard in such software, I started looking at use cases of motion capture and 3D scanning in areas other than entertainment: applications that demand objectivity. I was particularly interested in crime and accident reconstruction animations that are produced as visual evidence or courtroom support material. Traditionally this support material would consist of photographs, diagrams and objects. More recently it sometimes includes forensic animations commissioned by either party. The animations are produced with various software and tools, sometimes including motion capture and/or 3D scanning technologies.

These animations are created after the fact, from a varying amalgam of witness testimonies, crime scene survey data, police and medical reports, etc., effectively creating a 'version of' events rather than an objective illustration. One highly problematic instance was an animation intended as a piece of evidence in the trial of George Zimmerman on the charge of second-degree murder for the shooting of Trayvon Martin in 2012. Zimmerman's defence commissioned an animation to attest that his actions were self-defence. Among the online documentation of the trial is a roughly two-hour-long video of Zimmerman's attorney questioning the animator on his process. Within these two hours of questioning the defence attorney attempts to demonstrate the animation's objectivity by minutely scrutinising the creation process. It is revealed that a motion capture suit was used to capture the character animations, to digitally re-enact Zimmerman and Martin. The animator states that he was the one wearing the motion capture suit, portraying both Zimmerman and Martin. If this weren't already enough to debunk any claim to objectivity, the attorney asks: "How does the computer know that it is recording a body?" Upon which the animator responds: "You place the 16 sensors on the body and then on screen you see the body move in accordance."

"But what is on screen is merely a representation of the data transmitted by 16 sensors, not a body."

A misplaced or wrongly calibrated sensor would yield an entirely different animation. And further, the anthropometric measurements of the two subjects were added in post-production, after the animation data had been recorded from the animator's re-enactment. In this case the animation was thankfully not admitted as a piece of evidence, but it was nevertheless allowed to be screened during the trial. The difference with showing video in court is that you see something play out visually, in a medium we are used to consuming. It takes root in a different part of your memory than a verbal account does, and renders one version more visible than others. Even with part of the animation based on data collected at the crime scene, part of the reproduction will remain approximation and assumption.
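[A minimal sketch of that sensitivity, under invented positions: a single joint angle reconstructed from three sensor locations changes once one sensor is displaced by five centimetres; in a real pipeline such errors compound across joints and frames.]

```python
# Minimal sketch of sensor sensitivity: one joint angle reconstructed from
# three sensor positions, then again after a single sensor is displaced by
# 5 cm. Positions are invented; a real motion capture pipeline solves for a
# full skeleton over time, where such errors compound.
import numpy as np

def elbow_angle(shoulder, elbow, wrist):
    """Angle at the elbow, in degrees, from three sensor positions (metres)."""
    upper = shoulder - elbow
    lower = wrist - elbow
    cos = np.dot(upper, lower) / (np.linalg.norm(upper) * np.linalg.norm(lower))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

shoulder = np.array([0.0, 1.4, 0.0])
elbow = np.array([0.0, 1.1, 0.1])
wrist = np.array([0.2, 0.9, 0.3])

print(elbow_angle(shoulder, elbow, wrist))                     # nominal reconstruction
print(elbow_angle(shoulder, elbow + [0.0, 0.0, 0.05], wrist))  # elbow sensor shifted 5 cm
```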

Such approximation is visible in the visual choices of the animation, for example. Most parts are modelled with minimal detail, I assume to communicate objectivity: "There were no superfluous aesthetic choices made." However, some elements receive very selective and intentional detailing. The crime scene's grassy ground is depicted as a flat plane with an added photographic texture of grass rather than 3D grass produced with particle hair. On the other hand, Zimmerman's and Martin's skin colour is clearly accentuated, as is the hoodie worn by Trayvon Martin, a crucial piece of the defence's case. The hoodie was instrumentalised as evidence of violent intentions during the trial, where it was claimed that if Martin had not worn the hood up he would not have been perceived as a threat by Zimmerman. To model these elements at varying, subjective resolution was a deliberate choice. The animation could have depicted raw armatures instead of textured figures, for example. It was designed to focus on specific elements; shifting that focus would produce differing versions.

3D animation by the Reuters-owned News Direct (“Transform your News with 3D Graphics”): “FBI investigates George Zimmerman for shooting of Florida teen, Trayvon Martin”, News Direct, 2012.

FS: This is something that fascinates me, the different levels of detailing that occur in the high-octane world of 3D, where some elements receive an enormous amount of attention and other elements, such as the skeleton or the genitals, almost none.

SN: Yes, like the 16 sensors representing a body…

FS: Where do you locate these different levels of resolution?

SN: Within the CGI [computer-generated imagery] community, modellers are obsessed with creating 3D renders in the highest possible resolution, as a technical as well as artistic accomplishment, but also as a form of muscle flexing of computing power. Detail is not merely a question of render quality; equally important is the realism achieved: a tear on a cheek, a thin film of sweat on the skin. On forums you come across discussions of something called subsurface scattering, which is used to simulate blood vessels under the skin to make it look more realistic, to add weight and life to the hollow 3D mesh. However, the discussions tend to focus on pristine young white skin, oblivious to diversity.
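[One widely discussed shortcut for this effect is 'wrap lighting', sketched below with invented constants: light is allowed to bleed past the shadow edge and the bleed is tinted, suggesting blood under the surface. Production subsurface scattering is far more involved.]

```python
# Rough sketch of "wrap lighting", a cheap stand-in for subsurface scattering:
# light bleeds past the shadow edge and the bleed is tinted to suggest blood
# under the surface. Production renderers do far more (diffusion profiles,
# path tracing); the constants here, including the default reddish tint, are
# invented; a single default tint is exactly where assumptions about whose
# skin is being simulated get baked in.
import numpy as np

def wrapped_diffuse(normal, light_dir, wrap=0.5, tint=(1.0, 0.3, 0.25)):
    """RGB diffuse term: `wrap` softens the shadow edge, `tint` colours the
    light that leaks past where a plain Lambert term would be black."""
    n_dot_l = np.dot(normal, light_dir)
    plain = max(n_dot_l, 0.0)                            # standard Lambert term
    wrapped = max((n_dot_l + wrap) / (1.0 + wrap), 0.0)  # softened, wrapped term
    bleed = wrapped - plain
    return np.array([plain, plain, plain]) + bleed * np.array(tint)

normal = np.array([0.0, 0.0, 1.0])
light = np.array([0.0, 0.9, -0.2])     # grazing light from just behind the surface
light = light / np.linalg.norm(light)
print(wrapped_diffuse(normal, light))  # reddish glow where Lambert alone gives black
```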

JR: This raises the notion of the 'epistemic object'. The matter you manipulated brings a question to a specific table, but it cannot be on every table: it cannot be on the techies' table and on the designers' table at the same time. However, under certain conditions, with a specific language and political agenda and so on, The Contents raises certain issues and serves as a starting point for a conversation, or facilitates an argument for one. This is where I find your work extremely interesting. I consider the things you make to be objects around which to formulate a thought, for thinking about specific crossroads. They can as such be considered 'disobedient action-research', epistemic objects in the sense that they make me think, help me wonder about political urgencies, techno-ecological systems and the decisions that went into them.

SN: That's specifically what two scenes in the film experiment with: the sleeping shadow and the decimating mug shot. They depend on the viewer's expectations.

"The most beautiful reaction to the decimating mug shot scene has been: 'Why does it suddenly look so scary?'"

The viewer has an expectation of the image that is slowly taken away, quite literally, by lowering the resolution. It is similar with the sleeping scene: what appears to be a sleeping figure filmed through frosted glass unveils itself through a change of camera angle. The new perspective reveals another reality. What I am trying to figure out now is how the images operate in different spaces. Probably there isn't one single application; they can be in The Fragility of Life as well as in a music video or an ergonomic simulation, for example, and travel through different media and contexts. I am interested in how the images exist in these different spaces.

FS: We see that these renderings, not only yours but in general, are very volatile in their ability to transgress applications, on the large scale of industries ranging from Hollywood to medical, to gaming, to military. But it seems that, seeing your work, this transgression can also function on different levels.

SN: These different industries share software and tools, which are, after all, developed at their crossroads.

"Creating images that attempt to transgress levels of application is a way for me to reverse the tangent, and question the tools of production."

Is the image produced differently if the tool is the same but its application is different? If 3D modelling software created for the gaming industry is used to create forensic animations, possibly incarcerating people, what are the parameters under which that software operates? This is a vital question affecting real lives.

JR: Can you please introduce us to Mr. item #0082a?

SN: In attempting to find answers to some of the questions about the Fuse character creator's parameters, I came across a research project from the late 1990s and early 2000s initiated by the U.S. Air Force Research Laboratory, called CAESAR [Civilian American and European Surface Anthropometry Resource].

#0082a is a whole-body scan mesh from the CAESAR database, presumably the 82nd scanned subject in position a. The CAESAR project's aim was to create a new anthropometric surface database of body measurements for the Air Force's cockpit and uniform design. The new database was necessary to represent contemporary U.S. military staff: previous measurements were outdated, as the U.S. population had grown more diverse since the last measurement standards had been registered. This large-scale project consisted of scanning about 2000 bodies in the United States, Italy and the Netherlands. A dedicated team, outfitted with the first whole-body scanner developed specifically for this purpose by a company called Cyberware, travelled to various cities within these countries. This is how I initially found out about the CAESAR database, by trying to find information on the Cyberware scanner.

I found a video somewhere deep within YouTube: a very strange and wonderful video of a 3D figure dancing on a NIST [U.S. National Institute of Standards and Technology] logo. The figure looked like an early 3D scan that had been crudely animated. I got in touch with the YouTube user and, through a Skype conversation, learned about his involvement in the CAESAR project through his work at NIST. Because of his own personal fascination with 3D animation, he had made the video I initially found by animating one of the CAESAR scans, #0082a, with an early version of Poser.

Leonard Nimoy was one of the first actors to be scanned and digitally replicated, in Star Trek IV: The Voyage Home. […] Image: Cinefex 29, 02/1987.

Cyberware has its origins in the entertainment industry. They scanned Leonard Nimoy, who portrayed Spock in the Star Trek series, for the famous dream sequence in the 1986 movie Star Trek IV: The Voyage Home. Nimoy's head scan is among the first 3D scans… The trajectory of the Cyberware company is part of a curious pattern: it originated in Hollywood as a head scanner, advanced to a whole-body scanner for the military, and completed the entertainment-military-industrial cycle by returning to the entertainment industry for whole-body scanning applications.

CAESAR is, as far as I know, one of the biggest databases of scanned body meshes and anthropometric data available to this day. I assume that is why it keeps being used -- recycled -- for research in need of humanoid 3D meshes.

While looking into the history of the character creator software Fuse, I sifted through 3D mesh segmentation research, which later informed the assembly modelling research at Stanford that became Fuse. #0082 was among 20 CAESAR scans used in a database assembled specifically for this segmentation research, and thus ultimately played a role in setting the parameters for Fuse: a very limited amount of training data that, in the case of Fuse, ended up shaping widely distributed commercial software. At this point, at the very least, the training data should be reviewed… It felt like a whole ecology of past and future 3D anthropometric standards revealed itself through this one mesh.
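[A heavily simplified sketch of how a parametric body model can be learned from registered scans, in the spirit of the research referenced in the image caption above; the real methods are far more elaborate, but the dependency on training data is the same.]

```python
# Heavily simplified sketch of deriving a parametric body model from registered
# scans: stack each scan's vertices, run a principal component analysis, and
# every new body is the mean mesh plus a weighted blend of a few components.
# The model can only express variation present in its training scans.
import numpy as np

def fit_shape_space(scans, n_components=5):
    """`scans`: array of shape (n_scans, n_vertices, 3), meshes in vertex
    correspondence. Returns the mean body and the main modes of variation."""
    flat = scans.reshape(len(scans), -1)
    mean = flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
    return mean, vt[:n_components]

def make_body(mean, components, weights):
    """Synthesise a new body as mean + weighted sum of components."""
    return (mean + np.asarray(weights) @ components).reshape(-1, 3)

# With only 20 training scans there are at most 19 independent components:
# every body this model can produce is a blend of those 20 people.
scans = np.random.default_rng(0).normal(size=(20, 1000, 3))  # stand-in for real scan data
mean, components = fit_shape_space(scans)
body = make_body(mean, components, weights=[1.5, -0.3, 0.0, 0.0, 0.2])
```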

[Possible Bodies is a collaborative research project initiated by Jara Rocha and Femke Snelting "on the very concrete and at the same time complex and fictional entities that 'bodies' are, asking what matter-cultural conditions of possibility render them present. This becomes especially urgent in contact with the technologies, infrastructures and techniques of 3D tracking, modelling and scanning."]
