On Seeing Human: A Three-Factor Theory of Anthropomorphism

Posted in Uncategorized on September 2, 2013 by pauljohnwhite

Shen Shaomin

Posted in Uncategorized on June 17, 2013 by pauljohnwhite

I Sleep on Top of Myself and I Want to Know What Infinity Is are works by Shen Shaomin that reveal the artist’s outlook through hyperrealistic sculptures, illustrating the potential and metaphorical consequences of an over-developed and despoiled world.

I Sleep on Top of Myself is a series of hairless, motorized, lifelike sculptures of animals which portend a future where the earth is so severely depleted of natural resources that animals begin to lose their fur. Trapped in deep sleep, these breathing creatures (a cat, a dog, a bunny, and two geese) are forced to sleep on the remnants of their fur and feathers in order to survive. As a metaphor, this quietly troubling piece asks whether we humans will be forced to survive on the remnants of our past once we have exhausted all of our natural resources.

Through a similar methodology, I Want to Know What Infinity Is questions the urge to relentlessly develop and expand our economies at all costs. Similar to the way we strive for development, the old, naked, breathing hyperrealistic sculpture of a woman relentlessly strives for a perfect tan. In Shen Shaomin’s eyes, the consequences of our constant need for development will change the face of our planet, just as the undying urge for beauty transformed this old woman’s body.

[Image: sculpture by Shen Shaomin]

Image and information on artist found at http://www.re-title.com/exhibitions/eliklein.asp first accessed on 2/5/2013.

Why Does Time Fly? Article describing experiments with subjective time experiences

Posted in Uncategorized on June 17, 2013 by pauljohnwhite

The following article describes how inducing particular emotional states can alter our perception of time, a valid and potentially interesting area of study for artists as well as scientists:

“Everybody knows that the passage of time is not constant. Moments of terror or elation can stretch a clock tick to what seems like a lifetime. Yet, we do not know how the brain “constructs” the experience of subjective time. Would it not be important to know so we can find ways to make moments last, or pass by more quickly?

A recent study by van Wassenhove and colleagues is beginning to shed some light on this problem. This group used a simple experimental setup to measure the “subjective” experience of time. They found that people accurately judge whether a dot appears on the screen for a shorter, longer or the same amount of time as another dot. However, when the dot increases in size so as to appear to be moving toward the individual (i.e. the dot is “looming”), something strange happens. People overestimate the time that the dot lasted on the screen. This overestimation does not happen when the dot seems to move away. Thus, the overestimation is not simply a function of motion. Van Wassenhove and colleagues conducted this experiment during functional magnetic resonance imaging, which enabled them to examine how the brain reacted differently to looming and receding stimuli.

The brain imaging data revealed two main findings. First, structures in the middle of the brain were more active during the looming condition. These brain areas are also known to activate in experiments that involve the comparison of self-judgments to the judgments of others, or when an experimenter does not tell the subject what to do. In both cases, the prevailing idea is that the brain is busy wondering about itself, its ongoing plans and activities, and relating oneself to the rest of the world.

Second, brain areas including the left anterior insula were more active during the receding condition relative to the looming condition. The insula as a whole has been the focus of many recent studies and is thought to be involved in complex emotional processing.  In particular, Craig has suggested that there is an emotional asymmetry, in which the left forebrain is associated with approach, safety, positive affect and the right forebrain is associated with arousal, danger, and negative affect. An object moving away might be seen as non-threatening, signaling the self to relax. 

In fact, some investigators have suggested that the amount of energy spent during thinking and experiencing defines the subjective experience of duration.  In other words, the more energy it takes to process a stimulus the longer it appears as a subjective experience of time.  Something moving toward you has more relevance than the same stimulus moving away from you:  You may need to prepare somehow; time seems to move more slowly.

The experience of time is not linear. Fear and joy stretch time, as do stimuli that move towards us. What can we learn from these studies for our day-to-day experiences? When we experience something as “taking a long time” it is really the result of three intertwined processes: the actual duration of the event, how we feel about the event, and whether we think the event is approaching us. There is little we can do about the first factor, but there are obvious ways of modulating how we feel about an event and how we think about an event approaching us. Future studies will need to address the question of whether modifying these factors can alter our subjective time experience so that we can shorten life’s painfully extended moments of boredom and extend those wonderful moments of bliss.”

Article by Dr. Martin P. Paulus located at http://www.scientificamerican.com/article.cfm?id=why-does-time-fly accessed on 17/6/2013.
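As an aside, the basic trial structure described above (compare the apparent duration of a static standard dot with a “looming” comparison dot) is simple enough to sketch in code. The snippet below is only a rough illustration written with PsychoPy; it is not the researchers’ stimulus code, and the durations, sizes and growth rate are arbitrary placeholders.

```python
# Rough sketch of a looming-dot duration trial (illustrative only; not the
# authors' stimulus code). Durations, sizes and growth rate are arbitrary.
from psychopy import visual, core

win = visual.Window(size=(800, 600), units="pix", color="black")
dot = visual.Circle(win, radius=10, fillColor="white", lineColor="white")

def show_dot(duration, grow_rate=0.0):
    """Draw the dot for `duration` seconds; a positive grow_rate makes it loom."""
    clock = core.Clock()
    while clock.getTime() < duration:
        dot.radius = 10 + grow_rate * clock.getTime()  # grows only when looming
        dot.draw()
        win.flip()

show_dot(0.6)                  # standard interval: static dot
core.wait(0.5)                 # blank inter-stimulus interval
show_dot(0.6, grow_rate=60)    # comparison interval: looming dot

# The participant would then judge which interval seemed longer; the study
# found that looming dots tend to be judged as lasting longer than they did.
win.close()
core.quit()
```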

Artist makes portraits from DNA found in chewing gum and cigarettes: io9 article

Posted in Uncategorized on May 5, 2013 by pauljohnwhite

[Image: 3D-printed portrait heads from Stranger Visions]

Have you ever seen a wad of chewing gum on the sidewalk and wondered about the person who spat it out? Artist Heather Dewey-Hagborg has done more than wonder. She collects errant hairs, cigarette butts, fingernails, and discarded chewing gum from public places and, using the DNA she finds, creates 3D portraits of how the owners of this discarded genetic material might look.

Dewey-Hagborg’s Stranger Visions project is fascinating in both its concept and its limitations. She explains on her blog that she originally conceived of the project while sitting in her therapist’s office, where she noticed a hair lodged in a crack in a picture frame. She wondered about the person to whom the hair had belonged, imagining what they might look like. Tapping her inner forensic scientist, she extracts the DNA from these found items and, referencing a database of DNA regions known to code for certain traits (she told the Smithsonian that she uses 40-50 different traits, including things like the space between the eyes and a propensity to be overweight), creates a portrait using the Basel Morphable Model. Once she has a portrait, she prints it out using a 3D printer, creating an imagined version of the stranger who left behind their DNA. You can see three of those portraits above, and more on her website.
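(To make that pipeline slightly more concrete, here is a toy sketch of the “genotype to traits to morphable model” idea. Every SNP identifier, trait name and coefficient below is invented for illustration; Dewey-Hagborg’s actual process and the real Basel model interface are far more involved.)

```python
# Toy sketch of a "DNA traits -> 3D face" pipeline. All identifiers, trait
# mappings and coefficients are invented for illustration only.
import numpy as np

# Pretend genotype calls extracted from a found sample (placeholder SNP ids).
genotype = {"rs0001": "AA", "rs0002": "AG", "rs0003": "GG"}

# Map each genotype to a coarse trait score in [-1, 1] (entirely illustrative).
TRAIT_TABLE = {
    "eye_spacing": {"AA": -0.5, "AG": 0.0, "GG": 0.5},
    "nose_width":  {"AA": -0.3, "AG": 0.1, "GG": 0.4},
    "jaw_width":   {"AA": -0.2, "AG": 0.0, "GG": 0.3},
}
TRAIT_SNPS = {"eye_spacing": "rs0001", "nose_width": "rs0002", "jaw_width": "rs0003"}

traits = {t: TRAIT_TABLE[t][genotype[TRAIT_SNPS[t]]] for t in TRAIT_TABLE}

# A morphable face model is, at heart, a mean shape plus a weighted sum of
# shape components: shape = mean + sum_i(coeff_i * component_i).
n_vertices = 1000                                    # toy mesh size
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=(n_vertices, 3))        # stand-in for the model's mean face
components = rng.normal(size=(len(traits), n_vertices, 3))  # one toy mode per trait

coeffs = np.array([traits[t] for t in TRAIT_TABLE])
shape = mean_shape + np.tensordot(coeffs, components, axes=1)

print(shape.shape)  # (1000, 3): vertex positions ready to export for 3D printing
```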

Of course, DNA can tell us only so much. Since Dewey-Hagborg doesn’t know the people behind the DNA, she can’t compare the portraits to the real human beings, although she has created her own genetic self-portrait. Even if we had a complete picture of how DNA links to facial features, these portraits wouldn’t account for the role of epigenetics and environment in how our features develop. Additionally, Dewey-Hagborg has found certain limitations with the Basel Morphable Model itself; most of the people used to train the system were of European descent, which has led to some problems in creating portraits for people who are not of European descent.

Still, her endeavor may have utility beyond being an interesting art project. The Smithsonian has a fascinating profile of Dewey-Hagborg that reveals more about her process. It ends on this note: Dewey-Hagborg was recently contacted by a medical examiner on a cold case, hoping she could create a portrait of a woman whose remains have gone unidentified for 20 years. Perhaps a DNA portrait—even a highly imperfect one—could shed a little light on this mystery.

Stranger Visions [Heather Dewey-Hagborg via Smithsonian Magazine]

Article by Lauren Davis on io9 website http://io9.com/artist-makes-portraits-from-dna-found-in-chewing-gum-an-489736699 accessed 5/5/2013.

Frieze Art Fair 2012: Images from Bedlam exhibition at the Old Vic Tunnels

Posted in Uncategorized on October 16, 2012 by pauljohnwhite

Images from an art exhibition held in the unused railway vaults below Waterloo Station.

Unnervingly realistic ‘hologram’ of dead singer performs for crowd (and how it is done)

Posted in Uncategorized on June 18, 2012 by pauljohnwhite


“Tupac Shakur appeared in concert at the Coachella music festival Sunday night, wowing audiences who watched his image rap with Snoop Dogg.

And now, the Wall Street Journal is reporting (with the puntastic headline “Rapper’s De-Light”) that the late rapper, despite having died in a shooting 15 years ago, may be going on tour.

The image of the rapper is not, in fact, a hologram. The 2D image is an updated version of a stage trick that dates to the 1800s. In the old version, an actor would hide in a recess below the stage as stagehands used mirrors to project the image of a ghost.

According to a 1999 patent uncovered by the International Business Times, the trick used by the company AV Concepts employs an angled piece of glass placed on the stage to reflect a projector image onto a screen that looks invisible to the audience.

The team pulled together Tupac’s performance by looking at old footage and creating an animation that incorporated characteristics of the late singer’s movements.

AV Concepts president Nick Smith told the Journal that the company had used the technology to digitally resurrect some deceased executives — though he gave no details on that. The patent on the technology shows an example of a presentation where the presenter is on stage with the projected image of a car.

Over at MTV, writer Gil Kaufmann questioned whether the success of the virtual Tupac would set a trend, particularly for performances including multiple artists. The potential for a surprise appearance from a beloved celebrity performer could be a draw for audiences.

But the trick could be overused, Kaufmann wrote: “For example, if Paul McCartney announced a tour with a virtual John Lennon, Beatles fans would likely see that as being in bad taste and not show up.”

Speaking to Kaufmann, Dave Brooks of the magazine Venues Today said that the trick could have gotten tired quickly even in the Coachella performance, but that the effect was impressive when used sparingly.”

Article on Tupac ‘hologram’ located at: http://www.washingtonpost.com/business/technology/how-the-tupac-hologram-works/2012/04/18/gIQA1ZVyQT_story.html and video located on Youtube, both accessed on 18/6/12.


A woman controlling a robot using only her mind

Posted in Uncategorized on June 18, 2012 by pauljohnwhite

Watch this video, and witness a breakthrough in the field of brain-machine interfaces. Researchers have been improving upon BrainGate — a brain-machine interface that allows users to control an external device with their minds — for years, but what you see here is the most advanced incarnation of the implant system to date. It is nothing short of remarkable.

Starting at around 3:10, you can watch Cathy Hutchinson — who has been paralyzed from the neck down for 15 years — drink her morning coffee by controlling a robotic arm using only her mind. According to research published in today’s issue of Nature, Hutchinson is one of two quadriplegic patients — both of them stroke victims — who have learned to control the device by means of the BrainGate neural implant. The New York Times reports that it’s the first published demonstration that humans with severe brain injuries can control a sophisticated prosthetic arm with such a system:

Scientists have predicted for years that this brain-computer connection would one day allow people with injuries to the brain and spinal cord to live more independent lives. Previously, researchers had shown that humans could learn to move a computer cursor with their thoughts, and that monkeys could manipulate a robotic arm.

The technology is not yet ready for use outside the lab, experts said, but the new study is an important step forward, providing dramatic evidence that brain-controlled prosthetics are within reach.

“It is a spectacular result, in many respects,” said John Kalaska, a neuroscientist at the University of Montreal who was not involved in the study, “and really the logical next step in the development of this technology. This is the kind of work that has to be done, and it’s further confirmation of the feasibility of using this kind of approach to give paralyzed people some degree of autonomy.”

Hutchinson’s control over the robotic arm is not perfect, but it’s damn impressive. As the video points out, the arm featured in the video is currently programmed to compensate for lurches and unexpected collisions by “entering safety mode” and ceasing movement, but future versions of the arm will presumably be capable of finer, more delicate motions.

What remains to be seen is how such precision will be achieved. One of the things that makes the arm and hand movements of able-bodied people so precise is their ability to actually feel objects in the real world, and sense the position of their limbs in space (a sensation known as proprioception). The interface between our brains and our limbs is therefore bi-directional, meaning we can not only reach for something with our hands, but receive sensory feedback that allows us to make necessary adjustments to our movement, giving rise to improved dexterity and more purposeful, calculated movement.

A bi-directional brain-machine-brain interface may sound like blue-sky technology, but it was successfully demonstrated in monkeys less than a year ago. Could the technology demonstrated in the video up top be married with that of a bi-directional brain-machine-brain interface?
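(To give a sense of what “decoding” involves here, the toy sketch below fits a simple linear read-out from simulated firing rates to intended 2D velocity. BrainGate’s actual decoding is considerably more sophisticated, and all of the numbers and the calibration scheme below are invented purely to illustrate the idea.)

```python
# Toy illustration of decoding intended movement from neural firing rates.
# Simulated data and a plain least-squares decoder; not BrainGate's algorithm.
import numpy as np

rng = np.random.default_rng(1)

# Simulated calibration data: firing rates on 96 channels over 500 time bins,
# recorded while the intended 2D velocity (vx, vy) is known or instructed.
n_channels, n_bins = 96, 500
hidden_tuning = rng.normal(size=(n_channels, 2))
intended_velocity = rng.normal(size=(n_bins, 2))
firing_rates = intended_velocity @ hidden_tuning.T + rng.normal(scale=0.5, size=(n_bins, n_channels))

# Fit a linear decoder W such that velocity ~= firing_rates @ W (least squares).
W, *_ = np.linalg.lstsq(firing_rates, intended_velocity, rcond=None)

# At run time, each new bin of firing rates becomes a velocity command for the
# robotic arm (or an on-screen cursor).
new_rates = rng.normal(size=(1, n_channels))
vx, vy = (new_rates @ W)[0]
print(f"decoded velocity command: ({vx:.2f}, {vy:.2f})")
```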

The paper describing this revolutionary interface is published in today’s issue of Nature.

Information located in article at: http://io9.com/5910859/watch-this-paralyzed-woman-control-a-robotic-arm-using-only-her-mind?tag=thisisawesome accessed on 18/6/12.

Yvonne Roeb

Posted in Uncategorized on March 5, 2012 by pauljohnwhite

The Psychopath’s uncanny imitation of human characteristics.

Posted in Uncategorized on March 5, 2012 by pauljohnwhite

And then I heard a strange noise coming from Constant. His body was shaking. The noise I could hear was something like sobbing. But it wasn’t quite sobbing. It was an approximation of sobbing. His face was screwed up like a face would be if it were crying, but it was weird, like bad acting. A grown man in a dapper suit was pretending to cry in front of me. This would have been awkward enough if he was actually crying – I find displays of overt emotion not at all pleasant – but this was a man palpably simulating crying, which made the moment at once awkward, surreal and quite disturbing. (This describes an uncanny reaction to imitation of human characteristics and empathy, not from a machine or robot, but another person who is lacking genuine emotional characteristics despite having the exterior appearance of a normal person)

Our time together ended soon afterwards. He showed me to his door, the epitome of good manners, laughing, giving me a warm handshake, saying we’d meet again soon. Just as I reached my car I turned around to wave again, and when I saw him I felt a jolt pass through me – like my amygdala had just shot a signal of fear through to my central nervous system. His face was very different, much colder, suspicious. He was scrutinizing me hard. The instant I caught his eye he put on that warm look again. He grinned and waved. (pp132-133)

Bob Hare said psychopaths were skilful imitators. He once told a journalist a story about how he’d been asked to consult on a Nicole Kidman movie called Malice. She wanted to prepare for a role as a psychopath. Bob told her, ‘Here’s a scene you can use. You’re walking down a street and there’s an accident. A car has hit a child. A crowd of people gather round. You walk up, the child’s lying on the ground and there’s blood running all over the place. You get a little blood on your shoes and you look down and say, “Oh shit.” You look over at the child, kind of interested, but you’re not repelled or horrified. You’re just interested. Then you look at the mother, and you’re really fascinated by the mother, who’s emoting, crying out, doing all these things. After a few minutes you turn away and go back to your house. You go into the bathroom and practice mimicking the facial expressions of the mother. That’s the psychopath: somebody who doesn’t understand what is going on emotionally, but understands that something important has happened.’ (pp142-143)

Ronson, J. (2011) The Psychopath Test. Picador, London.

Educational project exploring connections between real and virtual identities and environments

Posted in Uncategorized on February 27, 2012 by pauljohnwhite

“The Avatar Project is a VicHealth-funded, Victoria University-administered program researching the mental health of teens in virtual environments, specifically issues such as community engagement, self-esteem, and self-determination.

Many students we work with lack basic literacy skills, are not engaged by traditional classroom teaching and are at risk of not reaching their full potential once they leave school.

Taking these students into a virtual 3D environment allows them the freedom not only to express themselves and gain a sense of control over their lives, but also to learn educational skills in a fun way, covering a wide range of subject matter including maths, science, English, history, art, design and, of course, multimedia. Many students are already regular computer users and take to Second Life like ducks to water, and we work actively with the school’s teachers to incorporate Second Life-based activities into students’ existing educational curricula.

Some of the activities that we run include the following:

Activity 1 – Exploring the Virtual Environment

You must visit three places in Second Life and complete a short review on each place.

Location, Name, Region, People Online, What is done there? What do you like? What don’t you like? Would you visit again? Include a snapshot of each place.

The first purpose of this exercise is to learn how to take pictures using the built-in SL snapshot button. The second purpose is to learn how to navigate around Second Life whilst gaining, both individually and collectively, more knowledge about what has been done and how people react to and use the various environments.

Activity 2 – Online Identities

In this activity we will learn how to upload a texture and apply it to an object, whilst exploring the differences and similarities between our online and real world personas.

You must take a photo of yourself in real life, as well as a snapshot of yourself in Second Life. You must then post these two images in your blog (or a Word document) and write about the differences and similarities, and anything else you have observed or felt.”

Information located at: http://www.avatar-project.org/ and accessed on 27/02/12.