Artificial Senses

Artificial Senses visualizes sensor data of the machines that surround us to develop an understanding of how they experience the world.

Today, machine learning and artificial intelligence are buzzwords. But they are more than that: they influence our behavior as well as our conception of the technologies themselves and the world they represent. A lack of understanding of how these systems operate on their own terms is dangerous. How can we live with, trust, and interact with this alien species, which we set forth into the world, if we know it only through interfaces designed to make the machine unnaturally akin to the world we already know? This project visualizes the raw sensor data that our phones and computers collect and process, to help us understand how these machines experience the world.

Locating live

Each vertical line represents one request for the latitudinal and longitudinal geolocation of the device. Rather than displaying the two numbers that are returned, the visualization displays every digit of those numbers individually. What becomes visible is the accuracy with which the device locates itself. It is not uncommon for devices to return their location with an accuracy of six or seven decimal places, which means the device can know its position to within a few centimeters. The interesting part is that on every request, the location of the instrument, even when it is not in motion, is constantly changing. The visualization makes this constant guessing of the machine visible. Each digit of the latitude and longitude values is represented by the saturation of blue and magenta. The two resulting patterns are overlaid: the difference between the bottom layer and the top layer is calculated and always represented as a positive value.
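
In the browser, readings like these typically come from the Geolocation API. The sketch below is not the project's own code, only a minimal TypeScript illustration of such a request loop, assuming navigator.geolocation is available; digitsOf and drawColumn are hypothetical helpers standing in for the actual drawing.

```ts
// Minimal sketch (not the project's code): poll the Geolocation API once per
// second and split each coordinate into its individual digits.
function digitsOf(value: number): number[] {
  // 52.5200066 -> [5, 2, 5, 2, 0, 0, 0, 6, 6]
  return value
    .toString()
    .split("")
    .filter((c) => c >= "0" && c <= "9")
    .map(Number);
}

function drawColumn(latDigits: number[], lonDigits: number[]): void {
  // Hypothetical stand-in: the real visualization maps each digit to a
  // saturation step of blue (latitude) and magenta (longitude) and overlays
  // the two layers.
  console.log("lat", latDigits.join(""), "lon", lonDigits.join(""));
}

setInterval(() => {
  navigator.geolocation.getCurrentPosition(
    (pos) =>
      drawColumn(digitsOf(pos.coords.latitude), digitsOf(pos.coords.longitude)),
    (err) => console.warn("geolocation error:", err.message),
    { enableHighAccuracy: true }
  );
}, 1000);
```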

While the location sensor works on many devices and browsers, it often behaves very differently from one to the next. Try different machines and web browsers.

Open Live Visualization

Moving live

This visualization represents the device's motion sensor. Each dot stands for one digit returned by the apparatus: the longer the returned numbers, the more dots appear from top to bottom in each row. Darker lines symbolize negative numbers, brighter lines positive numbers. Time runs from left to right in each row.
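
A minimal TypeScript sketch of how such readings could be collected, assuming the standard devicemotion browser event; it is not the project's own code, and the console output stands in for the drawn rows of dots.

```ts
// Minimal sketch (not the project's code): split each acceleration axis into
// its digits; the sign decides dark (negative) versus bright (positive).
window.addEventListener("devicemotion", (event: DeviceMotionEvent) => {
  const acc = event.accelerationIncludingGravity;
  if (!acc) return;

  for (const value of [acc.x, acc.y, acc.z]) {
    if (value === null) continue;
    const digits = value
      .toString()
      .split("")
      .filter((c) => c >= "0" && c <= "9")
      .map(Number);
    const tone = value < 0 ? "dark" : "bright"; // negative vs. positive line
    console.log(tone, digits.join("")); // stand-in for drawing one row of dots
  }
});
```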

Motion sensors are most common in smartphones and tablets; only very few personal computers have them. If you are using a phone or tablet and cannot see the live visualization, try switching browsers.

Open Live Visualization

Seeing live

The code behind the visualization captures an image from the camera of the device. This picture is stored as a JPEG encoded in base64. Base64 is a binary-to-text encoding scheme: A = 000000, B = 000001, C = 000010, and so on. The encoding uses 64 characters, each mapping one-to-one onto a six-bit binary value. The visualization maps these 64 characters to 64 grayscale values between black and white, and displays each value on the screen from left to right and top to bottom. This process repeats every second.
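
A minimal TypeScript sketch of this pipeline, assuming the getUserMedia camera API and a canvas JPEG export; it is not the project's own code, and the character-to-gray mapping is only one plausible reading of the description above.

```ts
// Minimal sketch (not the project's code): capture a frame, encode it as a
// base64 JPEG, then map each of the 64 base64 characters to a gray value.
const BASE64 =
  "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

async function startSeeing(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play();

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d")!;

  setInterval(() => {
    ctx.drawImage(video, 0, 0);
    // "data:image/jpeg;base64,...." -> keep only the base64 payload
    const encoded = canvas.toDataURL("image/jpeg").split(",")[1];
    const grays = Array.from(encoded)
      .filter((c) => c !== "=") // drop padding characters
      .map((c) => Math.round((BASE64.indexOf(c) / 63) * 255));
    console.log(grays.length, "gray values"); // stand-in for drawing the grid
  }, 1000); // the visualization repeats every second
}

startSeeing().catch((err) => console.warn("camera unavailable:", err));
```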

While many phones and tablets have a camera, web access to it today often works only on personal computers. If you are on a personal computer and the sensor is not live, try switching browsers.

Open Live Visualization

Hearing live

The code captures the frequency data from the microphone of the device. Each frequency value ranges between 0 and 255, and the number of frequencies collected can range between 32 and 32768. Frequencies are drawn from left (low frequencies) to right (high frequencies). The value of each frequency is illustrated by each point, from 0 (black) to 255 (white). This data is requested 20 times per second, and time moves from top to bottom.
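
In the browser, this kind of frequency data is what the Web Audio API's AnalyserNode provides. The sketch below, which is not the project's own code, shows a minimal TypeScript setup under that assumption.

```ts
// Minimal sketch (not the project's code): read byte frequency data from the
// microphone 20 times per second; each bin is 0 (black) to 255 (white).
async function startHearing(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const audioCtx = new AudioContext();
  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 2048; // allowed range 32..32768

  audioCtx.createMediaStreamSource(stream).connect(analyser);
  const bins = new Uint8Array(analyser.frequencyBinCount);

  setInterval(() => {
    analyser.getByteFrequencyData(bins); // low frequencies first
    console.log(bins.join(" ").slice(0, 60)); // stand-in for drawing one row
  }, 1000 / 20); // 20 samples per second, rows stacking top to bottom
}

startHearing().catch((err) => console.warn("microphone unavailable:", err));
```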

Microphone web access is restricted on most phones and tablets. This sensor will most likely work on your laptop or desktop computer.

Open Live Visualization

Touching live

Each vertical line represents the vertical and horizontal position of the finger on the screen of the device, measured against the total height and width of the screen. Rather than displaying the two numbers that represent the location of the user’s finger, the visualization uses every digit of those often long decimal numbers individually. The width of each drawn line depends on the time between two incoming signals. The two resulting patterns are overlaid onto each other, and the difference between the bottom and top layer is calculated to always return a positive value.
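
A minimal TypeScript sketch of this mapping, assuming standard touch events; it is not the project's own code, and the console output stands in for the drawn lines.

```ts
// Minimal sketch (not the project's code): normalize the touch position
// against the screen size, keep every digit, and use the time between two
// signals as the line width.
let lastSignal = performance.now();

function digitsOf(value: number): number[] {
  return value
    .toString()
    .split("")
    .filter((c) => c >= "0" && c <= "9")
    .map(Number);
}

window.addEventListener("touchmove", (event: TouchEvent) => {
  const touch = event.touches[0];
  const x = touch.clientX / window.innerWidth;  // horizontal share of the screen
  const y = touch.clientY / window.innerHeight; // vertical share of the screen

  const now = performance.now();
  const lineWidth = now - lastSignal; // ms between two incoming signals
  lastSignal = now;

  console.log(lineWidth.toFixed(1), digitsOf(x), digitsOf(y)); // drawing stand-in
});
```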

As ‘touch’, whether through mouse movements or screen contact, is such a common gesture in graphical user interfaces, this visualization will work on nearly all devices.

Open Live Visualization

Orienting live

This graphic visually recodes the gyroscope sensor that is built into most contemporary smartphones. Similar to the LOCATING visualization, each individual digit is represented rather than the entire number, as the sensory output is extremely precise. The x-axis of the gyroscope returns the direction the phone is pointing toward as a number between 0 and 360. But rather than returning an integer, depending on the device, the number can have more than 20 decimal places. At this precision, even the slightest disturbances around the device, down to the level of air vibrations, register in the returned direction.
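
A minimal TypeScript sketch under the assumption that these values come from the standard deviceorientation browser event; it is not the project's own code.

```ts
// Minimal sketch (not the project's code): the alpha reading is the 0-360
// heading; as in the LOCATING sketch, every digit of the long decimal is kept.
window.addEventListener("deviceorientation", (event: DeviceOrientationEvent) => {
  if (event.alpha === null) return;

  // e.g. 127.38457203491283 -> 1, 2, 7, 3, 8, 4, 5, 7, ...
  const digits = event.alpha
    .toString()
    .split("")
    .filter((c) => c >= "0" && c <= "9")
    .map(Number);

  console.log(digits.join(" ")); // stand-in for drawing one digit per cell
});
```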

Most phones and tablets nowadays have a gyro sensor; it is much harder to find in personal computers.

Open Live Visualization

Contemporary culture is unimaginable without the machines that surround us every day. Our knowledge is influenced by Google search results, our music taste by the mixes Spotify creates for us, and our shopping choices by Amazon recommendations. This strange new world became part of our reality in a very short time. Human-facing interface design makes these systems feel natural, as if they are really of our world. But if we want to live with these devices and understand them, we should not rely solely on the machines becoming something easily understandable to us. We need to develop an understanding of how these devices experience our world.

The visualizations here explore a number of sensory domains: seeing, locating, orienting, hearing, moving, and touching. Rather than rendering the machine’s sensory data in ways that we intuitively grasp, however, these visualizations try to get closer to the machine’s experience. They show us a number of ways in which the machine’s reality departs from our own. With many of its sensors, for example, the machine operates on a timescale that is too fast for us to follow; the orientation sensor returns data up to 300 times per second. This is too quick to draw each of these values on the screen, and also too quick for us to comprehend. In most cases, to make these visualizations, the machine had to be tamed and slowed down for us to perceive its “experience.”

A second and more worrying finding is the similarity among many of the images. Seeing, hearing, and touching are, for humans, qualitatively different experiences of the world; they lead to a wide variety of understandings, emotions, and beliefs. For the machine, these senses are very much the same, reducible to strings of numbers with a limited range of actual possibilities. While some of these sensory experiences, notably temperature, have long been given numerical value, their effects on us remain ineffable. Nowadays, however, it is not only temperature that can be reduced to a discrete number, but seemingly anything. But is this really true? Isn’t there something that our current measures of temperature do not reveal about the entire spectrum, from crisp cold to feverishly hot? And is this a question of more data points, or is there a deeper disconnect, reflective of a difference in kind, in these translations?

The entire orientation of a machine towards the world is mediated by numbers. For the machine, reality is binary—a torrent of on and off. Any knowledge about the world that we learn from the machine goes through this process of abstraction. As we become more dependent on our machines, we need to understand the underlying limits and boundaries of this abstraction.

Resources

Sensor Mappings

Visual explanations of the mappings from sensor data to visual output.

Exhibitions

Harvard Art Museums

The possibilities of artificial intelligence have long seemed futuristic and far-fetched. Today, however, AI technology is making its impact felt in such real-world realms as autonomous vehicles, online searches and feeds, and the criminal justice system. In conjunction with the Berkman Klein Center and MIT Media Lab’s recently announced Ethics and Governance of AI Initiative, metaLAB at Harvard presented MACHINE EXPERIENCE, a showcase of works by metaLAB artists exploring the emotional effects of algorithms, the uncanny experiences of sensor-enabled computers, and what intelligent machines might reveal about understandings of the nature of intelligence itself.

MIT List Visual Arts Center

Hacking Arts 2017 exhibit at MIT List Visual Arts Center.

Volatile Truths

A group show at Rainbow Unicorn Berlin that is searching for what lies between. ‘Volatile Truths’ asks you to focus your gaze on the ephemeral void and perceive the complexities between trust and perception. It wants to see your blurred vision; the uncertainty and ambiguity between your blinks.

Selected Press Coverage

WIRED

'See the World through the Eyes of your Phone'

Fast Company

'These Eerie GIFs Show How Your Phone Feels, Hears, And Sees You'

Wired Japan

'This Is What the World a Smartphone "Sees" Looks Like When Visualized: What These Mysterious Images Tell Us'

DigiCult

'My fundamental idea was to create images of different sensorial inputs of the machine. These graphics function as a visual database of the data stream the computer generates.'

Artificial Senses is a project by Kim Albrecht in collaboration with metaLAB (at) Harvard, and supported by the Berkman Klein Center for Internet & Society. The project is part of a larger initiative researching the boundaries between artificial intelligence and society.

Contact via website or Twitter

Copyright © 2017 Kim Albrecht. All rights reserved.