The Transparent World

By Magdalena Kröner
May 17, 2022

A bearded man wears a martial-looking mask on his face. It consists of thin metal rods arranged in a geometric lattice. The mask appears light and delicate, but seems too small for the man’s head. It squashes his nose flat and cuts into his cheekbones. The skin appears waxy and swollen where the metal presses in. The man breathes heavily, his eyes half closed. After about ten minutes, the video breaks off. The short film is “Face Cages” by the American artist Zach Blas, and it is probably one of the most convincing works about the tangible effects of biometric recognition technologies and artificial intelligence. Blas calls his work “endurance performance”: he wears the face cage, created on the basis of his biometric data, until he can no longer bear the pain.

He explains, “Identity, for a biometric machine, is something stable and objective that can be digitally calculated, measured and extracted from the surface of the body. Identity, for biometrics, is what can be digitally captured. This is where the category of ‘queerness’ comes into play for me as a gay man. Queer identity transcends inherited categorizations and definitions. By translating the biometric data of our faces into devices made of metal, Face Cages exposes biometric identity as a product of violent enclosure.”

For decades, the United States has led Japan, Korea and Europe in the commercial and military development and application of biometrics, artificial intelligence, robotics and virtual reality. Manufacturers prefer to hide the risks that military, law enforcement and intelligence exploitation of the new technologies could entail. The commercial opportunities and the prospect of entertainment, safety and convenience offered by applications such as payment by fingerprint, intelligent chatbots, humanoid care robots and home assistants mask the dark side of the latest digital novelties.

A look at how things are currently developing in AI turns up plenty of disturbing news: the Pentagon has commissioned the company Clarifai to analyze drone footage with artificial intelligence. Hiring processes at large American companies have long been aided by AI, which is supposedly able to select future candidates more “objectively” than a human.

As a society we have to face a multitude of difficult questions: How do we shape the complex relationships between humans and machines? How can adequate digital representation and visibility be ensured?

Beyond dystopian fear of technology and utopian enthusiasm for technological innovation, we need to develop unorthodox ideas for a self-determined approach to digital technologies.

At this moment, I would like to argue, contemporary artists working with the very digital tools that surround us are uniquely equipped to provide surprising answers and radical food for thought — and for the senses.

Artists ask themselves — and us: if screen time, mediated experience and life IRL are becoming increasingly blurred, why not push the blur toward possible utopian and dystopian outcomes and see what that looks and feels like? These artists understand that the realms of technology and art, art and politics, reality and fiction might become increasingly indistinguishable. And that is the sweet spot where they begin their work: they synthesize, visualize and transgress what we find difficult to imagine.

With digital means, contemporary art currently recalls its innate qualities as a sensorium of the new, an anarchic playground, a slippery slope and an echo chamber for radical thinking.

In the U.S., and especially in California, where the spheres of digital technology and culture are more closely and diversely entangled than probably anywhere else in the world, artists are working to create intuitively comprehensible, haunting images that we understand before we even begin to grasp what “Big Data” actually means.

This kind of contemporary art is slyly inserting itself into our everyday lives: showing up in our Instagram feeds, in our homes, on billboards, on campus, in a cab. It operates outside the white cube. It transcends the hermetic paths of institutional art education and elitist gatekeeping.

And it looks like it might just provide some unique ideas about Big Data, biometric surveillance, and AI. This kind of art demands, irritates, provokes.

The artists who critically engage with the implications of AI and Big Data not only use the digital products they critique; many of them are also engineers, researchers, coders, hackers. For these artists, a natural part of creating is developing tools not only to better understand how digital processes work but to be able to design those processes autonomously. This is not to be confused with the notorious tech-bro optimism which assumes that the problems big tech creates can only be solved with more big tech.

Their work translates the rather abstract questions that rapidly developing digital technologies raise about the way we live into practical examples: How can I protect myself from facial recognition? What does it mean when I share the most intimate parts of my everyday life with a machine?

The latter question is addressed by Los Angeles-based artist Lauren Lee McCarthy in her work “Lauren,” in which she takes on the role of a human home assistant, like an Alexa. “Lauren” is switched on for a week at a time in the homes of volunteers recruited via ads on the web. The performance begins with the installation of a series of customized, networked smart devices, including cameras, microphones, switches, door locks and faucets.

The artist explains: “I remotely monitor the person 24/7 and control all aspects of their home. I examine how we invite into our homes an AI that is sold to us as comfort and relief, but actually brings surveillance and commercial interests into the most private of spaces. How does knowing that a human, not a machine, is listening to my desires determine behavior?”

With “Lauren,” McCarthy puts into very vivid and real terms something that American feminist and biologist Donna Haraway recognized long before the invention of “smart” machines.

“Technology is not neutral,” Haraway said. “We’re inside of what we make, and it’s inside of us. We’re living in a world of connections — and it matters which ones get made and unmade.”

Lauren Lee McCarthy, who teaches as a visiting professor in the Design Media Arts department at the University of California, Los Angeles, also created p5.js, an open-source JavaScript library for creative coding that lets students build the interactive websites and apps they need themselves. When an education system systematically disadvantages certain groups, it is all the more important to lay the groundwork for core digital literacy skills early. McCarthy also serves on the board of the nonprofit Processing Foundation, which for some two decades has made digital literacy and software skills accessible to historically disadvantaged, marginalized groups worldwide, especially members of the BIPOC community, through software, learning programs and fellowships outside of academic contexts.
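To give a sense of why p5.js lowers the barrier to entry, here is a minimal sketch (a hypothetical illustration, not drawn from McCarthy’s teaching materials): a handful of lines of JavaScript yields an interactive drawing in the browser.

```javascript
// A minimal p5.js sketch. p5.js inherits Processing's setup/draw model,
// which is part of what makes it approachable for artists and beginners.

function setup() {
  createCanvas(400, 400); // a 400-by-400-pixel drawing surface in the page
}

function draw() {
  background(230);            // repaint a light gray background every frame
  fill(40);                   // dark gray fill for the shape below
  circle(mouseX, mouseY, 50); // a circle that follows the mouse
}
```

Sketches like this run directly in the free, browser-based p5.js web editor, with nothing to install, which is precisely the kind of low threshold the Processing Foundation’s literacy work depends on.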

In the U.S., where regulation of the booming tech sector is barely happening, and where Section 230 of the United States Communications Decency Act, which dates back to the mid-1990s, grants tech providers de facto free rein over user data and content — and also releases them from any liability — private involvement is all the more important. Numerous private and non-profit groups and initiatives meet the need for digital literacy, including “Women in AI Ethics,” which supports the still small number of women in the tech sector; the “Algorithmic Justice League,” which builds critical awareness and knowledge about AI; and “Afrotectopia,” an art and education center for the Black community in Brooklyn.

Safiya Noble, professor of Gender Studies and African American Studies at UCLA, speaks of “data discrimination” and shows in her comprehensive study “Algorithms of Oppression” how algorithmic processes that discriminate by skin color, social status and gender influence consumer behavior as well as private and political action. Noble says, “We have more data and technology than ever before, but at the same time we have the highest levels of global, socio-political, and economic inequality ever. In what ways are the big tech corporations responsible and accountable for this level of inequality?”

It is hard to overestimate the dangers to a democratic society created by biased algorithmic processes, predictive analytic tools, and large-scale data mining, controlled by global corporations whose mandates, structures, and interconnections are as opaque as the actual workings of the AI they develop and use.

Accountability, ethics, equality: issues that are hardly negotiable in a climate of political and economic dependency on lobbies and legislative cycles are now vividly discussed in art.

Contemporary art is free to work outside of the economic and tactical concerns of engineers and politicians. It holds its own against neoliberal efficiency and the ideologies of Silicon Valley. Art is allowed to play, to fantasize, to question values, to raise doubts.

Navajo artist Emma Robbins demonstrates what it looks like when high tech and history come together. Her work “His X Mark” gathers the signature marks Native American chiefs placed under treaties signed with the U.S. government between 1700 and 1900. In augmented reality, they run across the walls of the National Museum of American History in Washington.

Robbins explains, “The ‘X’ I take from the historical paper documents represents the violent erasure of our culture, but also the broken treaties between my people and the federal government. The Americans, when the treaties were written, fed us the most absurd things: we’ll give you 100 blankets for your country; we’ll give you musical instruments. But we were so bullied as a people that we agreed to anything. The government promised protection and infrastructure, which absolutely includes access to clean water, but to this day it has broken many of its contractual obligations. The exploitation and dehumanization codified in those treaties have consequences for Navajo lives to this day. Mining, fracking, oil exploration: the United States is paving the way for wealthy corporations to enter our lands and exploit our resources and people.”

Augmented reality as a means to highlight historical injustice and inspire change in the present: Visibility is one of the central concerns of “His X Mark.”

Those who want to view the work need to install the app “4th Wall” on their phones. The free app was programmed by artist Nancy Baker Cahill and her team, without any Big Tech behind them. “The idea behind the app was to create an artistic platform designed by individuals rather than institutions. Specifically, by individuals who don’t get enough of a voice and visibility in the art context,” Baker Cahill says.

To those who balk at the idea that art can only be experienced on a small smartphone display, the Los Angeles-based artist counters, “I don’t believe that a work of art has to physically exist in space to have a tremendous impact. Otherwise, how could the power of music be explained?”

The new art may not look like the art we used to know. It no longer satisfies the same needs. It looks for other ways to operate. It feels different. But it fights for something that does not change: Empowerment. Visibility. Accessibility. Participation. The new art seeks answers to the question: what does it mean to be human in the age of AI? Answers that we need all the more urgently as our world becomes ever more transparent.

¤


Magdalena Kröner works as a journalist, essayist and art historian in Düsseldorf and the U.S. Her contributions can be found in publications such as Frankfurter Allgemeine Quarterly, Frankfurter Allgemeine Zeitung, Kunstforum International, Monopol, Süddeutsche Zeitung and Die Zeit. In her writing, she focuses increasingly on the connections between contemporary art, digitality and body politics. To this end she has conducted intensive research in the U.S. and publishes widely on the topic. For the periodical “Kunstforum International,” she writes an ongoing series of essays called “Digital Bodies.” Another focus of her work is the investigation of art scenes outside of established Western art hubs. So far, she has reported extensively on art scenes in the United Arab Emirates, the Baltic States, China, South Africa and Southeast Asia. From 2005 to 2015 she kept a secondary residence in New York, contributing to various German media. Kröner was a 2021 Thomas Mann Fellow at the Thomas Mann House Los Angeles.

Artwork by Zach Blas

