RE: Hey Sarah and Anni, thanks for chatting. Firstly, can you introduce yourselves and what you’re interested in?
SS: Hello! I’m Sarah Selby and I’m a Bristol-based interdisciplinary artist interested in digital culture. I explore emerging technologies through interactive artworks and am interested in how, as artists, we can foster curiosity, facilitate discussion and provoke critical thinking around often inaccessible ideas.
AH: Hello 🙂 My name is Annemiek Höcker and I’m a Rotterdam-based anti-disciplinary artist interested in the digital world hidden behind the screens of my electronic devices. Here lies a hinterland full of undiscovered surveillance practices, cloud infrastructures and artificial landscapes that I want to explore by making them visible.
I use new media as a tool of expression to reflect on rapid technological change and its impact on society, in order to create awareness and facilitate discussion about societal issues surrounding the internet, surveillance culture and corporate power.
RE: You both have an interdisciplinary practice, and often collaborate – why is this important and how do you implement this into your work?
SS: Ani and I met on the Interactive Arts course at Manchester Metropolitan University whilst Ani was participating in the Erasmus exchange programme. For me, the course strongly influenced my creative approach. It’s ideas-driven, with a strong emphasis on interdisciplinary collaboration and engagement with fields outside of the arts. I spent most of my final year in the bioengineering department learning about generative design and cellular automata. My love of interdisciplinary collaboration continued from there and I took part in Roche Continents, an ideas-exchange programme that challenges the relationship and boundaries between science, art and innovation. Collaboration is a crucial element of innovation – often it’s the combination of diverse thought processes and skills that enables us to break free of niches and routine – to experiment. I implement this in my own practice through a strong research focus, connecting with people outside of my field and a willingness to ask ‘stupid’ questions. I think artists are like bridges, so if we’re going to build meaningful connections between disparate ideas, the ability to collaborate and navigate unfamiliar disciplines is crucial.
AH: I studied Advertising and Critical Studies at the Willem de Kooning Academy in Rotterdam. Our learning system also had a very strong emphasis on interdisciplinary collaboration, as most of the projects I did in the Advertising department were in teams. Apart from my major I was allowed to choose different practices in order to specialize within a certain field and collaborate with students from different departments. During the first years of art school I often doubted my role as an artist because I didn’t know how to define my creative practice: my work was always based around critical theory and visual culture, but it could be expressed in multiple forms of media.
After studying Interactive Arts in Manchester I decided to take a year off to find out what I could create outside the walls of the art academy. I took the time to experiment and collaborate with other artists, and got a job at an experimental art space in Amsterdam. During that year I discovered that I can create anything I want as long as I don’t feel restricted to one specific discipline, so I decided to break free of all disciplines to give myself the ability to move freely between different fields.
RE: Sarah, can you expand a little on the exhibition and what technologies and concepts are being used?
SS: Raised by Google examines current data practices and is heavily influenced by Shoshana Zuboff’s book ‘The Age of Surveillance Capitalism’. It uses ‘Apply Magic Sauce’, behavioural analysis software developed by the University of Cambridge’s Psychometrics Centre, to determine the psychographic profile of each visitor based on their interests. Based on this profile, visitors are directed through different doors in the maze, gradually herded into ever smaller rooms and essentially into physical ‘echo chambers’ with others deemed by the algorithm to hold similar views. The profiling process is very similar to the technique used by Cambridge Analytica during Donald Trump’s election campaign, in which Facebook users were invited to participate in a personality quiz before their profiles were correlated with their Facebook likes. This seemingly innocent, playful ‘personality quiz’ facilitated the profiling of millions of Americans and created a tool capable of understanding our behavioural traits better than friends, family and in some cases spouses. The impact of this on autonomy and diversity is exactly what the work asks visitors to confront.
The work is really a four-step process. First is the gamified data collection. To enter the maze, viewers must complete a fun ‘quiz’ that collects their interests and builds their psychographic profile. In the second phase visitors train the algorithm, providing data that allows it to search for patterns or ‘clusters’ and draw correlations. In the third stage this data is turned against them, using their ‘Big Five’ personality traits to target them with specific ads drawn from the 2019 general election. Finally, visitors are confronted with the potential consequences of current data practices, in the form of a 4D baby scan of an unborn child. The child is being profiled whilst still in the womb, using data collected from his parents to predict his IQ, education level, areas of interest, risk of offending and so on.
The concept behind the work comes from a few different areas of research. As the first of the ‘digital natives’, we’ve been the lab-rat generation, which is where the initial idea of creating a maze-like structure came from. We were subjected to unauthorised experimentation, such as Facebook’s emotional contagion experiment, and navigated new and unregulated digital environments. Algorithms trained on biased data are making decisions on policing, healthcare, insurance, employment and more. These issues are no longer constrained to digital environments, but are actively impacting our real-world experiences and opportunities. So the doors – dividing people by traits such as extroversion, gender, relationship status and neuroticism – become a metaphor for bias and privilege, acting as both barriers and passages for visitors.
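To make the routing idea concrete, here is a minimal sketch of how a visitor’s predicted traits might be turned into a door assignment. It is purely illustrative: the trait thresholds, door codes and the Profile structure are invented for this example and are not the exhibition’s actual software, which draws its predictions from Apply Magic Sauce.

```python
# Hypothetical illustration of profile-based routing through a maze.
# Trait names, thresholds and door labels are invented for this sketch.

from dataclasses import dataclass

@dataclass
class Profile:
    openness: float          # each Big Five trait scored 0.0-1.0
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

def assign_route(profile: Profile) -> str:
    """Herd visitors with similar predicted traits towards the same room."""
    route = []
    route.append("E" if profile.extraversion >= 0.5 else "I")   # first door
    route.append("N" if profile.neuroticism >= 0.5 else "S")    # second door
    route.append("O" if profile.openness >= 0.5 else "C")       # third door
    return "-".join(route)  # e.g. "E-N-O": one of eight ever-smaller rooms

if __name__ == "__main__":
    visitor = Profile(0.8, 0.4, 0.7, 0.6, 0.3)
    # Visitors who end up with the same code share a physical 'echo chamber'.
    print(assign_route(visitor))
```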
RE: Can you expand on your involvement with Cambridge University and Apply Magic Sauce?
SS: I came across the Apply Magic Sauce (AMS) program after a co-worker found it online. Apply Magic Sauce is a prediction API developed by the University of Cambridge’s Psychometrics Centre. It predicts your psycho-demographic profile from digital footprints of your behaviour, utilising platforms such as Twitter and data from Facebook and LinkedIn. AMS aims to help users understand their data, and to reveal what is predictable (and therefore profitable) about them from the information they share online. I contacted Vesselin Popov (Executive Director of the University of Cambridge Psychometrics Centre) regarding the initial idea. Vess and the AMS team were very helpful and supported the project, assisting with research, queries and software.
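For readers curious about the general idea behind such prediction tools, the sketch below shows, in miniature, how a personality trait can be estimated from a matrix of ‘likes’. It is not the Apply Magic Sauce API or its model; the footprint data, trait scores and the choice of a ridge regression are all assumptions made purely for illustration.

```python
# Toy illustration of predicting a trait score from binary 'like' footprints.
# The data and the choice of model are invented; this is not the AMS system.

import numpy as np
from sklearn.linear_model import Ridge

# Rows = users, columns = whether each user liked page 0..4.
likes = np.array([
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
])
# A self-reported trait per user, e.g. extraversion on a 0-1 scale.
extraversion = np.array([0.8, 0.3, 0.7, 0.2])

model = Ridge(alpha=1.0).fit(likes, extraversion)

# Estimate the trait for a new user from their likes alone.
new_user = np.array([[1, 0, 1, 1, 0]])
print(round(float(model.predict(new_user)[0]), 2))
```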
RE: To what extent do you feel Raised by Google? And what are the negative and positive implications of this for our generation and future generations?
SS: I probably have a slightly different experience to others in my age group, as my mum was always very wary of technology. My siblings and I were raised without TV, internet or mobile phones, so I didn’t have much access to that world until I moved to Manchester at 19. I think the sudden exposure is probably why technology and digital culture are such an area of interest for me – it’s easy to succumb to the ‘boiling frog’ effect, and harder to question things if it’s all you’ve ever known. I understand my mum’s decisions and agree with a lot of them – social media can be an incredibly toxic place for young people and renders you accessible 24/7. There are also other unhealthy habits I’ve picked up since leaving – like struggling to sleep without Peep Show playing in the background, or losing hours just scrolling through Instagram! But there are also countless benefits – I’ve made friends from all around the world, engaged with communities and discovered knowledge I would otherwise not have had access to. Technology and the opportunities it creates are exciting, and I believe that rather than shunning it we should strive to create positive applications and play an active role in its future direction. Whilst designing the show I found out that my sister was expecting her first child, which strongly influenced my ideas and led me to put much more emphasis on future generations. It led me to consider the impacts of data collection, psychographic targeting and the automation of our opportunities over multiple generations. For example, if a parent is deemed a high healthcare risk or a potential offender, how might this data be applied to their children, or grandchildren? Are their futures calculated at conception by the underlying mechanisms of an increasingly digitalised society?
AH: It’s funny because I am the extreme opposite of Sarah. I started using the world wide web at the age of seven and can honestly say that I’ve been mesmerized and obsessed by it ever since I first visited this digital space. In the beginning I was mainly using the net for entertainment purposes such as online gaming. Google was our homepage; I used it as a search engine for doing my homework and to find and collect nice pictures with Google Image Search. My internet behavior really started to change with the rise of social media platforms such as MSN Messenger and Facebook. These new platforms introduced me to a whole new network that has had a major influence on my perception of social reality in the digital world. I think I was too young to get so deeply involved in these platforms without being educated about it by my parents or at school. My parents were monitoring what I was doing online, but they couldn’t predict what kind of effects these platforms would have in the future.
In the beginning I used to look at social platforms as if they were tools that I was using for my own means. But we all know that platforms such as Google, Facebook and Instagram aren’t just tools anymore. These are platforms that are unethically using their power against us, and we are no longer in a position to take control over this. To me this is a difficult aspect that I find really hard to deal with, because from a critical point of view the current infrastructures and algorithms are leading us to a dead end. Young kids these days grow up in a very difficult political environment which is, in a strange way, very conservative and regressive. A young person is more watched and observed than ever before, and at the same time their voice is being drowned out. Social platforms are psychologically designed to keep us there as long as possible in order to extract as much personal data from us as they can. Algorithms create timelines that mix, for example, hyper-reality content such as the news with personal blog posts and 9GAG cat videos. This is where the border between social reality and reality fades.
Mistaking social reality for reality is an important aspect of the internet that has not only influenced me as a human being but also plays a major part in my work. I adapted the visual language of Google Earth in my graduation project to visualize the artificial landscapes of the natural resources hidden inside my iPhone, and I designed Google BDSM gear to visualize the power relation between Google and the user. At the present moment I’m still using Google as my main search engine to do my research and collect nice pictures I find online. I use Google Maps whenever I’m lost in town or when I’m looking for an address, which has made me very reliant on their services. Ever since I was seven years old I’ve made choices and decisions based on the information and answers that Google has given me, so to that extent I can say that I feel raised by Google and their services.
RE: Thinking about the ways in which this type of work can assist in making inaccessible issues more accessible (via interaction, criticality and discussion), what role does art have in facilitating debate, providing an alternative education (of sorts), or highlighting important topics?
SS: I think one of the most powerful things about art as a form of education or a tool for engagement is that it leaves room for viewers to think critically and form their own ideas or standpoints. Mainstream education often becomes about learning by rote, and I think this diminishes our ability to do that. Today, in the era of fake news and psychographically driven propaganda, our ability to question, think critically and draw our own conclusions is more important than ever. Art can also bring concepts and ideas often locked away in academia to the public’s attention, translating them into a format that is engaging and promotes understanding of topics that are often very complex and inaccessible.
RE: Can you both talk about the processes of data and how they radicalise bias, privilege and opportunity in the real world? Have you seen this happen first-hand?
SS: Algorithms are trained with historic data created by humans, which is often inherently biased. Not everything is quantifiable, and this leads to important contextual or social information being disregarded in the process. For example, it was recently discovered that a widely used healthcare algorithm had been favouring white patients over those from BAME backgrounds. It measured patient need by the estimated future cost of healthcare, but failed to account for much deeper and more complex social issues such as access, insurance cover, job stability and income. With regard to witnessing it first-hand, one of the biggest challenges with these issues is that it is such an opaque process. The applications and uses are not widely understood, and companies that adopt these systems are not necessarily transparent about their use. The term ‘black box algorithm’ refers to the obscurity of the process – we can see the data going in and the decision that comes out, but understanding what led to that decision is very difficult.
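A small numerical sketch of the proxy problem described above, with figures invented for illustration: when ‘need’ is scored by historic spend, a patient whose group has faced access barriers (and so has lower recorded costs) is ranked below an equally ill patient.

```python
# Invented figures illustrating 'cost as a proxy for need'.
# Both patients carry the same illness burden; one has lower recorded spend
# because of historic barriers to access, so the proxy under-ranks them.

patients = [
    {"name": "patient_a", "illness_burden": 7, "historic_spend": 9000},
    {"name": "patient_b", "illness_burden": 7, "historic_spend": 4500},
]

def cost_proxy_score(p):
    return p["historic_spend"]      # what the flawed algorithm optimised

def need_score(p):
    return p["illness_burden"]      # what it should have measured

for p in sorted(patients, key=cost_proxy_score, reverse=True):
    print(p["name"], "proxy score:", cost_proxy_score(p), "actual need:", need_score(p))
# patient_a is prioritised over patient_b despite identical need.
```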
AH: I want to refer to a story about radicalised, biased data that takes us all the way back to the Holocaust, between 1941 and 1945, before the computer even existed. Radicalised data collection was already happening during WWII, when IBM provided the Nazi regime with 2,000 punch card machines that were used in its concentration camps. Data generated by means of counting and alphabetizing equipment supplied by IBM was instrumental in the efforts of the Nazi regime to concentrate and kill six million Jews – around two-thirds of Europe’s Jewish population. Card-sorting operations were established in every major concentration camp, where Jews were moved from place to place, systematically worked to death, and their remains catalogued with icy automation.
IBM’s CEO Thomas Watson, after whom the current Watson technology is named, was fully aware of and orchestrated this whole process. Watson, whose leadership was obsessed with IBM’s growth and profit, was happy to provide Adolf Hitler with the company’s punch card machines and technology. He helped the Nazi regime consciously eliminate six million Jews, and now has his name on the most powerful data and machine-learning system, which currently misuses its technological power over us! To me this is absolutely insane and it’s a story I just can’t get my head around. I am incredibly lucky that I haven’t seen this happen first-hand, but it did affect my grandparents and so many other families from my country and the rest of Europe.
So many powerful corporations and advertising agencies are using IBM Watson as a service in order to gain more of our personal data, which will be used unethically against us. It affects me to see how corporate power is being misused, and how long these power structures have been kept intact.
RE: Do you think it’s possible to implement a form of online advertising that doesn’t compromise human rights and isn’t invasive?
SS: Targeted advertisements have their uses. For example, I want to see products that are relevant and useful for me (sorry wish.com), but knowing where to draw the line is difficult. If the news stories we’re exposed to (which influence our perception of the world) are driven by our behavioural data, can we ever make fully autonomous and democratic decisions? We all have a different perception of reality, that’s nothing new. I think the issues begin when our data or psychology is used in an attempt to control or manipulate that perception for another’s gain. The British Army’s ‘This is Belonging’ campaign drives me insane with its blatant attempt to target vulnerable young people such as care leavers, those from low-income households and gang members. I think that in order to implement online advertising in a way that doesn’t compromise human rights, we need much more transparency around how data is collected and advert targets are selected, a better understanding of the potential impacts on both a personal level and wider society, and control over our personal data.
AH: I agree with you, Sarah, that we need more transparency about data collection and the selection of targeted advertisements. Whether we’re on Google, Facebook, WhatsApp or Tinder, there’s always that other hidden element of data extraction. No matter how intimately or directly we communicate, we are producing data. That data becomes commodified, observed and surveilled, and will in the end be used against us. There’s no way we can see that, because these processes are hidden underneath an invisible layer. If we want to raise the visibility of that layer we have to dismantle the cloud, and this is one of the big challenges we have ahead of us. There’s an underlying issue with the architecture of the current cloud infrastructures that needs to be changed. The question is whether we can change this by engaging with the people who are making these infrastructures, or whether we would do better to engage with the users – who half the time are not aware of how these infrastructures work – in order to raise awareness and critical thinking.
RE: Can you explain a little about psychographic micro-targeting and how this impacts autonomy, diversity and our democracy?
SS: Psychographic micro-targeting is the hyper-targeting of individuals based on their attitudes, interests and values. Though targeted advertising is nothing new (i.e. demographic segmentation), psychographic micro-targeting has been made possible by vast amounts of data and complex computing systems. It’s a form of segmentation that delves much deeper than previous approaches were able to. A key danger with this form of targeting is its ability to weaponise emotions and create social divides for financial, social or political gain – at the expense of our autonomy and democracy. Cambridge Analytica played a key role in Donald Trump’s election using this exact approach, and there’s evidence that they were involved in the Leave.eu campaign. One of the biggest challenges with raising awareness and combatting the effects is that people are often simply unwilling to believe it. Convincing a person that they can no longer trust the sanctity of their mind is tough.
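A minimal sketch of what targeting on attitudes rather than demographics can look like at the selection step. The trait threshold and the two ad framings are invented for this example; it does not reproduce any real campaign’s tooling.

```python
# Hypothetical ad selection keyed to a predicted psychographic profile.
# Threshold and copy are invented for illustration only.

ads = {
    "reassurance": "Keep your community safe. Vote for stability.",
    "opportunity": "Back the bold choice. Vote for change.",
}

def pick_ad(profile: dict) -> str:
    # High predicted neuroticism -> security framing; otherwise aspiration.
    return ads["reassurance"] if profile.get("neuroticism", 0.0) >= 0.6 else ads["opportunity"]

print(pick_ad({"neuroticism": 0.8, "openness": 0.4}))
```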
RE: The exhibition looks at this idea of the “gamification of data”. Some people have called it the “Friendly Scout” of data collection, but what does it really mean, and should we be wary of it?
SS: Something that stood out to me during my research was the way data collection is often disguised as harmless fun. Things like personality quizzes that can access our Facebook likes (now banned – but we see similar versions like ‘which Friends character are you?’), apps like Pokémon Go, and Facebook’s ‘10 year challenge’ posts that are suspected of being used to train facial recognition algorithms. As an experiment I visited ‘Facebook Gaming’ and delved into all the third-party companies my data was being sent to after agreeing to the cookie policy. The list contained several behavioural analytics companies with the ability to transfer my data to the US and other locations. There’s also the whole ‘addiction’ aspect. App designers use techniques similar to those used in the gambling industry to create habit-forming products and games that keep us playing for longer.
This gamification of data collection is also enticing to younger people. In the UK, under GDPR, the age of data consent is 13, provided the information is presented in ‘a clear and plain language that the child can easily understand’. If adults aren’t aware of how their data is harvested in this way, or don’t understand the impacts of it, how can we expect a child to?
RE: What other issues are you both researching now for future work?
SS: I’m exploring a few things at the moment. After exploring ethical issues and the problems with big data through RBG, I’m looking at how we can rethink data and its applications. There’s a huge amount of open data available and I’m interested in how, as artists, we can find a way to return it to the people in a format that brings value.
AH: At the moment I’m researching the materials that are necessary for making my iPhone, by exploring the artificial landscapes of the non-renewable and natural resources hidden behind its screen. I want to create social awareness about the interconnected technological infrastructures that fuel our mobile lives, in order to criticize the commodity fetishism surrounding Apple and the iPhone.
I’m also interested in researching the concept of ‘liquid surveillance’, described in the book ‘Liquid Surveillance: A Conversation’ by David Lyon and Zygmunt Bauman. It’s a book I’d highly recommend if you’re interested in researching surveillance culture. I want to find out how I can translate the concept of liquid surveillance into a new artwork.
RE: Hey Sarah and Anni, thanks for chatting. Firstly, can you introduce yourselves and what you’re interested in?
SS: Hello! I’m Sarah Selby and I’m a Bristol based interdisciplinary artist interested in digital culture. I explore emerging technologies through interactive artworks and am interested in how as artists we can foster curiosity, facilitate discussion and provoke critical thinking around often inaccessible ideas.
AH: Hello 🙂 My name is Annemiek Höcker and I’m a Rotterdam based anti-disciplinary artist interested in the digital world that is hidden behind the screens of my electronic devices. Here lies a hinterland full of undiscovered surveillance practices, cloud-infrastructures, and artificial landscapes that I want to explore by making them visible.
I use new media as an expression tool to reflect on the rapid changes and the impact of technology on society in order to create awareness and facilitate discussion about societal issues surrounding the internet, surveillance culture and corporate power.
RE: You both have an interdisciplinary practice, and often collaborate – why is this important and how do you implement this into your work?
SS: Ani and I met on the Interactive Arts course at Manchester Metropolitan University whilst Ani was participating in the Erasmus University Exchange Programme. For me, the course strongly influenced my creative approach. It’s ideas driven with a strong emphasis on interdisciplinary collaboration and engagement with fields outside of the arts. I spent most of my time during my final year in the bioengineering department learning about generative design and cellular automaton. My love of interdisciplinary collaboration continued from there and I took part in Roche Continents, an ideas exchange programme that challenges the relationship and boundaries between Science, Art and innovation. Collaboration is a crucial element of innovation – often it’s the combination of diverse thought processes and skills that enable us to break free of niches and routine – to experiment. I implement this in my own practice through a strong research focus, connecting with people outside of my field and a willingness to ask ‘stupid’ questions. I think artists are like bridges, so if we’re going to build meaningful connections between disparate ideas the ability to collaborate and navigate unfamiliar disciplines is crucial.
AH: I studied Advertising and Critical Studies at de Willem de Kooning Academy in Rotterdam. Our learning system also had a very strong emphasis on interdisciplinary collaboration as most of the projects I did at the Advertising department we’re in teams. Apart from my major I was allowed to choose different practices in order to specialize within a certain field and collaborate with students from different departments. During the first years of art school I often doubted my role as an artist because I didn’t know how to define my creative practice as my work was always based around critical theory and visual culture but it could be expressed in multiple forms of media.
After studying Interactive Arts in Manchester I decided to take one year off in order to find out what I could create outside the walls of the art academy. I took the time to experiment and collaborate with other artists and got a job at an experimental art space in Amsterdam. During that year I discovered that I can create anything I want as long as I don’t feel restricted to one specific discipline and therefore decided to break free of all disciplines in order to give myself the ability to move freely in between different fields.
RE: Sarah, can you expand a little on the exhibition and what technologies and concepts are being used?
SS: Raised by Google examines current data practices and is heavily influenced by Shoshana Zuboff’s book ‘The Age of Surveillance Capitalism’. It uses behavioural analysis software ‘Apply Magic Sauce’ developed by The University of Cambridge’s Psychometrics Department to determine the psychographic profile of each visitor based on their interests. Based on this profile, visitors are directed through different doors in the maze, gradually herded into ever smaller rooms and essentially into physical ‘echo-chambers’ with others deemed by the algorithm to hold similar views. The profiling process is a very similar technique to that used by Cambridge Analytica during Donald Trump’s election campaign, in which Facebook users were invited to participate in a personality quiz before their profiles were correlated to their Facebook likes. This seemingly innocent, playful ‘personality quiz’ facilitated the profiling of millions of Americans and created a tool capable of understanding our behaviour traits better than friends, family and in some cases spouses. The impact of this on autonomy and diversity
The work is really a four step process. First is the gamified data-collection. To enter the maze viewers must complete a fun ‘quiz’ that collects their interests and builds their psychographic profile. In the second phase visitors train the algorithm, providing data that allows it to search for patterns or ‘clusters’ and draw correlations. In the third stage this data is turned against them, using their ‘Big 5’ personality traits to target them with specific ads drawn from the 2019 general election. Finally, visitors are confronted with the potential consequences of current data practices, in the form of a 4D baby scan of an unborn child. The child is being profiled whilst still in the womb using data collected from his parents to predict his IQ, education level, areas of interest, risk of offending etc.
The concept behind the work comes from a few different areas of research. As the first of the ‘digital natives’ we’ve been the lab-rat generation, which is where the initial idea of creating a maze-like structure came from. We were subjected to unauthorised experimentation such as Facebook’s emotional contagion experiment, and navigated new and unregulated digital environments. Algorithms trained by bias data are making decisions on policing, healthcare, insurance, employment and more. These issues are no longer constrained to digital
environments, but are actively impacting our real world experiences and opportunities. So the doors – dividing people by traits such as extroversion, gender, relationship status and neuroticism – become a metaphor for bias and privilege, acting as both barriers and passages for visitors.
RE: Can you expand on your involvement with Cambridge University and Apply Magic Sauce?
SS: I came across the Apply Magic Sauce (AMS) program after a co-worker found it online. Apply Magic Sauce is a prediction API developed by The University of Cambridge’s Psychometrics Department. It predicts your psycho-demographic profile from digital footprints of your behaviour, utilising platforms such as Twitter and data from Facebook and LinkedIn. AMS aims to help users understand their data, and reveal what is predictable (and therefore profitable) about them from the information they share online. I contacted Vesselin Popov hope (Executive Director of the University of Cambridge Psychometrics Centre) regarding the initial idea. Vess and the AMS team were very helpful and supported the project, assisting with research, queries and software.
RE: To what extent do you feel Raise sed by Google? And what are the negative and positive implications of this for our generation and future generations?
SS: I probably have a slightly different experience to others in my age group as my mum was always very wary of technology. My siblings and I were raised without TV, internet or mobile phones, so I didn’t have much access to that world until I moved to Manchester at 19. I think the sudden exposure is probably why technology and digital culture is such an area of interest for me – it’s easy to succumb to the ‘boiling frog’ effect and harder to question things if it’s all you’ve ever known. I understand my mum’s decisions and agree with a lot of them – social media can be an incredibly toxic place for young people and renders you accessible 24/7. There’s also other unhealthy habits I’ve picked up since leaving – like struggling to sleep without Peep Show playing in the background, or losing hours just scrolling through instagram! But there are also countless benefits – I’ve got friends from all around the world, engaged with communities and discovered knowledge I would otherwise not have had access to. Technology and the opportunities it creates are exciting, and I believe rather than shunning it we should strive to create positive applications and play an active role in the future direction. Whilst designing the show I found out that my sister was expecting her first child which strongly influenced my ideas and led me to put much more emphasis on future generations. It led me to consider the impacts of data collection, psychographic targeting and the automation of our opportunities over multiple generations. For example, if a parent is deemed high healthcare risk or potential offender, how might this data be applied to their children, or grandchildren? Are their futures calculated at conception by the underlying mechanisms of an increasingly digitalised society?
AH: It’s funny because I am the extreme opposite of Sarah. I started using the world wide web at the age of seven and can honestly say that I’ve been mesmerized and obsessed by it ever since I first visited this digital space. In the beginning I was mainly using the net for entertainment purposes such as online gaming. Google was our homepage that I used as a search engine for doing my homework and to find and collect nice pictures with Google image search. My internet behavior really started to change with the rise of social media platforms such as MSN messenger and Facebook. These new platforms introduced me to a whole new network that has had a major influence on my perception of social reality in the digital world. I think I was too young to get so deeply involved within these platforms without being educated about it by my parents or at school. My parents were monitoring what I was
doing online, but they couldn’t predict what kind of effects these platforms would have in the future.
In the beginning I used to look at social platforms as if they were tools that I was using for my own means. But we all know that platforms such as Google, Facebook and Instagram aren’t just tools anymore. These are platforms that are unethically using their power against us and we are no longer in the position to take control over this. To me this is a difficult aspect that I find really hard to deal with because from a critical point of view the current infrastructures and algorithms are leading us to a dead end. Young kids these days grow up in a very difficult political environment which is in a strange way very conservative and regressive. A young person is more watched and observed than ever before and at the same time their voice is being drowned out. Social platforms are psychologically designed to keep us there as long as possible in order for these platforms to extract as much personal data form us. Algorithms create timelines that for example mix hyper reality content such as the news with personal blogposts and 9GAG cat videos. This is where the border between social reality and reality fades. Mistaking social reality for reality is an important aspect of the internet that not only influenced me as a human being but also plays a major part in my work. I adapted the visual language of Google Earth in my graduation project to visualize the artificial landscapes of the natural resources that are hidden inside my Iphone and I designed Google BDSM gear to visualize the power relation between Google and the user. At the present moment I’m still using Google as my main search engine to do my research and collect nice pictures I find online. I use Google Maps whenever I’m lost in town or when I’m looking for an address, which has made me very reliable to their service. Ever since I was seven years old I’ve made choices and decisions based on the information and answers that Google has given me, so in that extent I can say that I feel raised by Google and their services.
RE: Thinking about the ways in which this type of work can assist in making inaccessible issues more accessible (via interaction, criticality and discussion), what role does art have in facilitating debate, providing an alternative education (of sorts), or highlighting important topics?
SS: I think one of the most powerful things about art as a form of education or tool for engagement is that is leaves room for viewers to think critically and form their own ideas or standpoints. Mainstream education often becomes about learning by rote and I think this diminishes our ability to do this. Today, in the era of fake news and psychographically-driven propaganda, our ability to question, think critically and draw our own conclusions is more important than ever. Art can also bring concepts and ideas often locked in academia to the public’s attention, translating concepts and ideas into a format that is engaging and promotes understanding of topics that are often very complex and inaccessible.
RE: Can you both talk about the processes of data and how they radicalise bias, privilege and opportunity in the real world? Have you seen this happen first-hand?
SS: Algorithms are trained with historic data created by humans that is often inherently biased. Not everything is quantifiable, and this leads to important contextual or social information being disregarded in the process. For example, it was recently discovered that a widely used health care algorithm had been favouring white patients over those from BAME backgrounds. It measured patient need by estimated future cost of healthcare, but failed to understand the much deeper and complex social issues such as access, insurance cover, job stability and income. With regards to witnessing it first hand, one of the biggest challenges with these issues is that it is such an opaque process. The applications and uses
are not widely understood and companies that adopt these systems are not necessarily transparent about their use. The term ‘black box algorithm’ refers to the obscurity of the process – we can see the data going in and the decision it makes but understanding what led to that is very difficult.
AA: I want to refer to a story about radicalized biased data that takes us all the way back to the Holocaust between 1941 and 1945, before the computer even existed. Radicalised data collection was something that was already happening during The WWII, when IBM provided the Nazi Regime from 2000 punchcard machines that were being used in their concentration camps. Data generated by means of counting and alphabetization equipment supplied by IBM was instrumental in the efforts of the Nazi Regime to concentrate and kill 6 million Jews, which was around two-thirds of Europe’s Jewish population. Card sorting operations were established in every major concentration camp, where Jews were moved from place to place, systematically worked to death, and their remains cataloged with icy automation.
CEO Thomas Watson to whom the current Watson Technology is being named, was fully aware and orchestrated this whole process. Watson, whose leadership was obsessed with IBM’s growth and profit, was happy to provide Adolf Hitler with their punch card machines and technology. He helped the Nazi Regime to consciously eliminate six million Jews and now has his name on the most powerful data and machine learning system that currently misuses its technological power over us! To me this is absolutely insane and this is a story I just can’t get my head around. I am incredibly lucky that I haven’t seen this happen in firsthand, but it did affect my grandparents and so many other families from my country and the rest of Europe.
So many powerful corporations and advertising agencies are using IBM Watson as a service in order to gain more of our personal data that will be used unethically against us. It affects me to see how corporate power is being misused and how long these power structures are being kept intact.
RE: Do you think it’s possible to implement a form of online advertising that doesn’t compromise human rights and isn’t invasive?
SS: Targeted advertisements have their uses. For example, I want to see products that are relevant and useful for me (sorry wish.com), but knowing where to draw the line is difficult. If the news stories we’re exposed to (that influence our perception of the world) are driven by our behavioural data, can we ever make fully autonomous and democratic decisions? We all have a different perception of reality, that’s nothing new. I think the issues begin when our data or psychology is used in an attempt to control or manipulate that for another’s gain. The British Army’s ‘This is Belonging’ campaign drives me insane with it’s blatant attempt to target young, vulnerable youths such as care leavers, low income households and gang members. I think that in order to find a way of implementing online advertising in a way that doesn’t compromise human rights, we need much more transparency around how data is collected and advert targets are selected, understanding around the potential impacts both on a personal level and wider society, and control over our personal data.
AH: I agree with you about the fact that we need more transparency about data collection and the selection of targeted advertisements Sarah. Whether we’re on Google, Facebook, Whatsapp or Tinder there’s always that other hidden element of data extraction. No matter how intimate or direct we are communicating, we are producing data. That data becomes commodified, observed and surveilled and will in the end be used against us. There’s no way we can see that, because these processes are
hidden underneath an invisible layer. If we want to raise the visibility of that layer we have to dismantle the cloud and this is one of the big challenges we have ahead of us. There’s an underlying issue of the architecture of the current cloud-infrastructures thats needs to be changed. The question is if we can change this by engaging with the people who are making these infrastructures or that we would rather engage with the users who half the time are not aware of how these infrastructures work in order to raise awareness and critical thinking.
RE: Can you explain a little about psychographic micro-targeting and how this impacts autonomy, diversity and our democracy?
SS: Psychographic micro-targeting is the hypertargetting of individuals based on an individual’s attitudes, interests and values. Though targeted advertising is nothing new (i.e. demographic segmentation), psychographic micro-targeting has been made possible by vast amounts of data and complex computing systems. It’s a form of segmentation that delves much deeper than previous approaches were able to. A key danger with this form of targeting is the way in which it has the ability to weaponize emotions and creative social divides for financial, social or political gain – at the expense of our autonomy and democracy. Cambridge Analytica played a key role in Donald Trump’s election using this exact approach, and there’s evidence that they were involved in the Leave.eu campaign. One of the biggest challenges with raising awareness and combatting the effects is that people are almost not willing to believe it. Convincing a person that they can no longer trust the sanctity of their mind is tough.
RE: The exhibition looks at this idea of the “gamification of data”. Some people have called it the “Friendly Scout” of data collection, but what does it really mean, and should we be wary of it? << not sure about this one – I guess I’m asking about the gamification, and your use of it in the website to collect data.
SS: Something that stood out to me during my research was the ways in which data collection is often disguised as harmless fun. Things like personality quizzes (now banned – but we see similar versions like ‘which Friends character are you?’) that can access our Facebook likes, apps like Pokemon Go, and Facebook’s ‘10 year’ challenges that are suspected to be used to train facial recognition algorithms. As an experiment I visited ‘Facebook Gaming’ and delved into all the third-party companies my data was being sent to after agreeing to the cookie policy. It contained several behavioural analytics companies with the ability to transfer my data to the US and other locations. There’s also the whole ‘addiction’ aspect. App designers use similar techniques to those used in the gambling industry to make create habit-forming products and games that keep us playing for longer.
This gamification of data collection is also enticing to younger people. In the UK under GDPR laws the age of data consent is 13 providing the information is in ‘a clear and plain language that the child can easily understand’. If adults aren’t aware of how their data is harvested in this way or understanding the impacts of it, how can we expect a child to?
RE: What other issues are you both researching now for future work?
I’m exploring a few things at the moment. After exploring ethical issues and the problems with big data through RBG, I’m looking at how we can rethink data and its applications. There’s a huge amount of open data available and I’m interested in how as artists, we can find a way to return it to the people in a format that brings value.
At the moment I’m researching the used materials that are necessary for making my iPhone by exploring the artificial landscapes of its non-renewable and natural resources that are hidden behind the screen. I want to create social awareness about the interconnected technological infrastructures that fuel our mobile lives in order to criticize the commodity fetishism towards Apple and the iPhone.
I’m also interested in researching the concept of ‘Liquid Surveillance’ that is described in the book ‘Liquid Surveillance: A Conversation’ by the authors David Lyon en Zygmunt Bauman. It is a book that I’d highly recommend if you’re interested in researching surveillance culture. I want to find out how I can translate the concept of liquid surveillance into a new artwork.
SS: Hello! I’m Sarah Selby and I’m a Bristol based interdisciplinary artist interested in digital culture. I explore emerging technologies through interactive artworks and am interested in how as artists we can foster curiosity, facilitate discussion and provoke critical thinking around often inaccessible ideas.
AH: Hello 🙂 My name is Annemiek Höcker and I’m a Rotterdam based anti-disciplinary artist interested in the digital world that is hidden behind the screens of my electronic devices. Here lies a hinterland full of undiscovered surveillance practices, cloud-infrastructures, and artificial landscapes that I want to explore by making them visible.
I use new media as an expression tool to reflect on the rapid changes and the impact of technology on society in order to create awareness and facilitate discussion about societal issues surrounding the internet, surveillance culture and corporate power.
RE: You both have an interdisciplinary practice, and often collaborate – why is this important and how do you implement this into your work?
SS: Ani and I met on the Interactive Arts course at Manchester Metropolitan University whilst Ani was participating in the Erasmus University Exchange Programme. For me, the course strongly influenced my creative approach. It’s ideas driven with a strong emphasis on interdisciplinary collaboration and engagement with fields outside of the arts. I spent most of my time during my final year in the bioengineering department learning about generative design and cellular automaton. My love of interdisciplinary collaboration continued from there and I took part in Roche Continents, an ideas exchange programme that challenges the relationship and boundaries between Science, Art and innovation. Collaboration is a crucial element of innovation – often it’s the combination of diverse thought processes and skills that enable us to break free of niches and routine – to experiment. I implement this in my own practice through a strong research focus, connecting with people outside of my field and a willingness to ask ‘stupid’ questions. I think artists are like bridges, so if we’re going to build meaningful connections between disparate ideas the ability to collaborate and navigate unfamiliar disciplines is crucial.
AH: I studied Advertising and Critical Studies at de Willem de Kooning Academy in Rotterdam. Our learning system also had a very strong emphasis on interdisciplinary collaboration as most of the projects I did at the Advertising department we’re in teams. Apart from my major I was allowed to choose different practices in order to specialize within a certain field and collaborate with students from different departments. During the first years of art school I often doubted my role as an artist because I didn’t know how to define my creative practice as my work was always based around critical theory and visual culture but it could be expressed in multiple forms of media.
After studying Interactive Arts in Manchester I decided to take one year off in order to find out what I could create outside the walls of the art academy. I took the time to experiment and collaborate with other artists and got a job at an experimental art space in Amsterdam. During that year I discovered that I can create anything I want as long as I don’t feel restricted to one specific discipline and therefore decided to break free of all disciplines in order to give myself the ability to move freely in between different fields.
RE: Sarah, can you expand a little on the exhibition and what technologies and concepts are being used?
SS: Raised by Google examines current data practices and is heavily influenced by Shoshana Zuboff’s book ‘The Age of Surveillance Capitalism’. It uses behavioural analysis software ‘Apply Magic Sauce’ developed by The University of Cambridge’s Psychometrics Department to determine the psychographic profile of each visitor based on their interests. Based on this profile, visitors are directed through different doors in the maze, gradually herded into ever smaller rooms and essentially into physical ‘echo-chambers’ with others deemed by the algorithm to hold similar views. The profiling process is a very similar technique to that used by Cambridge Analytica during Donald Trump’s election campaign, in which Facebook users were invited to participate in a personality quiz before their profiles were correlated to their Facebook likes. This seemingly innocent, playful ‘personality quiz’ facilitated the profiling of millions of Americans and created a tool capable of understanding our behaviour traits better than friends, family and in some cases spouses. The impact of this on autonomy and diversity
The work is really a four step process. First is the gamified data-collection. To enter the maze viewers must complete a fun ‘quiz’ that collects their interests and builds their psychographic profile. In the second phase visitors train the algorithm, providing data that allows it to search for patterns or ‘clusters’ and draw correlations. In the third stage this data is turned against them, using their ‘Big 5’ personality traits to target them with specific ads drawn from the 2019 general election. Finally, visitors are confronted with the potential consequences of current data practices, in the form of a 4D baby scan of an unborn child. The child is being profiled whilst still in the womb using data collected from his parents to predict his IQ, education level, areas of interest, risk of offending etc.
The concept behind the work comes from a few different areas of research. As the first of the ‘digital natives’ we’ve been the lab-rat generation, which is where the initial idea of creating a maze-like structure came from. We were subjected to unauthorised experimentation such as Facebook’s emotional contagion experiment, and navigated new and unregulated digital environments. Algorithms trained by bias data are making decisions on policing, healthcare, insurance, employment and more. These issues are no longer constrained to digital
environments, but are actively impacting our real world experiences and opportunities. So the doors – dividing people by traits such as extroversion, gender, relationship status and neuroticism – become a metaphor for bias and privilege, acting as both barriers and passages for visitors.
RE: Can you expand on your involvement with Cambridge University and Apply Magic Sauce?
SS: I came across the Apply Magic Sauce (AMS) program after a co-worker found it online. Apply Magic Sauce is a prediction API developed by The University of Cambridge’s Psychometrics Department. It predicts your psycho-demographic profile from digital footprints of your behaviour, utilising platforms such as Twitter and data from Facebook and LinkedIn. AMS aims to help users understand their data, and reveal what is predictable (and therefore profitable) about them from the information they share online. I contacted Vesselin Popov hope (Executive Director of the University of Cambridge Psychometrics Centre) regarding the initial idea. Vess and the AMS team were very helpful and supported the project, assisting with research, queries and software.
RE: To what extent do you feel Raise sed by Google? And what are the negative and positive implications of this for our generation and future generations?
SS: I probably have a slightly different experience to others in my age group as my mum was always very wary of technology. My siblings and I were raised without TV, internet or mobile phones, so I didn’t have much access to that world until I moved to Manchester at 19. I think the sudden exposure is probably why technology and digital culture is such an area of interest for me – it’s easy to succumb to the ‘boiling frog’ effect and harder to question things if it’s all you’ve ever known. I understand my mum’s decisions and agree with a lot of them – social media can be an incredibly toxic place for young people and renders you accessible 24/7. There’s also other unhealthy habits I’ve picked up since leaving – like struggling to sleep without Peep Show playing in the background, or losing hours just scrolling through instagram! But there are also countless benefits – I’ve got friends from all around the world, engaged with communities and discovered knowledge I would otherwise not have had access to. Technology and the opportunities it creates are exciting, and I believe rather than shunning it we should strive to create positive applications and play an active role in the future direction. Whilst designing the show I found out that my sister was expecting her first child which strongly influenced my ideas and led me to put much more emphasis on future generations. It led me to consider the impacts of data collection, psychographic targeting and the automation of our opportunities over multiple generations. For example, if a parent is deemed high healthcare risk or potential offender, how might this data be applied to their children, or grandchildren? Are their futures calculated at conception by the underlying mechanisms of an increasingly digitalised society?
AH: It’s funny because I am the extreme opposite of Sarah. I started using the world wide web at the age of seven and can honestly say that I’ve been mesmerized and obsessed by it ever since I first visited this digital space. In the beginning I was mainly using the net for entertainment purposes such as online gaming. Google was our homepage that I used as a search engine for doing my homework and to find and collect nice pictures with Google image search. My internet behavior really started to change with the rise of social media platforms such as MSN messenger and Facebook. These new platforms introduced me to a whole new network that has had a major influence on my perception of social reality in the digital world. I think I was too young to get so deeply involved within these platforms without being educated about it by my parents or at school. My parents were monitoring what I was
doing online, but they couldn’t predict what kind of effects these platforms would have in the future.
In the beginning I used to look at social platforms as if they were tools that I was using for my own means. But we all know that platforms such as Google, Facebook and Instagram aren’t just tools anymore. These are platforms that are unethically using their power against us and we are no longer in the position to take control over this. To me this is a difficult aspect that I find really hard to deal with because from a critical point of view the current infrastructures and algorithms are leading us to a dead end. Young kids these days grow up in a very difficult political environment which is in a strange way very conservative and regressive. A young person is more watched and observed than ever before and at the same time their voice is being drowned out. Social platforms are psychologically designed to keep us there as long as possible in order for these platforms to extract as much personal data form us. Algorithms create timelines that for example mix hyper reality content such as the news with personal blogposts and 9GAG cat videos. This is where the border between social reality and reality fades. Mistaking social reality for reality is an important aspect of the internet that not only influenced me as a human being but also plays a major part in my work. I adapted the visual language of Google Earth in my graduation project to visualize the artificial landscapes of the natural resources that are hidden inside my Iphone and I designed Google BDSM gear to visualize the power relation between Google and the user. At the present moment I’m still using Google as my main search engine to do my research and collect nice pictures I find online. I use Google Maps whenever I’m lost in town or when I’m looking for an address, which has made me very reliable to their service. Ever since I was seven years old I’ve made choices and decisions based on the information and answers that Google has given me, so in that extent I can say that I feel raised by Google and their services.
RE: Thinking about the ways in which this type of work can assist in making inaccessible issues more accessible (via interaction, criticality and discussion), what role does art have in facilitating debate, providing an alternative education (of sorts), or highlighting important topics?
SS: I think one of the most powerful things about art as a form of education or tool for engagement is that it leaves room for viewers to think critically and form their own ideas or standpoints. Mainstream education often becomes about learning by rote, and I think this diminishes our ability to think for ourselves. Today, in the era of fake news and psychographically-driven propaganda, our ability to question, think critically and draw our own conclusions is more important than ever. Art can also bring ideas often locked away in academia to the public’s attention, translating them into a format that is engaging and promotes understanding of topics that are often very complex and inaccessible.
RE: Can you both talk about the processes of data and how they radicalise bias, privilege and opportunity in the real world? Have you seen this happen first-hand?
SS: Algorithms are trained with historic data created by humans that is often inherently biased. Not everything is quantifiable, and this leads to important contextual or social information being disregarded in the process. For example, it was recently discovered that a widely used healthcare algorithm had been favouring white patients over those from BAME backgrounds. It measured patient need by the estimated future cost of healthcare, but failed to account for much deeper and more complex social issues such as access, insurance cover, job stability and income. With regard to witnessing it first-hand, one of the biggest challenges with these issues is that it is such an opaque process. The applications and uses are not widely understood, and companies that adopt these systems are not necessarily transparent about their use. The term ‘black box algorithm’ refers to the obscurity of the process – we can see the data going in and the decision that comes out, but understanding what led to that decision is very difficult.
AH: I want to refer to a story about radicalised, biased data that takes us all the way back to the Holocaust, between 1941 and 1945, before the computer even existed. Radicalised data collection was already happening during WWII, when IBM provided the Nazi regime with 2,000 punch card machines that were used in its concentration camps. Data generated by the counting and alphabetising equipment supplied by IBM was instrumental in the Nazi regime’s efforts to concentrate and kill six million Jews, around two-thirds of Europe’s Jewish population. Card sorting operations were established in every major concentration camp, where Jews were moved from place to place, systematically worked to death, and their remains catalogued with icy automation.
CEO Thomas Watson, after whom the current Watson technology is named, was fully aware of and orchestrated this whole process. Watson, whose leadership was obsessed with IBM’s growth and profit, was happy to provide Adolf Hitler with IBM’s punch card machines and technology. He helped the Nazi regime consciously eliminate six million Jews, and now his name is on the most powerful data and machine learning system that currently misuses its technological power over us! To me this is absolutely insane, and it’s a story I just can’t get my head around. I am incredibly lucky that I haven’t seen this happen first-hand, but it did affect my grandparents and so many other families from my country and the rest of Europe.
So many powerful corporations and advertising agencies are using IBM Watson as a service in order to gain more of our personal data, which will then be used unethically against us. It affects me to see how corporate power is being misused and how long these power structures have been kept intact.
RE: Do you think it’s possible to implement a form of online advertising that doesn’t compromise human rights and isn’t invasive?
SS: Targeted advertisements have their uses. For example, I want to see products that are relevant and useful to me (sorry wish.com), but knowing where to draw the line is difficult. If the news stories we’re exposed to (which influence our perception of the world) are driven by our behavioural data, can we ever make fully autonomous and democratic decisions? We all have a different perception of reality, that’s nothing new. I think the issues begin when our data or psychology is used in an attempt to control or manipulate that perception for another’s gain. The British Army’s ‘This is Belonging’ campaign drives me insane with its blatant attempt to target vulnerable young people such as care leavers, low-income households and gang members. I think that in order to implement online advertising in a way that doesn’t compromise human rights, we need much more transparency around how data is collected and advert targets are selected, a better understanding of the potential impacts both on a personal level and on wider society, and control over our personal data.
AH: I agree with you, Sarah, that we need more transparency about data collection and the selection of targeted advertisements. Whether we’re on Google, Facebook, WhatsApp or Tinder, there’s always that hidden element of data extraction. No matter how intimate or direct our communication is, we are producing data. That data is commodified, observed and surveilled, and will in the end be used against us. There’s no way we can see that, because these processes are hidden underneath an invisible layer. If we want to raise the visibility of that layer we have to dismantle the cloud, and this is one of the big challenges we have ahead of us. There’s an underlying issue with the architecture of the current cloud infrastructures that needs to be changed. The question is whether we can change this by engaging with the people who are making these infrastructures, or whether we would rather engage with the users – who half the time are not aware of how these infrastructures work – in order to raise awareness and critical thinking.
RE: Can you explain a little about psychographic micro-targeting and how this impacts autonomy, diversity and our democracy?
SS: Psychographic micro-targeting is the hyper-targeting of individuals based on their attitudes, interests and values. Though targeted advertising is nothing new (i.e. demographic segmentation), psychographic micro-targeting has been made possible by vast amounts of data and complex computing systems. It’s a form of segmentation that delves much deeper than previous approaches were able to. A key danger with this form of targeting is its ability to weaponize emotions and create social divides for financial, social or political gain – at the expense of our autonomy and democracy. Cambridge Analytica played a key role in Donald Trump’s election using this exact approach, and there’s evidence that they were involved in the Leave.eu campaign. One of the biggest challenges with raising awareness and combating the effects is that people are often simply not willing to believe it. Convincing a person that they can no longer trust the sanctity of their mind is tough.
RE: The exhibition looks at this idea of the “gamification of data” – you use gamification in the website to collect data, for instance. Some people have called it the “Friendly Scout” of data collection, but what does it really mean, and should we be wary of it?
SS: Something that stood out to me during my research was the way in which data collection is often disguised as harmless fun. Things like personality quizzes that can access our Facebook likes (now banned – but we see similar versions like ‘which Friends character are you?’), apps like Pokémon Go, and Facebook’s ‘10 Year Challenge’, which is suspected of being used to train facial recognition algorithms. As an experiment I visited ‘Facebook Gaming’ and delved into all the third-party companies my data was being sent to after agreeing to the cookie policy. It included several behavioural analytics companies with the ability to transfer my data to the US and other locations. There’s also the whole ‘addiction’ aspect. App designers use techniques similar to those used in the gambling industry to create habit-forming products and games that keep us playing for longer.
This gamification of data collection is also enticing to younger people. In the UK, under GDPR, the age of data consent is 13, providing the information is presented in ‘a clear and plain language that the child can easily understand’. If adults aren’t aware of how their data is harvested in this way, or don’t understand the impacts of it, how can we expect a child to?
RE: What other issues are you both researching now for future work?
SS: I’m exploring a few things at the moment. Having looked at the ethical issues and the problems with big data through RBG, I’m now looking at how we can rethink data and its applications. There’s a huge amount of open data available and I’m interested in how, as artists, we can find a way to return it to the people in a format that brings value.
AH: At the moment I’m researching the materials that go into making my iPhone by exploring the artificial landscapes of the non-renewable natural resources hidden behind its screen. I want to create social awareness about the interconnected technological infrastructures that fuel our mobile lives, in order to criticise the commodity fetishism towards Apple and the iPhone.
I’m also interested in researching the concept of ‘liquid surveillance’ as described in the book ‘Liquid Surveillance: A Conversation’ by David Lyon and Zygmunt Bauman. It’s a book I’d highly recommend if you’re interested in researching surveillance culture. I want to find out how I can translate the concept of liquid surveillance into a new artwork.