Are my programs artists? As far as I am concerned, no. They are computer programs that I wrote to make my own art and fulfil a similar role to any other medium I have worked in to make art. The idea that the programs independently chose to write themselves and incorporate my observations and ideas into themselves in order to execute themselves on a computer and make works of art frankly strikes me as ridiculous.
The change of name from Facebook to Meta has garnered so much media attention that it makes the issue raised in Sarah Thompson Bell’s book ‘The Art Object As Data; Artists, Data and Psychoanalysis’ particularly relevant, namely the need for us to recognise the psychology of representing ourselves in digital form. She addresses this by applying the psychoanalytic theories advanced by Winnicott, Klein and Bion in their exploration of what constitutes what one might call a healthy psychological narrative of what it is to exist as a living human being. The effect of digital technology on how we differentiate our selves from, and relate to, other entities, be they other people or abstract concepts, has already been causing undeniable concern. What I mean by abstract concepts in this context are social constructs such as, for example, nationality, race, gender and behavioural conventions. How we see ourselves in our relationships with these things is being momentously influenced by the impact of what one might refer to as the digital realm or, as the current buzz word (or is it biz word?) has it, the ‘metaverse’(1).
Our digital representations in this apparently parallel and imaginary, yet actually physically integrated environment (bits do not exist without a medium to exist in, if only as temporary fluctuations in unimaginably small electrical charges) can be seen as ‘data objects’. In Thompson Bell’s analysis these are engaged in what, building on psychological ‘object relations theory’, she identifies as ‘data object relations’, and this is where the relevance of her insight becomes a practical aid to understanding the challenge that our senses of self are undergoing. Whether it is the data we deliberately map into our devices, such as selfies, messages, calendar entries etc., or the behavioural surplus harvested and analysed by organisations like ‘Meta’ or ‘Alphabet’, by legitimate researchers or by shadier groups and individuals, in the digital realm we are represented by data objects.
What this means, according to Thompson Bell, is that if we are to achieve wellness and personal mental health, as well as the broader social benefits of such, we need to determine what is and what is not us. This is, as ever, not a trivial question philosophically. In day-to-day life, however, getting to know and come to terms with the dynamic liminal nature of our digitised interface to the rest of the world might be significantly improved by taking on board the concept of ‘data object relations’ and applying it to enhance our ability to comprehend just what is happening to us and the roles we can take in shaping our destiny. As we confront the impact of humanity on the environment of planet Earth, if we are going to be able to do anything to correct the imbalance and the threat to our very existence, understanding how and why we behave the way that we do becomes more urgent than ever before. Exploring the theory of data object relations cannot provide all the answers, but it may just illuminate a part of the complex challenges we are facing: our relationships to each other.
(1) a word that, intriguingly, my phone’s dictionary did not recognise, which is an apt illustration of the kind of impact that representing actual life in a mapping such as a language (2) can have.
(2) The digital realm is a medium and the different ways it is used can be seen to be different languages.
I can only be who I am however shitty that might be
To some extent I feel as if I own myself yet to another I am like an observer
imprisoned in an autonomous entity that has its own independent agenda.
It chooses when we will feel tired or energetic, hungry or sick, bored or sexually aroused. My role would seem to be to wrestle mentally with this desperate being and force it into some kind of conformity in order that we both survive as long as possible.
Sometimes we get on.
At other times our joint decisions get us into trouble. There were times when we worked in harmony, a partnership or duet so close in our motivations and satisfactions that to anyone else we might as well have been one. Now, as we age the relationship is facing challenges where often we seem to be at odds with each other.
The frustration and annoyance that ensues is enangering – there may not be such a word, yet that is how I often feel – enangered with myself. Yes, my self, that vexing and yet often rewarding organism that constitutes my existence can only be called my …
Selfhood has its bonuses but it brings with it these challenges.
There have been times when we have been estranged, when it seemed that we might part company and this self would cease to be. When we disagreed so much that it led to a disharmony close to death, that final parting of the ways. Patched up, in some kind of resumed coalition we continued and so we stumble on. Surviving despite and yet because of how we mostly pull together. They say that I can help keep you out of trouble and in turn you might ensure that I continue to exist as well.
Your silence is endlessly infuriating but of course your actions speak louder than words. Our acts are what can prove who and what we are.
So we struggle on, cliché bound, the consequence of generations of previous similar relationships if we can trust the evidence presented to us. Bound by nature and natural forces, indeed a force of nature if ever so weak. It is said that gravity is a weak force and yet its consequences are enormous. Can we have consequences?
Paul Taylor’s review of books on AI in the London Review of Books (Taylor 21) gave a neat and succinct overview of some of the issues that are opened up by the subject, and reminded me of a meeting I had with the artist Edward Ihnatowicz back in 1986 when researching for my PhD. I had asked him what he thought of the idea of offline robot programming, i.e. testing the programs used to control robots in digitally simulated environments. Some of my colleagues at the then Loughborough University of Technology Computer-Human Interface Research Unit (LUTCHI) were at the time exploring such techniques. His response was that he believed that the only way that artificial intelligence with the capabilities of humans might be developed would be to build a technology that could explore the world independently just as humans do. A physical engagement with the complexities of the actual world would be the only way that those aspects of phenomena that we cannot ourselves communicate, but that do influence our behaviours, might be acquired by machines. Ihnatowicz’s works, SAM and The Senster, gave, I think, vital clues as to one of the most difficult challenges we face when developing AI, which is our tendency to anthropomorphise just about anything.
Indeed, the map is not the territory, as implied in Taylor’s article, and that applies to our knowledge of ourselves as much as to any other phenomena that we investigate. As Brian Gaines put it, “It is our common ‘hallucination’ that is significant, not its source or accuracy. … much of the ‘world’ relevant to discourse is not the hard empirical world of the physicist, but rather the constructed, imagined world of artist, novelist, and poet – the world of concepts and possibilities that we ourselves fabricate.” (Gaines 78) It is fascinating to consider what Ihnatowicz’s postulated entities might make of their existence. Not least, in the spirit of Taylor’s review, one wonders how able they might be to communicate what they learn in a way that we can comprehend.
What Taylor’s review really draws my attention to, however, is that, rather than the verisimilitude of artificial intelligence to that of humans, AI research and development is as much if not more important because of what it leads us to learn about ourselves. His title could in fact just as easily refer to us.
Taylor 21: Taylor, P., “Insanely Complicated, Hopelessly Inadequate”, London Review of Books, Vol. 43, No. 2, 21 January 2021
Gaines 78: Gaines, B., “Man-Computer Communication – what next?”, International Journal of Man-Machine Studies, Vol. 10, pp. 225–232, Academic Press, 1978
At The Slade from 1977 to 1979 I discovered the creative potential of Dungeons & Dragons at the same time as discovering what computing could do, and it seemed to me that if animation had been the innovative art form of the twentieth century, then some sort of computer-enabled interactive medium – a synthesis of computing, holography and multiple media role-play gaming perhaps – would constitute the art form of the twenty-first. Quite what it would be was hard to envisage but the ingredients seemed to be there.
Any writing ambitions I had were redirected into the design of scenarios for role-play games, which I just played with friends, with the goal of creating an imaginary world for them that we would remember and talk about in the same way we spoke about books we had read or films we had seen, in particular Tolkien’s, with the added sense of having been a participant in the events recalled. Later I formed the opinion that it was a medium really just for small groups of participants rather than for others to watch, as the essential experience was being part of it rather than seeing others perform. Something else I thought particularly special was that every participant has a different point of view, like a multi-point perspective. An interactive work shares with the idea of relativity in cubist composition the explicit recognition that there are many ways of seeing the same phenomenon.
My knowledge of computing was sparse when I first started programming in the late seventies; although I tried to understand how computer games worked, I had no idea of the processing power available at the time, or of just how and what might be achieved. So instead I focused on computing for my visual art, which used very simple techniques, and on role-play gaming for storytelling. I saw my RPG ‘hobby’ as similar to Calder’s flea circus.
Later at University of Kent at Canterbury (UKC) as Artist in Residence, having already been involved in historical re-enactment (with the Dark Ages Society), I got involved with the university’s Live Action Role-Play society (RETROS – REal Time ROle-play Society). After that some of my effort went into evolving rules with my fellow players.
Retros LARP at UKC
When I moved to Loughborough, one of my UKC friends, John Naylor, who lived in Nottingham, formed a small group of us, led by him, to set up a society called “Fools & Heroes”, which is still going. We worked out rules together to enable people to set up their own games across the country and also to gather together for national events. A very satisfying thing for me is that the runic script that they still use was one I had knocked up for my D&D world, a very poor imitation of Tolkien’s work, and later introduced into the society. Some of the “historical context” that John invented, and that I elaborated upon a little, has lasted and been built upon enormously over the past thirty years or so. There are many people around the country from all walks of life who will have taken part in creating and enacting stories set in the Axirian Empire. I rather like that this is a popular dramatic form rather than “high art”, although recently some aspects of LARP have been seen in art galleries.
Playing an Orc in Fools & Heroes
I dropped my involvement in LARP and re-enactment when I had to focus on my PhD write-up and since then have just played tabletop RPGs with a bunch of very good longtime friends. The most recent scenario that I have run with them over several years is set in an alternative history based upon Japanese mythology using the old Bushido rules published by Fantasy Games Unlimited.
There has been a formal connection between my computer based artwork and the role-play gaming ever since I started pursuing them in parallel. For one thing the programmed agents that generate the artwork are similar to the players in a game. Each programmed entity generates its own story as it moves and interacts with others. The resulting piece can be read as an interweaving of different stories. One can potentially choose to follow any one of them. Sometimes the code is edited to document their decisions and actions and a program could be written to generate narratives based on this. Either the agents could write autobiographies or a meta-agent could write about them – and so on. As my partner, now wife, Sarah Thompson asked at a Creativity & Cognition conference in Loughborough many years ago though – what is going to motivate an AI to do something like write a story?
Interactive piece exhibited at SMI 2013
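The idea of agents that generate their own interweaving stories can be sketched in outline. This is a minimal, hypothetical illustration, not my actual code: the `Agent` class, its random movement rule and its log format are all invented for the example.

```python
import random

class Agent:
    """A hypothetical drawing agent that records its own 'story' as it acts."""

    def __init__(self, name, x=0.0, y=0.0):
        self.name = name
        self.x, self.y = x, y
        self.story = []  # a log of decisions and actions, one entry per step

    def step(self, rng):
        # Pick a direction at random; a real agent would react to the others.
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        self.x += dx
        self.y += dy
        self.story.append(f"{self.name} moved to ({self.x:.0f}, {self.y:.0f})")

rng = random.Random(1)
agents = [Agent("a"), Agent("b")]
for _ in range(3):
    for agent in agents:
        agent.step(rng)

# Each agent's log is one strand of the interwoven narrative; a further
# program (or meta-agent) could turn these logs into prose.
for agent in agents:
    print("; ".join(agent.story))
```

Reading the resulting piece then becomes a matter of choosing which agent’s log to follow.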
It is possible that I have a peculiar approach to my interactive work compared to some other artists. Rather than seeing it as a vehicle for other people to be creative (as in Fools & Heroes, which might be seen as “democratising”), I was initially attracted to interaction as a way to bring participants to experience things that I had experienced and wanted to share with them, retaining authorship but engaging them in what I had discovered: exploring the aesthetics of participating in constructing and finding visual and physical phenomena.
The “Virtual Reality Aesthetic Programming Interface” (VRAPI) was conceived at a time when the idea of everyone learning programming at school had dropped out of favour, and it seemed like a way to get people to do it by stealth, by using a visual programming language (VPL) in a VR context, as well as engaging people in the kind of discoveries that I was making at the time. The message, or not so hidden agenda, being “hey look, it’s great to program”. Now that kids have Scratch etc., that aspect of it is a bit redundant. I would, however, like to see where it would lead me if I did manage to get it working, so I am working out how to make it in Unity.
Since the mid eighties I have been using what amounts to a social distancing algorithm to make work. The programmed entities that trace shapes in the compositions often follow a rule to keep an optimum distance from each other, and this affects the resulting patterns. It is passing strange to witness the effect of a similar rule on people rather than as part of a computer program. The peculiar sensation of having the rules underlying much of the behaviours built into the artwork manifest in the changes in the way strangers interact with each other as people adopt the new laws is uncanny. It has led to a confusing mix of emotions, including both horror and a bleak amusement informed by guilt at seeing this as a macabre dance directed by a preventative choreography.
Social Distancing – a macabre choreography
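The distance-keeping rule described above can be sketched as follows. This is a simplified, illustrative version, not the actual program: the function name, the optimum distance and the gain are assumptions made for the example.

```python
import math

def separation_step(positions, optimum=2.0, gain=0.1):
    """Nudge each point away from any neighbour closer than the optimum
    distance - a toy version of the distance-keeping rule."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        push_x = push_y = 0.0
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            d = math.hypot(x - ox, y - oy)
            if 0 < d < optimum:
                # Push away, more strongly the closer the neighbour is.
                push_x += (x - ox) / d * (optimum - d)
                push_y += (y - oy) / d * (optimum - d)
        new_positions.append((x + gain * push_x, y + gain * push_y))
    return new_positions

pts = [(0.0, 0.0), (1.0, 0.0), (5.0, 0.0)]
for _ in range(50):
    pts = separation_step(pts)
```

After repeated steps the two crowded points drift apart toward the optimum distance, while the isolated one is left alone; the patterns in the compositions emerge from many entities applying this kind of rule at once.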
It looks to me as if we are participating in a kind of Pina Bausch influenced performance. The absurdity and at the same time the peculiar elegance of the moves that people make. The slight improvisations on repeated movements reflecting the person doing them. The small signals or grunts of acknowledgement when people move aside to give way in the daily game of pavement roulette enacted by a mix of exercisers, shoppers laden with bags or on their way to shop, masked or unmasked, all burdened, with an almost palpable nervousness revealed by the more obvious attention paid to every other person in sight. Birds looking out for predators. How well will the dog be kept under control? Shall I step out into the road to let him by? No, the jogger has, running right around a parked car or two so that my wife and I can stay on the pavement. I ready myself to do the same as I see a woman approaching with a child in a pushchair, but she moves first and has stepped briefly into someone’s drive so we can pass; we nod acknowledgement and I smile, nervously, as there’s so little to smile about.
I feel that we ought to be adopting some polite expression to acknowledge these many small kindnesses. A signal we can give those on essential support trips by car or bus, to show how much they are appreciated and that the risks they are taking have been noted.
There are occasionally moments of discord. Like when someone chooses not to play along. Who does not step aside. Like when outside a supermarket everyone in the queue is stood close against the wall so others can pass by on the far side of the pavement at a decent distance, yet one man walks straight down the centre of the path, too close, too close. Is he a fool? Don’t you know that you do not do this social distancing only for others but for yourself too, I think. I can’t say “You selfish bastard,” because it’s not good for him either, so what is he playing at? Is he immune? What does he have on his mind that is more important than staying as safe as possible? What can he have against those of us who are trying so hard to stay safe for the sake of both ourselves and everyone else, but especially the ones that we love? That’s the shock of it: he seemed like a man with no love.
Trying to make sense of the movements of people in this way is so much like what is involved in interpreting what is going on in the compositions made by my programs. Why is that animal following that route? Are they moving away from or toward something, or both? How successful have they been in achieving their goal and what might that goal have been?
We are part of a culture where our perception of the real is a complicated tapestry where illusion and fact have become woven into ever more sophisticated patterns which are, due to the invention of digital media, increasingly difficult to recognise and interpret. A situation for which I sometimes think the old term “mazed” seems most appropriate. I recommend listening to this programme for a particularly informative insight into how to identify some features of interest that contribute to making a kind of sense of it.
Among the topics the programme explores is A/B testing. One example is how data gleaned from analysis of their online profiles can lead to different prices being quoted to different people, e.g. more expensive prices offered to people using Macs. As a Mac user, this information particularly pissed me off.
Another discussion was about different headlines leading to different feelings about the same item and consequently a different experience and understanding. This was more sinister than the different cost issue. I had been aware of the mysteriously feckless nature of Facebook as a communication medium but it turns out that it’s merely the most obvious mangler of our world view. They are legion.
A clue as to how complex the challenges to our understanding of the effects of our digital systems can be came from the game World of Warcraft. A feature in the game, a plague curse that had unexpected consequences, was studied extensively by social scientists. For me the part that coincided with what I know from my own work was particularly relevant: the acknowledgment that in the game it is the unknown unknowns that emerge that are interesting, and that they come from participant behaviours. The game mechanic is known, but the consequences when it is placed in an unpredictable context, i.e. among human participants, are not. What makes the technique of interactive and participatory work so fascinating is discovering the surprising results of the interaction of the understood with that which is not.
In the programme the argument made in favour of study over untested policy implementation is especially important, and difficult too, as it appears to include what might be considered potentially inhumane acts when people are unknowing experimental subjects. That this subterfuge may be essential for authentic data to be obtained makes for potentially fraught ethical discussion. Inexpert use of research techniques can lead to seriously misleading results. How ethically trained or aware are the non-experts now able to access analytical tools, and to what ends might they inadvertently end up developing unwanted effects? For example, the programme points out the risks in AI training of using poorly derived data. This need for expertise in methodology was something that I learnt about when conducting a questionnaire as part of my PhD research and a participant pointed out that I was not doing it particularly well.
As online data analysis becomes available to more and more people, how many bother to learn the sometimes tediously necessary skills to do it right? This programme offers some idea as to why it is well worth the trouble.
By nature humans are inclined to make efficient use of resources, including thought, by seeking simple solutions, but this means that we are vulnerable to oversimplification.
Current computing technology works because it uses an extreme form of simplification. A binary digit (bit) represents only two states: yes or no, on or off, true or false, 0 or 1, and so on. Although simple at base, complex systems have been built upon this efficient symbolic representation that can achieve amazing feats of computation. These more complicated constructs (systems or applications) made using the simple basic component enshrine the binary principle in a way that fundamentally permeates the solutions to any challenges faced in their design. Inevitably the functionality of these constructs is biased toward favouring, and hence promoting, solutions founded on binary logic, despite the fact that there may be other, non-binary approaches.
The human race has a long history of employing technologies and ways of thinking that exploit simplicity in a similar way to the use of bits in information technology1. These simplifications also embody biases in ways that make some solutions seem more convincing when actually there may be many others. This inbuilt bias has been recognised by many people. Recently Douglas Rushkoff has done a good job of drawing attention to it. He identifies clearly the unfortunate consequences of some of the most popular ways that the technology is being used. However, to avoid catastrophe it is essential to avoid blaming the technology alone and understand why it is being used this way. We have to deal with the fact that humans conceived of and designed it and that others choose to use it despite the warnings. We all have this predisposition toward the simple over the complicated. I am arguing here that we have to understand why and how technologies and systems are often built to be oversimple either intentionally or by unconscious inclination and should learn instead how to promote the idea of embracing complexity.
There is evidence of this human tendency to adopt simple solutions in prehistory. An example that convinced me of this is the Neolithic field systems discovered at Céide Fields. I can’t remember whether, when I was told about them, I was immediately struck by the similarity between the field systems and a computer spreadsheet, or whether it simply fed into a worry that had been festering in my mind for quite a while2. It stuck in my mind, however, and eventually led me to realise that it might provide a clue to answer a question that had bothered me for quite some time. I had already been concerned by the similarity between aspects of the “matrix management” approach to running organisations and the design of simple computers. I had learnt about the management technique in 2007 when considering applying for a management role which demanded knowledge of that system of organisation. When I looked into it, the method immediately reminded me visually of the state transition tables that I had encountered in rudimentary AI programming back in the late 1970s and continued to exploit in my art work. What bothered me was the way matrices were used to determine which manager a given worker should report to regarding a given project among the several simultaneous projects they might be engaged in, or likewise to let a manager know which of the projects she should be communicating with a given worker about. This appeared to be startlingly open to errors that could create the equivalent of infinitely recursive loops, zombie processes and other undesirable events. Matrix management appeared to be a route to inefficiency and to more serious bugs and crashes. That it would be necessary to embrace the technique made the job pretty unappealing. The arguable value of matrix management notwithstanding, learning of it made me aware that the ubiquity of computer based management tools might be shaping decisions that would be better made independently of the technology.
After a while an underlying bias toward binary, either/or decision making, even when not using computer software, emerged and bothered me more and more. The actual technique being designed into the application or system became less important in my thoughts than the use of computer technology to embody human ideas of any kind, particularly in the light of the way I began to feel that academic life had changed considerably during the forty years or so that I had been a university lecturer and researcher. By moving away from holistic, intuitive and complex management decisions based on human interaction to an approach mediated by computer software, we had changed from a situation where one felt that one was working for human beings to one where it began to feel that we were part of a machine, or like entries in a spreadsheet. I came to believe that what mattered about us to our managers was not whether we were complete, rounded human beings with individual qualities and skills appropriate to our jobs. Instead it seemed that, if indeed we were identified at all as individuals, we were seen as the calculated result of whatever came out of the statistical entries in an accounting program’s algorithms. Who we were appeared to be based upon book-keeping entries in which our individual qualities were only considered to have value if they could be measured or counted. If something that marked a person out and made them valuable to colleagues and students could not be abstracted from the complex reality of being in a way that conformed to what the system was able to process, it did not compute and might as well not exist. This led me to the conclusion that rather than choosing to rely completely upon an adopted technology and being tied to its implicit biases, a broader range of human abilities should be applied to any technology-mediated3 task.
This feeling of mine was not a carefully calculated and scientifically deduced solution to a problem, but a very human and felt intuition based upon years of using software and programming computers to make art and teaching aspects of it to others. I recognised things in common between the changes in management that I and friends in other working environments had experienced and what I was inclined to do when programming. From personal experience I knew how creative decisions could be limited by the technology. It seemed more and more likely to me that, by using binary-based digital computer technology in so many aspects of our lives, we were reflecting its fundamental simple principle at every level of human thought, amplifying the apparent value of either/or, us/them thinking and devaluing other ways of thinking, particularly more complicated ones.
One of the major influences on my thought regarding the potential problems with relying on binary systems was when I learnt that the actual memory cells used to store bits in computers need not be either one thing or another, 0 or 1, but that the physical embodiment of a bit could be a capacitor with either high charge or low charge. Not a cut and dried difference between charged or not, but rather a matter of degree. It was only because the system was designed to treat high or low as one or zero, true or false, that it worked. The physical representation of several bits could each have slightly different high charges but all would be treated as high enough to be interpreted as having virtually the same value, say 1, even though they were actually not the same. If a bit had a low enough charge it would be seen as, say 0. The whole machine is based on manipulating these either/or symbols. Because the actual value of two cells in a system might be subtly different yet for the purposes of the system they were each treated as either one thing or another, I thought, put a human being in that position and their individuality becomes irrelevant.4
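A toy version of that thresholding can make the point concrete. The function and the threshold value here are invented for illustration; real memory hardware is of course far more involved.

```python
def charges_to_bits(charges, threshold=0.5):
    """Interpret analog charge levels (0.0-1.0) as binary digits.
    Any charge above the threshold counts as 1, anything below as 0,
    regardless of the actual value - a toy version of how memory cells
    with subtly different charges are all read as the same bit."""
    return [1 if c > threshold else 0 for c in charges]

# Three cells with slightly different 'high' charges and one 'low' one:
cells = [0.92, 0.87, 0.95, 0.12]
print(charges_to_bits(cells))  # prints [1, 1, 1, 0]
```

The three distinct high charges all collapse to the same symbol: the individuality of each cell is discarded by design.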
In short, suddenly I saw that the either/or, high/low nature of the individual cells was being exploited to embody the human way of separating things into different categories and bunching them together. This is where the Céide Fields came into my thinking. What I saw was evidence that humans had for thousands of years been organising things, separating them out and containing them in some way – by corralling them. This seemed to me to resonate with putting values in a spreadsheet, or people in a management matrix. The binary in computer technology encourages, even necessitates, the corralling of people. Techniques of simplification enable humans to comprehend complex situations more easily, but, like the abstraction used in map-making that edits out features the map-maker is not interested in recording, they can lead to profound misunderstandings.
When using computer technology, particularly to find solutions to the challenges that face us in trying to determine the future of humans and the planet, it is essential to recognise that the technology appeals to a tendency in human nature to oversimplify. Simplifying a complex universe to improve efficiency can often be expedient, but oversimplifying complex issues can also lead to major errors of judgement and potential disaster. Binary representations are maps, and the map is not the territory.
1 One that is of enormous significance is the technology of storytelling that supports “Us and Them” narratives.
2 I am pretty sure that I first heard about the field systems from a participant in the “Solid Modelling International 2013” symposium who mentioned during a break that he was taking the opportunity of being in Europe to visit them.
3 Broadly interpreted as any technology used; this includes all mental and physical mediating techniques.
4 An exercise sometimes used to teach people how computers work includes getting a row of people each holding up a card with a zero on one side and a one on the other to represent binary data cells. They are each being a bit.
Further viewing – Robert Sapolsky on categorisation is very informative. See https://www.youtube.com/watch?v=NNnIGh9g6fA
Algorithms are not just used by computers
In the media recently I have become aware of a peculiar change – the increasing reference to algorithms. The term algorithm is quite often used with a sense of awe or fear or both, and mainly with reference to Artificial Intelligence. The worrying thing is that it is beginning to seem as if algorithms only exist in computers and even that there are only AI algorithms. But humans have used algorithms for centuries, if not millennia.
Consider how, if we want to make a meal that we have not made before, we follow a recipe. When we do, we are in effect following an algorithm. If we want to construct a piece of flatpack furniture we are provided with instructions to follow – also a sort of algorithm. Want to play a tune that we have not played before? We follow a score – another kind of algorithm. An author wants to share a story that she has imagined, so she writes it down. A story is a particularly ingenious algorithm that will lead us to exercise our minds interpreting the language-encoded-in-words in such a way that we get the story, because a text is a way of leading the human brain to have particular thoughts that lead to conclusions and emotions intended by the storyteller. I could go on, but I leave you to come up with more examples. My point is that as humans we use something like algorithms quite often and to good effect. It is no coincidence that the word is derived from the name of a man who described a method to enable human computers to achieve particular mathematical goals. Following ordered steps to achieve something is part of how we are able to achieve so much of what humans are able to achieve. But only one part. We pride ourselves that we are able to use our discretion when following these rules, because we know that human algorithms are for the guidance of the wise and the obedience of fools, don’t we? We know we are following algorithms. We know when we should and when we shouldn’t, and how and why they work, don’t we?
Do we understand the rules?
Well, actually, possibly not. I can make a mean white sauce, but I do not know exactly why, when I make a roux of flour and butter and then add milk, it does what it does; whether it works or not depends on how well I execute the algorithm. If I play or sing along to a tune, I do not know why it moves me emotionally, making me feel joy or sadness the way it does; I just know that it does. So often we just follow instructions or training that we have been given or have learnt, trusting that it will all work out as we hope. But how many of us who have tried have not got a lumpy white sauce now and again?
Computing technology is just making the benefits and threats of this particular method of doing things more apparent. It is troubling that a specific technology may be blamed for problems associated with the use of algorithms instead of the concepts that have been embodied in them. Computers work the way they do because they are based upon just one kind of human thought – the use of algorithms of one sort or another. There are alternative ways of thinking and working, and we need to remember and use them.
What should be understood is that although a sense of awe, and possibly fear, about what can be achieved with computer technology might be appropriate and no doubt needs to be dealt with, we also need to realise that the way humans make use of algorithms, in whatever context, can come with problems too.
In short, we need to be sure of what should be feared, what should be admired, and why. Algorithms enable humans to do things that are awesome, but also things that are awe-ful.
Humans are not digital computers
It is possible that some readers, if they have got this far, are fuming at the inaccuracy of my comparison of the techniques used by humans to govern their activities with the algorithms in programs used to control digital computers, because digital computers and people are different. Before continuing my argument I would like to try to calm any possible indignation. You will already know that there were computers before mechanical, analog or digital computers – people who computed were called computers. This was alluded to above when I referred to the etymology of the word algorithm, so I shouldn’t need to remind you what the man whose name has been mapped onto a mind-boggling range of ways of achieving goals in digital computing was aiming to do when he recorded instructions on how to calculate.
Enough of al-Khwarizmi. The purpose of my post is not to suggest that humans are computers, though they can sometimes compute. I want to draw attention to the idea that the use of algorithms in computers is founded on the use of algorithms by humans. Computers are sometimes like humans because they were designed to apply human ways of thinking, including step-by-step processes. Human ways of thinking, however, are not always applied in ways that have led to admirable or supportable consequences. We need therefore to be very careful about how we apply and interpret the uses of computer technology. Not because it is a technology, but because of the human concepts physically embodied in it.
Methods appropriate for managing digital computers may not suit humans
Ironically, a fear of unknown processes determining computer behaviour is similar to the fear of human psychological behaviour – a fear that leads some people to want to govern fellow humans in a way that treats them as if they were machines.
If you learn a bit of computer programming, a little about how a digital computer actually works, and how algorithms work in computers, you realise that the methodologies used to address the problems faced in writing computer algorithms are not necessarily the best way to address human issues, because those problems are caused by aspects of the technology that are different from humans. You don’t need to understand how computers work at a fully comprehensive level – that would take up too much of your time, which would be better spent doing other stuff, like cooking or playing music, taking part in sports and games, caring for others, making works of art and so on – but if you just grasp the basics you get an insight into just how flawed it might be to apply techniques used to get computers to function usefully to the organisation and management of humans.
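As a small illustration of the kind of machine-specific problem I mean (the example is entirely my own, not drawn from any real system), consider how a program must spell out checks that a person would handle with unthinking discretion:

```python
# A human sharing out a meal copes with awkward cases by improvising;
# a program must be told explicitly what to do in every case, or it
# fails. This function is purely illustrative.

def divide_portions(total_grams, guests):
    # A computer needs the edge case stated; a person just "knows"
    # you cannot share a meal among zero guests.
    if guests == 0:
        raise ValueError("cannot share a meal among zero guests")
    return total_grams / guests

print(divide_portions(600, 4))  # 150.0 grams each
```

The explicit guard against dividing by zero is a sensible discipline for a machine, but imagine managing people by demanding that every conceivable case be enumerated in advance – that is the mismatch I am gesturing at.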
As well as learning a tiny amount about computers and how to program them, it is also essential to think more thoroughly about how we choose to use what computing technologies have to offer. This includes exploring why, as humans, we are so prone to following rules and fitting in with cultural expectations, and so unwilling to use new guidelines (or algorithms) rather than the ones we are used to. For example, why do we ignore evidence that contradicts what we already believe?
A question of rituals and art
Let’s take a break from algorithms and computers and consider human rituals. A ritual is a sequence of actions intended to achieve a particular end. I got into serious trouble at art college once by suggesting that my tutors were taking part in a ritual when they assessed my work. At the time I was exploring how performance art fitted into the history of human ritual practices but it seemed that my tutors had little interest in considering the way that I was approaching it. The thing that fascinated me at the time was how particular courses of action in everyday life, when repeated often enough, could become like a ritual performance that might be observed for its aesthetic qualities.
We seemed to think so little about the aesthetics of how and why we act the way we do. More to the point, I wondered, why do people value some repeated activities more than others, making some so important that they are willing to reprimand or even kill other people who do not perform them and conform to particular rules when doing so? As I was studying art at the time, my particular questions revolved around what constituted art. Why were some works considered better than others – and how did the activities of critical engagement contribute to our ability to evaluate the relative quality of art works?
I theorised that when we make art of any kind for people to experience we want their perceptions and thoughts about the work to build a kind of sculpture in the mind by changing or enhancing neural pathways in the brain. The medium chosen by the artist is used to cause experiences that will effectively sculpt the mind so that the spectator or audience is led to have a more or less specific aesthetic experience. My conjecture was that the material form of the work of art was only important in as much as it managed to achieve the necessary mental changes. The challenge was to work out what changes, in what medium were necessary to cause the desired effect.
A glitch in this algorithm
Oh dear, I fear I may be drifting off topic. But I think it is a clue to the issue that concerns me and I shall return to it later. It can get confusing if banana elf scrimbongle doodlywibble dupdup noodlewarble.
See what I did then? I introduced a deliberate error in the code of this text-algorithm.
Interaction and participation lead to new questions
Let’s recapitulate. Humans can sometimes achieve things by following rules. They have therefore built machines so they can also achieve things by following rules. Following some rules can lead to mental changes that lead to aesthetic experiences.
A common goal in making art has been to create illusions. Regularly people willingly suspend their disbelief, as Coleridge put it, and go along with the artist – in his case, the poet. People participate in realising a work by accepting the proposal that something is different than it might otherwise seem. They have to interpret the work presented to them.
To engage people knowingly in the realisation of my art I started to make pieces that were intended to be manipulated and used to make compositions, with the idea that it might lead people to become part of spontaneous ritual-like art performances. Later I began to use computer graphics and felt that interactive computer-based pieces would be much more likely to engage people in participatory works. It also became possible for me to develop work derived from an interest in the aesthetics of social interaction that had begun when I learnt about body language from the work of Desmond Morris.

Among my first thoughts when I had the chance to try computer programming was that, as the logic of computers and programming had been invented by humans, by learning how to program I would learn more about the logic followed by humans, and perhaps about how we decide what is art or not. As a result of learning how to program and trying to make art using systems based upon observing the patterns of social behaviour of animals and humans, as well as being a participant in the recent changes in the management of the rituals of academic life, I have come to develop new questions and make connections between human and machine algorithms. I have been concerned, like so many others, about the way that computer technology is having a profound effect on our society at large. My opinion is that algorithms appropriate for machines are being applied to humans. I also believe that this is being misinterpreted as a new phenomenon. I have a hunch that it has roots in some pre-historic and possibly fundamental human behaviours.
My hypothesis is that basic corralling and animal-management techniques led to a form of data processing, where the data were cattle and the cells were animal folds. I will discuss this a bit more in my next post. But because computing technology has been used more recently to embody just some human thinking methodologies – those related to calculation and data processing – the techniques used to get programs (software or apps, call them what you will) to work have gained an importance that makes them seem more significant than other ways of thinking.
The fact that algorithms exist in digital computers is attracting attention; with luck it should lead to a deeper, potentially more useful discussion about the way that humans have used algorithms in a broader sense for millennia.
Hearing on the news this morning about the plan to build the new Schwarzman Centre for the humanities at Oxford University, including an Institute for Ethics in AI, makes me think that perhaps the need for the humanities as well as the STEM subjects has not been forgotten. As mentioned in a previous post, it is all very well to try to ensure that potential students will continue to engage with unpopular subjects, but not to the exclusion of other important ones. From an article in The Guardian it seems that, to some extent, Mr Schwarzman agrees that STEM subjects should not be the sole focus of investment.
To concentrate on the humanities alone would also be too limiting, but it is encouraging to see that they are not being neglected. Now, which areas that deserve support are being forgotten, I wonder?
Previous related entry STEM > STEAM > SHTEAM