Designing humans into an AI future
AI generated art from Hotpot.ai
Technology has been part of our lives for a very long time, and with every generation there is a dance between embracing and resisting the changes that accompany new tech. On one side sits the nodding acknowledgement that it is normal, a sign of the times: that new technology brings progress, celebrates human ingenuity and has the potential to improve human lives. Alongside this embrace come the emotions of fear, the hype cycles and the slow unveiling of what the new technology can actually deliver: how it will affect culture, politics, climate and society; how it changes human jobs and how we as humans interact with one another, in ways both positive and negative.
With each of these technological advances, we have learned how to design the new arrivals into our lives and social fabric in ways that are not purely about productivity and efficiency but about experience. We don't just buy cars that are efficient; we buy cars that are fast and beautiful. We don't just buy phones that are cheap and functional; we buy phones that are beautifully designed and carry social status.
Technology is all about humans: what we like, what we want and what we need. So what happens when technology replaces us? What happens when it is not just about designing technology for human consumption but about technology consuming humans? Do we need to design humans into future technology when we are no longer needed for productivity's or efficiency's sake but purely for the sake of the experience?
I see three reasons why humans will prevail and continue to play a role in certain tasks and jobs, even when technology exists that can automate those activities or replace us:
Humans step in when technology goes awry
Humans remain relevant as an experiential factor (form more than function)
Humans perform the task or job for self-serving purposes
Humans step in when technology goes awry
We have all been in a situation where technology goes wrong or is designed poorly: that moment of intense frustration, of hours wasted trying to resolve a technical issue or get hold of a human to resolve it for us. Human-centred design focuses on making technology intuitive, inclusive and enjoyable for humans to use. Well-designed technology is simple and immediate to understand, yielding higher adoption, better usability and an improved customer experience. We know from past successes that technology is not designed purely for efficiency and productivity's sake; it is designed for an enhanced experience. Well-designed technology takes away the messiness of dealing with other humans when you need to get something done. It can be self-empowering, giving us agency and ease and allowing us to complete tasks on our own terms and in our own time.
When technology is not designed well, all of this falls apart and suddenly we are desperate for a human to step in and get us out of the mess. The humans who do are important not just as solutionists but as therapists, managing a dire situation and an upset customer. Human interventionists when technology goes awry are crisis managers, and they therefore need immense emotional resilience. With increased digitisation and automation, customer service representatives will be left with only the more complex, taxing issues to solve for customers.
As organisations elevate the use of technology and AI from automating simple tasks to delegating decision-making power to AI-based algorithmic management tools, the design of these tools will become highly critical. Examples of tasks traditionally performed by human managers that algorithmic management tools may cover include: hiring employees (from CV selection to automation of the hiring process), optimising the labour process (through the tracking of worker movements, for instance GPS tracking or route optimisation in transport and logistics), evaluating workers (through rating systems), and automatically scheduling shifts and matching customer demand with service providers (see Duggan et al. 2020 for a thorough review).
Adequate testing and development of these tools will be needed to avoid harms. Hilke Schellmann's new book, The Algorithm, explores how AI and complex algorithms are increasingly being used to help hire employees and then to monitor and evaluate them, including for firing and promotion (see article on her book here). Based on her examination of the tools currently on the market, they fall short of doing a good job at this, resting on pseudo-science and poorly designed algorithms that lead to biased and illogical recommendations.
From this point of view, human surveyors, interventionists and solutionists will be needed to ensure that algorithmic management tools do not breach privacy or perpetuate societal biases, and that they adhere to transparency standards, ethical frameworks and appropriate governance measures.
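To make that auditing role concrete, here is a minimal sketch of one check a human overseer might run over a screening tool's output. It is purely illustrative: the group labels and numbers are made up, and the 0.8 threshold loosely follows the "four-fifths rule" used in US adverse-impact analysis rather than anything prescribed by the tools discussed above.

```python
from collections import defaultdict

def adverse_impact_ratios(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """For each group, compute its selection rate relative to the
    highest-rate group. Ratios well below ~0.8 are a common red flag
    (the 'four-fifths rule') that warrants human review of the tool."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, advanced in decisions:
        totals[group] += 1
        selected[group] += int(advanced)
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical output of an automated CV screener:
outcomes = ([("group_a", True)] * 40 + [("group_a", False)] * 60
            + [("group_b", True)] * 20 + [("group_b", False)] * 80)
ratios = adverse_impact_ratios(outcomes)
print(ratios)                                     # {'group_a': 1.0, 'group_b': 0.5}
print([g for g, r in ratios.items() if r < 0.8])  # ['group_b'] -> needs human review
```

A real audit would of course look at far more than selection rates, but even a toy check like this shows why humans must be able to inspect what these tools actually decide.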
The development, monitoring and ‘policing’ of an AI future is one role that humans should play.
Humans remain relevant as an experiential factor
Like all disruptions, we are on the learning curve. New AI tools are changing our world and our lives faster than governments and regulation can keep up, faster than individuals have the capability, appetite or resilience to adapt, and faster than we have time to ponder and reflect on what we want this new world to look like and how it should be designed. What are the rules of engagement, the principles and the values?
AI art generated from Hotpot.ai
As much as the digital era and AI bring excitement and lots of fun, shiny new tools, there is also the reality of their impacts on society. There are already concerns about the negative effects of social media on younger generations, and evidence from experts of soaring numbers of people suffering from loneliness: a new epidemic working its way into our culture and society. "Human beings are a social species and connection with others is a primary need for us. If we lack human connection this can have a negative impact on our well-being very quickly" (see here).
As businesses focus on customer experience and on creating magical moments, technology and AI will be critical for influencing people, understanding their needs and preferences and connecting their lives in their entirety: linking data on our health, finances, education, employment and entertainment in ways that make our experiences with organisations appear seamless and enable those organisations to guide and facilitate our decision-making.
There will be instances where we still value humans for creating experience and ambience. A restaurant, hospital or school without the smiles and presence of humans puttering about may feel too clinical, cold and lonely. In this sense, humans may play the role of ushers or companions to customers and visitors, providing the human connection, empathy and humour that is part of our essence, our being and our cultures.
As tools such as GenAI have become readily available, there have been interesting perspectives on the use of these technologies and their displacement of humans in creative roles. There is the argument that just because 'we can' does not mean 'we should'. The outcome of the New York lawsuits, which argue that widely commercialised products made by OpenAI and its business partner Microsoft break copyright and fair-competition laws, will test the future of ChatGPT and other AI products that rely on foundational models created by ingesting huge amounts of copyrighted human work.
The fascinating part of technology is how it can sometimes enjoy a revival or become retro, demonstrating that humans develop a sentimentality towards technology and its role in our lives. Examples include the record player, the disposable camera, the flip phone and even non-technology products like printed books. To cater to an older cohort of customers, Uber recently announced plans to launch a phone-booking ride service, in what a top executive described as a "bit of a back-to-the-future moment" (see here).
Society and humans decide what to adopt and what to hijack. Decisions about which technologies to embed in society are based on what we want our society to be like. There are a few obvious reasons for preserving humans in these creative roles, especially as AI is an emerging technology that still has flaws, is not sustainable given its immense demands on resources like energy and water, and relies on ingesting existing human content to create new content. For all these reasons, while AI has been let out into the wild and holds great promise, including medical discoveries that can save lives, it is too soon to throw the baby out with the bathwater, so to speak.
There is a tension between wanting AI and technology to make our lives easier, more predictive and more efficient, and not wanting AI to take over what we deem the 'essence' of what it means to be human. A tension between the qualities humans love about perfection and those we love about imperfection. A fear that human imperfections, which inspire and help us to grow and change, might become less available if we rely too heavily on AI, or are not thoughtful enough about how we bring a technological tool into a social and cultural revolution.
Take the music industry as an example. The latest Trolls film is essentially about a journey of discovery for four brothers trying to achieve "perfect harmony" with their vocals. Within this narrative are references to the rampant use of Auto-Tune to change the pitch of vocal performances, and to the ongoing debate over what all of this means for artists' integrity and for our ability to distinguish genuine vocal talent from performance artistry.
Auto-Tune, an audio processor introduced in 1997, was "originally intended to disguise or correct off-key inaccuracies, allowing vocal tracks to be perfectly tuned." Its significance to the recent Trolls film is that the Trolls embody Auto-Tune: the Trolls' vocals are created using the distorted, exaggerated Auto-Tune effect that first came to mainstream attention through Cher's 1998 song "Believe". Moreover, the film's plot centres on two talentless singers who use a Troll to extract his 'essence' and improve their singing. Using the Troll's 'essence' to improve other singers' vocals is slowly killing the Troll, an allusion to the idea that using Auto-Tune to correct off-key inaccuracies is killing the music industry.
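For readers curious about the mechanics, the core move of pitch correction is simple to sketch: detect the frequency a vocalist actually sang, then pull it to the nearest 'legal' note. The snippet below is a toy illustration of that quantisation step only, assuming standard equal temperament with A4 = 440 Hz; real Auto-Tune does this continuously and far more gracefully, preserving vibrato and note transitions.

```python
import math

A4 = 440.0  # reference pitch in Hz (standard concert tuning)
NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def snap_to_nearest_semitone(freq_hz: float) -> tuple[str, float]:
    """Quantise a detected frequency to the nearest equal-temperament pitch:
    the basic 'off-key in, in-tune out' move behind pitch correction."""
    # Signed distance from A4 in semitones (12 per octave, log scale),
    # rounded to the nearest whole semitone.
    semitones = round(12 * math.log2(freq_hz / A4))
    corrected = A4 * 2 ** (semitones / 12)
    return NOTE_NAMES[semitones % 12], corrected

# A vocalist sings slightly flat of A4 (440 Hz):
print(snap_to_nearest_semitone(432.0))  # ('A', 440.0)
```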
The disruption of AI into the arts today harkens back to the Buggles' famous song "Video Killed the Radio Star", a poignant song about a radio star whose career is ended by the advent of television and music video. It is a nostalgic lament for an earlier age when singers were heard on the radio and were not required to be flashy and attractive; they just needed a good voice to be popular.
The argument that AI cannot compete with the essence of what makes us human is lately being drowned out by news stories of massive layoffs (see here) and of AI causing major disruption to the movie industry (see here). The pace at which startups are leveraging these tools is staggering, a clear signal that the genie is out of the bottle. That said, the artistic community is fighting back. Don't we love the tenacity of humans? Examples include Nightshade, "a new tool [that] lets artists add invisible changes to the pixels in their art before they upload it online so that if it's scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways" (reference here); recent research on voice cloning that makes it easy to determine the authenticity of an audio clip (read about the research here); and a company called Paravision, which has recently launched an AI-powered deepfake detection tool for fraud and misinformation protection (see here).
The decision to use AI tools comes down to three factors: purpose, access and application.
a. Purpose
What is the tool needed for? Is it for research and discovery in climate, new energy solutions or medical breakthroughs? What are the use cases for using AI to enhance learning, and where should the tools be avoided, such as when testing someone's capability and knowledge, which requires removing access to tools and technology? Examples of the latter arise in academic and employee-recruitment settings. There is already a market for mass-produced products alongside hand-crafted or non-factory products, each appealing to different audiences and market segments. Would AI-created versus human-created forms of art simply need to be transparently marked as such to compete in these markets? How would the business models work, given that AI significantly reduces the time and resources needed to produce outputs?
b. Access
Open source has created a level playing field with equal access. While open source has disrupted large industries such as film and music, the release of these tools into the public domain has also democratised data and tools and removed barriers to entry. If data and tools are not easily accessible, or if there are biases in who gets access, then the augmentation of human capabilities with AI and technology could create large disparities in wealth and health.
c. Application
The application of these tools is probably the most important aspect of deciding when and how they get used. While many of the people and industries disrupted by AI may not be happy about it, no one can argue with the anger and fear that arise when AI tools are used to harm others: news stories of online harms to children on social media platforms (see here), of deepfakes and AI porn targeting women (see here), and of the negative impacts of fake news on democratic elections (see here). The influx of fake news, misinformation and bot-created content flooding the internet has many concerned that we are eroding knowledge, truth and, importantly, trust. If the internet was used to build large language models such as ChatGPT, where will new content come from for future foundational models, and will those models even be valuable if the old adage holds: garbage in, garbage out?
Humans perform roles for self-serving purposes
The third and final way I can see humans designing a role for themselves into the future of technology and AI is for self-serving purposes. AI has reinvigorated humanity's existential crisis: what is the purpose of our lives? If AI takes your job, what will define you and the value or mark you leave on earth? Journalists, notably Perry Bacon, have recently blogged that journalism will become a public service in the future: "Can journalism survive with this public-interest mind-set? …with the decline of clear for-profit models, this feels like the only path left" (see here).
Arguments for and against the importance of work have a long history; each generation has wondered about work and its importance under constantly evolving technological, economic, social and political conditions. AI innovations are driven by the capitalist imperative. "Digital innovation has been driven to a significant extent by the attempts at winning the economic competition and increasing profit through the usual methods that are used in capitalistic economics. AI is seen by advocates of capitalism as ushering in a new, more agile and productive iteration of the system" (Artificial intelligence and work: a critical review of recent research from the social sciences).
AI is also influenced by nationalistic pressures. “Underneath economic and military competition, an ideological battle is underfoot. Beyond any cynical take on the influence values might actually exert over the development of AI for world-powers and corporations fighting for hegemony, there might also be an ideological battle underway between, say, a liberal-capitalist, a social-democratic and a communist understanding of ‘good AI'” (see here).
Many economists and technology experts contend that AI will substitute for human work at such a scale that socio-economic structures will be shaken to their foundations. This is a major aspect of today's debates on the centrality of work, and often the opening argument for "post-work" models of social organisation. As Andy Beckett's article states so eloquently: "Work is the master of the modern world. For most people, it is impossible to imagine society without it. It dominates and pervades everyday life… Corporate superstars show off their epic work schedules. 'Hard-working families' are idealised by politicians. Friends pitch each other business ideas. Tech companies persuade their employees that round-the-clock work is play. Gig economy companies claim that round-the-clock work is freedom. Workers commute further, strike less, retire later. Digital technology lets work invade leisure." And yet "work is not working": "The growth of productivity, or the value of what is produced per hour worked, is slowing across the rich world — despite the constant measurement of employee performance and intensification of work routines that makes more and more jobs barely tolerable" (see here). Work is proving bad for people's health through stress and burnout.
The idea of a world freed from work is not new; the promise of less work has repeatedly featured in visions of the future. The arguments of post-workists seem to be gaining momentum amid heightened narratives of automation and the loss of human jobs. Even the concept of a Universal Basic Income, which at some points in history may have seemed absurd or outlandish, is entering serious corridors of conversation.
In reading Beckett's article, what I found most interesting is that work as we know it is a recent construct. Many historians trace our work culture back to "16th-century Protestantism, which saw effortful labour as leading to a good afterlife; 19th-century industrial capitalism, which required disciplined workers and driven entrepreneurs; and the 20th-century desires for consumer goods and self-fulfilment." The idea that the modern work ethic emerged as an "accident of history" can be mind-blowing for those whose life's purpose and self-identity have been shaped by the institution of work and the prevailing culture. Going back before the 16th century brings the realisation that "all cultures thought of work as a means to an end, not an end in itself. From urban ancient Greece to agrarian societies, work was either something to be outsourced to others — often slaves — or something to be done as quickly as possible so that the rest of life could happen."
This changes the narrative of AI disruption. Automation and AI need not be seen as taking away our purpose in life, but as removing a construct we imposed on ourselves to give our lives a sense of purpose. This is reminiscent of the 2020 film "Soul", in which the protagonist realises that the final 'spark' pre-life souls need to enter the world is not finding their 'purpose' but finding their 'desire to live'.
All that said, before we get too excited about AI freeing up our lives for leisure, hobbies and enjoyment, there is still a lingering phase of capitalism in which increasing shareholder value matters more than productivity. "If shareholder value can be ensured through other means than investment in technology, which comes with significant sunk costs, then that path will be chosen. It is possible that with the sophistication of financial tools and the protection of promiscuous taxation schemes, there is no pressing incentive in many industries to invest in technology designed to replace human labour" (see here).
So where does this all leave us now? Perhaps it gives us back some precious time to start planning and deciding on how we want to design humans into a future of technology and AI.
So what are you waiting for? There's no time to sit on our hands. Let's get to work!
Thanks for reading.