The Decision Dilemma: Embracing AI in an Imperfect World

What are the consequences of a future where AI agents influence our decisions or autonomously make decisions for us?

“Everything you want is on the other side of fear”.

[Jack Canfield]

Human decision-making is flawed. Factors contributing to the inherent flaws in human decision-making include cognitive biases, emotional influences, and limitations in our ability to process information at scale and speed.

Since the release of ChatGPT, the proliferation and mass adoption of AI in society, businesses, and amongst consumers are accelerating at an unprecedented pace and have triggered an intense arms race among companies worldwide to develop and deploy AI technologies. Tech giants, startups, and corporations are investing heavily in AI research, talent acquisition, and product development to gain a competitive edge.

Alongside the rapid expansion of AI adoption, concerns are escalating regarding ethical implications, privacy issues, and potential job displacement.

This article discusses:

  1. The downside of not embracing AI technology to augment human decisions.

  2. Concerns raised about allowing AI agents to autonomously make decisions on behalf of humans.

  3. Working on both ends of the equation — imperfect AI models and an imperfect world.

Generated using Microsoft AI Designer

The downside of not embracing AI technology to augment human decisions.

The downside of not embracing AI is significant. Human decision-making is flawed, which means we make errors that have serious short- and long-term consequences and, in some cases, can cost human lives. Moreover, humans lack the computing power to tackle some of the complex problems and biggest challenges we currently face, such as healthcare optimisation, climate change mitigation, disaster response, and economic development, all areas where AI can add value.

Image from iStock

We humans are prone to biases when making decisions: loss aversion; a tendency to favour options that are easier to recall or envision; a high likelihood of being influenced by framing, such as the choice of words; anchoring, where the first piece of information we receive skews our judgment of everything that follows; and a pull towards the familiar, allowing our past decisions to influence future ones.

What if we could reduce the likelihood of regrets over hasty and uninformed decisions? What if we could access information to at least make evidence-based decisions? Or get a better sense of the likelihood of a range of predicted outcomes to weigh out the scenarios that could unfold with costs, benefits and risks?

Decisions are the keys to unlocking how our life pans out. To some extent, certain decisions made by others are out of our control and can heavily influence the development of each of our individual stories, but we still have the power to decide how we react to what happens to us, how we experience it, how we learn from it, and how we move on from it.

Decisions are pervasive in life and many decisions have a significant impact on our lives. Our choices on where and what to study, what career to pursue, where to live and what house to buy, who to marry and whether to have children are big and important decisions.

There is also a constant stream of micro-decisions, everyday choices, which seem to be proliferating in a global and digital marketplace, such that, as discussed in The Paradox of Choice, the abundance of options now available demands considerably more effort and leaves us feeling unsatisfied with our choices.

AI offers efficiency, scalability and objectivity in decision-making and is already proving its value in aiding decisions in healthcare diagnosis and treatment, financial trading and investment and smart manufacturing and supply chain management.

There is, however, an argument to be made for why humans are better decision-makers than current AI models and products.

What if future AI agents could extend their current capabilities to meet these human attributes?

Attributes such as contextual understanding: the ability to grasp complex contexts, nuances, and subtleties that AI may struggle to comprehend.

Or the ability to incorporate diverse factors, such as emotions, social cues, and ethical considerations, into judgments.

Humans also have emotional intelligence, enabling us to empathise, understand, and respond to emotions in ourselves and others. Emotional awareness plays a crucial role in decision-making involving interpersonal relationships or moral dilemmas.

Humans possess common sense and intuition, which often guides our decision-making in uncertain or ambiguous situations, where explicit reasoning may be inadequate.

Of significance, human decision-makers are capable of ethical and moral reasoning, considering principles of right and wrong, fairness, and justice.

Because current AI models are not yet sophisticated or advanced enough to emulate all of the above human attributes, humans have a sense of pride and comfort in the notion that we possess a special ability to navigate complex situations, make ethical choices, and understand subtle nuances that machines cannot.

While some people believe AGI is within arm’s reach, others choose to believe that humans have unique attributes that AI and machines can never emulate, a possibly romanticised view of human decision-making, especially in domains where human judgment is valued, such as art, literature, ethics, and interpersonal relationships.

In the 2023 American spy action thriller film “Heart of Stone,” director Tom Harper explores the complexities and consequences of both human and AI decision-making and actions.

Throughout the film, there is tension between the capabilities of human agents, who possess intuition, emotions, and moral judgment, and the potential power of AI systems, which can process vast amounts of data and make calculated decisions.

While the film highlights the strengths and limitations of both humans and machines in decision-making, it is ultimately about the importance of integrity, loyalty, and the pursuit of justice in navigating a morally complex world.

So what are the possible solutions? We could decrease the number of options, attempting to reverse the trend of capitalism and growth that promotes creating more and more of everything. Or we could improve our decision-making by applying science: for example, following a sequence of steps inspired by algorithms and using the ‘37% rule’ to make the best decision within a set amount of time. Or we could leverage AI tools as they come onto the market to act as predictive agents that recommend or automatically make decisions for us.
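The ‘37% rule’ mentioned above is the optimal-stopping strategy from mathematics: review roughly the first 37% of options without committing, then take the first later option that beats everything seen so far. A minimal sketch, with an illustrative function name and simulated data rather than any real decision scenario:

```python
import random

def best_choice_37(options):
    """Apply the 37% rule: observe the first ~37% of options without
    committing, then pick the first later option that beats everything
    seen so far. `options` holds numeric scores in the order
    encountered (higher is better); returns the chosen index."""
    n = len(options)
    cutoff = int(n * 0.37)  # ~n/e options used purely for calibration
    benchmark = max(options[:cutoff], default=float("-inf"))
    for i in range(cutoff, n):
        if options[i] > benchmark:
            return i
    return n - 1  # nothing beat the benchmark; settle for the last option

# Simulate: how often does the rule land on the single best option?
random.seed(42)
trials = 10_000
wins = 0
for _ in range(trials):
    scores = random.sample(range(1000), 100)  # 100 distinct options, random order
    if scores[best_choice_37(scores)] == max(scores):
        wins += 1
print(f"Picked the overall best in {wins / trials:.0%} of trials")
```

In theory the strategy finds the single best option about 37% of the time, far better than the 1% a random pick would achieve over 100 options, which is the sense in which it makes "the best decision within a set amount of time".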

Herein lies the promise of transformative personal AI agents — artificial intelligence systems designed to assist people with everyday tasks like writing an email, scheduling a doctor’s appointment, or choosing a holiday travel destination.

Image from iStock

AI promises to provide us with better decision-making powers and overcome our innate biases.

How well AI performs this important responsibility depends on how well we humans design AI and put its capabilities to use.

Concerns raised about allowing AI to make decisions for humans

Allowing AI to make autonomous decisions for humans is a complex issue with various considerations.

What will these AI agents be optimised to achieve? What will be the incentives, and who will stand to benefit? How will we manage multiple AI agents battling it out amongst each other to arrive at the best endpoint? How do we regulate and contain these complex systems and tools to avoid nefarious outcomes? And finally, if these AI models can generate new outputs and decisions that were never part of their historical training data, how will we humans understand or control these new digital agents?

Several prominent leaders and experts have raised concerns about the implications of allowing AI to make decisions for humans, particularly in areas such as ethics, accountability, and societal impact.

Warren Buffett, the renowned investor and chairman of Berkshire Hathaway, warns of the potential dangers of AI, comparing its impact to that of the atomic bomb (ref). He expressed deep concern about the threat of AI in supercharging fraud due to its ability to create convincing images and videos, making scams harder to detect.

Elon Musk, the CEO of Tesla and SpaceX has been vocal about his concerns regarding AI. Musk has warned about the potential risks of uncontrolled AI development, comparing it to “summoning the demon.” He has called for proactive regulation and oversight to ensure AI remains aligned with human values and interests (ref).

The late theoretical physicist Stephen Hawking also expressed concerns about the future of AI, warning that it could potentially spell the end of humanity if not properly managed. He cautioned against the unchecked development of autonomous AI systems and emphasised the importance of ethical considerations in AI research and deployment (ref).

Bill Gates, the co-founder of Microsoft, has highlighted the need for careful regulation and governance of AI technologies and the importance of addressing ethical and societal challenges, such as job displacement and privacy concerns (ref).

Mustafa Suleyman, a British entrepreneur and Microsoft AI CEO also expresses concern over an AI-driven future in his recent book: “The Coming Wave” and recent TED2024 presentation. Using the term ‘containment,’ he proposes several solutions to ensure that AI systems remain under human control and are used responsibly.

Interestingly, Suleyman frames AI not as a tool, despite humanity’s long history of developing and using tools to aid our productivity and growth, but as an entirely new species: a “digital species”.

In a recent conversation between Mustafa Suleyman and Yuval Noah Harari on the implications of the AI revolution, Harari raises questions about the impact of AI on human society, including issues related to inequality, power dynamics, and the future of work (ref).

Suleyman, one of the leading innovators of AI, emphasises the importance of ethical AI design and the need for transparency, accountability, and human oversight in AI systems.

Suleyman’s advocacy for a collaborative approach involving policymakers, technologists, and other stakeholders to address the ethical and societal implications of AI is questioned by Harari, who is sceptical that this will happen.

Interestingly, when Suleyman draws a picture of what the near future will look like: “I predict that we’ll come to see them as digital companions, new partners in the journeys of all our lives,” Harari retorts that for him this would signal the end of human-dominated history.

So, what are the risks of allowing AI to become our decision oracles?

  1. Unintended Consequences: Autonomous AI agents may make decisions based solely on the data they are trained on, leading to unintended consequences or unforeseen outcomes that could harm individuals or society.

  2. Conflict with Human Values: AI agents may not always prioritise ethical considerations or human values in their decision-making, potentially resulting in actions that conflict with societal norms or human rights. They can also perpetuate biases present in their training data, leading to discriminatory outcomes or unfair treatment of certain individuals or groups.

  3. Lack of Accountability: It can be challenging to assign accountability and responsibility when autonomous AI agents make decisions, especially in cases of errors, accidents, or ethical dilemmas.

  4. Security Vulnerabilities: Autonomous AI agents may be susceptible to cyberattacks, hacking, or manipulation, posing risks to data privacy, cybersecurity, and overall system integrity.

  5. Loss of Human Control: Allowing autonomous AI agents to make decisions without human oversight or intervention can lead to a loss of human control over critical processes, potentially undermining transparency, trust, and accountability.

The scariest and most dire of these possible consequences is that autonomous AI agents could exhibit unpredictable and harmful behaviours, making it difficult to anticipate or mitigate potential destruction and dystopia.

Working on both ends of the equation — imperfect AI models and an imperfect world.

As discussed in the debate between Suleyman and Harari, AI technologies are not autonomous by default; no technology is deterministic. If AI is allowed to become autonomous, it will be because humans projected agency into these models.

When it comes to artificial intelligence, what are we intentionally creating?

The biggest decision facing humanity is how we want to build and use AI.

Today, AI is capable of generating new content and creating new ideas; the next frontier is for these AI technologies to act on those ideas.

As pointed out by Harari, allowing AI to make autonomous decisions would mark the first time in history that a technology, rather than its human operators, makes decisions. In the past, humans remained in control of technologies and were the decision-makers. This calls for a thoughtful and inclusive approach to AI development that prioritises human well-being.

On the upside, there are growing efforts to curtail the risks and harms of AI in human decision-making through regulation, education and public discourse.

The multi-faceted approach being taken includes an arsenal of mechanisms:

  1. Safety by design.

  2. Improving the transparency and explainability of AI models.

  3. Developing tools and systems to detect and mitigate bias, including algorithmic auditing, diverse training data sets, and fairness-aware machine learning techniques.

  4. Enforcing human oversight and control.

  5. Establishing clear regulatory frameworks and standards for the responsible development and deployment of AI technologies.

  6. Fostering collaboration between technologists, policymakers, ethicists, and other stakeholders to address the complex challenges posed by AI and to develop solutions that balance innovation with societal values.

  7. Promoting education and awareness initiatives to increase public understanding of AI technologies.
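To make one of these mechanisms concrete, a basic algorithmic audit can check demographic parity: whether a model approves members of different groups at similar rates. The function and the loan-approval outcomes below are hypothetical, a minimal sketch rather than a production auditing tool:

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Compute the demographic parity gap: the largest difference in
    approval rates between any two groups. `decisions` is a list of
    (group, approved) pairs; a gap near 0 suggests similar treatment."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical outputs from a loan-approval model for two groups
outcomes = ([("A", True)] * 80 + [("A", False)] * 20
            + [("B", True)] * 50 + [("B", False)] * 50)
gap, rates = demographic_parity_gap(outcomes)
print(f"approval rates: {rates}, gap: {gap:.2f}")  # gap of 0.30: group A approved far more often
```

Audits like this only flag a disparity; deciding whether it reflects unfair bias, and then correcting it with diverse training data or fairness-aware learning, still requires human judgment.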

However, with all of these great initiatives to ‘contain’ the risks of an AI-driven future, an essential preventive measure is not receiving adequate attention and investment: remediating the imperfections in the world into which we are bringing AI, our new digital species.

We should invest in developing AI because of the immense benefits it promises to bring to humanity, and hence we must invest in developing tech with a conscience: safe AI by design, plus mechanisms (technical, political, legislative, regulatory, etc.) to reduce risks and harms.

Importantly, we must invest equally and perhaps more so in improving the imperfect world that AI tools and products are entering, this includes reducing political tensions, ending wars, and addressing the collapse of trust, integrity and communication.

Ultimately, the decision to let AI agents make decisions for humans should be approached with caution. It is more appropriate to use AI as a tool to assist humans in decision-making rather than fully replacing human judgment.

While we continue to focus on perfecting AI (the pearl), we cannot forget the importance of constantly improving the oyster in which it will be planted: our imperfect world.

Thanks for reading!
