Optimising for the right outcomes in AI, business and life.

Reaching success is about more than the journey and the execution; it is about defining the right destination. That requires deep knowledge of, and intentional choices about, what it is you really want to achieve.

Sometimes the tasks that seem the simplest and most commonplace are actually the most difficult. Choosing an outcome is one of those tasks. It can seem obvious at first, but when you really peel back the layers and speak with people, it becomes clear that not enough time and critical thinking have been invested in optimising for the right outcomes. The same applies to us as consumers: how well do we understand the outcomes that have been optimised for in the products we use daily and allow to shape and influence our lives and well-being?

Understanding the ultimate endgame of AI products will become more and more important as AI becomes a ubiquitous fixture embedded in our everyday lives and decisions.

Take, for example, the number of people already turning to AI chatbots for therapy. “Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece” (ref).

Setting aside whether an AI chatbot is currently as good as or better than a human therapist, the reality is that these products are already on the market. What is critical now is to provide consumers with sufficient information about the technical makings of these chatbots. Are they tested? What foundation model are they based upon? What is their level of precision? What outcomes are they being trained to achieve?

Some chatbots may have legitimately been designed and tested by medical and health experts, with adequate investment in ensuring unbiased and accurate outputs, and be optimised for the consumer’s well-being and safety. Other products may be more ad hoc, with more investment on the marketing front, less investment in building accurate models, and a design that optimises for return on investment, consumer engagement or the cross-selling of products put forward by advertisers and sponsors.

Educating the public about AI will enable them to make informed decisions. This is even more important for products designed for vulnerable customers. The consumption of misinformation is a slippery slope and, as recent research suggests, it starts with a bit of unhappiness about a current situation and some confusion about what is causing it (ref).

Getting ahead of the curve and foreseeing the possible adverse and nefarious outcomes of some AI products will be essential for maintaining social values and public safety. For businesses leveraging AI capabilities to deliver better customer experiences, being open and transparent about their AI products will build consumer trust and create space for business-to-consumer co-learning to get the right solution in place.

Similar to the food labels on processed consumer goods, we will enter a future in which the design and configuration of AI models will need some level of transparency and communication to the general public to enable consumer safety and choice.

A lack of transparency and labelling that could educate the general public on what goes into making an AI product would be the equivalent of a world in which we allowed companies to sell consumer food products without any labels, health and safety standards or regulation.
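To make the analogy concrete, here is a minimal sketch of what the fields on such an “AI label” might include. The schema and field names below are my own illustrative assumptions, not an existing standard or any particular product’s disclosure.

```python
# A hypothetical "AI product label", loosely analogous to a nutrition label.
# The schema and field names are illustrative assumptions, not an existing standard.
from dataclasses import dataclass, field

@dataclass
class AIProductLabel:
    product_name: str
    intended_use: str                 # e.g. "general wellness support", not clinical care
    foundation_model: str             # which underlying model the product is built on
    optimised_for: list[str]          # the outcomes the product is actually tuned towards
    clinically_tested: bool           # whether health experts evaluated it
    known_limitations: list[str] = field(default_factory=list)

# Example of what a disclosure for a wellness chatbot could look like.
label = AIProductLabel(
    product_name="Example wellness chatbot",
    intended_use="General wellness support, not a substitute for therapy",
    foundation_model="Unspecified third-party LLM",
    optimised_for=["user well-being", "engagement"],
    clinically_tested=False,
    known_limitations=["may produce inaccurate or biased responses"],
)
```

Even a simple disclosure like this would let consumers see at a glance what outcome a product is actually optimised for, and whether well-being or engagement sits at the top of that list.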

We know from experience that the food supply chain has struggled with this trade-off between profits and public health. The production of cheap, fast, processed food optimises for efficiency, productivity and mass production. However, that food is also less nutritious and more addictive, which has contributed to a global obesity epidemic with serious economic consequences for health care systems and society.

The other angle on optimising for the right outcomes is how businesses design these AI products in the first place. Many of the latest products built on LLMs and generative AI rely on an emerging technology that we are still figuring out, and we are still grappling with how we want it to shape our lives and social fabric.

Choosing outcomes for AI models that augment humans in their jobs and work processes rather than replace them is a safe and reasonable way to avoid harm. As Allie K Miller rightly states, the biggest mistake leaders make with AI today is “assuming it’s better than it is.” Treating the optimised output as a directive, an action or an absolute truth without a human in the loop may not be safely achievable today; getting there will require processes designed with stage gates and built-in feedback loops for learning, safety checks and refinement.
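As a minimal sketch of what such a stage gate could look like in practice, the hypothetical function below routes an AI model’s suggestion to a human reviewer whenever the model’s self-reported confidence falls below a threshold. The names and the threshold are illustrative assumptions, not a prescribed implementation.

```python
# A minimal, hypothetical sketch of a human-in-the-loop stage gate.
# ModelSuggestion, request_human_review and the 0.9 threshold are
# illustrative assumptions, not a reference implementation.
from dataclasses import dataclass

@dataclass
class ModelSuggestion:
    text: str          # the AI-generated recommendation
    confidence: float  # the model's self-reported confidence, 0.0 to 1.0

def request_human_review(suggestion: ModelSuggestion) -> str:
    """Placeholder for an escalation step where a person checks the output."""
    print(f"Escalating for human review: {suggestion.text}")
    return suggestion.text  # in practice, a reviewer could edit or reject this

def stage_gate(suggestion: ModelSuggestion, threshold: float = 0.9) -> str:
    """Treat the model's output as a draft, not a directive.

    Low-confidence suggestions are routed to a human before any action is
    taken; high-confidence ones still pass through only as recommendations.
    """
    if suggestion.confidence < threshold:
        return request_human_review(suggestion)
    return suggestion.text

# Example: a low-confidence suggestion is held for review instead of acted on.
draft = ModelSuggestion(text="Adjust the customer's plan", confidence=0.62)
approved = stage_gate(draft)
```

The point of the sketch is the shape of the process, not the specific code: the model proposes, a gate decides whether a human needs to be involved, and the human’s decision feeds back into how the system is refined.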

On a personal level, the introduction of AI into the workforce will lead to a human renaissance in which we recalibrate our own outcomes for optimisation. Perhaps some people who have been very content and fulfilled completing mundane tasks will be less excited about taking up the more complex and demanding jobs in the workflow once AI automates the time-consuming and repetitive ones. Others, who have optimised for work as their source of identity and purpose, may re-skill and re-invent themselves to find new outcomes for optimisation that exist or appear in the new AI era.

As the pace of technological acceleration and transformation continues, many people may suffer burnout and digital overload, leading to a concentration of people seeking health and well-being as their primary outcome for optimisation, while others may decide to ditch the career ladder as the be-all outcome altogether and optimise instead for something outside the work sphere that gives them a different sense of purpose and achievement.

The final possible path reminds me of the 2020 Pixar Animation Studios film ‘Soul’, in which the protagonist realises that the special spark in each human is not their purpose in life or the discovery of what they are meant to do with it; it is their desire to live and enjoy life for what it is. Life has meaning that goes beyond personal ambition. The movie's aim is really to say that we're already enough. "We all can walk out of the door and enjoy life without needing to accomplish or prove anything. And that's really freeing” (ref).

Thanks for reading!
