Monday, 05 November 2018 13:10

AI and the New Fatalism

Written by Paul Taplin

The rise of artificial intelligence promises rich benefits for society, but there may be a dark side too, and we need to pursue AI with our eyes wide open.

The advent of virtual support entities such as Siri and Alexa illustrates the first signs of a new domestic paradigm, in which you are encouraged to delegate some level of anticipation and decision making to a virtual external agency. Another classic example is the now-normal streetscape of pedestrians almost surgically attached to their cell phones.

Historically, in some cultures it was (and still is) expected that your parents would make decisions for you – about your education, career and marriage. Some cultures have vested decision making with their god, while others have developed a complete set of gods, each representing governance over a particular aspect of life. To a greater or lesser extent, an individual may abdicate personal responsibility in favour of their parents and/or their god.

These forms of fatalism have been practised for millennia. Originally nurtured through animism, they remain animistic in nature, notwithstanding their attachment to any particular religion.

As personal education and wealth have increased, advanced countries with a growing influential middle class have seen this type of fatalism diminish in favour of self-determination or self-actualisation, with individuals throwing off their reliance on someone or something else, and taking decisions and accountability into their own hands.

Interestingly though, accepting total personal accountability can be scary, and the absence of a “guiding hand” may have triggered a higher level of self-doubt and lack of confidence amongst many people in “advanced” middle classes. Compare the heavy demands of personal accountability with the apparent safety of committing oneself to unquestioning familial or religious obeisance!


Our experimentation with self-actualisation has delivered mixed results, and for those who are finding it tough going, along come helpers such as Siri and Alexa. These low-level domestic AI entities can use machine learning to anticipate one’s basic needs and organise things for you. And that’s just the start!

As these entities progress in functionality over the next few years, we may see the rise of a new type of fatalism – “in Siri we trust”; “praise be to Alexa”! Advanced AI entities based on Siri and Alexa prototypes may become as omnipotent and omniscient as previous religious deities, and people may once again adopt fatalism as a psychologically safe crutch for life.

At a corporate level, what happens when managers and employees who have grown used to relying on domestic AI support begin accessing powerful corporate AI systems at work? Will their domestic fatalism and lack of self-determination carry over into the workplace?

We already know that in some cultures, employees respect their corporate bosses (and perhaps their tenure) to such an extent that they will not finish their workday until their boss does. We also know that in some cultures corporate seniority is still age and/or privilege based, and it would be unwise to question the opinion of the boss, or argue, or offer an alternative solution.

We may find that corporate AI systems are increasingly seen as offering safe solutions that encourage the manager or employee to avoid the personal risk of applying their own intellect, intuition and experience to solve a problem, in effect mirroring the historical and cultural reverence for age, privilege and deities.

We have already seen a version of personal risk-aversion arise through the over-zealous adoption of quality management, with the ensuing reliance on process documentation to the exclusion of individual intuition, experience and common sense.

The unswerving adoption of corporate AI-generated solutions may then build a culture where an individual’s personal contribution is not welcome, and those “dissenting” individuals will become pariahs in the workplace. Individuals named and shamed in the modern corporate equivalent of public stoning! Perhaps it’s even happening now!

Paul Taplin

As Director of Utilibiz, Paul has been actively involved in strategic project and procurement planning and the pioneering development of collaborative contracts with infrastructure owners, including Project Alliances, Program Alliances, Early Contractor Involvement (ECI) contracts, Long Term Service Arrangements and a range of customised hybrids.
Email paul.taplin@utilibiz.com.au

© ProjectTimes.com 2017
