
The Psychology of Risk and How It Relates to Project Risk Management

Risk and risk management have been around for a very long time. But until recently, they have not been applied well to project risk management. In this article we shall examine the psychology of risk and see how it affects our attitude towards project risk management.

We shall look at the following points individually and then link them to project risk management:

  • What does the term risk mean?
  • Controlled and Uncontrolled risk
  • Perception of risks
  • Framing risk questions
  • Project risk management

When we hear the term risk, we might think of:

  • Doctors who tell us about risk factors
  • Executives/Bankers who do financial risk assessments
  • Insurers who perform risk assessments to determine coverage
  • MBAs who think about upside and downside risk
  • Athletes who think about risking their bodies
  • Actors who think about risking their sense of self
  • Project managers and team members who think about risk to their project

All of the above come to mind without thinking about what the term “risk” really means. In everyday life, we face the issue of controlled and uncontrolled risk. For example: if we choose to snow ski, it is voluntary. We believe that we have control (of the skiing process) and therefore we have control over the level of risk we expose ourselves to. But is this really true? Once we start down the ski slope, are we really that much in control of the outcome? Even expert skiers have serious accidents, some of them fatal. Another example: if we choose to smoke, we do so voluntarily, even though the dangers of smoking have been well documented for some time. However, once the decision is made to smoke, we no longer have control over the risk of contracting lung cancer.

When confronted with risk, most people find they are turned off by it. But this is not a universal feeling; in fact, some people are attracted to risk (bungee jumping, etc.).

Our perception of risk varies widely and can lead us to the wrong conclusion. Consider, for instance, our perception of a rare disease such as botulism compared with a common one like asthma. The fact is that asthma is many times more likely to kill you than botulism, yet we fear botulism more. Another common example is the fear of flying: people who feel flying is too great a risk often opt to travel on the ground. Most studies demonstrate, though, that your chances of being killed in a car accident are about 500 times greater than of being killed in an airplane crash. But the risk seems so much greater when you are several thousand feet in the air and in the hands of someone else – the pilot.

When it comes to risk and probabilities of negative events, people choose their behavior based on the way the risk scenario is framed and not on an actual evaluation of the risk. For example: imagine that the US has two different alternatives to combat a new disease that is expected to kill 900,000 people. If treatment A is implemented and followed, 300,000 people will recover. If treatment B is implemented and followed, there is a 1/3 probability that all 900,000 people will recover and a 2/3 probability that no one will recover. When presented with these two options (A or B), almost 3/4 of the people choose treatment A, even though both have exactly the same expected outcome: 1/3 × 900,000 = 300,000 recoveries. In this instance, people over-value the probability that no one will recover.

Now, let’s frame the same issue in a different way. If treatment program C is implemented and followed, 600,000 people will die. If treatment program D is implemented and followed, there is a 1/3 probability that no one will die and a 2/3 probability that all 900,000 people will die. Only slightly more than 1/5 of the people choose treatment program C when presented the choice, even though both choices again have exactly the same expected outcome: 2/3 × 900,000 = 600,000 deaths. Here the people over-value the probability that 900,000 people will die.
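The equivalence of the two framings can be checked with a quick calculation. The sketch below (in Python, with the treatment labels taken from the scenario above) reads option D's risky outcome as a 2/3 chance that all 900,000 people die, which is the reading under which the expected outcomes of both frames match:

```python
from fractions import Fraction  # exact arithmetic, no floating-point rounding

AT_RISK = 900_000
p = Fraction(1, 3)  # probability of the favorable outcome in B and D

# Gain frame: expected number of recoveries.
expected_recover_A = 300_000                      # certain outcome
expected_recover_B = p * AT_RISK + (1 - p) * 0    # risky outcome

# Loss frame: expected number of deaths.
expected_die_C = 600_000                          # certain outcome
expected_die_D = p * 0 + (1 - p) * AT_RISK        # risky outcome

# Both frames describe the same prospect: 300,000 saved, 600,000 lost.
assert expected_recover_A == expected_recover_B == AT_RISK - expected_die_C
assert expected_die_C == expected_die_D
```

Only the framing differs: A/B describe the prospect in lives saved, C/D describe the identical prospect in lives lost, and preferences flip accordingly.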

In 1738, Daniel Bernoulli described the “best bet” as the one that maximizes the expected utility of a decision. He developed a summation equation (which we shall not repeat here) that calculated “expected utility.” This theory provides a basis for making best bet decisions when risk is involved. We know that it is impossible to make risky decisions that turn out best in every situation, so we regard decisions as gambles and attempt to make the best bet decision.
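As an illustrative sketch (not the historical equation itself, which is omitted above), expected utility sums each outcome's utility weighted by its probability. Bernoulli proposed a logarithmic utility function, under which a certain payoff beats a gamble with the same expected monetary value; the dollar amounts below are hypothetical:

```python
import math

def expected_utility(outcomes, utility=math.log):
    """Sum of probability-weighted utilities over (probability, payoff) pairs."""
    return sum(p * utility(x) for p, x in outcomes)

# Hypothetical gamble: 50% chance of $100, 50% chance of $10,
# versus a certain $55 -- the same expected monetary value.
gamble = [(0.5, 100.0), (0.5, 10.0)]
sure_thing = [(1.0, 55.0)]

# With logarithmic utility the sure thing is the "best bet", even though
# both choices have identical expected monetary values.
assert expected_utility(sure_thing) > expected_utility(gamble)
```

The concave (logarithmic) utility curve is what makes the model risk-averse: extra dollars are worth less the more you already have, so the downside of the gamble hurts more than the upside helps.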

There are additional human factors that affect our ability to properly assess project risk. Frequent events are easier to recall than infrequent ones. Residents of countries plagued by violence, such as suicide bombings, repeatedly hear about people being killed in such attacks; because it happens so often, they likely feel their own chances of meeting the same fate are high. In other places, like the United States, where we rarely hear about bombings, the risk of meeting a similar fate appears small.

However, single mega-disaster events are easily recalled and often result in over-estimation of the risk. Sometimes we segregate risk into components so that we see only one side or part of the risk. Sometimes we eliminate common parts of the risk and do not consider them; an example is not thinking about the additional risk of driving 65 mph instead of 55 mph. Another common mistake is failing to recognize a risk altogether, such as using a cell phone while driving a car.

Classical risk theory deals with choosing between fixed and known risky alternatives. But in the real world of project risk management, alternatives are not given; they must be sought. Often not all risks are identified, because the search tends to end as soon as we find an alternative we like or have listed three or four risks. This leaves risk identification incomplete and biased, and as a result our projects suffer greater consequences from risk events than necessary.

Copyright © Global Knowledge Training LLC. All rights reserved. 3/09


William (Bill) J. Scott is a Global Knowledge Professional Skills Instructor. He holds two undergraduate engineering degrees and a Certificate in International Operations from the Stockholm School of Economics in Stockholm, Sweden. Bill spent 30 years in corporate America as a project team member, project manager, functional manager, project sponsor, and senior manager. He ran his own project management consulting and training company for nine years prior to joining Global Knowledge in April of this year.
