

A turning point for prescriptive analytics? Can a technology on the edge go mainstream?

Imagine you are the CEO of a major corporation. You’re sitting in a conference room, surrounded by bright people, facing down a strategic decision. You need to know whether your company should make a big investment in an emerging sector or not. So, you turn to your best data analysts, and you ask them what you should do.

If you are a layman, you might imagine that this is a reasonable request. After decades of amassing data on every conceivable aspect of your business and marrying it with the seemingly endless statistics compiled by governments and third parties, an analyst should be able to give you a reasonable answer to a pressing business question. But it doesn’t work that way. While data tends to be extremely good at telling us what might happen in the future, it is largely powerless to tell us what will happen when we make a large, strategic decision.

A recent survey of business leaders, for example, found that although 99% of Fortune 1000 companies are investing in data and AI, only 30% feel they have a well-articulated data strategy.


In data terms, this is the difference between predictive and prescriptive analytics. The former, which is used by businesses every day, tells you what is likely to happen. It’s not a perfect crystal ball, but even knowing the future within a range of certainty is incredibly useful. It can tell us which prospects are likely to become customers, what they are likely to buy, or when your operations might become overloaded. All of that can inform decisions, especially in limited, tactical ways.

However, it doesn’t address the larger questions that many would like answered. Those questions fall into the domain of prescriptive analytics, which seeks to understand the outcome of a particular action. Prescriptive analytics doesn’t merely stop at the likely future; it tries to identify the best actions you can take to affect that future. According to Gartner, it answers questions like “What can we do to make this happen?” or “What should we do?”
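
To make the distinction concrete, here is a minimal sketch in Python (all figures and the linear demand model are illustrative assumptions, not a real methodology): the predictive step forecasts demand from history, and the prescriptive step searches candidate actions for the one with the best expected outcome.

```python
# Hypothetical sketch contrasting predictive and prescriptive analytics.
# All figures (ad spend, demand, margin) are illustrative assumptions.
import numpy as np

# Predictive: forecast demand from historical ad spend ("what is likely to happen?")
ad_spend = np.array([10_000, 20_000, 30_000, 40_000])    # past quarterly ad spend ($)
units_sold = np.array([120, 200, 270, 330])
slope, intercept = np.polyfit(ad_spend, units_sold, 1)   # simple linear trend
predict = lambda spend: slope * spend + intercept
print(f"Forecast at $50,000 spend: {predict(50_000):.0f} units")

# Prescriptive: search candidate actions for the best expected outcome ("what should we do?")
margin_per_unit = 200.0                                   # assumed contribution per unit ($)
candidates = range(10_000, 100_001, 10_000)               # candidate ad budgets ($)
best = max(candidates, key=lambda s: predict(s) * margin_per_unit - s)
profit = predict(best) * margin_per_unit - best
print(f"Recommended spend: ${best:,} for an expected profit of ${profit:,.0f}")
```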

On a small scale, prescriptive analytics is seeing increasingly widespread use. One of the better-known examples is Amazon’s pre-positioning of products based on predictions of demand. You can also find it working well in industrial maintenance, where companies use it to decide what to do and where to send workers before they are needed. Such anticipation has also been scaled into systems that greatly increase efficiency and reliability.

With anything like prescriptive analytics, however, it’s extremely important to distinguish between point solutions, which resolve specific situations, and strategic solutions, which can inform the unanticipated decisions you have to make. Prescriptive analytics solutions today tend to “ride on rails”: they work very well on a particular task but cannot generalize beyond it.

Of course, the real promise of prescriptive analytics is not to resolve limited difficulties but to be generally useful to the business as a whole. To get there, we will need to resolve at least four outstanding issues:

Relevant data. Right now, prescriptive analytics tends to be effective when large amounts of highly relevant and useful data are readily available. For example, geophysicists are currently using it to find the optimal locations for drilling oil and gas wells. To do so, they employ models that incorporate ocean surveys, seismographic data, and capital cost information, as well as unstructured data sources, such as images taken inside test wells. To make a single decision they analyze terabytes of data. Of course, such decisions may also save millions of dollars, so it’s important to get them right.

A similar amount of data is also needed for more generalized business decision-making. We need to be able to incorporate everything from demographic databases to videos and social metrics into the pool for analysis. This is possible today, but typically only when the data is compiled by a skilled analyst in products like Excel or more sophisticated tools like SAS. Such ad hoc solutions can deliver value, but they require a considerable investment that isn’t available or reasonable for most decisions.

Collections of ready-made models. While we can create prescriptive models for specific purposes, businesses will need to have ready access to a wide range of models that address the wide range of decisions a business needs to make. While no one could ever anticipate all potential use cases, the world has plenty of examples of shared resources used to overcome analogous problems. Today, it is possible to collect a database of relevant, working models that can cover most likely economic conditions and circumstances — especially if a marketplace could be developed for sharing and selling them.

Modeling tools. Something, of course, will need to bring this together: an analytics platform that can understand the inputs of an investment, the ongoing financial activities of the company, and the other inputs needed to understand bottom-line results. Such a platform should standardize and categorize models, as well as make them available in libraries for analysts.

Analytics. Obviously, we would also need a suite of superior analytics tools that could be used for whatever simulations might be required. This is the smallest of the hurdles to overcome, as plenty of sophisticated analytics platforms already exist that are able to undertake these tasks.

The simple fact is that if we take the right approach, we are on the brink of a much wider application for prescriptive analytics than currently exists, and one that would greatly facilitate data-driven decision-making. While AI-driven predictive insight has aided businesses in innumerable ways, we are running up against its limitations every day. Perhaps, with the application of new approaches, CEOs will finally get the value they’ve been seeking from the massive amounts of data they’ve been amassing. In that way, when it comes time to make big strategic decisions around the conference table, analysts will be leading the way.

Decision Making: Check in with Your Body

Decision making is a critical and complex process. Unconscious drives and biases, interpersonal issues, fear of making a mistake, overconfidence, an increased tendency to misunderstand the nature of fact and truth, and too much or too little data and experience are all factors. It occurs in personal relationships, in politics and government, in organizations, and in projects. Here we address projects, though the same principles apply across the board.


How can you enhance your workforce capacity planning?

Reports suggest that employee costs can comprise up to 70% of operating expenses in a highly skilled workforce.

Thus, optimal utilization of your resources is critical for delivering successful projects. However, are a ‘gut-feel’ approach or conventional offline practices really suitable for managing your workforce efficiently?

Definitely not! These offline practices lack accuracy and cannot produce forward-looking insights, which can result in a wrongful hiring/firing cycle and incur extra costs for the company. This is why leading companies find it imperative to take a data-driven approach to planning for the needs of the organization proactively. It minimizes the extra costs of misaligned talent deployment.

An advanced capacity planning solution is designed to give leaders foresight into workforce capacity. It provides enterprise-wide visibility into available skills, which lets them reduce bench strength. Using these insights, leaders can tap into the right potential and allocate the right resources to the right job.

Moreover, this unmatched visibility into critical skill gaps and foresight into capacity and demand for future projects help leaders enhance productivity and deliver the best results. Although capacity planning can help you scale up your resource management game, one question persists:

How do you leverage the tool to its full potential and enhance capacity planning?

Here is the answer, followed by some useful tips:

1. Begin with assigning the key resources

The first step towards enhancing capacity planning is assigning critical resources with highly specialized skill sets to high-priority projects. The reason is that their talent and expertise can impact billable and strategic work in the future. This allocation will help you hit the ground running, and you can start booking more resources as needed.

By setting up a systematic and organized resource profile, you can stay aware of your resources’ abilities and competencies, and whether they have taken training in any particular course. Now, how can you use these skills to your advantage? Recognize opportunities that align with your resources’ capabilities and assign them these tasks. You are not only helping your business grow but also enabling your employees to grow professionally and enhance their performance. It’s a win-win for both.
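
As a rough illustration of matching resource profiles to opportunities, the sketch below filters a hypothetical list of resource records by required skill and remaining capacity; the names, fields, and thresholds are assumptions for illustration only, not the schema of any particular tool.

```python
# Hypothetical sketch: shortlist key resources for a high-priority task by skill and capacity.
resources = [
    {"name": "Asha",  "skills": {"python", "forecasting"}, "utilization": 0.6},
    {"name": "Bruno", "skills": {"sql", "reporting"},      "utilization": 0.9},
    {"name": "Chen",  "skills": {"python", "sql"},         "utilization": 0.4},
]

def shortlist(required_skills, max_utilization=0.8):
    """Return resources who cover all required skills and still have spare capacity."""
    required = set(required_skills)
    return [r["name"] for r in resources
            if required <= r["skills"] and r["utilization"] <= max_utilization]

print(shortlist({"python"}))   # -> ['Asha', 'Chen']
```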

2. Strategize with what-if scenarios

As a manager, you don’t want to be caught off guard by unforeseen scenarios. A robust capacity planning tool caters to this as well. At the nascent planning stages of a project, you can modify the situations and predict the likely outcomes with accuracy. While doing so, you can settle on the plan that best suits your project and is most profitable for the future.
What-if analysis is, quite simply, a plan before the plan. The data-driven, actionable insights will let you make strategic decisions and empower performance levels down the line. Moreover, you will be spared unnecessary budget overruns and project bottlenecks when you have unmatched visibility into all the situations well in advance. It will also help you optimize resource utilization and work assignments for future projects.
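
In its simplest form, a what-if analysis just recomputes a project’s duration and cost under alternative assumptions; the sketch below does that for a few hypothetical staffing scenarios (all numbers are made up for illustration).

```python
# Hypothetical what-if sketch: compare project duration and cost under alternative staffing scenarios.
scenarios = {
    "baseline":       {"team_size": 5, "hourly_rate": 60, "effort_hours": 4_000},
    "add contractor": {"team_size": 6, "hourly_rate": 65, "effort_hours": 4_000},
    "reduced scope":  {"team_size": 5, "hourly_rate": 60, "effort_hours": 3_200},
}

HOURS_PER_WEEK = 40   # assumed productive hours per person per week

for name, s in scenarios.items():
    weeks = s["effort_hours"] / (s["team_size"] * HOURS_PER_WEEK)
    cost = s["effort_hours"] * s["hourly_rate"]
    print(f"{name:>15}: ~{weeks:4.1f} weeks, ~${cost:,.0f}")
```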

3. Predict capacity vs. demand

One of the major benefits of a capacity planning solution is its ability to generate accurate forecast reports. These reports will forewarn you of future project resource demand and the actual capacity. By leveraging these reports, you can bridge the capacity vs. demand gap (which can otherwise result in project bottlenecks) with the appropriate resourcing treatment.
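
At its core, the capacity-vs.-demand report is a per-skill, per-period comparison; the sketch below shows the idea with hypothetical numbers (a real tool would pull these from resource bookings and the project pipeline).

```python
# Hypothetical sketch: compare forecast demand against available capacity, per skill.
capacity_hours = {"developer": 1_600, "tester": 640, "analyst": 480}   # available next quarter
demand_hours   = {"developer": 1_900, "tester": 500, "analyst": 480}   # forecast for booked projects

for skill in sorted(set(capacity_hours) | set(demand_hours)):
    gap = capacity_hours.get(skill, 0) - demand_hours.get(skill, 0)
    status = "shortage" if gap < 0 else "excess" if gap > 0 else "balanced"
    print(f"{skill:>10}: {status} ({gap:+,} hours)")
```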

What’s more? These predictions will also give you a comprehensive view of the excess or shortage of resources (the resource gap) you might face down the line. To address this, you can implement adequate remedial measures and ensure uniform utilization of resources. Here is a list of potential solutions you can use:



When there is a shortage of resources: 

  • Adjust project timelines to align with available capacity.
  • Retrain and skill up available employees to fill the gaps.
  • Hire a contingent workforce ahead of time to avoid further roadblocks.
  • Optimize bench management to minimize bench time.

When resources are in excess:

  • Bring forward project timelines to allocate tasks to the idle resources.
  • Fast-track projects by marketing the capacity available.
  • Bring forward initiatives to meet strategic goals and allow resources to brainstorm and contribute.
  • Allow movement of resources across departments to maximize utilization and reduce bench time.

4. Track planned vs. actual hours

Allocating resources is not the end of capacity planning. A manager’s major responsibilities include keeping project costs under control, maximizing billability, and reducing resourcing costs. To do so, managers must keep a check on the project’s progress, the resources’ progress, and the number of hours they log against each task. Why is this information so vital?
When you have a report of the actual hours logged by resources, you can compare them with the planned hours for each task. If your resources exceed the planned hours, you can brainstorm, identify the potential reasons behind it, and take corrective measures ahead of the curve. This will keep project costs in check and keep your firm from incurring unnecessary expenses.
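
The comparison itself is simple arithmetic once the timesheet data is available; here is a small sketch with hypothetical task data and an assumed 10% review threshold.

```python
# Hypothetical sketch: flag tasks whose logged hours exceed the planned hours by more than a tolerance.
tasks = [
    {"task": "Design",    "planned_hours": 80,  "actual_hours": 76},
    {"task": "Build API", "planned_hours": 120, "actual_hours": 150},
    {"task": "Testing",   "planned_hours": 60,  "actual_hours": 64},
]

TOLERANCE = 0.10   # assumed threshold: review anything more than 10% over plan

for t in tasks:
    overrun = (t["actual_hours"] - t["planned_hours"]) / t["planned_hours"]
    if overrun > TOLERANCE:
        print(f"Review '{t['task']}': {overrun:.0%} over the planned hours")
```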

5. Plan, represent and assign

Different types of projects demand different courses of planning. A mature resource manager takes a hybrid approach to planning these projects. To streamline your staffing, you can follow a three-step approach of planning, representing, and assigning resources. Here is a detailed explanation of each step:

  • Plan – The most important step to a successful project is a well-defined plan. You can opt for the approach that best suits your project. For example, you can use work breakdown structures to detail the stages and allocate resources. In other cases, when you are not yet sure which resources will work on the tasks, you will require high-level planning and approvals from senior managers to proceed further.
  • Represent – A capacity planning solution is flexible enough to let you represent the resource demand in different units. You can represent the demand in headcount, FTE (full-time equivalent, based on the firm’s standard working hours per week), or a number of hours, whichever is most convenient. For instance, a demand of 4 FTE means you need the equivalent of four full-time employees’ effort for the period in question, whether that comes from four people working full time or eight people working half time (see the short conversion sketch after this list).
  • Assign – After creating a meticulous plan and listing the resource demand, the last step is to use the tool’s advanced filters, enter your requirements based on role, department, competencies, location, availability, and the number of hours/FTE/headcount, and assign the right resources to the right tasks.
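
Since headcount, FTE, and hours are different units for the same demand, converting between them is straightforward arithmetic; the sketch below assumes a 40-hour standard working week purely for illustration.

```python
# Hypothetical sketch: express the same resource demand in hours, FTE, and headcount.
import math

STANDARD_HOURS_PER_WEEK = 40   # assumed; substitute your firm's standard working week

def demand_in_units(total_hours, duration_weeks):
    """Convert a demand stated in hours over a period into FTE and a minimum headcount."""
    fte = total_hours / (STANDARD_HOURS_PER_WEEK * duration_weeks)
    headcount = math.ceil(fte)             # whole full-time people needed
    return fte, headcount

hours, weeks = 640, 4                       # e.g. 640 hours of work spread over 4 weeks
fte, headcount = demand_in_units(hours, weeks)
print(f"{hours} hours over {weeks} weeks = {fte:.1f} FTE (at least {headcount} full-time people)")
```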

The takeaway

A powerful, advanced, and comprehensive capacity planning tool is pivotal to utilizing the talent pool to its fullest extent. When a resource manager has 360-degree visibility of the resources and their skills, and can toggle and compare scenarios before planning a project, he or she can build the highest-value project and eliminate potential pitfalls.

The above-mentioned tips for scaling up your capacity planning process will empower your decision making, enhance productivity, and help your resources make a more valuable contribution to the firm. Now that the potential benefits of a structured process are clear, have you thought about revitalizing your planning process?

Failing Fast in the Age of Pandemics

When I first heard the phrase “fail fast, fail often,” I was horrified. It was cited as a mantra by the early adopters of Agile methods.

But I didn’t understand what it meant in those relatively early days of Agile. The expression conjured up missed deadlines, broken promises, and an undisciplined approach to software development. I found the term threatening—it went against everything I believed in as a project manager. This quote by a CIO in a recent article summarizes how I felt back then: “I really don’t like the term ‘fail fast.’ I don’t like the term ‘failure.’ It’s got so much laden on it. I understand the folks in Silicon Valley; they mean something different.”

As I learned more about Agile and became certified in 2010, I came to understand that in many respects failing fast was a reaction against the “do it right the first time” slogan of the quality movement of the 80s and 90s. I learned that failing fast does not refer to a lack of success—simply a different meaning of success. Failing fast is a way to experiment quickly and be OK with throwing away unsatisfactory results. It‘s also a way to ensure that “sunk costs,” money already spent, are not a factor in future decisions.

So what exactly does it mean to fail fast?

Most definitions of ‘fail fast, fail often’ include some of these elements:

  • Speed of execution. A project where the “speed of execution is a lot more important than perfect execution.”[i] In other words, it’s more important to get results quickly than to get perfect results. A feature of the old quality models, which required doing things right the first time, was having zero defects. Having implemented several large projects right after this kind of quality training, I now realize that the teams spent an inordinate amount of time trying to ensure that every aspect of the project would work perfectly. Back then even small defects were discouraged. In retrospect, the cost of preventing every defect was greater than the benefit it provided; an earlier, imperfect implementation would have delivered more value.
  • Small pieces and time boxes. Taking a large project and breaking it into small pieces delivered in time boxes (iterations) means that more features can be completed in shorter amounts of time.
  • Experimentation and extensive testing. This includes trying new things, learning from these trials, and refining the inputs and/or tests in order to achieve different results. The number of tests is less important than learning from the results and making changes.

Failing fast in the age of pandemics

But let’s get back to the topic at hand—the need to fail fast, particularly as it relates to Covid-19. Many in the health care industry are realizing the advantage and even the necessity of “failing fast.” This is particularly true in the area of vaccine development. Vaccines usually take anywhere between 2 and 5 years or longer to develop, test, manufacture, and distribute. There are many reasons for this. Before being approved for manufacturing, vaccines typically go through many phases from animal trials to extensive testing on a variety of different human demographics.[ii] What, then, makes epidemiologists think that a Covid-19 vaccine can be made to fail fast and come to a successful conclusion? Here are some examples of how this is being done.



  • One of the best examples of failing fast is developing vaccines at the same time that manufacturing and distribution channels for those vaccines are created. For this to work, several things are necessary. Industry and scientific leaders, regulators, and others (groups that do not always work well together) have to do just that. And they have to share information internationally rather than withhold it.
  • Traditional thinking in vaccine development held that the creation, manufacturing, and distribution of a vaccine happened sequentially rather than concurrently. There was too much financial risk in thinking about manufacturing and distribution before the vaccine went through all the trials and was approved. Since most vaccines never make it to human trials, let alone through all the trials to approval, why incur the huge cost to manufacture and distribute something that was going to get thrown away? That’s different with Covid-19.

Failing fast requires organizations to assume risk that was not thought acceptable in the past, when the cost of developing these channels before vaccine approval was too great. According to Dr. Fauci, this concurrent development can shave many months off the time that it typically takes to develop vaccines.[iii] Because of the need for speed and the financial benefit, this kind of unprecedented international collaboration has begun.[iv]

  • Partnering with other organizations to develop the same vaccine. This is what’s happening in the development of the Covid-19 vaccine. Unlike vaccine development in the past, many organizations are working together on the same vaccine in order to speed up development and approval.[v]
  • Using AI to study the disease’s mutation patterns. This new coronavirus probably mutated from animals to humans and continues to mutate. Most of these mutations appear to be minor, but new evidence suggests that newer mutations help the virus better penetrate the body.[vi] Trials need to take these mutations into account, and AI speeds up this process.
  • Using AI to determine what an infected cell looks like. AI can examine the many complex cell attributes, look at the problem holistically, and predict which potential vaccines are most likely to succeed in clinical trials. Humans are not good at recognizing what a sick cell looks like.

All these fail-fast measures require strategic and innovative thinking, strong executive leadership, and a commitment to work collaboratively rather than competitively. But in a world-wide pandemic, failing fast and failing often is exactly what’s needed.

 

[i] Sunnie Giles, Forbes, April 30, 2018, https://www.forbes.com/sites/sunniegiles/2018/04/30/how-to-fail-faster-and-why-you-should/#758c5b92c177

[ii] Rob Grenfell and Trevor Drew, The Conversation, February 17, 2020, https://www.sciencealert.com/who-says-a-coronavirus-vaccine-is-18-months-away

[iii] NPR, June 24, 2020, https://www.npr.org/2020/06/24/882678364/dr-fauci-discusses-recent-covid-19-spikes-in-several-states

[iv] Charlotte Edmond, World Economic Forum, May 14, 2020, https://www.weforum.org/agenda/2020/05/coronavirus-covid-19-vaccine-industry/

[v] Ibid.

[vi] Sarah Kaplan and Joel Achenbach, The Washington Post, June 29, 2020, https://www.washingtonpost.com/science/2020/06/29/coronavirus-mutation-science/?arc404=true

AI in the World of Pandemics

A Twitter meme reads as follows: Question: Who led the digital transformation in your company? Answer: COVID-19

Every day during the Covid-19 pandemic, US governors emphasize the need to use data to combat the spread of the virus. They look closely at various simulations to predict such things as the peak of the outbreak by location, the potential rate of recovery, death rates, and which works better, lockdowns or extensive social distancing. Such predictions and the ability to analyze massive amounts of data would have been impossible before AI (artificial intelligence). “These are the numbers behind genetic sequencing and artificial intelligence. To a degree never before possible, they give us power to understand a pandemic even as it races to kill us.”[i]

Recently, when I was researching BA trends in the world of AI, I read several articles on the use of AI and quantum computing in healthcare, in particular drug development.[ii] One article even suggested that it would help predict pandemics.[iii] At the time that notion seemed far-fetched. Today AI is being used extensively to predict and try to stop the pandemic. Recent articles have cited numerous examples of the use of AI, HPC (high-performance computing), and quantum computing[iv] and how they can help during this difficult time. Here are just a few ways these technologies are being used:

  • Diagnosis, treatment, and vaccine development, including distinguishing between pneumonia and COVID-19
  • Identification of COVID-19 antibody candidates
  • Examination of cell membranes to determine how the virus proliferates
  • Predicting the path of disease spread
  • Distribution of vaccines, personal protective equipment (PPE) such as masks, and equipment such as ventilators, as well as medical personnel, based on what is and will be needed and where
  • Analysis of social media to predict where outbreaks will occur
  • Analysis of geographic anomalies in body temperatures to predict the path of the virus
  • Prioritizing patients with more urgent need and recommending individualized treatments

This is interesting and important information that will certainly help with future waves of this particular coronavirus, as well as future viruses. But what does this have to do with our world of projects? Plenty.

Need for accurate data

Since organizations began undertaking large AI initiatives, they have realized the importance of having not only massively large amounts of data, but of having that data be accurate. Here’s why. At its most fundamental level, machines use historical data (big, big data) to learn and improve. As new data becomes available, machines categorize it, learn more, improve, and make better and better predictions. Their algorithms depend on good data. If the data is unreliable, the outcomes will be less than useful and perhaps even harmful.
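
The point about data quality is easy to demonstrate on synthetic data: the sketch below trains the same simple classifier on clean training labels and on labels with random errors, then scores it against the truth. It is an illustration of the principle, not of any particular organization’s pipeline.

```python
# Illustrative sketch: the same simple model trained on clean labels vs. partially corrupted labels.
# Synthetic data only; the point is the trend, not the specific numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # the true underlying rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def accuracy_with_label_noise(noise_rate):
    """Flip a fraction of the training labels, retrain, and score on clean test data."""
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_noisy)) < noise_rate
    y_noisy[flip] = 1 - y_noisy[flip]
    model = LogisticRegression().fit(X_tr, y_noisy)
    return model.score(X_te, y_te)

for rate in (0.0, 0.25, 0.45):
    print(f"label noise {rate:.0%}: test accuracy {accuracy_with_label_noise(rate):.2f}")
```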

BAs have always been involved in helping organizations ensure that their data is accurate. They help organizations develop a business case for deciding whether or not to undertake the effort to cleanse the data, often a large and expensive undertaking. BAs can explain that while the cost to cleanse data is high, the risk of not doing so is also high. BAs can point to study after study of organizations that use inaccurate data and the disappointing results they have gotten from their AI efforts.

In addition, BAs can help organizations with this cleansing effort by doing such things as:

  • Analyzing data to determine how much needs to be cleansed (see the profiling sketch after this list)
  • Developing cleansing implementation plans
  • Facilitating conversations to help resolve conflicts related to, for example, who owns the data, where the data comes from, and which source of the data should be used
  • Analyzing the results of AI simulations and questioning anomalies
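
As an example of the first item above, a quick profiling pass often gives a rough estimate of how much data needs to be cleansed; the pandas sketch below counts missing and obviously invalid values in a small, made-up dataset (the column names and validity rule are assumptions for illustration).

```python
# Hypothetical profiling sketch: a first estimate of how much of a dataset needs cleansing.
import pandas as pd

df = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, None],
    "age":        [34, -2, 67, 45, 51],            # -2 is an obviously invalid age
    "state":      ["NY", "ny", "CA", None, "TX"],  # inconsistent casing, one missing value
})

report = pd.DataFrame({
    "missing":   df.isna().sum(),
    "missing_%": (df.isna().mean() * 100).round(1),
})
report["invalid"] = 0
report.loc["age", "invalid"] = int((df["age"] < 0).sum())   # simple validity rule for one column
print(report)
```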


BAs as Data Translators

More and more organizations are recognizing the need for data translators. This is a perfect role for BAs who have always been good at translating technical complexities into language a variety of business stakeholders can understand. This is harder than it sounds. Stakeholders usually have their own language, acronyms, and idioms—think hospitals, insurance companies, doctors, nurses, medical staff, patients, first responders, and the community at large. BAs can help organizations figure out a way to communicate AI results to various stakeholders so that the communication is understandable and relevant to their specific needs.

In addition, BAs can help by:

  • Looking at AI results and identifying trends, explaining the impacts of those trends, and explaining the importance of the assumptions used to come up with the results (see below for more on assumptions).
  • Helping ensure that the results of simulations and predictions make sense to various stakeholder groups.
  • Helping ensure that the results are visually understandable. Ill-defined and confusing charts and graphs are not useful for decision-making.
  • Helping shape AI strategies as well as helping to implement them.

Strategic, experienced, and well-informed BAs can be consultants to their organizations, resolve conflicts between stakeholder groups, and balance competing needs among them.

Correcting faulty assumptions

BAs are good at questioning assumptions. We know that every assumption is a risk and that we need to be aware of assumptions and document them, so that when they change, we can easily change our plans. Take the simulations used in dealing with Covid-19. Since no one simulation provides enough information, multiple ones are being used. Some are based on assumptions about social distancing, although that is just one of many. Here are just a few more examples of assumptions that have recently been used in predicting pandemic outcomes (a toy simulation after the list shows how much a single assumption can shift the results)[v]:

  • Everyone has the same chance of catching the virus from an infected person because the population is perfectly and evenly mixed, and people with the disease are all equally infectious until they die or recover.
  • Dividing the population above into smaller groups by age, gender, health status, employment, and so forth.
  • The percentage of infected people vs those who died
  • The number of days before an asymptomatic but infected person spreads the virus to others.
  • That the data being used was accurate. For example, most simulations used data that came from China, which turned out to be inaccurate and therefore skewed the results.
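
To make the sensitivity to assumptions concrete, here is a toy SIR-style simulation sketch; the population size and rate parameters are invented for illustration, and the only thing it is meant to show is how much one changed assumption (here, the effective contact rate) moves the headline result.

```python
# Illustrative sketch: a toy SIR model run under two different contact-rate assumptions.
# Parameter values are made up to show sensitivity to assumptions, not epidemiological estimates.
def sir_peak_infected(population=1_000_000, beta=0.4, gamma=0.1, days=365):
    """Return the peak number of people infected at the same time for the given assumptions."""
    s, i, r = population - 1.0, 1.0, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# Same model, one changed assumption: the effective contact rate under stronger distancing.
print(f"Peak infected with beta = 0.4: {sir_peak_infected(beta=0.4):,.0f}")
print(f"Peak infected with beta = 0.2: {sir_peak_infected(beta=0.2):,.0f}")
```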

As Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases, said, “What we do is that every time we get more data, you feed it back in and relook at the model. Is the model really telling you what is actually going on? . . . Models are only as good as the assumptions you put into them, and as we get more data, then you put it in and [the results] might change.”

BAs can help ensure that stakeholders understand the assumptions that are used and the effect of assumptions on the results. They can encourage running many simulations and looking at ranges of predictions based on different assumptions and communicating which assumptions were used for which results. And as tempting as it sometimes is, BAs can point out the risk of letting AI make decisions based on the results when assumptions are used.   

 

[i] The Washington Post, March 31, 2020, https://www.washingtonpost.com/opinions/covid-19-numbers-are-terrifying-but-dont-lose-sight-of-the-good-numbers/2020/03/31/234877a4-7373-11ea-85cb-8670579b863d_story.html?utm_campaign=wp_todays_headlines&utm_medium=email&utm_source=newsletter&wpisrc=nl_headlines

[ii] Shohini Ghose, TED Talk, A Beginner’s Guide to Quantum Computing, November 2018, https://www.youtube.com/watch?v=QuR969uMICM

[iii] Matt Swayne, The Quantum Daily, March 4, 2020, How Quantum Computers Can Be Used to Thwart a Future Pandemic, https://thequantumdaily.com/2020/03/04/how-quantum-computers-could/

[iv] Global Supercomputing is Mobilizing Against Covid-19, HPCwire, March 12, 2020, https://www.hpcwire.com/2020/03/12/global-supercomputing-is-mobilizing-against-covid-19/

[v] David Adam, The Simulations Driving the World’s Response to Covid-19, Nature, April 2, 2020, https://www.nature.com/articles/d41586-020-01003-6