Understanding Your Project’s Success Metrics

Guiding clients toward successful projects and operations often requires long conversations about goal setting. Sometimes clients are simply looking for project management to get work done. Most of the time, however, we find that clients have goals for those projects but haven’t defined the measures of success. Defining them can be difficult and time-consuming, but it is never wasted effort.

We talk about “whys” often. What is your “why,” your reason for doing any project? “Because they told me to” is not the reason. Most of the time, you decide to do something, or you are tasked with doing something, to increase profit in a category, to reduce waste, to improve customer perceptions, and so on. But even that isn’t very precise, and this is where the tricky part comes in.

To set up your ROI, KPI, quality measure, or other indicator of success, you need to get down to the level of defining what profit, waste, or perception actually means for this effort. Clients can get frustrated with what they often see as “semantics”; however, a clear and shared language for your goal is imperative for getting buy-in from project teams and business owners. Document your baseline or current state, then create a method to track that number regularly so you can check progress as your project is implemented.
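To make the baseline-and-tracking step concrete, here is a minimal sketch (in Python) of one way to record a success metric; the metric name, definition, baseline, and target below are hypothetical, invented only to illustrate keeping the shared definition right next to the number.

    from dataclasses import dataclass, field

    @dataclass
    class SuccessMetric:
        """A single, explicitly defined indicator of success."""
        name: str            # e.g., "Claims rework rate" (hypothetical)
        definition: str      # the shared language the team agreed on
        baseline: float      # documented current state
        target: float        # where the project is supposed to move it
        readings: list = field(default_factory=list)  # (period, value) check-ins

        def record(self, period: str, value: float) -> None:
            self.readings.append((period, value))

        def progress(self) -> float:
            """Fraction of the baseline-to-target gap closed so far."""
            if not self.readings:
                return 0.0
            latest = self.readings[-1][1]
            gap = self.target - self.baseline
            return (latest - self.baseline) / gap if gap else 1.0

    # Hypothetical example: reduce rework from 12% to 8% over the project
    metric = SuccessMetric(
        name="Claims rework rate",
        definition="Percent of claims touched more than once before payment",
        baseline=12.0,
        target=8.0,
    )
    metric.record("2025-Q1", 11.2)
    metric.record("2025-Q2", 10.1)
    print(f"{metric.name}: {metric.progress():.0%} of the gap closed")

The point of keeping a definition field next to the number is that the shared language lives with the metric; the arithmetic is trivial once that language exists.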

Effective Equity Programs: Benchmarking for Better Health Outcomes

One of the considerations in Equity programs is the identification of the benchmark, or best-performing, population. The easiest group to measure against, and therefore the method most often used, is the population served. This leads health plans and other types of organizations to measure against the highest-performing segment of their own customer base. Can we see how this may not result in appropriate targeting of improvements or interventions?

Specifically, but not uniquely, in the health care space, the healthiest people served by a health plan may not be high performing against the population as a whole (the broader community). Therefore, community-level statistical analysis should be leveraged wherever possible to improve quality across all the populations served. When determining the cohort distribution, criteria such as Race, Ethnicity, Language, Disability, Sexual Orientation, Gender Identity and Expression, Financial and Employment Opportunity, Quality of Housing, Access to Education, and other Social Determinants can be used in a multi-factor model.

Tiering performance results and then identifying common characteristics may be appropriate; in other cases, geographic groupings that demonstrate similar patterns of access to resources or services work better. In deciding how to slice the population for improvement interventions, a rigorous data analysis strategy should be used.
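As a rough sketch only, not a prescription, the fragment below shows how community-level benchmarking might be framed in code: compute the measure’s attainment rate for each multi-factor cohort, compare it to an external community benchmark rather than to the plan’s own best-performing segment, and tier the gaps to decide where interventions land. The cohort labels, benchmark value, and tiering threshold are all assumptions made for illustration.

    from collections import defaultdict

    # Hypothetical records: one row per member with a cohort label and a
    # yes/no outcome on the quality measure (e.g., screening completed).
    members = [
        {"cohort": ("Spanish-speaking", "Urban"), "measure_met": True},
        {"cohort": ("Spanish-speaking", "Urban"), "measure_met": False},
        {"cohort": ("English-speaking", "Rural"), "measure_met": True},
        {"cohort": ("English-speaking", "Rural"), "measure_met": True},
    ]

    COMMUNITY_BENCHMARK = 0.80  # external, community-level rate (assumed)

    def cohort_rates(rows):
        """Rate of measure attainment for each multi-factor cohort."""
        hits, totals = defaultdict(int), defaultdict(int)
        for row in rows:
            totals[row["cohort"]] += 1
            hits[row["cohort"]] += int(row["measure_met"])
        return {c: hits[c] / totals[c] for c in totals}

    def tier(rate, benchmark=COMMUNITY_BENCHMARK):
        """Tier each cohort by its gap to the community benchmark,
        not to the plan's own highest-performing segment."""
        gap = benchmark - rate
        if gap <= 0:
            return "at/above benchmark"
        return "priority" if gap > 0.15 else "monitor"

    for cohort, rate in cohort_rates(members).items():
        print(cohort, f"{rate:.0%}", tier(rate))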

Once the cohorts are defined, an analysis of the barriers to full attainment must be undertaken. Whether this is done through compilation of research performed by public and private institutions, feedback and surveys of a sample of the cohort, or application of evidence-based practice specific to the population profile, the interventions should be designed differently for the different groups.

Finally, the interventions or innovations used to improve outcomes should always be designed to move the measure that was used to identify the disparity in the cohort. Many organizations and regulatory bodies dictate improvement goals or minimum performance standards, but in almost every case improvement of any amount is rewarded. The reward may only be realized deep within your financials, but it will always be a positive, even if the stated goals are not met.

Understanding Health Equity in 2025

Recent engagements and readings have led to a strong focus on Health Equity and Equity in Technologies. Diversity, Equity, and Inclusion have come under attack recently based on a fundamental misunderstanding of the intent of these efforts. Just as Affirmative Action was turned into a talking point for reverse-prejudgment, Equity is confused with the idea that, for everyone to have an equal opportunity to achieve their highest potential, those who have already achieved it must be pushed aside in favor of those with less merit.

Equity is about recognizing that success is often the result of structural advantage. The triggering phrase “the rich get richer” is no less true because it hurts. Resource distribution is not equal. Access to good schools, medical facilities, healthy food options, and employment is not equal. The goal of equity is not to provide equality, but rather to ensure that everyone has access to the tools and resources that allow them to succeed, and is afforded protection from technologies and policies that ignore or compound the experience of people who have different access or who face historical barriers to achievement.

Below is a reading list that has informed some of the projects we are currently working on with clients. Is your business looking to go deeper on the idea of Equity in 2025?

  • Systemic: How Racism Is Making Us Sick, Layal Liverpool
  • More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, Meredith Broussard
  • Noise: A Flaw in Human Judgment, Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein
  • The Best Practice, Charles Kenney
  • The Political Determinants of Health, Dawes
  • A Right to Health, Jerome
  • The Injustice of Place, Edin, Shaefer, and Nelson
  • Care Work: Dreaming Disability Justice, Leah Lakshmi Piepzna-Samarasinha
  • Inclusion on Purpose: An Intersectional Approach to Creating a Culture of Belonging at Work, Ruchika Tulshyan
  • Under the Skin, Linda Villarosa
  • Automating Inequality, Virginia Eubanks
  • Biased: Uncovering the Hidden Prejudice That Shapes What We See, Think, and Do, Jennifer L. Eberhardt, PhD
  • On Medicine as Colonialism, Michael Fine
  • Getting Risk Right: Understanding the Science of Elusive Health Risks, Geoffrey C. Kabat

The books listed above are a subset of the readings that inform our thinking on how to identify and mitigate barriers to equity. This is by no means the only topic of interest or expertise for Bjorn Consulting, but it is an area of knowledge relevant to our collective advancement in 2025.

Efforts undertaken in these areas must be specific, measurable, attainable, relevant, and time-bound (SMART), just as any work plan or project must be designed to achieve its goals. Work for work’s sake is useless and expensive. Business and policy analysis prior to implementation of regulatory requirements and/or contractual imperatives is paramount to designing processes that succeed and stand the test of time. Let us help you achieve something.

Change. Management.

I will always recommend managing the workload before managing the people. Give your people the training and tools to do the job. Share the big idea and give them a chance to buy into the goals. Then let them do it. Standards must be established and met to ensure that your company’s promises to customers are kept reliably, but rigidity in the work can stifle innovation and positive change.

How often do we hear the child’s complaint, “If I got the right answer, why do I have to show my work?” The reason we want to see the work is to make sure the worker can repeat the process and continue to get the right answer (skill rather than luck). But once you have proven that the method produces the result reliably, does it matter if it was done differently?

There are lots of people in an organization who are happy to use the tools and do the work exactly as they were trained during orientation. There are also lots of “rebels” who see opportunities in their daily work to use the tools differently, or to use different tools, to get the job done faster or with less direct effort.

I am not suggesting that we let people run amok doing whatever they want. In regulated industries and publicly scrutinized companies, we must be able to produce evidence of the “work” and explain what was done and why. What I am suggesting is that Management operate at the macro level and create an environment that allows staff to develop methods and feel comfortable sharing their ideas and showing their work.

Finding a new way of doing things is exciting. Sharing it with Management is exciting. Having Management say “that’s all fine and good, but it’s not the right way and you need to stop” is disheartening. Having Management say “that’s interesting; it isn’t how we do it, but show me how it works and still meets all of our requirements” is innovation.

You may find there is a flaw. You may find there is an opportunity to improve existing practices. You may find the new normal.

Lessons Learned

When is a lesson really learned? Do you find yourself using the phrase and repeating the mistake? I do. I need to be more observant and reflective both during and after a project.

Why do I do it? Why can’t I learn the lesson? This generally happens for one of two reasons: I felt the sting but didn’t actually look for the source, or I assigned the need for correction to the problem rather than to myself.

The first one is easy to explain. If it hurts, I know it hurts and I want it to stop, so I say, “I get it! Now let’s move on. Lesson learned. I won’t do that again.” But THAT is never defined as an action or a strategy (the source); it is defined as the pain. This may sound very personal and unrelated to business, but if we stop for a minute and think about the last failure we experienced in a client or team setting, it won’t take long to find an example where we just wanted to move on and forgot to really examine the failure point and change our standard approach.

The second is more nuanced. I have also found myself on teams with problems but determined that the issue wasn’t my failure. In those cases, I have nothing to fix, right? Wrong. There is a lesson to be learned from being a party to a problem even if you are not the cause of it. The lesson is to watch for that same thing in future efforts and correct it early. We rarely work alone these days, and just because one person’s poor or hasty decision caused the issue this time around doesn’t mean they are the only one responsible if it happens again or with a different team.

The best remedy for both of these is sometimes referred to as post-mortem analysis. In many of my clients’ industries that is not a well-regarded term (healthcare professionals and administrators frown upon the use of death language), so we can just call it “project aftercare”.

  • What happened?
  • What didn’t work?
  • Why?
  • Next time that happens, or we reach this decision point, what should or shouldn’t we do? Don’t be too rigid here; just capture some alternatives that were discussed. On the next project that hits this issue, the circumstances may be perfect for the failed solution to work. This isn’t about never doing it again; it is about learning to pause and consider.
  • Where does our collected learning live? This one also needs discussion. It isn’t the same group every time, and lessons shouldn’t be personal; so where can we collect project knowledge in the PMO? (A minimal sketch of one possible record follows.)
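One lightweight answer to “where does our collected learning live” is a structured record that every project closes out with and that the PMO stores alongside its other artifacts. The sketch below simply mirrors the questions above; the field names and the example entry are hypothetical, not a standard.

    from dataclasses import dataclass, field

    @dataclass
    class Lesson:
        """One 'project aftercare' entry for a shared PMO knowledge base."""
        project: str
        what_happened: str
        what_did_not_work: str
        why: str
        alternatives_to_consider: list = field(default_factory=list)  # options, not rules

    # Hypothetical entry; store these wherever the PMO keeps shared artifacts.
    lesson = Lesson(
        project="Member portal rollout",
        what_happened="Go-live slipped two weeks",
        what_did_not_work="Vendor testing window overlapped with open enrollment",
        why="Schedule was built before the enrollment calendar was confirmed",
        alternatives_to_consider=["Lock the enrollment calendar before vendor scheduling"],
    )
    print(lesson.project, "-", lesson.why)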