Measure the attitudes of your stakeholders in surveys. Link their scores to financial performance using factor analysis and structural equations. Discuss the value of score changes with the people in your organization. Identify the actions that best create value, and then you can empower, relax and perform.
How will your customers, employees, investors and other stakeholders react when they learn about the actions embedded in your strategy? To what extent will
- Sales and margins improve?
- Employees help the company succeed?
- Investors see financial advantages, be it better profitability or lower risk?
- Suppliers, pressure groups, and other indirect stakeholders support your ambitions?
If your estimates are wrong, your strategy is likely to destroy value. You will be allocating your scarce financial and human resources to areas where you don’t get the best benefits.
So far, the expected effects have primarily been based on gut feel, and the person highest in the corporate hierarchy has been expected to have the best gut feel of everybody involved.
But business life is full of surprises and financial targets are often missed. What went wrong, and why? Who is to blame?
Internal financial reports give little guidance. You can see that the money has been spent but you can’t see why the positive effects didn’t come. Didn’t your stakeholders notice the actions? Didn’t they like them? Didn’t they care? Or were performance improvements in target areas outweighed by mistakes in other areas?
Conventional surveys of stakeholder attitudes may shed some light on the problems but give no conclusive evidence.
So you are left with guesses and more or less well-founded claims that inevitably lead to mistrust, debates, and frustration. Managers will be replaced on unclear grounds, new management layers will be introduced, and new control systems will be built - but only time will tell if the company is better or worse off afterwards.
The solution can be found in a broad, ambitious three-phased approach stretching over several years:
1. Remove mental blocks
2. Reform survey analytics
3. Redesign the management processes
These three phases are summarized below.
The mental blocks go back to people’s experiences from school: High scores are good scores.
This is not an absolute truth in business life.
Survey scores are often used to measure how well a company is doing in the eyes of its stakeholders. However, if you link survey scores to measures of companies’ actual financial performance, you are likely to be surprised:
- Customer scores are typically unlinked to customer profitability
- Employee scores are typically negatively linked to ambition and willingness to help the company succeed
- Investor ratings are typically unlinked to all survey scores
This list of surprises can be made much longer. Employees tend to appreciate CSR if it means taking good care of the employees, otherwise they would rather see the money stay within the company. Local politicians may like CSR if they themselves get to decide where to use the resources, and appear as the ones who made the efforts possible. (See my MIX contribution called 'CSR - Loved by Employees, Ignored by Customers, a Headache for Investors' for further discussions and supporting evidence regarding this topic.) Members of a pro bono organization may not be particularly interested in the results they achieve but enjoy the feeling of doing good together with other people.
To overcome the mental blocks, you need first to manage top management’s expectations by showing them examples from other companies. Then you should re-analyze their own existing surveys, linking the scores to actual performance measures, and stress how poorly they relate.
The second phase is to reform survey analytics, a topic which I describe in further detail on the MIX site under the title “Capture Preferences, Manage Attitudes for Success in an Ever-Changing World”.
Survey providers excel at producing cross-tabulated scores in colorful graphs and complex tables. However, when the mental blocks have been removed, such deliveries add little managerial value. See the attached document 'Observations of Absence of Linkages Between Margins and Scores' for a discouraging series of graphs that show no linkages whatsoever between survey scores and financial value, measured as actual contribution margins, for a consulting firm.
Instead you need to simulate the human decision process and translate it into quantitative terms. You want to know how much value a certain action would add, and this can be estimated by combining the scores from purpose-built surveys with actual performance facts using factor analysis and structural equations.
This approach is generic and valid for all categories of stakeholders, albeit with fundamental differences in content.
For most categories, the main traits of the decision process are obvious. We can all imagine what drives people’s willingness to buy a certain product or service, what makes people accept a certain job, or what makes pressure groups vividly express their opinion about a specific circumstance.
The least understood category is the investors, yet they are the ones who ultimately determine the value of an action.
The investors base their decisions about what to buy and what to sell on likely future profitability balanced against the risk that this profitability doesn’t materialize. The net effect is that two companies with identical profitability prospects may be differently valued, and that actions may create value although profitability prospects deteriorate.
Thus, the expressions “value creation” and “financial performance” are used interchangeably in this text and imply that the value of actions is a function not only of revenue and costs but also of risk.
Conventional surveys often miss the most important piece needed to estimate linkages to financial performance, namely questions and facts about the behavior that you would like to influence. When reforming survey analytics, the desired behavior is the starting point for new questionnaires.
Once the desired behavior has been defined, you need to specify an overall evaluation that predicts the desired behavior. Tentatively this may be “Satisfaction”, but it may just as well be some other abstract concept that appears relevant, such as “Attractiveness” for customers, “Motivation” for employees, “Fair Share Price” for investors, and “Responsibility” for indirect stakeholders.
The human brain hates chaos, so we tend to systemize details into broader driving factors, but we are not very selective when we do this. Typically there are only three to five driving factors behind people’s overall evaluation of a company.
Having defined tentative driving factors, it is time to convert them, together with the desired behavior and the overall evaluation, into specific questions to be used in questionnaires.
You should also determine which demographic and behavioral facts to collect directly from the respondents and which facts you can extract from existing databases.
When data have been collected from both respondents and databases, factor analysis is used to confirm that the driving factors are meaningful not only businesswise but also statistically. If need be, questions may be moved from one factor to another, used to create a new factor, or excluded from the cause-and-effect analysis.
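As a rough illustration, the grouping of questions into statistically verified factors can be sketched with a principal-component decomposition of the correlation matrix. This is a simplified stand-in for full factor analysis, and all respondents, question names and loadings below are simulated assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400  # hypothetical number of respondents

# Two latent driving factors, e.g. "Service" and "Price" (illustrative names),
# with deliberately different noise levels so the factors separate cleanly
service = rng.normal(size=n)
price = rng.normal(size=n)

# Six survey questions, each loading mainly on one factor plus noise
answers = np.column_stack([
    service + 0.2 * rng.normal(size=n),  # q1
    service + 0.2 * rng.normal(size=n),  # q2
    service + 0.2 * rng.normal(size=n),  # q3
    price + 0.6 * rng.normal(size=n),    # q4
    price + 0.6 * rng.normal(size=n),    # q5
    price + 0.6 * rng.normal(size=n),    # q6
])

# Eigendecomposition of the correlation matrix: questions that load on the
# same leading component belong to the same statistically verified factor
corr = np.corrcoef(answers, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)          # ascending order
order = np.argsort(eigvals)[::-1]                # largest components first
loadings = eigvecs[:, order[:2]] * np.sqrt(eigvals[order[:2]])

# Assign each question to the component on which it loads most strongly
assignment = np.argmax(np.abs(loadings), axis=1)
print(assignment)
```

In practice you would use dedicated factor-analysis software with rotation and quality measures; the sketch only shows the core idea that strongly correlated questions collapse into a small number of factors.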
The factor analysis is important because responses to most survey questions tend to correlate more or less strongly; see the attached documents 'Observations of Correlations in Consumer Survey' and 'Observations of Correlations in Employee Survey'.
Analysts have many times erroneously claimed causality after finding correlations between two individual variables. This has led to exaggerated reliance on point solutions rather than addressing the full complexity of the challenge – “We just need to answer the phone faster” or “We just need to better inform our colleagues about our decisions” – when the real problem may have been the entire customer service function or the way management trusts the employees.
As a consequence, the quantitative linkages between financial performance and survey scores must be established relative to statistically verified factors, not individual questions.
This is done in structural equations describing the stakeholder category’s decision process, i.e. the expected linkages between its driving factors, overall evaluation and desired behavior.
Here the calculations can be set up so that they show by how much a certain change in a driving factor is likely to impact the overall evaluation, and by how much a certain change in the overall evaluation is expected to impact the desired behavior.
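A minimal sketch of such chained calculations, using two ordinary least-squares regressions as a stand-in for a full structural equation model. All data are simulated and the path coefficients, scales and variable names are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # hypothetical respondents, scores on a 0-100 scale

# Two driving-factor scores (illustrative names)
f1 = rng.uniform(0, 100, n)   # e.g. "Service"
f2 = rng.uniform(0, 100, n)   # e.g. "Price"

# Simulated ground truth: the evaluation depends on the factors,
# and the desired behavior depends on the evaluation
evaluation = 10 + 0.6 * f1 + 0.2 * f2 + rng.normal(0, 5, n)
behavior = 5 + 0.4 * evaluation + rng.normal(0, 5, n)  # e.g. repurchase intent

# Path 1: driving factors -> overall evaluation
X1 = np.column_stack([np.ones(n), f1, f2])
b1, *_ = np.linalg.lstsq(X1, evaluation, rcond=None)

# Path 2: overall evaluation -> desired behavior
X2 = np.column_stack([np.ones(n), evaluation])
b2, *_ = np.linalg.lstsq(X2, behavior, rcond=None)

# A 5-unit improvement in factor 1 propagates through both links
delta_eval = b1[1] * 5
delta_behavior = b2[1] * delta_eval
print(round(delta_eval, 2), round(delta_behavior, 2))
```

A real structural equation model estimates both links simultaneously and reports quality measures for each link; the sketch only shows how the impact of a score change propagates from driving factor to overall evaluation to desired behavior.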
Quality measures are used to determine how reliable the links are. No analysis based on survey scores is as exact as the figures may appear. There are confidence intervals, explanatory power and several other quality indicators to consider before basing decisions about resource allocation on the conclusions from the analyses.
The concluding estimates of the value of actions are made outside the structural equations using Excel spreadsheets or equivalent tools.
First you estimate the costs associated with changing a specific score by a specific amount. Then you use the impact estimates from the structural equation, adding facts about the size of the stakeholder group affected - and if relevant also associated profit margins – to arrive at a value that this action could generate.
If you expect the risk level to change, this value has to be adjusted accordingly, for example by using present price/earnings ratios as a starting point.
Finally you repeat the value estimates for alternative actions to get several alternatives to discuss and choose from.
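The spreadsheet arithmetic described above can be mirrored in a few lines of code. Every figure below is a hypothetical assumption, inserted only to show how the impact estimate, group size, margin, action cost and P/E-based risk adjustment combine:

```python
# All figures are hypothetical assumptions for illustration
impact_on_behavior = 0.02    # +5 score points -> +2% more desired behavior (from the SEM)
customers_affected = 50_000  # size of the stakeholder group
revenue_per_customer = 400.0 # annual revenue per customer
contribution_margin = 0.30   # share of revenue that becomes profit
action_cost = 50_000.0       # annual cost of moving the score by 5 points

# Extra annual profit from the predicted behavior change
extra_profit = (impact_on_behavior * customers_affected
                * revenue_per_customer * contribution_margin)
net_value = extra_profit - action_cost

# Risk adjustment: value the annual net effect with a P/E multiple; if the
# action also lowers perceived risk, a higher P/E may be justified
pe_now, pe_after = 12.0, 13.0
value_today = net_value * pe_now
value_risk_adjusted = net_value * pe_after

print(f"net annual value: {net_value:,.0f}")
print(f"market value, current P/E: {value_today:,.0f}")
print(f"market value, risk-adjusted P/E: {value_risk_adjusted:,.0f}")
```

Repeating this calculation for each alternative action gives the comparable value estimates to discuss and choose from.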
Making such analyses with factor analysis and structural equations may seem complicated, but it is like riding a bicycle – hard before you know how to do it, easy once you know it.
In fact, with proper training and tools, the time for executing this kind of survey analytics from start to end, including generating tailored reports and integrated value simulators, is a matter of hours.
And compared with gut feel, the reformed survey analytics are a quantum leap towards better value estimates, thereby enabling more reliable strategies and better resource allocation.
The third phase, redesigned management processes, is characterized by more teambuilding, new possibilities for proactive decisions, and increased empowerment.
There are two main reasons why teambuilding is strengthened:
- Stakeholders perceive companies in terms of driving factors, not details, so the performance of an organizational unit depends on how well it works in total and together with the other parts of the company, not on the achievements of individuals
- Planning and budgeting need to be done based on how specific actions are perceived by all stakeholders and for all driving factors, particularly if resources are to be reallocated from one area to another where they are likely to create more value
The possibilities for proactive decisions stem from the fact that changes in attitudes precede changes in behavior by several months. Thus, a company that has taken its planned actions but registers no change in attitudes knows that there will be no impact on the desired behavior for the time being.
This gives the company an early warning: it needs either to inform better, take additional actions in high-impact areas, cut costs in low-impact areas, or review the financial forecast.
The empowerment possibility is the logical consequence of better survey analytics, increased involvement during the planning and budgeting processes, and a better early-warning system that gives management a good chance to take early corrective actions and remedy weaknesses in the strategy before they reach the end-of-year P&L.
This also means that the solution described above is a practical way towards increasing trust in the employees. Trust has to be gained. By first agreeing with management on what to achieve, then delivering results according to plan with little support and supervision, employees can prove that they are worth further responsibilities and challenges.
The solution has practical impact in many different areas, such as
- Strategy development and resource allocation
- Budgeting and control
- Training
- Scorecards
- Management hierarchies
Some of these have been touched upon above but nevertheless deserve to be highlighted here.
The solution makes strategy development and resource allocation easier, faster and more reliable. It is based on facts and objective analyses of stakeholder attitudes, leaving less room for guesswork, preconceived notions, and hidden personal agendas.
Budgeting is done by allocating resources to specific actions that should result in specific survey scores at specific points in time. As score changes typically precede actual changes in behavior by several months, the surveys become an important component in financial control.
However, the methodology is new so everybody involved in planning, budgeting and control needs proper training to know how to best benefit from the new information.
Scorecards may be revitalized and used to increase empowerment without loss of control. The original Balanced Scorecard concept had a fundamental weakness. Companies were erroneously expected to know exactly which measures to apply when balancing the scorecards and no quantitative methodology was suggested for this purpose. In real life companies selected customer and other stakeholder scores that were unrelated to their future financial performance so the scorecards didn’t help them achieve their targets. This problem is resolved with the new survey analytics.
Longer term, the solution may flatten management hierarchies. With more reliable strategies and management information systems that identify the exact weaknesses and the root causes of performance problems, managers can assume a greater responsibility without substantially added workload and yet feel confident that the business is under control.
The full solution is not suited for “quick & dirty” efforts, as it spans several years and involves basic management principles.
However, companies can take a few steps to remove the mental blocks as a preparation for considering the full solution:
- Get access to raw data
- Add facts, preferably per respondent
- Display the variables in scatterplots
These steps require little special competency and could be performed by most business controllers.
Begin by getting access to the raw data from previously conducted surveys.
The data collection firms are often hesitant to supply the raw data, but formally the data are generally the property of the company that ordered the survey. The hesitance may simply reflect that the data collection firms are afraid to reveal the poor state of their files and filing systems.
Respondent privacy may be a valid reason for not sharing the files, particularly when consumers and employees are the respondents. If so, all personal references must be removed by the data collection firm before the files are shared.
If there are no financial facts per respondent in those files, you can try to have the data collection firm add them afterwards but before removing the personal references and sharing the files.
In either case, it may be valuable for the company to know the demographics of the person who gave a certain open answer, but never the name of that person.
Sometimes it is necessary to rely on a higher level of financial data, such as branch office profitability or market share in various segments, when comparing scores and financial performance. This is particularly true for employee surveys, where results at respondent level are rare.
Once data sets from surveys are available, display two variables at the same time in a scatterplot, for example in Excel. Use standard features to draw a regression line and include the explanatory power.
If the line has a clear slope, you have an indication that there might be an interesting correlation between the two variables. Before drawing any conclusions, look at the explanatory power to see how strongly they seem to be related. If one of the variables is a measure of financial performance, expect the explanatory power to be low, probably below 10%.
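The slope and explanatory power that Excel reports for such a scatterplot can also be computed directly. The scores and margins below are made-up figures for ten hypothetical respondents:

```python
# Hypothetical raw data: one survey score and one financial measure per respondent
scores = [62, 71, 55, 80, 68, 74, 59, 77, 65, 70]  # 0-100 survey scores
margins = [14.0, 9.5, 12.0, 11.0, 15.5, 8.0, 13.0, 10.5, 9.0, 12.5]  # margins, %

n = len(scores)
mean_x = sum(scores) / n
mean_y = sum(margins) / n

# Least-squares slope and intercept, as Excel's trendline computes them
sxx = sum((x - mean_x) ** 2 for x in scores)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(scores, margins))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Explanatory power R^2: share of the variance in margins explained by scores
syy = sum((y - mean_y) ** 2 for y in margins)
r_squared = sxy ** 2 / (sxx * syy)

print(f"slope={slope:.3f}, R^2={r_squared:.1%}")
```

With these invented figures the explanatory power lands well below 20%, which in this methodology is a warning that the score on its own says little about financial performance.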
The methodology described above builds on the methodology developed by Prof. Claes Fornell at the University of Michigan for the American Customer Satisfaction Index (ACSI) and similar national satisfaction indices.
I am also thankful for the input of Dr. Marc Orlitzky, Associate Professor at Penn State and a prominent researcher in the field of linkages between CSR and financial performance.
You may also want to consider analytics based on sound conceptual frameworks rather than religion (namely popular belief systems about good and bad practices). I have used this approach with remarkable results on several occasions (see my hack "The Trust Extender"). Specifically, read my article "Corporate Governance Best Practices: One size does not fit all" at http://trustenablement.com/local/One_Size_Does_Not_Fit_All-ICPA_Singapor.... Then I would direct you to two papers I wrote, "Trust Measures and Indicators for Customers" and "Trust Measures and Indicators for Investors" (see http://trustenablement.com/index.htm#StakeholderTrust), which explain how a predictive framework can help reform analytics.
Please excuse me for a very late response to your valuable comment.
I certainly agree that Trust is a highly valid concept to consider in the kind of analysis that I do in ValueMetrix. For me, Trust is one of several Overall Evaluations that may be reliable indicators of future Desired Behavior of both direct stakeholders (customers, employees, investors) and various indirect stakeholders.
However, as you correctly mention, one size does not fit all.
The approach that I now describe from three different angles (and soon four) on the MIX site makes a point of selecting the Overall Evaluation that in each specific situation is the best indicator of future Desired Behavior. In some cases this Overall Evaluation may well be Trust, but in other situations it can be Satisfaction, Motivation, Value-For-Money, Attraction, Joy or some other similar concept.
In the ValueMetrix methodology and the associated software, I use statistical quality measures to determine which is the best Overall Evaluation to use in a certain situation. To mention an example, for a client in the employee field "Joy for Work" turned out to be at least twice as good as "Employee Satisfaction" as a predictor of willingness to recommend the company. Yet, I don't know if there could be other Overall Evaluations, such as Trust, that would be even better predictors. We did not include Trust questions in that survey, so there is no way for me today to tell how Trust would rank against Joy.
Let me also stress that in ValueMetrix, I go beyond classifying trust indicators. The thrust of the methodology would in the Trust case be to quantify by how much the overall Trust would change if the score for a certain Trust indicator would change by a certain amount, typically by 5 units on a 0 to 100 scale. To get the financial value of such changes in overall Trust, I would use the financial value of the likely changes in Desired Behavior as a basis for the calculations, making adjustments both for the cost of the actions and the marginal changes in production costs that the company would incur from the volume changes caused by the changes in overall Trust.
I am now spending a couple of days improving my MIX documents. A better treatment of Trust is no doubt one of the changes I will make!
Thanks for your help.
Best regards,
Anders