Hack:
Use Simple Signals to Overthrow the Great Tyranny of Ideas
We don’t lack ideas. They inundate our inboxes and flood our feeds. But we do lack an effective, efficient way to filter out the noise and focus on the best. We can gather simple signals to better identify the best and avoid the tyranny of the rest.
"Ninety percent of [science fiction] is crud, but then, ninety percent of everything is crud."
- Sturgeon's Law
Most organizations do not want for ideas. They are packaged as status updates, event talks, online presentations and infographics. They overwhelm our social feeds and our inboxes. We like to call this “crowdstorming” -- it’s like brainstorming, but scaled dramatically as we move our idea search and evaluation online and make use of our expanded networks.
For the most part, we are stuck using a particular pattern of crowdstorming. We call this the “search” pattern because it is focused on sourcing ideas and opportunities. We’ve been quite successful at scaling up access to ideas and in the process we’ve created a new problem: Most organizations have not scaled up their approach to evaluating ideas.
The result is that we are not able to properly analyze the vastly increased flow of ideas. Overwhelmed evaluation processes lead to the misallocation of resources: sub-optimal choices about which ideas deserve further exploration and investment. In the benign form of this tyranny, the best ideas might be identified, but resources are spread too thin because we are unable to make more focused choices. By failing to eliminate more options at the fuzzy front end, we squander resources in the costly process of realizing ideas.
In a recent survey of the top 1,000 global innovators, Booz & Company found that 75% of respondents believe they are marginally effective in the front-end process, and the “idea conversion” stage is called out as being particularly important. Some respondents mentioned explicitly that idea sourcing has become less of a problem when compared to effective selection. This supports our findings in various open innovation and co-creation settings: we lack the tools to select from a universe of many, many more options.
Beyond the very tangible issue of misallocating scarce resources, there are some softer issues too. When stakeholders understand the less-than-ideal processes used to evaluate and select ideas, they lose confidence in the system that supports those ideas. Certainly there are processes that enable stakeholders to have their say and offer feedback, but very often these processes suffer from limited transparency, creating a further concern that stakeholder feedback is not valued. With that, the likelihood of future participation drops and high-quality feedback fades, leaving little to no resistance against the tyranny of ideas.
There is a second crowdstorming pattern that we have noticed; we call it the collaboration pattern. Like the search pattern, there is an emphasis on sourcing ideas, but there is also a focus on evaluating them. Specifically, priority is given not only to idea creation but also to feedback about the ideas. The feedback role is scaled up by distributing evaluation across many more participants. The collaboration pattern is contrasted with the search pattern in Figure 1, below.
Figure 1: The search crowdstorm pattern is focused on connecting with ideas (and people who source and generate the ideas). The collaboration crowdstorm pattern places as much emphasis on the roles involved in evaluating ideas.
The benefit of scaling up feedback in this way is that a filter is created to address Sturgeon’s Law -- we can more efficiently avoid the distraction of the 90% of crud that threatens to suck up our valuable development resources. We have a chance to rein in the tyranny of ideas.
But how do we gather quality feedback and evaluation at scale, in an efficient way? For this approach to work, we need participation from stakeholders who can represent diverse perspectives, spanning a range of expertise. And we need an efficient way to solicit and gather feedback.
Simple Signal Sharing
When we encounter a new idea, we can use simple attributes to judge the relative value of the idea. We might look at who the idea is coming from or where it is published. But there are many signals that can be generated in response to an idea to give us more evaluation capabilities. Here are a few examples:
- Angel.co - Uses responses to startup ideas to highlight the most promising ones. Signals such as the number of followers or the mix of investors are used to create lists, such as trending ideas (based on changes in a signal) or lists based on the network of investors who have expressed interest in the plan.
- LEGO.cuusoo.com - Uses a 10,000-vote threshold before considering an idea. LEGO makes it clear that its internal team will review any idea that receives 10K votes, thereby drastically limiting the number of ideas that the team needs to evaluate each quarter.
- Kickstarter.com - Uses indications around funding to track the attractiveness of ideas. The rate of funding is interesting, as is the total amount and total number of backers.
- Amazon.com (studios) - Makes use of Amazon’s experience with product reviews to evaluate ideas, or more specifically the elements of TV ideas such as scripts. Ratings and downloads indicate level of interest and feedback about potential.
- Quirky.com - Uses voting and feedback to select ideas for further development. Like Amazon, the exact criteria are not clear but more votes and more positive feedback will ensure consideration.
- MyStarbucksIdea.com - Has been using voting and feedback to identify and prioritize ideas for consideration.
Figure 2: Simple signals collected in the service of creating rankings to identify the best ideas.
These examples span a range of organizations in terms of size, sector and selected signals. (In fact, in this challenge, we see at least 3 types of signals -- Likes, Comments and Shares -- though we don’t have an explicit understanding of how these might feed into evaluation.)
All of these examples ask participants to share simple signals. The signals are easy to understand and they are easy to create (liking, scoring or commenting), or they are just by-products of something we might do anyway (sharing or following).
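To make “simple signal” concrete: a captured signal need not be more than a small, uniform record of who did what to which idea. The sketch below is a minimal illustration in Python; the field names and signal kinds are our own illustrative assumptions, not any particular platform’s schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Signal:
    """One simple signal left by a participant about an idea."""
    idea_id: str          # which idea the signal refers to
    participant_id: str   # who created the signal
    kind: str             # e.g. "like", "vote", "rating", "comment", "share"
    value: float = 1.0    # e.g. a 1-5 rating; defaults to 1.0 for likes/votes
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# A few illustrative signals of the kinds mentioned above (invented data).
signals = [
    Signal("idea-42", "alice", "like"),
    Signal("idea-42", "bob", "rating", value=4.0),
    Signal("idea-7", "alice", "comment"),
]
```

Because every signal shares this shape, the same stream of records can later feed both idea rankings and contributor rankings.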
But, collecting signals may be the easiest part of the process.
Recruiting Participants to Offer Feedback
Who should participate and how do we help ensure the right talent shows up? To start, it will help to list the ideas in one place, so that we can periodically remind stakeholders to visit a single destination. Then we can make use of existing corporate communications channels like email updates, internal social feeds, etc. Idea owners can use whatever communication tools are at their disposal to distribute their ideas, too. And the same is true for idea team members or supporters.
Giving Stakeholders GAME
We want to cater to a wide range of incentives -- from simple recognition for useful feedback to sharing in the upside of an idea’s success (why give rev-shares only to idea inventors, when selecting ideas also has value?). In general we want to cover GAME:
- Good - Doing something positive for the organization or broader community
- Attention - Of the sort that might happen if one were to win this prize
- Money - Some form of financial compensation
- Experience - Learning and being part of a successful idea development.
Each organization needs to determine the right mix of incentives in order to motivate participation for its challenge. But no matter the challenge, it is generally important that an organization considers how to reward people for playing productive roles in the feedback process. Our signals can help us solve this problem too. We can use the signals not just to understand ideas but also to assess the people providing feedback about them.
From Signals to Rankings (for Ideas and Participants)
We can transform these simple feedback signals into at least two scores:
- Idea scores - In its simplest form, this is just a combination of signals (votes, likes, comments, etc.). It is flawed in many ways and open to gaming, but it provides a first-order filter. We can layer in weightings and derivative measures in more advanced versions later.
- Contributor scores - Again, the simplest form might just look at volume of activity. It offers a simple way forward to start; then we can get to more complex questions about things like quality of feedback. We have lots of evidence to suggest that participation will follow a power law, so identifying the top 10% of contributors should be possible, even if there are some inaccuracies in the exact rankings.
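As a rough sketch of what these first-order scores might look like in practice, the snippet below derives both rankings from the same stream of signals: idea scores as a weighted sum of signal counts, contributor scores as plain activity volume. The weights and sample data are invented for illustration, not a recommended formula.

```python
from collections import Counter, defaultdict

# Each signal: (idea_id, participant_id, kind). Purely illustrative data.
signals = [
    ("idea-1", "alice", "vote"),
    ("idea-1", "bob", "comment"),
    ("idea-1", "carol", "share"),
    ("idea-2", "alice", "vote"),
    ("idea-2", "alice", "comment"),
]

# Assumed weights -- in practice these would be tuned, or start at 1.0 each.
WEIGHTS = {"vote": 1.0, "comment": 2.0, "share": 1.5}

idea_scores = defaultdict(float)
contributor_scores = Counter()

for idea_id, participant_id, kind in signals:
    idea_scores[idea_id] += WEIGHTS.get(kind, 1.0)
    contributor_scores[participant_id] += 1  # simplest form: volume of activity

# Rank ideas and contributors by score, highest first.
ranked_ideas = sorted(idea_scores.items(), key=lambda kv: kv[1], reverse=True)
ranked_contributors = contributor_scores.most_common()

print(ranked_ideas)         # [('idea-1', 4.5), ('idea-2', 3.0)]
print(ranked_contributors)  # [('alice', 3), ('bob', 1), ('carol', 1)]
```

Note that the same raw signals feed both rankings, which is the point made below about Amazon and Quirky.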
The idea score is likely familiar, so it’s worth talking a little more about the contributor scores and rankings. Amazon maintains a ranked list of its top reviewers, and Quirky goes much further, offering percentages of future product revenue based on feedback contributions. Importantly, the signal is the same -- it is being used to sort through ideas AND to evaluate contributors.
With scores, we are then able to rank both ideas and contributors. We then have the means to identify the top ideas as seen from the stakeholders’ perspective and, just as important, the means to recognize and reward those who are providing feedback.
Our main objective is to do a better job of deciding where NOT to allocate resources. Secondarily, we want to give a more explicit and transparent way for stakeholders to have a say in where resources are allocated (or NOT allocated).
Angel.co and Kickstarter give us clear, comparable examples that are literally about resource allocation. Other examples provide directional help by removing 90% of the noise, as Amazon (Studios) does and as LEGO does via the Cuusoo platform.
In the case of Angel.co and Kickstarter, the primary value of the signals is to potential supporters and the teams proposing the ideas. But this activity can certainly be used to rank ideas (by level of funding, rate of funding, diversity of supporters, etc.). In the case of Amazon and LEGO, the process is a little different: the signals are used to lift ideas out of the noise -- high numbers of votes, downloads or high ratings move ideas to the top of the pile, signaling that they deserve an allocation of resources for further evaluation.
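The threshold-style filtering used by LEGO could be sketched in a few lines, as below. The vote counts are invented; only the 10,000-vote cut-off mirrors LEGO’s publicly stated Cuusoo rule, and any gate an organization trusts would work the same way.

```python
# Hypothetical vote tallies per idea (illustrative numbers only).
vote_counts = {"idea-1": 12_450, "idea-2": 310, "idea-3": 10_002}

# Assumed cut-off; LEGO's Cuusoo platform publicly uses 10,000 votes.
REVIEW_THRESHOLD = 10_000

# Only ideas that clear the threshold are passed to the internal review team.
for_review = {idea: votes for idea, votes in vote_counts.items()
              if votes >= REVIEW_THRESHOLD}
print(for_review)  # {'idea-1': 12450, 'idea-3': 10002}
```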
Our expectation is that these collections of signals will spur additional conversation and connection between stakeholders. And, they will also provide the ranked and filtered list that leaders can leverage to overcome the tyranny of ideas.
To test the approach, here are some first steps:
1. Select a tool to gather signals - one that will enable you to post content such as text, images and video, and gather signals such as voting, commenting, sharing and rating. Publishing systems like blogging platforms can be adapted to the task; the main requirement is that the data from the signals can be gathered and analyzed.
2. Gather a few hundred ideas - They might be generated in response to a theme, existing ideas already under consideration by a group, or a collection of ideas from elsewhere (such as Angel.co or Kickstarter). Since the focus is on feeling out the process, it might be advantageous to use ideas from elsewhere. Set a time limit -- no more than six weeks to test the process.
3. Invite stakeholders - Diversity matters, so find an existing group that spans multiple parts of the organizational hierarchy. The group need not be more than 100 people. Share the objectives and context for the test. There should also be a subgroup of those who would traditionally be involved in selecting ideas for further exploration.
4. Solicit feedback - Over a 10 day period, ask participants to review and interact with ideas -- i.e. rate, offer feedback, share, etc.
5. Analyze and share results -- Ideas can be sorted according to specific signals, like average rating, or by a combination of signals (a minimal code sketch of this comparison follows the list). Ideally a number of ranked lists will be created:
- Sorted lists based on simple signals
- A composite list based on the total amount of activity around an idea
- A list generated by traditional decision-makers (created either before seeing the signals or in response to them).
6. Discuss -- The focus of the discussion should be differences or similarities between the rankings resulting from the analysis of signals vs. the ranked list created by traditional decision-makers. As a first pass, we are interested in what appears in the top 10% (more than the exact rank order) to determine if we can filter out 90% of the less interesting ideas.
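To make steps 5 and 6 concrete, here is a minimal sketch of the comparison, assuming we already have a composite score per idea and a separate ranked list from the traditional decision-makers; all names and numbers are invented for illustration.

```python
# Composite scores derived from the collected signals (illustrative numbers only).
composite_scores = {
    "idea-01": 87, "idea-02": 12, "idea-03": 64, "idea-04": 91, "idea-05": 33,
    "idea-06": 5,  "idea-07": 72, "idea-08": 18, "idea-09": 55, "idea-10": 40,
}

# Ranked list produced by the traditional decision-makers (also illustrative).
traditional_ranking = ["idea-04", "idea-07", "idea-01", "idea-09", "idea-03",
                       "idea-10", "idea-05", "idea-08", "idea-02", "idea-06"]

top_fraction = 0.10  # we care about the top 10%, not the exact rank order
cutoff = max(1, round(len(composite_scores) * top_fraction))

signal_top = {idea for idea, _ in
              sorted(composite_scores.items(), key=lambda kv: kv[1], reverse=True)[:cutoff]}
traditional_top = set(traditional_ranking[:cutoff])

print(f"Signal top {cutoff}:      {signal_top}")
print(f"Traditional top {cutoff}: {traditional_top}")
print(f"Overlap:                  {signal_top & traditional_top or 'none'}")
```

Ideas that the signals surface but the decision-makers leave out, and vice versa, are the natural starting points for the discussion in step 6.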
Next steps should consider adjustments to the ideas being considered, the number and mix of participants, and the composite scoring used as the ranking criterion to arrive at the top 10% of ideas.
We first heard the term “Tyranny of Ideas” from Peter Espersen, Head of Online Communities at LEGO. His initial framing of the problem really helped to clarify the issue for us.
Much of the thinking about evaluation criteria in the innovation process was inspired nearly two decades ago by the work of Prof. David Wallace at the MIT Computer-Aided Design Lab.
Various online examples referenced above point the way to real-world potential - Amazon, Quirky, AngelList, Kickstarter, Starbucks and LEGO. There are many more platforms that are harvesting signals in interesting ways, but one of our all-time favorites is Flickr Interestingness.
More on Contributor Scoring
We wanted to respond to an offline question about examples/inspiration that are more specifically focused on "contributor scoring".
1. http://stackoverflow.com/users - uses signals to arrive at a reputation score and maintains an ongoing ranking of contributors.
2. www.giffgaff.com (from O2/Telefonica) - is not focused on published, ranked lists of contributors, but does make pretty sizable cash payouts. This is a fairly detailed description of how they are assigning points to contributors - http://community.giffgaff.com/t5/Contribute-Innovation-Promotion/More-on...
3. www.quirky.com - offers a good explanation of their points assignment approach - note there are explicit differences between the percentage of points allocated for ideas vs. activities related to feedback about the ideas - http://www.quirky.com/learn/influence
4. www.jovoto.com/community - like Quirky, Jovoto measures contributions in the form of ideas and feedback, but unlike Quirky, it maintains a community ranking.
5. www.klout.com - is one of the more talked-about reputation scoring approaches at present, using signals obtained across multiple platforms, from retweets on Twitter to comment counts on Facebook. The end goal is similar to the previous examples: to get a relative sense of a contributor's level of influence.
In the giffgaff process, they are attempting to guard against unwanted behaviors that can result when you share specific points and rankings, so they hide some of this detail. Conversely, Quirky makes this very explicit and uses other approaches to keep bad behaviors in check -- most notably, influence flows to people once ideas are selected, and since Quirky makes the idea selections, they can ultimately control the flow of influence to participants.