Evolving The Creativity Scan
Editor | 21 Jun 2018
Our fifteen-year programme of research into ‘effective creativity’ (also known as ‘innovation’) has revealed a number of critical insights into the creative process. That research showed us a need to separate two important aspects of intelligence: first, what we might think of as our ‘creative intelligence’; second, the more traditional (‘IQ’) interpretation of the word, which we might define as ‘knowledge intelligence’. In simple terms, ‘knowledge intelligence’ is how much stuff we know, and thus how well we perform in school exams and general knowledge quizzes.
Our hypothesis has been that, as we live our lives, there is an innate transfer of intelligence from one of these types to the other:
Figure 1: ‘Creative’ Versus ‘Knowledge’ Intelligence
When we are born, we all start with masses of unconnected neurons in our brains. Because they are unconnected, there are many, many ways in which they could be connected. Our creative potential – and therefore, we propose, our creative intelligence – is, in this state, at its peak. Never again will we experience such plasticity and flexibility. And a good thing too, since, as we quickly learn, having a certain amount of knowledge about the world is useful for our survival (‘don’t put your hand in the fire’). Certain neurons get connected to others, and those connections get reinforced into immovable, concrete pathways as our knowledge increases. But there is a definite trade-off in this essential advance: the more knowledge we acquire, the more neural pathways we make rigid, and hence the less creative potential we have…
…until, ultimately, after, say, finishing a PhD in ‘high Nusselt Number, particle-laden supersonic aerodynamics’ (insert image of one of my best friends here) we have an awful lot of really useful specialized knowledge and very little potential to create new supersonic aerodynamic solutions.
The knowledge-versus-creative-potential trade-off is inherent. Inherent, but, fortunately, not unsolvable. ‘Effective Creativity’, then, is about solving the contradiction:
Figure 2: Solving The Creative-Knowledge Intelligence Contradiction
And it is a contradiction that really has to be resolved: the designers of Concorde, for example, were stuck when it came to reducing noise because their excess of domain knowledge prevented them from thinking outside the box. Conversely, while I’m pretty confident my twelve-year-old niece would be great at generating brand spanking new ideas for a supersonic aircraft, I don’t think any of us will be setting foot inside one of her designs anytime soon. What is needed, here and everywhere else, are ways and means of thinking that successfully combine the best attributes of both high knowledge and high creative intelligence.
The measurement of this ‘creative intelligence’ was what inspired us to design and validate the Creativity Scan starting back in the late 1990s. At the time of its inception, in true TRIZ fashion, we started from an assumption that ‘someone, somewhere had already solved our problem’. No point re-inventing any unnecessary wheels, we thought.
Although it was very easy to find a whole swarm of ‘creativity tests’, none seemed to quite capture the idea of either ‘effective creativity’ or ‘creative intelligence’. Measuring the ‘creativity’ of an individual, it seemed, was a subject full of difficulties. Some tests – Myers-Briggs, DISC, Belbin and Kirton Adaptor-Innovator, to name four – appeared to capture some of the story, while others – the Torrance Test of Creative Thinking (TTCT), for example – seemed more suited to keeping my niece amused for an hour than to delivering any objectively meaningful answer.
And so our job became a combination of coupling together partial solutions, plus filling in the (many) gaps with the perspectives that ‘serious’ creative problem-solving philosophies like TRIZ offered up through their fifty-plus-year journey to capture the critical differences between ‘invention’ and ‘innovation’. The first task was to compile a comprehensive and coherent list of attributes. We ended up with eight:
Abstraction: the ability to abstract concepts from ideas.
Connection: the ability to make connections between things that don’t logically appear to have any connection with one another.
Perspective: the ability to shift one’s perspective on a situation – in terms of space and time, and interfaces with other people. In many ways, the perspective skill is about ability to empathise with the views of others.
Curiosity: the desire to change or improve things that everyone else accepts as the norm.
Boldness: the confidence to push boundaries beyond accepted conventions. Also the ability to eliminate fear of what others think of you.
Paradox: the ability to simultaneously accept and work with statements that are in conflict or contradictory to one another.
Complexity: the ability to carry large quantities of information and be able to manipulate and manage the relationships between such information.
Persistence: the ability to force oneself to keep trying to derive more and stronger solutions even when good ones have already been generated.
Figure 3: Creativity Self-Assessment Tool Output Format
Having created a structure, the next job was to solve what we saw as three critical problems found in nearly all kinds of psychometric assessment tool:
1) The parallel need for a measurement method that is detailed enough to deliver a meaningful answer, but at the same time simple enough that anyone using the instrument would be motivated to keep going to the end.
2) The need for a measurement tool that participants would not be able to ‘cheat’ on. Many questionnaire-based surveys do a pretty poor job of obscuring the ‘right’ answers, and just about all of them are prone to the ‘4G’ effect – people either Gift, Game, Guard or Guess their answers rather than providing the truth (Figure 4). Our job was to ensure that participants taking the Scan stayed in the neutral ‘Gold’ part of the model. We ended up solving this problem in two ways: firstly, by formulating questions in such a way that it was not possible to know what the ‘right’ answer might be; secondly, and probably more importantly, by creating a suite of questions, each of which impacted our eight creativity attributes in different ways. So, to take one example, when a person indicates that they enjoy doing crossword puzzles – one of the closed questions – the Scan scoring system will score positively on the ‘Connection’ axis, but negatively against the ‘Complexity’ and ‘Boldness’ axes.
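The crossword example suggests that each closed question carries a small vector of positive and negative weights across the eight attributes. A minimal sketch of that scoring mechanism in Python follows; the question names and weight values here are entirely invented for illustration (the article only confirms the eight attribute names and the crossword example), and the real Scan weightings are not public.

```python
# Hypothetical sketch of multi-attribute scoring for closed questions.
# Weights are invented for illustration; real Scan weightings differ.

ATTRIBUTES = ["Abstraction", "Connection", "Perspective", "Curiosity",
              "Boldness", "Paradox", "Complexity", "Persistence"]

# Each question maps to a sparse weight vector over the attributes.
QUESTION_WEIGHTS = {
    "enjoys_crosswords": {"Connection": +1, "Complexity": -1, "Boldness": -1},
    "prefers_routine":   {"Curiosity": -1, "Persistence": +1},  # invented
}

def score(responses):
    """Accumulate attribute totals from yes/no question responses."""
    totals = {attr: 0 for attr in ATTRIBUTES}
    for question, answered_yes in responses.items():
        if answered_yes:
            for attr, weight in QUESTION_WEIGHTS.get(question, {}).items():
                totals[attr] += weight
    return totals

result = score({"enjoys_crosswords": True, "prefers_routine": False})
# 'Connection' rises while 'Complexity' and 'Boldness' fall, mirroring
# the crossword example in the text.
```

Because every question touches several axes at once, no single answer reveals the ‘right’ response for any one attribute – which is precisely the anti-gaming property described above.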
Figure 4: 4G Customer Survey Effect
3) Specific to the assessment of creativity, there is a need to account for the divergent-convergent nature of the creative process. ‘Effective creativity’, in our terms, requires at least two of these divergent-convergent cycles, and any measurement instrument therefore had to be able to accommodate each of them. In practical terms, this meant a combination of open and closed question types was needed.
The first version of the Creativity Scan was eventually launched in 1999, really as a fun way to drive traffic to our website. Then, as luck would have it, it was featured on BBC World, and within the space of a few weeks we had had over 100,000 people taking the test. We had hit upon a nerve it seemed.
A big enough nerve to prompt us to conduct a series of controlled experiments to try and validate the findings of the test. Over the years we’ve had various opportunities to do this with some of our long-term innovation clients – clients that had already done much work to understand the ‘creative potential’ of the people within their work teams.
These trials, plus the continued interest in the Scan (to date the total number of completed tests is well over 500,000) have since prompted us to undertake an ongoing programme of enhancements.
The first of these targeted the open questions featured in the Scan (‘what can you see in this picture?’). The first version of the tool simply tallied the number of answers that a person chose to enter, but did nothing to assess the relevance or quality of those answers. When we first built the tool, computer-based semantic analysis tools were not readily available. Now that they are, it has become possible to do a far better job of gauging the ‘effective creativity’ being demonstrated in a person’s answers. If a respondent enters words or phrases connecting potential uses of a brick to ‘building a house’, for example, this ought not to score as highly as, say, ‘topping a cake’ – a connection that requires two degrees of separation (brick – chocolate – melt). The greater the number of these separations, we hypothesized, the greater the level of creativity.
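The ‘degrees of separation’ idea can be operationalised as a shortest-path search over a concept graph. The sketch below uses a tiny hand-built graph around the brick example purely for illustration – a real implementation would draw on a large semantic network or word-embedding model rather than this toy dictionary.

```python
from collections import deque

# Toy concept graph for the brick example; edges invented for illustration.
CONCEPT_GRAPH = {
    "brick":     {"house", "chocolate"},
    "chocolate": {"brick", "melt", "cake"},
    "melt":      {"chocolate"},
    "house":     {"brick"},
    "cake":      {"chocolate"},
}

def degrees_of_separation(start, goal):
    """Breadth-first search: minimum number of hops between two concepts."""
    if start == goal:
        return 0
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        for nxt in CONCEPT_GRAPH.get(node, ()):
            if nxt == goal:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None  # concepts not connected in this graph

# 'brick' -> 'house' is one hop (the obvious answer); 'brick' -> 'melt'
# needs the intermediate 'chocolate', so it scores as more creative.
```

Under the hypothesis in the text, an answer’s creativity score would then grow with the path length between the stimulus concept and the answer concept.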
The second evolution, not yet formally launched, brings the Creativity Scan into the fold of the growing suite of PanSensic narrative analysis software tools. Rather than asking someone to fill in a questionnaire, the PanSensic strategy says that we can automatically assess a person’s ‘effective creativity’ by analysing large amounts of their already-existing narrative data – Facebook page, blog articles, email traffic, etc. We’ve already built the tools to automatically assess a person’s MBTI, Belbin or KAI score. In the next couple of months, we’re expecting to be able to add the Creativity Scan to that roster. In this way, we’re hoping to engage past, present and future users of the Scan to assist us in cross-calibrating the automated Scan output coming from PanSensic with the results obtained when someone takes the questionnaire version of the Scan.
Once that’s done, we’re a mere step away from being able to conduct very wide-ranging analyses of ‘effective creativity’ across whole swathes of society. How creative are British people relative to how creative they think they are? Are surveyors and accountants less creative than other professions? Are the ‘creative arts’ creative at all? We’re pretty close to really knowing, for the first time in mankind’s history. I’m feeling a little nervous already.