“There is safety in numbers” is something you’ll often hear in the smooth, affirmative narration of David Attenborough.

And there is often truth to this claim, beyond just the survivalist ecosystems of the animal kingdom. Yet in our technology-driven world, numbers veil us in an altogether different sense of safety: big data, and a life reliant on its unquenchable thirst for vast amounts of information.

With compelling patterns emerging from the fickle oceans of ones and zeros, we’ve come to breed an affection for quantity – the more the better. But is it?

At Arctic15, we’ve long held the view that quality outweighs quantity. Businesses may thrive on numbers, but conferences should be a melting pot of long-term, strategic and high-quality investments rather than a race to catch every keynote speech and hand out as many business cards as you can without looking like a broken vending machine. You could argue that this is just an opinion, a personal preference if you will, and perhaps you are right.

But I’m not here to roll over so easily (where’s the fun in that?).

I’d be very tempted to go down the route of thought experiments and conjecture – you know, the usual meandering dips into the myriad contrarian ways in which you can argue without actually picking sides. However, this affair I’ll confront with its own weapons: quantitative data. If numbers don’t lie, then neither will their conclusions. By the end you might be more inclined to believe that quality is not just another way to make a business principle sound hip.

Let me introduce a rather intriguing concept discussed in the academic field of market and organizational research: decision effectiveness. This refers to the rather straightforward idea that correct, or “accurate”, judgements can be made in a given situation, and that the net sum of these right choices can essentially be evaluated.

The first systematic endeavours by decision scholars to determine whether quality or quantity of information was more likely to lead to good choices were quickly faced with the problem of what constitutes a “good” choice. Some of these early studies (1) figured they would work around this problem by devising tasks in which consumers had to choose between two products based on their nutritional values. The higher the nutritional value, the more correct the choice. Simple.

Hardly. What we conceive of as good nutrition tends to be subjective, and the studies came under fire for that very fact: the correct choice for one person could just as well be the incorrect choice for someone else. With professional nutritionists agreeing on this point, the search for an information criterion by which to evaluate personal choice was back to square one.

This is where it gets a little complicated. Mathematicians, ever crafty in their art, stepped in and built models which would provide reliable universal measures of the “goodness” of a choice in relation to the attributes of information at one’s disposal.

One such influential model by Stanford University economists (2) begins with so-called “ideal” conditions, that is, choice situations with reliable metrics by which alternatives can be evaluated. The quantity of information, and the relative value this brings to a correct choice, is denoted by ascribing numeric scores to each piece of information considered in terms of its utility. Overall utility in terms of quantity is just the sum of the attribute utility scores.

Quality, on the other hand, is captured by scoring the utility of the information on the basis of how much that individual attribute influences the decision between two alternatives. In other words, the total relevance of that particular information in the grand scheme of a decision.
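To make the distinction concrete, here is a toy sketch in Python. It is my own illustration, not the model from the paper: the attribute names, the numbers, and the particular weighting scheme are all invented for the example.

```python
# Toy illustration (not the paper's actual model): each piece of
# information about an alternative carries a utility score. "Quantity"
# is how many such attributes you have; "quality" is how much each
# attribute actually matters to the decision (its weight).

def overall_utility(attribute_utilities, weights=None):
    """Sum of attribute utilities, optionally weighted by relevance."""
    if weights is None:
        weights = [1.0] * len(attribute_utilities)  # quantity view: all equal
    return sum(u * w for u, w in zip(attribute_utilities, weights))

# Two hypothetical job offers, each described by three attributes
# scored 0-100: say salary, work-week length, and company prestige.
job_a = [80, 30, 40]
job_b = [50, 90, 70]

# Unweighted ("quantity") totals favour job B...
print(overall_utility(job_a), overall_utility(job_b))  # 150.0 210.0

# ...but if salary dominates this person's decision ("quality"),
# job A comes out ahead.
salary_first = [1.0, 0.1, 0.1]
print(overall_utility(job_a, salary_first),
      overall_utility(job_b, salary_first))
```

Notice how the two views can disagree: summing everything equally points one way, while weighting by relevance points the other.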

With the distinct yet related notions of quantity and quality expressible in mathematical notation, the comparative importance of each to making correct choices can now be modelled and evaluated. But has the issue of subjectivity been resolved?

It has, in fact: In terms of utility, people are free to vary as much as they want (and they will!) – some might subjectively ascribe more importance to certain aspects of information over others when making decisions. This subjective variation is captured by handing out questionnaires in which attributes of, for example, a job are evaluated by respondents on the basis of importance and utility on a 100-point scale (higher values meaning more desired). In these questionnaires participants rank things like the length of the work week or how prestigious the company is.
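As a rough sketch of how such ratings could feed into a model – the 100-point scale comes from the text, but the attribute names and the normalisation step are my own assumptions – one simple approach is to treat each attribute’s rating as its share of the respondent’s total stated importance:

```python
# Hypothetical questionnaire ratings on a 100-point scale
# (higher = more important to this respondent).
ratings = {"salary": 90, "work_week_length": 30, "company_prestige": 45}

# One simple way (my assumption, not the paper's method) to turn
# ratings into decision weights: normalise by total stated importance.
total = sum(ratings.values())
weights = {attr: score / total for attr, score in ratings.items()}

for attr, w in weights.items():
    print(f"{attr}: {w:.2f}")
```

The weights then sum to one, so each respondent’s subjective priorities can be compared on the same footing.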

The next step is where the, well, effectiveness of decision effectiveness models really begins to show. Participants can be presented with tasks in which they choose between two job alternatives, or pick the best from a set of five. The jobs presented to each subject are individually constructed based on the aforementioned rankings.

Here’s the real kicker though: the notation allows for a standardised transformation of the criteria which influence the decision made, and the goodness of the decision is quantified on the basis of the questionnaire rankings. With the framework of decisions turned into values, controlled “situations” can be strategically built in which quantity, denoted as the number of attributes, and quality – that is, attributes which ranked high or low – can be controlled for and compared. This way the goodness of the decision, however subjective that choice may be, can be evaluated in terms of the weights ascribed to quantity or quality.
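To sketch what quantifying the “goodness” of a decision might look like – again a simplified illustration of my own, not the paper’s actual measure, with invented jobs and weights – one could score a choice relative to the best option available under the respondent’s own weights:

```python
# Simplified illustration (not the paper's actual measure): score the
# chosen alternative's weighted utility against the best alternative
# available, so 1.0 means the subjectively optimal choice was made.

def weighted_utility(alternative, weights):
    return sum(u * w for u, w in zip(alternative, weights))

def decision_effectiveness(chosen, alternatives, weights):
    best = max(weighted_utility(a, weights) for a in alternatives)
    return weighted_utility(chosen, weights) / best

# Three hypothetical jobs over three attributes, with weights standing
# in for one respondent's questionnaire rankings (salary dominates).
jobs = [[80, 30, 40], [50, 90, 70], [60, 60, 60]]
weights = [1.0, 0.1, 0.1]

# Picking jobs[1] is not this person's best choice under their own weights.
print(round(decision_effectiveness(jobs[1], jobs, weights), 2))  # 0.76
```

A score below 1.0 flags a choice that falls short of what the person’s own stated priorities would recommend – exactly the kind of quantity the experiments compare across information conditions.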

Well, it turns out that people make choices that more accurately reflect their own pre-defined desires when they are provided with high-quality rather than high-quantity information. Furthermore, high-quality information also made participants more confident in the choices they made. Of course, having more information rather than less, up to a certain point, leads to improved decision-making in general, but how well we put the information at hand to use also depended on its quality.

The likely culprit for this effect is known as information overload, a fairly self-explanatory term. Broadly speaking, as information increases we become more cognitively encumbered, and this reflects poorly on the quality of the decisions we make. It isn’t hard to believe intuitively, sure, but it certainly isn’t as straightforward to demonstrate from a technical point of view. In any case, today there seems to be something of a consensus that quality does indeed outweigh quantity when we process information and make choices on its basis. The literature in support of this is, indeed, quite extensive (3-6).

So, jumping back to where we started: when making preparations for a conference, avoid biting off more than you can chew. Choose goals or points of interest and focus on those; and when putting yourself forward in these situations, make sure you do so with clarity and intention. After all, quality informs the best decisions – as has been quantitatively demonstrated by research.


1 – Muller, T. E. (1984). Buyer response to variations in product information load. Journal of Applied Psychology, 69(May), 300-306.

2 – Keller, K. L., & Staelin, R. (1987). Effects of quality and quantity of information on decision effectiveness. Journal of Consumer Research, 14(2), 200-213.

3 – Eppler, M. J., & Mengis, J. (2004). The concept of information overload: A review of literature from organization science, accounting, marketing, MIS, and related disciplines. The Information Society, 20(5), 325-344.

4 – Speier, C., Valacich, J. S., & Vessey, I. (1999). The influence of task interruption on individual decision making: An information overload perspective. Decision Sciences, 30(2), 337-360.

5 – Schneider, S. C. (1987). Information overload: Causes and consequences. Human Systems Management, 7(2), 143-153.

6 – Laker, L. F., Froehle, C. M., Windeler, J. B., & Lindsell, C. J. (2018). Quality and Efficiency of the Clinical Decision‐Making Process: Information Overload and Emphasis Framing. Production and Operations Management, 27(12), 2213-2225.