For many years, vision researchers have been investigating how humans use their visual cortex and other perception-based systems to analyze images. An important early result was the discovery of a very small subset of visual properties that can be detected very quickly and, for the most part, very accurately by the lowest of these systems, aptly referred to as the low-level visual system.
These properties were initially called “preattentive,” since their detection seemed to occur before one actually focuses attention.
Since then, our understanding has improved. As it stands today, attention plays a critical role in what we see, even at this early stage of vision. The term preattentive continues to be used, however, since it conveys the speed and ease with which these properties are visually identified.
Anne Treisman identified two types of visual search tasks: feature search, which is preattentive, and conjunction search, which requires conscious attention. Feature search can be performed quickly and preattentively for targets defined by primitive features.
These features or properties can be grouped into three areas: color, orientation, and intensity.
And as you might have guessed, conjunction search is slower and requires the participant’s full attention, something we humans have a hard time giving in certain situations, a problem only worsening with the advent of handheld devices and other mobile smartphones built to distract us. The sketch below contrasts the two kinds of search displays.
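To make the distinction concrete, here is a minimal sketch of the two kinds of displays. It assumes Python with numpy and matplotlib (neither is mentioned in the original discussion); in the left panel the target differs from every distractor by a single primitive feature (color), while in the right panel it is defined only by the conjunction of color and shape, so no single feature gives it away.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
n = 30  # distractors per panel

fig, (ax_feat, ax_conj) = plt.subplots(1, 2, figsize=(10, 5))

# Feature search: one red circle among blue circles; color alone pops out.
ax_feat.scatter(rng.random(n), rng.random(n), s=120, c="tab:blue", marker="o")
ax_feat.scatter(rng.random(), rng.random(), s=120, c="tab:red", marker="o")
ax_feat.set_title("Feature search: target differs by color")

# Conjunction search: the target is a red circle, but the distractors are
# red squares and blue circles, so neither color nor shape alone finds it.
half = n // 2
ax_conj.scatter(rng.random(half), rng.random(half), s=120, c="tab:red", marker="s")
ax_conj.scatter(rng.random(half), rng.random(half), s=120, c="tab:blue", marker="o")
ax_conj.scatter(rng.random(), rng.random(), s=120, c="tab:red", marker="o")
ax_conj.set_title("Conjunction search: red AND circle")

for ax in (ax_feat, ax_conj):
    ax.set_xticks([])
    ax.set_yticks([])

plt.tight_layout()
plt.show()
```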
“Typically, tasks that can be performed on large multi-element displays in less than 200 to 250 milliseconds (msec) are considered preattentive. Eye movements take at least 200 msec to initiate, and random locations of the elements in the display ensure that attention cannot be prefocused on any particular location, yet viewers report that these tasks can be completed with very little effort. This suggests that certain information in the display is processed in parallel by the low-level visual system.” (“Perception in Visualization” by Christopher Healey)
What does this mean? Well, given a target, say a red circle, and distractors being everything else, which in this case are blue objects, one can quickly see which is which. That is, in under 200 msec you can glance at these two pictures and separate the target from the distractors, right?
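The timing signature behind that claim can be illustrated with a toy plot. The numbers below are purely illustrative, not taken from Healey or Treisman: they only sketch the classic pattern in which feature-search response time stays roughly flat as the display grows, while conjunction search slows with every added item because attention must visit candidates serially.

```python
import numpy as np
import matplotlib.pyplot as plt

set_sizes = np.array([4, 8, 16, 32])
feature_rt = 200 + 2 * set_sizes       # roughly flat: parallel, preattentive
conjunction_rt = 250 + 25 * set_sizes  # linear: serial, attention-demanding

plt.plot(set_sizes, feature_rt, "o-", label="Feature search (parallel)")
plt.plot(set_sizes, conjunction_rt, "s-", label="Conjunction search (serial)")
plt.xlabel("Number of items in display")
plt.ylabel("Hypothetical response time (msec)")
plt.legend()
plt.show()
```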
As in this example, it seems that introducing preattentive cognition to dashboards would result in a healthy and loving relationship, one that, when carried forward over time (i.e., employed by BI practitioners during the design phase of any BI / data visualization project), would result in more meaningful and less cluttered dashboards, right?
Now, think about your dashboards and BI visualizations. Think about how many of them tell a good, clean story, where the absolute most important information “pops” out to the end viewer, requiring little explanatory text, contextual help, or the other mechanisms we BI practitioners employ to explain our poorly designed dashboards. And I am by no means claiming everything I have designed is fault free; we all learn as we go. But I can say that my designs of today are better than those of yesterday because of my understanding of visual perception, neural processing, and the cognitive sciences, and of how to apply these fields to business intelligence in order to drive better data visualizations.
Why is it that some who work in BI think that the more gauges and widgets crammed onto a screen, the better?
Instead, I contend that applying this principle to dashboard design, report design, website design, or any other type of design would reveal that much in our world today is poorly designed: fitted with non-complementary colors and an overuse of distractor objects, rendering the user confused or “distracted” from the target object. That target could be something as important as a company’s revenue, or the number of deaths in the ER wing of a hospital, both so important that one might question how such numbers could get lost. A small fix is sketched below.
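As one hedged illustration of the fix, here is a short sketch that applies a single preattentive cue to an ordinary bar chart: the distractors are muted in gray, and the one number the viewer must not miss pops out through color alone. The region names and revenue figures are hypothetical.

```python
import matplotlib.pyplot as plt

regions = ["Region A", "Region B", "Region C", "Region D", "Region E"]
revenue = [4.2, 3.8, 1.1, 4.0, 3.9]  # hypothetical revenue, in $M
target_idx = 2                        # the value the viewer must not miss

colors = ["lightgray"] * len(regions)
colors[target_idx] = "tab:red"        # a single color difference = pop-out

fig, ax = plt.subplots()
ax.bar(regions, revenue, color=colors)
ax.set_ylabel("Revenue ($M)")
ax.set_title("One preattentive cue, little explanatory text needed")
plt.show()
```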
Try it for yourself: anything by Stephen Few or Edward Tufte is a good starting place.