Eye Tracking, ML and Web Analytics – Correlated Concepts? Absolutely … Not just a Laura-ism, but Confirmed by a Professor at Carnegie

Eye Tracking Studies Required Expensive Hardware in the Past

Anyone who has read my blog (shameless self-plug: http://www.lauraedell.com) over the years will know I am very passionate about drinking my own analytical Kool-Aid. Whether during my stints as a programmer, BI developer, BI manager, practice lead/consultant or senior data scientist, I have believed wholeheartedly in measuring my own success with advanced analytics. Even my fantasy football success (more on that in a later post). But you wouldn't believe how often this type of measurement gets ignored.

Enter eye-tracking studies. Daunting little set of machines in that image above, I know. But this system has been a cornerstone of measuring advertising efficacy for eons, and something I latched onto in my early 20s, in fact, ad nauseam. I was lucky enough to work for the now uber online travel company who shall go nameless (okay, here is a hint: remember a little ditty that ended with some hillbilly singing "dot commmm" and you will know to whom I refer). This company believed so wholeheartedly in the user experience that it allowed me, young ingénue of the workplace, to spend thousands on eye-tracking studies of a series of balanced scorecards I was developing for the senior leadership team. This matters because you can ASK someone whether a designed visualization is WHAT THEY WERE THINKING or WANTING, even iteratively with the intended target, yet nine times out of ten they will nod "yes" instead of being honest, conflict avoidance at its best. Note, this applies to most, though I can think of a few in my new role at MSFT who are probably reading this and shaking their heads in disagreement at this very moment <Got ya, you know who you are, ya ol' negative Nellys; but I digress... AND... now we're back>

Eye-tracking studies measure efficacy by tracking which content areas engage users' brains versus areas that fall flat, are lackluster or overdesigned, and/or contribute to eye/brain fatigue. The system measures this by "tracking" where and for how long your eyes dwell on a quadrant (a visual, a piece of website content, a widget on a dashboard) and by recording the path and movement of the eyes between the quadrants on a page. It's amazing to watch these advanced, algorithmically tuned systems measure a digital, informational message in real time as it's relayed to the intended audience, all while generating the statistics you need to either know you "done a good job, son" or go back to the drawing board if you want to earn the "atta boy." "BRILLIANT, I say."
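To make the mechanics concrete, here is a minimal Python sketch of the core arithmetic an eye tracker performs: summing fixation durations into per-quadrant dwell times. The fixation format (x, y in pixels plus a duration) and the simple 2x2 quadrant split are my own illustrative assumptions, not any vendor's actual export schema.

```python
# Minimal sketch: aggregate raw gaze fixations into per-quadrant dwell
# times, the core statistic an eye-tracking study reports.
# Assumptions: fixations arrive as (x, y, duration_ms) tuples and the
# screen is split into a simple 2x2 grid of named quadrants.
from collections import defaultdict

SCREEN_W, SCREEN_H = 1920, 1080  # assumed display resolution

def quadrant(x: float, y: float) -> str:
    """Map a gaze point to one of four named screen quadrants."""
    horiz = "left" if x < SCREEN_W / 2 else "right"
    vert = "top" if y < SCREEN_H / 2 else "bottom"
    return f"{vert}-{horiz}"

def dwell_by_quadrant(fixations):
    """Sum fixation durations (ms) per quadrant."""
    totals = defaultdict(float)
    for x, y, duration_ms in fixations:
        totals[quadrant(x, y)] += duration_ms
    return dict(totals)

# Toy session: three fixations, mostly in the top-left quadrant
print(dwell_by_quadrant([(200, 150, 480.0), (300, 200, 350.0), (1500, 900, 120.0)]))
# {'top-left': 830.0, 'bottom-right': 120.0}
```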

What I also learned, and it seems a no-brainer now: people tend to read from left to right and from top to bottom. <duh> So when I see anything that doesn't at LEAST follow those two simple principles, I just shake my head, tsk tsk tsk, and wonder how that improperly designed <insert content here> will ever relay any meaningful message; it is destined for the "oh, that's interesting to view once" sphere instead of rising to the level of usefulness it was designed for. Come on now, how hard is it to remember to stick the most important info in the top-left quadrant and the least important in the bottom right, especially when creating visualizations for senior execs in the corporate workplace? They have even less time and attention these days to focus on even the most relevant KPIs, the ones they need to monitor to run their business and get asked to update the CEO on each quarter, what with all the fun distractions of the latest vernacular du jour taking up their brain space: "give me MACHINE LEARNING or give me death," the upstart that replaced mobile/cloud/big data/business intelligence (you fill in the blank).

But for so long, it was me against the hard reality that no one knew what I was blabbing on about, nor would anyone give me carte blanche to re-run those studies ever again <silently humming "Cry Me a River">. And lo and behold, my Laura-ism soapbox has now been vetted, in fact quantified, by a prestigious university professor from Carnegie, all possible because of a little-known hero:

Edmund Huey, now near and dear to my heart and grandfather of the heatmap, followed up his color-friendly block chart by building the first device capable of tracking eye movements while people read. The breakthrough kicked off a revolution for scientists, but it was intrusive: readers had to wear special lenses with a tiny opening and a pointer attached, like the first image pictured above.

Fast-forward 100 years, combine all the ingredients in the cauldron of innovation and technological advancement, sprinkle in my favorite algorithmic pals, CNNs and LSTMs, and the result is the grandchild we now know as heat mapping: eye-tracking analytics without all the hardware, essentially a measure of the same phenomenon at a fraction of the cost.
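Since I name-dropped CNNs, here is a minimal, purely illustrative PyTorch sketch of the learned-heatmap idea: a tiny convolutional network that maps a page screenshot to a single-channel attention heatmap. The architecture is a toy of my own invention; real published saliency models (including the CNN+LSTM variants that also model scanpaths) are far deeper and are trained on actual eye-tracking data.

```python
# Toy sketch of learned heat mapping: a small CNN that maps an RGB
# screenshot tensor to a one-channel "visual attention" heatmap.
# Illustrative only; untrained weights produce meaningless output.
import torch
import torch.nn as nn

class TinySaliencyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # 1x1 conv collapses the feature maps to one attention channel
        self.head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x):
        # Sigmoid squashes each pixel to a [0, 1] attention score
        return torch.sigmoid(self.head(self.features(x)))

# One fake 256x256 RGB screenshot in, one 256x256 heatmap out
screenshot = torch.rand(1, 3, 256, 256)
heatmap = TinySaliencyNet()(screenshot)
print(heatmap.shape)  # torch.Size([1, 1, 256, 256])
```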

Cool history lesson, right?

So, to the non-believers, I say: use some of the web analytics trends of the future (aka Web Analytics 3.0). Be a future-thinker, a forward mover, an innovator in your data science sphere of influence, and I tell you, you will become far more informed and able to offer far more to others, all based on MEASUREMENT (intelligent measurement, in a digital-transformation age).

Is Machine Learning the New EPM Black?

I am currently a data scientist and a certified Lean Six Sigma Black Belt. I specialize in big data finance, EPM, BI and process improvement, fields whose convergence has given me the ability to understand the interactions between people, process and technology/tools.

I would like to address the need to transform traditional EPM processes by leveraging more machine learning to help reduce forecast error and eliminate unnecessary budgeting and planning rework and cycle time, using a 3-step ML approach (a rough code sketch follows the list):

1st, determine which business drivers are statistically meaningful to the forecast (correlation), eliminating those that are not.

2nd, cluster those correlated drivers by significance to determine which ones cause the most variability in the forecast (causation).

3rd, use the outputs of steps 1 and 2 as inputs to the forecast, and apply ML to generate a statistically accurate forward-looking forecast.
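Here is that rough sketch on a synthetic driver table. The driver names, the 0.1 correlation cutoff, and the choice of KMeans for step 2 and plain linear regression for step 3 are all illustrative assumptions; in practice you would swap in whatever significance test and regressor suit your data.

```python
# Illustrative 3-step driver selection + forecast on synthetic data.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 120  # ten years of monthly history
df = pd.DataFrame({
    "headcount": rng.normal(500, 25, n),
    "pipeline":  rng.normal(80, 10, n),
    "marketing": rng.normal(50, 5, n),
    "noise":     rng.normal(0, 1, n),  # an irrelevant driver
})
df["revenue"] = (3 * df["headcount"] + 10 * df["pipeline"]
                 + 6 * df["marketing"] + rng.normal(0, 40, n))

# Step 1: keep drivers meaningfully correlated with the target
corr = df.drop(columns="revenue").corrwith(df["revenue"]).abs()
kept = corr[corr > 0.1].index.tolist()  # 0.1 cutoff is an assumption

# Step 2: cluster surviving drivers by correlation strength and keep
# the cluster with the higher center (the high-variability drivers)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(
    corr[kept].to_numpy().reshape(-1, 1))
top = int(km.cluster_centers_.argmax())
drivers = [c for c, lbl in zip(kept, km.labels_) if lbl == top]

# Step 3: fit the forecast model on the selected drivers only
model = LinearRegression().fit(df[drivers], df["revenue"])
print("drivers kept:", drivers)
print("in-sample R^2:", round(model.score(df[drivers], df["revenue"]), 3))
```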

Objection handling, in my experience, focuses on cost, time and the sensitive change-management aspect. Here, for example, is how I have handled each:

  1. Cost: all of these models can be built using free tools like R and Python data science libraries, so there is minimal to no technology/tool CapEx/OpEx investment.
  2. Time: most college grads with a business, science or computer engineering degree will have worked with R and/or Python (and more) while earning their degree, which reduces the ramp time needed to get folks acclimated and up to speed. To fill the remaining skill-set gap, they can start from the vast libraries of work already provided by the R and Python communities and the many other data science communities available online for free, which also minimizes the cycles and rework wasted on trying to define drivers by gut feel alone.
  3. Change: this is the bigger objection, and it has to be handled according to the business culture and its openness to change. The best means of handling it is simply to show them. The proof is in the proverbial pudding, so a variance analysis of the ML forecast, the human forecast and the actuals will speak volumes (a toy version appears below), with bonus points if the correlation and clustering analysis also surfaces previously unknown nuggets of information richness.
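For illustration, here is a toy version of that variance analysis, with completely invented numbers; the point is only the shape of the comparison:

```python
# Toy variance analysis: ML forecast vs. human forecast vs. actuals,
# compared by mean absolute percentage error (MAPE). Numbers invented.
import pandas as pd

df = pd.DataFrame({
    "actual":   [100, 110, 95, 120],
    "human_fc": [ 90, 125, 80, 140],
    "ml_fc":    [ 98, 112, 97, 117],
})
for col in ("human_fc", "ml_fc"):
    mape = ((df[col] - df["actual"]).abs() / df["actual"]).mean() * 100
    print(f"{col}: MAPE = {mape:.1f}%")
# A consistently lower MAPE for ml_fc is the pudding's proof.
```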

Even without finding that golden-ticket nugget, the CFO will certainly take notice of a more accurate forecast and appreciate the time and frustration saved by a less consuming budgeting and planning cycle.

BOTHER – Adding Value through Effective Relativization and Rationalization of KPIs

I posted a comment to a blog about KPIs:

  • http://mybibeat.wordpress.com/2009/11/23/how-to-define-and-select-good-kpis-for-your-business/#comment-10

I thought it made a perfect next step in my series on how to revamp your investment, aka the BOTHER program:

    I am typically the consultant on KPIs, so I found it humbling to need some relevant information on KPIs myself. I say this with the greatest respect for your posting. While I agree on most points, I wanted to challenge one area, and I hope you will take this with a pinch of salt…
    You cite an example for a professional services company: "The survival of a professional service provider depends on the number of ongoing and new projects the company handles" and go on to mention revenue. The follow-up you provide is great for measuring the dollar impact, yes, but you do not mention the satisfaction of the ongoing client. While measuring the revenue of an ongoing client is meaningful, it isn't wholly satisfying. Here's why:

    In order to grow a professional services company, one must be able to project the likelihood of growing within an existing account where ongoing revenue is currently being realized. And while a company may continue to use a services firm because they haven't the time to find a new one, or the energy to fire them, that doesn't necessarily mean they will give the next big project to said consultancy; they may actually be dissatisfied, even though their continued use of the firm for ongoing support gives the feeling that "we are OK; they haven't fired us, after all."
    In reality, they may be unhappy and, for the reasons above, simply haven't moved the ongoing work to someone else, but they are certainly not planning on signing over any new work.

    Instead of just measuring revenue, I would counter that for KPIs to be effective, one must rationalize and relativize by looking at qualitative and quantitative measures together: satisfaction points lost or gained per project, set against dollars (revenue), thus quantifying the average revenue per satisfaction point gained or lost. One must use statistical analysis to get this metric right (Partial Least Squares modeling is a wonderful tool for this), but boy, is it powerful. (A) You know how satisfied a given client actually is, and (B) you know, when you gain satisfaction points through retention programs or client appreciation, or lose them through lack of attention or project dissatisfaction, exactly how much that means to your bottom line, and where to direct some of your business development resources.
    It takes what you stated very well to the next level of efficacy, the analytics level, which is where I believe the true value of BI starts to be realized. Thank you for your point. I will certainly link my blog back to yours!
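To close the loop on the Partial Least Squares suggestion in that comment, here is a minimal scikit-learn sketch of the idea: relating multi-item client satisfaction scores to revenue change, yielding an approximate "revenue per satisfaction point" figure. The data, column meanings and coefficients are invented purely for illustration.

```python
# Illustrative PLS: tie multi-item satisfaction survey scores to
# revenue change per client. All data here is synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
n_clients = 60
# Three correlated survey items (1-10 scale) driven by one latent
# satisfaction factor per client
latent = rng.normal(7, 1.5, n_clients)
X = np.column_stack(
    [latent + rng.normal(0, 0.5, n_clients) for _ in range(3)])
# Revenue change (in $k) driven by the same latent satisfaction
y = 25 * latent + rng.normal(0, 20, n_clients)

pls = PLSRegression(n_components=1).fit(X, y)
# Each coefficient approximates the $k impact of one extra
# satisfaction point on the corresponding survey item
print(pls.coef_.ravel().round(1))
```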