Eye Tracking, ML and Web Analytics – Correlated Concepts? Absolutely … Not just a Laura-ism, but Confirmed by a Professor at Carnegie

Eye Tracking Studies Required Expensive Hardware in the Past

 

 

Anyone who has read my blog (shameless self-plug: http://www.lauraedell.com) over the years will know I am very passionate about drinking my own analytical Kool-Aid. Whether during my stints as a Programmer, BI Developer, BI Manager, Practice Lead / Consultant or Senior Data Scientist, I believe wholeheartedly in measuring my own success with advanced analytics. Even my fantasy football success (more on that in a later post)… But you wouldn’t believe how often this type of measurement gets ignored.

Introduce eye-tracking studies. Daunting little set of machines in that image above, I know… But this system has been a cornerstone of measuring advertisement efficacy for eons, and something I latched onto in my early 20s, in fact, ad nauseam. I was lucky enough to work for the now uber online travel company who shall go nameless (okay, here is a hint: remember a little ditty that ended with some hillbilly singing “dot commmm” & you will know to whom I refer). This company believed so wholeheartedly in the user experience that they allowed me, young ingénue of the workplace, to spend thousands on eye tracking studies against a series of balanced scorecards I was developing for the senior leadership team. This is important because you can ASK someone whether a designed visualization is WHAT THEY WERE THINKING or WANTING, even if done iteratively with the intended target, yet 9 times out of 10 they will nod ‘yes’ instead of being honest, employing conflict avoidance at its best. Note, this applies to most, but I can think of a few in my new role at MSFT who are probably reading this and shaking their heads in disagreement at this very moment <Got ya, you know who you are, ya ol’ negative Nellys; but I digress… AND… now we’re back –>

Eye tracking studies measure efficacy by tracking which content areas engage users’ brains versus areas that fall flat, are lackluster, overdesigned and/or contribute to eye/brain fatigue. They do this by “tracking” where and for how long your eyes dwell on a quadrant (aka a visual / website content / widget on a dashboard) and by recording the path and movement of the eyes between different quadrants on a page. It’s amazing to watch these advanced, algorithmically tuned systems measure a digital, informational message in real time, as it’s relayed to the intended audience, all while generating the statistics necessary to either know you “done a good job, son” or go back to the drawing board if you want to achieve the ‘Atta boy’. “BRILLIANT, I say.”
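
Under the hood, the dwell-time bookkeeping is simple arithmetic. Here is a minimal sketch, assuming a hypothetical trace of (x, y, timestamp-in-milliseconds) gaze samples from a tracker and an even four-way quadrant split; the screen size and sample trace are made up purely for illustration.

```python
from collections import defaultdict

def quadrant(x, y, width, height):
    """Map a gaze coordinate to one of four screen quadrants."""
    col = "left" if x < width / 2 else "right"
    row = "top" if y < height / 2 else "bottom"
    return f"{row}-{col}"

def dwell_times(samples, width, height):
    """Sum how long the eyes dwell in each quadrant.

    samples: list of (x, y, timestamp_ms) tuples, ordered by time.
    Each sample's duration is the gap to the next sample.
    """
    totals = defaultdict(float)
    for (x, y, t), (_, _, t_next) in zip(samples, samples[1:]):
        totals[quadrant(x, y, width, height)] += t_next - t
    return dict(totals)

# Hypothetical gaze trace over a 1920x1080 dashboard.
trace = [(200, 150, 0), (240, 180, 120), (1500, 900, 250), (1480, 870, 400)]
print(dwell_times(trace, 1920, 1080))
# e.g. {'top-left': 250.0, 'bottom-right': 150.0}
```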

What I also learned, which seems a no-brainer now: people tend to read from left to right and from top to bottom. <duh> So, when I see anything that doesn’t at LEAST follow those two simple principles, I just shake my head and tisk tisk tisk, wondering how these improperly designed <insert content here> will ever relay any sort of meaningful message, destined for the “oh that’s interesting to view once” sphere instead of rising to the level of usefulness they were designed for. Come on now, how hard is it to remember to stick the most important info in that top-left quadrant and the least important in the bottom-right, especially when creating visualizations for senior execs in the corporate workplace? They have even less time and attention these days to focus on even the most relevant KPIs, the ones they need to monitor to run their business and will get asked to update the CEO on each quarter, with all those fun distractions that come with the latest vernacular du jour taking up all their brain space: “give me MACHINE LEARNING or give me death”, the upstart that replaced mobile/cloud/big data/business intelligence (you fill in the blank).

But for so long, it was me against the hard reality that no one knew what I was blabbing on about, nor would they give me carte blanche to re-run those studies ever again <silently humming, “Cry me a River”>. And lo and behold, my Laura-ism soapbox has now been vetted, in fact quantified, by a prestigious university professor from Carnegie, all made possible because of a little-known hero named Edmund Huey.

Huey, now near and dear to my heart and the grandfather of the heatmap, followed up his color-friendly block chart by building the first device capable of tracking eye movements while people were reading. This breakthrough initiated a revolution for scientists, but it was intrusive: readers had to wear special lenses with a tiny opening and a pointer attached, like the one in the first image pictured above.

Fast-forward 100 years, combine all the ingredients in the cauldron of innovation and technological advancement, sprinkle in my favorite algorithmic pals, CNN and LSTM, and the result is that grandchild now known as heat mapping: eye tracking analytics without all the expensive hardware, basically a measure of the same phenomena at a fraction of the cost.
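
To make the CNN half of that pairing concrete, here is a toy, untrained sketch (in PyTorch) of the encoder-decoder shape a learned saliency model typically takes: a screenshot goes in, a single-channel “where will the eyes land” heatmap comes out. The layer sizes and names are illustrative assumptions, not any particular published model.

```python
import torch
import torch.nn as nn

class SaliencyNet(nn.Module):
    """Toy CNN: RGB screenshot in, single-channel attention heatmap out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                               # downsample 2x
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),                                  # per-pixel attention score in 0..1
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One fake 256x256 screenshot; the output heatmap has the same spatial size.
heatmap = SaliencyNet()(torch.rand(1, 3, 256, 256))
print(heatmap.shape)  # torch.Size([1, 1, 256, 256])
```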

Cool history lesson, right?

So, for those non-believers, I say: use some of the web analytics trends of the future (aka Web Analytics 3.0). Be a future-thinker, a forward mover, an innovator in your data science sphere of influence, and I tell you, you will become so much more informed and able to offer more to others based on… MEASUREMENT (intelligent MEASUREMENT in a digital transformational age).

 

Microsoft Data AMP 2017

Aside

Data AMP 2017 just finished, and some really interesting announcements came out, specific to our company-wide push to infuse machine learning, cognitive and deep learning APIs into every part of our organization. Some of the announcements are ML enablers, while others are direct enhancements.

Here is a summary with links to further information:

  • SQL Server R Services in SQL Server 2017 is renamed to Machine Learning Services since both R and Python will be supported. More info
  • Three new features for Cognitive Services are now Generally Available (GA): Face API, Content Moderator, Computer Vision API. More info
  • Microsoft R Server 9.1 released: Real time scoring and performance enhancements, Microsoft ML libraries for Linux, Hadoop/Spark and Teradata. More info
  • Azure Analysis Services is now Generally Available (GA). More info
  • **Microsoft has incorporated the technology that sits behind Cognitive Services directly inside U-SQL as functions. U-SQL is part of Azure Data Lake Analytics (ADLA).
  • More Cortana Intelligence solution templates: Demand forecasting, Personalized offers, Quality assurance. More info
  • A new database migration service will help you migrate existing on-premises SQL Server, Oracle, and MySQL databases to Azure SQL Database or SQL Server on Azure virtual machines. Sign up for limited preview
  • A new Azure SQL Database offering, currently being called Azure SQL Managed Instance (final name to be determined):
    • Migrate SQL Server to SQL as a Service with no changes
    • Support SQL Agent, 3-part names, DBMail, CDC, Service Broker
    • **Cross-database + cross-instance querying
    • **Extensibility: CLR + R Services
    • SQL profiler, additional DMVs support, Xevents
    • Native back-up restore, log shipping, transaction replication
    • More info
    • Sign up for limited preview
  • SQL Server vNext CTP 2.0 is now available, and the product will be officially called SQL Server 2017.

Those I am most excited about have ** next to them. These include key innovations in our approach to AI and in strengthening our deep learning competitiveness against, for example, Google TensorFlow. Check out the following blog post: https://blogs.technet.microsoft.com/dataplatforminsider/2017/04/19/delivering-ai-with-data-the-next-generation-of-microsofts-data-platform/ :

  1. The first is the close integration of AI functions into databases, data lakes, and the cloud to simplify the deployment of intelligent applications.
  2. The second is the use of AI within our services to enhance performance and data security.
  3. The third is flexibility—the flexibility for developers to compose multiple cloud services into various design patterns for AI, and the flexibility to leverage Windows, Linux, Python, R, Spark, Hadoop, and other open source tools in building such systems.

 

Azure ML + AI (Cognitive Services Deep Learning)


Wonderful World of Sports: Hey NFL, Got RFID?

Aside

As requested by some of my LinkedIn followers, here is the NFL Infographic about RFID tags I shared a while back:

[Image: NFL technology infographic]

I hope @NFL @XboxOne #rfid data becomes more easily accessible. I have been tweeting about the Zebra deal for six months now, and about the awesome implications it would have on everything from sports betting to fantasy enthusiasts to coaching, drafting and what have you. Similarly, I have built a fantasy football (PPR) league bench/play #MachineLearning model using #PySpark which, as it turns out, is pretty good. But it could be great with the RFID stream.
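
For the curious, here is a minimal sketch of what a bench/play classifier can look like in PySpark; the stat columns, the toy rows and the "started" label are hypothetical stand-ins, not my actual league model or its features.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("ppr-bench-play").getOrCreate()

# Hypothetical weekly stat lines; 'started' = 1.0 means the player was worth starting.
rows = [
    (9.0, 14.0, 0.81, 1.0),   # targets, touches, snap share, started
    (2.0,  3.0, 0.22, 0.0),
    (7.0, 11.0, 0.74, 1.0),
    (1.0,  2.0, 0.15, 0.0),
    (6.0,  9.0, 0.65, 1.0),
    (3.0,  4.0, 0.30, 0.0),
]
df = spark.createDataFrame(rows, ["targets", "touches", "snap_pct", "started"])

# Assemble features, then fit a simple bench/play classifier.
pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["targets", "touches", "snap_pct"], outputCol="features"),
    LogisticRegression(labelCol="started", featuresCol="features"),
])
model = pipeline.fit(df)

# Scored on the same toy rows only because this is a sketch; real use would hold out weeks.
auc = BinaryClassificationEvaluator(labelCol="started").evaluate(model.transform(df))
print("AUC:", auc)
```
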
[Image: NFL RFID-tagged shoulder pads]

This is where the #IoT rubber really hits the road, because there are so many more fans of the NFL than there are folks who really grok the “Connected Home” (not knocking it, but it doesn’t have the reach tentacles of the NFL). Imagine measuring the burn-rate output vs. performance degradation of these athletes mid-game and, one day, being able to stream that to the field or the booth for in-game course corrections. Aah, a girl can only dream…

Is Machine Learning the New EPM Black?

Aside

I am currently a data scientist and am also a certified Lean Six Sigma Black Belt. I specialize in Big Data Finance, EPM, BI and process improvement, fields where this convergence of skills has given me the ability to understand the interactions between people, process and technology/tools.

I would like to address the need to transform traditional EPM processes by leveraging more machine learning, to help reduce forecast error and eliminate unnecessary budgeting and planning rework and cycle time, using a three-step ML approach:

1st, determine which business drivers are statistically meaningful to the forecast (correlation), eliminating those that are not.

2nd, cluster those correlated drivers by significance to determine those that cause the most variability to the forecast (causation).

3rd, use the output of steps 1 and 2 as inputs to the forecast, and apply ML to generate a statistically accurate, forward-looking forecast.
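
Sketched in Python (scikit-learn rather than any particular EPM tool), the three steps can be as compact as the following; the synthetic driver data, correlation threshold and model choice are all illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

# Hypothetical history: 48 months of revenue plus candidate business drivers.
rng = np.random.default_rng(0)
drivers = pd.DataFrame(rng.normal(size=(48, 6)),
                       columns=[f"driver_{i}" for i in range(6)])
y = 100 + 5 * drivers["driver_0"] + 3 * drivers["driver_1"] + rng.normal(size=48)

# Step 1: keep only drivers whose correlation with the target is meaningful.
corr = drivers.corrwith(y).abs()
significant = corr[corr > 0.3].index.tolist()   # threshold is illustrative

# Step 2: cluster the surviving drivers by their correlation profiles, so the
# groups explaining the most variability can be reviewed together.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    drivers[significant].corr())
print(dict(zip(significant, groups)))

# Step 3: feed the selected drivers into an ML model, scored on rolling splits.
scores = cross_val_score(GradientBoostingRegressor(random_state=0),
                         drivers[significant], y,
                         cv=TimeSeriesSplit(n_splits=5),
                         scoring="neg_mean_absolute_percentage_error")
print("Mean forecast MAPE across folds:", -scores.mean())
```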


Objection handling, in my experience, focuses on cost, time and the sensitive change-management aspect. How I have handled these, for example, is as follows:

  1. Cost: all of these models can be built using free tools like R and Python data science libraries, so there is minimal to no technology/tool CapEx/OpEx investment.
  2. Time: most college grads with a business, science or computer engineering degree will have undoubtedly worked with R and/or Python (and more) while earning their degree. This reduces the ramp time needed to get folks acclimated and up to speed. To fill the remaining skill-set gap, they can use the vast libraries of work already provided by the R / Python initiatives, or by the many other data science communities available online for free, as a starting point, which also minimizes the cycles and rework wasted trying to define drivers based on gut feel alone.
  3. Change: this is the bigger objection, and it has to be handled according to the business culture and its openness to change. The best means of handling it is to simply show them. The proof is in the proverbial pudding, so creating a variance analysis of the ML forecast, the human forecast and the actuals (see the sketch below) will speak volumes, and bonus points if the correlation and clustering analysis also surfaces previously unknown nuggets of information richness.
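
As referenced in point 3, that variance analysis can be a few lines of pandas; the quarterly figures below are made-up placeholders purely to show the shape of the comparison.

```python
import pandas as pd

# Made-up quarterly numbers purely to illustrate the comparison.
df = pd.DataFrame({
    "actual":         [102, 98, 110, 120],
    "human_forecast": [ 95, 105, 100, 109],
    "ml_forecast":    [101, 97, 107, 118],
}, index=["Q1", "Q2", "Q3", "Q4"])

# Absolute percentage error of each forecast against actuals, per quarter and on average.
for col in ["human_forecast", "ml_forecast"]:
    df[col + "_ape"] = (df[col] - df["actual"]).abs() / df["actual"] * 100

print(df.round(1))
print(df[["human_forecast_ape", "ml_forecast_ape"]].mean().round(1))
```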

Even without finding the golden-nugget ticket, the CFO will certainly take notice of a more accurate forecast and appreciate the time and frustration saved by a less consuming budget and planning cycle.

Utilizing #PredictiveAnalytics & #BigData To Improve Accuracy of #EPM Forecasting Process

Aside

I was amazed when I read @TidemarkEPM’s awesome new white paper on the “4 Steps to a Big Data Finance Strategy.” This is an area I am very passionate about; some might say it’s become my soapbox since my days as a Business Intelligence consultant. I saw the dawn of a world where EPM, specifically the planning and budgeting process, was elevated from gut-feel analytics to embracing #machinelearning as a means of understanding which drivers are statistically significant and which have no verifiable impact, and ultimately using the former to feed a more accurate forecast model.

Big Data Finance

Traditionally (even still today), finance teams sit in a conference room with Excel spreadsheets from Marketing, Customer Service, etc., and basically define current or future plans based on past performance mixed with a sprinkle of gut feel (sometimes, it was more like a gallon of gut feel to every tablespoon of historical data). In those same meetings just one quarter later, I would shake my head when the same people questioned why they missed their targets or landed a variance greater or smaller than the expected value.

The new world order of Big Data Finance leverages the power of machine-learned algorithms to derive truly forecasted analytics, and this was a primary driver for my switching from a pure BI focus into data science. I have seen so many companies embrace the power of true “advanced predictive analytics” and, by doing so, harness its value and benefits with confidence, instead of fear of this unknown statistical realm, not to mention all of the unsettled glances when you say the nebulous “#BigData” or “#predictiveAnalytics” phrases.

But I wondered: exactly how many companies are doing this vs. the old way? And I was very surprised to learn from the white paper that 22.7% of people view predictive capabilities as “essential” to forecasting, with 52.2% calling them nice to have. Surprised is an understatement; in fact, I was floored.

We aren’t just talking about including weather data when predicting consumer buying behaviors. What about the major challenge for the telecommunications / network provider with customer churn? Wouldn’t it be nice to answer the question: who are the most profitable customers WITH the highest likelihood of churn? And wouldn’t it be nice not to have to assign one to several analysts for xx days or weeks to crunch through all of the relevant data to try to get to an answer, and still probably miss the most important internal indicators, or include indicators that have no value or significance in driving an accurate churn outcome?

What about adding in 3rd-party external benchmarking data to further classify and correlate these customer indicators before you run your churn prediction model? Doing this manually is daunting, and so many companies, I now hypothesize, revert to the old ways of doing the forecast. Plus, I bet they have a grim impression of the cost of big data and the time to implement, because of past experiences with things like building the uber “data warehouse” to get to that panacea of the “1 single source of truth”… on the island of Dr. Disparate Data that we all dreamt of in our past lives, right?

I mean, we have all heard that before, and yet, how many times was it actually done successfully, within budget or in the allocated time frame? And if it was, what kind of quantifiable return on investment did you really get before the annual maintenance bills flowed in? Be honest… no one is judging you; well, that is, if you learned from your mistakes or can admit that your pet project perhaps bit off too much and failed.

And what about training your people or the company to utilize said investment as part of your implementation plan? What was your budget for this training, and was it successful, or did you have to hire outside folks like consultants to do the work for you? And if so, how long did it actually take to break the dependency on those external resources and still be successful?

Before the days of Apache Spark and other open-source in-memory or streaming technologies, the world of Big Data was just blossoming into the more mature flower it would become. On top of which, it takes a while for someone to fully grok a new technology, even with the most specialized training, especially if they aren’t organically a programmer, as many Business Intelligence implementation specialists were/are not. Those with past experience in something like C++ can quickly apply the same techniques to newer technologies like Scala for Apache Spark, or Python, and be up and running much faster than someone with no programming background who is still learning what a loop is or how to call an API to get 3rd-party benchmarking data. We programmers take that for granted when applying ourselves to learning something new.

And now that these tools are more enterprise-ready and friendly, with new integration modules for tools like R or MLlib for the statistical analysis, coupled with all of the free training offered by places like UC Berkeley (via edX online), now is the time to adopt Big Data Finance more than ever.

In a world where machine learning algorithms can be paired with traditional classification modeling techniques automatically, and where those algorithms have been made publicly available for your analysts to use as a starting point or in their entirety, one no longer needs to be daunted by the thought of implementing Big Data Finance, or of testing the waters of accuracy to see whether you are comfortable with the margin of error between your former forecasting methodology and this new world order.

2015 Gartner Magic Quadrant Business Intelligence – Mind Melding BI & Data Science, a Continuing Trend…

2015 Magic Quadrant for Business Intelligence

IT WAS the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us…

–Charles Dickens

Truer words were never spoken, whether about the current technological times or deep in our past (remember the good ole enterprise report books, aka the 120-page paperweight?).

And, this data gal couldn’t be happier with the final predictions made by Gartner in their 2015 Magic Quadrant Report for Business Intelligence. Two major trends / differentiators fall right into the sweet spot I adore:

New demands for advanced analytics 

Focus on predictive/prescriptive capabilities 

Whether you think this spells out doom for business intelligence as it exists today or not, you cannot deny that these trends in data science and big data can only force us to finally work smarter, and not harder (is that even possible??)

What are your thoughts…?