Eye Tracking, ML and Web Analytics – Correlated Concepts? Absolutely … Not just a Laura-ism, but Confirmed by a Professor at Carnegie

Eye Tracking Studies Required Expensive Hardware in the Past

 

 

Anyone who has read my blog (shameless self-plug: http://www.lauraedell.com) over the years knows that I am very passionate about drinking my own analytical Kool-Aid. Whether during my stints as a Programmer, BI Developer, BI Manager, Practice Lead / Consultant or Senior Data Scientist, I believe wholeheartedly in measuring my own success with advanced analytics. Even my fantasy football success (more on that in a later post)… But you wouldn't believe how often this type of measurement gets ignored.

Introduce eye-tracking studies. Daunting little set of machines in that image above, I know. But this system has been a cornerstone of measuring advertisement efficacy for eons, and something I latched onto in my early 20s, in fact, ad nauseam. I was lucky enough to work for the now uber online travel company who shall go nameless (okay, here is a hint: remember a little ditty that ended with some hillbilly singing "dot commmm" and you will know to whom I refer). This company believed so wholeheartedly in the user experience that they allowed me, young ingénue of the workplace, to spend thousands on eye tracking studies against a series of balanced scorecards I was developing for the senior leadership team. This is important because you can ASK someone whether a designed visualization is WHAT THEY WERE THINKING or WANTING, even if done iteratively with the intended target, yet 9 times out of 10 they will nod 'yes' instead of being honest, employing conflict avoidance at its best. Note, this applies to most, but I can think of a few in my new role at MSFT who are probably reading this and shaking their heads in disagreement at this very moment <Got ya, you know who you are, ya ol' negative Nellys; but I digress… AND… now we're back –>

Eye tracking studies measure efficacy by tracking which content areas engage users' brains vs. areas that fall flat, are lackluster, overdesigned and/or contribute to eye/brain fatigue. They measure this by "tracking" where and for how long your eyes dwell on a quadrant (aka a visual / website content / widget on a dashboard) and by recording the path and movement of the eyes between different quadrants on a page. It's amazing to watch these advanced, algorithmically tuned systems measure a digital, informational message in real time as it's relayed to the intended audience, all while generating the statistics necessary to either know you "done a good job, son" or go back to the drawing board if you want to achieve the 'Atta boy'. "BRILLIANT, I say."

What I also learned, which seems a no-brainer now: people tend to read from left to right and from top to bottom. <duh> So, when I see anything that doesn't at LEAST follow those two simple principles, I just shake my head and tisk tisk tisk, wondering how these improperly designed <insert content here> will ever relay any sort of meaningful message, destined for the "oh, that's interesting to view once" sphere instead of rising to the level of usefulness it was designed for. Come on now, how hard is it to remember to stick the most important info in that top-left quadrant and the least important in the bottom right, especially when creating visualizations for use in the corporate workplace by senior execs? They have even less time and attention these days to focus on even the most relevant KPIs, the ones they need to monitor to run their business and will get asked to update the CEO on each quarter, with all those fun distractions that come with the latest vernacular du jour taking up all their brain space: "give me MACHINE LEARNING or give me death," the upstart that replaced mobile/cloud/big data/business intelligence (you fill in the blank).

But for so long, it was me against the hard reality that no one knew what I was blabbing on about, nor would they give me carte blanche to re-run those studies ever again <silently humming "Cry Me a River">. And lo and behold, my Laura-ism soapbox has now been vetted, in fact quantified, by a prestigious university professor from Carnegie, all possible because of a little-known hero named

 

Edmund Huey, now near and dear to my heart and grandfather of the heatmap, who followed up his color-friendly block chart by building the first device capable of tracking eye movements while people were reading. This breakthrough initiated a revolution for scientists, but it was intrusive: readers had to wear special lenses with a tiny opening and a pointer attached, like the first image pictured above.

Fast-forward 100 years, combine all the ingredients in the cauldron of innovation and technological advancement, sprinkle in my favorite algorithmic pals, CNNs and LSTMs, and the result is that grandchild now known as heat mapping. It's eye tracking analytics without all the hardware, basically a measure of the same phenomena at a fraction of the cost.
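If you want to play with the poor-man's version of this yourself, here is a minimal Python sketch of the basic idea: bin logged cursor or click coordinates into a grid and render the counts as a heatmap. It is a stand-in for true gaze data and for the CNN/LSTM saliency models mentioned above; the column names and file path are assumptions for illustration.

```python
# Minimal sketch: build an attention heatmap from logged cursor/click
# coordinates (a proxy for gaze data). Column names and the CSV path
# are assumptions for illustration only.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

events = pd.read_csv("page_events.csv")   # expects columns: x, y (pixels)

# Bin the coordinates into a 2D grid; each cell counts how often the
# cursor (our gaze proxy) dwelled there.
heat, xedges, yedges = np.histogram2d(
    events["x"], events["y"], bins=[64, 36],
    range=[[0, 1920], [0, 1080]]
)

plt.imshow(heat.T, origin="upper", cmap="hot", aspect="auto")
plt.title("Attention heatmap (cursor events as a gaze proxy)")
plt.colorbar(label="dwell count")
plt.savefig("heatmap.png", dpi=150)
```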

Cool history lesson, right?

So, for those non-believers, I say: use some of the web analytics trends of the future (aka Web Analytics 3.0). Be a future-thinker, a forward mover, an innovator in your data science sphere of influence, and I tell you, you will become so much more informed and able to offer more to others based on… MEASUREMENT (intelligent measurement in a digital transformational age).

 

Microsoft Data AMP 2017

Aside

Data AMP 2017 just finished, and some really interesting announcements came out, specific to our company-wide push to infuse machine learning, cognitive and deep learning APIs into every part of our organization. Some of the announcements are ML enablers, while others are direct enhancements.

Here is a summary with links to further information:

  • SQL Server R Services in SQL Server 2017 is renamed to Machine Learning Services since both R and Python will be supported. More info
  • Three new features for Cognitive Services are now Generally Available (GA): Face API, Content Moderator, Computer Vision API. More info
  • Microsoft R Server 9.1 released: Real time scoring and performance enhancements, Microsoft ML libraries for Linux, Hadoop/Spark and Teradata. More info
  • Azure Analysis Services is now Generally Available (GA). More info
  • **Microsoft has incorporated the technology that sits behind Cognitive Services directly inside U-SQL as functions. U-SQL is part of Azure Data Lake Analytics (ADLA)
  • More Cortana Intelligence solution templates: Demand forecasting, Personalized offers, Quality assurance. More info
  • A new database migration service will help you migrate existing on-premises SQL Server, Oracle, and MySQL databases to Azure SQL Database or SQL Server on Azure virtual machines. Sign up for limited preview
  • A new Azure SQL Database offering, currently being called Azure SQL Managed Instance (final name to be determined):
    • Migrate SQL Server to SQL as a Service with no changes
    • Support SQL Agent, 3-part names, DBMail, CDC, Service Broker
    • **Cross-database + cross-instance querying
    • **Extensibility: CLR + R Services
    • SQL profiler, additional DMVs support, Xevents
    • Native back-up restore, log shipping, transaction replication
    • More info
    • Sign up for limited preview
  • SQL Server vNext CTP 2.0 is now available, and the product will be officially called SQL Server 2017.

I added ** next to the items I am most excited about. These include key innovations in our approach to AI and enhancements to our deep learning story against competitors such as Google TensorFlow. Check out the following blog post: https://blogs.technet.microsoft.com/dataplatforminsider/2017/04/19/delivering-ai-with-data-the-next-generation-of-microsofts-data-platform/ It highlights three themes:

  1. The first is the close integration of AI functions into databases, data lakes, and the cloud to simplify the deployment of intelligent applications.
  2. The second is the use of AI within our services to enhance performance and data security.
  3. The third is flexibility—the flexibility for developers to compose multiple cloud services into various design patterns for AI, and the flexibility to leverage Windows, Linux, Python, R, Spark, Hadoop, and other open source tools in building such systems.

 

Azure ML + AI (Cognitive Services Deep Learning)

Wonderful World of Sports: Hey NFL, Got RFID?

Aside

As requested by some of my LinkedIn followers, here is the NFL Infographic about RFID tags I shared a while back:

NFL player-tracking technology infographic

I hope @NFL @XboxOne #rfid data becomes more easily accessible. I have been tweeting about the Zebra deal for 6 months now, and the awesome implications this would have on everything from sports betting to fantasy enthusiasts to coaching, drafting and what have you. Similarly, I have built a fantasy football (PPR) league bench/play #MachineLearning model using #PySpark which, as it turns out, is pretty good. But it could be great with the RFID stream.
NFL RFID-tagged shoulder pads

This is where the #IoT rubber really hits the road, because there are so many more fans of the NFL than there are folks who really grok the "Connected Home" (not knocking it, but it doesn't have the reach or tentacles of the NFL). Imagine measuring the burn-rate output vs. performance degradation of these athletes mid-game and, one day, being able to stream that to the field or booth for in-game course corrections. Aah, a girl can only dream…
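For those who asked what the bench/play model might look like under the hood, here is a hedged PySpark sketch. The features, column names and input file are hypothetical stand-ins; the actual model isn't published here, but the shape of it is roughly this:

```python
# Hedged sketch of a fantasy bench/play classifier in PySpark. Columns,
# features, and the input path are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("ppr-bench-play").getOrCreate()

df = spark.read.csv("weekly_player_stats.csv", header=True, inferSchema=True)

# Assumed features: recent PPR average, snap share, opponent points allowed,
# Vegas implied team total. Assumed label: 1 = starting him paid off, 0 = not.
features = ["ppr_avg_3wk", "snap_share", "opp_points_allowed", "implied_total"]
assembler = VectorAssembler(inputCols=features, outputCol="features")

train = assembler.transform(df.dropna(subset=features + ["started_and_won"]))
model = LogisticRegression(featuresCol="features",
                           labelCol="started_and_won").fit(train)

# Scoring the same frame here for brevity; in practice you would score the
# upcoming week's slate and rank players by start probability.
scored = model.transform(train).select("player", "probability", "prediction")
scored.show(10, truncate=False)
```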

Utilizing #PredictiveAnalytics & #BigData To Improve Accuracy of #EPM Forecasting Process

Aside

I was amazed when I read the @TidemarkEPM awesome new white paper on the "4 Steps to a Big Data Finance Strategy." This is an area I am very passionate about; some might say it's become my soapbox since my days as a Business Intelligence consultant. I saw the dawn of a world where EPM, specifically the planning and budgeting process, was elevated from gut-feel analytics to embracing #machinelearning as a means of understanding which drivers are statistically significant and which have no verifiable impact, and ultimately using those to feed a more accurate forecast model.

Big Data Finance

Traditionally (even still today), finance teams sit in a conference room with Excel spreadsheets from Marketing, Customer Service etc., and basically, define the current or future plans based on past performance mixed with a sprinkle of gut feel (sometimes, it was more like a gallon of gut feel to every tablespoon of historical data). In these same meetings just one quarter later, I would shake my head when the same people questioned why they missed their targets or achieved a variance that was greater/less than the anticipated or expected value.

The new world order of Big Data Finance leverages the power of machine-learned algorithms to derive truly forecasted analytics. This was a primary driver for my switch from a pure BI focus into data science. And I have seen so many companies embrace the power of true "advanced predictive analytics" and, by doing so, harness its value and benefits with confidence instead of fear of this unknown statistical realm, not to mention all of the unsettled glances when you say the nebulous "#BigData" or "#predictiveAnalytics" phrases.

But I wondered: exactly how many companies are doing this vs. the old way? I was very surprised to learn from the white paper that only 22.7% of people view predictive capabilities as "essential" to forecasting, with 52.2% calling it nice to have. Surprised is an understatement; in fact, I was floored.

We aren't just talking about including weather data when predicting consumer buying behaviors. What about the major challenge for the telecommunications / network provider with customer churn? Wouldn't it be nice to answer the question: who are the most profitable customers WHO have the highest likelihood of churn? And wouldn't it be nice not to have to assign one to several analysts for days or weeks to crunch through all of the relevant data to try to answer that question, and still probably miss the most important internal indicators, or include indicators that have no value or significance in driving an accurate churn outcome?
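To make that question concrete, here is a minimal, scikit-learn-flavored sketch of ranking customers by churn probability weighted by profit. The column names and data source are assumptions, purely for illustration:

```python
# Sketch: "which profitable customers are most likely to churn?"
# Column names and the data extract are assumptions for illustration.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

customers = pd.read_csv("telco_customers.csv")   # hypothetical extract

features = ["tenure_months", "monthly_spend", "support_tickets", "contract_type_code"]
X_train, X_test, y_train, y_test = train_test_split(
    customers[features], customers["churned"], test_size=0.25, random_state=42
)

clf = GradientBoostingClassifier().fit(X_train, y_train)
print("holdout accuracy:", clf.score(X_test, y_test))

# Rank every customer by churn probability weighted by annual profit,
# surfacing the high-value, high-risk accounts first.
customers["churn_prob"] = clf.predict_proba(customers[features])[:, 1]
customers["value_at_risk"] = customers["churn_prob"] * customers["annual_profit"]
print(customers.sort_values("value_at_risk", ascending=False)
               .head(10)[["customer_id", "churn_prob", "value_at_risk"]])
```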

What about adding in third-party external benchmarking data to further classify and correlate these customer indicators before you run your churn prediction model? To do this manually is daunting, and so many companies, I now hypothesize, revert to the old ways of doing the forecast. Plus, I bet they have a daunting impression of the cost of big data and the time to implement, because of past experiences with things like building the uber "data warehouse" to get to that panacea of the "1 single source of truth"… on the island of Dr. Disparate Data that we all dreamt of in our past lives, right?

I mean we have all heard that before and yet, how many times was it actually done successfully, within budget or in the allocated time frame? And if it was, what kind of quantifiable return on investment did you really get before annual maintenance bills flowed in? Be honest…No one is judging you; well, that is, if you learned from your mistakes or can admit that your pet project perhaps bit off too much and failed.

And what about training your people or the company to utilize said investment as part of your implementation plan? What was your budget for this training, and was it successful, or did you have to hire outside folks like consultants to do the work for you? And if so, how long did it actually take to break the dependency on those external resources and still be successful?

Before the days of Apache Spark and other Open Source in-memory or streaming technologies, the world of Big Data was just blossoming into what it was going to grow into as a more mature flower. On top of which, it takes a while for someone to fully grok a new technology, even with the most specialized training, especially if they aren’t organically a programmer, like many Business Intelligence implementation specialists were/are. That is because those who have past experience with something like C++, can quickly apply the same techniques to newer technologies like Scala for Apache Spark or Python and be up and running much faster vs. someone who has no background in programming trying to learn what a loop is or how to call an API to get 3rd party benchmarking data. We programmers take that for granted when applying ourselves to learning something new.

And now that these tools are more enterprise-ready and friendly, with new integration modules for tools like R or MATLAB for statistical analysis, coupled with all of the free training offered by places like UC Berkeley (via edX online), now is the time to adopt Big Data Finance more than ever.

In a world where machine learning algorithms can be paired with traditional classification modeling techniques automatically, and where those algorithms have been made publicly available for your analysts to use as a starting point or in their entirety, one no longer needs to be daunted by the thought of implementing Big Data Finance, or of testing the waters of accuracy to see if you are comfortable with the margin of error between your former forecasting methodology and this new world order.

Finance is the Participation Sport of the BI Olympics

IT is no longer the powerhouse that it once was, and unfortunately for CIOs who haven't embraced change, much of their realm has been commoditized by cloud computing powered by the core principles of grid computational engines and schema-less database designs. The whole concept of spending millions of dollars to bring all disparate systems together into one data warehouse has proven modestly beneficial, but if we are being truly honest, what has all that money and time actually yielded, especially toward the bottom line?
And by the time you finished with the EDW, I guarantee it was missing core operational data streams that were then designed into their own sea of data marts. Fast forward a few years, and you probably have some level of EDW, many more data marts, probably one or more cube (ROLAP/MOLAP) applications with n-number of cubes or a massive hyper-cube (or several), and still the business depends on spreadsheets sitting on top of these systems, creating individual silos of information under the desk or in the mind of one individual.

Wait<<<rewind<<< Isn’t that where we started?

Having disparate, ungoverned and untrusted data sources being managed by individuals instead of by enterprise systems of record?

And now we’re back>>>press play to continue>>>

When you stop to think about the last ten years, fellow BI practitioners, you might be scared of your ever-changing role. From a grass-roots effort to a formalized department, Business Intelligence went from the shadows to the mainstream, and brought with it reports, then dashboards, then KPIs and scorecards, managing by exception, proactive notifications and so on. And bam! We were hit by the first smattering of changes to come when Hadoop and others hit the presses. But we really didn't grok the true potential and actual meaning of said systems unless we came from a background like mine: either competitively, from a big-data-friendly industry group like telecommunications, or from a consultant/implementation p.o.v.
And then social networking took off like gangbusters and mobile became a reality with the introduction of the tablet device (though, I hate to float my own boat as always by mentioning the soapbox dream I spewed at a TDWI conference about the future of mobile BI when the first-generation iPhone released).

But that is neither here nor there. And, as always, I digress and am back…

At the same time as we myopically focused on the changing technological landscape around us, a shifting power paradigm was building. The Finance organization, once relegated to the back partition of cubicles where a pin drop was heard 'round the world (or at least the floor), was growing more and more despondent with not being able to access the data it needed, without IT intervention, to update its monthly forecasts and produce the subsequent P&L, Balance Sheet and Cash Flow Planning statements. IT's response was to acquire (for additional millions of dollars) a "BI tool", aka an ad-hoc reporting application that would allow Finance to pull its own data. But by the time it had been installed, and the data had been pulled and validated, the Finance team had either found an alternate solution or found the system useful for only a very small sliver of analysis, and had gone outside of IT to get the additional sources of information it wanted and needed to adapt to the changing business pressures from the convergence of social, mobile and unstructured datasets. And suddenly those once-shiny BI tools seemed like antiquated relics, and simply could not handle the sheer data volumes that were now expected of them, or would crash (unless filtered beyond the point of value). Businesses should not have to adapt their queries to the tool; they need a tool that can adapt to their ever-changing processes and needs.

Drowning in data but starving for information…

So if necessity is the mother of invention, Finance is its well-deserving child. And why? The business across the board is starving for information but drowning in data. Finance is no longer a game of solitaire, understood by few and ignored by many. In fact, Finance has become the participation sport of the BI Olympics, and rightfully so, where departmental collaboration at the fringe of the organization has proven to be the missing link that once prevented successful top-down planning efforts. Visualization demands made dashboards a thing of the past and demanded a better story, vis-à-vis storylines / infographics, to help disseminate more than just the numbers, but the story behind the numbers, to the rest of the organization, or what I like to call the "fringe".

I remember a few years ago when the biggest challenge was getting data, and often we joked about how nice it would be to have a sea of data to drown in; an analysts' buffet du jour; a happy paralysis-by-analysis plate was the special of the day, yet only for a few, while the rest was but a gleam in our data-starved eyes.

Looking forward from there, I ask, dear reader, where do we go from here…If it’s a Finance party and we are all invited, what do we bring to the party table as BI practitioners of value? Can we provide the next critical differentiator?

Well, I believe that we can, and that critical differentiator is forward-looking data. Why?

Gartner Group stated that “Predictive data will increase profitability by 20% and that historical data will become a thing of the past” (for a BI practitioner, the last part of that statement should worry you, if you are still resisting the plunge into the predictive analytics pool).

Remember, predictive analytics is a process that allows an organization to get true insight, and it works best when executed across a larger group of people, driving faster, smarter business decisions. This is a natural fit for the enterprise because, by definition, the enterprise offers a larger group of people to work with.

In fact, it was Jack Welch who said, "An organization's ability to learn, and translate that learning into action rapidly, is the ultimate competitive advantage."

If you haven't already, go out and start learning one of the statistical application packages. I suggest "R", and in the coming weeks I will provide R and SAS scripts (I have experience with both) for those interested in growing in their chosen profession and remaining relevant as we weather the sea of business changes.

How Do You Use LinkedIn? (Social Media Infographics)

How often do you refresh your LinkedIn profile pic? Or worse, the content within your profile? Unless you are a sales exec trolling the social networking site or a job seeker, I would surmise not that often; in fact, rarely is the most apropos description. Thoughts…? (Yes, she's back (again), but this time for good, dear readers.) @Laura_E_Edell (#infographics) says thanks to designinfographics.com for her latest content postings!

And just because I call it out doesn't mean you will know the best approach to updating your LinkedIn profile. And guess what… there's an infographic for that! (http://www.linkedin.com/in/lauraerinedell)

Check out my profile on LinkedIn by clicking the infographic

MicroStrategy Personal Cloud – a Great **FREE** Cloud-based, Mobile Visualization Tool

Have you ever needed to create a prototype of a larger Business Intelligence project focused on data visualizations? Chances are, you have, fellow BI practitioners. Here’s the scenario for you day-dreamers out there:

Think of the hours spent creating wireframes, no matter what tool you used, even if said tool was your hand and a napkin (ala 'back of the napkin' drawing) or the all-time favorite whiteboard, which later becomes a permanent drawing with huge bolded 'DO NOT ERASE OR IT'S OFF WITH YOUR HEAD' annotations dancing merrily around your work. Even better: electronic whiteboards, which yield hard copies of your hard work (so aptly named), which at first seem like the panacea of all things cool (though they have been around for eons) and, upon using, are deemed the raddest piece of hardware your company has, until, of course, you look down at the thermal paper printout, which has already faded in the millisecond since you tore it from machine to hand, leaving the printout useless to the naked eye unless you have super spidey-sense optic nerves. But now I digress even further, and in the time it took you to try to read the thermal printout, it has degraded further, because anything over 77 degrees is suboptimal (last I checked, we humans clock in at around 98.6, but who's counting); thus my last word on thermal-paper electronic whiteboards is that they are most awesome when NOT thermoregulating ;).

OK, and now We are back…rewind to sentence 1 –

Prototyping is to dashboard design, or any data visualization design, as pencils and grid paper are to me. Mano a mano – I mean, totally symbiotic, right?

But wireframing is torturous when you are in a consultative or pre-sales role, because you can't present napkin designs to a client, or pictures of a whiteboard, unless you are showing them the process behind the design. (And by the way, this is an effective "presentation builder" when you are going for a dramatic effect –> ala "first there were cavemen, then the chisel and stone were all one had to create metrics –> then the whiteboard –> then the… wait!")

This is where said BI practitioner needs to have something MORE for that dramatic pop, whiz-AM to give to their prospective clients/customers in their leave behind presentation.

And finally, the girl gets to her point (you are always so patient, my loving blog readers)… While I am biased, if you forget whom I work for and just take the tool into account, you will see the awesomeness that the new MicroStrategy Personal Cloud offers for (drum roll please) PROTOTYPING a new dashboard — or just building, distributing and mobilizing your spreadsheet of data in a highly stylized, graphical way that tells a story far better than a spreadsheet can in most situations. (Yes, naysayers, I know that for the 5% of circumstances which you can name, a spreadsheet is more apropos, but HA HA, I say: this personal cloud product has the ability to include the data table along with the data visualizations!)

Best of all, it is free.

I demoed this recently and timed how long it took to upload a spreadsheet, render 3 different data visualizations, generate the link to send to mobile devices (iPads and iPhones), account for the network latency for said demo-ees to receive the email with the link, and for them to launch the dashboard I created. Guess what the total time was?

Next best of all: it took only 23.7 minutes from concept to mobilization!

Mind you, I was also using data from the prospect that I had never seen or had any experience with.

OK, here is how it was done:

1) Create a FREE account or login to your existing MicroStrategy account (by existing, I mean, if you have ever signed up for the MicroStrategy forums or discussion boards, or you are an employee, then use the same login) at https://www.microstrategy.com/cloud/personal

Cloud Home

Landing Page After Logged in to Personal Cloud

2) Click the button to Create New Dashboard:

Create Dashboard Icon

  • Now, you either need to have a spreadsheet of data OR you can choose one of the sample spreadsheets that MicroStrategy provides (which is helpful if you want to see how others set up their data in Excel, or how others have used Cloud Personal to create dashboards; even though it is sample data, it is actually REAL data that has been scrub-a-dub-dubbed for your pleasure!). If using a sample data set, I recommend the FAA data. It is real air traffic data, with carrier, airport code, days of the week, etc., which you can use to plan your travel by; I do… See the screenshot below. There are some airports, and some carriers who fly into said airports, whom I WILL NOT fly on given days of the week on which I must travel. If there is a choice, I will choose alternate carriers/routes. This FAA data set will enable you to analyze this information to make the most informed decision (outside of price) when planning your travel. Trust me… VERY HELPFUL! Plus, you can look at all the poor slobs without names sitting at the Alaska Air gate who DIDN'T use this information to plan their travel, and as you casually saunter to your own gate on that Tuesday between 3 and 6 PM at SeaTac airport, you will remember that they look so sad because their Alaska Air flight has an 88% likelihood of being delayed or cancelled. (BTW, before you jump on me for my not-so-nice reference to said passengers, it is merely a quotation from my favorite movie 'Breakfast at Tiffany's'… says Holly Golightly: "Poor cat… poor old slob without a name.")

On time Performance (Live FAA Data)

If using your own data, select the spreadsheet you want to upload

3) Preview your data. IMPORTANT STEP: make sure that you change any fields to their correct type (either Attribute, Metric or Do Not Import).

Cloud Import - Preview Data

Keep in mind the 80/20 rule: 80% of the time, MicroStrategy will designate your data as either an Attribute or a Metric correctly using a simple rule of thumb: text (or VarChar/NVarChar if coming from SQL Server) will always be designated as an Attribute (i.e., your descriptor/dimension), and your numerals will be designated as Metrics. BUT, if your spreadsheet uses ID fields, like Store ID or Case ID, along with descriptors like Store DESC or Case DESC, most likely MicroStrategy will assume the Store ID / Case ID are Metrics (since the fields are numeric in the source). This is an easy change! You just need to make that change ahead of time using the drop-down indicator arrows in the column headings – to find them, hover over the column names with your mouse until you see the drop-down indicator arrow. Click on the arrow to change an Attribute column to a Metric column and vice versa (see screenshot):

Change Attribute to Metric
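If it helps to see the rule of thumb spelled out, here is a tiny Python sketch that mimics the manual fix described above (text becomes an Attribute, numerics become Metrics, numeric ID fields get re-flagged as Attributes). It is not MicroStrategy's actual import logic, and the file and column names are assumptions:

```python
# Sketch of the 80/20 rule of thumb: text columns -> Attributes, numeric
# columns -> Metrics, except numeric ID columns, which are descriptors and
# should be re-flagged as Attributes. Not MicroStrategy's real logic.
import pandas as pd

df = pd.read_excel("store_sales.xlsx")   # hypothetical upload

def classify_columns(frame: pd.DataFrame) -> dict:
    roles = {}
    for col in frame.columns:
        if pd.api.types.is_numeric_dtype(frame[col]):
            # Numeric ID fields (Store ID, Case ID, ...) are descriptors,
            # not measures, so treat them as Attributes.
            roles[col] = "Attribute" if col.lower().endswith("id") else "Metric"
        else:
            roles[col] = "Attribute"
    return roles

print(classify_columns(df))
# e.g. {'Store ID': 'Attribute', 'Store DESC': 'Attribute', 'Sales': 'Metric'}
```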

Once you finish with previewing your data, and everything looks good, click OK at the bottom Right of your screen.

In about 30-35 seconds, MicroStrategy will have imported your data into the Cloud for you to start building your awesome dashboards.

4) Choose a visualization from the menu that pops up on your screen upon successfully importing your spreadsheet:

Dashboard Visualization Selector
Change data visualization as little or as often as you choose

Here is the 2010 NFL data which I uploaded this morning. It is a heat map showing the home teams as well as every team they played in the 2010 season. The size of each box shows HOW big the win or loss was, and the color indicates whether they won or lost (green = home team won // red = home team lost).
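If you wanted to roughly reproduce that view outside of MicroStrategy, here is a hedged sketch: a home-team by opponent grid where color encodes the home margin (green = win, red = loss). The MicroStrategy widget also sizes each box by the margin, which this simple version skips; the CSV and column names are assumptions.

```python
# Rough approximation of the dashboard described above: color encodes the
# home team's margin of victory or defeat. Columns and file are assumed.
import pandas as pd
import matplotlib.pyplot as plt

games = pd.read_csv("nfl_2010_results.csv")   # home_team, away_team, home_pts, away_pts
games["margin"] = games["home_pts"] - games["away_pts"]

grid = games.pivot_table(index="home_team", columns="away_team", values="margin")

fig, ax = plt.subplots(figsize=(10, 8))
im = ax.imshow(grid.values, cmap="RdYlGn", vmin=-30, vmax=30)  # red = loss, green = win
ax.set_xticks(range(len(grid.columns)))
ax.set_xticklabels(grid.columns, rotation=90)
ax.set_yticks(range(len(grid.index)))
ax.set_yticklabels(grid.index)
fig.colorbar(im, ax=ax, label="home margin (points)")
plt.tight_layout()
plt.savefig("nfl_2010_heatmap.png", dpi=150)
```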

For all of you, dear readers, I bid you a Happy New Year. May your ideas flow aplenty, and your data match your dreams (of what it should be) :). Go fearlessly into the new world order of business intelligence, and know that I, Laura E., your Dashboard Design Diva, called Social Intelligence the New Order in 2005, again in 2006 and 2007. 🙂 Cheers, y'all.

http://tinyurl.com/ckfmya8

https://my.microstrategy.com/MicroStrategy/servlet/mstrWeb?pg=shareAgent&RRUid=1173963&documentID=4A6BD4C611E1322B538D00802F57673E&starget=1


Business Intelligence Clouds – The Skies the Limit

I am back…(for now, or so it seems these days) – I promise to get back to one post a month if not more.

Yes, I am known for my frequent use of puns, bordering on the line between cheesy and relevant. Forgive the title. It has been over 110 days since I last posted, which for me is a travesty. Despite my ever-growing list of activities, both professional and personal, I have always put my blog in the top priority quadrant.

Enough ranting…I diverged; and now I am back.

Ok, cloud computing (BI-tools related) seems to be all the rage, right up there with Mobile BI, big data and social. I dare use my own term coined back in 2007, 'Social Intelligence', as now others have trademarked this phrase (but we, dear readers, know the truth –> we have been thinking about the marriage between social networks / social media data sets and business intelligence for years now)… Alas, I diverge again. Today, I have been thinking a lot about cloud computing and Business Intelligence.

Think about BI and portals, like SharePoint (just to name one)… It was all the rage (or perhaps still is): "Integrate my BI reporting with my intranet / portal / SharePoint web parts." OK, once that was completed successfully, did it buy much in terms of adoption or savings or any number of those ROI catchphrases – "Buy our product, and your employees will literally save so much time they will be basket-weaving their reports into TRUE analyses"? What they didn't tell you was that more bandwidth meant less need for those people, which in turn meant people went into scarcity mode/tactics trying to make themselves seem or be relevant… And I don't fault them for this… Companies were not ready, or did not want to think about, what they were going to do with the newly freed-up resources they would have when the panacea of BI deployments actually came to fruition… And so the wheel turned. What was next? Reports became dashboards; dashboards became scorecards (each becoming the complement to the former); scorecards introduced proactive notification/alerting; alerting introduced threshold-based notification across multiple devices/methods, one of which was mobile; mobile notification brought the need for mobile BI –> and frankly, I will say it: Apple brought us the hardware to see the latter into fruition… Swipe, tap, double tap –> drill-down was now fun. Mobile made portals seem like child's play. But what about when you need to visualize something and ONLY have it in a spreadsheet?

(I love hearing this one; as if the multi-billion dollar company whose employee is claiming to only have the data on a spreadsheet didn't get it from somewhere else; I know, I know –> in the odd case, yes, this is true… so I will play along)…

The "only on a spreadsheet" crowd made mobile seem restrictive; enter RoamBI and the likes of others like MicroStrategy (yes, MicroStrategy now has a data import feature for spreadsheets with advanced visualizations for both web and mobile)… Enter Qlikview for the web crowd. The "I'm going to build a dashboard in less than 30 minutes" salesforce: "Wait… that's not all, folks… come now (to the meeting room) with your spreadsheet, and watch our magicians create dashboards to take with you from the meeting."

But no one cared about maintenance, data integrity, cleanliness or accuracy… I know… these tools are meant to be nimble, and I see their value in some instances and some circumstances… Just like the multi-billion dollar company that only tracks data on spreadsheets… I get it; there are some circumstances where they exist… But it is not the norm.

So, here we are …mobile offerings here and there; build a dashboard on the fly; import spreadsheets during meetings; but, what happens when you go back to your desk and have to open up your portal (still) and now have a new dashboard that only you can see unless you forward it out manually?

Enter cloud computing for BI; but not at the macro scale; let's talk personal… Personal clouds: individual sandboxes of a predefined amount of space over which IT has no sanction other than to bless how much space is allocated… From there, what you do with it is up to you. Hackles going up, I see… How about this…


Salesforce.com –> the biggest CRM cloud today. And for the last many years, SFDC has embraced cloud computing. And big data for that matter; and databases in the cloud (database.com, in fact)… Lions and tigers and bears, oh my!

So isn't it natural for BI to follow CRM into cloud computing? Ok, ok… for those of you whose hackles are still up, some rules (you IT folks will want to read further):

Rules of the game:

1) Set an amount of space (not to be exceeded, no matter what) – but be fair and realistic; 100 MB is useless. In today's world, a 4 GB flash drive was advertised for $4.99 during the back-to-school sales, so I think you can pony up enough to make the cloud useful.

2) If you delete it, there is a recycling bin (like on your PC/Mac); if you permanently delete it, too bad / so sad… We need to draw the line somewhere. Poor SharePoint admins around the world are having to drop into STSADM commands to restore Alvin Analyst's Most Important Analysis that he not only moved into the recycling bin but then permanently deleted.

3) Put some things of use in this personal cloud at work, like BI tools; upload a spreadsheet and build a dashboard in minutes with visualizations like the graph matrix (a crowd pleaser) or a time-series slider (another crowd favorite; people just love time-based data 🙂). But I digress (again)…

4) Set up BI reporting on the logged events; understand how many users are using your cloud environment, how many are getting errors, and what and why they are erroring. This simple type of event-based logging is very informative (we BI professionals tend to overthink things, especially those who are also physicists). A minimal sketch of this kind of usage reporting follows this list.

5) Take a look at what people are using the cloud for; if you create and add meaningful tools like BI visualizations and data import, and offer viewing via mobile devices like iPhone/iPad and Android or the web, people will use it…
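And for rule 4, here is a minimal sketch of what that event-based usage reporting could look like, assuming a simple log extract with user, event, error_code and timestamp columns (the schema is an assumption, purely for illustration):

```python
# Minimal sketch of the event-based usage reporting in rule 4.
# The log schema (user, event, error_code, timestamp) is an assumption.
import pandas as pd

logs = pd.read_csv("cloud_event_log.csv", parse_dates=["timestamp"])

# Weekly roll-up: distinct active users, total events, and error rate.
weekly = logs.set_index("timestamp").groupby(pd.Grouper(freq="W")).agg(
    active_users=("user", "nunique"),
    total_events=("event", "count"),
    errors=("error_code", lambda s: s.notna().sum()),
)
weekly["error_rate"] = weekly["errors"] / weekly["total_events"]
print(weekly.tail(8))
```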

This isn't a corporate iTunes or MobileMe cloud; this isn't Amazon's Elastic Compute Cloud (EC2). This is a cloud with the sole purpose of supporting BI; wait, not just supporting, but propelling users out of the doldrums of the current state of affairs and into the future.

It’s tangible and just cool enough to tell your colleagues and work friends “hey, I’ve got a BI cloud; do you?”

BIPlayBook.Com is Now Available!

As an aside, I’m excited to announce my latest website: http://www.biplaybook.com is finally published. Essentially, I decided that you, dear readers, were ready for the next step.  What comes next, you ask?

After Measuring BI data –> Making Measurements Meaningful –> and –> Massaging Meaningful Data into Metrics, what comes next is to discuss the age-old questions of 'So what?' and 'What do I do about it?'

BI PlayBook offers readers the next level of real-world scenarios now that BI has become the nomenclature of yesteryear and is used by most to inform decisions. Basically, it is the same, with the added bonus of showing how to tie BI back into the original business process, the customer service/satisfaction process, or really any process of substance within a company.

This is quite meaningful to me because so often, as consumers of goods and services, we find our voices go unheard, especially when we are left dissatisfied. Can you muster the courage to voice your issue (dare I say, 'complain'?) using the only tools provided: poor website feedback forms, surveys or (gasp) relaying your issue by calling into a call center or IVR system (double gasp)? I don't know if I can…

How many times do we get caught in the endless loop of an IVR, only to be 'opted out' (aka hung up on) when we do not press the magical combination of numbers on our keypads to reach a live human being? Or, when we are sneaky, pressing '0' only to find out the company is one step ahead of us, having programmed '0' to automatically transfer the call to our friend 'ReLisa Boutton' – aka the Release Button()?

Feedback is critical, especially as our world has become consumed by social networks. The ensuing 'chatter' of customers, choosing to 'Like' or join your company page or product, or tweet about the merits or demerits of your value proposition, is not only rich material if you care about understanding your customer; it is also a key to how well you are doing in the eyes of your customer. Think about how many customer satisfaction surveys you have taken that ask whether you would recommend the company to a friend or family member.

This measure defines one's NPS, or Net Promoter Score, and is a commonly shared KPI, or key performance indicator, for a company.
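For reference, the standard NPS calculation is simply the percentage of promoters (9-10 on the 0-10 "would you recommend us?" question) minus the percentage of detractors (0-6). A quick sketch, with the survey column name assumed:

```python
# Standard Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
# The survey file and column name are assumptions.
import pandas as pd

surveys = pd.read_csv("csat_surveys.csv")
scores = surveys["recommend_0_to_10"].dropna()

promoters = (scores >= 9).mean()
detractors = (scores <= 6).mean()
nps = (promoters - detractors) * 100
print(f"NPS: {nps:.1f}")
```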

Yet market researchers like myself know that what a customer says on a survey isn't always how they will behave. This discrepancy between what someone says and what someone does is as age-old as our parents telling us as children, "do not as I do, but as I say." No longer does this paradigm hold true. Therefore, limiting oneself to the NPS score will restrict your ability to truly understand the Voice of the Customer. And further, if you do not understand your customers' actual likelihood to recommend you to others or to purchase from you again, how can you predict their lifetime value or propensity for future revenue? You can't.

Now, I am ranting. I get it.

But I want you to understand that the social media content available from the social network spheres can fill that gap. It can help you understand how your customers truly perceive your goods or services. Trust me, customers are more likely to tweet (use Twitter) to vent in 140 characters or less about a negative experience than they are to take the time to fill out a survey. Likewise, they are more likely to rave about a great experience with your company.

So, why shouldn’t this social ‘chatter’ be tied back into the business intelligence platforms, and further, mined out specifically to inform customer feedback loops, voice of the customer & value stream maps, for example?

Going one step further, having a BI PlayBook focuses the attention of the metric owners on the areas that need to be addressed, while filtering out the noise that can detract from the intended purpose.

If we are going to make folks responsible for the performance of a given metric, shouldn’t we also help them understand what is expected of them up front, as opposed to when something goes terribly wrong, signified by the “text message” tirade of an overworked CEO waking you out of your slumber at 3 AM?

Further, understanding how to address an issue, whom to communicate with and, most importantly, how to resolve and respond to affected parties are all part of a well-conceived BI playbook.

It truly takes BI to that next level. In fact, two years ago, I presented this very topic at the TDWI Executive Summit in San Diego (Tying Business Processes into Your Business Intelligence). While I got a lot of stares, ala the 'dog tilting its head to the side in that confused glare at its owner' look, I hope people can draw back on that experience with moments of 'ah ha – that is what she meant' now that they have evolved (a little) in their BI maturation growth.

To Start Quilting, One Just Needs a Set of Patterns: Deconstructing Neural Networks (my favorite topic de la journée, semaine ou année)

 

How a Neural Network Works:

A neural network (#neuralnetwork) uses rules it "learns" from patterns in data to construct a hidden layer of logic. The hidden layer then processes inputs, classifying them based on the experience of the model. In this example, the neural network has been trained to distinguish between valid and fraudulent credit card purchases.
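To make that example tangible, here is a toy, hedged sketch of a one-hidden-layer network learning to separate valid from fraudulent purchases. The features and labels are synthetic and exist purely to show the hidden-layer mechanics:

```python
# Toy version of the figure's example: a one-hidden-layer network learning
# to flag fraudulent card purchases. Data is synthetic, for illustration only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 5_000
amount = rng.gamma(shape=2.0, scale=50.0, size=n)   # purchase amount ($)
hour = rng.integers(0, 24, size=n)                  # hour of day
merchant_risk = rng.random(n)                       # 0..1 merchant risk score

# Synthetic label: large purchases at odd hours from risky merchants -> fraud.
fraud = ((amount > 200) & ((hour < 5) | (hour > 22)) & (merchant_risk > 0.6)).astype(int)

X = np.column_stack([amount, hour, merchant_risk])
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, fraud)

print("training accuracy:", clf.score(X, fraud))
```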

This is not your mom’s apple pie or the good old days of case-based reasoning or fuzzy logic. (Although, the latter is still one of my favorite terms to say. Try it: fuzzzzyyyy logic. Rolls off the tongue, right?)…But I digress…

And, now, we’re back.

To give you a quick refresher:


Case based reasoning represents knowledge as a database of past cases and their solutions. The system uses a six-step process to generate solutions to new problems encountered by the user.

We're talking old school, folks… Think to yourself: frustrating FAQ pages, where you type a question into a search box, only to have follow-on questions prompt you for further clarification and, with each one, further frustration. Oh, and BTW, the same FAQ pages which e-commerce sites laughably call 'customer support' –

"And I wonder why your ACSI customer service scores are so low, Mr. or Mrs. e-Retailer :)," says this blogger facetiously to her audience.

 

 

 

And, we’re not talking about fuzzy logic either – Simply put, fuzzy logic is fun to say, yes, and technically is:

fuzzy logic

–> Rule-based technology with exceptions (see arrow 4)

–> Represents linguistic categories (for example, “warm”, “hot”) as ranges of values

–> Describes a particular phenomenon or process and then represents it in a small number of flexible rules

–> Provides solutions to scenarios typically difficult to represent with succinct IF-THEN rules

(Graphic: Take a thermostat in your home and assign membership functions for the input called temperature. This becomes part of the logic of the thermostat to control the room temperature. Membership functions translate linguistic expressions such as “warm” or “cool” into quantifiable numbers that computer systems can then consume and manipulate.)
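And because the thermostat example begs for it, here is a minimal sketch of triangular membership functions turning a crisp temperature into degrees of "cool", "warm" and "hot". The breakpoints are illustrative, not from any real controller:

```python
# Sketch of the thermostat example: triangular membership functions turn a
# crisp temperature into degrees of membership in "cool", "warm" and "hot".
# Breakpoints are illustrative assumptions.
def triangular(x, lo, peak, hi):
    """Degree of membership (0..1) in a triangular fuzzy set."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def fuzzify(temp_f):
    return {
        "cool": triangular(temp_f, 50, 60, 70),
        "warm": triangular(temp_f, 65, 72, 80),
        "hot":  triangular(temp_f, 75, 85, 95),
    }

print(fuzzify(68))   # partly "cool", partly "warm" -> blended rule firing
```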

 

Nope, we are talking Neural Networks – the absolute Bees-Knees in my mind, right up there with social intelligence and my family (in no specific order :):

–> Find patterns and relationships in massive amounts of data that are too complicated for humans to analyze

–> "Learn" patterns by searching for relationships, building models, and correcting the model's own mistakes over and over again

–> Humans "train" the network by feeding it training data for which the inputs produce a known set of outputs or conclusions, helping the neural network learn the correct solution by example

–> Neural network applications in medicine, science, and business address problems in pattern classification, prediction, financial analysis, and control and optimization

 

Remember folks: Knowledge is power and definitely an asset. Want to know more? I discuss this and other intangibles further in part 1 of a multi-part study I am conducting called:

Measuring Our Intangible Assets, by Laura Edell

Investigative Analysis Part 1: Quantifying the Market Value of an Organization’s Intangible Asset Known as ‘Knowledge’

OK, so I’ve decided to conduct another multi-part study similar to what I did last year.

This time, I will be analyzing and attempting to quantify an organization's intangible assets. Specifically, the following:

• knowledge, brands, reputations, and unique business processes

So, starting with knowledge: firstly, the chart is a little outdated, but I will source the last two years and update the graph later in the series. Regardless, it is interesting nonetheless. And since I am the Queen advocate for measuring what matters and managing what you can measure, consider the following my attempt to drink my own Kool-Aid. The chart below depicts revenue growth over a 7-year period ending in 2008. Those of you, my dear readers, who are also fellow Business Intelligence practitioners should be able to attest at first glance to this statistical representation of Content Management Systems (CMS) and portals' YoY revenue growth.

In fact, many of us have been asked to integrate BI dashboards and reports into existing corporate portals, like Microsoft SharePoint, or into the native portals bundled with most enterprise-grade BI products like MicroStrategy or SAP/BusinessObjects, right? Many of us have been tasked with drafting data dictionaries, data governance documentation, and source-protected project and code repositories; i.e., knowledge capture areas. But even with my vast knowledge (no pun intended), I was unaware that the growth spurt specific to CMS was as dramatic as this, depicted below and sourced from Prentice Hall.

In fact, between 2001 and 2008, CMS revenue growth went from ~$2.5B to ~$22B, with the greatest spurt beginning in 2003 and skyrocketing up from there.

 

Conversely, the portal revenue growth was substantially less. This was a surprise. I must have heard the words SharePoint and Implementation more than any others between 2007 and 2009, whereas the sticker shock that came with an enterprise-grade CMS sent many a C-level into the land of Nod, never to return until the proven VALUE cloud could ride them home against the nasty cop known as COST.

A-ha moment, folks. Portal products were far less costly than the typical Documentum or IBM CMS.

In fact, Jupiter's recent report on CMS stated:

“In some cases, an organization will deploy several seemingly redundant systems. In our sampling of about 800 companies that use content management packages, we discovered that almost 15 percent had implemented more than one CMS, often from competing vendors. That’s astounding, especially when you consider that an organization that deploys two content management systems can rack up more than $1 million in licensing fees and as much as $300,000 in yearly maintenance costs. Buying a second CMS should certainly raise a red flag for any CIO or CFO about to approve a purchase order.”

That's 120 companies from the Jupiter study spending $1M each in licensing, or a $120M baseline. Extend that to all organizations leveraging CMS technology, and therein lies the curious case of the revenue growth spurt.

To that, I say: Kiss My Intangible Assets! Knowledge is power, except when parked in someone's head. Now, when will someone invent the physical drainage system for exactly said knowledge, with or without the permission of said holder? These gatekeepers need to go, and they are often the dinosaurs fearing the newbie college grads and, worst of all, CHANGE.

In part 2, we will discuss another fave of mine: Brand You!

50 Ways to Drive Traffic Online

I wanted to share this great article I came across. It walks you through 50 ways to increase online traffic; I would add that if you are interested in building your own personal brand, or the "Brand YOU", then this article is a must-read:

http://www.gathersuccess.com/blogging-tips/50-ways-to-drive-traffic-online