Wonderful World of Sports: Hey NFL, Got RFID?

As requested by some of my LinkedIn followers, here is the NFL Infographic about RFID tags I shared a while back:

[Infographic: NFL RFID player-tracking technology]

I hope @NFL @XboxOne #RFID data becomes more easily accessible. I have been tweeting about the Zebra deal for six months now, and about the awesome implications it would have on everything from sports betting to fantasy enthusiasts to coaching, drafting, and what have you. Along those lines, I have built a fantasy football (PPR) league bench/play #MachineLearning model using #PySpark which, as it turns out, is pretty good. But it could be great with the RFID stream.
[Image: RFID-tagged NFL shoulder pads]
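To give a flavor of the bench/play model mentioned above, here is a minimal stdlib-only sketch of the decision rule. The real model is built in PySpark on historical PPR stat lines; every feature name, weight, and number below is a made-up placeholder, not the actual model:

```python
# Toy bench/play decision rule for a PPR fantasy league.
# Hypothetical scoring weights: 1 pt/reception, 0.1 pt/yard, 6 pts/TD.

def projected_ppr(receptions, yards, tds):
    """Project PPR points from hypothetical per-game averages."""
    return receptions * 1.0 + yards * 0.1 + tds * 6.0

def bench_or_play(player, bench_baseline):
    """Start the player only if his projection beats the best bench option."""
    proj = projected_ppr(**player["avg_stats"])
    return "PLAY" if proj > bench_baseline else "BENCH"

# Invented example player and bench baseline.
starter = {"name": "WR1", "avg_stats": {"receptions": 6, "yards": 78, "tds": 0.4}}
print(bench_or_play(starter, bench_baseline=12.5))
```

With an RFID stream, per-route speed and separation data could feed the projection instead of box-score averages alone.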

This is where the #IoT rubber really hits the road, because there are so many more fans of the NFL than there are folks who really grok the “Connected Home” (not knocking it, but it doesn’t have the reach of the NFL). Imagine measuring the burn-rate output vs. performance degradation of these athletes mid-game, and one day being able to stream that to the field or the booth for in-game course corrections. Aah, a girl can only dream…


Is Machine Learning the New EPM Black?

I am currently a data scientist and a certified Lean Six Sigma Black Belt. I specialize in Big Data Finance, EPM, BI, and process improvement, a convergence of skills that has given me the ability to understand the interactions between people, process, and technology/tools.

I would like to address the need to transform traditional EPM processes by leveraging more machine learning to help reduce forecast error and eliminate unnecessary budgeting and planning rework and cycle time, using a three-step ML approach:

First, determine which business drivers are statistically meaningful to the forecast (correlation), eliminating those that are not.

Second, cluster those correlated drivers by significance to determine which ones cause the most variability in the forecast (causation).

Third, use the outputs of steps 1 and 2 as inputs to the forecast, and apply ML to generate a statistically accurate, forward-looking forecast.
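The three steps above can be sketched end to end in plain Python (stdlib only). Every driver name, number, and threshold below is a made-up placeholder for illustration, not real financial data:

```python
# 1) keep drivers that correlate with the forecast target,
# 2) rank the survivors by correlation strength (a crude stand-in for clustering),
# 3) fit a simple least-squares forecast on the top driver.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

revenue = [100, 110, 125, 140, 160]            # hypothetical forecast target
drivers = {
    "web_traffic": [10, 11, 13, 14, 16],       # invented, strongly correlated
    "office_temp": [72, 69, 74, 70, 71],       # invented noise driver
}

# Step 1: eliminate drivers below a chosen correlation threshold.
kept = {k: v for k, v in drivers.items() if abs(pearson(v, revenue)) > 0.7}

# Step 2: rank the survivors by correlation strength.
ranked = sorted(kept, key=lambda k: -abs(pearson(kept[k], revenue)))

# Step 3: least-squares fit on the top driver, then forecast the next period.
xs, ys = kept[ranked[0]], revenue
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
print(round(intercept + slope * 17))  # forecast revenue if web_traffic hits 17
```

In practice the clustering step would use something like k-means over many drivers, but the filter-rank-fit shape is the same.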


Objection handling, in my experience, focuses on cost, time, and the sensitive change-management aspect. Here is how I have handled each:

  1. Cost: all of these models can be built using free tools like R and Python data science libraries, so there is minimal to no technology/tool CapEx/OpEx investment.
  2. Time: most college grads with a business, science, or computer engineering degree will have worked with R and/or Python (and more) while earning their degree, which reduces the ramp-up time needed to get folks acclimated. To fill the remaining skill-set gap, they can start from the vast libraries of work already provided by the R and Python communities and the many other free online data science resources, which also minimizes cycles wasted on rework trying to define drivers based on gut feel alone.
  3. Change: this is the bigger objection, and it has to be handled according to the business culture and its openness to change. The best means of handling it is simply to show them. The proof is in the proverbial pudding: a variance analysis comparing the ML forecast, the human forecast, and the actuals will speak volumes, with bonus points if the correlation and clustering analysis also surfaces previously unknown nuggets of information richness.
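That "show them" variance analysis can be as simple as comparing forecast error rates. Here is a hedged sketch using mean absolute percentage error (MAPE); all of the forecast and actual figures are invented for illustration:

```python
# Compare a hypothetical ML forecast and a human forecast against actuals.

def mape(forecast, actuals):
    """Mean absolute percentage error, in percent (lower is better)."""
    return 100 * sum(abs(f - a) / a for f, a in zip(forecast, actuals)) / len(actuals)

actuals = [100, 120, 130, 110]   # made-up quarterly actuals
human   = [ 90, 140, 120, 130]   # made-up gut-feel forecast
machine = [ 98, 123, 133, 108]   # made-up ML forecast

print(f"human MAPE:   {mape(human, actuals):.1f}%")
print(f"machine MAPE: {mape(machine, actuals):.1f}%")
```

Putting the two error rates side by side in front of the CFO is usually more persuasive than any methodology debate.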

Even without finding that golden-nugget ticket, the CFO will certainly take notice of a more accurate forecast and appreciate the time and frustration saved by a less consuming budgeting and planning cycle.

Utilizing #PredictiveAnalytics & #BigData To Improve Accuracy of #EPM Forecasting Process

I was amazed when I read the @TidemarkEPM awesome new white paper on the “4 Steps to a Big Data Finance Strategy.” This is an area I am very passionate about; some might say it’s become my soap-box since my days as a Business Intelligence consultant. I saw the dawn of a world where EPM, specifically the planning and budgeting process, was elevated from gut-feel analytics to embracing #MachineLearning as a means of separating the drivers that are statistically significant from those with no verifiable impact, and ultimately using those to feed a more accurate forecast model.

Big Data Finance

Traditionally (even still today), finance teams sit in a conference room with Excel spreadsheets from Marketing, Customer Service, etc., and basically define current or future plans based on past performance mixed with a sprinkle of gut feel (sometimes it was more like a gallon of gut feel to every tablespoon of historical data). In these same meetings just one quarter later, I would shake my head when the same people questioned why they missed their targets or landed a variance far greater or smaller than expected.

The new world order of Big Data Finance leverages the power of machine-learned algorithms to derive truly forecasted analytics, and this was a primary driver of my switch from a pure BI focus into data science. I have since seen many companies embrace true “advanced predictive analytics” and, by doing so, harness its value and benefits with confidence instead of fear of this unknown statistical realm, not to mention all of the unsettled glances you get when you utter the nebulous phrases “#BigData” or “#PredictiveAnalytics.”

But I wondered: exactly how many companies are doing this vs. the old way? I was very surprised to learn from the white paper that 22.7% of people view predictive capabilities as “essential” to forecasting, with 52.2% claiming it is merely nice to have. Surprised is an understatement; in fact, I was floored.

We aren’t just talking about including weather data when predicting consumer buying behaviors. What about the major challenge for the telecommunications/network provider with customer churn? Wouldn’t it be nice to answer the question: who are the most profitable customers WHO have the highest likelihood of churn? And wouldn’t it be nice not to have to assign one to several analysts for days or weeks to crunch through all of the relevant data to try to answer that question, and still probably miss the most important internal indicators, or include indicators that have no value or significance to driving an accurate churn outcome?

What about adding in third-party external benchmarking data to further classify and correlate these customer indicators before you run your churn prediction model? Doing this manually is daunting, and many companies, I now hypothesize, revert to the old ways of doing the forecast. Plus, I bet they have a daunting impression of the cost of big data and the time to implement, thanks to past experiences with things like building the uber “data warehouse” to reach that panacea of the “one single source of truth”…on the island of Dr. Disparate Data that we all dreamt of in our past lives, right?
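To make the churn question concrete, here is a minimal sketch of ranking customers by "profit at risk": a toy churn score multiplied by profitability. The logistic weights and customer records are invented placeholders, not a trained model:

```python
# Surface "most profitable customers with the highest churn risk."

from math import exp

def churn_probability(support_calls, months_tenure):
    """Toy logistic score: more support calls and shorter tenure -> higher risk.
    The 0.6 and 0.1 weights are invented, not fitted to real data."""
    z = 0.6 * support_calls - 0.1 * months_tenure
    return 1 / (1 + exp(-z))

customers = [
    {"id": "A", "profit": 1200, "support_calls": 5, "months_tenure": 6},
    {"id": "B", "profit": 4000, "support_calls": 1, "months_tenure": 48},
    {"id": "C", "profit": 2500, "support_calls": 4, "months_tenure": 12},
]

# Expected profit at risk = profit * churn probability; target the top of this list.
at_risk = sorted(
    customers,
    key=lambda c: -c["profit"] * churn_probability(c["support_calls"], c["months_tenure"]),
)
print([c["id"] for c in at_risk])
```

A real model would fit those weights from historical churn labels (and fold in the third-party benchmarking data), but the retention team works the top of exactly this kind of ranked list.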

I mean we have all heard that before and yet, how many times was it actually done successfully, within budget or in the allocated time frame? And if it was, what kind of quantifiable return on investment did you really get before annual maintenance bills flowed in? Be honest…No one is judging you; well, that is, if you learned from your mistakes or can admit that your pet project perhaps bit off too much and failed.

And what about training your people or the company to utilize said investment as part of your implementation plan? What was your budget for this training, and was it successful, or did you have to hire outside folks like consultants to do the work for you? And if so, how long did it actually take to break the dependency on those external resources and still be successful?

Before the days of Apache Spark and other open-source in-memory or streaming technologies, the world of Big Data was just blossoming into what it would grow into as a more mature flower. On top of that, it takes a while for someone to fully grok a new technology, even with the most specialized training, especially if they aren’t organically a programmer, like many Business Intelligence implementation specialists were (and are). Those with past experience in something like C++ can quickly apply the same techniques to newer technologies like Scala for Apache Spark, or Python, and be up and running much faster than someone with no programming background who is still learning what a loop is or how to call an API to get third-party benchmarking data. We programmers take that for granted when applying ourselves to learning something new.

And now that these tools are more enterprise-ready and friendly, with new integration modules for tools like R or Spark’s MLlib for statistical analysis, coupled with all of the free training offered by places like UC Berkeley (via edX online), now is the time to adopt Big Data Finance more than ever.

In a world where machine learning algorithms can be paired with traditional classification modeling techniques automatically, and where said algorithms have been made publicly available for your analysts to use as a starting point or in their entirety, one no longer needs to be daunted by the thought of implementing Big Data Finance, or by testing the waters to see whether you are comfortable with the margin of error between your former forecasting methodology and this new world order.

2015 Gartner Magic Quadrant – Boundaries Blur Between BI & Data Science

2015 Magic Quadrant Business intelligence

…a continuing trend which I gladly welcome… 

IT WAS the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us…

–Charles Dickens

Truer words were never spoken, whether about the current technological times or deep in our past (remember the good ole enterprise report books, a.k.a. the 120-page paperweight?).

And, this data gal couldn’t be happier with the final predictions made by Gartner in their 2015 Magic Quadrant Report for Business Intelligence. Two major trends / differentiators fall right into the sweet spot I adore:

New demands for advanced analytics 

Focus on predictive/prescriptive capabilities 

Whether you think this spells out doom for business intelligence as it exists today or not, you cannot deny that these trends in data science and big data can only force us to finally work smarter, and not harder (is that even possible??)

What are your thoughts…?

Awesome Article “Views from the C-Suite: Who’s Big on Big Data” from The Economist

This is an awesome article discussing the whole “big data” thing from the C-level point of view. It is easy to get mired down in the technical weeds of big data, especially since it generates a ton of different definitions depending on who you ask and, usually, where they work, department-wise.

http://pages.platfora.com/rs/platfora/images/Economist-Intelligence-Unit-Big-Data-Exec-Summary.pdf

Let me know what you think.

Big shout out to @platfora for sharing this!

Finance is the Participation Sport of the BI Olympics

IT is no longer the powerhouse that it once was, and unfortunately for CIOs who haven’t embraced change, much of their realm was commoditized by cloud computing, powered by the core principles of grid computational engines and schema-less database designs. The whole concept of spending millions of dollars to bring all disparate systems together into one data warehouse has proven modestly beneficial, but if we are being truly honest, what has all that money and time actually yielded, especially toward the bottom line?
And by the time you finished with the EDW, I guarantee it was missing core operational data streams that were then designed into their own sea of data marts. Fast-forward a few years, and you probably have some level of EDW, many more data marts, one or more cube (ROLAP/MOLAP) applications, and n cubes or a massive hypercube (or several), and still the business depends on spreadsheets sitting on top of these systems, creating individual silos of information under the desk or in the mind of one individual.

Wait<<<rewind<<< Isn’t that where we started?

Having disparate, ungoverned and untrusted data sources being managed by individuals instead of by enterprise systems of record?

And now we’re back>>>press play to continue>>>

When you stop to think about the last ten years, fellow BI practitioners, you might be scared of your ever-changing role. From a grass-roots effort to a formalized department team, Business Intelligence went from the shadows to the mainstream, and brought with it reports, then dashboards, then KPIs and scorecards, managing by exception, proactive notifications, and so on. And bam! We were hit by the first smattering of changes to come when Hadoop and others hit the presses. But we really didn’t grok the true potential and actual meaning of said systems unless we came from a background like mine: competitive intelligence, a big-data-friendly industry like telecommunications, or a consultant/implementation point of view.
And then social networking took off like gangbusters, and mobile became a reality with the introduction of the tablet device (though, at the risk of floating my own boat as always, I did spew my soap-box dream about the future of mobile BI at a TDWI conference when the first-generation iPhone released).

But that is neither here nor there. And, as always, I digress and am back…

At the same time as we myopically focused on the changing technological landscape around us, a shifting power paradigm was building. The Finance organization, once relegated to the back partition of cubicles where a pin drop was heard ’round the world (or at least the floor), was growing more and more despondent about not being able to access the data it needed, without IT intervention, to update monthly forecasts and produce the subsequent P&L, Balance Sheet, and Cash Flow Planning statements. IT’s response was to acquire (for additional millions of dollars) a “BI tool,” a.k.a. an ad-hoc reporting application that would let Finance pull its own data. But by the time it had been installed and the data had been pulled and validated, the Finance team had either found an alternate solution, or found the system useful for only a small sliver of analysis and gone outside of IT for the additional sources of information it wanted and needed to adapt to the changing business pressures from the convergence of social, mobile, and unstructured datasets. Suddenly those once-shiny BI tools seemed like antiquated relics that simply could not handle the sheer data volumes now expected of them, or would crash (unless filtered beyond the point of value). Businesses should not have to adapt their queries to the tool; they need a tool that can adapt to their ever-changing processes and needs.

Drowning in data but starving for information...

So if necessity is the mother of invention, Finance was its well-deserving child. And why? The business across the board is starving for information but drowning in data. Finance is no longer a game of solitaire, understood by few and ignored by many. In fact, Finance has become the participation sport of the BI Olympics, and rightfully so: departmental collaboration at the fringe of the organization has proven to be the missing link that previously prevented successful top-down planning efforts. Visualization demands made dashboards a thing of the past and called for a better story, vis-à-vis storylines and infographics, to help disseminate more than just the numbers: the story behind the numbers, shared with the rest of the organization, or what I like to call the “fringe.”

I remember a few years ago when the biggest challenge was getting data at all, and we often joked about how nice it would be to have a sea of data to drown in. An analyst’s buffet du jour, a happy analysis-induced paralysis, was the special of the day, yet only for a few, while for the rest it was but a gleam in our data-starved eyes.

Looking forward from there, I ask, dear reader: where do we go from here? If it’s a Finance party and we are all invited, what of value do we bring to the table as BI practitioners? Can we provide the next critical differentiator?

Well, I believe that we can, and that critical differentiator is forward-looking data. Why?

Gartner Group stated that “Predictive data will increase profitability by 20% and historical data will become a thing of the past” (for a BI practitioner, the last part of that statement should worry you if you are still resisting the plunge into the predictive analytics pool).

Remember, predictive analytics is a process that allows an organization to gain true insight, and it works best when executed across a larger group of people, driving faster, smarter business decisions. That makes it a natural fit for the enterprise, which by definition offers a larger group of people to work with.

In fact, it was Jack Welch who said, “An organization’s ability to learn, and translate that learning into action rapidly, is the ultimate competitive advantage.”

If you haven’t already, go out and start learning one of the statistical application packages. I suggest R, and in the coming weeks I will provide R and SAS scripts (I have experience with both) for those interested in growing in their chosen profession and remaining relevant as we weather the sea of business change.