Win @NFLFantasy PPR Leagues w/ ML

So, for the past 3 years I have been using #machine-learning (ML) to help me in my family-based PPR #fantasy-football league. When I joined the league, the commissioner (who is also my partner’s father) said I would never win using statistics as the basis of my game play. Being cut from the “I’ll show you,” aka “well, fine…I’ll prove it to you,” cloth that some of us gals working in the tech industry don as we break down stereotypical walls and glass ceilings is something I’ve always enjoyed about my career, and I love that there is an infographic to tell the tale courtesy of mscareergirl.com (my only complaint is that the source text in white is basically impossible to read, but at least the iconography and salaries are legible):

[Infographic: breaking the glass ceiling – provided by Danyel Surrency Jones on mscareergirl.com]

I’ve never whined that in my industry I tend to work primarily with the male species, or that they are “apparently” paid more *on average* because some survey says so. My work ethic doesn’t ride on gender lines – this train departs from the “proven value based on achievements earned & commensurate remuneration” station (woah, that’s a mouthful). I take challenges head-on not to prove to others, but to prove to myself, that I can do something I set my sights on, and do that something as well as, if not better than, my counterparts. Period. Regardless of gender. And that track has led to the figurative money ‘line’ (or perhaps it’s literal in the case of DFS or trains – who’s to say? But I digress…more on that later).

[Image: Money Train]

So, I joined his league on NFL.com, aptly nicknamed SassyDataMinxes (not sure who the other minxes are in my 1-woman “crew,” but I never said I was a grammarian; mathematician, aka number ninja, well, yes; but linguist, maybe not).

Year 1, as was to be expected *or so hindsight tells me*, was an abysmal failure. Keep in mind that I knew almost absolutely nothing when it came to pro football or fantasy sports. I certainly did not know players or strategies, or that fantasy football extended beyond Yahoo Pick ‘Em leagues, which, again in hindsight, would have been a great place to start my learning before jumping headfirst into the world of PPR/DFS.

At its core, Pick ‘Em requires you to pick the winning team of each weekly matchup, and if you have the most correct picks, you win that week. If there are no teams on BYE that week, you have 32 teams, or 16 games, whose outcomes you must “predict”; a binary 0 or 1 for lose or win, in essence. Right? Never, she exclaims, because WAIT, THERE’S MORE: you have to pick a winner based on another factor: the point spread. Therefore, if 0 means lost and 1 means won, you get a 1 per win EXCEPT when the margin of victory is less than the “winning spread” the Vegas bookmakers set – you could technically pick the winning team and still get a goose egg for that matchup if the team did not cover its point spread (ooh, it burns when that happens). The team that wins prances happily around the field singing “We Are the Champions” while you are the loser for not betting against them, because they were comfortable winning by a paltry amount less than necessary.

Ooh, the blood boils reliving those early games – especially since my grandmother, who won that year, picked her teams based on cities she liked or jersey colors that her ‘Color Me Happy’ wheel said were HER best COOL tones; the most unscientific approach worked for her so many times that I now think she actually IS a bookie running an illegal operation out of her basement, which fronts as “her knitting circle” – yah, as if any of us believe that one, Grandma :)! (She is a walking football prediction algorithm.)
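To make that spread wrinkle concrete, here’s a minimal sketch (my own toy function in Python, not NFL.com’s actual scoring logic) of how a pick only earns its point when the chosen team both wins and covers the bookmakers’ line:

```python
# A toy spread-adjusted Pick 'Em scorer (my own illustration, not
# NFL.com's actual scoring): a pick earns its point only if the chosen
# team wins AND covers the Vegas line.

def score_pick(picked_team, home_team, home_score, away_score, home_spread):
    """Return 1 if the picked team wins and covers, else 0.

    home_spread is the line from the home team's perspective:
    -6.5 means the home team is favored by 6.5 points.
    """
    margin = home_score - away_score           # positive => home team won
    home_covered = margin + home_spread > 0    # won by more than the line
    if picked_team == home_team:
        return int(margin > 0 and home_covered)
    return int(margin < 0 and not home_covered)

# Home favorite by 6.5 wins by only 3: picking the winner still scores 0.
print(score_pick("DEN", "DEN", 24, 21, -6.5))  # -> 0
```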

So, something as seemingly simple as Yahoo Pick ‘Em can actually be harder than it appears, unless you are her. But still, it is markedly easier than a PPR league, and light years easier than DFS/auction-style fantasy leagues, when it comes to predicting gaming outcomes at the player, weekly matchup and league levels.

Hindsight is such a beautiful thing (*I think I have said that before*) because now I can espouse all of these nuggets of knowledge as though I am the Alliteration Arbiter of All –> the Socratic Seer of Scoring Strategies…And again, as always, I digress (but ain’t it fun!).

OK, let’s continue…So, we’ve established that predicting fantasy football gaming outcomes requires a lot of *something* – and we’ve established that just cuz it seems simple doesn’t mean it is, when you are trying to predict outcomes across a massively mutable set of variables. *Wait, why didn’t I just READ that sentence or THINK it when I started?!* If I could go back in time and ask my 3-years-younger self:

“Self, should I stop this nonsense now, alter the destination, or persevere through what, at times, might seem like a terrible journey?” *HUM, I think most pensively*.

And then answer myself, just like the good only-child I am:

“NEVER – Self, nothing worth getting is easy to get, but the hardest-fought wins are the most worthwhile when all is said and done. And remember, don’t let the bedbugs bite; YOU’RE bigger than them/that.”

Or something along those lines, perhaps…

NEVER was my answer, because in 2015 and 2016 (Years 2 and 3), I was #1 in the league and won those coveted NFL.com trophies and a small pool of money. But what I won most of all was bragging rights.

[Image: Champion]

Oh baby, you can’t buy those…

Not even on the Dark Web from some onion-routed darker market. Especially the right to remind a certain commissioner / naysayer du jour / father-in-law-type that my hypothesis – that ML & statistics ALONE could beat his years of institutional football knowledge and know-how – held up. I also won a 2nd NFL-managed league that I joined in Year 3 to evaluate my own results with a different player composition.

So Year 1 was a learning year: a failure to others in the league but super valuable to me. Year 2 was my 1st real attempt to use the model, though with much supervision and human “tweaking.” In Year 3’s non-family league (league #2 for brevity’s sake; snarky voice in head: “missed that 4 paragraphs ago” – Burn!), I had drafted an ideal team, rated A-.

Year 3 was a double test to ensure Year 2 wasn’t a fluke. Two leagues played:

League #1, with the family, was a team comprised of many non-ideal draft picks chosen during non-optimized rounds (QB in round 2, DEF in round 4, etc.). But the key in both Years 2 and 3 was spotting the diamonds in the rookie rough – my model bubbled up unknown players, or as they are known to enthusiasts, “deep sleepers,” who went on to become rookie-of-the-year-type players: in 2015, that was Devonta Freeman (ATL); in 2016, that was ‘Ty-superfreak’ Hill, aka Tyreek Hill (Chiefs), and even better, Travis Kelce (who had been on my roster since 2015 but rose to the occasion in 2016, BEEEG-TIME). That has always been a strength of my approach to solving this outcomes conundrum.
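For a flavor of what “bubbling up” a deep sleeper can look like, here’s a hypothetical sketch (toy data and invented column names, not my actual model) that flags players whose modeled PPR projection far outruns their consensus draft rank:

```python
# A hypothetical "deep sleeper" screen: invented players and column names,
# simply contrasting a model's PPR projection with consensus draft rank.
import pandas as pd

players = pd.DataFrame({
    "player":         ["Sleeper A", "Known Star", "Sleeper B"],
    "consensus_rank": [140, 5, 180],            # e.g., ADP-style expert rank
    "model_points":   [210.0, 280.0, 190.0],    # modeled season PPR points
})

# Rank players by the model, then flag big gaps vs. consensus opinion.
players["model_rank"] = players["model_points"].rank(ascending=False)
players["sleeper_gap"] = players["consensus_rank"] - players["model_rank"]

sleepers = players[players["sleeper_gap"] > 100]
print(sleepers.sort_values("sleeper_gap", ascending=False))
```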

So, that all being said, in this Year 4, I plan to blog PRE-GAME with my predictions for my team, with commentary on the rankings of some other players. Remember, it isn’t just a player’s outcome that matters, but the player’s outcome in relation to your matchup that week within your league, and in the context of whom best to play vs. bench given those weekly changing facets. Some weeks, you might look like a boss according to NFL.com predictions, when in fact you should be playing someone else who might have a lower-than-you-are-comfortable-putting-into-your-lineup prediction. Those predictions, folks, HAVE ISSUES – but I believe in the power of model evaluation and learning, hence the name machine learning, or better yet, deep learning approaches.
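And as a toy illustration of the play/bench side of the problem, here’s a minimal sketch under simplifying assumptions (the projections are invented and treated as already matchup-adjusted) that greedily fills each lineup slot with the highest-projected eligible player:

```python
# A play/bench sketch: greedily start the highest matchup-adjusted
# projection in each slot. All names and numbers below are invented.

roster = [  # (player, position, matchup-adjusted projected PPR points)
    ("QB One", "QB", 18.2), ("QB Two", "QB", 21.4),
    ("RB One", "RB", 14.1), ("RB Two", "RB", 9.8), ("RB Three", "RB", 12.3),
    ("WR One", "WR", 16.5), ("WR Two", "WR", 11.0), ("WR Three", "WR", 13.7),
]
slots = {"QB": 1, "RB": 2, "WR": 2}  # simplified lineup requirements

starters = []
for position, count in slots.items():
    eligible = sorted((p for p in roster if p[1] == position),
                      key=lambda p: p[2], reverse=True)
    starters.extend(eligible[:count])  # start the top projections, bench the rest

for name, position, points in starters:
    print(f"START {position}: {name} ({points:.1f} projected)")
```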

Side note: reminds me of that “SAP powered” player comparison tool:

…which was DOWN / not accessible most of the aforementioned season while being hammered on by fans in need of a fantasy fix (reminds me of an IBM Watson joke, but I will keep that one to myself). Whoever is at fault: you should make sure your cloud provider “models” out an appropriate growth-based capacity & utilization plan IF you are going to feature the tool on your fantasy football site, NFL.com.

My next post will be all about how I failed during Year 4’s draft (2017) and what I am planning to do to make up for it, using the nuggets of knowledge that are an offshoot of retraining the MODEL(s) during the week. Plus, I will blog my play/bench predictions, which will hopefully secure a week 1 win (hopefully, because I still need to retrain this week, but not until Wednesday :)).

In a separate post, we’ll talk through the train…train…train phases: which datasets are most important for differentiating statistically important features from the sea of unworthy options sitting out there waiting for you to pluck them into your world. But don’t fall prey to those sinister foes…they might just be the “predictable” pattern of noise that clouds one’s senses.

And of course, scripting and more scripting; so many lines of code were written and rewritten, covering the gamut of scripting languages from the OSS data science branch (no neg from my perspective on SAS or SPSS, other than that they cost $$$ and I was trained on R in college *for free* like most of my peers). Well, free is a relative term, and you take the good with the bad when you pull up your OSS work-boots –> R has its drawbacks when it comes to processing larger-than-life datasets; it takes herculean sampling efforts just to execute a .R web-scraping script without hitting the proverbial out-of-memory errors, let alone to train the requisite models needed to solve a self-imposed ML fantasy football challenge such as this. Reader thinks to oneself, “she sure loves those tongue-twisting alliterations.”
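As a teaser for that post, here’s a minimal sketch (synthetic data, and scikit-learn standing in for my R scripts) of separating statistically meaningful features from the noisy pretenders with a simple univariate F-test:

```python
# Synthetic demo: 5 informative features hidden among 50 candidates,
# separated with a univariate F-test (scikit-learn).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import f_regression

X, y = make_regression(n_samples=500, n_features=50, n_informative=5,
                       noise=10.0, random_state=42)

f_scores, p_values = f_regression(X, y)
keep = np.where(p_values < 0.01)[0]   # strong evidence of a real relationship
print(f"Kept {len(keep)} of {X.shape[1]} candidate features:", keep)
```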

And gals, I love helping out a fellow chica (you too boys/men, but you already know that, eh) — Nobody puts baby in the corner, and I never turn my back on a mind in need or a good neg/dare.

Well, Year 4 — Happy Fantasy Football Everyone — May the wind take you through the playoffs and your scores take you all the way to the FF Superbowl 🙂

Utilizing #PredictiveAnalytics & #BigData To Improve Accuracy of #EPM Forecasting Process

I was amazed when I read @TidemarkEPM’s awesome new white paper on the “4 Steps to a Big Data Finance Strategy.” This is an area I am very passionate about; some might say it’s become my soap-box since my days as a Business Intelligence consultant. I saw the dawn of a world where EPM, and specifically the planning and budgeting process, was elevated from gut-feel analytics to embracing #machinelearning as a means of understanding which drivers are statistically significant versus those that have no verifiable impact, and ultimately using those drivers to feed a more accurate forecast model.

[Image: Big Data Finance]

Traditionally (even still today), finance teams sit in a conference room with Excel spreadsheets from Marketing, Customer Service, etc., and basically define current or future plans based on past performance mixed with a sprinkle of gut feel (sometimes it was more like a gallon of gut feel to every tablespoon of historical data). In those same meetings just one quarter later, I would shake my head when the same people questioned why they missed their targets or achieved a variance that was greater or less than the anticipated or expected value.

The new world order of Big Data Finance leverages the power of machine-learned algorithms to derive truly forecasted analytics. This was a primary driver for my switching from a pure BI focus into data science. And I have seen so many companies embrace the power of true “advanced predictive analytics” and, by doing so, harness its value and benefits with confidence, instead of fear of this unknown statistical realm, not to mention all of the unsettled glances when you say the nebulous “#BigData” or “#predictiveAnalytics” phrases.
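To make “statistically significant drivers” concrete, here’s a hedged sketch (synthetic data and hypothetical driver names, not Tidemark’s methodology) of testing candidate drivers with an ordinary least squares regression before feeding the keepers into a forecast model:

```python
# Synthetic driver-significance test: revenue truly depends on marketing
# spend and headcount; "support_tickets" is a gut-feel driver with no effect.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 36  # three years of monthly history
drivers = pd.DataFrame({
    "marketing_spend": rng.normal(100, 15, n),
    "support_tickets": rng.normal(50, 10, n),
    "sales_headcount": rng.normal(20, 3, n),
})
revenue = (3.0 * drivers["marketing_spend"]
           + 8.0 * drivers["sales_headcount"]
           + rng.normal(0, 20, n))

model = sm.OLS(revenue, sm.add_constant(drivers)).fit()
print(model.summary())  # p-values should flag support_tickets as noise
```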

But I wondered: exactly how many companies are doing this vs. the old way? And I was very surprised to learn from the white paper that only 22.7% of people view predictive capabilities as “essential” to forecasting, with 52.2% claiming it is merely nice to have. Surprised is an understatement; in fact, I was floored.

We aren’t just talking about including weather data when predicting consumer buying behaviors. What about the major challenge for the telecommunications / network provider: customer churn? Wouldn’t it be nice to answer the question: who are the most profitable customers WHO have the highest likelihood of churn? And wouldn’t it be nice not to have to assign 1 to several analysts for xx number of days or weeks to crunch through all of the relevant data to try to get to an answer to that question – and still probably not have all of the most important internal indicators, or be including indicators that have no value or significance in driving an accurate churn outcome?

What about adding in 3rd-party external benchmarking data to further classify and correlate these customer indicators before you run your churn prediction model? To do this manually is daunting, and so many companies, I now hypothesize, revert to the old ways of doing the forecast. Plus, I bet they have a daunting impression of the cost of big data and the time to implement, because of past experiences with things like building the uber “data warehouse” to get to that panacea of the “1 single source of truth”…on the island of Dr. Disparate Data that we all dreamt of in our past lives, right?
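For that churn question, here’s a minimal sketch (synthetic data and hypothetical feature names; any classifier could stand in for the real model) of ranking customers by churn probability weighted by profitability:

```python
# Synthetic churn demo: fit a classifier on historical churn labels, then
# rank customers by churn probability weighted by profit. (Scoring on the
# training data here is a simplification; hold out a test set in practice.)
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
customers = pd.DataFrame({
    "monthly_profit": rng.gamma(2.0, 50.0, n),
    "support_calls":  rng.poisson(2, n),
    "tenure_months":  rng.integers(1, 60, n),
})
# Synthetic label: churn is likelier with more calls and shorter tenure.
odds = 0.3 * customers["support_calls"] - 0.05 * customers["tenure_months"]
customers["churned"] = (odds + rng.normal(0, 1, n) > 0).astype(int)

features = ["monthly_profit", "support_calls", "tenure_months"]
clf = RandomForestClassifier(random_state=0).fit(customers[features],
                                                 customers["churned"])

customers["churn_prob"] = clf.predict_proba(customers[features])[:, 1]
customers["profit_at_risk"] = customers["churn_prob"] * customers["monthly_profit"]
print(customers.sort_values("profit_at_risk", ascending=False).head(10))
```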

I mean we have all heard that before and yet, how many times was it actually done successfully, within budget or in the allocated time frame? And if it was, what kind of quantifiable return on investment did you really get before annual maintenance bills flowed in? Be honest…No one is judging you; well, that is, if you learned from your mistakes or can admit that your pet project perhaps bit off too much and failed.

And what about training your people or the company to utilize said investment as part of your implementation plan? What was your budget for this training, and was it successful, or did you have to hire outside folks like consultants to do the work for you? And if so, how long did it actually take to break the dependency on those external resources and still be successful?

Before the days of Apache Spark and other open-source in-memory or streaming technologies, the world of Big Data was just blossoming into the more mature flower it would become. On top of which, it takes a while for someone to fully grok a new technology, even with the most specialized training, especially if they aren’t organically a programmer, as many Business Intelligence implementation specialists were/are not. Those who have past experience with something like C++ can quickly apply the same techniques to newer technologies like Scala for Apache Spark, or Python, and be up and running much faster than someone with no programming background who is trying to learn what a loop is or how to call an API to get 3rd-party benchmarking data. We programmers take that for granted when applying ourselves to learning something new.

And now that these tools are more enterprise-ready and friendly, with new integration modules for tools like R or MLlib for the statistical analysis, coupled with all of the free training offered by places like UC Berkeley (via edX online), now is the time to adopt Big Data Finance more than ever.

In a world where machine learning algorithms can be paired with traditional classification modeling techniques automatically, and where said algorithms have been made publicly available for your analysts to use as a starting point or in their entirety, one no longer needs to be daunted by the thought of implementing Big Data Finance, or of testing the waters of accuracy to see if you are comfortable with the margin of error between your former forecasting methodology and this new world order.

2015 Gartner Magic Quadrant – Boundaries Blur Between BI & Data Science

[Image: 2015 Gartner Magic Quadrant for Business Intelligence]

…a continuing trend which I gladly welcome… 

IT WAS the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us…

–Charles Dickens

Truer words were never spoken, whether about the current technological times or deep in our past (remember the good ole enterprise report books, aka the 120-page paperweight?).

And, this data gal couldn’t be happier with the final predictions made by Gartner in their 2015 Magic Quadrant Report for Business Intelligence. Two major trends / differentiators fall right into the sweet spot I adore:

-New demands for advanced analytics

-Focus on predictive/prescriptive capabilities

Whether you think this spells out doom for business intelligence as it exists today or not, you cannot deny that these trends in data science and big data can only force us to finally work smarter, and not harder (is that even possible??)

What are your thoughts…?

KPIs in Retail & Store Analytics

I like this post. While I added some KPIs to their list, I think it is a good list to get retailers on the right path…

KPIs in Retail and Store Analytics (a continuation of a post made by Abhinav on kpisrus.wordpress.com):
A) If it is a classic brick and mortar retailer:

Retail / Merchandising KPIs:

-Average Time on Shelf

-Item Staleness

-Shrinkage % (includes things like spoilage, shoplifting/theft and damaged merchandise)

Marketing KPIs:

-Coupon Breakage and Efficacy (which coupons drive desired purchase behavior vs. detract)

-Net Promoter Score (“How likely are you to recommend xx company to a friend or family member” – this is typically measured during customer satisfaction surveys and depending on your organization, it may fall under Customer Ops or Marketing departments in terms of responsibility).

-Number of trips (in person) vs. e-commerce site visits per month (tells you if your website is more effective than your physical store at generating shopping interest)

B) If it is an e-retailer :

Marketing KPIs:

-Shopping Cart Abandonment %

-Page with the Highest Abandonment

-Dwell time per page (indicates interest)

-Clickstream path for purchasers (like Jamie mentioned: do they arrive via email, promotion, or a flash-sale source like Groupon? And if so, what are the clickstream paths that they take? This should look like an upside-down funnel, where you have the visitors / unique users at the top who enter your site, and then the various paths (pages) they view en route to a purchase.)

-Clickstream path for visitors (take Expedia for example…Many people use them as a travel search engine but then jump off the site to buy directly from the travel vendor – understanding this behavior can help you monetize the value of the content you provide as an alternate source of revenue).

-Visit to Buy %

-If direct email marketing is part of your strategy, analyzing click rate is a close second to measuring conversion rate. They are 2 different KPIs, one the king, the other the queen, and both are necessary to understand how effective your email campaign was and whether it warranted the associated campaign cost.

Site Operations KPIs / Marketing KPIs:

-Error % Overall

-Error % by Page (this is highly correlated to the Pages that have the Highest Abandonment, which means you can fix something like the reason for the error, and have a direct path to measure the success of the change).

Financial KPIs:

-Average order size per transaction

-Average sales per transaction

-Average number of items per transaction

-Average profit per transaction

-Return on capital invested

-Margin %

-Markup %
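To show how a couple of the e-retailer KPIs above might be computed from raw session events, here’s a minimal sketch (the event-log schema is hypothetical) covering Shopping Cart Abandonment % and Visit to Buy %:

```python
# Toy clickstream: compute Shopping Cart Abandonment % and Visit to Buy %
# from a hypothetical session event log.
import pandas as pd

events = pd.DataFrame({
    "session_id": [1, 1, 2, 2, 2, 3, 4, 4, 4],
    "event": ["visit", "add_to_cart",
              "visit", "add_to_cart", "purchase",
              "visit",
              "visit", "add_to_cart", "purchase"],
})

sessions = events.groupby("session_id")["event"].apply(set)
visits    = len(sessions)
carts     = sum("add_to_cart" in s for s in sessions)
bought    = sum("purchase" in s for s in sessions)
converted = sum("add_to_cart" in s and "purchase" in s for s in sessions)

print(f"Cart abandonment: {(carts - converted) / carts:.0%}")  # 1 of 3 -> 33%
print(f"Visit to buy:     {bought / visits:.0%}")              # 2 of 4 -> 50%
```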

I hope this helps. Let me know if you have any questions.

You can reach me at lauraedell@me.com, or you can visit my blog, where I have many posts listing out various KPIs by industry and how to best aggregate them for reporting and executive presentation purposes (http://www.lauraedell.com).

It was very likely that I would write on KPIs in Retail or Store Analytics after my last post on Marketing and Customer Analytics. The main motive behind retailers looking into BI is the ‘customer’: how they can quickly react to, or better yet predict, changes in customer demand, remove wasteful spending through targeted marketing, exceed customer expectations, and hence improve customer retention.

I did some quick research on what companies have been using as measures of performance in the retail industry and compiled a list of KPIs that I would recommend for consideration.

Customer Analytics

The customer being the key for this industry, it is important to segment customers, especially for strategic campaigns, and to develop relationships for maximum customer retention. Understanding customer requirements and dealing with ever-changing market conditions is the key for a retail business to survive the competition.

  • Average order size per transaction
  • Average sales per transaction
