Microsoft Data AMP 2017

Data AMP 2017 just wrapped up, and some really interesting announcements came out specific to our company-wide push to infuse machine learning, cognitive, and deep learning APIs into every part of our organization. Some of the announcements are ML enablers, while others are direct enhancements.

Here is a summary with links to further information:

  • SQL Server R Services in SQL Server 2017 is renamed to Machine Learning Services since both R and Python will be supported. More info
  • Three new features for Cognitive Services are now Generally Available (GA): Face API, Content Moderator, Computer Vision API. More info
  • Microsoft R Server 9.1 released: Real time scoring and performance enhancements, Microsoft ML libraries for Linux, Hadoop/Spark and Teradata. More info
  • Azure Analysis Services is now Generally Available (GA). More info
  • **Microsoft has incorporated the technology that sits behind the Cognitive Services directly inside U-SQL as functions. U-SQL is part of Azure Data Lake Analytics (ADLA).
  • More Cortana Intelligence solution templates: Demand forecasting, Personalized offers, Quality assurance. More info
  • A new database migration service will help you migrate existing on-premises SQL Server, Oracle, and MySQL databases to Azure SQL Database or SQL Server on Azure virtual machines. Sign up for limited preview
  • A new Azure SQL Database offering, currently being called Azure SQL Managed Instance (final name to be determined):
    • Migrate SQL Server to SQL as a Service with no changes
    • Support SQL Agent, 3-part names, DBMail, CDC, Service Broker
    • **Cross-database + cross-instance querying
    • **Extensibility: CLR + R Services
    • SQL profiler, additional DMVs support, Xevents
    • Native back-up restore, log shipping, transaction replication
    • More info
    • Sign up for limited preview
  • SQL Server vNext CTP 2.0 is now available, and the product will be officially called SQL Server 2017.

I added ** next to the announcements I am most excited about. These include key innovations in our approach to AI that enhance our ability to compete in deep learning against Google TensorFlow, for example. Check out the following blog posting: https://blogs.technet.microsoft.com/dataplatforminsider/2017/04/19/delivering-ai-with-data-the-next-generation-of-microsofts-data-platform/. It calls out three things:

  1. The first is the close integration of AI functions into databases, data lakes, and the cloud to simplify the deployment of intelligent applications.
  2. The second is the use of AI within our services to enhance performance and data security.
  3. The third is flexibility—the flexibility for developers to compose multiple cloud services into various design patterns for AI, and the flexibility to leverage Windows, Linux, Python, R, Spark, Hadoop, and other open source tools in building such systems.

 


Is Machine Learning the New EPM Black?

I am currently a data scientist and am also a certified Lean Six Sigma Black Belt. I specialize in the Big Data Finance, EPM, BI, and process improvement fields, where this convergence of skills has given me the ability to understand the interactions between people, process, and technology/tools.

I would like to address the need to transform traditional EPM processes by leveraging more machine learning to help reduce forecast error and eliminate unnecessary budgeting and planning rework and cycle time, using a 3-step ML approach:

1st, determine which business drivers are statistically meaningful to the forecast (correlation), eliminating those that are not.

2nd, cluster those correlated drivers by significance to determine which cause the most variability in the forecast (causation).

3rd, use the output of steps 1 and 2 as inputs to the forecast, and apply ML to generate a statistically accurate, forward-looking forecast.
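To make the three steps concrete, here is a minimal sketch in Python (pandas + scikit-learn). Everything specific in it is a hypothetical placeholder I am supplying for illustration: the file names, the "revenue" target column, the 0.05 significance cutoff, and the choice of Pearson correlation, k-means clustering, and linear regression. Any comparable test, clustering method, or model could be swapped in.

```python
# A minimal sketch of the 3-step approach; file names, columns, and cutoffs are hypothetical.
import pandas as pd
from scipy import stats
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

df = pd.read_csv("history.csv")                      # historical actuals plus candidate drivers
target = "revenue"
drivers = [c for c in df.columns if c != target]

# Step 1: keep only drivers whose correlation with the target is statistically meaningful.
kept = []
for d in drivers:
    r, p = stats.pearsonr(df[d], df[target])
    if p < 0.05:                                     # illustrative significance cutoff
        kept.append({"driver": d, "abs_r": abs(r)})

# Step 2: cluster the surviving drivers by correlation strength; the strongest cluster
# holds the drivers contributing the most variability to the forecast.
strengths = pd.DataFrame(kept)
strengths["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(strengths[["abs_r"]])
top_cluster = strengths.groupby("cluster")["abs_r"].mean().idxmax()
selected = strengths.loc[strengths["cluster"] == top_cluster, "driver"].tolist()

# Step 3: feed the selected drivers into a model to produce the forward-looking forecast.
model = LinearRegression().fit(df[selected], df[target])
future = pd.read_csv("future_drivers.csv")           # planned/known driver values for future periods
forecast = model.predict(future[selected])
print(forecast)
```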


Objection handling, in my experience, focuses on cost, time, and the sensitive change management aspect. Here is how I have handled these, for example:

  1. Cost: all of these models can be built using free tools like R and Python data science libraries, so there is minimal to no technology/tool CapEx/OpEx investment.
  2. Time: most college grads with a business, science, or computer engineering degree will have undoubtedly worked with R and/or Python (and more) while earning their degree. This reduces the ramp-up time needed to get folks acclimated and up to speed. To fill the remaining skill-set gap, they can use the vast libraries of work already provided by the R/Python initiatives, or the many other data science communities available online for free, as a starting point, which also minimizes the time lost to unnecessary cycles and rework trying to define drivers based on gut feel alone.
  3. Change: this is the bigger objection and has to be handled according to the business culture and its openness to change. The best means of handling it is to simply show them. The proof is in the proverbial pudding, so creating a variance analysis of the ML forecast, the human forecast, and the actuals will speak volumes (see the quick sketch after this list), and bonus points if the correlation and clustering analysis also surfaces previously unknown nuggets of information richness.
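Here is that "show them" variance analysis as a minimal sketch, again in Python with pandas. The CSV and its column names (period, actual, ml_forecast, human_forecast) are hypothetical stand-ins for whatever your planning system exports.

```python
# A minimal sketch of the variance analysis: ML forecast vs. human forecast vs. actuals.
# The file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("forecast_comparison.csv")   # columns: period, actual, ml_forecast, human_forecast

for col in ["ml_forecast", "human_forecast"]:
    df[col + "_variance"] = df[col] - df["actual"]
    df[col + "_ape"] = (df[col + "_variance"].abs() / df["actual"].abs()) * 100

# Mean absolute percentage error for each method: the lower number wins the argument.
print(df[["ml_forecast_ape", "human_forecast_ape"]].mean())
```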

Even without finding that golden-nugget ticket, the CFO will certainly take notice of a more accurate forecast and appreciate the time and frustration saved by a less consuming budgeting and planning cycle.

Utilizing #PredictiveAnalytics & #BigData To Improve Accuracy of #EPM Forecasting Process

I was amazed when I read the @TidemarkEPM awesome new white paper on the “4 Steps to a Big Data Finance Strategy.” This is an area I am very passionate about; some might say it’s become my soap-box since my days as a Business Intelligence consultant. I saw the dawn of a world where EPM, specifically the planning and budgeting process, was elevated from gut-feel analytics to embracing #machinelearning as a means of separating the drivers that are statistically significant from those that have no verifiable impact, and ultimately using those to feed a more accurate forecast model.

Big Data Finance

Traditionally (even still today), finance teams sit in a conference room with Excel spreadsheets from Marketing, Customer Service etc., and basically, define the current or future plans based on past performance mixed with a sprinkle of gut feel (sometimes, it was more like a gallon of gut feel to every tablespoon of historical data). In these same meetings just one quarter later, I would shake my head when the same people questioned why they missed their targets or achieved a variance that was greater/less than the anticipated or expected value.

The new world order of Big Data Finance leverages the power of machine-learned algorithms to derive true forecasted analytics. This was a primary driver for my switching from a pure BI focus into data science. And I have seen so many companies embrace the power of true “advanced predictive analytics” and, by doing so, harness its value and benefits with confidence, instead of fear of this unknown statistical realm, not to mention all of the unsettled glances when you say the nebulous “#BigData” or “#PredictiveAnalytics” phrases.

But I wondered: exactly how many companies are doing this vs. the old way? And I was very surprised to learn from the white paper that 22.7% of people view predictive capabilities as “essential” to forecasting, with 52.2% claiming it is nice to have. Surprised is an understatement; in fact, I was floored.

We aren’t just talking about including weather data when predicting consumer buying behaviors. What about the major challenge for the telecommunications / network provider with customer churn? Wouldn’t it be nice to answer the question: Who are the most profitable customers WHO have the highest likelihood of churn? And wouldn’t it be nice to not have to assign one to several analysts xx number of days or weeks to crunch through all of the relevant data to try to get an answer to that question? And still probably not have all of the most important internal indicators, or be including indicators that have no value or significance in driving an accurate churn outcome?
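To make that question concrete, here is a minimal sketch of how an analyst might answer it with free Python libraries rather than weeks of manual crunching. The dataset, column names, feature list, and the gradient-boosting model are all hypothetical choices for illustration, not a prescription.

```python
# A minimal sketch: rank the most profitable customers by their likelihood of churn.
# Dataset, columns, features, and model choice are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

cust = pd.read_csv("customers.csv")            # includes a historical "churned" flag and "annual_profit"
features = ["tenure_months", "monthly_spend", "support_tickets", "dropped_calls"]

# In practice you would train on historical churners and score current customers;
# fitting and scoring the same frame here keeps the sketch short.
model = GradientBoostingClassifier().fit(cust[features], cust["churned"])
cust["churn_probability"] = model.predict_proba(cust[features])[:, 1]

at_risk = (cust[cust["churn_probability"] > 0.5]   # likely to leave
           .sort_values("annual_profit", ascending=False)
           .head(20))                              # most profitable first
print(at_risk[["customer_id", "annual_profit", "churn_probability"]])
```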

What about adding in 3rd party external benchmarking data to further classify and correlate these customer indicators before you run your churn prediction model? To manually do this is daunting and so many companies, I now hypothesize, revert to the old ways of doing the forecast. Plus, I bet they have a daunting impression of the cost of big data and the time to implement because of past experiences with things like building the uber “data warehouse” to get to that panacea of the “1 single source of truth”…On the island of Dr. Disparate Data that we all dreamt of in our past lives, right?

I mean we have all heard that before and yet, how many times was it actually done successfully, within budget or in the allocated time frame? And if it was, what kind of quantifiable return on investment did you really get before annual maintenance bills flowed in? Be honest…No one is judging you; well, that is, if you learned from your mistakes or can admit that your pet project perhaps bit off too much and failed.

And what about training your people or the company to utilize said investment as part of your implementation plan? What was your budget for this training, and was it successful, or did you have to hire outside folks like consultants to do the work for you? And by doing so, how long did it actually take to break the dependency on those external resources and still be successful?

Before the days of Apache Spark and other open-source in-memory or streaming technologies, the world of Big Data was just blossoming into what it would grow into as a more mature flower. On top of which, it takes a while for someone to fully grok a new technology, even with the most specialized training, especially if they aren’t organically a programmer, like many Business Intelligence implementation specialists were/are. That is because those who have past experience with something like C++ can quickly apply the same techniques to newer technologies like Scala for Apache Spark, or Python, and be up and running much faster than someone with no background in programming trying to learn what a loop is or how to call an API to get 3rd-party benchmarking data. We programmers take that for granted when applying ourselves to learning something new.

And now that these tools are more enterprise-ready and friendly, with new integration modules for tools like R or MLlib for the statistical analysis, coupled with all of the free training offered by places like UC Berkeley (via edX online), now is the time to adopt Big Data Finance more than ever.

In a world where machine learning algorithms can be paired with traditional classification modeling techniques automatically, and where said algorithms have been made publicly available for your analysts to use as a starting point or in their entirety, one no longer needs to be daunted by the thought of implementing Big Data Finance, or by testing the waters of accuracy to see whether you are comfortable with the margin of error between your former forecasting methodology and this new world order.

2015 Gartner Magic Quadrant Business Intelligence – Mind Melding BI & Data Science, a Continuing Trend…

2015 Magic Quadrant Business Intelligence

IT WAS the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us…

–Charles Dickens

Truer words were never spoken, whether about the current technological times or deep in our past (remember the good ole enterprise report books, aka the 120-page paperweight?).

And, this data gal couldn’t be happier with the final predictions made by Gartner in their 2015 Magic Quadrant Report for Business Intelligence. Two major trends / differentiators fall right into the sweet spot I adore:

New demands for advanced analytics 

Focus on predictive/prescriptive capabilities 

Whether you think this spells out doom for business intelligence as it exists today or not, you cannot deny that these trends in data science and big data can only force us to finally work smarter, and not harder (is that even possible??)

What are your thoughts…?

Awesome Article “Views from the C-Suite: Who’s Big on Big Data” from The Economist

This is an awesome article discussing the whole “big data” thing from the C-level point of view. It is easy to get mired down in the technical weeds of big data, especially since it generates a ton of different definitions depending on who you ask and, usually, where they work, department-wise.

http://pages.platfora.com/rs/platfora/images/Economist-Intelligence-Unit-Big-Data-Exec-Summary.pdf

Let me know what you think.

Big shout out to @platfora for sharing this!

Finance is the Participation Sport of the BI Olympics

IT is no longer the powerhouse that it once was, and unfortunately for CIOs who haven’t embraced change, much of their realm has been commoditized by cloud computing, powered by the core principles of grid computational engines and schema-less database designs. The whole concept of spending millions of dollars to bring all disparate systems together into one data warehouse has proven modestly beneficial, but if we are being truly honest, what has all that money and time actually yielded, especially toward the bottom line?
And by the time you finished with the EDW, I guarantee it was missing core operational data streams that were then designed into their own sea of data marts. Fast forward a few years, and you probably have some level of EDW, many more data marts, probably one or more cube (ROLAP/MOLAP) applications and n-number of cubes or a massive hyper-cube (or several), and still the business depends on spreadsheets sitting on top of these systems, creating individual silos of information under the desk or in the mind of one individual.

Wait<<<rewind<<< Isn’t that where we started?

Having disparate, ungoverned and untrusted data sources being managed by individuals instead of by enterprise systems of record?

And now we’re back>>>press play to continue>>>

When you stop to think about the last ten years, fellow BI practitioners, you might be scared of your ever-changing role. From a grass-roots effort to a formalized department team, Business Intelligence went from the shadows to the mainstream, and brought with it reports, then dashboards, then KPIs and scorecards, managing by exception, proactive notifications and so on. And bam! We were hit by the first smattering of changes to come when Hadoop and others hit the presses. But we really didn’t grok the true potential and actual meaning of said systems unless we came from a background like mine: a competitive one, a big data friendly industry group like telecommunications, or a consultant/implementation point of view.
And then social networking took off like gangbusters, and mobile became a reality with the introduction of the tablet device (though I hate to float my own boat, as always, by mentioning the soap-box dream I spewed at a TDWI conference about the future of mobile BI when the 1st-generation iPhone was released).

But that is neither here nor there. And, as always, I digress and am back…

At the same time as we myopically focused on the changing technological landscape around us, a shifting power paradigm was building. The Finance organization, once relegated to the back partition of cubicles where a pin drop was heard ’round the world (or at least, the floor), was growing more and more despondent about not being able to access the data it needed, without IT intervention, to update its monthly forecasts and produce the subsequent P&L, Balance Sheet and Cash Flow Planning statements. IT’s response was to acquire (for additional millions of dollars) a “BI tool,” aka an ad-hoc reporting application, that would allow Finance to pull its own data. But by the time it had been installed and the data had been pulled and validated, the Finance team had either found an alternate solution, or found the system useful for only a very small sliver of analysis and gone outside of IT to get the additional sources of information it wanted and needed to adapt to the changing business pressures from the convergence of social, mobile and unstructured datasets. And suddenly those once-shiny BI tools seemed like antiquated relics, and simply could not handle the sheer data volumes now expected of them, or would crash (unless filtered beyond the point of value). Businesses should not need to adapt their queries to the tool; they need a tool that can adapt to their ever-changing processes and needs.

Drowning in data but starving for information…

So if necessity is the mother of invention, Finance was its well-deserving child. And why? The business across the board is starving for information but drowning in data. And Finance is no longer a game of solitaire, understood by few and ignored by many. In fact, Finance has become the participation sport of the BI Olympics, and rightfully so, where departmental collaboration at the fringe of the organization has proven to be the missing link that previously prevented successful top-down planning efforts. Where visualization demands made dashboards a thing of the past, and demanded a better story, vis-a-vis storylines / infographics, to help disseminate more than just the numbers, but the story behind the numbers, to the rest of the organization, or what I like to call the “fringe.”

I remember a few years ago when the biggest challenge was getting data, and often we joked about how nice it would be to have a sea of data to drown in; an analysts’ buffet-du-jour; a happy paralysis-induced-by-said-analysis plate was the special of the day, yet only for a few, while the rest was but a gleam in our data-starved eyes.

Looking forward from there, I ask, dear reader, where do we go from here? If it’s a Finance party and we are all invited, what do we, as BI practitioners, bring of value to the party table? Can we provide the next critical differentiator?

Well, I believe that we can, and that critical differentiator is forward-looking data. Why?

Gartner Group stated that “Predictive data will increase profitability by 20% and that historical data will become a thing of the past” (for a BI practitioner, the last part of that statement should worry you, if you are still resisting the plunge into the predictive analytics pool).

Remember, predictive analytics is a process that allows an organization to gain true insight, and it delivers the most value when executed across a larger group of people, driving faster, smarter business decisions. This is perfect for enterprise needs because, by definition, an enterprise offers a larger group of people to work with.

In fact, it was Jack Welch who said, “An organization’s ability to learn, and translate that learning into action rapidly, is the ultimate competitive advantage.”

If you haven’t already, go out and start learning one of the statistical application packages. I suggest “R,” and in the coming weeks I will provide R and SAS scripts (I have experience with both) for those interested in growing their chosen profession and remaining relevant as we weather the sea of business changes.

Futures According to Laura… Convergence of Cloud and Neural Networking with Mobility and Big Data

It’s been longer and longer between my posts and as always, life can be inferred as the reason for my delay.

But I was also struggling with feeling a sense of “what now” as it relates to Business Intelligence.

Many years ago, when I first started blogging, I would write about where I thought BI needed to move in order to remain relevant in the future. And those futures have come to fruition lately, ranging from merging social networking datasets into traditional BI frameworks to the now-common use case of applying composite visualizations to data (microcharts, as an example). Perhaps more esoteric was my staunch stance on the Mobile BI marriage, which, when iPhone 1 was released, was a future many disputed with me. In fact, most did not own the first release of the iPhone, and many were still RIM subscribers. And it was hard for the Blackberry crowd to fathom a world unbounded by keyboards and scroll wheels, and how that would be a game changer for mobile BI. And of course, once the iPad was introduced, it was a game-over moment. Execs everywhere wanted their iPads to have the latest and greatest dashboards/KPIs/apps. From Angry Birds to their Daily Sales trend, CEOs and the like had new brain candy to distract them during those drawn-out meetings. And instead of wanting that PDF or PowerPoint update, they wanted to receive the same data on their iPad. Once they did, they realized that understanding “WHAT” is happening was only the crack that got them hooked for a while. Unfortunately, the efficacy of KPI colors and related numbers only satisfies a one-person show, and as we know, it isn’t the CEO who analyzes why a RED KPI indicator shows up. Thus, more levels of information (beyond the “WHAT” and “HOW OFTEN”) were needed to answer the “WHY” and “HOW TO FIX” the underlying / root-cause issue.

The mobile app was born.

It is the reborn mobile dashboard that has been transformed into a new mobile workflow, more akin to the mobile app. 

But it took time for people to understand the marriage between BI dashboards and the mobile wave: the game change that Apple introduced with its swipe and pinch-to-zoom gestures, the revolution of the App Stores for the “need to have access to it now” generation of Execs, the capability to write back from mobile devices to any number of source systems, and how, functionally, each of these seemingly unrelated capabilities would and could be woven together to create the next generation of Mobile Apps for Business Intelligence.

But that’s not what I wanted to write about today. It was a dream of the past that has come to fruition. 

Coming into 2013, cloud went from being something that very few understood to another game changer in terms of how CIOs are thinking about application support of the future. And that future is now.

But there are still limitations that we are bound by. Either we have a mobile device or not; either it is on 3G or 4G or WiFi. Add to that our laptops (yes, something I believe will not dominate the business world someday). And compound that with other devices like smartphones, eReaders, desktop computers et al.

So, I started thinking about some of the latest research regarding Neural Networks (another set of posts I have made about the future of communication via Neural networks) published recently by Cornell University here (link points to http://arxiv.org/abs/1301.3605).

And my natural “Plinko” thought process (before you ask, search for the Price is Right game and you will understand “Plinko thoughts”) bounced from Neural Networks to Cloud Networks, and from Cloud Networks to the idea of a Personal Cloud.

A cloud of such a personal nature that all of our unique devices are forever connected in our own personal sphere, at all times, when on our person. We walk around and we each have our own personal clouds. Instead of a mass world wide web, we have our own personal wide area network and our own personal wide web.

When we interact with other people, those people can choose to share their personal networks with us via Neural Networking or some other sentient process; or, in the example where we bump into a friend and want to share details with them, all of our devices have the capability to interlink with each other via our Personal Clouds.

Devices are always connected to your Personal Cloud, which is authenticated to your person, so that passwords, which are already reaching their shelf life (see: article for more information on this point), are no longer the annoying constraint when we try to seamlessly use our mobile devices on the go. Instead, devices are authenticated to our Personal Cloud, following principles similar to where IAM (Identity and Access Management) is moving in the future. And changes in IAM are not only necessary for this idea to come to fruition but are on the horizon.

In fact, Gartner published an article in July 2012, called “Hype Cycle for Identity and Access Management Technologies, 2012” in which Gartner recognized that the growing adoption of mobile devices, cloud computing, social media and big data were converging to help drive significant changes in the identity and access management market.

For background purposes, IAM processes and technologies work across multiple systems to manage:

■ Multiple digital identities representing individual users, each comprising an identifier (name or key) and a set of data that represent attributes, preferences and traits

■ The relationship of those digital identities to each user’s civil identity

■ How digital user identities communicate or otherwise interact with those systems to handle information or gain knowledge about the information contained in the systems

If you extrapolate that 3rd bullet out, and weave in what you might or might not know/understand about Neural Networking or brain-to-brain communication (see recent Duke findings by Dr. Miguel Nicolelis here) (BTW – the link points to http://www.nicolelislab.net/), one can start to fathom the world of our future. Add in cloud networking, big data, social data and mobility, and perhaps, the Personal Cloud concept I extol is not as far fetched as you initially thought when you read this post. Think about it.

My dream, as with my other posts, is to be able to refer back to this entry years from now with a sense of pride and “I told you so.”

Come on – any blogger who makes predictions that come true years later deserves some bragging rights.

Or at least, I think so…

MicroStrategy Personal Cloud – a Great **FREE** Cloud-based, Mobile Visualization Tool

Have you ever needed to create a prototype of a larger Business Intelligence project focused on data visualizations? Chances are, you have, fellow BI practitioners. Here’s the scenario for you day-dreamers out there:

Think of the hours spent creating wireframes, no matter what tool you used, even if said tool was your hand and a napkin (ala ‘back of the napkin’ drawing) or the all-time favorite whiteboard, which later becomes a permanent drawing with huge bolded letters to the effect of ‘DO NOT ERASE OR IT’S OFF WITH YOUR HEAD’ annotations dancing merrily around your work. Even better: electronic whiteboards, which yield you hard copies of your hard work (so aptly named). At first, they seem like the panacea of all things cool (though they have been around for eons) and, upon use, are deemed the raddest piece of hardware your company has, until, of course, you look down at the thermal-paper printout, which has already faded in the millisecond since you tore it from machine to hand, leaving the printout useless to the naked eye unless you have super spidey-sense optic nerves. But now I digress even further, and in the time it took you to try to read the thermal printout, it has degraded further, because anything over 77 degrees is suboptimal (last I checked we come in at around 98.6, but who’s counting); thus my last stand on thermal-paper electronic whiteboards is that they are most awesome when NOT near anything that thermoregulates ;).

OK, and now we are back…rewind to sentence 1 –

Prototyping is to dashboard design or any data visualization design as pencils and grid paper are to me. Mano y mano – I mean, totally symbiotic, right?

But wireframing is torturous when you are in a consultative or pre-sales role, because you can’t present napkin designs to a client, or pictures of a whiteboard, unless you are showing them the process behind the design. (And by the way, this is an effective “presentation builder” when you are going for a dramatic effect –> ala “first there were cavemen, then the chisel and stone were all one had to create metrics –> then the whiteboard –> then the…wait!)

This is where said BI practitioner needs to have something MORE, for that dramatic pop, whiz-BAM, to give to their prospective clients/customers in their leave-behind presentation.

And finally, the girl gets to her point (you are always so patient, my loving blog readers)… While I am biased, if you forget whom I work for and just take into account the tool, you will see the awesomeness that the new MicroStrategy Personal Cloud brings for (drum roll please) PROTOTYPING a new dashboard – or just building, distributing, and mobilizing your spreadsheet of data in a highly stylized, graphical way that tells a story far better than a spreadsheet can in most situations. (Yes, naysayers, I know that for the 5% of circumstances which you can name, a spreadsheet is more apropos, but HA HA, I say: this personal cloud product has the ability to include the data table along with the data visualizations!)

Best of all, it is free.

I demoed this recently and was able to time how long it took to upload a spreadsheet, render 3 different data visualizations, generate the link to send to mobile devices (iPads and iPhones), wait out the network latency for said demo-ees to receive the email with the link, and have them launch the dashboard I created. Guess what the total time was?

Next best of all, it took only 23.7 minutes from concept to mobilization!

Mind you, I was also using data from the prospect that I had never seen or had any experience with.

OK, here is how it was done:

1) Create a FREE account or log in to your existing MicroStrategy account (by existing, I mean, if you have ever signed up for the MicroStrategy forums or discussion boards, or you are an employee, then use the same login) at https://www.microstrategy.com/cloud/personal

Cloud Home

Landing Page After Logging in to Personal Cloud

2) Click the button to Create New Dashboard:

Create Dashboard Icon

  • Now, you either need to have a spreadsheet of data OR you can choose one of the sample spreadsheets that MicroStrategy provides (which is helpful if you want to see how others set up their data in Excel, or how others have used Cloud Personal to create dashboards; even though it is sample data, it is actually REAL data that has been scrub-a-dub-dubbed for your pleasure!). If using a sample data set, I recommend the FAA data. It is real air traffic data, with carrier, airport code, days of the week, etc., which you can use to plan your travel by; I do…See screenshot below. There are some airports, and some carriers who fly into said airports, whom I WILL NOT fly on certain days of the week in which I must travel. If there is a choice, I will choose to fly alternate carriers/routes. This FAA data set will enable you to analyze this information to make the most informed decision (outside of price) when planning your travel. Trust me…VERY HELPFUL! Plus, you can look at all the poor slobs without names sitting at the Alaska Air gate who DIDN’T use this information to plan their travel, and as you casually saunter to your own gate on that Tuesday between 3 – 6 PM at SeaTac airport, you will remember that they look so sad because their Alaska Air flight has an 88% likelihood of being delayed or cancelled. (BTW, before you jump on me for my not-so-nice reference to said passengers, it is merely a quotation from my favorite movie ‘Breakfast at Tiffany’s’…says Holly Golightly: “Poor cat…poor old slob without a name.”)

On time Performance (Live FAA Data)
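For readers who want the same kind of answer outside the tool, here is a minimal sketch of that analysis in Python with pandas. The CSV, its column names, and the exact carrier/airport/day labels are hypothetical stand-ins for the FAA sample data described above.

```python
# A minimal sketch: disruption (delay or cancellation) rate by airport, carrier, and day of week.
# File, column names, and label values are hypothetical.
import pandas as pd

faa = pd.read_csv("faa_on_time.csv")   # columns: carrier, airport_code, day_of_week, status

faa["disrupted"] = faa["status"].isin(["Delayed", "Cancelled"])
disruption_rate = (faa.groupby(["airport_code", "carrier", "day_of_week"])["disrupted"]
                      .mean() * 100)   # percentage of flights delayed or cancelled

# e.g., check a specific airport/carrier/day combination before booking
print(disruption_rate.loc[("SEA", "Alaska Airlines", "Tuesday")])
```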

If using your own data, select the spreadsheet you want to upload

3) Preview your data. IMPORTANT STEP: make sure that you change any fields to their correct type (either Attribute or Metric or Do Not Import).

Cloud Import - Preview Data

Keep in mind the 80/20 rule: 80% of the time, MicroStrategy will designate your data as either an Attribute or a Metric correctly using a simple rule of thumb: text (or VarChar/NVarChar if using SQL Server) will always be designated as an Attribute (i.e. your descriptor/dimension), and your numerals will be designated as your Metrics. BUT, if your spreadsheet uses ID fields, like Store ID or Case ID, along with a descriptor like Store DESC or Case DESC, most likely MicroStrategy will assume the Store ID/Case ID are Metrics (since the fields are numeric in the source). This is an Easy Change! You just need to make sure ahead of time to make that change using the drop-down indicator arrows in the column headings – to find them, hover over the column names with your mouse until you see the drop-down indicator arrow. Click on the arrow to change an Attribute column to a Metric column and vice-versa (see screenshot):

Change Attribute to Metric
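As a purely illustrative aside (this is not MicroStrategy’s code), that rule of thumb boils down to a heuristic you could sketch in a few lines of Python: text columns become Attributes, numeric columns become Metrics, and numeric ID columns get flipped back to Attributes.

```python
# A minimal sketch of the 80/20 rule of thumb: text -> Attribute, numeric -> Metric,
# except numeric *ID columns, which should be treated as Attributes. Illustrative only.
import pandas as pd

def guess_column_roles(df: pd.DataFrame) -> dict:
    roles = {}
    for col in df.columns:
        if pd.api.types.is_numeric_dtype(df[col]) and not col.lower().rstrip().endswith("id"):
            roles[col] = "Metric"
        else:
            roles[col] = "Attribute"   # descriptors, plus numeric ID fields like "Store ID"
    return roles

sample = pd.DataFrame({"Store ID": [1, 2], "Store DESC": ["North", "South"], "Sales": [100.0, 250.0]})
print(guess_column_roles(sample))      # {'Store ID': 'Attribute', 'Store DESC': 'Attribute', 'Sales': 'Metric'}
```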

Once you finish previewing your data, and everything looks good, click OK at the bottom right of your screen.

In about 30-35 seconds, MicroStrategy will have imported your data into the Cloud for you to start building your awesome dashboards.

4) Choose a visualization from the menu that pops up on your screen upon successfully importing your spreadsheet:

Dashboard Visualization Selector
Change data visualization as little or as often as you choose

Here is the 2010 NFL data which I uploaded this morning. It is a heat map showing the home teams as well as any teams they played in the 2010 season. The size of the box indicates HOW big the win or loss was. The color indicates whether they won or lost (Green = Home team won // Red = Home team lost).

For all of you, dear readers, I bid you a Happy New Year. May your ideas flow aplenty, and your data match your dreams (of what it should be) :). Go fearlessly into the new world order of business intelligence, and know that I, Laura E., your Dashboard Design Diva, called Social Intelligence the New Order in 2005, and again in 2006 and 2007. 🙂 Cheers, y’all.

http://tinyurl.com/ckfmya8

https://my.microstrategy.com/MicroStrategy/servlet/mstrWeb?pg=shareAgent&RRUid=1173963&documentID=4A6BD4C611E1322B538D00802F57673E&starget=1
