Music has always been an integral part of human culture. From ancient times to the modern day, music has evolved and transformed in countless ways. With the advent of technology, music creation and discovery have taken a wild and increasingly influential new turn. Enter generative AI. For those living under a rock, Artificial Intelligence (AI) is a technology that has revolutionized the way we do almost everything, including how we create and discover music.

@OpenAI #Jukebox (https://www.openai.com/research/jukebox) is a prime example of how AI is bridging the gap between music and future technology. In OpenAI's words, Jukebox "produces a wide range of music and singing styles, and generalizes to lyrics not seen during training. All the lyrics below have been co-written by a language model and OpenAI researchers." In other words, it can generate original songs and tracks across genres and styles, much like its creativity-powerhouse sibling for image creation, known simply as DALL-E. It uses deep learning algorithms to analyze the patterns and structures of existing songs in order to create new ones.

So does the future of music creation and discovery lie in generative AI tools like OpenAI Jukebox, much as design and art creation now lean on tools like DALL-E? Either way, it's the season of all things #OpenAI. These days you can't escape a ChatGPT meme or an SNL skit powered by the little chatterbox, which provides endless possibilities and countless hours of entertainment for experimentation with different words, phrases, images, sounds, styles, and genres. And did I mention these tools can also help you work smarter, not harder, with email responses, documentation creation, or even building apps for you (i.e., code scripting)?

Generative AI tools like OpenAI Jukebox are not only limited to creating new songs but also have the ability to remix existing ones. This opens up a whole new world of possibilities for artists who want to experiment with their work or collaborate with other musicians.

The use of generative AI tools in music creation also raises questions about copyright laws and ownership rights. As these tools become more advanced, it will be interesting to see how they impact traditional copyright laws.
As a former EDM DJ, I'm excited to see where OpenAI takes its Jukebox research; it is just one example of how AI can revolutionize the world of music creation and discovery, IMHO. As technology continues to evolve at this rapid pace, it's exciting to think about what other possibilities lie ahead for the world of AI and ( fill in here ). The future looks bright for humans, musicians, artists, and fans alike as we march along this yellow brick journey toward... what? Emerald City, or a more innovative musical landscape powered by artificial intelligence? Who knows? Let's ask ChatGPT!

CICD (DevOps) for Data Science: Part Deux

It’s been some time since I presented Part 1 of this DevOps for Data Science short anthology. Since then I have been working on scripting out this solution for a series of presentations I am doing as part of a Discovery & Insights Roadshow.

The example draws inference (pun intended) from anomaly detection and predictive maintenance scenarios, since I seem to spend a chunk of time working in this space. Specifically, this is an implementation of a Continuous Integration (CI) / Continuous Delivery (CD) pipeline in support of the applications behind those scenarios. Typically, when one is developing an AI application, two parallel motions take place:

a) A data science team is working on building out the machine learning (ML) models; often the handoff occurs when they publish out an endpoint that gets integrated into the app.

b) AppDev or developers work on building the application itself; they are typically responsible for exposing it to end users via a web/mobile app, or for taking the pre-trained model endpoint and calling it from within an internal business process application.

The example use case for this CI/CD pipeline is a fairly standard anomaly detection and predictive maintenance machine learning scenario.

In short, the pipeline is designed to kick off for each new commit, run the test suite using the model parameters required as input (features, e.g.), and test the outputs generated (usually in the form of a score.py file). If the tests pass, the latest build branch is merged into master, which gets packaged into a Docker container and possibly deployed to Kubernetes if the operationalization requirements demand cluster-scale support.
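For context, here is a minimal sketch of the kind of score.py entry script that pipeline would exercise, following the common Azure ML init()/run() convention; the model name, feature shape, and error handling are hypothetical placeholders, not the exact file from my repo:

```python
import json

import joblib
import numpy as np
from azureml.core.model import Model

model = None


def init():
    """Called once when the scoring container starts; load the registered model."""
    global model
    # "anomaly-detector" is a placeholder model name, purely for illustration
    model_path = Model.get_model_path("anomaly-detector")
    model = joblib.load(model_path)


def run(raw_data):
    """Called per request; score a JSON payload of feature rows."""
    try:
        data = np.array(json.loads(raw_data)["data"])
        predictions = model.predict(data)
        return {"predictions": predictions.tolist()}
    except Exception as ex:
        return {"error": str(ex)}
```

The CI stage can then import this file, call run() against a handful of known feature vectors, and fail the build if the outputs drift outside expected bounds.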

DevOps for AI/ML Architecture



I have included a #Jupyter notebook for you to check out the process of deploying to @Docker and then productionalizing using @Kubernetes. Part of the latter productionalized code set includes the process for gathering data on how your users interact with your model. This is important from a retraining perspective, because you need a closed-loop architecture in order to enable that functionality.
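As a rough sketch of what that Kubernetes (AKS) deployment step looks like with the Azure ML SDK (v1): the workspace, environment, model, compute, and service names below are hypothetical, and turning on model data collection is my assumption about how you would wire up the closed loop described above.

```python
from azureml.core import Environment, Model, Workspace
from azureml.core.compute import AksCompute
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AksWebservice

ws = Workspace.from_config()  # assumes a config.json downloaded from the portal

# Placeholder model, environment, compute, and service names, purely for illustration
model = Model(ws, name="anomaly-detector")

env = Environment("scoring-env")
env.python.conda_dependencies = CondaDependencies.create(
    pip_packages=["scikit-learn", "joblib", "numpy"])

inference_config = InferenceConfig(entry_script="score.py", environment=env)

deploy_config = AksWebservice.deploy_configuration(
    cpu_cores=1,
    memory_gb=2,
    enable_app_insights=True,   # request / latency telemetry
    collect_model_data=True)    # capture inputs and predictions for retraining

aks_target = AksCompute(ws, "aks-prod")
service = Model.deploy(ws, "predictive-maintenance-svc", [model],
                       inference_config, deploy_config, aks_target)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```

The inputs and predictions captured that way land in workspace storage, which is what can feed the retraining loop discussed next.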

Using Azure to Retrain Production Models Deployed Using Kubernetes

As a side note, Azure Databricks is a managed Spark offering on Azure, and customers already use it for advanced analytics. It provides a collaborative notebook-based environment with CPU- or GPU-based compute clusters, much like (if not identical to) the Jupyter notebook icon above. Often, I get asked about DevOps specifically for Databricks in addition to Azure ML, so I wanted to explicitly reference the fact that you can use either or both.

In this section, you will find additional information on how to use the Azure Machine Learning SDK with Azure Databricks. You can train a model using Spark MLlib and then deploy the model to ACI/AKS from within Azure Databricks, just like the example above. In addition, you can use the Automated ML capability (public preview) of the Azure ML SDK, with or without Azure Databricks. A natural cloud convergence between ML/AI service offerings is currently underway across all vendors. This enables things like:

  • Customers who use Azure Databricks for advanced analytics can now use the same cluster to run experiments with or without automated machine learning.
  • You can keep the data within the same cluster.
  • You can leverage the local worker nodes with autoscale and auto termination capabilities.
  • You can use multiple cores of your Azure Databricks cluster to perform simultaneous training.
  • You can further tune the model generated by automated machine learning if you choose to.
  • Every run (including the best run) is available as a pipeline, which you can tune further if needed.
  • The model trained using Azure Databricks can be registered in the Azure ML SDK workspace and then deployed to Azure managed compute (ACI or AKS) using the Azure Machine Learning SDK (see the sketch below).
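
Here is a minimal sketch of that last bullet: registering a model trained in a Databricks notebook and deploying it to ACI with the Azure ML SDK (v1). The workspace coordinates, DBFS model path, and service name are placeholders, not values from a real workspace.

```python
from azureml.core import Environment, Model, Workspace
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice

# Attach to the Azure ML workspace from within the Databricks notebook
# (subscription, resource group, and workspace name are placeholders)
ws = Workspace.get(name="my-aml-workspace",
                   subscription_id="<subscription-id>",
                   resource_group="<resource-group>")

# Register the MLlib model that the training notebook saved to DBFS
model = Model.register(workspace=ws,
                       model_path="/dbfs/models/predictive-maintenance",
                       model_name="pdm-mllib-model")

env = Environment("scoring-env")  # must include whatever score.py imports
inference_config = InferenceConfig(entry_script="score.py", environment=env)
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)

service = Model.deploy(ws, "pdm-aci-svc", [model], inference_config, aci_config)
service.wait_for_deployment(show_output=True)
```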

Explaining #Containers / #Kubernetes to a Child : How To Become a Storytelling Steward Using Gamification & Graphic Novels (Comics)

If someone asked you to explain the benefits of #Containers / #Kubernetes as though you were speaking to a child, to do it in under 2 minutes, and to guarantee said kiddo's comprehension, could you do it?

I was tasked recently with doing the same thing by my COO, using Machine Learning as the main talking point. To elucidate, I decided to pose the same question to several peers, who graciously entertained this whim o' mine. While they provided technically correct explanations, they often parked their responses somewhere between boredom-block and theoretical thoroughfare. Yawn!

What those very intelligent practitioners failed to remember was that this was NOT the latest round of stump-the-chump; the goal was to explain Machine Learning in a way that a child could grok, without "adult-splaining," "grownup-eze," or other explanatory methods. Add to this the goal of keeping the child engaged for the full 2 minutes, and well, shoot, ++ to you, dear reader turned supreme storytelling savant, if you made that happen. While we are at it, why not add the ability to explain ML while avoiding the dreaded "eye gloss-over" effect most listeners don when tuning out their brain. That 'Charlie Brown' <whoah whoah whoah> adult vernacular riposte is nearly always reflected back to the speaker via those truth-telling eyes of yours, enabling the Edgar Allan Poe in us all. Huh? What I mean is the tell-tale Pavlovian heart response to any "Data Science-based" summarization, in my experience.

Instead, I described two scenarios involving exotic fruit. In the first, she was given the name of each curious fruit (aka labels), which was the basis for her being able to label them on demand accordingly. The second scenario also involved exotic fruits, BUT this time she was NOT provided any names ahead of time, yet was still tasked with naming said items. For the data scientists reading this: these were, naturally, metaphors for supervised and unsupervised learning.
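
If you prefer to see the metaphor in code, here is a tiny sketch using made-up fruit features and names (none of this comes from the actual talk): a supervised classifier learns from the provided labels, while an unsupervised clusterer has to group the fruit without them.

```python
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

# Made-up features: [weight in grams, skin roughness on a 0-10 scale]
fruit_features = [[150, 2], [160, 3], [900, 8], [950, 9], [40, 1], [45, 1]]
fruit_labels = ["dragonfruit", "dragonfruit", "jackfruit",
                "jackfruit", "rambutan", "rambutan"]

# Scenario 1 (supervised): the names are provided up front, so the model
# learns to assign them to new fruit on demand.
classifier = DecisionTreeClassifier().fit(fruit_features, fruit_labels)
print(classifier.predict([[155, 2]]))   # -> ['dragonfruit']

# Scenario 2 (unsupervised): no names are given; the model can only group
# similar fruit together and leave the naming to the child.
clusterer = KMeans(n_clusters=3, n_init=10, random_state=0).fit(fruit_features)
print(clusterer.labels_)                # e.g. [0 0 1 1 2 2]
```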

Originally, I had prepared a similar talk for an ML-centric presentation I was set to give at a community-based data science event (later shared on this blog). It contains about 90% comics / image iconography instead of laborious text per slide, and it was received incredibly well. In fact, when delivered to a 200+ audience, it was met with much applause and higher-than-normal attendee survey satisfaction scores. Simplicity & pacing; remember, an image speaks 1,000 words, and to a child who often learns experientially / visually, it becomes the storyteller's handbook to the hive mind of children everywhere :).

By the way, you can stop me next time when I diverge so sharply from the path.

But now we are back –> putting to bed my thorough digression & pulling you back to my 1st sentence above:

If someone asked you to explain the benefits of #Containers / #Kubernetes as though you were speaking to a child, to do it in under 2 minutes, and to guarantee said kiddo's comprehension, could you do it?

This awesome comic/bedtime story is one way to answer with a resounding YES! Meet Phippy & Zee and follow their adventures as they head off to the zoo: Phippy Goes To The Zoo_A_Kubernetes_Story: https://azure.microsoft.com/en-us/resources/phippy-goes-to-the-zoo/en-us/

Integrating Databricks with Azure DW, Cosmos DB & Azure SQL (part 1 of 2)

I tweeted a data flow earlier today that walks through an end-to-end ML scenario using the new Databricks on Azure service (currently in preview). It also includes the orchestration pattern for ETL (populating tables, transforming data, loading into Azure DW, etc.), as well as the SparkML model creation, with the model and the recommendations output stored in Cosmos DB. Here is a refresher:

Some nuances that are really helpful to understand: reading data in as CSV but writing results as Parquet. This Parquet file is then the input for populating a SQL DB table as well as the normalized DIM table in SQL DW, both by the same name.
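
A rough sketch of that read-CSV / write-Parquet / load-to-DW flow from a Databricks notebook follows. The storage paths, table name, and connection settings are placeholders, and the options shown reflect typical usage of the com.databricks.spark.sqldw connector rather than the exact cells of my notebook.

```python
# Runs inside a Databricks notebook, where `spark` is already provided.
raw_df = (spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("adl://mydatalake.azuredatalakestore.net/raw/telemetry/*.csv"))

# Persist the results as Parquet; this file feeds both the SQL DB table and
# the normalized DIM table in SQL DW (same name in each store).
parquet_path = "adl://mydatalake.azuredatalakestore.net/curated/telemetry_parquet"
raw_df.write.mode("overwrite").parquet(parquet_path)

# Load the Parquet output into the SQL DW DIM table via the Databricks
# SQL DW connector, staging through blob storage.
(spark.read.parquet(parquet_path)
 .write
 .format("com.databricks.spark.sqldw")
 .option("url", "jdbc:sqlserver://myserver.database.windows.net;database=mydw")
 .option("forwardSparkAzureStorageCredentials", "true")
 .option("dbTable", "dbo.DimTelemetry")
 .option("tempDir", "wasbs://staging@mystorage.blob.core.windows.net/tmp")
 .mode("overwrite")
 .save())
```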

Selecting the latest Databricks on Azure version (4.0 version as of 2/10/18).

Using #ADLS (Data Lake Storage, my pref) &/or blob.

Azure #ADFv2 (Data Factory v2) makes it incredibly easy to orchestrate data movement from third-party clouds like S3, or from on-premises data sources in a hybrid scenario, to Azure, with the scheduling / tumbling windows one needs for effective data pipelines in the cloud.

I love how easy it is to connect BI tools as well. Power BI Desktop can connect to any ODBC data source, and specifically to your Databricks clusters, by using the Databricks ODBC driver. Power BI Service is a fully managed web application running in Azure. As of November 2017, it only supports Spark running on HDInsight; however, you can create a report using Power BI Desktop and upload it to the Azure service.

The next post will cover using @databricks on @Azure with #EventHubs!

How Do You Use LinkedIn? (Social Media Infographics)

How often do you refresh your LinkedIn profile pic? Or worse, the content within your profile? Unless you are a sales exec trolling the social networking site, or a job seeker, I would surmise not that often; in fact, rarely is the most apropos description. Thoughts? (Yes, she's back (again), but this time for good, dear readers. @Laura_E_Edell (#infographics) says thanks to designinfographics.com for her latest content postings!)

And just because I call it out doesn't mean you will know the best approach to updating your LinkedIn profile. And guess what... there's an infographic for that! (http://www.linkedin.com/in/lauraerinedell)

Check out my profile on LinkedIn by clicking infographic

BIPlayBook.Com is Now Available!

As an aside, I'm excited to announce that my latest website, http://www.biplaybook.com, is finally published. Essentially, I decided that you, dear readers, were ready for the next step. What comes next, you ask?

After measuring BI data –> making measurements meaningful –> and massaging meaningful data into metrics, what comes next is the age-old pair of questions: 'So what?' and 'What do I do about it?'

BI PlayBook offers readers the next level of real-world scenarios, now that BI has become the nomenclature of yesteryear and is used by most to inform decisions. Basically, it is the same, with the added bonus of showing how to tie BI back into the original business process, the customer service/satisfaction process, or really any process of substance within a company.

This is quite meaningful to me because so often, as consumers of goods and services, we find our voices go unheard, especially when we are left dissatisfied. Can you muster the courage to voice your issue (dare I say, 'complain'?) using the only tools provided: poor website feedback forms, surveys, or (gasp) relaying your issue by calling into a call center or IVR system (double gasp)? I don't know if I can…

How many times do we get caught in the endless loop of an IVR, only to be 'opted out' (aka hung up on) when we do not press the magical combination of numbers on our keypads to reach a live human being? Or, when we are sneaky and press '0', we find out the company is one step ahead of us, having programmed '0' to automatically transfer the call to our friend 'ReLisa Boutton', aka the Release Button().

Feedback is critical, especially as our world has become consumed by social networks. The 'chatter' of customers that ensues, whether they choose to 'Like' or join a company page or product, or tweet about the merits or demerits of one's value proposition, is not only rich material if one cares about understanding their customer; it is also a key indicator of how well you are doing in the eyes of your customer. Think about how many customer satisfaction surveys you have taken that ask whether you would recommend a company to a friend or family member.

This measure defines one's NPS, or Net Promoter Score, and is a commonly shared KPI (key performance indicator) for a company.
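
For reference, the arithmetic behind NPS is simple; here is a quick sketch with made-up survey responses:

```python
def net_promoter_score(scores):
    """Compute NPS from 0-10 'would you recommend us?' survey responses.

    Promoters score 9-10, detractors 0-6, and passives (7-8) are ignored.
    NPS = % promoters - % detractors, so it ranges from -100 to +100.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)


# Made-up survey responses, purely for illustration
responses = [10, 9, 8, 7, 6, 10, 3, 9, 5, 10]
print(net_promoter_score(responses))   # -> 20.0
```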

Yet, market researchers like myself know that what a customer says on a survey isn't always how they will behave. This discrepancy between what someone says and what someone does is as age-old as our parents telling us as children, 'Do as I say, not as I do.' That paradigm no longer holds up, however. Limiting oneself to the NPS will restrict the ability to truly understand one's Voice of the Customer. And further, if you do not understand your customers' actual likelihood to recommend you to others or to purchase from you again, how can you predict their lifetime value or propensity for future revenue earnings? You can't.

Now, I am ranting. I get it.

But I want you to understand that the social media content available from the social network spheres can fill that gap. It can help you understand how your customers truly perceive your goods or services. Trust me, customers are more likely to tweet (in 140 characters or less) to vent about a negative experience than they are to take the time to fill out a survey. Likewise, they are more likely to rave on social media about a great experience with your company.

So why shouldn't this social 'chatter' be tied back into the business intelligence platforms and, further, mined specifically to inform customer feedback loops, voice-of-the-customer programs, and value stream maps, for example?

Going one step further, having a BI PlayBook focuses the attention of the metric owners on the areas that need to be addressed, while filtering out the noise that can detract from the intended purpose.

If we are going to make folks responsible for the performance of a given metric, shouldn’t we also help them understand what is expected of them up front, as opposed to when something goes terribly wrong, signified by the “text message” tirade of an overworked CEO waking you out of your slumber at 3 AM?

Further, understanding how to address an issue, whom to communicate with, and, most importantly, how to resolve it and respond to affected parties are all part of a well-conceived BI playbook.

It truly takes BI to that next level. In fact, two years ago, I presented this very topic at the TDWI Executive Summit in San Diego (Tying Business Processes into Your Business Intelligence). While I got a lot of stares, à la 'dog tilting its head to the side in that confused glare at its owner', I hope people can now draw back on that experience with moments of 'ah ha, that is what she meant', having evolved (a little) in their BI maturation.