Sideline Comparative Predictions: Gartner’s 2010 Technology Trends

As promised in my last blog post, here is the comparison list of predictions for the Top 10 Strategic Technologies for 2010. I've highlighted the 2nd trend on Gartner's list because this drum has been beaten by yours truly for years now, only to be shot down (and sometimes embraced) for the belief that advanced analytics are where the value within BI truly lies. Those who adopt now will beat the curve of the trend and reap the rewards long overdue to companies that have invested millions into BI programs without realizing much gain, no matter which service implementer was used. ({Shameless plug} if the reader had used Mantis Technology Group, my company, this would be moot, as you would be reveling in the realized value we bring, since yours truly is an employee and implementer of these very BI systems.) Whether it is the broad realm of BI or the facets within it, like social intelligence *(another prediction)*, advanced analytics or cloud computing *(yet another prediction)*, Mantis excels at infusing value into even the smallest-scale implementation.

Having gone from being the client to now being the service provider, I have worked with the very largest firms and those that claim to be the best, down to niche providers like ourselves on a slightly bigger scale. I say with all earnestness that Mantis' offering truly stands above those in both spaces that I previously hired. Too often I was left with that disappointing feeling when one realizes they did not get what they expected, and when confronting those who provided the end result, I was led down the "let's get out the SOW and look at what you asked for" route, which never ends well. Clients, such as myself in my former life, often don't know what they don't know, especially when implementing technologies they are not well versed in. As I have belabored before and will do again quickly now: it is up to the service provider to hang up their $$ hat and help the client understand enough to be dangerous and make educated choices – not just the choices that return the greatest financial gains, but those that truly help deliver on the value proposition that IS POSSIBLE from well-implemented BI programs.

As said before, please share your predictions, comments or anecdotes with our readership. I (we) would love to hear your opinion too!

The top 10 strategic technologies as predicted by Gartner for 2010 include:

Cloud Computing. Cloud computing is a style of computing that characterizes a model in which providers deliver a variety of IT-enabled capabilities to consumers. Cloud-based services can be exploited in a variety of ways to develop an application or a solution. Using cloud resources does not eliminate the costs of IT solutions, but does re-arrange some and reduce others. In addition, enterprises that consume cloud services will increasingly act as cloud providers themselves, delivering application, information or business process services to customers and business partners.

Advanced Analytics. Optimization and simulation use analytical tools and models to maximize business process and decision effectiveness by examining alternative outcomes and scenarios, before, during and after process implementation and execution. This can be viewed as a third step in supporting operational business decisions. Fixed rules and prepared policies gave way to more informed decisions powered by the right information delivered at the right time, whether through customer relationship management (CRM), enterprise resource planning (ERP) or other applications. The new step is to provide simulation, prediction, optimization and other analytics, not simply information, to empower even more decision flexibility at the time and place of every business process action. The new step looks into the future, predicting what can or will happen.
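To make that "new step" concrete (my own toy illustration, not part of Gartner's text): a fixed business rule simply checks a threshold at the point of action, while injecting a model's prediction into the same decision point gives the extra flexibility described above.

    using System;

    // Toy illustration: moving a decision point from a fixed rule to a predictive one.
    static class DecisionStep
    {
        // Old step: a fixed rule applied at the point of action.
        static bool ApproveOrderFixedRule(decimal orderTotal, decimal creditLimit)
        {
            return orderTotal <= creditLimit;
        }

        // New step: the same decision, now informed by a model's predicted risk of
        // non-payment (a placeholder score between 0 and 1 here).
        static bool ApproveOrderPredictive(decimal orderTotal, decimal creditLimit,
                                           double predictedDefaultRisk)
        {
            if (predictedDefaultRisk > 0.30) return false;  // high risk: decline outright
            if (predictedDefaultRisk < 0.05) return true;   // low risk: approve even near the limit
            return orderTotal <= creditLimit;               // otherwise fall back to the fixed rule
        }

        static void Main()
        {
            Console.WriteLine(ApproveOrderFixedRule(900m, 1000m));        // True
            Console.WriteLine(ApproveOrderPredictive(900m, 1000m, 0.42)); // False - risk overrides the rule
        }
    }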

Client Computing. Virtualization is bringing new ways of packaging client computing applications and capabilities. As a result, the choice of a particular PC hardware platform, and eventually the OS platform, becomes less critical. Enterprises should proactively build a five to eight year strategic client computing roadmap outlining an approach to device standards, ownership and support; operating system and application selection, deployment and update; and management and security plans to manage diversity.

IT for Green. IT can enable many green initiatives. The use of IT, particularly among the white collar staff, can greatly enhance an enterprise’s green credentials. Common green initiatives include the use of e-documents, reducing travel and teleworking. IT can also provide the analytic tools that others in the enterprise may use to reduce energy consumption in the transportation of goods or other carbon management activities.

Reshaping the Data Center. In the past, design principles for data centers were simple: Figure out what you have, estimate growth for 15 to 20 years, then build to suit. Newly-built data centers often opened with huge areas of white floor space, fully powered and backed by an uninterruptible power supply (UPS), water- and air-cooled and mostly empty. However, costs are actually lower if enterprises adopt a pod-based approach to data center construction and expansion. If 9,000 square feet is expected to be needed during the life of a data center, then design the site to support it, but only build what's needed for five to seven years. Cutting operating expenses, which are a nontrivial part of the overall IT spend for most clients, frees up money to apply to other projects or investments either in IT or in the business itself.

Social Computing. Workers do not want two distinct environments to support their work – one for their own work products (whether personal or group) and another for accessing “external” information. Enterprises must focus both on use of social software and social media in the enterprise and participation and integration with externally facing enterprise-sponsored and public communities. Do not ignore the role of the social profile to bring communities together.

Security – Activity Monitoring. Traditionally, security has focused on putting up a perimeter fence to keep others out, but it has evolved to monitoring activities and identifying patterns that would have been missed before. Information security professionals face the challenge of detecting malicious activity in a constant stream of discrete events that are usually associated with an authorized user and are generated from multiple network, system and application sources. At the same time, security departments are facing increasing demands for ever-greater log analysis and reporting to support audit requirements. A variety of complementary (and sometimes overlapping) monitoring and analysis tools help enterprises better detect and investigate suspicious activity – often with real-time alerting or transaction intervention. By understanding the strengths and weaknesses of these tools, enterprises can better understand how to use them to defend the enterprise and meet audit requirements.

Flash Memory. Flash memory is not new, but it is moving up to a new tier in the storage echelon. Flash memory is a semiconductor memory device, familiar from its use in USB memory sticks and digital camera cards. It is much faster than rotating disk, but considerably more expensive; however, this differential is shrinking. At the current rate of price declines, the technology will enjoy more than a 100 percent compound annual growth rate over the next few years and become strategic in many IT areas including consumer devices, entertainment equipment and other embedded IT systems. In addition, it offers a new layer of the storage hierarchy in servers and client computers that has key advantages including space, heat, performance and ruggedness.

Virtualization for Availability. Virtualization has been on the list of top strategic technologies in previous years. It is on the list this year because Gartner emphasizes new elements such as live migration for availability that have longer-term implications. Live migration is the movement of a running virtual machine (VM) while its operating system and other software continue to execute as if they remained on the original physical server. This takes place by replicating the state of physical memory between the source and destination VMs, then, at some instant in time, one instruction finishes execution on the source machine and the next instruction begins on the destination machine.

However, if replication of memory continues indefinitely while execution of instructions remains on the source VM, and the source VM then fails, the next instruction simply takes place on the destination machine. If the destination VM were to fail, just pick a new destination and restart the indefinite migration, thus making very high availability possible.

The key value proposition is to displace a variety of separate mechanisms with a single “dial” that can be set to any level of availability from baseline to fault tolerance, all using a common mechanism and permitting the settings to be changed rapidly as needed. Expensive high-reliability hardware, with fail-over cluster software and perhaps even fault-tolerant hardware could be dispensed with, but still meet availability needs. This is key to cutting costs, lowering complexity, as well as increasing agility as needs shift.

Mobile Applications. By year-end 2010, 1.2 billion people will carry handsets capable of rich, mobile commerce providing a rich environment for the convergence of mobility and the Web. There are already many thousands of applications for platforms such as the Apple iPhone, in spite of the limited market and need for unique coding. It may take a newer version that is designed to flexibly operate on both full PC and miniature systems, but if the operating system interface and processor architecture were identical, that enabling factor would create a huge turn upwards in mobile application availability.

“This list should be used as a starting point and companies should adjust their list based on their industry, unique business needs and technology adoption mode,” said Carl Claunch, vice president and distinguished analyst at Gartner. “When determining what may be right for each company, the decision may not have anything to do with a particular technology. In other cases, it will be to continue investing in the technology at the current rate. In still other cases, the decision may be to test/pilot or more aggressively adopt/deploy the technology.”

Article copied from: http://www.gartner.com/it/page.jsp?id=1210613

Sideline Topic: Looking for Feedback on What YOU THINK about CMS Watch's 2010 Technology Predictions

Can it be true that in 2010 the market focus within the technology sector will finally shift to customer-facing systems and internal applications that deliver more meaningful content applicability?

Looking back at the content of my own blog, my readers and I (thank you, lovely readers!) have been feeling the need for business intelligence to step back into customer intelligence once again, a place we BI practitioners have been before. And we, the go-forward-and-capture-the-world Gen X, Y and Z'ers, have shifted the realm of what we need in terms of content delivery: these are the generations of the "serve it up TO US in Google-style fashion, otherwise I am too busy to look for the information on your website" crowd, where texting is the preferred vehicle for communication and anything that requires more than two hops to reach the information we need is one step too many. Sad, but true; and those who realize this fact of life now will adjust and survive when this generation, currently in college, graduates and enters our realm of the workplace. And so I bring you CMS Watch's predictions, followed by the tried and true Gartner predictions for comparison's sake. Please let me know what you think, what your own predictions are, or any other comments you want to share! I welcome a new year (j'accueille un nouvel an) – how about you? 🙂

Article copied from : http://www.information-management.com/news/ecm_serach_cloud_sharpoint_mobile_document_management-10016801-1.html?msite=cloudcomputing

The current recessionary period in particular has yielded many content technology investments focused on customer-facing systems, CMS Watch founder Tony Byrne was quoted as saying. "In 2010 we will see a renewed focus on internal applications."

  1. Enterprise content management and document management will go their separate ways.
  2. Faceted search will pervade enterprise applications.
  3. Digital asset management vendors will focus on SharePoint integration over geographic expansion.
  4. Mobile will come of age for document management and enterprise search.
  5. Web content management vendors will give more love to intranets.
  6. Enterprises will lead thick client backlash.
  7. Cloud alternatives will become pervasive.
  8. Document services will become an integrated part of enterprise content management.
  9. Gadgets and Widgets will sweep the portal world.
  10. Records managers face renewed resistance.
  11. Internal and external social and collaboration technologies will diverge.
  12. Multilingual requirements will rise to the fore.

BOTHER – Adding Value through Effective Relativization and Rationalization of KPIs

I posted a comment to a blog about KPIs (http://mybibeat.wordpress.com/2009/11/23/how-to-define-and-select-good-kpis-for-your-business/#comment-10), and I thought it made a perfect next step in my series on how to revamp your investment, or the BOTHER program:

    I am typically the one consulting on KPIs, so I found it humbling to need some relevant information on KPIs myself. I say this with the greatest of respect for your posting: while I agree on most points, I wanted to challenge one area, and I hope you will take this with a pinch of salt…
    You cite an example for a professional services company – "The survival of a professional service provider depends on the number of ongoing and new projects the company handles" – and go on to mention revenue. The follow-up you provide is great for measuring the $$ impact, yes, but you do not mention the satisfaction of the ongoing client. While measuring revenue from the ongoing client is meaningful, it isn't wholly satisfying – here's why:

    In order to grow a professional services company, one must be able to project the likelihood of growing within an existing account where ongoing revenue is currently being realized. And while a company may continue to use a services firm because they haven't the time to find a new one, or the energy to fire them, it doesn't necessarily mean they will give the next big project to said consultancy – they may actually be dissatisfied, even though their continued use of the firm for ongoing support can give the feeling that "we are OK; they haven't fired us after all."
    In reality, they may be unhappy and, for the reasons above, simply haven't moved the ongoing work to someone else – but they are certainly not planning on signing over any new work.

    Instead of just measuring revenue, I would counter that for KPIs to be effective, one must rationalize and relativize by looking at qualitative and quantitative measures together – satisfaction points lost or gained by project alongside dollars (revenue) – thus quantifying the average revenue per satisfaction point gained or lost. One must use statistical analysis to get this metric accurately (Partial Least Squares modeling is a wonderful tool for this), but boy is it powerful. A) You know how satisfied a given client actually is, and B) you know how much it means to your bottom line when you gain satisfaction points through retention programs or client appreciation, or lose them through lack of attention or project dissatisfaction – and therefore where to direct some of your business development resources.
    It takes what you stated very well to the next level of efficacy – analytics – which is where I believe the true value of BI starts to be realized. Thank you for your post; I will certainly link my blog back to yours!
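    To make the "revenue per satisfaction point" idea concrete, here is a minimal sketch – the class and the numbers are made up, and a real analysis would fit a statistical model such as Partial Least Squares rather than take a simple ratio of totals:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical per-project observation: revenue earned and the change in the
        // client's satisfaction score over the same period.
        class ProjectOutcome
        {
            public string Client { get; set; }
            public decimal Revenue { get; set; }
            public double SatisfactionDelta { get; set; } // points gained (+) or lost (-)
        }

        static class SatisfactionKpiSketch
        {
            // Naive estimate of average revenue per satisfaction point gained or lost.
            // A real analysis would use a statistical model (e.g. Partial Least Squares)
            // rather than a simple ratio of totals.
            static decimal RevenuePerSatisfactionPoint(IEnumerable<ProjectOutcome> outcomes)
            {
                List<ProjectOutcome> list = outcomes.ToList();
                double totalPoints = list.Sum(o => Math.Abs(o.SatisfactionDelta));
                decimal totalRevenue = list.Sum(o => o.Revenue);
                return totalPoints == 0 ? 0m : totalRevenue / (decimal)totalPoints;
            }

            static void Main()
            {
                var outcomes = new List<ProjectOutcome>
                {
                    new ProjectOutcome { Client = "Acme",    Revenue = 250000m, SatisfactionDelta = 3 },
                    new ProjectOutcome { Client = "Contoso", Revenue = 120000m, SatisfactionDelta = -2 }
                };
                Console.WriteLine("Average revenue per satisfaction point: {0:C}",
                                  RevenuePerSatisfactionPoint(outcomes));
            }
        }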

    “BOTHER” (Before Offering To Heavily Expense, R$Invent): Unearthing BI Insights in the Most Unlikely of Places: the Existing Pools of Information Within Your Workplace

    Yes. As many of you know, I took a hiatus from my blog to really get down to the nuts and bolts of business intelligence. As a BI solutions architect working for a leading BI consultancy, I get the wonderful benefit of experiencing many different perspectives of BI in the workplace. Each year, I also get to watch how the industry grows in its implicit pervasiveness (and no, I do not mean "BI pervasiveness" as in the latest catch phrase – one I would encourage you to release, BTW, if you have added it to your vocabulary of late, but I digress). No, what I mean is the textbook definition: I get to watch how business intelligence is infecting the lives of countless employees, all with the promise of making lives easier, better, faster, eliminating manual practices without eliminating jobs, enhancing decision making with data, empowering, illuminating, targeting, and the list goes on and on. But the reality I get to witness is that business intelligence has become the CRM we all wanted to avoid: the nomenclature of late, the popular trend in the workplace, where one gets to order a dashboard on the side with their reporting platform. Oh, and for just a few dollars more, you can super-size your order and get those ridiculous pie charts, with not just 3-D rendering capabilities but every hue and shade of the color spin-wheel of life… Spin a red and lose your job; spin a green and move up the proverbial corporate hierarchy. Opening up the paradigm of Pandora's BI box is not for the faint-hearted… Stop reading now if you are already getting queasy.

     

    What was once a new area of interest for me – the uncovering of KPIs, or key performance indicators, and building strategically targeted performance management programs around them – has sadly manifested into the CRM nightmare I predicted over 3 years ago in this very blog. When any one thing gets overexposed (think of that infamous hotel heiress), we are systematically programmed to shift focus elsewhere, the new being more interesting than the old. And the 'WHAT' that dashboards tell (*it's red, it's green*) gets very old, very quickly. Before anyone measures the efficacy of the measurement system itself (i.e., metadata measurements of the qualitative usefulness of the BI program), practitioners leap to the conclusion that BI isn't effective anymore, or wasn't ever effective, in their respective work environments. Nine times out of ten, it is the leaders who thought they wanted that extra helping of BI after attending that year's conference, flavored with some Norton or Kaplan, or the more generalized BI user conferences sponsored by the software vendors (slight shudder thinking about all of those workplace leaders who don't know what they don't know, and are invited by self-interested 'teachers' whose altruism stops as soon as the invoice is signed and dated). But again I digress.

    And really, when it is all boiled down, besides being what you may perceive to be a rant on my part, this is an impassioned plea for the next wave of BI to begin… A call to arms for the vets of our industry to stop self-promoting for one second and start helping others build better with what they already have. To stop buying the new flavors of the month and stick with the vanilla or chocolate or strawberry. That's what is so great about keeping it simple: not only can you pick something individual and enjoy its flavor for a lifetime of richness, but if and when the flavor stops providing what you need, you can always layer onto the basics and actually create something new, like a 2-tiered swirl or, for the even more daring, the 3-tiered über swirl known to us ice cream aficionados as Neapolitan.

    Now, I realize one must crawl before they walk, and walk before they run – or at least so I am told whenever I step onto my soapbox and herald change in the way BI is being implemented today, whether on the street or, in this case, in the cubicle aisles of the workplace, beating my drum that yes, Virginia, there is a Santa Claus, and he likes it when BI delivers what it promises AND CAN deliver. It is up to the practitioners to hang up their green-colored glasses and start thinking about all of those reasons they got into BI in the 1st place – and it is a powerful step back to take; one I can personally speak to, having just returned from my sojourn with a greater understanding that the 'WHAT' about a business is only the surface-level cut that indicators or operational reporting will yield. Looking further at the usefulness of one's metrics, asking those painful questions like "what are you going to do with that data" instead of just becoming a reporting jockey, and driving the 'WHAT' down to the 'WHY' is still only half the battle; it is the 'WHAT CAN BE DONE ABOUT IT' that takes it to a whole other level. And only those analysts truly inundated with the data from all areas of the company – not just finance or operations, but market research, retail, development, etc. – can truly answer the timeless question of "so what do we do now," because they have the data to steer the powers that be in that direction.

    This isn't a tool that I am prescribing; it might be spreadsheets and hours of analyst bandwidth that finally get you where you need to be to make your BI programs and platforms useful. And the only way to get there is to take a step back and examine your business frankly, ask the right people the right questions, and finally, question (with respect) the answers you get – or keep asking the FIVE WHYs until you get to the root cause of an existing platform's efficacy. Otherwise, if you don't change your approach, you will always get what you have always got, and trust me, only a handful of companies (< 5%) are doing this today. Start now by pulling off the band-aid, or kicking the crutch of expenditure away, and use what you have to explore your existing data stores, manual as that may be, to find the nuggets of gold you want to be successful. No, let me rephrase: the nuggets you NEED to be successful. Check back over the next few days for actual steps to achieve success. I will prescribe a 5-step DON'T BOTHER plan of attack for reinventing your BI program before you reinvest, starting with where to begin getting down and dirty with your existing analytics. This is not for the faint-hearted – many of today's business intelligence practitioners tend to avoid, not know about, or are too intimidated to uncover what I will reveal. As we move into the new year, why not shift the paradigm of your existing BI mindset by taking a bite from the beefiest side of all: the analytic?!

    More to come tomorrow…

    A Data Architect’s Brain on Drugs, or Your Worst Nightmare

    Cold sweats as you begin testing cardinality – with palms shaking, you open the data architecture of what others have called a spaghetti-coded mess…

    You are instructed to look for the blue line;

    Come on…it isn’t that hard to see they say…

    Did you catch it, the one you need to test the correct relationship on? 😉

    Talk about a data architect’s spaghetti coded nightmare!!!

    Business intelligence: Adding value to daily decisions

     

    Business intelligence in hospitality: Adding value to daily decisions

    Insight you can act on equals business success



    After more than 10 years, business intelligence (BI) is catching on. In many organizations, everyone from C-level executives to the controller to the chef relies on dashboards, scorecards, and daily reports to provide information about their business and the entire enterprise.

    A recent IDC Research study ("Worldwide Business Intelligence Tools 2005 Vendor Shares," October 2006, #202603) found that organizations are looking for more than just tools for queries and reports. People want insight from their BI solution to support collaborative analysis, forecasting, and decision-making, so that BI can help drive better business processes—and results. Microsoft BI solutions can provide such support—and have helped companies such as Hilton and Expedia save money, provide superior guest service, and improve business performance and the bottom line. In this article, we’ll discuss how the Microsoft Business Intelligence platform can help your company.


    Insight you can act on

    "The trend now is to move from reporting about the past to studying targeted information about how key metrics or key performance indicators (KPIs) compare to current goals," says Sandra Andrews, industry solutions director in the Retail & Hospitality group at Microsoft. "Delivering the right information to the right people in the right format at the right time is critical. Empowering employees with real-time views of where the business is now and where it’s headed adds value to daily decisions."

    To accurately manage and forecast, you need an integrated system that provides one version of the truth, and then you need that information to be easily accessible to your teams. But many organizations in the hotel industry are still using different BI tools in different departments. Complicating the matter more, companies use separate systems for different locations. As a result, it can be extremely difficult to standardize information and reports or forecast staffing and supply needs, let alone provide real-time analytics. However, the business benefits of delivering information to people in a format they can use to take action or make better business decisions far outweigh the costs.


    Keeping scorecards to track BI

    Scorecarding is an efficient, immediate way to capture the key data you need. Recently, Expedia implemented a scorecard solution to better serve online customers and put complex Web performance metrics and KPIs at its analysts' fingertips. The result? Automated data collection saved time and effort, allowing analysts to spend their time developing answers rather than crunching numbers.

    "Customer satisfaction is essential to helping make Expedia a great company. With scorecarding, we have the means to evaluate how well we are doing to make the company even greater," says Laura Gibbons, manager of Customer Satisfaction & Six Sigma at Expedia. "And if scorecarding is adopted throughout the company, I believe we are that much closer to becoming the largest and most profitable seller of travel in the world."


    A system everyone can use

    Making sense of enormous quantities of rapidly changing data, visualizing and prioritizing that information, and holding the organization accountable for specific performance metrics is essential for success. If you have insight that you can act on, then you can align those activities with corporate goals and forecasts. And by empowering people through familiar tools, you make it easier for your employees to access the information they need to build relationships with guests.

    The Microsoft Business Intelligence platform leverages the Microsoft Office system on the front end, helping you create a BI solution that your people can use easily, without a steep learning curve. "Managers and executives can create reports in Excel, link them to PowerPoint, and easily update their reports and presentations. Hotel managers are already using Excel," Andrews says. "No matter what BI tool organizations adopt, ultimately the user extracts the data into an Excel file to manipulate it. By giving your people the information they need in the Office system right from the start, you reach all employees and increase collaboration. You change the way your company works."


    It’s all about forecasting

    To provide the type of service that generates customer loyalty, you need to be able to pull data from multiple systems to analyze guest profiles, forecast trends, determine occupancy rates, or predict food and beverage sales. The right BI solution can help you manage your business, increase productivity, and provide the excellent service that builds customer loyalty.

    For example, Hilton Hotels wanted an adaptable, scalable solution that would include demand-based pricing and improve forecasting for group, catering, and public-space sales. Hilton leveraged Microsoft’s Business Intelligence platform, deploying Microsoft SQL Server 2005 and using SQL Server Analysis and Reporting Services all running on the Microsoft Windows Server 2003 operating system. As a result of the Microsoft BI solution, Hilton increased their data processing rate by 300 percent. They reduced catering forecast time by 25 percent. And they improved customer service by accommodating more catering requests, all with a 15-percent reduction in deployment time. Kathleen Sullivan, vice president, Sales and Revenue Management Systems at Hilton Hotels, says, "SQL Server 2005 provides Hilton with the power and extensibility to deliver revenue analysis and forecasting capabilities."


    Business intelligence and beyond

    One trend that’s already changing revenue and channel management is how BI is fueling a better understanding of convention space and catering needs to generate revenue for sales and catering.

    Organizations are integrating customer relationship management (CRM) sales tools with business intelligence to help book their conventions and catering events. The sales department can determine which event will bring in the most all-property revenue. Harrah's Entertainment, a forerunner in the innovative use of BI, is using customer intelligence and CRM strategies for tracking and increasing customer loyalty. Harrah's hands out credits to its guests each time they visit the casino and play games; it then tracks visits, and the more a guest visits, the greater the value of the reward. Harrah's can predict the value of each guest, their habits, and how to increase each guest's total revenue per available room (REVPAR).


    The Microsoft Business Intelligence solution

    IDC’s competitive analysis report, "Worldwide Business Intelligence Tools 2005 Vendor Shares," found that Microsoft’s BI tools revenue growth was more than twice that of the other leading database management systems (DBMS) and legacy pure-play BI vendors.

    The Microsoft Business Intelligence platform is a complete and integrated solution. Whether you use it as your data warehouse platform, your day–to-day user interface, or as an analysis and reporting solution, Microsoft provides the fastest growing business intelligence platform to support your needs. The Microsoft BI solution includes the following servers and client tools to enhance your business:

    Microsoft SQL Server 2005 (along with Visual Studio 2005 and BizTalk Server 2006) provides advanced data integration, data warehousing, data analysis, and enterprise reporting capabilities to help ensure interoperability in heterogeneous environments and speed the deployment of your BI projects.

    Microsoft SQL Server Reporting Services is a comprehensive, server-based reporting solution designed to help you author, manage, and deliver both paper-based and interactive Web-based reports.

    Microsoft SQL Server Integration Services (SSIS) is a next generation data integration platform that can integrate data from any source. SSIS provides a scalable and extensible platform that empowers development teams to build, manage, and deploy integration solutions to meet unique integration needs.

    Microsoft SQL Server Analysis Services (SSAS) provides tools for data mining with which you can identify rules and patterns in your data, so that you can determine why things happen and predict what will happen in the future – giving you powerful insight that will help your company make better business decisions. SQL Server 2005 Analysis Services provides, for the first time, a unified and integrated view of all your business data as the foundation for all of your traditional reporting, online analytical processing (OLAP) analysis, KPI scorecards, and data mining.

    End-user tools build on the BI platform capabilities of Microsoft SQL Server.

    Microsoft Office Excel 2007 helps you to securely access, analyze, and share information from data warehouses and enterprise applications, and to maintain a permanent connection between your Office Excel spreadsheet and the data source.

    Microsoft Office SharePoint Server 2007 becomes a comprehensive portal for all of the BI content and end-user capabilities in SQL Server Reporting Services and the Microsoft Office 2007 release, providing secure access to business information in one place. Excel Services allow customers to more effectively share and manage spreadsheets on the server.

    Microsoft Office PerformancePoint Server 2007 offers an easy to use performance management application spanning business scorecarding, analytics, and forecasting to enable companies to better manage their business.


    Anonymous Web surfing at your fingertips…IP Proxy Servers


    Not saying you should do this, but those pesky IT administrators, with their firewalls and IP blocks disallowing work-time visits to social networking sites, are really preventing highly valuable word-of-mouth marketing for their companies through said social networks, in my opinion. All things in moderation, and I do realize there are "abusers" of said rights who would spend all day on Facebook or checking their Twitter timelines. But for the mainstreamers like myself who "get it," social networks provide companies with great recruiting vehicles, especially when the word of mouth comes from inside the company's walls, from other employees. It is also a great lead generator for consultancies like my own, so blocking access seems counterintuitive and archaic, in my humble opinion. Here is a list of IP addresses to use as proxy servers within your Internet Options (go to Tools, Internet Options, Connections, and fill in the Proxy Server address/port field). I recommend you use one listed in the US.

     

    All responsibility around usage of said proxies falls on YOU, dear reader, so keep in mind this list was gained from a publicly available proxy listing service and should be used with caution and NOT for illicit or scandalous purposes, realizing that any harm done is YOUR responsibility, owned by YOU and NOT me.
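    If you'd rather test one of these programmatically instead of changing your browser settings, here is a minimal C# sketch – the proxy address and target URL are just placeholders taken from the list below:

        using System;
        using System.IO;
        using System.Net;

        // Minimal sketch: fetch a page through one of the listed proxies to verify it works.
        class ProxyTest
        {
            static void Main()
            {
                // Placeholder values - substitute any IP/port pair from the list below.
                var proxy = new WebProxy("148.233.159.58", 8080);

                var request = (HttpWebRequest)WebRequest.Create("http://www.example.com/");
                request.Proxy = proxy;
                request.Timeout = 10000; // free-list proxies are often slow or already dead

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    string body = reader.ReadToEnd();
                    Console.WriteLine("Status: " + response.StatusCode);
                    Console.WriteLine(body.Length > 200 ? body.Substring(0, 200) : body);
                }
            }
        }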

     

    Care of Proxy4Free.com


    Proxy List 1

    IP                 Port   Type             Country                Last Test
    148.233.159.58     8080   anonymous        Mexico                 2009-07-13
    84.255.246.20      80     anonymous        Slovenia               2009-07-13
    67.69.254.250      80     anonymous        Canada                 2009-07-13
    67.69.254.254      80     anonymous        Canada                 2009-07-13
    125.245.160.130    8080   anonymous        South Korea            2009-07-13
    67.69.254.248      80     anonymous        Canada                 2009-07-13
    218.14.227.197     3128   anonymous        China                  2009-07-13
    218.6.16.162       80     anonymous        China                  2009-07-13
    93.123.104.66      8080   anonymous        (not listed)           2009-07-13
    41.210.252.11      8080   anonymous        Angola                 2009-07-13
    78.41.19.30        3128   anonymous        Czech Republic         2009-07-13
    218.75.100.114     8080   anonymous        China                  2009-07-13
    193.37.152.154     3128   anonymous        Germany                2009-07-13
    86.101.185.109     8080   anonymous        Hungary                2009-07-13
    78.108.96.47       8080   anonymous        Czech Republic         2009-07-13
    86.101.185.97      8080   anonymous        Hungary                2009-07-13
    200.174.85.195     3128   transparent      Brazil                 2009-07-13
    61.172.244.108     80     anonymous        China                  2009-07-13
    202.98.23.114      80     anonymous        China                  2009-07-13
    64.29.148.15       80     high anonymity   United States          2009-07-13
    121.58.96.10       3128   anonymous        China                  2009-07-13
    200.65.129.1       80     anonymous        Mexico                 2009-07-13
    64.12.223.232      80     anonymous        United States          2009-07-13
    189.108.102.138    3128   anonymous        Brazil                 2009-07-13
    121.204.0.2        80     anonymous        China                  2009-07-13
    67.69.254.246      80     anonymous        Canada                 2009-07-13
    203.160.1.75       80     anonymous        Vietnam                2009-07-13
    203.160.001.112    80     anonymous        Vietnam                2009-07-13
    64.12.222.232      80     anonymous        United States          2009-07-13
    119.70.40.102      8080   anonymous        South Korea            2009-07-13
    143.215.129.230    3128   anonymous        United States          2009-07-13
    59.39.145.178      3128   anonymous        China                  2009-07-13
    203.162.183.222    80     transparent      Vietnam                2009-07-13
    67.227.132.249     80     high anonymity   United States          2009-07-13
    121.9.221.188      80     high anonymity   China                  2009-07-13
    67.69.254.245      80     anonymous        Canada                 2009-07-13
    85.214.81.233      3128   anonymous        Germany                2009-07-13
    200.174.85.193     3128   transparent      Brazil                 2009-07-13
    217.169.182.206    8080   anonymous        Czech Republic         2009-07-13
    222.68.207.11      80     anonymous        China                  2009-07-13
    60.29.241.102      80     anonymous        China                  2009-07-13
    222.218.156.66     80     anonymous        China                  2009-07-13
    203.160.1.66       80     anonymous        Vietnam                2009-07-13
    121.12.249.207     3128   anonymous        China                  2009-07-13
    222.68.206.11      80     anonymous        China                  2009-07-13


    Proxy List 2

    IP                 Port   Type             Country                Last Test
    219.137.229.218    3128   anonymous        China                  2009-07-13
    64.29.148.30       80     high anonymity   United States          2009-07-13
    221.130.191.216    8080   anonymous        China                  2009-07-13
    84.1.150.30        8080   anonymous        Hungary                2009-07-13
    61.172.249.96      80     anonymous        China                  2009-07-13
    203.160.1.85       80     anonymous        Vietnam                2009-07-13
    78.154.132.241     8080   anonymous        Latvia                 2009-07-13
    222.124.190.12     8080   anonymous        Indonesia              2009-07-13
    114.30.47.10       80     anonymous        Australia              2009-07-13
    118.175.255.10     80     anonymous        Thailand               2009-07-13
    61.152.246.226     80     high anonymity   China                  2009-07-13
    67.69.254.253      80     anonymous        Canada                 2009-07-13
    80.148.27.97       8080   anonymous        Germany                2009-07-13
    67.69.254.240      80     anonymous        Canada                 2009-07-13
    203.160.1.94       80     anonymous        Vietnam                2009-07-13
    141.85.118.1       80     high anonymity   Romania                2009-07-13
    200.65.127.161     3128   anonymous        Mexico                 2009-07-13
    213.180.131.135    80     anonymous        Poland                 2009-07-13
    219.255.135.180    80     anonymous        South Korea            2009-07-13
    193.37.152.206     3128   anonymous        Germany                2009-07-13
    119.167.225.136    8080   anonymous        China                  2009-07-13
    189.56.61.33       3128   anonymous        Brazil                 2009-07-13
    201.147.20.245     80     anonymous        Mexico                 2009-07-13
    119.40.99.2        8080   anonymous        Mongolia               2009-07-13
    80.90.82.93        80     anonymous        Albania                2009-07-13
    61.172.249.94      80     anonymous        China                  2009-07-13
    83.230.181.116     3128   anonymous        Spain                  2009-07-13
    67.69.254.252      80     anonymous        Canada                 2009-07-13
    86.101.185.98      8080   anonymous        Hungary                2009-07-13
    200.65.129.2       80     anonymous        Mexico                 2009-07-13
    121.9.221.187      80     high anonymity   China                  2009-07-13
    195.229.150.7      80     anonymous        United Arab Emirates   2009-07-13
    60.12.226.18       80     anonymous        China                  2009-07-13
    67.91.182.64       3128   anonymous        United States          2009-07-13
    125.70.229.30      8080   anonymous        China                  2009-07-13
    67.69.254.247      80     anonymous        Canada                 2009-07-13
    208.78.125.18      80     anonymous        United States          2009-07-13
    148.233.229.235    3128   anonymous        Mexico                 2009-07-13
    64.37.184.17       80     high anonymity   United States          2009-07-13
    203.160.001.103    80     anonymous        Vietnam                2009-07-13
    86.101.185.112     8080   anonymous        Hungary                2009-07-13
    202.98.23.116      80     anonymous        China                  2009-07-13
    61.172.246.180     80     anonymous        China                  2009-07-13
    218.28.176.246     3128   anonymous        China                  2009-07-13
    67.69.254.243      80     anonymous        Canada                 2009-07-13

    Optimizing BI Operational Reports with Internal IT Ticketing / CRM Systems

     

    How often do you think about optimizing your operational reporting processes with your internal ticketing system / IT CRMs? Probably not as often as you would like –

    Being a Lean Six Sigma Black Belt, I can’t help but think about these things.

    The process of promoting a report between a test environment and a production environment often involves customer communication in the form of an email – why not standardize and automate that process?

    First, you should have an inventory of your reports with an ID or CUID. This can be extracted from the BusinessObjects auditor universe or your BI provider's audit logs, respectively; in the worst case, start reporting off your SQL data source instances or event logging tables.

    Here is an example of the output that C# code can generate while automating the promotion process: a nice, user-friendly, standardized email that calls out the folder where the report lives and provides a login link (or, if using SSO, a pass-through token to log the user in auto-magically). In the original post, my comments to you, dear blog reader, were in maroon and the actual email content was in navy; since the colors do not survive here, my comments appear as [bracketed notes] and everything else is part of the email content. (A rough sketch of the sending code itself follows after the note at the end.)

    “Your report has been created and placed on the TEST system for testing.

    Please login (by clicking the link below) and test the report.
    http://<servername>:<port>//InfoViewApp/logon.jsp

    Your report can be found in: \\<LOB folder>\<Department folder>\<Department subfolder>

    The report name is: <ReportName>

    [Note to reader: hide the CUID and display only the mapped report name to end users, as you see above.]

    [Note to reader: link using OpenDocument by appending the parameters in your C# code: http://<servername>:<port>/OpenDocument/opendoc/<platformSpecific><parameter1>&<parameter2>&…&<parameterN>]

    OR, you can go directly and get the updated or new report here:
    You will have to login manually if you use this link. Remember to change the default Authentication (on the login window) to "Windows AD" or your respective authentication method (“Enterprise” or “LDAP” are your other choices).
    Once the report has been tested please let me know by re-opening the ticket so I can move it to the production system.

    Untested reports are purged every 30 days. Should you want to make any further changes to an existing report outside of data quality corrections, please open a new ticket but reference it to the old one for tracking purposes. Thank you for your kind consideration and adherence to the BI team report process.”

    Note:
    To obtain the document ID, navigate to the document within the Central Management Console (CMC).
    The properties page for the document contains the document ID and the CUID. Use this value for the iDocID parameter.
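    As a minimal sketch of what the sending side of that automation could look like – the class names, SMTP host, addresses and the exact OpenDocument URL pattern below are my own illustrative assumptions, so verify the opendoc syntax against your BusinessObjects version:

        using System;
        using System.Net.Mail;

        // Hypothetical record describing a report that was just promoted to TEST.
        class PromotedReport
        {
            public string Name { get; set; }
            public string Cuid { get; set; }        // kept internal; never shown to the end user
            public string FolderPath { get; set; }  // e.g. \\<LOB folder>\<Department folder>
            public string RequesterEmail { get; set; }
        }

        static class ReportPromotionNotifier
        {
            const string InfoViewLogonUrl = "http://<servername>:<port>/InfoViewApp/logon.jsp";
            const string OpenDocumentUrl =
                "http://<servername>:<port>/OpenDocument/opendoc/openDocument.jsp?iDocID={0}";

            // Builds the standardized notification body shown above.
            static string BuildBody(PromotedReport report)
            {
                return
                    "Your report has been created and placed on the TEST system for testing.\n\n" +
                    "Please login (by clicking the link below) and test the report.\n" +
                    InfoViewLogonUrl + "\n\n" +
                    "Or open it directly: " + string.Format(OpenDocumentUrl, report.Cuid) + "\n\n" +
                    "Your report can be found in: " + report.FolderPath + "\n" +
                    "The report name is: " + report.Name + "\n\n" +
                    "Once the report has been tested please let me know by re-opening the ticket " +
                    "so I can move it to the production system.";
            }

            // Sends the notification through an SMTP relay (adjust hosts/addresses to your environment).
            static void Notify(PromotedReport report)
            {
                var message = new MailMessage("bi-team@yourcompany.com",
                                              report.RequesterEmail,
                                              "Your report is ready for testing: " + report.Name,
                                              BuildBody(report));
                new SmtpClient("smtp.yourcompany.com").Send(message);
            }

            static void Main()
            {
                Notify(new PromotedReport
                {
                    Name = "Daily Sales Summary",
                    Cuid = "AbCdEf123456",
                    FolderPath = @"\\Finance\Sales\Daily",
                    RequesterEmail = "requester@yourcompany.com"
                });
            }
        }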

    Libra – a week of house cleaning and independence is in store for you!

    Your Horoscope – This Week (April 26 – May 2, 2009)

    Don't start or decide on anything of importance on Monday or Tuesday – the Moon and Mars are in your ruling house, where the Hermit is deriving a joyous solitude that may surprise you, dear Libra, if you choose to listen. Your key word is independent, so break the chains of codependency now. Your love life looks hotter than ever. You can't escape the demands and desires of your lover. The presence of Mars in your relationship zone indicates it's time to clear the air. If there are any issues that have been pushed under the carpet, they're about to be exposed. You may find your partner a lot more argumentative than usual. Talking over difficulties will help the energy in the relationship to move instead of stagnate. There may be some turbulence, but you'll also feel a lot better for having shared your feelings.

     

    Your Horoscope – April 2009

    You’ll be torn between work and personal matters at home and need to find balance on April 1 and 2. Don’t let others push you into making decisions quickly. You’ll have the chance to establish warm and affectionate bonds and enjoy life on the weekend of April 4. Catch up on a work project on the evening of April 5. Take your obligations very seriously on April 6 and 7. The Moon in your sign on April 8 and 9 will relax you and help you take it easy for a couple of days. You’ll have to be proactive and assertive in some situations, though, and not wait for developments. Relationships may be strained on the weekend of April 11 if you don’t deal with issues immediately. April 13 and 14 would be a good time to take a day trip if you feel the need for a change of scene. Burn off some excess energy by hitting the gym. Work will be intense on April 15 and 16 and you may need to put in some extra hours to get things finished. You may want to get involved in a humanitarian cause or at least be of help to someone in need on the weekend of April 18. A burst of energy on April 22 and 23 may be short lived but will motivate you to start new projects. Do some reading or call a friend on the evening of April 26. You’ll feel frustrated by a critical person in authority on April 27 and 28.

    Attn: Northwest BI Professionals – Register now for the next TDWI NW Chapter Meeting

     

    Date: May 14, 2009

    Time: 5:30–8:00 PM, with billiards and networking at the Parlor immediately following the event

    Location: Lincoln Square, 700 Bellevue Way NE, Bellevue, WA 98004 (see map below)


    [Map: Lincoln Square]

    Speaker: Dave Wells, TDWI Research Director and Avid Conference Speaker

    Customer Speaker: Vincent Ippolito, Washington Dental Services’ Director of BI

    Topic: How to Deploy BI Programs in Time of Economic Hardship

    Registration is free. Food is free to attendees. And best of all (unlike those other data organizations), you DON'T have to be a member to attend, nor pay to attend even if you are NOT currently a member.

    Space is extremely limited and advance registration is recommended.

    Link to Register: http://1105media.inquisiteasp.com/cgi-bin/qwebcorporate.dll?P5RVKQ

    TDWI NW Page: http://www.twdi.org/northwest

    Re-visiting Organizational Objectives and Values

    Simplistically speaking, the BSCOL (Balanced Scorecard Collaborative) defines the cascaded model approach for linking corporate values with individuals' performance review goals/objectives. Starting at the bottom, looking at what each individual's personal goals are, and flowing up from there to the departmental goals, the division goals and finally the executive-tier strategy/vision/goals/objectives will help you see where you have gaps between your values and what your employees are actually driving your company towards, versus where you have alignment.

     

    Restructuring those values, either at the top (harder) or at the individual contributor level at the bottom (easier), to ensure alignment will drive better performance from your people because of the visibility it offers them: it demonstrates how what they are tasked to complete in a year contributes to helping the company achieve its organizational values. If you start with the existing values, then add the existing objectives as a starting point and see if you can map the two together. Nine times out of ten they will NOT be aligned, and that is a big AH-HA for many leaders to see on paper.

     

    Then start to cascade from there to the division leader tier, the department management tier and, lastly, the individual contributors. That is the vertical alignment process from top to bottom, if that is your preference. Once you have these vertical lines mapped, look for overlapping or conflicting values between divisions, departments and people, and find affinity areas that can be mapped logically back to the values where you started (in a top-down approach). BSCOL.ORG and my blog (shameless plug) are both great resources offering excellent templates to assist in this process, like strategy maps (see the graphic provided by the TDWI BI Journal below), with a twist: instead of using the 4 perspectives, or in conjunction with them (as that is very valuable in and of itself), use your organizational hierarchy instead. Financial becomes the CEO's established organizational values/objectives; Customer becomes the divisions that report into the CEO, where the circles become each division's values/goals/objectives; Internal becomes the departments; and Learning and Growth becomes the individual contributors' objectives that their manager lists out in those pesky annual performance reviews.

     

    (Sorry to those big believers, but until true performance management like what I have outlined is institutional in all companies, the PR system is a bell-curved sham where some of the best employees get the short end of the bell-curve stick – because how could one department have all the highest performers, even if they are a crackerjack team of employees? One day… A girl can dream, right?)
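    As a toy illustration of the gap-finding exercise above – the classes and tier names are my own shorthand, not BSCOL's – here is a sketch that lists every objective that does not map to anything above it:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Illustrative model of a cascaded objective: each objective belongs to a tier
        // (CEO, Division, Department, Individual) and may map to an objective one tier up.
        class Objective
        {
            public string Owner { get; set; }
            public string Tier { get; set; }
            public string Description { get; set; }
            public Objective MapsTo { get; set; } // null = no upstream alignment
        }

        static class AlignmentCheck
        {
            // Lists every objective below the top tier that does not map to anything
            // above it - the "AH-HA" gaps described above.
            static IEnumerable<Objective> FindGaps(IEnumerable<Objective> objectives)
            {
                return objectives.Where(o => o.Tier != "CEO" && o.MapsTo == null);
            }

            static void Main()
            {
                var grow = new Objective { Owner = "CEO", Tier = "CEO", Description = "Grow recurring revenue 10%" };
                var objectives = new List<Objective>
                {
                    grow,
                    new Objective { Owner = "Sales Division", Tier = "Division",
                                    Description = "Increase renewals", MapsTo = grow },
                    new Objective { Owner = "J. Analyst", Tier = "Individual",
                                    Description = "Build weekly churn report" } // unaligned
                };

                foreach (var gap in FindGaps(objectives))
                    Console.WriteLine("Unaligned: {0} - {1}", gap.Owner, gap.Description);
            }
        }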

     

    -Laura Edell Gibbons

    New LinkedIn Group: TDWI NW Chapter: Get Your Social Intelligence On!

     Having fueled a social networking surge, TDWI has started embracing what is super exciting in my eyes: social networking and BI. TWITTER: http://www.twitter.com/lauragibbons (had to shamelessly self promote)

    TDWI NW CHAPTER: http://www.twitter.com/TDWI_NW_CHAPTER

    TDWI on TWITTER: http://www.twitter.com/TDWI

    LINKEDIN: http://www.linkedin.com/in/lauragibbons

    TDWI NW CHAPTER on LINKEDIN: http://www.linkedin.com/groups?about=&gid=820537&trk=anet_ug_grppro

    Enjoy! Laura Edell Gibbons, TDWI NW Chapter Board Officer & Chapter Secretary

     

    Common errors when using STSADM -o backup/restore to transfer a database to a new farm on MOSS 2007

     

    In the past I have seen the following problem a couple of times: a customer creates a backup of a content database on one server farm (e.g. using STSADM -o backup or a DB backup in SQL) and restores the backup on a different farm and attaches the content database to a web application.

    After this operation is done, several features (like variations and content deployment) fail to work, throwing the following exception:

    System.ArgumentException. Value does not fall within the expected range.    
    at Microsoft.SharePoint.Library.SPRequestInternalClass.GetMetadataForUrl(String bstrUrl, Int32 METADATAFLAGS, Guid& pgListId, Int32& plItemId, Int32& plType, Object& pvarFileOrFolder)     
    at Microsoft.SharePoint.Library.SPRequest.GetMetadataForUrl(String bstrUrl, Int32 METADATAFLAGS, Guid& pgListId, Int32& plItemId, Int32& plType, Object& pvarFileOrFolder)     
    at Microsoft.SharePoint.SPWeb.GetMetadataForUrl(String relUrl, Int32 mondoProcHint, Guid& listId, Int32& itemId, Int32& typeOfObject, Object& fileOrFolder)     
    at Microsoft.SharePoint.SPWeb.GetFileOrFolderObject(String strUrl)     
    at Microsoft.SharePoint.Publishing.CommonUtilities.GetFileFromUrl(String url, SPWeb web)

    The reason for this problem is that backup/restore does not adjust the references from the publishing page objects in the Pages library to their Page Layouts. These URLs are sometimes stored as absolute URLs including the server name. And this server name is the server name of the old server farm which cannot be resolved on the new farm.

    Be aware that backup/restore of MOSS content databases between server farms is not fully supported! Official documentation of this support limitation is currently in the works. The supported way to transfer content between server farms is to use STSADM -o export/import or content deployment. Backup/restore is only supported within the same server farm.

    In case you have run into the above problem, you have two options:

    1. Throw away the database and transfer it correctly using STSADM -o export/import or content deployment
    2. Fix the incorrect links manually using the following steps
      1. Open the web in SharePoint Designer
      2. On the “task panes” window, pick hyperlinks.
      3. For the “hyperlink” heading, click the arrow and pick (custom…)
      4. In the dialog, ask to show rows where the hyperlink begins with a URL that is not valid on the current server farm
      5. For each of the files, right-click, choose "Edit hyperlink…" and replace the hyperlink with one that is valid on the current server farm.
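    If there are many pages to fix, a scripted pass can save time. Below is a rough sketch using the SharePoint object model; it assumes the page layout reference is stored in the "PublishingPageLayout" field of items in the Pages library and that the old and new host names are known – verify the field and URLs on your farm, and run it against a test copy of the content database first.

        using System;
        using Microsoft.SharePoint;

        // Rough sketch: rewrite absolute page-layout URLs that still point at the old farm.
        // Assumptions: the reference lives in the "PublishingPageLayout" field of the Pages
        // library, and pages are not checked out (version-controlled pages may need a
        // check-out/check-in around the update).
        class FixPageLayoutUrls
        {
            static void Main()
            {
                const string oldHost = "http://oldserver";   // host name of the old farm
                const string newHost = "http://newserver";   // host name of the new farm

                // Adjust to your web application / site collection URL.
                using (SPSite site = new SPSite(newHost + "/sites/publishing"))
                using (SPWeb web = site.OpenWeb())
                {
                    SPList pages = web.Lists["Pages"];
                    foreach (SPListItem item in pages.Items)
                    {
                        string layoutRef = item["PublishingPageLayout"] as string;
                        if (layoutRef != null && layoutRef.Contains(oldHost))
                        {
                            item["PublishingPageLayout"] = layoutRef.Replace(oldHost, newHost);
                            item.Update();
                            Console.WriteLine("Fixed: " + item.Url);
                        }
                    }
                }
            }
        }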

    Strategic Business Intelligence in Times of Economic Turmoil

    Ideas for business intelligence practitioners to forge ahead with their BI initiatives in times of economic turmoil – to pursue best-in-class business intelligence and data management without incurring the wrath of the monolithic, centralized platforms built when times were marked by economic growth in most revenue-bearing verticals. Can this race hold its pace, with the same velocity and momentum, when the economy shifts its winds against the runners? In this larger-than-life, uber-BI applications race, marked over the last 13–24 months by a high rate of BI mergers and acquisitions, one has to wonder what will happen when the dust settles and the acquired folks realize they are no longer part of the organization that was the little guy who got bought out way back when. Imagine how ProClarity felt when giant Microsoft came a-callin'. Or when SAP acquired BusinessObjects: did it become B-I-C ERP, meet B-I-C, well, BI?

    What about the true R&D exploratory labs like Google Labs, which churn out some interesting advancements in the technology space, offering APIs and SDKs for free to the bold and daring willing to take them up on the offer? (Oh, and that's no wilting flower of a number, folks… Google had a cap of 10,000 invitations when App Engine went live in the spring of 2008.) Some of the cool new data warehousing appliances, and the process changes that arrived with Master Data Management, came from the die-hard open source fans who wanted to bring some structure to the masses without the cost of enterprise platforms, with their clunky deployment paths and costly upgrades. And let's not forget the Adobe Flash-frenzied dashboard users, newly introduced to presentation-layer-worthy interactive dashboard gauges that made mouths salivate the first through third time viewing them… As expected, vendors tried to update their applications to mirror such interactivity and integration with the MS Office stack, though Xcelsius still corners the presentation-layer market by far (open source has some cool contenders, especially when it comes to data visualization), as the race to the dashboard 2.0 space moves into the collaborative world of social networks.

    No, there really is a Santa Claus, and I, too, am still smitten with PerformancePoint Planning!! PPS truly rocks the product placement in this arena – it is much harder to appeal to that broad category of stuffy financial budgeting and planning CFOs and the like.

    So I ask: with the downturn in the economy, can such advancements continue in what is clearly a well-capitalized business intelligence software industry, whose recent growth spurts were marked by a growing sense of entitlement yet subpar execution and results upon implementation, where services and solutions costs drove positive spikes in software sales, and vice versa? In fact, an interdisciplinary, and while highly debated, interdependency exists between BI, social networks, collaboration and portals with custom embedded BI apps, web services and more, all geared with one goal in mind: to optimize in a cost-effective manner and drive better, more data-driven decision making. Or is this another blue-skies-and-apple-pie dream manifested by one girl’s love for business intelligence?

    Enterprise Architecture Got You Down?

    Try this simplified toolkit approach based on standards defined by the Federal Enterprise Architecture (FEA) board, along with NASCIO:

     

    Performance Reference Model (PRM)
  • Inputs, outputs, and outcomes
  • Uniquely tailored performance indicators

    In this category, you should immediately think Scorecard (Balanced Scorecard and otherwise):

    –Each scorecard has 4-6 perspectives, which are logical/categorical groupings of key indicators, or what I like to call ‘affinitized’ KPIs.

    –Each perspective has fewer than 6-7 KPIs (Key Performance Indicators). (If you receive pushback, and you will, as people would define a KPI for the percent of time the Express Grocery queue contains purchasers with more than the specified limit of 12-15 items, doll it up for the BBB as a complaint, and deliver it with such fervor one almost winces when they realize said complaint is recycled faster than their next trip to the grocery store.)

        –Remember the 3 keys to success with defining KPIs: are they actionable, are they measurable (not in a future state, but can you measure them today), and will they drive a performance-based behavior change (think incentives, as they represent a perfect example of a performance-based behavior changer).

    – So, ask the question now… "That’s a long list, John or Jane Exec. First, what are you going to do with that information [the so-what question distinguishing actionable from interesting]? Can you really drive performance improvements in your business with more than 5 indicators when all 5 go red at the same time?" Or, if you don’t want to be that direct, you can ask, "How do those KPIs link to your performance objectives personally?" Any leader worth their title will take the time to align the activities and work tasks that they personally, or through delegation, list in their performance reviews.

    –This process can be started at any layer in an organizational hierarchy and is called Goals / Objectives Alignment or Cascaded KPIs

    –Most leaders you ask have no more than a few KPIs that they ACTUALLY care about – so get over the hurt feelings now, or the feeling that you have wasted x number of hours measuring and reporting on metrics that top leadership doesn’t care about; if you have been in BI long enough, you will have experienced this at least once in your career.

     

    Business Reference Model (BRM)
  • Lines of Business (functions and sub-functions)
  • Agencies, customers, partners

     

    How will the performance KPIs cascade down to the individual contributor from the CEO’s goals? Easy – take this example:

    The CEO sets a goal of wanting to increase revenue by growing the Sales line of business, specifically new customer sales. He sets a goal of 35% growth in new sales revenue, which the VP of Sales is tasked to drive. They, in turn, assign the goal to the account managers in charge of new customer accounts, who then add the same goal for their salesforce in the field. The KPI becomes New Sales Growth >= 35%; frequency is set to weekly, with hierarchical rollups to monthly and quarterly aggregations.

     

    –Now, you may ask yourself, what about the Operations department where Customer Sales and Service (aka Telesales) lives? Bingo! You’re getting it now…Even though the goal was tasked to the VP of Sales, they, or that same CEO, should have realized that the Telesales department can also generate revenue from new sales, just sales that come through a different channel. Instead of the typical route of counting only in-field sales and measuring the Sales department alone for a goal of this nature, the Operations VP should have the same goal on their performance review as the VP of Sales, which they, in turn, delegate to their Telesales Center Managers, who delegate it to the Supervisors, who delegate it to the agents on the phone. While it is an implicit delegation, since anyone hired to man a Telesales line understands their job is to answer the phone and make sales (thus encouraging sales growth), it is still an action mandated by a supervisor, who received the mandate from their manager, who likely received the goal from the corporate office VP of Operations or whoever is responsible for the Telesales center.

    –It flows from top or bottom (vertically) as well as horizontally since in this example, it covers two horizontal business units (Sales and Operations).

    –This is why starting with objectives or goals makes this process, that is, cascading KPIs, that much easier because you have a definitive starting point and end point which is that same objective/goal.

     

    New Sales Growth >= 35% is the same KPI whether you start with the call center agent, whose awesome sales performance on the phones helps her supervisor meet their goal of growing sales by 35%, which enables their manager, which enables the VP of Operations and the VP of Sales to meet their objectives, and which, in turn, meets or exceeds the CEO’s original mandate.
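
    To make the cascade concrete, here is a minimal sketch in Python of that same New Sales Growth >= 35% target flowing down an org tree. The role names and reporting structure are illustrative only, not a prescription for how your hierarchy should look.

    # Minimal sketch: cascade one KPI target down an illustrative org hierarchy.
    KPI = {"name": "New Sales Growth", "target": 0.35, "frequency": "weekly"}

    org = {  # hypothetical reporting structure
        "CEO": ["VP of Sales", "VP of Operations"],
        "VP of Sales": ["Account Manager"],
        "Account Manager": ["Field Salesperson"],
        "VP of Operations": ["Telesales Center Manager"],
        "Telesales Center Manager": ["Telesales Supervisor"],
        "Telesales Supervisor": ["Telesales Agent"],
    }

    def cascade(role, kpi, assignments=None):
        """Assign the same KPI to a role and to everyone below it."""
        if assignments is None:
            assignments = {}
        assignments[role] = kpi
        for report in org.get(role, []):
            cascade(report, kpi, assignments)
        return assignments

    for role, kpi in cascade("CEO", KPI).items():
        print(f"{role}: {kpi['name']} >= {kpi['target']:.0%} ({kpi['frequency']})")

    The point of the sketch is simply that every level carries the identical, conformed KPI definition; only the ownership changes as it cascades.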

     

    Service Component Reference Model (SRM)
  • Service domains, service types
  • Business and service components

    –A service component is defined as "a self-contained business process or service with predetermined functionality that may be exposed through a business or technology interface."

     

    Data Reference Model (DRM)
  • Business-focused data standardization
  • Cross-agency information exchanges

     

    Technical Reference Model (TRM)
  • Service component interfaces, interoperability
  • Technologies, recommendations

    Today I dub ‘Data Services Oriented Architecture’ for a Web 2.0 and beyond World

    As David Besemer wrote in his May 2007 article for DMReview, ‘SOA for BI’, "It took Michelangelo nearly five years to complete his famous works at the Sistine Chapel. Your transition to SOA for BI can go much faster if you start with data services."

    What are data services? According to Wikipedia…wait, there isn’t an existing definition on Wikipedia. So first, a definition which I share with the Internet users of the world vis-à-vis Wikipedia:

    "A Data Services Oriented Architecture or ‘DSOA’ framework consists of a combination of schemas, classes and libraries that facilitate and provide the ability to create and consume data services for the web. DSOA reveals the consumer data underlying architecture, exposed using Data Services’ Entity Data Model, and provides reusability of your service when developed correctly," Laura Edell-Gibbons, Mantis Technology Group Inc.


    And by correctly, I mean not getting bogged down by the concept of plug and chug (dubbed by my colleague Tip), nor by making every last bit of your code reusable. It is all a balance – remember, young BI padawan.

    Select best-in-class data services middleware to help you model, develop and implement your back-end BI services. PowerDesigner is a pretty rocking modeling tool, covering everything from data element impact analysis to facilitating requirements gathering. Simplistically speaking, I am a big fan of the simplicity of SQL Server Integration Services and the new Data Services, both Microsoft products, though this opinion certainly doesn’t represent the popular vote. I am also a big fan of Informatica and Data Integrator (now called Data Services, funnily enough, under the SAP/BusinessObjects brand).
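
    For flavor, here is a toy read-only data service sketched in Python with Flask. It is not ADO.NET Data Services, nor the SAP/BusinessObjects Data Services product – just an illustration of the underlying idea of exposing an entity over the web so other projects can reuse it; the entity data and URL shape are made up.

    # Toy data service: expose a Customer entity over HTTP for reuse by other projects.
    # Entity data and URL shape are illustrative only.
    from flask import Flask, jsonify, abort

    app = Flask(__name__)

    CUSTOMERS = {  # stand-in for the back-end store the service would wrap
        1: {"id": 1, "name": "Acme Corp", "industry": "Manufacturing"},
        2: {"id": 2, "name": "Globex", "industry": "Retail"},
    }

    @app.route("/customers/<int:customer_id>")
    def get_customer(customer_id):
        customer = CUSTOMERS.get(customer_id)
        if customer is None:
            abort(404)
        return jsonify(customer)

    if __name__ == "__main__":
        app.run(port=8080)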

    During the first and second projects, be sure to track all productive working hours to deliver each phase of your solution, as well as the cost savings from the efficiencies I expect you designed your system around, given the inescapable expectation of being the ROI-generator – a widely accepted expectation that all BI systems have high ROI, and many do. Start small, then grow to enterprise scale once the concept has been leaned out and the expected efficiencies, and beyond, are gained. Then, as you expand your deployment from project to enterprise, you can easily self-fund additional licenses and other required resources with the savings or other benefits gained on those first two, somewhat painful, ‘initiation’ projects. We all have to go through the process, and while painful at times, the learning experiences gained outweigh any of the difficulties along the way.

    It is better to build the new services project by project, always making the predecessor available to other projects in a unified data services tier as you go. You and your team can then choose whether to reuse a data service, extend an existing service, buy, or build something from scratch – my least favorite, BTW.

    Over time, these proportions will change, and I suspect ‘reuse’ will become the greatest portion of the proverbial pie, whereas today, I believe the paradigm leans more in the direction of ‘build from scratch.’

    Starting to plan front-end BI services up front, even while deploying your back-end BI services, will enable you to make small but meaningful steps without much noticeable downtime to the organization – something especially important for those of us working with a ‘4 x 9s’ uptime SLA for our data centers. Plus, remember, if you build these on a powerful data services foundation, you will reduce your time to market and your TCO over time. By providing the business with their much anticipated and needed operational reports, tactical and strategic dashboards and performance management analytics while infusing the lot into your SOA, one will reap rewards greater than my words could ever portray, dear reader…’Til then, remember: ‘what will come sooner than you think is no more when than how’.

    References:

  • David Besemer. "SOA for BI." DMReview, May 2007.

    SWF Search-ability Announcement from Adobe and How It Relates to Xcelsius 2008

    Imagine Xcelsius dashboards, especially those built in Xcelsius 2008 with its flexible add-on component manager, which makes it that much easier to customize components (think objects/widgets like scatterplots, which are offered out of the box as a chart type).

     

    Now, we have a best practice for monetizing the SWF content that is part of your Xcelsius 2008 dashboard…here is what Adobe had to say:

     

    Adobe is teaming up with search industry leaders to dramatically improve search results of dynamic web content and rich Internet applications (RIAs). Adobe is providing optimized Adobe Flash Player technology to Google and Yahoo! to enhance search engine indexing of the Flash file format (SWF) and uncover information that is currently undiscoverable by search engines. This will provide more relevant automatic search rankings of the millions of RIAs and other dynamic content that run in Adobe Flash Player. Moving forward, RIA developers and rich web content producers won’t need to amend existing and future content to make it searchable—they can now be confident that it can be found by users around the globe.

    Why is this news important?

    Adobe is working with Google and Yahoo! to enable one of the largest fundamental improvements in web search results by making the Flash file format (SWF) a first-class citizen in searchable web content. This will increase the accuracy of web search results by enabling top search engines to understand what’s inside of RIAs and other rich web content created with Adobe Flash technology and add that relevance back to the HTML page.

    Improved search of SWF content will provide immediate benefits to companies leveraging Adobe Flash software. Without additional changes to content, developers can continue to provide experiences that are possible only with Adobe Flash technology without the trade-off of a loss in search indexing. It will also positively affect the Search Engine Optimization community, which will develop best practices for building content and RIAs utilizing Adobe Flash technologies, and enhance the ability to find and monetize SWF content.

    Why is Adobe doing this?

    The openly published SWF specification describes the file format used to deliver rich applications and interactive content via Adobe Flash Player, which is installed on more than 98 percent of Internet-connected computers. Although search engines already index static text and links within SWF files, RIAs and dynamic web content have been generally difficult to fully expose to search engines because of their changing states—a problem also inherent in other RIA technologies.

    Until now it has been extremely challenging to search the millions of RIAs and dynamic content on the web, so we are leading the charge in improving search of content that runs in Adobe Flash Player. We are initially working with Google and Yahoo! to significantly improve search of this rich content on the web, and we intend to broaden the availability of this capability to benefit all content publishers, developers, and end users.

    Which versions of the SWF file format will benefit from this improved indexing and searching?

    This solution works with all existing SWF content, across all versions of the SWF file format.

    What do content owners and developers need to do to their SWF content to benefit from improved search results?

    Content owners and developers do not have to do anything to the millions of deployed SWF files to make them more searchable. Existing SWF content is now searchable using Google search, and in the future Yahoo! Search, dramatically improving the relevance of RIAs and rich media experiences that run in Adobe Flash Player. As with HTML content, best practices will emerge over time for creating SWF content that is more optimized for search engine rankings.

    What technology has Adobe contributed to this effort?

    Adobe has provided Flash Player technology to Google and Yahoo! that allows their search spiders to navigate through a live SWF application as if they were virtual users. The Flash Player technology, optimized for search spiders, runs a SWF file similarly to how the file would run in Adobe Flash Player in the browser, yet it returns all of the text and links that occur at any state of the application back to the search spider, which then appears in search results to the end user.

    How are Google and Yahoo! using the Adobe Flash technology?

    Google is using the Adobe Flash Player technology now and Yahoo! also expects to deliver improved web search capabilities for SWF applications in a future update to Yahoo! Search. Google uses the Adobe Flash Player technology to run SWF content for their search engines to crawl and provide the logic that chooses how to walk through a SWF. All of the extracted information is indexed for relevance according to Google and Yahoo!’s algorithms. The end result is SWF content adding to the searchable information of the web page that hosts the SWF content, thus giving users more information from the web to search through.

    When will the improved SWF searching solutions go live?

    Google has already begun to roll out Adobe Flash Player technology incorporated into its search engine. With Adobe’s help, Google can now better read the SWF content on sites, which will help users find more relevant information when conducting searches. As a result, millions of pre-existing RIAs and dynamic web experiences that utilize Adobe Flash technology, including content that loads at runtime, are immediately searchable without the need for companies and developers to alter it. Yahoo! is committed to supporting webmaster needs with plans to support searchable SWF and is working with Adobe to determine the best possible implementation.

    How will this announcement benefit the average user/consumers?

    Consumers will use industry leading search engines, Google now and Yahoo! Search in the future, exactly as they do today. Indexed SWF files will add more data to what the search engine knows about the page in which it’s embedded, which will open up more relevant content to users, and could cause pages to appear at a higher ranking level in applicable search results. As a result, millions of pre-existing rich media experiences created with Adobe Flash technology will be immediately searchable without the need for companies and developers to alter content.

    When will the new results register on Google?

    Google is using the optimized Adobe Flash Player technology now, so users will immediately see improved search results. As Google spiders index more SWF content, search results will continue to get better.

    How will this announcement benefit SWF content producers?

    Organizations can now dramatically improve the rich web experiences they deliver to customers and partners by increasing the use of Adobe Flash technology, which is no longer impeding the ability for users to find those experiences in highly relevant search results. RIA creators and other web content producers can now be confident that their rich media and RIA experiences leveraging Adobe Flash technology are fully searchable by users around the globe who use the dominant search engines. Furthermore, the ability to index information extracted throughout the various states of dynamic SWF applications reduces the need to produce an HTML or XHTML backup for the RIA site as a workaround for prior search limitations.

    Does this affect the searchability of video that runs in Adobe Flash Player?

    This initial rollout is to improve the search of dynamic text and links in rich content created with Adobe Flash technology. A SWF that has both video and text may be more easily found by improved SWF search.

    Will Adobe Flex applications now be more easily found by Google search, including those that access remote data?

    Yes, any type of SWF content including Adobe Flex applications and SWF created by Adobe Flash authoring will benefit from improved indexing and search results. The improved SWF search also includes the capability to load and access remote data like XML calls and loaded SWFs.

    Does Adobe recommend a specific process for deep-linking into a SWF RIA?

    Deep-linking, in the case of SWF content and RIAs, is when there is a direct link to a specific state of the application or rich content. A variety of solutions exist today that can be used for deep-linking SWF content and RIAs. It’s important that sites make use of deep links so that links coming into a site will drive relevance to the specific parts of an application.

    To generate URLs at runtime that reflect the specific state of SWF content or RIA, developers can use Adobe Flex components that will update the location bar of a browser window with the information that is needed to reconstruct the state of the application.

    For complex sites that have a finite number of entry points, you can highlight the specific URLs to a search spider using techniques such as site map XML files. Even for sites that use a single SWF, you can create multiple HTML files that provide different variables to the SWF and start your application at the correct subsection. By creating multiple entry points, you can get the benefits of a site that is indexed as a suite of pages but still only need to manage one copy of your application. For more information on deep-linking best practices, visit www.sitemaps.org/faq.php.

    Is Adobe planning on providing this capability to other search vendors too?

    Adobe wants to help make all SWF content more easily searchable. As we roll out the solution with Google and Yahoo!, we are also exploring ways to make the technology more broadly available.

    Where to go from here

    For more information from Google on SWF search, read Improved Flash indexing on the Official Google Webmaster Central Blog.

    .NET vs. Java Consumer SDK – BusinessObjects Enterprise

    The Java and .NET versions of the consumer SDK are identical in functionality. The two versions of the SDK are generated from a common set of Web Service Definition Language (WSDL) files. As a result, they possess identical class names and inheritance patterns. There are differences between the two, however, that are addressed in this section.

    Note:    For more information on the Platform Web Services WSDL, see Using the WSDL instead of the consumer API.

    Organization of plugin classes

    It is the goal of this SDK to provide the same organizational structure of plugin classes as provided in the traditional, non-web services Enterprise SDK.

    In Java, classes are organized in packages where the name of the plugin is part of the package. For example, the CrystalReport class is located in the com.businessobjects.enterprise.crystalreport package, while the Folder class is located in the com.businessobjects.enterprise.folder package.

    In .NET, classes are organized in namespaces based on their plugin type. There are separate namespaces for destination, authentication, desktop, and encyclopedia plugin classes. For example, both the CrystalReport and Folder classes are desktop plugins, so they are located in the BusinessObjects.DSWS.BIPlatform.Desktop namespace.

    There is also a separate namespace for system rights in .NET.

    Representation of class properties

    WSDL class properties are generated differently in Java and .NET. In Java, properties are generated as getX and setX methods, where X is the name of the property. In .NET, properties are generated as fields.

    In this guide, the term "property" refers to both the class method in Java and its field equivalent in .NET.

    Capitalization of method names

    In Java, method names begin with a lowercase character. In .NET, method names begin with an uppercase character.

    In this guide, the convention is to refer to a method by its Java case.


    Marketing ClickStream Dashboard Example

    Many times, I am brought in to help develop KPIs for an organization. When it comes to the Marketing subject area, I find myself getting most excited by the clickstream data, an often overlooked yet rich source of correlative opportunity for the bold and analytical minds that, too often, seem not to exist within the core team itself. While marketing practitioners are some of the most adept at this kind of analysis, I believe they either do not realize they are doing it or, more often, the company stigmatizes them as non-analytical and sales-oriented, as opposed to the true data analysts that they are. After all, didn’t Market Research originate out of Marketing, which tends to house the company’s statisticians? Here is a sample dashboard for measuring Marketing Efficacy.

    http://resources.businessobjects.com/flash/cx/markerting.swf

    Graphical Representation of the Process Towards Business Intelligence Enlightenment

    It all begins with an idea…whether an idea that came to you or one derived on a senior executive’s whim, all business intelligence initiatives start with a single thought: how can I drive more data into our decision making and business processes in order to drive better, more accurate decisions for the business, thus enabling world-class operations and growth potential? Whew – that was a mouthful.
    In all reality, let this graphical representation flow as organically as the thought I am trying to emphasize here – BI is a thought process and, relatively speaking, a human need. So we, as technologists, need to start building software applications that meet that germanely simple conceptual need: to create software that not only improves my efficiencies at home or at work but marries those efficiencies to human adaptive and behavioral neurological processes. When synapses fire in one’s brain, and neurological circuitry passes a signal from one synapse into another cortex from an origination point, one can visualize how to tie this metaphor into the systems we use every day. Take the process of searching the Internet using your favorite search engine: we enter keywords or metadata tags that represent nouns, verbs or contextual fragments, mirroring the natural neural processing of the human brain. If search engines, and likewise BI systems, were constructed to more closely mirror this reflexive neural network within their enterprise application architecture, one might find more usefulness in the long run in terms of end user adoption, and sustainment of said adoption after the first year post-implementation. Let this pictorial represent that behavioral marriage between BI technologies and human neural networks.

    Increasing Business Value vs. Insights Provided – a Business Intelligence Roadmap

     

    While many companies feel they have strong BI programs, most, in my experience, have operational reporting systems and sometimes, if you are lucky, they also have strategies that go with those systems or even better, are driving those systems (fueled by requirements gleaned from business needs and actual usage scenarios vs. the "way it has always been done/reported on").

    As you can see in figure 1, that level merely tells you the ‘WHAT’ – it doesn’t answer the ‘WHY’ it happened (root cause), predict the ‘HOW’ it might affect you in the future, nor tell you the ‘WHEN’ in terms of monitoring whether it is happening currently or is just a past event.

     (image source: TDWI Research, http://tdwi.org, 2007)

    Your BI roadmap should have a similar long-term plan. If you want to provide increasing value to your organization, you must get out of the business of operational reporting and move towards the real meat and value of BI, which lies in analysis, monitoring and prediction framed by how the business views its needs from BI, not by how BI believes it can deliver information.

    Capturing Metrics and Their Measurements – A Data Quality Perspective

     

    The techniques that exist within the organization for collecting, presenting, and validating metrics must be evaluated in preparation for automating selected repeatable processes.

    Cataloging existing measurements and qualifying their relevance helps to filter out processes that do not provide business value, as well as reducing potential duplication of effort in measuring and monitoring critical data quality metrics. Surviving measurements of relevant metrics are to be collected and presented in a hierarchical manner within a scorecard, reflecting the ways that individual metrics roll up into higher-level characterizations of compliance with expectations while allowing for drill-down to isolate the source of specific issues.

    As is shown in Figure 1, collecting the measurements for a data quality scorecard would incorporate:

    1. Standardizing business processes for automatically populating selected metrics into a common repository

    2. Collecting requirements for an appropriate level of design for a data model for capturing data quality metrics

    3. Standardizing a reporting template for reporting and presenting data quality metrics

    4. Automating the extraction of metric data from the repository

    5. Automating the population of the reporting and presentation template, or a data quality scorecard
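
    As a rough illustration of steps 4 and 5, assume individual measurements roll up into higher-level data quality dimensions by simple averaging; the metric names, scores and groupings below are hypothetical.

    # Sketch: roll individual data quality measurements up into a scorecard.
    # Metric names, scores (0-1) and groupings are hypothetical examples.
    measurements = [
        {"dimension": "Completeness", "metric": "customer_email_populated", "score": 0.92},
        {"dimension": "Completeness", "metric": "order_ship_date_populated", "score": 0.88},
        {"dimension": "Validity", "metric": "state_code_in_reference_list", "score": 0.97},
    ]

    def rollup(measurements):
        """Average metric scores within each higher-level quality dimension."""
        totals = {}
        for m in measurements:
            totals.setdefault(m["dimension"], []).append(m["score"])
        return {dim: sum(scores) / len(scores) for dim, scores in totals.items()}

    for dimension, score in rollup(measurements).items():
        print(f"{dimension}: {score:.0%}")  # drill-down stays available in `measurements`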

     

    Figure 1:

    Dimensional World – Understanding Modeling Techniques and Approaches

     

    As a data architect, I am often amazed at how many with the same title really do not understand the core differences between dimensional model structures in my industry.

    In fact, it is a rarity to find a data architect willing to be challenged without personalities coming into the mix. As we all know, when you fight the person and not the problem, you end up with hurt feelings and resentment in the workplace. For an EDW, this can lead to people becoming resigned and unwilling to stand up for what they think, which leads to the ‘sheep’-like syndrome called ‘Group Think’, resulting in an EDW that is built on emotion rather than educated beliefs or best practices.

    I thought I would take the time to explain some of the differences between dimension types to enable you, reader, to stand up for the correct approach when a data model you designed is contested.

    Remember, they are not attacking you, just the model. Look at it as a chance to grow and learn from others. Maybe, just maybe, having 4 or 6 eyes is better than just your 2 and your model, already attributed to you from a recognition perspective, will be that much better.

    Here we go:

    Slowly Changing Dimensions (SCD – of the Type II variety is where I become a Type II gal)

    • What in the world is a SCD anyway?
    • Ralph Kimball defined this in his 1996 book as follows:

    A slowly changing dimension (Kimball, 1996) is a dimension table in which a new row is created each time the underlying component entity changes some important characteristic. Its purpose is to record the state of the dimension entity at the time each transaction took place.

    This concept is hard for those who have primarily dealt with changes as handled in operational systems: no matter how a customer changes, we want to ensure that we have only one customer row in the customer table.

    Thus each row in a slowly changing dimension does not correspond to a different entity but a different “state” of that entity—a “snapshot” of the entity at a point in time.

    To create a slowly changing dimension table, the following design steps are required:

     

    • Define which attributes of the dimension entity need to be tracked over time. This defines the conditions for creating each new dimensional instance.
    • Generalize the key of the dimensional table to enable tracking of state changes. Usually this involves adding a version number to the original key of the dimension table.

    Apart from the generalized key, the structure of the slowly changing dimension is the same as the original dimension. However, insertion and update processes for the table will need to be modified significantly.
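
    To make that modified insert/update logic tangible, here is a minimal Type II sketch in Python, assuming the generalized key is (customer_id, version) and only a couple of attributes are tracked; the table shape is purely illustrative.

    # Sketch: Type II slowly changing dimension - add a new versioned row when a
    # tracked attribute changes, instead of updating the existing row in place.
    from datetime import date

    dim_customer = [
        {"customer_id": 42, "version": 1, "city": "Seattle", "segment": "SMB",
         "effective_from": date(2007, 1, 1), "current": True},
    ]

    TRACKED = ("city", "segment")  # attributes whose changes create a new row

    def apply_change(dim, customer_id, new_values, as_of):
        current = next(r for r in dim if r["customer_id"] == customer_id and r["current"])
        if all(current[a] == new_values.get(a, current[a]) for a in TRACKED):
            return  # nothing tracked changed; no new version needed
        current["current"] = False
        dim.append({**current, **new_values,
                    "version": current["version"] + 1,
                    "effective_from": as_of, "current": True})

    apply_change(dim_customer, 42, {"city": "Bellevue"}, date(2008, 11, 1))
    print(dim_customer)  # two snapshots ("states") of the same customer entity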

    Splitting Dimensions: “Tiny-Dimensions”
    In practice, dimension tables often consist of millions of rows, making them unmanageable for browsing purposes. To address this issue, the most heavily used attributes (e.g., demographic fields for customer dimensions) may be separated into a mini-dimension table. This can improve performance significantly for the most common queries. The mini-dimension should contain a subset of attributes that can be efficiently browsed. As a rule of thumb, there should be fewer than 100,000 combinations of attribute values in a mini-dimension (i.e., fewer than 100,000 rows) to facilitate efficient browsing (Kimball, 1996). The number of attribute value combinations in a tiny dimension can be limited by:

    • Including attributes in the mini-dimension that have discrete values (i.e., whose underlying domains consist of a fixed set of values).
    • Grouping continuously valued attributes into “bands.” For example, age could be converted to a set of discrete ranges such as child (0-17), young (18-29), adult (30-45), mature (45-64), and senior (65+).

     Figure 4. Mini-Dimensional Table (Detailed Level Design)

    Figure 4 shows how customer demographics in the Order Item star could be stratified out into a mini-dimension table. Some of the attributes in the original table have been transformed to reduce the number of rows in the mini-dimension table: 

  • # Employees and Revenue have been converted to ranges.

  • Date of First Order has been converted to Years of Service, reducing the number of combinations to a handful of year values rather than every possible date (see the banding sketch after this list).
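
    A tiny sketch of the banding step in Python, using hypothetical cut-offs for the revenue and years-of-service attributes (the boundaries are examples, not Kimball-prescribed values):

    # Sketch: band continuous attributes so the mini-dimension stays small.
    from datetime import date

    def revenue_band(revenue):
        if revenue < 1_000_000:
            return "< $1M"
        if revenue < 10_000_000:
            return "$1M - $10M"
        return "$10M+"

    def years_of_service(first_order_date, as_of=date(2008, 11, 7)):
        return (as_of - first_order_date).days // 365  # whole years, not raw dates

    print(revenue_band(4_500_000))              # "$1M - $10M"
    print(years_of_service(date(2003, 6, 15)))  # 5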

    Rule of Thumb:

    Rather than create a lot of very small dimension tables, these may be combined into a single dimension, with each row representing a valid combination of values. As a rule of thumb, there should be no more than seven dimensions in each star schema to ensure that it is cognitively manageable in size (following the “seven, plus or minus two” principle).

    Dealing with Non-Hierarchical Data

    A major source of complexity in dimensional modeling is dealing with non-hierarchically structured data.

    Dimensional models assume an underlying hierarchical structure and therefore exclude data that is naturally non-hierarchical.

    So what do we do if important decision-making data is stored in the form of many-to-many relationships? In this section, we describe how to handle some particular types of non-hierarchical structures that commonly occur in practice:

    1. Many-to-many relationships: these define network structures among entities, and cause major headaches in dimensional modeling because they occur so frequently in ER models.
    2. Recursive relationships: these represent “hidden” hierarchies, in which the levels of the hierarchy are represented in data instances rather than the data structure.
    3. Generalization hierarchies: subtypes and supertypes require special handling in dimensional modeling, because of the issue of optional attributes. These represent hierarchies at the meta-data level only—data instances are not hierarchically related to each other so they cannot be treated as hierarchies for dimensional modeling purposes.

    First, #1:

    1. Many-To-Many Relationships
      Many-to-many relationships cause major headaches in dimensional modeling for two reasons. Firstly, they define network structures and therefore do not fit the hierarchical structure of a dimensional model. Secondly, they occur very commonly in practice. Here we consider three types of many-to-many relationships which commonly occur in practice:
      • Time-dependent (history) relationships
      • Generic (multiple role) relationships
      • Multi-valued dependencies (“true” many-to-many relationships)
  • To include such relationships in a dimensional model generally requires converting them to one-to-many (hierarchical) relationships.

    Type 1. Time-Dependent (Historical) Relationships
    A special type of many-to-many relationship that occurs commonly in data warehousing applications is one which records the history of a single-valued relationship or attribute over time. That is, the attribute or relationship has only one value at a specific point in time, but has multiple values over time. For example, suppose that the history of employee positions is maintained in the example data model. As shown in Figure 5, the many-to-many relationship which results (Employee Position History) breaks the hierarchical chain and Position Type can no longer be collapsed into its associated component entity (Employee).

    Figure 5. Time-Dependent (Historical) Relationship

    There are two ways to handle this situation:

    • Ignore history: Convert the historical relationship to a “point in time” relationship, which records the current value of the relationship or attribute. In the example, this would mean converting Employee Position History to a one-to-many relationship between Employee and Position Type, which records the employee’s current position. Position Type can then be collapsed into the Employee entity, and the Employee dimension will record the employee’s current position (i.e., at the time of the query). The history of previous positions (including their position at the time of the order) will be lost. A disadvantage of this solution is that it may result in (apparently) inconsistent results to queries about past events.
    • Slowly changing dimension: Define Employee as a slowly changing dimension and create a new instance in the Employee dimension table when an employee changes position. This means that Position Type becomes single valued with respect to Employee, since each instance in the Employee table now represents a snapshot at a point in time, and an employee can only have one position at a point in time. Position Type can again be collapsed into the Employee dimension. The difference between this and the previous solution is that the position recorded in the dimension table is the employee’s position at the time of the order, not at the time of the query.

    Type 2. Generic (Multiple Role) Relationships
    Another situation that commonly occurs in practice is when a many-to-many relationship is used to represent a fixed number of different types of relationships between the same two entities. These correspond to different roles that an entity may play in relation to another entity. For example, in the example data model, an employee may have a number of possible roles in an order:

    • They may receive the order
    • They may approve the order
    • They may dispatch the order

    This may be represented in an ER model as a many-to-many relationship with a “role type” entity used to distinguish between the different types of relationships (Figure 6).

    Figure 6. Generic (Multiple Role) Relationship converted to Specific Relationships

    The intersection entity between Employee and Order means that Employee cannot be considered as a component of the Order transaction, and therefore orders cannot be analyzed by employee characteristics. To convert such a structure to dimensional form, the different types of relationships or roles represented by the generic relationship need to be factored out into separate one-to-many relationships (Figure 6). Once this is done, Employee becomes a component of the Order transaction, and can form a dimension in the resulting star schema.

    Type 3. Multi-Valued Dependency (True Many-To-Many Relationship)
    The final type of many-to-many relationship is when a true multi-valued dependency (MVD) exists between two entities: that is, when many entities of one type can be associated in the same type of relationship with many entities of another type at a single point in time. For example, in Figure 7, each customer may be involved in multiple industries. The intersection entity Customer Industry “breaks” the hierarchical chain and the industry hierarchy cannot be collapsed into the Customer component entity.

    Figure 7. Multi-Valued Dependency (MVD)

    One way of handling this is to convert the Customer Industry relationship to a 1:M relationship, by identifying the main or “principal” industry for each customer.

    While each customer may be involved in many industries, there will generally be one industry in which they are primarily involved (e.g., earn most of their revenue). This converts the relationship into a one-to-many relationship, which means it can then be collapsed into the Customer table (Figure 8). Manual conversion effort may be required to identify which is the main industry if this is not recorded in the underlying production system.

    Figure 8. Conversion of Many-To-Many Relationships to One-To-Many Recursive Relationships
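
    A quick sketch of that 'principal industry' conversion in Python, assuming the production system (or a manual mapping) records a revenue share per customer-industry pair; the field names and values are made up.

    # Sketch: collapse a true many-to-many (Customer <-> Industry) to one-to-many
    # by keeping only the principal industry per customer (highest revenue share).
    customer_industry = [  # hypothetical intersection rows
        {"customer_id": 1, "industry": "Retail", "revenue_share": 0.70},
        {"customer_id": 1, "industry": "Logistics", "revenue_share": 0.30},
        {"customer_id": 2, "industry": "Banking", "revenue_share": 1.00},
    ]

    principal = {}
    for row in customer_industry:
        best = principal.get(row["customer_id"])
        if best is None or row["revenue_share"] > best["revenue_share"]:
            principal[row["customer_id"]] = row

    for cid, row in principal.items():
        print(f"Customer {cid} -> principal industry: {row['industry']}")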

     

     

    Second, #2 from above:


  • Recursive Relationships

    Hierarchies are commonly represented in ER models using recursive relationships. Using a recursive relationship, the levels of the hierarchy are represented as data instances rather than as entities. More flexible for sure…However, such structures are less useful in a data warehousing environment, as they reduce understandability to end users and increase complexity of queries.

    In converting an ER model to dimensional form, recursive relationships must be converted to explicit hierarchies, with each level shown as a separate entity. To convert this to dimensional form, each row in the Industry Classification Type entity becomes a separate entity. Once this is done, the levels of the hierarchy (which become classification entities) can be easily collapsed to form dimensions.

    For example, the industry classification hierarchy in the example data model may be shown in ER form as a recursive relationship (Figure 9).

    Figure 9. Conversion of Recursive Relationship to Explicit Hierarchy
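
    A small sketch of that conversion in Python, assuming the recursive relationship is stored as a parent-child (parent_id) table; the classification rows are illustrative.

    # Sketch: flatten a recursive (parent_id) industry hierarchy into explicit
    # level columns so each level can be collapsed into the dimension.
    rows = [  # hypothetical recursive classification rows
        {"id": 1, "name": "Manufacturing", "parent_id": None},
        {"id": 2, "name": "Automotive", "parent_id": 1},
        {"id": 3, "name": "Engine Components", "parent_id": 2},
    ]
    by_id = {r["id"]: r for r in rows}

    def path_to_root(row):
        """Walk parent links upward and return [top level, ..., this row]."""
        path = []
        while row is not None:
            path.append(row["name"])
            row = by_id.get(row["parent_id"])
        return list(reversed(path))

    for r in rows:
        print({f"level_{i + 1}": name for i, name in enumerate(path_to_root(r))})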

     

     

     

    Lastly, #3 from above:


  • Generalization Hierarchies: Subtypes and Supertypes
    In the simplest case, supertype/subtype relationships can be converted to dimensional form by merging the subtypes into the supertype and creating a “type” entity to distinguish between them. This can then be converted to a dimensional model in a straightforward manner as it forms a simple (two level) hierarchy. This will result in a dimension table with optional attributes for each subtype. This is the recommended approach when there are a relatively small number of subtype specific attributes and/or relationships. In the more complex case—when there are many subtype-specific attributes and when different transaction-entity attributes are applicable for different subtypes—separate dimensional models may need to be created for the supertype and each of the subtypes. These are called heterogeneous star schemas, and are the dimensional equivalent of subtypes and supertypes. In general, this will result in n+1 star schemas, where n is the number of subtypes (see Figure 10):

    Figure 10. Heterogeneous Star Schemas ("Dimensional Subtyping")

     

    • One Core Star Schema (“dimensional supertype”): This consists of a core fact table, a core dimension table plus other (non-subtyped) dimension tables. The core dimension table will contain all attributes of the supertype, while the core fact table will contain transaction attributes (facts) that are common to all subtypes.
    • Multiple Custom Star Schemas (“dimensional subtypes”): A separate custom star schema should optionally be created for each subtype in the underlying ER model. Each custom star schema will consist of a custom fact table, a custom dimension table plus other (non-subtyped) dimension tables.
    • Each custom dimension table will contain all common attributes plus attributes specific to that subtype. The custom fact table will contain all common facts plus facts applicable only to that subtype.
    Dimensional Structures: A Look Into the Outliers that Often Get Overlooked When Designing Data Structures

     

    Think the star schema is the only way to go with your EDW? Think again…the starflake is a best-in-breed approach that might make BI on top of the EDW that much more capable and robust. Read on…I will cover the following concepts:

     

  • Alternative dimensional structures: snowflake schemas and starflake schemas
  • Slowly changing dimensions
  • Mini-dimensions
  • Heterogeneous star schemas (dimensional subtypes)
  • Dealing with non–hierarchically structured data in the underlying ER model:
    – Many-to-many relationships
    – Recursive relationships
    – Subtypes and supertypes

    Figure 1: Snowflake (the anti-lord of all things good in the data world)

    Kimball (1996) argues that “snowflaking” is undesirable because it adds unnecessary complexity, reduces query performance, and doesn’t substantially reduce storage space.

    However, the performance impact of snowflaking depends on the DBMS and OLAP structures in place. The advantage of the snowflake is that it explicitly shows the hierarchical structure of each dimension, which can help in understanding how the data can be analyzed.

    Thus, whether a snowflake or a star schema is used at the physical level, views should be used to enable the user to see the structure of each dimension as required.

    Being a star schema girl myself, I am quite interested in the power of the starflake model and thus being a starflake girl <she is reminded of the Tori Amos song ‘Cornflake Girl’ with this reference>.

     

    Figure 2: Starflake model

    A starflake schema is a star schema in which shared hierarchical segments are separated out into sub-dimension tables. These represent “highest common factors” between dimensions. A starflake schema is formed by collapsing classification entities from the top of each hierarchy until they reach either a branch entity or a component entity. If a branch entity is reached, a subdimension table is formed. Collapsing then begins again after the branch entity. When a component entity is reached, a dimension table is formed.
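
    A loose sketch of that collapsing walk in Python; the entity names and which entities count as 'branch' versus 'component' are assumptions for illustration only.

    # Sketch: form starflake tables by collapsing a hierarchy from the top down,
    # splitting whenever a branch entity (shared segment) is reached.
    hierarchy = [  # top of hierarchy first, component entity last (assumed example)
        {"name": "Region", "type": "classification"},
        {"name": "Geography", "type": "branch"},        # shared with another dimension
        {"name": "Store Type", "type": "classification"},
        {"name": "Store", "type": "component"},
    ]

    tables, current = [], []
    for entity in hierarchy:
        current.append(entity["name"])
        if entity["type"] == "branch":
            tables.append(("sub-dimension", current))
            current = []                    # collapsing restarts after the branch
        elif entity["type"] == "component":
            tables.append(("dimension", current))
            current = []

    print(tables)  # [('sub-dimension', ['Region', 'Geography']),
                   #  ('dimension', ['Store Type', 'Store'])]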

     

     

    Dimensional Design Trade-Offs

    The alternative dimensional structures considered here represent different trade-offs between complexity and redundancy:

    • The star schema is the simplest structure, as it contains the fewest tables—eight tables in the example. However, it also has the highest level of data redundancy, as the dimension tables all violate third normal form (3NF). This maximizes understandability and simplifies queries to pairwise joins between tables.
    • The snowflake schema has the most complex structure and consists of more than five times as many tables as the star schema representation (41 in the example). This will require multi-table joins to satisfy queries.
    • The starflake schema has a slightly more complex structure than the star schema—nine tables in the example. However, while it has redundancy within each table (the dimension tables and sub-dimension tables all violate 3NF), redundancy between dimensions is eliminated, thus reducing the possibility of inconsistency between them. All of these structures are semantically equivalent: they all contain the same data and support the same set of queries. As a result, views may be used to construct any of these structures from any other.

    Bottom’s Up Data Mart and Enterprise Data Warehouse Project Implementation Plan

    SUMMARY: In my consulting practice, I recommend an incremental, ‘bottom-up’ implementation methodology, similar to that advocated by Ralph Kimball. This has proven to be a successful deployment approach to ensure a short-term return on investment with minimal project risk, while still delivering a data warehousing architecture that provides a standardized, enterprise wide view of information. This column describes a typical project plan based on a bottom-up implementation methodology.

    * REQUIREMENT FOR BOTTOM-UP DEVELOPMENT

    I receive numerous e-mails from both technical and business managers who have become extremely frustrated by the inordinate amount of time and effort required to build data warehouses using traditional, top-down development techniques. Due to the large amount of effort required up front to perform user interviews and enterprise data modeling, organizations using top-down techniques often wait 12 to 14 months to get any return on investment. In some cases, managers complain that they have worked with a systems integrator on a top-down development project for over two years without obtaining any solution to their business problem.

    As described in previous articles, a bottom-up development approach directly addresses the need for a rapid solution of the business problem, at low cost and low risk. A typical requirement is to develop an operational data mart for a specific business area in 90 days, and develop subsequent data marts in 60 to 90 days each. The bottom-up approach meets these requirements without compromising the technical integrity of the data warehousing solution. Data marts are constructed within a long-term enterprise data warehousing architecture, and the development effort is strictly controlled through the use of logical data modeling techniques and integration of all components of the architecture with central meta data.

    * PROJECT PLAN

    The representative project plan described below is based on an incremental, ‘bottom-up’ implementation methodology. In my experience, this has been the most successful deployment approach to ensure a short-term return on investment with minimal project risk, while still delivering a data warehousing architecture that provides a standardized, enterprise-wide view of information.

    The project plan breaks down into three major phases, as follows:

    1. Data Warehouse Requirements and Architecture – gather user requirements through a series of requirements interviews, assess the current IT architecture and infrastructure, along with current and long-range reporting requirements, and develop an enterprise data warehousing framework that will provide maximum flexibility, generality and responsiveness to change. Define functional requirements for the initial data mart within the enterprise architecture. Optionally, conduct a Proof-of-Concept demonstration to select enterprise-class ETL and/or BI tools.

    2. Implement Initial Data Mart under the New Architecture – prove the new architecture works by implementing a fully operational data mart for a selected subject area within a 90-day time box.

    3. Develop Additional Data Marts under the New Architecture – on a phased basis, develop additional architected data marts within the new framework.

    Timeboxes are placed around each phase of the project plan to strictly limit the duration of the development effort. Phase 1 (specification of architecture and functional requirements) is limited to four weeks. If proof-of-concept testing of ETL and BI tool is conducted, these tests occur outside the timebox for Phase 1, due to the uncertain timing of scheduling vendors to perform the tests. Phase 2 (development of initial data mart) is limited to 90 days. Phase 3 (development of additional data marts) is limited to 60 to 90 days for each subsequent data mart.

    * TASKS AND DELIVERABLES BY PHASE

    The list below summarizes representative tasks, timeframe, and deliverables for each of these three phases.

    **Phase / Task
    1. Requirements and Architecture
    —Conduct workshop to review/define strategic business drivers, review current application architecture, and define strategic data warehousing architecture.
    —Elapsed Week: 1
    —Deliverables: strategic requirements, document specifying the recommended long-term enterprise data warehousing architecture

    —Conduct requirements interviews with personnel from multiple subject areas and document the results of the interviews.
    —Elapsed Week: 1
    —Deliverables: Interview summaries, requirements inventory

    —Conduct workshop to define functional specifications of initial data mart,
    —Elapsed Weeks: 2
    —Deliverables: scope statement, reporting and analysis specifications

    —Develop high-level dimensional data model for initial data mart
    —Elapsed Weeks: 2
    —Deliverables: logical data model

    —Conduct workshop to prepare for a Proof-of-Concept test of leading ETL tool(s)
    —Elapsed Weeks: 3
    —Deliverables: specifications for the ETL POC

    —Conduct workshop to prepare for a Proof-of-Concept test of leading BI tool(s)
    —Elapsed Weeks: 3
    —Deliverables: Specifications for the BI POC

    —Conduct workshop to develop Phase 2 project plan
    —Elapsed Weeks: 4
    —Deliverables: Phase 2 project plan

    **Phase / Task
    2. Implement Initial Data Mart in 90-Day Time-Box,
    —Elapsed Weeks: 12
    —Deliverables: live architected data mart, live OLAP cubes and reports, central meta data repository, local meta data repository, reusable ETL objects and conformed dimensions

    **Phase / Task
    3. Implement Additional Data Marts
    —Elapsed Weeks: 8-12 each
    —Deliverables: Additional architected data marts

    * PHASE 1 IMPLEMENTATION

    The initial task in Phase 1 is to conduct two on-site workshops, limited to 1 to 2 days each. The function of the first workshop is to bring all members of the development team up-to-speed on alternative enterprise data warehousing architectures and ‘best practices’ in data warehousing. The second workshop is used to achieve consensus on the specification of an enterprise data warehousing architecture capable of meeting the long-term business requirements of the organization.

    Interviews with personnel from multiple subject areas are held to define high-level functional requirements for each subject area. Subject areas often correlate with proposed data marts, in areas such as finance, sales, marketing, HR, supply-chain management, customer touchpoints, etc. As described in a previous column, the interviews are kept deliberately short (one day or less per subject area). The deliverables from each interview include a short, concise requirement specification for the subject area and a top-level dimensional data model representing the data sources, source-to-target mappings, target database, and reports required for a specific subject area. The top-level data models from all subject areas are then synthesized to identify common data sources, conformed dimensions and facts, common transformations and aggregates, etc.

    Following synthesis of functional requirements from all subject areas, a workshop is held to define the functional requirements for the initial data mart, lay out the project plan for the development of the initial data mart, identify required skill sets, and personnel assignments to the project.

    The next task is to define a high-level dimensional data model for the initial data mart, representing an expansion of the model prepared as part of the user interview process.

    If the organization has decided to conduct a proof-of-concept test of competitive ETL tools and BI tools, the next two tasks represent preparation of functional specifications for the tests. Proof-of-concept testing may be required if multiple ETL or BI tools are being evaluated and it is politically expedient to conduct rigorous testing of competitive products prior to making a selection. I have found that an intensive two-day workshop is sufficient to specify the functionality of the tests to be conducted for a proof of concept for either an ETL or BI tool.

    The final task in Phase 1 is to specify a detailed project plan for the implementation of the initial data mart.

    * PHASE 2 IMPLEMENTATION

    In the recommended bottom-up development methodology, the process of implementing the initial data mart is limited to 90 calendar days. Although 90 days is arbitrary, it fits the needs of business managers for a rapid solution of the business problem and meets the needs of CFOs for a 90-day Return-On-Investment. The 90-day timebox starts on the day that the ETL tool, the target DBMS, and the BI tool are successfully installed. To meet the challenge of a 90-day implementation process, utilization of an ETL tool, rather than hand-coding the extractions and transformations, is strongly recommended.

    Implementation of the initial and subsequent data marts is ideally performed by a small team of data warehousing professionals, typically consisting of a data modeling specialist, an ETL specialist, and a BI tool specialist. In my own organization, I emphasize cross-training of personnel, which minimizes the number of personnel that need to be assigned to a project and maximizes their efficiency.

    The first task for the development team is to design the target database for the initial data mart. Modeling of the target database for the initial data mart proceeds through three steps: design of an entity-relationship diagram, then a logical dimensional model, and finally a physical model of the target database schema. Although the E-R diagram is not required for the initial data mart, it is required for subsequent data marts to ensure that all physical data models for multiple data marts are derived from a common logical specification.

    The next major task is to specify and implement data mapping, extraction, transformation, and data cleansing rules. The data mapping and transformation rules are defined first in natural language, and then implemented using only the transformation objects supplied with the ETL tool. The objective is to avoid coding any ETL processes. It is hard to predict how long this task will take. For many applications, specification and implementation of the transformation rules should not take more than 3 to 4 weeks. However, some applications may require many months to specify and implement complex transformations. These applications are not likely to fit within a 90-day implementation timebox.

    In parallel with implementation of the transformation logic, developers build aggregation, summarization, partition, and distribution functions. The ETL tool may be used to compute aggregates in one pass of the source data, using incremental aggregation techniques. Pre-computed aggregates are recommended for most data warehousing applications to provide rapid response to predictable queries, reports, and analysis.
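
    As a sketch of what an incrementally maintained, pre-computed aggregate might look like, the statements below fold only the newly loaded daily rows into a hypothetical week/brand/district summary table. The table and column names are assumptions for illustration; an ETL tool's incremental aggregation objects would normally generate equivalent logic rather than hand-written SQL.

    -- Assumed weekly summary table at week / brand / district grain (illustrative).
    CREATE TABLE fact_weekly_sales_summary (
        week_key      INTEGER       NOT NULL,
        brand_name    VARCHAR(100)  NOT NULL,
        district_name VARCHAR(100)  NOT NULL,
        sales_qty     INTEGER       NOT NULL,
        sales_amount  DECIMAL(12,2) NOT NULL,
        PRIMARY KEY (week_key, brand_name, district_name)
    );

    -- Incremental aggregation: only the newly loaded daily rows are summarized and
    -- merged into the existing aggregate, instead of rebuilding it from scratch.
    MERGE INTO fact_weekly_sales_summary tgt
    USING (
        SELECT d.week_key, p.brand_name, s.district_name,
               SUM(f.sales_qty)    AS sales_qty,
               SUM(f.sales_amount) AS sales_amount
        FROM   fact_daily_sales f
        JOIN   dim_date    d ON d.date_key    = f.date_key
        JOIN   dim_product p ON p.product_key = f.product_key
        JOIN   dim_store   s ON s.store_key   = f.store_key
        WHERE  d.calendar_date >= DATE '2010-01-04'   -- restrict to the latest load (illustrative)
        GROUP BY d.week_key, p.brand_name, s.district_name
    ) src
    ON (tgt.week_key = src.week_key
        AND tgt.brand_name = src.brand_name
        AND tgt.district_name = src.district_name)
    WHEN MATCHED THEN UPDATE SET
        sales_qty    = tgt.sales_qty + src.sales_qty,
        sales_amount = tgt.sales_amount + src.sales_amount
    WHEN NOT MATCHED THEN INSERT (week_key, brand_name, district_name, sales_qty, sales_amount)
    VALUES (src.week_key, src.brand_name, src.district_name, src.sales_qty, src.sales_amount);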

    The final task in Phase 2 is delivery of a fully operational, architected data mart for the initial subject area, using an exact subset of the enterprise data warehousing architecture. Ideally, the development team delivers all of the functionality that was specified at the beginning of the 90-day timebox. However, the team is permitted to defer low-priority functions in order to meet the 90-day timebox. The team is self-managed and organizes its resources to deliver as much functionality as possible within the 90-day development window.

    * PHASE 3 IMPLEMENTATION

    The objective of Phase 3 is to build additional architected data marts. Additional data marts are built by the primary development team using common templates and components, such as conformed dimensions and facts, common transformation objects, data models, central meta data definitions, etc.

    In the bottom-up methodology, a central data warehouse, an operational data store, and a persistent staging file are optional. Data marts are typically developed using the data warehouse bus architecture, as described by Ralph Kimball in his book ‘The Data Warehouse Toolkit, Second Edition.’ There may be good technical reasons to incorporate a central data warehouse and an operational data store in the enterprise data warehousing architecture. However, in the bottom-up methodology, development of the central data warehouse and ODS is deferred until they are clearly required. A central data warehouse is often required when detailed, atomic data from multiple data marts must be accessed to generate cross-business reports, and when there is a low percentage of conformed dimensions across the data marts.
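
    The practical payoff of conformed dimensions is that facts from separately built data marts can be combined without a central warehouse, as long as they share dimension keys. The drill-across query below is a minimal sketch under that assumption; the marts, tables and columns (a sales mart and an inventory mart sharing a conformed week-level date dimension) are hypothetical names, not part of the methodology.

    -- Hypothetical drill-across report spanning two architected data marts that
    -- share a conformed week-level date dimension (all names are illustrative).
    SELECT w.week_key,
           w.week_start_date,
           s.total_sales_amount,
           i.avg_on_hand_qty
    FROM   dim_week w
    JOIN  (SELECT week_key, SUM(sales_amount) AS total_sales_amount
           FROM   fact_sales_week
           GROUP BY week_key) s ON s.week_key = w.week_key
    JOIN  (SELECT week_key, AVG(on_hand_qty) AS avg_on_hand_qty
           FROM   fact_inventory_week
           GROUP BY week_key) i ON i.week_key = w.week_key
    ORDER BY w.week_key;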

    Maintenance and administration of the data warehousing application is an ongoing function. A secondary team may be used to enhance and maintain completed data marts. The primary team transfers transformation templates, data models, conformed dimensions, conformed facts, meta data, etc. to the secondary team to simplify the enhancement and administration of completed data marts.

    My organization has used this methodology successfully for many clients and has a good deal of experience with it. However, successful implementation of the methodology depends on several critical success factors:
    - a dedicated implementation team;
    - consulting help from an experienced organization at the beginning of the project;
    - backing of a business manager who is hungry for a solution to a painful business problem; and
    - integration of all components of the architecture with central meta data.

    To go Logical or Not…That is the Question?!

     

    Figure 1: Basic star schema for typical point of sale data-mart
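
    For readers who prefer DDL to diagrams, a basic point-of-sale star schema along the lines of Figure 1 might look like the sketch below; the table and column names are illustrative assumptions rather than the exact model in the figure.

    -- Illustrative basic POS star schema (names are assumptions, per the note above).
    CREATE TABLE dim_date (
        date_key      INTEGER PRIMARY KEY,
        calendar_date DATE NOT NULL,
        week_key      INTEGER NOT NULL          -- week level of the date hierarchy
    );

    CREATE TABLE dim_product (
        product_key  INTEGER PRIMARY KEY,
        product_code VARCHAR(20)  NOT NULL,
        brand_name   VARCHAR(100) NOT NULL      -- brand level of the product hierarchy
    );

    CREATE TABLE dim_store (
        store_key     INTEGER PRIMARY KEY,
        store_code    VARCHAR(20)  NOT NULL,
        district_name VARCHAR(100) NOT NULL     -- district level of the store hierarchy
    );

    -- Intended grain: one row per product per store per day. The schema itself cannot
    -- distinguish a daily detail row from an aggregated one, which is the integrity
    -- gap discussed below.
    CREATE TABLE fact_daily_sales (
        date_key     INTEGER NOT NULL REFERENCES dim_date (date_key),
        product_key  INTEGER NOT NULL REFERENCES dim_product (product_key),
        store_key    INTEGER NOT NULL REFERENCES dim_store (store_key),
        sales_qty    INTEGER NOT NULL,
        sales_amount DECIMAL(12,2) NOT NULL,
        PRIMARY KEY (date_key, product_key, store_key)
    );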

    In the current design, the database does not enforce the integrity of data with respect to the definition of the fact tables. The basic fact table should contain rows only for daily sales of each product sold in each store. If we try to insert an aggregated record (for week, district, and brand) into the basic fact table, the database will not reject it. If we want to separate the fact tables according to grain, we would have to rely on the ETL process to enforce the granularity of the fact table. Enforcing rules through the ETL process has the following drawbacks:

    • ETL processes need to persist the metadata for populating both the basic fact table in the first figure and the aggregated fact table in the second figure.
    • Multiple ETL processes may be responsible for populating the fact table. In such cases, we would need to duplicate the rules in each process. By maintaining the rules (business logic) in the database layer, we minimize maintenance headaches because the logic is centrally located, making changes deliberate and less prone to mistakes.
    • ETL processes may not enforce the rules efficiently if they rely on SQL to do so; database integrity constraints are generally more efficient than equivalent SQL checks.
    • If the ETL process relies on a general-purpose programming language, maintaining the rule metadata within the ETL code becomes even more difficult.

    Let us see how the design changes when we partition the dimension tables to enforce the granularity of the fact and aggregate tables. If we partition each dimension table based on the level of the data in its hierarchy, we can join these partitioned dimension tables to the appropriate fact table. Figure 2 illustrates how the data model looks with the partitioned dimension table approach.

    Figure 2: Constraint Enforced Point of Sale Mart

    In the new design, the basic fact table joins with the base-level dimension tables and the aggregate fact table joins with the higher-level dimension tables. The granularity of each fact table is now enforced by the relational constraints defined in the database. We can use SQL to create a view over all partitioned dimension tables, using UNION ALL, to present a logical design similar to the basic POS data mart described in Figure 1. We can also create views over the basic and aggregated fact tables to expose one fact view with multiple grains, though this depends on how the data modeler wants to present the logical model.
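
    A minimal sketch of how the constraint-enforced design of Figure 2 might be expressed in DDL is shown below; as with the earlier sketches, every table and column name is an assumption. Because the aggregate fact can only reference week, brand and district keys, the database itself enforces its grain, while a UNION ALL view can re-present a single logical dimension.

    -- Level-specific ("partitioned") dimension tables (illustrative names).
    CREATE TABLE dim_date_day  (date_key        INTEGER PRIMARY KEY,
                                calendar_date   DATE NOT NULL,
                                week_key        INTEGER NOT NULL);
    CREATE TABLE dim_date_week (week_key        INTEGER PRIMARY KEY,
                                week_start_date DATE NOT NULL);
    CREATE TABLE dim_brand     (brand_key       INTEGER PRIMARY KEY,
                                brand_name      VARCHAR(100) NOT NULL);
    CREATE TABLE dim_district  (district_key    INTEGER PRIMARY KEY,
                                district_name   VARCHAR(100) NOT NULL);

    -- The aggregate fact can only reference week/brand/district keys, so a row at
    -- the daily, product, or store grain is rejected by the foreign key constraints.
    CREATE TABLE fact_weekly_brand_sales (
        week_key     INTEGER NOT NULL REFERENCES dim_date_week (week_key),
        brand_key    INTEGER NOT NULL REFERENCES dim_brand (brand_key),
        district_key INTEGER NOT NULL REFERENCES dim_district (district_key),
        sales_qty    INTEGER NOT NULL,
        sales_amount DECIMAL(12,2) NOT NULL,
        PRIMARY KEY (week_key, brand_key, district_key)
    );

    -- A UNION ALL view re-presents the partitioned date dimensions as one logical
    -- dimension, similar to the basic design in Figure 1.
    CREATE VIEW dim_date_all AS
        SELECT date_key AS period_key, calendar_date   AS period_start, 'DAY'  AS grain
        FROM   dim_date_day
        UNION ALL
        SELECT week_key AS period_key, week_start_date AS period_start, 'WEEK' AS grain
        FROM   dim_date_week;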

    TDWI Executive Summit World Conference San Diego, 2008 – It’s On…

    Video: TDWI World Conference 2008 – Sunday, August 18, 2008 Pre-Conference Highlights. Tonight formally starts the Executive Summit portion of The Data Warehousing Institute (TDWI) World Conference 2008, in sunny San Diego… (See @lauragibbons on Twitter for hotel pics).
    Check back often for clips and interviews with the masters of business intelligence, including (hopefully) Bill Baker and Wayne Eckerson (formerly Microsoft and TDWI, respectively). I am speaking on Tuesday on Six Sigma / Process Intelligence, the marriage of BI and Process Excellence / Six Sigma.
     

    Talking about Monsters of Legend: Cryptozoology

    I find it funny how we, ‘the collective human being,’ tend to believe what we are taught without questioning how factual or relevant a given learning is – For example, Pluto is no longer a planet, as those born in the 70’s and 80’s were taught. And for decades prior, people believed our solar system consisted of just seven planets, bound less by the laws of astronomy than by preconceived theory. Likewise, the black swan was considered myth until one made its way out of Australia a long time ago; its very existence was treated as mere belief, even though it was baked in first-person factual experience. Did anyone go back and say "jeez, I was an id*ot for disbelieving you, Joe Australian, seer of the black swan"? I highly doubt it. So, here is an account of legend gone extinct gone awry.

    Quote

    Monsters of Legend: Cryptozoology


    Many animals once thought extinct or legendary have been eventually discovered to be real.

    Guerilla Marketing Meet Business Intelligence; The Result? Social BI Networks or BI 2.0

    Casual users who become adopters compound the “viral” tipping point of the “usage effect”. When a system eases the user’s experience, rather than the reverse, which never leads anywhere good <insert the person who best fits ‘I deleted it from the registry with a shrug’ and inadvertently deletes their O/S here>, why not try these stages to gain a better understanding of your BI efficacy from your users?

     

    I dubbed this measurement technique 3XEI (in order): educate, illustrate, elaborate, integrate, extrapolate, iterate. The follow-down process is defined as follows:

    1. Seek first to understand, then to be understood; EDUCATE yourself on the pain points from the eyes of the customer (“Voice of the Customer”).
    2. ILLUSTRATE to prior adopters, detractors and neutral parties the benefits of BI by mapping out a visualization of their current process, circling all pain points in the process.
    3. To reduce your dependence on IT to support business-driven application acquisitions, continue to interview candidates, walk the process and mine as much as you can in order to ELABORATE on the process map until you’ve captured all process metrics.
    4. INTEGRATE your approach to educating, illustrating and elaborating on your BI program; you will gain end-to-end insights previously obscured by not factoring in the multicollinearity of the inputs on each other as well as on the intended output.
    5. EXTRAPOLATE to pain points and use the information to drive decisions or changes to the process at large.
    6. ITERATE on the illustrated processes as the extrapolation exercise dictates or requires, in order to remain vital and relevant to the organization.

    These phases have no definitive start/stop points. They are as mutable as the changing technology landscape in corporations today. They remain tightly embedded for the life of the process at the broad organizational level. Very few companies understand the power of mapping all efforts to the core business process supported, nor can they visualize the power of the optimized, end-to-end process in driving the bottom line. Those that do reap the rewards; and those that don’t, well, they need not be mentioned.

    Business Informed Decisions or Uninformed Intelligence in a 2.0 Wannabe World

    Fact: Only 19 percent of companies say their employees have all the data they need to make better, more informed decisions. 1

    Legacy “executive information systems”, and in some cases failed TQM, CRM and IVR – any business-dubbed acronym assigned to a once disruptive yet sexy piece of technology used to support some business process. And each time one was named, the apparently innocuous acronym became what corporate decision support & IT leaders would hate the most: the ever-present reminder that they, too, like you or I, believed the promised triple-digit ROIs and NPVs and IRRs and TCOs; and we all know how we feel about the typical IVR experience.

    And for all of the marketing hype and business-related jargon, not much materialized in the way of extreme financial rewards. In fact, the pains that were felt were invisible to the average eye. They came in the form of the levels of support (i.e., manpower) required to ‘support’ users and systems; but hey, they don’t call it disruptive for nothing.

    But I digress…

    Let’s just say the expectations set were far greater than the actualized returns.

    Driven to succeed, with a common understanding of the importance of sharing information across silos and processes, and free from the burden of role ambiguity, employees can truly uncover ALL of the intersection points affecting a given process, product and plan. And with further analysis, one could establish the root causes and their relative impact on the ultimate measure of success for most companies: profitability.

    And with boundary-less organizations comes a well-oiled operations machine, free from the clutter and waste that infiltrates overly complex and bureaucratic organizations. The same ones that have meetings to plan meetings… I know you know the ones…

    What comes when one begins to think outside of the proverbial box, and think about things from the customer’s vantage point, is a powerful sense of organizational self awareness, ultimately, bubbling up a call to action to your workforce, resulting in focus and drive…but that’s where I stop…it is stalled by middle management, filled with less than promising mid-managers who know little about mentoring and growing their staff, far too consumed with feelings of being threatened by xxx up-and-comer staffie…

    Using this ‘voice of the customer’ along a supply chain is a powerful technique for designing and developing effective process and project metrics. In many ways, effective measurement determines the success of the projects. Imagine a world where BI and project / process management have merged to evolve a new technique called Process/Project Intelligence, where the combined use of project management methodologies (including ISO 9001, Agile (Scrum) or Design for Six Sigma) and business intelligence is merged into a true picture of the health of the organization across all verticals and horizontals, where all root causes are explored, prioritized and eliminated. While BI has undoubtedly evolved into a powerful set of technologies suitable for different types of users and information-analysis needs since the early ‘EIM’ days, it is far from the ‘single source of truth’ that so many vendors spin into their sales pitch, because most systems neglect to weave in the element of the business-managed processes that germinate out of every department within every organization at some level.

    Only by honestly examining one’s process in its actual current state, and by no longer blending the truth through a rose-colored, kaleidoscopic view of operations, can one move into the thrilling world offered by those at the top of the Process & Project Intelligence pyramid. A place driven by the truth of data, where unified and systematic approaches to understanding one’s business are commonplace rather than the exception; where process optimization is key to project prioritization rather than secondary; where development initiatives are driven by the ‘voice of the customer’, rather than the voice of the developer.



    1. HP and Business Objects study, 2007.