Cognitive decisions are at the heart of monetizing advanced analytics. They are the functional reason that advanced analytics models and related business rules are developed and implemented. Cognitive decisions include management decisions and insights as well as transactional decisions in service processes. Capturing the full value of analytics demands that decision analysis be in scope for every project. Without a new kind of decision process, there is little to no operational gain from applied analytics.
Archives For Analytics Program Management
Program Management includes several critical management and work activities that you or your managers must be accountable for. Consult the Program Management Body of Knowledge for a detailed view of this level of organization and apply it to your enterprise analytics group.
In case anyone wonders whether advanced algorithms really make a difference.
Twenty-first-century students would benefit from 16th-century habits of mind.
“The really great discoveries … have been made by men and women who were driven not by the desire to be useful but merely the desire to satisfy their curiosity.”
‘Working’ Analytics: a useful term for building deployable models that solve problems with a minimum of cost and complexity. Almost by definition, ‘Big’ is not ‘working’ analytics; it’s something else. When things get big, they get costly and complex. They become impractical to operationalize, much less to use in day-to-day operations. A foundational principle of data science that predates ‘Big’ is parsimony, also known as Occam’s razor.
For data scientists: ask yourself whether you want to be a ‘working’ practitioner or a developer of complex, inexplicable, and mostly unused solutions. You can certainly build complex solutions, but your job is to make them simple.
For employers, it is tempting to believe in ‘unicorns’: a wickedly complex algorithm that creates a discontinuous shift in your industry and crushes the competition for years to come. By all means hire people with the attitude and habit of contrarian thinking (e.g., putting a camera on a phone). But hire a blend of ‘working’ practitioners with a philosophy of parsimony and ‘explorers’ who will thrash data and models regardless of where it takes them.
There are many, many working problems to solve while you are looking for your unicorn.
On this subject, a useful (and challenging) concept from Oliver Wendell Holmes:
“I would not give a fig for the simplicity this side of complexity, but I would give my life for the simplicity on the other side of complexity.”
Edward H. Vandenberg
Perhaps we need to really focus on decision science. The older parlance of data science had this orientation. Decision science is the ‘so what’ of data science: making better decisions (even when those decisions are automated and embedded in transaction systems).
I like this focus because it naturally relates to the decision-maker (still, in most circumstances, a person) and to how we make decisions.
Edward H. Vandenberg
You may have business-rule ‘messes’ in your transaction systems. I’m referring to business rules created by well-meaning IT folks and business analysts attempting to direct a complex decision with linear business rules. This pretends to be data science but is often a ‘mess’, and the overzealous use of rules posing as data science is a common situation. As support for a complex decision, these rules are often worse than doing nothing (guessing). Worse, they lead to poor data (too much data entry is required to make them work as planned). Worst of all, they may encode linear thinking and a bias toward ‘averages’ rather than distributions when it comes time to translate heuristics into data science.
The ‘mess’ may be a good place for a new data science project. Likely you will need to rip out the rules altogether (not popular with IT). However, assuming the data is semi-clean, the historical use of the business rules may prove to be a useful predictor in a multivariate model. Not all business rules are a mess; some are the result of simple heuristics and have been maintained properly. In any case, look for these pseudo-models as a way to improve decision-making with true analytics.
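One way to check whether a legacy rule carries signal before promoting it to a model feature is to compare outcome rates when the rule fired against the base rate. This is a minimal sketch with made-up data; the column labels (`"flagged"`, `"fraud"`) and the rule itself are hypothetical, not from any particular system.

```python
# Hypothetical history: each record pairs a legacy rule's decision
# ("flagged" / "not flagged") with the eventual observed outcome.
history = [
    ("flagged", "fraud"), ("flagged", "fraud"), ("flagged", "ok"),
    ("not flagged", "ok"), ("not flagged", "ok"), ("not flagged", "fraud"),
    ("flagged", "fraud"), ("not flagged", "ok"),
]

def rule_lift(history):
    """Fraud rate when the rule fired, divided by the base fraud rate.

    A lift well above 1.0 suggests the legacy rule carries signal
    worth keeping as a predictor in a multivariate model.
    """
    base = sum(1 for _, o in history if o == "fraud") / len(history)
    fired = [o for d, o in history if d == "flagged"]
    fired_rate = sum(1 for o in fired if o == "fraud") / len(fired)
    return fired_rate / base

print(rule_lift(history))  # → 1.5: the rule's firings are informative
```

A lift near 1.0 would argue for ripping the rule out entirely; a strong lift argues for recoding its historical firings as an input column rather than as the decision itself.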
Edward H. Vandenberg
A colleague sent this article to me and what follows is my response.
I read the article; thank you. Everybody trying to understand analytics needs to understand this issue and the burden it puts on projects and on producing results.
Unfortunately, the issue goes even deeper than the article describes. Transaction systems were designed for accounting and contractual fulfillment, not for data science. The designers of those systems weren’t particularly savvy about the way people work, so the data entry became corrupted by laziness, shortcuts, and some frankly sloppy validation and edits. Now we’re in a state where the data coming from these lousy data-entry systems has been loaded into data warehouses. The ETL performed on the data was supposed to make reporting and analysis easier. But the Transform logic just added another layer of poor hygiene and/or illogical transformations, and the Load logic was all about reporting, not data science. So data warehouses are not great at facilitating data science.
Data science means unwinding all of that, row by row and column by column, in a brute-force effort. We even try to get inside the bugs by finding patterns in null values and in unexpected 1’s and 0’s where valid values were supposed to be entered.
Data science projects simply run out of time to correct all of this and end up throwing out half the data originally thought to be interesting. Keep in mind, too, that after the janitorial work the data has to be preprocessed for the specific algorithmic approaches being used: binning, log transformations, and a dozen other critical techniques to extract signal and avoid being fooled by the noise.
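The two techniques named above can be sketched in a few lines. This is an illustrative toy, not a recipe: the amounts and the bin edges are invented for the example, and in practice edges would come from quantiles or domain knowledge.

```python
import math

# Hypothetical raw values from a skewed monetary field.
amounts = [120.0, 45.0, 9800.0, 310.0, 75.0, 22000.0]

# Log transform: compress a long right tail so a few large values
# don't dominate distance- or gradient-based algorithms.
logged = [math.log1p(a) for a in amounts]

def bin_value(x, edges):
    """Assign x to a bin index given ascending edge boundaries."""
    for i, edge in enumerate(edges):
        if x < edge:
            return i
    return len(edges)

# Edges chosen by hand for the sketch.
edges = [100.0, 1000.0, 10000.0]
bins = [bin_value(a, edges) for a in amounts]
print(bins)  # → [1, 0, 2, 1, 0, 3]
```

The point is that each modeling approach dictates its own preprocessing; the same column may need the log form for one algorithm and the binned form for another.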
I don’t believe there is an automated approach beyond what we already have, because the source systems vary so much in how the data collection was programmed, how the ETL was programmed, and how the data entry actually happens. The first step is to perform statistical evaluation to ‘smell’ the data. These are pretty basic steps, but they need to be done on every column you are working with, sometimes hundreds or thousands of them.
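A minimal sketch of that per-column ‘smell’ test, under assumed data: the rows, the column names, and the sentinel `0` standing in for “unknown” are all hypothetical, but the profile itself (null rate, distinct count, mean) is the kind of basic check meant above.

```python
import statistics

# Hypothetical extract from a source system, with None for missing
# entries and a suspicious sentinel 0 in a numeric column.
rows = [
    {"age": 34, "income": 52000},
    {"age": None, "income": 0},
    {"age": 41, "income": 61000},
    {"age": 29, "income": None},
    {"age": None, "income": 48000},
]

def smell(rows, column):
    """Basic per-column profile: null rate, distinct count, mean.

    Run this on every column before modeling; sentinel values
    (e.g. 0 standing in for 'unknown') often show up here first.
    """
    values = [r[column] for r in rows]
    present = [v for v in values if v is not None]
    return {
        "null_rate": round(1 - len(present) / len(values), 2),
        "distinct": len(set(present)),
        "mean": round(statistics.mean(present), 1) if present else None,
    }

for col in ("age", "income"):
    print(col, smell(rows, col))
```

A 40% null rate on `age`, or a mean on `income` dragged down by sentinel zeros, is exactly the kind of smell that sends you back to the source system before any modeling starts.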