From the media hype around AI and ML (Machine Learning), one would think that widespread use is just around the corner, with promises of fantastic productivity gains and dire news for human employment. Although some of this might be partially true, I believe that for the most part the promise is being oversold and that a partial disappointment is on the cards. I say partial because many of the techniques developed from the 1970s onwards have either matured or now have the conditions in place to make their use viable, but more is needed besides computing power and lots of available data.
I will talk here mostly about ML rather than AI in general, because many of these techniques have been taught in universities for decades yet were seldom applied in the context of private enterprises. Heck, even tried and proven statistical methods are rarely applied today, so it is no surprise that ML had little traction. In any case, ML was considered mostly the realm of ivory-tower academic research, full of dreams and promises but with very few practical results, not that the results weren’t sometimes impressive. Research was difficult to translate into applications, and decision makers distrusted whether these techniques could deliver results without a very high cost. By cost I mean not only the time taken and money spent, but also the credibility and reputation cost attached to projects that fail to meet expectations. On top of that, there were practical difficulties that made ML challenging to use, issues like:
- Lack of appropriate datasets for training and testing models; over time this became much less of an issue.
- Lack of computing power to run several model iterations; again, this became less of an issue over time.
- Lack of development environments with tools and libraries that make testing and comparing different ML algorithms and models much easier.
- Lack of people with both technical ML experience and domain experience.
In the early 2000s, using ML still meant implementing it in a general-purpose programming language or in Matlab; some statistical packages like SAS provided tools, and BI tools offered extensions that enabled some ML facilities. But these were the realm of the specialist: tools that were either too expensive or too time-consuming to allow wider usage. Only big institutions that generated massive amounts of data and had deep pockets could afford them. Inference models, decision trees and other techniques were used to detect credit fraud and health care fraud, to parse genetic data, anything that meant sifting through tonnes of data to find a possible needle.
The advent of R and Python democratized access to ML, but the biggest push for interest in ML came from the work of Google, Apple and Facebook in the field. Without products like Siri, Google Translate, and many other related bots and autonomous agents, this field would still be relegated to the research labs. Now these uber-companies compete for the available AI/ML researchers to develop their portfolios of products in an AI arms race.
As I said earlier, the lack of ML professionals with domain knowledge was always a problem. An ML professional with no domain knowledge creates its own kind of friction: communication with stakeholders becomes difficult, and the learning curve towards a successful application gets steeper. And there is a lot to choose from in the ML toolbox, from linear and logistic regression to k-means, support vector machines, decision trees and random forests, each method with its particular strengths and weaknesses, so good judgement is a key factor.
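As a minimal sketch of what exercising that judgement can look like in practice, here is how two candidate methods might be compared on the same data with scikit-learn; the synthetic dataset and the pairing of logistic regression against a random forest are illustrative assumptions, not recommendations:

```python
# Compare two candidate classifiers on the same (synthetic) dataset.
# The data, models and hyperparameters here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Cross-validation gives a rough, comparable score per method; choosing
# between them still requires domain judgement about errors and costs.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f} (std {scores.std():.3f})")
```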
But ML comes with another set of aggravations: being mostly data-driven and statistical in nature, it fails in a key aspect, the human need for certainty and predictable outcomes in an organizational setting. Organizational structures like predictability, our codes of law in some cases demand it under threat of penalties, and shareholders love it. What ML can provide depends on the analyst’s capacity to tweak and adapt the model, and on the available training data, so as to minimize the total error. This means that, at any given time, the model in use will flag some cases as negative when they are true (false negatives) and flag some cases as positive when they are false (false positives).
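As a toy illustration, with entirely invented numbers: the same set of model scores yields a different mix of false positives and false negatives depending on where the decision threshold is placed, and no threshold removes both:

```python
# Ten invented ground-truth labels and model scores; moving the decision
# threshold trades false positives against false negatives.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 0, 1, 1, 1, 1, 0, 1]
scores = [0.10, 0.40, 0.35, 0.80, 0.70, 0.55, 0.90, 0.45, 0.20, 0.60]

for threshold in (0.3, 0.5, 0.7):
    y_pred = [1 if s >= threshold else 0 for s in scores]
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(f"threshold {threshold}: {fp} false positives, {fn} false negatives")
```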
What manager wants to hear that the ML model for a critical business process has 63% accuracy, even if the current process has only 55% accuracy, simply because the current process is well-known and familiar? And what if the current process reaches 90% accuracy with human operators, but costs 20 times more and takes weeks instead of hours? Well, there is always that trade-off moment…
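As a back-of-the-envelope sketch of that trade-off moment, using the accuracy figures from the paragraph above; the case volume and the cost attached to each misclassified case are hypothetical inputs that each business would have to supply:

```python
# Rough expected-cost comparison; the accuracies come from the text above,
# while the volume and cost-per-error figures are hypothetical placeholders.
def total_cost(n_cases, accuracy, cost_per_case, cost_per_error):
    errors = n_cases * (1 - accuracy)
    return n_cases * cost_per_case + errors * cost_per_error

n = 10_000
cost_per_error = 50.0  # hypothetical damage per misclassified case

ml = total_cost(n, accuracy=0.63, cost_per_case=1.0, cost_per_error=cost_per_error)
manual = total_cost(n, accuracy=0.90, cost_per_case=20.0, cost_per_error=cost_per_error)
print(f"ML process:     {ml:,.0f}")
print(f"Manual process: {manual:,.0f}")
```

With these made-up numbers the cheaper, less accurate model wins; raise the cost per error enough and the expensive manual process wins instead, which is exactly the judgement call managers face.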
This somewhat uncertain pay-off meant that organizations focused their IT efforts on developing systems that automate processes through sets of fixed rules, with the expectation that these are adequate for the business. In many cases this is more than adequate, and it has been quite successful. There is little need for AI in a simple CRUD front end that merely pushes and pulls form data in a database.
The problem arises when data has to be classified despite a low signal-to-noise ratio, or when there is too much data for humans to classify within a reasonable time-frame. These problems are becoming more frequent as organizations accumulate data or buy it for marketing purposes. It is one thing to aggregate and cross tabulate large datasets with an OLAP engine, which is useful but loses some context; it is quite another to target specific groups of individuals with an ML algorithm to promote particular behaviours. The latter promises to make marketing budgets much more effective, but it also has very troublesome implications.
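To make the contrast concrete, a minimal sketch with pandas and entirely fabricated data: the OLAP-style cross tabulation aggregates the individual away, while targeting attaches a score to each individual row (here a stand-in rule plays the role of a trained model):

```python
# Fabricated toy data: four individuals with a region, a segment and spend.
import pandas as pd

df = pd.DataFrame({
    "region":  ["north", "north", "south", "south"],
    "segment": ["a", "b", "a", "b"],
    "spend":   [120.0, 80.0, 200.0, 60.0],
})

# OLAP-style view: a useful summary, but the individual context is gone.
print(pd.pivot_table(df, values="spend", index="region",
                     columns="segment", aggfunc="sum"))

# Targeting view: each row keeps its identity and gets its own score.
# A stand-in threshold rule substitutes for a trained model's probability.
df["score"] = (df["spend"] > 100).astype(float)
print(df)
```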
The move towards ML/AI will not be smooth from the point of view of development teams and organizations. Big tech companies like Google, Facebook and Amazon, along with fintech companies, can afford the R&D, and it fits their business models well, while older tech giants like IBM might struggle for relevance in the field. Tech startups can also do well in technical terms, though profitability is not a sure thing. Non-tech small and medium companies, and mature, conservative companies, might struggle to make sense of it all; some might get gobbled up or go out of business because of it.
In many of these companies, development teams live in their own microcosm, and sometimes the less known about it the better. But some traits are common; here are a few examples:
- In many companies, BI and application development are separate silos.
- Team leads are suspicious of technologies they don’t understand, and they push for tools that fit their particular tech niche (don’t underestimate the need some people have to use a database for everything).
- ML might be used as a status-project label to advance someone’s career with the blessing of management, even when it makes no business sense or the person to be advanced lacks the required skills.
- The team lacks the skills and is hostile to changes in the technology stack that might jeopardize their jobs.
- Risk aversion on the part of middle management leads to paralysis and delays in program implementation.
This doesn’t mean that these companies are doomed; they can probably live well in their particular niche for quite a long time, until the whole development team is replaced by attrition or people move up in the organization. Implementing ML in an SME or in a mature non-tech company is not a recipe for success by itself, and in most cases it will be invisible both within and outside the organization.