There is a widespread misconception that "business intelligence" is not really for intelligent people. I hereby begin my (possibly long and painful) battle to dispel this myth.
The more perceptive among you will know that I’m recruiting. Over the last few months, I’ve put out several posts on LinkedIn and Twitter regarding this. I’ve tried various tactics to attract people - talk about the kind of work we do, rant about the hiring process, get my CEO to advertise on my behalf, and the like.
I can’t say these methods haven’t been successful (there, I used a double negative!) - I have indeed managed to recruit a few people. However, my recruitment efforts have drawn more attention from random relatives and friends I’ve spoken to than from potential candidates. They have also drawn significant attention from my wife.
“Dude, why do you even say that you are recruiting for BI? Everyone who’s been with a big company knows that BI work is basically a euphemism for bi*** work. Nobody will want to work with you if you say BI”, she told me once.
That is possibly true. I have occasionally heeded my wife’s advice and said that I’m recruiting for “analytics” or even for “data science” (a term I’ve vowed never to use for myself). However, the fact of the matter is that my department is called “Analytics and Business Intelligence”. In fact, it was going to be just called “Business Intelligence”, but for a strategically timed WhatsApp message I’d sent between getting the verbal offer and the formal written offer for the job I’m doing now.
So I have no choice but to launch a long-running campaign to
Make BI Great Again
So what do I mean by BI? In one of my recruitment messages on LinkedIn, I wrote this:
The problem I'm working on at work today involves
- computational geometry
- operations research (vehicle routing problem)
- search algorithms (what you can call as "classic AI")
- data analysis (to calibrate the models, and make realistic assumptions)
There is machine learning involved in a step prior to today's problem. The "solution", whenever we solve it, will go to the operations team to implement (in other words, it's not a "tech solution").
If you enjoy working on data problems with this kind of diversity, and are actually good at some of the above, get in touch with me. We're hiring at all levels.
Yes, it is all true. However, this particular post didn’t succeed in attracting candidates.
After I had put up this post, I realised that the problem I was working on also involved a “quick machine learning model” (I’ll come to those downstream in this letter), between the OR and the AI steps.
Pretty much every problem that my team and I work on involves using a wide range of math and data skills to solve a hardcore business problem. The objective of my team is to use intelligence from data to derive business insights. And so we are the “business intelligence” team.
That is not true everywhere, though. A long time back (over three years ago), I had written in this newsletter about “suitcase terms”. Coined by AI pioneer Marvin Minsky, it refers to words or phrases that can mean lots of things. Data science is the classic “suitcase term”. Analytics is a suitcase term. Even artificial intelligence has become a suitcase term.
And business intelligence can’t be far behind, can it?
Business Intelligence started off by being data intelligence used to provide business insights. And then over a period of time it has exclusively come to mean (outside of my company, that is) “dashboards”.
In some sense I like to think that this is a consequence of “fighterisation” of the process of getting intelligence from data. One way to get intelligence from data is to show the data to intelligent people. And this means showing the most updated data, aggregated and filtered appropriately, in a nice way.
And an important part of this process, and the most voluminous job in this process, is to build the plumbing so that the data from the database flows into the (hopefully) nice pictures. And so “BI” has come to represent a software engineering function, to build the plumbing between databases and front end tools. In some sense, all intelligence has gone away from the term.
For the third time in the lifetime of this newsletter, I am going to quote this quotable quote by data visualisation guru Kaiser Fung:
The data science community is guilty of talking down on the business intelligence function. There is a misperception that BI is for less skilled people doing boring things. The reality is there is more science in BI than in so-called data science (defined here as software engineering). Science, after all, is about figuring out why things are as they are. Engineers, by contrast, use our understanding of science to change the way things are.
It is possible that I will continue to quote this until I have succeeded in my mission of Making Business Intelligence Great Again.
My business intelligence team needs skills in:
- Classic artificial intelligence (search algos, heuristics, etc.)
- Building and refining algorithms
- And a little bit of machine learning.
We are called business intelligence because the intelligence we provide goes directly into powering business decisions. If you are good at a few of the above and want to work with us, please get in touch. Replying to this mail is a good way to do so.
And no, we don’t care if you know Qlikview or Tableau or any of the other gazillion “BI tools” out there. Knowing R helps, though.
Quick Machine Learning Models
Somewhere in the earlier section, I mentioned “quick machine learning models”. So the issue was that the thing I was working on (involving OR and AI and computational geometry and all that) needed some tweaking. After my heuristics ran, I figured that the “boundaries were not smooth” (at this point, this is all I can say. For all you know, once this has “shipped” I might write a corporate blogpost about this).
I was thinking of various ways to smoothen the boundaries, and was evaluating which heuristic might work well for this problem. For a while, I drew a blank, and then decided it wasn’t a bad idea to unleash the proverbial nuclear weapon on the sparrow - drawing boundaries reminded me of support vector machines. And in the next 20 minutes I had figured out how to do SVMs in R, and the job was done.
The next day, as I started stitching together the algo, I figured that SVMs were inefficient, and figured a “K nearest neighbour” (KNN) model might be superior. Another 20 minutes. Job really done this time.
And this got me thinking - when your job involves <something> (whatever that thing is), you become a specialist at that <something>, and you can be prone to making a big deal of the <something>. In this case, for example, if your job is building machine learning models, you take each model too seriously. And when a particular requirement involves a “quick model”, you may be liable to overthink it and throw a fusion bomb (rather than a simple fission bomb) at the sparrow.
When you overthink something and make a big deal of it, you increase the transaction cost. Traditionally, specialisation has meant that the person you outsource something to will spend fewer resources doing the thing than you would have. Overthinking means this may no longer be true.
Relating it back to “quick machine learning” models, I’m now starting to think - unless you are into research or something like computer vision that really involves a LOT of machine learning, does it really make sense to have a team whose only job is machine learning? Would it not be better to simply empower the analytics and tech teams to use “quick ML” when necessary?
Pipelines and padding
And when I say “quick ML”, it is really quick ML. I must have written multiple times on this blog - when you use python, machine learning is literally three lines of code. And the beauty with python is that whatever machine learning model you choose to use, it is three rather similar lines of code.
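To make the claim concrete, here is a minimal sketch in scikit-learn (the toy data and parameter choices below are mine, invented for illustration - my actual work was in R, on data I can’t show). Note how swapping the SVM for a K nearest neighbours model, as I did with my boundary-smoothing problem, changes only the line where the model is constructed:

```python
# A "quick ML model": three rather similar lines, whatever the estimator.
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Made-up points in two well-separated clusters
X = [[0, 0], [0, 1], [1, 0], [1, 1], [2, 2], [2, 3], [3, 2], [3, 3]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

# Three lines for a support vector machine...
svm = SVC(kernel="rbf")
svm.fit(X, y)
svm_pred = svm.predict([[0.5, 0.5], [2.5, 2.5]])

# ...and three near-identical lines for K nearest neighbours.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)
knn_pred = knn.predict([[0.5, 0.5], [2.5, 2.5]])
```

On data this clean, both models put the first test point in class 0 and the second in class 1 - which is exactly why trying one model and then the other is a 20-minute job, not a project.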
The thing is - topper types don’t like simplicity. If something is too simple, they tend to think that it is “not enough work”. For example, I was in an assignment group with a couple of toppers during our Design and Analysis of Algorithms course (back in 2002). One assignment seemed rather simple. They decided to compensate by “submitting a formal proof” (which, to me, was unreadable).
A lot of people in machine learning are “topper types”. A lot of them have masters and PhD degrees. Some even have done post docs. And so three lines of scikit-learn code is clearly insufficient. And so whenever you see machine learning literature, you will see that there is sufficient “padding”.
The other day, while writing my “quick SVM”, I thought I would use “tidymodels”, the R family of packages that supposedly allows for easy machine learning. Most of the literature I could find around it was all about cleaning the data, building pipelines, creating “recipes”, “juicing”, etc. It was impossible to find how to run a quick model.
And this is not an exception. Most literature on machine learning seems “very basic”. 90% of the space will go to fundamentals like filling the gaps in data, separating data into training and test sets, redefining “variance” and “bias” for the 100th time (I still don’t understand them in an ML context) and all that. All the meat - the actual model I’m trying to look for - gets at most 10% of the space.
I don’t understand this at all. It is as though someone believes that “just giving the formula is not worth the 5 mark question”, like we used to during high school board exams.
Based on my experience, I would prefer that data scientists “go about their 5 mark questions” differently. Yes, with packages such as scikit-learn, access to machine learning has been democratised.
What has NOT been democratised is the thinking around how to use machine learning. How do you set up the problem? What are the potential business implications of the solution? Is machine learning the right thing to do for the given situation? What is the most logical ML method for this problem (chosen by reasoning rather than pile-stirring)?
And these are things that a lot of machine learning people simply don’t think about. Maybe that’s why the work they do is not called “business intelligence”.
Since I’ve been hiring, I’ve been coming across a large number of resumes. Everyone who applies applies with their own mental model of “business intelligence” or “analytics”. I find an inordinate number of resumes where the primary skill is in building SQL queries.
And as I’d mentioned in the previous edition of this newsletter - using tools like dbplyr, we are trying to gain an arbitrage into NOT writing SQL. This way, we write cleaner code that is far less verbose, far easier to read and far easier to debug.
However, after I sent that out, I had a more radical thought - if your database is well designed, your analytics and BI people don’t need to be good at SQL.
Yes, a lot of people take a lot of pride in writing long and complex SQL queries. A lot of people are really good at it. However, in each of these cases, I now confidently assert, the reason those long and complex SQL queries exist at all is that the data is not properly organised. If the data warehouses had been constructed in a business-friendly way to begin with, the job could have been done with far simpler queries.
I’ll possibly have material to elaborate on this in my next post. For now, think about it. And tell me what you think. If you got this by email, you can just reply to it to talk to me. If you are reading it on a website or feed reader, you will have to leave a comment.