I recently read an article about Gemma Milne in Forbes. She wrote a book called Smoke & Mirrors, about the misuse of technical terminology and how it can affect funding, policy-making, voting, and more.

Though I have not read her book (and, therefore, cannot vouch for it), I thought it was an interesting point that I have come across in my own business. I believe the term ‘artificial intelligence’ is way overused.

I get it, AI is cool. It makes us think of far-off sci-fi futures: a world of self-driving cars and human-like robots. That’s why the label gets attached to all sorts of products that may technically qualify as AI but are far from the technology we are picturing.

I think this is due to a misunderstanding of what artificial intelligence is. AI is an overarching term that encompasses many technologies, including all forms of machine learning software. And a lot of that software is underwhelming when compared to the sci-fi fantasy we picture in our minds.

I’ll give a direct example: I used Casetext software on a trial basis. They promote their legal assistance software, known as CARA A.I., as research-oriented artificial intelligence. Literally, it’s described as “like having a research assistant at counsel’s table.” I was told that I could upload a brief into CARA, and she (or it, I suppose) would find all of the relevant cases and predict arguments that could be made. I was skeptical, to say the least.

The fact is that Casetext was actually a pretty good tool. It worked the way I would expect a research tool to work, but it was a far cry from a research assistant at counsel’s table. I tried CARA and was, frankly, underwhelmed. I uploaded a brief. It managed to find most of the citations but not all of them, and it made no real effort to predict arguments that I or my opponents could make.

That’s not to say it was bad. It wasn’t. It worked fine. I just think it was massively oversold. And that’s my point.

Artificial intelligence just isn’t at the level being promised, certainly not in a form available to consumers. Most of these tools are some form of machine learning algorithm designed to aid in automation. Why aren’t businesses more honest about the capabilities?

Because ‘machine learning algorithm’ is not sexy.

I only tried Casetext because of the AI capabilities. Had they been honest, I wouldn’t have used the trial at all. So I suppose their ruse worked, in a sense, though it backfired when I was underwhelmed by the actual product.

The problem is that AI has been plagued by overpromises since its inception. The term ‘artificial intelligence’ was coined for a 1956 conference at Dartmouth College organized by John McCarthy, with Marvin Minsky among the co-organizers. Minsky, one of the original thinkers in the AI movement, said in 1970 that we would have artificial general intelligence (meaning humanlike intelligence) within three to eight years.

Fifty years later, we still don’t have artificial general intelligence, and there is no real estimate for when we will have it.

I’m sure you’ve heard about how lawyers will be (or have been) replaced by AI, and that accountants, engineers, and doctors are next. Essentially every knowledge worker is on the chopping block.

But the technology is nowhere near capable of doing seemingly simple jobs like driving trucks, let alone taking on complex knowledge work. Don’t get me wrong, our day will come. There is reason to believe that AI will be able to do these jobs in the future, but it’s much harder than these companies let on.

With AI’s capabilities still so limited, companies should stress the advancements we have made. We have seen great leaps in natural language processing, for example, which helps streamline search engines and research software (like Casetext), and games have been dominated by AI in recent years (consider AlphaGo and AlphaStar, for example). That’s really cool and should be celebrated.

Let’s move away from the hype.