The supposed power of AI will be undercut by human need for relevance

Got this quote here from the 19,133rd article I’ve seen about “the need to integrate AI and people”:

We saw this in another of the companies we spoke with when we learned that despite having integrated AI, managers were modifying the output values from the algorithm to fit their own expectations. Others in the same company would simply follow the old decision-making routine, altogether ignoring the data provided by algorithms. Therefore, human behavior is central to implementing AI.

You realize how fundamentally insane that is? A company probably put big money into advanced AI. Or, let’s be honest, they bought the cheapest shit possible, because cost containment is baked into the executive bonus structure. Either way, they invested some money in something related to AI. And what are the managers doing? Modifying output values to fit their own expectations. LOL. So basically lying, falsifying data, cooking the books? Cool, cool. And they’re also following the old decision-making routine, ignoring the data from the algorithms altogether. LOL. So basically throwing the AI investment money down the drain. Nice, nice.

Why does this happen?

Pretty simple, really. A lot of work is about the need to feel relevant and the need to have things to control. That’s the primary intersection. It’s also what tends to hold a lot of hiring back from being, well, effective. People see AI as a threat: “Will this thing take my job?” There are probably 10,000 or fewer true AI experts in the world, so most people have no idea what this stuff is. They just know the headlines they’ve seen and the all-caps posts from some old white guy on Facebook about how Ohio is being decimated by machines.

Don’t believe me on that part? Let me give you another quote from that article up top:

Top performing companies spent significant time communicating with employees and educating them, so that the human talent understood how machines made their jobs easier, not obsolete. To build trust in AI, it is imperative for leaders to communicate their vision transparently, explaining the goal, the changes needed, how it will be rolled out, and over what timeline. Beyond communication, leaders can inoculate their workforce against fear of AI by arranging for visits to other companies that have undergone similar transformations, providing a model for workers to see with their own eyes how the technology is used.

This is actually where the line around “top-performing companies” may form in the coming years. Read through that quote. Most leaders would do virtually none of the above. “Communicate their vision transparently?” There’s often a better chance of a hooker in their trunk than that happening. “How it will be rolled out?” “Over what timeline?” You think a penny-chasing executive even knows those things? The good ones do. Most don’t.

“Arranging visits to other companies that have undergone similar transformations?” Good idea on its face, but it would be defeated internally by one of three factors: (1) COVID; (2) “Our people are so busy”; or (3) the company you plan to visit not wanting outsiders to see its processes. It could happen (“FIELD TRIP!”), but it’s not likely, no.

“The tyranny of small decisions”

Just a brief pause on the hysterical irony of artificial intelligence.

People and money

A lot of people are not rich. Most are not, actually! Many people have a salary, a house value, maybe some inheritance, and a few “plays” somewhere. The inequality gaps are growing; no need to link out on that. I think we all know it and see it daily. So a job is very important to people. Look at some of the chaos around evictions after the COVID layoffs. It ain’t pretty; it might impact 33 million people in the USA and 50 million+ globally. That’s a small percentage of the global population, but it ain’t trifling. People need salary and income.

So here comes big bad AI, right? And we have all these virtue-signaling narratives around “It will create more jobs!” (it might) and “Capitalism always finds a place for man, too!” (does it?). Most people look at AI and their instant belief is “This thing could make me obsolete.” Ironically, the guys who sign your checks probably want that to be the outcome. Ha. HA! No, I’m kidding. It’s sad.

When the perception of AI is “threat to personal relevance,” which it currently is in the majority of cases, then what is Mikey Middle Manager going to do? Adjust the outputs. Ignore the data. Do what he always did anyway. It’s back to the “gut feel vs. data” discussion, a.k.a. the guesswork era.

Is it possible that the “promise” of tech won’t be realized because humans will undercut it in the name of relevance and getting checks? Or will executives just brush those humans aside? It’s an interesting question to consider over the next two decades.

Ted Bauer