AI won’t save HR stuff, and might actually regress it

I’m not really a big fan of the narrative that AI will change every single thing about humanity and workplaces overnight, nor do I think that would really be possible. I think workplaces are still largely made up of humans, and will be for another 30 or so years until we can get this tech beyond just “Oh, my schedules work faster now.” So long as you have humans in the pipes of a place, well, there’s going to be politics and emotions and under-cutting and irrational decisions. I don’t think AI can flush all that overnight. I think it might make scheduling interviews easier, and might make benefits enrollment stuff easier, and might make calendar stuff easier. As for, like, “wholly re-inventing work?” I think that might take some time.

There’s a good article on Wharton’s website right now about all this. It makes a bunch of interesting points. I don’t want to summarize all of them, because that would be tedious as all hell and you’d stop reading this post, but there’s something near the bottom third of the article that’s interesting and that no one is really discussing much. To wit:

Here’s an example of employee reaction, and it comes to this explainability thing. Right now, supervisors have a fair amount of power in the workplace. They might control your schedules, for example. If you move strongly in this direction of algorithms, the algorithms start to make those decisions instead. Let’s say we figured out in our work group who has to work on Saturdays. The algorithm [creates a rotation], and I get two Saturdays in a row. Who do I complain to? Well, I can’t complain to my supervisor because she didn’t do it, right? Would she say, “I don’t know. Here’s the name of the software programmer in Silicon Valley who came up with the algorithm.” I can’t complain to my supervisor, nor is my supervisor able to do anything about it. A supervisor can’t say, “I understand this was not fair. But we’ll take care of you next week.”

Hmmm. OK.
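To make that “two Saturdays in a row” thing concrete, here’s a toy sketch of the failure mode. This is mine, not the article’s; every name and rule in it is hypothetical. A rotation can look perfectly “fair” within a single scheduling run and still double-book somebody whenever the roster order shifts between runs:

```python
# Hypothetical toy, not the article's actual system: a rotation that is
# "fair" within each scheduling run but keeps no memory between runs.

def make_rotation(roster, weeks):
    """Assign Saturdays round-robin: week i goes to roster[i % len(roster)]."""
    return [roster[i % len(roster)] for i in range(weeks)]

# Run 1: the scheduler pulls the roster in one order.
run1 = make_rotation(["Ana", "Ben", "Cal", "Dee"], 4)
print(run1)  # ['Ana', 'Ben', 'Cal', 'Dee'] -- Dee covers the last Saturday

# Run 2: next month the roster comes back in a different order
# (re-sorted by ID, seniority, whatever) and the rotation restarts at 0.
run2 = make_rotation(["Dee", "Ana", "Ben", "Cal"], 4)
print(run2)  # ['Dee', 'Ana', 'Ben', 'Cal'] -- Dee gets two Saturdays in a row

# Each run is "fair" in isolation; the unfairness lives between runs,
# and no supervisor made any decision you can appeal to.
```

Each run passes whatever fairness check it was given. The unfairness sits in the seams between runs, which is exactly where there’s no human to complain to.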

Why this is a problem

We’ve spent the last generation or so saying that “the heart and soul” of an organization is the relationship between employees and managers. Some of this is lip-service and bullshit, yes; many managers couldn’t care less about their employees. (We call these “absentee bosses.”) And there are so many ways an employee-manager relationship can be inherently doomed that they’re almost too many to list. (I tried once, though.) So there’s certainly some bullshit to the idea that “the manager-employee relationship drives the organization.”

But at the same time, it’s not all bullshit. Most people leave jobs because of bosses; that’s a virtually unavoidable fact. How your boss relates to you determines a lot about your happiness at a job. Bad managers, who unfortunately don’t get weeded out very often, create bad workplaces. In these ways, then, the manager-employee relationship is crucial. Each one contributes to overall organizational health.

So what you’re doing with AI, then, is breaking that manager-employee bond. Now the manager has less responsibility for everything related to the employee: scheduling, (maybe) performance reviews, (maybe) questions about career progression, and so on. The employee basically reports to an algorithm, and the manager probably dinks and dunks around all day trying to please his own bosses.

Now, of course, the dirty little secret…

… is that most managers want it this way, so long as their paychecks stay intact. Many people don’t want to manage; they do it because they got three kids and a mortgage along the way, and their company won’t let them make more money without taking on direct reports. That situation is far more common than “people who want to manage,” but we just don’t openly have this discussion very much.

And the second dirty little secret…

… is that since tech started scaling, what tech mostly does is create environments where bosses don’t need to talk to employees directly. Bosses can hide. Platforms. Applications. “Hey, did I see this in Asana or something?” It all stands to reduce the number of conversations two people can (or need to) have, which blurs any shared sense of what’s a priority, especially if you’re juggling stuff across 12 platforms. And as that clarity declines, organizational trust declines, and you all start working at places where the left hand has no idea what the right hand is doing.

You know what tends to happen in those places? A few sales guys do badly for a quarter or two, and the execs panic and lay people off. “We got all this tech, right? Can the tech do the work? The tech doesn’t need insurance, right? Am I right about this stuff?”

Work can become a joke when we think tech will solve everything, and it’s especially a joke when we think that even though we’re not entirely sure what the problems are. That seems to be where AI resides in an HR context right now.

Oh, and a note on data silos

From that same Wharton article mentioned above:

They’re struggling mightily just to get data together to analyze, and this says a lot about the reality of trying to work with human resource data. Here is a typical problem: We have data on employee performance, and it’s filed in this dataset over here. We have data on hiring and the attributes of applicants in this dataset over here. But, by the way, as soon as somebody is hired, we tend to throw that data out. The biggest problem they’ve got is, can we get these datasets together to talk to each other? That is not so unusual, except we also run into stories about the person running this silo who doesn’t want to share the data with this group over here. It’s a data management exercise, but it’s also a political exercise internally.
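Since that quote is about mechanics as much as politics, here’s a minimal sketch of why “getting these datasets to talk to each other” fails in practice. Again, this is mine, not the article’s; the IDs and column names are all hypothetical:

```python
# Hypothetical toy of the classic HR join problem.
import pandas as pd

# Performance data lives in one silo, keyed by employee ID.
performance = pd.DataFrame({
    "employee_id": [101, 102, 103],
    "rating": [3.8, 2.9, 4.1],
})

# Hiring data lives in another silo, keyed by applicant ID, and (per the
# article) most of it gets thrown out at the point of hire anyway.
hiring = pd.DataFrame({
    "applicant_id": ["A-77", "A-91"],
    "interview_score": [88, 72],
})

# There's no shared key, so the join needs a crosswalk table that
# usually lives in someone else's silo -- if it exists at all.
crosswalk = pd.DataFrame({"applicant_id": ["A-77"], "employee_id": [101]})

joined = (hiring.merge(crosswalk, on="applicant_id")
                .merge(performance, on="employee_id"))
print(joined)  # one row survives out of three employees -- that's the problem
```

The merge itself is trivial; the crosswalk is the hard part, and whoever controls it controls the analysis. Which brings us back to politics.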

Remember above when I talked about places populated by humans being political? Yep. Who wants a data scientist to be more important than they are? No one, and especially not 50-year-old middle managers who have no other relevance in society apart from the work info they hoard in their G-Suite. This is one major reason “I trust my gut!” or “I know this space!” will continue to beat back every tech innovation we throw at work. People need to be relevant, y’all. They already fear the work is going away. You think they want to expedite their own departure as a W-2?

Work is still very psychological, and tech cannot fix all of that. In HR, which is often the least relevant department, tech might come for the whole package eventually, although that will take time. Until it does, we’re going to be running in circles on silo’ed data, managers caring even less than they do now, and all the chaos that solutions looking for problems tend to create. So, that should be fun!

Your take on AI and HR stuffs?

Ted Bauer