

AI - the bluffer’s guide to what it means in HR

A lot of people throw around technical buzzphrases like AI, Machine Learning and Blockchain. Here's what's actually meant by each of those, and what they are and are not actually good for.

Recently on LinkedIn, I ran across a presentation by Arvind Narayanan, an Associate Professor of Computer Science at Princeton, entitled “How to recognise AI snake oil”. He talks about AI allegedly being used in a bunch of HR processes (e.g. to assess body language in video interviews) before describing these as bogus, and asking “How did this happen? Why are HR departments apparently so gullible?”

I think there’s a real problem out there (not just in HR): a lot of people throw around technical buzzphrases like AI, Machine Learning and Blockchain without any great confidence - or, even worse, with misplaced confidence - about exactly what is meant by any of them, and more importantly what those things are and are not actually good for. If you’re not sure, I’m going to do my best to help. This first blogpost is going to focus on AI.

Definition:

AI is any computer mimicking human (or occasionally animal) intelligence. It’s automation for the brain, where previous waves of automation replaced physical actions. It’s not even slightly a new idea, but the range of what’s possible keeps changing - partly because there are actual technical breakthroughs, but equally because increasingly powerful computers have made the same old code run a lot faster, wider or bigger.

Examples:

The earliest “AI” I can remember consciously interacting with was Microsoft Clippy, who would pop up and ask me if I would like help with the letter I was writing in MS Word. There was no NEED for this to be delivered as an AI, but Microsoft decided (wrongly, as it turns out) that we’d respond more positively to the software “talking” to us through anthropomorphic stationery making suggestions, rather than leaving all that functionality accessible only through menus and templates.

For lots of people, the first AI they consciously encounter is a computer opponent in a game, and indeed this is also where lots of the research into AI takes place. I was working at IBM in the Deep Blue era, just after it had beaten Kasparov at chess, and indeed I was taught how to code by learning to program computer opponents in increasingly complex games - starting with noughts and crosses, then blackjack, and then the dizzying complexity of Connect 4. (These games are all “solved”, in that you can program a perfect player. What can and can’t be “solved” is one of the most important questions in mathematics and computer science. If you’re interested: https://en.wikipedia.org/wiki/P_versus_NP_problem)
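To show what “solved” means in practice, here’s a minimal sketch of a perfect noughts and crosses player - mine, for illustration, not the actual IBM training exercise. It uses minimax, the standard recipe for games this small: score every reachable position assuming both sides play perfectly, then pick the best move.

```python
# A minimal "perfect player" for noughts and crosses using minimax:
# recursively score every reachable position, assume both sides play
# optimally, and pick the best move for whoever is to act.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position for X (maximiser) vs O (minimiser)."""
    w = winner(board)
    if w:
        return (1, None) if w == "X" else (-1, None)
    if " " not in board:
        return 0, None                       # board full: a draw
    results = []
    for i, cell in enumerate(board):
        if cell == " ":
            board[i] = player
            score, _ = minimax(board, "O" if player == "X" else "X")
            board[i] = " "
            results.append((score, i))
    return max(results) if player == "X" else min(results)

score, move = minimax(list(" " * 9), "X")
print(score, move)   # score 0: played perfectly, the game is always a draw
```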

These days, “AI” crops up in all sorts of environments - chatbots, automated advisors - even my CEO’s PA is an AI.

What it’s good for and what it’s not:

AI is great at anything where we can explain exactly (and therefore code exactly) how a human is doing it - even if (and perhaps especially if) it would take a human a long time to do it.

For some tasks, it’s very hard to explain how we do them at all. If I showed you a picture of a dog or a cat and asked you which it was, you’d find that easy - whatever the breed, even if you could only see part of it, at a funny angle, with an odd Instagram filter. But you couldn’t explain very precisely how you’re doing it, and so if you try coding it, it’s going to get insanely complicated. I would back an eight-year-old to beat a supercomputer at this game.
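To see why the hand-written-rules approach collapses - and what people do instead - here’s a deliberately toy sketch. The features and numbers are entirely invented for illustration; real vision systems learn from pixels, not three hand-picked measurements:

```python
# Illustrative only: two approaches to "is this a dog or a cat?".
# The features are made-up stand-ins for what a real system would
# extract from pixels - don't take the numbers seriously.

# Approach 1: hand-written rules. It collapses almost immediately -
# every rule has exceptions (Chihuahuas, Maine Coons, funny angles...).
def is_dog_by_rules(weight_kg, ear_floppiness, snout_length_cm):
    if weight_kg > 10:
        return True              # ...except big cats and small dogs
    if ear_floppiness > 0.5:
        return True              # ...except Scottish Folds
    return snout_length_cm > 6   # and so on, forever

# Approach 2: don't write the rules - learn them from labelled examples.
from sklearn.linear_model import LogisticRegression

X = [[30, 0.9, 10], [4, 0.1, 3], [8, 0.8, 7], [5, 0.2, 4]]  # toy features
y = [1, 0, 1, 0]                                            # 1 = dog, 0 = cat

model = LogisticRegression().fit(X, y)
print(model.predict([[6, 0.7, 6]]))  # the model, not the coder, draws the line
```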

Some tasks AIs can do a LOT better than us - anything that requires something to be done repeatedly, rapidly and consistently, many, many times. This is a big part of why chess computers work: they can explore every plausible* move and its consequences far faster than any human ever could. (*Please note, they don’t explore every possible move and its consequences - that was one of the big breakthroughs.)
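That asterisk deserves a sketch of its own. One classic technique for skipping moves that can’t matter is alpha-beta pruning - this is my generic illustration, not Deep Blue’s actual code, and it re-uses the winner() helper from the noughts and crosses sketch above:

```python
# The same perfect player, but pruned: stop exploring a branch as soon
# as we can prove the opponent would never let the game get there.
# Gives the same answer as full minimax while visiting far fewer positions.

def alphabeta(board, player, alpha=-2, beta=2):
    w = winner(board)                  # winner() from the sketch above
    if w:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0                       # board full: a draw
    for i, cell in enumerate(board):
        if cell == " ":
            board[i] = player
            score = alphabeta(board, "O" if player == "X" else "X", alpha, beta)
            board[i] = " "
            if player == "X":
                alpha = max(alpha, score)
            else:
                beta = min(beta, score)
            if alpha >= beta:
                break                  # prune: this branch can't change the result
    return alpha if player == "X" else beta

print(alphabeta(list(" " * 9), "X"))   # still 0 - but vastly less work
```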

Lots of AI breakthroughs are about a computer doing the “thinking” better than us, but others are actually more to do with a computer harvesting the input data better or faster than a human could.

The big “trap” with AI is that it’s going to do whatever it does lots of times and very fast. If it’s working as intended, that’s awesome. If it contains an error, that’s a massive problem. It magnifies and replicates whatever it’s been told to do. Some HR AIs have turned out to have really big ethical problems - Amazon’s recruiting AI had an inadvertent gender bias, and thus became a massively effective misogynist: https://www.lexology.com/library/detail.aspx?g=0ac44465-1a28-4c5b-8c8b-d0b3142fa4ab
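To make that magnification mechanism concrete, here’s a made-up toy - not how Amazon’s system actually worked, just the general failure mode. If the historical decisions you train on are biased, the model learns that bias as if it were a genuine hiring signal, then applies it flawlessly, at scale, to every candidate:

```python
# Made-up example of a model inheriting bias from its training data.
# Feature 0 is a skill score; feature 1 encodes gender (1 = male).
# In this invented history, men were hired at lower skill levels than
# women - so the model dutifully learns gender as a "signal".

from sklearn.linear_model import LogisticRegression

X = [[7, 1], [5, 1], [4, 1], [9, 0], [8, 0], [5, 0], [3, 1], [4, 0]]
y = [1,      1,      1,      1,      0,      0,      0,      0]  # 1 = hired

model = LogisticRegression().fit(X, y)

# Two identical candidates, differing only in gender:
print(model.predict_proba([[6, 1]])[0][1])   # male applicant
print(model.predict_proba([[6, 0]])[0][1])   # female applicant - lower score
```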

What it means for HR:

Let’s start with the impact on HR as a discipline. If you can explain exactly how you want an HR process done by a human, it can probably be automated. Whether it actually will be depends on how difficult that is to do, but also on how profitable it would be for someone to bother - so the more widespread and generic activities are likely to be automated first. Many of you may already have AI functionality built into your HR systems, and if you don’t, you’re likely to in the near future. For you as individuals, this means it’s a good idea to get good at the things that are hard to automate - i.e. anything that’s more consultative than process-driven, anything that’s niche rather than universal.

If we look more widely across your businesses (let’s do some Strategic Workforce Planning), you can probably start estimating which talent groups are going to be affected - and again it’s that combo of what CAN be programmed, and what’s WORTH programming. So whilst diagnostic medicine is quite hard to program, IBM worked out there are a hell of a lot of highly paid jobs it could automate around the world, and a nice solid clinical evidence dataset it could throw into IBM Watson, so it made the cut. For HR, that might radically change which talent groups you need to hire, nurture, engage and retain.

But actually I think you might also have a bigger role here, as the “human” lobbying group when these discussions are happening within the company. Surely the decision on what to replace with an AI should involve a pretty heavyweight debate between the CIO/CTO and the CHRO? There are ethical elements as well as business ones.

My advice:

Be interested, keep reading - but if you don’t want to get mis-sold something that may cause you more problems than it solves, keep questioning it: both whether it will work, and whether it’s ethical. A lot of AI is wildly oversold, and predictions about how fast it’ll develop are made to impress investors (anyone enjoying their self-driving car? Its failure to appear is behind a lot of Uber’s problems right now, as they didn’t bank on still needing actual drivers in 2020).

One good source I keep an eye on is the Gartner Hype Cycle - note that in their view AI in Talent Acquisition hasn’t even reached the “Peak of Inflated Expectations” yet, and I’m inclined to agree - it’ll be in the “Trough of Disillusionment” in a few years’ time if they’re right… https://www.gartner.com/smarterwithgartner/4-key-trends-gartner-hype-cycle-human-capital-management-technology-2019/

And finally:

Well of course there’s an xkcd for this: https://xkcd.com/2237/


About Author

Marcus Body

Now working elsewhere. Shame really.
