Having attended HRMAC’s Leadership Conference in June of 2018, I’d first like to acknowledge the organization’s leadership for presenting a thought-provoking and relevant topic to strategic HR leaders. There was so much to ponder, post-event, in examining the accelerating changes in business. In fact, the material I felt compelled to write about today has already changed since I began.

Contemplate, for a moment, our working world and the fact that the “H” in HR stands for “human.” The pace and direction of technology in the workplace are frenetic, so I’d like to restate some concerns about these changes, in hopes of creating an ongoing dialogue in the local HR community.

First off, a parable… In the very near future, scientists design and build the most advanced and powerful supercomputer ever devised, with the purpose of doing only one thing: solving for Pi. They choose this relatively benign undertaking to explore the potential of machine learning with a non-threatening task. Once turned on, the computer begins to diligently hum along, but quickly, its machine-learning abilities allow it to improve on the task. In short order, it begins to access other computer networks to speed up the process. Within days, the system has taken over all the computing power available in the United States. In less than a week, the computer takes over Earth and harnesses all the resources of our solar system. Ultimately, the task absorbs our galaxy, followed by the entire universe, all focused on the mission of solving for Pi. Yikes!

This parable points to a significant problem with algorithmically driven artificial intelligence, or AI. Any morality – ethics or concern for others – needs to be programmed. Even when ethics or social responsibility is built into systems, all that programming can’t help but reflect some of the beliefs and biases of those who wrote the code. The software world has a saying: “It’s good enough to ship.” The implication is that software is delivered once it is merely acceptable, with bugs to be fixed in later versions. This means there is a very real possibility that programming errors make it into the systems we rely on.
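To make that concrete, consider a minimal, hypothetical sketch of a candidate-screening rule. The fields, weights and rules below are invented purely for illustration; they are not drawn from any real vendor’s product. The point is that every “neutral” line reflects a choice made by whoever wrote it:

```python
# Hypothetical illustration: a "neutral" screening rule that quietly encodes bias.
# The fields, weights and cutoffs are invented for this example.

from dataclasses import dataclass

@dataclass
class Candidate:
    years_experience: float
    employment_gap_months: int  # e.g., caregiving, illness, military service
    graduation_year: int

def screen_score(c: Candidate) -> float:
    """Return a screening score; higher is 'better' by the coder's assumptions."""
    score = c.years_experience * 10
    # Assumption baked in by the developer: employment gaps signal low commitment.
    # In practice, this penalizes caregivers and others disproportionately.
    score -= c.employment_gap_months * 5
    # Another assumption: more recent graduates are more "current."
    # This quietly acts as an age proxy.
    score += max(0, c.graduation_year - 2000)
    return score

if __name__ == "__main__":
    a = Candidate(years_experience=12.0, employment_gap_months=18, graduation_year=1998)
    b = Candidate(years_experience=4.0, employment_gap_months=0, graduation_year=2016)
    print(screen_score(a), screen_score(b))  # 30.0 vs. 56.0: the rules favor b despite less experience
```

Nothing in that sketch is malicious; each rule probably felt reasonable to its author. That is exactly how bias ships.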

AI, robotics, blockchain technology and big data all have something in common: None of them inherently cares about humans. A computer has no conscience, so, with the possible exception of advanced machine learning, technology is inherently psychopathic. This is a critical factor to consider in the implementation of automation, AI, blockchain agreements and more.

As one of my fellow attendees correctly pointed out, “When we outsourced and offshored work, corporations and governments were not very effective in addressing the job losses created by the new paradigm.” For-profit companies are not designed to care for people; they are built to grow and become more profitable. We moved on, leaving many blue-collar workers behind. As the panel of experts who spoke that week made so apparent, these new technologies could do the same thing to whole new categories of jobs.

Finance, insurance, accounting and real estate all fall into the “Professional Services” category, and they are good examples of businesses where a large percentage of the workforce performs important, but routine, tasks. Until now, the confidentiality, accuracy and security of these tasks have required actual people to perform them; lawyers, CPAs, notaries public and researchers, for instance, have been safe up until recently. Blockchain technology changes that: it provides secure systems containing immutable data that anyone with access can verify, giving it the potential to eliminate layers of white-collar jobs. Poof! There go layers of tasks.
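For readers curious why that is, here is a minimal, simplified sketch of the core idea behind blockchain’s immutability: a toy hash chain written in Python, for illustration only and not any production blockchain. Each record embeds the digital fingerprint of the record before it, so altering any past entry is mechanically detectable by anyone holding a copy of the chain.

```python
# Toy illustration of why blockchain records are tamper-evident.
# This is a simplified hash chain, not a real blockchain implementation.

import hashlib
import json

def make_block(data: dict, prev_hash: str) -> dict:
    """Create a record that embeds the fingerprint of the previous record."""
    body = {"data": data, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def verify_chain(chain: list) -> bool:
    """Anyone with access can recompute the fingerprints and spot tampering."""
    prev_hash = "0" * 64
    for block in chain:
        body = {"data": block["data"], "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True

if __name__ == "__main__":
    chain, prev = [], "0" * 64
    for record in [{"deed": "123 Main St.", "owner": "Smith"},
                   {"deed": "123 Main St.", "owner": "Jones"}]:
        block = make_block(record, prev)
        chain.append(block)
        prev = block["hash"]
    print(verify_chain(chain))         # True: the chain checks out
    chain[0]["data"]["owner"] = "Doe"  # someone quietly edits an old record
    print(verify_chain(chain))         # False: the tampering is detected automatically
```

The verification is purely mechanical, which is precisely why the layers of people whose job is to check and certify records are the ones most exposed.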

Forbes recently published an article about software that replaces much of the researching, recruiting and scheduling of employment candidates previously performed by humans. Giant companies like Google are now doing at scale what smaller software startups implemented last year.

While the Harvard Business Review may cry out “Blow Up HR!”, it seems likely that our obligations will only become broader and more critical. If we are to effectively manage the impact of new technologies on the people who work with them, we will have to be at the forefront of these processes. Think about some of the issues that human resources must oversee: EEO-1, OFCCP, wrongful discharge, wage and hour claims, ADA, FMLA, OSHA, and more.

Companies exist to grow and maximize shareholder returns. Technologies have no inherent ethics and the companies that produce them are under tremendous pressure to be first to market. Where does that lead?

Even though significant, destabilizing technological events seem inevitable, it still takes time for them to become widespread. We could all see the irreversible trend of e-commerce, but it took over a decade to cause the big disruptions.

If we don’t attend to the need for unbiased, moral and humane elements in man-made intelligence, we are going to “solve for Pi” – and we won’t like the answer.


Rick Cobb (Truman) is the executive vice president of international at Challenger, Gray & Christmas.