In My Opinion

PRAGMATICALLY ETHICAL AI

By Cameron Davies, SVP, Corporate Decision Sciences, NBCUniversal Media

What does it mean to behave "ethically" as an individual? A universally accepted answer has been elusive for as long as humans have been sentient. Now we are faced with an even more daunting challenge: attempting to clearly define Ethical Machine Behavior. Determining one globally recognized standard probably isn't achievable, but there may be some principles we can apply to help us begin to consider and appropriately frame the issue.

Principle 1: Ethical AI ORIGINATES with Ethical Human Behavior

By far, the most common issue I have encountered in my career is model bias. This is a subject garnering a great deal of attention in the AI space. However, I have generally concluded it is less a technology concern than an evolutionary one. All humans are biased; these biases can take many forms, but not all of them lead to prejudice or discrimination. The same is true of AI. If we use censored, skewed, or manipulated data, the result will be a biased algorithm. This bias is a direct result of unethical or incompetent human behavior, not some esoteric algorithmic evil. We call these "Input-Driven Biases," and we have, in essence, corrupted our model. The human developer taught inappropriate behavior, and the model has no innate way of acting differently.

However, it is important to recognize that these types of biases are not always "unethical." At Disney, we developed a sophisticated system called Customer Centric Revenue Management, designed to optimize offers at the customer call center. The underlying recommendation engine balanced Desirability (customer desires) with Profitability (Disney's desires). We built the system with the ability to manipulate this balance, sometimes skewing toward Desirability and sometimes toward Profitability. We purposely introduced biases into the algorithm to manipulate outcomes. It wasn't unethical; it was pragmatic. (A simple sketch of such a tunable balance appears at the end of this piece.)

The question isn't whether your model has bias; it does, because humans do and the world does. The question is whether these intrinsic biases will result in decisions that "unfairly" harm an individual or group. That answer can and will evolve, so we need to stay vigilant, and we need to train modelers and end users on the scientific and societal issues at play. (One common vigilance check is also sketched at the end of this piece.)

Principle 2: AI Commonly REPLICATES Human Behavior

Sometimes we do everything right in model development and still face potential issues. This is particularly true with certain types of learning and reinforcement models. Quite often, the result of a complex modeling exercise is a highly descriptive and predictive construct of human behavior. We may not like that behavior, and we may not want to accelerate it, but the issue isn't the data or the math. One of the most public examples of this was Tay, the Microsoft AI Twitter chatbot released in 2016. Sixteen hours into production, Microsoft was forced to shut the system down when it began posting highly inappropriate and inflammatory racial tweets.
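The internals of the Disney system are not public, but the tunable Desirability/Profitability balance described above can be illustrated with a minimal sketch. Every name, weight, and offer here is a hypothetical illustration, not the actual Customer Centric Revenue Management engine:

```python
# Hypothetical sketch of a tunable desirability/profitability blend.
# All names, weights, and offers are illustrative assumptions, not the
# actual Disney Customer Centric Revenue Management system.
from dataclasses import dataclass

@dataclass
class Offer:
    name: str
    desirability: float   # predicted customer appeal, scaled 0..1
    profitability: float  # predicted business margin, scaled 0..1

def score(offer: Offer, alpha: float) -> float:
    """Blend the two objectives: alpha=1.0 favors the customer,
    alpha=0.0 favors the business. Adjusting alpha is exactly the
    kind of deliberate, purpose-built bias described above."""
    return alpha * offer.desirability + (1 - alpha) * offer.profitability

def recommend(offers: list[Offer], alpha: float) -> Offer:
    """Return the highest-scoring offer under the chosen balance."""
    return max(offers, key=lambda o: score(o, alpha))

offers = [
    Offer("Deluxe package", desirability=0.9, profitability=0.4),
    Offer("Standard package", desirability=0.6, profitability=0.7),
]
print(recommend(offers, alpha=0.8).name)  # skews toward Desirability
print(recommend(offers, alpha=0.2).name)  # skews toward Profitability
```

The point of the sketch is that the skew is explicit and auditable: the bias lives in a single, documented parameter rather than hidden in the data.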
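Staying vigilant about "unfair" harm implies actually measuring it. Below is a minimal sketch of one common check, the four-fifths adverse-impact ratio familiar from employment analytics. The groups, decisions, and threshold are illustrative assumptions, and a real fairness audit involves far more than this single metric:

```python
# A minimal sketch of the "four-fifths" adverse-impact check.
# The data and group labels are hypothetical; this is one screening
# metric, not a complete fairness audit.
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive decisions (1 = selected, 0 = not)."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower group selection rate to the higher one."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Hypothetical model decisions for two applicant groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 62.5% selected
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 25.0% selected

ratio = adverse_impact_ratio(group_a, group_b)
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the conventional four-fifths rule of thumb
    print("Potential unfair harm: investigate the model and its data.")
```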