JUNE 2021 | HR TECH OUTLOOK

PRAGMATICALLY ETHICAL AI

Mathematically, it is impressive how quickly the system began to learn and "optimally" respond. Socially and ethically, it is deeply concerning how easily internet trolls "brainwashed" and manipulated the system. Inflammatory clickbait headlines aside, Microsoft did NOT build a racist chatbot. They built impressive technology that was misused. That is often the risk with scientific advancement. The technology that allows me to efficiently navigate the globe is the same technology guiding ballistic missiles. The answer can't be a full stop on evolving technology, but it does require that we be even more thoughtful about applications and their corresponding implications.

Principle 3: Human Intelligence must help ELUCIDATE AI Behavior

Fifty years of behavioral economics research has taught us that most humans consistently underperform at making rational decisions. However, we are very good at rationalizing and developing ex post facto narratives. Said differently, we consistently make irrational decisions, but we are very good at justifying them. Currently, AI systems don't have the ability to formulate creative, self-justifying narratives. If an autonomous vehicle swerves to avoid a small animal and instead severely injures a child, there will be no penitent figure on the witness stand pleading their case to a jury of empathetic "peers".

When Apple Card offered higher credit limits to men than to women, public outrage was immediate and strong. To date, neither Apple nor Goldman Sachs has offered a clear explanation of why it happened. We don't know whether there were valid reasons or something more nefarious, and evidently neither do they, or perhaps they are unwilling to say. What we do know is that societal patience is waning. Consumers, legislators, and business leaders are demanding ever more transparency into machines' decision-making processes.
In contrast, our mathematical and technological approaches are growing more complex, making clarity even more difficult. Transparency, both before and after execution, will be key to acceptance and scale for these systems. We must get better at elucidating their results.

Many people in this space want to engage in deep theological discussions around the singularity and other esoteric ethical paradoxes. As entertaining as these discussions may be, they don't help us pragmatically solve the real business issues of today. These three principles can help us begin to establish a framework. Building on them, we need a fuller, more pragmatic framework. It is time for key industry stakeholders to come together and decide what a Responsible AI Structure could really look like.

As a starting point, we would propose an integrated structure to address five key areas. Only then can we get truly pragmatic in our ethical machine discussions.

1. Data Quality and Associated Data Rights
2. Model Clarity and Interpretability
3. Model Robustness and Stability
4. Model Bias and Fairness
5. Regulatory and Compliance Risks

[Figure: Responsible AI Structure. Data Quality and Rights: detect data drift, imbalance, and bias, and ensure appropriate usage and access compliance. Explainability and Interpretability: enable human users to interpret, understand, and explain machine-generated predictions. AI Robustness: ensure data security and monitor model evolution and output for appropriateness. AI Bias and Fairness: uncover bias in the underlying data, models, and development processes. AI Regulatory Compliance Risk: ensure that all development, usage, and model executions align with appropriate organizational and governmental standards.]