This is the final blog in a four-part series detailing the components necessary for AI success. You can read my earlier posts about cultural willingness, data and infrastructure readiness, and workforce skilling before or after considering these four steps toward AI ethics, risk, and compliance.
Successful AI adoption requires forethought and preparation. Although AI itself requires a culture open to experimentation and learning from mistakes, when it comes to ethics, risk, and compliance, you can’t simply wing it.
Allocating resources and planning for ethics is often an afterthought, met with a certain amount of resistance in many organizations. Ethics often takes a backseat to revenue generation and therefore isn’t prioritized. Furthermore, ethics has not historically been embedded in the computer and technology industries to the extent that it has been in medicine or law. There are no widely adopted ethical guidelines for AI. No training. No oaths. As an industry, we haven’t even been able to identify an example that could be considered the gold standard (for more on this point, simply search for the latest on Google’s AI Ethics Advisory Council, or look here). Fortunately, there are pockets of innovation around AI ethics frameworks, models, and principles that may lead to a widely adopted standard, particularly through the efforts of the NIST AI program.
In terms of risk and compliance, the issues that arise early on are often so thorny that they can derail or effectively end AI initiatives. Traditional organizational structures have a tough time bearing the risk, and the lack of proven ROI, that come with emerging AI. Many organizations seem to be waiting for others to pave the way, navigate the pitfalls, and share best practices, a well-practiced strategy that has lowered the risk of adopting other technologies.
So, what’s the trick for getting past “go”? My advice is simple: consider ethics, risk, and compliance before you adopt AI. As with any other business activity, start with a focus on outcomes: define the goals, what you hope to achieve, and what the desired outcome should “look like” (or, at least, define some metrics to measure performance against those goals). Here are four ways to build ethics into your AI initiatives from the beginning:
Prioritize ethics early in the adoption process.
As AI use increases, the real ethical challenges become clear. The issues span at least two dimensions that need to be addressed early: scope and impact. For example, a biased algorithm might affect every end user (high scope), but the impact may range from minor to very significant. And it is no overstatement to say that in some applications people’s lives will be at stake (high impact), even if only for a tiny fraction of the population (low scope).
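To make those two dimensions actionable, some teams encode them directly in a triage rule. Below is a minimal sketch in Python; the level names, the tier boundaries, and the rule that life-critical impact always escalates are illustrative assumptions, not an established standard.

```python
# Illustrative risk-tiering sketch along the two dimensions above:
# scope (how many people are affected) and impact (how severely).
# Level names and tier boundaries are hypothetical examples.

SCOPE_LEVELS = {"few_users": 1, "many_users": 2, "all_users": 3}
IMPACT_LEVELS = {"minor": 1, "significant": 2, "lives_at_stake": 3}

def risk_tier(scope: str, impact: str) -> str:
    """Map a (scope, impact) pair onto a coarse review tier."""
    if IMPACT_LEVELS[impact] == 3:
        # Life-critical impact escalates to the top tier even when
        # only a tiny fraction of the population is affected.
        return "high: full ethics review before rollout"
    score = SCOPE_LEVELS[scope] * IMPACT_LEVELS[impact]
    if score >= 6:
        return "high: full ethics review before rollout"
    if score >= 3:
        return "medium: documented risk assessment required"
    return "low: standard engineering review"

print(risk_tier("few_users", "lives_at_stake"))  # high
print(risk_tier("all_users", "minor"))           # medium
```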
Many ethical issues begin with the data itself, so ethics must be considered from day one. Prioritizing explainability, accountability, transparency, and robustness within system requirements helps ensure that AI tools are built and used in a manner consistent with organizational values, and it helps the organization prepare proactively to mitigate risks when they arise. Organizations that don’t consider the ethical implications of their data usage and AI solutions risk a public relations catastrophe, stemming in part from media coverage of high-profile missteps.
Although concerns around AI leading to massive job losses are exaggerated and misplaced (i.e., the real issue is job transformation), there are legitimate concerns around AI’s ability to amplify bias in datasets in a way that inflicts harm upon already marginalized groups of people, at a rapid pace and on a large scale. The dual-use nature of AI technology (for both civilian and military applications) also raises concerns about how a seemingly innocuous solution in one domain could be used for nefarious purposes in the other.
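Bias amplification is easier to catch when it is measured rather than debated. Here is a minimal sketch of one simple fairness check, the demographic parity gap between two groups; real audits combine several complementary metrics, and the 0.1 review threshold below is an assumption for illustration only.

```python
# Sketch of one simple fairness check: the demographic parity gap,
# i.e., the difference in favorable-decision rates between groups.
# Real fairness audits use several metrics; the 0.1 threshold here
# is a hypothetical example, not a recommended standard.

def demographic_parity_gap(decisions, groups):
    """Max difference in positive-decision rates across groups."""
    rates = {}
    for g in set(groups):
        group_decisions = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(group_decisions) / len(group_decisions)
    return max(rates.values()) - min(rates.values())

# Toy data: 1 = favorable decision (e.g., application approved).
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"parity gap: {gap:.2f}")  # 0.75 vs. 0.25 -> gap of 0.50
if gap > 0.1:  # hypothetical review threshold
    print("flag for ethics review before scaling this system up")
```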
Be proactive in AI risk management.
Because ethical concerns and risks should be identified and raised long before rollout (as stated above), organizations must articulate a code of ethics and governing principles to guide AI development and implementation efforts, perhaps by standing up a formal risk review and assessment matrix for AI, or by incorporating AI into existing risk management business processes. Forward-looking organizations have established the role of chief ethics officer. This role, often given a broad remit, can help steer organizational values and oversee everything from industry regulations to ensuring that AI algorithms are unbiased. Establishing Ethics Review Boards, typically narrower in scope, likewise holds the organization and its stakeholders accountable. Ideally, such Boards will include voting members from outside the organization, who are less likely to place revenue concerns above ethics concerns. By identifying and communicating known risks through risk assessment frameworks, organizations can stay true to their values and historical roots while still moving forward with AI.
Build trustworthy, transparent and explainable systems.
AI systems should be built with these characteristics in mind: they should be trustworthy, transparent, and explainable. Ideally, objective metrics should be defined to measure performance against each of those characteristics. While the acceptable level on each metric will differ by application, defining them ensures that systems can be audited and that their recommendations can be trusted.
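What might an objective explainability metric look like in practice? One common proxy is surrogate fidelity: train a simple, interpretable model to mimic the production model, then measure how often the two agree. The sketch below uses scikit-learn; the stand-in models, the synthetic data, and the 90% fidelity target are all illustrative assumptions.

```python
# Sketch of one auditable explainability metric: surrogate fidelity.
# A shallow decision tree is trained to mimic a "black box" model,
# and agreement between the two is reported. The models, data, and
# the 0.90 fidelity target are illustrative choices only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Stand-in for the production model whose decisions need explaining.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Interpretable surrogate trained on the black box's own predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.1%}")
if fidelity < 0.90:  # hypothetical acceptability threshold
    print("warning: explanations may not reflect the model's behavior")
```

A high-fidelity surrogate gives auditors a human-readable approximation of the system’s decision logic; a low score signals that the system’s recommendations cannot yet be explained with confidence.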
The rise of adversarial AI and deepfakes highlights the importance of ensuring transparency and security in AI systems. Many experts now recognize that the potential scale of impact from AI (and adversarial AI in particular) is greater than it has ever been, with some implementations deemed too dangerous to release. Although AI holds great potential to empower organizations, for example by enabling better diagnoses in healthcare, it is crucial that how and why a system arrives at its recommendations can be fully understood, explained, and trusted.
Ensure measured, monitored roll-outs with robust governance and oversight, guided by clearly documented processes.
At the most basic level, documenting processes and monitoring roll-outs are things that many organizations are already comfortable doing. Applying this rigor to AI applications and ensuring that these processes are regularly reviewed, tested and socialized among a wide stakeholder audience must be part of a strong, multifaceted approach to ethics and risk.
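As one concrete example of what a monitored roll-out can include, teams often compare the data a deployed model is seeing against its training baseline. The sketch below applies SciPy’s two-sample Kolmogorov-Smirnov test to a single feature; the synthetic data, the 0.01 significance level, and the alert action are assumptions for illustration.

```python
# Sketch of a basic roll-out monitor: test whether a feature's live
# distribution has drifted away from its training baseline. The data
# is synthetic, and the 0.01 alert threshold is a made-up example.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # baseline
live_feature = rng.normal(loc=0.4, scale=1.0, size=1000)      # shifted

statistic, p_value = ks_2samp(training_feature, live_feature)
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.4f}")

if p_value < 0.01:  # hypothetical alert threshold
    print("drift detected: trigger the documented review process")
```

Checks like this slot naturally into the review and testing processes described above, turning monitoring from a vague aspiration into a documented, repeatable step.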
All that has been written about ethics and AI could occupy us for weeks. There are debates, concerns and fears that we all need to address and work through. I touched on this in a blog earlier this spring: Why we should take a historical—not hysterical—view of automation in the workplace. AI is an evolution but also a revolution for organizations, similar in scope to previous industrial revolutions. Conversations about minimizing risk, creating standards and acting ethically help all of us to be more thoughtful about adopting and utilizing AI in our future.
Want to keep reading? Try In Favor of Developing Ethical Best Practices in AI Research or Assessing Ethical Risks of Artificial Intelligence. As you read, it helps to think of AI as a grand experiment on all of humanity. Within that context, perhaps a good baseline for AI ethics can be found in the three principles of human subjects research (HSR). These are:
- Respect for persons (autonomy, informed consent, protection of vulnerable groups)
- Beneficence (do no harm, maximize benefits, minimize risk)
- Justice (fair and equitable distribution of risks, burdens and benefits)
Read more about that perspective in “Why We Worry About the Ethics of Artificial Intelligence”.
Dr. Kirk Borne is a GovLoop Featured Contributor. He has been the Principal Data Scientist and an Executive Advisor at the management consulting firm Booz Allen Hamilton since 2015. In those roles, he focuses on applications of data science, data management, machine learning, and AI across a variety of disciplines. You can read his posts here.