How to Put AI Ethics into Practice at Your Company

In recent years, companies have shifted from simply considering the technical implications of their technology to acknowledging and advancing solutions that ensure their technologies, including artificial intelligence (AI), act responsibly. According to the IBM Institute for Business Value (IBV) AI Ethics survey, which polled 1,200 executives in 22 countries across 22 industries, nearly 80 percent of CEOs are prepared to take action to increase AI accountability, up from only 20 percent in 2018. Awareness of the importance of AI ethics is also notably extending into the boardroom: 80 percent of respondents in this year’s survey pointed to a non-technical executive as the primary “champion” for AI ethics, compared to 15 percent in 2018.

This is encouraging progress. Yet much work remains to ensure the benefits of AI support all individuals equally. Today, roughly 85 percent of businesses say it is important to address AI ethics, according to the IBV study. However, only 40 percent of consumers say they trust companies to be responsible in developing new AI applications – the same proportion who said they trusted companies in 2018, nearly four years ago.

The benefits of AI continue to grow.

AI has transformative potential. In 2021, “AI augmentation,” defined as the “human-centered partnership model of people and AI working together,” created an estimated $2.9 trillion in business value, according to Gartner, and saved an estimated 6.2 billion hours of worker productivity. As investment and adoption continue to grow, along with the development of no-code and low-code solutions that allow people to customize their own AI without extensive technical expertise, AI will continue to become more accessible and impactful to the masses. AI can augment human capacity in numerous areas, from research and analysis to basic daily tasks like managing our calendars and finances.

AI also allows us to think bigger about what’s possible. It took scientists more than 30 years to manually map the 3.1 billion base pairs of the human genome – a critical project for understanding how to treat complex conditions and preserve human life. Now, by combining AI and human intelligence, we can streamline and expedite similar processes and more successfully address today’s most pressing challenges.

And, we have already seen AI unlock significant breakthroughs: Last year, researchers at IBM, Oxford, Cambridge, and the National Physical Laboratory showed how AI-designed antimicrobial peptides interact with computational models of a cellular membrane, a development that could have wide implications for drug discovery.

Ensuring AI is trustworthy is a balancing act – but a worthy one.

While the promises of AI are great, so too are the pitfalls if we don’t ensure it is trustworthy – i.e., that it’s fair, explainable, transparent, robust, and respectful of our data and insights. The definition of “untrustworthy AI” may be obvious to most: discriminatory, opaque, misused, and otherwise falling short of general expectations of trust. Yet advancing trustworthy AI can remain challenging given the pragmatic balancing act sometimes required: for example, between “explainability” (the ability to understand the rationale behind an AI algorithm’s results) and “robustness” (an algorithm’s accuracy in arriving at an outcome).
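A toy sketch can make this tradeoff concrete. The example below is illustrative and not from the survey: on synthetic “loan approval” data, a fully transparent one-line rule is easy to explain but less accurate, while a memorization-style nearest-neighbor model is more accurate but can only justify a decision as “it resembles a past case.”

```python
import random

random.seed(0)

# Synthetic data: (income, debt) pairs in [0, 1); the ground-truth
# approval rule is mildly nonlinear. All names here are hypothetical.
def true_label(income, debt):
    return income > 0.5 and (debt < 0.4 or income > 0.9)

points = [(random.random(), random.random()) for _ in range(400)]
labels = [true_label(i, d) for i, d in points]
train, y_train = points[:300], labels[:300]
test, y_test = points[300:], labels[300:]

def simple_rule(income, debt):
    # Fully explainable: "approve if income is above the median."
    return income > 0.5

def nearest_neighbor(income, debt):
    # Typically more accurate here, but its only rationale is
    # "this applicant looks like training case j."
    j = min(range(len(train)),
            key=lambda k: (train[k][0] - income) ** 2 +
                          (train[k][1] - debt) ** 2)
    return y_train[j]

def accuracy(model):
    hits = sum(model(i, d) == y for (i, d), y in zip(test, y_test))
    return hits / len(test)

print(f"simple rule:      {accuracy(simple_rule):.2f}")
print(f"nearest neighbor: {accuracy(nearest_neighbor):.2f}")
```

The transparent rule misses the nonlinear corner of the true decision boundary, while the opaque model captures it; neither dominates on both explainability and robustness at once, which is exactly the balancing act described above.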

Organizations can no longer adopt AI without also addressing these tradeoffs and other ethical issues. The question is whether they confront them strategically, purposefully, and thoughtfully – or not. It certainly won’t be easy. But there are concrete steps businesses and organizations can start taking right now to move in the right direction.

Set AI ethics practices in the proper strategic context.

As with any wide-ranging initiative, implementing AI ethics begins with determining the right strategy for success. Consider how critical building trustworthy AI is to business strategy and objectives: What key value creators could be accelerated with AI? How will success be measured?

It’s also important to consider the role of AI innovation in an organization’s growth strategy and approach: Is the organization a “trailblazer” that constantly pushes the boundaries of putting new technology into practice, or a “fast follower” that prefers more tested approaches? The answers to these questions will help identify and codify key AI ethics principles and determine the human + machine balance in the organization.

Establish a governance approach to implement AI ethics.

The next step is for a business to establish its own AI ethics governance framework. This starts with incorporating the full range of perspectives (e.g., business leaders, clients, customers, government officials, and society at large) on topics such as privacy, robustness, fairness, explainability, and transparency. It also means ensuring a diversity of identity and perspective: IBM’s new research shows there are 5.5 times fewer women on AI teams than in the organization overall, along with 4 times fewer LGBT+ individuals and 1.7 times fewer Black, Indigenous, and People of Color (BIPOC).

Establishing the right governance framework also requires businesses to identify their own AI data risk profile and risk threshold, and to build the internal structure, policies, processes, and ultimately systems for monitoring their AI ethics both internally and externally.
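What “monitoring” can mean in practice: one common fairness metric a governance process might track is the demographic parity ratio (the “four-fifths rule” used in U.S. employment contexts). The sketch below is a minimal, hypothetical example – the data, group names, and 0.8 threshold are illustrative, not a prescribed IBM methodology.

```python
def parity_ratio(decisions):
    """decisions: list of (group, approved) pairs, approved in {0, 1}.
    Returns min approval rate across groups divided by max approval rate."""
    counts = {}
    for group, approved in decisions:
        n, k = counts.get(group, (0, 0))
        counts[group] = (n + 1, k + approved)
    rates = {g: k / n for g, (n, k) in counts.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample: group A approved 3/4, group B approved 2/4.
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 1)]

ratio = parity_ratio(decisions)
print(f"parity ratio: {ratio:.2f}")  # 0.50 / 0.75 ≈ 0.67
if ratio < 0.8:
    print("below the 0.8 threshold -- flag for review")
```

A real monitoring system would compute metrics like this continuously over production decisions and route violations into the governance process, but even this small check shows how an abstract principle (“fairness”) becomes a reviewable number.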

Integrate ethics into the AI lifecycle.

Finally, AI ethics is not a “set it and forget it” process. There are a number of additional steps to take once an organization establishes its governance and maintenance system. For one, it must continue engaging with its internal and external stakeholders on the topic, as well as capture, report and review compliance data. It must also drive and support education and diversity efforts for internal teams, and define integrated methodologies and toolkits that champion the principles of AI ethics.

AI will only find its way further into our everyday lives – it should be advanced responsibly and in a way that ensures ethical principles are at the technology’s core. Thankfully, the playbook for AI ethics is becoming clearer, more practical, and more tangible. But it’s on all of us – across industry, government, research and academia, and the whole of society – to champion it.
