Hyperight

The Time to Install Trust throughout the AI Life Cycle is Now: Interview with Seth Dobrin

Laptop with IBM Watson AI assistant

The development of AI must be both human-centric and trustworthy if organizations and societies want the technology to reach its full potential. Explainability, fairness, robustness, transparency and preserving privacy are the main focus areas of trustworthy AI for IBM.

“Laying out ethical principles for AI is one step, but truly putting those principles into practice is another. These focus areas help us ensure that ethical principles are being upheld in all AI technologies we deploy throughout the entire AI life cycle,” says Seth Dobrin, Global Chief AI Officer at IBM.

In this interview, he shares why companies must approach trustworthy AI strategically and why the time to act is now if businesses want to gain a competitive advantage.

Hyperight: “AI for Good and AI Ethics” is one of the stage themes at the 7th edition of the Data Innovation Summit. IBM has been working on trustworthy AI for some time now, so it is a pleasure to talk with you, Seth. To start with, can you tell us about IBM’s work with trustworthy AI and what we mean when we say trustworthy AI?

A picture of Seth Dobrin - IBM

Seth Dobrin: Developing AI that is both human-centric and trustworthy is the only way that this powerful technology can reach its full potential. Organizations around the world want to capitalize on the immense power of AI technologies, but many CEOs aren’t realizing the business outcomes they expected. In fact, Gartner says that only 18% of CEOs in 2021 saw AI as the most industry-impactful technology, down from 29% in 2020. I believe this gap between expectation and reality lies in the way AI is approached – as a technical challenge, rather than a human one.

Trustworthy AI is rooted in taking a human-centered approach. This holistic approach involves collaboration between people and technology from the beginning, meaning that stakeholders are involved in all stages of the AI life cycle – from development, to deployment, to monitoring. When humans are involved, models can be designed to support both the business strategy and the human needs of the organization.

IBM has established three key ethical principles on which all of our AI development is based. First, we believe that the purpose of AI is to augment – not replace – human expertise, judgment and decision-making. Second, we assert that data and insights generated from data belong to their creator, not their IT partner. Finally, we caution that powerful new technologies like AI must be transparent, explainable, and free of harmful and inappropriate bias in order for society to trust them. IBM focuses on five areas to frame ethical principles into practice – explainability, fairness, robustness, transparency, and preserving privacy.

With proper monitoring, governance, and guardrails in place, organizations can both ensure that AI solutions don’t harm people or increase business risk and that models perform accurately to deliver expected business outcomes.

Hyperight: Why is trustworthy AI important and necessary for data and AI innovation? Can we find a practical example of how trustworthy AI can help companies create and deploy human-centered and ethical AI?

Seth Dobrin: AI will not be accepted by society if it can’t be trusted – it’s that simple. We’ve seen far too many examples of an ungoverned AI model having negative effects on people – from AI discriminating against protected groups to virtual agents turning racist or misogynistic, to name a few.

As an example of trustworthy AI in practice, IBM worked with a major North American retail chain to build a trustworthy relationship between machines and people. Many of the company’s lines of business were running into the same problem: could they trust the decisions and recommendations that ML models were making about hiring?

They began asking: is the ML model, without anyone realizing it, biased against one gender versus another, or against older people versus younger? They also raised the issue of how to set guardrails around the behavior of the models as the data continually changes.

Using a set of guidelines and templates, we identified dozens of use cases and helped the company prioritize and implement them while ensuring fairness and trust in their ML models. Today, the company has established guardrails to proactively monitor for and mitigate bias in its hiring processes.
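To make the idea of such a guardrail concrete, here is a minimal sketch of the kind of check a monitoring job could run over a log of model recommendations. The column names, sample data, and the four-fifths threshold are illustrative assumptions, not the retailer’s actual implementation.

```python
import pandas as pd

# Hypothetical log of model recommendations; columns and values are invented.
decisions = pd.DataFrame({
    "gender":      ["F", "M", "M", "F", "M", "F", "M", "F"],
    "age_band":    ["<40", "<40", "40+", "40+", "<40", "<40", "40+", "40+"],
    "recommended": [1, 1, 0, 0, 1, 1, 1, 0],
})

def selection_rate_check(df, group_col, outcome_col="recommended", threshold=0.8):
    """Flag a group attribute if the lowest group's selection rate falls below
    `threshold` times the highest group's rate (the familiar four-fifths rule)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    ratio = rates.min() / rates.max()
    return ratio, ratio < threshold

for col in ["gender", "age_band"]:
    ratio, flagged = selection_rate_check(decisions, col)
    print(f"{col}: min/max selection-rate ratio = {ratio:.2f}, flagged = {flagged}")
```

A real deployment would run a check like this on a schedule against fresh decision logs and route flagged attributes to a human reviewer rather than acting automatically.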

IBM autonomous lab
Photo by IBM

Hyperight: You mentioned that the main focus areas of IBM’s trustworthy AI are explainability, fairness, robustness, transparency and preserving privacy. Can we unpack those a bit? What are they, and why are they important?

Seth Dobrin: Laying out ethical principles for AI is one step, but truly putting those principles into practice is another. These focus areas help us ensure that ethical principles are being upheld in all AI technologies we deploy, throughout the entire AI life cycle.

To start, privacy should be the backbone of all AI governance. People need to trust that their information is safe and secure, and AI systems need to safeguard data at all stages in the life cycle. There cannot be trust without privacy.

The next three focus areas in building AI trust involve creating shared understanding between the AI and the people it impacts. AI models should be explainable, meaning that people can understand how decisions are made and what determining factors were included; fair, meaning that the proper monitors and safeguards are in place to mitigate bias; and transparent, meaning that relevant information is shared with stakeholders to reinforce trust.

The final factor in building trustworthy AI is establishing robustness at scale. When AI is deployed in the real world, it needs to be able to guard against potential threats that could introduce bias or otherwise cause the AI to be less accurate. Establishing robustness at scale is the true pressure test to determine if AI is trustworthy.
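One way to pressure-test robustness in practice is to probe a model with adversarial examples. The interview does not prescribe a specific tool, but a minimal sketch using IBM’s open-source Adversarial Robustness Toolbox (ART) might look like the following; the dataset, model, and attack strength are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Illustrative data and model; any gradient-capable scikit-learn classifier works similarly.
X, y = load_iris(return_X_y=True)
X = MinMaxScaler().fit_transform(X)  # scale features into [0, 1] so clip_values is meaningful
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

# Generate adversarially perturbed test inputs with the Fast Gradient Method.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
X_test_adv = attack.generate(x=X_test)

clean_acc = np.mean(np.argmax(classifier.predict(X_test), axis=1) == y_test)
adv_acc = np.mean(np.argmax(classifier.predict(X_test_adv), axis=1) == y_test)
print(f"accuracy on clean inputs:       {clean_acc:.2f}")
print(f"accuracy on adversarial inputs: {adv_acc:.2f}")
```

The gap between the two accuracies is one simple, repeatable signal of how fragile a model is to small, deliberate perturbations.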

Hyperight: So, in general, for AI to start delivering value, enterprises need to have a holistic approach to data and AI technology. Trustworthy AI is built on governed data and AI tech. How is IBM approaching this?

Seth Dobrin: IBM has developed solutions and frameworks for trustworthy AI that help businesses instill trust throughout the AI life cycle, including auditing and mitigating risk, implementing governance frameworks, operationalizing AI, education and guidance, and organizational change. One example is IBM Cloud Pak for Data, which offers end-to-end data and AI governance capabilities to help enterprises establish trust across the entire AI life cycle. Another is AI FactSheets, a technology that, like nutrition labels on foods, provides greater transparency into decisions made by AI systems.
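To illustrate the “nutrition label” idea, here is a hypothetical, much-simplified factsheet rendered as plain JSON. The field names and values are assumptions chosen for readability, not the actual AI FactSheets schema.

```python
import json

# Hypothetical factsheet for a fictional model; every field and value is illustrative only.
factsheet = {
    "model_name": "candidate-screening-ranker",
    "purpose": "Rank applications for recruiter review; never auto-reject.",
    "intended_domain": "Internal hiring for retail store roles",
    "training_data": "Anonymized applications, 2018-2021, North America",
    "protected_attributes_checked": ["gender", "age_band"],
    "fairness_metrics": {"disparate_impact": 0.94, "statistical_parity_difference": -0.03},
    "performance": {"auc": 0.81, "evaluated_on": "held-out 2021 applications"},
    "known_limitations": "Not validated for roles outside North America.",
    "owner": "Talent Analytics team",
    "last_reviewed": "2022-03-01",
}

print(json.dumps(factsheet, indent=2))
```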

On the services front, we also began an AI Strategy practice, which helps C-suite executives develop an AI strategy that supports the broader corporate strategy. By crafting well-articulated AI intents, this approach enables organizations to use humans as a lens to discover and select AI use cases. In this way, the technology deployed returns tangible value to companies by creating value for people.

Additionally, we believe so strongly that it is critical for the industry to get this right that we have open-sourced the basis of our technology, the AI Fairness 360 toolkit, which is available for use under an open-source license through the Linux Foundation. Many of our biggest competitors use this technology in their tools.
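As an illustration of what the open-source toolkit offers, here is a minimal sketch that measures disparate impact on a toy dataset and then applies the Reweighing pre-processing algorithm to mitigate it. The column names and data are invented for the example.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy, invented data: label 'hired' (1 = hired) and protected attribute 'gender' (1 = privileged group).
df = pd.DataFrame({
    "hired":            [1, 1, 1, 0, 1, 0, 0, 0, 1, 0],
    "gender":           [1, 1, 1, 1, 0, 0, 0, 0, 1, 0],
    "years_experience": [5, 3, 6, 2, 7, 4, 3, 2, 4, 5],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)
privileged = [{"gender": 1}]
unprivileged = [{"gender": 0}]

metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("disparate impact before mitigation:", metric.disparate_impact())

# Reweighing adjusts instance weights so favorable outcomes are balanced across groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
reweighted = rw.fit_transform(dataset)

metric_rw = BinaryLabelDatasetMetric(reweighted, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("disparate impact after reweighing: ", metric_rw.disparate_impact())
```

A disparate impact close to 1.0 indicates similar favorable-outcome rates across groups; the reweighted dataset can then be used to train a downstream model.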

A phone with IBM Watson AI assistant
Photo by IBM

Hyperight: How can organizations start building this capability within their organization? What are the implementation steps?

Seth Dobrin: I think the answer to this question is as important for businesses with thousands of AI models as it is for those just starting out, as the levels of maturity within AI governance vary wildly from company to company. As I said previously, it’s important for companies to approach AI modeling strategically and with humans at the center, which makes it far more likely that governance and trust are embedded from the start.

Operating without any AI life cycle governance – what we sometimes call “level zero” – is typically where a company starts its AI journey. While this approach provides a lot of flexibility, it can introduce significant risks that are nearly impossible to even evaluate. Potential pitfalls include steep regulatory fines, damage to corporate reputation, or accusations of bias.

IBM has worked with many companies at all stages of AI deployment to help them implement steps toward AI governance. We’ve distilled these steps into a simple five-step framework for businesses to ensure trustworthy AI by helping define metrics, increase accountability, and eventually, fully automate the monitoring process.

At level one, AI policies are available to guide AI life cycle governance, but there is no enforcement of these policies. As organizations mature and move to level two, they develop a common set of metrics to govern the AI life cycle and deploy a monitoring tool to evaluate models, creating consistency across teams.

Level three focuses on creating a single data and AI catalog, where the enterprise can trace the full lineage of data, models, life cycle metrics, code pipelines and more, which allows companies to clearly articulate risk and evaluate the success of their AI strategy. Automation is the key factor as organizations move to level four, where information is automatically captured from the AI life cycle, significantly reducing the burden on data scientists.

Fully automated AI life cycle governance is the final level of maturity. Businesses that automatically enforce enterprise-wide AI policies can ensure those policies are applied consistently while minimizing risk.
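As a hypothetical illustration of what automated enforcement at this final level could look like, the sketch below checks a model’s reported life-cycle metrics against an enterprise policy before allowing deployment. The policy thresholds, metric names, and function are assumptions for the example, not IBM tooling.

```python
# Hypothetical enterprise AI policy: thresholds every model must satisfy before deployment.
POLICY = {
    "min_accuracy": 0.80,          # minimum held-out accuracy
    "min_disparate_impact": 0.80,  # four-fifths rule on the monitored protected attribute
    "max_drift_score": 0.10,       # maximum allowed input-distribution drift
}

def enforce_policy(metrics: dict, policy: dict = POLICY) -> list[str]:
    """Return a list of policy violations; an empty list means the model may be deployed."""
    violations = []
    if metrics["accuracy"] < policy["min_accuracy"]:
        violations.append(f"accuracy {metrics['accuracy']:.2f} below {policy['min_accuracy']}")
    if metrics["disparate_impact"] < policy["min_disparate_impact"]:
        violations.append(f"disparate impact {metrics['disparate_impact']:.2f} below {policy['min_disparate_impact']}")
    if metrics["drift_score"] > policy["max_drift_score"]:
        violations.append(f"drift {metrics['drift_score']:.2f} above {policy['max_drift_score']}")
    return violations

# Example run with invented metrics, as if captured automatically from the AI life cycle.
candidate = {"accuracy": 0.83, "disparate_impact": 0.72, "drift_score": 0.04}
problems = enforce_policy(candidate)
print("blocked:" if problems else "approved", problems)
```

In a fully automated setup, a gate like this would sit in the deployment pipeline and run again on every retrained or monitored model, so the same policy is enforced everywhere without manual review.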

The time for companies to act is now. A new survey from the IBM Institute for Business Value finds that 79% of CEOs surveyed are prepared to implement AI ethics practices, yet fewer than a quarter of organizations have acted on them. The urgency matters because the study data suggests that organizations that implement a broad AI ethics strategy, interwoven throughout business units, may have a competitive advantage moving forward. The study also provides recommended actions for business leaders and is a great resource for organizations looking to embed AI ethics into their practices.

Hyperight: What are the main challenges organizations face when approaching ethical and trustworthy AI?

Seth Dobrin: First and foremost, many organizations simply do not have a clear strategy in place for the use or governance of their data, or for how they might use AI. Too many organizations rush to implement AI technologies without clearly defining the real-world problems they are trying to solve or the business outcomes they expect from the technology.

Other common issues within organizations tend to involve a lack of data accessibility and standardization across teams. Data sits in silos, in different formats and from different sources, and it is becoming increasingly difficult for users to access it and make sense of it. As a result, organizations lose the value of their data, since they can’t align their data and AI strategy to the business strategy. This also has a big impact on efforts to make sure that all the data is secure and trustworthy.

Hyperight: Recently, there has been a lot of focus on AI, data governance and AI regulation. Many agree that AI regulation, although with the utmost positive intention, can slow down and even hinder AI development and innovation. What is your personal opinion on this topic?

Seth Dobrin: While companies need to deploy AI technologies to keep up with demands of the industry, they must simultaneously perform under society’s increased scrutiny. Governments around the world are introducing policies to curb the risk of AI-driven societal harm and impose steep penalties for organizations that violate the rules. 

Governments should establish guidelines that promote trustworthy AI technologies, but it has to be done without stifling innovation or limiting AI’s potential to solve real-world problems. So, instead of a “blanket ban,” it’s important that governments focus their efforts on regulating the types of AI with the most potential to do harm when left ungoverned. IBM’s global call to action on “Precision Regulation for AI” outlines a risk-based framework for industry and governments to work together in a system of co-regulation.

Our position focuses on fostering innovation while placing safeguards on problematic use cases of the technology. It offers three principles to guide governments and companies on AI regulation: propose different rules for different use cases, require transparency, and take a co-regulation approach.

We need to keep asking the critical questions about how AI is being implemented and regulated. It shouldn’t be thought of as “slowing us down,” but rather, starting the crucial conversations needed for AI to move forward.
