As attention around AI has grown, so have concerns about its potential downsides. The “black box” problem is one of the most prominent of these issues, especially when it comes to emerging AI regulations. When users can’t be sure how a model arrived at a specific decision, it’s difficult to trust it.
Regulatory frameworks around AI are still few and far between. In many cases, it’s up to specific developers to determine what they deem ethical and fair. At the same time, lawmakers have taken a rising interest in this technology. Legal requirements concerning AI are almost certain to emerge, so devs must focus on building compliant — which often means explainable — AI.
The Growth of AI Regulations
While the AI landscape is still the Wild West in many ways, that’s changing. The EU has proposed the first comprehensive AI regulation by a major authority in the appropriately named AI Act. As this law takes effect, it will likely inform legislation in other jurisdictions, similar to how the GDPR influenced broader data privacy laws.
The U.S. is experiencing a rise in AI regulatory conversations, too. At least 25 states have introduced bills addressing various AI-related concerns, and 18 have enacted them into law. No national law exists yet, but the White House has expressed interest in guiding safe, ethical AI development, which could mean formal regulations in the future.
Amid these changes, AI brands face rising pressure to adapt. Making adjustments without knowing what the final regulations will require is difficult, but getting ahead of the trend toward ethical and explainable AI is key to balancing compliance and competitiveness.
Why Explainability is Key to Compliance
These emerging AI regulations cover more than just model explainability, but it is a crucial factor in many of them. Even when a standard doesn’t explicitly require explainability, providing it anyway may help meet other requirements.
ISO 42001, the world’s first AI management system standard and thus a likely inspiration for legislation, emphasizes the need for explainability and transparency. It encourages companies to explain their AI-driven decisions, even when that isn’t a strict requirement. It does require regular reviews to ensure automated decision-making meets its intended use and guidelines, which explainable models make easier.
The EU AI Act requires “high-risk” AI systems to include activity logs to ensure the traceability of their results. While the language is slightly different, that requirement necessitates some degree of explainability. Similarly, the White House’s Blueprint for an AI Bill of Rights recommends accessible explanations of generated results to inform safer usage.
Complying with regulations like these will be difficult without an explainable model. Consequently, developers who want to avoid legal exposure should build explainability into their algorithms as such requirements become more common.
Explainability’s Impact on AI Development
Adapting to this changing environment presents both opportunities and challenges. Getting ahead of the trend and providing more transparent AI models could help enterprises minimize disruption as laws requiring transparency take effect. It could also boost consumer confidence, driving sales and loyalty.
On the flip side, ensuring explainability in AI is often tricky. Developers can use black box analysis tools to reveal how their models arrive at their decisions, but these aren’t 100% accurate. It’s often more time- and cost-effective to ensure transparency from the beginning, but that’s far from easy, too.
Coding explanations into neural networks involves a lot more manual development time. It may also lengthen training timelines, as more trial and error may be necessary to determine how these models actually work. Consequently, AI projects may get longer, more involved and expensive, compounding current issues like fast-moving markets and data worker shortages.
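As a rough sketch of the post-hoc route mentioned above, feature-attribution techniques such as permutation importance can probe a trained model from the outside. The example below uses scikit-learn’s permutation_importance on a stand-in classifier and dataset; the model, data and feature names are placeholders rather than a recommendation for any particular system.

# Hypothetical post-hoc explainability check using permutation importance.
# The dataset, model and feature names below are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in for a real dataset an AI product might use.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stand-in for an opaque, already-trained model.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure how much accuracy drops;
# larger drops suggest the model leans more heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: {result.importances_mean[idx]:.3f}")

Because scores like these come from perturbing inputs rather than reading the model’s internals, they approximate, rather than prove, how a decision was reached, which is why post-hoc tools alone may fall short of strict explainability requirements.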
There’s also the issue of competition. Explainability means revealing more of the inner workings of an AI model. That level of transparency can make it difficult to profit from a proprietary model, as competitors could more easily replicate systems.
How to Ensure and Balance AI Explainability
While challenges remain, ensuring explainability in AI is still essential amid growing regulations. Devs can do that and balance transparency with other concerns through the following best practices.
Consider the Level of Explainability Needed
First, businesses should determine what explainability means in their specific context. A financial AI assistant may only need to tell users which factors it weighs when assessing creditworthiness. A resume-judging AI needs more in-depth explanations about how it concludes one candidate is a better fit than another.
Regulations will likely leave space for this tiered approach to explainability. ISO 42001 is a risk-based standard, so lower-risk models don’t need to meet high expectations. Similarly, the EU AI Act differentiates between four risk levels, each with unique requirements.
Matching a model’s level of explainability to its risk and end use is beneficial in a few ways. First, devs can still build AI models with less time and money for applications with lower requirements. Second, it guides further development steps by making explainability standards more specific.
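One way to make that tiering concrete is to record the explainability artifacts a team commits to for each risk level, so later development steps can be checked against them. The mapping below is a hypothetical sketch; the tier names and requirements are illustrative and not drawn from any specific regulation.

# Hypothetical mapping from a model's assessed risk tier to the explainability
# artifacts the team commits to producing. Tiers and fields are illustrative only.
EXPLAINABILITY_TIERS = {
    "minimal": {"user_facing_summary": False, "activity_logging": False, "per_decision_attribution": False},
    "limited": {"user_facing_summary": True, "activity_logging": False, "per_decision_attribution": False},
    "high": {"user_facing_summary": True, "activity_logging": True, "per_decision_attribution": True},
}

def requirements_for(risk_tier: str) -> dict:
    # Fail loudly if a model is registered under an unknown risk tier.
    if risk_tier not in EXPLAINABILITY_TIERS:
        raise ValueError(f"Unknown risk tier: {risk_tier}")
    return EXPLAINABILITY_TIERS[risk_tier]

print(requirements_for("high"))

A checklist like this keeps lower-risk projects lightweight while documenting what higher-risk systems must provide.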
Emphasize Simplicity
AI developers should also keep things as simple as possible. The more complex a model is, the harder it will be to ensure explainability. Consequently, finding a way to serve the same purpose with a simpler algorithm will ease development timelines and complications.
Devs should also consider how simply their explanations come across. Many models can use accessible language and general descriptions of the factors they consider when making decisions. That accessibility is preferable from a regulatory perspective because it empowers the user. It also protects proprietary technology by not giving too in-depth a look at the model’s inner workings.
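As a minimal sketch of that kind of accessible output, the hypothetical helper below converts internal factor weights into a plain-language statement a user could read, without exposing the model itself; the factor names, descriptions and weights are invented for illustration.

# Hypothetical helper that turns internal factor weights into an accessible,
# user-facing explanation without revealing the model's inner workings.
FACTOR_DESCRIPTIONS = {
    "payment_history": "your record of on-time payments",
    "credit_utilization": "how much of your available credit you use",
    "account_age": "how long your accounts have been open",
}

def explain_decision(factor_weights: dict, top_n: int = 2) -> str:
    # Rank factors by how strongly they influenced this particular decision.
    ranked = sorted(factor_weights.items(), key=lambda kv: abs(kv[1]), reverse=True)
    reasons = [FACTOR_DESCRIPTIONS.get(name, name) for name, _ in ranked[:top_n]]
    return "This decision was influenced most by " + " and ".join(reasons) + "."

# The weights would come from the model's own attribution step.
print(explain_decision({"payment_history": 0.6, "credit_utilization": -0.3, "account_age": 0.1}))

Keeping the wording at this level gives users something actionable while the underlying weights and architecture stay private.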
Find Marketability Beyond the Model Itself
Higher transparency requires compromising on protecting what might otherwise be a trade secret. However, that doesn’t mean AI models can no longer be profitable. Consider how open source software is a multi-billion-dollar industry despite its code being open to anyone.
AI companies should consider what makes their product unique apart from the algorithm. Supporting software that makes a model easier to use or more adaptable is an excellent way to retain profitability despite a transparent algorithm. Alternatively, businesses may emphasize their services to help clients integrate AI technology.
AI Explainability is Difficult but Necessary
Explainability in AI is a complex subject, but developers in the industry must pursue it. Regulations are on the rise, and explainable models are far more likely to be compliant, even if laws don’t specifically require explainability.
Adapting to this trend may mean organizations have to rethink their value proposition and how they approach AI development. However, that shift is more than possible, and if firms tackle it well, they can deliver fair and profitable AI offerings that spur future growth.
About the Author
Devin Partida is the Editor-in-Chief of ReHack.com, and a freelance writer. Though she is interested in all kinds of technology topics, she has steadily increased her knowledge of niches such as AI, BizTech, FinTech, the IoT and cybersecurity.
For the newest insights in the world of data and AI, subscribe to Hyperight Premium. Stay ahead of the curve with exclusive content that will deepen your understanding of the evolving data landscape.