Three Conversations That Every Business Should Have About Ethics and AI

ISRDO Team 25 Nov, 2022 - in Business

In recent years, questions of AI ethics have entered the public discourse. The issues are well known, as are the outcomes everyone hopes to avoid. Nobody wants to release prejudiced or discriminatory AI. Nobody enjoys being the target of a privacy infringement lawsuit or a probe by regulators. But once we've established that biased, black-box AI that violates individual privacy is bad, what should we do? The question that plagues the minds of top-level executives is this: what can be done to mitigate these ethical risks?

While it's commendable to act swiftly in response to these concerns, there are no easy solutions given the complexity of machine learning, ethics, and the intersection of the two. Successful AI ethical risk mitigation requires a thorough grasp of the problems the organisation is actually addressing. Unfortunately, discussions about the moral implications of AI often feel abstract. The first step, therefore, is to articulate the issue in terms of specific, doable measures. Here's how to set the stage for discussions about AI ethics in a way that will help you determine what to do next.

Who Needs to Be Involved?

We advise forming a high-level working group to steer AI ethics within your company. Its discussions should be grounded in a solid understanding of the business's requirements, the team's technical abilities, and its operational know-how, so make sure the group has the right mix of skills, experience, and expertise. We suggest including engineers, legal/compliance specialists, ethicists, and business executives. Together, they identify and assess the ethical risks to the business at large, the industry it belongs to, and the company itself. After all, knowing both the nature of the problem and the constraints on prospective solutions is essential.

The technologist's expertise is needed to determine what is technically viable, both at the level of individual products and across the business. The reason: different strategies for mitigating ethical risks call for different technical resources and expertise. Understanding where your company currently stands technologically is useful for planning how to address any major shortcomings.

Professionals in law and compliance can check whether a proposed risk-mitigation strategy would conflict with, or duplicate, existing procedures. Legal considerations loom especially large because it is often unclear how current laws and regulations bear on new technology, or what new regulations or laws are in the pipeline.

An ethics advisor can help guarantee a methodical and complete examination of the ethical and reputational risks you face, not only those arising from developing and acquiring AI, but also those specific to your sector or company. Their relevance is amplified by the fact that compliance with regulations, which lag behind the technology, does not guarantee the ethical and reputational safety of your business.

At the end of the day, it's up to business executives to ensure that risks are mitigated in a way that doesn't compromise the company's needs or objectives. As long as there is human action, there can never be zero risk. Unnecessary risk, however, eats into profits, so it's important to select risk-mitigation techniques with an eye toward what's practically doable.

Here are the three conversations that will help move things along.

Once the group is assembled, it's time to have three very important conversations. The first centres on agreeing on the standards an AI ethical risk management programme should uphold. The second is about finding the gap between where the company stands and where it wants to be. The third focuses on identifying the root causes of those gaps so they can be fixed permanently.

1) Establish a code of conduct for the use of AI within your company.

Legal compliance (with anti-discrimination law, for instance) and regulatory compliance (with the GDPR and/or CCPA) should be taken as given in any discussion. Since the set of ethical risks is not identical to the set of legal and regulatory risks, the question becomes: what do we identify as the ethical risks for our industry and organisation, and where do we stand on them?

Quite a few weighty questions need answering. What, for instance, counts as a discriminatory model in the context of your organisation? Say your AI hiring programme is biased against women, though less so than human recruiters have been in the past. Is "better than humans have done in the last 10 years" an acceptable standard of fairness, or do you have another benchmark in mind that would work better? It's a question well known to those working on autonomous vehicles: "Do we deploy self-driving cars at scale when they are better than the average human driver, or when they are at least as good as (or better than) our top human drivers?"
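To make this conversation concrete, here is a minimal sketch of what comparing a model against a human baseline might look like, using a demographic-parity style selection-rate ratio. All group names and figures are hypothetical illustrations, not data from any real hiring programme.

```python
# A minimal sketch of the "better than our human baseline" benchmark
# discussed above. All figures are hypothetical illustrations.
import numpy as np

def selection_rate_ratio(decisions, group):
    """Ratio of selection rates between two groups (a demographic-parity
    style metric; 1.0 means parity)."""
    rate_women = decisions[group == "women"].mean()
    rate_men = decisions[group == "men"].mean()
    return rate_women / rate_men

rng = np.random.default_rng(0)
group = np.array(["women", "men"] * 500)

# Hypothetical historical human hiring decisions (more biased)...
human = (rng.random(1000) < np.where(group == "women", 0.20, 0.35)).astype(int)
# ...and the model's decisions (less biased, but still short of parity).
model = (rng.random(1000) < np.where(group == "women", 0.28, 0.35)).astype(int)

print(f"human baseline ratio: {selection_rate_ratio(human, group):.2f}")
print(f"model ratio:          {selection_rate_ratio(model, group):.2f}")
# The conversation above is about which standard to require:
# "better than the human baseline" or a stricter absolute threshold.
```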

Similar concerns arise with black-box models. What level of explainability does your company require? Is there ever a case in which you'd deploy a black box (if it performed well against your preferred benchmark, for example)? How do we tell whether explainable outputs are superfluous, nice to have, or essential for a given application?

Delving deeply into these issues lets you create frameworks and tools for your product teams and for the executives who approve product rollouts. You might, for instance, mandate a stringent ethical risk due diligence process for all products before they're released into the wild, or even during the initial phases of development. You can also establish rules for the use of black-box models, if and when you decide to permit them. Reaching a point where the bare ethical requirements for all AI can be spelled out is encouraging: those requirements help you win over clients and customers, and they show you did your homework if regulators ever investigate whether your business used a biased model.
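As one illustration, those bare requirements could even be encoded as a simple pre-release gate that product teams run as part of due diligence. The record fields and thresholds below are hypothetical sketches, not a standard; the 0.80 cut-off merely echoes the familiar four-fifths rule used in US employment law.

```python
# A minimal sketch of encoding "bare ethical requirements" as a
# pre-release gate. Field names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelReviewRecord:
    selection_rate_ratio: float          # fairness metric from due diligence
    is_black_box: bool                   # no per-prediction explanations
    black_box_exception_approved: bool   # signed off under your rules

MIN_RATIO = 0.80  # example threshold, analogous to the four-fifths rule

def passes_release_gate(record: ModelReviewRecord) -> bool:
    """Return True only if the model meets the minimum requirements."""
    if record.selection_rate_ratio < MIN_RATIO:
        return False
    if record.is_black_box and not record.black_box_exception_approved:
        return False
    return True

record = ModelReviewRecord(0.85, is_black_box=True,
                           black_box_exception_approved=False)
print(passes_release_gate(record))  # False: black-box use needs sign-off
```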

2) Evaluate the current state and determine where you fall short of meeting your standards.

Several technical "solutions" or "fixes" have been proposed to address the ethical concerns raised by AI. Data scientists can draw on a variety of tools, from established companies, innovative startups, and even non-profits, to evaluate the fairness of their models' predictions against quantitative metrics. Tools like LIME and SHAP help them explain which inputs drove a model's outputs. Practically no one, however, believes that these technical solutions, or any technological solution for that matter, will by themselves adequately mitigate the ethical risk and bring your company into full compliance with its AI ethics guidelines.
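For a concrete sense of what such tools do, here is a minimal SHAP sketch on a synthetic model. It assumes the shap and scikit-learn packages are installed; the data and features are invented for illustration only.

```python
# A minimal sketch of using SHAP (mentioned above) to attribute a
# model's predictions to its input features. Synthetic data throughout.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # e.g., three candidate-screening features
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-feature contributions for each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

print(shap_values.shape)  # (5, 3): one attribution per feature per row
# Large attributions on a sensitive or proxy feature are exactly the
# kind of finding that should trigger the qualitative review discussed
# in the questions below.
```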

Together, the members of your AI ethics team should establish where their expertise lies and where the gaps are. That raises the questions:

  • What, exactly, are we trying to prevent with these measures?
  • How can software or quantitative analysis reduce the likelihood of it happening?
  • What blind spots do the algorithms and quantitative analyses leave?
  • Who should conduct the qualitative evaluations that fill those gaps, when should they be conducted, and against what criteria?

An important but often overlooked part of these discussions is the level of technological maturity required to satisfy (some of) your ethical criteria (for example, whether you have the technical capacity to produce the explanations needed in the context of deep neural networks). Keeping an eye on what is technically feasible for your organisation is essential for fruitful discussions about which AI ethical risk management goals are achievable.

What quantitative solutions can product teams dovetail with existing practices? What is the organisation's capacity for qualitative assessments? How can the two be married effectively and seamlessly in your organisation? The answers to these questions provide clear guidance on next steps.

3) Decipher the intricate causes of the issues and implement practical fixes.

Many discussions of bias in AI jump straight to examples and then to the concept of "biased data sets". Often this leads to talk of "implicit bias" or "unconscious prejudice", psychological concepts that have little to do with biased data sets. But it's not enough to claim that "the models are trained on biased data sets" or that "the AI reflects our society's historical discriminatory actions and policies".

The issue isn't that these claims aren't (sometimes, often) true; it's that they can't be the complete picture. Understanding bias in AI means, for instance, examining the numerous sources of discriminatory outputs. The training data may be one of them, and understanding the potential sources of bias in those datasets is crucial for a number of reasons. But there are many other factors to consider, including the weighting of inputs, the placement of decision thresholds, and the selection of an objective function. In short, when discussing discriminatory algorithms, we need to dig into the root causes of the issue and how each one relates to a different form of risk mitigation.
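As a small illustration of one of those non-data factors, the sketch below shows how moving a decision threshold alone changes group-level outcomes, with the data and model held fixed. The scores and groups are synthetic and purely illustrative.

```python
# A minimal sketch of one non-data source of bias named above: where a
# decision threshold is placed. Scores and groups are synthetic.
import numpy as np

rng = np.random.default_rng(0)
group = np.array(["a", "b"] * 500)
# Same model, but its score distributions differ slightly by group.
scores = rng.normal(loc=np.where(group == "a", 0.52, 0.48), scale=0.1)

for threshold in (0.45, 0.50, 0.55):
    hired = scores >= threshold
    rate_a = hired[group == "a"].mean()
    rate_b = hired[group == "b"].mean()
    print(f"threshold {threshold:.2f}: "
          f"group a {rate_a:.0%}, group b {rate_b:.0%}, "
          f"gap {rate_a - rate_b:+.0%}")
# The training data never changed; moving the cut-off alone widens or
# narrows the gap between the groups.
```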
