
How do you ensure ethical deployment of AI implementations

by Helen J. Wolf

The significant increase in automation and machine technology, such as AI and machine learning, has undoubtedly unlocked a whole new level of scale and service for organizations.

We might expect that one of the benefits of AI is its ability to remove human-led bias and reduce discrimination against minority groups. However, poorly managed AI can further entrench discrimination by embedding bias into its algorithms.


Today, machines routinely decide whether we qualify for a mortgage or are subject to scrutiny by law enforcement or insurance companies seeking to fight fraud. Their reach even extends to deciding what ads you see online — including that job posting for a high-paying position.

In many organizations, the AI inside automated systems is poorly documented or poorly understood. It's time for automated decision-making to step out of the shadows and be held accountable.

When automated decision-making affects people’s lives, directly or indirectly, and machines can discriminate in harmful ways, organizations need to sit up, pay attention and act to ensure AI is implemented as ethically as possible.

First steps

Both companies and government organizations should strive for the highest level of protection against damage from the machine technology they deploy. At the start of any automation project, organizations must conduct legal, privacy, and ethical impact assessments to confirm that risks are well understood and can be satisfactorily mitigated. This also ensures that the most appropriate solution is chosen to establish an acceptable level of risk while delivering value.

These reviews should be signed off by a multidisciplinary, objective review panel with veto power over any problematic aspect of a project, including deployment mode, level of automation, and redress capability. Implementation should be a collaboration between the data/technology teams and the business leadership team, so that best-practice ethics is operationalized within data and analytics.

Best practices

The Ombudsman’s report provides some strong advice for good practice in designing and implementing machine technology. At a minimum, however, we believe all organizations should consider the following best practices:

The ethical considerations of fairness, transparency, non-harmfulness, privacy, respect for autonomy, and accountability dictate that any organization implementing machine technology must ensure that:

- it performs with the highest accuracy for all groups involved;
- there is a mechanism to explain decisions based on the output of a model or system;
- there are processes in place to detect and reduce harmful outcomes;
- people can give informed consent to participate in the process;
- there are mechanisms in place to challenge results that are perceived as unjust.

Developing and deploying any machine technology should be iterative, starting with an ethical assessment of accuracy against historical data to ensure consistent performance across the sample population. If there are groups for which performance is substantially worse, more data should be sought to provide adequate representation for all groups.
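This per-group accuracy check can be sketched in a few lines. The function names and the 5% accuracy-gap threshold below are illustrative assumptions, not part of any specific framework:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute accuracy separately for each demographic group.

    records: iterable of (group, predicted, actual) tuples.
    Returns a {group: accuracy} mapping.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def underperforming_groups(records, accuracy_gap=0.05):
    """Flag groups whose accuracy trails the best-performing group
    by more than accuracy_gap (the threshold is an assumption)."""
    acc = accuracy_by_group(records)
    best = max(acc.values())
    return [g for g, a in acc.items() if best - a > accuracy_gap]
```

Groups returned by the second function are candidates for gathering more representative training data before the next iteration.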

Similarly, when the risk of adverse impacts is identified, the implementation should be iterative and cautious, starting with human-in-the-loop solutions to ensure human oversight while gaining confidence in the performance of the model or system.
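A human-in-the-loop gate of the kind described above is often implemented as confidence-based routing: only high-confidence predictions are automated, and everything else goes to a person. This is a minimal sketch; the thresholds and labels are illustrative assumptions:

```python
def route_decision(model_score, auto_approve_at=0.95, auto_decline_at=0.05):
    """Route a model prediction based on its confidence score (0..1).

    High-confidence cases are handled automatically; ambiguous cases
    are escalated to a human reviewer for oversight.
    """
    if model_score >= auto_approve_at:
        return "auto-approve"
    if model_score <= auto_decline_at:
        return "auto-decline"  # still logged, so it can be appealed
    return "human-review"
```

As confidence in the model grows, the review band can be narrowed gradually rather than removed in one step.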

This is not to say that the human decision-making process is infallible; it merely provides an opportunity to understand and interrogate the output before full deployment. This review should be performed by the most trusted operators to reduce the chance of reintroducing human biases, and everyone involved should have undergone unconscious-bias training.

Once in production, any machine technology’s continued accuracy and performance must be continuously measured and monitored. This performance should be reportable and visible across the organization alongside existing KPIs.

Evaluation

Any organization implementing algorithmic decision-making must have an objective ethical review process that includes quantitative and qualitative considerations. The model’s performance should be monitored against real-world metrics to detect any deviation in performance for minority groups and any change in performance over time. The model can then be continuously adapted and adjusted as part of normal operations.
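Monitoring for per-group deviation over time can be done with rolling windows of recent outcomes. The class below is a minimal sketch; the window size and accuracy floor are illustrative assumptions that would be set per use case:

```python
from collections import deque

class PerformanceMonitor:
    """Rolling-window monitor for per-group accuracy in production."""

    def __init__(self, window=1000, min_accuracy=0.9):
        self.window = window
        self.min_accuracy = min_accuracy
        self.outcomes = {}  # group -> deque of 0/1 correctness flags

    def record(self, group, predicted, actual):
        """Log one decision and whether it turned out to be correct."""
        buf = self.outcomes.setdefault(group, deque(maxlen=self.window))
        buf.append(1 if predicted == actual else 0)

    def alerts(self):
        """Return groups whose rolling accuracy fell below the floor."""
        return [
            g for g, buf in self.outcomes.items()
            if buf and sum(buf) / len(buf) < self.min_accuracy
        ]
```

The output of `alerts()` is the kind of figure that can be reported alongside existing KPIs, making deviations for any group visible across the organization.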

While implementation may seem daunting, organizations must improve their understanding and operationalization of ethical considerations in their AI and machine learning projects. Companies should adopt a ‘demand – assess – measure – improve’ approach to managing the performance and impact of their automated decision-making to ensure ethical outcomes.
