The AI-lignment Problem: LX Design

[Image: network lines depicting a computer chip]

Have you ever considered how the AI tools you use could cause harm if they are not aligned with human values? How can you ensure you are using these tools responsibly, and how would you recognise AI that is not aligned with those values? The alignment problem highlights exactly this.

"Bias in machine-learning systems is often a direct result of the data on which the systems are trained - making it incredibly important to understand who is represented in those datasets, and to what degree, before using them to train systems that will affect real people." (Christian, p34, 2020).


Why is the Alignment Problem important?

As highlighted in the previous post, Christian (2020) explains that the alignment problem is essentially the challenge of ensuring AI systems behave as they were intended to and remain aligned with human values and goals. Sounds pretty straightforward, but how do you know that the output you are receiving is aligned? The following examples, shared by Christian (2020), highlight where the alignment problem shows up:


The Shirley Card

One of the first examples is the 'Shirley Card', which was used in the film industry to calibrate skin tones. The card featured a light-skinned model and did not represent the full range of tones found in diverse populations; film stock of the era was only improved to capture a wider range of darker tones after complaints from furniture and chocolate manufacturers. Whilst this is not an AI example, it highlights the broad issue at the heart of the alignment problem, namely, ensuring systems are designed with all users in mind.

Algorithmic Bias

Joy Buolamwini was a computer science undergraduate when she was given an assignment to program a robot to play 'peekaboo'. Although she completed the programming, the robot could not recognise her face. Later in her career she encountered other systems that failed to recognise her face, because they relied on the same facial-recognition code she had used in that undergraduate assignment. Her experience shows how AI systems can perpetuate bias and discrimination: if the initial dataset does not represent all users, the system will be biased, particularly against minority groups such as people of colour and women. Her work highlights 'algorithmic bias', the bias that can be present in the data used to train AI models. If the initial data is biased, what does this mean for the output?
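
To make this concrete, here is a minimal sketch, in Python, of the kind of audit Christian's quote above points towards: checking who is represented in a training set, and to what degree, before using it to train a model. The group labels, counts, and the 15% threshold below are purely hypothetical, and real audits are considerably more rigorous.

    # A minimal, illustrative sketch: audit how well each demographic group is
    # represented in a training set before using it to train a model.
    # The group labels, counts, and threshold below are hypothetical.
    from collections import Counter

    def representation_report(labels):
        """Return each group's share of the dataset."""
        counts = Counter(labels)
        total = sum(counts.values())
        return {group: count / total for group, count in counts.items()}

    # Hypothetical demographic labels attached to a face-image training set.
    training_labels = (
        ["lighter-skinned male"] * 700
        + ["lighter-skinned female"] * 200
        + ["darker-skinned male"] * 70
        + ["darker-skinned female"] * 30
    )

    for group, share in representation_report(training_labels).items():
        flag = "  <-- under-represented" if share < 0.15 else ""
        print(f"{group}: {share:.1%}{flag}")

A report like this does not fix the bias, but it makes visible exactly the question Buolamwini's experience raises: who is missing from the data?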

Correctional Offender Management Profiling for Alternative Sanctions (COMPAS)

COMPAS is a computer system that gives judges a risk score indicating how likely a defendant is to reoffend. Its algorithm draws on a range of information about the defendant, from their age, criminal history, and relationships through to family background and education level, to produce that 'risk level'. Significant bias has been identified within it, including but not limited to:

  1. Racial bias;

  2. Socioeconomic bias; and

  3. Gender bias.

Since there is so much at stake when an algorithm informs these decisions, how do such biases challenge fairness and equity in the criminal justice system?
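
One way analysts have made such bias visible is by comparing false positive rates: how often people who did not go on to reoffend were nevertheless labelled high risk. The sketch below is a hypothetical illustration of that comparison, not COMPAS's actual (proprietary) algorithm, and all the figures in it are invented.

    # Hypothetical illustration: compare false positive rates between two groups
    # of defendants. A false positive is someone labelled high risk who did not
    # go on to reoffend. All figures below are invented.

    def false_positive_rate(records):
        """records: list of (predicted_high_risk, actually_reoffended) pairs."""
        non_reoffenders = [predicted for predicted, reoffended in records if not reoffended]
        if not non_reoffenders:
            return 0.0
        return sum(non_reoffenders) / len(non_reoffenders)

    # (predicted_high_risk, actually_reoffended) outcomes for two groups.
    group_a = [(True, False)] * 45 + [(False, False)] * 55 + [(True, True)] * 60 + [(False, True)] * 40
    group_b = [(True, False)] * 23 + [(False, False)] * 77 + [(True, True)] * 50 + [(False, True)] * 50

    print(f"Group A false positive rate: {false_positive_rate(group_a):.0%}")  # 45%
    print(f"Group B false positive rate: {false_positive_rate(group_b):.0%}")  # 23%

If the two rates differ substantially, members of one group bear more of the cost of the algorithm's mistakes, which is the kind of disparity reported in analyses of COMPAS.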

Self-driving car

In 2018, an Arizona woman was hit and killed by a self-driving Uber because the system did not have "…the capability to classify an object as a pedestrian unless that object was near a crosswalk." The car's sensors did not recognise the woman, who was crossing away from a crosswalk, as a pedestrian; instead the system alternated between classifying her as 'vehicle', 'bicycle', and 'other', and the car did not stop in time. In this instance of jaywalking, the system lacked the ability to perceive and respond to unpredictable human behaviour. Although this limitation has since been rectified, it highlights the importance of understanding what data an AI system is trained on, and the safety and ethical considerations that come with deploying it.

How does the Alignment Problem impact you as an LX Designer?

There have already been, and will continue to be, many more examples of the alignment problem, especially as we continue to leverage AI in education. The challenge is to stay informed and aware of it, so you can question your own use of these tools. Some things to consider are:

  1. What are the potential limitations/affordances that may arise from using AI in LX Design?

  2. What are the implications of using AI in LX Design for the role of facilitators? How can these roles adapt to the changing landscape of AI-driven education?

  3. How can we ensure that AI-based LX Design aligns with educational goals and pedagogical principles rather than technological capabilities?

  4. What ethical considerations should be taken into account when using AI in LX Design, particularly in relation to bias, transparency, accountability, and consent?

References:

  • Christian, B. (2020). The alignment problem: Machine learning and human values. New York: W. W. Norton & Company.
