From Data to Danger: How AI Is Shaping How We Design

Is generative AI already reshaping what we deem socially acceptable? How much influence can generative AI have on what we create? Is it becoming entrenched in everything we do, much as social media has?

These are the questions that came to mind when I read ‘OpenAI Is ‘Exploring’ How to Responsibly Generate AI Porn’ on Wired.com. The article highlights that OpenAI is “… exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT.” Reading it, I couldn’t help but think that this is all moving so quickly. As a society, are we questioning what is happening? As LX Designers, are we keeping abreast of AI to ensure the tools are fit for purpose? What are our responsibilities when using or recommending these tools?

The AI Dilemma

This article brought me back to the AI dilemma from the Center for Humane Technology. It highlighted that although AI enables incredible advances across many fields, when misused it can lead to unintended consequences. We are already seeing this with deepfakes created without consent, and the implications for the individuals involved are often lasting. In April, The Conversation explored why we need to make creating such content a crime in the first place. The potential generation of NSFW content is a prime example of this kind of misuse, and it is something we have to be aware of. Who will provide the guardrails to ensure appropriate use? ChatGPT was simply given to everyone; will the same be true for NSFW content?

The Echo Chamber Effect

Social media was our first contact with the AI dilemma. For over a decade, AI has shaped our interactions through algorithms designed to capture our attention and influence our behaviour. By curating and recommending content, these algorithms predict and influence what users will find appealing, optimising for engagement. This personalisation is driven by complex models that analyse our behaviour, preferences, and intentions. It sounds great, but it comes at a personal cost: the echo chamber effect.

Cinelli et al. (2021) highlight that the structure of social media leans heavily towards fostering environments where like-minded users reinforce each other’s beliefs. The implication is that it shapes personal opinions and beliefs, and there is therefore a need for a critical approach to consuming information online. In an educational context, we now face the same challenge with generative AI: how do we teach students to critically evaluate and consume generated content? It is embedded within so many applications; are we challenging learners to question their assumptions or consider alternative viewpoints?

Ethical and Societal Implications

As stated in the AI dilemma, we are now at our second contact with AI. This is a different beast, a new way of using AI: it is generative, so we are creating new content, new experiences, and new ways of working. In doing so, we are handing so much information to these generative AI systems that whoever owns the application knows you: your hobbies, views, likes, and dislikes. You trust them with all of your data, so right now, they own your trust. With this come obvious societal and ethical implications for what is done with that data.

When organisations such as OpenAI are “… exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT”, I immediately ask what counts as age-appropriate and where the guidelines for such use are. What are the ethical implications, and more broadly, the societal implications of such technology? So, when you embed generative AI tools into your learner experience and ask learners to engage with them, what are they inherently being exposed to, and what data are they giving away?

Why Does this Matter?

As I reflect on the AI dilemma and on social media as our first contact, I think about where AI is today. The potential for NSFW content generation has prompted me to rethink its use and integration in education. Whilst AI holds enormous potential, with the ability to personalise and enhance learner experiences, the implications cannot be ignored. There needs to be critical evaluation not just of the output, but of how we are preventing bias, safeguarding privacy, and maintaining online safety. Our role as LX Designers is critical here, as we are the ones using and integrating these technologies into the experiences we create.

References

Center for Humane Technology. (2023). The AI dilemma. In Your Undivided Attention. Retrieved from https://www.humanetech.com/podcast/the-ai-dilemma

Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). The echo chamber effect on social media. Proceedings of the National Academy of Sciences, 118(9), e2023301118. https://doi.org/10.1073/pnas.2023301118

Knibbs, K. (2024). OpenAI is ‘exploring’ how to responsibly generate AI porn. Wired. Retrieved from https://www.wired.com/story/openai-is-exploring-how-to-responsibly-generate-ai-porn/
