
Module 10: The coded gaze

Hi! Hello!

It’s so funny: I talk so much about access and being kind to yourself, and yet I feel this innate urge to apologize for being unavailable last week. So I guess I will say I’m sorry, because I really do care about your learning space and I hope that comes through. I think it does. But this semester, like most or all semesters I would argue, is untenable, because the conditions we’re expected to perform under are unrealistic for any bodymind (bodymind is a Disability justice term), and certainly for any bodymind looking to have some amount of joy for the time we’re here on this planet Earth. Yeah, it’s rough out here, people. I apologize for the rant; I can’t help myself. I also don’t understand why we’re expected to be productive constantly, but that is another byproduct of white supremacy culture. So. All the more reason to flip the script.

Please DM me if you need any support or feel lost or just want to say hello. I’m here for you and hope this asynchronous space can still feel human.

Let’s jump back into thinking critically about the fields inside engineering and the sciences. This goes for everyone, but especially for Computer Science majors: have you considered the ways in which your field has bias? The ways your field has a profound impact on how society is shaped?

I’m not sure whether these questions are being raised in your other courses (I hope they are! Tell me if they are!), but since we’re considering both rhetoric and composition, these questions must be taken into account here.

For this week, I would like you to watch this 13-minute talk by Dr. Joy Buolamwini about facial recognition and what happens when the sample set skews white and male.
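
For the Computer Science folks who want to see what a skewed sample set looks like concretely, here is a tiny Python sketch (the records are invented for illustration and are not data from the talk) that checks the demographic balance of a toy face dataset before any model is trained on it:

```python
from collections import Counter

# Made-up example records for illustration only -- these are NOT data
# from Dr. Buolamwini's research, just a toy face dataset.
samples = [
    {"id": 1, "gender": "male",   "skin_tone": "lighter"},
    {"id": 2, "gender": "male",   "skin_tone": "lighter"},
    {"id": 3, "gender": "male",   "skin_tone": "lighter"},
    {"id": 4, "gender": "female", "skin_tone": "lighter"},
    {"id": 5, "gender": "male",   "skin_tone": "darker"},
    {"id": 6, "gender": "female", "skin_tone": "darker"},
]

# Count how many examples fall into each (gender, skin tone) group.
counts = Counter((s["gender"], s["skin_tone"]) for s in samples)

total = len(samples)
for group, n in counts.most_common():
    print(f"{group}: {n} of {total} ({n / total:.0%})")

# A model trained on a set like this "sees" lighter-skinned men far more
# often than darker-skinned women, so its error rates will likely skew
# the same way.
```

If the counts come out lopsided, the model’s errors will likely come out lopsided too, which is part of what Dr. Buolamwini means by the coded gaze.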

For the module comment, I would like you to consider the following:

Take note of 2-3 rhetorical issues Dr. Buolamwini raises that speak to you. For me, it was her reframing of the “under-sampled majority” as a way to think about who is represented in most technological spaces and who is erased. So often we say “minority” when speaking about people of the global majority who are not white, and that default standard creates an intentional bias with real implications (think policing, think community funding, think incarceration rates).

Have you ever considered algorithmic bias when using your devices?

What are some ways we can shift the dominant data set?

If you have an experience of algorithmic bias that you want to share, I welcome it in this space but it is not required.

Thanks everyone for staying engaged and enjoy the rest of your week!


3 Comments

  1. -AI face recognition could match innocent people with criminal suspects, which might cause some trouble with the law for people who did nothing wrong.
    -The police could potentially abuse the AI by matching innocent people as criminal suspects based on their skin tone, and even their clothing.
    I have never considered algorithmic bias, as this is something new for me. I believe we could shift the dominant data set by completely removing the ethnicity classification list, even if it is already rooted in history. I believe that it’s very outdated and it’s time for a change.

  2. I have never experienced algorithmic bias, or at least I never took it into consideration when I may have been affected by it. The way to shift the dominant data set is by applying filters or rules to include or reject particular data subsets depending on certain traits or circumstances. This can be carried out to concentrate on particular subsets that fit the desired dominant dataset.
