CompArts Week 6
Good Morning all!
It’s 8am, and I’m sat at Goldsmiths in the Professor Stuart Hall Building (before Creative Coding at 10am), looking out over the campus and doing a last-minute update of my blog. This is the latest I’ve left it. I’ve actually been really on top of it — even in the weeks where I didn’t understand the material — and honestly I just forgot about it because I’ve been focused on the Research Project…but all of that aside, it’s being updated right now, so all in the nick of time!
Yeah — the research project has been intense. It feels like we might have bitten off slightly more than we can chew. Because it was billed as “the term’s project”, we failed to realise that it was actually 2–3 weeks’ work — and suddenly, when we did our “Timeline for Project Milestones” (shoutout to Sarah for that, she smashed it), it turned out we didn’t have loads of time to sit about!
At this point I’d like to say that it’s tough to know the audience of this blog…because MOST likely, it’s our teacher, Mattia, and our TA, Theodora (hello both!)…also classmates, but also, it’s available on “The Internet” so I feel like I have to be inclusive of that audience also? Hard to get the tone right…so in the interest of inclusivity I’m going to quickly summarise what we’re doing even though the ACTUAL audience probably already knows.
For our Term 1 research project my group is Exploring and Challenging Problematic Behaviour with Feminist Technology. As a group, we were appalled to discover the amount of abuse/sexual harassment that Virtual Assistants (VAs) and chatbots receive on a daily basis, and that their responses were at best passive, at worst flirtatious. Our discussions have questioned whether this is a result of “blindspots” in the design process or whether this is “Big Tech” responding to “market forces”. Does a technology that could be a real tool for societal change shirk away from “calling out” abuse in an effort to keep consumers engaged with their company’s sales interface? We’ve asked what harm this does in the real world, and who should be held accountable. Mostly we have wanted to find a way to bring about change, to prompt reflection on the gendered design of technologies, to encourage conscious design and use in the hope that more feminist technology products will appear in the future. (I’ve copied and pasted this paragraph from our Proposal, in the interest of getting any audience up to speed.)
For our Project, we have to create an “Artefact”. We’ve decided that we want our Artefact to make people reflect on this problem, and also to gauge their comfort levels with a more Feminist response: a response that stands up against sexual harassment (because there is no risk of danger to a bot, unlike with a real woman), in the hope that stamping out this behaviour in this situation may help address the broader societal problem. The idea that this is playful or fun, the idea that “it’s just a joke! Chill out”, the idea that you can speak to anyone or anything like this without their consent: it all stems from the same place, and challenging it on this level WILL condition us.
An important part of our artefact is that it extracts data from the user/inter-actor. However, that’s made it really hard to build! We found a course online that teaches you how to build a Feminist Chatbot. I did the whole course and built a practice bot (to see the process). Part of the process is thinking about the “Character” you want your bot to be (gender, personality traits etc.). In the interest of time, I just chose Terry Crews — and this is my Terry Crews bot! https://terry-crews-bot.glitch.me/
The course was wonderful for so many things in the process — in terms of how you should be thinking, amazing Feminist resources and speakers, the process of researching your demographic, addressing blind spots that might be there, tools for creating a storyboard, tools for helping create a conversation flow chart, imagining the Character of the bot — it really was such a helpful resource (shoutout to Claire Rocks for finding it!). I’m also going to post it here in case anybody wants to look at it.
The only problem was the programming platform — the course taught you how to use “Glitch”, and we couldn’t find a way to extract data from it (we’re hoping to visualise the data in some sort of way). So we’ve had to scramble to figure out a way to do that (Claire and Sarah stepping up big style here!). With the data we take from our users, we’re hoping it shows us that people in 2020 ARE ready for a Feminist response; that they DO want their companies to display morals and ethics towards shitty behaviour. It’s something that’s VERY important to me, but I feel like there’s been a shift during the Pandemic, and the Black Lives Matter movement since the death of George Floyd. People want to see that COMPANIES hold the same values that are important to us. I actually got raging at Adidas recently, and I tweeted this to them.
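For anyone wondering what “extracting data” even means here: Glitch apps run on Node.js, so the shape of what we needed is roughly the sketch below — capture each user’s answer to the bot’s comfort-level question and tally the answers for a later visualisation. To be clear, this is an illustrative sketch with made-up names (recordResponse, comfortLevel etc.), not our actual project code.

```javascript
// In-memory store of user answers. In a real app this would be a file
// or database so the data survives the Glitch app restarting.
const responses = [];

// Record one user's answer (e.g. from a chat webhook or a form post).
function recordResponse(userId, comfortLevel) {
  responses.push({ userId, comfortLevel, at: Date.now() });
}

// Tally the answers so they can be fed into a chart later.
function summarise(data) {
  const counts = {};
  for (const r of data) {
    counts[r.comfortLevel] = (counts[r.comfortLevel] || 0) + 1;
  }
  return counts;
}

// Example: three users rate the bot's feminist "calling out" response.
recordResponse("u1", "comfortable");
recordResponse("u2", "comfortable");
recordResponse("u3", "uncomfortable");

console.log(summarise(responses));
// → { comfortable: 2, uncomfortable: 1 }
```

The tallied object is exactly the kind of thing a charting library can turn into a bar chart — which is the visualisation step we’re still working out.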
People ARE taking notice, I promise you. Aligning themselves with good values and important messages (Antiracism, Feminism, LGBTQ+ etc) would only help their public perception and therefore sales?? I swear, it wrecks my head why this isn’t the most important thing. I’ve just seen this was my next tweet 😳
Apparently my areas of passion include Activism/Inclusivity and Sneakers 🤦🏻♂️
Anyway, I’ve deviated slightly — we’re HOPING that the data we receive from our Feminist Chatbot will show this. And if it doesn’t…then that’s all the more reason to know about it and attack it. Awareness is everything.
I’m actually a bit turned around after all of this…but I think I’ve hit the main points. Group Project, my Terry Crews bot, People wanting companies to display ethics and morals…yeah I think that’s the sum of it for this week!
Ok all (realistically: Mattia and/or Theodora) — that’s it from me. Hope you slightly enjoyed this? As much as could be expected from a student blog. See you next week for an update on the Project! x