Artificial Intelligence… sounds magical. Smart technology, free of human emotion, right? It will help humanity and change life for the better?
We ask you to look into that Net Mirror and inquire: are the outputs of machine thinking and algorithms really intelligent? What kind of intelligence is it? And is it free of the biases of its developers (who are all too human)?
Here are some things to consider, investigate, and do as we work through this week’s readings and viewings on Algorithms of Oppression.
Things to Click (and Ponder)
- Anatomy of AI – a fascinating, complex diagram of all the resources, systems, inputs, and players that go into creating a single Amazon Alexa device
- Awful AI – a curated list tracking current scary uses of AI, hoping to raise awareness of its misuses in society
- How to recognize AI snake oil – “Much of what’s being sold as “AI” today is snake oil — it does not and cannot work. Why is this happening? How can we recognize flawed AI claims and push back?”
- Ring™ Doorbell Log – an experiment in speculative surveillance
- Discrimination in Online Ad Delivery – “This writing investigates the delivery of these kinds of ads by Google AdSense using a sample of racially associated names and finds statistically significant discrimination in ad delivery based on searches of 2184 racially associated personal names across two websites.”
Watch / Listen
We will begin taking in examples of speculative fiction in audio, video, and web narrative form. Review at least one of the first examples in our Net Mirror Library; pay attention to the storytelling, the way the genre works (or doesn’t), and how it builds suspense or produces unexpected “twists.”
Try the Google Arts & Culture app, which uses AI to match human faces to works of art. While it may fail on accuracy, what is missing? How is this app problematic (or not)?