August 27th: Course Introduction
New assignment: Assignment #0, due before the 2nd class
August 29th: Practical ML
Due by class: Assignment #0
New assignment: Assignment #1, due Sunday, September 8th by 11:59pm
September 3rd: A History of Artificial Intelligence
PLEASE SIGN UP to be a discussant. If you are a discussant for next week, please plan to come to office hours this week.
September 5th: A History of Humans Interacting with AI + AI vs. IA
Due by class: reading reflections on Piazza
September 10th: Matchmaking Needs and Risks for Adding AI/ML
Worksheets for design workshop:
New assignment: Assignment #2, Loan application front-end
September 12th: Communicating Predictions & Recommendations with Users
Worksheets for design workshop:
September 17th: Failure & Feedback with Users
September 19th: Product Workshop Day
Make one new product based on a prompt. Group activity.
September 24th: Why would people give you data anyway? Data ethics and laws
September 26th: Using human-centric data in an ML pipeline (wait, should it even be a pipeline??)
Assignment #2 due on September 29th
October 1st: Visualizations to improve human-AI interaction
New assignment: Assignment #3, visualization -- delayed! Sorry!
Guest lecture: Adam Perer: Visualization in AI (Slides)
October 8th: How does telling people how an algorithm works change their experience?
Assignment #3 due on October 20th
[Warning: this reading contains graphic, though purely textual, descriptions of the results of accidental radiation exposure during clinical therapy.]
This reading is ostensibly not about AI. But it may be one that allows you to draw many parallels to our AI space. When you read the paper, consider these questions:
- What parallels can you draw from the reading to the design of human-AI systems? (For instance, the Tyler incident was caused by a "race condition" -- a hard-to-find bug resulting from the exact timing of operations, which is largely determined by chance and so hard to inspect ahead of time. AI systems can similarly depend on things that are hard to inspect ahead of time.)
- What are the roles that people played in this story in making the error, diagnosing it, and fixing it?
- Who are the heroes of this story? Who are the villains? Is it useful to think of them this way?
- What roles did users play in this story?
- Was the response from the manufacturer ethical? What about the regulatory governmental agencies, and the attending doctors?
- If something similar were to happen today with an AI-infused system, what would you expect to go differently?
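If you have not met a race condition before, the idea from the first question can be made concrete with a minimal Python sketch. This is not from the reading, and it is far simpler than the Therac-25 bug; it only shows how the exact timing of operations, which no single test run controls, can silently corrupt a result:

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # Read-modify-write without a lock: two threads can read the same
    # value and both write value + 1, silently losing an increment.
    # Whether that happens depends on the exact interleaving of threads.
    global counter
    for _ in range(n):
        tmp = counter      # read
        counter = tmp + 1  # write; another thread may have run in between

def safe_increment(n):
    # Holding a lock makes the read-modify-write step atomic.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker, n=100_000, threads=4):
    # Run `worker` in several threads and report the final count.
    global counter
    counter = 0
    ts = [threading.Thread(target=worker, args=(n,)) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter

# run(safe_increment) always returns 400_000; run(unsafe_increment)
# may return less, and the shortfall varies from run to run.
```

The unsafe version can pass any number of single-threaded tests and still fail in production, which is part of why timing-dependent bugs like the one in the reading are so hard to find before deployment.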
October 17th: Guest lecture: Ken Holstein, Improving fairness in ML systems: What do industry practitioners need?
Assignment #3 is due October 20th!
October 24th: class is canceled for the HCII 25th celebrations
No discussion panel
Guest lecture: Julian Ramos, Personalized context-aware health interventions.
New assignment: Assignment #4, Make a chatbot with humans in the loop that recommends stuff to you.
The focus of this week is to rethink what it means for a human to be “in the loop”. The readings reflect this focus. If you’re interested in the more traditional view of humans “in the loop”, make sure you read the grad reading.
Assignment #4 due on November 10th
New assignment: Assignment #5, Vision with GANs
Final Project Released: Make something cool & interactive with AI/ML
November 26th: Thanksgiving Break; no class or normal office hours
November 28th: Thanksgiving Break; no class or normal office hours (🦃 or 🥧?)
December 3rd: Project presentations
December 5th: Closing
Final Project due on December 8th