Machine Learning is a speculation on algorithmic bias in computation. The video depicts an individual attempting to train a computer to recognize itself in a mirror, as if the computer were an animal. This act investigates how standard algorithms are biased (standard computer vision libraries, for example, have difficulty recognizing Black people) and proposes that users should be in control of how their computers' algorithms actually function.
Machine Learning investigates computational bias by asking a computer to recognize itself in a mirror. This task is ultimately impossible. To clarify: computer self-recognition is not impossible in principle, but training the computer to recognize itself with these specific algorithms is. All the algorithms used in Machine Learning are off-the-shelf detection algorithms offered as part of OpenCV, the open-source computer vision library. These algorithms are built for general-purpose image analysis and are inherently biased toward recognizing particular kinds of information, making them incapable of performing specialized functions like self-recognition. If the task is indeed impossible, why then does the trainer pursue it so heedlessly?
When we automate menial tasks, we trust algorithms, and the machines that run them, to determine what is useful to us as human beings. These automations may block spam emails, determine who you may know on a social network, or create a photo album of a person by searching photographs for their face. All of these require a decision, or series of decisions, made by a computational process. The more complicated the automation, the more decisions move from human hands into computational methods.
How many decisions must computers make before we consider them intelligent? We already apply the language of intelligence to computational objects and functions that previously made no decisions at all: smartphones, smart cars, smart homes, smart bombs. However, these objects are only superficially smart; they are only as intelligent as humans permit them to be. The individuals who construct the algorithms determine what the computer knows and how it processes that knowledge. That process of constructing intelligence can produce exclusionary outcomes, or simply a minimal understanding of a data set.
As we entrust an increasing number of decisions to algorithms, we also entrust those decisions to the computer scientists and engineers who build the algorithms. These algorithm constructors bring with them their cultural and racial biases, and favor the results they deem important for specific use cases. Can we prevent this bias, this specificity? Machine Learning is an inquiry into this question through the lens of humans' relationships with their computers. This is why the trainer pursues his impossible task: he is acting as the investigator, the designer.
More specifically, the trainer struggles against current algorithmic modalities, revealing problems through his frustration, while his attempt at training the computer hints at an alternative. Through this process of revealing cracks and offering patches, the aim is not to unequivocally correct all the issues in a given system, but instead to make a problematic situation more coherent and manageable. The result is not a solution, but a point in an argument. It allows others to recognize the problem and encourages thinking and future engagement around that problem.
As such, Machine Learning is not proposing that computer manufacturers start working on user-programmable automation; rather, it draws attention to some of the problems present in current automation. This critique works as a function of speculative and critical design and is meant to address and comment on current practices, inspiring thought for future practices. To a certain extent, the project leans toward a parallel reality wherein each individual may train his or her own computer's automation as one would a household pet, removing the bias and negligence of the engineers who currently construct algorithms and automation. An idealized reality to be sure, but ultimately both the trainer and the project fall short of this solution.
Machine Learning is more a critique of current practices than an alternate reality in which individuals maintain greater control over their computers. In short, Machine Learning is troubling, frustrating, and difficult, as are search engines, biometric scanners, content filters, and every other process we hope to automate.