On 2020-01-22 at 14:00 (Brussels Time)
Abstract
The 'secret sauce' that made AI successful contains an important ingredient: vast samples of human behavior. From these, machine learning algorithms extract the statistical rules that guide their own behavior: rules for recommendations, translations, image analysis, and more. Recently there have been concerns about subtle biases in AI agents; some can be traced back to the data used to train them, and others to the fact that these agents are 'unreadable' to humans. Understanding the biases found in media content is important, as this content is often used to teach machines to understand language. More generally, we need to understand the interface between AI and society if we want to live safely with intelligent machines.
Keywords
machine learning, digital humanities