Years after it became a running gag on HBO’s show “Silicon Valley,” the idea of companies automatically “making the world a better place” through profit-driven technological development has lost much of its shine. The next generation of computer engineers and tech entrepreneurs may benefit from a more socially conscious education that combines training in artificial intelligence with teachings on societal issues and ethics.
A growing number of universities such as Harvard and Stanford have been introducing or developing new courses that teach computer science students about ethics and the societal implications of AI technology. But Carnegie Mellon’s Artificial Intelligence Methods for Social Good course may go even further with a hands-on experience that requires students to apply what they learn about AI techniques to societal issues in healthcare, social welfare, security and privacy, and environmental sustainability.
“People are realizing that AI is not just another technique; there are important aspects of society we need to think about and discuss,” said Fei Fang, an assistant professor at the Institute for Software Research at Carnegie Mellon University in Pittsburgh. “The reason why I say this is different from other courses is that the emphasis of this course is to link AI methods directly to the societal challenges we are facing.”
The spring semester now drawing to a close marks the first time Fang has taught the course, which includes a 12-unit version geared toward master’s and Ph.D. students in computer science and engineering. Early interest has been strong: Fang had to expand the class after initially capping enrollment at 30 students.
Part of the course introduces popular AI methods such as pattern recognition and machine learning algorithms. But the course also dives into real-life examples of how various AI techniques have been used to tackle societal issues such as figuring out the best traffic patterns or protecting endangered animals from poachers. The final project requires students to propose how certain AI methods could make a positive impact on a particular issue.
The course readings include a research paper on software that has helped randomize roadway security checkpoints and canine patrol routes at Los Angeles International Airport (LAX) since 2007. Another reading covers a machine learning technique that analyzed satellite imagery of five African countries to extract measures of socioeconomic activity. Several readings also touch upon the challenges of regulating related technologies such as self-driving cars.
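The core idea behind such randomized patrols can be conveyed with a toy sketch. This is not the LAX system’s actual algorithm, which rests on a far more elaborate game-theoretic model; the checkpoint names and weights below are made up purely for illustration. The point is simply that defenders sample each day’s placements from a weighted distribution, staying unpredictable while still favoring high-value targets.

```python
import random

# Hypothetical checkpoint locations with illustrative "value" weights;
# a real deployment would derive these from a game-theoretic model.
CHECKPOINTS = {
    "Terminal A": 5.0,
    "Terminal B": 3.0,
    "Cargo Road": 1.5,
    "Perimeter Gate": 0.5,
}

def daily_schedule(spots, k=2, seed=None):
    """Choose k distinct checkpoints to staff, weighted by value but
    randomized so observers cannot infer a fixed patrol pattern."""
    rng = random.Random(seed)
    pool = dict(spots)
    chosen = []
    for _ in range(min(k, len(pool))):
        names = list(pool)
        weights = [pool[n] for n in names]
        pick = rng.choices(names, weights=weights, k=1)[0]
        chosen.append(pick)
        del pool[pick]  # sample without replacement: no duplicate posts
    return chosen

if __name__ == "__main__":
    for day in range(3):
        print(f"Day {day}: {daily_schedule(CHECKPOINTS, k=2, seed=day)}")
```

Because placements are drawn fresh each day, an adversary watching past schedules gains little certainty about tomorrow’s coverage, yet high-value spots are still staffed most often.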
Fang’s own work may also serve as inspiration for students. She helped develop an AI system that enables drones armed with thermal infrared vision to automatically detect people and animals at night. Such high-flying surveillance is being tested by a wildlife conservation group called Air Shepherd at national parks in Africa.
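In broad strokes, spotting warm bodies in thermal imagery begins with separating hot pixels from a cooler background and grouping them into candidate detections. The deployed drone system relies on trained detection models; the sketch below, with made-up temperature values, shows only that naive first step of thresholding plus connected-component grouping.

```python
# Toy hot-spot detector for a thermal frame, represented as a 2D grid of
# temperature readings in degrees Celsius (values are invented). A real
# system would use a trained neural network, not simple thresholding.

def find_hot_spots(frame, threshold=30.0):
    """Return groups of adjacent pixels warmer than `threshold`,
    each group being one candidate animal or person."""
    rows, cols = len(frame), len(frame[0])
    seen = set()
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] > threshold and (r, c) not in seen:
                # Flood fill to collect this contiguous warm region.
                stack, blob = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] > threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                blobs.append(blob)
    return blobs

# A 4x6 "frame": cool terrain (~20 C) with two warm bodies (~35 C).
frame = [
    [20, 20, 35, 36, 20, 20],
    [20, 20, 35, 20, 20, 20],
    [20, 20, 20, 20, 37, 20],
    [20, 20, 20, 20, 36, 20],
]
print(len(find_hot_spots(frame)))  # prints 2: two separate warm regions
```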
The course also features a number of guest lectures by experts who have been conducting such research or even developing related applications. “Much of the work we’ll introduce in the course is already being tested in the field or even deployed,” Fang said.
Fang hopes that her course’s blend of teaching AI techniques and applying them to societal issues will inspire other educators seeking to create similar courses. She pointed to a similar course taught at the Center for AI and Society at the University of Southern California in Los Angeles. But even amid the growing urgency to teach computer science ethics, most courses apparently cover AI methods and ethics in isolation from each other.
“It’s more like AI researchers are working on the AI part while other researchers from philosophy departments or the law school discuss the implications of such AI,” Fang said. “It would be good if there was deeper collaboration between the AI researchers and non-computer science researchers who care about the ethics aspects of AI.”