The development and adoption of the technology has been so rapid that what we can expect from AI—or how soon we'll get there—no longer seems clear. And it's forcing us to confront a question that hasn't dogged previous computer-research efforts, namely: Is it ethical to develop AI past the point of consciousness?
Proponents of AI point to the ability of self-regulating, intelligent machines to preserve human life by going where we cannot safely go, from inside nuclear reactors to mines to deep space. Detractors, however, who include a number of high-profile and influential figures, assert that improperly managed, AI could have serious unintended consequences, including, possibly, the end of the human race.
To begin untangling this moral skein, and possibly chart a course toward policies that could guide future development, we talked to five experts—a scientist, a philosopher, an ethicist, an engineer, and a humanist—about the implications of AI research and our obligations as human developers.