Are we equipped to define morality for artificial intelligence?
Humans have spent most of our sentient existence trying to figure out what morality is and whether there is some unwavering morality out there that isn’t subject to place and time. As artificial intelligence technology advances, people have developed concerns about a coming robot uprising. Whether or not that’s a real possibility, it presents an interesting philosophical question: if human beings have so far been incapable of settling on a moral philosophy themselves, how can we imbue robots with a morality that will allow them to function effectively in our societies?