That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster – sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia – a set of relatively small failures combined together to create a catastrophe.

I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.

Systems like IBM’s Watson and Google’s Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on “Jeopardy!” or don’t defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.

But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that “to err is human,” so it is likely impossible for us to create a truly safe system.

I’m not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures’ performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.

Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.

Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we’ll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.

Another possibility that’s farther down the line is using evolution to influence the ethics of artificial intelligence systems. It’s likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution – and factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.

While neuroevolution might reduce the likelihood of unintended consequences, it doesn’t prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.

Fear of wrong social priorities

Being a scientist doesn’t absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears.
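The generational loop behind neuroevolution – evaluate each creature's performance, let the best reproduce with small random variations, repeat – can be sketched in a few lines. This is only an illustration of the general technique: the tiny fixed-size network, the XOR "simple decision" task, and every parameter value here are my assumptions, not the author's actual system.

```python
import math
import random

# Four input/target pairs for a simple decision task (XOR) -- an
# illustrative stand-in for the essay's "simple decisions" tasks.
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, inp):
    # Genome w holds 9 weights: two hidden neurons (2 inputs + bias each)
    # and one output neuron (2 hidden inputs + bias).
    h1 = sigmoid(w[0] * inp[0] + w[1] * inp[1] + w[2])
    h2 = sigmoid(w[3] * inp[0] + w[4] * inp[1] + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    # Higher is better: negative squared error over the four cases.
    return -sum((forward(w, inp) - target) ** 2 for inp, target in CASES)

def evolve(pop_size=50, generations=300, sigma=0.5, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(9)] for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # evaluate performance
        history.append(fitness(pop[0]))
        parents = pop[: pop_size // 5]        # the best are selected to reproduce
        pop = [p[:] for p in parents]         # elitism: parents survive unchanged
        while len(pop) < pop_size:            # mutated offspring fill the next generation
            child = [g + rng.gauss(0, sigma) for g in rng.choice(parents)]
            pop.append(child)
    pop.sort(key=fitness, reverse=True)
    return pop[0], history

best, history = evolve()
```

Note that everything the process selects for lives in `fitness()`. That is where the essay's later idea would plug in: score behaviors like honesty or cooperation instead of task error, and the same loop confers an evolutionary advantage on creatures that exhibit them.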