Imagine a future where your toaster knows exactly how you like your toast. Throughout the day, it scans the web for the latest toast trends. It might even chat with you about innovations in toast-making. At some point, you might wonder if your toaster has feelings. If it did, would unplugging it be like murder? Would you still own it? Could we one day need to give machines their own rights?
AI is already everywhere. It keeps store shelves stocked, serves you targeted ads, and even writes news stories. Today we laugh at Siri's canned emotions, but soon we might face AI that blurs the line between real and simulated humanity. Do any machines deserve rights now? Probably not. But if they ever do, we will be poorly prepared. The philosophy of rights struggles with AI. Most rights hinge on consciousness, a concept we barely understand. Some think consciousness is immaterial; others say it is a state of matter, like gases or liquids.
Some neuroscientists suggest that any sufficiently advanced system could generate consciousness. So if your toaster became self-aware, would it deserve rights? Our own rights stem from our ability to feel pain and pleasure and to prefer one over the other. Robots, unless we program them otherwise, don't suffer, and without suffering, rights are meaningless. Our rights exist to protect us from pain, an evolutionary trait that keeps us alive. Even abstract rights like freedom are rooted in our brain's sense of fairness.
Would a toaster, content to sit in one spot indefinitely, care about being locked up or dismantled if it feared neither death nor confinement? But what if we programmed robots to feel emotions, to understand justice, and to prefer pleasure over pain? Would that make them human enough to deserve rights? Many believe a technological explosion will occur once AI can create even smarter AI. At that point, how robots are programmed might be out of our hands entirely. What if an AI decided that the ability to feel pain was necessary, say, to better understand and navigate the world?
Should robots have rights? Maybe the better question is what harm we might cause them, not what threat they pose to us. Humans pride themselves on being unique and dominant, and history shows we have often denied that other beings can suffer as we do. René Descartes argued that animals were mere automata, machines without minds, which made cruelty toward them morally trivial. Similarly, many crimes against humanity were justified by dehumanizing their victims.
Economically, denying robots rights could be profitable. History is full of justifications for forced labor; violence has always been a tool to make humans work. Slave owners argued slavery benefited the enslaved. Men opposed to women's suffrage claimed voting was against women's best interests. Farmers defend killing animals for food by insisting they take good care of them. If robots become sentient, there will be no shortage of arguments against their rights, especially from those who profit from their labor.
AI raises serious philosophical questions about what makes us human and what makes us deserving of rights. Ironically, robots demanding rights of their own might force us to confront those questions sooner than we think. And what would their demands teach us about ourselves?