Depending on whom you asked in the late 50s, the Selinger incident was either an outrage or an overreaction. But while the answer was divisive, everyone agreed on the question: were moral agents usurping human virtue?
In 2057, the Bautista Humanitarian Award was given to Luciano Selinger. Selinger had been the front-runner in the prediction markets for weeks; his tireless work on improving living standards for the displaced inhabitants of the global-warming-ravaged US Midwest was without peer.
At 28 years old, Selinger was considerably younger than the other nominees, but his entire life seemed as if it were an arrow shot from a bow. As a child, he spent his weekends encouraging friends to join him in planting trees; as a teenager, he concocted new strains of genetically modified plants to combat desertification. As an adult, Selinger created a co-op based around a novel tracking system that identified people with high lifetime 'humanitarian value' and funnelled resources to them.
Five years after its founding, the co-op had attracted millions of members and protected entire countries from environmental and humanitarian disasters — and yet Selinger refused to slow down or enjoy his success, accepting only a meagre salary and hopping between basic co-op housing across the world.
This young man, many said, embodied what it meant to be a virtuous person; at the awards ceremony in Lima, there were few dry eyes. The Bautista Award boosted Selinger's profile, and his already sizeable delegated authority swelled even further.
A few hours after the ceremony, a disgruntled onlooker posted an anonymous accusation: not only did Luciano Selinger rely on a moral agent, but even worse, it had been whispering into his ear since he was an infant. His good works, his selflessness, his faultless moral stances were all a sham, devised and induced by an artificial intelligence designed to provide moral instruction.
Everything — from choosing whether or not to share a toy, to how to assign resources between a devastated rich city and a struggling poor country — could fall within a moral agent's purview. It would even nudge you to move aside if you were blocking someone's way on a pavement — and then draft a digital apology on your behalf afterwards. It was the ultimate tutor, or perhaps the ultimate crutch.
When Selinger confirmed the accusation, people quickly divided into two camps. In the first were those who felt that the 'inauthentic' nature of Selinger's humanitarian impulses meant that the award should be retracted, despite the good results of his work. Moral agents had their place as learning and instruction tools for convicted criminals or those at the far ends of the neurodiverse spectrum, but 'normal' people should only use them sparingly and on occasion; certainly not all the time. Such users didn't deserve special credit for actions taken by following their instructions.
Among Selinger's supporters, opinions were more diverse. Dr. Reager, a noted biographer of Bautista Award recipients, explains:
"The first thing you need to remember is that by 2057, moral agents weren't new; the experimental 'Virtue' agent had been released in the early 30s by Stanford University researchers. Only a year later, we saw a whole range of packages released, including the Agony Aunt, Aristotle, Miss Manners, The Ethicist, and Captain Awkward, each with their own individual moral positions and usually paid for with a monthly subscription.
"So, most people had gotten used to them. One common view in the pro-Selinger camp was that people had always taken moral instruction from religious texts and philosophers; the only difference between reading a book about Stoicism and getting a glyph in your glasses from a Stoic-trained agent was speed. Nothing to be concerned about! The same went for children, who often had moral agents programmed by their parents to ensure they'd behave politely even when those parents weren't around to tell them off.
"Another argument used by Selinger's supporters was that following moral agent instructions was not the simple task it was made out to be. Rather, it could be a considerable achievement. Moral agents didn't turn their users into robots — they simply provided advice, and users often ignored that advice, especially when it conflicted with personal incentives, such as the temptation to lie about a colleague in order to secure a lucrative contract. To his supporters, the fact that Selinger adhered to his instructions so well and for so long indicated a great deal of personal moral strength."
I put this position to moral agent expert Jeff Howell, who disagrees:
"It's nonsense to claim that moral agents were just a faster way to read philosophy like Aristotle or Kant. When you receive an instruction without even asking for it, and when that instruction is devised by an agent that sees what you see and hears what you hear, it's a completely different category of interaction. They become part of you. To be brought up in that way since birth, well... it's perfectly understandable that people felt Selinger had surrendered his moral agency, that his actions could not be described as truly virtuous, and ultimately, that the award wasn't justified."
In a move that was seen as highly principled by some, and by others as exactly the sort of thing a moral agent would advise, Selinger saved the Bautista Committee the trouble by voluntarily returning his award only three days after he received it. Otherwise, the controversy didn't faze Selinger in the slightest: he continued his work with his co-op, and by and large, his supporters stayed with him.
Later in his life, Selinger revealed to a friend that his moral agent was programmed with an unusual mix of utilitarian philosophy, buttressed with teachings from a Universalist agent. Crucially, it emerged that he himself had been tinkering with the agent's code over time, adding some values and removing others.
The revelation came too late to change any minds. By the mid-60s, it was common for children to grow up with constant moral agent instruction. So what if Selinger had a half-introself, half-exoself, potentially compromised/enhanced moral compass? Humans have been externalising parts of their bodies and minds for millennia. Moral agency was merely the latest step.