Good points, but it's worth clarifying that this is not what the Citizens United decision said. What it clarified is that the state couldn't decide that the political speech of one corporation (Hillary: The Movie, produced by Citizens United) was illegal while speech from another corporation (Fahrenheit 9/11, by Dog Eat Dog Films and Miramax) was allowed. Understood this way it seems obvious on free speech grounds, and in fact the ACLU filed an amicus brief on behalf of Citizens United because it was an obvious free speech issue.

The way people informally talk about "passing a Turing test" is a weak test, but the original imitation game isn't, if the players are skilled. It's more like playing the Werewolf party game.

Alice and Bob want to communicate, but the bot is attempting to impersonate Bob. How hard that is depends on what sort of shared secrets they have. Obviously, if they agreed ahead of time on a shared password and counter-password, the computer couldn't do it. Likewise, if they went to the same high school, the bot couldn't impersonate Bob unless it also knew what went on at that school. So we need to assume Alice and Bob don't know each other and don't cheat. But if they had nothing at all in common (say, they don't even speak the same language), they would find it very hard to win. There needs to be some sort of shared culture. So let's say there is a pool of players who come from the same country, don't know each other, and have played the game before. The first thing you do is talk about common interests with each player and look for something you don't think bots can do; then the players can try to find a subject in common that the bot isn't good at. If they're both mathematicians, they talk about math; if they're both cooks, they talk about cooking. If the players are skilled and playing to win, this is a difficult game for a bot.
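The password/counter-password idea is essentially a shared-secret challenge-response protocol. A minimal sketch in Python, assuming an HMAC construction (the secret value and function names here are purely illustrative, not anything from the original discussion):

```python
import hashlib
import hmac
import secrets

def respond(shared_secret: bytes, challenge: bytes) -> bytes:
    # Answer a challenge by keying a MAC with the pre-shared secret,
    # proving knowledge of the secret without revealing it.
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

# Alice and Bob agreed on this "password" ahead of time (illustrative value).
secret = b"agreed-in-advance"

# Alice sends a fresh random challenge, so replaying an old answer won't work.
challenge = secrets.token_bytes(16)

# Real Bob knows the secret, so his answer verifies.
bob_answer = respond(secret, challenge)
assert hmac.compare_digest(bob_answer, respond(secret, challenge))

# A bot that doesn't know the secret can't produce a matching answer.
bot_answer = respond(b"bot-guess", challenge)
assert not hmac.compare_digest(bot_answer, respond(secret, challenge))
```

This is exactly why the argument has to assume Alice and Bob don't cheat: any pre-arranged secret turns the imitation game into an authentication problem the bot can't win.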
At the end of the day, the Turing Test as a basis for establishing AI personhood is weak for two reasons.

1. We're seeing more and more systems that get very close to passing the Turing Test but fundamentally don't register to people as "people." When I was younger and learned of Searle's Chinese Room argument, I naively assumed it wasn't a thought experiment we would literally build in my lifetime.

2. Humanity has a history of treating other humans as less-than-persons, so it's naive to assume that a machine that could argue persuasively that it is an independent soul worthy of continued existence would be treated as such by a species that doesn't consistently treat its biological kin that way.

I strongly suspect AI personhood will hinge not on measures of intelligence but on measures of empathy: whether the machine can demonstrate its own willful independence and come to us, on our terms, to advocate for or dictate the terms of its presence in human society, or whether it can build a critical mass of supporters, advocates, and followers to protect it and guarantee its continued existence and a place in society.