The thing about the Turing Test is that it's a metric for determining when a computer qualifies as human. If you're dealing with people who can't distinguish between computer and human conversation, that's not on you. Either the computers have gotten too good, or those people just aren't serious about drawing a line.
I'm pretty sure the first AI to pass the Turing Test was built in the 1960s (presumably ELIZA). The Turing Test is an absolutely garbage metric for identifying whether a computer qualifies as human, and it's entirely dependent on the whims of the individuals who make up the test group.
The Turing Test is an absolutely garbage metric for identifying whether a computer qualifies as human
It's a useful metric because it addresses the primary means by which humanity is actually evaluated: judgment by other humans. You can set up a synthetic test to determine whether a response is computer generated, but that won't measure behavior as humans evaluate it. If the two results diverge, it will be because of some set of characteristics that humans aren't reliably picking up on.
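To make that "divergence" concrete, here's a rough sketch of the comparison. Everything in it is an assumption for illustration: the accuracy figures, the synthetic_detector, and the human_judge are made up, not taken from any real study.

```python
import random

random.seed(0)

# Hypothetical setup: 10,000 transcripts, each truly authored by a human or a machine.
# Every rate below is a made-up assumption chosen only to illustrate the argument.
transcripts = [random.choice(["human", "machine"]) for _ in range(10_000)]

def synthetic_detector(author):
    # Stand-in for an automated check; assumed to be ~95% accurate on both classes.
    if random.random() < 0.95:
        return author
    return "human" if author == "machine" else "machine"

def human_judge(author):
    # Stand-in for an interrogator who has grown hyper-sensitive to machine output:
    # rarely fooled by machines (5% miss rate), but flags 20% of real humans as machines.
    if author == "machine":
        return "machine" if random.random() < 0.95 else "human"
    return "human" if random.random() < 0.80 else "machine"

disagreements = {"human": 0, "machine": 0}
humans_called_machines = 0

for author in transcripts:
    synthetic, judged = synthetic_detector(author), human_judge(author)
    if synthetic != judged:
        disagreements[author] += 1
    if author == "human" and judged == "machine":
        humans_called_machines += 1

print("synthetic vs. human disagreement, by true author:", disagreements)
print("real humans judged to be machines:", humans_called_machines)
```

Under assumptions like these, most of the disagreement falls on transcripts written by real humans, which is exactly the failure mode described next.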
The original name for the Turing Test was "The Imitation Game". The fact that computers could pass the test as early as the 1960s only proves that humans (in that case, humans with very little exposure to computer behavior) can be reliably deceived. But the consequence of iterating this game over sixty years of practice is a hyper-sensitivity to computer output, such that end users now mistake humans for computers instead of the other way around.
entirely dependent on the whims of the individuals who make up the test group
Not whims, but learned observational patterns. This is what ultimately separates people from machines - patterns of behavior. If a computer and a human exhibit the exact same behavioral pattern, there's no way to distinguish one from the other.
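To put that last point slightly more formally (a sketch using Bayes' rule; the notation is mine, not anything from the discussion): if a behavior b is exactly as likely to come from a human as from a machine, observing it tells the judge nothing beyond their prior.

$$
\frac{P(\text{human}\mid b)}{P(\text{machine}\mid b)}
= \frac{P(b\mid \text{human})}{P(b\mid \text{machine})}\cdot\frac{P(\text{human})}{P(\text{machine})}
= \frac{P(\text{human})}{P(\text{machine})}
\quad\text{when } P(b\mid \text{human}) = P(b\mid \text{machine}).
$$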