Abstract
The Turing Test (TT) is best regarded as a model for testing intelligence, in which an entity's intelligence is inferred from its ability to be attributed 'human-likeness' during a text-based conversation. The problem with this model, however, is that it does not care whether, or how well, an entity produces a meaningful conversation, so long as its interactions are humanlike enough. As a consequence, the TT attracts projects that concentrate on how best to fool the judges. In light of this, I propose a new version of the TT: the Questioning Turing Test (QTT). Here, the entity has to produce an enquiry rather than a conversation, and it is parametrised along two further dimensions in addition to 'human-likeness': 'correctness', evaluating whether the entity accomplishes the enquiry; and 'strategicness', evaluating how well the entity accomplishes the enquiry, in terms of the number of questions asked.