John Krakauer

The Johns Hopkins Hospital Department of Neurology

 

Abstract

Representations are things that we use to engage in representational behavior. For the most part, representational behavior of the kind that we are all interested in (if we are honest) is what humans do – we can contemplate black holes, imagine non-existent architectures and worlds (think Narnia and Dune), and write abstracts like this one. Representation is an explanandum – it is what must be present to engage in overt deliberative thought and to understand things. Most intelligent behavior is non-representational; it does not need to be, because survival can proceed perfectly well without representation: an arctic fox does not worry about what ice is. It is easy to confuse these two kinds of behavior and the means to explain them. Naturalizing representation is, for the most part, a project that perpetuates this confusion. It is driven by the hope that some intelligent animal behaviors rely on representational capacities of the kind that humans undoubtedly have, and that these capacities can be dissected with the modern tools of neuroscience. Two of the terms used for these proto-representations are cognitive maps and internal models. The claim is that these are the foot in the door that will get us to the representations needed for full-blown conceptual abstract thought. This stance is, in my view, misguided for several reasons that I will elucidate. Intelligence – competence without comprehension – does not need representations. Overt representations are the substrate upon which comprehension operates, but we do not yet have a theory for them.

 
