Nicholas Shea

Professor of Philosophy, Institute of Philosophy, University of London


Abstract

Representation is a central explanatory tool of the cognitive sciences. There is not yet a strong consensus about its nature. However, many (but not all) explanations that rely on representations can in turn be explained by a family of theories of representation that appeal to internal entities that: (i) stand in exploitable relations to the world (e.g. correlation, correspondence), and (ii) interact in internal processes (algorithms); both (iii) in the service of performing some task or function.

We can also explain why things that afford this kind of explanation arise systematically in nature. Very roughly, stabilising processes like natural selection and learning are a diachronic force for producing certain outcomes robustly, and one way to achieve that synchronically is to calculate over internal states bearing exploitable relations to various features of the problem space. The most obvious cases are where representations are decoupled from immediate environmental input, but the same rationale, and the same explanatory scheme, is also present in simpler cases where no decoupling is involved.

A naturalistic account of the nature of representation, along these lines, makes sense of appeals to neural representation. There, the representational vehicles are patterns of activity in neural assemblies (or sometimes individual neurons), and computations are transitions between attractors or regions in neural activation space. Such accounts are equally applicable to explaining the operation of deep neural networks.

