Trying To Think
Tuesday, November 02, 2004
... can be thought of as partitioning a space. Each input node of the NN is a dimension of the space; any given array of input values is then a point in the space. The outputs of the NN divide up this space, and so the point will fall into one (or more) of these regions.
All a NN does is map input values to output values. All this description does is create a visualisation of that mapping.
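A minimal sketch of this picture (the weights here are arbitrary illustrative values, not a trained network): a tiny 2-input, 2-output net, where labelling each point of the plane by whichever output wins makes the partition into regions visible.

```python
import math

# Illustrative fixed weights: 2 inputs -> 2 outputs (no hidden layer).
W = [[1.0, -1.0],   # weights into output 0
     [0.5,  1.0]]   # weights into output 1
B = [0.0, -0.2]     # output biases

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def outputs(point):
    """Map a point in input space to the two output activations."""
    return [sigmoid(sum(w * x for w, x in zip(row, point)) + b)
            for row, b in zip(W, B)]

def region(point):
    """Label the point by which output wins. This is the partition:
    the net divides the input plane into one region per winning output."""
    outs = outputs(point)
    return outs.index(max(outs))

# Sample a coarse grid to visualise the partition as text:
# each row is a horizontal slice of input space, each digit a region label.
for y in [1.0, 0.5, 0.0, -0.5, -1.0]:
    print("".join(str(region((x / 2.0, y))) for x in range(-4, 5)))
```

With these particular weights the boundary between the two regions is a straight line; more layers would let the regions take more complicated shapes.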
Internal representations: these can be thought of in the same way, simply by taking hidden nodes instead of output nodes. But this is not what I want to do - I want to create "objects" or "pictures" that can be thought to mediate the mappings.
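The hidden-node version of the same sketch (again with made-up, untrained weights): the vector of hidden activations for a given input is the "internal representation", and it partitions the input space in exactly the same way the outputs do.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative fixed weights: 2 inputs -> 3 hidden nodes -> 1 output.
W_hidden = [[1.0, -1.0],
            [0.5,  0.5],
            [-1.0, 0.3]]
W_out = [0.8, -0.4, 1.2]

def hidden_representation(point):
    """The hidden activations for an input point: the net's internal
    representation of that point, one dimension per hidden node."""
    return [sigmoid(sum(w * x for w, x in zip(row, point)))
            for row in W_hidden]

def output(point):
    """The final output is computed from the hidden representation,
    so the representation mediates the input-output mapping."""
    h = hidden_representation(point)
    return sigmoid(sum(w * a for w, a in zip(W_out, h)))
```

This shows the mediation, but only as another mapping: nothing here yet looks like an "object" or "picture" rather than a vector of numbers.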
Modularity: an important assumption in Cog Sci is that mental processes (and in particular, the abstract mental modules we theorize about) are modular. This seems to be the case, or at least we have theories of mind (folk and scientific) that are modular and have some success.
But how can we modularise a NN? Can we look at it (or its behaviour) and deduce functional modules, or can we only take a functional stance towards the behaviour as a whole (following Dennett - does he address the idea of modularisation in his talk of stances?)?
e.g. a NN may say words aloud, following the grammatical and ad-hoc rules of English. We can break this function up into modules, and brain damage in humans shows that different modules may be independently impaired.
Must the NN also have these modules, or may they be distributed through the NN? If the latter (which seems likely), then is it incorrect to modularise the NN's reading, even if it is functionally the same as ours? It would not show the same patterns of damage, but wouldn't the same modular theory of mind apply?
Even worse, the NN must have physical sub-structures, which can be considered as functional modules. These might not be the same functional modules as we have. And so the same function (reading words aloud) may have to be modularised differently?
What this might all mean: we have intuitions about how our minds should be modularised. These intuitions are (generally) based on modules that happen to exist, simply because of the idiosyncratic development of our brains, and have no deeper significance. (Of course, there might be only a small number of possible ways for a brain to develop, given the tasks that it has to perform. There might be a natural modularisation in the task itself, which the brain might as well mirror. If this is the case, then the NN that reads words aloud would probably show the same functional modules, if it is allowed to develop in a similar way to our brains.)
to do: think of some examples.