A little over 30 years ago, Joe Jeffrey set out to create a new community member at Bell Labs, where he was a member of the technical staff. He called the new member MENTOR. It was a person with a non-standard embodiment: a complex configuration of computer hardware and software. Bell Labs hired me as a consultant on Descriptive Psychology to work with Joe on the project. Together we designed, and Joe built, the first functioning artificial person (by some 20 years), and it was very successful. Details of what we did and how can be found in a series of papers in Advances in Descriptive Psychology.
The MENTOR project sheds a somewhat different light on the question of what it means for an embodiment to “provide for” behavior. To make a very long and technical story short, the reason we succeeded was that we specified the person the only way that works: top-down, from the most significant to the most detailed. We first specified the community in which it was to have a place, and then the place it was to have (technical mentor to new members of technical staff in a Bell Labs work community of 500+ individuals). In order to take its place within the community, MENTOR needed to know everything about who did what, and how it was done, so we specified the relationships MENTOR had with other community members and the social practices they engaged in, right down to the most detailed account of how things are done.
And then, having articulated exactly what MENTOR had to do, we gave it the capability of actually doing it by having things happen within the existing hardware/software configuration at Bell Labs. We did not create the capabilities of this configuration, nor did we influence them in any way. We took them as given; what MENTOR did had to be done through what could occur within the configuration. The capacities of the configuration provided for MENTOR’s behavior.
“Having things happen” is a remarkably awkward locution; it sounds like we are trying not to say something more direct, like “the configuration did what was needed” – and indeed, we are. As Greg Colvin remarked in a prior comment, programmers have for decades referred to what their software does as if it were a person doing something; it is a pervasive custom. As Descriptive Psychologists we need not have a problem with this way of speaking, because we can understand it for what it is: an Achievement Description, in which we specify what was achieved but explicitly make no commitment to what was intended. But this is a useful technical distinction not commonly known outside Descriptive Psychology; ordinary talk of software (or neural structures) “doing” something, like integrating two perspectives, almost inevitably carries the connotation that doing so is the structures’ intention. And off we go, right into the swamp.
So here is one way DP can immediately contribute to social neuroscience: by making widely known the distinction between Achievement Description and Intentional Action Description, thereby keeping clear what the commitment actually is when we talk about what software or neural structures “do”. This articulates the sense it makes to talk this way, without walking into a categorical swamp.