Providing for, Part 2

A little over 30 years ago, Joe Jeffrey set out to create a new community member at Bell Labs, where he was a member of the technical staff. He called the new member MENTOR. It was a person with a non-standard embodiment: a complex configuration of computer hardware and software. Bell Labs hired me as a consultant on Descriptive Psychology to work with Joe on the project. Together we designed, and Joe built, the first functioning artificial person, preceding anything comparable by some twenty-plus years, and it was very successful. Details of what and how can be found in a series of papers in Advances in Descriptive Psychology.

The MENTOR project sheds a somewhat different light on the question of what it means for an embodiment to “provide for” behavior. To make a very long and technical story short, the reason we succeeded was that we specified the person the only way that works: top-down, from the most significant to the most detailed. We first specified the community in which it was to have a place, and then the place it was to have (technical mentor to new members of technical staff in a Bell Labs work community of 500+ individuals). To take its place within the community, MENTOR needed to know everything about who did what and how it was done, so we described in detail the relationships MENTOR had with other community members and the social practices they engaged in, right down to the most detailed account of how things are done.
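As a rough picture of that top-down ordering, think of the specification as a nested structure in which the community comes first and procedural detail comes last. The sketch below is purely illustrative (none of these names or fields come from the actual MENTOR specification); it shows only the direction of specification, from most significant to most detailed.

```python
from dataclasses import dataclass, field

@dataclass
class Practice:
    """A social practice, specified down to how it is actually done."""
    name: str
    steps: list[str] = field(default_factory=list)  # the most detailed level

@dataclass
class Place:
    """The place a member has: its relationships and its practices."""
    role: str
    relationships: list[str] = field(default_factory=list)
    practices: list[Practice] = field(default_factory=list)

@dataclass
class Community:
    """The community is specified first; every place is a place within it."""
    name: str
    size: int
    places: dict[str, Place] = field(default_factory=dict)

# Specification proceeds top-down: community, then place, then practices.
community = Community(name="Bell Labs work community", size=500)
community.places["MENTOR"] = Place(
    role="technical mentor to new members of technical staff",
    relationships=["new member of technical staff", "supervisor"],
    practices=[Practice(
        name="answer a technical question",
        steps=["receive the question", "find who knows what",
               "compose an answer", "deliver the answer"])],
)
```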

And then, having articulated exactly what MENTOR had to do, we gave it the capability of actually doing it by having things happen within the existing hardware/software configuration within the Bell Labs community. We did not create the capabilities of this configuration, nor did we influence them in any way. We took them as given; what MENTOR did had to be done through what could occur within the configuration. The capacities of the configuration provided for MENTOR’s behavior.
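One way to picture what “having things happen” within the configuration amounts to: the configuration offers a fixed repertoire of things that can occur, and everything MENTOR does must be composed of those occurrences. Here is a hypothetical sketch (these interfaces are my illustration, not the actual Bell Labs configuration):

```python
from typing import Protocol

class Configuration(Protocol):
    """The existing hardware/software configuration, taken as given.
    We neither created these capacities nor influenced them."""
    def next_message(self) -> tuple[str, str]: ...           # (sender, text)
    def send_message(self, recipient: str, text: str) -> None: ...
    def look_up(self, topic: str) -> str: ...

def answer_a_question(config: Configuration) -> None:
    """One of MENTOR's behaviors, composed entirely of occurrences
    the configuration already provides for."""
    sender, question = config.next_message()
    answer = config.look_up(question)
    config.send_message(sender, answer)
```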

“Having things happen” is a remarkably awkward locution; it sounds like we are trying not to say something more direct, like “the configuration did what was needed” – and indeed, we are. As Greg Colvin remarked in a prior comment, programmers have for decades referred to what their software does as if it were a person doing something; it is a pervasive custom. As Descriptive Psychologists we need not have a problem with this way of speaking, because we can understand it for what it is: an Achievement Description, in which we specify what was achieved but explicitly make no commitment to what was intended. But this is a useful technical distinction not commonly known outside Descriptive Psychology; ordinary talk of software (or neural structures) “doing” something, like integrating two perspectives, almost inevitably carries the connotation that the structure intended it. And off we go, right into the swamp.

So here’s one way DP can immediately contribute to social neuroscience: by making widely known the distinction between Achievement Description and Intentional Action Description, thereby keeping clear what the commitment actually is when we talk about what software or neural structures “do”. This articulates the sense it makes to talk this way, without walking into a categorical swamp.
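To make the two kinds of description concrete, here is a hypothetical sketch (the field names are mine, and DP’s full parametric analysis of intentional action has more parameters than shown): an Achievement Description commits only to what was achieved, while an Intentional Action Description adds commitments about what the actor wanted and knew.

```python
from dataclasses import dataclass

@dataclass
class AchievementDescription:
    """Commits only to what was achieved; silent on intention."""
    actor: str
    achievement: str

@dataclass
class IntentionalActionDescription(AchievementDescription):
    """Adds the commitments an Achievement Description withholds."""
    wanted: str   # what the actor intended to accomplish
    knew: str     # what the actor distinguished about the situation

# "The net integrated two perspectives," read as an Achievement
# Description, commits only to this much:
ad = AchievementDescription(
    actor="neural net",
    achievement="integrated two perspectives")

# Read with the ordinary connotation of intention, it tacitly
# upgrades to this far stronger claim:
iad = IntentionalActionDescription(
    actor="neural net",
    achievement="integrated two perspectives",
    wanted="to integrate the two perspectives",
    knew="the two perspectives as such")
```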


One Response to Providing for, Part 2

  1. Pat Aucoin says:

    I will try for a re-description. To me, MENTOR is a robot that emulates an activity (process) that Tony or Joe would carry out. Its capacity is its ability to carry out programmed instructions; its history is the procedural code that Joe wrote. At the end of a process (an interaction with a user), one can speak of an achievement.

    MENTOR is not an avatar of Tony or Joe, because its embodiment is not visually iconic of Tony or Joe and its abilities are limited to one application. I would hesitate to think of it as Joe’s prosthetic. It seems like one of the person-like robots that Pete fancied.

    On the other hand, if Tony and Joe had designed an ordinary neural net, it would have achievements, but these would not be associated with IA processes. Pete stated, ‘A neural net has skills but no knowledge.’ We really can’t describe, model, or simulate what it will do.

    I will try again to characterize an easy-to-understand starting point for the relation between neuroscience and DP. Neuroscientists, of course, are Persons in the DP context. They conduct experiments on the sensory-communications-neural structures of humans and other carbon-based organisms. They talk about the outcome of these experiments in neurological terms and then in the behavioral terms that make sense to them. I could say more about what I think comes next, but instead I feel compelled to re-read previous statements by Tony and Ned. This is simplistic and is meant to be simplistic.
