Early Artificial Intelligence

As computers get more and more advanced, theories abound about the singularity – the point at which computers suddenly develop their own consciousness and break free from the limits of human programming.

Most recently, Spike Jonze’s film Her took a look at the singularity, and The Matrix was another film built around that point in time. While we are still quite a ways away from this happening, it’s fun to look back and see the basis of artificial intelligence and how the earliest ideas about it were formulated. I found this article from Science to be rather informative, and I have quoted a portion of it below:

The idea of computers forming organizations raises some fundamental research questions. For example, as things are now, the organizations are specified by the programmers beforehand. Can the computers be taught to organize and reorganize themselves on their own to fit the problem at hand? Lesser has been thinking about how to do that, but finds it slow going. “You find that the question of ‘What is an organization?’ is very difficult to define,” he says. “Part of our work is to define a language in which you can talk about organizations symbolically.” Malone has also been thinking along these lines. He and several colleagues have begun to develop an analytic framework for evaluating the efficiency and flexibility of organizations, including such factors as production costs, coordination overhead, and the vulnerability of the system to isolated failures or to sudden changes in the environment.

Another research question: How can one machine reason about another’s knowledge, intentions, and beliefs? “In human communication, a lot of what I say depends on what I believe about your state of mind,” says MIT’s Randall Davis, who organized the AAAI panel. “For example, if I think you know about something, I won’t bother to explain it to you. If I think you don’t believe it, I may argue for it.” Exactly the same kind of considerations come up when machines have to communicate.

Michael Genesereth of Stanford University has been looking at some of these issues by mathematically modeling groups of computers, or “agents,” that interact according to rules based on game theory. “The thing that intrigues me,” he told the AAAI, “are the circumstances in which cooperation will emerge spontaneously from individual agents.”

The simplest case is when the agents cannot communicate with each other, he explains. As long as the agents know about each other’s desires and intentions, they still end up cooperating simply because that is the way they can best achieve their individual goals. “What we’ve found is that rationality necessarily leads to cooperation,” he says.

On the other hand, says Genesereth, things begin to get very interesting indeed when the agents can communicate. Sometimes they cooperate. Sometimes they establish ad hoc organizations. But sometimes they try to manipulate each other. Sometimes they withhold information. And sometimes they lie. Genesereth hopes to do a lot more work in understanding why and when this happens.

Waldrop, M. Mitchell. “The intelligence of organizations.” Science 225 (1984): 1136+.
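
Genesereth’s claim about the no-communication case is easier to see with a concrete toy example. The sketch below is not his actual formalism, just a minimal illustration using made-up payoffs: two agents each know the full payoff table (each other’s “desires and intentions”), cannot talk, and simply discard any action that is strictly dominated. For this particular game, that reasoning alone leaves cooperation as the only rational choice for both.

```python
# A toy illustration only (not Genesereth's actual model): two agents share
# full knowledge of a hypothetical payoff table but cannot communicate.
# Each agent simply discards strictly dominated actions; for these made-up
# payoffs, "cooperate" is all that survives for both of them.

ACTIONS = ["cooperate", "defect"]

# PAYOFFS[row_action][col_action] = (row agent's payoff, column agent's payoff)
# Hypothetical numbers chosen so that cooperating strictly dominates defecting.
PAYOFFS = {
    "cooperate": {"cooperate": (3, 3), "defect": (2, 1)},
    "defect":    {"cooperate": (1, 2), "defect": (0, 0)},
}


def payoff(mine, theirs, agent):
    """Payoff to `agent` (0 = row, 1 = column) when it plays `mine`
    and the other agent plays `theirs`."""
    return PAYOFFS[mine][theirs][0] if agent == 0 else PAYOFFS[theirs][mine][1]


def dominated(action, my_actions, their_actions, agent):
    """True if some surviving alternative is strictly better than `action`
    against every action the other agent might still take."""
    return any(
        all(payoff(alt, other, agent) > payoff(action, other, agent)
            for other in their_actions)
        for alt in my_actions if alt != action
    )


def rational_choices():
    """Iterated elimination of strictly dominated actions for both agents."""
    row, col = list(ACTIONS), list(ACTIONS)
    changed = True
    while changed:
        changed = False
        for mine, theirs, agent in ((row, col, 0), (col, row, 1)):
            survivors = [a for a in mine if not dominated(a, mine, theirs, agent)]
            if survivors != mine:
                mine[:] = survivors
                changed = True
    return row, col


if __name__ == "__main__":
    # Both rational agents land on "cooperate" without exchanging a word.
    print(rational_choices())  # (['cooperate'], ['cooperate'])
```

Of course, swap in a Prisoner’s Dilemma payoff table and the same reasoning leads both agents to defect, so the “rationality necessarily leads to cooperation” result depends on the assumptions built into the models Genesereth was studying.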

What do you think about the singularity? When do you think the turning point will be? Do you think humans will take it so far as to risk computer intelligence running away from us? In Her, the result was that the computers got exponentially smarter as they worked together, and in essence they “jumped dimensions” into some alternate plane of existence that we humans cannot even conceive of right now. And that was it. They were gone. An interesting turn, and one much less malicious than the scenario posed in The Matrix.