Interacting robot agents

Synthesising the origins of language and meaning using co-evolution, self-organisation and level formation.


Luc Steels reports on experiments in which robotic agents and software agents are set up to originate language and meaning. The experiments test the hypothesis that mechanisms for generating complexity commonly found in biosystems, in particular self-organisation, co-evolution, and level formation, may also explain the spontaneous formation, adaptation, and growth in complexity of language.

Luc Steels is director of the Artificial Intelligence Laboratory and Professor of Computer Sciences and Artificial Intelligence at the Vrije Universiteit Brussel.

Introduction

A good way to test a model of a particular phenomenon is to build simulations or artificial systems that exhibit the same or similar phenomena to those one is trying to model. This methodology can also be applied to the problem of the origins of language and meaning. Concretely, experiments with robotic agents and software agents can be set up to test whether certain hypothesised mechanisms indeed lead to the formation of language and the creation of new meaning.

By a language, I will mean an adaptive system of representation used by distributed agents for communication (and possibly other things). As a communication system, language allows the transmission of meanings from one agent to another through some physical medium such as sound. The agents are distributed in the sense that there is no central controlling agency that defines and imposes a language; each agent can only gain knowledge of the others through interaction. A language is adaptive when it expands or changes in order to cope with new meanings that have to be expressed. Moreover, new agents should be allowed to enter the group, and agents may leave.
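To make this definition concrete, the following minimal sketch (in Python) shows one way such a distributed, adaptive communication system could be simulated: agents with private lexicons invent and adopt words for meanings through repeated pairwise interactions, with no central agency, and new agents can join the group. The model, the names used (Agent, play_round, MEANINGS), and the simple adoption rule are illustrative assumptions only, not the mechanisms of the experiments reported later.

```python
import random

MEANINGS = ["obj-A", "obj-B", "obj-C"]

class Agent:
    def __init__(self):
        # each agent keeps its own private meaning -> word lexicon
        self.lexicon = {}

    def word_for(self, meaning):
        # invent a new random word if the agent has none for this meaning yet
        if meaning not in self.lexicon:
            self.lexicon[meaning] = "w%04d" % random.randrange(10000)
        return self.lexicon[meaning]

def play_round(speaker, hearer):
    meaning = random.choice(MEANINGS)
    word = speaker.word_for(meaning)
    if hearer.lexicon.get(meaning) == word:
        return True                      # success: both agents share the word
    hearer.lexicon[meaning] = word       # failure: hearer adopts the speaker's word
    return False

agents = [Agent() for _ in range(20)]
for step in range(5000):
    speaker, hearer = random.sample(agents, 2)
    play_round(speaker, hearer)
    if step == 2500:
        agents.append(Agent())           # new agents may enter the group

# after many interactions the population typically shares one word per meaning
coherent = all(len({a.lexicon.get(m) for a in agents}) == 1 for m in MEANINGS)
print("shared lexicon reached:", coherent)
```

Even this toy version already displays the two properties stressed above: no agent ever sees the whole lexicon, yet repeated local interactions typically drive the population towards a coherent vocabulary.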

Meaning is here equated with a distinction that is relevant to the agent. Some meanings, like colours, are perceptually grounded. Others, such as hierarchical relations, are socially grounded. Still others, like intentions or descriptions of current actions, are behaviourally grounded. Meanings may arise in any kind of domain and task setting, and need not necessarily be expressible in the language. The set of meanings that humans have access to is ever expanding as new environments are entered and new interactions take place. The same should hold for artificial agents that operate in open, dynamically changing environments.
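As an illustration of how a perceptually grounded distinction might look computationally, the sketch below lets an agent create a new binary distinction (a threshold on a sensory channel) whenever its existing repertoire fails to set a topic object apart from the other objects in view. The classes and the thresholding scheme (Distinction, create_distinction) are hypothetical simplifications for illustration, not the meaning-creation mechanism of the actual experiments.

```python
class Distinction:
    """A binary feature: is the value on `channel` above `threshold`?"""
    def __init__(self, channel, threshold):
        self.channel, self.threshold = channel, threshold

    def applies(self, obj):
        return obj[self.channel] > self.threshold

class Agent:
    def __init__(self):
        self.distinctions = []           # open-ended repertoire of distinctions

    def discriminate(self, topic, context):
        # look for an existing distinction that holds for the topic but for
        # none of the other objects in the context
        for d in self.distinctions:
            if d.applies(topic) and not any(d.applies(o) for o in context):
                return d
        return None

    def create_distinction(self, topic, context):
        # failure drives meaning creation: find a sensory channel whose value
        # sets the topic apart from every context object and threshold it there
        for channel in topic:
            others = [o[channel] for o in context]
            if all(topic[channel] > v for v in others):
                d = Distinction(channel, max(others))
                self.distinctions.append(d)
                return d
        return None

# objects are just bundles of sensory channel values (e.g. hue and size)
topic = {"hue": 0.9, "size": 0.2}
context = [{"hue": 0.3, "size": 0.5}, {"hue": 0.4, "size": 0.8}]
agent = Agent()
if agent.discriminate(topic, context) is None:
    agent.create_distinction(topic, context)
print(len(agent.distinctions), "distinction(s) in the agent's repertoire")
```

The repertoire is open-ended: new distinctions are added only when the existing ones fail, so the agent's stock of meanings grows as its environment and tasks demand.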

My main hypothesis is that language is an emergent phenomenon, in two senses. First, it is a mass phenomenon actualised by the different agents interacting with each other. No single individual has a complete view of the language, nor does anyone control it. In this sense, language is like a flock of birds, which attains and keeps its coherence through the individual rules enacted by each bird. Second, language is emergent in the sense that it forms spontaneously once the appropriate physiological, psychological and social conditions are satisfied. The main puzzle to be solved is how.

The origins of complexity are currently being studied in many different areas of science, ranging from chemistry to biology. The general study of complex systems, which started in earnest in the sixties with the study of dissipative systems, synergetics, and chaos, is trying to identify general mechanisms that give rise to complexity. These mechanisms include evolution, co-evolution, self-organisation and level formation.

This paper does not focus on the origin of cooperation or the origin of communication as such, although these are obviously prerequisites for language. These topics are being investigated by other researchers from a similar biological point of view. For example, Dawkins has argued that two organisms will cooperate if they share enough of the same genes, because what counts is the further propagation of these genes, not the survival of the individual organism. Axelrod, Lindgren, and others have shown that cooperation can arise even if every agent is entirely selfish. McLennan, and Werner and Dyer, have shown experimentally that communication arises as a side effect of cooperation when it is beneficial to that cooperation.

The emergent communication systems discussed in these papers do not, however, constitute a language in the normal sense of the word. The number of agents is small and fixed. The repertoire of symbols is small and fixed. None of the other properties of a natural language, such as multiple levels, synonymy, ambiguity, or syntax, is observed. The main target of the research surveyed in this paper is to study the origins of communication systems that do have all these properties.

Before starting, an important disclaimer must be made. This work does not make any empirical claim that the proposed mechanisms are an explanation of how language actually originated in humans. Such investigations must be (and are) carried out by neurobiologists, anthropologists, and linguists studying the historical evolution of languages, child language, or creolisation.

Here, I only propose and examine a theoretical possibility. If this possibility can be shown to lead to the formation of language and meaning in autonomous, distributed artificial agents, then it is at least coherent and plausible. Thus, if meaning-creation mechanisms enable agents to autonomously construct and ground meaning in perception, action, and interaction, then it is no longer self-evident that meaning has to be universal and innate. And if the proposed language-formation mechanisms enable artificial agents to create their own language, then it is no longer self-evident that linguistic knowledge must for the most part be universal and innate, or that language can only be explained by genetic mutation and selection.