Telecom Paris
Dep. Informatique & Réseaux

J-L. Dessalles - Home page

November 2021

Module Athens TP-09

    Social Emergence in Complex Systems
    Jean-Louis Dessalles - Associate professor at Telecom-Paris
    Julien Lie-Panis - final year PhD student at Telecom-Paris and Institut Jean Nicod



Emergence of social structures

This lab work is based on the use of Evolife. If necessary, install it.

Please answer in English!


Collective decision

In many cases in the animal kingdom, important decisions are taken collectively, without any individual having the last word. Similarly, a collective phenomenon has been invoked to explain the sudden growth of the demonstrations that led to the fall of the Berlin Wall in 1989. A similar situation obtains in the brain, where neural assemblies ‘decide’ to become synchronous. This study aims to show that collective decisions may be sudden and clear-cut, even if all individual agents tend to hesitate.

The simulation we study here shows how a decision can be made by a collective. It offers an insight into the kind of collective processes involved when birds decide to migrate away. In this story, the swallows’ tendency to perch (on a wire, say) increases smoothly as weeks pass. Additionally, swallows tend to imitate each other: their tendency to join a group of perched birds is proportional to the group’s size. We observe that, as real birds do in autumn, a clear-cut decision is made by the collective, despite the smoothness of all individual characteristics (probability to perch down, probability to join a group).
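The dynamics described above can be sketched in a few lines of Python. This is a standalone illustration, not the actual Swallows.py code: the formulas and the numeric values standing in for TimeSlope, GroupSlope and GroupOffset are assumptions.

```python
import random

# Standalone sketch of the collective-decision dynamics (not Swallows.py).
# A flying swallow's probability of perching grows slowly with time
# and with the number of already-perched birds (beyond an offset).
# All numeric values here are illustrative, not Evolife's defaults.
def perch_probability(t, perched, time_slope=1e-4, group_slope=1e-3, group_offset=0):
    return min(1.0, time_slope * t + group_slope * max(0, perched - group_offset))

def simulate(n_agents=100, steps=5000):
    perched = 0
    history = []
    for t in range(steps):
        for _ in range(n_agents - perched):      # each flying bird may perch
            if random.random() < perch_probability(t, perched):
                perched += 1
        history.append(perched)
    return history
```

Plotting the history typically shows a long plateau followed by a sharp transition: smooth individual probabilities still produce a clear-cut collective decision.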

The program Swallows.py (in the directory Other/Swallows) implements this collective decision process. You can run it by executing the local command Starter in that directory. The Configuration Editor allows you to change parameter values.

The blue curve shows the proportion of the population perched. Observe that the number of swallows on the wire increases dramatically at some point in time. The yellow curve shows the increasing perching probability (x 1000). Run the simulation several times with the same parameters. Observe that the decision point varies.

Look at the definition of the various parameters (NbAgents, TimeSlope, GroupSlope, GroupOffset, Inertia). How does collective decision time vary depending on each of these parameters?

Sooner with larger population / Later with larger population
Sooner with steeper time slope / Later with steeper time slope
Sooner with steeper group slope / Later with steeper group slope
Sooner with larger group offset / Later with larger group offset
Sooner with larger inertia / Later with larger inertia


Suggestions for further work

Emergence of segregationism

Schelling.jpg Thomas Schelling (1971) studied the dynamics of residential segregation to elucidate the conditions under which individual decisions about where to live will interact to produce neighbourhoods that are segregated by race. His model shows that this can occur even though individuals do not act in a coordinated fashion to bring about these segregated outcomes.

Schelling proposed a prototype model in which individual agents are of two types, say red and blue, and are placed randomly on the squares of a checkerboard. The neighbourhood of an agent is defined to be the eight squares adjoining his location. Each agent has preferences over the composition of his neighbourhood, defined as the proportion of reds and blues. In each period, the most dissatisfied agent moves to an empty square provided a square is available that he prefers to his current location. The process continues until no one wants to move.

The typical outcome is a highly segregated state, although nobody actually prefers segregation to integration. You may have a look at this very beautiful visualisation of the phenomenon.
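Schelling's checkerboard dynamics can be sketched as follows. This is a hypothetical standalone illustration, not Evolife's Segregationism.py; for simplicity it moves a random dissatisfied agent to a random empty square, rather than the most dissatisfied agent to a square it prefers, and the grid wraps around.

```python
import random

# Standalone sketch of Schelling's model (not the Evolife implementation).
# Cells hold 'R', 'B', or None (empty); the grid is a torus.
def neighbours(grid, x, y):
    n = len(grid)
    return [grid[(x + dx) % n][(y + dy) % n]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def satisfied(grid, x, y, tolerance=0.5):
    """An agent accepts at most a fraction 'tolerance' of unlike neighbours."""
    me = grid[x][y]
    others = [c for c in neighbours(grid, x, y) if c is not None]
    if me is None or not others:
        return True
    unlike = sum(1 for c in others if c != me)
    return unlike / len(others) <= tolerance

def step(grid, tolerance=0.5):
    """Move one dissatisfied agent to a random empty square; True if moved."""
    n = len(grid)
    unhappy = [(x, y) for x in range(n) for y in range(n)
               if grid[x][y] is not None and not satisfied(grid, x, y, tolerance)]
    empty = [(x, y) for x in range(n) for y in range(n) if grid[x][y] is None]
    if not unhappy or not empty:
        return False
    x, y = random.choice(unhappy)
    ex, ey = random.choice(empty)
    grid[ex][ey], grid[x][y] = grid[x][y], None
    return True
```

Iterating step until it returns False reproduces the qualitative outcome: clusters of like colours form even at 50% tolerance.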

The program Segregationism.py (in the directory Other/Segregationism) implements agents of two types (red and blue) that move randomly. To run it, execute the local command starter and load the configuration file Segregationism.evo. The Configuration Editor allows you to change parameter values.
Schelling decision
Modify the method satisfaction called by decisionToMove in class Individual: agents inspect their neighbourhood (through the method InspectNeighbourhood), and decide whether to move away or not. Copy your modification in the box below.


Agents have only weakly segregationist local behaviour, in the following sense: each agent tolerates up to 50% of neighbours that differ from her/him; below that threshold, agents are indifferent. Implement the model and observe how segregation emerges. You may increase the value of DisplayPeriod in the Configuration Editor to speed up the simulation.
Schelling Tolerance (1)
Be sure to have loaded the configuration file Segregationism.evo.
With the first solution to the previous exercise, for which minimal value of Tolerance does segregation disappear?


Schelling Tolerance (2)
Set Tolerance to 80. What happens and why?


Schelling Radius (1)
Be sure to reload the configuration file Segregationism.evo to bring parameters back to standard values.
Increase the neighbourhood radius from 1 to 4. Explain what we get with this larger value.


Schelling Radius (2)
Set neighbourhood radius to 10. Explain what we get.


Schelling Colours
Restore default parameters and try 3 colours. Describe what happens.


Suggestions for further work

(non-)Emergence of a convention

The purpose of this small experiment is to show that some conventions, such as the side of the road people drive on, do not emerge easily from mere local interactions. As in the segregationism experiment, we suppose that individuals influence each other within a neighbourhood. This time, individuals do not move, but they may change colour (meaning that they adopt the other convention) if they are surrounded by too many people who do not behave like them.

The experiment shows that local coordination emerges rapidly. The situation is reminiscent of what we might observe for magnetism or spin glasses. The question is whether we may find conditions that would allow a single convention to invade the population.
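The local rule at work here can be sketched as follows. This is a standalone threshold-rule illustration, not Convention.py; 0 and 1 stand for the two conventions, and the tolerance scale is a fraction rather than Evolife's percentage.

```python
import random

# Standalone sketch of the convention dynamics (not Convention.py).
# Conventions are 0 and 1; an agent switches when the fraction of
# neighbours holding the other convention exceeds its tolerance.
def update(grid, tolerance=0.5):
    n = len(grid)
    x, y = random.randrange(n), random.randrange(n)
    around = [grid[(x + dx) % n][(y + dy) % n]
              for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    unlike = sum(1 for c in around if c != grid[x][y])
    if unlike / len(around) > tolerance:
        grid[x][y] = 1 - grid[x][y]   # adopt the other convention
```

A uniform population is trivially stable under this rule; the interesting question, explored below, is whether a mixed population ever reaches that state.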
The program Convention.py (in the directory Other/Segregationism) implements agents of two types (red and blue) that may change colour, depending on the convention they adopt.
Important: for the program to run properly, you must have implemented the function satisfaction, as explained in the segregationism experiment.
Execute the program through the local command starter; load the configuration file Convention.evo and press [run]. The Configuration Editor allows you to change parameter values.
Convention (1)
Try to stabilize the process by changing Tolerance. For which minimal value of Tolerance do you get a strictly stable state?


Convention (2)
Values of Tolerance below that value lead to unstable states. Why won’t such states evolve toward a uniform state?

    Because frontier dots cannot be "satisfied".
    Because uniform regions tend to have similar sizes.
    Because smaller uniform regions tend to increase in size.
    Because same colour regions are in competition to connect to each other.


Convention majority (1)
For which value of Tolerance do agents play a majority game? (i.e. they systematically adopt the prevalent convention in their neighbourhood)


Set Tolerance to that value. Intuitively, if people tend to imitate the local majority, we should reach a point at which the prevalent convention should invade the whole population.
Convention majority (2)
The only relevant parameter left is NeighbourhoodRadius. Do you succeed in evolving a uniform convention, and for which value?

Yes, with a radius larger than 4.
Yes, but with a radius larger than 9.
No, we do not reach uniform convention.


Convention majority (3)

Once a convention becomes prevalent, it invades the population.
No convention becomes prevalent, as both conventions reach perfect equality.
Uniform convention is unlikely to emerge, as random fluctuations keep the system away from it.


Suggestions for further work

Social bubbles

Social bubbles, also called filter bubbles, characterize a state of intellectual (or political, or artistic...) isolation resulting from information bias. The phenomenon is observed when individuals get information from their friends and tend to make friends among people who share the same kind of information. Recommender systems may amplify the phenomenon, as the content they suggest to you corresponds to what was selected by individuals who, in the past, made choices similar to yours.

The simulation proposed here to illustrate the phenomenon is the most basic one can think of. As it runs, individuals tend to group into social bubbles corresponding to definite cinematographic preferences.

The program Bubbles.py (in the directory Other/Segregationism) implements this scenario. To run it, execute the local command starter and load the configuration file Bubbles.evo. The Configuration Editor allows you to change parameter values.

Two parameters are crucial here: try to find values for InfluenceRadius and NeighbourhoodRadius for which the social bubble phenomenon emerges.
Social bubbles
What is your value for NeighbourhoodRadius:    

Social bubbles
What is your value for InfluenceRadius:    


Cocktail party


In a cocktail party, everyone must raise their voice to cover the noise of neighbouring conversations. Inevitably, the global noise level increases and conversational circles shrink.
The basic situation is implemented in the program cocktail.py (directory Other/Cocktail), launched by the local command ‘Starter’.
Agents are randomly located in a 2-D space and attempt to engage in conversation with their neighbours, trying to be heard by them while sparing energy.

At the beginning, agents just talk (or sing) by themselves. You can see that agents progressively raise their voice level. The situation may stabilize at an intermediate level, or evolve into a situation in which everyone shouts at the maximum level.

Note that the system is quite sensitive to parameter values:

There is a possibility of displaying the surrounding noise (parameter NoiseDisplay), but it slows down the simulation.

Consider the Influence parameter. Below a minimal value (e.g. 20), neighbouring cells cannot even hear the minimal voice level (e.g. 5).
Observe that even this minimal value will most probably lead, eventually, to a noise surge.

Try to find a combination of density and influence below which noise remains localized.

Open Cocktail.py and locate the attenuation function. As you can see, voice attenuation depends linearly on the inverse of distance.
Change this to a steeper decrease, e.g. the inverse of the square of the distance (copy the Python line in the box below). Can you easily create a situation in which noise does emerge but remains localized? For which values of the parameters? Can you see a difference with the inverse-linear case?
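The requested change might look like the following sketch. The function name, signature and the short-range floor are assumptions; adapt them to the actual attenuation function you find in Cocktail.py.

```python
# Hypothetical sketch of inverse-square attenuation (names and the
# distance floor are assumptions; adapt to Cocktail.py's own function).
def attenuation(voice, distance):
    """Perceived voice level: decays with the square of the distance."""
    if distance < 1:
        return voice              # avoid amplification at very short range
    return voice / distance ** 2
```

Compared with the inverse-linear case, voice levels fall off much faster with distance, so a speaker disturbs fewer distant conversations.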


Suggestions for further work

Small worlds

    [by Félix Richart]

(maybe read the slides on small worlds first.)

Many real-life networks are randomly generated: the Facebook network of friendships, the Web, the Twitter network of followers, the sexual-relation graph (who had sex with whom), the citation graph (who cites whom in scientific papers). These "natural networks" (natural in the sense that they are generated by many individuals without global coordination) look similar: they are all "scale-free" networks.

A network is said to be "scale-free" if its degree distribution follows a power law: a huge number of nodes have only a few neighbours, while a small number of nodes have a lot of neighbours.
If you take the example of a graph in which nodes are Facebook users and edges are friendship links between them, then this network is scale-free: a few people have a huge number of friends, while most of us have only 50 to 500 friends.
This property of natural networks can be explained by the notion of "preferential attachment": when a node chooses its neighbours, it prefers nodes with higher degree. For example, if you are in a group of people and you don’t know anyone, you will naturally try to make friends with the most popular person in the group.

To simulate preferential attachment, go to the Evolife/Other/SmallWorlds folder and run the program through the local command starter.
This program simulates a population of individuals who have to choose other individuals to link to.

Choose strategy ‘random’ in the ‘Network’ section. Each node sends a (bidirectional) link towards several other nodes that are randomly chosen. Run the program with 400 nodes and 4 links per node.
Observe the distribution of degrees.
(note: the scale is logarithmic (base 2); the distribution starts from LinksPerNode+1 and not from 0).

Explain why the maximum of the degree distribution corresponds to 2 * LinksPerNode
(again, mind the logarithmic scale on the axes).


We now consider a crude implementation of preferential attachment. The algorithm behind it is quite simple: when a node wants to find a neighbour, it randomly picks a set of nodes from the graph. It then studies this set to see which node receives most links from the other nodes of the set. This amounts to choosing as your next friend the most popular individual in your social circle.
This method leads to a global scale-free network in which a small subset of people are "friends" with a very large number of others (this supposes that "friends" do not need to spend time together, as on Twitter and unlike in real life).
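The sampling algorithm just described can be sketched as follows. This is a standalone illustration with assumed names, not the actual SmallWorlds code.

```python
import random
from collections import Counter

# Standalone sketch of sampling-based preferential attachment (not the
# actual SmallWorlds code). 'links' is a list of (node, node) couples.
def choose_neighbour(nodes, links, sample_size):
    """Pick the member of a random sample that receives most links
    from the other members of that sample."""
    sample = set(random.sample(nodes, sample_size))
    received = Counter()
    for a, b in links:
        if a in sample and b in sample:   # only links inside the sample count
            received[a] += 1
            received[b] += 1
    if not received:                      # no internal links: fall back to random
        return random.choice(sorted(sample))
    return received.most_common(1)[0][0]
```

Note the cost: each choice requires scanning the whole edge list against a sample, which is why a large SampleSize makes the simulation slow.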

Choose the ‘PA’ strategy (preferential attachment) and set SampleSize to a large value, typically one quarter of the total number of nodes. Run the program again. You should observe that some nodes get more popular and have more neighbours.
What can you say about the curve formed by the blue dots (degree distribution)?
Why is it so?


How does it compare with the ‘random attachment’ case?


The preceding method is time consuming. Each agent has to spend time studying a representative sample of the population. If you set SampleSize to a lower value such as one tenth of the network’s size, then you may observe that the effect disappears.

Preferential attachment may result from much lighter mechanisms.
Suppose that our nodes represent scientists. Two scientists in the graph are connected if they wrote a paper together. Imagine you are a scientist, and you want to find someone to write a paper with. You could use the above method and select the most popular author among all your fellow scientists, i.e. the one who wrote the most papers with your other friends. But you are a lazy scientist! So you adopt a much easier strategy. You just pick up a random paper in your domain and decide to work with one of the scientists who wrote it. In other words, you just select an edge of the graph and choose one of the nodes connected by this edge.
In the findNeighbour function of the Graph class, implement this method for finding a neighbour
(the edge list self.Links is a list of couples representing connected nodes). Test the method by choosing this ‘picking’ strategy. Copy your method below.
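Outside Evolife, the edge-picking idea can be sketched as follows; the real findNeighbour must of course be written as a method of the Graph class and use self.Links.

```python
import random

# Standalone sketch of the edge-picking strategy (the actual findNeighbour
# belongs to the Graph class and uses self.Links, a list of node couples).
def find_neighbour(links):
    """Pick a random edge, then one of its two endpoints: a node appearing
    in many edges is picked proportionally more often, which is exactly
    preferential attachment at constant cost."""
    node1, node2 = random.choice(links)
    return random.choice((node1, node2))
```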

What do you observe concerning the blue dot curve?
Do you have any idea why it is so?


