Telecom Paris
Dep. Informatique & Réseaux

J-L. Dessalles

May 2023


Cognitive Approach to Natural Language Processing (SD213)



Argumentation & Deliberative reasoning

Foreword

Rational reasoning and the ability to produce articulate arguments lie at the core of human intelligence. The current "guessing" strategy implemented through language models based on neural networks gives impressive results (see for instance Google's LaMDA project or OpenAI's famous ChatGPT). However, impressive as the performance is, the off-line computation characteristic of these methods is neither perfect nor able to anticipate the specificity of all contexts. We are still far from passing Turing's imitation game. Making seemingly relevant moves in generic contexts is not enough. We need machines that compute relevance instead of merely guessing what should be said from accumulated experience.

The purpose of this lab is to show that the basic principles underlying the ability to be relevant might be, quite unexpectedly, rather simple. This pushes the difficulty back to the problem of generating appropriate knowledge.

We will study the C-A-N procedure.
CAN (Conflict-Abduction-Negation) is a minimal model of the cognitive capabilities underlying argumentation and deliberative reasoning. To illustrate the method, we will use a dialogue adapted from a real conversation (see below). Any other dialogue could, in principle, be processed through the same procedure, just by changing the mini-knowledge base fed into the program.
You will have the opportunity to observe:

The main purpose, however, is to show that relevance in discourse and in reasoning can be modelled as a definite form of computation. And, moreover, that this computation is surprisingly simple.

Note: two equivalent implementations are offered here: in Prolog and in Python.












The dialogue

The dialogue used to illustrate this lab work is adapted from a real conversation in which participants discuss the best way of repainting doors.

English version

Original French version


[Context: A is repainting doors. He decided to remove the old paint first, which proves to be hard work]

A1-    I have to repaint my doors. I've burned off the old paint. It worked OK, but not everywhere. It's really tough work! [...] In the corners, all this, the mouldings, it's not feasible!
[...]
B1-    You have to use a wire brush.
A2-    Yes, but that wrecks the wood.
B2-    It wrecks the wood...
[pause 5 seconds]
A3-    It's crazy! It's more trouble than buying a new door.
B3-    Oh, that's why you'd do better just sanding and repainting over.
A4-    Yes, but if we are the fifteenth ones to think of that!
B4-    Oh, yeah...
A5-    There are already three layers of paint.
B5-    If the old remaining paint sticks well, you can fill in the peeled spots with filler compound.
A6-    Yeah, but the surface won't look great. It'll look like an old door.

[Contexte: A repeint ses portes. Il a décidé de décaper l'ancienne peinture, ce qui se révèle pénible.]

A1- Ben moi, j'en bave actuellement parce qu'il faut que je refasse mes portes, la peinture. Alors j'ai décapé à la chaleur. Ca part bien. Mais pas partout. C'est un travail dingue, hein?
B- heu, tu as essayé de. Tu as décapé tes portes?
A1a- Ouais, ça part très bien à la chaleur, mais dans les coins, tout ça, les moulures, c'est infaisable. [plus fort] Les moulures.
B - Quelle chaleur? La lampe à souder?
A- Ouais, avec un truc spécial.
B1- Faut une brosse, dure, une brosse métallique.
A2- Oui, mais j'attaque le bois.
B2- T'attaques le bois.
A3- [pause 5 secondes] Enfin je sais pas. C'est un boulot dingue, hein? C'est plus de boulot que de racheter une porte, hein?
B3- Oh, c'est pour ça qu'il vaut mieux laiss... il vaut mieux simplement poncer, repeindre par dessus
A4- Ben oui, mais si on est les quinzièmes à se dire ça
B4- Ah oui.
A5- Y a déjà trois couches de peinture, hein, dessus.
B5- Remarque, si elle tient bien, la peinture, là où elle est écaillée, on peut enduire. De l'enduit à l'eau, ou
A6- Oui, mais l'état de surface est pas joli, quoi, ça fait laque, tu sais, ça fait vieille porte.


For the purpose of the illustration, we just need to keep the predicative skeleton of the conversation.

        
A1- repaint, burn-off, moldings, tough work
B1- wire brush
A2- wood wrecked
A3- tough work
B3- sanding
A5- several layers
B5- filler compound
A6- not nice surface

This is an argumentative discussion. Discussions differ radically from narratives: they consist of characteristic alternations between problems and solutions.

     Problem:     doors are not nice
     Solution:     repaint
     Problem:     ancient layers
     Solution:     burn-off
     Problem:     tough work due to mouldings
     Solution:     wire brush
     Problem:     wood wrecked
     Solution:     no wire brush
     Problem:     tough work due to mouldings
(Here, one has to assume that the intensity of the ‘tough work’ problem increases.)
     Solution:     no burn-off
     Problem:     doors are not nice
     Solution:     sanding
     Problem:     there are several layers
     Solution:     filler compound
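This alternation is easy to represent as data. The sketch below (plain Python, with invented names; it is not taken from the lab's files) stores the skeleton of the discussion as an ordered list of (problem, solution) pairs:

```python
# Hypothetical sketch: the problem/solution skeleton of the discussion,
# kept as an ordered list of (problem, solution) pairs.
skeleton = [
    ("doors are not nice", "repaint"),
    ("ancient layers", "burn-off"),
    ("tough work due to mouldings", "wire brush"),
    ("wood wrecked", "no wire brush"),
    ("tough work due to mouldings", "no burn-off"),
    ("doors are not nice", "sanding"),
    ("there are several layers", "filler compound"),
]

# Each solution raises, or fails to prevent, the next problem.
for problem, solution in skeleton:
    print(f"Problem: {problem:30} -> Solution: {solution}")
```

Note how the same problem (‘doors are not nice’, ‘tough work’) can reappear once the solution that answered it has been negated.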

We hypothesize that such an argumentative discussion is the output of a fixed algorithm, modelled as the C-A-N procedure (conflict-abduction-negation), acting on a local domain knowledge. This domain knowledge should in principle be obtained through independent means. The dialogue depends on it. However, the domain knowledge contains no indication about the way dialogue moves should be generated. As a result, reconstructing dialogues remains a challenge.












Conflicting necessities

Basic attitudes such as true and false have long been recognized to be insufficient to model argumentation. Here we consider that propositional attitudes are gradual and may include both beliefs and desires.

As far as the computation of logical relevance is concerned,
the distinction between beliefs and desires can be ignored.
Beliefs and desires are subsumed by the notion of necessity.
To describe the argumentation procedure, we use a single notion, called necessity, that captures the intensity with which a given situation is believed or wished. Necessities are negative in case of disbelief or avoidance. At each step t of the reasoning procedure, a function νt(T) is supposed to provide the necessity of any predicate T on demand (we will omit subscripts t to improve readability). The necessity of T may be unknown at time t. We suppose that necessities are consistent with negation: ν(¬T) = -ν(T). The main purpose of considering necessities is that they propagate through logical and causal links, as we will see.

We say that a predicate is realized if it is regarded as being true in the current state of the world.

A predicate T is said to be conflicting if its necessity is known and disagrees with its realization status: either T is wished (ν(T) > 0) but not realized, or T is realized while unwanted (ν(T) < 0).

We say that T creates a cognitive conflict (T, ν(T)) of intensity |ν(T)|.

Note that cognitive conflicts do not oppose individuals, but beliefs and/or desires. The point of the argumentative procedure is to modify beliefs or to change the state of the world until the current cognitive conflict is solved. In many situations, solving one cognitive conflict may create a new one. Argumentation emerges from the repeated application of the argumentative procedure.
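As a minimal illustration of these definitions (a sketch only; the class name `Necessity` and its methods are invented, not taken from the lab's Python files), one can keep necessities in a store that enforces ν(¬T) = -ν(T) and tests whether a predicate is conflicting:

```python
class Necessity:
    """Toy store of necessities and realized facts (illustrative sketch)."""

    def __init__(self):
        self.nu = {}           # predicate -> necessity (signed intensity)
        self.realized = set()  # predicates currently true in the world

    @staticmethod
    def neg(t):
        # '-p' stands for the negation of p
        return t[1:] if t.startswith("-") else "-" + t

    def set_nu(self, t, n):
        # keep necessities consistent with negation: nu(-T) = -nu(T)
        self.nu[t] = n
        self.nu[self.neg(t)] = -n

    def conflict(self, t):
        """Return (T, nu(T)) if T is conflicting, else None: T is conflicting
        when wished (nu > 0) but not realized, or realized but unwanted (nu < 0)."""
        n = self.nu.get(t, 0)
        if n > 0 and t not in self.realized:
            return (t, n)
        if n < 0 and t in self.realized:
            return (t, n)
        return None


s = Necessity()
s.set_nu("nice_doors", 20)       # having nice doors is wished with intensity 20
print(s.conflict("nice_doors"))  # wished but not realized -> ('nice_doors', 20)
print(s.nu["-nice_doors"])       # -20, by nu(-T) = -nu(T)
```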












The C-A-N procedure


The C-A-N procedure
Conflict:     If there is no current conflict, look for a new conflict (T, N) where T is a recently visited state of affairs.
Solution:     If N > 0 and T is possible (i.e. ¬T is not realized), decide that T is the case (if T is an action, do it or simulate it).
Abduction:     Look for a cause C of T or a reason C for T. If C is mutable with intensity N, make ν(C) = N and restart from the new conflict (C, N).
Negation:     Restart the procedure with the conflict (¬T, -N).
Give up:     Make ν(T) = -N.
Revision:     Reconsider the value of ν(T).

The first step consists in detecting a logical conflict. This captures the fact that no argumentation is supposed to occur in the absence of an explicit or underlying logical conflict. For instance, the procedure may detect that the doors are not nice, while the contrary is wished.

The solution phase allows actions to be performed, and thereby to lose their necessity. This phase is where decisions are taken, after weighing pros and cons. For instance, once ‘repainting’ is identified as a cause for having nice doors, the new conflict (‘repainting’ wished but not realized) can be solved by merely performing the action. For predicates that are not actions, a solution attempt consists in considering them as realized.

Abduction consists in finding out a cause for a state of affairs. For instance, ‘repainting’ is a possible cause for having nice doors. The abduction procedure itself can be considered as external to the model. Necessity propagation occurs in this abduction phase.

The effect of negation is to present the mirror image of the logical conflict: if (T, N) represents a conflict, so does (¬T, -N). Note that thanks to negation, positive and negative necessities play symmetrical roles. For instance, if ‘not having nice doors’ is a conflict of intensity -20, then ‘having nice doors’ is a conflict of intensity 20 (as it is wished and not realized).

The give-up phase is crucial: by setting the necessity of T to -N, the procedure memorizes the fact that T resisted a mutation of intensity N.

Lastly, the revision phase consists in reconsidering the necessity of T. This operation represents the fact that when considering a situation anew, people may change the strength of their belief or their desire (the situation may appear not so sure, less desirable or less unpleasant after all, as when ‘tough work’ is reappraised in utterance A3).
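Under strong simplifying assumptions, the steps described above can be condensed into a small recursive loop. The sketch below is not the lab's implementation: `ToyWorld`, `can` and their behaviour are invented for illustration, and the real rel_CAN.pl / CAN.py also handle revision, the give-up dialogue, and much more.

```python
def neg(t):
    # '-p' stands for the negation of p
    return t[1:] if t.startswith("-") else "-" + t


class ToyWorld:
    """Minimal world for the sketch: candidate causes and doable actions."""

    def __init__(self, causes, actions):
        self.causes = causes         # effect -> list of candidate causes
        self.actions = set(actions)  # predicates that are directly doable
        self.nu = {}                 # remembered necessities
        self.done = []               # actions performed so far

    def mutable(self, c, n):
        # a cause is mutable here if it is an action that has not
        # already resisted a mutation of intensity |n|
        return c in self.actions and self.nu.get(c, 0) > -abs(n)


def can(T, N, w):
    """One simplified pass of Conflict-Abduction-Negation on conflict (T, N)."""
    if N > 0 and T in w.actions:   # Solution: perform the wished action
        w.done.append(T)
        return ("decision", T)
    for C in w.causes.get(T, []):  # Abduction: propagate necessity to a cause
        if w.mutable(C, N):
            w.nu[C] = N
            return can(C, N, w)
    if N < 0:                      # Negation: consider the mirror conflict
        return can(neg(T), -N, w)
    w.nu[T] = -N                   # Give up: T resisted a mutation of intensity N
    return ("give_up", T)


# Doors are not nice (intensity -20); 'repaint' is a known cause of nice doors.
w = ToyWorld(causes={"nice_doors": ["repaint"]}, actions={"repaint"})
result = can("-nice_doors", -20, w)
print(result)  # -> ('decision', 'repaint')
```

The run mirrors the first move of the trace shown below: the conflict of intensity -20 with nice_doors is negated into (nice_doors, 20), abduction finds ‘repaint’, and the decision is taken.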

The program

Python version
A version of the program used in this lab has been written in Python. It is available there (together with the Prolog files).
A short description of the Python program is (or will be) available there.
The Python implementation consists of five files:
  • CAN.py:    main program - contains the CAN procedure
  • World.py:    world processing
  • Predicate.py:    Predicate classes
  • Rules.py:    Definition of causal and incompatibility rules
  • utils.py:    utility functions
The program may read the knowledge Prolog file (here rel_doors.pl).

Structure

The program you will be using consists of several short modules.

  • rel_knowl.pl:    loads the domain knowledge
  • rel_doors.pl:    domain knowledge
  • rel_util_full.pl:    displays windows showing progress in reasoning
  • rel_util.pl:    replaces rel_util_full.pl when graphic display is not available
  • rel_world.pl:    world processing
  • rel_CAN.pl:    main program; contains the C-A-N procedure

    (all files, both in Prolog and in Python, are stored together there).

Run the program by executing rel_CAN.

The level of detail provided by the program depends on the trace level (between 1 and 6, specified at the beginning of rel_CAN.pl or in one of the windows). If you don't use the window display (i.e. you are using rel_util.pl), change the trace level at the beginning of rel_CAN.pl; otherwise, change it from the window. From trace level 3 up, the program pauses periodically: to see how reasoning goes forward, press [Enter] repeatedly. To interrupt before the end, press ‘q’.

With trace level 2, you will eventually get something like this (after a few modifications).


    ?- go.
     Conflict of intensity -20 with nice_doors
     ------> Decision : repaint
     Conflict of intensity 20 with nice_doors
     ------> Decision : burn_off
     Conflict of intensity -10 with tough_work
     Conflict of intensity -10 with tough_work
     ------> Decision : wire_brush
     Conflict of intensity 20 with nice_doors
     ------> Decision : -wire_brush
     Conflict of intensity -10 with tough_work
     Conflict of intensity -10 with tough_work
     We are about to live with tough_work ( -10 )!
     If you want to change preference for tough_work ( -10 ), enter number followed by '.' (or else: 'n.')
    |: -22.
     Conflict of intensity -22 with tough_work
     ------> Decision : -burn_off
     Conflict of intensity 20 with nice_doors
     ------> Decision : -wood_wrecked
     Conflict of intensity 20 with nice_doors
     ------> Decision : sanding
     Conflict of intensity 20 with nice_doors
     Conflict of intensity 20 with nice_doors
     ------> Decision : filler_compound

For now, the output is much shorter:


    ?- go.
     Conflict of intensity -20 with nice_doors
     ------> Decision : repaint
     Conflict of intensity 20 with nice_doors
     ------> Decision : burn_off
     Conflict of intensity 20 with nice_doors
     ------> Decision : -wood_wrecked
    true.

We can change trace level to level 3 by typing: tl(3). We have to press some key periodically to proceed (press ‘q’ to interrupt).


    2 ?- tl(3).
    true.
    
    3 ?- go.
     (Re)start...
     Conflict of intensity -20 with nice_doors
     Negating -nice_doors , considering nice_doors
    |: repaint is revisable because it is an action with no prerequisite
     ------> Decision : repaint
    |: (Re)start...
     Conflict of intensity 20 with nice_doors
     burn_off is revisable because it is an action with no prerequisite
     ------> Decision : burn_off
    |: (Re)start...
     Conflict of intensity 20 with nice_doors
     -wood_wrecked is revisable because its status is unknown
     ------> Decision : -wood_wrecked
    |: (Re)start...
    true.

Trace level 5 reveals the functioning of the C-A-N procedure in minute detail. Observe the level-5 trace of the beginning of the dialogue (you may interrupt by pressing ‘q’).

You may identify the recurring occurrence of the three main phases: conflict, abduction, negation.












Knowledge representation

The program makes use of a widely adopted technique that is, let's not forget it, quite artificial. This questionable technique consists in using the same knowledge both to keep the ‘world’ updated and to discuss about it. In less artificial situations, e.g. in a smart home, the ‘world’ would evolve by itself! The knowledge used to reason and to discuss comes from the interface with perception and perceptive memory. In the situation of the above dialogue, for instance, the fact that the wood gets wrecked when a wire brush is used would be observed and noticed through perceptive means.

In the solution adopted in the ‘world’ module rel_world.pl, knowledge is represented for the main part as a set of causal rules. Let’s consider for instance this rule loaded from rel_doors.pl:


    tough_work <=== burn_off + mouldings + -wire_brush.

This rule says that tough_work causally results from burn_off when there are mouldings and no wire_brush is used. Most of the domain knowledge consists of rules of this kind.

Predicate w-propagate (or update in Python) in rel_world implements a form of forward chaining that generates new knowledge as soon as a new fact becomes true. As it stands, the program suggests performing two actions: repaint and burn-off.
Then it makes a strange "decision": that the wood is not wrecked. Why? The reason is that it has no information about whether the wood is wrecked or not, so wood_wrecked is considered mutable.
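The kind of forward chaining performed by w-propagate / update can be sketched as follows. This is only an illustration under assumed conventions: the function `propagate` and the rule encoding are invented, and the first rule transcribes the causal rule shown above.

```python
# Causal rules as (effect, [premises]); '-p' denotes the negation of p.
# The first rule transcribes: tough_work <=== burn_off + mouldings + -wire_brush.
rules = [
    ("tough_work", ["burn_off", "mouldings", "-wire_brush"]),
]


def propagate(facts, rules):
    """Forward chaining (sketch): assert an effect as soon as all its premises
    hold. A negative premise '-p' holds only if '-p' is itself a known fact
    (simpler than negation as failure, to keep the sketch short)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for effect, premises in rules:
            if effect not in facts and all(p in facts for p in premises):
                facts.add(effect)
                changed = True
    return facts


state = propagate({"burn_off", "mouldings", "-wire_brush"}, rules)
print("tough_work" in state)  # -> True
```

With all three premises established, the effect tough_work is asserted; remove any one of them and it no longer follows.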

Mutability
Explain why abduction comes to considering wood_wrecked and which rule (in rel_doors.pl) is responsible.

    


     The purpose of a premise like -wood_wrecked is to capture with a static rule what would be an unanticipated exception in real life. There could be a myriad of possible such exceptions. We do not want our rule-based abductive system to consider them all in sequence.

Default Knowledge
Use the default predicate in rel_doors.pl to make this exception invisible to abduction. Default facts are considered true as long as they are not conflicting with other facts. Copy the new rule below.
Verify that the "decision" about wood_wrecked has disappeared.

    












Modifying knowledge

The program is not bothered by the tough_work problem because it does not know about mouldings. You may add this piece of knowledge on the fly (in Python, you have to add the fact as an initial_situation in rel_doors.pl).


    ?- state(mouldings).
     fact mouldings added to the world
    true.
    
    ?- go.
     Conflict of intensity -20 with nice_doors
     ------> Decision : repaint
     Conflict of intensity 20 with nice_doors
     ------> Decision : burn_off
     Conflict of intensity -10 with tough_work
     Conflict of intensity -10 with tough_work
     ------> Decision : wire_brush
    true.

OK, we are one step further. What about the softness of the wood? Add it by typing state(soft_wood). and then go. again (in Python, add soft_wood as an initial_situation). It does not change anything.

New Rule
Add a domain rule to rel_doors.pl that leads to wood_wrecked when we use a wire brush and the wood is soft.
Copy the line below.
Verify that the program is able to abandon the idea of using a wire brush!

    

Now, the program ends with the sad observation that it is left with tough_work at intensity -10. It offers you the possibility of revising this value. This is what A does in the real dialogue by putting emphasis ("It's crazy!").

Revision
Enter a value that is more problematic than the intensity of nice_doors, e.g. -22. What happens? Can you iterate the process by choosing ever-increasing dissatisfaction?

    


Note that knowledge entered through state(_) persists between runs, but is lost if you quit SWI-Prolog. You may replace it by adding initial_situation(_) declarations in rel_doors.pl.

Dialogue Ending
Uncomment the two commented causal rules in rel_doors.pl. Reload the program. As in the real dialogue, the program suggests sanding.
Now state that there are several layers: state(several_layers). and then go. What happens?

    

Determinism

Prolog programs are ideally non-deterministic. This means that the order of clauses and the order of terms within clauses should be irrelevant.
Determinism
Move the clause mentioning filler_compound to the first position among causal clauses. What happens? Does it change the dialogue significantly?

    

We could change the knowledge base to avoid this problem. One way to preserve the priority of burn_off over filler_compound would be to distinguish several surface qualities (e.g. ‘nice’ and ‘ok’) with different preferences.

New Door
In the real dialogue, A is considering another action, which is to buy a new door (A3). Explain why this action is not considered any further. Does this fit with the CAN procedure?

    

Monotony

Monotony in a knowledge base means that the set of deduced facts can only increase when the number of known facts increases.
Monotony
Explain in what way the program is non-monotonic.

    

Suggestion

Develop your own dialogue by implementing a knowledge base. For instance:

