Human-agent negotiations


1. Introduction:

"The art of negotiation is perhaps what most deeply distinguishes man from the animals, and it is this art and this will to negotiate that has brought man forward, elevated him beyond the animals."

- Harry Martinson (1904-1978)

"An agent is an encapsulated computer system that is situated in some environment and is capable of flexible, autonomous actions in its environment in order to meet its design objectives." (Wooldridge, 1997)

Negotiation is the process by which an agent and its counterpart arrive at proposals, offers and concessions that both sides find mutually agreeable. Research on autonomous negotiation has traditionally centred on three major topics (Jennings et al, 2002):

1.1 Negotiation protocols: the rules that govern the interaction, including the participants, the negotiation states, the events that cause those states to change, and the actions the participants may validly take in each state.

1.2 Negotiation objects: the issues over which agreement must be reached, such as price, quality, delivery time, and terms and conditions.

1.3 Agents' decision-making models: the apparatus each agent uses to make decisions over the negotiation objects in accordance with the negotiation protocol.

The relative impact and importance of these three topics vary from model to model, depending on the environment in which the negotiation takes place.
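To make the second and third topics concrete, here is a minimal sketch in which a negotiation object is a multi-issue offer and an agent's decision-making model scores offers with a linear additive utility function. The issue names, weights and valuations below are purely illustrative assumptions, not taken from any of the models cited in this essay.

```python
def utility(offer, weights, valuations):
    """Score a multi-issue offer as a weighted sum of per-issue valuations."""
    return sum(weights[issue] * valuations[issue][value]
               for issue, value in offer.items())

# Negotiation object: the issues under discussion and one concrete offer.
offer = {"price": "low", "quality": "high", "delivery": "7 days"}

# One agent's private decision model: issue weights and per-value scores.
weights = {"price": 0.5, "quality": 0.3, "delivery": 0.2}
valuations = {
    "price":    {"low": 1.0, "medium": 0.5, "high": 0.0},
    "quality":  {"low": 0.0, "medium": 0.5, "high": 1.0},
    "delivery": {"7 days": 1.0, "14 days": 0.4},
}

print(utility(offer, weights, valuations))  # approximately 1.0 (0.5 + 0.3 + 0.2)
```

A decision-making component would compare such scores across candidate offers, within whatever moves the negotiation protocol allows, to choose its next action.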

2. Automated Agent Negotiator:

Research on automated agent negotiators has been under way for many years, but developing an agent that negotiates with a human counterpart is far harder than agent-agent negotiation. Kraus and Lehmann developed an automated agent that played the Diplomacy game with humans nearly twenty years ago, yet the problem of designing adept automated negotiators remains unresolved even now. During the research phase, assumptions are often made that do not hold in actual negotiations, such as expecting the human to act rationally and to maximize expected utility. Results from the social sciences show that the equilibrium strategy is not the strategy humans actually follow when negotiating (Grosz et al). Nevertheless, some assumptions are still made, chiefly that although the human party will not always maximize utility, when given a choice the option with the higher utility will tend to be chosen (Raz Lin et al, 2009).

It has been shown that, irrespective of whether the opponents are well informed, the mere presence of a computer agent can change the overall result of a negotiation. Grossklags and Schmidt (2006) showed that when human subjects knew that autonomous agents were present in a double-auction market environment, market price efficiency increased. Sanfey et al (2003) matched humans with other humans and with computer agents in an Ultimatum game and showed that humans rejected unfair offers made by other humans at a much higher rate than the same offers made by computer agents.

3. Agents Negotiating with People:

Researchers have taken these factors into consideration while developing agents, and have suggested new concepts such as the trembling-hand equilibrium (Rasmusen, 2001). Chavez and Maes (1996) developed Kasbah, a significant agent-human negotiation model in which agents could be controlled by human players. It was meant to serve as an intermediary between buyers and sellers. Although it was not sophisticated, it served as an introduction to multi-agent negotiation.

Tennenholtz (1996) suggested using qualitative rather than quantitative decision theory. In that setting it is not necessary to assume that the user is a utility maximizer or an equilibrium-strategy follower.

Kenny et al (2007) have been working on virtual humans that train people in interpersonal skills. Achieving this requires techniques such as natural language processing, speech and knowledge processing, and cognitive and emotional modelling, all in addition to constructing and implementing the negotiation logic itself, before the virtual human can serve as a good trainer. Schools and commercial companies have shown special interest in automated negotiation technologies. Special courses and seminars are held which promise that, upon completion, participants will "know many strategies on which to base the negotiation", "discover the negotiation secrets and techniques", "learn common rivals' tactics and how to neutralize them" and "be able to apply an efficient negotiation strategy" (e.g., the negotiation courses cited in the references). However, the agents used in these courses are restricted to one particular domain and cannot be generalized, and some of them are restricted to single-attribute negotiation only.

Laboratory results and field reviews in well-regarded publications (such as Raiffa, 1982 and Fisher et al, 1991) provide guidelines for the design of such computer agents, but implementing these guidelines in a virtual agent that negotiates proficiently with humans remains a great challenge.

4. Challenges Faced:

The most important challenge in developing automated agents is that they must be able to operate against opponents with bounded rationality and partial information. The fact that humans are influenced by behavioural factors and social preferences also causes problems, since it makes individuals' choices difficult to predict. Tackling these challenges is very complex, and an automated agent must be built with two interdependent mechanisms to do so. The first is a decision-making component that models human factors; it is responsible for generating offers and for deciding whether to accept or reject the offers made by the opponent. The difficulty in this mechanism lies in reasoning about human behaviour, not in the computational complexity of making decisions. The second mechanism learns the preferences and strategies of the opponent from its actions (Raz Lin et al, 2009).
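As a sketch of the first mechanism, the accept-or-reject decision, consider an agent that accepts an offer only if its utility meets a reservation level that relaxes as the deadline approaches. The linear concession schedule and its parameters are illustrative assumptions, not the decision model of any specific agent discussed here.

```python
def should_accept(offer_utility, t, deadline, u_max=1.0, u_min=0.3):
    """Accept if the offer meets a reservation level that relaxes over time."""
    progress = min(t / deadline, 1.0)
    reservation = u_max - (u_max - u_min) * progress  # linear concession
    return offer_utility >= reservation

print(should_accept(0.5, t=1, deadline=10))  # early round, demands ~0.93: False
print(should_accept(0.5, t=9, deadline=10))  # late round, demands ~0.37: True
```

A real agent would replace the fixed schedule with a model of the human opponent, which is exactly where the reasoning difficulty described above arises.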

Generalizing the behaviour of an automated agent poses another major problem during development. It must be decided whether the agent will perform as a general-purpose negotiator capable of operating in any domain, like a human negotiator, or as a domain-specific negotiator constrained to one particular domain (Kraus et al, 1995). Developing an agent for a specific domain has its own advantages: the specificity allows the designer to provide better strategies and to run extensive tests to improve the agent's proficiency, and thus to achieve better agreements than a generalized agent would. Domain-independent, generalized agents, on the other hand, are harder to test against all possible conditions.

Trust also plays an important role in a successful negotiation. Trust is generally developed through "cheap talk", i.e. statements that cannot be verified by the other party. The AutONA agent developed by Byde et al (2003) was capable of such cheap talk, and its trials showed that the negotiators were unable to detect which participant was the computer agent.

5. Development of Agents:

5.1 The Diplomat agent:

Kraus and Lehmann (1995) developed the Diplomat agent over twenty years ago with the goal of winning the Diplomacy game. The game involves negotiation over multiple issues with incomplete information about the other agents, and misleading information may be shared among players. The Diplomat agent was capable of changing its personality for each game. It had limited learning capabilities for understanding its opponents' personalities, and it incorporated randomization into the decisions that determined whether agreements would be fulfilled. Kraus's results showed that the Diplomat agent played the game very well and that the human players were unable to determine which player was the computer. Its major drawback was that it was domain specific.

5.2 The AutONA agent:

AutONA was developed by Byde et al (2003); its domain is negotiation between sellers and buyers over the quantity and price of a product. The protocol follows the "alternating offers" model, and each offer is private, made to only one player. Although each round appears to be one-shot, AutONA receives data from previous rounds. It was also capable of "cheap talk". Results showed that AutONA was not aggressive enough and that most of its negotiations were left incomplete. A fine-tuned version was better, but still not good enough to compete with human negotiators. It was finally concluded that AutONA requires its configuration to be changed for different environments.
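The alternating-offers protocol can be sketched in a few lines: the parties take turns proposing, and each round either ends in acceptance or passes the initiative to the other side. The stub buyer and seller strategies below (linear concession toward a private limit) are illustrative assumptions, not AutONA's actual strategy.

```python
def alternating_offers(agents, rounds):
    """Run an alternating-offers negotiation; return the agreed price or None."""
    for r in range(rounds):
        proposer, responder = agents[r % 2], agents[(r + 1) % 2]
        price = proposer.propose(r)
        if responder.accepts(price):
            return price
    return None  # deadline reached with no agreement

class Buyer:
    def __init__(self, start, step, limit):
        self.start, self.step, self.limit = start, step, limit
    def propose(self, r):
        return min(self.start + self.step * r, self.limit)  # concede upward
    def accepts(self, price):
        return price <= self.limit

class Seller:
    def __init__(self, start, step, limit):
        self.start, self.step, self.limit = start, step, limit
    def propose(self, r):
        return max(self.start - self.step * r, self.limit)  # concede downward
    def accepts(self, price):
        return price >= self.limit

deal = alternating_offers([Buyer(60, 5, 90), Seller(120, 5, 80)], rounds=10)
print(deal)  # the buyer's round-4 proposal of 80 is the first acceptable one
```

Because the private limits overlap (the buyer will pay up to 90, the seller will accept down to 80), the linear concessions eventually meet; with disjoint limits the same run would return None.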

5.3 Cliff Edge agent:

Katz and Kraus developed the Cliff Edge agent, which combines virtual learning with reinforcement learning for one-shot interactions. Any offer more generous than a previously accepted offer is also treated as successful, and any offer less generous than a previously rejected offer is also treated as unsuccessful. The agent was also allowed to deviate slightly from its characterization. An improved, gender-sensitive version (Katz, 2006) yielded better payoffs than generic agents.
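One reading of this virtual-learning idea can be sketched as follows: when an offer to the opponent is accepted, the agent also credits every more generous offer as a virtual success, and when an offer is rejected it debits every stingier one. The pie size, payoff model and running-average update below are illustrative assumptions, not the exact rule of the Cliff Edge agent.

```python
def update(values, counts, offer, accepted, pie=10):
    """Virtually credit/debit every offer whose outcome this round reveals."""
    for o in range(pie + 1):
        if accepted and o >= offer:
            outcome = pie - o   # a more generous offer would also be accepted
        elif not accepted and o <= offer:
            outcome = 0         # a stingier offer would also be rejected
        else:
            continue            # this round says nothing about offer o
        counts[o] += 1
        values[o] += (outcome - values[o]) / counts[o]  # running-average payoff

values = {o: 0.0 for o in range(11)}
counts = {o: 0 for o in range(11)}

update(values, counts, offer=4, accepted=True)   # offering 4 of 10 was accepted
update(values, counts, offer=2, accepted=False)  # offering 2 of 10 was rejected

print(max(values, key=values.get))  # best known offer so far: 4 (payoff 6)
```

Two real rounds thus update the value estimates of many offer levels at once, which is what makes the approach viable in one-shot interactions.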

5.4 Colored-Trails agent:

Ficici and Pfeffer developed the Colored-Trails agent which, like AutONA (Byde, 2003), collected data from previous interactions and used it to build different models of human reasoning about the game. When the agent was matched with humans in the game, the results showed that it performed similarly to humans and that it contributed towards the social good by providing ample utility to the other players. Gal et al (2004) introduced machine-learning techniques into this agent, and the resulting model was successful in learning the social preferences of its opponents.

5.5 The Guessing Heuristic agent:

Jonker et al (2007) developed an agent that uses "guessing heuristics": the agent tries to predict the opponent's preferences from the negotiation history, under the assumption that the opponent's utility has a linear structure. The agent acted as a better buyer than an actual human, but in an experiment in which the humans acted as the buyers its performance deteriorated. It must be taken into account, however, that the humans forced better concessions from the agents during that experiment.
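A greatly simplified version of such a guessing heuristic might estimate each issue's weight from how reluctant the opponent has been to move on it across past offers, on the assumption that issues the opponent never concedes are the ones it weights most heavily. The stability-to-weight mapping and the data below are illustrative assumptions, not the actual heuristic of Jonker et al.

```python
from collections import Counter

def guess_weights(offer_history):
    """Issues the opponent keeps fixed across offers get higher guessed weight."""
    issues = offer_history[0].keys()
    stability = {}
    for issue in issues:
        values = [offer[issue] for offer in offer_history]
        # fraction of offers sticking to the opponent's most frequent value
        stability[issue] = Counter(values).most_common(1)[0][1] / len(values)
    total = sum(stability.values())
    return {issue: s / total for issue, s in stability.items()}

# Three observed opponent offers: price never moves, delivery keeps changing.
history = [
    {"price": 100, "delivery": "14 days"},
    {"price": 100, "delivery": "7 days"},
    {"price": 100, "delivery": "10 days"},
]
print(guess_weights(history))  # price gets the highest guessed weight (about 0.75)
```

An agent equipped with such a guess can concede on the issues it believes the opponent cares about most while holding firm on the others.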

5.6 The QOAgent:

R. Lin et al (2008) developed the QOAgent, a domain-independent agent that can negotiate with people in a bilateral environment with incomplete information. The negotiator is designed to be cost- and time-effective, settling on the status quo if the negotiation has not concluded within a given time, and it can break off a negotiation that it judges is not moving along a favourable path. Incomplete information is tackled with a Bayesian algorithm which, after each action, tries to determine which offer best suits the opponent. Lin et al showed that their agent reached more agreements across different domains than its human counterparts. Its major drawback, however, is that it cannot generate an accurate model of its opponent.
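The Bayesian idea can be sketched as a belief update over a handful of candidate opponent types: after each opponent action, each type's probability is re-weighted by how likely that type was to take the observed action. The types and likelihood numbers below are hypothetical, not taken from the QOAgent itself.

```python
def bayes_update(beliefs, likelihoods):
    """Posterior over opponent types given P(observed action | type)."""
    posterior = {t: beliefs[t] * likelihoods[t] for t in beliefs}
    total = sum(posterior.values())
    return {t: p / total for t, p in posterior.items()}

# Uniform prior over three hypothetical opponent types.
beliefs = {"tough": 1/3, "neutral": 1/3, "soft": 1/3}

# The opponent just made a large concession; a "soft" type is most likely
# to do that and a "tough" type least likely (illustrative likelihoods).
likelihoods = {"tough": 0.1, "neutral": 0.3, "soft": 0.6}

beliefs = bayes_update(beliefs, likelihoods)
print(max(beliefs, key=beliefs.get))  # "soft" now has the highest posterior
```

Repeating the update after every opponent action sharpens the posterior, which the agent can then use to tailor its offers to the type it currently believes it is facing.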

5.7 The Virtual Human agent:

Kenny et al (2007) worked on virtual humans capable of training people in interpersonal skills such as leadership and cultural awareness. Their agent was based on the Soar cognitive architecture, a symbolic reasoning system used for decision making. The agent follows a set of strategies in different situations (Traum et al, 2008), and the strategy chosen in each situation depends on several factors: the estimated and best utility of an outcome, the agent's control over the negotiation, the agent's trust, and so on. The agent was tested in different scenarios, some of which were used by soldiers to carry out bilateral engagements with virtual humans. The limitation of the virtual human agent is that it cannot understand subjective offers made by the human negotiators; it is also necessary to consider several strategies in order to develop a rich environment.

6. Significance of Automated Negotiators:

The role of automated agents that can negotiate effectively with humans cannot be overstated. Such agents can become highly flexible by following classic methods of opponent modelling and decision making, and their usefulness rises significantly if they are not domain specific. Automated negotiators are not intended to replace human negotiators but to aid them as effective decision-making tools or to train them in the art of negotiating. They can therefore be used not only in e-commerce and electronic negotiations but also in online training, to turn trainees into better human negotiators.

7. Future of Research on Automated Negotiation:

Research on automated negotiators poses many challenges, which makes it an exciting field to work in. Among them are enriching the negotiation language and producing generic agents. Most researchers restrict their agents to basic exchanges of offers and counter-offers, but agents could be given a much richer and more realistic language and be designed to perform other actions such as making comments, promises and queries; enabling these behaviours would allow better interaction with human beings. As mentioned, another challenge is developing a general-purpose negotiator. One approach is to compare a generic negotiator with a domain-specific one and thereby improve the efficiency of the general-purpose automated negotiator; Hindriks et al (2008) and Oshrat (2009) have already begun preliminary work on this challenge. A further important challenge is argumentation. This is very complex because argumentation is based on logic, while the current models used by the agents are based on complex opponent models, so the two have to be integrated within the agent. The field of automated negotiation is developing at an encouraging speed, but several exciting challenges remain, and research in this field could lead to great advances.

8. References:

1. Byde, A., Yearworth, M., Chen, K.Y. and Bartolini, C. (2003) AutONA: A system for automated multiple 1-1 negotiation. In The 2003 IEEE International Conference on Electronic Commerce, 59-67.

2. Chavez, A. and Maes, P. (1996) Kasbah: An agent marketplace for buying and selling goods. In The First International Conference on the Practical Applications of Intelligent Agents and Multi-agent Technology, 75-90.

3. Faratin, P., Sierra, C. and Jennings, N.R. (2002) Using similarity criteria to make tradeoffs in automated negotiation. Artificial Intelligence, 142, 205-237.

4. Ficici, S. and Pfeffer, A. (2008) Modelling how humans reason about others with partial information. In The 7th International Conference on Autonomous Agents and Multi-agent Systems, 315-322.

5. Jonker, C.M., Robu, V. and Treur, J. (2007) An agent architecture for multi-attribute negotiation using incomplete preference information. Autonomous Agents and Multi-agent Systems, 15(2), 221-252.

6. Katz, R. and Kraus, S. (2006) Efficient agents for cliff edge environments with a large set of decision options. In The 5th International Conference on Autonomous Agents and Multi-agent Systems, 697-704.

7. Kenny, P., Hartholt, A., Gratch, J., Swartout, W., Traum, D., Marsella, S. and Piepol, D. (2007) Building interactive virtual humans for training environments. In The Interservice/Industry Training, Simulation and Education Conference.

8. Kraus, S. and Lehmann, D. (1995) Designing and building a negotiating automated agent. Computational Intelligence, 11(1).

9. Lin, R., Kraus, S., Wilkenfeld, J. and Barry, J. (2008) Negotiating with bounded rational agents in environments with incomplete information using an automated agent. Artificial Intelligence, 823-851.

10. Lin, R. and Kraus, S. (2009) Can automated agents proficiently negotiate with humans? Communications of the ACM, 53, 78-88.

11. Bargaining negotiations course, <> (2008).

12. Online negotiation course, <> (2008).