The Unintended Consequences Fairness Brings to Automated Negotiation


Abstract

In this paper, we research the unintended consequences, also referred to as edge cases, of integrating fairness into the automated negotiation process. By identifying these unintended consequences, we can handle or avoid them, so that our fairness metric does not make the negotiation process less fair or cause undesired behaviour. We search for edge cases in a small-scale experiment by implementing the difference principle from John Rawls' notion of 'Justice as Fairness' in the negotiation process. The negotiation has two agents, and the behaviour of one of the agents is changed to adhere to the difference principle. By running automated negotiations with these agents on different domains, we check the behaviour and outcomes of the negotiations for any abnormalities that could be considered unintended consequences. From this, we conclude that the agent implementing fairness has a smaller available bidding space, which leads to a more stagnant negotiation process. Furthermore, the outcomes show that an optimal result is not always found. However, no unintended consequences directly related to fairness were found. Since finding edge cases is an exhaustive process, comparable to finding bugs in a computer program, this research does not prove that the Rawlsian notion of fairness, or any kind of fairness for that matter, has no other unintended consequences. The research in this paper can serve as inspiration for further research into the edge cases of fairness in automated negotiation, and gives a general idea of what unintended consequences Rawlsian fairness brings.
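
The abstract does not specify how the difference principle is encoded in the agent's bidding behaviour; the sketch below is one possible, hypothetical reading in Python, in which the fair agent restricts its bidding space to the bids that maximise the utility of the worst-off party (a maximin rule). The names difference_principle_filter, utility_self and utility_opponent are illustrative assumptions, not taken from the paper.

```python
from typing import Callable, List, Tuple

# Illustrative sketch only: a bid is some assignment of issue values,
# and each agent can score any bid with a utility in [0, 1].
Bid = Tuple[int, ...]

def difference_principle_filter(
    bids: List[Bid],
    utility_self: Callable[[Bid], float],
    utility_opponent: Callable[[Bid], float],
) -> List[Bid]:
    """Keep only the bids that maximise the utility of the worst-off agent,
    i.e. a maximin reading of Rawls' difference principle."""
    if not bids:
        return []
    # Utility of the worst-off party for every candidate bid.
    worst_off = [min(utility_self(b), utility_opponent(b)) for b in bids]
    best = max(worst_off)
    # The fair agent's bidding space shrinks to the maximin-optimal bids,
    # which is consistent with the smaller bidding space noted in the abstract.
    return [b for b, w in zip(bids, worst_off) if w == best]
```

Such a filter illustrates why the fair agent's bidding space shrinks: it may only propose bids from the maximin-optimal subset, while an unconstrained agent can draw from the full outcome space.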
