Machine Dialogues on the Web

A large part of my personal research explores how we can facilitate structured dialogues and negotiations between these agents in a decentralised setting, so that they can work autonomously on our behalf. For instance, if I’m booked to present a talk on the 15th of February in Boston, how can we enable Charlie to safely discover the travel options that best suit my personal needs, book them on my behalf, and then organise dinner reservations with my colleagues in the area at a restaurant that can cater both to my coeliac disease and to my colleagues’ dietary requirements, all without my input?

Let's break it down!

Firstly, we need to make sure that all of our agents can effectively communicate (interoperate) with one another about any topic we could imagine. We don’t want to create a new HTTP API for every type of exchange, so the typical smorgasbord of JSON, GraphQL and other interfaces that modern Web developers are accustomed to is not up to scratch. Instead we need a uniform interface that ‘understands’ semantics; thankfully, the RDF data model of the Semantic Web (I strongly recommend reading the design issue if you’ve not heard of it before), which is also used by Solid, fits the bill. In particular, our agents communicate with one another by exchanging documents serialised in an RDF syntax such as Turtle.
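
To make that concrete, here is a minimal sketch of what such an exchange might look like from the receiving agent’s side, assuming a TypeScript agent using the N3.js library; the vocabulary, identifiers and message content are invented purely for illustration:

```typescript
import { Parser } from 'n3';

// A hypothetical message from Denise's agent, serialised as Turtle.
// The ex: vocabulary and the WebIDs are made up for this example.
const message = `
@prefix ex: <https://example.org/vocab#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<https://denise.example/profile#me>
    ex:availableForDinner "2025-02-15T19:00:00Z"^^xsd:dateTime ;
    ex:dietaryRequirement ex:Vegetarian .
`;

// Because both agents share the RDF data model, the receiving agent needs no
// bespoke "dinner availability" API - it parses the triples and interprets
// whichever terms it recognises.
const parser = new Parser();
const quads = parser.parse(message);

for (const quad of quads) {
  console.log(`${quad.subject.value} ${quad.predicate.value} ${quad.object.value}`);
}
```

The point is that the same parser and data model serve every kind of exchange; only the vocabulary changes from one dialogue to the next.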

The next challenge is ensuring that my agent only interacts within a safe network and is not operating on false claims. To achieve this, we need to actively include the complementary concepts of proof and trust in our negotiations.

Trust in Society

Life is full of unknowns, so interpersonal and institutional trust is required for society to function. Just to get my morning coffee, I need to trust that the driver at the traffic light won’t try to run me over as I cross the road to the cafe, that my bank won’t disappear with my money before I tap my card, and that the barista won’t poison my drink. Yet I do this every day without so much as a thought about it. It’s not guaranteed that these things won’t happen: 361 pedestrians were killed in the UK in 2021 (https://www.gov.uk/government/statistics/reported-road-casualties-great-britain-pedestrian-factsheet-2021/reported-road-casualties-great-britain-pedestrian-factsheet-2021), banks do fail as we saw in the 2008 global financial crisis, and millions suffer from food poisoning annually (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5489142/) - but it is unlikely enough that I consider the risk worth it in order to get my daily latte.

Trust on the Web

Such trust is just as important on the Web as it is in every other aspect of our lives. For example, I trust that the information I read on Wikipedia is reliable enough for most scenarios because of the collaborative editing and review process it has in place, combined with my ongoing positive experience of using the platform. Equally, I trust that the British Airways website will provide reliable information about flight services and will not steal my credit card details when I enter them. This time, however, I am trusting the website because it belongs to a reputable company, and the damage to their brand identity far outweighs any benefit they would gain from scamming some users’ credit cards.

Trust in Dialogues

Similarly, an agent working on the Web needs to know what assumptions around trust it can make. Hence, we need to be able to model our existing notions of interpersonal and institutional trust on the Web. In particular, the agent needs to be able to determine whether it can trust Fred (Denise’s agent) for the purpose of providing a time Denise is free for dinner, or whether it can trust Zorb (xalbsoi89213’s agent) for the purpose of determining which heart medication I need to order (note this is really just situational trust as described in https://link.springer.com/article/10.1007/s10462-004-0041-5). This can be achieved by having my agent learn the factors that influence my trust - for instance, I might trust people that my friends endorse, or organisations endorsed by government-approved certification authorities - and then use this information to make judgements about what information to trust in a negotiation.
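
As a very rough sketch of what such a trust policy might look like once encoded in an agent, here is some illustrative TypeScript; the types, names and the policy itself are invented assumptions rather than any established model:

```typescript
// A simplistic, illustrative trust policy: trust a peer for a given purpose if
// they are endorsed for that purpose by a friend, or by an authority my agent
// already accepts. Real policies would be learned and considerably richer.
type Purpose = 'scheduling' | 'medical' | 'payments';

interface Endorsement {
  endorser: string;   // WebID of whoever vouches for the peer
  peer: string;       // WebID of the agent being endorsed
  purpose: Purpose;
}

const myFriends = new Set(['https://denise.example/profile#me']);
const trustedAuthorities = new Set(['https://certifier.gov.example/#org']);

function trusts(peer: string, purpose: Purpose, endorsements: Endorsement[]): boolean {
  return endorsements.some(e =>
    e.peer === peer &&
    e.purpose === purpose &&
    (myFriends.has(e.endorser) || trustedAuthorities.has(e.endorser))
  );
}

// Charlie would consult something like this before acting on a claim, e.g.
// trusts('https://fred.example/profile#me', 'scheduling', knownEndorsements)
```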

Proof in Dialogues

Logical and cryptographic proofs can supplement trust by providing further guarantees of data integrity. For instance, I don’t need to trust Bob when he tells me his birth date is 03/05/1993 if he presents it alongside a signature showing that the UK government also claims his birthday is 03/05/1993. It is also common for agents to perform logical inference on the data being passed around - in this case, any derived facts should include a proof of their derivation from the source material, so that we avoid needing to trust the intermediary who did the calculation.
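
As an illustration of the cryptographic side (a toy sketch, not the actual credential format or signing scheme used by any government), the agent’s check might boil down to verifying a signature over the claim with the issuer’s public key; here the issuer’s key pair is generated locally just to keep the example self-contained:

```typescript
import { generateKeyPairSync, createSign, createVerify } from 'crypto';

// Stand in for the issuer (e.g. a government registry) by generating a key
// pair locally. In a real dialogue the public key would be discovered and
// trusted via the mechanisms described above.
const { publicKey, privateKey } = generateKeyPairSync('rsa', { modulusLength: 2048 });

// The claim Bob presents, here written as a single RDF triple in N-Triples form.
const claim =
  '<https://bob.example/profile#me> <https://example.org/vocab#birthDate> "1993-05-03" .';

// The issuer signs the claim once...
const signer = createSign('SHA256');
signer.update(claim);
const signature = signer.sign(privateKey, 'base64');

// ...and my agent can later verify it without trusting Bob at all: the check
// passes only if neither the claim nor the signature has been altered.
const verifier = createVerify('SHA256');
verifier.update(claim);
const isAuthentic = verifier.verify(publicKey, signature, 'base64');
console.log(isAuthentic); // true
```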