Proposal: A Conversational Interface for Trusted Data
In this post I set out the interface I would like to have in order to get more value out of language models on tasks, such as research, that demand high accuracy and clear provenance. The key features are:
- Indicate the likely correctness of each statement in a response via color coding of the text. This color coding should be personalisable based on whether I:
  - confirm that the entities in a sentence are the ones I want to talk about
  - confirm that I trust the information sources from which the answer was derived
  - answer further background questions the model poses when I ask it to "improve its certainty"
- Hyperlink entities (people, places, objects, etc.) in a response so I can confirm that each is the entity I care about. Hovering over a hyperlink should show extra information about the entity, such as a picture, full name and other key details, with the option to say "this is not what I'm after" or hit a green check mark in the top right corner so the interface can update its accuracy color coding.
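To make the interplay between entity confirmation and color coding concrete, here is a minimal sketch in Python. All names (`Entity`, `Statement`, the thresholds and the rule that unconfirmed entities cap confidence) are illustrative assumptions, not an existing API or the proposal's definitive design.

```python
from dataclasses import dataclass, field

# Illustrative thresholds mapping confidence to a display color.
CONFIDENCE_COLORS = [
    (0.9, "green"),   # high confidence
    (0.6, "amber"),   # medium confidence
    (0.0, "red"),     # low confidence
]

@dataclass
class Entity:
    name: str
    summary: str             # shown in the hover popup
    confirmed: bool = False  # set when the user hits the green check mark

@dataclass
class Statement:
    text: str
    base_confidence: float
    entities: list = field(default_factory=list)

    def confidence(self) -> float:
        # Assumed rule: any unconfirmed entity caps the statement's confidence,
        # since the answer may be about the wrong entity entirely.
        if self.entities and not all(e.confirmed for e in self.entities):
            return min(self.base_confidence, 0.6)
        return self.base_confidence

    def color(self) -> str:
        for threshold, color in CONFIDENCE_COLORS:
            if self.confidence() >= threshold:
                return color
        return "red"

s = Statement("Ada Lovelace wrote the first program.", 0.95,
              [Entity("Ada Lovelace", "English mathematician, 1815-1852")])
print(s.color())  # → amber (entity not yet confirmed)
s.entities[0].confirmed = True
print(s.color())  # → green (confirmation updates the color coding)
```

Confirming the entity is what lifts the statement from amber to green, which is exactly the feedback loop the green check mark is meant to drive.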
An example mock-up, produced using generative AI, can be found here. Note that only the bottom popup example works at the moment.
This requires advances beyond current Large Language Model (LLM) and emerging Large Concept Model (LCM) architectures. In particular, models would need to:
- Have in-built mechanisms for entity resolution
- Have knowledge of the provenance trails of the information they have learned
- Use information such as these provenance trails to distinguish ideas from proven results and observations, so they can accurately frame what is ground truth and what is conjecture that may or may not have consensus among a large group of people.
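The third requirement can be sketched as a simple classifier over a claim's provenance trail. The source kinds and decision rules below are illustrative assumptions about what such a trail might contain; a real system would need far richer provenance metadata.

```python
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    kind: str  # assumed categories: "peer_reviewed", "preprint", "forum_post"

def classify(trail: list) -> str:
    """Frame a claim as ground truth or conjecture from its provenance trail.

    Assumed rules: a peer-reviewed source marks an established result;
    several independent informal sources suggest conjecture with consensus;
    otherwise the claim is plain conjecture.
    """
    if any(s.kind == "peer_reviewed" for s in trail):
        return "established result"
    if len(trail) >= 3:
        return "conjecture with broad support"
    return "conjecture"

trail = [Source("Blog discussion", "forum_post"),
         Source("Journal article", "peer_reviewed")]
print(classify(trail))  # → established result
```

Even this toy version shows why provenance must be learned alongside the content: without the trail, the two output framings are indistinguishable to the model.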
I am interested in developing architectures that support this vision.