To visualise this, it is important to keep in mind that it is the interactions which are important. For example, a timeline based on publication date would be trivial to build, as would adding lines connecting the documents based on who cited whom. Already we can see how we can build meaning into the screen layout. For example, the left is the past and the right is the future, but what about up and down on the screen? Maybe the further up a document sits, the more it has been cited?
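To make that layout rule concrete, here is a minimal sketch in TypeScript, assuming each document record carries a publication year and a citation count (the field names are illustrative, not part of any Visual-Meta specification):

```typescript
// Illustrative sketch: map documents onto a 2D canvas where
// x encodes publication time (left = past, right = future) and
// y encodes citation count (higher on screen = more cited).
interface Doc {
  title: string;
  year: number;         // assumed: publication year
  citedByCount: number; // assumed: how many times this document is cited
}

function layout(docs: Doc[], width: number, height: number): Map<string, { x: number; y: number }> {
  const years = docs.map(d => d.year);
  const minYear = Math.min(...years);
  const maxYear = Math.max(...years);
  const maxCited = Math.max(...docs.map(d => d.citedByCount), 1);

  const positions = new Map<string, { x: number; y: number }>();
  for (const d of docs) {
    // Linear scales for simplicity; a real system might use log scaling for citations.
    const x = ((d.year - minYear) / Math.max(maxYear - minYear, 1)) * width;
    const y = height - (d.citedByCount / maxCited) * height; // screen y grows downward
    positions.set(d.title, { x, y });
  }
  return positions;
}
```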
We can go on with visualisations like this, but it is important to keep in mind that they can quite quickly get messy, as discussed in http://wordpress.liquid.info/07/dynamic-view-w-citations-spatial-layout-issues/frode/
This is precisely why the interactions are important: it's a bit like a desk, in that it's not a mess if it's your mess. And if you can change the view, you change your perspective, since any specific view will be incomplete. Even a basic Glossary system can get very complicated quickly: http://wordpress.liquid.info/12/key-interactions-glossary/frode/
I think that Dave King’s Cognitive City, as demonstrated here: https://www.youtube.com/watch?v=rU9sOKcIT5A, is a fantastic example of such an interactive space. What Visual-Meta does is simply provide the infrastructure for documents to be part of such a space, by letting the server know what’s inside each document and how the documents connect, through the explicit citation information.
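As a rough sketch of what that infrastructure looks like to software, the code below pulls a Visual-Meta block out of a document's plain text. It assumes the BibTeX-style block is delimited by @{visual-meta-start} and @{visual-meta-end} markers; check the current Visual-Meta specification for the exact wrapping and fields:

```typescript
// Rough sketch: extract the Visual-Meta block from a document's plain text.
// The @{visual-meta-start} / @{visual-meta-end} markers are an assumption
// based on the published Visual-Meta approach; verify against the spec.
function extractVisualMeta(fullText: string): string | null {
  const startMarker = '@{visual-meta-start}';
  const endMarker = '@{visual-meta-end}';
  const start = fullText.indexOf(startMarker);
  const end = fullText.indexOf(endMarker);
  if (start === -1 || end === -1 || end <= start) return null;
  return fullText.slice(start + startMarker.length, end).trim();
}
```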
Let us go through a few possible visualisations which can come out of Visual-Meta, and of more interactive text systems in general. You type in any keywords and they appear on the screen, with lines connecting them to any documents which contain the keywords; the lines are thicker for the documents where the keywords occur more frequently.
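A minimal sketch of that keyword-to-document weighting, assuming we have each document's plain text available (the counting here is naive, case-insensitive matching):

```typescript
// Sketch: connect a typed keyword to each document that contains it,
// with the occurrence count used as the line thickness.
interface KeywordEdge {
  docTitle: string;
  weight: number; // occurrence count, drawn as line thickness
}

function keywordEdges(keyword: string, docs: { title: string; text: string }[]): KeywordEdge[] {
  const needle = keyword.toLowerCase();
  if (needle.length === 0) return [];
  return docs
    .map(d => {
      // Count non-overlapping, case-insensitive occurrences.
      const count = d.text.toLowerCase().split(needle).length - 1;
      return { docTitle: d.title, weight: count };
    })
    .filter(e => e.weight > 0);
}
```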
You could also choose to view the documents as piles of the images they contain, or even only look at the images, hiding the documents. This could be useful for historical analysis or for comparing charts.
If you have already read all or some of the documents, you should be able to see, in a list or graph view, all the text you have highlighted, or a subset of that text based on a search.
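A small sketch of that filtering step, assuming highlights are stored as plain text snippets tied to a document title (both names are illustrative):

```typescript
// Sketch: gather highlights across documents, optionally filtered by a search term.
interface Highlight {
  docTitle: string;
  text: string;
}

function highlightsMatching(all: Highlight[], query?: string): Highlight[] {
  if (!query) return all; // no search term: show every highlight
  const q = query.toLowerCase();
  return all.filter(h => h.text.toLowerCase().includes(q));
}
```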
How about fading out the documents and the images and instead seeing a screen full of people’s names (as enabled by machine learning, provided by Apple for example, but which Visual-Meta could help make accessible), with lines showing how the people connect? We could then click on a person’s name and see where they are mentioned.
How about switching to a map view where any locations mentioned in any document are indicated, along with the documents’ publication places or the authors’ places of work? Then switch right over to another view. Effortless (after basic training, as Doug would have insisted: functionality first, usability to serve the functionality, not the other way around!).
This should make you feel like a virtual Spiderman, where you control the views and connections as though you are holding a magical web or a sculpture, changing the views to change your thoughts and changing your thoughts to change your views.
Let’s not forget that you should be able to save any view to easily flip back to it.
You should be able to make clear that you are the author of certain documents, so they can be used as a spine to see what else was going on in the field when you wrote them, and you should be able to follow those you have cited, and so on, and so on.
And when you have a view which shows something interesting, you should be able to say that you want an update whenever a newly published document matches the criteria for that view.
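A sketch of how such a standing query might work, assuming a saved view keeps its criteria as a set of keywords plus an optional cited author (all names here are illustrative):

```typescript
// Sketch: a saved view keeps its filter criteria, so newly published
// documents can be checked against it and trigger an update notification.
interface ViewCriteria {
  keywords: string[];   // all must appear in the document text
  citesAuthor?: string; // optional: document must cite this author
}

interface NewDoc {
  title: string;
  text: string;
  citedAuthors: string[]; // assumed: supplied via explicit citation metadata
}

function matchesView(doc: NewDoc, view: ViewCriteria): boolean {
  const text = doc.text.toLowerCase();
  const hasKeywords = view.keywords.every(k => text.includes(k.toLowerCase()));
  const hasCitation = !view.citesAuthor || doc.citedAuthors.includes(view.citesAuthor);
  return hasKeywords && hasCitation;
}

// Usage: when a new document arrives, list every saved view it matches,
// so the user can be notified and flip straight back to that view.
function viewsToNotify(doc: NewDoc, savedViews: Map<string, ViewCriteria>): string[] {
  return [...savedViews]
    .filter(([, criteria]) => matchesView(doc, criteria))
    .map(([name]) => name);
}
```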
When documents are aware of what they contain, including their own internal and external citation information, and can communicate this to software, the only limit to what you can see and do will be your imagination, to paraphrase the classic Macintosh commercial: