Visual-Metadata for Augmented Documents

Visual-Meta is a method of including metadata in a document, visibly as an appendix rather than hidden in a datafile, augmenting ordinary documents and making them richly interactive. This brings specific advantages to you as author and reader.

Visual-Meta augmented documents know what they are: who authored them, their title and their date of publication. This information travels with any text you copy from a Visual-Meta document, so you can paste it as a complete citation, reducing the opportunity for human error.
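The copy-as-citation behaviour described above can be sketched in a few lines. This is an illustration only: the field names follow BibTeX naming as used by Visual-Meta, but the citation format and the `copy_with_citation` function are hypothetical, not part of any specification.

```python
# Sketch: a Visual-Meta-aware reader appending citation metadata to
# copied text. The field names follow BibTeX; the citation layout
# here is an illustrative assumption, not the Visual-Meta standard.
def copy_with_citation(selected_text, meta):
    citation = "{author} ({year}). {title}.".format(
        author=meta["author"], year=meta["year"], title=meta["title"])
    return f'"{selected_text}"\n-- {citation}'

meta = {"author": "Manguel, Alberto",
        "title": "A reader on reading",
        "year": "2010"}
print(copy_with_citation("Reading is a conversation.", meta))
```

Because the metadata rides along with the copied text, the pasted result is already a usable citation rather than an anonymous fragment.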

Visual-Meta documents also know their structure so that you can fold them into tables of content for quick navigation. 

VM documents also know what they cite, so you can easily view citation threads without worrying about citation errors, which are all too frequent in academic documents. You can even click on a citation in a document to see all the reference information.

The result is richer views for richer insights. Doug Engelbart felt views were vital to extend the grasp of our knowledge. Bertrand Russell, writing more abstractly about views, used the example of how binocular vision provides a richer sense of what is being viewed than is afforded by a single eye, with a single point of view. Now we must imagine what an almost unrestricted amount of views can give us because our documents now contain the metadata to build such views.

Let’s first look at how metadata is ‘imprinted’ in a paper book: 

Traditional Book Approach. A printed book features a page, before the main text, with ‘meta’ data ‘about’ the book, including the name of the author, title of the book and imprint information and so on. For example: 

Copyright © 2010 Alberto Manguel.
All rights reserved.
Printed in the United States of America.
Library of Congress Cataloging-in-Publication Data
Manguel, Alberto.
A reader on reading / Alberto Manguel.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-300-15982-0 (alk. paper)


The Visual-Meta Approach. Visual-Meta puts this metadata into an Appendix at the back of the document instead of at the front (to make it less obtrusive), written out as plain text in a very human-readable format, as shown below. It is not a new document format; it is a novel use of the existing BibTeX standard, and is comparable with other metadata standards such as MARC and Dublin Core. PDF viewer software can then use this metadata to make the text in the document interactive. Here is a basic indication of what Visual-Meta can contain:


   author = {Manguel, Alberto},
   title = {A reader on reading},
   year = {2010},
   isbn = {978-0-300-15982-0},
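Because the appendix is plain text in BibTeX style, reading software can recover the fields with very little machinery. The sketch below parses only the bare `key = {value}` lines shown above; real Visual-Meta wraps its appendix in marker tags, which this illustration deliberately omits.

```python
# Sketch: parse BibTeX-style "key = {value}" lines as they appear
# in a Visual-Meta appendix. Handles only the bare field lines
# shown above, not the full Visual-Meta wrapper markers.
import re

def parse_fields(text):
    fields = {}
    for m in re.finditer(r'(\w+)\s*=\s*\{([^}]*)\}', text):
        fields[m.group(1)] = m.group(2)
    return fields

appendix = """
   author = {Manguel, Alberto},
   title = {A reader on reading},
   year = {2010},
   isbn = {978-0-300-15982-0},
"""
print(parse_fields(appendix)["title"])  # A reader on reading
```

The point of the exercise is robustness: since the metadata is ordinary visible text, it survives printing, scanning and copying, and any software that can read the document can read its metadata.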



The documents we share with each other today are generally paper-facsimile with few digital interactions afforded to them. To truly unleash digitally connected discourse we need documents to ‘know’ what they are; who authored them, what their title is, when they were published, how they connect to other documents and so on, as well as what their structures are. 


To achieve this, it must be done in a robust way so that this enabling metadata does not get stripped from the document over time. It is relatively easy to invent a new format to provide this, but with the ubiquity of PDF it would be prohibitively expensive to promote as a universal standard. A better approach is to bootstrap what we have: to augment what is already used for important documents, particularly academic documents, namely PDF. This is achievable, and can be as simple as writing a few lines of text at the back of the document.


Implementing Visual-Meta


Visual-Meta aims to be self-explanatory for those who come across it. Creating Visual-Meta for your documents follows the BibTeX format.
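Writing out such an appendix can be sketched as follows. The `@book` entry type and the exact layout are assumptions for illustration; the full Visual-Meta specification defines additional wrapper markers and fields beyond this minimal BibTeX-style core.

```python
# Sketch: emit a BibTeX-style appendix of the kind Visual-Meta uses.
# The @book entry type and layout are illustrative assumptions,
# not the complete Visual-Meta specification.
def visual_meta_appendix(entry_type, fields):
    lines = [f"@{entry_type}{{"]
    for key, value in fields.items():
        lines.append(f"   {key} = {{{value}}},")
    lines.append("}")
    return "\n".join(lines)

print(visual_meta_appendix("book", {
    "author": "Manguel, Alberto",
    "title": "A reader on reading",
    "year": "2010",
    "isbn": "978-0-300-15982-0",
}))
```

Appending this text to the last page of a document is all it takes to make the metadata part of the document itself, rather than of a sidecar file that can go missing.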




Visual-Meta in ACM Communications

Co-inventor of the Internet, Vint Cerf, wrote about Visual-Meta in his editorial in ‘Communications’, the main publication of the venerable, prestigious and influential Association for Computing Machinery (ACM).


Introduction (Hypertext ’21)

A brief presentation and explanation of Visual-Meta was given to the attendees of the Hypertext ’21 Conference, where Visual-Meta is a part of all the conference proceedings, by Frode Hegland, after an introduction by Vint Cerf. The full introduction, including Vint’s remarks and the discussion after the presentation, is also available.


Visual-Meta for VR/AR/XR

Visual-Meta can be used to carry rich metadata from traditional documents into augmented environments, and a new Visual-Meta appendix can be added when leaving the environment, so that the augmented views are stored. This work is being done at the Future Text Lab.


Visual-Meta augments ML

Visual-Meta augments Machine Learning (ML) by providing a stable structure within which ML can work. An example of this is Headings, which Visual-Meta surfaces to reading software so that when ML analysis is performed the Headings can act as visual anchors to help with context.
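One way headings can anchor ML analysis is to attach the nearest heading to each chunk of text before it is processed. The sketch below is an illustrative assumption about how reading software might use the heading structure Visual-Meta surfaces; the heading list and chunking scheme are hypothetical.

```python
# Sketch: use headings surfaced by Visual-Meta as anchors that give
# context to text chunks before ML analysis. The heading set and
# line-based chunking here are illustrative assumptions.
def chunks_with_context(document, headings):
    chunks, current = [], None
    for line in document.splitlines():
        if line.strip() in headings:
            current = line.strip()
        elif line.strip():
            chunks.append({"heading": current, "text": line.strip()})
    return chunks

doc = "Introduction\nVisual-Meta is plain text.\nHistory\nBooks carry imprints."
print(chunks_with_context(doc, {"Introduction", "History"}))
```

With the heading attached, each chunk carries its place in the document's structure, giving downstream analysis stable context instead of an undifferentiated stream of text.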


BBC World Podcast

Visual-Meta was featured on the BBC World Service programme ‘Digital Planet’ and is now available as a podcast.


Visual-Meta Implementations

Visual-Meta has been implemented in academic journals, authoring software and visualisation systems.



This is not a new document format: Visual-Meta augments regular PDF documents and is being implemented for web pages.