Visual-Meta

Visual-Meta is an open & robust approach to augmenting documents so that they communicate what they are, allowing you to do more useful things with them.

The result is richer interactions for more control & richer views for deeper insights. 

Let’s first look at how metadata is ‘imprinted’ in a paper book: 

Traditional Book Approach

A printed book features a page, before the main text, with ‘meta’ data ‘about’ the book, including the name of the author, the title of the book, imprint information and so on. This is called the ‘impressum’. For example: 

Copyright © 2010 Alberto Manguel.
All rights reserved.
Printed in the United States of America.

Library of Congress Cataloging-in-Publication Data
Manguel, Alberto.
A reader on reading / Alberto Manguel.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-300-15982-0 (alk. paper)

The Visual-Meta Approach

Visual-Meta puts this metadata into an Appendix at the back of the document instead of at the front (to make it less obtrusive), written out as plain text in a very human-readable format, as shown below.

This is not a new document format; it is a novel use of the existing BibTeX standard and is compatible with other protocols, such as MARC and Dublin Core.

PDF viewer software can then use this to make the text in the document interactive. Here is a basic indication of what Visual-Meta can contain: 

@{visual-meta-start}

   author = {Manguel, Alberto},
   title = {A reader on reading},
   year = {2010},
   isbn = {978-0-300-15982-0},

@{visual-meta-end}
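Reading software can then locate this block simply by scanning the document’s extracted text for the start and end markers. Below is a minimal, hypothetical sketch of such a parser in Python; it only handles the flat key = {value} entries shown above and is an illustration, not a reference implementation.

import re

def parse_visual_meta(document_text):
    """Return the key/value pairs from a Visual-Meta appendix, if present.

    Simplified sketch: only flat entries of the form key = {value},
    as in the example above, are handled.
    """
    start = document_text.find("@{visual-meta-start}")
    end = document_text.find("@{visual-meta-end}")
    if start == -1 or end == -1 or end < start:
        return {}  # no Visual-Meta appendix found

    block = document_text[start:end]
    # Matches lines such as:  author = {Manguel, Alberto},
    pairs = re.findall(r"([\w-]+)\s*=\s*\{(.*?)\}", block, re.DOTALL)
    return {key: value.strip() for key, value in pairs}

# Example usage, given text extracted from the final pages of a PDF:
# metadata = parse_visual_meta(extracted_text)
# print(metadata.get("author"), metadata.get("title"), metadata.get("year"))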


The documents we share with each other today are generally paper facsimiles, with few digital interactions afforded to them. To truly unleash digitally connected discourse we need documents to ‘know’ what they are: who authored them, what their title is, when they were published, how they connect to other documents and so on, as well as what their structures are. 


To achieve this, it must be done in a robust way so that this enabling metadata does not get stripped from the document over time. It is relatively easy to invent a new format to provide this, but given the ubiquity of PDF it would be prohibitively expensive to promote such a format as a universal standard. The approach should therefore bootstrap what we already have: it should augment what is already used for important documents, particularly academic documents, namely PDF. This is achievable: it can be as simple as writing a few lines of text at the back of the document.

Doug Engelbart felt flexible view controls were vital to extend the grasp of our knowledge. Bertrand Russell, writing more abstractly about views, used the example of how binocular vision provides a richer sense of what is being viewed than is afforded by a single eye, with a single point of view. Now we must imagine what an almost unrestricted number of views can give us, because our documents now contain the metadata to build such views.


Implementing Visual-Meta


Visual-Meta aims to be self-explanatory for those who come across it. Creating Visual-Meta for your documents follows the BibTeX format, as in the example above.
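As an illustrative sketch only, and not a reference implementation, authoring software could assemble such an appendix from ordinary metadata fields; the Python below emits only the four fields used in the example earlier in this document.

def make_visual_meta(author, title, year, isbn):
    """Build a minimal Visual-Meta appendix as plain text.

    Authoring software would append this text to the end of the
    document, where it survives as ordinary human-readable content.
    """
    lines = [
        "@{visual-meta-start}",
        "",
        "   author = {%s}," % author,
        "   title = {%s}," % title,
        "   year = {%s}," % year,
        "   isbn = {%s}," % isbn,
        "",
        "@{visual-meta-end}",
    ]
    return "\n".join(lines)

# Example:
# print(make_visual_meta("Manguel, Alberto", "A reader on reading", 2010, "978-0-300-15982-0"))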



Visual-Meta in ACM Communications

Vint Cerf, co-inventor of the Internet, wrote about Visual-Meta in his editorial in ‘Communications’, the main publication of the venerable, prestigious and influential Association for Computing Machinery (ACM): https://cacm.acm.org/magazines/2021/10/255699-the-future-of-text-redux/fulltext


Introduction (Hypertext ’21)

A brief presentation and explanation of Visual-Meta was given to the attendees of the Hypertext ’21 conference, where Visual-Meta is part of all the conference proceedings, by Frode Hegland, after an introduction by Vint Cerf. The full introduction is also available, with the introduction by Vint and the discussion after the presentation.


Visual-Meta for VR/AR/XR

Visual-Meta can be used to carry rich metadata from traditional documents into augmented environments; a new Visual-Meta appendix can then be added when leaving the environment, so that the augmented views are stored. This work is being done at the Future Text Lab.
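A rough sketch of that round trip, reusing the hypothetical parse_visual_meta helper from the earlier sketch and a made-up ‘vr-view-state’ field name (neither is part of any Visual-Meta specification):

def add_view_state(original_meta, view_state):
    """Serialise a new Visual-Meta appendix carrying the original
    metadata plus the state of the augmented views, so the views
    can be restored the next time the document is opened.
    """
    entries = dict(original_meta)
    # Encode the view state as a simple, human-readable string.
    entries["vr-view-state"] = "; ".join("%s=%s" % (k, v) for k, v in view_state.items())

    body = ",\n".join("   %s = {%s}" % (key, value) for key, value in entries.items())
    return "@{visual-meta-start}\n\n" + body + ",\n\n@{visual-meta-end}\n"

# Example:
# metadata = parse_visual_meta(extracted_text)   # from the earlier sketch
# appendix = add_view_state(metadata, {"layout": "spatial-map", "scale": 1.5})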


Visual-Meta augments ML

Visual-Meta augments Machine Learning (ML) by providing a stable structure within which ML can work. One example is headings, which Visual-Meta surfaces to reading software so that, when ML analysis is performed, the headings can act as visual anchors that help establish context.
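As a loose illustration of the idea, and assuming the headings have already been read out of a document’s Visual-Meta appendix, reading software could split the text into labelled sections before handing it to an ML pipeline. The function below is hypothetical.

def split_by_headings(document_text, headings):
    """Split a document's text into sections keyed by heading.

    The headings are assumed to come from the document's Visual-Meta
    appendix, giving the ML analysis stable anchors for context.
    """
    positions = sorted(
        (document_text.find(h), h) for h in headings if document_text.find(h) != -1
    )
    sections = {}
    for i, (start, heading) in enumerate(positions):
        end = positions[i + 1][0] if i + 1 < len(positions) else len(document_text)
        sections[heading] = document_text[start + len(heading):end].strip()
    return sections

# Example:
# sections = split_by_headings(extracted_text,
#                              ["Traditional Book Approach", "The Visual-Meta Approach"])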


BBC World Podcast

Visual-Meta was featured on the BBC World programme ‘Digital Planet’, which is now available as a podcast:

https://www.bbc.co.uk/sounds/play/w3ct1lt4


Visual-Meta Implementations

Visual-Meta has been implemented in academic journals, authoring software and visualisation systems.


This is not a new document format: Visual-Meta augments regular PDF documents and is being implemented for web pages.