Visual-Meta

Visual-Meta is an approach to augmenting documents in order to enable rich interactions in a robust manner.


The Traditional Book Approach

A printed book features a printer’s page before the main text, with metadata including the author’s name, the title of the book, imprint information and so on. For example:

Copyright © 2010 Alberto Manguel.
All rights reserved.
Printed in the United States of America. Library of Congress Cataloging-in-Publication Data: Manguel, Alberto. A reader on reading.
ISBN 978-0-300-15982-0 (alk. paper) 

 


The Visual-Meta Approach

Visual-Meta puts this metadata, and more, into an Appendix at the back of the document instead of at the front (to make it less obtrusive), written out as plain text in a very human-readable format based on the academic BibTeX format, as shown below:

@{visual-meta-start}

   author = {Manguel, Alberto},
   title = {A reader on reading},
   year = {2010},
   isbn = {978-0-300-15982-0},

@{visual-meta-end}
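
Because the block is plain text with clearly named start and end markers, reading it back requires no special tooling. As a minimal illustrative sketch (the parsing rules and function name here are assumptions, not part of any specification), a reader application might locate and parse the fields like this:

import re

def extract_visual_meta(document_text):
    # Find the plain-text block between the Visual-Meta markers.
    match = re.search(r"@\{visual-meta-start\}(.*?)@\{visual-meta-end\}",
                      document_text, re.DOTALL)
    if not match:
        return {}
    # Collect the BibTeX-style 'key = {value}' pairs into a dictionary.
    fields = {}
    for key, value in re.findall(r"([\w-]+)\s*=\s*\{(.*?)\}", match.group(1)):
        fields[key] = value.strip()
    return fields

# Applied to the block above, this yields:
# {'author': 'Manguel, Alberto', 'title': 'A reader on reading',
#  'year': '2010', 'isbn': '978-0-300-15982-0'}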

    

Please note, the bibliographic block above is only an example of the most basic metadata Visual-Meta can convey. Additional metadata includes document structure, an interactive glossary, graph data, spatial data, AI-extracted entities, summaries & more. To expand how we think, we must expand how we share knowledge.
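
As a purely illustrative sketch of how such richer metadata might sit alongside the bibliographic fields (the additional field names below are hypothetical and do not follow the published Visual-Meta specification), an extended block could look something like this:

@{visual-meta-start}

   author = {Manguel, Alberto},
   title = {A reader on reading},
   year = {2010},
   isbn = {978-0-300-15982-0},
   headings = {Chapter One (level 1); A Section Heading (level 2)},
   glossary = {docuverse: the connected universe of documents},

@{visual-meta-end}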

Software tools break.
Robustly embodied, accessible &
addressable metadata does not.
This is how we can ensure interactions
can remain possible
for the very long term.

             

Characteristics

Enables Powerful Interactions
  • Copy & Paste to Cite. Copy & paste text as full citations, automatically (see the sketch after this list)
  • Click to Access Citation Information. Click on a citation to open the cited document directly, not just a download page
  • Flexible Views. Change the view of the document as you see fit
  • Glossaries of Defined Concepts
  • Computational. Supports advanced interactions similar to what computational notebooks enable
  • Connected. Makes citations explicit for citation analysis and supports link types and external resources, such as LLM hints, glossaries etc.
Rich & Open
  • Rich. Supports a large amount of metadata
  • Open. Anyone can read the metadata now & in the future
  • Free. No license constraints
  • Extensible. Anyone can add their own fields should they wish, as long as they make clear what the fields are
  • Accessibility Support. Provides useful data and tags for accessibility
  • Augments XR & AI. Allows for temporary appendices for extended reality and AI analysis
Robust
  • Robust. Metadata stays available for as long as the document does
  • Document encapsulated. Does not need a server to be maintained to function, even for linking to other documents
  • Compatible. Works in any valid PDF viewer, since Visual-Meta simply adds text to the document
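
As a rough illustration of the first interaction listed above (the citation style, function name and clipboard handling are assumptions, not the behaviour of any particular viewer), a viewer that has already parsed the Visual-Meta fields could attach a full citation whenever text is copied:

def cite_on_copy(copied_text, fields):
    # Build a simple citation from the document's own Visual-Meta fields.
    citation = "{author} ({year}). {title}. ISBN {isbn}.".format(
        author=fields.get("author", "Unknown author"),
        year=fields.get("year", "n.d."),
        title=fields.get("title", "Untitled"),
        isbn=fields.get("isbn", "no ISBN"),
    )
    # Return the copied passage together with its citation, ready for pasting.
    return copied_text + "\n\n" + citation

fields = {"author": "Manguel, Alberto", "title": "A reader on reading",
          "year": "2010", "isbn": "978-0-300-15982-0"}
print(cite_on_copy("…the copied passage…", fields))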

Any metadata standard can, in principle, accommodate any metadata, but there are real differences in how that metadata is produced and to what degree it becomes accessible. Anything can be put into metadata fields, but at what production cost, and at what effort to access it?

The inclusion of rich and open metadata in documents, such as through Visual-Meta, allows software developers to create precision tools for different groups of users without relying on proprietary data. Our choice of what software to use should be like choosing a car to drive: all cars can drive on the roads, and all roads are open for exploration.

Please refer to the Metadata Comparison Table to see how this approach differs from what is currently available. This is not a demo, and it is not only for today: to enable rich interactions, we need a robust infrastructure for rich metadata, as the games industry has known for years and the movie industry has teased. We need something like this approach.


Implementing Visual-Meta

Visual-Meta aims to be self-explanatory for those who come across it and straightforward to implement. The approach is compatible with any ‘publish’ format where a document is exported from authoring software in a stable form, such as PDF, which is the primary use case.
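
As one possible implementation sketch (the library, page layout and function name are assumptions for illustration, not part of the approach itself), an exporter could render the metadata as a plain-text Visual-Meta appendix page, which is then appended to the published PDF:

# Illustrative only: render a Visual-Meta appendix page using the reportlab
# library; any tool able to place plain text into the exported PDF would do.
from reportlab.lib.pagesizes import A4
from reportlab.pdfgen import canvas

def write_visual_meta_page(path, fields):
    lines = ["@{visual-meta-start}", ""]
    lines += ["   {} = {{{}}},".format(key, value) for key, value in fields.items()]
    lines += ["", "@{visual-meta-end}"]

    c = canvas.Canvas(path, pagesize=A4)
    text = c.beginText(72, A4[1] - 72)   # start one inch from the top-left corner
    text.setFont("Courier", 9)           # ordinary monospaced text, nothing hidden
    for line in lines:
        text.textLine(line)
    c.drawText(text)
    c.save()

write_visual_meta_page("visual-meta-appendix.pdf", {
    "author": "Manguel, Alberto",
    "title": "A reader on reading",
    "year": "2010",
    "isbn": "978-0-300-15982-0",
})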

 

FAQ

Please note that this approach does not compete with other formats like XML.

  

Status

BBC World’s ‘Digital Planet’ presented Visual-Meta in a podcast episode.

Hypertext ’21 Conference. A Brief Introduction is available, as well as a Full Introduction. The conference proceedings of ’21 & ’22 feature Visual-Meta.

For a book-size demonstration of Visual-Meta, you can download the free PDF versions of The Future of Text and, if you are on macOS, view them in the ‘Reader’ PDF viewer to unlock the Visual-Meta capabilities. You can of course still see the Visual-Meta if you open a PDF in any other PDF viewer, though the augmented capabilities are only available where viewer software has implemented them.

In short, we are working towards richly interactive, robust documents. If this is interesting to you, please have a look at who is behind Visual-Meta and feel free to get in touch.

Frode Alexander Hegland
frode@hegland.com
London, 2023

 

What people are saying

Visual-Meta adds “an exploitable self-contained self-awareness within some of the objects in this universe and increases their enduring referenceability.”
In ACM ‘Communications’: The Future of Text Redux.

Vint Cerf

“The more information, the more easily understood and not just shared, the better. And so, making it visible and usable and searchable is great.”

Esther Dyson

“If a PDF has visual meta, Scholarcy’s tools can read it, rather than having to look up the record in a multitude of databases, or try to find it in incomplete embedded data. That means a faster and smoother experience for our users – they get accurate information immediately.”

“All publishers should support Visual-Meta.”

Phil Gooch

“I like this a lot because you’ve got a, ‘I can’t be bothered option’ (standard BibTeX format) and an ‘I really care option’ (extended format).”

Christopher Gutteridge

“I’m struck by the underlying simplicity, where the complexity is built up. And I like those kinds of design approaches, because they create much lower barriers to adoption.”

Dave Crocker

“It’s not hypertext or linear documents. I think the pushing that went on in the early years of hypermedia—pushing toward resisting the linearity of print and thinking ‘let’s have another paradigm that gets us away from print’— wasn’t useful. Because the reality is, we’re working with both. Visual-Meta creates a docuverse in an interesting way.”

Jane Yellowlees Douglas

On exporting defined terms as a glossary and then clicking on terms inside a definition to load them: “Oh, my God. How did you do that?” On how structural information, such as headings which can be folded into a table of contents in the reader software, is also included on export: “that’s really useful.”

“This would be great for all researchers.”

Esther Wojcicki

“You have a system you have put together which can have a real impact on readers and writers.”

Livia Polanyi

“As Engelbart said, if you can automate the lower level tasks, then that enables you to think better about the higher level tasks. And I think this does a really great job of taking all of the different parts of what a proper scholar does, and put it really at your fingertips.”

“I wish I had this when I was writing books.”

Howard Rheingold

“What you’re doing is awesome. Keep going.”

Jack Park

“What you’re trying to do is imprint the digital into the print.”

“I’m glad you’re doing this, it’s important work.”

Dene Grigar

“This deserves to go further than just students in academia.”

“I can absolutely see this being you know used in a corporate industrial research lab.”

Simon Buckingham Shum

“It’s all about context. But my whole world is context, right? And this brings it out. It’s super exciting … Perfect. Perfect. That’s fantastic.”

Bruce Horn

 

If we truly value knowledge, we must also value how knowledge is created, stored, accessed & analysed.