Visual-Meta

Visual-Meta is an approach to augmenting documents in order to enable rich interactions in a robust manner.


The Traditional Book Approach

A printed book features a printer’s page before the main text, with metadata including the name of the author, the title of the book, imprint information, and so on. For example:

Copyright © 2010 Alberto Manguel.
All rights reserved.
Printed in the United States of America.
Library of Congress Cataloging-in-Publication Data
Manguel, Alberto. A reader on reading.
ISBN 978-0-300-15982-0 (alk. paper)

 


The Visual-Meta Approach

Visual-Meta puts this metadata, and more, into an Appendix at the back of the document instead of at the front (to make it less obtrusive), written out as plain text in a highly human-readable format based on the academic BibTeX format, as shown below:

@{visual-meta-start}

   author = {Manguel, Alberto},
   title = {A reader on reading},
   year = {2010},
   isbn = {978-0-300-15982-0},

@{visual-meta-end}
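
Because the block is ordinary text, any person or program can read it back out with no special tooling. As a minimal sketch, assuming Python and the simple flat field layout shown above (real Visual-Meta blocks may carry more sections), reading software might locate and parse the block like this; the function name parse_visual_meta is illustrative, not part of any specification:

    import re

    def parse_visual_meta(text: str) -> dict:
        """Find the Visual-Meta block in extracted document text and return its fields."""
        # Locate everything between the start and end wrappers.
        block = re.search(
            r"@\{visual-meta-start\}(.*?)@\{visual-meta-end\}",
            text,
            re.DOTALL,
        )
        if block is None:
            return {}
        # Each field is written as:  key = {value},
        pairs = re.findall(r"([\w-]+)\s*=\s*\{(.*?)\}", block.group(1), re.DOTALL)
        return {key: value.strip() for key, value in pairs}

Run against the example above, this returns {'author': 'Manguel, Alberto', 'title': 'A reader on reading', 'year': '2010', 'isbn': '978-0-300-15982-0'}.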


Characteristics

Enables Powerful Interactions
  • Copy & Paste to Cite. Copy and paste text as full citations, automatically (see the sketch after this list).
  • Click to Access Citation Information. Click on a citation to open the cited document directly, not just a download page
  • Flexible Views. Change the view of the document as you see fit: see it as an outline, see how glossary terms connect, and view citation maps with ease
  • Glossaries of Defined Concepts, for use in Maps and Graphs in software which supports them
  • Computational Text. Supports advanced interactions, not only static text, similar to what computational notebooks enable, with clearly visible data, including embedded executable code
  • VR/AR Aware. Clear approaches for dealing with spatial computing environments and beyond, including supplementary appendices for the placement of elements in virtual space
Rich & Open
  • Rich. Supports a large amount of metadata
  • Open. Anyone can read the metadata with no special background or tooling, now & in the future
  • Free. No license constraints. Anyone can freely build tools to parse and append Visual-Meta
  • Extensible. Anyone can add their own fields, not only the predefined ones, should they wish, as long as they make clear what the fields are
  • Accessibility focused. By providing useful data and tags for accessibility (including the option to include pristine full text for reader software to present in specific ways), all users benefit
  • Augments AI. Gives AI clear and authoritative information, in plain, literal text
Robust
  • Robust. The metadata cannot easily be lost if the document changes versions or even formats, since the metadata is not embedded; it is on the same level as the contents
  • Document encapsulated. Does not need a server to be maintained to function
  • HTML Compatible. Can embed full sections of HTML to provide rich interactions in reading software which supports it
  • Compatible with all valid PDF viewers since it simply adds text to the document.
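
To make the ‘Copy & Paste to Cite’ interaction above concrete: once the fields have been parsed (as in the earlier sketch), an authoring tool only has to wrap the copied excerpt in a citation built from them. A minimal sketch, again in Python; the name format_citation and the citation layout are illustrative assumptions, not a prescribed style:

    def format_citation(excerpt: str, meta: dict) -> str:
        """Wrap a copied excerpt in a full citation built from Visual-Meta fields."""
        author = meta.get("author", "Unknown author")
        title = meta.get("title", "Untitled")
        year = meta.get("year", "n.d.")
        # A real tool would follow a proper citation style and keep the
        # metadata itself on the clipboard alongside the text.
        return f'"{excerpt}" ({author}. {title}. {year}.)'

For the book example above, pasting a copied passage would then yield something like: "…" (Manguel, Alberto. A reader on reading. 2010.).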

Benefits

  • Student End Users – When authoring, copy and paste to cite. When reading, click on a citation to see the source, then click on the source to open it directly, with much more flexible views of the text than plain PDFs afford
  • Educators – Better understanding of student thinking through the Glossaries of Defined Concepts, clearer access to citations and simpler environment for students
  • Publishers – For production, Visual-Meta syncs with their XML data for cheap production and connects their work better, leading to more use of their content. For ingestion of manuscripts, it makes extraction of metadata faster and places much less onus on the author to format to style, which costs effort. Currently in testing with the ACM.
  • Software Developers – Easy and cheap to produce Visual-Meta based on the data in the manuscript, and also easy and cheap to extract from PDF, with no license or external software needed
  • Apple – Encourages an open ecosystem where developers can compete by building the most powerful user experiences

All metadata standards can potentially accommodate all metadata, but there are real differences in how it is produced and to what degree it becomes accessible. Anything can be put into metadata fields, but at what production cost, and at what effort to access it?

Please refer to the Metadata Comparison Table to see how this approach differs from what is currently available.

This is not a demo, and it is not only for today. To enable rich interactions, we need a robust infrastructure for rich metadata, as the games industry has known for years and the movie industry has teased. We need something like this approach.


Implementing Visual-Meta

Visual-Meta aims to be self-explanatory for those who come across it and straightforward to implement. The approach is compatible with any ‘publish’ format where a document is exported from authoring software in a stable form, such as PDF, which is the primary use case.

All of the above capabilities, and much more, are made possible by the document being able to store and communicate information about what it is. Importantly, this comes at very low cost, since most of this information is already known in the manuscript document; it is simply transferred to the published PDF rather than being discarded.
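
As a minimal sketch of that transfer, again in Python: the exporter serializes the metadata it already holds into the plain-text wrapper and appends it to the end of the published document. The function name write_visual_meta and the field set are illustrative assumptions:

    def write_visual_meta(fields: dict) -> str:
        """Serialize known manuscript metadata into a Visual-Meta appendix block."""
        lines = ["@{visual-meta-start}", ""]
        for key, value in fields.items():
            # Each field uses the BibTeX-style  key = {value},  layout.
            lines.append(f"   {key} = {{{value}}},")
        lines.extend(["", "@{visual-meta-end}"])
        return "\n".join(lines)

Called with the fields from the book example earlier, this reproduces the block shown above; the exporter simply appends it as ordinary, selectable text (for example, as the last page of the PDF).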

 


Status

BBC World Service’s ‘Digital Planet’ presented Visual-Meta (podcast).

Hypertext ’21 Conference. A Brief Introduction is available, as well as a Full Introduction. The conference proceedings of ’21 & ’22 feature Visual-Meta.

For a book-size demonstration of Visual-Meta, you can download the free PDF version of The Future of Text and, if you are on macOS, view it in the ‘Reader’ PDF viewer to unlock the Visual-Meta capabilities. You can of course still see the Visual-Meta if you open the PDF in any other PDF viewer, though the augmented capabilities are only available where implemented in viewer software.

In short, we are working towards richly interactive, robust documents. If this is interesting to you, please have a look at who we are and feel free to get in touch.

Frode Alexander Hegland
frode@hegland.com
London, 2023

 

What people are saying

Visual-Meta adds “an exploitable self-contained self-awareness within some of the objects in this universe and increases their enduring referenceability.”
In ACM ‘Communications’: The Future of Text Redux.

Vint Cerf

“The more information, the more easily understood and not just shared, the better. And so, making it visible and usable and searchable is great.”

Esther Dyson

“If a PDF has Visual-Meta, Scholarcy’s tools can read it, rather than having to look up the record in a multitude of databases, or try to find it in incomplete embedded data. That means a faster and smoother experience for our users – they get accurate information immediately.”

“All publishers should support Visual-Meta.”

Phil Gooch

“I like this a lot because you’ve got a, ‘I can’t be bothered option’ (standard BibTeX format) and an ‘I really care option’ (extended format).”

Christopher Gutteridge

“I’m struck by the underlying simplicity, where the complexity is built up. And I like those kinds of design approaches, because they create much lower barriers to adoption.”

Dave Crocker

“It’s not hypertext or linear documents. I think the pushing that went on in the early years of hypermedia—pushing toward resisting the linearity of print and thinking ‘let’s have another paradigm that gets us away from print’— wasn’t useful. Because the reality is, we’re working with both. Visual-Meta creates a docuverse in an interesting way.”

Jane Yellowlees Douglas

On exporting defined terms as glossary and then clicking on terms inside the definition to load them: “Oh, my God. How did you do that?” On how also structural information is included on exports, such as headings which can be folded into a table of contents in the reader software: “that’s really useful.”

“This would be great for all researchers.”

Esther Wojcicki

“You have a system you have put together which can have a real impact on readers and writers.”

Livia Polanyi

“As Engelbart said, if you can automate the lower level tasks, then that enables you to think better about the higher level tasks. And I think this does a really great job of taking all of the different parts of what a proper scholar does, and put it really at your fingertips.”

“I wish I had this when I was writing books.”

Howard Rheingold

“What you’re doing is awesome. Keep going.”

Jack Park

“What you’re trying to do is imprint the digital into the print.”

“I’m glad you’re doing this, it’s important work.”

Dene Grigar

“This deserves to go further than just students in academia.”

“I can absolutely see this being you know used in a corporate industrial research lab.”

Simon Buckingham Shum

“It’s all about context. But my whole world is context, right? And this brings it out. It’s super exciting … Perfect. Perfect. That’s fantastic.”

Bruce Horn

 

If we truly value knowledge, we must also value how knowledge is created, stored, accessed & analysed.