Visual-Meta is a method of including metadata about a document and its contents visibly in the document itself, in a human- and machine-readable Visual-Meta appendix on the same visual level as the content, rather than hidden in the data file. This makes ordinary text in a PDF richly interactive in a robust form.

This may sound dry, but instead of trying to invent a new document format substrate to unleash the potential of richly interactive digital text, this approach takes ‘normal’ PDF text and makes it interactive. It is currently possible to embed some metadata in a PDF, but this is rarely done and does not include structural information.

Let’s first look at how metadata is ‘imprinted’ in a paper book:

Traditional Book Approach

A traditional book features a page inside the book, before the main text, with ‘meta’ data ‘about’ the book, including the name of the author, title of the book and imprint information and so on:

Copyright © 2010 Alberto Manguel.
All rights reserved.
Designed by Sonia Shannon.
Set in Fournier type by Tseng Information Systems, Inc.
Printed in the United States of America.
Library of Congress Cataloging-in-Publication Data
Manguel, Alberto.
A reader on reading / Alberto Manguel.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-300-15982-0 (alk. paper)
1. Books and reading. 2. Manguel, Alberto—Books and reading. I. Title.
z1003.M2925 2010
028’.9—dc22
2009043719
A catalogue record for this book is available from the British Library.

The Visual-Meta Approach

Visual-Meta puts this metadata into an Appendix at the back of the document instead of at the front (to make it less obtrusive), written out as visible plain text. It contains citation metadata and can also contain addressing, interaction and formatting information. The PDF viewer can then use this to make the normal PDF text in the document interactive:

author = {Hegland, Frode},
title = {Visual-Meta: An Approach to Surfacing Metadata},
booktitle = {Proceedings of the 2nd International Workshop on Human Factors in Hypertext},
series = {HUMAN ’19},
year = {2019},
isbn = {978-1-4503-6899-5},
location = {Hof, Germany},
pages = {31–33},
numpages = {3},
url = {},
doi = {10.1145/3345509.3349281},
acmid = {3349281},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {archiving, bibtex citing, citations, engelbart, future, glossary, hypertext, meta, metadata, ohs, pdf, rfc, text},
}
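Because the metadata is plain visible text in a BibTeX-style key = {value} form, any software can read it back out with ordinary text processing. The following is a minimal sketch of such extraction, not the official Visual-Meta parser; a real implementation would also need to locate the appendix delimiters in the PDF text, whereas this only pulls out flat fields:

```python
import re

def parse_visual_meta(text):
    """Extract flat BibTeX-style `key = {value}` pairs into a dict.

    Minimal illustrative sketch: it does not handle nested braces or
    locate the Visual-Meta appendix within a full PDF's text.
    """
    fields = {}
    for key, value in re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", text):
        fields[key] = value.strip()
    return fields

# A fragment of the appendix shown above, as extracted plain text.
appendix = """
author = {Hegland, Frode},
title = {Visual-Meta: An Approach to Surfacing Metadata},
year = {2019},
doi = {10.1145/3345509.3349281},
"""

meta = parse_visual_meta(appendix)
print(meta["author"])  # Hegland, Frode
```

Because the appendix survives as ordinary text through copy, paste and format conversion, this kind of extraction keeps working even when embedded PDF metadata would have been stripped.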


This means that the PDF viewer software knows the citation information of this document, so a reader can cite it with a simple copy and paste, and can change the view of the text because the reader software is aware of the document’s structure. And since this metadata is visible at the same level as the content of the document, it will not be stripped out as document formats change, and it will not interfere with viewers which are not Visual-Meta aware.

There are further Immediate User Benefits, different User Community Benefits, and Visual vs. Embedded Benefits.

Visual-Meta Unleashes Hypertextuality and advanced interactions such as Augmented Copying (copies with full citation information), References and Glossaries, as well as included information on how to parse tables, images and special interactions for graphs. This enables dynamic re-creation of interactions with sophisticated visualisations, which no longer need to be flattened when committed to PDF.

It is also Extensible for Computational Text, Rights Management and Provenance. Visual-Meta is Not A New Standard (it is BibTeX in a novel use, supported by JSON when that is useful); it builds seamlessly on the legacy PDF format by simply adding plain-text metadata in an appendix, and basic Visual-Meta Is Quick & Easy To Add To Legacy Documents.
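To sketch the “BibTeX supported by JSON” point: the same flat fields map directly to a JSON object, so tools that prefer JSON can carry identical information. The exact serialisation Visual-Meta uses is not specified here; this only shows that the mapping is trivial:

```python
import json

# The same fields as the BibTeX-style appendix above, as a plain dict.
fields = {
    "author": "Hegland, Frode",
    "title": "Visual-Meta: An Approach to Surfacing Metadata",
    "year": "2019",
    "doi": "10.1145/3345509.3349281",
}

# Serialise to JSON; round-tripping preserves every field.
as_json = json.dumps(fields, indent=2)
print(as_json)
```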


Visual-Meta has been implemented in the Augmented Text suite of software. Related presentations:

Presentation at the 2020 Summit of the Book

ACM Hypertext 2019 Visual-Meta Presentation

Future Text Initiative

The Visual-Meta approach is part of the Future Text Initiative which also includes the book The Future of Text and the Author, Reader and Liquid software projects.

Further Information

There is more Sample Visual-Meta available for reference, with explanations of different categories.

Further description is on the blog and at: Visual-Meta Example & Structure. Full source code for parsing Visual-Meta will be made available. Addressing is discussed separately.
To get involved, please feel free to contact the developer of Visual-Meta, Frode Hegland.

The Visual-Meta approach is very much inspired by Doug Engelbart’s notion of an xFile and his insistence that high-resolution addressability should be human readable. Here is a brief chat with Doug from the early 2010s, with more available online.