Visual-Meta is a method of including metadata about a document and its contents visibly in the document, in a human- and machine-readable Visual-Meta Appendix, on the same visual level as the content rather than hidden in the datafile. This may sound dry, but instead of trying to invent a new document format substrate to unleash the potential of richly interactive digital text, this approach takes normal text and makes it magic*. The ‘normal’ part is crucial, so let’s first look at how metadata is ‘imprinted’ in a paper book:

The Traditional Book

A traditional book features a page inside the book, before the main text, with ‘meta’ data ‘about’ the book, including the author’s name, the title of the book, imprint information and so on:

Copyright © 2010 Alberto Manguel. All rights reserved. This book may not be reproduced, in whole or in part, including illustrations, in any form (beyond that copying permitted by Sections 107 and 108 of the U.S. Copyright Law and except by reviewers for the public press), without written permission from the publishers.
Designed by Sonia Shannon
Set in Fournier type by Tseng Information Systems, Inc.
Printed in the United States of America.
Library of Congress Cataloging-in-Publication Data
Manguel, Alberto.
A reader on reading / Alberto Manguel.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-300-15982-0 (alk. paper)
1. Books and reading. 2. Manguel, Alberto—Books and reading. I. Title.
z1003.M2925 2010
028’.9—dc22
2009043719
A catalogue record for this book is available from the British Library.
This paper meets the requirements of ANSI/NISO z39.48-1992 (Permanence of Paper).
10 9 8 7 6 5 4 3 2 1


Visual-Meta puts this metadata into an Appendix at the back of the document instead of at the front (to make it less obtrusive). It contains citation metadata and can also contain addressing, interaction and formatting information. A PDF viewer can then use this to make the normal PDF text in the document magic*:

@{visual-meta-start}
author = {Hegland, Frode},
title = {Visual-Meta: An Approach to Surfacing Metadata},
booktitle = {Proceedings of the 2Nd International Workshop on Human Factors in Hypertext},
series = {HUMAN ’19},
year = {2019},
isbn = {978-1-4503-6899-5},
location = {Hof, Germany},
pages = {31–33},
numpages = {3},
url = {},
doi = {10.1145/3345509.3349281},
acmid = {3349281},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {archiving, bibtex citing, citations, engelbart, future, glossary, hypertext, meta, metadata, ohs, pdf, rfc, text},
}
@{visual-meta-end}
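Because the appendix is plain text between two fixed markers, any software (or person) can read it without special PDF tooling. As a minimal sketch, here is one way a reader application might pull the fields out of a document's extracted text; this is an illustration, not the reference parser, and it deliberately ignores nested braces and other BibTeX edge cases:

```python
import re

VM_START = "@{visual-meta-start}"
VM_END = "@{visual-meta-end}"

def parse_visual_meta(document_text: str) -> dict:
    """Find the Visual-Meta block at the tail of a document's plain text
    and parse its BibTeX-style `key = {value}` fields into a dict."""
    start = document_text.rfind(VM_START)
    end = document_text.rfind(VM_END)
    if start == -1 or end == -1 or end < start:
        return {}  # no Visual-Meta appendix found
    block = document_text[start + len(VM_START):end]
    # Simplified field matching: key = {value}, no nested braces
    fields = re.findall(r"(\w+)\s*=\s*\{([^}]*)\}", block)
    return dict(fields)

doc = (
    "...the article text...\n"
    "@{visual-meta-start} author = {Hegland, Frode}, "
    "title = {Visual-Meta}, year = {2019}, @{visual-meta-end}"
)
meta = parse_visual_meta(doc)
```

Searching from the end of the text (`rfind`) reflects the convention that the appendix sits at the back of the document.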


Benefits

There are Immediate User Benefits, User Community Benefits and Visual vs. Embedded Benefits. Visual-Meta unleashes hypertextuality and advanced interactions such as Augmented Copying (copied text carries full citation information), References and Glossaries, and it can include information for how to parse tables, images and special interactions for graphs. This enables dynamic re-creation of interactions with sophisticated visualisations, which no longer need to be flattened when committed to PDF. It is also extensible for Computational Text, Rights Management and Provenance. Visual-Meta is not a new standard: it is BibTeX in a new use, supported by JSON when that is useful. It builds seamlessly on the legacy PDF format by simply adding plain-text metadata in an appendix, and it is quick and easy to add to legacy documents.

Implementation

Visual-Meta has been implemented in the Augmented Text suite of software. See the presentation at the 2020 Summit of the Book; the second video down on the page is the panel discussion with Vint Cerf and Ismail Serageldin.

ACM Hypertext 2019 Visual-Meta Presentation

Future Text Initiative

The Visual-Meta approach is part of the Future Text Initiative which also includes the book The Future of Text and the Author, Reader and Liquid software projects.

Further Information

There is more Sample Visual-Meta available for reference, with explanations of the different categories. Further description is on the blog and at Visible-Meta Example & Structure. Full source code for parsing Visual-Meta will be made available here. Addressing is discussed separately. To get involved, please feel free to contact Frode Hegland. The Visual-Meta approach is very much inspired by Doug Engelbart’s notion of an xFile and his insistence that high-resolution addressability should be human readable. Here is a brief chat with Doug from the early 2010s, with more available on  

* Visual-Meta takes normal text and makes it magic.


Any sufficiently advanced technology is indistinguishable from magic.


Arthur C. Clarke, Profiles of the Future