Optional Post Visual-Meta Appendices

The following optional Visual-Meta appendices are added after the main, required Visual-Meta appendix and can therefore be removed or re-appended easily. This means that errata can be attached without amending the original document, and that annotations and VR/AR position data can travel with the document.

Mechanics for the issuing of errata pages. (Future development.)

Mechanics for presenting reader annotations in an easily accessed format. (Future development.)

Mechanics for recording VR/AR/Metaverse positions of elements in the document. (Future development.)


This appendix contains the spatial coordinates, orientations and appearance of elements in the document, including References, Glossary terms, the Map of Glossary terms, the Outline/Table of Contents, Images, Graphs/Charts/’Murals’ and so on.

It also contains tags for the user’s preferences as to which portals different elements should appear in, such as a preference that the References should always appear on a virtual bookshelf or that Glossary terms should appear on the user’s cork board. Here ‘tags’ refer to ‘portals’ in the sense of specified locations in the user’s VR environment.

Embedded interactions the user has added while in VR, such as touching one element having a programmatic effect, can also be included.
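Taken together, the three kinds of information above (spatial placement, portal preferences and embedded interactions) might be encoded along the following lines. This is purely a sketch: the `@{vr-meta-start}`/`@{vr-meta-end}` markers and every field name shown here are hypothetical illustrations, not part of any published Visual-Meta specification.

```
@{vr-meta-start}
element = {references},
position = {x: 1.2, y: 0.4, z: -2.0},
orientation = {yaw: 90, pitch: 0, roll: 0},
portal = {bookshelf},
interaction = {on-touch: open-citation},
@{vr-meta-end}
```

Because such an entry is ordinary text appended to the document, it can be read, kept or deleted like any other appendix.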

Open and Portable. As a result, the spatial information for the document in a VR space can be encoded so that, when the document is next viewed in VR, all the elements can be restored to their original spatial layout, no matter who the VR environment vendor/producer is, as long as the environment knows how to read Visual-Meta.
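As a minimal sketch of how vendor-neutral such a reader could be, the following Python recovers a layout from a hypothetical plain-text appendix with `@{vr-meta-start}`/`@{vr-meta-end}` markers and `key = {value}` lines. None of these markers or field names come from the Visual-Meta specification; they are assumptions for illustration only.

```python
# Hypothetical VR appendix, stored as plain, human-readable text.
# Markers and field names are illustrative assumptions.
APPENDIX = """\
@{vr-meta-start}
element = {references}
position = {x: 1.2, y: 0.4, z: -2.0}
portal = {bookshelf}
@{vr-meta-end}
"""

def parse_vr_appendix(text):
    """Return a dict of the fields found between the start/end markers."""
    fields = {}
    inside = False
    for line in text.splitlines():
        line = line.strip()
        if line == "@{vr-meta-start}":
            inside = True
        elif line == "@{vr-meta-end}":
            inside = False
        elif inside and "=" in line:
            key, _, value = line.partition("=")
            fields[key.strip()] = value.strip().strip("{}")
    return fields

layout = parse_vr_appendix(APPENDIX)
print(layout["portal"])  # -> bookshelf
```

Any environment able to perform this kind of plain-text parsing can restore the layout, which is the sense in which the information is open and portable.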

Live Documents (Word/Pages/Docs). This appendix applies not only to exported/published documents such as PDFs, but also to ‘live’, editable documents such as .pages and .docx files, where the appendix is normal, editable text which the user can keep or delete, allowing seamless transitions between working in a traditional environment and in VR.

Robust. As with Visual-Meta in general, this approach makes the spatial information open, since it sits at the same level as the document content and can therefore be read by any person or system. It also makes the information very robust, to the point that the document can be printed and later scanned without any metadata being lost.

Experimental. This appendix is experimental and is being worked on by the Future Text Lab.