Origami Text Song

(A song for the future of text)


[Verse 1]
First we learned to surf between the networks —
Signals riding open protocols,
Then TBL gave the world a web,
Linked us all with open thread.
The greatest gifts to human thought
Since the page, since the press, since the word was set.

Now we need to link what’s in them —
Connect the objects of what we know.
But even gifts this wide and deep
Have edges where the light falls short.
Knowledge trapped in frozen pages,
Bold text masquerading as a heading,
Citations dressed in formatting —
No machine can read the meaning.

HTML alive and breathing,
But it drifts and shifts and disappears,
Servers down, the stylesheet’s missing,
Your thinking — gone within the year.

[Pre-Chorus]
Neither one can carry
What you built while building what you wrote —
The map you made before the territory,
The structure underneath the notes.

[Chorus]
Fold it flat for transport,
Let it travel light,
Every layer still holds
When you unfold it right.
A zip file full of text files —
That’s the floor, not the ceiling.
Open standard, open future,
Origami feeling.

[Verse 2]
Not a revolution — a discipline,
Fewer elements, not more,
Semantic HTML inside a package,
EPUB with an open door.

Headings that know they’re headings,
Concepts marked as what they are,
BibTeX and CSL-JSON riding
Quietly beneath each star.

[Pre-Chorus]
The wing won’t lift you by itself —
It’s the speed of interaction.
Too slow and you’re a car on a runway,
Too rigid for the action.

[Chorus]
Fold it flat for transport,
Let it travel light,
Every layer still holds
When you unfold it right.
Apple Books, a text editor,
An AI, or XR —
Same file serves them all,
Near and far and far.

[Bridge]
Doug said think of skiing downhill —
The mountain doesn’t wait.
If you can’t interact fast enough
You’ll fall before you’re great.

So shape the document to interact,
Addressable and clean,
A single export carries
Every way it can be seen:

A colleague clicks, the citations import,
An AI reads the topology,
A student fifty years from now
Walks through your epistemology.

[Verse 3]
Vint called it self-contained self-awareness —
A document that knows its name,
No cloud, no database, no proprietary prayer,
It explains its own terrain.

Forty-seven million galaxies mapped
Across the universe we’ve charted —
Should we not map knowledge just as well
Here where thinking started?

[Verse 4]
The painter cares about the paint,
The sculptor feels the stone,
The photographer knows that print
Gives light a different tone.

The medium is not a minor thing —
Material has a material say
In the quality of what you make,
In what survives the day.

So why should knowledge workers settle
For formats blind to thought?
The medium we think in shapes
The thinking that gets caught.

[Final Chorus]
Fold it flat for transport,
Ready to unfold,
The thinking, not just text,
Is what the format holds.
Not to replace what exists —
But to let documents carry more
Of what their authors know,
Of what their authors know.

[Outro — spoken or sung softly]
A paper’s still a paper.
It still prints. It still reads.
But now it holds a knowledge space
Folded in its seams.

Revolutions seem impossible
Before they happen…
And inevitable
After they succeed.

[spoken]
The future of text lives in the future of the substrate…

Images

Images are included in Visual-Meta to enable the user to interact with them in useful ways, by letting the reader’s software understand aspects of what each image is.

Location. Images are numbered (img001, img002, and so on) in document order: starting with the first known image, then moving through the document from the top left of each page, left to right, then down. A reference to the page and to the image’s position relative to other images is also included. Together these form the image ID.
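The numbering rule above can be sketched as a sort. This is a hypothetical illustration, not part of the spec; it assumes each image is described by a page number and x/y coordinates measured from the top left:

```python
# Hypothetical sketch: assign Visual-Meta image IDs (img001, img002, ...)
# by sorting images page by page, then top to bottom, then left to right,
# matching the numbering rule described above.
def assign_image_ids(images):
    """images: list of dicts with 'page', 'x', 'y' measured from top left."""
    ordered = sorted(images, key=lambda im: (im["page"], im["y"], im["x"]))
    for n, im in enumerate(ordered, start=1):
        im["ID"] = f"img{n:03d}"
    return ordered

imgs = assign_image_ids([
    {"page": 2, "x": 10, "y": 40},
    {"page": 1, "x": 300, "y": 100},
    {"page": 1, "x": 20, "y": 100},
])  # imgs[0] is the page-1 image nearest the top left
```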

Cite. An image is a citable unit of information and as such is addressable and indicates where it is from, using the same format as any citation.

Type. The image can be any of the following types, among others (how this is parsed depends on the reading software):

  • Image (default)
  • Photograph (this can include EXIF data)
  • Mural (this can indicate expected full dimensions)
  • Map (this can indicate location in text: such as location = {London})
  • Graph (this can indicate variables used)
  • Diagram (this can indicate variables used)
  • Imagemap (if so, image map information follows, using standard W3C image map markup; the example below is taken from the W3C)

Example

@{images-start}

ID = {img001},
page = {12},
pagelocation = {1},

author = {Munch, Edvard},
title = {The Scream},
year = {1893},
url = {http://www.imageorigin.com},

type = {imagemap},

{<map name="workmap">
  <area shape="rect" coords="34,44,270,350" alt="Computer" href="computer.htm">
  <area shape="rect" coords="290,172,333,250" alt="Phone" href="phone.htm">
  <area shape="circle" coords="337,300,44" alt="Coffee" href="coffee.htm">
</map>},

@{images-end}
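A minimal sketch of how reading software might consume a block like the one above. This is a hypothetical illustration (the parsing strategy is not specified by Visual-Meta itself), and it only handles flat `key = {value},` pairs:

```python
import re

# Hypothetical sketch: read simple `key = {value},` pairs out of a
# Visual-Meta images block. Multi-line payloads such as the image map
# HTML would need a brace-matching pass instead of this flat regex.
def parse_image_block(text):
    body = text.split("@{images-start}")[1].split("@{images-end}")[0]
    return dict(re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", body))

block = """
@{images-start}
ID = {img001},
page = {12},
title = {The Scream},
year = {1893},
@{images-end}
"""
record = parse_image_block(block)  # e.g. record["ID"] == "img001"
```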

@{glossary-start}

This is an example of an event glossary entry, for use in a timeline, for instance:

@entry{defined-term-same-as-name-below,
name = {first name of entry},
alt-name1 = {second version name of entry},
alt-name2 = {third version name of entry},
description = {rectangular array of values}, (plain text definition description)
cite = {transitory id/label from the citation in the References section},
cite2 = {transitory id/label from the citation in the References section}, (if more than one)

category = {event}, (determines what options will be useful next and it is expected that the authoring software shows different possible fields depending on which category is chosen)
eventtype = {invention}, (as above, further describes what this is)

title = {Carbon Paper},
subject = {}, (same as title unless filled in)
purpose = {making paper copies cheaper and less error prone},
availablealternatives = {hand copying, Jefferson system of mechanical sticks},
altvocabulary = {alternative vocabulary for specific terms, like this: paper:substrate},
cost = {cost to produce in $},

year = {1801}, (can be ‘none’, meaning this is not a historical event but one which can take place at any time, according to the data below)
yearprobability = {}, (can contain ‘ca.’ to indicate ‘circa’, among other qualifiers)
month = {November},
date = {13},
time = {12:01:20},
duration = {},
fuzziness = {1 year}, (hard to define precisely: it describes how the temporal boundary should fade when shown visually)
infuture = {}, (if yes, this is in the future of when this entry was created)
useduntil = {1970s},

byfirstname = {Pellegrino}, (this section can be replaced by reference to entry if this person has a Glossary entry)
bymiddlename = {},
bylastname = {Turri},
byprefix = {},
bypostfix = {},
bytitle = {},
byalternative = {},
byformerly = {},

issoleinventor = {yes},
iscollaborator = {},
isorganisation = {}, (if ‘yes’ this is an organisation; an entry can have both, and more than one person)

onlinesource = {URL of Wikipedia or better},
onlineobject-representation = {link to image or 3D model},

originaldialogsource = {link to audio/video/text source of how this entry came about},

entrycreatedby = {}, (author of this document if blank; otherwise another named person or entity. URL or citation if copied and pasted)

@{glossary-end}

The data above should make it possible to present a sentence such as:

1801 Carbon Paper by Pellegrino Turri
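A minimal sketch of how that sentence could be assembled from the entry fields. This is a hypothetical illustration; the field names are taken from the example above, and the rendering logic is not prescribed by Visual-Meta:

```python
# Hypothetical sketch: assemble the timeline sentence above from a
# subset of the glossary-entry fields (field names from the example).
def timeline_sentence(entry):
    parts = [entry["year"], entry["title"]]
    name = " ".join(
        entry[k] for k in ("byfirstname", "bylastname") if entry.get(k)
    )
    if name:
        parts.append("by " + name)
    return " ".join(parts)

sentence = timeline_sentence({
    "year": "1801",
    "title": "Carbon Paper",
    "byfirstname": "Pellegrino",
    "bylastname": "Turri",
})  # → "1801 Carbon Paper by Pellegrino Turri"
```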

Image Map

@{image-map-start}

map-name1 = {full name of image map},
map-page = {page the image map is on},
map-location-(%-of-page-on-x-axis-from-top-left) = {10},
map-location-(%-of-page-on-y-axis-from-top-left) = {200},
map-size-pixels-x = {300},
map-size-pixels-y = {300},
category = {optional category for entry},
html = {
<map name="workmap">
  <area shape="rect" coords="34,44,270,350" alt="Computer" href="computer.htm">
  <area shape="rect" coords="290,172,333,250" alt="Phone" href="phone.htm">
  <area shape="circle" coords="337,300,44" alt="Coffee" href="coffee.htm">
</map>
}
}

@{image-map-end}

Graph

@{graph-start}

graph-name1 = {full name of graph},
graph-page = {page the graph is on},
graph-location-(%-of-page-on-x-axis-from-top-left) = {10},
graph-location-(%-of-page-on-y-axis-from-top-left) = {200},
graph-size-pixels-x = {300},
graph-size-pixels-y = {300},
category = {optional category for entry},
html = {
<canvas id="barChartLoc" height="300" width="300"></canvas>
<script src="js/Chart.min.js"></script>
<script>
var barChartLocData = {
  labels: ["January", "February", "March"],
  datasets: [{ fillColor: "lightblue", strokeColor: "blue", data: [15, 20, 35] }]
};
var mybarChartLoc = new Chart(document.getElementById("barChartLoc").getContext("2d")).Bar(barChartLocData);
</script>
}
}

@{graph-end}

The data in this example is taken from https://www.c-sharpcorner.com/UploadFile/1e050f/draw-charts-in-websites-using-chart-js/

The main point is that this method allows raw HTML to be inserted simply by labelling it as such.
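Because the HTML payload can span many lines and contain brace-bearing JavaScript, a flat regex is not enough to recover it. A hypothetical brace-counting sketch (not part of the spec):

```python
# Hypothetical sketch: recover a raw HTML payload labelled `html = {...}`
# by counting braces, since the payload can span many lines and may
# itself contain balanced braces (e.g. JavaScript object literals).
def extract_html(block):
    start = block.index("html = {") + len("html = {")
    depth, i = 1, start
    while depth:  # walk forward to the matching closing brace
        if block[i] == "{":
            depth += 1
        elif block[i] == "}":
            depth -= 1
        i += 1
    return block[start:i - 1].strip()

payload = extract_html(
    'map-name1 = {demo},\nhtml = {\n<map name="workmap">\n</map>\n}\n'
)
```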

@{glossary-start}

The Glossary goes by different names in authoring software, depending on who designed it. For example, in our proof of concept ‘Author’ for macOS we call this ‘Defined Concepts’, and all the defined concepts are exported as a ‘Glossary’ in Visual-Meta.

The glossary entry can be as simple as a term and a description/definition, but the author can also assign categories to further clarify what the term is and how it relates to other information, to allow for flexible views.

@entry{defined-term-same-as-name-below,
name = {first name of entry},
alt-name1 = {second version name of entry},
alt-name2 = {third version name of entry},
description = {rectangular array of values}, (plain text definition description)
cite = {transitory id/label from the citation in the References section},
cite2 = {transitory id/label from the citation in the References section}, (if more than one)

Everything below is optional:

category = {}, (determines what options will be useful next and it is expected that the authoring software shows different possible fields depending on which category is chosen)

If Category is Concept
If Category is Event (category with sample fields)
If Category is Person
If Category is Organisation
If Category is Location
If Category is Question
If Category is Possible Answer
If Category is Tool
If Category is Process
If Category is Resource
If Category is Custom

@{glossary-end}

Why not simply embed the metadata in the PDF?

The first answer to that question is that the industry simply doesn’t do it. Furthermore, it is a little technically tricky, which is why few bother to do it.

Having the metadata embedded means it is not as robust as having it on the same level as the document ‘contents’ itself: as long as you don’t lose the contents, you won’t lose the metadata, even if the reading software and standards change, or even if you print the document.

Perhaps the most important reason for the Visual-Meta approach is that it is completely extensible: just specify what you will include and how you will include it, and you can include any data.
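As a hypothetical illustration of that extensibility, any mapping can be serialised as a custom Visual-Meta section by naming the section and following the same `key = {value},` convention. The section name "weather" and its fields are invented for this sketch:

```python
# Hypothetical sketch of the extensibility claim: any mapping can become
# a custom Visual-Meta section, provided the section names itself and
# follows the same `key = {value},` convention used throughout.
def to_visual_meta(section, fields):
    lines = [f"@{{{section}-start}}"]
    lines += [f"{k} = {{{v}}}," for k, v in fields.items()]
    lines.append(f"@{{{section}-end}}")
    return "\n".join(lines)

block = to_visual_meta("weather", {"station": "EGHI", "tempC": "18"})
```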

This is a Future of Text Initiative

The Visual-Meta approach is part of the Future Text Initiative, which also includes the book & symposium The Future of Text. Our community is at future-of-text.circle.so, which you are welcome to visit and perhaps even join, at no cost. In particular, you may want to watch one of the recorded presentations on Visual-Meta with co-inventor of the Internet Vint Cerf and founder of the Modern Library of Alexandria, Ismail Serageldin. You can also read more about the benefits of this approach and what metadata the system is designed to handle.

Steering Group

Frode Hegland, Jacob Hazelgrove, Vint Cerf, Ismail Serageldin, David De Roure, Pip Willcox, Mark Anderson, Jakob Voß, Christopher Gutteridge, Adam Wern, Peter Wasilko, Rafael Nepô, Adam Laidlaw, Günter Khyo, Gyuri Lajos & Stephan Kreutzer. University of Southampton: Dame Wendy Hall, Les Carr and David Millard.

To get involved, please feel free to contact the developer of Visual-Meta, Frode Hegland: frode@hegland.com

How legacy safe is Visual-Meta?

This approach does not violate any PDF standard, since it is just text at the end of the document; PDF documents with Visual-Meta can therefore be opened by any PDF viewer.

It is also legacy-safe for the future, because this approach to storing metadata is highly robust: as long as the content of the document is available, the metadata will be available too, even to the point of printing the document, scanning it, and performing OCR on it.

Furthermore, the Visual-Meta itself contains plain-language instructions for how to implement it, which will allow any developer to integrate Visual-Meta import or export now, and in hundreds or thousands of years. This is far and away the most robust way to store rich digital text metadata currently available.

Data which exists only in proprietary formats is more likely to become unreadable, since there is less of it and thus less reason for future developers to support access to it. Data which is accessible through the web relies on the continued payment of domain name and server costs. Data which is contained in a widely shareable, open format such as PDF, on the same level as the ‘contents’, and which connects to sources using the citation method (specifying the bibliographic details of a source so that it can be located and used from any location, like a traditional printed journal, rather than only from a web-addressed repository) makes for a robust, long-term solution for publishing and sharing our knowledge.