Visual-Meta {XR}

Spatial coordinates and relationships can be encoded.

Imagine opening a document in XR/VR/AR space and choosing to place the pictures from the document on the walls around you. You may choose to label the walls in your office, cafe, and home (for example) so that you can simply gesture for all the pictures to go onto a picture wall, or onto a section of a wall you have designated as a picture wall. When you then open the document in another location where you normally use your headset, the images can automatically appear on any wall or area with the same designation.

You can choose to move any aspect of your document manually to any location as well.

When you close the document, this data can be stored as a Visual-Meta {XR} appendix for use by your XR software the next time you don your headset, without interfering with the document remaining a traditional document that can be viewed and interacted with in any traditional software.
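
To illustrate how such an appendix might be produced on close, here is a minimal sketch in Python; the Placement record and build_xr_appendix function are hypothetical, and the field names follow the examples in the next section.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Placement:
    image_id: str                                   # ID from the image-encoding section
    category: str = "picture"
    assigned_location: str = "picture wall"         # designation or explicit coordinates
    order: Optional[str] = "default"
    alignment_xyz_degrees: Optional[Tuple[float, float, float]] = None

def build_xr_appendix(placements):
    """Render captured placements as Visual-Meta {XR} entries."""
    entries = []
    for p in placements:
        fields = [f"xr-category = {{{p.category}}}",
                  f"xr-assigned-location = {{{p.assigned_location}}}"]
        if p.order is not None:
            fields.append(f"xr-order = {{{p.order}}}")
        if p.alignment_xyz_degrees is not None:
            x, y, z = p.alignment_xyz_degrees
            fields.append(f"xr-alignment-xyz-degrees = {{{x:g},{y:g},{z:g}}}")
        entries.append("@entry{" + p.image_id + ",\n" + ",\n".join(fields) + "\n}")
    return "\n\n".join(entries)

print(build_xr_appendix([Placement("image-ID")]))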

The logic builds on the section on encoding images and their contents, using the IDs from those encodings to identify which element each piece of location information refers to:

Basic, using the system settings for an assigned picture wall:

@entry{image-ID,
xr-category = {picture},
xr-assigned-location = {picture wall},
xr-order = {default}
}
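
As a minimal sketch of this system-settings case, assuming a hypothetical store that maps a designation such as "picture wall" to an anchor position in the current space:

from typing import Dict, Optional, Tuple

Vector3 = Tuple[float, float, float]

def resolve_designated_area(entry: Dict[str, str],
                            designated_areas: Dict[str, Vector3]) -> Optional[Vector3]:
    """Return the anchor position for the entry's designation, if the current space has one."""
    return designated_areas.get(entry["xr-assigned-location"])

# In the office, "picture wall" might be one wall; at home, another wall carries
# the same designation, so the same entry resolves there too.
office = {"picture wall": (0.0, 150.0, -300.0)}   # hypothetical anchor, cm from centre of room
entry = {"xr-category": "picture",
         "xr-assigned-location": "picture wall",
         "xr-order": "default"}
print(resolve_designated_area(entry, office))     # (0.0, 150.0, -300.0)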

Freeform placement:

@entry{image-ID,
xr-category = {picture},
xr-assigned-location = {floor relationship in cm from centre of room: 200,130,201},
xr-alignment-xyz-degrees = {90,0,90}
}
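
A minimal sketch of reading these freeform fields back into a position (in cm from the centre of the room) and a rotation in degrees; the function name is hypothetical, and the field names match the entry above:

from typing import Dict, Tuple

def parse_freeform(entry: Dict[str, str]) -> Tuple[Tuple[float, ...], Tuple[float, ...]]:
    """Split the location and alignment fields into numeric position and rotation."""
    coords = entry["xr-assigned-location"].split(":")[-1]
    position = tuple(float(v) for v in coords.split(","))
    rotation = tuple(float(v) for v in
                     entry.get("xr-alignment-xyz-degrees", "0,0,0").split(","))
    return position, rotation

entry = {
    "xr-category": "picture",
    "xr-assigned-location": "floor relationship in cm from centre of room: 200,130,201",
    "xr-alignment-xyz-degrees": "90,0,90",
}
print(parse_freeform(entry))   # ((200.0, 130.0, 201.0), (90.0, 0.0, 90.0))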

Visual-Meta will also support USD (Universal Scene Description) spatial data in raw form.
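
For illustration only, such raw spatial data could be authored with the standard USD Python bindings (the pxr module from the usd-core package); how the resulting USD is carried inside Visual-Meta is not specified here.

from pxr import Gf, Usd, UsdGeom

# Sketch: express the freeform placement above as a USD transform.
stage = Usd.Stage.CreateInMemory()
xform = UsdGeom.Xform.Define(stage, "/Image")
xform.AddTranslateOp().Set(Gf.Vec3d(200, 130, 201))   # cm from centre of room
xform.AddRotateXYZOp().Set(Gf.Vec3f(90, 0, 90))       # degrees

print(stage.GetRootLayer().ExportToString())          # raw .usda text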