Current exploration: open scaffolding to spatially and textually explore interfaces https://git.benetou.fr/utopiah/text-code-xr-engine/

Exploring my triage and annotation process

To dive deeper, see the ~60min interview VR & Philosophy #04: A VR Wikipedia with Fabien Benetou.

I like to take notes. I like to take notes because I have ideas, a lot of them. Most of them are a bit stupid or not that original but, I hope, a few of them are quite nice. To unwind, I write those ideas down. I also write down notes about events, books, recipes, etc. That helps me organize my thoughts and, hopefully, have more and better ideas. To do that I have used:

  • paper notebooks, very flexible and mobile but impossible to search
  • random files, flexible but no synthesis
  • 1 text file with numbered lines to allow recursively referencing previous ideas; quickly gets messy
  • multiple text files, a mess
  • paper mindmaps and electronic mindmaps, amazing for synthesis but they don't scale so well

About 8 years ago I settled on an online wiki which now has about 1473 pages. In fact I have a network of wikis, a lot more than just 1 wiki, and some of them are even offline or not publicly accessible. I find it amazing because it lets me create new pages, link back to future and older pages, search, share with others, and embed multimedia content including slideshows and even the latest craze like 360 photos or 3D objects and animations. Unfortunately it is quite intangible. More importantly, the more it grows the more I feel the need to make it tangible.

I started to work on a 3D model of it to 3D print, but it didn't give any useful result. I started to print pages of it to put on my walls, but that doesn't work for 2 obvious reasons:

  1. 800 pages requires a LOT of walls,
  2. multiple pages are updated every day so reprinting isn't sustainable.

Thankfully, nearly 2 years ago I tried the Oculus and was blown away. My second reaction, after "Wow... that's amazing", was of course to wonder how I could apply this to my notes. I started to look at what existed and didn't find anything relevant. I then started to make several prototypes with the technology available at the time (Cardboard + threejs). Since then both the hardware and the software have evolved. My understanding also evolved thanks to failed and successful prototypes.

This brings us to late 2016. The latest prototype at the time worked with 2 or more networked HTC Vives and used A-Frame 0.3.1. It means 2 people could manipulate a set of notes together, in a shared virtual space.

Here are some visuals:

Graph visualization using D3 (partial)

Relying on D3's force layout, letting it converge, then adapting the resulting positions.
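
As an illustration, a minimal sketch of that approach, assuming a current D3 (v4+ force API) is loaded and using hypothetical `pages` and `wikiLinks` arrays as the dataset:

```js
// Minimal sketch: run the force layout to convergence, then map positions to the scene.
// `pages` and `wikiLinks` are hypothetical stand-ins for the wiki dataset.
const nodes = pages.map(p => ({id: p.name}));
const links = wikiLinks.map(l => ({source: l.from, target: l.to}));

const simulation = d3.forceSimulation(nodes)
  .force('link', d3.forceLink(links).id(d => d.id))
  .force('charge', d3.forceManyBody())
  .force('center', d3.forceCenter(0, 0))
  .stop(); // no animation, we only want the converged positions

for (let i = 0; i < 300; i++) simulation.tick();

// Adapt the resulting 2D positions to entities in the A-Frame scene.
const scene = document.querySelector('a-scene');
nodes.forEach(node => {
  const el = document.createElement('a-box');
  el.setAttribute('position', `${node.x / 100} ${node.y / 100 + 1.6} -2`);
  scene.appendChild(el);
});
```

Running the ticks synchronously avoids animating a layout whose intermediate states aren't meaningful in the scene.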

Testing aframe-htmlembed-component, which allows embedding relatively simple HTML but also CSS, images and SVG. Unfortunately most pages look too complex.
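
For reference, a minimal sketch of such an embed, assuming the component registers itself as `htmlembed` and its script is already included (the HTML content here is hypothetical):

```js
// Minimal sketch: render a simple HTML snippet as a texture in the scene.
const panel = document.createElement('a-entity');
panel.setAttribute('htmlembed', ''); // the component rasterizes the entity's HTML children
panel.innerHTML = `
  <div style="background: white; padding: 8px; width: 256px;">
    <h1>Wiki page</h1>
    <p>Simple HTML, CSS and SVG render fine; complex pages do not.</p>
  </div>`;
panel.setAttribute('position', '0 1.6 -1');
document.querySelector('a-scene').appendChild(panel);
```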

Conveniently, this is available on the entire wiki through a dedicated user action (appending ?action=xr to any page URL of this wiki).
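
To illustrate the URL scheme, a small hypothetical snippet jumping from the current page to its XR view:

```js
// Append the ?action=xr user action to the current wiki page URL.
const xrUrl = new URL(window.location.href);
xrUrl.searchParams.set('action', 'xr');
window.location.href = xrUrl.toString();
```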

Data visualization concept applied to this wiki

Managing with zones (inspired by the VRHackathonBXL External Mind prototype but also, a few months earlier, my Application.Valve)

See also 3D graph demo + ngraph.forcelayout3d
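
A minimal sketch of ngraph.forcelayout3d usage (Node-style requires as in the linked demo; the wiki page names are hypothetical):

```js
// Minimal sketch: 3D force-directed positions for wiki pages with ngraph.
const createGraph = require('ngraph.graph');
const createLayout = require('ngraph.forcelayout3d');

const graph = createGraph();
graph.addLink('HomePage', 'VirtualReality'); // hypothetical wiki links
graph.addLink('HomePage', 'Testing');

const layout = createLayout(graph);
for (let i = 0; i < 500; i++) {
  if (layout.step()) break; // step() returns true once the layout is stable
}

graph.forEachNode(node => {
  const {x, y, z} = layout.getNodePosition(node.id);
  console.log(node.id, x, y, z); // feed these into scene entity positions
});
```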

Initial threejs prototype (gaze only)

Networked Vive prototype

Visual metaphor

Draw on any page of this very wiki with custom brushes (cf. save()/loadFromUrl() for persistence).

Hubs explorations

  • sync selected PDFs to reMarkable
  • organize Github issues
  • load remote content

Your suggestions?

There is still a LOT to do. More precisely:

Status of the latest prototype:

Working

  1. grab objects
  2. save the state of elements and code across sessions
  3. graph based layout
  4. handle dead textures (with a fallback querying the server for live generation)
  5. link traversal
  6. periodic dataset refresh (hourly for partial updates and nightly for full ones)
  7. networking (using NetworkedAframe or Mozilla Hubs)
  8. loading datasets (e.g. a JSON file or via an API like Github issues)
  9. displaying part of dataset
  10. displaying part of used codebase
  11. replay past sessions (replay)
  12. display past painting (e.g. homepage)
  13. navigate in the local filesystem https://github.com/Utopiah/vrify
  14. in-VR page editing and saving with page re-rendering on request https://vatelier.net/MyDemo/WikiVREditor/
  15. D3 based Observable notebook
  16. apply level of detail (using e.g. mflux/aframe-lod) as a PoC (with a texture issue)
  17. Cytoscape-based visualization and graph analysis, including a headless mode that might be interesting in a worker (e.g. betweenness centrality)
  18. type text and edit text (pinch to move, keyboard to input, optionally virtual keyboard pinching letters)
  19. execute text as code (own JXR, based on A-Frame shortcuts and JavaScript; see the sketch after this list)
  20. federation (via ImmersSpace)
  21. desktop or container streaming
  22. executing in container (to expand beyond JavaScript to other languages)
  23. grouping text or code
  24. movable virtual keyboard
  25. WebDAV support
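
To make the execute-text-as-code item above more concrete, here is a minimal sketch of the idea; the component name and trigger event are hypothetical, and the actual JXR implementation in text-code-xr-engine differs:

```js
// Minimal sketch: evaluate the text displayed by an entity as JavaScript.
AFRAME.registerComponent('executable-text', {
  init: function () {
    // Assume some interaction (e.g. a pinch on the text) fires this event.
    this.el.addEventListener('executed', () => {
      const code = this.el.getAttribute('value'); // the displayed text is the source
      try {
        const result = new Function(code)(); // run it; results can feed back into the scene
        console.log('executed:', code, '->', result);
      } catch (err) {
        console.error('failed to execute:', err);
      }
    });
  }
});
```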

To do

  1. integration (perpetual challenge)
    1. see https://git.benetou.fr/utopiah/text-code-xr-engine/
  2. design a proper UI to improve efficiency (KPI to define)
  3. extending WebDAV support to handle MIME types (see the sketch below)
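
A minimal sketch of what the WebDAV to-do could build on, issuing a raw PROPFIND request (hypothetical URL; assumes authentication and CORS are handled elsewhere):

```js
// Minimal sketch: list WebDAV resources and read their MIME types.
async function listWebdav(url) {
  const res = await fetch(url, {method: 'PROPFIND', headers: {Depth: '1'}});
  const xml = new DOMParser().parseFromString(await res.text(), 'application/xml');
  // DAV:getcontenttype carries the MIME type, the part this to-do item is about.
  return Array.from(xml.getElementsByTagNameNS('DAV:', 'response')).map(r => ({
    href: r.getElementsByTagNameNS('DAV:', 'href')[0]?.textContent,
    type: r.getElementsByTagNameNS('DAV:', 'getcontenttype')[0]?.textContent || 'unknown',
  }));
}
```

Mapping those MIME types to the right in-scene representation (image, PDF, 3D model) would then be the interesting part.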

Overall, see https://git.benetou.fr/utopiah/text-code-xr-engine/issues

What's the added value of VR?

  • full focus
  • bringing back physicality to cognitive tasks
  • unlimited display space while still being room-scale

References

What it is not

This is also not about managing a desktop like Virtual Desktop or envelopVR. Those are nice tools to expand a classical computer desktop, but here the focus is on personal information management. Not how to start a game, how to watch a video or how to install a program, but rather how to sort your notes about the games, how you felt watching the video, and how maybe thinking and writing about both can help you discover what you truly like and eventually become able to make it yourself.

Note that this is not about memorization. I tried memorization and it doesn't work for the kind of creative tasks I'm interested in. In my opinion memorization is great for repetitive tasks for which a known optimal solution already exists. For discovering new solutions I believe understanding the structure of a problem or of a set of information is way more efficient, if not the only way. Consequently the goal of this project is not to provide a way to recollect information but rather to organize it visually and, through that process, to discover the underlying structure of the information, or at least a structure that allows efficiently accessing and using that set of information, thanks to manipulation and first-person spatial navigation. For memorization see instead my MemoryRecipe, which relied on an RSS feed to fight the forgetting curve.

Past work

The dataset

Why a crowd-funding campaign

  • order my thoughts and prototype
  • get feedback from others on what I have done
  • estimate if there is a need
  • estimate if that need can make focusing on it financially sustainable
  • get funding to pay for proper visual and interaction design

See also the proper Kickstarter preview. Consider that it might also be Patreon funding or... maybe no crowd-funding at all. This page is first and foremost a way to organize my thoughts by explaining to others. Asking for funding begs for *perfect* clarity, which makes it a good exercise.

Related wiki pages

See also