I like to take notes and sketch new ideas. It helps me to organize my thoughts and, hopefully, have more and better ideas.



This wiki holds those notes, but manipulating them and their structure is not as natural as post-its. What if we could do it in VR? What if I could invite you in?



Relying on Hubs, I tried the following explorations.

All the following explorations have their source code available in their description. Following the Principle of sandboxed explorations, they are all independent from each other. There is currently no complete integration of them all in one system. I plan to do so for the most interesting ones related to knowledge management, i.e. sketching from the reMarkable (ideation), sorting GitHub issues of the project itself (planning), triaging papers to sync PDFs to the reMarkable (research) and finally changing the wiki's own structure (groups, tags, etc).




To explore

  • real world mapping
    • using a specific set of coordinates, a constant zoom level and thus area
      • each set has a single URL that is a Hubs room connected to 4 other rooms
        • buildings and roads get extruded as a scaled glTF, ideally as a scene that can't be moved
      • e.g. https://www.openstreetmap.org/#map=19/50.84250/4.38919
        • latitude 50.84250, longitude 4.38919, zoom level 19 = ~300m²
  • shared 1-page "browser"
    • render a very long page (e.g. ArcGIS Story) to an image
    • transform that image to a video,
      • using convert to crop and ffmpeg to assemble it,
    • then make interactable buttons you "stand" on to scroll by seeking in that video (see the sketch after this list)
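
For the scrolling step, a minimal sketch, assuming the long page was already converted to a video and added to the room, and that the underlying HTMLVideoElement is reachable through the media-video component (the selector and the fraction value are assumptions):

    // seek the "page" video to a given scroll position, 0 = top of the page, 1 = bottom
    function scrollPageTo(fraction) {
      const el = document.querySelector("[media-video]") // assuming the page is the only video in the room
      const video = el.components["media-video"].video   // underlying HTMLVideoElement
      video.currentTime = fraction * video.duration
    }
    scrollPageTo(0.5) // e.g. the button you "stand" on in the middle jumps to the middle of the page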

From personal use

My collection of utility functions for Hubs is available as a Gist, a WebExtension and an optional module on my own Hubs Cloud instance.

  • clone the pinned objects of a room thanks to its objects.gltf endpoint (see the first sketch after this list)
  • to efficiently test on a standalone headset, assuming adb and remote debugging are allowed
    • start the browser with adb shell monkey -p 'com.igalia.wolvic' -v 1
    • activate remote debugging, then load the Hubs URL from the desktop, appending ?vr_entry_type=vr_now
      • note that the microphone prompt might still be needed
    • use scrcpy to visualize the result from the HMD
  • raise the maximum number of media per room (default 20) with document.getElementById("media-counter").setAttribute("networked-counter", "max", 100)
  • to get a usable stream from a PeerTube instance, get the .m3u8 playlist URL
  • for faster testing use ?vr_entry_type=2d_now on a room URL to enter directly in 2D mode, skipping some (but not necessarily all) steps
  • Hubs Cloud
  • scripting from the console works
    • document.querySelector("#avatar-rig").object3D.position.x += 1 translates your avatar 1m along the X axis, and other connected users can see it
    • by default position, rotation and scale are networked. Everything else should be double-checked against NAF schemas
      • objects from Spoke are not networked; they are unpacked as a tree of three.js Object3D that can be traversed, but modifications will only be local
  • NAF schemas on what components get effectively networked are available via NAF.schemas.schemaDict
  • templates of NAF schemas can be modified via NAF.schemas.add() (see the sketch after this list)
  • window.APP.hubChannel.sendMessage("Hello world!") to send a chat message
  • entities with [media-loader] can be queried via el.components["media-loader"].attrValue.src.match(hash) (see the sketch after this list)
  • entities with [networked-avatar] can be queried via el.components["player-info"].displayName.trim()
  • adding a media object (video, glTF, mp3, jpg, etc) can be done by adding a new entity to the scene then correctly setting 2 attributes (see the sketch after this list)
    • to load the media itself el.setAttribute("media-loader", { src: url, fitToBox: true, resolve: true })
    • to be visible and interactable by all el.setAttribute("networked", { template: "#interactable-media" } )
    • the media object will get a partial hash (non-unique) that can be used to find it back in the scene.
      • if the same URL is used to upload an object it will not be updated; appending an anchor with a different timestamp will force re-uploading it to the server
  • networked animation can be achieved via AFRAME.ANIME.default.timeline() (see the sketch after this list)
    • using e.g. {targets: star.object3D.position, autoplay: false} then adding animations
    • this also works on #avatar-rig, allowing to smoothly move the camera, and on #avatar-pov-node to rotate it
      • using the camera mode allows removing the interface
  • to move an arbitrary entity it is important to own it but also to stop potential physics movement
    • NAF.utils.getNetworkedEntity(ball).then(networkedEl => { NAF.utils.takeOwnership(networkedEl); networkedEl.components["set-unowned-body-kinematic"].setBodyKinematic(); /* do things */ })
    • ownership can be tested via NAF.utils.isMine(networkedEl)
  • scene objects from Spoke can be found in Hubs via their three.js Object3D name (see the sketch after this list)
    • useful to dynamically position objects in a scene relative to static objects
  • the avatar can be changed by id using window.APP.store.update({ profile: { ...(window.APP.store.state.profile), avatarId } });
    • for testing, a random avatar can be applied with window.APP.store.resetToRandomDefaultAvatar()
  • entities with [media-loader] can get a new source (of the same type) using el.setAttribute("media-loader", "src", "https://domain.tld/newurl")
  • to add Earth-like gravity to an object el.setAttribute("body-helper", { type: "dynamic", gravity: { x: 0, y:-9.8, z: 0 } })
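
A few sketches for the items above follow. First, cloning the pinned objects of a room; a minimal sketch assuming the room serves its pinned media at roomUrl + "/objects.gltf" and that each node stores its media URL in a HUBS_components extension (the roomUrl value and the extension path are assumptions):

    // fetch the pinned objects of a room and re-add each one as a networked media object
    const roomUrl = "https://hubs.example.org/AbCdEfG" // hypothetical room URL
    fetch(roomUrl + "/objects.gltf")
      .then(response => response.json())
      .then(gltf => {
        for (const node of gltf.nodes) {
          const src = node.extensions?.HUBS_components?.media?.src // assumed location of the media URL
          if (!src) continue
          const clone = document.createElement("a-entity")
          document.querySelector("a-scene").appendChild(clone)
          clone.setAttribute("media-loader", { src, fitToBox: true, resolve: true })
          clone.setAttribute("networked", { template: "#interactable-media" })
        }
      })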
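
Modifying a NAF schema, for example to network one more component on interactable media; the components list below is an assumption, check NAF.schemas.schemaDict for the actual current one:

    // overwrite the schema of a template so that media-loader is networked too
    NAF.schemas.add({
      template: "#interactable-media",
      components: ["position", "rotation", "scale", "media-loader"]
    })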
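
Querying entities as mentioned above means iterating over all candidates (the hash and name values are placeholders):

    // find media entities whose source URL matches a partial (non-unique) hash
    const hash = "f6b9c098" // hypothetical URL fragment to look for
    const medias = Array.from(document.querySelectorAll("[media-loader]"))
      .filter(el => el.components["media-loader"].attrValue.src.match(hash))

    // find an avatar entity by display name
    const name = "visitor" // hypothetical display name
    const avatar = Array.from(document.querySelectorAll("[networked-avatar]"))
      .find(el => el.components["player-info"].displayName.trim() === name)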
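
Adding a media object, putting the two attributes above together (the URL is a placeholder):

    // add a media object, visible and interactable by all connected users
    const mediaEl = document.createElement("a-entity")
    document.querySelector("a-scene").appendChild(mediaEl)
    mediaEl.setAttribute("media-loader", { src: "https://domain.tld/picture.jpg", fitToBox: true, resolve: true })
    mediaEl.setAttribute("networked", { template: "#interactable-media" })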
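
A networked animation sketch, assuming the target is a networked entity you already own (see the ownership snippet above); the selector is a placeholder:

    // smoothly move an entity up then down; position is networked so others see the motion
    const star = document.querySelector("[media-loader]") // hypothetical target entity
    const timeline = AFRAME.ANIME.default.timeline({
      targets: star.object3D.position,
      autoplay: false,
      easing: "linear"
    })
    timeline.add({ y: 2, duration: 1000 }) // rise 2m over 1s
    timeline.add({ y: 0, duration: 1000 }) // come back down
    timeline.play()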
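
Finally, finding a static Spoke object by its three.js name to position another entity relative to it (the "anchor" name and the entity to move are assumptions):

    // look up a static object placed in Spoke, then move a media entity onto it
    const anchor = document.querySelector("a-scene").object3D.getObjectByName("anchor") // name set in Spoke
    const target = document.querySelector("[media-loader]") // hypothetical entity to move, own it first (see above)
    target.object3D.position.copy(anchor.getWorldPosition(new THREE.Vector3()))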

The code snippet search on the Gist is experimental. Parsing for hashes and URL parameters is only partially handled.

Social VR and home automation

Layouts management

Control scene camera from Twitch chat

dat.gui example

Sketching from a physical tablet

See also 3D graph demo + ngraph.forcelayout3d

Director kit to record sessions

Blender model reloading

Github board manipulation

See also 3D graph demo + ngraph.forcelayout3d

BT glasses with acc/gyro control

Video camera as chest plate

Research paper triage process

ebook reader syncing PDF

Collaborative game design

Other explorations

See also


Note

My notes on Tools gather what I know or want to know. Consequently they are not and will never be complete references. For this, official manuals and online communities provide much better answers.