WoTWebXRGulliverFebruary2019, Saturday 9 February 2019 from 2pm to 6pm at Gulliver, Rennes, France

Own objectives

  1. meet the local FLOSS community
  2. help others to discover WebXR
  3. discover WoT

Attended activities

Hands-on discovery workshop using connected things in VR or AR on the web

Motivation

Heading back to my hometown I quickly wondered if I could showcase what virtual reality and augmented reality are all about. I tend to do that automatically now. I head somewhere and think "Could I help others understand what I'm so passionate about? Can I help them appropriate emerging technologies?". Unfortunately the decision was very last minute, so, having to find a space for the event, I reached out to the local community. Luckily for me not only did Philippe reach back but we had also met just a few days earlier at FOSDEM, the largest free and open source software event in Europe, and also the nicest in my opinion. We met as I was managing the JavaScript devroom there, giving a talk, High end augmented reality using Javascript, then welcoming Philippe to talk about how to Bring JavaScript to the Internet of Things, and of course coming to hang out in the Mozilla devroom.

During Philippe's talk I couldn't help but notice that his last sensor was located near my hometown in Bretagne, France, so we started chatting and I became very excited. Not only do we both work in free software but also on emerging technologies on the web! Quite naturally we started to think about how our two topics, the web of things and the immersive web, could work together. What follows is the result of those explorations at the intersection of our fields.

The web of things is, as the name very explicitly suggests (yes, being sarcastic here), an efficient way to bring any physical object to the web. For example you have a lamp 💡: you usually use the physical switch to turn it on or off. That works quite well, but what if you are sitting on your couch, munching chips, the movie is starting and the light is still on! This is catastrophic: you now have to stand up, walk there and turn it off. In 2019 this is simply unacceptable, us humans are not just walking apes anymore. No. What we modern humans deserve is telekinesis. Telekinesis gives you the ability to act on the world around you without moving a muscle! Now we are talking. But how? Well, imagine that your 💡 could have, just like you might, its own web page. For the sake of simplicity your lamp's web page would be https://mylamp.me (not a real link here, don't click, it's just an example). Now if you visit the lamp's web page you can see its status, e.g. ON or OFF. That's nice but you don't care about that now, you want to switch it OFF. Well, what if this web page also had a page dedicated to that, say https://mylamp.me/off? You could then just visit it with your phone and voila, your light is off, and you didn't stand up and walk! Who's the ape now?
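To make this a little more concrete, here is a minimal sketch of what such a lamp page could be, assuming Node.js; mylamp.me is imaginary and the actual hardware control is replaced by a variable:

// Minimal sketch of a web page for a lamp; a real lamp would switch
// a relay here instead of flipping a boolean.
const http = require('http');

let lampIsOn = true;

http.createServer((req, res) => {
  if (req.url === '/off') {
    lampIsOn = false; // telekinesis happens here
    res.end('Lamp is now OFF');
  } else if (req.url === '/on') {
    lampIsOn = true;
    res.end('Lamp is now ON');
  } else {
    res.end('Lamp is ' + (lampIsOn ? 'ON' : 'OFF'));
  }
}).listen(8080); // then visit http://localhost:8080/off from your couch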

How does it actually work? Well, it's actually relatively straightforward:

  1. take a computer with Internet-compatible hardware
  2. plug it into power and the Internet
  3. plug your lamp into the computer
  4. make a website for it
  5. connect to the website

Obviously steps 1, 2 and 5 are easy. Since you are reading this article you already know those steps. Steps 3 and 4 are harder. That's where the Mozilla Things Gateway comes in. If you are not an electronics expert and a web developer at the same time, it's all taken care of. Basically you can rely on a low cost and low consumption computer, the Raspberry Pi, install the operating system image, plug your hardware in (lamp and more), then connect to it via the web.

To be more specific here ... Philippe will explain ;)

Once your lamps but also your sensors are available locally or to the entire world, what you do not want is that annoying neighbor who always puts the trash out on the wrong day to control your lamp. Consequently, in order to make the process efficient yet safe, the Gateway can also take care of authentication by generating not only tokens but directly code snippets in multiple languages, including JavaScript. As a newcomer to the project this was very welcome: being able to copy/paste code that just worked, then build on top of it. This means being able to focus on actual usage (or here, just exploring novelty ;). At this point what we recommend is testing the simplest example: listing all the Things (your lamp 💡 is a Thing) plugged into your Gateway. Because yes, you don't need one computer per Thing, so the starting page will indeed list all connected Things. At this point you probably want to make sure that what is displayed on the nice web interface matches what you get programmatically, e.g. in your browser console, thanks to the generated code snippet.
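Here is roughly what such a snippet looks like, a sketch where the gateway URL and token are placeholders for the ones your own Gateway gives you:

// List all Things known to the Gateway; URL and token are placeholders.
var token = 'Bearer YOUR_TOKEN_HERE';
var baseURL = 'https://yourgateway.mozilla-iot.org/';

fetch(baseURL + 'things', {
  headers: {
    Accept: 'application/json',
    Authorization: token
  }
}).then(res => res.json())
  .then(things => {
    // each Thing description carries a name and its available properties
    things.forEach(thing => console.log(thing.name, Object.keys(thing.properties)));
  });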

Once you have this running it's just fun. As you can access the Gateway from code you can

  • list all Things, including the schema describing what you can do with them (e.g. turn on/off) and what information you can get back (e.g. the amount of light a sensor measures)
  • get information from a Thing (e.g. what's the current temperature)
  • activate a Thing (e.g. activate a Series 800 Terminator)
  • get the coordinates of a Thing on a floorplan (!)

and a lot more that I don't fully understand yet.
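For instance, activating a Thing boils down to a PUT request on one of its properties. A sketch, assuming a Thing exposing an on/off property named on, with token and baseURL as in the snippets around it:

// Turn a Thing on by writing its 'on' property; the Thing id here is the
// same illustrative one used in the read example below.
fetch(baseURL + 'things/http---localhost-58888-/properties/on', {
  method: 'PUT',
  headers: {
    Accept: 'application/json',
    'Content-Type': 'application/json',
    Authorization: token
  },
  body: JSON.stringify({ on: true })
}).then(res => res.json())
  .then(result => console.log(result)); // echoes the new property value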

Now for our little experiment: once we were able to programmatically get information (a GET request), we only had to bind those values to a visual change, e.g. changing the height of a cylinder or the color of a cube. This is made tremendously simple with a framework like A-Frame. It's basically as simple as writing HTML... so yes, by this point it's pretty clear that Philippe did the hardest part of the work (sorry Philippe, I had to confess). For example, to define that cube in A-Frame we just have to write <a-box>. That's it, it's really that simple. What we do then is change its color with <a-box color="#00ff00"> for example. The next step is replacing that fixed value with the information we get back from the sensor and voila, we connected the real world to the virtual world.
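If you have never seen A-Frame before, this is a complete scene, assuming A-Frame 0.9.0 which was current at the time:

<!-- A complete A-Frame scene: one green box floating in front of you -->
<html>
  <head>
    <script src="https://aframe.io/releases/0.9.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-box color="#00ff00" position="0 1.5 -2"></a-box>
    </a-scene>
  </body>
</html>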

Arguably this is the main part connecting WoT + XR:

var token = 'Bearer SOME_CODE_FOR_AUTH';
// token = '' // testing with the token not set
var baseURL = 'https://sosg.mozilla-iot.org/';
var debug = false; // set to true to display the responses in the console

AFRAME.registerComponent('iot-periodic-read-values', {
  init: function () {
    if (!token || token.length < 10) {
      console.warn('Gateway token unset. Visit your gateway Settings -> Developer -> Create local authorization');
      return;
    }
    // poll the Gateway at most every 500ms instead of on every frame
    this.tick = AFRAME.utils.throttleTick(this.tick, 500, this);
  },
  tick: function (t, dt) {
    if (!token || token.length < 10) { return; }
    // read the Color property of the Thing and apply it to this entity
    fetch(baseURL + 'things/http---localhost-58888-/properties/Color', {
      headers: {
        Accept: 'application/json',
        Authorization: token
      }
    }).then(res => {
      return res.json();
    }).then(properties => {
      if (debug) { console.log(properties); }
      this.el.setAttribute('color', properties.Color);
    });
  }
});
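To use it, attach the component to the entity whose color should follow the sensor:

<!-- the box color now follows the Color property read from the Gateway -->
<a-scene>
  <a-box iot-periodic-read-values position="0 1.5 -2"></a-box>
</a-scene>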

Obviously we could stop there. We probably should have, but we didn't. The same way we can get information with a simple fetch GET request, we can send a command with a fetch PUT request. For this we use <a-cursor> to allow for in-VR interaction. Once we look at an entity, like another cube, the cursor can then send an event. Under the hood the cursor is simply a raycaster; you can imagine the laser pointer used in your typical lecture. Once we catch that event we send our command to the Gateway. In our example, when we look at the green sphere we toggle the green LED, red sphere the red LED, and blue sphere the blue LED.
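Here is a sketch of that write path, reusing token and baseURL from above; the component name and the LED property name are our own, and the Thing id is the same illustrative one:

// When the cursor 'clicks' (gazes at) this entity, toggle one LED property.
AFRAME.registerComponent('iot-toggle-led', {
  schema: { property: { default: 'on' } }, // e.g. the green LED property
  init: function () {
    this.state = false;
    this.el.addEventListener('click', () => {
      this.state = !this.state;
      fetch(baseURL + 'things/http---localhost-58888-/properties/' + this.data.property, {
        method: 'PUT',
        headers: {
          Accept: 'application/json',
          'Content-Type': 'application/json',
          Authorization: token
        },
        body: JSON.stringify({ [this.data.property]: this.state })
      });
    });
  }
});

With an <a-cursor> inside the camera, gazing at <a-sphere color="green" iot-toggle-led></a-sphere> fires the click event that sends the command.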

This worked quite well on the LAN, but another feature of the Gateway is to work over the actual Internet, meaning that after I flew back to Brussels I could still, thanks to Philippe's generosity, access his Gateway remotely with a new authentication token. This is possible thanks to the Mozilla-specific domain mozilla-iot.org redirecting (or tunneling) to his Gateway. This allowed me to finish the last attempt I had failed on the day: connecting not just virtual reality to the real world but... also augmented reality.

Starting to mimic the Gateway interface

The session was fascinating. Being able to get live data from the real world extremely easily, but also to act back on it straight from the virtual world, opens a lot of doors. From crazy silly projects like ours, to artistic projects making us reconsider our perception of reality, but also, very pragmatically, high stakes high pace places like a hospital already filled with sensors or the modern production line. This workshop and the resulting videos with code are very simple starting points. Once you start working on a similar project please do get in touch, we'll help however we can.

Results

Inspiration


Overall remarks and conclusions

  • difficulty of keeping an entirely free stack the lower down we go
  • networking issues have to be sorted out (HTTP/HTTPS/CORS)


Other reviews or coverage

  • here

To do

  1. improve Template
  2. add map data (:ola-point lat= lon= text='':)

ContainsPersonalYoutubeContentToMigrate