A Newbie’s Experience in the WebXR Weekend Challenge 2018

Do you know anything beyond how to make a basic circle and rectangle in VR?

I sure as hell didn’t, at least not on June 29, when I decided it was an excellent idea to join the WebXR Weekend Challenge at the Microsoft Reactor!

I was mentally freaking out and wondering what I was doing. If you read my Twitter posts, you would know that I love VR – and know next to nothing about the actual coding part. My last VR-code-related tweet was a floating ball I created by following the A-Frame tutorial.

Once I sat down, though, it wasn’t so bad. The lady I first met turned out to be a mentor from Microsoft – they had a VR team come all the way from Seattle for this event! Microsoft really went all in. They provided the space with office snacks and drinks, a team of WebVR-specialized mentors, a draw for a Windows Mixed Reality headset prize, and another chance at a headset by submitting a project interest form.

During the pitches, one of the people on stage said he was interested in the use of VR in science but also didn’t know WebVR, so he was there to learn! Fellow WebVR learner spotted! I later learned that he is a high school student working with VR at Berkeley Lab over the summer.

High School.

Talk about modern technology in our academic system – the only things I knew how to do with a computer in high school were reading online stories, surfing MySpace, and accidentally getting our family PC infected while watching videos.

Our mentors were more familiar with Babylon.js – I totally didn’t realize Babylon.js is a Microsoft open source project until then – so that’s what we went with.

My teammate, Sam, had a multi-page TIFF file of a butterfly wing, and the initial step was simple – materialize it in WebVR. One of our mentors suggested an A-Frame project that shows an MRI in VR: aframe-mri-poc. Fortunately I had gone through the A-Frame tutorial, and the code was very easy to read. The developer simply used the <a-image> tag to source multiple images, lowered the opacity, then positioned the images side by side. We just had to implement that with Babylon.js, preferably with some sort of loop so we wouldn’t have to manually create an image tag for each image – the TIFF file had hundreds of them.
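The core of that approach looks roughly like this – a sketch from memory, not the project’s actual markup (the file names, opacity, and positions here are made up):

```html
<!-- Hedged sketch of the aframe-mri-poc idea: each slice is an <a-image>
     with lowered opacity, offset a small step along one axis so the stack
     reads as a volume. Slice file names are hypothetical. -->
<script src="https://aframe.io/releases/0.8.2/aframe.min.js"></script>
<a-scene>
  <a-image src="slice000.jpg" opacity="0.3" position="0 1.5 -2.00"></a-image>
  <a-image src="slice001.jpg" opacity="0.3" position="0 1.5 -2.01"></a-image>
  <a-image src="slice002.jpg" opacity="0.3" position="0 1.5 -2.02"></a-image>
  <!-- …one tag per slice, which is exactly why we wanted a loop -->
</a-scene>
```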

It turned out Babylon.js has a transparency example that does something similar to what we had in mind. With Babylon, the work was much more plain JavaScript. Instead of a bunch of <a-image> tags, we wrote a loop that loaded each image with Texture() and created planes using MeshBuilder’s CreatePlane().
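A minimal sketch of that loop – assuming BABYLON is available globally from a script tag, and that the converted slices are served as slice0.jpg, slice1.jpg, … (hypothetical names):

```javascript
// Hedged sketch of our slice-stacking loop, not the exact hackathon code.
// Assumes BABYLON is loaded globally (e.g. <script src="babylon.js">) and
// the converted JPEG slices are named slice0.jpg, slice1.jpg, … (made up).

// Pure helper: z offset of slice i, kept separate so the spacing math
// is easy to check on its own.
function sliceZ(i, spacing) {
  return i * spacing;
}

function buildSliceStack(scene, sliceCount, spacing) {
  for (let i = 0; i < sliceCount; i++) {
    // One plane per slice, stepped back a little each time.
    const plane = BABYLON.MeshBuilder.CreatePlane('slice' + i, { size: 1 }, scene);
    plane.position.z = sliceZ(i, spacing);

    // Texture the plane with the slice image and lower the opacity so the
    // inner slices show through, like the <a-image> version did.
    const mat = new BABYLON.StandardMaterial('sliceMat' + i, scene);
    mat.diffuseTexture = new BABYLON.Texture('slice' + i + '.jpg', scene);
    mat.alpha = 0.3;
    plane.material = mat;
  }
}
```

CreatePlane, StandardMaterial, and Texture are real Babylon.js APIs; the naming scheme and spacing values are just illustration.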

The sheer number of images and the size typical of TIFF files were just too heavy on performance, though. We spent most of the first night researching a way to parse the files. Eventually, we just converted the images to JPEG beforehand. Sam used Google Cloud to do a mass conversion of the TIFF file and got the file names ordered for easy looping.

Now, onto the features. With the image in VR, how would users interact with it? Several ideas popped up throughout the hackathon – we kind of just came up with ideas and tried them as we went. We moved from just uploading science images to medical images. Our Microsoft mentor connected us with a medical professional who also works with VR, and we got to Skype with him and learn about the industry. By the end, our VR model could be moved and scaled, had a threshold control, and could be drawn on – with the drawing erasable at the press of a virtual button.

I started with the move feature. I was totally confused by the controllers, since they were never much of an issue when you are just creating floating objects. This time, a fellow participant graciously lent us his Oculus headset, so we were targeting that headset, which meant controller support was a must. My targeted object would move and rotate at the same time, and I couldn’t figure out how to fix it. Fortunately, my teammate gave it a try and figured it out.
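The shape of the fix (a sketch, not our actual code) is to apply only the controller’s position delta to the target each frame and ignore its rotation. Here `triggerPressed` is a hypothetical stand-in for whatever grab signal you wire up; `devicePosition` and `onBeforeRenderObservable` are from the Babylon.js WebVR API we were using:

```javascript
// Hedged sketch: move a mesh with the controller without it spinning.
// Only the controller's frame-to-frame position delta is applied to the
// target; its rotation is deliberately ignored.

// Pure helper: translate a position by the controller's movement delta.
function followDelta(target, prev, curr) {
  return {
    x: target.x + (curr.x - prev.x),
    y: target.y + (curr.y - prev.y),
    z: target.z + (curr.z - prev.z),
  };
}

// Rough Babylon.js wiring. controller.triggerPressed is a made-up flag;
// in practice you'd track it via the controller's trigger observable.
function attachMoveBehavior(scene, controller, targetMesh) {
  let prev = null;
  scene.onBeforeRenderObservable.add(() => {
    if (!controller.triggerPressed) { prev = null; return; }
    const curr = controller.devicePosition;
    if (prev) {
      const p = followDelta(targetMesh.position, prev, curr);
      targetMesh.position.x = p.x;
      targetMesh.position.y = p.y;
      targetMesh.position.z = p.z;
    }
    prev = { x: curr.x, y: curr.y, z: curr.z };
  });
}
```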

Our mentors were amazing – we had at least one person there to patiently explain and debug issues with us during the entire hackathon. That was particularly helpful when Sam was working on the painting tool and the resulting performance issues. I also got more help understanding how the code behind the controllers works. Another mentor helped me figure out why my scale button was getting smaller instead of scaling the object I wanted to scale.

During the hackathon, I learned various concepts and terms that I am going to research further, such as meshes, shaders, and how different controllers operate (the browser wars, renewed in the form of VR controllers in 2018…). I also got to talk to various professionals about their views on VR – VR artists, a doctor, and the Microsoft team. The latter showed me a Teia Solution building demo by Stereograph3D, built with Babylon.js, after I asked about architecture in WebVR – my jaw dropped (you can do that in a browser?).

Being the only team in the medical category… we won the medical category for the Weekend Challenge, but the greatest prize was the learning experience, which was my goal for the hackathon. This is the result:

And here’s our mentor having some fun with the app – doesn’t our skullman look happy?

I also got selected to receive the Mixed Reality headset, which was amazing and really helpful because I can’t afford a headset right now. Once I get the set, I hope to do a review. In addition, since the project we did was true hackathon-style spaghetti code geared toward the Oculus, maybe I can clean it up and retarget it for the new headset? I only did a very small portion of the codebase since I struggled a lot with the controllers, so I really hope to study more and get a better grip on what was done.