Nerdery Labs Program

Oculus Rift Experiment – Is Virtual Reality Ready for Business Applications?

Introduction to Oculus Rift

The Oculus Rift is a new Virtual Reality (VR) headset designed to provide a truly immersive experience, allowing you to step inside your favorite video games, movies, and more. The Oculus Rift has a wide field of view, a high-resolution display, and ultra-low-latency head tracking unlike any VR headset before it.

Nerdery Lab Program

Lab partners Chris Figueroa and Scott Bromander collaborated on this Oculus Rift experiment; their respective Lab Reports are below. The Nerdery Lab program is an opportunity for employees to submit ideas for passion projects demonstrating cutting-edge technologies. Nerds whose ideas show the most potential are given a week to experiment and produce something to show to other Nerds and the world at large.

Lab Report from Nerdery Developer Chris Figueroa:

How Is the Oculus Rift Different from Past Virtual Reality Headsets?

The first thing to know is that the Oculus Rift has a very wide field of view. Previously you would put on a VR headset and have tunnel vision. It didn’t feel like you were in the experience, which was a critical flaw for something called “Virtual Reality.” How can you feel like you are somewhere else if you just feel like you are watching a tiny screen inside of goggles?

The Oculus Rift puts you in the virtual world. You have a full 110-degree field of view, which has never before been used in Virtual Reality. When you put on the Oculus headset you immediately feel like you are in the virtual world. You can look up and down, and just move your eyes slightly to see objects to the left and right. One of the key things about the Oculus is that you have peripheral vision, just like in real life.

Rapid Prototyping at its finest

The first thing you always do is get a sense of what the 3D world will feel like. Put placeholder blocks everywhere – blocks the size of the objects you will later put there. For example, the blocks you see below became rocks. We placed a block there so that when we put the VR headset on, we’d know there would be something there.

[Screenshots: oculus1, oculus2 – placeholder blocks in the prototype scene]
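To illustrate that blocking-out step, here’s a minimal Unity C# sketch – the object names, positions, and sizes are hypothetical stand-ins, not data from our scene – that drops primitive cubes where the final props will eventually go:

```csharp
using UnityEngine;

// Minimal sketch: stand-in cubes mark where final props (e.g. the rocks) will go.
// Names, positions, and sizes are hypothetical placeholders, not our actual scene data.
public class PlaceholderBlocks : MonoBehaviour
{
    void Start()
    {
        SpawnBlock("Rock_Placeholder", new Vector3(2f, 0.5f, 4f), new Vector3(1f, 1f, 1f));
        SpawnBlock("Bench_Placeholder", new Vector3(-3f, 0.4f, 6f), new Vector3(1.5f, 0.8f, 0.5f));
    }

    void SpawnBlock(string blockName, Vector3 position, Vector3 size)
    {
        GameObject block = GameObject.CreatePrimitive(PrimitiveType.Cube);
        block.name = blockName;
        block.transform.position = position;
        block.transform.localScale = size;   // roughly the footprint of the final asset
    }
}
```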

Development Challenges

Developing for the Oculus Rift is a complete departure from developing video games, 3D movies, 3D graphics or any other sort of 3D media. You’ll quickly realize that things you create can make people sick on the Oculus Rift. Sometimes you won’t know what is making you sick – you just know something “feels wrong.” It’s a dilemma to have a very cool product that makes users sick because something on the screen moves wrong, the UI is in their view, or textures look wrong in the 3D world – it can be any number of things. Below is what we encountered.

1. Don’t Be Tempted to Control Head Movement

In real life you choose what to look at. Advertisers know how to use lines and colors to guide someone’s eye to an object on a billboard, but with Virtual Reality you have to do that in 3D space. It adds a whole new element of complexity that very few people have experience with.

The easiest thing to do is just move the 3D camera so it points at something. What you don’t think about is that no one in real life has their head forced to look at something, so if you do it in Virtual Reality it can literally make people sick! It’s just ill-advised to make users ill.
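One way around it – a rough sketch of the general idea, not code from our experiment – is to leave the camera entirely under the head tracker’s control and instead aim a visible indicator in the world at whatever the user needs to notice, so they turn their own head:

```csharp
using UnityEngine;

// Sketch: never rotate the VR camera yourself. Instead, aim a visible arrow
// (an ordinary scene object) at the point of interest so users turn their own heads.
public class AttentionArrow : MonoBehaviour
{
    public Transform pointOfInterest;   // hypothetical target the user should look at

    void Update()
    {
        if (pointOfInterest == null) return;
        Vector3 toTarget = pointOfInterest.position - transform.position;
        // Smoothly turn the arrow, not the camera, toward the target.
        Quaternion look = Quaternion.LookRotation(toTarget);
        transform.rotation = Quaternion.Slerp(transform.rotation, look, Time.deltaTime * 3f);
    }
}
```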

2. User Interface vs World Space

The Oculus Rift wants you to feel like you’re experiencing real life. So how do you display information to users wearing the VR headset? The first thing people say is “Let’s just put information in the top-right corner to indicate something important the user needs to get through the experience.” This sounds completely normal and works for everything except Virtual Reality – putting something in front of the user’s face will not only obstruct their view, it could also make them sick!

Rule of thumb that I learned from the Oculus Rift Founder:

“If it exists in space, it doesn’t go on your face.”
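In Unity terms, that means skipping screen-space overlays and anchoring informational elements to positions in the scene itself. Here’s a minimal sketch of the idea – the label object and the offset are hypothetical, not our project’s actual UI:

```csharp
using UnityEngine;

// Sketch: place an info label in the world (floating near the object it describes)
// instead of pinning it to the user's view, so head movement can look away from it.
public class WorldSpaceLabel : MonoBehaviour
{
    public Transform describedObject;   // hypothetical object the label belongs to
    public Transform playerCamera;      // the head-tracked VR camera (read-only here)

    void LateUpdate()
    {
        // Sit slightly above the object it describes, fixed in world space.
        transform.position = describedObject.position + Vector3.up * 1.5f;
        // Optionally turn the label toward the user so the text stays readable,
        // without ever moving or rotating the camera itself.
        transform.rotation = Quaternion.LookRotation(transform.position - playerCamera.position);
    }
}
```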

3. Development Kit Resolution

The first development kit for the Oculus Rift has very low resolution in each eye. When people first put the headset on, they immediately say it’s low resolution. They’re right, and it was very interesting to work with, because 3D objects and their edges, colors and lines don’t look the same as they do on your computer screen. Sometimes fonts are completely unreadable.

Everything must be tested before a user tries the experience or they may miss out on whatever the 3D world is attempting to show them.

4. High Resolution Textures vs Low Resolution Textures

Most people who work with 3D content or movies, where there are no performance restrictions, know that higher resolution is better. The low resolution of the Oculus Rift created some weird problems, because higher-resolution textures actually looked worse than low-resolution ones. Even though people could look at a 3D rock and tell its texture was low resolution, it didn’t matter – the high-resolution textures didn’t look anything like what you wanted them to be.

Programs I used for the Oculus Rift Project:

  • Unity3D – Game Engine used to interact with 3D environments
  • Oculus Rift Dev Kit 1
  • C# and C++ (Oculus SDK)
  • MonoDevelop – I write C# on a Mac with Unity3D
  • Blender 3D 2.69, with a Python transform plugin I made
  • Photoshop CS6

Lab Report from Nerdery Developer Scott Bromander:

Building 3D Models for the Oculus Rift

The process for this lab experiment was broken into two clear paths of work: 3D modeling and SDK (software development kit) engine work. The two could happen simultaneously, since we had to have 3D visual assets to actually put into the environment – much like drafting a website in Photoshop before slicing it up and styling it with HTML and CSS. The Oculus SDK work focused more on the environment and user interactions, while I took the placeholder objects in the environment and replaced them with realistic assets.

For my specific portion of this experiment, I handled the modeling of objects within the 3D experience. Since our goal was to create an example of a business application for a 3D simulator, I built a full-scale model of a residential house. Our experiment demonstrates how the Oculus Rift could be used to visualize a remodeling project, vacation planning, or property sales.

Building these real-world objects is a lot like sculpting with a block of clay. You start with nothing and use basic geometry to shape the object you would like to create. In this case, it was a house that started out looking very plain and very gray.

Typically in 3D modeling, the real magic doesn’t come together until later in the process – you take the flat gray 3D object and give it a “skin,” called a texture. Texturing requires that you take the 3D model and break it down into a 2D image. Creating 3D objects follows a specific process to get the best results.

My Process

  • Plan and prep
  • Build a pseudo-schematic for what would be built
  • Create a to-scale model
  • Texture and refactor geometry

Tools

I used 3D Studio Max to build out the front of the house, along with measurement guides that I pre-created from basic geometry – in this case, a series of pre-measured planes for common measurements. I was then able to use those guides throughout the modeling process to speed things up.

Additionally, I used a lot of the data-entry features of 3DS Max to get exact measurements applied to certain components of the house. This ensured that the scale would be 100% accurate. Once the house was modeled to scale in 3DS Max, we came up with a conversion ratio to apply before bringing the model into Unity.
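As an illustration of that conversion step – the 0.0254 factor below assumes a model built in inches, and is not the ratio we actually used – the scale can be applied to the imported model’s root transform in Unity:

```csharp
using UnityEngine;

// Sketch: apply a unit-conversion ratio to an imported model so the real-world scale
// survives the trip from 3DS Max into Unity (where 1 unit = 1 meter).
// The 0.0254 factor assumes the model was built in inches; the real ratio may differ.
public class ImportScaleFix : MonoBehaviour
{
    public float maxUnitsToMeters = 0.0254f;   // hypothetical: inches -> meters

    void Awake()
    {
        transform.localScale *= maxUnitsToMeters;
    }
}
```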

Finally, we optimized texture maps by including extra geometry for repeating textures (like in the siding and roof). The trick here was to plan for it while at the same time ensuring the scale was accurate. In this case, guides help a lot in slicing extra geometry.

Photoshop for texture generation

To create textures for the house, we used photos I snapped on the first day. One problem: I didn’t set up the shots for texture use (lens settings), so a significant amount of cleanup work was needed. If you think about how we see things versus how a lens captures an image, it isn’t flat but rather a little spherical. Using a combination of stretching, clone-stamp and healing-brush techniques I’ve learned over the years, I was able to take this semi-spherized image and make it appear flattened out.

After those textures were created, we took a pass at creating bump and specular maps. While that work ultimately never made it into the final experiment, I did follow the process. In both cases, I used an industry-standard tool called Crazy Bump. The purpose of these types of “maps” is to create the look of additional geometry without actually adding it. Basically, these maps tell Unity how the light should respond when hitting the 3D object to give the effect of actual touchable texture. So if you get up close to the siding, for example, it has the ridges and look of real siding.
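For context, this is roughly how such maps get wired up on the Unity side using the built-in “Bumped Specular” shader of that era; it’s a sketch with placeholder texture fields, not our project’s actual material setup:

```csharp
using UnityEngine;

// Sketch: assign a diffuse texture plus a normal (bump) map to Unity's legacy
// "Bumped Specular" shader so light responds to the siding's ridges.
// Texture fields are hypothetical placeholders exported from a tool like Crazy Bump.
public class SidingMaterialSetup : MonoBehaviour
{
    public Texture2D diffuseMap;   // flattened photo texture
    public Texture2D normalMap;    // bump/normal map (import type set to Normal map)

    void Start()
    {
        var mat = new Material(Shader.Find("Bumped Specular"));
        mat.mainTexture = diffuseMap;
        mat.SetTexture("_BumpMap", normalMap);
        mat.SetColor("_SpecColor", Color.gray);   // how strongly light "pops" off the ridges
        mat.SetFloat("_Shininess", 0.3f);
        GetComponent<Renderer>().material = mat;
    }
}
```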

Had we more time, we’d have used Mental Ray texturing and lighting to give a more realistic look, and then baked that into the texture itself. This effectively would’ve taken all of those different maps, textures and lighting setups and condensed them down into one texture. Next time.

Challenging Aspects

One of the challenging aspects of this project was deciding which details from the early designs were important enough to get actual geometry versus just a texture. My initial thought was that if I could get close to these objects with the Oculus Rift on, I’d be able to catch a lot of the smaller details – planning for that and going a little deeper into the geometry was on my radar from the get-go. Ultimately, though, with the prototype version of the Oculus Rift having a lower resolution than the planned final product, a lot of those details were lost.

Objects like the window frames, roof edging, and other small details were part of the early process. You save a lot of time when you do this planning up front; it’s more time-consuming to make those particular changes after the fact. While it doesn’t take long to go back and add such details, knowing their placement and measurements ahead of time really smooths the process.

New things that I learned

Important lesson: how to plan for the Oculus Rift, since it doesn’t fit the usual project specifications. Having a higher polygon count to work with was freeing after several years of building for mobile and applying all of the efficiencies I’ve learned creating performant mobile experiences. But I learned this maybe a little too late in the process, and it would have been great to account for it in my initial geometry budgets. Ultimately, the savings helped us when it came time to texture. All of this is the delicate balance of any 3D modeler, but it was interesting being on the other end of it coming out of modeling 3D for mobile devices.

Things I’d have done differently in hindsight

I would have shifted my focus and time away from the small details that didn’t translate as well, given the lower resolution of the prototype Oculus Rift we were working with. I could have spent that time creating bolder visuals and texture maps.

Given more time, or for a next iteration or enhancement to make a better, more immersive experience, I’d also create more visually interesting texture maps, build out the interior of the house, and add more tween-style animation – including more visually interesting interactions within the environment.

I’d like to have spent less time on the details in the 3D-modeling portion and a lot more time getting the textures to a place where they were vibrant and visually interesting within the setting we ended up with. In any rapid 3D-model development, one needs to remember that it starts as a flat gray model. If you don’t plan to take the time to make the texture interesting, it will look like a flat gray 3D model. So having more time to go after the textures using some sort of baked Mental Ray setup would have been awesome.

Which really brings me to what I would love to do in another iteration of the project: take the time to make textures that look extremely realistic, but do so in a way that utilizes the strengths of the Oculus Rift and the Unity engine – which is a delicate balance of texture maps between 3DS Max and Unity, in conjunction with how everything renders on the Oculus Rift display. I think that would drive the “want” to interact with the environment more. Then, beyond the model, I’d include more animation and feedback loops in the interaction as a whole.

I’d also include an animated user interface – developed in Flash and imported using a technology like UniSWF or Scaleform – that would be used to make decisions about the house’s finishes. Then, as the user makes those decisions with the interface, I’d include rewarding feedback in the environment itself – things like bushes sprouting out of the ground in a bubbly manner, or windows bouncing in place as you change their style. That sort of interaction feedback is what we’re used to seeing in game-like experiences, but here we’d be using it to heighten a product-configurator experience.
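As a sketch of that kind of feedback – the bounce values below are made up – a simple tween-style scale “pop” in Unity could be triggered whenever the user changes a window style:

```csharp
using System.Collections;
using UnityEngine;

// Sketch: a small tween-style "bounce" to reward a configuration choice,
// e.g. scale a window up and back down when its style changes.
public class BounceFeedback : MonoBehaviour
{
    public float punchScale = 1.2f;   // hypothetical overshoot
    public float duration = 0.4f;     // hypothetical length of the bounce

    public void Play()
    {
        StopAllCoroutines();
        StartCoroutine(Bounce());
    }

    IEnumerator Bounce()
    {
        Vector3 baseScale = transform.localScale;
        float t = 0f;
        while (t < duration)
        {
            t += Time.deltaTime;
            // Swell toward punchScale and settle back to the original size.
            float wave = Mathf.Sin((t / duration) * Mathf.PI);
            transform.localScale = baseScale * Mathf.Lerp(1f, punchScale, wave);
            yield return null;
        }
        transform.localScale = baseScale;
    }
}
```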

Again, next time – so that’s our Lab Report for this time.

Apps That Know Where You Are: Our Experimentation With Apple’s iBeacon Technology

Introduction to the Lab Program:

Earlier this year, The Nerdery unveiled its Nerdery Labs program. It’s an opportunity for employees to submit ideas for projects demonstrating cutting-edge technologies. Nerds whose ideas show the most potential are given a week to pursue them and produce something to show to other Nerds and the world at large.

I have a strong personal interest in extending user experiences beyond the bounds of traditional mobile apps by interfacing with external technologies. I saw the Nerdery Labs program as the perfect opportunity to pursue that interest…so I submitted a proposal to show the possibilities of Apple’s new iBeacon technology. I was tremendously excited when I heard my idea had been selected and as soon as I wrapped up the client project I was engaged with, I got to work!

Introduction to iBeacon

Buried in the ballyhoo surrounding the radical visual changes to iOS 7 was an all-new technology introduced by Apple: iBeacon. What it lacks in razzle-dazzle, it more than makes up for in enabling entirely new interactions and types of applications!

It is important to understand that iBeacon is not a device or a new piece of hardware like the TouchID thumbprint scanner. Instead, it is a public protocol or “profile” built on top of the Bluetooth LE (Low Energy) technology which has been present for several years in iOS devices: iPhone 4S and later, iPad 3rd Gen and later, and the 5th Gen iPod Touch. Bluetooth LE was released in 2010 as a lower-power, lower-speed alternative to traditional Bluetooth; devices broadcasting infrequently using Bluetooth LE can run for up to two years on a single coin-cell battery. Any device that announces itself using the iBeacon profile is an iBeacon, whether it is a small, dedicated radio device or an iDevice configured to broadcast as an iBeacon. Apple will not be producing any dedicated iBeacon hardware – that will be left to third parties. Android support for Bluetooth LE was added in 4.3 (Jelly Bean) so there will likely be Android iBeacons in the near future, too.

How iBeacon works

Figure 1-1. How iBeacon works

At its core, iBeacon is simply a “HERE I AM!” message broadcast roughly once per second to other devices within range of the Bluetooth radio (Figure 1-1). It has a few identifying characteristics so that apps can distinguish the iBeacons they’re interested in from a crowd. iBeacon broadcasts have no data payload; they simply identify themselves via a UUID (unique identifier) and two numbers, dubbed “major” and “minor”. You can think of the UUID as the application identifier: each app will use a different one (or more). An app can only listen for specific UUIDs provided by the developer; there is no way to see a list of all iBeacons visible to the device.

The major and minor numbers have no intrinsic meaning; they are available for the app to use as the developer sees fit. A common scheme is to designate the major number as the general region and the minor as a specific location within that region. As an example, in an app for Macy’s, the UUID for all iBeacons in all Macy’s stores would be identical. The major number would refer to a particular Macy’s store (22 = San Francisco, 1 = NYC, etc.). The minor number would represent the different departments within the store (14 = Women’s Apparel, 7 = Bedding, 29 = Men’s Shoes, etc.). The numbers represent whatever you decide as you plan out the app. The point is, major and minor could be used to identify more than just physical locations; people, pets, containers, kiosks, luggage, and many other objects that you want to keep track of while they are on the go could benefit from the technology.
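To make the scheme concrete, here’s a small sketch of that identity triplet using the hypothetical Macy’s-style numbering above – it’s just an illustration of the data model, not Apple’s API, and the values are placeholders:

```csharp
using System;

// Sketch of the iBeacon identity model: a shared UUID per app/brand,
// plus app-defined "major" and "minor" numbers. The Macy's-style values
// below follow the hypothetical scheme described above; they are not real identifiers.
public struct BeaconIdentity
{
    public Guid ProximityUuid;   // same for every beacon the app cares about
    public ushort Major;         // e.g. which store
    public ushort Minor;         // e.g. which department within that store

    public BeaconIdentity(Guid uuid, ushort major, ushort minor)
    {
        ProximityUuid = uuid;
        Major = major;
        Minor = minor;
    }
}

public static class BeaconExample
{
    public static readonly Guid MacysUuid = Guid.NewGuid();   // placeholder UUID

    // Major 22 = San Francisco store, minor 29 = Men's Shoes, per the scheme above.
    public static readonly BeaconIdentity SfMensShoes =
        new BeaconIdentity(MacysUuid, 22, 29);
}
```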


I Can See The Music – The TasteMapper Lab Experiment


My name is Kevin Moot and I am a Senior Software Developer at The Nerdery. I was fortunate to be involved in one of the Nerdery’s first projects of the Nerdery Labs program.

The Nerdery Labs program is in the same vein as Google’s famous “20% time” (in which Google developers are offered an opportunity to invest 20% of their time at work fostering their own personal side-projects). Our Nerdery Labs program presents a great opportunity for developers to bring their own personal projects to life and play with new technologies, creating experiments ranging from software “toys” to projects with potential business applications.

The Vision

Our concept was ambitious: create a diagram of the entire musical universe.

I teamed up with one of the Nerds in our User Experience department, Andrew Golaszewski, to conceptualize how we could visualize a musical universe in which users could explore an interconnected set of musical artists and genres, jumping from node to node in much the same way that your curiosity might take you from a Wikipedia article on “Socioeconomics of the Reformation Era” to somehow ending up on a video clip of “The Funniest Baby Sloth Video Ever!”

Assuming there was some huge data set out there that would give us insights into the composition of people’s personal music libraries and playlists, could we find out which genres and artists are most commonly associated with one another? Are fans of Ozzy Osbourne likely to see Simon and Garfunkel popping up in their listening history? What other artists should a loyal follower of Arcade Fire be listening to? Do certain genres display more listener “stickiness” than others – that is, do fans of pop music statistically branch out to other genres more often than devotees of death metal?

To narrow this ambitious plan down to a more reasonable, bite-sized problem set, we decided to concentrate on depicting a single central artist at a time as a “root” node. Connected to this root node would be a set of the most closely related artists.

Thus was born the TasteMapper experiment.
