Technology

For security’s sake, update WordPress to version 3.8.2

On April 8, 2014, WordPress released security update version 3.8.2. The announcement that accompanied the release states that “this is an important security release for all previous versions and we strongly encourage you to update your sites immediately.”

WordPress 3.8.2 fixes two potentially serious security vulnerabilities, includes three security-hardening changes, and resolves nine other bugs. Most notably, the release addresses the following security issues:

  • Potential authentication cookie forgery. CVE-2014-0166. (Very serious vulnerability!)
  • Privilege escalation: prevent contributors from publishing posts. CVE-2014-0165.
  • Pass along additional information when processing pingbacks to help hosts identify potentially abusive requests.
  • Fix a low-impact SQL injection by trusted users.
  • Prevent possible cross-domain scripting through Plupload, the third-party library WordPress uses for uploading files.

Additionally, Jetpack – the feature-rich plugin suite from WordPress.com – was updated to version 2.9.3 to address similar issues.

If your site is running a WordPress version below 3.8.2 or a Jetpack version below 2.9.3, you may be at risk and should upgrade as soon as possible.

Filed under Tech News, Technology

Heartbleed bug security alert: Your web server/data may be vulnerable – test your domains

On Monday evening, a security firm announced a new vulnerability in a key internet technology that can result in the disclosure of user passwords and other sensitive data. The vulnerability is widespread, affecting as many as two-thirds of the web servers on the planet, including top-tier sites like Yahoo and Amazon. If you have a secure (HTTPS) website hosted on a Linux/Unix server running Apache or Nginx – or any other service that uses OpenSSL – you are likely vulnerable.

For a detailed breakdown of this vulnerability, please see this site. We urge you to assess your exposure immediately and reach out for help if you need it.

How can I see if my servers are vulnerable?

You can use this site (http://filippo.io/Heartbleed/) to test your domains for the vulnerability. Enter the domain of your HTTPS website; if you get a red positive result, you are vulnerable.

In addition, you can execute the following command on your servers to see if they are running a vulnerable version of OpenSSL: openssl version -a

If the version returned is 1.0.1 and its build date is before April 7, 2014, you are likely vulnerable; the fix shipped in OpenSSL 1.0.1g on that date.
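
If you would rather script that check across several machines, here is a minimal sketch in Java, assuming a Unix-like host with openssl on the PATH. It only inspects the reported version string – distributions sometimes backport the fix without changing the version – so treat its output as a prompt to check your vendor’s advisory rather than proof either way.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Rough local check: runs `openssl version -a` and flags builds that report
    // 1.0.1 through 1.0.1f (1.0.1g and later contain the fix). Distros often
    // backport the patch without bumping the version string, so a "vulnerable"
    // result here means "go check your vendor's advisory," not certainty.
    public class HeartbleedVersionCheck {
        public static void main(String[] args) throws Exception {
            Process p = new ProcessBuilder("openssl", "version", "-a")
                    .redirectErrorStream(true)
                    .start();
            StringBuilder output = new StringBuilder();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    output.append(line).append('\n');
                }
            }
            p.waitFor();
            String text = output.toString();
            System.out.print(text);

            // 1.0.1 with no letter suffix, or suffixes a-f, shipped the vulnerable heartbeat code.
            boolean maybeVulnerable = text.matches("(?s).*OpenSSL 1\\.0\\.1([a-f])?[\\s].*");
            System.out.println(maybeVulnerable
                    ? "Version string looks vulnerable -- update OpenSSL and restart SSL services."
                    : "Version string does not match a known-vulnerable 1.0.1 build.");
        }
    }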

How can I fix it if I am vulnerable?

You will need to obtain a patched version of OpenSSL and install it on all vulnerable servers. Updated packages should be available for Debian, Red Hat, Ubuntu, and CentOS via their package managers. If a patched package is not available for your platform, you can upgrade to OpenSSL 1.0.1g or recompile your current OpenSSL with the -DOPENSSL_NO_HEARTBEATS flag, which disables the vulnerable heartbeat feature. After updating, restart any services that use SSL and re-test your domain using the link above (http://filippo.io/Heartbleed/).

For information on your specific Linux distribution, consult that distribution’s security advisories.

Additionally, you should strongly consider changing passwords and reissuing your SSL certificates, but only after OpenSSL has been updated.

What is the vulnerability?

With the vulnerability, called Heartbleed, attackers can obtain sensitive information from servers running certain versions of OpenSSL. Examples include the private keys for SSL certificates, usernames and passwords, and other data sitting in the memory of the affected services. Attackers who obtain the keys to your SSL certificates can then set up a man-in-the-middle attack between you and your customers and capture sensitive information such as credit card numbers and authentication credentials. The vulnerability was publicly disclosed on Monday, April 7, 2014.

If you have any questions, please contact us or ping your go-to Nerdery contact right away. We’ll help analyze your risk and protect your data. If The Nerdery can be a resource to you in any way, we will be.

Filed under Tech News, Technology

Dashicons Make Your WordPress Dashboard Cooler

What Are Dashicons?

On December 12, 2013, WordPress 3.8 – code-named “Parker” – was released. One of the highlights of 3.8 was the re-skin of the WordPress admin, officially called the Dashboard. The re-skin traded the old blue, grey, gradient-heavy interface for a more modern, flat design. Included in the update were Dashicons, an icon font.

An icon font has all the benefits of being text and none of the downsides of being an image. Size, color, and pretty much anything else you can do with CSS can be applied to icon fonts.

There are several ways to use Dashicons inside the Dashboard. For this example I’ll be using a plugin, but you don’t have to – if you’re more of a functions.php person, it will work there too. I’ll also be skipping over the details of customizing WordPress and focusing on the Dashicons-specific code.

Set Up Base Plugin

I already have the plugin created, uploaded, and activated. You can see the final plugin on GitHub. For each section, I’ll highlight only the relevant code. The code could be included in your own plugin or in your theme’s functions.php file by using the appropriate hooks and filters.

Read more

Filed under Technology

Software maintenance = strategically planned evolution

Thinking of starting a maintenance program? Picking the right number of hours can make a big difference in how your project works.

I’ve done a number of maintenance programs, and each one is a little different. They all fall into one of three buckets, though: no hours, few hours, or lots of hours.

If you choose no hours – meaning you simply call out of the blue when you want work done – or if you have very few hours, the main disadvantages involve knowledge, developers, and timeline.

Ideally, one developer would stay on your project to help with all future fixes. Having a developer with background on the project, the code, and the client relationship is beneficial. However, those developers are in demand: if they aren’t busy with your work, they will get put on another project and become unavailable for yours.

When you lose the developer, you also lose some of the knowledge. We’re still a team and still transfer knowledge from one developer to the next, but switching developers can reduce some efficiencies. We all do our best, but the new developer still has to learn all the ins and outs of the code.

And then there is timeline. If we can’t plan for your maintenance changes, you will have to wait until a developer becomes available. That could be a few days to a few weeks. It all depends on developer availability and the complexity of the changes.

Now, if you have a good amount of hours in your maintenance bucket, it’s easier to keep a developer on the project, retain knowledge, and plan for the work.

I’ve been working with one client for about three years now, and I continue to lead the maintenance project, which supports the sites that we created. Every time they ask for something, I know the project history, I know what we’re talking about, and I know which developers actually built individual pieces of the site. Knowing all this makes it much easier to understand the request, estimate it, and execute.

This client also has a good bucket of hours, which helps me plan for changes. I know that – based on the client budget per month – I need to spend x hours per week. Not only does this help me plan for development, but it also helps with setting expectations for the client. Our turnaround time on requests is usually a few days. We’ve gotten ourselves into a groove and it works well for the client and The Nerdery team.

What’s a good number of hours?

That’s a good question. How many changes do you plan on having? Do you think you’ll have some minor changes and updates? Or do you think you have some features you’d like developed during your maintenance program? Are you thinking of a handful of changes per month or do you have a long list already and it just keeps growing? It all depends on how much work you want done each month and how fast you want to get things completed.

For example, if you’re thinking 40 hours a month, one developer can plan for about six hours per week. Now, when other projects come up, that developer can safely plan for both projects. However, six hours per week also limits the amount of work the developer can get completed. More complex tasks may take a few weeks to complete.

Wait, six hours per week is only 24 hours a month. What about the other 16?

Another thing to consider when setting up maintenance hours is that they are not all development hours. You need time for project management and QA. There will be phone calls, emails, and status reports, and we’ll want to run every change through QA to help ensure we didn’t fix one thing and break another.

Quite often people think that maintenance is just finding and fixing bugs, but it’s more than that. It’s also about keeping things updated, about enhancing existing features, and it’s about building out new features. It’s about ensuring your site/software continues to evolve and grow instead of just standing still. In technology, you’re either getting better or you’re falling behind. Standing still isn’t an option.

Filed under Technology

What is Android Wear, and Why Should You Care?

Google rocked boats recently by announcing Android Wear. “What is Android Wear?” you ask. It’s a specialized version of Android designed to run on wearable computers. We’ve already seen two Android Wear devices slated for release in Q2 of 2014 – the square LG G Watch and the round Moto 360. These watches will pair with any Android handset running Android 4.3 or greater, a refreshing change from smart watches such as the Galaxy Gear, which restrict owners to pairing with the few compatible Galaxy devices. Both of the Android Wear devices publicly announced so far are considered “smart watches,” but according to the lead designer of the Moto 360, the name “Wear” means more product form factors will be explored in the near future.

So what do we know about what these smart watches can do? We know they’ll do what all watches do – tell time – but there’s a lot more as well. Wear devices will have a voice-input button that will trigger a launcher somewhat like Google Now.

They’ll also be able to display a number of different notifications to the user at the flick of a wrist. As app developers, we’ll be able to make these notifications deliver a user’s response back to an app on the phone. For example, we can present the user with a notification from a messenger app that lets them tap a button to open the associated app on the phone. There’s also a “Remote Input” feature that lets the user speak a message to the Wear device, which is then sent to the app on the phone.
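
As a rough illustration of that pattern, here is a sketch of a phone-side notification with an action button, built with the support library’s NotificationCompat. The class and resource names (MessageDetailActivity, R.drawable.ic_message, R.drawable.ic_reply) are placeholders for your own app, and the Wear-specific extensions were still in developer preview when this was written – a standard notification like this is simply bridged to the watch by the Wear companion.

    import android.app.Notification;
    import android.app.NotificationManager;
    import android.app.PendingIntent;
    import android.content.Context;
    import android.content.Intent;
    import android.support.v4.app.NotificationCompat;

    // Sketch: a phone-side notification with an action button that opens the app.
    // Handsets paired with a Wear device bridge notifications like this to the watch,
    // so the same action becomes available from the wrist. MessageDetailActivity and
    // the R.drawable icons are placeholders for your own app.
    public class MessengerNotifications {

        public static void showNewMessage(Context context, String sender, String message, int id) {
            Intent openIntent = new Intent(context, MessageDetailActivity.class)
                    .putExtra("sender", sender);
            PendingIntent openPending = PendingIntent.getActivity(
                    context, 0, openIntent, PendingIntent.FLAG_UPDATE_CURRENT);

            Notification notification = new NotificationCompat.Builder(context)
                    .setSmallIcon(R.drawable.ic_message)
                    .setContentTitle(sender)
                    .setContentText(message)
                    .setContentIntent(openPending)                        // tapping the body opens the app
                    .addAction(R.drawable.ic_reply, "Open", openPending)  // action button, shown on the watch too
                    .setAutoCancel(true)
                    .build();

            NotificationManager manager =
                    (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
            manager.notify(id, notification);
        }
    }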

Notifications are just the start. According to Google, down the road we’ll be able to do the following:

  • Create custom card layouts and run activities directly on wearables.
  • Send data and actions between a phone and a wearable with data replication APIs and RPCs.
  • Gather sensor data and display it in real-time on Android wearables.
  • Register your app to handle voice actions, like “OK Google, take a note.”

What’s more, Google is working with an impressive list of hardware partners including Fossil, Samsung, HTC, Asus, and Intel. With all of the work they’re doing, one might wonder why they focused on notifications first. The most pressing reason is that this will affect every existing and upcoming Android device that offers notifications. Because this will affect so many apps, Google is trying to give us time to get our apps ready for the wrist. Regardless of whether your app was built with Wear in mind, users with Wear will be able to get your app’s notifications on their wrist. It’s in every app developer’s best interest to make sure that notifications are making their way to Wear.

Because this is so important for so many apps, we need to focus on how to interface with Android Wear correctly. Keep in mind that notifying users on their wrist is a powerful way to get information to them, but it shouldn’t be taken for granted. The goal is to give users the information they need right when they need it. Users don’t want to be spammed with too many notifications; instead, the focus should be on maximizing signal and minimizing noise. For example, notifications shouldn’t vibrate unless they need the user’s urgent attention or action. A couple of examples that Google offers are a time-based reminder or a message from a friend. Similarly, a notification shouldn’t have sound unless there’s a good reason for it. The goal with Wear is to make notifications glanceable. This means doing things like collapsing multiple notifications into a more compact view. There are five different priority buckets – Max, High, Default, Low, and Min – and it’s important to know how to use them correctly. For more information on designing great notifications, read the official Wear design guidelines.
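
Here is a small, hedged sketch of what that looks like in code: the five buckets map to the NotificationCompat priority constants, and vibration is reserved for genuinely urgent cases. The helper and its arguments are illustrative, not a prescribed API.

    import android.app.Notification;
    import android.content.Context;
    import android.support.v4.app.NotificationCompat;

    // Sketch: map "how urgent is this?" onto the five priority buckets and reserve
    // vibration for notifications that genuinely need the user's attention.
    public class NotificationPriorities {

        public static Notification build(Context context, String title, String text, boolean urgent) {
            NotificationCompat.Builder builder = new NotificationCompat.Builder(context)
                    .setSmallIcon(android.R.drawable.stat_notify_chat)
                    .setContentTitle(title)
                    .setContentText(text)
                    // The five buckets: PRIORITY_MAX, PRIORITY_HIGH, PRIORITY_DEFAULT,
                    // PRIORITY_LOW, PRIORITY_MIN.
                    .setPriority(urgent ? NotificationCompat.PRIORITY_HIGH
                                        : NotificationCompat.PRIORITY_DEFAULT);
            if (urgent) {
                builder.setVibrate(new long[] {0, 250, 250, 250}); // vibrate only when urgent
            }
            return builder.build();
        }
    }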

We’re only scratching the surface of cool things that can be done to display information quickly while brushing disruptions aside conveniently. I’m excited to see what we can come up with next.

Filed under Tech News, Technology

A Developer’s Perspective on the Whirlwind of Announcements from GDC 2014

Growing up with the game industry has truly been a great pleasure. One of the coolest things about my time with it has been the recent years of incredible growth and the industry’s emergence as a leader in entertainment. With that growth, conferences like E3, PAX, and GDC have only gotten bigger and crazier. GDC (the Game Developers Conference) has a couple of different iterations (such as GDC Europe, GDC Asia, and GDC Next), but GDC ‘Prime’ (simply known as ‘GDC’) is where all the stops are pulled out and vendors show off their latest and greatest.

This year’s GDC just wrapped, and it has been a whirlwind week. There is so much to talk about in the way of technology and game announcements, but the focus of this article is going to be core game engines and virtual reality technology. Before I switch to that, a quick shout-out to Lucas Pope (@dukope) for pretty much sweeping the Independent Games Festival awards with his game ‘Papers, Please’. So great to see an amazing game recognized for its brilliance.

Two housekeeping items before I launch into full nerd mode here – two terms I would like to define for you, that is. The first is “game engine.” Game engines are the final assembly point in the game-creation pipeline: where you pull in all of your art assets, where you build your level scenarios, and where you code (‘script’) events to happen in the game. Things a developer considers when selecting a game engine include how light is rendered in the engine, the ability to produce different dynamic visuals, and what the engine’s cross-platform capabilities are. The second term I want to make you familiar with is “virtual reality.” Sure, you may have heard that term before and rolled your eyes at the very sound of it, but it’s making a resurgence in a massive way. Kickstarter birthed the Oculus Rift project, a goggle set that puts the wearer into a game by placing a screen right in front of their eyes in an oddly comfortable way (in a nutshell – I have not gone ‘full nerd’ here yet). In any case, Oculus put the ability to create a super-immersive visual experience in the hands of many developers by selling developer kits and pairing the hardware with the Unity3D game engine, which is common in the game-development community as a whole.

Alright, so let’s go full nerd now. The week kicked off with Unity3D announcing the fifth iteration of its indie-affordable game engine. While Unity 5 was not released, it was announced in a grand way. Historically, there has been a division between Unity and the “triple-A” game engines because of the type of game developer each was targeting and the resources required to make a great game engine. Unity 5 promises some pretty impressive features, such as the ability to publish a full 3D game experience to the web – no plug-in required – through WebGL. Also included are impressive real-time global illumination and physically based shaders, which is nerd speak for “gorgeous graphics,” shortening the divide between Unity and the big guys.

Personally, I have gotten the opportunity to watch Unity grow from the four-person team I met in Austin, Texas at the historic GDC Online (which has since been scrapped in favor of the GDC Next conference held in LA). At the time, they were exclusive to the web through their plug-in, but they walked over to our booth as everyone was setting up and said to me, “Want to see 3D on a phone?” To which I replied, “No way!” Since then, they have built their technology to export to the web (through a plug-in), iOS, Android, Windows Phone, and even BlackBerry. And now they have returned to their roots to make their engine capable of exporting to the web without the use of a plug-in – which has been kind of the Holy Grail for game engines, given the current market.

Not to be outdone, the next day Epic announced Unreal Engine 4 – and its release. This is a product I have been talking about for almost two years, ever since they first started showing some impressive video of the development environment. While there were rumblings that it might be released to the game-development community, it certainly was not on my radar, because I assumed it was just buzz talk to steal some of Unity’s momentum. But a few of us were stunned to see the word “released” associated with Unreal 4. The engine features some crazy-impressive lighting and physics (more so than even the Unity 5 updates), but one of the most interesting parts of the showcase is the recent switch in how they present themselves.

Previously, Unreal had a bit of a confusing pricing model; they have now switched to $19/month plus a 5% revenue share, which is much more friendly to indie developers. So if the mission was to offer a high-end, affordable option to the ever-growing indie game-development community: mission accomplished.

If you have been following my blog posts throughout the years, you know that another engine I often reference is Crytek’s CryEngine (we game developers get all the cool tool names!). This is the engine behind the gorgeous graphics of the Crysis and Far Cry series. While there were no super-exciting technology updates to this engine (which is still impressive, by the way), Crytek did switch over to an EaaS (engine-as-a-service) model, undercutting Unreal significantly at $10/month with no revenue sharing. It will be interesting to track the disruption this has on Unity and Unreal users over the next year.

Finally in my engine discussion is something that I (along with many other people) was not expecting at all: the announcement of Ubisoft’s Snowdrop engine. This engine is about as impressive and beefy as they come. First showcased in the announcement of Tom Clancy’s ‘The Division’, the engine had gone relatively under the radar. When Ubisoft announced Snowdrop, it was unclear whether or not it would be made available to the open development community, but one of the release videos hints that it may be after the release of the first game using it. The engine offers some crazy tools such as procedural geometry creation, along with features like procedural destruction, stunning volumetric lighting, and jaw-dropping dynamic material shaders (a personal favorite). While I’m a huge fan of game-development tools, I have never considered myself the guy to grab a tool the minute it is available – but I can tell you that if Ubisoft makes this one available, I am going to take a week off.

We now come to the virtual reality hardware portion of this blog post. This is easily one of the hardest things to discuss, because it is one of those “seeing is believing” topics – I cannot put into words what it is like to experience the current VR hardware. The Nerdery, however, is showcasing an Oculus Rift lab experiment that my teammate Chris Figueroa and I tackled using the Oculus Rift Development Kit 1.

But the big news here is Sony’s announcement of ‘Project Morpheus’. While much of the community remained skeptical of Sony’s play to move into the VR space (given their track record of picking up and putting down different technologies), the results are actually rather impressive. The first generation of their development kit touts specs that are bigger and better than the first-generation Oculus Rift. Couple that with the support of engine creators like Unity and Unreal, and it looks like Morpheus could make some waves. Initial reports from those who waited in line at GDC to give it a try are also promising.

But in typical GDC fashion, Oculus brought its response to the show, showing off a more polished version of the Crystal Cove prototype and announcing the second iteration of its developer kit. Overall, the technology is super impressive: in short, it tracks every movement the brain expects to see when moving the head, creating an even more realistic VR experience. Having used and developed for the Oculus Rift firsthand, I can tell you that the future of VR is very promising indeed.

To wrap this up, what happened at the conference is a promising nod to the game-development community as a whole, not just top-end developers. The tools being made available to newer developers are vast and great. It is this writer’s opinion that this shift in attention is due to the recent boom in indie game development (caused by many factors that are beyond the scope of this blog post). More and better tools available at a reasonable price point mean a lot of things. You will start to see really impressive titles released for your computers, PlayStation 4s, and Xbox Ones. Additionally, mobile technology will be pushed in ways you never thought possible.

But one of the things I am most excited about is how affordable these technologies are. I can’t wait to see what this does beyond the game market, and how these impressive new engines – paired with exciting and engaging virtual reality hardware – will change other experiences: going to the museum, going to the zoo, or even how consumers make decisions about products. There will soon be a day when you can walk into a Home Depot, put on a VR headset, see your own house loaded into a simulated experience, and make paint decisions based on how the light hits the wall at 5 p.m.

Filed under Tech News, Technology

Oculus Rift Experiment – Is Virtual Reality Ready for Business Applications?

Introduction to Oculus Rift

The Oculus Rift is a new Virtual Reality (VR) headset designed to provide a truly immersive experience, allowing you to step inside your favorite video games, movies, and more. The Oculus Rift has a wide field of view, a high-resolution display, and ultra-low-latency head tracking unlike any VR headset before it.

Nerdery Lab Program: Oculus Rift

Lab partners Chris Figueroa and Scott Bromander collaborated on this Oculus Rift experiment; their respective Lab Reports are below. The Nerdery Lab program is an opportunity for employees to submit ideas for passion projects demonstrating cutting-edge technologies.  Nerds whose ideas show the most potential are given a week to experiment and produce something to show to other Nerds and the world at large.

Lab Report from Nerdery Developer Chris Figueroa:

How Is the Oculus Rift Different from Virtual Reality Headsets of the Past?

The first thing to know is that the Oculus Rift has a very wide field of view. Previously, you would put on a VR headset and have tunnel vision; it didn’t feel like you were in the experience. That was a critical failure, because it’s called “Virtual Reality.” How can you feel like you are somewhere else if you just feel like you are watching a tiny screen inside a pair of goggles?

Oculus Rift puts you in the virtual world. You have a full 110-degree field of view, which earlier VR headsets never offered. When you put on the Oculus headset you immediately feel like you are in the virtual world. You can look up and down, and just move your eyes slightly to see objects to the left and right. One of the key things about the Oculus is that you have peripheral vision, just like in real life.

Rapid Prototyping at its finest

The first thing you always do is get a sense of what the 3D world will feel like. Put placeholder blocks everywhere – blocks the size of the objects you will later put there. For example, the blocks you see below became rocks. We placed a block there so that when we put the VR headset on, we would know there would be something in that spot.

[Screenshots: gray placeholder blocks standing in for rocks and other objects in the prototype scene]

Development Challenges

Developing for the Oculus Rift is a complete departure from developing video games, 3D movies, 3D graphics, or any other sort of media that involves 3D. You’ll quickly realize that things you create for the Oculus Rift can make people sick. Sometimes you won’t know what is making you sick – you just know something “feels wrong.” It’s a dilemma to have a very cool product that makes users sick because something on the screen moves wrong, or the UI sits in their view, or textures look wrong in the 3D world – it can be any number of things. Below is what we encountered.

1. Don’t Be Tempted to Control Head Movement

In real life you choose to look at something. Advertisers have experience using lines and colors to guide someone’s eye to an object on a billboard, but with virtual reality you have to do that in 3D space. That adds a whole new element of complexity that very few people have experience with.

The easiest thing to do is just move the 3D camera so it points at something. What you don’t think about is that no one in real life has their head forced to look at something, so if you do it in Virtual Reality it literally can make people sick! It’s just ill-advised to make users ill.

2. User Interface vs World Space

The Oculus Rift wants you to feel like you’re experiencing real life. So how do you display information to users wearing the VR headset? The first thing people say is, “Let’s just put the information in the top-right corner to indicate something important the user needs to get through the experience.” This sounds completely normal and works for everything except virtual reality – putting something fixed in the user’s view will not only obstruct what they see, it could also make them sick!

Rule of thumb that I learned from the Oculus Rift Founder:

“If it exists in space, it doesn’t go on your face.”

3. Development Kit Resolution

The first development kit for the Oculus Rift has very low resolution in each eye. When people first put the headset on, they immediately say it’s low resolution. They’re right, and it was very interesting to work with, because 3D objects and their edges, colors, and lines don’t look the same as they do on your computer screen. Sometimes fonts are completely unreadable.

Everything must be tested before a user tries the experience or they may miss out on whatever the 3D world is attempting to show them.

4. High Resolution Textures vs Low Resolution Textures

Most people who work with 3D content or movies without restrictions know that higher resolution is better. The low resolution of the Oculus Rift created some odd problems, because higher-resolution textures actually looked worse than lower-resolution ones. Even though people can look at a 3D rock and tell its texture is low resolution, it didn’t matter, because the high-resolution textures didn’t look anything like what you wanted them to.

Programs I used for the Oculus Rift Project:

  • Unity3D – Game Engine used to interact with 3D environments
  • Oculus Rift Dev Kit 1
  • C# and C++ (Oculus SDK)
  • MonoDevelop – I write C# on a Mac with Unity3D
  • Blender 3D 2.69, with a Python transform plugin I made
  • Photoshop CS6

Lab Report from Nerdery Developer Scott Bromander:

Building 3D Models for the Oculus Rift

The process for this lab experiment was broken into two clear paths of work. The 3D modeling and the SDK (software development kit) engine work could happen simultaneously, since we needed 3D visual assets to actually put into the environment – much like drafting a website in Photoshop before slicing it up and styling it with HTML and CSS. The Oculus SDK work focused more on the environment and user interactions, and I took the placeholder objects in the environment and swapped in the realistic assets.

For my specific portion of this experiment, I handled the modeling of objects within the 3D experience. Since our goal was to create an example of a business application for a 3D simulator, I built a full-scale model of a residential house. Our experiment demonstrates how Oculus Rift could be used in visualizing a remodeling project, vacation planning, or property sales.

Building these real-world objects is a lot like sculpting with a block of clay. You start with nothing and use basic geometry to shape the object you would like to create. In this case, it was a house that started out looking very plain and very gray.

Typically, the real magic of 3D modeling doesn’t come together until later in the process – you take the flat gray 3D object and give it a “skin,” called a texture. Texturing requires that you take that 3D model and break it down into a 2D image. Creating 3D objects follows a specific process to get the best results.

My Process

Plan and prep; build a pseudo schematic for what would be built; create a to-scale model; texture/refactor geometry.

Tools

I used 3D Studio Max to build out the front of the house, along with measurement guides that I created ahead of time from basic geometry – in this case, a series of pre-measured planes for common measurements. I was then able to use those guides throughout the modeling process to speed things up.

Additionally, I used a lot of the data-entry features of 3DS Max to get exact measurements applied to certain components of the house. This ensured that the scale would be 100% accurate. Once it was modeled in 3DS Max to scale, we then came up with a conversion ratio to apply before bringing the model into Unity.

Finally, we optimized texture maps by including extra geometry for repeating textures (like the siding and roof). The trick here was to plan for it while at the same time ensuring the scale was accurate. In this case, the guides helped a lot in slicing the extra geometry.

Photoshop for texture generation

To create textures for the house, we used photos I snapped on the first day. One problem here: I didn’t set up the shots for texture use (lens settings), so there was a significant amount of cleanup work to be done. If you think about how we see things versus how a lens captures an image, the result isn’t flat but a little more spherical. So, using a combination of stretching, clone-stamp, and healing-brush techniques I’ve learned over the years, I was able to take this semi-spherized image and make it appear flattened out.

After those textures were created, we took a pass at creating bump and specular maps. While the final product of that work ultimately never made it into the final experiment, I did follow the process. In both cases, I used an industry-standard tool called Crazy Bump. The purpose of these types of “maps” is to create the look of additional geometry without actually adding it. Basically, these maps tell Unity how the light should respond when hitting the 3D object to give the effect of actual touchable texture. So if you get up close to the siding, for example, it has the ridges and look of real siding.

Had we had more time, we’d have used Mental Ray texturing and lighting to give a more realistic look, and then baked that into the texture itself. This effectively would’ve taken all of these different maps, textures, and lighting situations and condensed them down into one texture. Next time.

Challenging Aspects

One of the challenging aspects of this project was deciding, from the early designs, what was important enough to deserve actual geometry versus what could be handled with a texture. My initial thought was that if I was able to get close to these objects with the Oculus Rift on, I’d be able to catch a lot of the smaller details – so planning for that and getting a little deeper into the geometry was on my radar from the get-go. Ultimately, though, with the prototype version of the Oculus Rift having a lower resolution than what’s planned for the final product, a lot of those details were lost.

Objects like the window frames, roof edging, and the other small details were part of the early process. You save a lot of time when you do this planning up front; making those kinds of changes after the fact is more time-consuming. While it doesn’t take a lot of time to go back and add those details, knowing their placement and measurements ahead of time really smooths the process.

New things that I learned

Important lesson: how to plan for the Oculus Rift, since it doesn’t fit the usual project specifications. Having a higher polygon count to work with was freeing after several years of building for mobile and applying all of the efficiencies I’ve learned creating performant experiences for mobile devices. But I learned this maybe a little too late in the process; it would have been great to account for it in my initial geometry budgets. Ultimately, the savings helped us when it came time to texture. All of this is the delicate balance of any 3D modeler, but it was interesting being on the other end of it after coming out of modeling 3D for mobile devices.

Things I’d have done differently in hindsight

I would have shifted my focus and time from the small details that didn’t translate as well, given the lower resolution of the prototype Oculus Rift that we were working with. I could have spent that time creating bolder visuals and texture maps.

Given more time, or for a next iteration or enhancement that would make for a better or more immersive experience, I’d also create more visually interesting texture maps, build out the interior of the house, and add more tweening-style animation – including more visually interesting interactions within the environment.

I’d like to have spent less time on the details in the 3D-modeling portion and a lot more time getting the textures to a place where they were vibrant and visually interesting within the setting we ended up with. In any rapid 3D-model development, one needs to remember that it starts as a flat gray model; if you don’t plan to take the time to make the texture interesting, it will look like a flat gray 3D model. So having more time to go after the textures using some sort of baked Mental Ray setup would have been awesome.

Which really brings me to what I would love to do in another iteration of the project: take the time to make textures that look extremely realistic, but do so in a way that utilizes the strengths of the Oculus Rift and the Unity engine – which is all a delicate balance of texture maps between 3DS Max and Unity, in conjunction with how everything renders in the Oculus Rift display. I think that would drive the “want” to interact with the environment even more. Then, beyond the model, I’d include more animation and feedback loops for the interaction as a whole.

I’d also include an animated user interface – developed in Flash and imported using a technology like UniSWF or Scaleform – that would be used to make decisions about the house’s finishes. Then, as the user makes those decisions with the interface, I’d include rewarding feedback in the environment itself – bushes sprouting out of the ground in a bubbly manner, or windows bouncing in place as you change their style, or that sort of thing: the type of interaction feedback we’re used to seeing in game-like experiences, used instead to heighten a product-configurator experience.

Again, next time – so that’s our Lab Report for this time.

Getting Better Analytics for Naughty Native Apps Using Crashlytics

Something most people don’t think about is what happens after someone presses the button to release an app into the wild. Development doesn’t always stop, issues crop up, and bugs happen. It doesn’t matter how much testing goes into an app or how many devices it was tested against – there will always be slightly different configurations of phones out there in the hands of real users. Most times, developers need to rely on vague bug reports submitted by users via reviews or emails. Thankfully, there is something we can do to lessen the burden of discovering where those issues are hiding in our apps: there are now tools we can use post-deployment that track usage and even point us right to the issues at hand. The tool I will be covering today is called Crashlytics.

Crashlytics is a plugin for Android and iOS that is added to your projects via your IDE of choice. You simply download the plugin, add it to your IDE, and then follow the instructions to tie your app to your Crashlytics account. That’s it! You’re then ready to begin receiving analytics for your app. If the app crashes, you get everything: device type, OS version, whether or not the phone is rooted, current memory usage, and much more. The crash itself is detailed with the offending line of code, where it’s located, and, if applicable, the exception thrown.
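
On Android, the in-app footprint is tiny. The sketch below assumes the Crashlytics SDK entry points of this era – Crashlytics.start(), Crashlytics.log(), and Crashlytics.logException() – so double-check the plugin’s generated instructions for your exact version; the reportHandledError helper is just an illustration, not part of the SDK.

    import android.app.Application;
    import com.crashlytics.android.Crashlytics;

    // Sketch of the basic Android integration, registered as the application class
    // in the manifest. Assumes the pre-Fabric Crashlytics SDK calls; consult the
    // plugin's generated instructions for the exact API in your version.
    public class MyApplication extends Application {

        @Override
        public void onCreate() {
            super.onCreate();
            // One-line startup hook: after this, uncaught crashes are reported automatically.
            Crashlytics.start(this);
        }

        // Illustrative helper: report a handled error so it shows up alongside crashes.
        public static void reportHandledError(Throwable t, String whatWasHappening) {
            Crashlytics.log(whatWasHappening);   // breadcrumb attached to the report
            Crashlytics.logException(t);         // non-fatal issue, grouped like a crash
        }
    }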

The detail given for issues discovered in apps is great, but it gets better. When Crashlytics receives a crash dump, it is categorized so the developer can easily sort issues. Crashes with more specificity than others are given higher priority, which means Crashlytics performs a level of issue triage for you. It will also lessen the severity of issues automatically if they stop happening for whatever reason, and you can explicitly close issues as fixed in Crashlytics once you address them. This can be extremely powerful when coupled with bug-base integration, which is another useful feature of Crashlytics.

Crashlytics can be integrated with many different bug-tracking systems, including Jira, Pivotal, and GitHub, among many others. In my experience, this is one of the most helpful features of Crashlytics. Once a crash is received, Crashlytics automatically creates a bug in your issue tracker of choice and populates it with the relevant information. It will then set the bug’s severity for you – based on the number of crashes detected – and keep it updated. This is extremely helpful and time-saving, and it relieves testers and developers of the burden of transferring issues from Crashlytics to the bug base and keeping them updated.

These are just some of the powerful features packed into this tool. Another large plus is that it became free after Crashlytics was acquired by Twitter – and they’ve promised to keep developing the tool and adding even more features. I hope I have convinced you that discovering and fixing issues post-deployment doesn’t have to be a chore. With the right tools, it can be a relatively easy experience that benefits users and developers alike.

Filed under Technology

Goodbye, Java. Hello, Scala!

Java has been my go-to language as a developer for quite some time now. TI-BASIC on my graphing calculator aside, it’s the first real programming language I learned. I used it all through college, and I’ve used it to create all sorts of cool things for a plethora of clients. As an object-oriented language, it gets the job done in a variety of situations. For a long time, I was content with Java and with always using an object-oriented mindset to solve problems. With classes, inheritance, and a book on design patterns, the world was mine for the taking – or so I thought.

Then I found functional programming. Like a ten-year-old who’s magically won a full-ride college scholarship simply by playing the claw game at the local mall, I didn’t have a full appreciation for what I had just been given. I started toying around with pure functional programming, using Clojure to solve problems on projecteuler.net (more on that in a later blog post). More recently, however, I was given the opportunity to use Scala on a project here at The Nerdery, and it’s made everything so much easier.

Before looking at several examples of exactly how Scala surpasses Java, it’s important to note that Scala, like Java, runs on the Java Virtual Machine (JVM). This means that Scala can run anywhere Java can run and can feed off of the numerous benefits of the JVM. Write once, run anywhere? Done. Garbage collection without even knowing what garbage collection is? Why not? Use the Java libraries that developers everywhere have come to depend on day to day? Not a problem; Scala can call Java code and vice versa. The list of benefits goes on, but let’s get to the examples.

Let’s assume, for the sake of consistency and simplicity, that we are writing an online store of some kind. Naturally, we need products to sell. Each product should have an id, a name, a price, and a creation date. We need to be able to compare products to each other for equality, and we need to be able to print/log products for the sake of debugging what’s in each property. In Java, your product model class might look something like this: Read more
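
The author’s full listing sits behind the “Read more” link, but as a rough stand-in, here is the kind of plain-Java model class the post is describing – the exact field names and types below are illustrative, not the original code.

    import java.math.BigDecimal;
    import java.util.Date;
    import java.util.Objects;

    // A plain-Java product model of the sort the post describes: four fields plus the
    // boilerplate needed for equality checks and readable logging. (Field choices here
    // are illustrative, not the author's actual class.)
    public final class Product {
        private final long id;
        private final String name;
        private final BigDecimal price;
        private final Date createdAt;

        public Product(long id, String name, BigDecimal price, Date createdAt) {
            this.id = id;
            this.name = name;
            this.price = price;
            this.createdAt = createdAt;
        }

        public long getId() { return id; }
        public String getName() { return name; }
        public BigDecimal getPrice() { return price; }
        public Date getCreatedAt() { return createdAt; }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Product)) return false;
            Product other = (Product) o;
            return id == other.id
                    && Objects.equals(name, other.name)
                    && Objects.equals(price, other.price)
                    && Objects.equals(createdAt, other.createdAt);
        }

        @Override
        public int hashCode() {
            return Objects.hash(id, name, price, createdAt);
        }

        @Override
        public String toString() {
            return "Product{id=" + id + ", name=" + name
                    + ", price=" + price + ", createdAt=" + createdAt + "}";
        }
    }

For comparison, the Scala equivalent is essentially a one-liner – case class Product(id: Long, name: String, price: BigDecimal, createdAt: Date) – because case classes generate the equality, hashing, and toString boilerplate for you, which is a big part of the post’s point.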

Filed under Technology

The Challenges of Testing Android

There comes a time in every QA engineer’s life when they are tasked with testing their first project on the Android platform. It sounds easy enough: just grab a few devices and follow the test plan. Unfortunately, it is not so simple. The constantly growing list of devices with a wide range of features, and the different versions of Android each supporting a varying set of those features, can seem very daunting at times. These two obstacles are only compounded by device manufacturers who, most often, do not ship stock versions of Android but instead ship versions modified for their hardware.

The Devices

Android devices come in all shapes and sizes, each with its own hardware profile, making it difficult to ensure an app’s look and feel is consistent for all Android users. An application may look pristine on a Nexus 5, but load it up on a Nexus One and pictures might overlap buttons and buttons might overlap text, creating a poor user experience. It is critical that we help the client select relevant and broadly supported Android devices during the scoping process, which can be tricky depending upon the application.

The Operating System

Three versions of Android currently (Feb. 2014) capture the largest market share: Jelly Bean, Ice Cream Sandwich, and Gingerbread. The newest version, KitKat, holds only a 1.8% market share. We must be mindful of features unavailable on the older versions of Android while testing applications. If an app has a scoped feature that is not possible on some devices, we must be sure it gracefully handles older versions of Android and does not crash or display any undesirable artifacts. The above-mentioned stumbling blocks are easily avoidable if we pick a good range of targeted devices to cover the appropriate versions of Android.

Compounding the Problem

There are some lesser-known issues that can be easily missed but would be disastrous for an end user. Most versions of Android installed on users’ devices are not stock – they are modified by the device manufacturer. As a result, testing an app on one device from one carrier might give you different results than testing the same app on the same device from a different carrier. Knowing this provides a buffer for these sorts of issues and ensures we are readily able to detect and squash those bugs.

Closing Thoughts

Consider these common problems as food for thought while testing on the Android platform. Google may be taking steps toward alleviating some of these common Android issues in future releases. First, it is rumored that Google will begin requiring device manufacturers to ship one of the two most recently released versions of Android or lose certification for Google apps. End users would definitely benefit from such a decision, but it would also be good for developers and QA engineers, and the move would lessen the fragmentation issues currently prevalent on the Android platform. Second, Google continues to provide new testing tools with each release of Android, making it easier to ensure apps are stable across a wide range of devices. Third parties are also making tools available for a wide range of testing needs, including automation tools that function differently from the native ones and allow testing across platforms, and services that put walls of devices running your app behind webcams so people can test across device types using online controls. Scripts and other automated tools are great for making sure nothing breaks during fixes and for ensuring a service is up and running. However, nothing will ever replace a QA engineer sitting down with a range of actual, physical devices, testing relentlessly, and finding bugs. Human intuition will always be the best tool.