Technology

Unity3D WebGL is in Beta, and it is SO good!

At GDC (Game Developer Conference) 2014 earlier this year, we were bombarded with a series of announcements. All of the top players were showing off their new technologies, and it was a whirlwind week. One of the most promising announcements was Unity’s promise that its game engine, Unity3D, would gain the ability to export to WebGL, which means the content you create could run in modern browsers without requiring a plug-in.

Before we get into the meat of those statements, let’s get you up to speed on a couple of things.

Filed under Technology

For security’s sake update WordPress to version 3.8.2

On April 8, 2014 WordPress released a security update to version 3.8.2. The announcement that accompanied the release states “this is an important security release for all previous versions and we strongly encourage you to update your sites immediately.”

WP 3.8.2 addresses two potentially serious security vulnerabilities, includes three security hardening changes, and addresses nine “other bugs.” Most notably the following security issues are addressed:

  • Potential authentication cookie forgery (CVE-2014-0166) – a very serious vulnerability.
  • Privilege escalation: prevent contributors from publishing posts (CVE-2014-0165).
  • Pass along additional information when processing pingbacks to help hosts identify potentially abusive requests.
  • Fix a low-impact SQL injection by trusted users.
  • Prevent possible cross-domain scripting through Plupload, the third-party library WordPress uses for uploading files.

Additionally, Jetpack – the feature-rich WordPress.com plugin suite – was updated to version 2.9.3 to address similar issues.

If your site is currently running a WordPress version below 3.8.2 or a Jetpack version below 2.9.3, you may be at risk and should upgrade as soon as possible.

Filed under Tech News, Technology

Heartbleed bug security alert: Your web server/data may be vulnerable – test your domains

On Monday evening, a security firm announced a new vulnerability in a key internet technology that can result in the disclosure of user passwords. This vulnerability is widespread and affects more than two-thirds of the web servers on the planet, including top-tier sites like Yahoo and Amazon. If you have a secure (HTTPS) website hosted on a Linux/Unix server running Apache, Nginx, or any other service using OpenSSL, you are likely vulnerable.

For a detailed breakdown of this vulnerability, please see this site. We urge you to assess your exposure immediately and reach out for help.

How can I see if my servers are vulnerable?

You can use the Heartbleed test site (http://filippo.io/Heartbleed/) to test your domains for the vulnerability. Enter the domain of your HTTPS website. If you get a red positive result, you are vulnerable.

In addition, you can execute the following command on your servers to see if they are running a vulnerable version of OpenSSL: openssl version -a

If the version returned is 1.0.1, and its build date is before April 7th, 2014, you are vulnerable.

How can I fix it if I am vulnerable?

You will need to obtain a patched version of OpenSSL and install it on all vulnerable servers. Updated packages should be available for Debian, RedHat, Ubuntu, and CentOS via their package managers. If a package is not available for your platform, you can recompile OpenSSL 1.0.1g from source, or recompile your current version with the -DOPENSSL_NO_HEARTBEATS flag, which disables the vulnerable heartbeat extension. After updating, restart any services that are using SSL and re-test your domain using the link above (http://filippo.io/Heartbleed/).

For information on your specific Linux distribution, see your distribution’s security announcements.

Additionally, you should strongly consider changing passwords and/or resetting SSL certificates, but only after OpenSSL has been updated.

What is the vulnerability?

With the vulnerability, called Heartbleed, attackers can obtain sensitive information from servers running certain versions of OpenSSL. Examples of sensitive information include private encryption keys for SSL certificates, usernames and passwords, SSH private keys on those servers, and more. Attackers who obtain the keys to your SSL certificates can then set up a man-in-the-middle attack between you and your customers and obtain secure information, such as credit card numbers and authentication credentials. The vulnerability was publicly disclosed Monday, 4/7/2014.

How can I get help to fix this problem?

If you have any questions, please contact us, or ping your own go-to Nerdery contact right away. We’ll help analyze your risk and protect your data. If The Nerdery can be a resource to you in any way, we will.

Filed under Tech News, Technology

Dashicons Make Your WordPress Dashboard Cooler

What Are Dashicons

On December 1, 2013, WordPress 3.8 – code name “Parker” – was released. One of the highlights of 3.8 was the re-skin of the WordPress admin, officially called the Dashboard. The re-skin got rid of the blue, grey, and gradient interface for a more modern and flat design. Included in the update were Dashicons, an icon font.

An icon font has all the benefits of being text and none of the downsides of being an image. Size, color, and pretty much anything else you can do with CSS can be applied to icon fonts.

There are several ways to use Dashicons inside the Dashboard. For this example I’ll be using a plugin, but you don’t have to. If you’re more of a functions.php person, it will work there too. I’ll also be skipping over the details of customizing WordPress and focus on Dashicon-specific code.

Set Up Base Plugin

I already have the plugin created, uploaded, and activated. You can see the final plugin on GitHub. For each section, I’ll only be highlighting the relevant code. The code could be included in your own plugin or the functions.php file by using the appropriate hooks and filters.

Filed under Technology

Software maintenance = strategically-planned evolution

Thinking of starting a maintenance program? Picking the right number of hours can make a big difference as to how your project works.

I’ve done a number of maintenance programs and each one is a little different; however, they all fall into one of three buckets: no hours, few hours, or lots of hours.

If you choose to do no hours – meaning you just call out of the blue when you want work done – or if you have very few hours, the main disadvantages come down to knowledge, developers, and timeline.

Ideally, one developer would be on your project to help with all future fixes. Having a developer with background on the project, code, and client relationship is beneficial. However, those developers are valuable. They will get put on another project if they aren’t busy and they will become unavailable for yours.

When you lose the developer, you also lose some of the knowledge. We’re still a team and still transfer knowledge from one developer to the next, but switching developers can reduce some efficiencies. We all do our best, but the new developer still has to learn all the ins and outs of the code.

And then there is timeline. If we can’t plan for your maintenance changes, you will have to wait until a developer becomes available. That could be a few days to a few weeks. It all depends on developer availability and the complexity of the changes.

Now, if you have a good amount of hours in your maintenance bucket, it’s easier to keep a developer on the project, retain knowledge, and plan for the work.

I’ve been working with one client for about three years now and I continue to lead the maintenance project, which is supporting the sites that we created. Every time they ask for something I know the project history,  I know what we’re talking about, and I know which developers actually built individual pieces of the site. Knowing all this makes it much easier to understand the request, estimate, and execute.

This client also has a good bucket of hours, which helps me plan for changes. I know that – based on the client budget per month – I need to spend x hours per week. Not only does this help me plan for development, but it also helps with setting expectations for the client. Our turnaround time on requests is usually a few days. We’ve gotten ourselves into a groove and it works well for the client and The Nerdery team.

What’s a good number of hours?

That’s a good question. How many changes do you plan on having? Do you think you’ll have some minor changes and updates? Or do you think you have some features you’d like developed during your maintenance program? Are you thinking of a handful of changes per month or do you have a long list already and it just keeps growing? It all depends on how much work you want done each month and how fast you want to get things completed.

For example, if you’re thinking 40 hours a month, one developer can plan for about six hours per week. Now, when other projects come up, that developer can safely plan for both projects. However, six hours per week also limits the amount of work the developer can get completed. More complex tasks may take a few weeks to complete.

Wait, six hours per week is only 24 hours a month. What about the other 16?

Another thing to consider when setting up maintenance hours is that the hours are not all development hours. You have to leave some time for project management and QA. There will be phone calls, email communications, and status reports, and we’ll want to run any change through QA to help ensure we didn’t fix one thing and break another.

Quite often people think that maintenance is just finding and fixing bugs, but it’s more than that. It’s also about keeping things updated, about enhancing existing features, and it’s about building out new features. It’s about ensuring your site/software continues to evolve and grow instead of just standing still. In technology, you’re either getting better or you’re falling behind. Standing still isn’t an option.

Filed under Technology

What is Android Wear, and Why Should You Care?

Google rocked boats recently by announcing Android Wear. “What is Android Wear?” you ask? It’s a specialized version of Android designed to run on wearable computers. So far we’ve seen two Android Wear devices slated for release in Q2 of 2014 – the square LG G Watch and the round Moto 360. These watches will pair with any Android handset running Android 4.3 or greater, a refreshing change from smart watches such as the Galaxy Gear, which restrict owners to pairing with the few compatible Galaxy devices. Both of the Android Wear devices publicly announced so far are considered “smart watches.” However, according to the lead designer of the Moto 360, the name “Wear” means more product form factors will be explored in the near future.

So, what is there to know about these devices?

Filed under Tech News, Technology

A Developer’s Perspective on the Whirlwind of Announcements from GDC 2014

Growing up with the game industry has truly been a great pleasure. One of the coolest things about my time in the industry has been the recent years of incredible growth and gaming’s emergence as a leader in entertainment. With that growth, conferences like E3, PAX, and GDC have only gotten bigger and crazier. GDC (Game Developer Conference) has a couple of different iterations (such as GDC Europe, GDC Asia, and GDC Next), but GDC ‘Prime’ (simply known as ‘GDC’) is where all the stops are pulled out and vendors show off their latest and greatest.

This year’s GDC just wrapped, and it was a whirlwind week. There is so much to talk about in the way of technology and game announcements, but the focus of this article is going to be on core game engines and virtual reality technology. So what happened at this conference that people should care about?

Filed under Tech News, Technology

Oculus Rift Experiment – Is Virtual Reality Ready for Business Applications?

Introduction to Oculus Rift

The Oculus Rift is a new Virtual Reality (VR) headset designed to provide a truly immersive experience, allowing you to step inside your favorite video game, movie, and more. The Oculus Rift has a wide field of view, high-resolution display, and ultra-low latency head tracking unlike any VR headset before it.

Nerdery Lab Program: Oculus Rift

Lab partners Chris Figueroa and Scott Bromander collaborated on this Oculus Rift experiment; their respective Lab Reports are below. The Nerdery Lab program is an opportunity for employees to submit ideas for passion projects demonstrating cutting-edge technologies.  Nerds whose ideas show the most potential are given a week to experiment and produce something to show to other Nerds and the world at large.

Lab Report from Nerdery Developer Chris Figueroa:

How is the Oculus Rift Different from Past Virtual Reality Headsets?

The first thing to know is that the Oculus Rift has a very wide field of view. Previously you would put on a VR headset and have tunnel vision. It didn’t feel like you were in the experience. This was critical because it’s called “Virtual Reality.” How can you feel like you are somewhere else if you just feel like you are watching a tiny screen inside of goggles?

The Oculus Rift puts you in the virtual world. You have a full 110-degree field of view, which has never before been used in Virtual Reality. When you put on the Oculus headset you immediately feel like you are in the virtual world. You can actually look up and down, and you can move your eyes slightly to see objects to the left and right. One of the key things about the Oculus is that you have peripheral vision, just like in real life.

Rapid Prototyping at its finest

The first thing you always do is get a sense of what the 3D world will feel like. Put placeholder blocks everywhere – blocks the size of the objects you will later put there. For example, the blocks you see below became rocks. We placed a block there so that when we put the VR headset on, we knew there would be something there.

[Screenshots: gray placeholder blocks standing in for rocks in the prototype scene]

Development Challenges

Developing for the Oculus Rift is a complete departure from developing video games, 3D movies, 3D graphics, or any other sort of media that involves 3D. You’ll quickly realize that things you create for the Oculus Rift can make people sick. Sometimes you won’t know what is making you sick – you just know something “feels wrong.” It’s a dilemma to have a very cool product that makes users sick because something on the screen moves wrong, or the UI is in their view, or textures look wrong in the 3D world – it can be any number of things. Below is what we encountered.

1. Don’t Be Tempted to Control Head Movement

In real life you choose to look at something. Advertisers have experience guiding someone’s eye to an object on a billboard with lines and colors, but in Virtual Reality you have to do that in 3D space. That adds a whole new element of complexity that very few people have experience with.

The easiest thing to do is just move the 3D camera so it points at something. What you don’t think about is that no one in real life has their head forced to look at something, so if you do it in Virtual Reality it literally can make people sick! It’s just ill-advised to make users ill.

2. User Interface vs World Space

The Oculus Rift wants you to feel like you’re experiencing real life. So how do you display information to users wearing the VR headset? The first thing people say is “Let’s just put information in the top-right corner to indicate something important the user needs to get through the experience.” This sounds completely normal and works for everything except Virtual Reality: putting something fixed in front of the user’s face will not only obstruct their view – it could also make them sick!

Rule of thumb that I learned from the Oculus Rift Founder:

“If it exists in space, it doesn’t go on your face.”

3. Development Kit Resolution

The first development kit for the Oculus Rift has very low resolution in each eye. When people first put the headset on they immediately say it’s low resolution. They are right, and it made development interesting, because 3D objects – their edges, colors, and lines – don’t look the same as they do on your computer screen. Sometimes fonts are completely unreadable.

Everything must be tested before a user tries the experience or they may miss out on whatever the 3D world is attempting to show them.

4. High Resolution Textures vs Low Resolution Textures

Most people who work with 3D content or movies without such restrictions know that higher resolution is better. The low resolution of the Oculus Rift made for some weird problems, because high-resolution textures actually looked worse than low-resolution textures. Even though people can look at a 3D rock and tell its texture is low resolution, it didn’t matter, because the high-resolution textures didn’t look anything like what we wanted them to be.

Programs I used for the Oculus Rift Project:

  • Unity3D – Game Engine used to interact with 3D environments
  • Oculus Rift Dev Kit 1
  • C# and C++ (Oculus SDK)
  • MonoDevelop – I write C# on a Mac with Unity3D
  • Blender 3D 2.69, with a Python transform plugin I made
  • Photoshop CS6

Lab Report from Nerdery Developer Scott Bromander:

Building 3D Models for the Oculus Rift

The process for this lab experiment was broken into two clear paths of work: the 3D modeling and the SDK (software development kit) engine work, which could happen simultaneously. We had to have 3D visual assets to actually put into the environment – much like drafting a website in Photoshop before slicing it up and styling it with HTML and CSS. The Oculus SDK work focused more on the environment and user interactions, and I took the placeholder objects in the environment and added in the realistic assets.

For my specific portion of this experiment, I handled the modeling of objects within the 3D experience. Since our goal was to create an example of a business application for a 3D simulator, I built a full-scale model of a residential house. Our experiment demonstrates how Oculus Rift could be used in visualizing a remodeling project, vacation planning, or property sales.

Building these real-world objects is a lot like sculpting with a block of clay. You start with nothing and use basic geometry to shape the object you would like to create. In this case, it was a house that started out looking very plain and very gray.

Typically, the real magic doesn’t come together until later in the 3D modeling process – you change the flat gray 3D object and give it a “skin,” called a texture. Texturing requires that you take that 3D model and break it down into a 2D image. Creating 3D objects follows a specific process to get the best results.

My Process

Plan and prep; build a pseudo schematic for what would be built; create a to-scale model; texture/refactor geometry.

Tools

I used 3D Studio Max to build out the front of the house, and I used measurement guides that I pre-created with basic geometry – in this case, a series of pre-measured planes for common measurements. I was then able to use those guides throughout the modeling process to speed things up.

Additionally, I used a lot of the data-entry features of 3DS Max to get exact measurements applied to certain components of the house. This ensured that the scale would be 100% accurate. Once it was modeled in 3DS Max to scale, we then came up with a conversion ratio to apply before bringing the model into Unity.

Finally, we optimized texture maps by including extra geometry for repeating textures (like in the siding and roof). The trick here was to plan for it while at the same time ensuring the scale was accurate. In this case, guides help a lot in slicing extra geometry.

Photoshop for texture generation

To create textures for the house, we used photos I snapped on the first day. One problem here: I didn’t set up the shots for texture use (lens settings), so there was a significant amount of cleanup work to be done. If you think about how we see things and how a lens captures images, it’s not a flat space but rather a little more spherical. So, using a combination of stretching, clone-stamp, and healing-brush techniques I’ve learned over the years, I was able to take this semi-spherized image and make it appear flattened out.

After those textures were created, we took a pass at creating bump and specular maps. While the final product of that work ultimately never made it into the final experiment, I did follow the process. In both cases, I used an industry-standard tool called Crazy Bump. The purpose of these types of “maps” is to create the look of additional geometry without actually adding it. Basically, these maps tell Unity how the light should respond when hitting the 3D object to give the effect of actual touchable texture. So if you get up close to the siding, for example, it has the ridges and look of real siding.

Had we more time, we’d have used Mental Ray texturing/lighting to give a more realistic look, and then baked that into the texture itself. This effectively would’ve taken all of these different maps, textures, and lighting situations and condensed them down into one texture. Next time.

Challenging Aspects

One of the challenging aspects of this project was deciding, from the early designs, what was important enough to get actual geometry vs. what could just be a texture. My initial thought was that if I was able to get close to these objects with the Oculus Rift on, I’d be able to catch a lot of the smaller details – planning for that and getting a little deeper into the geometry was on my radar from the get-go. Ultimately though, with the prototype version of the Oculus Rift having a lower resolution than planned for the final product, a lot of those details were lost.

Objects like the window frames, roof edging, and the other small details were part of the early process. You save a lot of time when you do this planning up front, but it’s more time-consuming to make particular changes after the fact. While it doesn’t take a lot of time to go back and add those details, knowing their placement and their measurements ahead of time really smooths the process.

New things that I learned

Important lesson: how to plan for the Oculus Rift, since it doesn’t fit into the usual project specifications. Having a higher polygon count to work with was freeing after several years of building for mobile and applying all of the efficiencies I’ve learned creating performant experiences for mobile. But I learned this maybe a little too late in the process, and it would have been great to include that extra headroom in my initial geometry budgets. Ultimately, the savings helped us when it came time to texture. All of this is the delicate balance of any 3D modeler, but it was interesting being on the other end of it, coming out of modeling 3D for mobile devices.

Things I’d have done differently in hindsight

I would have shifted my focus and time from the small details that didn’t translate as well, given the lower resolution of the prototype Oculus Rift that we were working with. I could have spent that time creating bolder visuals and texture maps.

Given more time, or for a next iteration or enhancement that would make for a better or more immersive experience, I’d also create more visually interesting texture maps, build out the interior of the house, and add more tweening-style animation – including more visually interesting interactions within the environment.

I’d like to have spent less time on the details in the 3D-modeling portion and a lot more time getting the textures to a place where they were vibrant and visually interesting within the setting that we ended up with. In any rapid 3D-model development, one needs to remember that it starts as a flat gray model. If you don’t plan to take the time to make the texture interesting, it will look like a flat gray 3D model. So having more time to go after the textures using some sort of baked Mental Ray set-up would have been awesome.

Which really brings me to what I would love to do in another iteration of the project: take the time to make textures that look extremely realistic, but do so in a way that utilizes the strengths of the Oculus Rift and the Unity engine – which is all a delicate balance of texture maps between 3DS Max and Unity, in conjunction with how they render on the Oculus Rift display. I think that would drive the “want” to interact with the environment more. Then, beyond the model, include more animation and feedback loops in the interaction as a whole.

I’d also include an animated user interface – developed in Flash and imported using a technology like UniSWF or Scaleform – that would be used to make decisions about the house’s finishings. Then, as the user makes those decisions with the interface, I’d include rewarding feedback in the environment itself – bushes sprouting out of the ground in a bubbly manner, or windows bouncing in place as you change their style. That sort of thing – the type of interaction feedback we’re used to seeing in game-like experiences, used here instead to heighten a product-configurator experience.

Again, next time – so that’s our Lab Report for this time.

Getting Better Analytics for Naughty Native Apps Using Crashlytics

Something most people don’t think about is what happens after someone presses the button to release an app into the wild. Development doesn’t always stop, issues crop up, and bugs happen. It doesn’t matter how much testing goes into an app or how many devices someone tested against. There will always be slightly different configurations of phones out there in the hands of real users. Most times, developers need to rely on vague bug reports submitted by users via reviews or emails. Thankfully there is something better we can do to lessen the burden of discovering where those issues are hiding in our apps. There are now tools we can use post-deployment that can track usage and even point us right to the issues at hand. The tool I will be covering today is called Crashlytics.

Crashlytics is a plugin for Android and iOS which is added to your projects via your IDE of choice. You simply download the plugin, add it to your IDE, and then follow the instructions to tie the app to your Crashlytics account. That’s it! You’re then ready to begin receiving analytics for your app. If the app crashes, you get everything from the device type and OS version to whether or not the phone is rooted, current memory usage, and much more. The crash itself is detailed with the offending line of code, where it’s located, and, if applicable, the exception thrown.
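
To give a sense of how small the code-side footprint is on Android, here is a rough Java sketch. The application class name and identifier below are hypothetical, and the exact calls vary by SDK version, so treat this as an illustration rather than the official setup:

    import android.app.Application;
    import com.crashlytics.android.Crashlytics;

    // Hypothetical application class; the real name comes from your project.
    public class MyApplication extends Application {

        @Override
        public void onCreate() {
            super.onCreate();

            // Start crash reporting; the IDE plugin handles the API key wiring.
            Crashlytics.start(this);

            // Optional breadcrumbs that ride along with any later crash report.
            Crashlytics.setUserIdentifier("user-1234"); // hypothetical identifier
            Crashlytics.log("Application started");
        }

        // Exceptions you catch yourself can be reported as non-fatal issues.
        public void reportHandledError(Throwable t) {
            Crashlytics.logException(t);
        }
    }

Once that hook is in place, uncaught crashes are reported automatically, and anything you logged or attached beforehand shows up alongside the report.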

The detail given for issues discovered in apps is great, but it gets better. When Crashlytics receives a crash dump, it is categorized so the developer can easily sort issues. Crashes with more specificity than others are given higher priority, which means Crashlytics performs a level-of-issue triage for you. It will also lessen the severity of issues automatically if they stop happening for whatever reason. You can also specifically close issues as fixed in Crashlytics once you address them. This can be extremely powerful when coupled with bug-base integration, which is another useful feature of Crashlytics.

Crashlytics can be integrated with many different bug-tracking systems, including Jira, Pivotal, and GitHub, among many others. In my experience this is one of the most helpful features of Crashlytics. Once a crash is received, Crashlytics automatically creates a bug in your issue tracker of choice and populates it with the relevant information. It will then set the bug’s severity for you – based on the number of crashes detected – and keep it updated. This is extremely helpful and time-saving. It takes the burden of transferring issues from Crashlytics to the bug base, and keeping them updated, off of testers and developers.

These are just some of the powerful features packed into this tool. Another large plus is that the tool became free after Twitter acquired Crashlytics – and they’ve promised to keep developing it and adding even more features. I hope I’ve convinced you that discovering and fixing issues post-deployment doesn’t have to be a chore. With the right tools, it can be a relatively easy experience that benefits users and developers alike.

Filed under Technology

Goodbye, Java. Hello, Scala!

As a developer, I’ve had Java as my go-to language for quite some time now. TI-Basic on my graphing calculator aside, it’s the first real programming language I learned. I used it all through college, and I’ve used it to create all sorts of cool things for a plethora of clients. As an object-oriented language, it gets the job done in a variety of situations. For a long period of time, I was content with Java and always using an object-oriented mindset to solve problems. With classes, inheritance, and a book on design patterns, the world was mine for the taking – or so I thought.

Then I found functional programming. Like a ten-year-old who’s magically won a full-ride college scholarship simply by playing the claw game at the local mall, I didn’t have a full appreciation for what I had just been given. I started toying around with pure functional programming using Clojure and solving problems on projecteuler.net (more on that in a later blog post). More recently, however, I was given the opportunity to use Scala on a project here at The Nerdery, and it’s made everything so much easier.

Let’s look at several examples of exactly how Scala surpasses Java. First off, it’s important to note that Scala, like Java, runs on the Java Virtual Machine (JVM). This means that Scala can run anywhere Java can run and can feed off of the numerous benefits of the JVM. Write once, run anywhere? Done. Garbage collection without even knowing what garbage collection is? Why not? Use Java libraries that developers everywhere have come to depend on day to day? Not a problem; Scala can call Java code and vice-versa. The list of benefits goes on, but let’s get to those examples.

Let’s assume, for the sake of consistency and simplicity, that we are writing an online store of some kind. Naturally, we need products to sell. Each product should have an id, a name, a price, and a creation date. We need to be able to compare products to each other for equality, and we need to be able to print/log products for the sake of debugging what’s in each property. In Java, your product model class might look something like this:
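
Something along these lines – the exact field names and types here are illustrative assumptions, but the shape of the boilerplate is the point:

    import java.math.BigDecimal;
    import java.util.Date;
    import java.util.Objects;

    // Illustrative Java product model: one constructor, four getters,
    // plus hand-written equals, hashCode, and toString.
    public class Product {

        private final long id;
        private final String name;
        private final BigDecimal price;
        private final Date created;

        public Product(long id, String name, BigDecimal price, Date created) {
            this.id = id;
            this.name = name;
            this.price = price;
            this.created = created;
        }

        public long getId() { return id; }
        public String getName() { return name; }
        public BigDecimal getPrice() { return price; }
        public Date getCreated() { return created; }

        // Needed so two products with the same data compare as equal.
        @Override
        public boolean equals(Object other) {
            if (this == other) return true;
            if (!(other instanceof Product)) return false;
            Product that = (Product) other;
            return id == that.id
                    && Objects.equals(name, that.name)
                    && Objects.equals(price, that.price)
                    && Objects.equals(created, that.created);
        }

        @Override
        public int hashCode() {
            return Objects.hash(id, name, price, created);
        }

        // Needed so logging a product shows its contents, not a memory address.
        @Override
        public String toString() {
            return "Product{id=" + id + ", name=" + name
                    + ", price=" + price + ", created=" + created + "}";
        }
    }

That’s four fields and a full screen of boilerplate. In Scala, a one-line case class with the same four fields gives you the constructor, accessors, equals, hashCode, and toString for free – which is exactly the kind of contrast this post is driving at.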

Filed under Technology