What is Android Wear, and Why Should You Care?

Google rocked boats recently by announcing Android Wear. “What is Android Wear?” you ask? It’s a specialized version of Android designed to run on wearable computers. So far, two Android Wear devices have been announced for release in Q2 of 2014 – the square LG G Watch and the round Moto 360. These watches will pair with any Android handset running Android 4.3 or greater, a refreshing change from smart watches such as the Galaxy Gear, which restrict owners to pairing with a handful of compatible Galaxy devices. Both of the Android Wear devices announced so far are considered “smart watches,” but according to the lead designer of the Moto 360, the name “Wear” signals that more form factors will be explored in the near future.

So what do we know about what these smart watches can do? We know they’ll do what all watches do – tell time – but there’s a lot more as well. Wear devices will have a voice-input button that triggers a launcher somewhat like Google Now.

Click the image to the right or follow this link for a quick animated example of the Android Wear user-interface.

They’ll also be able to display a variety of notifications to the user at the flick of a wrist. As app developers, we’ll be able to make these notifications deliver a user’s response back to the app on their phone. For example, we can present a notification from a messenger app that lets the user tap a button to open the associated app on the phone. There’s also a “Remote Input” feature that lets the user speak a message to the Wear device, which is then sent to the app on the phone.

Notifications are just the start. According to Google, down the road we’ll be able to do the following:

  • Create custom card layouts and run activities directly on wearables.
  • Send data and actions between a phone and a wearable with a data replication APIs and RPCs.
  • Gather sensor data and display it in real-time on Android wearables.
  • Register your app to handle voice actions, like “OK Google, take a note.”

What’s more, Google is working with an impressive list of hardware partners including Fossil, Samsung, HTC, Asus, and Intel. With all of the work they’re doing, one might wonder why they focused on notifications first. The most pressing reason is that this will affect every existing and upcoming Android device that offers notifications. Because this will affect so many apps, Google is trying to give us time to get our apps ready for the wrist. Regardless of whether your app was built with Wear in mind, users with Wear will be able to get your app’s notifications on their wrist. It’s in every app developer’s best interest to make sure that notifications are making their way to Wear.

Because this is so important for so many apps, we need to focus on how to interface with Android Wear correctly. Notifying users on their wrist is a powerful way to reach them, but it shouldn’t be abused. The goal is to give users the information they need right when they need it. Users don’t want to be spammed with too many notifications; the focus should be on maximizing signal and minimizing noise. For example, notifications shouldn’t vibrate unless they need the user’s urgent attention or action – Google’s examples are a time-based reminder or a message from a friend. Similarly, a notification shouldn’t make sound unless there’s a good reason to. The goal with Wear is to make notifications glanceable, which means doing things like collapsing multiple notifications into a more compact view. There are five priority buckets – Max, High, Default, Low, and Min – and it’s important to know how to use them correctly. For more information on designing great notifications, read The Official Wear Design Guidelines.

We’re only scratching the surface of cool things that can be done to display information quickly while brushing disruptions aside conveniently. I’m excited to see what we can come up with next.


Filed under Tech News, Technology

A Developer’s Perspective on the Whirlwind of Announcements from GDC 2014

Growing up with the game industry has truly been a great pleasure. One of the coolest things about my time in it has been the recent years of incredible growth and the industry’s emergence as a leader in entertainment. With that growth, conferences like E3, PAX, and GDC have only gotten bigger and crazier. GDC (the Game Developers Conference) has a few different iterations (such as GDC Europe, GDC Asia, and GDC Next), but GDC ‘Prime’ (known simply as ‘GDC’) is where all the stops are pulled out and vendors show off their latest and greatest.

This year’s GDC just wrapped, and it has been a whirlwind week. There is so much to talk about in the way of technology and game announcements, but the focus of this article will be on core game engines and virtual reality technology. Before I switch to that, a quick shout-out to Lucas Pope (@dukope) for pretty much sweeping the Independent Game Developer awards with his game ‘Papers, Please’. It’s great to see an amazing game recognized for its brilliance.

Two housekeeping items before I launch into full nerd mode here – two terms I would like to define for you. The first is “Game Engine.” Game engines are the final assembly point in the game-creation pipeline: it’s where you pull in all of your art assets, where you create your level scenarios, and where you code (‘script’) events to happen in the game. When selecting a game engine, a developer considers things like how the engine renders light, its support for dynamic visuals, and its cross-platform abilities. The second term I want to make you familiar with is “Virtual Reality.” Sure, you may have heard that term before and rolled your eyes at the very sound of it, but VR is making a resurgence in a massive way. Kickstarter birthed the Oculus Rift, a goggle set that puts the wearer into a game by placing two small displays right in front of their eyes in an oddly comfortable way (in a nutshell – I have not gone ‘full nerd’ here yet). Oculus put the ability to create a super immersive visual experience into the hands of many developers by selling purchasable developer kits and pairing the hardware with the Unity3D game engine, which is common in the game-development community as a whole.

Alright, so let’s go full nerd now. The week kicked off with Unity3D announcing the fifth iteration of its indie-affordable game engine. While Unity 5 was not released, it was announced in a grand way. Historically, there has been a divide between Unity and the “Triple A” game engines because of the type of developer they target and the resources required to make a great engine. Unity 5 promises some pretty impressive features, such as the ability to publish a full 3D game experience to the web via WebGL without requiring a plug-in. Also included are impressive real-time global illumination and physically based shaders – nerd speak for “gorgeous graphics” – narrowing the divide between Unity and the big guys.

Personally, I have had the opportunity to watch Unity grow from the four-person team I met in Austin, Texas at the historic GDC Online (which has since been scrapped in favor of the GDC Next conference held in LA). At the time, they were exclusive to the web through their plug-in, but they walked over to our booth as everyone was setting up and said to me, “Want to see 3D on a phone?” To which I replied, “No way!” Since then, they have built their technology to export to the web (through plug-ins), iOS, Android, Windows Mobile, and even BlackBerry. And now they have returned to their roots to make their engine capable of exporting to the web without the use of a plug-in, which has been something of a Holy Grail for game engines, given the current market.

Not to be outdone, the next day Epic announced Unreal Engine 4 – and its release. This is a product I have been talking about for almost two years, since they first started showing some impressive video of the development environment. While there were rumblings that it might be released to the game-development community, it certainly was not on my radar because I assumed it was just buzz talk to steal some of Unity’s momentum. So a few of us were stunned to see the word “Released” associated with Unreal 4. The engine features some crazy-impressive lighting and physics (more so than even the Unity 5 updates), but one of the most interesting parts of the showcase was Epic’s recent switch in how they present themselves.

Previously, Unreal had a somewhat confusing pricing model; it has now switched to $19/month plus a 5% revenue share, which is much more friendly to indie developers. If the mission was to offer a high-end, affordable option to the ever-growing indie game-development community, mission accomplished.

If you have been following my blog posts throughout the years, you know that another engine I often reference is Crytek’s CryEngine (we game developers get all the cool tool names!). This is the engine behind the gorgeous graphics of the Crysis and Far Cry series. While there were no super-exciting technology updates to this engine (which is still impressive, by the way), Crytek did switch over to an EaaS (engine as a service) model, undercutting Unreal significantly at $10/month with no revenue sharing. It will be interesting to track the disruption this has on Unity and Unreal users over the next year.

Finally in my engine discussion is something that I (along with many other people) was not expecting at all: the announcement of Ubisoft’s Snowdrop engine. This engine is about as impressive and beefy as they come. First showcased in the announcement of Tom Clancy’s ‘The Division’, the engine has flown relatively under the radar. When Ubisoft announced Snowdrop, it was unclear whether or not it would be made available to the open development community, but one of the release videos indicates that it may be after the release of the first game using it. The engine offers some crazy tools such as procedural geometry creation, along with procedural destruction, stunning volumetric lighting, and jaw-dropping dynamic material shaders (a personal favorite). While I’m a huge fan of game-development tools, I have never considered myself the guy to grab a tool the minute it is available, but I can tell you that if Ubisoft makes this one available, I am going to take a week off.

We now come to the virtual reality hardware portion of this blog post. This is easily one of the hardest things to discuss, because it is one of those “seeing is believing” topics; I cannot put into words what it is like to experience the current VR hardware. The Nerdery, however, is showcasing an Oculus Rift lab experiment that my teammate Chris Figueroa and I tackled using the Oculus Rift Developer Kit 1.

But the big news here is Sony’s announcement of ‘Project Morpheus’. While much of the community remained skeptical of Sony’s play to move into the VR space (given their track record of picking up and putting down different technologies), the results are actually rather impressive. The first generation of their development kit touts “bigger and better” specs than the first-generation Oculus Rift. That, coupled with support from engine creators like Unity and Epic, means Morpheus could make some waves. Initial reports from those who waited in line at GDC to give it a try are also promising.

But in typical GDC fashion, Oculus brought its response to the show, showing off a more polished version of the Crystal Cove prototype and announcing the second iteration of its developer kit. Overall, the technology is super impressive: in short, it tracks every movement the brain expects to see when moving the head, creating an even more realistic VR experience. Having used and developed for the Oculus Rift firsthand, I can tell you that the future of VR is very promising indeed.

To wrap this up, what happened at the conference is a promising nod to the game-development community as a whole, not just top-end developers. The tools being made available to newer developers are vast and great. It is this writer’s opinion that this shift in attention is due to the recent boom in indie game development (caused by many factors that are beyond the scope of this blog post). More and better tools available at a reasonable price point mean a lot of things: you will start to see really impressive titles released for your computers, PlayStation 4s, and Xbox Ones, and mobile technology will be pushed in ways you never thought possible.

But one of the things I am most excited about is how affordable these technologies are. I can’t wait to see what this does beyond the game market, and how these new impressive engines – paired with exciting and engaging virtual reality hardware – will change other experiences, like going to the museum, the zoo, or even how consumers make decisions about products. There will soon be a day when you can walk into a Home Depot, put on a VR headset, see your house loaded into a simulated experience, and make paint decisions based on how the light hits the wall at 5 p.m.


Filed under Tech News, Technology

Implementing responsive images – a worthwhile investment, says money/mouth in unison

Responsive images are hard. At least for now, anyway. The good news is that a community of incredibly smart people has been working hard at providing a solution to this problem.

So what’s the problem?

The simplest way I can explain this is to say that in order to support responsive images today, you need an image that is at least as large as the largest size at which it could be displayed. Say you’ve got an image that needs to display at 1600px wide on large screens and 320px wide on small screens. This means your base image would need to be 1600px wide to cover the range of sizes. This technique works, but at a great performance cost, as users with smaller screens will download a much larger file than necessary.
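
To make that trade-off concrete, here’s a minimal TypeScript sketch – not any particular library’s API; the candidate list, helper name, and CSS class are all hypothetical – of the selection logic that responsive-image proposals aim to move into the browser: given several pre-rendered widths of the same image, pick the smallest rendition that still covers the slot at the device’s pixel density.

```typescript
// Hypothetical candidate set: the same image exported at several widths.
interface ImageCandidate {
  url: string;
  width: number; // intrinsic width of this rendition, in pixels
}

const candidates: ImageCandidate[] = [
  { url: "hero-320.jpg", width: 320 },
  { url: "hero-800.jpg", width: 800 },
  { url: "hero-1600.jpg", width: 1600 },
];

// Pick the smallest rendition that still covers the slot at this device's
// pixel density; fall back to the largest one if nothing is big enough.
function pickSource(slotCssWidth: number, pixelRatio: number): ImageCandidate {
  const needed = slotCssWidth * pixelRatio;
  const sorted = [...candidates].sort((a, b) => a.width - b.width);
  return sorted.find((c) => c.width >= needed) ?? sorted[sorted.length - 1];
}

// Example: a full-width slot on a 320px-wide, 2x screen needs ~640px,
// so the 800px rendition is chosen instead of the 1600px one.
const img = document.querySelector<HTMLImageElement>("img.hero");
if (img) {
  const slotWidth = img.clientWidth || window.innerWidth;
  img.src = pickSource(slotWidth, window.devicePixelRatio || 1).url;
}
```

Doing this bookkeeping by hand in script is exactly the kind of work that srcset and the picture element are meant to make unnecessary.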

There are a number of solutions in the wild that attempt to address this problem. The best one I’ve seen is Picturefill – a responsive images approach that you can use today that mimics the proposed picture element.

Take a look at this video for an in-depth look at the problem and proposed solution.

In recent months, part of the solution – srcset – has been implemented in Chrome and Safari. And now it’s time to implement picture. At the Nerdery, we believe this is incredibly important work that will help front-end development take one more step forward. To that end, we’ve decided to support Yoav Weiss and Mat Marquis in their crusade and have donated to the Indiegogo campaign that will help fund the implementation of the picture element – we see it more as a thousand-dollar investment we can’t afford not to make.
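
Until native support is everywhere, one reasonable pattern is to feature-detect and load a polyfill such as Picturefill only in browsers that need it. A minimal TypeScript sketch, assuming you host the polyfill yourself (the script path below is a placeholder):

```typescript
// Detect native support for srcset and the picture element.
const supportsSrcset = "srcset" in document.createElement("img");
const supportsPicture = "HTMLPictureElement" in window;

if (!supportsSrcset || !supportsPicture) {
  // Load a polyfill only for browsers that lack native support.
  // The path is a placeholder; point it at wherever you host the script.
  const script = document.createElement("script");
  script.src = "/js/picturefill.min.js";
  script.async = true;
  document.head.appendChild(script);
}
```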

We’d also like to encourage you to donate at http://www.indiegogo.com/projects/picture-element-implementation-in-blink so the picture element can be a part of our collective future.


Filed under Uncategorized

iOS Code Quiz: Rotary Dial

This is the next challenge of a new series of activities. The goal of this series of challenges is to give iOS developers a chance to explore portions of the API that have a discrete, small scope and that are likely to be less understood.

The requirements for the challenge are as follows:

  1. Create a dial. The dial represents a value between 1 and 100.
  2. The dial should include a drag handle. When a user drags the drag handle, the dial should orient to track the user’s dragging gesture. Only when the user drags on the drag handle should the dial turn.
  3. The dial should also include two touchable regions. If you touch the region counter-clockwise from the handle, the dial should orient itself to the nearest step on the circle that is a whole 12th of the circle in that direction (see the sketch after this list). For example, if the dial were imagined as a clock and the minute hand was on 37 minutes after the hour, tapping that region would move the minute hand to 35 minutes after the hour. Touching it again at that point would move it to 30 minutes after the hour.
  4. Touching the region clockwise from the handle moves the dial in a similar fashion, but in the clockwise direction.
  5. When the dial’s value updates, display the current value of the dial in a UILabel on the screen. Round that value to the nearest whole number.
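
The snapping rule in requirements 3 and 4 is really just angle arithmetic, independent of any UI framework. Below is a minimal sketch of that math in TypeScript (the function names and the angle-to-value mapping are my own assumptions, not part of the challenge); the same logic ports directly to Objective-C or Swift.

```typescript
const STEP_DEGREES = 360 / 12; // one "whole 12th" of the circle = 30 degrees

// Snap an angle to the next 12th-of-a-circle step in the given direction.
// If the dial already sits exactly on a step, it moves one full step onward,
// matching the 37 -> 35 -> 30 minute example in requirement 3.
function snapToStep(angleDegrees: number, clockwise: boolean): number {
  const steps = angleDegrees / STEP_DEGREES;
  const onStep = Math.abs(steps - Math.round(steps)) < 1e-9;
  let snapped: number;
  if (clockwise) {
    snapped = onStep
      ? (Math.round(steps) + 1) * STEP_DEGREES
      : Math.ceil(steps) * STEP_DEGREES;
  } else {
    snapped = onStep
      ? (Math.round(steps) - 1) * STEP_DEGREES
      : Math.floor(steps) * STEP_DEGREES;
  }
  return ((snapped % 360) + 360) % 360; // keep the result in [0, 360)
}

// One possible mapping from dial angle to the 1-100 value (an assumption;
// the challenge leaves the exact mapping to the implementer).
function dialValue(angleDegrees: number): number {
  return Math.round(1 + (angleDegrees / 360) * 99);
}

// 37 minutes past the hour is 222 degrees; snapping counter-clockwise twice
// lands on 210 (35 minutes) and then 180 (30 minutes).
console.log(snapToStep(222, false)); // 210
console.log(snapToStep(210, false)); // 180
```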

Sample images for the specific hit areas have been included with the sample project. It is expected that hits will only register in the bounds of the ring, not in the whole rectangular bounds of the images. The final assembled dial should look like the following:

I’m making a contest of the challenge. Entries will be accepted until 4/11/2014. All entries will be evaluated by Ben Dolmar, and the best entry will be announced in the follow-up article. If I judge a tie to have occurred, an honorable mention will be given to each of the entrants. In addition, I’ll post my solution to the problem after that Friday and include an article explaining how I approached it.

To submit an entry, please do the following:

1. Fork the project from https://github.com/bdolmar/BRAVO.iOS.Challenge.05RotaryDial.
2. Post your files to your public fork on Github.
3. Send a pull request back to the original repository with your project by midnight on 4/11/2014.


Filed under Code Quiz

Oculus Rift Experiment – Is Virtual Reality Ready for Business Applications?

Introduction to Oculus Rift

The Oculus Rift is a new Virtual Reality (VR) headset designed to provide a truly immersive experience, allowing you to step inside your favorite video game, movie, and more. The Oculus Rift has a wide field of view, high-resolution display, and ultra-low latency head tracking unlike any VR headset before it.

Nerdery Lab Program: Oculus Rift


Lab partners Chris Figueroa and Scott Bromander collaborated on this Oculus Rift experiment; their respective Lab Reports are below. The Nerdery Lab program is an opportunity for employees to submit ideas for passion projects demonstrating cutting-edge technologies.  Nerds whose ideas show the most potential are given a week to experiment and produce something to show to other Nerds and the world at large.

Lab Report from Nerdery Developer Chris Figueroa:

How is the Oculus Rift Different from other Virtual Reality Headsets from the past?

The first thing to know is that the Oculus Rift has a very wide field of view. Previously, you would put on a VR headset and have tunnel vision; it didn’t feel like you were in the experience. This is critical because it’s called “Virtual Reality.” How can you feel like you are somewhere else if you just feel like you are watching a tiny screen inside a pair of goggles?

The Oculus Rift puts you in the virtual world. You have a full 110-degree field of view, which had never before been used in virtual reality. When you put on the Oculus headset you immediately feel like you are in the virtual world. You can look up and down, and just move your eyes slightly to see objects to the left and right. One of the key things about the Oculus is that you have peripheral vision, just like in real life.

Rapid Prototyping at its finest

The first thing you always do is get a sense of what the 3D world will feel like. Put placeholder blocks everywhere – blocks the size of the objects you will later put there. For example, the blocks you see below became rocks. We placed a block there so that when we put the VR headset on, we’d know there would be something there.

[Images: oculus1, oculus2]

Development Challenges

Developing for the Oculus Rift is a complete departure from developing video games, 3D movies, 3D graphics or any other sort of media that involves 3D. You’ll quickly realize that things you create can make people sick on the Oculus Rift. Sometimes you won’t know what is making you sick – you just know something “feels wrong.” It’s a dilemma to have a very cool product that makes users sick because something on the screen moves wrong, or the UI is in their view, or textures look wrong in the 3D world – it can be any number of things. Below is what we encountered.

1. Don’t Be Tempted to Control Head Movement

In real life you choose to look at something. Advertisers have experience drawing lines a certain way and using colors that guide someone’s eye to an object on a billboard, but with virtual reality you have to do that in 3D space. It adds a whole new element of complexity that is largely unheard of and that very few people have experience with.

The easiest thing to do is just move the 3D camera so it points at something. What you don’t think about is that no one in real life has their head forced to look at something, so if you do it in Virtual Reality it literally can make people sick! It’s just ill-advised to make users ill.

2. User Interface vs World Space

The Oculus Rift wants you to feel like you’re experiencing real life. So how do you display information to users wearing the VR headset? The first thing people say is “Let’s just put information in the top-right corner to indicate something important the user needs to get through the experience.” This sounds completely normal and works for everything except virtual reality – putting something in a fixed spot in front of the user’s face will not only obstruct their view, it could also make them sick!

Rule of thumb that I learned from the Oculus Rift Founder:

“If it exists in space, it doesn’t go on your face.”

3. Development Kit Resolution

The first development kit for the Oculus Rift has very low resolution in each eye. When people first put the headset on they will immediately say it’s low resolution. They are right and it was very interesting to work with because 3D objects and their edges, colors and lines don’t look the same as they do on your computer screen. Sometimes fonts are completely unreadable.

Everything must be tested before a user tries the experience or they may miss out on whatever the 3D world is attempting to show them.

4. High Resolution Textures vs Low Resolution Textures

Most people who work with 3D content or movies without restrictions know that higher resolution is better. The low resolution of the Oculus Rift made for some weird problems, because higher-resolution textures actually looked worse than low-resolution textures. Even though people can look at a 3D rock and know its texture is low resolution, it didn’t matter, because the high-resolution textures didn’t look anything like what you wanted them to.

Programs I used for the Oculus Rift Project:

  • Unity3D – Game Engine used to interact with 3D environments
  • Oculus Rift Dev Kit 1
  • C# and C++ (Oculus SDK)
  • MonoDevelop – I write C# on a Mac with Unity3D
  • Blender 3D 2.69 with a Python transform plugin I made
  • Photoshop CS6

Lab Report from Nerdery Developer Scott Bromander:

Building 3D Models for the Oculus Rift

The process for this lab experiment was broken into two clear paths of work: 3D modeling and SDK (software development kit) engine work. The two could happen simultaneously because we needed 3D visual assets to actually put into the environment, much like drafting a website in Photoshop before slicing it up and styling it with HTML and CSS. The Oculus SDK work focused on the environment and user interactions, while I took the placeholder objects in the environment and swapped in the realistic assets.

For my specific portion of this experiment, I handled the modeling of objects within the 3D experience. Since our goal was to create an example of a business application for a 3D simulator, I built a full-scale model of a residential house. Our experiment demonstrates how Oculus Rift could be used in visualizing a remodeling project, vacation planning, or property sales.

Building these real-world objects is a lot like sculpting with a block of clay. You start with nothing and use basic geometry to shape the object you would like to create. In this case, it was a house that started out looking very plain and very gray.

Typically in the 3D modeling process, the real magic doesn’t come together until later in the process – you change the flat gray 3D object and give it a “skin,” called a texture. Texturing requires that you take that 3D model and break it down into a 2D image. Creating 3D objects follows a specific process to get the best results.

My Process

Plan and prep; build a pseudo schematic for what would be built; create a to-scale model; texture/refactor geometry.

Tools

I used 3D Studio Max to build out the front of the house, along with measurement guides that I pre-created with basic geometry – in this case, a series of pre-measured planes for common measurements. I was then able to use those guides throughout the modeling process to speed things up.

Additionally, I used a lot of the data-entry features of 3DS Max to get exact measurements applied to certain components of the house. This ensured that the scale would be 100% accurate. Once it was modeled in 3DS Max to scale, we then came up with a conversion ratio to apply before bringing the model into Unity.

Finally, we optimized texture maps by including extra geometry for repeating textures (like in the siding and roof). The trick here was to plan for it while at the same time ensuring the scale was accurate. In this case, guides help a lot in slicing extra geometry.

Photoshop for texture generation

To create textures for the house, we used photos I snapped on the first day. One problem here: I didn’t set up the shots for texture use (lens settings), so there was a significant amount of cleanup work to be done. If you think about how we see things and how a lens captures images, it’s not in a flat space but rather a little more spherical. So, using a combination of stretching, clone-stamp, and healing-brush techniques I’ve learned over the years, I was able to take this semi-spherized image and make it appear flattened out.

After those textures were created, we took a pass at creating bump and specular maps. While the final product of that work ultimately never made it into the final experiment, I did follow the process. In both cases, I used an industry-standard tool called Crazy Bump. The purpose of these types of “maps” is to create the look of additional geometry without actually adding it. Basically, these maps tell Unity how the light should respond when hitting the 3D object to give the effect of actual touchable texture. So if you get up close to the siding, for example, it has the ridges and look of real siding.

Had we had more time, we’d have used Mental Ray texturing/lighting to give a more realistic look, and then baked that into the texture itself. This effectively would’ve taken all of these different maps, textures and lighting situations and condensed them down into one texture. Next time.

Challenging Aspects

One of the challenging aspects of this project was deciding what deserved actual geometry in the early designs – based on “what is important” – versus what could be handled with a texture. My initial thought was that if I could get close to these objects with the Oculus Rift on, I’d be able to catch a lot of the smaller details, so planning for that and going a little deeper in the geometry was on my radar from the get-go. Ultimately, though, with the prototype version of the Oculus Rift having a lower resolution than is planned for the final product, a lot of those details were lost.

Objects like the window frames, roof edging, and other small details were part of the early process. You save a lot of time when you do this planning up front, because making particular changes after the fact is more time-consuming. While it doesn’t take a lot of time to go back and add those details, knowing their placement and measurements ahead of time really smooths the process.

New things that I learned

Important lesson: how to plan for the Oculus Rift, since it doesn’t fit the usual project specifications. Having a higher polygon count to work with was freeing after several years of building for mobile and applying all of the efficiencies I’ve learned from creating performant mobile experiences. But I learned this maybe a little too late in the process, and it would have been great to include those in my initial geometry budgets. Ultimately, the savings helped us when it came time to texture. All of this is the delicate balance of any 3D modeler, but it was interesting being on the other end of it, coming out of modeling 3D for mobile devices.

Things I’d have done differently in hindsight

I would have shifted my focus and time from the small details that didn’t translate as well, given the lower resolution of the prototype Oculus Rift that we were working with. I could have spent that time creating bolder visuals and texture maps.

Given more time, or for a next iteration or enhancement that would make for a better or more immersive experience, I’d also create more visually interesting texture maps, build out the interior of the house, and add more tweening-style animation – including more visually interesting interactions within the environment.

I’d like to have spent less time on the details in the 3D-modeling portion and a lot more time getting the textures to a place where they were vibrant and visually interesting within the setting we ended up with. In any rapid 3D-model development, one needs to remember that it starts as a flat gray model. If you don’t plan to take the time to make the texture interesting, it will look like a flat gray 3D model. So having more time to go after the textures using some sort of baked Mental Ray setup would have been awesome.

Which really brings me to what I would love to do in another iteration of the project: take the time to make textures that look extremely realistic, but do so in a way that plays to the strengths of the Oculus Rift and the Unity engine – which is a delicate balance of texture maps between 3DS Max and Unity, in conjunction with how it all renders in the Oculus Rift display. I think that would drive the “want” to interact with the environment more. Then, beyond the model, I’d include more animation and feedback loops in the interaction as a whole.

I’d also include an animated user interface – developed in Flash and imported using a technology like UniSWF or Scaleform – that would be used to make decisions about the house’s finishings. Then, as the user makes those decisions with the interface, I’d include rewarding feedback in the environment itself – bushes sprouting out of the ground in a bubbly manner, or windows bouncing in place as you change their style. That sort of thing – the type of interaction feedback we are used to seeing in game-like experiences, used here to heighten a product-configurator experience.

Again, next time – so that’s our Lab Report for this time.


Maintaining Your Software For The Long Haul

In our webinar on software maintenance we discuss the importance of having a plan. Part of this plan is identifying and addressing the root causes of a problem, not just treating the symptoms. So, taking our own advice, we address the typical root causes of needing software maintenance. We specifically call out how we all make decisions in software development that will have an impact on the short-term and long-term lifespan of a project – which is how we define technical debt. Tune in to the webinar for a great conversation that can give you the context to better understand long-term support of a project.

View the Slides of this webinar on our SlideShare Page

Listen to the Q&A audio recording below:

(Running time: 13:15)



Hitting the Strategy Sweet Spot: The Importance of Business Discovery

The beginning of The Nerdery’s UX process can feel strange and unfamiliar to clients who have previously engaged in projects with similar vendors. Why? Because we ask a lot of questions – questions that dig into the roots of the company: operations, the reasons behind decisions, internal systems and tools, long-term goals. Although these topics may seem unrelated to the project at hand, they are the foundation and initial step in applying our holistic approach to a design project. After all, design is about everything. The Nerdery UX department has validated over time that its success is based on understanding its partners’ businesses (and their users). It is essential to develop this understanding because project success relies on taking the whole of the business into account.

When solving a UX problem holistically, the designer is looking for the sweet spot of success. In simple terms, the sweet spot is the solution in which business goals and user needs overlap. At a high level, failure to find the sweet spot comes in two main forms: genius design and admin affliction.

Sweet Spot

The risk in immediately jumping into a project without finding the sweet spot is that the solution focuses only on business goals and does not consider the end user in the design process. This disconnect is what we typically refer to as genius design – when a designer or engineer focuses on the business and existing design patterns, but users are not incorporated into the design process. A lack of user research and understanding quickly puts a project at high risk of missing user needs altogether, creating a product with no users and therefore no revenue. Note: genius design is a well-covered topic in previous Nerdery UX blog posts. For more information about the risks of ignoring your users, please refer to The Science of UX.

Projects are at risk of failure from the internal side even if user needs are well defined. A project that does not properly define a set of focused business goals and outline how the design will affect business operations leads to something I like to call “admin affliction.” Outcomes that result in admin affliction have negative effects on the company and can take a variety of forms, but they ultimately end in project failure. It is about more than knowing the goals; it is about having a holistic view of the company and taking all of its aspects into account.

All stakeholders should be aware of the potential risks in overlooking the important questions that arise during the discovery process. To mitigate admin affliction, our UX team believes strongly in engaging project stakeholders to define the company, its needs, and operations. For the remainder of this article, we will be focusing on a few of the risks and forms of admin affliction that can arise from the lack of communication and clarity between any design vendor and their clients.

Admin Affliction through Internal Tools and Systems
Your company uses a variety of tools and systems to accomplish everyday tasks. These systems and their functions have an impact on the design and development of your digital project, and vice versa. While the systems may never touch, it is important for us to identify what those other tools accomplish in order to avoid duplication and extra maintenance.

To illustrate, let’s use a hypothetical. During our project engagement with Company A, we have identified the client login as a major area for sales rep and client engagement. The design opportunity addresses the company’s goal to augment the client relationship.

It turns out that Company A has spent years creating a custom CRM that logs detailed information about the interactions each sales rep has with their client – each purchase and each opportunity. Imagine the project comes to completion without the opportunity to look at and ask questions about the CRM. The project is at great risk of creating more work and maintenance internally, and duplicating functionality that already exists in the robust tool used by the company. Do you really want your employees cursing the day the website caused them to maintain data in two places?

Admin Affliction through Internal Processes
Whether or not you are aware of the internal processes within your company, they exist for almost everything. They’re not always good, but they exist. Say, for example, you want to publish content to your website. Who writes it? Who approves it? Who puts it up there? Or what happens when a customer places an order over the phone? Is it entered into a system? Who fulfills the order? How do you know it has been fulfilled?

When doing a redesign, it is important to investigate the internal processes that the system may touch or change. Imagine that your project involves changing your current IT services system. In this scenario, the target users of the design would be employees of your company. Logically, one would research the issues and needs of those employees, but something that may be forgotten is how the change will affect the way IT employees receive and fulfill requests. They, no doubt, have developed a process around internal systems, from custom ticket-tracking fields in call-support software to documentation and training materials. By investigating that process, we can accommodate their needs and maintain efficiencies while addressing pain points on their side of the system. Conversely, by ignoring processes on the IT side, a redesign of the service system may result in a flood of IT requests that aren’t actionable because a key piece of information was never collected.

Admin Affliction through Company Culture & Branding
During stakeholder interviews and project kickoffs, we often ask about a company’s origin story, culture, values and their vision for the future. While this may seem odd, imagine if we went through the entire project without understanding how a company sees itself. The outcome would potentially be a solution that doesn’t represent who they are. Suddenly, their website might feel like a limb of the company that doesn’t belong to them!

Additionally, if we didn’t know how the company wants to grow, we could end up suggesting a platform or design that cuts an opportunity off at the knees, forcing the company to start from scratch when it’s ready to level up.

While this is just a short list of the types of admin affliction that can occur in a digital project, these examples illustrate a key point. Design is about users, but it also has a whole lot to do with the company. If we don’t understand who the stakeholders are, what the company does and where the business is going, we are setting the company up for failure. Fortunately, The Nerdery’s UX discovery process is designed to mitigate the risks that arise from admin affliction. My teachers always told me that smart kids ask questions, and that’s what we do (until we’re blue in the face) to set a project up for success.


Filed under Design, The UX Files

Building a Single Page Web Application with Knockout.js

Back in June of 2013 I was contacted by Packt Publishing to ask if I was interested in producing and publishing a video tutorial series, after they saw a video of a live presentation I gave here at The Nerdery. I was a little hesitant at first, but after some encouragement from Ryan Carlson, our Tech Evangelist, I went for it.

“Building a Single Page Web Application with Knockout.js” is a video tutorial series that guides the user through building a fully functional application using Knockout.js. The tutorial itself is aimed towards both back-end and front-end web developers with the assumption that the viewer has a basic level of familiarity with HTML/CSS/JS.

I designed the tutorial with the purpose of not only teaching the Knockout.js library, but also introducing software architectural elements that are helpful when building a full-blown single-page application. Knockout.js works really well with an architecture and structure more commonly seen in back-end development while using front-end technology, so the melding of the disciplines is something many aren’t used to. When people start out with Knockout, they often end up with architecture (or a lack thereof) that isn’t scalable in complexity. The tutorial we made therefore focuses on application architecture for scalable websites built with Knockout.js.
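
To give a flavor of that structure, here is a minimal sketch of a Knockout view model written in TypeScript (the class and property names are illustrative only and not taken from the tutorial; it assumes the knockout package and its typings are installed). Keeping state in observables and derived values in computeds is what keeps the bindings manageable as the application grows.

```typescript
import * as ko from "knockout";

// A small, self-contained view model: observable state plus a computed value.
class TaskListViewModel {
  newTaskTitle = ko.observable("");
  tasks = ko.observableArray<string>([]);

  // Derived state lives in one place instead of being recalculated in markup.
  summaryLabel = ko.computed(() => `${this.tasks().length} task(s) in the list`);

  // Arrow function keeps `this` bound when used as a click handler.
  addTask = () => {
    const title = this.newTaskTitle().trim();
    if (title.length > 0) {
      this.tasks.push(title);
      this.newTaskTitle("");
    }
  };
}

// Bind the view model to the page; the markup would use data-bind attributes
// such as value: newTaskTitle, click: addTask, and foreach: tasks.
ko.applyBindings(new TaskListViewModel());
```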

The production took much longer than anticipated, and other life events left me without enough time to finish producing the videos in a timely fashion. It was at this point that I reached out to my fellow Nerds, and Chris Black volunteered to help complete the endeavor. He did a fantastic job of recording and editing the videos to submit to the publisher. For anyone attempting a similar task, we found Camtasia to be a very useful tool.

You can find a sample video of the tutorial here.


Filed under Tech Tips

NerdCast #93: Developer Download Symfony Edition

Today on the Developer Download we are talking about the web framework Symfony. It’s a popular tool here at The Nerdery; in fact, it’s our go-to framework for custom PHP projects. Listen in as we talk with three Nerdery Symfony experts, covering what Symfony is, what it isn’t, news, libraries and more.

Host: Andrew Watson

Guests: Jansen Price, Matt Janssen, Doug Linsmeyer

Listen Now: Running Time: 0:26:52 / Subscribe on iTunes



Filed under NerdCast

Your Users are Yip Yips

Using UX tools to serve all your users. Yip, even the plush aliens.

By Emily Schmittler and Christopher Stephan, Senior UX Designers

As UX designers, we always want people to understand, benefit from and even enjoy the designed interfaces and experiences we’ve shaped for them. The clients and companies we work with feel the same way; however, where we often differ in approach is in how we do the work to get there. To many UX professionals the appropriate process involves engaging with and talking to members of the target-user population. Many companies assume the UX professionals they hire have built-in knowledge about their audience and don’t think spending time with users is necessary.

This thought – that we in UX have users “all worked out” – is a frequent cause for concern. Sure, we can perform expert reviews and analysis; we have principles like Fitts’s Law and “7 (+ or – 2),” and an often-involuntary, strong gut reaction to interfaces that are not as clear as possible. However, no set of models, rules, guidelines or physical reactions can ever effectively approximate a single human being, let alone a group of them. Humans are unpredictable, frustrating, willful and sometimes hilarious. In short, your users are Yip Yips.

Photo of two Yip Yips

These are Yip Yips. Yip yip yip yip uh-huh uh huh.


How are Yip Yips like users?

You’re likely wondering what comparison we can make between your customers, subscribers, visitors and contributors and these bug-eyed, plush alien rags with a lot of mouth that sound like ⅓ of every corporate teleconference you’ve ever witnessed. For that you’ll have to journey with us beyond the puppets to the puppets’ behavior. Let’s take a closer look at Yip Yips.

They’re not from around here

Yip Yips are aliens from a different planet. They are not like you and they do not respond how you might expect. Users don’t come from planet YourOrganization either, and just because you think your instructions are straightforward and your meanings plain, they might not mean anything to a creature from outside your intellectual ecosystem.

Their background and understanding is not like yours

Yip Yips don’t have your frame of reference, your cultural history, your idioms, jargon or catch-phrases. This means they cannot be expected to perceive your world in the same way as you, talk about it using the same words, or approach things in the same way.

They are smart and learn from their experiences

The Yip Yips are examining our world and forming reasonable, intelligent opinions based on their experiences. This means that they are not passively consuming what they see and hear; they’re finding ways to understand and engage. Users are also going to – consciously or not – try to understand your offering and message. They will make connections and meaning that are sensible and relevant to themselves.

They have goals and came for a reason

While not trying to actively conquer earth – so far as we know – the Yip Yips do seem to be here to learn. They are driven by curiosity and the need to learn and understand. Your users are going to have goals, too. Like the Yip Yips, we can’t be certain what caused them to visit or engage.

5 Things Yip Yips Teach Us

(In their own special way)

Some of what you are about to see you may have seen before. Not on PBS, but in the behavior of your users. While most of your users are likely not covered in colorful plush fur and have fewer antennae, they still do what can seem to be strange, random and unpredictable things.

Yip Yips Discover Earth

While the Yip Yips didn’t find Earth – their intended target – right away, they did learn a few new things during their exploration.

Lesson 1: A user’s experience is about the journey as well as the destination

In the process of looking for Earth, each new discovery was exciting and educational, making their experience an overall positive one. Users may not always know exactly what they are looking for, so it is important to identify what they find valuable, useful and entertaining to make sure that their journey is always positive. While you might have your main navigation nailed to perfection, users may need more to engage them and keep them returning to your site or application.

Your Project: The user who doesn’t know what they are looking for is just as important to the success of your design as the one who does. User research and an understanding of their behaviors can help determine how to support exploration and playfulness in these less-directed individuals.

Lesson 2: It’s good to be challenged, in some cases

Not everything has to be easy. The fact that the Yip Yips enjoy their journey to finding Earth even though it isn’t immediately successful demonstrates the joy individuals often find in being challenged. Clients commonly misconstrue UX to be about making things “easy to use.” Au contraire! A challenge can make an experience more enjoyable than something that is perceived as easy. Consider the design of a game: are easy games fun? Not really. UX therefore plays a very important role in determining the level of challenge that proves engaging. In these cases, it is essential to consult users via testing to determine an adequate level of difficulty – not so hard that it’s frustrating, and not so easy that it gets quickly discarded as boring.

Your Project: It’s okay to make things harder for a good reason. Just not too hard. Usability testing is a great way to test concepts that need the appropriate amount of difficulty.

Yip Yips Discover a Clock

Lesson 3: Even a detail can scare people away

Another common misconception about user research is that it’s nothing more than asking users what they want. In this example, the Yip Yips are presented with a clock. The clock has all the attributes they were expecting; however, it still scares them away resulting in an overall negative experience. While user research may seem aimed at discovering what users want, it more importantly uncovers why they want it – allowing the designer to make educated decisions on whether the requested feature is appropriate – and what would make the experience useful. Therefore, we employ research methods to get at the core of what users are experiencing so we can address the problem instead of offering temporary relief of the symptoms. Further, we test those ideas with those users to ensure that the intended outcome is met.

Your Project: Don’t focus discussions with users on what they want, instead talk about the problems they are facing to understand their viewpoint. Use the problem space to inform design decisions, then test for success.

Yip Yips Discover a Radio

Lesson 4: Users might not want what you’d expect

In this case, you could make the common-sense assumption that people have preferences for certain genres of music, but overall the experience of music is pleasant. The Yip Yips prove that a general understanding – or a common sense approach – is not enough to design an experience that is inherently pleasant. What if – like the Yip Yips – your users have totally different preferences than you would expect [and would rather listen to static]? This is a more common occurrence than you might expect.

Your Project: It is extremely important – especially with new product concepts – to research and test ideas with real users before making a large investment to develop an idea. By involving users, you can determine worthwhile features more quickly. There are many ways to involve users in the design process that go beyond asking what they want. Consider collaborative design sessions with your users, or methods that look at the problem space in a broader sense, like observation.

Yip Yips Discover a Fan

Lesson 5: Engagement doesn’t mean positive experience

Ever look at your site analytics and think, “Wow! People are spending a lot of time on our site. That’s great!” Unfortunately, you may be misreading your data; however, there is no way to prove it without diving deeper. In this video, the Yip Yips demonstrate how a user can be (or appear to be) engaged, but have a totally negative experience anyway in that they are blown across the room… twice!

Consider that sometimes users are obligated to use a system. Signing up for insurance, using a website for work, doing taxes are all examples where a user has a task that they must complete; therefore, they are likely to spend more time painstakingly interacting with a system they may have otherwise ditched when the unpleasantness began. In these cases, the analytics may look as if a user is engaged, but that’s not necessarily the whole story.

Your Project: Even if you have really robust analytics data that suggests positive or negative experiences, it is really important to dive deeper using qualitative research methods to ensure a successful redesign.

An Ecosystem Full of Yip Yips

So if users are like Yip Yips, what does that mean for how we think about them? It means we should be watching them a little more closely and trying to understand their motives and support their interests. Luckily, UX designers come primed with tools and methods to help you build that understanding and define where the overlap between business goals and user needs lies. And you don’t necessarily need to spend a fortune to do it. We can help you get to know your Yip Yips in a variety of ways within your budget.

Next time…

Now that we’ve told you why you need to think of your users as Yip Yips, stay tuned for some ideas about how to do research with your users and how to test the designs that your research has informed. Watch blog.nerdery.com for part two of the Yip Yip adventure!


Filed under Design, The UX Files