Tech Tips

Mapmaker, Mapmaker, Map Me a (Google) Map

[Image: Google Maps with the search field reading “Jane, stop this crazy thing”]

So you want to embed Google Maps in your website. Maybe you have a list of locations you want to display, or perhaps you need to provide directions or perform simple GIS operations. You’re a pragmatic person, so you don’t want to reinvent the wheel. You’ve settled on Google Maps as your mapping platform, acquired your API key, and you’re raring to go. Awesome! I’m so excited! Google Maps has great bang for the buck, and its API is well documented and easy enough to use. But there’s a downside: Google Maps has become the PowerPoint of cartography.

But all that’s just fine, because we can still do some great work with this tool. I’ve written some general tips that I’ve learned after making a few of those “Contact Us” and “Locations” pages you see, but they are far from prescriptive. You, dear reader, are the adult in the room. All of these tips should be taken with a grain of salt.

I’ve written this article from the perspective of someone who is familiar with JavaScript and DOM events, but I also hope it will raise important questions and gotchas for people who still want to have great maps and have chosen Google. This article should take about five minutes to read. Let’s get started!

You have options. Specifically, you have MapOptions.

Even a cursory glance over the documentation for MapOptions will be worth your time. This is Google Maps’ “Look what I can do!” section. Want a simple map without all the UI bureaucracy? Oh look, disableDefaultUI, right there! How nice! Most UX and UI tweaks can be done with careful and considered use of MapOptions. Experiment with a variety of configurations.
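As a starting point, here is a minimal sketch of a MapOptions object. The property names are the real Maps API options; the center coordinates, zoom level, and element id are illustrative placeholders:

```javascript
// Illustrative MapOptions: hide the default UI clutter, then selectively
// re-enable only the controls the design actually needs.
var mapOptions = {
    center: { lat: 44.98, lng: -93.27 }, // placeholder coordinates
    zoom: 13,
    disableDefaultUI: true, // start from a clean slate
    zoomControl: true       // bring back just the zoom control
};

// In the page itself you would then create the map with something like:
// var map = new google.maps.Map(document.getElementById('map'), mapOptions);
```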

Own your map. Own your Style.

[Image: various styled and colorized maps juxtaposed with Andy Warhol’s classic screen printing of Marilyn Monroe]

Google Maps v3 has Styled Maps. For a crash course, take a quick jaunt over to SnazzyMaps to get some inspiration. Note how their own map style reinforces their brand. If you really wanna get your hands dirty, though, you’ll have to learn Google’s Styled Maps from the ground up. The results are worth it. For the extra mile, you can style map markers and info boxes too. Be sure to check Google’s Styled Map Wizard. You can also try the Styled Map Colorizr. Don’t forget your color theory!
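Under the hood, a styled map is just an array of rules. A rough sketch follows; the featureType/elementType/stylers schema is the real Styled Maps format, but the specific colors and choices here are made up for illustration:

```javascript
// A hypothetical brand palette applied to base map geometry.
var brandStyles = [
    { featureType: 'water', elementType: 'geometry', stylers: [{ color: '#1a3c5e' }] },
    { featureType: 'road',  elementType: 'geometry', stylers: [{ lightness: 40 }] },
    { featureType: 'poi',   stylers: [{ visibility: 'off' }] } // hide points of interest
];

// Applied at map creation time via MapOptions:
// var map = new google.maps.Map(el, { styles: brandStyles /*, center, zoom, ... */ });
```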

Remove irrelevant labels.

[Image: a simple comparison of a map with and without labels]

If your labels compete with the data, get rid of them or find a way to quiet them. Do your users need to be told where the North Atlantic Ocean is?  Do you need to know the province names AND the city names? This isn’t to say that all labels should be removed, but hiding extraneous information will give your data more elbow room.  Don’t make your map be all things to all people; make it the right thing for your people.
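Quieting labels is itself just a style rule. One sketch of the approach: turn all labels off, then bring back only the ones your data needs. The rule schema is the real Styled Maps format; which labels to restore is your call, and the choice below is only an example:

```javascript
// Silence every label, then restore city names only.
var quietLabels = [
    { elementType: 'labels', stylers: [{ visibility: 'off' }] },
    { featureType: 'administrative.locality', elementType: 'labels',
      stylers: [{ visibility: 'on' }] }
];
```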

Stay on topic; constrain the view.

Hi. I have ADD. If a content manager takes about three hours to enter the geolocations of all 52 artisan waffle shops onto a map of North America, you can bet that within 10 seconds I’ll have already left the confines of North America and centered my map on Ouagadougou because OOH, BUTTERFLIES!

Stay on topic and stay the course.  Reducing scope also helps you focus your own optimization efforts. You’ll enjoy a reduced workload, and your user will enjoy an increased attention span.

Consider setting minimum and maximum zoom in the MapOptions object. As for constraining the viewport, you may have to write some custom code to stay on topic. If users need to view a particular location up close, it’s very easy to provide an external link to the full Google Maps site.
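One way to constrain the viewport is to clamp the map’s center back inside an allowed bounding box whenever it drifts out. The clamp itself is plain arithmetic; the wiring sketched in the comments uses the real v3 center_changed event and panTo method, while the bounds object shape and the function name are my own:

```javascript
// Clamp a center point into a bounding box (all values in degrees).
function clampCenter(center, bounds) {
    return {
        lat: Math.min(Math.max(center.lat, bounds.south), bounds.north),
        lng: Math.min(Math.max(center.lng, bounds.west), bounds.east)
    };
}

// Hypothetical hookup to a map instance:
// var northAmerica = { south: 15, north: 72, west: -168, east: -52 };
// google.maps.event.addListener(map, 'center_changed', function () {
//     var c = map.getCenter();
//     var clamped = clampCenter({ lat: c.lat(), lng: c.lng() }, northAmerica);
//     if (clamped.lat !== c.lat() || clamped.lng !== c.lng()) {
//         map.panTo(clamped); // snap back inside the allowed area
//     }
// });
```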

Consider clusters for large numbers of map markers.
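Hundreds of overlapping pins quickly become noise. Google’s MarkerClusterer utility library does the real work here; the idea underneath is simply to bucket nearby markers and draw one cluster marker per bucket. A toy grid-bucketing sketch to illustrate the concept (the function and field names are my own, not part of any API):

```javascript
// Group markers into grid cells of `gridSize` degrees; each cell becomes
// one cluster with a count, which would be rendered as a single marker.
function clusterByGrid(markers, gridSize) {
    var cells = {};
    markers.forEach(function (m) {
        var key = Math.floor(m.lat / gridSize) + ':' + Math.floor(m.lng / gridSize);
        (cells[key] = cells[key] || []).push(m);
    });
    return Object.keys(cells).map(function (key) {
        return { count: cells[key].length, markers: cells[key] };
    });
}
```

In practice, reach for MarkerClusterer rather than rolling your own; it also handles zoom levels and re-rendering as the viewport changes.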


Watch out for UX Gotchas!

Avoid full-bleed maps on mobile devices, as users may be unable to scroll past them. Leave small gutters on either side to give enough affordance to scroll freely.

Imagine a long blog post (like this one) with a map near the bottom. A smartphone user scrolls down, dragging a finger across the glass. Eventually the large map scrolls into view, and the scrolling momentum brings it fully into the viewport. The user intends to skip this content and continue scrolling down. They use the same gesture – they have no choice. GOTCHA! Instead of scrolling the document, the map captures the event and pans. If the map takes up the entire viewport, the user is effectively trapped. What an awful thing to happen! On mobile, be very, very careful with maps. Your users will reach Antarctica before they reach the footer.

A similar trap exists with mouse scrolling on desktops: the map captures the wheel event and zooms instead of letting the page scroll, which can be frustrating. Add a zoom control if your design calls for it, but consider disabling scroll-to-zoom. Again, pay attention to MapOptions.
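The relevant switches live in MapOptions. A sketch of options worth considering for a map embedded in a long page – the property names are real v3 MapOptions, but the isMobile flag is a placeholder for whatever device detection you prefer:

```javascript
var isMobile = false; // placeholder: detect touch devices however you like

var calmOptions = {
    scrollwheel: false,   // don't hijack mouse-wheel scrolling
    zoomControl: true,    // offer an explicit +/- control instead
    draggable: !isMobile  // on touch devices, let page scrolling win
};
```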

Maps are for everyone. Support Localization.


Google has a wide range of supported languages for its Maps API, but you may need to tweak your JavaScript tags to enable them. Not only are labels properly localized, but so are controls and even the routing directions. If you support internationalized content, this little tweak is definitely worth your time.
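The language (and, optionally, region) is set on the API script tag itself. The language and region parameters are real; “fr”/“FR” below are just examples, and YOUR_KEY is a placeholder:

```html
<script src="https://maps.googleapis.com/maps/api/js?key=YOUR_KEY&language=fr&region=FR"></script>
```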

There is life outside of Google Maps.

[Image: Mercator projection comparing the apparent size of Greenland with that of the continent of Africa]

Google Maps is an excellent tool, but it may not always be the right tool for the job. Ask yourself what it is you want to show. Scale matters. If you want to show people how to get to your locations, then Google Maps is often an appropriate tool.  If you want to show that you have an international presence, then Google Maps’ default Web Mercator projection may not be… politically correct.

Without falling too deep into the cartographic projection rabbit hole, do consider using the Winkel Tripel or Robinson projections instead of Web Mercator when your data is presented at the global level. These projections hew closer to the way the world actually looks – note the size of Greenland compared to the continent of Africa. However, Google Maps JavaScript API v3 is strongly oriented around Web Mercator.

Let’s not be too hard on Google Maps. It’s a round planet. Your monitor is flat. The math dictates that something has to give. Suffice it to say that the eternal battle of conformal projections against equal-area projections may seem a bit academic to people who just wanna know how to get to Denny’s.

If you love data but you hate Mercator projections like I do, then may I suggest that these problems may require you to consider tools outside of Google Maps? Because I just did. Technologies such as D3.js, OpenLayers, and dedicated GIS software may need to be employed. But then you can be the coolest kid at the party.

Building a Single Page Web Application with Knockout.js

Back in June of 2013 I was contacted by Packt Publishing asking if I was interested in producing and publishing a video tutorial series, after they saw a video of a live presentation I gave here at The Nerdery. I was a little hesitant at first, but after some encouragement from Ryan Carlson, our Tech Evangelist, I went for it.

“Building a Single Page Web Application with Knockout.js” is a video tutorial series that guides the user through building a fully functional application using Knockout.js. The tutorial itself is aimed towards both back-end and front-end web developers with the assumption that the viewer has a basic level of familiarity with HTML/CSS/JS.

I designed the tutorial not only to teach the Knockout.js library, but also to introduce software architectural elements that are helpful when building a full-blown single page application. Knockout.js works really well with an architecture and structure more commonly seen in back-end development, while using front-end technology, so the melding of the disciplines is something many aren’t used to. When people start out with Knockout, they often end up with architecture (or a lack thereof) that isn’t scalable in complexity. The tutorial therefore focuses on application architecture for scalable applications built with Knockout.js.

The production took much longer than anticipated, and other life events left me without enough time to finish producing the videos in a timely fashion. It was at this point that I reached out to my fellow nerds, and Chris Black volunteered to help complete the endeavor. He did a fantastic job of recording and editing the videos to submit to the publisher. For anyone attempting a similar task, we found Camtasia to be a very useful tool.

You can find a sample video of the tutorial here.

Filed under Tech Tips

The Challenges of Testing Android

There comes a time in every QA Engineer’s life when they are tasked with testing their first project on the Android platform. This sounds like an easy enough task: just grab a few devices and follow the test plan. Unfortunately, it is not so easy. The constantly growing list of devices with a wide range of features, and the different versions of Android all supporting a varying set of those features, can seem very daunting at times. These two obstacles are only compounded by device manufacturers who, more often than not, ship not stock versions of Android but versions modified for their devices.

The Devices

Android devices come in all shapes and sizes, each with their own hardware profiles, making it difficult to ensure an app’s look and feel is consistent for all Android users. An application may look pristine on a Nexus 5, but load it up on a Nexus One and pictures might overlap buttons and buttons might overlap text creating a poor user experience. It is critical we help the client select relevant and broadly supported Android devices during the scoping process, which can be tricky depending upon the application.

The Operating System

Three versions of Android currently (Feb. 2014) capture the largest market share: Jelly Bean, Ice Cream Sandwich, and Gingerbread. The newest version, KitKat, holds only a 1.8% market share. We must be mindful of features unavailable on older versions of Android while testing applications. If an app has a scoped feature not possible on some devices, we must be sure it gracefully handles older versions of Android and does not crash or display any undesirable artifacts. These stumbling blocks are easily avoidable if we pick a good range of target devices to cover the appropriate versions of Android.

Compounding the Problem

There are some lesser-known issues that can be easily missed, but would be disastrous for an end user. Most versions of Android installed on a user’s device are not stock – they are modified by the device manufacturer. As a result, testing an app on one device from one carrier might give you different results than testing the app on the same device from a different carrier. Knowing this can provide a buffer for these sorts of issues and ensure we are readily able to detect and squash those bugs.

Closing Thoughts

Consider these common problems as food for thought while testing on the Android platform. Google may be taking steps toward alleviating some of them in future releases. First, it is rumored they will begin requiring device manufacturers to ship one of the two most recent versions of Android or lose certification to use Google Apps. End users would definitely benefit from such a decision, but it would also be good for developers and QA Engineers, since it would lessen the fragmentation issues currently prevalent on the platform. Second, Google continues to provide new testing tools with each new release of Android, making it easier to ensure apps are stable across a wide range of devices. Third parties are also making tools available for a wide range of testing, including automation tools that function differently from the native ones and allow testing across platforms, and even walls of physical devices running your app, viewed through webcams and driven by online controls. Scripts and other automated tools are great for making sure nothing breaks during fixes and for ensuring a service is up and running. However, nothing will ever replace a QA Engineer getting down to it with a range of actual, physical devices and finding bugs. A human’s intuition will always be best.

Manage Localization Strings for Mobile Apps with Twine

Managing strings between iOS, Android and the web is prone to error and hard to manage. When you introduce multiple languages, even more headaches arise.

Twine allows you to keep your localized strings in a central location from which they can easily be exported to iOS, Android and the web. Twine is great for any multi-platform or multi-language application; it keeps your strings in sync. On a project I’m currently working on, we’re using Twine to consolidate and manage five languages across three different platforms. It’s saved us time and helped cut down on the cost of translations.

Getting Started:
Before you begin, make sure that you don’t have any hard-coded strings in your application. This is the first step in localization for any platform and a best practice for mobile development. Make sure to back up your existing strings. Next, you’ll need to install Twine. You can find instructions here:

After you’ve installed Twine, create a strings.txt file that will be used for storing strings. Twine is able to parse existing string files and write them back to your master data file. Entries are grouped under top-level [[section]] categories to organize your strings; each key carries tags attributes for selecting the target platform(s) and comment attributes to give context. For example:


[[General]]
    [off]
        en = Off
        tags = ios,android,web
        comment = Off
        de = AUS seit
        es = Apagado
        fr = Arrêt
        nl = Uit
    [on]
        en = On
        tags = ios,android,web
        comment = On
        de = An seit
        es = Encendido
        fr = Marche
        nl = Aan
    [edit]
        en = Edit
        tags = ios,android,web
        comment = Edit
        de = Bearbeiten
        es = Editar
        fr = Modifier
        nl = Bewerken
    [done]
        en = Done
        tags = ios,android,web
        comment = Done
        de = Fertig
        es = listo
        fr = Terminé
        nl = Klaar

The “tags” field is case sensitive and should not include white space. Keep this in mind when generating your strings file.


Incorrect:

tags = ios, Android, web

Correct:

tags = ios,android,web

Output Folders:
Running the Twine command will generate your strings files and put them in the correct subdirectories: for iOS, the Locales folder; for Android, the res folder. You will need to create the corresponding lproj (iOS) and values (Android) folders for each language before running Twine. As long as the folders exist, Twine will automatically overwrite the strings files.

Running Twine:
To export for Android, put the strings.txt file in your project directory and run the following command. Note: this will replace your existing strings.xml files. Make sure to back up your strings and write them back to the master data file before running this command for the first time.

twine generate-all-string-files "$PROJECT_DIR/strings.txt" "$PROJECT_DIR/res" --tags android

The command to export for iOS is very similar.

twine generate-all-string-files "$PROJECT_DIR/strings.txt" "$PROJECT_DIR/$PROJECT_NAME/Locales/" --tags ios

Updating Strings:
When you have new text or translations, update your strings.txt file and re-run the commands; all of the string files within your apps will be updated. My preferred editor for viewing strings.txt files is TextMate. Built-in text editors like Notepad and TextEdit can have problems opening large string files.

We have used Twine on a number of projects with great success. It’s made managing strings and translations significantly easier. Anyone who is managing strings across platforms or supporting multiple languages should look into Twine.


Filed under Tech Tips

Project Euler made easy thanks to Clojure

I’ve always enjoyed math. It’s concise and to the point. When you get an answer, you don’t have to consider “what the author was thinking when he wrote the problem.” Either you’re right, or you’re wrong. In fact, it’s partly due to my love of math that I decided to pursue a career as a software developer.

So naturally, when I discovered Project Euler, a site designed for those who enjoy writing programs to solve challenging math problems, I was all over it. Unfortunately, I only knew basic Java at the time, so I was rather limited not only in how I could solve problems, but also in how I could reason about them. After learning Clojure, a functional programming language for the Java Virtual Machine (JVM), I decided to give Project Euler another shot. And I’m glad that I did. Solving problems on Project Euler is significantly easier in Clojure than it is in Java. I don’t mean to pick on Java or even the object-oriented paradigm. Rather, I intend to show how a functional programming language such as Clojure allows you to solve and reason about problems in ways that are difficult or impossible in procedural/object-oriented languages.

Let’s start with the first problem on Project Euler as an example:

If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.

This problem is pretty straightforward. In Java, we would likely do the following:

long total = 0;
// for each integer between 3 and 1000
for (int i = 3; i < 1000; i++) {
    // if i is divisible by 3 or 5, then add it to our total
    if (i % 3 == 0 || i % 5 == 0) {
        total += i;
    }
}

In Clojure, your solution will look much different. First, let’s generate our data set:

;Call the range function to generate a lazy sequence of all
;of the integers between 3 and 999 (inclusive)
(range 3 1000)

Now, we need to filter that data set to only include numbers that are divisible by 3 or 5. We don’t have a function for checking for divisibility, but we can write one:

(defn divisible? "Is n divisible by x?" [n x]
  (zero? (mod n x)))

This will create a function called “divisible?” that accepts two integers, n and x, and checks to see if n is divisible by x. Next, we’ll need to create a function that will tell us if a given integer is divisible by either 3 or 5. Since we will only use this function for the duration of this problem and then likely never need it again, we can make it an anonymous function. The “#” indicates that the following form is an anonymous function and the “%” will represent the first parameter passed to the function:

#(or (divisible? % 3) (divisible? % 5))

Now, we can use our anonymous function above against all of the integers between 3 and 999 as a filter so that we will be left with a sequence of the numbers we actually care about:

(filter #(or (divisible? % 3) (divisible? % 5)) (range 3 1000))

After that, we simply need to reduce our data set using the “+” function. This will give us the sum we are looking for. In the end, our solution looks like this:

(defn divisible? "Is n divisible by x?" [n x] (zero? (mod n x)))

(reduce + (filter #(or (divisible? % 3) (divisible? % 5)) (range 3 1000)))

So that’s it. We’ve just solved the first problem, but more than likely, you’re not convinced. If you’re accustomed to object-oriented languages like Java, then you probably see nothing wrong with the Java solution to the previous problem. And you wouldn’t be wrong; after all, it does produce the correct answer in a reasonable amount of time. But that was just a warm-up problem. More difficult problems will hit the limits of Java and procedural/object-oriented programming, so let’s up the difficulty and look at problem number two:

Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:

1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...

By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.

In Java, our solution definitely involves recursion. We’ll need to keep track of our two most recent Fibonacci numbers and a running sum. Our solution likely looks something like this:

public static long numberTwo(int fib1, int fib2, long sum) {
    if (fib2 < 4000000) {
        int nextFib = fib1 + fib2;
        if (nextFib % 2 == 0) {
            sum += nextFib;
        }
        return numberTwo(fib2, nextFib, sum);
    } else {
        return sum;
    }
}

Then we simply need to call this function with the following arguments to get the result we’re looking for:

System.out.println(numberTwo(1, 2, 2)); // seed the sum with the even term 2

Now, let’s look at a solution to the same problem except using Clojure. Our approach is going to be much different than it was with Java. Instead of creating the Fibonacci sequence while simultaneously calculating our sum, we’re just going to create a lazy sequence of the Fibonacci sequence and keep pulling values from it until we have everything we need:

(def fib-seq      ;Define the variable fib-seq to hold our lazy sequence
  ((fn rfib [a b] ;Define a recursive function for calculating Fibonacci values
    (lazy-seq     ;Define a lazy sequence.

      ;Creates a new sequence consisting of the value of 'a'
      ;followed by the result of calling rfib
      (cons a
        ; call rfib again with 'b' and 'a + b'
        (rfib b (+ a b)))))
  ;Use 0 and 1 as the first set of values to our Fibonacci sequence.
  0 1))

Now we have a lazy sequence that will calculate only as many values of the Fibonacci sequence as we need. The rest of this is easy. First, we’re going to drop the first two values (since we only care about the second ‘1’ and the ‘2’ value). Next, we will take only the values which are less than 4,000,000. After that, we simply need to filter and reduce like we did with the previous problem. Our solution ends up looking like this:

(def fib-seq ((fn rfib [a b] (lazy-seq (cons a (rfib b (+ a b))))) 0 1))
(reduce + (filter even? (take-while #(< % 4000000) (drop 2 fib-seq))))

Our Clojure solution has a couple of advantages over our Java solution:
1. Our Clojure solution is very clear about what we are trying to do. With our Java solution, our problem-solving logic is mixed in with our recursion which can make debugging very difficult.
2. Our Clojure solution will be reusable if we ever run into the Fibonacci sequence again. Of course, you could rewrite the Java solution to first create a list of all of the Fibonacci values we care about, and that might work. However, with Clojure constructs such as take, drop-while, lazy-seq, and the myriad of other core functions, we get a lot of functionality out of the box that we don’t have with Java. We could build it out ourselves, but that would involve extra work and debugging that I don’t care to perform, and even then it wouldn’t be as flexible as what Clojure gives us.

So far, we’ve only considered problems that are straightforward and fairly simple. However, the problems on Project Euler grow progressively more difficult.

Consider problem 31:

In England the currency is made up of pound, £, and pence, p, and there are eight coins in general circulation:

1p, 2p, 5p, 10p, 20p, 50p, £1 (100p) and £2 (200p).
It is possible to make £2 in the following way:

1×£1 + 1×50p + 2×20p + 1×5p + 1×2p + 3×1p
How many different ways can £2 be made using any number of coins?

Problems like this are common in computer science. With both Java and Clojure, our solution will involve looping through multiple scenarios while only counting those which are valid. However, Clojure wins again with its “memoize” construct. Memoize essentially works as an in-memory cache, allowing you to cache each combination you’ve tested so that you avoid running the same calculations multiple times. Again, you could write functionality to do this in Java, but I get it out of the box with Clojure and would rather not waste my time.

Additionally, we haven’t even touched on several of the other advantages of Clojure such as tail recursion, immutability, and macros. We’ll save those for a future blog post.

Solving problems that involve complicated operations across large, complex sets of data is exactly where functional programming shines. While many of these problems can be solved using a procedural or object-oriented language, they are best left to functional programming languages. I challenge you to pick up a functional programming language (such as Clojure) and start solving problems on Project Euler. As you solve problems, consider the advantages and disadvantages of using a procedural or object-oriented language such as Java versus a functional programming language such as Clojure. But be forewarned: after you start solving problems in a functional programming language, you’ll likely have a hard time going back to anything else.

The Moment I Realized Testing and QA Are Entirely Different Things

I’ve been a web developer for many years and I didn’t really know what “QA” was before I came to the Nerdery. I did a little browser testing sometimes during the later stages of development or after my project was live. More often than not I either didn’t have the time or wasn’t being paid enough to do specific “Quality Assurance” on the sites I built. In all honesty, the result was obvious.

When I sent my first project to QA at the Nerdery I was nervous and excited. I had felt very thorough, clicking through the site trying to see what they would find; I’d checked the site on a bunch of browsers; I’d read the Test Plan – we call them “TPS” reports – and felt like my bases were covered. By that time I’d heard tales of the tenacity of our Quality Assurance department, but I was confident I’d done my due diligence. Then the first “Ticket” came in…

113 tickets later, my notion of professional web development was changed forever. It was humbling and exhilarating to read the “Tickets” that were submitted. They were insightful, comprehensive, user-focused critiques of the project I was sure I’d nailed. They saw things I didn’t and brought them to my attention with detailed precision. They evaluated every part of the website, whether I thought it was relevant or not, and reported exactly what someone – not me – would experience. They took the time to test and retest until everything – EVERYTHING – was correct.

Through this process it was evident that our QA Engineers had a deep understanding and enduring passion for “assuring” extraordinary user experiences on the web. Their patience, attention, and creativity were a perfect reflection of the planning and design effort put forth at the beginning of the project. The result was obvious.

In our industry we elevate the designers, user experience teams, and the developers as the artists and engineers in the interactive space, and we should because we are those things. But after design and development wind down a whole new group is there to make sure everything is done, done well, and done right. 113 tickets later and I was a raving fan of our Nerdery QA team. They are the ones that are the truly indispensable artists and engineers in the interactive space.

Filed under Tech Tips, Technology

ASP MVC and the Dreaded “..operation could not be completed” Error

Recently I was happily developing an ASP MVC website in Visual Studio 2013, when seemingly without warning my development came to a screeching halt. I attempted to open any *.cshtml file in my solution and was greeted with the most helpful of errors:

[Image: the unhelpful “operation could not be completed” error dialog]
Thanks, Visual Studio. Ahem.

Catch some sarcasm there? I hope so. It’s as if my copy of Visual Studio had become possessed with the worst kind of evil for a developer – horribly unspecific and passive-aggressive (I might be projecting that last part). No indication of what was actually wrong, and no clear path to resolving the issue. All I knew was that I couldn’t get to the files I needed to work on for my project.

My first attempt at resolution was what any self-respecting Windows-based developer would do: restarting the software. No luck with that one. My memory is clouded with the haze of confusion, so I don’t know if I tried building the website solution. If I had, I would have gotten an error message that would have pointed me in the right direction as to what was actually going on:

“Application Configuration file ‘Web.config’ is invalid. Unexpected XML declaration. The XML declaration must be the first node in the document, and no white space characters are allowed to appear before it.”

Whitespace at the beginning of web.config? I didn’t do that. Sure I had web.config open, but I didn’t mess with the beginning of the file and it’s since been closed. Double-clicking on the error would have confirmed what I was thinking:

No space there, kids!

Vindication would have been, or surely was mine. Wait. VS was still winning; I still couldn’t open my files. So I did the next thing any self-respecting developer would do: googled the error. Turns out I’m not alone on this one, but there’s no clear answer as to the resolution. Except, I did find at least one post mentioning the whitespace thing. Maybe VS isn’t so crazy.

So I took another look at web.config, deciding to humor what is surely an unclear error message. It looks like I’m at the top of the file, but I pushed my way up the file anyway. Jackpot!



Turns out, the error message was telling me exactly what was wrong: I had inadvertently put a newline at the beginning of web.config! I don’t know how it happened; perhaps I sneezed and hit the enter key at one point. At any rate, removing the newline solved my original problem: I was able to open my *.cshtml files without issue.

So Visual Studio, I’m sorry I didn’t listen to you better. You did try to tell me what was going on. Let’s both agree to work on our communication skills, okay?

Filed under Tech Tips

Tech Tip: Core and Shell Architecture to Improve Testability

[Image: sample slide from the video, reading “integration tests are a scam”]

I watched a very interesting video on creating a “functional” core focused on logic and values, surrounded by an imperative/OO shell that performs “destructive” (read: mutating) actions such as I/O. He starts by defining the problem: unit tests in isolation can lead to a false sense of security where your tests pass but the actual system fails, because of a failure to properly mock dependencies (i.e., your mock doesn’t behave like the real dependency). And, as he says, “integration tests are a scam” – they don’t scale, for example, due to the number of possible branches that need to be tested.

His solution is to separate the application into (multiple) functional cores surrounded by an imperative/OO shell. The functional core deals with values and logic and doesn’t mutate data; it’s full of paths but has few or no dependencies. The shell mutates data but has minimal paths. Unit tests are used against the core; integration tests against the shell. This approach has the additional benefit of making the core scale easily, due to the loose coupling of components through immutable values.

He comes from a Ruby background, so he dismisses static typing, another potential solution that resolves many (albeit only the simplest) mocking issues. I’d be interested in other reactions to this concept. How would you adapt it to .NET/MVC or device development, for example? Have you used anything similar to this? If so, what’s your experience?

Watch the video:

Filed under Tech Tips

Ruby on Rails: Migrating string to text and back again

Recently I ran into an issue when attempting to change a column’s datatype in a Postgres database. I was working on a project where a description field had been previously created with a character limit (the standard :string, limit: 255). The client needed more space in the field, so I decided to migrate it to an unlimited :text field.

You’d think this sort of migration would be pretty simple, right?
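For reference, the "simple" version of such a migration would look something like the sketch below. The table and class names (`posts`, `ChangeDescriptionToText`) are hypothetical stand-ins, not taken from the project in question.

```ruby
# Hypothetical Rails migration: widen posts.description from a
# limited string to unlimited text, with a reversible down step.
class ChangeDescriptionToText < ActiveRecord::Migration
  def up
    change_column :posts, :description, :text
  end

  def down
    # Rolling back re-imposes the limit; any values longer than
    # 255 characters would be truncated or rejected by Postgres.
    change_column :posts, :description, :string, limit: 255
  end
end
```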

Read more

A Developer's How-To Guide: Kinect Hello World

Kinect Hello World

This Hello World example sets out to demonstrate, in C#/WPF, how to reference the Kinect libraries, find an available Kinect device connected to the PC, and track the user's right wrist position. The code for this example is available here.

First, you will need to add a reference to Microsoft.Kinect.dll.

The Microsoft.Kinect namespace provides all of the base-level APIs you need to talk to the Kinect. However, it does not provide an elegant way of handling a Kinect that may not always be connected to the app. The Microsoft.Kinect.Toolkit library, available in binary or source form, makes this easier.

Microsoft.Kinect.Toolkit Binary

The Kinect Toolkit binaries are installed by default to C:\Program Files\Microsoft SDKs\Kinect\Developer Toolkit v1.8.0\Assemblies. You can reference them through Visual Studio as shown below.

Alternatively, if you want to modify how the libraries work, follow this pattern to download them from the Kinect Toolkit Browser and bring them into your project.

To add the Microsoft.Kinect.Toolkit library source, you will need to "Install" the source from the Kinect Toolkit Browser.

After clicking "Install," you can select the folder where your solution lives.

From there, add the Toolkit project to your solution, then reference it from your main project.

Now that the basics are completed, it’s time to write some code.

Getting a Reference to an Active Kinect

The Microsoft.Kinect.Toolkit.KinectSensorChooserUI class is a WPF control you can use in your application to indicate to the user whether a Kinect is active or there was a problem detecting one. This control is coupled with Microsoft.Kinect.Toolkit.KinectSensorChooser, which does all the heavy lifting of letting your app know when a Kinect is activated or deactivated. The KinectSensorChooserUI control exposes a KinectSensorChooser property. You can bind this property to a viewmodel or set it in code-behind. This example uses code-behind, as it requires fewer files to demonstrate the code.

To set up the KinectSensorChooserUI, you first need to add the Toolkit namespace to your XAML Window declaration. The following is taken from the example project's MainWindow.xaml.
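The original snippet appears to have been lost in formatting; a minimal reconstruction of the Window declaration might look like the following (the class name `KinectHelloWorld.MainWindow` and the window dimensions are assumptions, not taken from the original project):

```xml
<Window x:Class="KinectHelloWorld.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:toolkit="clr-namespace:Microsoft.Kinect.Toolkit;assembly=Microsoft.Kinect.Toolkit"
        Title="Kinect Hello World" Height="350" Width="525">
```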

From there, you can put the control in your interface wherever you like. Along with the KinectSensorChooserUI control, this example also adds placeholders for the wrist's X, Y, and Z positions.
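A sketch of what that markup could look like is below. The control name `KinectChooser` and the `XValue`/`YValue`/`ZValue` TextBlock names match those used in the code-behind; the layout itself (a Grid with a StackPanel) is an assumption for illustration:

```xml
<Grid>
    <!-- Shows the user whether a Kinect is connected and working -->
    <toolkit:KinectSensorChooserUI x:Name="KinectChooser"
                                   HorizontalAlignment="Center"
                                   VerticalAlignment="Top" />
    <!-- Placeholders for the tracked wrist position -->
    <StackPanel Orientation="Horizontal"
                HorizontalAlignment="Center" VerticalAlignment="Center">
        <TextBlock Text="X: " /><TextBlock x:Name="XValue" Text="0" Margin="0,0,10,0" />
        <TextBlock Text="Y: " /><TextBlock x:Name="YValue" Text="0" Margin="0,0,10,0" />
        <TextBlock Text="Z: " /><TextBlock x:Name="ZValue" Text="0" />
    </StackPanel>
</Grid>
```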


After setting up the XAML, continue into the code-behind; in our example this is MainWindow.xaml.cs. To make the KinectSensorChooserUI actually start checking for active Kinects, it needs to be supplied with a KinectSensorChooser instance. It is best to do this after the Window has loaded, so the user sees a visual indicator right away if a Kinect is found. In the constructor of the page, add Loaded += MainWindowLoaded;. The MainWindowLoaded event listener creates a new KinectSensorChooser instance and assigns it to the KinectSensorChooserUI.KinectSensorChooser property. The KinectSensorChooser also has a KinectChanged event that needs to be listened to. Finally, the KinectSensorChooser instance has a Start() method that tells it to begin checking for active Kinects. The code below is taken from the example app.

/// <summary>
/// Upon the WPF Window loading, start up the Kinect Chooser to find a Kinect to get data from
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void MainWindowLoaded(object sender, RoutedEventArgs e)
{
    var chooser = new KinectSensorChooser();
    chooser.KinectChanged += KinectSensorChooserKinectChanged;
    KinectChooser.KinectSensorChooser = chooser;
    chooser.Start(); //Initialize the Chooser to find a Kinect to use
}
The KinectSensorChooserKinectChanged event listener fires whenever a new Kinect is found. It gets a reference to the Kinect and starts listening for Skeleton data.

/// <summary>
/// Fired when the KinectSensorChooser finds a new active Kinect to stream from
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void KinectSensorChooserKinectChanged(object sender, KinectChangedEventArgs e)
{
    if (Kinect != null) //We are getting a new Kinect reference; if we already have one, stop listening to it
        Kinect.SkeletonFrameReady -= KinectSkeletonFrameReady;
    Kinect = e.NewSensor;
    if (Kinect == null) //No Kinect is active to capture data from
        return;
    Kinect.SkeletonStream.Enable(); //Start streaming Skeleton data
    Kinect.SkeletonFrameReady += KinectSkeletonFrameReady;
}

Now a Kinect has been referenced and told to start producing Skeleton data. The Kinect can track multiple users at once, so each new set of Skeleton data (delivered at up to 30 frames per second) arrives with an instance for every Skeleton being tracked. It takes a little bit of setup and ceremony to get at the Skeleton data. The code below, taken from the example project, reads the Skeleton data and writes it to the UI.

/// <summary>
///     Fires whenever the current Kinect gets Skeleton data
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void KinectSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    var skeletons = new Skeleton[0];
    using (var skeletonFrame = e.OpenSkeletonFrame()) //Get the current frame of Skeleton data
    {
        if (skeletonFrame != null)
        {
            //Size the Skeleton array to hold the current number of skeletons in the frame
            skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];
            skeletonFrame.CopySkeletonDataTo(skeletons);
        }
    }
    if (skeletons.Length == 0) //If we have no skeletons in the frame, no more processing is needed
    {
        return;
    }
    //Get the first Skeleton that is actually being tracked.
    //Note: This is for demonstration purposes; there could be more than one user being tracked.
    var skel = skeletons.FirstOrDefault(x => x.TrackingState == SkeletonTrackingState.Tracked);
    if (skel == null)
    {
        return;
    }
    //We have a Skeleton; use the JointType enumeration to get the Right Wrist, which can be used to track the Right Hand.
    var rightHand = skel.Joints[JointType.WristRight];
    XValue.Text = rightHand.Position.X.ToString(CultureInfo.InvariantCulture);
    YValue.Text = rightHand.Position.Y.ToString(CultureInfo.InvariantCulture);
    ZValue.Text = rightHand.Position.Z.ToString(CultureInfo.InvariantCulture);
}
Filed under Tech Tips