Tech Tips

Apache Configuration for Testing WordPress REST API on Secured Sites

It’s not uncommon to encounter a few roadblocks during a project, and the typical next step is a quick Google search for the answer. Unfortunately, there are occasions when we are on our own with a unique problem. In this case we had to roll up our sleeves and work out the answer ourselves. We hope this helps the next person looking for it.

“I spent a bit of time reading documentation and testing and getting increasingly frustrated.”

I ran into an interesting problem this week. I have a staging site in active development that needs to remain behind a firewall, but we plan to use the WordPress REST API to serve content from the site to iOS and Android apps. Unfortunately, for the API to work… Read more

Filed under Tech Tips

Features Most Likely to Break When Upgrading to iOS 8 and What to Plan For

An experienced quality assurance (QA) engineer will have their spidey-senses tingling with every announcement of a new OS version, hardware refresh, or browser update. These are all good things for innovation; they just mean we all need to be ready for launch day by starting to plan today. Read more

DataImportHandler: Recreating problems to discover their root

When a client asked me to address the performance problems with their Solr full-imports (9+ hours), I knew I was going to have to put on my computer-detective hat.

First, I needed to understand the current import process and environment. Their import process used a Solr contrib library called DataImportHandler. This library allows an end user to configure queries to pull data from various endpoints, which will then be imported into the Solr index. It supports a variety of data endpoints, such as Solr indexes, XML files – or in this client’s case, a database. The DataImportHandler defines two types of imports, full and delta. The full-import query should be written to pull all the data required for the index; the delta-import query should pull only the data that has changed since the last import was run.
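As an illustrative sketch (the entity, table, and column names here are hypothetical, not the client's), a DataImportHandler entity pairs the two query types like this:

```xml
<!-- Full import pulls every row; delta import pulls only rows changed
     since the last run, using DataImportHandler's built-in variables. -->
<entity name="item"
        query="select * from items"
        deltaQuery="select id from items
                    where last_modified > '${dataimporter.last_index_time}'"
        deltaImportQuery="select * from items
                          where id = '${dih.delta.id}'" />
```

The full-import query runs standalone; the delta pair first finds changed ids, then re-fetches just those rows.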

Once I understood the basics of the DataImportHandler, I needed to reproduce the problem. The DataImportHandler has a status page which displays – in real-time – the number of requests made to the datasource, the number of rows fetched from the datasource, the number of processed records, and the elapsed time.  I had the client start a full-import and monitored this page.

I knew that a full-import would take 9+ hours and the end result would be slightly more than 6 million records in the Solr index, but the number of fetches to the datasource and the number of rows fetched were trending up significantly faster than the number of records processed. By the end, there had been over 24 million requests to the datasource and over 31 million rows fetched from the datasource. Obviously, this was a significant source of the 9+ hour full-import time, but it wasn’t clear why so many queries and fetches were being made.

With a possible source of the performance problems identified, I needed to look at the queries that were being used by the DataImportHandler. I dug into the configuration files and found this:

<entity name="z" query="select * from a">
    <entity name="y" query="select * from b WHERE a_id='${z.id}'" />
    <entity name="x" query="select * from c WHERE a_id='${z.id}'" />
    <entity name="u" query="select * from d WHERE a_id='${z.id}'" />
    <entity name="t" query="select * from e WHERE a_id='${z.id}'" />
</entity>

A quick Google search confirmed my fear: for each record in the outer query, each of the inner queries was run once. I did a quick count of the number of records in the outer query, which was slightly more than six million. Multiply that by the number of inner queries, four, and you get 24 million requests to the datasource. Both of those numbers were in line with the results from the status page, though the number of rows fetched from the datasource was still off for an unexplained reason. A quick review of the schema showed that both the “t” and “x” entities were multivalued, which means the database could return more than one record for them. This accounted for the 31 million rows fetched from the datasource.

I now had an explanation for the numbers I was seeing, but I still hadn’t confirmed that the DataImportHandler was bottlenecking on the database queries. Unfortunately, there wasn’t a good way to determine this. The best idea I came up with was to convert the database to another format that the DataImportHandler would read faster, but it was going to take a non-trivial amount of work to set that up. I settled on using a combination of “SHOW PROCESSLIST” on the MySQL server and strace to monitor the import while it ran. By the end of the day, the problem was obvious: the DataImportHandler was spending most of its time waiting for the database to send data after each query.

With the quantity of queries representing the majority of the full-import run time, I began researching alternative ways of fetching the data. That’s when I found the CachedSqlEntityProcessor. This processor would fetch all the records for each entity, then stitch them together on the fly. In my example, it would reduce the number of requests to the datasource to 5! I immediately rewrote the entities to use this processor and started up another full-import.
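As a sketch of that rewrite (entity and column names follow the anonymized example above, and exact attribute names can vary by Solr version), each child entity gets the processor plus a cache key/lookup pair:

```xml
<entity name="z" query="select * from a">
    <!-- Each child entity is now fetched with ONE query; its rows are
         cached and joined to the parent in memory via cacheKey/cacheLookup. -->
    <entity name="y" query="select * from b"
            processor="CachedSqlEntityProcessor"
            cacheKey="a_id" cacheLookup="z.id" />
    <entity name="x" query="select * from c"
            processor="CachedSqlEntityProcessor"
            cacheKey="a_id" cacheLookup="z.id" />
</entity>
```

The same pattern applies to the remaining entities, bringing the total down to one query per entity.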

Three hours later, the import was done. A 66% improvement was satisfying, but the client was hoping for something closer to an hour. That meant it was time to search for bottlenecks again. Another strace test showed the biggest offender was still waiting for data from the database, so I focused on tuning the MySQL server. Unfortunately, this was a mistake: I hadn’t thought critically about what the bottleneck actually was. No matter what I did to the MySQL settings, I never saw more than a 1% improvement in the run time.

After a significant amount of frustration, I realized the actual problem: disk I/O. It turns out that running a MySQL server on EC2/EBS with the default settings is a terrible idea for a number of reasons.

* By default, EC2 internet and EBS traffic occur on the same network. Thankfully, a simple setting, EBS Optimized, can be flipped to give EC2 a dedicated connection to EBS volumes.
* Standard EBS volumes are not suited for sustained load. There is no guarantee of IOPS, so they can fluctuate significantly. Thankfully, you can create provisioned IOPS EBS volumes, which allow you to define what kind of performance you need.
* Provisioned IOPS EBS volumes still have maximum IOPS values, so there is an upper bound on what a single volume can deliver.

The best solution would have been to move the database into Amazon RDS, which fixes most of these problems, but it was too big a change to be made quickly. Instead, we settled on making both the Solr and database servers EBS optimized and setting up provisioned IOPS EBS volumes. After playing with some IOPS values, we settled on 2000.
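For reference, both changes can be scripted with the AWS CLI. This is only a sketch — the instance ID, availability zone, and volume size below are placeholders, not values from this project:

```shell
# Give the instance a dedicated network connection to its EBS volumes
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --ebs-optimized

# Create a provisioned-IOPS (io1) volume at the 2000 IOPS we settled on
aws ec2 create-volume --availability-zone us-east-1a --size 500 \
    --volume-type io1 --iops 2000
```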

It was finally time to run a complete test with all the changes in place. It had taken weeks of my time, so the anticipation was killing me. The final full-import time was 35 minutes.

Filed under Tech Tips

Mapmaker, Mapmaker, Map Me a (Google) Map

A picture of google maps with the search field saying Jane stop this crazy thing

So you want to embed Google Maps in your website. Maybe you have a list of locations you want to display, or perhaps you need to provide directions or perform simple GIS operations. You’re a pragmatic person, so you don’t want to reinvent the wheel. You’ve settled on Google Maps as your mapping platform, acquired your API key, and you’re raring to go. Awesome! I’m so excited! Google Maps has great bang for the buck, and their API is well documented and easy enough to use. But there’s a downside: Google Maps has become the PowerPoint of cartography.

But all that’s just fine, because we can still do some great work with this tool. I’ve written some general tips that I’ve learned after making a few of those “Contact Us” and “Locations” pages you see, but they are far from prescriptive. You, dear reader, are the adult in the room. All of these tips should be taken with a grain of salt.

I’ve written this article from the perspective of someone who is familiar with JavaScript and DOM events, but I also hope it will raise important questions and gotchas for people who still want to have great maps and have chosen Google. This article should take about five minutes to read. Let’s get started! Read more

Implementing responsive images – a worthwhile investment, says money/mouth in unison

Responsive images are hard. At least for now, anyway. The good news is that a community of incredibly smart people has been working hard at providing a solution to this problem.

So what’s the problem?

Read more

Filed under Tech Tips

Building a Single Page Web Application with Knockout.js

Back in June of 2013 I was contacted by Packt Publishing to ask if I was interested in producing and publishing a video tutorial series after they saw a video of a live presentation I gave here at The Nerdery. I was a little hesitant at first, but after some encouragement from Ryan Carlson, our Tech Evangelist, I went for it.

“Building a Single Page Web Application with Knockout.js” is a video tutorial series that guides the user through building a fully functional application using Knockout.js. The tutorial itself is aimed towards both back-end and front-end web developers with the assumption that the viewer has a basic level of familiarity with HTML/CSS/JS.

I designed the tutorial with the purpose of not only teaching the Knockout.js library, but also introducing software architectural elements that are helpful when building a full-blown single page application. Knockout.js works really well with an architecture and structure more commonly seen in back-end development while using front-end technology, so the melding of the disciplines is something many developers aren’t used to. When people start out with Knockout, they often skip that architecture entirely, which results in applications that aren’t scalable in complexity. The tutorial therefore focuses on application architecture for scalable applications built with Knockout.js.

The production took much longer than anticipated and other life events caused me to not have enough time to finish producing the videos in a timely fashion. It was at this point that I reached out to my fellow nerds, and Chris Black volunteered to help complete this endeavor. He did a fantastic job of recording and editing the videos to submit to the publisher. For anyone attempting a similar task, we found Camtasia was a very useful tool for this.

Here is a sample of the video we made

Filed under Tech Tips

The Challenges of Testing Android

There comes a time in every QA Engineer’s life when they are tasked with testing their first project on the Android platform. This sounds like an easy enough task. Just grab a few devices and follow the test plan. Unfortunately, it is not so easy. The constantly growing list of devices with a wide range of features and the different versions of Android all supporting a varying set of those features can seem very daunting at times. These two obstacles are only compounded by the device manufacturers who, most often, do not provide their devices with stock versions of Android but instead versions modified for their devices.

The Devices

Android devices come in all shapes and sizes, each with their own hardware profiles, making it difficult to ensure an app’s look and feel is consistent for all Android users. An application may look pristine on a Nexus 5, but load it up on a Nexus One and pictures might overlap buttons and buttons might overlap text creating a poor user experience. It is critical we help the client select relevant and broadly supported Android devices during the scoping process, which can be tricky depending upon the application.

The Operating System

Three versions of Android currently (Feb. 2014) capture the largest market share: Jelly Bean, Ice Cream Sandwich, and Gingerbread. The newest version, KitKat, holds only a 1.8% market share. We must be mindful of features unavailable on the older versions of Android while testing applications. If an app has a scoped feature not possible on some devices, we must be sure it gracefully handles older versions of Android and does not crash or display any undesirable artifacts. The above-mentioned stumbling blocks are easily avoidable if we pick a good range of targeted devices to cover the appropriate versions of Android.

Compounding the Problem

There are some lesser-known issues that can be easily missed, but would be disastrous for an end user. Most versions of Android installed on a user’s device are not stock – they are modified by the device manufacturer. As a result, testing an app on one device from one carrier might give you different results than testing the app on the same device from a different carrier. Knowing this can provide a buffer for these sorts of issues and ensure we are readily able to detect and squash those bugs.

Closing Thoughts

Consider these common problems as food for thought while testing on the Android platform. Google may be taking steps towards alleviating some of these issues in future releases. First, it is rumored they will begin requiring device manufacturers to ship one of the last two released versions of Android or lose certification to use Google Apps. End users would definitely benefit from such a decision, but it would also be good for developers and QA Engineers, and the move would lessen the fragmentation issues currently prevalent on the Android platform. Second, Google continues to provide new testing tools with each new release of Android, making it easier to ensure apps are stable across a wide range of devices. Third parties are also making tools available for a wide range of testing. These include automation tools that function differently from the native ones and allow testing across platforms, as well as services that put walls of devices running your app behind webcams so people can test across device types using online controls. Scripts and other automated tools are great for making sure nothing is broken by a fix and for ensuring a service is up and running. However, nothing will ever replace a QA Engineer getting down to it with a range of actual, physical devices, testing everything without exception, and finding bugs. A human’s intuition will always be the best.

Manage Localization Strings for Mobile Apps with Twine

Managing strings between iOS, Android and the web is error-prone and tedious, and when you introduce multiple languages, even more headaches arise.

Twine allows you to keep your localized strings in a central location that can easily be exported to iOS, Android and the web. Twine is great for any multi-platform or multi-language application; it keeps your strings in sync. On a project I’m currently working on, we’re using Twine to consolidate and manage five languages across three different platforms. It’s saved us time and helped cut down on the cost of translations.

Getting Started:
Before you begin, make sure that you don’t have any hard-coded strings in your application. This is the first step in localization for any platform and a best practice for mobile development. Make sure to back up your existing strings. Next, you’ll need to install Twine, which is distributed as a Ruby gem (gem install twine); full instructions are in the project’s README.

After you’ve installed Twine, create a strings.txt file that will be used for storing strings. Twine is able to parse existing strings files and write them back to your master data file. Top-level categories can be used to organize your strings, tag attributes to select the target platform, and comment attributes to give translators context.


[[General]]
    [off]
        en = Off
        tags = ios,android,web
        comment = Off
        de = AUS seit
        es = Apagado
        fr = Arrêt
        nl = Uit
    [on]
        en = On
        tags = ios,android,web
        comment = On
        de = An seit
        es = Encendido
        fr = Marche
        nl = Aan
    [edit]
        en = Edit
        tags = ios,android,web
        comment = Edit
        de = Bearbeiten
        es = Editar
        fr = Modifier
        nl = Bewerken
    [done]
        en = Done
        tags = ios,android,web
        comment = Done
        de = Fertig
        es = listo
        fr = Terminé
        nl = Klaar

The “tags” field is case sensitive and should not include white space. Keep this in mind when generating your strings file.


Incorrect:

tags = ios, Android, web

Correct:

tags = ios,android,web

Output Folders:
Running the Twine command will generate your strings files and put them in the correct sub-directories. For iOS, they will be put in the Locales folder; for Android, the res folder. You will need to create the corresponding lproj (iOS) and values (Android) folders for each language prior to running Twine. As long as the folders exist, Twine will automatically overwrite the strings files.

Running Twine:
To export for Android, put the strings.txt file in your project directory and run the following command. Note: this will replace your existing strings.xml files. Make sure to back up your strings and write them back to the master data file before running this command for the first time.

twine generate-all-string-files "$PROJECT_DIR/strings.txt" "$PROJECT_DIR/res" --tags android

The command to export for iOS is very similar.

twine generate-all-string-files "$PROJECT_DIR/strings.txt" "$PROJECT_DIR/$PROJECT_NAME/Locales/" --tags ios

Updating Strings:
When you have new text or translations, update your strings.txt file and re-run the commands. All of the string files within your apps will be updated. My preferred editor for viewing strings.txt files is TextMate; built-in text editors like Notepad and TextEdit can have problems opening large strings files.

We have used Twine on a number of projects with great success. It’s made managing strings and translations significantly easier. Anyone who is managing strings across platforms or supporting multiple languages should look into Twine.

Filed under Tech Tips

Project Euler made easy thanks to Clojure

I’ve always enjoyed Math. It’s concise and to the point. When you get an answer, you don’t have to consider “what the author was thinking when he wrote the problem.” Either you’re right, or you’re wrong. In fact, it’s partly due to my love of math that I decided to pursue a career as a software developer.

So naturally, when I discovered Project Euler, a site designed for those who enjoy writing programs to solve challenging math problems, I was all over it. Unfortunately, I only knew basic Java at the time, so I was rather limited in not only how I could solve problems, but also in how I could reason about a problem. After learning Clojure, a functional programming language for the Java Virtual Machine (JVM), I decided to give Project Euler another shot. And I’m glad that I did. Solving problems on Project Euler is significantly easier in Clojure than it is in Java. I don’t mean to pick on Java or even the object-oriented paradigm. Rather, I intend to show how a functional programming language, such as Clojure, allows you to solve and reason about problems in ways that are either difficult or impossible in procedural/object-oriented languages.

Let’s start with the first problem on Project Euler as an example:

If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.

This problem is pretty straightforward. In Java, we would likely do the following:

long total = 0;
//for each integer between 3 and 999
for (int i = 3; i < 1000; i++) {
    //if i is divisible by 3 or 5, then add it to our total
    if (i % 3 == 0 || i % 5 == 0) {
        total += i;
    }
}

In Clojure, your solution will look much different. First, let’s generate our data set:

;Call the range function to generate a lazy sequence of all
;of the integers between 3 and 999 (inclusive)
(range 3 1000)

Now, we need to filter that data set to only include numbers that are divisible by 3 or 5. We don’t have a function for checking for divisibility, but we can write one:

(defn divisible? "Is n divisible by x?" [n x]
  (zero? (mod n x)))

This will create a function called “divisible?” that accepts two integers, n and x, and checks to see if n is divisible by x. Next, we’ll need to create a function that will tell us if a given integer is divisible by either 3 or 5. Since we will only use this function for the duration of this problem and then likely never need it again, we can make it an anonymous function. The “#” indicates that the following form is an anonymous function and the “%” will represent the first parameter passed to the function:

#(or (divisible? % 3) (divisible? % 5))

Now, we can use our anonymous function above against all of the integers between 3 and 999 as a filter so that we will be left with a sequence of the numbers we actually care about:

(filter #(or (divisible? % 3) (divisible? % 5)) (range 3 1000))

After that, we simply need to reduce our data set using the “+” function. This will give us the sum we are looking for. In the end, our solution looks like this:

(defn divisible? "Is n divisible by x?" [n x] (zero? (mod n x)))

(reduce + (filter #(or (divisible? % 3) (divisible? % 5)) (range 3 1000)))

So that’s it. We’ve just solved the first problem, but more than likely, you’re not convinced. If you’re accustomed to object-oriented languages like Java, then you probably see nothing wrong with the Java solution to the previous problem. You wouldn’t be wrong; after all, it does produce the correct answer in a reasonable amount of time. But that was just a warm-up problem. More difficult problems will hit the limits of Java and procedural/object-oriented programming, so let’s up the difficulty and look at problem number two:

Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:

1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...

By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.

In Java, our solution definitely involves recursion. We’ll need to keep track of our two most recent Fibonacci numbers and a running sum. Our solution likely looks something like this:

public static long numberTwo(int fib1, int fib2, long sum) {
    if (fib2 < 4000000) {
        int nextFib = fib1 + fib2;
        if (nextFib % 2 == 0) {
            sum += nextFib;
        }
        return numberTwo(fib2, nextFib, sum);
    } else {
        return sum;
    }
}

Then we simply need to call this function with the following arguments to get the result we’re looking for. (Note that the sum is seeded with 2, since the starting term 2 is even but is never visited by the recursion.)

System.out.println(numberTwo(1, 2, 2));

Now, let’s look at a solution to the same problem except using Clojure. Our approach is going to be much different than it was with Java. Instead of creating the Fibonacci sequence while simultaneously calculating our sum, we’re just going to create a lazy sequence of the Fibonacci sequence and keep pulling values from it until we have everything we need:

(def fib-seq      ;Define the variable fib-seq to hold our lazy sequence
  ((fn rfib [a b] ;Define a recursive function for calculating Fibonacci values
    (lazy-seq     ;Define a lazy sequence.

      ;Creates a new sequence consisting of the value of 'a'
      ;followed by the result of calling rfib
      (cons a
        ; call rfib again with 'b' and 'a + b'
        (rfib b (+ a b)))))
  ;Use 0 and 1 as the first set of values to our Fibonacci sequence.
  0 1))

Now we have a lazy sequence that will calculate only as many values of the Fibonacci sequence as we need. The rest of this is easy. First, we’re going to drop the first two values (since we only care about the second ‘1’ and the ‘2’ value onwards). Next, we will take values only while they are less than 4,000,000. After that, we simply need to filter and reduce like we did with the previous problem. Our solution ends up looking like this:

(def fib-seq ((fn rfib [a b] (lazy-seq (cons a (rfib b (+ a b))))) 0 1))
(reduce + (filter even? (take-while #(< % 4000000) (drop 2 fib-seq))))

Our Clojure solution has a couple of advantages over our Java solution:
1. Our Clojure solution is very clear about what we are trying to do. With our Java solution, our problem-solving logic is mixed in with our recursion, which can make debugging very difficult.
2. Our Clojure solution is going to be reusable in the future if we ever run into the Fibonacci sequence again. Of course, you could rewrite the Java solution to first create a list of all of the Fibonacci values that we care about, and that might work. However, with Clojure constructs such as take-while, drop, lazy-seq, and the myriad of other core Clojure constructs, we get a lot of functionality out of the box that we don’t have with Java. We could build it out ourselves, but that would involve extra work and debugging that I don’t care to perform, and even then such a library wouldn’t be as flexible as what Clojure gives us for free.

So far, we’ve only considered problems that are straightforward and fairly simple. However, the problems on Project Euler grow progressively more difficult.

Consider the coin-sums problem (problem 31):

In England the currency is made up of pound, £, and pence, p, and there are eight coins in general circulation:

1p, 2p, 5p, 10p, 20p, 50p, £1 (100p) and £2 (200p).

It is possible to make £2 in the following way:

1×£1 + 1×50p + 2×20p + 1×5p + 1×2p + 3×1p

How many different ways can £2 be made using any number of coins?

Problems like this are common in computer science. With both Java and Clojure, our solution to this problem will involve looping through multiple different scenarios while only counting those which are valid. However, Clojure wins again with its “memoize” construct. Memoize essentially works as an in-memory cache, allowing you to cache the result of each sub-problem you have already computed so that you can avoid running the same calculations multiple times. Again, you could write functionality to do this in Java, but I get it out of the box with Clojure and would rather not waste my time.
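As a sketch of the idea (my own illustration, not code from the original post), here is a memoized coin-counting function for the £2 problem:

```clojure
;; count-ways returns the number of ways to make `amount` (in pence)
;; from the given denominations: either spend the first coin, or
;; drop it and make the amount from the remaining coins only.
(def count-ways
  (memoize
    (fn [amount coins]
      (cond
        (zero? amount) 1
        (or (neg? amount) (empty? coins)) 0
        :else (+ (count-ways (- amount (first coins)) coins)
                 (count-ways amount (rest coins)))))))

(count-ways 200 [1 2 5 10 20 50 100 200]) ;=> 73682
```

Because memoize caches on the [amount coins] argument pair, each sub-problem is computed only once, no matter how many different paths reach it.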

Additionally, we haven’t even touched on several of the other advantages of Clojure such as tail recursion, immutability, and macros. We’ll save those for a future blog post.

Solving problems that involve complicated operations across large, complex sets of data is a task well suited to functional programming. While many of these problems can be solved using a procedural or object-oriented language, they are best left to functional programming languages. I challenge you to pick up a functional programming language (such as Clojure) and start solving problems on Project Euler. As you work through them, consider the advantages and disadvantages of using a procedural or object-oriented language such as Java versus a functional programming language such as Clojure. But be forewarned: after you start solving problems in a functional programming language, you’ll likely have a hard time going back to anything else.

The Moment I Realized Testing and QA Are Entirely Different Things

I’ve been a web developer for many years and I didn’t really know what “QA” was before I came to the Nerdery. I did a little browser testing sometimes during the later stages of development or after my project was live. More often than not I either didn’t have the time or wasn’t being paid enough to do specific “Quality Assurance” on the sites I built. In all honesty, the result was obvious.

When I sent my first project to QA at the Nerdery I was nervous and excited. I felt I had been thorough: I’d clicked through the site trying to anticipate what they would find; I’d checked the site on a bunch of browsers; I’d read the Test Plan – we call them “TPS” reports – and felt like my bases were covered. By that time I’d heard tales of the tenacity of our Quality Assurance department, but I was confident I’d done my diligence. Then the first “Ticket” came in…

113 tickets later, my notion of professional web development was changed forever. It was humbling and exhilarating to read the “Tickets” that were submitted. They were comprehensive, user-focused insights into the project I was sure I’d nailed. They saw things I didn’t and brought them to my attention with detailed precision. They evaluated every part of the website whether I thought it was relevant or not and reported exactly what someone – not me – would experience. They took the time to test and retest until everything – EVERYTHING – was correct.

Through this process it was evident that our QA Engineers had a deep understanding and enduring passion for “assuring” extraordinary user experiences on the web. Their patience, attention, and creativity were a perfect reflection of the planning and design effort put forth at the beginning of the project. The result was obvious.

In our industry we elevate the designers, user experience teams, and developers as the artists and engineers of the interactive space, and we should, because we are those things. But after design and development wind down, a whole new group is there to make sure everything is done, done well, and done right. 113 tickets later, I was a raving fan of our Nerdery QA team. They are the truly indispensable artists and engineers in the interactive space.

Filed under Tech Tips, Technology