Tech Tips

Quality Assurance Pro Tips – Learn from Apple’s recent HealthKit bug



Many of you have already heard the buzz around iOS 8 and the critical issues that came with Apple’s first update, iOS 8.0.1, released on Wednesday, September 24th. A major bug in the HealthKit feature was discovered just prior to the iOS 8.0 release, which resulted in Apple pulling all HealthKit-enabled apps from the App Store ahead of the public launch and left third-party developers uncertain about the fate of their apps.

 

iOS Health app

The exact nature of the HealthKit bug was never publicly disclosed, but Apple promised a quick fix. One week after the iOS 8.0 release, iOS 8.0.1 went out to the public to fix the HealthKit issue and allow the related apps back into the App Store. About an hour and 15 minutes later, iOS 8.0.1 was pulled after critical issues surfaced for iPhone 6 and 6 Plus owners: lost cellular service and a malfunctioning Touch ID sensor.

“How could a fix for the HealthKit feature that tracks your calories burned, sleep duration, nutrition and other features, be the cause for users being unable to make or receive phone calls?”

iOS 8.0.2 was released the very next day and contained fixes for the critical issues introduced by iOS 8.0.1, as well as the HealthKit issue and other minor bugs. So you may be asking yourself, “How could a fix for the HealthKit feature that tracks your calories burned, sleep duration, nutrition and other features, be the cause for users being unable to make or receive phone calls?” The honest answer is that there’s no way of knowing for sure how it happened, only that it did.

Whenever new code is written for a fix, there is always a chance that the fix introduces new bugs, either in a related area of the software or in an area seemingly unrelated to the original issue. That is why, after re-testing the fix itself, it is best practice to perform regression testing around the affected area to ensure the change caused no other issues.

In this particular case, the issue was tied to a major firmware/software update affecting millions of consumers. The best practice here would be not only to re-test the fix and perform regression testing around the affected functionality, but also to fully test all major functionality of the iPhone 6 and 6 Plus (as well as every other device that supports the update) before releasing it to the public: making and receiving phone calls, sending and receiving emails and text/video messages, taking photos and videos, keyboard input, Notification Center and alerts, Wi-Fi, syncing with iTunes, the lock screen, Siri, and everything else the phones are capable of doing.

There are a couple of things we can take away from this situation. The first is that more testing is always better than less. If the budget allows for it, test as much as you possibly can once a major update is ready (code complete) and before it reaches the general public, and keep testing after deployment.

Also be sure to perform a full test sweep of everything the device, website, or application is capable of doing, both before the update ships and again after deployment, to ensure nothing was affected by the change. Never rush through quality assurance (QA); take your time with the test sweep so that all critical and major issues are discovered. The general public will thank you for thoroughly testing your software so that they don’t have to.









Developing for Next Generation Touchscreen Computers

More than just Mobile Devices: Where touch detection breaks down

When you think of “touch,” mobile phones and tablets may immediately come to mind. Unfortunately, it’s far too easy to overlook the newest crop of touch-driven devices, such as Chromebook laptops that employ both a touchscreen and a trackpad, and Windows 8 machines paired with touchscreen monitors. In this article, you’ll learn how to conquer the interesting challenges presented by these “hybrid” devices that can employ both mouse and touch input.

In the browser, the Document Object Model (DOM) started with one main interface to facilitate user pointer input: MouseEvent. Over the years, the methods of input have grown to include the pen/stylus, touch, and a plethora of others. Modern web browsers must continually stay on top of these new input devices by either converting their events to mouse events or adding an additional event interface. In recent years, however, it has become apparent that dividing these forms of input, as opposed to unifying and normalizing them, becomes problematic when hardware supports more than one method of input. Programmers are then forced to write entire libraries just to unify all the event interfaces (mouse, touch, pen, etc.).

So how did mouse and touch events come to be separate interfaces? Going forward, are all new forms of input going to need their own event interface? How do I unify mouse and touch now?

Is this article for me?

The solutions in this article are specific in nature – only applications that require heavy user interaction (games, HTML canvas applications, drag-and-drop widgets, etc.) fall within the target audience of the solutions discussed. Click-driven interactions (i.e. regular websites) do not necessarily need to worry about user-input methods, as click events will be fired regardless of the user’s input method.
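To make the problem concrete, here is a minimal sketch (not taken from the full article) of one way to funnel mouse and touch input into a single handler. The element id and callback shape are invented for illustration; production code would also need to account for pointer events and multi-touch.

// Minimal sketch: normalize touchstart and mousedown into one callback.
// Hybrid devices can fire an emulated mousedown after a touchstart for the
// same tap, so the touch path sets a flag that suppresses the duplicate.
function onPointerDown(element, handler) {
  var touchHandled = false;

  element.addEventListener('touchstart', function (event) {
    touchHandled = true;
    var touch = event.changedTouches[0];
    handler({ x: touch.clientX, y: touch.clientY, source: 'touch' });
  });

  element.addEventListener('mousedown', function (event) {
    if (touchHandled) {
      // Ignore the emulated mouse event that follows a touch.
      touchHandled = false;
      return;
    }
    handler({ x: event.clientX, y: event.clientY, source: 'mouse' });
  });
}

// Usage: the same callback runs regardless of the input method.
onPointerDown(document.getElementById('game-canvas'), function (point) {
  console.log('pointer down at', point.x, point.y, 'via', point.source);
});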

Read more

Filed under Tech Tips, Technology

Apache Configuration for Testing WordPress REST API on Secured Sites

WordPress Icon

It’s not uncommon to hit a few roadblocks during a project, and the typical next step is a quick Google search for the answer. Unfortunately, there are occasions when we’re on our own with a unique problem. In this case we had to roll up our sleeves and work out an answer for ourselves. We hope this helps the next person looking for it.

“I spent a bit of time reading documentation and testing and getting increasingly frustrated.”

I ran into an interesting problem this week. I have a staging site in active development that needs to remain behind a firewall, but we plan to use the WordPress REST API to serve content from the site to iOS and Android apps. Unfortunately, for the API to work Read more

Filed under Tech Tips

Features Most Likely to Break When Upgrading to iOS 8 and What to Plan For

An experienced quality assurance (QA) engineer will have their spidey-senses tingling with every announcement of a new OS version, hardware refresh, or browser update. These are all good things for innovation; they just mean we all need to be ready for launch day by starting to plan today. Read more

DataImportHandler: Recreating problems to discover their root

When a client asked me to address the performance problems with their Solr full-imports (9+ hours), I knew I was going to have to put on my computer-detective hat.

First, I needed to understand the current import process and environment. Their import process used a Solr contrib library called DataImportHandler. This library allows an end user to configure queries that pull data from various endpoints, which is then imported into the Solr index. It supports a variety of data endpoints, such as Solr indexes, XML files – or in this client’s case, a database. The DataImportHandler defines two types of imports, full and delta. The query for the full import should be written to pull all the data that is required for the index. The queries for the delta import should be written to pull only the data that has changed since the last import was run.
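For context, a full/delta entity pair in a DataImportHandler config looks roughly like the sketch below. The table and column names are invented for illustration; the ${dataimporter.last_index_time} and ${dataimporter.delta.id} variables are the ones DataImportHandler provides for delta queries.

<entity name="item"
        query="select id, name from item"
        deltaQuery="select id from item where last_modified > '${dataimporter.last_index_time}'"
        deltaImportQuery="select id, name from item where id = '${dataimporter.delta.id}'" />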

Once I understood the basics of the DataImportHandler, I needed to reproduce the problem. The DataImportHandler has a status page which displays – in real time – the number of requests made to the datasource, the number of rows fetched from the datasource, the number of processed records, and the elapsed time. I had the client start a full-import and monitored this page.

I knew that a full-import would take 9+ hours and the end result would be slightly more than 6 million records in the Solr index, but the number of fetches to the datasource and the number of rows fetched were trending up significantly faster than the number of records processed. By the end, there had been over 24 million requests to the datasource and over 31 million rows fetched from it. Obviously, this was a significant contributor to the 9+ hour full import, but it wasn’t clear why so many queries and fetches were being made.

With a possible source of the performance problems identified, I needed to look at the queries that were being used by the DataImportHandler. I dug into the configuration files and found this:

<entity name="z" query="select * from a">
  <entity name="y" query="select * from b WHERE a_id='${z.id}'" />
  <entity name="x" query="select * from c WHERE a_id='${z.id}'" />
  <entity name="u" query="select * from d WHERE a_id='${z.id}'" />
  <entity name="t" query="select * from e WHERE a_id='${z.id}'" />
</entity>

A quick Google search confirmed my fear: for each record in the outer query, each of the inner queries was run once. I did a quick count of the number of records in the outer query, which was slightly more than six million. Multiply that by the number of inner queries, four, and you get 24 million requests to the datasource. Both of those numbers were in line with the results from the status page, though the number of rows fetched from the datasource was still off for an unexplained reason. A quick review of the schema showed that both the “t” and “x” entities were multivalued, which means the database could return more than one record for them. That accounted for the 31 million rows fetched from the datasource.

I now had an explanation for the numbers I was seeing, but I still hadn’t confirmed that the DataImportHandler was bottlenecking on the database queries. Unfortunately, there wasn’t a good way to determine this. The best idea I came up with was to convert the database to another format that the DataImportHandler could read faster, but it was going to take a non-trivial amount of work to set that up. Instead, I settled on using a combination of “SHOW PROCESSLIST” on the MySQL server and strace to monitor the import while it ran. By the end of the day, the problem was obvious: the DataImportHandler was spending most of its time waiting for the database to send data after each query.

With the quantity of queries representing the majority of the full-import run time, I began researching alternative ways of fetching the data. That’s when I found the CachedSqlEntityProcessor. This processor fetches all the records for each entity up front, then stitches them together on the fly. In my example, it would reduce the number of requests to the datasource to five: one per entity. I immediately rewrote the entities to use this processor and started up another full-import.
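The rewritten configuration looked roughly like the sketch below. This is illustrative rather than the project’s exact config; it uses the where attribute documented for CachedSqlEntityProcessor to tell each child entity which cached column to match against the parent entity’s id.

<entity name="z" query="select * from a">
  <entity name="y" query="select * from b" processor="CachedSqlEntityProcessor" where="a_id=z.id" />
  <entity name="x" query="select * from c" processor="CachedSqlEntityProcessor" where="a_id=z.id" />
  <entity name="u" query="select * from d" processor="CachedSqlEntityProcessor" where="a_id=z.id" />
  <entity name="t" query="select * from e" processor="CachedSqlEntityProcessor" where="a_id=z.id" />
</entity>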

Three hours later, the import was done. A 66% improvement was satisfying, but the client was hoping for something closer to an hour, so it was time to hunt for bottlenecks again. Another strace test showed the biggest offender was still waiting for data from the database, so I focused on tuning the MySQL server. Unfortunately, this was a mistake: I hadn’t thought critically about what the bottleneck actually was. No matter what I did to the MySQL settings, I never saw more than a 1% improvement in the run time.

After a significant amount of frustration, I realized the actual problem: disk I/O. It turns out that running a MySQL server on EC2/EBS is a terrible idea for a number of reasons:

* By default, EC2 internet and EBS traffic occur on the same network. Thankfully, a simple setting, EBS Optimized, can be flipped to give EC2 a dedicated connection to EBS volumes.
* Standard EBS volumes are not suited for sustained load. There is no guarantee of IOPS, so they can fluctuate significantly. Thankfully, you can create provisioned IOPS EBS volumes, which allow you to define what kind of performance you need.
* Provisioned IOPS EBS volumes have max IOPS values.

The best solution would have been to move the database into Amazon RDS, which fixes most of these problems, but it was too big a change to be made quickly. Instead, we settled on making both the Solr and database servers EBS optimized and setting up provisioned IOPS EBS volumes. After playing with some IOPS values, we settled on 2000.

It was finally time to run a complete test with all the changes in place. The effort had taken weeks of my time, so the anticipation was killing me. The final full-import time was 35 minutes.

Filed under Tech Tips

Mapmaker, Mapmaker, Map Me a (Google) Map

A picture of google maps with the search field saying Jane stop this crazy thing

So you want to embed Google Maps in your website. Maybe you have a list of locations you want to display, or perhaps you need to provide directions or perform simple GIS operations. You’re a pragmatic person, so you don’t want to reinvent the wheel. You’ve settled on Google Maps as your mapping platform, acquired your API key, and you’re raring to go. Awesome! I’m so excited! Google Maps offers great bang for the buck, and its API is well documented and easy enough to use. But there’s a downside: Google Maps has become the PowerPoint of cartography.

But all that’s just fine, because we can still do some great work with this tool. I’ve written some general tips that I’ve learned after making a few of those “Contact Us” and “Locations” pages you see, but they are far from prescriptive. You, dear reader, are the adult in the room. All of these tips should be taken with a grain of salt.

I’ve written this article from the perspective of someone who is familiar with JavaScript and DOM events, but I also hope it will raise important questions and gotchas for people who still want to have great maps and have chosen Google. This article should take about five minutes to read. Let’s get started! Read more
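As a baseline for the tips that follow, the kind of minimal embed the article builds on looks something like this sketch. The element id, coordinates, and marker title are placeholders, and it assumes the Maps JavaScript API has already been loaded on the page with your key.

// Minimal sketch of an embedded map with a single marker.
function initMap() {
  var office = { lat: 44.98, lng: -93.27 }; // placeholder coordinates

  var map = new google.maps.Map(document.getElementById('map'), {
    center: office,
    zoom: 14
  });

  new google.maps.Marker({
    position: office,
    map: map,
    title: 'Our office' // placeholder label
  });
}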

Implementing responsive images – a worthwhile investment, says money/mouth in unison

Responsive images are hard. At least for now, anyway. The good news is that a community of incredibly smart people has been working hard at providing a solution to this problem.

So what’s the problem?

Read more

Filed under Tech Tips

Building a Single Page Web Application with Knockout.js

Back in June of 2013 I was contacted by Packt Publishing asking if I was interested in producing and publishing a video tutorial series after they saw a video of a live presentation I gave here at The Nerdery. I was a little hesitant at first, but after some encouragement from Ryan Carlson, our Tech Evangelist, I went for it.

“Building a Single Page Web Application with Knockout.js” is a video tutorial series that guides the user through building a fully functional application using Knockout.js. The tutorial itself is aimed towards both back-end and front-end web developers with the assumption that the viewer has a basic level of familiarity with HTML/CSS/JS.

I designed the tutorial with the purpose of not only teaching the Knockout.js library, but also introducing software architectural elements that are helpful when building a full-blown single page application. Knockout.js works really well with an architecture and structure more commonly seen in back-end development applied to front-end technology, and that melding of disciplines is something many developers aren’t used to. When people start out with Knockout, they often end up with an architecture (or lack thereof) that isn’t scalable in complexity. The tutorial therefore focuses on application architecture for scalable applications built with Knockout.js.
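As a flavor of the structure the series encourages, here is a small illustrative view model (it is not an excerpt from the tutorial and assumes Knockout is already loaded on the page): state lives in observables, derived values in computeds, and behavior in methods, so each piece stays testable as the application grows.

// Illustrative only: a self-contained view model module.
function TaskListViewModel() {
  var self = this;

  self.newTaskTitle = ko.observable('');
  self.tasks = ko.observableArray([]);

  // Derived state lives in computeds instead of being recalculated in the view.
  self.remainingCount = ko.computed(function () {
    return self.tasks().filter(function (task) { return !task.done(); }).length;
  });

  self.addTask = function () {
    self.tasks.push({ title: self.newTaskTitle(), done: ko.observable(false) });
    self.newTaskTitle('');
  };
}

// A single entry point wires the view model to the markup's data-bind attributes.
ko.applyBindings(new TaskListViewModel());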

Production took much longer than anticipated, and other life events left me without enough time to finish the videos in a timely fashion. It was at this point that I reached out to my fellow nerds, and Chris Black volunteered to help complete the endeavor. He did a fantastic job of recording and editing the videos to submit to the publisher. For anyone attempting a similar task, we found Camtasia a very useful tool for this.

Here is a sample of the video we made

Filed under Tech Tips

The Challenges of Testing Android

There comes a time in every QA Engineer’s life when they are tasked with testing their first project on the Android platform. It sounds like an easy enough task: just grab a few devices and follow the test plan. Unfortunately, it is not so easy. The constantly growing list of devices, each with a wide range of features, and the many versions of Android, each supporting a different subset of those features, can seem very daunting at times. These obstacles are only compounded by device manufacturers who, more often than not, do not ship stock versions of Android but instead versions modified for their devices.

The Devices

Android devices come in all shapes and sizes, each with their own hardware profiles, making it difficult to ensure an app’s look and feel is consistent for all Android users. An application may look pristine on a Nexus 5, but load it up on a Nexus One and pictures might overlap buttons and buttons might overlap text creating a poor user experience. It is critical we help the client select relevant and broadly supported Android devices during the scoping process, which can be tricky depending upon the application.

The Operating System

Three versions of Android currently (Feb. 2014) capture the largest market share: Jelly Bean, Ice Cream Sandwich, and Gingerbread. The newest version, KitKat, holds only a 1.8% market share. We must be mindful of features unavailable on the older versions of Android while testing applications. If an app has a scoped feature not possible on some devices, we must be sure it gracefully handles older versions of Android and does not crash or display any undesirable artifacts. The above-mentioned stumbling blocks are easily avoidable if we pick a good range of target devices that covers the appropriate versions of Android.

Compounding the Problem

There are some lesser-known issues that can be easily missed but would be disastrous for an end user. Most versions of Android installed on a user’s device are not stock – they are modified by the device manufacturer. As a result, testing an app on one device from one carrier might give you different results than testing the same app on the same device from a different carrier. Knowing this lets us build in a buffer for these sorts of issues and ensure we are ready to detect and squash those bugs.

Closing Thoughts

Consider these common problems as food for thought while testing on the Android platform. Google may be taking steps toward alleviating some of them in future releases. First, it is rumored that Google will begin requiring device manufacturers to ship a version of Android within the last two releases or lose certification to use Google Apps. End users would clearly benefit from such a decision, and it would also be good for developers and QA Engineers, since it would lessen the fragmentation issues currently prevalent on the Android platform. Second, Google continues to provide new testing tools with each release of Android, making it easier to ensure apps are stable across a wide range of devices. Third parties are also making tools available for a wide range of testing: automation tools that function differently from the native ones and allow testing across platforms, and services that put walls of devices running your app behind webcams so people can test across device types using online controls.

Scripts and other automated tools are great for making sure nothing breaks during a fix and for ensuring a service is up and running. However, nothing will ever replace a QA Engineer getting down to it with a range of actual, physical devices, testing indiscriminately, and finding bugs. A human’s intuition will always be the best tool.

Manage Localization Strings for Mobile Apps with Twine

Problem:
Keeping strings in sync between iOS, Android and the web is error-prone and hard to manage. When you introduce multiple languages, even more headaches arise.

Solution:
Twine allows you to keep your localized strings in a central location that can easily be exported to iOS, Android and the web. Twine is great for any multi-platform or multi-language application, and it keeps your strings in sync. On a project I’m currently working on, we’re using Twine to consolidate and manage five languages across three different platforms. It’s saved us time and helped cut down on the cost of translations.

Getting Started:
Before you begin, make sure that you don’t have any hard-coded strings in your application. This is the first step in localization for any platform and a best practice for mobile development. Make sure to back up your existing strings. Next, you’ll need to install Twine. You can find instructions here: https://github.com/mobiata/twine

After you’ve installed Twine, create a strings.txt file that will be used as the master store for your strings. Twine is able to parse existing string files and write them back to this master data file. Use top-level categories to organize your strings, tags attributes to select the target platforms, and comment attributes to give context.

Example:

[[Uncategorized]]
[Off]
en = Off
tags = ios,android,web
comment = Off
de = AUS seit
es = Apagado
fr = Arrêt
nl = Uit
[On]
en = On
tags = ios,android,web
comment = On
de = An seit
es = Encendido
fr = Marche
nl = Aan
[[General]]
[Edit]
en = Edit
tags = ios,android,web
comment = Edit
de = Bearbeiten
es = Editar
fr = Modifier
nl = Bewerken
[Done]
en = Done
tags = ios,android,web
comment = Done
de = Fertig
es = listo
fr = Terminé
nl = Klaar

The “tags” field is case sensitive and should not include white space. Keep this in mind when generating your strings file.

Bad:

tags = ios, Android, web

Good:

tags = ios,android,web

Output Folders:
Running the Twine command will generate your strings files and put them in the correct sub-directories. For iOS, they will be put in the Locales folder; for Android, they will be put in the res folder. You will need to create the corresponding lproj (iOS) and values (Android) folders for each language prior to running Twine. As long as the folders exist, Twine will automatically overwrite the strings files.

Running Twine:
To export for Android, put the strings.txt file in your project directory and run the following command. Note: this will replace your existing strings.xml files. Make sure to back up your strings and write them back to the master data file before running this command for the first time.

twine generate-all-string-files "$PROJECT_DIR/strings.txt" "$PROJECT_DIR/res" --tags android

The command to export for iOS is very similar.

twine generate-all-string-files "$PROJECT_DIR/strings.txt" "$PROJECT_DIR/$PROJECT_NAME/Locales/" --tags ios
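For the back-up step mentioned above (pulling strings that already exist in your app back into the master strings.txt), Twine also provides a matching consume command. The invocation below is a sketch that assumes the same Android project layout as the export command; check the Twine README for the options your version supports.

twine consume-all-string-files "$PROJECT_DIR/strings.txt" "$PROJECT_DIR/res" --tags android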

Updating Strings:
When you have new text or translations, update your strings.txt file and re-run the commands. All of the string files within your apps will be updated. My preferred editor for viewing strings.txt files is TextMate; built-in text editors like Notepad and TextEdit can have problems opening large string files.

Summary:
We have used Twine on a number of projects with great success. It’s made managing strings and translations significantly easier. Anyone who is managing strings across platforms or supporting multiple languages should look into Twine.

Additional resources:
http://www.mobiata.com/blog/2012/02/08/twine-string-management-ios-mac-os-x
https://github.com/mobiata/twine

Filed under Tech Tips