“Pre-Production” is the new “Post-Production”

There is a singular, terrifying truth that every software developer, application architect, product designer, and stakeholder must eventually embrace: even if you build the coolest, most useful product in the world, there is a good chance that nobody will use it.

This simple truth evokes dark feelings of inadequacy within the souls of executives and renders project managers, marketers, and developers useless. It doesn’t matter if your application is robust, scalable, or beautifully designed; you won’t know whether your audience will embrace it until you’ve actually presented it to them.

The problem here is a “known unknown” related to design. Although developers can load test an application to ensure it scales and performs, there hasn’t really been an easy way to test design outside of a production environment. Focus groups are about as accurate as licking your thumb and holding it up to find the wind, and UI/UX consultants can give you beauty, but they can’t guarantee success. So how do you test your user experience before actually presenting your app to users? One of our colleagues wondered the same thing and came up with a pretty nifty solution.

Introducing Apptourage, an easy, cheap way to optimize usability in pre-production. Instead of spending four months developing the perfect app/ERP/CRM/tool, spending tons of cash marketing your new product, and then waiting for usability data to come in, you can upload the user experience and have it tested either by your team or by Apptourage’s US-based testers. Easy peasy. Needless to say, we’re big fans of their new tool and recommend it to all of our clients. Check it out here.

Disclaimer: Yes, we have become friendly with the Apptourage crew, but it’s mostly because we earnestly believe in the product they’ve created.


Big Data. Small Budget. 3 Possible Solutions for an SMB

Big data has revolutionized the way we look at information. Unfortunately, access to the collective set of tools that define this craze is not always easy to come by. Below I lay out three possible ways an SMB can take advantage of the Big Data revolution.

Rent a Cluster
Resource Cost: Potentially High, Labour Cost: So-So, Nerd Props: So-So

If you’re very familiar with the scope of your data crunching project, trust in your team’s ability to write and deploy solid code, and don’t mind spending extra money on occasion, then maybe a third-party computing cluster is for you. Services such as Amazon’s Elastic MapReduce (EMR), Microsoft Azure’s HPC offerings, or Qubole give you an elastic, on-demand environment to run your code. The benefits are obvious: easy-to-manage infrastructure, the ability to grow and shrink with both your data set and the complexity of your code, and rock-solid performance. The problem with cloud-based computing clusters is that they can (very) easily become expensive. Just moving data around can cost you a few dollars, so make sure your team is able to produce quality code. With that said, we run Spark on big data clusters inside an Amazon VPC, and it works very well for our needs.
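To make the “rent, don’t build” option concrete, here’s a minimal sketch of the kind of PySpark job you might submit to a rented cluster such as EMR. The bucket names, paths, and log format below are hypothetical placeholders, not a real deployment; the point is that the same few lines scale from a sample file to terabytes because the cluster handles the distribution.

```python
# Minimal PySpark job: count hits per URL from web access logs.
# The S3 bucket and paths are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("log-counts").getOrCreate()

# On EMR, s3:// paths are readable directly from Spark.
logs = spark.read.text("s3://your-bucket/access-logs/*.log")

counts = (
    logs.rdd
    .map(lambda row: row.value.split(" ")[6])   # assumes common log format: URL is the 7th field
    .map(lambda url: (url, 1))
    .reduceByKey(lambda a, b: a + b)
    .sortBy(lambda pair: pair[1], ascending=False)
)

counts.toDF(["url", "hits"]).write.csv("s3://your-bucket/output/top-urls")
spark.stop()
```

Submit it as an EMR step or via spark-submit; just remember the warning above, since moving data in and out of S3 is where the surprise charges tend to accrue.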

You Don’t Need a Stinking Cluster
Resource Cost: Low, Labour Cost: Low, Nerd Props: Low

The fact of the matter is, big data is not for everyone. Properly mining data requires a talented team, patience, and a deep understanding of your data; incomplete analysis leads to incorrect conclusions. Fortunately, you don’t need access to expensive resources to take advantage of the big data revolution. A number of larger enterprises have already done the heavy lifting for you, and the results of their analysis are all over the web. Websites like Google Trends offer a plethora of information that has already been mapped, reduced, analyzed, and served up in lovely chart and graph form. Want to learn more about your particular market segment? A simple Bing or Google search can be your gateway to a world of knowledge. Want to know more about user behavior on Facebook? Just search the web. Chances are, the best work has already been done by researchers at universities, think tanks, and global corporations. Just because you’re not mining the data yourself doesn’t mean it isn’t relevant and valuable.
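If you’d rather pull that pre-digested data into a script than a browser tab, community-built wrappers exist for several of these sources. As one example, the pytrends package – an unofficial, community-maintained wrapper; Google doesn’t offer a supported Trends API, so treat this as a sketch that may break – can load Google Trends interest data straight into a pandas DataFrame:

```python
# Pull Google Trends interest-over-time data for a couple of search terms.
# pytrends is unofficial (pip install pytrends) and can break without notice,
# since it scrapes an endpoint Google doesn't formally support.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(["crm software", "erp software"], timeframe="today 5-y")

interest = pytrends.interest_over_time()  # pandas DataFrame indexed by date
print(interest.tail())
```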

Set Up Your Own Cluster
Resource Cost: Low, Labour Cost: High, Nerd Props: High

Modern, open source data crunching platforms are purpose-built to run on all sorts of hardware. Better yet, they’re easy to install and set up. Whereas 15 years ago the majority of your time would be spent building the actual computing cluster (e.g., a Beowulf cluster), now you can focus your efforts on collecting and analyzing your data. Although we’re now a Spark shop, in the past we implemented Hadoop for crunching data. Both are easy(ish) to set up. The problem: any distributed computing platform is only as good as the resources you throw at it. The name of the game here is “distribution,” so the more nodes (computers) you have, the better. Fortunately, most SMBs have access to a large pool of computing nodes right under their noses. With some basic hardware – a gigabit switch, dedicated gigabit Ethernet cards, and cables – your unused employee workstations can run as cluster nodes in the evening. We recently set up a Spark cluster on three machines, and a good time was had by all. The one caveat is that you will probably want to enable dual boot on those machines so they can run a dedicated cluster OS after hours.
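For a sense of what that three-machine setup looks like in practice, here’s a sketch assuming Spark’s standalone mode, with one box acting as master. The hostname ws-master and the shared data path are hypothetical placeholders.

```python
# Word count against a standalone Spark cluster built from spare workstations.
# Assumed setup (hostnames are placeholders):
#   on the master:   $SPARK_HOME/sbin/start-master.sh
#   on each worker:  $SPARK_HOME/sbin/start-worker.sh spark://ws-master:7077
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("evening-batch")
    .master("spark://ws-master:7077")   # standalone master's default port
    .getOrCreate()
)

# The input path must be visible to every node, e.g. a shared NFS mount.
lines = spark.read.text("file:///shared/corpus/*.txt")
words = lines.rdd.flatMap(lambda row: row.value.split())
top = (
    words.map(lambda w: (w, 1))
    .reduceByKey(lambda a, b: a + b)
    .takeOrdered(20, key=lambda pair: -pair[1])   # 20 most frequent words
)
for word, count in top:
    print(word, count)
spark.stop()
```

The other practical prerequisite is storage every node can see – a shared NFS mount or a small HDFS deployment – since each worker reads its slice of the input directly.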