The Business of Responsive Design Survey
Mark originally drafted the Responsive Survey for a conference talk he was preparing for Fronteers Conference in 2012. He wanted to get an idea of the pain points around responsive design so he could talk about them from his experience of working with clients. The first survey received 512 responses. He decided to run it again many months later when he was preparing to talk about responsive design at a different conference; the second time it received 496 responses. Having done a lot of research for clients and for his own projects, Mark was already pretty good at designing and using surveys. However, a few of the decisions he made ended up making life a little difficult for the team when it came to the retrospective analysis of the data. This was partly because he had such a large sample (which he wasn’t expecting), but also because of the collection method and the types of questions he used.
Types of questions
The questions asked were based on some initial research by Mark: his own assumptions and experiences from working on Responsive Web Design projects for a number of clients, blog posts and articles he’d read on the subject, and conversations and discussions with peers in the industry. This was a pretty good starting point and gave him a few topic areas he knew he wanted more robust data on: challenges, frustrations, feelings, and tools and resources of choice. Most of the questions were closed questions, meaning the responses people could choose from were predefined. There were also three or four open-ended questions where people could write a comment in a free text box.
Sorting into themes
Open-ended questions are great for adding colour and depth to closed questions, but they should be used sparingly. With around 1,000 responses across the two surveys, these questions were a tremendous source of knowledge and insight but proved challenging to analyse. Rather than printing every single response out and cutting it up into individual insights (as we have done before on a client project), or writing the most pertinent ones onto post-it notes, the team painstakingly went through each and every comment in a spreadsheet and sorted them into themes. They then wrote a short one-line description of what each theme was about, along with an indication of how large it was, e.g. how strongly people felt about it. You can see this in the report. Some comments fitted into more than one theme (for example, a comment about both time and clients), so those were ‘double counted’ and the insight noted under each theme. The most pertinent insights were highlighted in the spreadsheet so that others coming to it without prior knowledge wouldn’t have to wade through the whole list again.
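If you prefer to do this kind of theme counting in code rather than by hand in a spreadsheet, the double-counting step is easy to sketch. This is a hypothetical example (the comments and theme names are invented, not taken from the survey), assuming each comment has already been hand-tagged with one or more themes:

```python
from collections import Counter

# Hypothetical open-ended comments, each hand-tagged with one or more themes.
tagged_comments = [
    ("Clients never budget enough time for testing.", ["time", "clients"]),
    ("Browser testing eats most of my schedule.", ["time", "tools"]),
    ("Stakeholders don't understand fluid layouts.", ["clients"]),
]

# A comment tagged with two themes counts once in each theme ("double counted").
theme_sizes = Counter()
for _comment, themes in tagged_comments:
    theme_sizes.update(themes)

print(theme_sizes.most_common())
```

The theme totals can then exceed the number of comments, which is exactly the double counting described above.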
Pivoting for detail
The first survey was set up in Google Forms as a quick and easy way of asking questions and getting the data in one place. For this purpose, Google Forms worked well: it provided the overall data in tables along with some charts of the results. What Google Forms can’t easily do, however, is compare different data sets against one another. With such a large sample, this kind of comparison was going to deliver the most value to us. Working with a large data set can be pretty arduous, but Nathan made light work of it and created a series of pivot tables we could use to do just this. We cross-referenced role and organisation type to home in on personas, understand the audience better, and create empathy maps.
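The same cross-referencing can be sketched in a few lines of code. This is a minimal, hypothetical example (the roles, organisation types, and counts are invented): it builds the equivalent of a pivot table counting responses for each role and organisation-type pair.

```python
from collections import Counter

# Hypothetical rows exported from a survey: (role, organisation type).
responses = [
    ("designer", "agency"),
    ("developer", "agency"),
    ("developer", "in-house"),
    ("designer", "freelance"),
    ("developer", "agency"),
]

# Equivalent of a pivot table: count responses per (role, org type) pair.
pivot = Counter(responses)

# Print one row per role, with a count for every organisation type.
roles = sorted({role for role, _ in responses})
orgs = sorted({org for _, org in responses})
for role in roles:
    print(role, {org: pivot[(role, org)] for org in orgs})
```

A spreadsheet pivot table does the same aggregation, but having the data in code makes it easy to slice the same responses by any other pair of questions.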
I’m a big advocate of using software for analysis when it makes life easier and research quicker, so the next responsive survey will be run in Survey Monkey, which has great analysis tools built in (of course, other tools are available!).
Telling the story
After we’d waded through the data, we needed to bring it together into a story. First of all, I concentrated on Gridset and what the data could tell us about potential customers. Then I created a longer summary, pulling together the overall messages, looking at differences between groups and subsets, and comparing the year-on-year results. This was pulled together into a slide deck, written very much as a quick way of sharing the learnings internally. Nathan then went to work creating a lighter version of my report, designed to be easier to digest and nicer to look at.
We’ll be running the survey again in a few weeks so that we can gather another full year's worth of data to analyse and share with the web community. Keep an eye out!
In the meantime, feel free to tell us what you want to learn more about from our next Responsive Report.