
Research Project: Survey Platform

problem

We know that our customers in the skilled nursing and senior living markets face high employee turnover. Our theory was that by evaluating employee engagement through a series of surveys, we could help our customers act on that information before it was too late to make the changes in the organization that might have kept an employee happy / engaged.

Previous Research

The generative research work for the new survey types had been completed by the time I joined the Engage team as a UX Designer.

My Tasks

My research work was evaluative and focused on running proof of concept surveys before developers were brought on, to determine whether the response rates for our regular pulse survey were negatively impacted by increasing the frequency of surveys and adding new survey types for employees.

There were a number of things we were looking to evaluate with the survey platform proof of concept:

  1. adding a new hire survey program
  2. creating a quarterly survey program
  3. including a second required question with the weekly pulse survey

Our customers had told us that they wanted to be able to ask their own questions outside of the pulse survey. We didn't have a good way to include that survey type in our development-light proof of concept, so it was something we could ask about during our evaluative research, but it would have to be evaluated in the product at a later time.

actions

The first thing that needed to be done was to determine which of the items we were hoping to evaluate should come first.

To determine that, my product manager and I had a conversation with the developer / tech lead to learn the level of effort for the pieces that would need developer help.

My product manager then evaluated the product priorities and worked with me to create my design / experience priorities for the proof of concepts and survey pilots.

And finally, we had to find a customer to partner with us so that we could run our proof of concept tests. We selected our customer-partner for the proof of concept studies because their pulse survey response rate of 38.45% was well above the 19.82% average.

My Priorities

I was tasked with running the developer-free proof of concepts for New Hire Surveys and Quarterly Surveys, using existing OnShift Schedule messaging functionality with our surveys built out in Survey Monkey.

Once we had secured a customer-partner for the proof of concept, I was tasked with running the proof of concepts: from manually sending the surveys to analyzing the data.

New Hire Surveys

The key part of the new hire surveys was that they were sent to an employee at specific tenure milestones, so a newly hired employee would receive a survey after 7, 14, 30, 60, and 90 days of employment, at which point they were no longer considered a "new employee."

At the beginning and middle of every week during the proof of concept, I checked for new employees and added their hire dates to my Excel sheet so I knew when to send each of them the correct survey, then set a calendar reminder for each send date.
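
If I were scripting that tracking instead of doing it by hand in Excel, the scheduling boils down to simple date arithmetic. A minimal sketch in Python, where the CSV file and column names are placeholders rather than our actual data:

    from datetime import date, timedelta
    import csv

    # Tenure milestones (in days) from the new hire survey protocol above.
    MILESTONES = [7, 14, 30, 60, 90]

    def survey_send_dates(hire_date: date) -> dict:
        """Return the date each new hire survey should go out for one employee."""
        return {days: hire_date + timedelta(days=days) for days in MILESTONES}

    # "new_hires.csv" and its columns stand in for the Excel sheet I kept by hand.
    with open("new_hires.csv", newline="") as f:
        for row in csv.DictReader(f):
            hired = date.fromisoformat(row["hire_date"])
            for days, send_on in survey_send_dates(hired).items():
                print(f"{row['employee']}: day {days} survey on {send_on}")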

When it was time to send a new employee a survey, I used the existing OnShift Schedule messaging platform to send them an intro message with a link to the survey.

The new hire survey proof of concept ran for 4 months to account for the fluctuation in employee hiring + onboarding and to give us the opportunity to follow the full 90-day protocol for new hires.

Quarterly Surveys

The process was a little different for the quarterly survey proof of concept: all employees who had been employed for over 90 days were sent a survey on a specific theme once per month.

It was my responsibility to verify that each individual was still employed and that any recently hired employees who had reached day 91 were added to the group receiving the monthly survey. Then, once a month, I followed the same procedure I used for the new hire surveys to send every employee with at least 91 days of tenure a survey on Communication (month 1), Supervisor Support (month 2), or Cooperation (month 3).
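
The eligibility check and theme rotation can be sketched the same way; the dates and the 0-based month index below are illustrative assumptions, not our actual records:

    from datetime import date

    # The three rotating quarterly themes from the proof of concept.
    THEMES = ["Communication", "Supervisor Support", "Cooperation"]

    def quarterly_theme(month_index: int) -> str:
        """Theme for the nth month of the program (0-based), repeating each quarter."""
        return THEMES[month_index % len(THEMES)]

    def is_eligible(hire_date: date, send_date: date, still_employed: bool) -> bool:
        """An employee joins the quarterly group once they pass day 90 and are still on staff."""
        return still_employed and (send_date - hire_date).days >= 91

    # Illustrative check: an employee hired in mid-January, evaluated for the May send.
    print(quarterly_theme(0))                                      # Communication
    print(is_eligible(date(2018, 1, 15), date(2018, 5, 7), True))  # True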

The quarterly survey proof of concept ran for the same 4 months as the new hire proof of concept so that we could evaluate the impact of receiving a longer survey each month quarter over quarter, the impact the longer survey had on the pulse survey results, and whether the new hire survey results affected later quarterly survey results from the same individuals.

2-Question Pulse A/B Test

With the help of a Senior UX Designer, I was able to convince management that going forward with development on the customer-requested addition of a second question to the pulse survey might not be the best idea, and that we should pause to do some evaluation by way of an A/B test comparing the response + completion rates for the different survey types.

This would require development time and effort, so it wasn't something that we could start right away, but I knew that the effort would be worth it.

Looking back in our database, I sampled the employees and evaluated who regularly responded to pulse surveys, how often they left comments, and how often they chose to remain anonymous when responding. Using an Excel pivot table, I split our partner's employees into two groups with an even mix of responders and non-responders: one group would continue to receive the regular pulse survey and the other would receive the 2-question pulse survey.
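
Re-expressed outside of Excel, that split looks roughly like the sketch below; the "pulse_response_rate" field and the 50% responder threshold are my own placeholders, and the real split also weighed comment and anonymity history:

    import random

    def split_ab(employees, seed=0):
        """Split employees into two groups balanced on pulse response history."""
        rng = random.Random(seed)
        responders = [e for e in employees if e["pulse_response_rate"] >= 0.5]
        non_responders = [e for e in employees if e["pulse_response_rate"] < 0.5]
        group_a, group_b = [], []
        for stratum in (responders, non_responders):
            rng.shuffle(stratum)
            half = len(stratum) // 2
            group_a.extend(stratum[:half])
            group_b.extend(stratum[half:])
        # Group A keeps the 1-question pulse; Group B gets the 2-question pulse.
        return group_a, group_b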

results

To reduce the number of variables in our A/B 2-question pulse survey test and to allow the development staff a chance to plan for the A/B test effort, we broke the proof of concepts out into separate time frames: the new hire + quarterly survey programs (May 2018 through August 2018) and the 2-question pulse survey (mid-November 2018 through early January 2019).

New Survey Programs

At the end of every week, I downloaded the new hire survey results from Survey Monkey and created an Excel report that displayed the information in a chart for our customer-partner. On the Monday of the following week, we reviewed the survey comments and response rate during a weekly phone call set up for the purpose of reviewing the proof of concept data.
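
The weekly report itself was a simple response rate calculation (responses received divided by surveys sent, per week). A sketch of that rollup, assuming a hypothetical export file with "week" and "responded" columns:

    import csv
    from collections import defaultdict

    def weekly_response_rates(path):
        """Response rate per week from a one-row-per-survey-sent export."""
        sent = defaultdict(int)
        responded = defaultdict(int)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                sent[row["week"]] += 1
                responded[row["week"]] += row["responded"].lower() == "yes"
        return {week: responded[week] / sent[week] for week in sent}

    for week, rate in sorted(weekly_response_rates("new_hire_responses.csv").items()):
        print(f"{week}: {rate:.2%}")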

While the new survey programs consisted of different development epics, the proof of concept for each program was run at the same time to get information about the success / failure of each program faster, and because the groups participating in each survey program were mutually exclusive.

During my data analysis, I saw that the pulse survey response rate declined during the new survey platform proof of concept. I believe this was due to the increased number of surveys that more tenured employees were receiving: they were being asked to complete an additional survey of 4 - 6 questions once a month on top of their regularly scheduled weekly pulse surveys, whereas the newly hired employees weren't used to any number of surveys and had a lower response rate overall.

New Hire Surveys

At 27.95%, the response rate for the new hire surveys was lower than we expected based on the raw response rate numbers from the pulse survey; however, the lower rate may be explained by newly hired employees being less aware of the existing pulse survey program.

Lifetime new hire response rate.

Survey    Response Rate
Day 7     11.76%
Day 14    26.32%
Day 30    33.33%
Day 60    35.00%
Day 90    33.33%

Quarterly Surveys

The quarterly survey response rate started about the same as the pulse survey response rate; however, over the course of the proof of concept, the quarterly survey response rate declined by 5.42 percentage points and the weekly pulse survey rate declined by 3.96 percentage points during the 4-month period.

Pulse survey response rates vs quarterly survey response rates.

Month             Pulse Survey    Quarterly Survey

Pre-Proof of Concept Pulse Results
February 2018     44.49%          n/a
March 2018        43.00%          n/a
April 2018        43.28%          n/a

Pulse + Proof of Concept Results
May 2018          49.23%          42.86% (communication)
June 2018         46.82%          39.50% (supervisor support)
July 2018         46.32%          40.00% (cooperation)
August 2018       45.27%          37.44% (communication)

Post-Proof of Concept Pulse Results
September 2018    50.09%          n/a
October 2018      51.90%          n/a
November 2018     54.24%          n/a
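
For reference, those declines come from comparing the first and last proof of concept months in the table above: 42.86% - 37.44% = 5.42 percentage points for the quarterly survey, and 49.23% - 45.27% = 3.96 percentage points for the weekly pulse survey.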

2-Question Pulse Survey

There were 3 pieces of information that I was concerned about during our A/B test:

We had established a preferred and a less-preferred variant for our "B" option in the A/B test; due to development time, we were only able to test the less-preferred variant, and I believe that if we had been able to test the preferred variant, the results might have been slightly different.

What I saw when I looked at the results was very interesting: overall (across all customers responding to the pulse survey), the results were very consistent, and for our partner, while there was still some week to week variance in completion rates, the completion rate for our "B" variant climbed to almost meet the default "A" variant as the test continued. I suspect that there may be a couple of things at work here:

  1. there was no initial communication to the employees in the "B" group, and because there was no signifier that their regular pulse survey had changed until the point where the survey normally ended, those employees didn't initially perceive any difference and left the page when they would normally be done
  2. as the A/B test continued, employees who hadn't been completing the surveys may have noticed that they weren't getting their normal survey completion points and paid more attention as they completed - or thought they completed - their next set of surveys

Completion rate by week for all customer pulse survey responses, separating out "A" and "B" variants for our proof of concept partner.

Week                  All Other Customers    Partner "A" Variant    Partner "B" Variant
11/19 - 11/25/2018    96.96%                 97.37%                 86.96%
11/26 - 12/2/2018     96.52%                 100.00%                83.33%
12/3 - 12/9/2018      96.72%                 100.00%                87.50%
12/10 - 12/16/2018    96.48%                 100.00%                92.86%
12/17 - 12/23/2018    96.59%                 100.00%                86.36%
12/24 - 12/30/2018    96.56%                 95.56%                 92.86%

Numbers Review

This indicates to me that it was the addition of the second question, and not any other variation, that caused the decrease in the completion rate of the 2-question pulse.

lessons learned

I didn't have easy access to the generative research - or its insights - that was completed before I joined the team, and I expect that there might have been some useful information contained in it.

Something that I struggled with at the beginning of the project was figuring out how my management team wanted to see information from the ongoing proof of concept work. To start somewhere and avoid analysis paralysis, I focused first on presenting information to our customers so that they understood the value of the new survey types, then used that information when presenting the proof of concept results to my management team.

Once the proof of concept was complete, I spent more time evaluating the data and presented a wrap-up report with that information to my management team with my recommendations.

next steps

While I had some ideas developed during this process, I knew it was important to validate those with customers before we started development on the features.

Unfortunately, I wasn't able to test with customers before we started development on the new survey platform as a whole, but I was able to validate the information architecture and workflow before we started development of the new managed survey programs.

I recommended to my management team that we not add a second question to the pulse survey at any point in the future using the template from this A/B test. If the request were received again from a significant number of customers, we could re-evaluate by repeating the A/B test using the preferred variant, which included signifiers indicating a multiple-step survey.