Features

Author Surveys: Insights into Iterative Author Survey Campaigns


ACS Publications launched its rolling author surveys in 2015 with two main objectives in mind: 1) to give the corresponding author of each manuscript considered by one of our journals the opportunity to provide feedback on their experience, and 2) to collect that feedback over several years to allow for longitudinal analysis.

Within ACS Publications, the decision to launch a rolling survey campaign was not taken lightly. There is legitimate concern regarding the number of surveys authors receive each year, so the suggestion of adding yet another (or two or three, depending on an author’s publishing frequency) had to be considered carefully. Our marketing department conducts several targeted survey campaigns every year, each offering critical insight into a particular component of the publishing experience. However, the launch of rolling author surveys was justified by specific limitations of those traditional targeted campaigns.

Most survey campaigns do not provide authors with the chance to report on each of their manuscript submission experiences individually—with more than 50 journals in the ACS Publications portfolio, author experience can vary widely from journal to journal. In traditional targeted campaigns, authors’ responses represent their cumulative author experience at ACS, not their experience with one particular journal or editor. Also, targeted campaigns yield data sets that cover too short a time span to assess the long-term impact of changes within a publishing program—submission system functionality, editorial leadership, and advances in publishing technology, to name a few. Because those examples involve significant (and expensive) decisions, reporting on their effectiveness and impact is critical.

Thus, ACS Publications launched a rolling survey in the summer of 2015, inviting the corresponding author of every manuscript considered by one of our journals to provide feedback on their experience, regardless of whether the manuscript was rejected or published. Authors of rejected manuscripts receive a survey regarding their journal selection, submission, and peer-review experience. Authors of manuscripts that are accepted and published receive a survey with those same components, plus additional questions regarding their production and publication experience.

Nearly three years and tens of thousands of responses later, we have learned quite a bit—not just from the analysis our response data has afforded but also from the experience in general. What follows is our advice for anyone considering launching a similar campaign, along with interesting insights from our data and some of the lessons we learned along the way.

Getting Things Started: Details to Consider

First, when embarking upon a survey effort of this magnitude, it is important to take inventory of your data infrastructure and determine points within the submission and publication process where there are reliable “triggers” for automating survey distribution. Assembling the right team to do this work is critical. At ACS, we are fortunate to have cutting-edge IT resources and a reliable data choreography that made automation possible. For manuscripts that are rejected, we determined the trigger would be the decision letter itself, with important timing considerations (addressed below). For accepted manuscripts, since we wanted to gather feedback on authors’ production experience as well, we determined the trigger would be web publication. With these two triggers in mind, we set up scheduled reports that would “feed” our survey tool the information necessary to populate and send each survey.
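
To make the pattern concrete, here is a minimal sketch, in Python, of what a scheduled feed-building job might look like under our two triggers. Everything in it (the field names, statuses, and CSV format) is a hypothetical placeholder rather than a description of our actual systems or of Survey Gizmo's integration.

```python
import csv

# Hypothetical sketch only: field names, statuses, and the feed format are
# illustrative placeholders, not ACS systems or the Survey Gizmo integration.
def build_distribution_feed(manuscripts, feed_path):
    """Write one survey-invitation row per manuscript that has hit a trigger."""
    fields = ["email", "journal", "manuscript_title", "editor",
              "decision", "peer_reviewed", "survey_type"]
    with open(feed_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for m in manuscripts:
            if m["decision"] == "reject":
                survey_type = "rejection"      # trigger: the decision letter
            elif m["status"] == "published":
                survey_type = "publication"    # trigger: web publication
            else:
                continue                       # no trigger yet, no survey
            writer.writerow({
                "email": m["corresponding_author_email"],
                "journal": m["journal"],
                "manuscript_title": m["title"],
                "editor": m["editor"],
                "decision": m["decision"],
                "peer_reviewed": m["peer_reviewed"],
                "survey_type": survey_type,
            })
```

Carrying the journal, editor, and peer-review fields in the feed pays off later, when responses need to be matched back to that information for reporting.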

We chose Survey Gizmo because of their robust integration options and proven ability to handle large-scale survey campaigns. The ability to design our survey to be consistent with the ACS Publications brand was essential. Most out-of-the-box survey tools offer this feature, but we still mention it because it was a serious consideration for us given some of the predatory activities that impact our industry today. It was of paramount importance that our authors be able to identify with absolute certainty that our surveys are legitimate.

It was not until development of the technical foundation to support the survey was well underway that we began drafting our survey content. As a project team, we dramatically underestimated the magnitude of this task. We highly recommend engaging with a firm that specializes in communicating with and seeking feedback from international audiences, as it is a delicate practice that should be approached carefully and with attention to both grammatical and cultural details. We learned during this step how nuanced the art of asking for feedback is, and how varied practices can be across geographic regions.

When drafting survey questions and response options, it is important to consider how those responses translate into data values, and ultimately how those values will meet your reporting needs. For example:

  • Consider how you want to use/report the data before deciding on the question type. To understand how important several different aspects of the journal are to the authors, it may not be enough to have them assign a value to each aspect separately. It might be better if they rank the aspects from most to least important.
  • To receive meaningful data values, response schemes such as “excellent, fair, poor, etc.” should be consistent throughout the survey and correspond to numerical values for the purpose of reporting and analysis (a brief sketch of such a mapping follows this list).
  • Avoid using response schemes with an odd number of options, as the middle value offers little in the way of persuasive data, and many respondents will use that neutral option as a way to bypass the question, potentially watering down the response set.
  • Provide an “N/A” option so respondents can bypass questions that did not apply to their experience (e.g., questions about peer review for manuscripts that were desk rejected).
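
As a small illustration of the second and fourth points above, the following sketch shows one hypothetical way a consistent, even-numbered response scale could be mapped to numbers, with “N/A” excluded from the analysis rather than coded as a zero. The labels and scale are illustrative only, not our production survey design.

```python
# Hypothetical example: one consistent 4-point scale used across the survey,
# with "N/A" kept out of the numerical analysis entirely.
SCALE = {"Excellent": 4, "Good": 3, "Fair": 2, "Poor": 1, "N/A": None}

def mean_score(responses):
    """Average the numeric values, ignoring N/A selections."""
    values = [SCALE[r] for r in responses if SCALE.get(r) is not None]
    return sum(values) / len(values) if values else None

# Example: a desk-rejected author answers "N/A" to a peer-review question.
print(mean_score(["Excellent", "Good", "N/A", "Fair"]))  # -> 3.0
```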

The length of the survey is also an important consideration. Generally speaking, the longer the survey, the lower the response rate. It is important to look at the drafted survey and evaluate how important and “actionable” each question in the survey is. By “actionable” we mean, “Is there an action the organization can take if the authors give largely negative responses to the question asked?” Questions that fail to meet this standard will often increase the length of a survey at the expense of the completion rate without providing data of value.


If you have specific reporting outcomes in mind, consider what information is contained in your survey distribution feed. We knew we wanted to report on survey responses at the journal and editor level, and we also wanted to look at responses based on whether manuscripts were peer reviewed. We set up our distribution feed to contain these data points so that responses in Survey Gizmo can be matched to that information when surveys are returned. This has allowed us to filter our survey data to meaningful subsets of responses.
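
As a hypothetical continuation of the earlier feed sketch, the snippet below shows how carrying journal, editor, and peer-review metadata alongside each returned response makes it straightforward to filter to a meaningful subset. The field names and sample data are invented for illustration.

```python
# Hypothetical sketch: each returned response carries the metadata that was
# included in the distribution feed when the invitation went out.
all_responses = [
    {"journal": "Journal A", "editor": "Editor X", "peer_reviewed": True,  "overall": 4},
    {"journal": "Journal A", "editor": "Editor X", "peer_reviewed": False, "overall": 2},
    {"journal": "Journal B", "editor": "Editor Y", "peer_reviewed": True,  "overall": 3},
]

def subset(responses, **criteria):
    """Filter responses by any combination of feed metadata fields."""
    return [r for r in responses
            if all(r.get(field) == value for field, value in criteria.items())]

# Example: peer-reviewed submissions to one (fictional) journal.
reviewed_a = subset(all_responses, journal="Journal A", peer_reviewed=True)
print(len(reviewed_a))  # -> 1
```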

Finally—and perhaps it should go without saying—test, test, and retest your surveys before launch, ideally with audiences that were not involved in the survey development. Our project team was mortified to discover the word “survey” was misspelled in one of our test runs. Beyond the wording and survey functionality, test the data exporting and reporting steps to verify there are no surprises. It is important to verify that the data will integrate with your reporting software (if applicable) and is capable of generating the graphs, reports, and comparisons that you seek. Do not skip or rush this step.

What Have We Learned?

By and large, our survey effort has been quite successful and has offered tremendous insights into how authors experience ACS as a publisher and where we stand to improve on that front. Still, we have learned a few important lessons from decisions that, if given the chance, we would not repeat in a future survey effort.

Our first lesson was particularly painful and involved poor timing regarding when our survey invitation was sent to authors whose manuscripts had been rejected. Since the surveys are sent regarding a specific submission, the journal name, manuscript title, and final decision are referenced in our invitation. We originally designed our rejection survey to go out one full day after the rejection decision was rendered by the editor, but in doing so we did not account for the fact that not everyone reads email on the weekend or while traveling. If an author did not check their email for two days, they would return to find our survey invitation higher up in their inbox than the actual decision letter (since emails are usually sorted chronologically) and discover their paper had been rejected by way of our survey invitation rather than the decision letter itself. Luckily (if you can call it luck), this issue surfaced relatively quickly, and we immediately adjusted our survey distribution settings so authors of rejected manuscripts are contacted two business days after the manuscript decision is communicated.
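
For anyone automating a similar delay, it is worth noting that “two business days later” is not the same as “two days later.” The sketch below shows one hypothetical way to compute the send date while skipping weekends; a production scheduler would also need to account for holidays.

```python
from datetime import date, timedelta

def add_business_days(start, days=2):
    """Return the date `days` business days after `start`, skipping weekends."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            days -= 1
    return current

# A decision rendered on a Friday is surveyed the following Tuesday.
print(add_business_days(date(2018, 3, 2)))  # 2018-03-02 is a Friday -> 2018-03-06
```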

Our second major lesson was less painful, but equally surprising. When we launched our author surveys, we aspired to follow up personally with authors if such follow up seemed welcome and necessary. We provided respondents with the option to identify themselves by name and email address if they wished to be contacted regarding the feedback they provided. We were astonished to find that more than 35% of our respondents identified themselves and indicated their openness to being contacted. We expected only a handful of these respondents each month, and planned to coordinate with our editorial team to respond to the authors who wrote in about an experience they had with a particular journal. When hundreds and then thousands of responses rolled in, it became clear that our staff could not possibly keep up with our personal follow-up aspirations. We quickly disabled that feature in our surveys and looked for larger-scale ways to engage with our authors around some of the issues being raised—through social media, educational pieces, and posts to our ACS Axial page, for example.

The final lesson we will share here, which we are still struggling to address, is the complicated nature of “analyzing” open-text responses. Concerned that our radio button and ranking options would not adequately capture our authors’ perspectives, we decided to provide opportunities for authors to explain their selections with open-text boxes at the end of each survey section. Many authors (we have observed over 50%) provide open-text responses to augment their response selections.

Our decision to provide open-text options at the end of each survey section compounded the issue; we have observed those authors who offer open-text responses do so repeatedly within the same survey. Combined, we have more than 34,000 survey responses and more than 32,000 open-text responses. Simple word-cloud analysis does not uncover trends in these responses. Instead, we have to take a targeted approach to analyzing this data (that is, manually dig through it) if we want useful information. For example, when we wanted to learn what our authors thought of manuscript transfer, we looked for responses containing terms relevant to manuscript transfer. Author quotes from that effort changed the way some of our editors looked at manuscript transfer as a practice and philosophy, so the digging was worth it. Still, we often feel we are doing those responses a disservice by not systematically analyzing them, and continue to evaluate our options to streamline our survey output on that front.
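
For reference, the targeted approach we describe amounts to straightforward keyword filtering of the open-text responses. The sketch below is a hypothetical version of the manuscript-transfer search mentioned above; the term list and sample comments are invented for illustration.

```python
# Hypothetical sketch of a targeted open-text search (here, manuscript transfer).
# A real term list would include the related phrasings authors actually use.
TRANSFER_TERMS = ("transfer",)

def find_mentions(comments, terms=TRANSFER_TERMS):
    """Return open-text responses mentioning any of the given terms."""
    return [c for c in comments if any(t in c.lower() for t in terms)]

comments = [
    "The transfer to a sister journal was seamless.",
    "Peer review took longer than expected.",
]
print(find_mentions(comments))  # -> only the first comment
```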

Even in the face of these challenges, the results of our author survey continue to underscore the value of such a campaign. Iterative author surveys offer great insights into how you’re doing as a publisher and can be a powerful tool in evaluating specific journals’ and editors’ impact on author experience. When done well, the results can inform important decisions around journal strategy and publishing operations.

 

Jessica Rucker is Assistant Director of Editorial Services, in the Publications Division of the American Chemical Society. Jody Plank, PhD, is the Manager of Products & Analytics, Editorial Services, in the Publications Division of the American Chemical Society.