In this blog post, I will share insights from my experience working as a researcher on the RETREX (Rethinking Translation Expertise) project, funded by the Austrian Science Fund (FWF) and led by Hanna Risku at the University of Vienna. While more information about the project itself and our research team (currently consisting of one project lead and four researchers) can be found here on our website, my focus will be on the ethnographic data collection we conducted as a closely collaborating research team.
Methodology
Our research takes an ethnographic approach to understand how translation expertise is perceived and practised by professionals in real workplace settings, offering a more holistic perspective than traditional experimental methods. This entails a combination of participant observations, interviews, focus groups, and document analysis. Using ethnographic approaches allows us to capture the social, material, and institutional factors influencing translation practices, emphasising the situation-dependent, collaborative, and artifact-mediated dimensions inherent in translation work. Our multi-case research design targeted four selected translation workplaces: three translation agencies and one public service institution with an in-house translation department that generously agreed to participate in our study. We recently shared insights into our ethnographic approach to investigating translation expertise in the newsletter of the EST, which can be accessed on our website.
Our comprehensive data collection spanned 34 days with 134.5 hours of participant observation and included 12 interviews across four case studies, totalling 13.5 hours. In addition, we organised focus groups with CEOs and translators and gathered extensive digital and biographical data. Five researchers spearheaded these efforts, with specific roles allocated for observations, interviews, document collection, and focus group moderation, supported by additional team members for logistical tasks and transcription. Five team members were engaged in the coding and analysis of the data. These figures illustrate the extensive coordination necessary to effectively manage a research team and highlight the various challenges, both expected and unexpected, that can emerge from such collaborative endeavours. This is what I would like to share with you in this blog post, which I have been kindly invited to contribute.
When starting out with ethnographic fieldwork, we found that working in a team is practical. Having a larger team of researchers enhanced our network within the field, enabling us to reach a broader pool of potential participants and thereby increasing the likelihood of their participation in our study. As a team, we were also more flexible with observation dates; for instance, we could assign two researchers to a single translation agency on the same day and adapt more readily to participant availability.
Additionally, our diversity of personal and professional backgrounds contributed to varied observational foci. This was particularly evident when two researchers observed the same participants, leading to the identification of different phenomena based on our unique experiences. For instance, my prior internships at translation agencies had familiarised me with industry tools, allowing me to follow the workflow fairly seamlessly. In contrast, my colleagues, with their stronger academic backgrounds, might have approached the same situations differently or asked the participants to explain. Moreover, participants might interact with or explain things differently to each researcher, all of which leads to richer insights.
However, involving multiple researchers in observations can challenge participants’ trust, as they open their workspace to more than one person. It can also make it harder to synthesise a coherent overall picture: when we attempted to merge our varied observations into a unified narrative, each researcher’s insights highlighted different aspects of the translation process, and simply combining the gathered data could not always compensate for this. Additionally, the participants showed us different aspects of their work, influenced by our individual backgrounds and previous experiences in similar work environments. Our understanding of certain processes varied, often depending on our familiarity with practices and concepts from past professional engagements. Furthermore, our affiliations with the translation agencies and relationships with the project managers differed, adding another layer of complexity to our collective data collection process.
The depth of personal connections with participants can vary significantly from one researcher to another. In our study, a colleague and I each spent a total of five days conducting participant observations at a translation agency, observing two distinct project managers. Each of us formed a stronger rapport with a different project manager, so when planning the post-observation interviews, it was straightforward to decide who would interview which participant. It was also easy to cover for unforeseeable absences of researchers. However, this requires thorough documentation of all communication with participants and of the data and observations gathered by each team member, along with effective communication within the team to ensure consistency and continuity in the research process. Such meticulous documentation helps to seamlessly integrate the contributions of different researchers, particularly when covering for absences, and facilitates cross-referencing and corroborating findings, thereby strengthening the reliability and depth of the ethnographic study.
Coding Process
Our approach contrasts with what Saldaña (2015: 53) refers to as “lone wolf coders”. By working collaboratively, we leveraged multiple perspectives to enhance the depth and breadth of our analysis. This team-based approach ensured a more comprehensive understanding of the data through the diverse insights and interpretations brought forth by each team member. At every stage, it was crucial for us to maintain transparency and ensure that each step was clearly documented and traceable.
While not all researchers were directly involved in the data collection of each of the four case studies, it was vital that they were still familiar with the collected data and the overall context of each case as this is important for analysing and discussing the data together. This was facilitated by the main researchers preparing detailed summaries that highlighted key information and background on each case. After completing the data collection for a case study, the team would convene to discuss the collected data and to have the researchers who conducted the fieldwork share their thoughts and experiences in the field.
In these meetings, which we termed “Deutungswerkstätten” (“interpretation workshops”; Nadig 1986), we brainstormed preliminary codes for the data, following the qualitative analysis framework proposed by Kuckartz and Rädiker (2022). First, we decided what we wanted to learn from our data, based on our research questions. Then, we determined what kinds of categories we would use and how detailed they should be. After familiarising ourselves with the data, we went through the texts one by one, sometimes creating categories from the content and sorting the data into them, employing a combination of inductive and deductive category development. We then refined our categories, merging some for clarity or introducing new ones as necessary. For each main category, we formulated a definition and divided it into as many subcategories as needed. While we first brainstormed on paper, we soon moved on to the qualitative analysis software MAXQDA.
In the end, we had a set of categories with clear definitions and examples for each. As a team, we developed a coding guide but stayed open to making changes if needed. We conducted each of these steps together or in close collaboration. In the first phase of coding, four researchers separately worked on one interview and one observation protocol, discussing their experiences afterwards and thereby further developing the categories and filling them with definitions and examples. In a second preliminary coding phase, two researchers each worked on one interview and one observation protocol and compared their work afterwards. With these four researchers involved, we covered a total of six documents in alternating teams. Saldaña (2015: 54) suggests that a team that collectively codes data should consist of no more than five people, as problem-solving and decision-making otherwise become exponentially more difficult. While I cannot speak to working in bigger teams, I do suggest clearly dividing roles and tasks between the team members involved. Our project lead did not take part in the coding process but participated in the analysis and the collaborative interpretation sessions, which put her in a position to make executive decisions and provide guidance when needed.
After finalising our code set, we embarked on the main phase of coding. To efficiently manage the workload, we divided the tasks among four researchers, considering their varying familiarity with the cases, capacities, and availability. We adopted a strategy where each document was handled by two researchers: one for initial coding and another for reviewing and refining this preliminary work. This dual-researcher approach proved advantageous, especially when one researcher, perhaps less involved in data collection, encountered ambiguities that were readily clarified by their partner, who possessed deeper insights.
While MAXQDA, the qualitative analysis software we used, offers the possibility to calculate intercoder reliability, we decided not to use it. Our approach is similar to what Reyes, Bogumil, and Welch (2021) describe as a “Living Codebook”: a clearly traceable, transparent discussion of the constant evolution of codes. This moves beyond intercoder reliability, a tool that, as Morse (1997: 445–446) suggests, is more applicable to (semi-)structured data than to data of a rather unstructured nature. Additionally, quantifying intercoder reliability could lead researchers to judge the use of certain codes as right or wrong and would hinder discussions about the categories and the coding of the data. Instead, by reviewing and, if necessary, adapting our codes, we were able to significantly enrich our analysis, as we regularly identified and addressed such nuances. We also noticed that different researchers tended to focus on certain codes more than others, making it essential to review and supplement codings continuously.
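For readers unfamiliar with what such a reliability measure quantifies, a common choice is Cohen’s kappa, which compares two coders’ observed agreement against the agreement expected by chance. The following is a minimal illustrative sketch in Python; the segment codes and coder labels are invented for illustration, not taken from our data, and software like MAXQDA computes such measures internally:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders assigning one code per segment."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: share of segments both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: expected overlap given each coder's code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two coders to ten interview segments.
coder_1 = ["expertise", "workflow", "workflow", "tools", "expertise",
           "workflow", "tools", "expertise", "workflow", "tools"]
coder_2 = ["expertise", "workflow", "tools", "tools", "expertise",
           "workflow", "tools", "workflow", "workflow", "tools"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # prints 0.7
```

A kappa near 1 would indicate near-perfect agreement; the point of our “Living Codebook” approach is precisely that the disagreements such a single number hides were, for us, the productive material for discussion.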
Reyes, Bogumil, and Welch (2021: 6) refer to writing memos as “the substantive heart of qualitative data analysis”. The memos we produced fell into two primary categories: methodological and analytical, with the latter being closely tied to the data. These categories were fluid, allowing for a natural progression from one type to the other. Furthermore, memos served not only to justify our choice of a specific code but also the purpose outlined by Reyes, Bogumil, and Welch (2021: 19): “as a way to identify the patterns that the research assistants see as they go through the data, raise questions, discuss what they are seeing versus what is not being said, and always rooting their discussions in particular examples from the data.” Additionally, we used memos during the initial coding process to flag unclear passages and communicate questions and doubts to the researcher responsible for reviewing.
Reflections on Collaborative Research
During our weekly team meetings, we addressed challenges encountered while coding and kept each other updated on our progress. We were flexible in adapting our responsibilities to ensure the most efficient and thorough analysis possible, all while meeting our self-set deadlines for both initial and review coding. Including preparatory work such as anonymisation and transcription, as well as other related tasks, the entire process took over eight months to complete. Dividing the work between researchers undoubtedly accelerated our progress.
As Saldaña (2015: 53) puts it: “Multiple minds bring multiple ways of analysing and interpreting the data.” Embracing this principle introduces several steps that “lone wolf” researchers might not encounter. I hope to have offered some insight into the practical aspects of conducting fieldwork as part of a team. The privilege of working within a team of brilliant Translation Studies researchers has not only enriched our project but has also been a profound personal learning experience. I am now looking forward to the next steps in our joint research process, which include analysing the data and co-authoring papers.
References
Morse, J. M. (1997). “Perfectly healthy, but dead”: The myth of inter-rater reliability. Qualitative Health Research, 7(4), 445–447. https://doi.org/10.1177/104973239700700401
Nadig, M. (1986). Die verborgene Kultur der Frau: Ethnopsychoanalytische Gespräche mit Bäuerinnen in Mexiko. Subjektivität und Gesellschaft im Alltag von Otomi-Frauen [The hidden culture of women: Ethnopsychoanalytical conversations with female farmers in Mexico. Subjectivity and society in the everyday life of Otomi women]. Frankfurt a. M.: Fischer.
Reyes, V., Bogumil, E. & Welch, L. E. (2021). The Living Codebook: Documenting the Process of Qualitative Data Analysis. Sociological Methods & Research, 0(0). https://doi.org/10.1177/0049124120986185
Saldaña, J. (2015). The Coding Manual for Qualitative Researchers (3rd ed.). SAGE Publications.
*This research was funded in whole by the Austrian Science Fund (FWF) [P 33132-G]. For the purpose of open access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
** Antonia Baumann works as a research assistant on the RETREX project at the University of Vienna, where she is also completing her PhD thesis on feedback in the translation industry. Additionally, she works as a freelance Speech-to-Text Interpreter.