
ORIGINAL RESEARCH article

Front. Psychol., 19 April 2022
Sec. Educational Psychology
This article is part of the Research Topic Teaching and Learning Research Methods: Fostering Research Competence Among Students.

Measuring Perceived Research Competence of Junior Researchers

  • 1College of Health Professions, Virginia Commonwealth University, Richmond, VA, United States
  • 2Applied Pedagogy Department, Universitat Autònoma de Barcelona, Barcelona, Spain
  • 3Serra Hunter Fellow, Catalonia, Spain
  • 4School of Education, Virginia Commonwealth University, Richmond, VA, United States
  • 5Cognitive, Developmental, and Educational Psychology Department, Universitat Autònoma de Barcelona, Barcelona, Spain

Graduates of doctoral (Ph.D.) programs are expected to be competent at designing and conducting research independently. Given the level of research competence needed to successfully conduct research, it is important that assessors of doctoral programs (e.g., faculty and staff) have a reliable and validated tool for measuring and tracking perceived research competence among their students and graduates. A high level of research competence is expected of all Ph.D. graduates worldwide and across all disciplines/fields. Moreover, graduates of Ph.D. programs may complete their studies in one country but then obtain a research position in another, emphasizing the need to ensure that all doctoral programs are fostering similar levels of research competence. Thus, the purpose of this study was to gather additional evidence for the validity and reliability of the Research Competence (R-Comp) scale. Specifically, we sought to extend the findings of Böttcher and Thiel (2018) by adapting the scale, translating it into other languages, and applying the tool with a sample of early-stage researchers. Our findings provide initial evidence that the adapted scale, the Perceived Research Competence (PR-Comp) scale, is appropriate for use in three languages and across a variety of disciplines/programs of study.

Measuring Perceived Research Competence of Junior Researchers

The doctor of philosophy (Ph.D.) degree is considered a research-intensive degree, designed to foster the development of independent researchers. Upon completion of a Ph.D. program, individuals are expected to be not only experts in their chosen field but also competent in independently designing and conducting research following the rules of science (De Jong, 2021); this expectation is shared across the globe (e.g., Hambrick, 1997; Pole, 2000; Trotter, 2003; Jackson, 2013; Pinto et al., 2013; Poh and Kanesan Abdullah, 2019; De Jong, 2021; Fairman et al., 2021; MacLeod and Urquiola, 2021). However, despite conventional beliefs, doctoral study alone may not adequately prepare students to perform their roles as independent researchers. Given the level of research competence needed by graduates of doctoral programs – particularly those who go on to become academic researchers – it is important to have a reliable and validated tool for measuring and tracking perceived research competence among junior researchers (Fairman et al., 2021). Thus, the purpose of this study is to gather evidence for the validity and reliability of the Perceived Research Competence (PR-Comp) scale, which could then be used to inform interventions for fostering greater research competence among early-stage researchers.

Research Competence

According to self-determination theory (SDT; Deci and Ryan, 2000), motivation stems from an individual’s need to satisfy three basic psychological needs, one of which is competence. Competence is thought to satisfy one’s psychological need to master personally challenging tasks (Deci and Ryan, 2000). If the psychological need for competence is met, students may be able to work more effectively and maintain greater well-being; conversely, if it is not met, students may show signs of negativity (Deci and Ryan, 1985; Sheldon and Elliot, 1999; Andersen, 2000). Perceived competence is one’s perception of both one’s basic capability to perform a task and one’s personal judgment of the task’s importance. The importance of perceived competence has also been demonstrated in areas of life such as academic achievement (e.g., Losier and Vallerand, 1994, 1995), work (e.g., Gagné and Deci, 2005; Arshadi, 2010), and sports (e.g., Vansteenkiste et al., 2004; Wilson et al., 2006).

Research specifically focused on perceived research competence is limited in both quantity and scope. Regarding graduate students’ and early career researchers’ perceived competence, research often focuses on broader perceived professional competency (Graesser, 2014; Latorre, 2020), clinical competency (Dozier, 2001), or competency in using specific assessments (Ingram et al., 2020) rather than competence in carrying out research independently. From what we do know, it seems that students who participate in research activities as part of their training report higher levels of research competence, particularly in the areas of data analysis and applying results to practice (Olehnovica et al., 2015). Jung (2018) recently investigated which factors contributed to doctoral students’ perceived research competence: research-oriented learning environments positively influenced task-oriented (e.g., critical thinking and problem solving) and idea-oriented (e.g., innovation and creativity) research competencies. Notably, Jung (2018) found that participating in manuscript preparation and dissertation writing did not have a strong influence on students’ perceived research competence. Findings from other studies suggest that the more students are exposed to various aspects of the research process, such as designing and carrying out studies, performing literature searches, and publishing manuscripts, the more confident they are in their ability to do research (Phillips and Russell, 1994; Lambie and Vaccaro, 2011; Lambie et al., 2014; Petko et al., 2020). Particularly important for academic researchers, especially those entering the tenure track, perceived competence in one’s research abilities may also be linked to interest in conducting research, research productivity, innovation, and creativity for both graduate students and early career researchers (Olehnovica et al., 2015; Jung, 2018; Petko et al., 2020). Given the influence perceived competence may have on both academic and non-academic outcomes, coupled with the expectation that graduates of doctoral programs in all fields can effectively conduct research independently, a valid and reliable tool for measuring perceived research competence across disciplines and settings is needed.

Recognizing the need for programs to be able to assess the degree to which they were producing competent researchers, Böttcher and Thiel (2018) developed the Research Competence (R-Comp) questionnaire. The R-Comp, a self-report measure of one’s research competence, was designed to measure research competence across multiple disciplines. It was created in alignment with the RMRC-K model, which posits research competence as comprising five dimensions: skills in Reviewing the state of research, Methodological skills, skills in Reflecting on research findings, Communication skills, and content Knowledge (Thiel and Böttcher, 2014). Their resulting instrument consisted of five factors, one for each dimension of research competence. Though the R-Comp was intended to measure competence across multiple disciplines, it was developed using a sample of students enrolled in a science program at either the Bachelor’s (27.4%), Master’s (68.5%), or doctoral (4.1%) level at a German university. The R-Comp was also developed and administered in German and then translated into English only for publication. As such, more work is needed to examine the R-Comp’s utility for measuring research competence among doctoral students and early career researchers broadly. Since the expectations for research competence are similar across fields globally, there is also a need for a tool that reliably and validly measures perceived research competence across multiple disciplines and languages. As such, the purpose of our study was to gather additional evidence for validity and reliability to support the R-Comp’s intended use of measuring research competence across multiple disciplines. Specifically, since the original items and factor structure were created and established in German and translated into English for publication purposes only, we sought to replicate Böttcher and Thiel’s (2018) findings by collecting data using the English-translated version of the questionnaire. To begin exploring the appropriateness of using the R-Comp across multiple languages, we also translated the items into two other languages to gather initial evidence for validity and reliability to support broader application of the instrument. Henceforth we refer to the questionnaire as the PR-Comp, as we believe this name best captures the intended purpose of the instrument.

Materials and Methods

Participants

Current enrollees in, or recent graduates of, Ph.D. programs were recruited to participate in this study. Participants (N = 456) were primarily female (62.7%) and ranged in age from 19 to 64 years, with a mean age of 33.1 years (SD = 8.09). Our sample represented 28 nationalities (see Table 1) and 118 disciplines. The vast majority of participants (n = 405) were still enrolled in a Ph.D. program; these students’ current year of study ranged from the first (21.5%) to the fifth or beyond (7.9%). Approximately 12% of the sample were recent graduates/early career researchers. Most students (n = 365) were enrolled in or had completed their doctoral program full-time, though 20% were enrolled part-time. Participants reported an average of 5.10 (SD = 3.14) years of research experience, including their Ph.D. experience; research experience ranged from none at all (n = 9) to as much as 22 years (n = 1). Lastly, slightly more than half (54.40%) of the sample had not attended any additional education or training on research methods beyond what their Ph.D. program offered.


Table 1. Nationalities represented by language.

Procedure

We began with a forward and back translation process intended to produce equivalent versions of the scale in three languages: Catalan, Spanish, and English. These three languages were chosen because the authors (1) were native speakers of these languages and (2) worked and recruited in countries where these languages are predominantly spoken. The original R-Comp items were translated from German to English solely for publication purposes (see Thiel and Böttcher, 2014). Our process began by having native Catalan and Spanish speakers translate the English version of the items into Catalan and Spanish, respectively. Items were then translated back into English by a non-native English speaker. Lastly, the items were revised by a native English speaker, at which point revisions and inconsistencies were discussed by the whole group. Part of this process included ensuring that items were unidimensional (i.e., not double-barreled). For example, the R-Comp item “I can confidently apply even complex methods to analyze data/sources/material” was split into the following three items: “I can confidently analyze quantitative data,” “I can confidently analyze qualitative data,” and “I can confidently use a variety of methods for analyzing data (Excel, specialized software, etc.).” Items were also reviewed and edited for clarity when needed. For instance, the item “I am able to plan a research process” was revised to “I am able to plan a research study.” The final list of items was reviewed by native speakers of each language as well as by two individuals fluent in all three languages. The adapted 36 items of the PR-Comp are presented side by side with the original R-Comp items in Table 2. We retained both the 5-point Likert response scale (“Strongly Disagree” to “Strongly Agree”) and the five proposed subscales (Böttcher and Thiel, 2018; Böttcher-Oschmann et al., 2021).


Table 2. The adapted PR-Comp items compared to original R-Comp items by subscale.

Using an electronic recruitment campaign, we recruited a non-probabilistic sample of students and recent graduates of doctoral programs in Mexico, Spain, and the United States. Students were recruited from programs in these countries because the authors worked in or had connections to programs there. Additionally, students currently enrolled in, and recent graduates of, doctoral programs were recruited using social media advertisements. Consenting participants, who received the link to the questionnaire either via email or through the study advertisement on social media, completed the questionnaire online, on their own time, in one sitting. Participants were informed that their participation was completely voluntary and anonymous. After consenting, participants could choose to complete the questionnaire in either Catalan (n = 141), English (n = 111), or Spanish (n = 204). Data collection took place from late fall of 2019 to summer of 2020.

Data Analysis

Data were screened for response patterns prior to analyses. Descriptive analyses were conducted, and both Cronbach’s alpha and McDonald’s omega were calculated to examine the internal consistency of the five subscales in each language. Once internal consistency and normality within each language were established, we conducted confirmatory factor analysis (CFA) using all data to confirm the five-factor structure of the PR-Comp and provide added evidence for validity based on internal structure. Model fit was assessed using the following indices: comparative fit index (CFI) > 0.95 (Hu and Bentler, 1999); Tucker-Lewis index (TLI) > 0.95 (Hu and Bentler, 1999); and root mean square error of approximation (RMSEA) < 0.06 (Hu and Bentler, 1999), together with its 90% confidence interval (Kline, 2016). These guidelines and suggested “cutoff” points were used to inform the overall evaluation of model fit (Marsh et al., 2004). We also compared the AIC values of each model, with smaller AICs indicating better model fit (Schermelleh-Engel et al., 2003). The chi-square goodness-of-fit test was examined to test the null hypothesis that a model fitting the data exactly exists (Kline, 2016). Once factorial validity of the model was ensured, and with the aim of testing invariance across languages, we conducted CFAs on all three language-models (Catalan, English, and Spanish), following the recommendations of Sass and Schmitt (2013). Assuming factorial validity of the three language-models was obtained, we then evaluated configural invariance (as a baseline model) and measurement invariance (both metric and scalar) of the factor model by applying the forward approach (sequentially adding model constraints). To evaluate changes in model fit, we used Chen’s (2007) cutoff criteria: invariance is rejected when ΔCFI ≤ −0.01 or ΔRMSEA ≥ 0.015. We also required non-significant χ2 values for the language-models. All data were analyzed using SPSS and AMOS version 23.
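Although these analyses were run in SPSS, the two reliability coefficients are simple to reproduce elsewhere. Below is a minimal sketch in Python, assuming responses are stored one item per column; the file name and item names (comm_1 … comm_7) are hypothetical placeholders, and omega is computed from a single-factor solution via the factor_analyzer package.

```python
# Minimal sketch (not the authors' SPSS workflow): Cronbach's alpha and a
# one-factor McDonald's omega for one PR-Comp subscale.
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def mcdonald_omega(items: pd.DataFrame) -> float:
    """omega_total from a single-factor solution:
    omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)."""
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(items)
    loadings = fa.loadings_.flatten()
    uniquenesses = fa.get_uniquenesses()
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + uniquenesses.sum())

# Hypothetical usage: responses scored 1-5, one column per item.
data = pd.read_csv("prcomp_responses.csv")                # placeholder file
comm = data[[f"comm_{i}" for i in range(1, 8)]].dropna()  # placeholder item names
print(f"alpha = {cronbach_alpha(comm):.2f}, omega = {mcdonald_omega(comm):.2f}")
```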

Results

Prior to analyses, we examined descriptive statistics and assessed our data for missingness; our data were complete, and as such, no missing data patterns were identified. However, those completing the PR-Comp in English did tend to have higher scores than those completing it in Catalan or Spanish; all descriptive statistics are provided in Table 3. Subscale scores were normally distributed within each language. Cronbach’s alpha was calculated for each of the five subscales within each language. All alphas were equal to or greater than 0.75, with most above 0.80, indicating good internal consistency among subscales within each of the three languages (see Table 4). One item on the Communication Skills subscale, item 3, showed a low item-scale correlation (0.27); alpha would increase from 0.84 to 0.92 if item 3 were deleted. Three items on the Methodological Skills subscale yielded moderate item-scale correlations; for each of these items, alpha would either increase marginally or not at all if the item were removed. McDonald’s omega was also calculated to ensure the stability of the reliability estimates in case the assumptions underlying alpha (e.g., tau-equivalence) were not met. Omegas tended to be higher than alphas, providing added evidence that the subscales produced reliable scores.


Table 3. Descriptive statistics for PR-Comp subscales.


Table 4. Reliability analyses of the PR-Comp subscales and instrument.

Confirmatory factor analyses were conducted to examine whether the proposed five-factor structure could be confirmed with our data. According to the chi-square goodness-of-fit test, we reject the null hypothesis that a model fitting the data exactly exists; χ2(5) = 12.58, p = 0.03. To assess model-data fit, we examined the RMSEA and its 90% confidence interval, the CFI, the TLI, and the AICs of a five-factor model including all 36 PR-Comp items and a reduced five-factor model with the items having low or moderate item-scale correlations removed. Deleting these items did not improve model fit. The full model resulted in a more favorable RMSEA value, marginally more favorable CFI and TLI values, and a lower AIC. However, the confidence intervals for the RMSEA provided weaker evidence of good model fit. Taking all of these indices together, the five-factor model including all 36 items was retained (see Table 5 and Figures 1, 2).
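To make the model-comparison step concrete, a five-factor CFA of this form can be sketched with the semopy package; the authors used AMOS, so this is an illustration rather than their actual workflow. The item names are hypothetical placeholders, and only three indicators per factor are shown instead of the full 36 items.

```python
# Illustrative five-factor CFA for the PR-Comp structure using semopy.
# Factor and item names are placeholders, not the published item list.
import pandas as pd
import semopy  # pip install semopy

MODEL_DESC = """
Review        =~ rev_1 + rev_2 + rev_3
Methods       =~ met_1 + met_2 + met_3
Reflection    =~ ref_1 + ref_2 + ref_3
Communication =~ com_1 + com_2 + com_3
Knowledge     =~ kno_1 + kno_2 + kno_3
"""

data = pd.read_csv("prcomp_responses.csv")  # placeholder file

model = semopy.Model(MODEL_DESC)
model.fit(data)

# calc_stats returns a one-row DataFrame of fit indices (column names as of
# semopy 2.x). Compare CFI/TLI/RMSEA against the Hu and Bentler (1999)
# guidelines, and compare AIC across competing models (full vs. reduced).
stats = semopy.calc_stats(model)
print(stats[["chi2", "chi2 p-value", "CFI", "TLI", "RMSEA", "AIC"]])
```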


Table 5. Model-fit indices for PR-Comp full versus reduced model.


Figure 1. Five-factor model of 36-item PR-comp scale.


Figure 2. Five-factor model of reduced PR-comp scale.

Model invariance across languages was first evaluated by assessing the factorial validity of the model in each language. As seen in Table 6, model-fit indices were not adequate for any of the language-models (M1–M3). Indeed, Hoelter’s critical N at the 0.05 significance level indicated that M2 and M3 (English and Spanish, respectively) did not reach the minimum sample size required by the complexity of the model, which should be above 200 (Garson, 2015). Thus, we were not able to guarantee factorial validity as a first step toward model invariance. Nonetheless, to explore the results of evaluating measurement invariance, we continued with the analyses as if model fit were acceptable. As expected, Table 7 shows configural non-invariance (unconstrained baseline model) as well as metric and scalar non-invariance, as indicated by the chi-square significance tests and by Chen’s (2007) cutoff criteria for changes in CFI and RMSEA. These results suggest that the data collected were not sufficient to establish model invariance across languages.


Table 6. Model-fit indices for PR-Comp language-models.


Table 7. Measurement invariance across PR-Comp language-models.
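To make the invariance decisions behind Table 7 concrete, the sequential (forward) check described in the Data Analysis section can be expressed as a short decision rule. This is an illustration of the Chen (2007) criteria, not the authors’ AMOS workflow, and the fit values below are placeholders rather than the values reported in Table 7.

```python
# Sequential invariance check over fit indices exported from any SEM program.
def invariance_step_holds(cfi_prev: float, cfi_next: float,
                          rmsea_prev: float, rmsea_next: float) -> bool:
    """Chen (2007): reject the more constrained model when CFI drops by more
    than 0.01 or RMSEA rises by 0.015 or more."""
    return (cfi_next - cfi_prev) > -0.01 and (rmsea_next - rmsea_prev) < 0.015

# Placeholder (CFI, RMSEA) values for the configural -> metric -> scalar models.
fits = {"configural": (0.93, 0.061), "metric": (0.91, 0.068), "scalar": (0.88, 0.079)}
steps = list(fits.items())
for (name_a, (cfi_a, rm_a)), (name_b, (cfi_b, rm_b)) in zip(steps, steps[1:]):
    ok = invariance_step_holds(cfi_a, cfi_b, rm_a, rm_b)
    print(f"{name_a} -> {name_b}: {'retained' if ok else 'rejected'}")
```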

Discussion

Our study aimed to further validate Böttcher and Thiel’s (2018) questionnaire and to gather evidence for validity and reliability that supports its use across multiple disciplines and languages. Our findings provide added evidence that the PR-Comp can be applied across various disciplines and fields of study, in addition to confirming the five-factor structure underlying the items, as proposed by Böttcher and Thiel (2018).

The results of our CFA were consistent with those of Böttcher and Thiel (2018), and our analysis confirmed that a five-factor structure fits the data well. This provided validity evidence based on the internal structure of the scale and supported the scale’s theoretical alignment with the RMRC-K model of research competence (Thiel and Böttcher, 2014). Furthermore, our results provided evidence for the reliability of the scores produced by the PR-Comp, both globally and within each language. All alphas were quite high and would either increase marginally or not at all with the deletion of any item. As previously mentioned, one item, item 3 on the Communication Skills subscale, did appear problematic; its removal would result in a notable increase in alpha. This finding was consistent across languages, suggesting the item should be examined further for clarity and for its relation to the other items in the scale. McDonald’s omega values were also calculated for each subscale and tended to be higher than the alphas. Taken together, the alphas and omegas suggest the internal consistency of the subscale items is quite high. However, there are important limitations that constrain the generalizability of our findings.

Our findings do provide some initial evidence for external validity, as the PR-Comp scale appears appropriate for use, and produces reliable scores, across countries, languages, and fields of study. However, due to the small sample size within each language group, we are not yet able to definitively demonstrate measurement invariance across languages. Future studies should seek to gather added evidence for the external validity of the instrument. Another limitation to the broad applicability of our findings is that they are based on responses to a self-reported assessment of one’s own competence, which may limit the accuracy of responses. Participants may have responded more favorably to appear more competent than they truly are. Findings might be more accurate if scores represented another person’s assessment of an individual’s research competence rather than one’s own judgment. For instance, doctoral program faculty and/or instructors’ ratings of individual students’ competence on each of the PR-Comp items may provide a more objective rating of competence. Moreover, correlating instructors’ ratings of research competence with students’ ratings could provide convergent validity evidence for PR-Comp scores. As research competence and research self-efficacy are often used interchangeably, future researchers could also consider using scores on a research self-efficacy measure as a means of gathering discriminant validity evidence for PR-Comp scores. This information would be useful for broadening our understanding of the theoretical differences between these two constructs. Though the PR-Comp items were drafted and edited by individuals holding a Ph.D., it would be beneficial to gather further evidence for validity based on test content by having doctoral faculty from various fields review the items. This would help ensure that items are relevant to all graduates of doctoral programs. Another limitation to note is that our sample was recruited from doctoral programs and/or countries in which the authors worked or to which they had ties. While steps were taken to gather data from a large sample representing various fields, countries, and languages, our findings might not represent all early career researchers across the globe. Similarly, the majority of our sample were still enrolled in their doctoral programs, meaning our findings are less generalizable to early career researchers and recent graduates of doctoral programs. Finally, future studies should continue to investigate measurement invariance (Davidov et al., 2012) to ensure that we are measuring similar constructs within each language group. While our primary purpose was to expand evidence for the validity and reliability of the PR-Comp’s original purpose, more work is needed to be sure the scale captures the same construct across languages. Since the original items were created in German and then translated for publication, including data from a German-speaking sample would also be beneficial.

Having a tool for measuring research competence that is appropriate across settings and languages, and that produces reliable scores, has implications for both practice and research. Regarding practice, doctoral program faculty and staff could use this tool to measure and track their students’ and graduates’ perceived research competence. For example, PR-Comp scores could be used to identify areas of strength and weakness within a program, which could then inform intervention efforts to boost research competence in a more targeted way. A tool that can be completed by students would not only be a more efficient means of collecting these data but would also enable programs to track individual and cohort growth over time. This tool could also be used to assess readiness for doctoral study, since a basic understanding of some research concepts is needed before beginning doctoral study. Regarding research, the PR-Comp scale appears to produce stable and generalizable findings that apply to the broader population of early career researchers and doctoral students. This could lead to a greater understanding of the global landscape of research competence among graduates of Ph.D. programs and, ultimately, support doctoral programs in their global quest to train competent researchers (e.g., Hambrick, 1997; Pole, 2000; Trotter, 2003; Jackson, 2013; Pinto et al., 2013; Poh and Kanesan Abdullah, 2019; Fairman et al., 2021; MacLeod and Urquiola, 2021).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics Statement

The studies involving human participants were reviewed and approved by Virginia Commonwealth University Institutional Review Board (IRB). The patients/participants provided their written informed consent to participate in this study.

Author Contributions

CQ-P and SM led this project, designing the study and taking the lead on manuscript preparation. KN and ES-I contributed to data collection efforts and drafted the initial introduction of the study. JM-F supported data collection and reviewed drafts of the manuscript. All authors contributed to data collection efforts and manuscript preparation.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Andersen, S. (2000). Fundamental human needs: making social cognition relevant. Psychol. Inq. 11, 269–276. doi: 10.1207/S15327965PLI1104_02

Arshadi, N. (2010). Basic need satisfaction, work motivation, and job performance in an industrial company in Iran. Procedia Soc. Behav. Sci. 5, 1267–1272. doi: 10.1016/j.sbspro.2010.07.273

Böttcher, F., and Thiel, F. (2018). Evaluating research-oriented teaching: a new instrument to assess university students’ research competences. High. Educ. 75, 91–110. doi: 10.1007/s10734-017-0128-y

Böttcher-Oschmann, F., Groß Ophoff, J., and Thiel, F. (2021). Preparing teacher training students for evidence-based practice promoting students’ research competencies in research-learning projects. Front. Educ. 6:642107. doi: 10.3389/feduc.2021.642107

Chen, F. F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Struct. Equ. Modeling 14, 464–504. doi: 10.1080/10705510701301834

Davidov, E., Dülmer, H., Schlüter, E., Schmidt, P., and Meuleman, B. (2012). Using a multilevel structural equation modeling approach to explain cross-cultural measurement noninvariance. J. Cross Cult. Psychol. 43, 558–575. doi: 10.1177/0022022112438397

De Jong, T. (ed.) (2021). Graduating around the globe: Protocols, principles, and traditions for PhD graduations - examples from the learning sciences. Available online at: https://issuu.com/utwente/docs/graduating_around_the_globe (accessed December 8, 2021).

Deci, E. L., and Ryan, R. M. (1985). Intrinsic Motivation and Self-Determination in Human Behavior. New York: Plenum Press.

Deci, E. L., and Ryan, R. M. (2000). The ‘what’ and ‘why’ of goal pursuits: human needs and self-determination of behavior. Psychol. Inq. 11, 227–268. doi: 10.1207/S15327965PLI1104_01

Dozier, S. L. (2001). On Becoming a Therapist: An Examination of Psychology Doctoral Student’s Satisfaction With Training, Affective State, Functioning and Perceived Clinical Competence. Ph.D. thesis. Chicago: The Chicago School of Professional Psychology.

Fairman, J. A., Giordano, N. A., McCauley, K., and Villarruel, A. (2021). Invitational summit: re-envisioning research focused PHD programs of the future. J. Prof. Nurs. 37, 221–227. doi: 10.1016/j.profnurs.2020.09.004

Gagné, M., and Deci, E. L. (2005). Self-determination theory and work motivation. J. Organ. Behav. 26, 331–362. doi: 10.1002/job.322

Garson, G. D. (2015). Structural Equation Modeling. Sunderland: Statistical Associates Publishers.

Graesser, E. J. (2014). Serving Clients with Intellectual Disabilities: Clinical Psychology Training in APA-Accredited Doctoral Programs. Ph.D. thesis. Keene, NH: Antioch University New England.

Hambrick, R. (1997). The identity, purpose, and future of doctoral education. J. Public Adm. Educ. 3, 133–148. doi: 10.1080/10877789.1997.12023423

Hu, L., and Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct. Equ. Modeling 6, 1–55. doi: 10.1080/10705519909540118

Ingram, P. B., Schmidt, A. T., Bergquist, B. K., and Currin, J. M. (2020). Coursework, instrument exposure, and perceived competence in psychological assessment: a national survey of practices and beliefs of health service psychology trainees. Train. Educ. Prof. Psychol. 16, 10–19. doi: 10.1037/tep0000348

Jackson, D. (2013). Completing a PhD by publication: a review of Australian policy and implications for practice. High. Educ. Res. Dev. 32, 355–368. doi: 10.1080/07294360.2012.692666

Jung, J. (2018). Learning experience and perceived competencies of doctoral students in Hong Kong. Asia Pac. Educ. Rev. 19, 187–198. doi: 10.1007/s12564-018-9530-0

Kline, R. B. (2016). Principles and Practice of Structural Equation Modeling, 4th Edn. New York: The Guilford Press.

Lambie, G. W., Hayes, B. G., Griffith, C., Limberg, D., and Mullen, P. R. (2014). An exploratory investigation of the research self-efficacy, interest in research, and research knowledge of Ph.D. in education students. Innov. High. Educ. 39, 139–153. doi: 10.1007/s10755-013-9264-1

Lambie, G. W., and Vaccaro, N. (2011). Doctoral counselor education students’ levels of research self-efficacy, perceptions of the research training environment, and interest in research. Couns. Educ. Superv. 50, 243–258. doi: 10.1002/j.1556-6978.2011.tb00122.x

Latorre, C. (2020). The Importance of Mindfulness and Self-Compassion in Clinical Training: Outcomes Related to Self-Assessed Competency and Self-Efficacy in Psychologists-in-Training. Ph.D. thesis. Morgantown: West Virginia University.

Losier, G. A., and Vallerand, R. J. (1994). The temporal relationship between perceived competence and self-determined motivation. J. Soc. Psychol. 134, 793–801. doi: 10.1080/00224545.1994.9923014

Losier, G. F., and Vallerand, R. J. (1995). The development and validity of an interpersonal relations scale in sport. Int. J. Sport Psychol. 26, 307–326.

MacLeod, W. B., and Urquiola, M. (2021). Why does the United States have the best research universities? Incentives, resources, and virtuous circles. J. Econ. Perspect. 35, 185–206. doi: 10.1257/jep.35.1.185

Marsh, H. W., Hau, K. T., and Wen, Z. (2004). In search of golden rules: comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler’s (1999) findings. Struct. Equ. Modeling 11, 320–341. doi: 10.1207/s15328007sem1103_2

Olehnovica, E., Bolgzda, I., and Kravale-Paulina, M. (2015). Individual potential of doctoral students: structure of research competences and self-assessment. Procedia Soc. Behav. Sci. 174, 3557–3564. doi: 10.1016/j.sbspro.2015.01.1072

Petko, J., Sivo, S., and Lambie, G. (2020). The research self-efficacy, interest in research, and research mentoring experiences of doctoral students in counselor education. J. Couns. Preparation Superv. 13, 1–27.

Phillips, J. C., and Russell, R. K. (1994). Research self-efficacy, the research training environment, and research productivity among graduate students in counseling psychology. Couns. Psychol. 22, 628–641. doi: 10.1177/0011000094224008

Pinto, M., Fernández-Ramos, A., Sánchez, G., and Meneses, G. (2013). Information competence of doctoral students in information science in Spain and Latin America: a self-assessment. J. Acad. Librariansh. 39, 144–154. doi: 10.1016/j.acalib.2012.08.006

Poh, R., and Kanesan Abdullah, A. G. B. (2019). Factors influencing students’ research self-efficacy: a case study of university students in Malaysia. Eurasian J. Educ. Res. 82, 137–168. doi: 10.14689/ejer.2019.82.8

Pole, C. (2000). Technicians and scholars in pursuit of the PhD: some reflections on doctoral study. Res. Pap. Educ. 15, 95–111. doi: 10.1080/026715200362961

Sass, D. A., and Schmitt, T. A. (2013). “Testing measurement and structural invariance: Implications for practice,” in Handbook of Quantitative Methods for Educational Research, ed. T. Teo (Rotterdam: SensePublishers), 315–345. doi: 10.1007/978-94-6209-404-8_15

Schermelleh-Engel, K., Moosbrugger, H., and Müller, H. (2003). Evaluating the fit of structural equation models: tests of significance and descriptive goodness-of-fit measures. Methods Psychol. Res. Online 8, 23–74.

Sheldon, K. M., and Elliot, A. J. (1999). Goal striving, need satisfaction, and longitudinal well-being: the self-concordance model. J. Pers. Soc. Psychol. 76, 482–497. doi: 10.1037/0022-3514.76.3.482

Thiel, F., and Böttcher, F. (2014). “Modellierung fächerübergreifender Forschungskompetenzen. Das RMKR-W-Modell als Grundlage der Planung und Evaluation von Formaten forschungsorientierter Lehre,” in Neues Handbuch Hochschullehre. Lehren und Lernen effizient gestalten. [Teil] I. Evaluation. Fachbereichs- /Studiengangsevaluation, eds B. Berendt, A. Fleischmann, J. Wildt, N. Schaper, and B. Szczyrba (Berlin: Raabe), 109–124.

Trotter, J. (2003). Researching, studying or jumping through hoops? Reflections on a PhD. Soc. Work Educ. 22, 59–70. doi: 10.1080/02615470309132

Vansteenkiste, M., Simons, J., Soenens, B., and Lens, W. (2004). How to become a persevering exerciser? Providing a clear, future intrinsic goal in an autonomy-supportive way. J. Sport Exerc. Psychol. 26, 232–249. doi: 10.1123/jsep.26.2.232

Wilson, P. M., Longley, K., Muon, S., Rogers, W. T., Rodgers, W. M., and Wild, T. C. (2006). The psychological need satisfaction in exercise scale. J. Sport Exerc. Psychol. 28, 231–251. doi: 10.1123/jsep.28.3.231

Keywords: research competence, doctoral programs, measurement, confirmatory factor analysis (CFA), validity, reliability, perceived competence

Citation: Marrs SA, Quesada-Pallarès C, Nicolai KD, Severson-Irby EA and Martínez-Fernández JR (2022) Measuring Perceived Research Competence of Junior Researchers. Front. Psychol. 13:834843. doi: 10.3389/fpsyg.2022.834843

Received: 13 December 2021; Accepted: 02 March 2022;
Published: 19 April 2022.

Edited by:

Tom Rosman, Leibniz Center for Psychological Information and Documentation (ZPID), Germany

Reviewed by:

Peter Adriaan Edelsbrunner, ETH Zürich, Switzerland
Hassan Mohebbi, European Knowledge Development Institute (EUROKD), Turkey
Jana Groß Ophoff, Pädagogische Hochschule Vorarlberg, Austria

Copyright © 2022 Marrs, Quesada-Pallarès, Nicolai, Severson-Irby and Martínez-Fernández. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Sarah A. Marrs, marrssa@vcu.edu
