Perspectives of Paradigms – A Personal Exploration of Mixed Methods
Donna C. Tonini
Ed.D. Candidate, International Educational Development
Teachers College, Columbia University
November 25, 2009
Within the field of comparative and international education, there has long been ardent debate regarding quantitative and qualitative research paradigms, with contentions arising from their supposed dichotomous relationship (Onwuegbuzie, 2002). Yet the number of scholars who eschew purely positivist or interpretivist approaches in favor of a mixed methodology is growing (Onwuegbuzie, 2002; Lund, 2005). Greene, Caracelli, and Graham argue that the use of both quantitative and qualitative methods allows researchers to triangulate their data as well as expand the breadth and range of their inquiry (as cited in Johnson & Christensen, 2004).
I too firmly believe that there is a place for both quantitative and qualitative methods in research. In my field of educational finance, most studies utilize some aspect of quantitative research, as statistics and regressions inform cost-benefit analyses and resource optimization strategies. An epistemological combination of authoritative expertise drawn from literature reviews and empirical research based on experiential data is employed more often than qualitative approaches. However, one particular experience in a third-grade classroom in Harlem during my tenure as a reading tutor awakened me to the need for more qualitative research in the field. I had been working with six children designated for special reading services because their most recent test scores indicated that they were reading below the third-grade level.
After a couple of months, I noticed marked improvement in most of my students. Toward the end of the term, however, the teacher received the composite score of her students’ “E-Class” exams, which, to her dismay, was a dismal 29%. I was shocked as well. I knew that had I been in an administrative position, evaluating nothing but that test score without any personal connection to the classroom, I might have judged the children to be incapable and the teacher to be incompetent. Having been in the classroom personally, however, I knew that neither assumption was true; yet this conclusion was drawn on an episteme of intuition alone. I could not help but wonder how many times teachers and students were judged by the numbers on the page rather than the content of their individual stories.
After that experience, I resolved to consider research methodologies that involved not only quantitative research but qualitative study as well. That approach served me well in the field when I conducted my dissertation research on policies affecting secondary enrollment growth in Tanzania in 2008. I had designed my study not only to collect the cost data associated with two different education strategies employed by the government of Tanzania, but also to interview the people most affected by these policies – school heads, teachers, parents, and students – to gain insight into how the policies were being implemented and understood on the ground. My use of quantitative and qualitative data proved invaluable, both to me and to my research participants. Regarding the quantitative analysis, part of the data collection process involved working with groups of Tanzanian parents to identify the individual contributions and fees required to receive public secondary education. In working with these groups, however, it became apparent that because most parents did not track these expenses, they did not have a true understanding of the full cost of secondary education that they bore. In fact, after completing one such costing exercise, one parent exclaimed that she had no idea that secondary education in Tanzania cost as much as the numbers before her revealed. The parents learned as much as I did that day.

The qualitative portion of my study also yielded information critical to interpreting the logistic regression models I had built to understand the impact of family characteristics on parents’ secondary schooling decisions in Tanzania. My models did not have strong explanatory power, despite containing many of the independent variables used by other studies in educational finance.
It was not until I had analyzed the qualitative data – interviews with scores of parents conducted across Tanzania – that I discovered a trend pointing to an extraneous variable that fueled the implementation of the policies at the focus of my research. A presidential statement strongly urging parents to send their children to secondary school had come to be interpreted as law, playing a major role in compelling parents’ compliance (Tonini, 2010). This variable, absent from the literature on educational finance, was not formally acknowledged in any policy or official Tanzanian document. Had it not been for the voices of these parents shedding light on the societal and political forces at work behind the policies, my study would have told a much different story and would have failed to reflect the situation actually unfolding on the ground.
These personal experiences convinced me that research is stronger when different paradigms are entertained. Returning to the example of my students in Harlem: to understand the true meaning of that test score, I would have benefited from observing each child as they took the exam and interviewing them during the process to understand how they interpreted the questions and formulated their answers. By gaining insight into their thought processes, I would have better comprehended the disconnect between my personal evaluation of my students’ abilities and the numerical assessment represented by the test score. Relying on the test scores alone might have led me merely to conclude that the children were unintelligent – a false conclusion and invalid research that would only reinforce a harmful, unwarranted, yet prevalent stereotype of inner-city schools. In other words, to understand why capable children mustered a miserable collective score of 29%, I had to consider how each individual understood the test before evaluating whether that test was an appropriate measure of student knowledge. Only by employing both paradigms – comparing the original quantitative assessment with a qualitative study revealing the true nature of the children’s difficulties with the exam itself – would I ever be able to convince a school board that the children were not necessarily failing the test, but the test was failing the children.
Johnson, B. & Christensen, L. B. (2004). Educational research: Quantitative, qualitative, and mixed approaches. Boston, MA: Allyn and Bacon.
Lund, T. (2005). The qualitative-quantitative distinction: Some comments. Scandinavian Journal of Educational Research, 49 (2), 115-132.
Onwuegbuzie, A. (2002). Why can’t we all get along? Towards a framework for unifying research paradigms. Education, 122 (3), 518-530.
Tonini, D. C. (2010). The wide divide between policies and impacts: Examining government influence over parental educational choice in Tanzania. Unpublished doctoral dissertation, Teachers College, Columbia University.