Research methods in psychology
Introduction to psychology
Loraine Townsend & Cheryl de la Rey
After studying this chapter you should be able to:
•describe what steps to take when planning a research project
•explain the methods employed in conducting research
•explain the ways in which information is gathered from research participants
•show a basic understanding of data analysis
•describe the ways in which research findings are reported
•understand the difference between quantitative and qualitative approaches to research
•demonstrate an understanding of ethical principles in psychology.
Nosipho liked the idea of research because it meant finding out something new, perhaps an answer to an important question or information that would help to improve people’s lives in some way. It seemed to her a bit like being a detective. It was particularly interesting to think about discovering new things in a field like psychology when there was still so much mystery about why people felt, thought and behaved the way they did. In some way, she supposed, she had always been a kind of researcher. She had always observed the people around her very closely and, when the opportunity arose, asked them questions about their lives and how they saw the world. However, as Nosipho learned more about research at university, she realised that there were all sorts of useful ways of getting more accurate information and a deeper understanding of how things worked.
Consult any thesaurus and you will find that synonyms for research include: investigate, study, explore, look into and examine. Psychologists do all these things in their quest to understand human behaviour and social phenomena, and to generate new knowledge about these. In terms of professional practice, there is also an increasing emphasis on (and demand for) evidence-based practice.
In the past, most psychological research was concentrated on observable, measurable behaviour and phenomena. To study these, psychologists used quantitative research methods, which, broadly speaking, produce numeric data for statistical analysis. Then some psychologists began to use qualitative research methods (which, broadly speaking, produce word-based data in order to understand phenomena from the perspective of the research participants). This led to several years of polarised debate about the relative merits of quantitative and qualitative research. However, most psychologists now accept that both approaches are equally legitimate. In many instances, quantitative and qualitative approaches provide complementary insights into human behaviour and social phenomena. This chapter subscribes to this view and, as we explore each of the steps in the research process, both approaches will be described where relevant.
Figure 2.1 provides a useful depiction of the typical steps in the research process. These apply to both quantitative and qualitative research. Each of the steps is dependent on the others. Obviously, one would not analyse data (step 4) before having collected it (step 3). However, variations in this cycle will occur depending on the nature of the behaviour or phenomenon that is being researched. Depending on the methodology employed, greater emphasis may be placed on some of the steps while others might be ignored. We will now consider each step in turn.
Figure 2.1 The research cycle
Step 1: Planning
Selecting a research topic
Selecting a research topic is obviously the first step in the research process. Bless, Higson-Smith and Kagee (2006) suggest a number of sources of research topics:
•observation of everyday life
•prior research — this often prompts further studies, particularly if there have been vague, unclear or contradictory results, or if there have been concerns about the methods that were used; repeating a study (called replication) with a different group of people or in a different context is often very useful
•verifying or refuting a theory
•specific concerns of a certain group of people
•personal interest in a topic.
Each one of these sources could lead to a research topic, but ideally researchers would be motivated by a combination of them.
Reviewing existing literature
Once a topic area has been chosen, a researcher should consult previous work done in the field in order to find out more information on this topic. The researcher may need to understand the relationship between existing facts, or he/she may need to understand people’s first-hand experience of the problem. The researcher would need to identify, find and review recently published research that is relevant to the topic area. There are many purposes of a literature review:
•It orients a researcher to what other researchers have found. The researcher may discover, for instance, that the topic has been extensively researched, both locally and internationally, and that another study would not add anything new to the body of knowledge about the topic.
•If the literature review does reveal gaps or weaknesses, the researcher can direct his/her research in these directions.
•Knowledge of what research methods have been used in the past (successfully or unsuccessfully) will help the researcher choose an appropriate method for his/her own study.
•An examination of the theoretical frameworks that have guided previous research will help a researcher to choose an appropriate theoretical framework for his/her own research.
The most common sources of relevant information are academic journals, books, colleagues and the internet. All academic institutions stock hard copies of a wide variety of academic journals and subscribe to selected electronic journals and databases. This enables students and staff to search for sources relating to specific topics using keywords and to download articles from electronic sources. A number of journals are now making their articles freely available online, while others allow the online purchase of single articles. Books written or edited by experts in particular fields are another valuable source of information. Searching a library’s online catalogue by keyword, author and/or title will reveal relevant literature. In addition to this, colleagues are an invaluable source of information. Valuable intellectual and practical information can be gained by talking to people who have researched, or simply have an interest in, a particular area. Finally, the internet is increasingly used as a source of information. Typing relevant keywords into search engines can produce an overwhelming amount of information. The trick here is to be able to distinguish authoritative sites from those that make questionable claims to knowledge.
Formulating the research problem and posing the research questions
After researchers have reviewed the relevant literature and decided that a topic is indeed worthy of investigation, the next step is to formulate a general research problem and to reduce it to one or more specific research questions (see Box 2.1).
Once the researchers have phrased one or more research questions, they must carefully consider whether these are answerable. They must think about whether there will be intellectual, practical, personal or ethical problems that could prevent them from answering these questions. For example, quantitative researchers would need to determine whether the research question can be answered through the observation of measurable facts. Qualitative researchers would need to consider whether research participants would be willing to share their experiences with the researcher. All researchers would need to consider practical questions such as: ‘Is there enough time to conduct the study?’, ‘Are there enough resources available?’, and ‘Are there enough or too many potential participants?’ Also, researchers should examine their own motivation for undertaking the research. If researchers have a genuine interest in a topic, their research on this topic is likely to be of better quality than if this interest is lacking. Finally, any ethical issues need to be carefully considered. An example of an ethical problem in research can be found in Chapter 1, where Stanley Milgram’s classic study of obedience is described.
Planning a research study takes several steps:
•A workable and useful topic must be selected.
•The existing literature should be studied to identify what gaps in knowledge exist.
•The general problem to be addressed by the research must be formulated and specific research questions developed.
Step 2: Research methods
Researchers need to be clear about the type of research study they are undertaking. The purpose of their research study will influence their choice of research design.
Types of research studies
Researchers can undertake exploratory studies, descriptive studies or explanatory studies. An exploratory study is when researchers investigate a topic on which there is very little existing information. As this name suggests, the researchers will explore the topic of interest and provide tentative explanations from which new research questions and hypotheses will arise. If researchers wish to simply describe a behaviour or a phenomenon, they conduct a descriptive study. If researchers wish to explain the relationship between two or more variables, they conduct an explanatory study.
Types of research design
Durrheim (2006) suggests that a research design may vary along a continuum from inflexible blueprints at the one extreme to pragmatic guides for action at the other.
Quantitative researchers who believe that the variables of interest can be identified, observed and measured will develop a research design that is inflexible, specified in advance and technical in nature. They will state in detail how they will conduct the study and will seldom deviate from this plan.
2.1 AN EXAMPLE OF HOW TO FORMULATE A RESEARCH PROBLEM
Imagine that, as a postgraduate student in psychology, you have to conduct a research project as part of your course work. You do not want to conduct research merely to be able to pass the course at the end of the year. You feel strongly that your study should be about something in which you have an interest, and that will also have some practical value.
You read a report in the local newspaper about the escalating incidence of date rape among female adolescents in the area where you live. The report quotes findings from two studies that appear to be contradictory: one found no change in the incidence of date rape among young women and the other found a dramatic increase. The latter study questioned the methods used in the former study and therefore disputed the findings. The report went on to describe one of the theories put forward by researchers as to why there was a rise in date rape among young women. These researchers suggested that recent empowerment of young women had led to men feeling disempowered, and that forcing young women to have sex could be a way in which men were seeking to re-empower themselves. The report concluded with an appeal by local community leaders for the community to take action to prevent date rape among young women.
You have had a personal interest in sexual abuse since one of your friends was raped by her date some years previously. Emotionally she has never recovered from her experience; she dropped out of school before completing Grade 12 and she cut all ties with her old friends.
On the basis of your personal interest, your reading of the newspaper report and your awareness of the significant threat that date rape poses to female members of your community, you realise that this is a topic that you would like to research. You go to the university library and also search online to source a variety of information about sexual abuse among young people. After reviewing the literature, and after thinking about potential intellectual and practical problems, you formulate the research problem and the specific research questions that will guide your research, which are shown below.
Research problem: What are the factors that contribute to date rape among young women in Gardenia*, Cape Town?
1.What is the incidence of date rape in Gardenia, Cape Town?
2.Does a young woman’s empowerment increase her risk of date rape?
3.What are the consequences of date rape?
* A fictitious name
On the other hand, qualitative researchers are likely to develop a more flexible research design: one that is not specified in advance, and which is open to change during the course of the study. Indeed, some qualitative researchers suggest that one should not begin with a predetermined research design; instead, research designs should only develop during the research process.
However, most research falls somewhere along the continuum rather than at an extreme. For example, a quantitative researcher may discover that a certain measure (e.g. a personality test) is not available and will need to amend his/her design accordingly, while a qualitative researcher may specify in advance which type of data collection method will be used.
Whether conceived by a quantitative or qualitative researcher, a research design is a specification of the most appropriate actions to be performed in order to answer the research questions successfully and/or test the hypotheses. When writing their research design, researchers should state what variables they will use, and what type of conclusions they want to make based on their analysis. They also need to state the time frame in which they will collect their data, and from whom or what they will collect this information.
Using variables in the design of a quantitative research study
A variable is a property of an individual that could differ from one person to the next. For example, if a researcher’s sample consisted of both men and women, gender would be a variable. However, if a researcher’s sample consisted only of young women, gender would not be a variable, and would instead be referred to as a constant. Constants are aspects that (within the sample under study) do not vary from one person to the next. For example, if a researcher is studying first-year students, academic year would be a constant. Common variables in psychology research include age, IQ, level of education and academic results.
Much research in psychology is interested in understanding the relationship between two or more variables. We refer to independent variables and dependent variables. An independent variable (IV) is a variable that has values which exert an influence on another variable (the dependent variable). A dependent variable (DV) is a variable that has values which change as a result of the influence of another variable (the independent variable). For example, level of family income (IV) may exert an influence on where the family lives (DV).
We can also draw a distinction between categorical and continuous variables. Categorical variables can be divided into distinct categories (e.g. class at school; graduates or non-graduates), while continuous variables can exist at any point within a range or on a continuum (e.g. time, height and weight). However, how we decide whether a variable is categorical or continuous is open to some debate. For example, time could be rounded off to the nearest minute, in which case it would become a categorical variable (consisting of a set of consecutive minutes).
When variation in an independent variable leads directly to variation in a dependent variable, we say that there is a causal relationship between these variables. However, sometimes two variables vary in a related way, but we cannot determine whether a causal relationship exists between them or whether they are both influenced by a third variable. In such a case, we say that there is a correlation between these two variables. If two variables vary in the same direction, we say that there is a positive correlation between them; if two variables vary in opposite directions, we say that there is a negative correlation.
When using variables in our research designs, we should clearly describe the characteristics of each variable because a variable may mean different things to different people. For example, consider the variable of empowerment. In defining this variable, the researcher should describe and justify each of the elements which make up empowerment. To assist with this, the researcher would attempt to define the variable operationally. An operational definition of a variable includes all of the ‘specific procedures used to produce or measure it’ (Holt et al., 2012, p. 39). For example, we talked above about ‘academic results’ as a variable, but what exactly do we mean by this? Before we begin our study, we would have to define this operationally. It could be final exam results, or all results during a semester, or only results obtained under controlled conditions (tests and exams). Measurement may be especially difficult in psychological research as many aspects can only be measured indirectly.
In order to construct an operational definition of a variable, it is often useful to draw a diagram similar to the one shown below. Each of the elements of empowerment (self-efficacy, self-esteem, self-actualisation, equity, self-concept and free will) should be described in the operational definition as well as the theoretical discussion justifying their inclusion. Because empowerment is described as being made up of (constructed from) these elements, it is known as a construct.
Figure 2.2 Elements of empowerment informing an operational definition of the construct
Units of analysis
The people or objects from whom a researcher wants to collect information are known as the units of analysis. They may be individuals, groups of people, organisations, periods of time and/or social artefacts. The units of analysis will influence who or what a researcher selects to participate in his/her study (the sample selection), how information will be collected from participants (the data collection), and what conclusions can be drawn from the results of the study (the interpretation). It is therefore important for researchers to be clear about what constitutes the units of analysis in their study, which will in turn impact on the type of research design they will develop.
A hypothesis is a specific statement related to the research question. It is based on tentative explanations for the research question. A researcher can propose a hypothesis as part of a research design. This hypothesis would then need to be tested to assess whether it can be accepted or must be rejected.
2.2 AN EXAMPLE OF USING VARIABLES IN A RESEARCH DESIGN
Imagine that you were trying to answer the following three questions (which were formulated in Box 2.1):
1.What is the incidence of date rape in Gardenia, Cape Town?
2.Does a young woman’s empowerment increase her risk of date rape?
3.What are the consequences of date rape?
To answer the first question, it would be necessary to establish how many young women in your sample have experienced date rape. The variable of having experienced date rape or not can be given the numerical values 1 (having experienced date rape) and 2 (not having experienced date rape). You could answer this question by counting the 1 answers and the 2 answers and extrapolating from your sample.
To answer the second question, you would look at whether the likelihood of a woman having experienced date rape is influenced by the extent to which that woman feels empowered. In this case you are proposing that having been date raped or not is the dependent variable, as you are trying to see if it is influenced by another variable.
You are also proposing that the extent to which a young woman feels empowered is the independent variable. This sense of empowerment may be quantified according to a scale that gives each woman a score for empowerment, where a low score (1 or 2) indicates little empowerment, and a high score (5 or 6) indicates a strong sense of empowerment.
You are working with the hypothesis that the more empowered a young woman is (indicated by higher scores on the empowerment scale), the more likely this woman is to have experienced date rape. If you are able to exclude all other possible causes of date rape, you may then be able to show that a causal relationship exists between a young woman’s empowerment and her likelihood of having experienced date rape. If you cannot say with certainty that young women’s empowerment causes date rape, the two variables may still be shown to be correlated.
(In the above example the nature of causality has been simplified. In reality, before a researcher can say with certainty that a causal relationship exists between two variables, sophisticated methodological and analytical procedures need to be followed, but these are beyond the scope of this chapter.)
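The counting-and-coding step described for the first question in Box 2.2 can be sketched in a few lines of Python. The coded responses below are invented for illustration; a real study would, of course, use the coded data collected from the sample.

```python
# Hypothetical coded responses from ten participants:
# 1 = has experienced date rape, 2 = has not
responses = [1, 2, 2, 1, 2, 2, 2, 1, 2, 2]

n_experienced = responses.count(1)           # count the 1 answers
incidence = n_experienced / len(responses)   # proportion in the sample
print(f"Sample incidence: {incidence:.0%}")
```

Extrapolating this sample incidence to the population is only justified if the sample is representative, a point taken up later in the chapter.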
If a hypothesis were to be accepted, then the researcher could consider the relevant research question as having been satisfactorily answered. However, if a hypothesis were to be rejected, then that particular research question would not have been satisfactorily answered, and the researcher would need to reformulate the question and test a new hypothesis. In this way, hypotheses help to direct research.
An important part of this process is establishing the null hypothesis. This is the hypothesis that the researcher is trying to disprove (or nullify), and it is the opposite of what the researcher is actually hypothesising (the experimental hypothesis). For example, the researcher might state: ‘More than an hour a week of strenuous exercise improves mood’. In this case, the null hypothesis would be: ‘More than an hour of strenuous exercise per week does not improve mood’ (Davey, 2004). The null hypothesis is important because we cannot always prove our original hypothesis directly. If we can reject the null hypothesis with confidence, this gives support for accepting the experimental hypothesis.
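The logic of rejecting a null hypothesis can be made concrete with a small simulation. The sketch below uses a permutation test on invented mood scores for the exercise example; both the data and the choice of test are illustrative assumptions, not the chapter's own method. If the null hypothesis were true, the group labels would be interchangeable, so we check how often random relabelling produces a difference as large as the one observed.

```python
import random
from statistics import mean

# Hypothetical mood scores (higher = better mood)
exercisers     = [7, 8, 6, 9, 7, 8]   # > 1 hour strenuous exercise per week
non_exercisers = [5, 6, 4, 5, 6, 5]

observed_diff = mean(exercisers) - mean(non_exercisers)

# Permutation test: under the null hypothesis (exercise does not improve
# mood), shuffling the labels should produce differences this large often.
random.seed(0)
pooled = exercisers + non_exercisers
count_extreme = 0
n_perms = 10_000
for _ in range(n_perms):
    random.shuffle(pooled)
    diff = mean(pooled[:6]) - mean(pooled[6:])
    if diff >= observed_diff:
        count_extreme += 1

p_value = count_extreme / n_perms
print(p_value)  # a small p-value lets us reject the null hypothesis
```

With these invented scores the shuffled differences almost never reach the observed difference, so the null hypothesis would be rejected and the experimental hypothesis supported.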
Consider the second question in Box 2.1. Just one hypothesis generated from this question could be: In Gardenia, young women who have experienced date rape have higher self-esteem than those young women who have not experienced date rape.
If a researcher then found that young women who had experienced date rape scored considerably higher on some measure of self-esteem than those who had not experienced date rape, the researcher would claim that the hypothesis had been accepted. Conversely, if the researcher found that young women who had experienced date rape scored considerably lower on some measure of self-esteem than those who had not experienced date rape, or if there were no difference between the scores of these two groups of women, then the researcher would state that the hypothesis had been rejected.
Questioning the relevance of variables, definitions and formal hypotheses
The use of variables and formal hypotheses is characteristic of quantitative research. However, the main aim of most qualitative studies is to explain the ways that people come to understand and account for issues, events and behaviours in their lives. The information gathered covers the perceptions and interpretations of the participants. Because qualitative (or interpretive) research comes from the belief that human experiences and characteristics cannot be reduced to numbers, qualitative researchers would not be interested in giving numerical values to variables. Nor would they be interested in determining causal or correlational relationships between variables. Rather, they would focus on individuals’ subjective experiences of the phenomenon that forms the basis of the study. To this end, operational definitions have little relevance in qualitative research; researchers would be more interested in the explanations that people give of themselves and their lives (Grbich, 2007). Similarly, qualitative researchers do not conduct research as a means to accept or reject formal hypotheses. This is because qualitative research is not so much about establishing facts as about exploring what certain so-called facts mean to the individual research participants, and how these participants experience them.
Let us look again at the third question in Box 2.1: ‘What are the consequences of date rape?’ A qualitative researcher would try to answer this question by talking to survivors of date rape and asking how the experience affected their functioning. The researcher would explore what the young women themselves have to say about the effects of their experience. Indeed, a qualitative researcher may not even ask the participants directly about self-esteem, but would be more interested to see whether they speak of self-esteem issues themselves, or whether other issues are more significant to them. The interpretive approach obviously has little use for operationally defined and measured variables, or for formally stated hypotheses that need testing.
If all the data are collected from the units of analysis at one particular point in time, the research design is for a cross-sectional study. Box 2.2 refers to a single instance of collecting data on young women’s sense of empowerment.
If the data will be collected from the units of analysis over a period of time (e.g. every six months, or every year for three years), the research design is for a longitudinal study. For an example of a longitudinal study, see Box 3.10. This refers to Birth to Twenty (BT20), the largest and longest-running research project in Africa, which has studied the health and development of 3 273 infants since their birth in 1990. One of the problems with longitudinal research is the loss of participants over time. BT20 has been quite successful in this regard, consistently collecting data from 50–65 per cent of participants.
A research design should clearly indicate what units of analysis will be accessed to provide accurate information that is relevant to the research topic. Suppose a quantitative researcher wishes to collect information from women who have experienced date rape. If that researcher could collect information from every single woman who had ever experienced date rape (the entire potential population of relevance to the study), he/she would be assured of the accuracy of the information gained (provided the participants were truthful), and the accuracy of the conclusions drawn from the information. However, accurate information can also be gained from a smaller group (a subset) of all the women who have experienced date rape; this subset is known as a sample. If that sample of women is representative of the population, the researcher can draw accurate conclusions from it. Then, the researcher can generalise these conclusions to apply to the entire group of women who have experienced date rape. Sampling theory allows researchers to select representative samples (see below) from populations of interest.
In order to make claims about a population on the basis of a sample from that population, researchers need to select their sample so that the people in it are representative of the population. Such a group is known as a representative sample: a group of people, selected by a researcher from a population, who are in all ways similar to that population. As long as the researcher has ensured that the sample is representative, he/she is able to generalise the results from the sample to the population as a whole. The practice of generalising findings from a sample to a population is called statistical inference.
The sampling frame
The first step to ensuring a representative sample is to use a complete and accurate sampling frame. A sampling frame records all the units of analysis from which data could be gathered. In essence, it is a complete list of the population of interest. A sampling frame cannot exclude any unit of analysis, and every unit in the sampling frame must have the same or a specified probability of being selected into a sample.
There are two main types of sampling: probability sampling and non-probability sampling.
When every unit of analysis in a population has a known chance of being selected into a sample (in the simplest case, an equal chance), we are using probability sampling. Simple random sampling, interval sampling, stratified random sampling and multi-stage sampling are all types of probability sampling.
In simple random sampling, a common procedure is for the researcher to assign a number to every unit in the sampling frame, then to write the numbers on identical pieces of paper and place them in a container. After being thoroughly mixed, numbers are drawn from the container and the corresponding unit of analysis is then included in the sample. An example of this procedure would be a lucky draw following a raffle. Computers can also be used to generate lists of random numbers, or tables of random numbers can be consulted. Again, numbers are assigned to every unit of analysis in the sampling frame, and units corresponding to the generated or tabled numbers are selected for the sample.
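The computer-generated version of this procedure can be sketched in a few lines of Python. The sampling frame of 500 numbered units and the sample size of 20 are invented for illustration; `random.sample` draws units without replacement, so no unit can be selected twice.

```python
import random

# Hypothetical sampling frame: every unit of analysis, numbered 1 to 500
sampling_frame = list(range(1, 501))

random.seed(42)                             # for a reproducible illustration
sample = random.sample(sampling_frame, 20)  # draw 20 units at random
print(sorted(sample))
```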
Interval sampling occurs in a similar way to simple random sampling. Numbers are again assigned to each unit of analysis in the sampling frame. The researcher then selects numbers at equal intervals, starting at a randomly chosen number. The units of analysis corresponding to the numbers thus selected are drawn into the sample. For example, an educator may select every fifth learner to recite the poem the class had to learn.
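Interval sampling can be sketched in the same way. The frame of 500 numbered units and the interval of five are hypothetical; the starting point is chosen at random within the first interval, and every fifth unit after it is drawn into the sample.

```python
import random

sampling_frame = list(range(1, 501))  # hypothetical numbered units
k = 5                                 # sampling interval: every 5th unit

random.seed(7)
start = random.randrange(k)           # random start within the first interval
sample = sampling_frame[start::k]     # take every k-th unit from there
print(len(sample))                    # 100 units, whatever the start
```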
Simple random sampling and interval sampling are usually appropriate for medium-sized populations, but unmanageable where large populations are concerned. For large populations, stratified random sampling can be used instead. Stratified random sampling entails first dividing the population into different groups (strata) so that each unit of the population belongs to only one group. Then, simple random sampling or interval sampling is employed within each group. For example, a researcher may want to sample five learners from each class in a school. It would be unwieldy to list all of the learners in the school, so the sampling could be done class by class (the strata).
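The school example can be sketched as follows. The class lists are hypothetical; simple random sampling is applied within each class (stratum) in turn.

```python
import random

# Hypothetical school: four classes (the strata), each a list of learner codes
classes = {
    "8A": [f"8A-{i}" for i in range(30)],
    "8B": [f"8B-{i}" for i in range(28)],
    "8C": [f"8C-{i}" for i in range(32)],
    "8D": [f"8D-{i}" for i in range(25)],
}

random.seed(1)
sample = []
for learners in classes.values():
    # Simple random sampling within each stratum
    sample.extend(random.sample(learners, 5))

print(len(sample))  # 5 learners from each of 4 classes = 20
```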
The problem with all of the above sampling methods is that they rely on the availability of a complete list of every unit of analysis in the population of interest. But often this is impossible to obtain or create. In the absence of such a list, researchers can use multi-stage sampling (which is also called cluster sampling). In Box 2.3 this sampling method is applied to the date-rape study that we have already discussed.
When researchers are not concerned with the representativeness of a sample, they do not need to use probability sampling. Qualitative researchers often use non-probability sampling because they seldom attempt to generalise their findings to a population. Convenience sampling, purposive sampling and snowball sampling are all types of non-probability sampling.
In convenience sampling, a researcher will merely choose the closest people as participants. For example, a researcher may spend some time at a rape crisis centre and ask those people accessing the service to participate in his/her study.
In purposive sampling, researchers use their own judgement about which participants to choose, and they select only those who best meet the purposes of their studies. A researcher who spends time at a rape crisis centre will encounter many people of varying age and gender. However, if that researcher believes that a typical date-rape survivor is adolescent and female, that researcher will only ask young females to participate in his/her study.
This same researcher could instead decide to approach only one woman who comes to the rape crisis centre, and, if she agrees to participate in the study, could then ask her to recruit others whom she knows have had a similar experience. This is known as snowball sampling.
When non-probability sampling methods have been used, the findings should not be generalised to a population.
How many is enough?
For both quantitative and qualitative researchers, deciding how many people should be selected into their sample is an important consideration.
For quantitative researchers, the major criterion to use here is how representative the sample is of the population. The larger the sample, the greater the chance that it will be representative of the population from which it comes. This is particularly true if the population is heterogeneous (diverse). However, while large samples are more likely to be representative of the population, they may be impractical to work with and too expensive. Furthermore, having a large sample does not necessarily guarantee representativeness. A large sample that is based on an inadequate sampling frame or one that has not been selected using random sampling methods will not provide a representative sample. A small sample based on a complete sampling frame or a small sample that has been selected using random sampling methods would be more representative than a large sample based on a flawed methodology. To determine the actual size of the sample required, researchers can use predetermined tables or various statistical equations.
Qualitative researchers may begin data collection and then stop once they have reached a point of saturation. By this we mean that new information no longer challenges or adds to the interpretive account. Decisions about sample size will also take into account whether there is an existing body of well-developed theory about the research topic. If there is, it is likely that fairly specific research questions will be generated and a small sample size may be adequate. On the other hand, exploratory research that is not informed by an established body of theory will require a larger sample size. Furthermore, if a researcher expects to collect lengthy and detailed accounts from participants, only a few participants may be required. Conversely, if a researcher expects to collect relatively brief accounts, more participants may be required. As a general rule of thumb, Kelly (2006) suggests that when a homogeneous sample is used or when research protocols call for long sessions of information gathering, then between six and eight participants may be used. In explorative research or if collection methods are brief, then 10 to 20 participants should be used.
For qualitative researchers, issues around sample size may not be as central as the characteristics of the people who comprise their sample. For example, if the purpose of the study is for people to describe phenomena or behaviours, it may in fact be more valuable to select sample participants who are good communicators, who are open and not defensive, who have an interest in the research, and who are likely to understand the value of participating.
Clearly, for both qualitative and quantitative researchers, there is no definitive answer to the question: ’How big is big enough?’ Deciding on a sample size will depend on the level of accuracy a researcher requires, the degree of diversity within the population of interest, and the amount and quality of data gathered. It will also depend on the researcher’s resources as large studies tend to be expensive.
2.3 SAMPLING FOR A REPRESENTATIVE SAMPLE
For practical reasons, you have refined your first research question to be: ’How many Grade 12 female learners in Cape Town have experienced date rape?’ The population from which you will now draw your sample is Grade 12 female learners in Cape Town. Because you want your sample to be as representative of the population as possible, you decide the best sampling strategy to use is multi-stage sampling. You follow these steps:
Step 1: You obtain a list of all the school districts in Cape Town, and, using simple random sampling, you select five (a cluster).
Step 2: For each of the five school districts selected, you obtain a list of high schools. From the resultant list of 200 schools, you use simple random sampling to select 10 schools (a cluster).
Step 3: For school #1 you obtain a list of every female learner in Grade 12 and employ simple random sampling to select 20 learners (a cluster). You repeat this process for each of the other schools.
Thus, by following these steps (or stages), 200 female Grade 12 learners will comprise a representative sample.
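As a rough sketch, the three stages in this box can be expressed in Python. The sampling frame below is entirely made up (eight districts, each with 40 schools of 40 eligible learners), but the stage sizes match the box:

```python
import random

random.seed(2)  # for a reproducible illustration

# Entirely hypothetical sampling frame: 8 districts, each with 40 high
# schools, each with 40 female Grade 12 learners
districts = {
    f"District {d}": {
        f"School {d}.{s}": [f"Learner {d}.{s}.{i}" for i in range(1, 41)]
        for s in range(1, 41)
    }
    for d in range(1, 9)
}

# Stage 1: simple random sample of 5 districts
stage1 = random.sample(list(districts), 5)

# Stage 2: pool the schools in the selected districts (200 here)
# and take a simple random sample of 10
schools = {name: learners
           for d in stage1
           for name, learners in districts[d].items()}
stage2 = random.sample(list(schools), 10)

# Stage 3: simple random sample of 20 learners from each selected school
sample = [learner
          for name in stage2
          for learner in random.sample(schools[name], 20)]

print(len(sample))  # 200
```

Notice that a full list of every learner in the population is never needed: each stage only requires a list for the clusters already selected.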
In conducting a research study, the following aspects must be considered:
•What type of study will be conducted? Will it be exploratory, descriptive or explanatory?
•What research design will be most appropriate to answer the research questions? Should the research be quantitative or qualitative?
•What variables will need to be defined and operationalised? What independent and dependent variables will be used? Will a causal relationship between variables be established or merely a correlation?
•Who or what will the study collect data from? What will the study’s units of analysis be?
•What specific hypotheses will be proposed based on the research questions? In qualitative studies, hypotheses may not be appropriate.
•How long will the study run for? Will it be cross-sectional or longitudinal?
•How will the sample of units of analysis be chosen? Does the sample need to be representative and, if so, how will this be achieved?
Step 3: Data collection
By the time it comes to collecting data, researchers will have made some critical decisions about their research topics. They will have formulated the research questions and, where applicable, specified which variables will be measured, defined their concepts and generated hypotheses. They will have settled on an appropriate research design, and performed appropriate sampling in order to select those people from whom they wish to gather information. The next step in the research process involves actually gathering that information. This process raises issues around the quantification of variables, levels of measurement and the appropriate techniques for gathering information.
Levels of measurement
We have already described how researchers assign numbers to their variables of interest (see Box 2.2). When we quantify variables in this way, we can choose between four levels or scales of measurement: nominal scales, ordinal scales, interval scales and ratio scales. These often determine what types of analysis a researcher can carry out.
When we use a nominal scale, we simply classify variables into mutually exclusive categories and assign labels in the form of numbers to them. Some examples of nominal scales applied to certain variables are: male and female; young, middle-aged and old; happy and sad. Numbers are assigned to these variables merely as a matter of practicality. For example, 1 = male, 2 = female; 1 = young, 2 = middle-aged, 3 = old; 1 = happy, 2 = sad.
When we use an ordinal scale, we go one step further. We assign labels to variables in the form of numbers such that one variable can be placed in relation to another in terms of the amount of the relevant attribute that they possess. For example, we could assign numbers to examination marks so that they are categorised as follows: fail (0–39%) = 0; unsatisfactory (40–49%) = 1; satisfactory (50–59%) = 2; good (60–69%) = 3; excellent (70%+) = 4. The numbers assigned are not only mutually exclusive labels of examination attainment, but also indicate that a score of 1 is worse than a score of 2, or that a score of 4 is better than a score of 3.
Building on the idea of an ordinal scale, an interval scale assigns numbers in such a way that the size of the difference between any two numbers corresponds to the size of the difference in the attribute being measured. For example, the difference between an IQ score of 90 and 95 is the same as a difference between a score of 110 and 115. Most measures in the behavioural sciences are interval measures.
A ratio scale is similar to an interval scale except that a ratio scale has a true zero value. An example of a ratio level of measurement would be age. A person cannot be less than 0 years old, and a person who is 10 years old is twice as old as someone who is five years old.
IQ scores are not ratio levels of measurement because a person who scores 0 does not necessarily have no intelligence, and one cannot say that a person who scores 80 has half as much intelligence as someone who scores 160.
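One way to make these scale distinctions concrete is to code some of the chapter's own examples in Python. The band boundaries below follow the ordinal example above; the helper function is a sketch for illustration, not a standard routine:

```python
# Nominal: numbers are just labels; order and arithmetic are meaningless
sex_codes = {"male": 1, "female": 2}

# Ordinal: codes can be compared, but the intervals between them
# do not correspond to equal amounts of the attribute
def grade_band(mark):
    """Map a percentage mark to an ordinal band: 0 = fail ... 4 = excellent."""
    if mark < 40:
        return 0   # fail
    elif mark < 50:
        return 1   # unsatisfactory
    elif mark < 60:
        return 2   # satisfactory
    elif mark < 70:
        return 3   # good
    return 4       # excellent

marks = [34, 88, 34, 50, 78]
bands = [grade_band(m) for m in marks]
print(bands)                             # [0, 4, 0, 2, 4]
assert grade_band(88) > grade_band(50)   # ordering is meaningful

# Ratio: age has a true zero, so ratios are meaningful
assert 10 / 5 == 2   # a 10-year-old is twice as old as a 5-year-old
```

With interval data such as IQ, differences are meaningful (115 - 110 equals 95 - 90) but, as noted above, ratios are not.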
Methods for gathering data
There are a number of ways that researchers go about gathering information. The most common method of gathering information is by directly asking participants to answer questions or express their views. However, information can also be obtained by merely observing people.
Non-participant observation involves a researcher observing people without interacting with them. For example, a researcher may be interested in bullying behaviours among primary school children. To gain information about the incidence and nature of bullying among the children, he/she may choose to sit alongside school playing fields during break and observe children playing. A major drawback of this type of observation is that people may behave differently to the way they normally would because they know they are being watched.
To overcome this drawback, researchers may decide not to reveal their intentions to the people they wish to observe. They could do this by becoming part of the group and covertly observing their behaviour. For example, a researcher may be interested in drug use at local nightclubs. He/she could attend selected nightclubs and party until the early hours of the morning. However, unbeknown to the other people there, the researcher would also be observing their behaviour. In this instance a researcher would be conducting participant observation. This has obvious ethical implications.
Table 2.1 Example of a semi-structured interview (Pedroso, Kessler & Pechansky, 2013)
1. What was your life trajectory like before you first used crack? (Information sought: aspects related to birth, schooling, work, family, and friendships.)
2. At what age did you start using drugs before crack? (Information sought: age at the beginning of use of licit and illicit drugs.)
3. How many times were you hospitalised before? (Information sought: trajectory of previous hospitalisations.)
4. How many times did you try to be hospitalised? (Information sought: hospitalisation-seeking trajectory; number of attempts.)
5. In your opinion, which factors have contributed to your relapse after discharge or to abandonment of previous hospitalisations? (Information sought: risk factors related to crack use.)
6. Have you ever been involved with crime, prostitution or under arrest due to your crack addiction? (Information sought: legal aspects; crime or exchange of sex for crack.)
7. Has your physical health been affected due to the use of crack? (Information sought: physical aspects; clinical conditions.)
8. Do you believe that faith, religion and/or spirituality may help you to abandon crack? (Information sought: spiritual beliefs related to abstinence from crack use.)
Another type of observation would involve observing people from behind a one-way mirror in a laboratory-like setting. For example, a researcher may be interested in the interaction between teenage mothers and their infants. The researcher could then get such mothers to play with their infants in a room and observe their interaction from behind a one-way mirror.
In simple terms, interviewing involves the researcher asking questions, listening and analysing the responses. It is a process of gathering information for research using verbal interaction. The process typically begins with an approach being made to a potential participant. The researcher usually begins by giving some brief information about him-/herself, such as name and institutional affiliation, and by describing the purpose of the study. Before the interview begins, the participant’s consent must be obtained.
Interviews may differ in terms of their structure and this will depend on what type of information a researcher wishes to gather. In a structured interview, the interviewer follows a set list of questions in a certain sequence. Researchers who use this format probably require certain straightforward information. However, in an unstructured interview (an open-ended interview), the researcher merely tries to remain focused on an issue of study and uses few predetermined questions, if any. Typically, the interviewer will begin by asking a general or open-ended question on the issue of interest, but will allow the format and flow to be shaped by the interviewee. This type of interview is appropriate when the aim is to gain insight into the participant’s interpretation of his/her experiences. In a semi-structured interview, the researcher ensures that certain areas of questioning are covered, but there is no fixed sequence or format of questions. Semi-structured interviews are typically used for qualitative research and for focus groups.
A questionnaire is a text that asks specific questions in a specific order. Questionnaires also contain a relatively precise way in which answers can be given, for example: yes, no or don’t know; agree or disagree; always, sometimes or never. Questionnaires commonly use Likert-type scales (see Figure 2.3). In each item (question), the person completing the scale is given a number of choices (usually between three and 10). All of the items together comprise the scale. This is the format used most typically in personal inventories (see Chapter 6). Questionnaires are ideally suited for quantitative studies where, for example, ’yes’ can be coded as 1, ’no’ as 2, ’don’t know’ as 3, and so on.
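Coding questionnaire answers numerically, as described above, might look like the following sketch (the answers themselves are invented for illustration):

```python
# Coding scheme from the text: yes = 1, no = 2, don't know = 3
codes = {"yes": 1, "no": 2, "don't know": 3}

# Hypothetical answers from one participant to five yes/no items
answers = ["yes", "no", "yes", "don't know", "yes"]

coded = [codes[a] for a in answers]
print(coded)  # [1, 2, 1, 3, 1]
```

Once every answer has been converted to a number in this way, the full set of responses can be entered into a database and analysed statistically.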
Questionnaires can be used in structured interviews or they can be used in situations where there is little or no direct contact between the researcher and the study participant. The researcher may distribute the questionnaires to the participants and collect them again once the participants have completed them. Alternatively, the researcher may send the questionnaire via the mail with the instruction to participants to return it once they have completed it. Questionnaires can also be made available to participants via a secure internet site. Once completed, the participants return their questionnaires via the internet. Self-administered questionnaires are those that are completed by participants without the presence of the researcher.
For people who are illiterate or disabled, special administration measures will need to be taken. For example, a questionnaire could be read aloud and answers recorded by the interviewer. Attention also needs to be paid to the interviewee’s home language so that unfamiliar terms can be explained, if necessary.
Please fill in the number that represents how you feel about the computer software you have been using.
Figure 2.3 Example of a Likert-type scale
Focus group interviews
Interviews may be conducted in a one-to-one situation or using a group setting. Focus group interviews, originally used in market research, are now widely used in psychological research. Focus group interviews basically entail a group discussion that explores a particular topic selected by the researcher. A moderator facilitates the discussion among the participants. An advantage of a focus group is that it offers a researcher the opportunity to gather information in a situation where participants are interacting with one another. In such interactive settings, statements made by one participant may initiate a reaction and additional comments from other participants, thus offering a wider perspective on the topic under discussion. A focus group needs to be large enough to generate rich discussion but not so large that some members are left out; between six and 10 participants is ideal. Participants are selected on the basis of their experience with the topic at hand. For example, the sample semi-structured interview referred to in Table 2.1 was used on a sample of in-patient crack users.
Table 2.2 Methods for gathering data
Observation
•Non-participant observation (the researcher observes participants without interacting with them)
•Participant observation (the researcher interacts with participants while also observing them)
•Observation through a one-way mirror
Interviews
•The researcher asks questions of the participant, and records and analyses the answers.
•Interviews may be structured, semi-structured or unstructured.
Questionnaires
•Ask a specific set of questions and usually have a structured format
Focus groups
•Involve a group discussion exploring a particular topic
Other sources
•Documents (e.g. letters, diaries) and other media (e.g. TV programmes)
2.4 METHODS FOR GATHERING DATA FOR THE DATE-RAPE STUDY
Imagine that you were continuing to use most of the research questions from Box 2.2, but had rephrased the first question as in Box 2.3:
1. How many Grade 12 female learners in Cape Town have experienced date rape?
2. Does a young woman’s empowerment increase her risk of date rape?
3. What are the consequences of date rape?
Having drawn your representative sample (in Box 2.3), you are now in a position to make decisions about how you will collect information from your participants.
To help you to answer the first research question, you decide to construct a number of questions that will enable you to categorise the participants according to those who have experienced date rape, and those who have not.
Regarding the second question, you have been fortunate enough to be able to access established questionnaires that assess levels of self-efficacy, self-esteem, self-actualisation, self-concept and beliefs about equity and free will among similar populations. These are the elements that constitute your construct, namely empowerment. Given the sensitivity of the topic, you decide that a self-report questionnaire would provide your participants with a greater sense of anonymity, thus increasing the possibility that they will answer the questions truthfully.
Because you have realised the limitations of merely gathering quantitative data, to help you to answer the third question you plan to conduct individual, in-depth interviews with 10 participants. So you end the questionnaire about empowerment with an explanation about your intention and a request that, if the survivors of date rape are prepared to be interviewed, they provide their contact details. You then set up a mutually convenient place and time to conduct interviews with the willing participants. After consulting the literature, you decide to follow a narrative approach to the interviews.
Other methods of gathering data
Other methods of gathering data include the collection and study of documents, letters and entries in personal diaries. Audio and visual material may also be used for research purposes, although these are then usually also transcribed into written texts.
Once the methodology of the study has been finalised, the researcher can proceed to gathering the information he/she needs to answer the research questions. The following issues are relevant:
•In quantitative research, the researcher needs to decide how the variables will be quantified so that the data can be analysed statistically. There are four levels of measurement: nominal, ordinal, interval and ratio scales.
•Data may be gathered in different ways; which one the researcher chooses depends on the research design. Data may be gathered by observing participants, either ’from the sidelines’ or by becoming part of the group he/she wishes to observe. The researcher may conduct structured or unstructured interviews or give participants a questionnaire. Lastly, the researcher may conduct a focus group. Data may also be gathered from documents or other media.
Step 4: Analysis of data
Analysis of quantitative data
Researchers and statisticians employ a number of methods to analyse quantitative data. Which method is used will depend on the nature of the research question, the research design and the nature of the data itself. As these methods are usually described in detail at a later stage in undergraduate training, here we provide just a brief outline of data entry into a database, descriptive data analysis, measures of central tendency and variability, and the way in which the reliability and validity of data are assessed.
Entering data into a database
Once a quantitative researcher has finished collecting data, he/she will assign numbers to all the participants’ responses to the research questions. These numbers will then be entered into a computer database so that they may be subjected to statistical analysis. But before a researcher begins with any analysis, he/she will check the data for errors, and, where necessary, go back to the original questionnaire or the dataset in order to correct them. Errors in coding and in entering are more likely when working with large datasets. Once a researcher has a clean and accurate dataset, he/she will begin the analysis.
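Checking a dataset for coding errors, as described above, can be as simple as scanning for values outside the valid code range. The dataset and coding scheme below are hypothetical:

```python
# Hypothetical coded dataset: each row is one participant's answers,
# where the only valid codes for every question are 1, 2 and 3
data = [
    [1, 2, 3],
    [2, 2, 1],
    [1, 9, 3],   # 9 is a data-entry error
]

valid = {1, 2, 3}

# Collect the (row, column) position of every invalid code
errors = [(row_i, col_i)
          for row_i, row in enumerate(data)
          for col_i, code in enumerate(row)
          if code not in valid]

print(errors)  # [(2, 1)]
```

Each flagged position tells the researcher which questionnaire and which question to re-check against the original paper record before analysis begins.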
2.5 RELIABILITY AND VALIDITY
It is important for instruments that measure some psychological phenomenon, attribute or characteristic to do so as effectively and accurately as possible. When a measurement instrument is developed or an existing one is used with a different population or in a different context, researchers should provide some indication of the instrument’s psychometric properties (in other words, its reliability and validity).
Reliability refers to whether the measurement instrument produces accurate and consistent results each time it is used. For example, a tape measure that measures a set distance, say 1 m, as 1.15 m on one occasion and 0.98 m on another is not a reliable measuring instrument. With psychological properties, however (like personality traits or intelligence), it is much harder to be sure about the accuracy of measurement. Because of this, most psychological tests report an error variance or ’error of measurement’ (EOM) value, which sets out the range within which the so-called true score is likely to be found. For example, if the EOM for an IQ test is 10 IQ points, a measured IQ of 110 tells us only that the true score probably lies somewhere between 105 and 115.
Validity refers to the extent to which a measurement instrument measures what it set out to measure. For example, we would not use a measure of someone’s maths ability to tell us about his/her creative ability.
See Chapter 6 for a more detailed discussion on reliability and validity.
Descriptive data analysis
A descriptive data analysis describes data by examining the distribution of scores for each variable. This type of analysis gives the researcher an initial picture of how people scored on each variable and how their scores are related to the scores of others. Often a researcher will generate a frequency distribution. For example, the frequency distribution of empowerment of Grade 12 female learners who had experienced date rape may be graphically depicted as shown in Figure 2.4.
This depiction provides the researcher with a visual idea of the level of empowerment among women who had experienced date rape. Just by looking at it, a researcher can see that their level of empowerment appears to be quite high (they mostly score 4 and 5 on the scale). However, in order to determine whether date-rape survivors’ levels of empowerment are higher than those who had not experienced date rape, the researcher would measure empowerment among a sample of women who had dated but who had never experienced date rape. The researcher would then combine the two sets of information, as depicted in Figure 2.5.
Figure 2.4 The frequency distribution of empowerment among Grade 12 female date-rape survivors
Figure 2.5 A comparison of the frequency distribution of empowerment among Grade 12 female date-rape survivors and those who had not experienced date rape
Just by looking at this second graph, a researcher can see that the level of empowerment among date-rape survivors is indeed higher than among women who had dated but not experienced date rape. However, a researcher cannot come to any definitive conclusions based on this information. The researcher would need to conduct statistical analyses before being able to claim that the difference in empowerment among these two groups of women is a statistically significant finding.
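A frequency distribution such as the one shown in Figure 2.4 is just a count of how often each score occurs. With some made-up empowerment scores on a five-point scale, it can be tallied (and crudely charted) as follows:

```python
from collections import Counter

# Hypothetical empowerment scores (1 = low, 5 = high) for 12 participants
scores = [4, 5, 4, 3, 5, 4, 5, 2, 4, 5, 3, 4]

freq = Counter(scores)
for score in sorted(freq):
    print(score, "#" * freq[score])
```

The resulting tally (most scores at 4 and 5) is the same kind of picture the graph gives: a quick visual sense of where the distribution is concentrated, before any formal statistical testing is done.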
Measures of central tendency
A measure of central tendency (also referred to as a summary statistic) provides a researcher with a single numerical value that represents all the values of a particular variable in the dataset. These indicate what is typical or average of the dataset. The different types of measures of central tendency are the mean, the mode and the median.
The mean is an average score. For example, a student received the following grades on five tests during her first-year psychology course: 34%, 88%, 34%, 50% and 78%. She may want to know what her average grade is because she needs to have scored above an average of 50% in order to gain admission to the year-end examination. She would add up the grades for the five tests (284) and divide this by the total number of tests (5) to obtain her average mark (56.8%).
The mode is the score in a distribution which occurs most frequently. In the above example, 34% would be the mode because it occurs twice, whereas the other scores each only occur once.
The median is the middlemost score when the scores are ordered from lowest to highest. Ordering the psychology assessment scores in the above example, we would get: 34%, 34%, 50%, 78% and 88%. The median here is 50% because it falls right in the middle of the scores, with two scores on each side of it. Where there is an even number of values, the median is found by adding the two middle scores and dividing the sum by two (see Figure 2.6).
Figure 2.6 Illustration of a median
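Using the five test marks from the example above, all three measures of central tendency can be computed directly with Python's standard library:

```python
from statistics import mean, median, mode

# The five test marks from the worked example in the text
marks = [34, 88, 34, 50, 78]

print(mean(marks))    # 56.8 -> the average mark
print(mode(marks))    # 34   -> the most frequent mark
print(median(marks))  # 50   -> the middlemost mark when ordered

# With an even number of values, the median is the average of the
# two middle scores
print(median([34, 34, 50, 78]))  # 42.0
```

Note that the three measures can differ considerably for the same data, which is why researchers report the one most appropriate to their distribution.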
Measures of variability
Although measures of central tendency are extremely useful, we also need to consider which scores contribute to a mean, mode or median. For example, the mean of 30 and 30 is 30; however, the mean of 60 and 0 is also 30. Thus, to understand a dataset fully, we also need to consider measures of variability. A measure of variability provides an indication of how diverse or variable the spread of scores is. Measures of variability include the range, the variance and the standard deviation. These are the most important descriptive statistics as they form the basis for most advanced statistical procedures. However, the way in which these measures are calculated, and how and why they are central to advanced statistical procedures, is beyond the scope of this chapter and will undoubtedly form part of later psychological research courses.
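The point that two very different datasets can share a mean of 30 can be checked with a small sketch that computes the range, population variance and standard deviation:

```python
def variability(data):
    """Return (range, population variance, standard deviation)."""
    n = len(data)
    m = sum(data) / n
    var = sum((x - m) ** 2 for x in data) / n   # average squared deviation
    return max(data) - min(data), var, var ** 0.5

# Both datasets have a mean of 30...
print(variability([30, 30]))  # (0, 0.0, 0.0)   -> no spread at all
print(variability([60, 0]))   # (60, 900.0, 30.0) -> widely spread
```

The identical means mask completely different spreads, which is exactly why measures of variability are reported alongside measures of central tendency.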
Analysis of qualitative data
Information gathered via interviews and other qualitative data sources is usually transcribed (see below). The analysis then involves sifting through the transcripts in a systematic way so that conclusions may be reached about the issue under investigation.
There are several methods of analysing qualitative data, such as thematic analysis, narrative analysis and discourse analysis, and all of these revolve around the analysis of meaning. In these approaches, words constitute the basic unit to be analysed.
Qualitative research is fundamentally different from quantitative research and therefore has its own verification processes.
Transcription of data
Interviews and focus group discussions are usually audio-recorded. Researchers must then transcribe these recordings into written format. Recordings are best transcribed word for word into an electronic document. The researcher should also insert comments (or symbols) that denote hesitation, pauses and, sometimes, particular reactions of participants such as laughs or sighs.
A researcher must transcribe entire interviews rather than selecting what he/she thinks is relevant. At the point of transcription, it is unusual for a researcher to be fully aware of the importance and relevance of what participants have said. It is only after the repeated reading of the transcripts during analysis that researchers can be sure of the importance and relevance of different pieces of information. However, as the researcher transcribes the information, it is useful to make notes about possible interpretations.
Once the transcription is complete, the researcher should check it by reading through it while listening to the audiotape, or by getting a colleague to do so.
Thematic analysis, narrative analysis and discourse analysis
Thematic analysis is the most commonly used method of qualitative data analysis. According to this process, audiotaped transcriptions are first broken down into units of meaning (e.g. self-esteem or self-determination). The researcher then uses a technique to place the units of meaning into categories. In this way, themes are systematically identified. When collated, these themes give insight into the particular issue being studied. In the past, there was no computerised assistance for the researcher. However, recognising the increasingly important role of qualitative research, some of the bigger statistical software companies have developed tools to aid in this analysis.
Narrative analysis is a technique that approaches the transcript as if it were a story following some type of sequence. Riessman (2008), a well-known scholar of narrative, explains that a narrative always responds to the question: ’And then what happened?’ Narrative analysis is especially useful for the study of changes over time. It may be used to explain how participants in a study understand past events and actions and also how they make sense of themselves and their own actions. Box 2.6 provides an example of a study that used narrative analysis to investigate how women make sense of abusive relationships.
Discourse analysis places a great deal of emphasis on the role of language. It is a technique through which the researcher analyses the use and structure of language to reveal representations of the world or sets of meanings. The researcher typically tries to draw attention to dominant meanings of particular phenomena as well as how these meanings may be contradictory and ambivalent. For example, Verlien (2003) examined how the staff members at a detention home for young women talked about these women as if they were still children, denying them any sense of sexuality.
In reality, many qualitative studies use a mix of data analysis techniques, but when researchers do this they must be systematic in their analyses.
Verification of qualitative data
There are several debates about the rigour of qualitative research. These revolve around questions such as: ’How can we trust the authenticity of qualitative research?’, and ’How can we be sure that such research is reliable and valid?’
There are many different perspectives on how to make sure that qualitative data is trustworthy and rigorous. A frequently used technique involves correspondence checks. These checks may involve the use of colleagues and other researchers to analyse the data independently. These analyses are then compared with that done by the primary researcher to check for correspondence. Some researchers take the analysed data back to the participants to find out what they think of the analysis.
Another technique that may be used is to openly consider alternative interpretations of the data and then report on these too. Whatever technique is used, all researchers must show that their interpretation relates to the overall goals, theory and method of the study. In the final analysis, the researcher should provide enough information to allow others to assess the merits and trustworthiness of the work (Riessman, 2008).
•Once all of the data has been collected, it needs to be analysed. How this is done will depend on whether the data is quantitative or qualitative.
•The method for analysing quantitative data will depend on the research questions. Data first needs to be coded and entered into a database.
•Once the coded quantitative data has been checked, descriptive statistics will be produced. These include measures of central tendency (mean, mode and median) and measures of variability (range and standard deviation). Graphs help show the data visually. Issues of reliability and validity must be considered.
•For qualitative data, recorded interviews and focus groups are first transcribed in full. The transcripts are then read repeatedly.
•The most commonly used approaches in qualitative analysis are thematic analysis (identifying specific units of meaning), narrative analysis (studying the stories participants tell of how they understand past events and actions), and discourse analysis (interpreting language to reveal representations of the participant’s world).
•Qualitative data must be verified through correspondence checks and by considering alternative interpretations. The participants may also be asked to check the interpretation.
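The descriptive statistics mentioned above can be illustrated with a short sketch. The exam scores below are invented for illustration; the sketch uses only Python's standard library:

```python
import statistics

# Hypothetical exam scores (out of 100) for a sample of ten learners
scores = [56, 61, 61, 64, 67, 70, 72, 75, 80, 94]

# Measures of central tendency
mean = statistics.mean(scores)      # the average score
median = statistics.median(scores)  # the middlemost score
mode = statistics.mode(scores)      # the most commonly occurring score

# Measures of variability
score_range = max(scores) - min(scores)  # highest score minus lowest score
std_dev = statistics.stdev(scores)       # sample standard deviation

print(mean, median, mode, score_range)
```

Note that the mean, median and mode can differ for the same data: here the single very high score (94) pulls the mean above the median, which is one reason researchers report more than one measure of central tendency.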
2.6 QUALITATIVE RESEARCH IN ACTION: SOUTH AFRICAN WOMEN’S NARRATIVES OF VIOLENCE
In South Africa, abuse of women is a pervasive problem affecting millions of families. Why do many women remain in relationships with men who abuse them? Boonzaier and De la Rey (2003) reported on a study in which 15 women living in Mitchell’s Plain in the Western Cape were interviewed. All the women had been married for periods of 5 to 26 years, and they had each experienced long-term abuse from their partners.
To gain a comprehensive picture of the women’s understanding of the violence, open-ended interviews were conducted. The issues covered in the interviews included the specific woman’s experience of the abuse, how she responded, her feelings towards her partner, and her reflections on staying or leaving. Prior to the interviews, all the women were informed about the nature and procedure of the research, and the voluntary nature of their participation was explained.
During the interviews, the women described traumatic and painful memories of abuse. The interviewer took care to establish rapport, show empathy and convey sensitivity. At the end of each interview, the interviewer, who was from Mitchell’s Plain herself, initiated a debriefing session in which the women could discuss the interview and the research process.
All the interviews were recorded and then transcribed. The interview transcripts were read many times, paying particular attention to the contents. These transcripts were then analysed to uncover the similarities and differences across and within cases. In analysing the transcripts, the gender identity of the role players as women or men was particularly evident. Narrative analysis was used to show how the women’s understandings of romantic love played an important role in binding them to their partners.
Many of the women viewed their partners as having two sides: a good side, which they loved, and then an abusive side. The women connected the appearance of the bad side to alcohol and drug abuse. Many of them described how their partners verbally abused them by calling them derogatory names. The researchers showed how these terms are examples of dominant cultural representations of women as either virgins or whores.
All of the women interviewed had sought assistance from various sources such as family, religious institutions and social agencies. The analysis revealed how these sources played contradictory roles, sometimes condemning the violent partner, but also encouraging the woman to be a good wife and remain in the relationship. Overall, the study showed how women’s understanding of their experience of abuse is linked to the particular context in which their experiences occurred. Economic hardships, alcohol and drug abuse, and dominant perspectives of masculinity and femininity all interacted to shape women’s responses and reactions.
Step 5: Reporting findings
Once the researcher has analysed the data, he/she needs to explain and interpret the findings. The researcher must also critically reflect on the entire research study to identify whether anything could have distorted the findings. It is virtually impossible for research of any sort to be totally free from errors or bias. It is therefore important for researchers to make every effort to identify, reduce and/or compensate for these.
Measurement errors in quantitative data refer to data that is wrong or inaccurate. If the errors influence all the data, they are called constant errors. If the errors only occur occasionally and/or are due to particular conditions, they are called random errors.
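The distinction between constant and random errors can be illustrated with a small simulation (the values here are invented for illustration): a constant error shifts every measurement by the same amount, whereas random errors scatter individual measurements unpredictably around the true value.

```python
import random

random.seed(1)  # fix the random sequence so the illustration is repeatable

true_weight = 70.0  # a person's actual weight in kilograms

# Constant error: a badly calibrated scale adds 2 kg to EVERY reading
constant_readings = [true_weight + 2.0 for _ in range(5)]

# Random errors: unpredictable fluctuations affect each reading differently
random_readings = [true_weight + random.gauss(0, 0.5) for _ in range(5)]

print(constant_readings)  # every reading is off by exactly the same amount
print(random_readings)    # readings scatter around the true value
```

In the constant-error case, every reading is equally wrong, so the error can in principle be detected and corrected (for example, by recalibrating the scale); random errors instead tend to average out over repeated measurements.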
Errors can creep in at any stage of the research process, which means that researchers should consider each step they have taken. For example, the initial definitions might not have been formulated clearly, an inadequate research design may have been developed, sampling errors may have resulted in a sample that was not representative of the population, and a badly constructed questionnaire could have led to inconsistent or confusing responses.
Bias is any ‘deviation from the truth in data collection, data analysis, interpretation and publication which can cause false conclusions’ (Šimundić, 2012, p. 12). Bias may occur when there are undetected influences at any stage of the research process that alter or conceal the relationships between the variables. For example, a researcher may be investigating the influence of having an alcoholic parent on self-esteem in adolescents. If his interest in this area is because of his own experiences with his father, he may be unable to detect how this might bias his whole research process from hypothesis to analysis.
In order to explain the difference between errors and bias, we have adapted an example from Bless et al. (2006, p. 165). Suppose a woman reads a book in isiXhosa. While she has a fair understanding of isiXhosa, she may struggle to understand the exact meaning of some of the words. She can avoid errors in understanding by consulting a dictionary. Alternatively, she may decide to read an English translation of the book because English is her mother tongue. However, she will then be at the mercy of the bias introduced by the translator who had his own understanding of the original book.
There are many specific sources of bias, including funding bias (direct or indirect influence from the research funders), hindsight bias (the sense that the researcher ‘knew it all along’) and confirmation bias (the tendency to interpret the data in a way that confirms the researcher’s preconceived ideas). However, in this chapter we describe some of the broader sources of bias in research. Understanding and being alert to the various sources of potential bias, and understanding how bias may affect the results, is the best way to minimise it in a research study (Pannucci & Wilkens, 2010).
An interviewer can influence the answers a participant gives in a number of ways. For instance, the interviewer may ask questions in an aggressive, judgemental or uninterested way. He/she may also record participants’ answers incorrectly, or ask a question in a way that suggests to the participant how to respond. (It has been known for interviewers to indicate answers without even asking the relevant questions.)
Participants could be unresponsive or may give inconsistent answers to questions. They could also purposefully provide false answers. For example, it is not uncommon for participants to under-report behaviours that are not socially desirable and to over-report behaviours that are. They may also misunderstand some questions and so provide inaccurate responses, or have difficulty expressing themselves.
Analysts could code and enter data into a database in an inaccurate way. They could also incorrectly classify responses to open-ended questions or choose an inappropriate statistical procedure for analysing data. All of these examples would introduce analyst bias.
At all points during the research process, a researcher’s personal views influence the decisions he/she makes. As suggested above, the researcher’s underlying assumptions about human experience may shape his/her research topic, as well as who is chosen to participate in the study, and what interpretations are given to the findings.
The issue of objectivity and reflexivity
Many researchers attempt to avoid bias by striving for objectivity in their research and research practices. However, a central feature of qualitative research is the rejection of the possibility of the researcher’s objectivity. Researchers, like the rest of us, are always interpreting what they see, hear and experience. When they ask a question or report on some findings, the language they use never just reflects reality, but always also conveys something about their interpretation of reality. So, instead of trying to eliminate the influence of the researcher’s understanding entirely, qualitative methods have attempted to develop techniques that take the researcher’s perspective into account and then address this as a component of the knowledge-generation process.
Rather than objectivity, qualitative researchers use the concept of reflexivity. In the research process, reflexivity incorporates self-reflection on the part of both the researcher and the participants (Grbich, 2007). As pointed out in Grbich (2007, p. 9), in the course of the research process, ’interaction between you and those researched will serve to produce a constructed reality’ with prominence being given to the voices of the researched.
2.7 ETHICS IN RESEARCH
Source: Bless et al. (2006)
Before researchers contact their first participant, they must ensure that their research protocol has passed ethical evaluation. The purpose of ethical evaluation is to ensure that participants in a research study are treated humanely and sensitively, and that their right to privacy and protection from physical or psychological harm is upheld. Most academic institutions have professional bodies that provide ethical guidelines to which researchers must adhere. These guidelines are informed by a number of fundamental principles, briefly defined as follows:
•The principle of non-maleficence. Participants must not be harmed in any way, either physically or psychologically, through their participation in a research project.
•The principle of beneficence. Research should have the potential to benefit not only the research participants but other people as well.
•The principle of autonomy. People must voluntarily participate in the research project. No one should be forced or coerced to participate.
•The principle of justice. People must not be discriminated against on the basis of gender, race, income level or any other characteristic.
•The principle of fidelity. Research participants must be able to trust and have faith in the researchers. This calls for researchers to honour all promises and agreements made between themselves and participants.
•Respect for participants’ rights and dignity. All human beings have legal and human rights that no researcher may violate. The dignity and self-respect of participants must always be preserved.
Over and above these basic principles, researchers must adhere to the following ethical requirements:
•Informed consent. Participants should be fully informed about the research project, how the project will affect them, and the risks and benefits of participation. They must be informed of their right to decline to participate.
•Confidentiality. Information that participants provide must be protected and must be unavailable to anyone other than the researchers themselves.
•Anonymity. Information that participants provide must not be linked in any way to their names or any other identifiers.
•Referral. The psychological well-being of participants must be ensured at all times. Appropriate referral to a relevant authority should be provided if participants become distressed through their participation in a research project.
•Discontinuance. Participants must be assured that they may discontinue with a research project at any time without having to say why, and without having any repercussions.
•Research with vulnerable populations. Vulnerable populations are those who may not fully understand the implications and requirements of participation in a research project. Researchers must be attuned to the special needs of such populations and must not be patronising or condescending.
•Quality. Researchers have an ethical obligation to conduct research that is well designed and of high quality.
•Analysis and reporting. Researchers may not change or falsify their data or their observations.
•Reporting and publication. Researchers should report back to their study participants in a way that is easily understandable to them. In publishing the research results, researchers should ensure that participants’ anonymity is maintained. Credit must be given to all persons who assisted in the research project.
See Chapter 1 for examples of ethical issues in psychology.
The organisation of a research report
There are no strict rules about how a research report should be structured. The structure will depend on the audience for which it is intended. For example, if a report is intended for research-funding institutions, it will need to be a detailed and complete account of the entire research process. A report written for an academic journal will need to be less detailed, but nevertheless demonstrate a high level of scientific quality. Some audiences may be less interested in the technical aspects of the study, and prefer to be fully informed about the findings and their implications. If a report were intended for readers of a consumer magazine, or the local newspaper, it would probably describe the research in a more general way and avoid scientific language. Some institutions provide guidelines as to the format they require. All academic journals have fairly specific instructions for authors. Whatever format is appropriate, it is important that the report is sequentially and logically structured. Generally, the sections of a report will follow the steps in the research process closely.
The introduction section
This section typically begins by identifying the research problem. The researcher does this by stating what is already known about the broad area in which the problem is situated, what is unclear or unknown, and the relevance of further investigation into the area. A review of relevant literature follows.
In this part of the introduction, the researcher should be careful to remain focused on the research topic and evaluate the theoretical underpinnings and other research findings directly relevant to the research topic. It is often useful to provide subheadings that deal with each of the variables of interest. Other background information should also be provided. The introduction is then concluded with a more precise description of the research problem, and a clear but concise statement about the purpose of the study. Where relevant, this would include formally stating the research hypotheses and providing a rationale for their development. Again, if relevant, operational definitions of the variables of interest should be provided.
The purpose of the introduction is to give the reader a clear idea about what is known about the research topic, what is unclear or unknown, and what and how the study will add to the relevant field.
The methodology section
This section of a report lays out in sequential detail how the study was conducted. Again, many researchers use subheadings to structure this material logically. In general, this section begins by describing the research participants. It includes a description of the population, the sampling procedures and the sample size. Relevant demographic information is often included. The reader must have no doubt about who participated in the study.
The next step is for the researcher to describe what was required of the participants. For example, the researcher should say whether participants had to complete a self-administered questionnaire, participate in unstructured interviews or focus groups, or provide biological specimens. Questionnaires, interview schedules and/or tasks should be described. The researcher should explain the relationship between the variables of interest and the tasks required of the participants. The variables, which were mentioned in the introduction, should now be discussed in more detail. For example, the researcher should state which variables are being proposed as independent variables and which are being proposed as dependent variables.
The next step is for the researcher to describe the procedures that were followed when the data was collected. The researcher must say how participants were recruited into the sample, what instructions were given to them, how the setting was arranged, how long the activities took, and how ethical issues such as informed consent were addressed. Finally, the process whereby the data was analysed should be provided in detail.
The results section
In this section, the researcher first gives a description of the main results from the data analysis. Tables, graphs and diagrams are useful ways to depict quantitative findings, while excerpts from transcripts help to illustrate qualitative findings. It is often useful to structure the results in the same order in which the relevant research questions were initially posed.
The discussion section
The discussion includes a summary and interpretation of the findings, and states the conclusions that can be drawn from the findings. At this point, if appropriate, findings can be generalised. Limitations of the study should be addressed. Suggestions and recommendations for future research can be proposed. Implications for policy and/or interventions can be considered.
The references section
All the literature cited in the report must be listed at the end of the report, alphabetically by author. Referencing styles vary depending on the publication in which the report will appear. For psychology publications, the conventions of the American Psychological Association are usually followed.
The abstract, executive summary and appendices
An abstract or executive summary usually precedes the sections described above. Typically an abstract is a summary of all the sections of the report and is usually around 200 words in length. An executive summary is typically longer and a little more detailed. The purpose of these summaries is to provide the reader with a good idea of the contents of the entire report.
Appendices may be added at the end of a report and may contain, for example, detailed tables of research findings, and/or the questionnaire(s) or measurement scale(s) used in the study.
Step 6: Theory building
In the chapters that follow you will encounter many psychological theories and descriptions of research findings. These two aspects of psychology are integrally linked.
The findings from a research study will either confirm or refute the theory that provided the explanatory framework for the research. If the former is the case, the theory about the behaviour or phenomenon that was examined is strengthened. Psychologists may even be able to make predictions about the probability of future behaviours or phenomena occurring under certain circumstances. Being able to predict with a great degree of certainty whether a behaviour or phenomenon will occur under certain circumstances would allow psychologists to intervene and influence the course of events positively.
However, if a theory is refuted, the explanation it offers of the behaviour or phenomenon that was explored needs to be refined or reformulated. Psychologists then return to the theory and propose refinements in line with their findings. In all likelihood, the refined theory would then be subjected to the research process again (whether by those psychologists who refined the theory or others).
In both instances, new knowledge is being created. Whether research findings strengthen a theory or require a reformulation of a theory, scientific knowledge is being advanced. As Bless et al. (2006, p. 13) observe:
If theories were not advanced, deeper understandings of social phenomena would not be achieved and knowledge would become stagnant. For the frontiers of knowledge to be pushed, theories need to be continually refined and improved.
•Once the research data has been analysed, the results need to be explained, interpreted and reported.
•At this stage, the researcher must critically reflect on the study, attempting to identify any errors or bias.
•Errors may be constant or random.
•Bias may arise from a variety of sources: the interviewer, participants, the analyst or the researcher him-/herself.
•Traditionally, researchers have tried to be as objective as possible; however, there is a growing acceptance, especially in qualitative research, that objectivity is not possible or even appropriate. Instead, qualitative researchers use the concept of reflexivity.
•The research report will be structured depending on its intended audience. The traditional report contains the following standard sections: abstract, introduction, methodology, results, discussion, references and any appendices that may be necessary.
•Finally, the research study findings may be used in the process of theory building.
All researchers have an ethical duty to conduct the highest quality research. This means that researchers must consistently conduct studies following the correct procedures and to the best of their abilities, they must be adequately trained and have the appropriate expertise, and they must disseminate their findings with integrity.
bias: undetected influences that can occur at any stage of the research process that may alter or conceal the relationships between variables
categorical variable: a variable that can be divided into distinct categories
causal relationship: a relationship between variables whereby variation in one variable leads directly to variation in another
constant: any property of an object or individual that does not vary from object to object or individual to individual
constant errors: measurement errors in quantitative data that influence all the data
construct: a psychological phenomenon that can be measured
continuous variable: a variable that can exist at any point within a range or on a continuum
convenience sampling: a sampling procedure whereby a researcher chooses the nearest people to participate in his/her study
correlation: when two variables vary in the same way and we cannot determine whether a causal relationship exists between them, or whether they are both influenced by a third variable (when two variables vary in the same direction, they are said to be positively correlated; when two variables vary in opposite directions, they are said to be negatively correlated)
cross-sectional study: a research study where all information is collected from participants at one point in time
dependent variable: a variable whose values change as a result of the influence of one or more other variables
descriptive data analysis: an initial analysis of data that provides the researcher with a picture of people’s scores on each of a study’s variables, and how some people’s scores are related to the scores of others
descriptive study: a research study conducted when a researcher wishes to describe a phenomenon or behaviour
discourse analysis: a method of qualitative data analysis whereby a great deal of emphasis is placed on the role of language
experimental hypothesis: the hypothesis that a researcher makes to try to answer the research question
explanatory study: a research study conducted when a researcher wishes to explain the relationship between variable(s), for example whether it is correlational or causal
exploratory study: a research study conducted when very little is known about a research topic
focus group: a method for gathering information from research participants whereby a small group of research participants explore, through discussion, a particular topic chosen by a researcher
generalise: in the context of research, the ability to draw conclusions about a population on the basis of conclusions reached about a sample from that population
hypothesis: a speculative statement about the expected relationship between phenomena, which is then investigated empirically
independent variable: a variable whose values exert an influence on another variable
interval sampling (systematic sampling): a sampling procedure whereby units of analysis are selected into a sample by selecting units at predetermined intervals, starting at a random number
interval scale: a level of measurement that assigns numbers in such a way that the size of the difference between any two numbers corresponds to the size of the difference in the attribute being measured
longitudinal study: a research study where information is collected from participants over a period of time (e.g. every six months or every year for a fixed time period)
mean: a measure of central tendency that is an average score
measure of central tendency: a single numerical value that represents all the values of a particular variable in a dataset (see mean, mode and median)
measure of variability: an index of the spread of scores on measures of variables
median: a measure of central tendency that is the middlemost score when individual scores are ordered from lowest to highest
methodology: the means employed to study reality (e.g. a quantitative researcher would employ an empirical method of enquiry, and a qualitative researcher would employ an interpretive method of enquiry)
mode: a measure of central tendency that is the most commonly occurring score in a distribution of scores
multi-stage sampling (cluster sampling): a sampling procedure whereby researchers progress through a number of stages, randomly selecting clusters from the population at each stage until units of analysis are randomly selected in the final stage of sampling
narrative analysis: a method of qualitative data analysis whereby transcriptions from interviews or focus groups are approached as if they are a story following some form of sequence
nominal scale: a level of measurement that classifies variables into mutually exclusive groups that have numbers assigned to them
non-participant observation: a method for gathering information from research participants whereby the researcher observes participants without interacting with them
non-probability sampling: a sampling method where every unit of analysis in a population does not have an equal chance of being selected into a sample, and we do not know what the likelihood of each unit being selected is
null hypothesis: the opposite of the experimental hypothesis; the researcher tries to disprove the null hypothesis
objectivity: the belief that research and research findings are value free, separate from ideology, culture and politics
operational definition: defines a variable in terms of what needs to be done in order to observe and measure it
ordinal scale: a level of measurement that assigns labels to variables in the form of numbers such that one variable can be placed in relation to another in terms of the quantity of the attribute that each possesses
participant observation: a method for gathering information from research participants whereby the researcher observes and interacts with participants
probability sampling: a sampling method where every unit of analysis in a population has an equal chance of being selected into a sample, and we know what the likelihood of each unit being selected is
purposive sampling: a sampling procedure whereby participants are selected into a sample on the basis of a researcher’s own judgements about the participants
qualitative research methods: research methods that obtain data in the form of descriptive narratives in order to understand a phenomenon from the perspective of the research participant, and gain an understanding of the meanings people give to their experience
quantitative research methods: research methods that involve the application of statistical analysis to data, and the development of statistical approaches for measuring and explaining human behaviour
questionnaire: a set of predetermined, specific questions with explicit wording and sequence of presentation
random errors: measurement errors in quantitative data that occur occasionally and/or are due to particular conditions
ratio scale: a level of measurement that assigns numbers in such a way that the size of the difference between any two numbers corresponds to the size of the difference in the attribute being measured (ratio scales have a true zero point and cannot have negative numbers)
reflexivity: the ability of both the researcher and the research participants for self-reflection during the research process
reliability: the accuracy and consistency of a measurement instrument (its ability to produce similar results over repeated administrations)
replication: the process whereby researchers repeat a study with a different group of people and/or in a different context
representative sample: a group of people, selected by a researcher from a population, who are in all ways similar to that population
research design: a specification of the most satisfactory actions to be performed in order to answer research questions successfully, and/or test hypotheses
sample: a collection of people or objects from which a researcher will collect information
sampling: a process employed by researchers whereby individuals or objects are selected to participate in a research study
sampling frame: a record of all units of analysis from which information could be gathered
self-administered questionnaire: questionnaire that is completed by participants without the presence of a researcher
semi-structured interview: a method for gathering information from research participants whereby the researcher ensures that certain areas of questioning are covered, but there is no fixed sequence or format of questions
simple random sampling: a sampling procedure whereby every unit of analysis has an equal chance of being selected into a sample
snowball sampling: a sampling procedure whereby one or more individuals are requested to recruit others to participate in a research study
statistical inference: the practice of generalising information gained from a sample to a population from which the sample is drawn
stratified random sampling: a sampling procedure whereby the population of interest is first separated into different groups (strata) so that each unit of the population belongs to one group only, and then random or interval sampling is employed for each group
structured interview: a method for gathering information from research participants whereby the researcher follows a set list and sequence of questions
thematic analysis: a method of qualitative data analysis whereby transcribed interviews or group discussions are broken down into units of meaning or themes
transcripts: spoken interactions that have been written down word for word
units of analysis: the objects from whom a researcher wants to collect information (they may be individuals, groups of people, organisations, periods of time and/or social artefacts)
unstructured interview (open-ended interview): a method for gathering information from research participants whereby the researcher merely tries to remain focused on an issue of study without any pre-set list of questions
validity: the extent to which a measurement instrument actually measures what it is meant to measure
variable: any property of an object or individual that can vary from person to person or object to object
Multiple choice questions
1.There is a relationship between the time a learner spends studying and exam performance. In this instance, exam performance is the:
2.Which of the following statements relating to qualitative research is false?
a)The researcher aims to interact with the research participants in a reciprocal relationship.
b)A qualitative researcher should ensure objectivity in research.
c)Reflexivity, rather than objectivity, is central to the qualitative research process.
d)Since our understandings of the world are always mediated, there is always an interpretive component in research.
3.Which of the following is a technique not used in qualitative research?
a)thematic analysis
b)discourse analysis
c)central tendency analysis
d)narrative analysis.
4.Discourse analysis involves:
a)grouping units of meaning into categories
b)drawing attention to language usage
c)examining changes over time
d)tallying scores into frequencies.
5.What type of study would it be best to conduct when there is little known about a particular behaviour or phenomenon?
a)a descriptive study
b)an exploratory study
c)an explanatory study
d)a correlational study.
6.A study that collects data just once from a sample is called a __________________; a study in which there are repeated collections of data is called a __________________.
a)correlational study; longitudinal study
b)longitudinal study; cross-sectional study
c)cross-sectional study; longitudinal study
d)cross-sectional study; correlational study.
7.If you are unable to obtain a full list of all units of analysis for the population you wish to study, which sampling method would you use?
a)simple random sampling
b)interval sampling
c)multi-stage (cluster) sampling
d)all of the above.
8.Most measurement scales in the behavioural sciences are:
a)nominal scales
b)ordinal scales
c)interval scales
d)ratio scales.
9.In a distribution of measurement scores, the __________ is the middlemost score when the scores are arranged from lowest to highest, the ________ is the average score, and the ________ is the most commonly occurring score.
a)median; mean; mode
b)mode; median; mean
c)mean; mode; median
d)median; mode; mean.
10.What is the ethical principle that states that participants must not be harmed in any way, either physically or psychologically, through their participation in a research project?
a)the principle of fidelity
b)the principle of autonomy
c)the principle of non-maleficence
d)the principle of beneficence.
Short answer questions
1.Explain the difference between dependent and independent variables. Provide examples based on a study you may do.
2.Provide examples to describe the three measures of central tendency.
3.Outline the most commonly used types of analysis employed by qualitative researchers.
4.How do researchers go about verifying qualitative data?
5.‘It is impossible for research of any sort to be totally free from errors or bias.’ Discuss this statement.
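As an aid to short answer question 2 above, the three measures of central tendency can be sketched with Python's standard `statistics` module. The exam scores below are invented for illustration.

```python
from statistics import mean, median, mode

# Hypothetical exam scores, invented for illustration.
scores = [55, 60, 60, 65, 70, 75, 95]

average = mean(scores)      # the arithmetic average of all scores
middle = median(scores)     # the middlemost score when ordered
most_common = mode(scores)  # the most frequently occurring score
```

Note that the three measures can differ for the same data: here the single high score of 95 pulls the mean above the median, while the mode simply reflects the repeated score of 60.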
REFERENCES FOR PART 1
American Psychiatric Association. (2014). Diagnostic and statistical manual of mental disorders: DSM-5 (5th ed.). Washington, DC: APA.
American Psychological Association. (2010). Ethical principles of psychologists and code of conduct. Retrieved February 2, 2011 from http://www.apa.org/ethics/.
Benjamin, L. (2008). A history of psychology. Original sources and contemporary research (3rd ed.). New York: Wiley-Blackwell.
Bless, C., Higson-Smith, C. & Kagee, A. (2006). Fundamentals of social research methods. An African perspective (4th ed.). Cape Town: Juta.
Boonzaier, F. & De la Rey, C. (2003). ‘He’s a man, and I’m a woman’: Cultural constructions of masculinity and femininity in South African women’s narratives of violence. Violence Against Women, 9(8), 1003–1029.
Bowman, B., Duncan, N. & Swart, T. (2008). Social psychology in South Africa: Towards a critical history. In C. van Ommen & D. Painter (Eds.), Interiors: The history of South African psychology (pp. 319—312). Pretoria: Unisa Press.
Coon, D. & Mitterer, J. O. (2013). Gateways to psychology: An introduction to mind and behaviour (13th ed.). Pacific Grove, CA: Wadsworth/Cengage Learning.
Cooper, S., Nicholas, L., Seedat, M. & Statman, J. (1990). Psychology and apartheid: The struggle for psychology in South Africa. In L. Nicholas & S. Cooper (Eds.), Psychology and apartheid (pp. 1—21). Johannesburg: Madiba/Vision.
Csikszentmihalyi, M. (2003). Legs or wings? A reply to R. S. Lazarus. Psychological Inquiry, 14(2), 113–115.
Davey, G. (2004). Complete psychology. Abingdon: Hodder & Stoughton.
De la Rey, C. (2001). Racism and the history of university education in South Africa. In N. Duncan, A. van Niekerk, C. de la Rey & M. Seedat (Eds.), Race, racism, knowledge production and psychology in South Africa (pp. 7—16). New York: Nova Science.
Duncan, N., Stevens, G. & Bowman, B. (2004). Race, identity and South African psychology. In D. Hook, P. Kiguwa, N. Mkhize, A. Collins & I. Parker (Eds.), Critical psychology (pp. 360—388). Cape Town: UCT Press/Juta.
Durrheim, K. (2006). Research design. In M. Terre Blanche & K. Durrheim (Eds.), Research in practice (2nd ed.) (pp. 33—59). Cape Town: University of Cape Town Press.
Engler, B. (2009). Personality theories (8th ed.). Belmont, CA: Wadsworth.
Foster, D. & Louw, J. (1991). Historical perspective: Psychology and group relations in South Africa. In D. Foster & J. Louw-Potgieter (Eds.), Social psychology in South Africa (pp. 57—90). Johannesburg: Lexicon.
Gable, S. L. & Haidt, J. (2005). What (and why) is positive psychology? Review of General Psychology, 9(2), 103—110.
Grbich, C. (2007). Qualitative data analysis: An introduction. London: Sage.
Hergenhahn, B. & Henley, T. (2013). An introduction to the history of psychology. Boston, MA: Cengage.
Holt, N., Bremner, A., Sutherland, E., Vliek, M., Passer, M. & Smith, R. (2012). Psychology: The science of mind and behaviour (2nd ed.). London: McGraw-Hill.
Kelly, A. E. & Rodriguez, R. R. (2006). Publicly committing oneself to an identity. Basic and Applied Social Psychology, 28, 185—191.
Kosso, P. (2011). A summary of scientific method. Dordrecht: Springer.
Lamb, R. H. & Bachrach, L. L. (2001). Some perspectives on deinstitutionalisation. Psychiatric Services, 52, 1039—1045.
Leahey, T. (2004). A history of psychology: Main currents in psychological thought (6th ed.). New York: Prentice-Hall.
Pannucci, C. J. & Wilkens, E. G. (2010). Identifying and avoiding bias in research. Plastic and Reconstructive Surgery, 126(2), 619—626. doi: 10.1097/PRS.0b013e3181de24bc.
Passer, M., Smith, R., Holt, N., Bremner, A., Sutherland, E. & Vliek, M. (2009). Psychology: The science of mind and behaviour. Maidenhead, Berkshire: McGraw-Hill.
Pedroso, R. S., Kessler, F. & Pechansky, F. (2013). Treatment of female and male inpatient crack users: A qualitative study. Trends in Psychiatry and Psychotherapy, 35(1). Retrieved March 6, 2015 from http://dx.doi.org/10.1590/S2237-60892013000100005.
Riessman, C. K. (2008). Narrative methods for the social sciences. London: Sage.
Seligman, M. E. P. & Csikszentmihalyi, M. (2000). Positive psychology: An introduction. American Psychologist, 55(1), 5—14.
Šimundić, A-M. (2012). Bias in research. Biochemia Medica, 23(1), 12–15.
Terre Blanche, M. & Seedat, M. (2001). Martian landscapes: The social construction of race and gender in South Africa’s National Institute for Personnel Research, 1946—1984. In N. Duncan, A. Van Niekerk, C. De la Rey & M. Seedat (Eds.), Race, racism, knowledge production and psychology in South Africa (pp. 61—82). New York: Nova Science.
Tomlinson, M. & Swartz, L. (2003). Imbalances in the knowledge about infancy: The divide between rich and poor countries. Infant Mental Health Journal, 24, 547–556.
Verlien, C. (2003). Innocent girls or active young women? Negotiating sexual agency at a detention home. Feminism & Psychology, 13(3), 345—367.