An A—Z of Research Methods and Issues Relevant to Health Psychology - Health Psychology in the Context of Biology, Society and Methodology

Health Psychology: Theory, Research and Practice - David F. Marks 2010

An A—Z of Research Methods and Issues Relevant to Health Psychology
Health Psychology in the Context of Biology, Society and Methodology

’Be curious.’

Anon

Outline

In this chapter, we present an A—Z of methods and issues within health psychology research in four categories: quantitative, qualitative, action research and mixed methods. Quantitative researchers place an emphasis on reliable and valid measurement in controlled experiments, trials and surveys. Qualitative researchers use interviews, focus groups, narratives, diaries, texts or blogs to explore health and illness concepts and experience. Action researchers facilitate change processes, improvement, empowerment and emancipation. Mixed methods researchers combine methods from different traditions. This chapter also introduces issues that are crucial to the progression of the field, such as replication, power and repeatability.

Introduction

Curiosity is the overarching motive for any piece of research. We always want to know more, to better understand how things work. Researchers are generating an increasingly diverse array of questions that require equally diverse methods to answer them. Inevitably publications end with more questions than answers and conclude with the statement that more research is needed. Theories play a crucial role in guiding our curiosity along worthwhile avenues. We need to go where the field is most fertile, and avoid sterile ground, error and false conclusions. We would not know where to start without theories, models and hypotheses to guide us. Using sound methodology and analysis is of equal importance in testing theories and models, putting theory into practice, and evaluating the consequences of doing so. In interventions and actions to produce change, we need to know what works and doesn’t work, and why. The process is as important as the outcome. In an ideal situation we understand both, and can repeat the process and achieve the same outcome on multiple occasions.

Many traditional methods and research designs are quantitative, placing an emphasis on reliable and valid measurement in controlled investigations with experiments, trials and surveys. Multiple sources of such evidence are integrated or synthesized using systematic reviews and meta-analysis. Case studies are more suited to unique, one-off situations that merit investigation. Qualitative methods use interviews, focus groups, narratives or texts to explore health and illness concepts and experience. Action research enables change processes to feed back into plans for improvement, empowerment and emancipation. Interest in qualitative methods and action research has been increasing. These different kinds of method complement each other, and are necessary if we want a complete picture of psychology and health. Which method is appropriate in any given situation depends entirely upon the question being asked and the context.

Research in health psychology has grown exponentially. Figure 7.1 plots the growth over the period 1990—2009. Approximately 8,000 published health psychology studies appeared in print in 2009 compared to around 800 in 1990, a ten-fold increase. That level of growth continues, and there is little sign of it levelling off.

Figure 7.1 Trends in numbers of health psychology studies, 1990—2009

Image

Source: Marks (2013)

The sections below present an A—Z of the most commonly used research methods and issues that arise in health psychology.

Action Research

Action research is about the process of change and what stimulates it. The investigator acts as a facilitator, collaborator or change-agent who works with the stakeholders in a community or organization to help develop a situation or make a change of direction happen. Action research is particularly suited to organizational and consultancy work when a system or service requires improvements. In a community it aims to be emancipatory, helping to empower members to take more control over the way things work in their local community.

Action research can be traced back to the Gestalt psychology of Kurt Lewin (1936: 12): ’Every psychological event depends upon the state of the person and at the same time the state of the environment. … One can hope to understand the forces that govern behaviour only if one includes in the representation the whole psychological situation.’ Lewin later wrote about what he called ’Feedback problems of social diagnosis and action’ (1947) and presented a diagram of his method (see Figure 7.2). A series of action steps with feedback loops allows each action step to be ’reconnoitred’ before further action steps.

Disciples of Lewin (e.g., Argyris, 1975) interpret his approach in terms of increasing the collaboration of community members, stakeholders and researchers in the design and interpretation of the study. Feedback of early results to participants often leads to a redesign of methods in light of consultation about the findings.

Participant action research (PAR) is a prominent method in community health psychology (Campbell and Cornish, 2014). In PAR, researchers share power and control with participants and need to tolerate the uncertainty that results from power-sharing. PAR is a suitable research approach in direct social actions that are organized to create change in entrenched scenarios where power imbalances are disadvantaging many of the actors. For example, Yeich (1996) described how housing campaigns with groups for homeless people involved assisting in the organization of demonstrations and working with the media to raise awareness of people’s housing needs.

Action research takes time, resources, creativity and courage. It requires collaboration with different agencies. It is an approach that does not follow a straight line but proceeds in a halting, zig-zag format. Often there are personal challenges and disappointments for the researcher, who must devote substantial emotional and intellectual energy to the project (Brydon-Miller, 2004). Cornish et al. (2014) proposed the Occupy movement as a paradigm example of community action that they labelled ’trusting the process’ (see Chapter 17).

PAR researchers also use the arts and performance as vehicles for envisioning and promoting change. Two examples are the photo-novella (Wang et al., 1996; Lykes, 2001) and PhotoVoice (Haaken and O’Neill, 2014; Vaughan, 2014). Research participants in PhotoVoice take and display photographs with the aim of becoming more reflectively aware and able to mobilize around personal and local issues. Tucker and Smith (2014) developed a Lewinian approach to the investigation of life situations and a specific example of self-care in a service user’s home. A study of accidents in a fishing community used the PAR approach (Murray and Tilley, 2004), as did Gray and his colleagues when they transformed interviews with cancer patients into plays performed to support groups (Gray et al., 2001; Gray and Sinding, 2002).

Figure 7.2 Planning, fact-finding and execution, as described by Kurt Lewin (1947)

Image

Between Groups Designs

A between groups design allocates matched groups to different treatments. If the measures are taken at one time, this is called a cross-sectional design, in contrast to a longitudinal design where the groups are tested at two or more time-points. When we are comparing only treatment groups, a failure to find a difference between them on the outcome measure(s) might be for one of three reasons: they are equally effective; they are equally ineffective; they are equally harmful. For this reason, one of the groups should be a control group that will enable us to discover whether the treatment(s) show a different effect from no treatment.

Ethical issues arise over the use of control groups. Not treating someone in need of treatment is unacceptable. However, if there is genuine uncertainty about what works best, it is better to compare the treatments with a control condition than to continue for ever applying a treatment that may be less effective than another. Once it has been determined which therapy is the most effective, this can be offered to the control group and to all future patients (Clark-Carter and Marks, 2004).

The choice of the control condition is important. The group should receive the same amount of attention as those in the treatment condition(s). This type of control is known as a placebo control (see below) as treatment itself could have a non-specific effect to ’please’ the client and enhance his/her well-being.

If all of the various groups’ responses are measured only after an intervention, then we haven’t really measured change. All groups, including the control group, could have changed, but from different starting positions, and failing to find a difference between the groups after the treatment could miss this. We can help to deal with this problem by using a mixed design when we measure all groups before and after the treatment. However, we would be introducing some of the difficulties mentioned above for a cross-over or within-subjects design (Clark-Carter, 1997).

Bonferroni Correction

It is common in research to make multiple statistical comparisons. At the .05 or 5% significance (α) level, with 20 independent tests there is a 64% chance of observing at least one significant result even when no real effect exists (a Type I error). In a research project, numerous simultaneous tests may be required and the probability of getting a significant result simply by chance may be very high. The α level must therefore be adjusted so that the probability of observing at least one significant result due to chance remains below the desired significance level. The Bonferroni correction places the significance cut-off at α/n, where n is the number of tests. For example, with 20 tests and α at .05, you would only reject a null hypothesis if the p-value is less than .05/20 = 0.0025.
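
To make the arithmetic above concrete, here is a minimal Python sketch of the familywise error rate and the Bonferroni cut-off; the p-values at the end are purely illustrative.

```python
# Familywise error rate for k independent tests at significance level alpha,
# and the Bonferroni-corrected per-test cut-off.

alpha = 0.05
k = 20

# Probability of at least one spuriously 'significant' result across k tests
familywise_error = 1 - (1 - alpha) ** k
print(f"Chance of at least one Type I error: {familywise_error:.2f}")  # ~0.64

# Bonferroni correction: compare each p-value against alpha / k
cutoff = alpha / k
print(f"Per-test cut-off: {cutoff}")  # 0.0025

p_values = [0.001, 0.012, 0.049]  # illustrative p-values from three of the tests
print([p < cutoff for p in p_values])  # [True, False, False]
```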

Case Studies

The term ’case study’ is used to describe a detailed descriptive account of an individual, group or collective. The purpose of case studies is to provide a ’thick description’ (Geertz, 1973) of a phenomenon that would not be obtained by the usual quantitative or qualitative approaches. It requires the researcher to be expansive in the types of data collected, with a deliberate aim to link the person with the context, e.g., the sick person in the family. The researcher usually attempts to provide a chronological account of the evolution of the phenomenon from the perspective of the central character.

A challenge for the researcher is in establishing the boundaries of the case. These need to be flexible to ensure that all information relevant to the case under investigation is collected. The major strength of the case study is the integration of actor and context and the developmental perspective. Thus, the phenomenon under investigation is not dissected but rather maintains its integrity or wholeness and it is possible to map its changes over time.

There are several different types of case study. The empirical case study is grounded in the data. The theoretical case is an exemplar for a process that has already been clarified. The researcher can conduct interviews, or repeat interviews and observe the case in different settings. The process of analysis can be considered the process of shaping the case. Thus, the researcher selects certain pieces of information and discards others so as to present a more integrated case. One example is De Visser and Smith’s (2006) investigation of the links between masculine identity and social behaviour with a 19-year-old man living in London.

Confidence Interval

A confidence interval (CI) is the interval around the mean of a sample that one can state, with a known probability, contains the mean of the population. A population parameter is always estimated using a sample. The reliability of the estimate varies according to sample size. A confidence interval specifies a range of values within which a parameter is estimated to lie. The narrower the interval, the more reliable the estimate. Typically, the 95% or 99% CI is stated in the results of a study that has obtained a representative sample of values, e.g., the mean heart rate for a sample might be 75.0 with a 95% CI of 72.6 to 77.4. Confidence intervals are normally reported in tables or graphs, along with point estimates of the parameters, to indicate the reliability of the estimates.
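
As an illustration, a t-based 95% CI for a sample mean can be computed in a few lines of Python with SciPy; the heart-rate values below are invented for the example.

```python
# 95% confidence interval for a sample mean, using the t distribution.
import numpy as np
from scipy import stats

heart_rates = np.array([72, 75, 78, 74, 76, 73, 77, 75])  # illustrative sample

n = len(heart_rates)
mean = heart_rates.mean()
sem = stats.sem(heart_rates)  # standard error of the mean

# Interval: mean +/- t_critical * SEM, with n - 1 degrees of freedom
low, high = stats.t.interval(0.95, df=n - 1, loc=mean, scale=sem)
print(f"Mean = {mean:.1f}, 95% CI [{low:.1f}, {high:.1f}]")
```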

Conflicts of Interest

Conflicts of interest (also known as ’competing interests’) occur when an investigator is affected by personal, company or institutional bias towards a conclusion that is favourable to a treatment yet unsubstantiated by research findings. Authors are expected to declare any conflicts of interest at the point of publication. One can argue that this may already be too late in the process to trust the findings.

Cross-Over or Within-Participants Designs

The cross-over or within-participants design is used when the same people provide measures of a dependent variable at more than one time and differences between the measures at the different times are recorded. An example would be a measure taken before an intervention (pre-treatment) and again after the intervention (post-treatment). Such a design minimizes the effect of individual differences as each person acts as his/her own control.

There are a number of problems with this design. Any change in the measure of the dependent variable may be due to other factors having changed. For example, an intervention designed to improve quality of life among patients in a long-stay ward of a hospital may be accompanied by other changes, such as a new set of menus introduced by the catering department. In addition, the difference may be due to some aspect of the measuring instrument. If the same measure is being taken on both occasions, the fact that it has been taken twice may be the reason that the result has changed.

Failure to find a difference between the two occasions doesn’t tell you very much; in a worsening situation, the intervention still might have been effective in preventing things from worsening more than they have already. The counterfactual scenario in which nothing changed is an unknown entity. Additionally, if a cross-over design is used to compare two or more treatments, the particular results can be an artefact of the order in which the treatments are given. To counter order effects, one can use a baseline or ’washout period’ before and after treatment periods. Also, one can randomly assign people to different orders or, if one is interested in seeing whether order does have an effect, then a more systematic allocation of participants to different orders can be employed; for example, if there are only two conditions, participants can be alternately placed in the two possible orders, or if there are more than two conditions, a Latin square design can be employed (Clark-Carter and Marks, 2004).
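
As a sketch of this counterbalancing idea, the following Python snippet builds a cyclic Latin square of treatment orders; the treatment labels are illustrative, and fully counterbalanced or randomized Latin squares are also used in practice.

```python
# Cyclic Latin square: each treatment appears exactly once in every serial
# position, so simple order effects are balanced across participants.

def latin_square(treatments):
    n = len(treatments)
    return [[treatments[(row + col) % n] for col in range(n)] for row in range(n)]

for i, order in enumerate(latin_square(["A", "B", "C"]), start=1):
    print(f"Order {i}: {order}")
# Order 1: ['A', 'B', 'C']
# Order 2: ['B', 'C', 'A']
# Order 3: ['C', 'A', 'B']
# Participants are then assigned to these orders in rotation.
```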

Cross-Sectional Designs

Cross-sectional designs obtain responses from respondents on one occasion only. With appropriate randomized sampling methods, the sample can be assumed to be a representative cross-section of the population under study and it is possible to make comparisons between sub-groups (e.g., males versus females, older versus younger people, etc.). However, cause and effect can never be inferred between one variable and another and it is impossible to say whether the observed associations are caused by a third background variable not measured in the study.

Cross-sectional designs are popular because they are relatively inexpensive in time and resources. However, there are problems of interpretation; not only can we say nothing about causality, but generalizability is also an issue whenever there is doubt about the randomness or representativeness of the samples. Many studies are done with students as participants and we can never be sure that the use of a non-random, non-representative sample of students is methodologically rigorous. The ecological validity of such findings is questionable, in the sense that they may well not be replicated in a random sample from the general population. Any study with a non-random student sample should be repeated with a representative sample from a known population. Cross-sectional designs are also unsuited to studies of behaviour change and provide weak evidence in the testing of theories.

Diaries and Blogs

Diaries and diary techniques have been used frequently as a method for collecting information about temporal changes in health status. These diaries can be prepared by the researcher or participant or both, and can be quantitative or qualitative, or both. They can be compared to the time charts that have been used by health professionals for generations to track changes in the health status of individuals. Blogs also provide a rich source of data on different illnesses and conditions and lay ideas on ’healthy living’.

A summary of the current uses of the diary in health research is reproduced in Table 7.1.

Image

Diaries and blogs can have benefits for the participant irrespective of their value to the researcher (Murray, 2009). Research by Pennebaker (1995) and others has demonstrated that expressive writing can be psychologically beneficial. A series of studies has provided evidence that journal writing can lead to a reduction in illness symptoms and in the use of health services (e.g., Smyth et al., 1999). There are a number of explanations for this, including the release of emotional energy, cognitive processing and assistance with narrative restructuring. However, the effects are small and not always easy to replicate.

The internet provides a resource for blogs, diaries and forums in which individuals share their experiences, seek information and provide virtual social support. Anonymity may be used by bloggers to foster self-disclosure in describing embarrassing conditions. Chiu and Hsieh (2013), using focus-group interviews with 34 cancer patients, explored how cancer patients’ writing and reading on the internet play a role in their illness experience. They found that personal blogs enabled cancer patients to reconstruct their life stories, express a sense of closure about their lives, and say how they expected to be remembered after death. Reading fellow patients’ stories significantly influenced their perceptions and expectations of their illness prognosis, sometimes exerting a greater influence than their doctors.

Direct Observation

The simplest kind of study involves directly observing behaviour in a relevant setting, for example patients waiting for treatment in a doctor’s surgery or clinic. Direct observation may be accompanied by recordings in written, oral, auditory or visual form. Several investigators may observe the same events so that reliability checks can be conducted. Direct observation includes casual observation, formal observation and participant observation. However, ethical issues are raised by planned formal observational study of people who have not given informed consent to such observations.

Discourse Analysis

Discourse analysis is a set of procedures for analysing language as used in speech or texts. It focuses on how language is used to construct versions of ’social reality’ and on what is gained by constructing events using particular terms. It has links with ethnomethodology, conversation analysis and the study of meaning (semiology). There are two forms of discourse analysis. The first, discursive psychology, evolved from the work of Potter and Wetherell (1987) and is concerned with the discursive strategies people use to further particular actions in social situations, including accounting for their own behaviour or thoughts. This approach has been used to explore the character of patient talk and of doctor—patient interactions. There is a particular preference for naturally occurring conversations, e.g., mealtime talk (Wiggins et al., 2001). Locke and Horton-Salway (2010) analysed how class leaders talked to antenatal class members about pregnancy, childbirth and infant care, using ’golden age’ or ’bad old days’ stories to contrast the practices of the past with current practices.

The second type of discourse analysis, Foucauldian discourse analysis (FDA), was developed by Ian Parker (1997) and others who criticized the earlier approach for evading issues of power and politics. FDA aims to identify the broader discursive resources that people in a particular culture draw upon in their everyday lives. This approach has been used to explore such issues as smoking (Gillies and Willig, 1997) and masculine identity (Tyler and Williams, 2014).

Double-Blind Control

A double-blind control is used in randomized controlled trials to prevent bias: both the investigator and the participant (subject) are prevented from knowing whether the participant is in the treatment or control condition. In a single-blind design, only the participant is unaware of the condition to which they have been allocated.

Effect Size

An effect size is the strength of the association between study variables and outcome, as measured by an observed difference or correlation. Cohen’s d and Pearson’s r are the most popular indices of effect size in psychological studies. The effect size is a measure of the importance of an effect rather than its statistical significance. Effect sizes are used in meta-analysis as a means of measuring the magnitude of the results obtained over different studies. Effect size is related to the power of a study to detect a difference that really exists. An underpowered study cannot detect a real difference because its samples are too small relative to the magnitude of the difference that exists, a common problem in psychology. It is estimated that 60—70% of published studies in psychology journals lack sufficient power to obtain statistical significance.
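
A minimal Python sketch of Cohen’s d, using the pooled standard deviation of two independent groups, is shown below; the scores are invented for illustration.

```python
# Cohen's d for two independent groups (pooled standard deviation).
import numpy as np

def cohens_d(group1, group2):
    n1, n2 = len(group1), len(group2)
    var1, var2 = np.var(group1, ddof=1), np.var(group2, ddof=1)
    pooled_sd = np.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (np.mean(group1) - np.mean(group2)) / pooled_sd

treatment = [24, 27, 31, 29, 26, 30]  # illustrative outcome scores
control = [22, 25, 23, 26, 24, 21]

print(f"Cohen's d = {cohens_d(treatment, control):.2f}")
# Cohen's conventional benchmarks: 0.2 small, 0.5 medium, 0.8 large
```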

Ethical Approval

This is a necessary requirement before any research can be started. Ethics boards and research review panels in all research institutions and universities have been established for this purpose. Any research project must be presented to a panel of experts on ethical issues, and must obtain the panel’s explicit approval of: the full details of the aims, the design, the participants and how they will be chosen; the information provided to the participants; the method of consent used; the methods of data analysis; the nature and timing of debriefing of participants; and the methods of dissemination. Funding and publication are normally contingent on ethical approval being obtained.

Ethnographic Methods

Ethnographic methods seek to build a systematic understanding of a culture from the viewpoint of the insider. They attempt to describe the shared beliefs, practices, artefacts, knowledge and behaviours of an intact cultural group, and to represent the totality of a phenomenon in its complete, naturalistic setting. Detailed observation is an important part of ethnographic fieldwork. Ethnography can provide greater ecological validity than many other methods. The processes of transformation can be observed and documented, including how the culture becomes embodied in participants, alongside the recording of their narratives. It is more labour-intensive, but combining ethnography with narrative interviews can produce richer information than qualitative interviews alone (Paulson, 2011).

The observation can be either overt or covert. In the overt case, the researcher does not attempt to disguise his/her identity, but rather is unobtrusive so that the phenomenon under investigation is not disturbed. In this case, the researcher can take detailed notes, in either a prearranged or discursive format. Sometimes the researcher may decide that his/her presence would disturb the field, in which case two forms of covert observation may be used. In one form, the person observed is entirely unaware of the researcher’s presence. In the other, the person observed may be aware of the researcher’s presence but is unaware that he/she is a researcher. In both of these forms the researcher needs to consider whether such covert surveillance is ethically justified. A form of participant observation that is not covert is when the researcher accompanies the person but tries not to interfere with the performance of everyday tasks. Priest (2007) combined grounded theory and ethnography to explore members’ experience of a mental health day service walking group, including the psychological benefits of the physical activity, the outdoor environment and the social setting. Stolte and Hodgetts (2015) used ethnographic methods to study the tactics employed by a homeless man in Central Auckland, New Zealand, to maintain his health and gain respite while living on the streets, an unhealthy place.

Focus Groups

Focus groups comprise one or more group discussions in which participants ’focus’ collectively upon a topic or issue that is usually presented to them as a series of questions, although sometimes as a film, a collection of advertisements, cards to sort, a game to play, or a vignette to discuss. The distinctive feature of the focus group method is its generation of interactive data (Wilkinson, 1998). Focus groups were initially used in marketing research. As the name implies, they had a focus: to clarify the participants’ views on a particular product. Thus, from the outset the researcher had set the parameters of the discussion and, as it proceeded, he/she deliberately guided the discussion so that its focus remained limited. More recent use of the focus group has been much more expansive. In many cases, the term ’discussion group’ is preferred to give an indication of this greater latitude. The role of the researcher in the focus group is to act as the moderator for the discussion. The researcher can follow similar guidelines to those for an interview by using a guide, except that the discussion should be allowed to flow freely and not be constrained too much by the investigator’s agenda. The researcher needs to ensure that all the group participants have an opportunity to express their viewpoints. The method is often combined with interviews and questionnaires.

At the beginning of the discussion the researcher should follow the usual guidelines. It is important that the group is briefed on the basic principles of confidentiality and respect for different opinions. It is useful for them to know each other’s first names and to have name badges. This facilitates greater interaction. It is also useful to have some refreshments available.

Although it is usual for the moderator to introduce some themes for discussion, this can be supplemented with a short video extract or pictures relevant to the topic being investigated. As the discussion proceeds, the researcher can often take a background role, while ensuring that the discussion does not deviate too far from the focus of the research and that all the participants have an opportunity to express their views. An assistant can help in completing consent forms, providing name-tags, organizing refreshments, keeping notes on who is talking (this is useful for transcription), and monitoring the recording equipment. The focus group recording should be transcribed as soon as possible afterwards since it is often difficult to distinguish speakers. Here are a few examples: Jones et al. (2014a) carried out a focus group and telephone interviews to examine the management of diabetes among patients in rural areas; Liimakka (2014) drew upon focus group discussions to explore how young Finnish university students viewed the cultural ideals of health and appearance; Griffiths et al. (2014) used pre- and post-intervention focus groups to test a website, Realshare, for young oncology patients in the south-west of England; and Bogart (2015) examined the social experiences of 10 adolescents aged 12—17 years with Moebius Syndrome, a rare condition involving congenital facial paralysis.

Grounded Theory Analysis

Grounded theory analysis is a term used to describe a set of guidelines for conducting qualitative data analysis. It was originally developed by Glaser and Strauss (1967) and has subsequently gone through various revisions. In its original form, qualitative researchers were asked to dispense with theoretical assumptions when they began their research. Rather, they were encouraged to adopt a stance of disciplined naïvety. As the research progresses, certain theoretical concepts are discovered and then tested in an iterative fashion. In the case of the qualitative interview, the researcher is encouraged to begin the analysis at a very early stage, even as the interview is progressing. Through a process of abduction, the researcher begins to develop certain theoretical hypotheses. These hypotheses are then integrated into a tentative theoretical model that is tested as more data are collected.

This process follows a series of steps beginning with generating data. At this stage, the researcher may have some general ideas about the topic but this should not restrict the talk of the participant. From the very initial stages the researcher is sifting through the ideas presented and seeking more information about what are considered to be emerging themes. From a more positivist perspective, it is argued that the themes emerge from the data and that the researcher has simply to look for them. This approach is often associated with Glaser (1992). From a more social constructionist perspective, certain theoretical concepts of the researcher will guide both the data collection and analysis. This approach is more associated with the symbolic interactionist tradition (Strauss, 1987; Charmaz, 2003).

Having collected some data, the researcher conducts a detailed coding of it, followed by the generation of bigger categories. Throughout the coding the researcher follows the process of constant comparative analysis. This involves making comparisons of codes within and between interview transcripts. This is followed by the stage of memo-writing, which requires the researcher to begin to expand upon the meaning of the broader conceptual categories. This in turn can lead to further data generation through theoretical sampling. This is the process whereby the researcher deliberately selects certain participants or certain research themes to explore further because of the data already analysed. At this stage, the researcher is both testing and strengthening the emergent theory. At a certain stage in this iterative process the researcher feels that he/she has reached the stage of data saturation — no new concepts are emerging and it is considered fruitless to continue with data collection.

A few examples are as follows: DiMillo et al. (2015) used grounded theory methodology to examine the stigmatization experiences of six BRCA1/2 gene mutation carriers following genetic testing; Searle et al. (2014) studied participants’ experiences of facilitated physical activity for the management of depression in primary care; Silva et al. (2013) used the method to study the balancing of motherhood with drug addiction in addicted mothers.

Hierarchy of Evidence

In traditional top-down approaches to research, a hierarchy of evidence or research methods is often utilized (Figure 7.3). In this pyramidal hierarchy of methods, meta-analyses and systematic reviews occupy the pinnacle and qualitative methods are at the base. Researchers who prefer the alternative, bottom-up approach are much more likely to employ qualitative and mixed methods. Critical health psychologists dispute the validity of the evidence hierarchy, which tends to be formulaic, restrictive and lacking in innovation.

Historical Analysis

Health and illness are socially and historically located phenomena. As such, psychologists have much to gain by detailed historical research (historical analysis) on the development of health beliefs and practices. They can work closely with medical or health historians to explore the evolution of scientific and popular beliefs about health and illness or they can work independently (see Chapter 6). An example is the work of Herzlich and Pierret (1987). Their work involved the detailed analysis of a variety of textual sources such as scientific medical writings, but also popular autobiographical and fictional accounts of the experience of illness. They noted the particular value of literary works because of their important contribution to shaping public discourse. Such textual analysis needs to be guided by an understanding of the political and philosophical ideas of the period.

Figure 7.3 Hierarchy of evidence

Image

Source: Public domain

Health psychologists need also to be reflexive about the history of their own discipline. It arose at a particular historical period sometimes described as late modernity. Initially it was seen as providing a complement to the excessive physical focus of biomedicine. Now some see it as part of the broader lifestyle movement.

There are different approaches to the writing of history. There are those who can be broadly characterized as descriptive and who often chronicle the growth of the discipline in laudatory terms (e.g., Stone et al., 1987). Conversely, there are those who adopt a more critical approach and attempt to dissect the underlying reasons for the development of the discipline. Within health psychology, this latter approach is still in its early stages (e.g., Stam, 2014).

Interpretative Phenomenological Analysis

Phenomenological research is concerned with exploring the lived experience of health, illness and disability. Its aim is to understand these phenomena from the perspective of the particular participant. This in turn has to be interpreted by the researcher. A technique that addresses this challenge is interpretative phenomenological analysis (IPA) (Smith, 2004). IPA focuses on the cognitive processing of the participant. Smith (2004) argues that it accords with the original direction of cognitive psychology, which was concerned with exploring meaning-making rather than information-processing. IPA provides a guide to how the researcher makes sense of the participant’s reported experiences. It begins by accessing the participant’s perceptions through the conduct of an interview or series of interviews with a homogeneous sample of individuals. The interview is semi-structured and focuses on the particular issue of concern.

Data analysis in IPA goes through a number of stages. Initially, the researcher reads and re-reads the text and develops a higher order thematic analysis. Having identified the key themes or categories, the researcher then proceeds to look for connections between them by identifying clusters. At this stage, the researcher is drawing upon his/her broader understanding to make sense of what has been said. Once the researcher has finished the analysis of one case, he/she can proceed to conduct an analysis of the next case in a similar manner. Alternatively, the researcher can begin to apply the analytic scheme developed in the previous case. The challenge is to identify repeating patterns but also to be alert to new patterns. Further details of this form of analysis are available in Smith et al. (1999) and Smith and Osborn (2003).

A few examples are as follows: Conroy and De Visser (2015) studied the importance of authenticity for student non-drinkers; Mackay and Parry (2015) studied two perspectives on autistic behaviours; Burton et al. (2014) conducted an interpretative phenomenological analysis of sense-making within couples living together with age-related macular degeneration; Ware et al. (2015) used IPA to study the experience of hepatitis C treatment for people with a history of mental health problems; Levi et al. (2014) used a phenomenological approach to investigate perceptions of hope among traumatized war veterans.

Interventions

Interventions are deliberate attempts to facilitate improvements to health. The idea for the intervention can come from a theory or model, from discussions with those who are knowledgeable about the condition or situation that needs to be changed, or from ’out of the blue’.

A key aspect of designing and/or implementing any intervention is evaluation: attempting to determine whether or not the intervention is effective or efficacious. However, reports of intervention studies are typically brief, opaque descriptions of what can often be highly complex and unique interventions.

One problem is that there is no meaningful method of classifying behaviour change interventions under any single theory, taxonomy or method of description. This means that the researcher does not know how to label what they have done in a way that communicates it precisely to others (Marks, 2009). A key criterion for the reporting of an intervention must therefore be transparency: can another person or group repeat the study in his/her/their own setting with his/her/their own participants? The need to be concise in publishing studies means that the level of detail required for successful replication may often be missing. It is therefore almost impossible for new investigators to repeat a published intervention with any exactitude in their own settings.

Interviews (Semi-Structured)

Semi-structured interviews are designed to explore the participant’s view of things with minimal assumptions on the part of the interviewer. A semi-structured interview is more open-ended than a structured interview and allows the interviewee to address issues that he/she feels are relevant to the topics raised by the investigator (see Qualitative research methods below). Open-ended questions are useful in this kind of interview. They have several advantages over closed-ended questions. The answers are less biased by the researcher’s preconceptions than answers to closed-ended questions can be. The respondents are able to express their opinions, thoughts and feelings freely, using their own words in ways that are less constrained by the particular wordings of the question. The respondents may have responses that the structured interview designer has overlooked, and they may have in-depth comments to make about the study and the topics it covers that would not be picked up using the standard questions in a structured interview.

In preparing for the interview the researcher should develop an interview guide. This can include a combination of primary and supplementary questions. Alternatively, the researcher may prefer to have a list of themes to be explored. However, it is important that the researcher does not formally follow these in the same order but rather introduces them at the appropriate time in the interview. Prior to the interview, the researcher should review these themes and order them from the least invasive to the more personal.

Interviews (Structured)

A structured interview schedule is a prepared, standard set of questions that are asked in person, or perhaps by telephone, of a person or group concerning a particular research issue.

Literature Search

An essential skill in any research project is to carry out a literature search. Usually this will be best achieved using keywords. The key data in all scholarly publications consist of the title, the abstract, which is a summary of 100 to 250 words, and the keywords that are listed with the data about the article. By inserting keywords into any search engine, it is possible to obtain a comprehensive list of scholarly research reports, dissertations and conference papers, books and monographs. One popular search engine is Google Scholar. Another major database for researchers is the ’ISI Web of Knowledge’. This contains a large selection of peer-reviewed publications from journals with a proven track record of high-quality publications. Examples of the results from searches of the health psychology literature are shown in Figures 7.1 and 7.4.

Longitudinal Designs

Longitudinal designs involve measuring responses of a single sample on more than one occasion. The measurements may be prospective or retrospective. Prospective longitudinal designs allow greater control over the sample, the variables measured and the times when the measurements take place. Such designs are superior to cross-sectional designs because one is better able to investigate hypotheses of causation when the associations between variables are measured over time. Longitudinal designs are among the most powerful designs available for the evaluation of treatments and of theories about human experience and behaviour, but they are also the most expensive in terms of labour, time and money.

Meta-Analysis

Meta-analysis is the use of statistical techniques to combine the results of primary studies addressing the same question into a single pooled measure of effect size, with a confidence interval. The analysis is often based on the calculation of a weighted mean effect size in which each primary study is weighted according to the number of participants. A meta-analysis involves a series of steps: (1) develop a research question; (2) identify all relevant studies; (3) select studies on the basis of the issue being addressed and methodological criteria; (4) decide which dependent variables or summary measures are allowed; (5) calculate a summary effect; and (6) reach a conclusion in answer to the original research question.
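
As a minimal illustration of step (5), the Python sketch below pools invented effect sizes using the sample-size weighting described above; inverse-variance weighting is the more common alternative in full meta-analytic software.

```python
# Weighted mean effect size: each primary study weighted by its sample size.
import numpy as np

effect_sizes = np.array([0.42, 0.31, 0.55, 0.10])  # illustrative Cohen's d values
sample_sizes = np.array([120, 80, 45, 200])         # participants per study

pooled = np.average(effect_sizes, weights=sample_sizes)
print(f"Pooled effect size across {len(effect_sizes)} studies: {pooled:.2f}")
```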

Mixed Methods Research

Mixed methods research is methodology that involves collecting, analysing and integrating both quantitative and qualitative data within a single programme of research. Examples can be found in Creswell and Clark (2007), Johnson et al. (2007), Lucero et al. (2016) and Kuenemund et al. (2016).

Narrative Approaches

The narrative approach is concerned with seeking insight and meaning about health and illness through the acquisition of data in the form of stories concerning personal experiences. It assumes that human beings are natural storytellers and that the principal task of the psychologist is to explore the different stories being told (Murray, 2015). The most popular source of material for the narrative researcher is the interview. The focus of the narrative interview is the elicitation of storied accounts from the interviewee. This can take various forms. The life-story interview is the most extended form of interview. As its name implies, the life-story interview seeks to obtain an extended account of the person’s life. The primary aim is to put the participant at ease and encourage him/her to tell their story at length.

A particular version of the narrative interview is the episodic interview in which the researcher encourages the participant to speak on a variety of particular experiences. This approach assumes that experiences are often stored in memory in narrative episodes and that the challenge is to reveal these without integrating them into a larger narrative. Throughout the interview the role of the interviewer is to encourage sustained narrative accounting. This can be achieved through a variety of supportive remarks. The researcher can deliberately encourage the participant to expand upon remarks about particular issues.

Narrative analysis (NA) can take various forms. It begins with a repeated reading of the text to identify the story or stories within it. The primary focus is on maintaining the narrative integrity of the account. The researcher may develop a summary of the narrative account that will help identify the structure of the narrative, its tone and the central characters. It may be useful to engage in a certain amount of thematic analysis to identify some underlying themes. But this does not equate with narrative analysis. NA involves trying to see the interconnections between events rather than separating them. Having analysed one case, the researcher can then proceed to the next, identifying similarities and differences in the structure and content of the narratives.

Observational Studies

The term ’observational study’ is used to describe research carried out to evaluate the effect of an intervention or treatment without the advantages of a control group. A single group of patients is observed at various points before, during and after the treatment in an attempt to ascertain the changes that occur as a result of the treatment. There are strict limitations on the conclusions that can be reached as a consequence of the lack of a control group (e.g., see Randomized controlled trials below). However, there are occasions when a randomized controlled trial is impossible to carry out because of ethical or operational difficulties.

Participatory Action Research

Participatory action research (PAR) is a version of action research (see above) that deliberately seeks to provoke some form of social or community change.

Power and Power Analysis

Power refers to the ability of a study to find a statistically significant effect when a genuine effect exists. The power (1 − β) of a statistical test is the complement of β, the Type II or beta error probability of falsely retaining an incorrect H0. Statistical power depends on three parameters: (1) the significance level (i.e., the Type I error probability or α level); (2) the size(s) of the sample(s); and (3) an effect size parameter defining H1 and thus indicating the degree of deviation from H0 in the underlying population.

Cohen (1972) found that psychology studies had about a 50% chance of finding a genuine effect owing to their lack of statistical power. The situation has changed in the last 40 years but it remains problematic. This lack of power is caused by study samples being too small to permit definite conclusions. Given the easy availability of free software online, there can be little excuse for not doing a power analysis before embarking on a research project. Funding agencies, ethics boards, research review panels and journal editors normally require a power analysis as a condition of funding, approval and publication.

There are several different types of power analysis, some being more robust than others. In a priori power analyses (Cohen, 1988), sample size N is computed as a function of the required power level (1 − β), the pre-specified significance level α, and the population effect size to be detected with probability 1 − β. Cohen’s definitions of small, medium and large effects can be helpful in effect size specifications.

A variety of software is available to expedite rapid power analyses, including G*Power 3 (Faul et al., 2007) and free online tools such as OpenEpi (Dean et al., 2014).
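
As a minimal sketch using the free statsmodels library (one alternative to the tools just mentioned), the following computes the sample size per group needed for a two-tailed independent-samples t-test under conventional assumptions.

```python
# A priori power analysis: n per group to detect a medium effect (d = 0.5)
# with 80% power at alpha = .05, two-tailed.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                          power=0.8, alternative='two-sided')
print(f"Required sample size per group: {n_per_group:.0f}")  # about 64
```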

Qualitative Research Methods

Qualitative research methods aim to understand the meanings, purposes and intentions of behaviour, not its amount or quantity. A huge variety of methods are available and these are described in this A—Z under the following headings: diaries and blogs; discourse analysis; focus groups; grounded theory; historical analysis; interpretative phenomenological analysis; interviews, especially semi-structured; and narrative approaches. Figure 7.4 shows the rapid growth of qualitative research in health psychology over the last few decades. This trend is expected to continue. A wide variety of software is available to support qualitative and mixed methods research analyses, such as NVivo, MAXQDA and QDA Miner Lite.

Questionnaires

Questionnaires in health psychology consist of a standard set of questions with accompanying instructions concerning attitudes, beliefs, perceptions or values concerned with health, illness or health care. Ideally, a questionnaire will have been demonstrated to be a reliable and valid measure of the construct(s) it purports to measure.
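
Reliability is often summarized with an internal-consistency statistic such as Cronbach’s alpha. A minimal Python sketch of the standard formula, using invented item responses, is given below.

```python
# Cronbach's alpha: internal-consistency reliability of a multi-item scale.
import numpy as np

def cronbach_alpha(scores):
    """scores: a respondents x items array of item responses."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                                # number of items
    sum_item_var = scores.var(axis=0, ddof=1).sum()    # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)         # variance of total scores
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

responses = [[4, 5, 4, 4],   # illustrative data: 5 respondents x 4 items
             [2, 2, 3, 2],
             [5, 4, 5, 5],
             [3, 3, 3, 4],
             [4, 4, 5, 4]]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```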

Questionnaires vary in objectives, content (especially generic versus specific content), question format, the number of items, and sensitivity or responsiveness to change. Questionnaires may be employed in cross-sectional and longitudinal studies. When looking for changes over time, the responsiveness of a questionnaire to clinical and subjective changes is a crucial feature. A questionnaire’s content, sensitivity and extent, together with its reliability and validity, influence its selection. Guides are available to advise users on making a choice that contains the appropriate generic measure or domain of interest (e.g., Bowling, 2001, 2004). These guides are useful as they include details on content, scoring, validity and reliability of dozens of questionnaires for measuring all of the major aspects of psychological well-being and quality of life, including disease-specific and domain-specific questionnaires and more generic measures.

The investigator must ask: What is it that I want to know? The answer will dictate the selection of the most relevant and useful questionnaire. The most important aspect of questionnaire selection is therefore to match the objective of the study with the objective of the questionnaire. For example, are you interested in a disease-specific or broad-ranging research question? When this question is settled, you need to decide whether there is anything else that your research objective will require you to know. Usually the researcher needs to develop a specific block of questions that will seek vital information concerning the respondents’ socio-demographic characteristics. This block of questions can be placed at the beginning or the end of the main questionnaire.

Questionnaire content may vary from the highly generic (e.g., How has your health been over the last few weeks? Excellent, Good, Fair, Poor, Very Bad) to the highly specific (e.g., Have you had any arguments with people at work in the last two weeks?). Questionnaires vary greatly in the number of items that are used to assess the variable(s) of interest. Single-item measures use a single question, rating or item to measure the concept or variable of interest. For example, the now popular single verbal item to evaluate health status: During the past four weeks how would you rate your health in general? Excellent, Very good, Good, Fair, Poor. Single items have the obvious advantages of being simple, direct and brief.

Questionnaires remain one of the most useful and widely applicable research methods in health psychology. A few questionnaire scales have played a dominant role in health psychology research over the last few decades. Figure 7.4 shows the number of items in the ISI Web of Knowledge database for three of the most popular scales. Over the 20-year period 1990—2009, usage of scales designed to measure health status has been dominated by three front-runners: the McGill Pain Questionnaire (Melzack, 1975), the Hospital Anxiety and Depression Scale (HADS; Zigmond and Snaith, 1983), and the SF-36 Health Survey (Brazier et al., 1992). The SF-36 is by far the most utilized scale in clinical research, accounting for around 50% of all clinical studies (Figure 7.4).

Figure 7.4 Trends in numbers of health psychology studies using different research measures and methods, 1990—2009

Image

Source: Marks (2013)

Randomized Controlled Trials

Randomized controlled trials (RCTs) involve the systematic comparison of interventions using a fully controlled application of one or more ’treatments’ with a random allocation of participants to the different treatment groups. The statistical tests that are available have as one of their assumptions that participants have been randomly assigned to conditions. In real-world settings of clinical and health research, the so-called ’gold standard’ of the RCT cannot always be achieved in practice, and in fact may not be desirable for ethical reasons.
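
A minimal Python sketch of random allocation with equal arm sizes is given below; the participant IDs are illustrative, and in practice block or stratified randomization schemes, with a documented seed, are often used.

```python
# Simple random allocation of 20 participants to two equal-sized arms.
import random

random.seed(2024)  # fixed seed so the allocation list can be audited

participants = [f"P{i:02d}" for i in range(1, 21)]        # illustrative IDs
arms = ["treatment", "control"] * (len(participants) // 2)
random.shuffle(arms)                                       # random assignment

for pid, arm in zip(participants, arms):
    print(pid, arm)
```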

We are frequently forced to study existing groups that are being treated differently rather than have the luxury of being able to allocate people to conditions. Thus, we may in effect be comparing the health policies and services of a number of different hospitals and clinics. Such ’quasi-experimental designs’ are used to compare treatments in as controlled a manner as possible, when, for practical reasons, it is impossible to manipulate the independent variable, the policies, or allocate the participants.

The advantage of an RCT is that differences in the outcome can be attributed with more confidence to the manipulations of the researchers, because individual differences are likely to be spread in a random way between the different treatments. As soon as that basis for allocation of participants is lost, then questions arise over the ability to identify causes of changes or differences between the groups; in other words, the internal validity of the design is in question.

Randomized controlled trials are complex operations to manage and describe, which has made RCTs difficult to replicate. To help solve this problem, the CONSORT guidelines for RCTs published by Moher et al. (2001) and the TREND statement for non-randomized studies (Des Jarlais et al., 2004) were intended to bridge the gap between intervention descriptions and intended replications. These guidelines have driven efforts to enhance the practice of reporting behaviour change intervention studies. Davidson et al. (2003) expanded the CONSORT guidelines in proposing that authors should report: (1) the content or elements of the intervention; (2) the characteristics of those delivering the intervention; (3) the characteristics of the recipients; (4) the setting; (5) the mode of delivery; (6) the intensity; (7) the duration; and (8) adherence to delivery protocols/manuals.

Another issue with RCTs has been bias created by industry sponsorship. Critics claim that research carried out or sponsored by the pharmaceutical industry should be treated with a high degree of suspicion as the investigators may have a hidden bias that can affect their ability to remain independent. Lexchin et al. (2003) carried out a systematic review of the effect of pharmaceutical industry sponsorship on research outcome and quality. They found that pharmaceutically sponsored studies were less likely to be published in peer-reviewed journals. Also, studies sponsored by pharmaceutical companies were more likely to have outcomes favouring the sponsor than were studies with other sponsors (odds ratio 4.05; 95% confidence interval 2.98—5.51). They found a systematic bias favouring products made by the company funding the research.

There have been significant abuses of RCTs in clinical and drug trials. Many trials have not been registered so that there is no record of them having been carried out. Trials showing non-significant effects have been unreported, which distorts the evidence base by suggesting that a drug is better than it actually is. Placebo control conditions have been manipulated to enhance drug effects. The double-blind requirements for RCTs have been broken. Investigators who have received funding from drug companies have written biased or misleading reports. Ghost writers have been employed to write glowing reports. These abuses have led to the wastage of public funds on ineffective drugs and treatments, missed opportunities for improving treatments, and trials being repeated unnecessarily.

The AllTrials movement is campaigning to remove these abuses and to obtain publication of all clinical trials (see www.alltrials.net/).

Repeatability

Reproducibility is one criterion for progress in science. If a study is repeated under similar conditions, then it should be possible to obtain the same findings. However, journal reviewers and editors may not accept a replication or failed replication for publication on the grounds that it is not as ’newsworthy’ as an original study. In one landmark study, fewer than half of 100 studies published in 2008 in three top psychology journals could be successfully replicated (Open Science Collaboration, 2015). Lack of replication indicates that: (1) Study A’s result may be false, or (2) Study B’s results may be false, or (3) both may be false, or (4) there may be some subtle differences in the way the two studies were conducted — in other words, there were differences in the context. The OSC analysis showed that a low p value was predictive of which studies could be replicated. Twenty of the 32 original studies with a p < 0.001 could be replicated, while only 2 of the 11 papers with a value greater than 0.04 were successfully replicated. The reproducibility of health psychology studies is yet to be fully evaluated.

Replication

Replication is one of the most important research methods in existence, yet it is hardly used, and rarely mentioned, in textbooks about research methods. Replication refers to an investigator’s attempt to repeat a study purely to determine whether the original findings can be reproduced. Essentially, the researcher wants to know whether the original findings are reliable or whether they were produced by some combination of chance and spurious factors. If a study’s findings can be replicated, they can be accepted as reliable and as valuable to knowledge and understanding. If they cannot be replicated, they cannot be accepted as a genuine contribution to knowledge.

Lack of replication has been a bone of contention in many areas of psychology, including health psychology. Traditionally, a low priority has been given to replication of other researchers’ results. Perhaps researchers believe that they will not be perceived as sufficiently creative if they replicate somebody else’s research. In a similar vein, journal editors do not give replications of research — especially failed replications — the same priority as novel findings. This bias towards new positive results, and away from failed replications, produces a major distortion in the academic literature. Lack of replication before publication is the main reason for the so-called ’Repeatability Crisis’ in psychology and other disciplines.

Single Case Experimental Designs

Single case experimental designs are investigations in which a series of experimental manipulations is applied to a single research participant, with the outcome measured repeatedly under each condition.

Surveys

Surveys are systematic methods for determining how a sample of participants responds to a set of standard questions, assessing their feelings, attitudes, beliefs or knowledge at one or more points in time. For example, we may want to know how drug users’ perceptions of themselves and their families differ from those of non-users; to understand the experiences of patients receiving specific kinds of treatment; to discover how health and social services are perceived by informal carers of people with dementia, Parkinson’s disease, multiple sclerosis (MS) or other chronic conditions; or to learn how people recovering from a disease such as coronary heart disease feel about their rehabilitation. The survey method is the method of choice in many such studies.

The survey method, whether using interviews, questionnaires, or some combination of the two, is versatile and can be applied equally well to research with individuals, groups, organizations, communities or populations to inform our understanding of a host of very different research issues and questions. Normally, a survey is conducted on a sample of the study population of interest (e.g., people aged 70+, women aged 20—44, teenagers who smoke, carers of people with dementia, etc.). Issues of key importance in conducting a survey are the objective(s), the mode of administration, the method of sampling, the sample size and the preparation of the data for analysis.

As in any research, it is essential to have a clear idea about the objective: why we are doing the study (the theory or policy behind the research), what we are looking for (the research question), where we intend to look (the setting or domain), who will be in the sample (the study sample) and how we will use the tools at our disposal. The investigator must take care that the procedures do not generate self-fulfilling prophecies. Lack of clarity about the purposes and objectives is one of the main stumbling blocks for the novice investigator to overcome. This is particularly the case when carrying out a survey, especially in a team of investigators who may have varying agendas with regard to the why, what, who, where and how questions that must be answered before the survey can begin.

Modes of administration include face-to-face interview, telephone interview, social media, group self-completion and postal self-completion.

Next, you need to decide who will be in the sample for your survey and where you will carry it out. Which population is your research question about? The sample should represent the study population as closely as possible. In some cases, the sample can consist of the entire study population (e.g., every pupil in a school, every student at a university, every patient in a hospital). More usually, however, the sample is a random selection of a proportion of the members of the population. This method is called simple random sampling (SRS). A variation on SRS is systematic sampling, in which the first person is chosen at random from among the first n names in the sampling frame and every nth person on the list is selected thereafter (e.g., every tenth person in a community, or every fourth patient admitted to a hospital), where n is the sampling interval, the inverse of the sampling fraction.
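As a minimal sketch of these two procedures, the following Python fragment draws a simple random sample and a systematic sample from a hypothetical sampling frame of 1,000 identifiers; the frame, sample size and interval are invented for illustration.

```python
import random

population = list(range(1, 1001))  # hypothetical frame of 1,000 identifiers

# Simple random sampling (SRS): every member has an equal chance of selection
srs_sample = random.sample(population, k=100)

# Systematic sampling: random start within the first interval,
# then every nth person, where n is the sampling interval
n = len(population) // 100      # interval of 10 for a 10% sample
start = random.randrange(n)     # random start among the first n names
systematic_sample = population[start::n]
```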

In stratified sampling, the population is divided into groups or ’strata’ and each stratum is sampled at random, but with different sampling fractions, so that the overall sample sizes of the groups can be made equal even though the groups are unequal in the population (e.g., the 40—59, 60—79 and 80—99 age groups in a community sample, or men and women in a clinical sample). These groups will therefore be equally represented in the data. Other methods include six kinds of non-probability sampling: convenience samples, most similar/dissimilar samples, typical case samples, critical case samples, snowball samples and quota samples.
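A minimal sketch of stratified sampling, assuming an invented frame in which three age strata of unequal size are each sampled to yield equal group sizes:

```python
import random

# Hypothetical sampling frame: (age stratum, person id), strata of unequal size
frame = ([("40-59", i) for i in range(600)]
         + [("60-79", i) for i in range(300)]
         + [("80-99", i) for i in range(100)])

# Group the frame by stratum
strata = {}
for stratum, person in frame:
    strata.setdefault(stratum, []).append(person)

# Sample each stratum separately so that all strata are equally
# represented in the data, despite their unequal population sizes
sample = {s: random.sample(members, k=50) for s, members in strata.items()}
```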

All such sampling methods are biased to some degree; there is no perfect method of sampling because any method will under-represent some category of people. In any survey, it is necessary to maximize the proportion of selected people who are actually recruited: if a large proportion of people refuse to participate, the sample will not represent the population and will be biased in unknown ways. As a general principle, surveys that recruit at least 70% of those invited to participate are considered representative. Sample size is a key issue. Sampling variability diminishes as the sample size increases, so the larger the sample, the more precise the estimates of the population values, but the more the survey will cost.
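The relationship between sample size and precision can be seen directly in the standard error of the mean, which equals the standard deviation divided by the square root of n. A short sketch, with an assumed (invented) population standard deviation:

```python
import math

sd = 15.0  # assumed population standard deviation (invented for illustration)

for n in (25, 100, 400, 1600):
    se = sd / math.sqrt(n)  # standard error of the mean = sd / sqrt(n)
    print(f"n = {n:>4}: standard error of the mean = {se:.2f}")

# Quadrupling the sample size only halves the standard error:
# precision improves with the square root of n, while survey
# costs typically grow roughly in proportion to n.
```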

Systematic Reviews

A systematic review (SR) is a method of integrating the best evidence about an effect or intervention from all relevant and usable primary sources. What counts as relevant and usable is a matter for debate and judgement. Rules and criteria for selecting studies and for extracting data are agreed in advance by those carrying out the review. Publishing these rules and criteria along with the review makes such reviews replicable and transparent. Proponents of the SR therefore see it as a way of integrating research that limits bias. Traditionally, the method has been applied to quantitative data. More recently, researchers have begun to investigate ways of synthesizing qualitative studies as well.

Knowing how to carry out an SR, and how to interpret SR reports critically, are essential skills in all fields of health research. They enable researchers and clinicians to integrate research findings and make improvements in health care.

Systematic reviews act like a sieve, selecting some evidence and rejecting other evidence. To retain the metaphor, the reviewers operate the sieve: what they see and report depends on how the selection process is run. Whenever there is ambiguity, the process tends to operate in confirmatory mode, seeking positive support for a position, model or theory rather than disconfirmation. It is essential to be critical and cautious in interpreting and analysing SRs of biomedical and related topics. If we want to change practice as a direct consequence of such reviews, we had better make certain that the findings are solid and not a mirage; this is why the study of the method itself is so important. Systematic reviews of the same topic can produce significantly different results, indicating that bias is difficult to control. Like all forms of knowledge, the results of an SR are the outcome of a process of negotiation about rules and criteria, and cannot be accepted without criticism and debate. There are many examples of SRs causing controversy, for example Law et al. (1991), Swales (2000), Marks (2002c), Dixon-Woods et al. (2006), Millett (2011), Roseman et al. (2011) and Coyne and Kok (2014).

Taxonomy for Intervention Studies

This section describes an idea for a taxonomy designed to help solve several issues mentioned elsewhere in this A—Z, namely the description of interventions, replication and transparency. As noted, lack of replication has been a major issue in psychology. One reason for the failure to replicate is the sheer complexity of the interventions that are available. A vast array of interventions and techniques can be delivered in multitudinous combinations, yielding literally millions of different interventions designed to change behaviour (Marks, 2009).

If interventions are incompletely described, it is not possible to: (1) determine all the necessary attributes of the intervention; (2) classify the intervention into a category or type; (3) compare and contrast interventions across studies; (4) identify which specific intervention component was responsible for efficacy; (5) replicate the intervention in other settings; or (6) advance the science of illness prevention by enabling theory testing in the practice of health care.

One way to put order into the chaos is to use a taxonomic system similar to those used to classify organisms or substances. Taxonomies of living things have been constructed since the time of Aristotle, and the periodic table in chemistry is the best-known classification system for substances. Some researchers have approached this issue by generating ’shopping lists’ of interventions used in different studies. For example, Abraham and Michie (2008) described 26 behaviour change interventions, which they claimed provided a ’taxonomy’ of generally applicable behaviour change techniques. Michie et al. (2008: Appendix A) also produced a list of 137 heterogeneous techniques. However, such lists are not useful as taxonomies because they show no systematic structure or organization of classification. A list of techniques is no more useful than a list of chemicals: only when there is an organization like the periodic table do we gain an understanding of the underlying structure and of the relationships between the elements in the table.

Psychology lacks a single system for classifying interventions that encompasses all known techniques and sub-techniques. In an effort to fill this gap, one of the authors has described a taxonomy with six nested levels, as illustrated in Figure 7.5 (a minimal computational sketch of such a tree follows the list below):

1. Paradigms, e.g., individual, community, public health, critical.

2. Domains, e.g., stress, diabetes, hypertension, smoking, weight, exercise, etc.

3. Programmes, e.g., smoking cessation, obesity management, stress management and assertiveness training.

4. Intervention types, e.g., relaxation induction, imagery, planning, cognitive restructuring, buddy system monitoring.

5. Techniques, e.g., within imagery there are a large number of techniques, such as mental rehearsal, guided imagery, flooding in imagination and systematic desensitization.

6. Sub-techniques, e.g., within guided imagery there exist a variety of sensory modalities (sight, sound, smell, taste, touch, warmth/coldness), scenarios (e.g., beach, forest, garden, air balloon), delivery methods (e.g., spoken instruction, self-administered by reading, listening to audio tapes), settings (e.g., individual, group) and participant positions (e.g., supine, sitting on floor, sitting on chair).
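To make the nesting concrete, the sketch below encodes one branch of the proposed taxonomy as a tree of nested Python dictionaries. The entries are taken from the examples above; the helper function is a hypothetical illustration of how a full path through the tree could yield an unambiguous label for an intervention.

```python
# One branch of the six-level taxonomy as nested dictionaries:
# paradigm -> domain -> programme -> intervention type -> technique -> sub-technique
taxonomy = {
    "individual": {                        # 1. paradigm
        "stress": {                        # 2. domain
            "stress management": {         # 3. programme
                "imagery": {               # 4. intervention type
                    "guided imagery": [    # 5. technique
                        "sensory modality: sound",   # 6. sub-techniques
                        "scenario: beach",
                        "delivery: spoken instruction",
                        "setting: group",
                        "position: supine",
                    ],
                },
            },
        },
    },
}

def label(path):
    """Return an unambiguous label for an intervention from its tree path."""
    return " / ".join(path)

print(label(["individual", "stress", "stress management",
             "imagery", "guided imagery", "scenario: beach"]))
```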

This taxonomic system is capable of including all health psychology paradigms, domains, programmes, intervention types, techniques and sub-techniques, each defined by a universal reference within a tree diagram. Any sufficiently specified intervention can be placed within the system, enabling it to be constructed, delivered, evaluated, labelled, reported and replicated in an unambiguous fashion. This system, or something like it, is needed to remove some basic problems that hold back progress in psychology as a discipline.

Top-down versus Bottom-up Research Approaches

A ’top-down’ research approach is one in which an executive decision-maker, who may be a theorist, a research director or another influential person within an organization, decides the nature of the research programme to be carried out, its objectives and its methodology, with or without consulting an advisory board. Such decisions require suitable funding, for example from governmental and/or commercial sources. A hierarchical system of research personnel at different levels then responds to the requirements of the programme. In many instances, the programme will be carried out across multiple institutions, which compete for the resources by demonstrating their commitment to the research question and their competence to carry it forward.

Figure 7.5 Tree diagram showing paradigm, domain, programme, intervention, technique and sub-technique levels of description

Image

Source: Marks (2009)

The top-down research approach mirrors the social hierarchy found in ancient Egypt, wherein the Pharaoh ruled over a hierarchy of social and occupational classes residing at various levels below (Figure 7.6).

A top-down research approach has been the predominant approach in universities, institutes and research organizations. The ’Pharaoh’ is normally a leading theoretician, funding body, institute director or professor who sets the goals for the research, organizes the funding and appoints the principal investigators (PIs) responsible for implementing the research programme, including the methodology, the specific research questions, and the selection of personnel qualified to organize recruitment of the participants (or ’subjects’) and the data collection. In turn, the PIs recruit assistants to collect the data and statisticians to analyse it; the participants, normally patients or college students, sit at the bottom of the research hierarchy. The paper writers may be the PIs themselves or experts hired especially for their ability to write up the study in the light most favourable to the study hypotheses. The ’Pharaoh’ rarely, if ever, interacts or communicates with anybody lower in the hierarchy than the PIs, least of all the research participants. Typically, ’Pharaohs’ prefer quantitative variables, which they believe to be less prone to error and bias, but they may also opt for subjective, self-report measures, which are more prone to confirmation bias in a non-blinded trial (e.g., see White et al., 2011).

Figure 7.6 Top-down research approach

Image

Some health psychologists, especially those who prefer qualitative methods, disagree with the top-down approach, which imposes a particular theoretical framework or mould on the research and the research participants. They argue that a formulaic, top-down approach tends to produce confirmation biases and group-thinking, which constrain creativity and innovation. Researchers who prefer the reverse approach, the so-called ’bottom-up’ approach (please note that it is ’bottom-up’ and NOT ’bottoms up’, which is the kind of thing people say before downing a stiff drink!), tend to adopt an open-ended strategy, using qualitative or mixed methods data to learn about the thoughts, feelings and lived experiences of the research participants and to produce findings that break out of the mould. They argue that the voices of patients are crucially important in the production of new theories and therapies.

Transparency

A key issue in designing and reporting research studies in health psychology is transparency. This refers to the ability to accurately and openly describe in full detail the participants or patient population (P), intervention (I), comparison (C) and outcome (O) (’PICO’). Above, we have mentioned the CONSORT and TREND guidelines that were designed to improve transparency of the descriptions of interventions. Intervention studies are typically designed to compare one, two or, at most, three treatments with a control condition consisting of standard care, a waiting list control, a placebo, or no treatment.

The standard designs are simple because precious resources must be stretched across a large number of trials. Rarely, if ever, does an intervention include only one technique; practically all trials deliver two or more techniques in combination. If an intervention domain such as smoking has, say, 500 techniques, and the order of delivery matters, then there are roughly 250,000 possible two-technique combinations, 124 million three-technique combinations and 62 billion four-technique combinations! These eye-watering figures may help to explain why replication so often fails. Which specific combination is used in any individual case, and in what order, depends on the subjective choices of the practitioner. Only if the ’PICO’ description is fully detailed and transparent does an independent investigator have the opportunity to reproduce a replica of the study.
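These figures follow from counting ordered selections of k distinct techniques from a pool of 500, which the following sketch reproduces using Python’s math.perm:

```python
from math import perm

techniques = 500

# Number of ordered selections of k distinct techniques from the pool
for k in (2, 3, 4):
    print(f"{k}-technique sequences: {perm(techniques, k):,}")

# 2-technique sequences:  249,500         (~250,000)
# 3-technique sequences:  124,251,000     (~124 million)
# 4-technique sequences:  61,752,747,000  (~62 billion)
```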

Type I Error

The probability of falsely rejecting a true H0, leading to the false conclusion that there is a statistically significant effect (a false positive). A Type I error is detecting an effect that is not present. At the .05 significance level, the probability of a Type I error on a single test is .05 when H0 is true. When making multiple statistical tests, it is necessary to reduce the risk of a Type I error by using a more stringent significance level (e.g., .01, .001 or .0001) or by applying a correction such as the Bonferroni.
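A minimal sketch of the Bonferroni correction, using invented p values: with m tests, each hypothesis is tested at α/m, which keeps the family-wise probability of at least one Type I error at or below α.

```python
alpha = 0.05
p_values = [0.001, 0.012, 0.030, 0.200]  # invented for illustration
m = len(p_values)

# Bonferroni: test each hypothesis at alpha / m
threshold = alpha / m  # 0.0125 for four tests
for i, p in enumerate(p_values, start=1):
    verdict = "reject H0" if p < threshold else "retain H0"
    print(f"test {i}: p = {p:.3f} -> {verdict} (threshold {threshold:.4f})")
```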

Type II Error

The probability of falsely retaining a false H0 (a false negative); that is, failing to detect an effect that is present. This probability is conventionally labelled β, and its complement, 1 − β, is the statistical power of the test.
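Power and the Type II error rate can be estimated by simulation: generate many two-group studies in which a true effect exists, and count how often a t test misses it. A minimal sketch, assuming an effect of 0.5 standard deviations and 20 participants per group (values chosen purely for illustration):

```python
import math
import random
import statistics

def detects_effect(n=20, effect=0.5, critical_t=2.02):
    """One simulated two-group study with a true effect of 0.5 SD.
    Returns True if a two-sample t test detects it (two-tailed .05;
    critical_t ~ 2.02 approximates the cut-off for df = 38)."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(effect, 1.0) for _ in range(n)]
    pooled_sd = math.sqrt((statistics.variance(a) + statistics.variance(b)) / 2)
    t = (statistics.mean(b) - statistics.mean(a)) / (pooled_sd * math.sqrt(2 / n))
    return abs(t) > critical_t

trials = 5000
power = sum(detects_effect() for _ in range(trials)) / trials
print(f"estimated power = {power:.2f}; Type II error rate = {1 - power:.2f}")
```

With these values the estimated power is only about one third, illustrating why small-scale trials so often fail to detect real effects.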

Uncontrolled Variable

An uncontrolled variable is the bête noire of any research study. It is a background variable that, unknown to the investigator, operates within the research environment to affect the outcome in an uncontrolled manner. As a consequence, the study risks producing a false set of findings.

Future Research

1. More studies using qualitative and action research methods will help to broaden the predominantly quantitative focus of research in health psychology.

2. More research is needed on the health experiences and behaviour of children, ethnic minority groups, disabled people and older people.

3. The evidence base on the effectiveness of behaviour change interventions needs to be strengthened by larger-scale randomized controlled trials.

4. More extensive collaboration with health economists is needed to carry out cost-effectiveness studies of psychosocial interventions.

Summary

1. The principal research methods of health psychology fall into four categories: quantitative, qualitative, action research and mixed methods.

2. Quantitative research designs emphasize reliable and valid measurement in controlled experiments, trials and surveys.

3. Qualitative methods use interviews, focus groups, narratives, diaries or texts to explore health and illness concepts and experience.

4. Action research enables change processes to feed back into plans for improvement, empowerment and emancipation.

5. A top-down research approach is when a theorist, director or senior professor decides on the nature of the research to be carried out, the research goals, the questions or hypotheses to be investigated, and the methods used. Critics argue that the top-down approach tends to produce confirmation biases and group-thinking, which constrain creativity and innovation.

6. The ’bottom-up approach’ uses an open-ended approach with qualitative or mixed methods data to learn about the thoughts, feelings and lived experiences of the research participants. The voices of patients and their families are viewed as crucially important in the production of new theories and therapies.

7. A hierarchy of evidence has been proposed which places meta-analyses and systematic reviews at the top of the hierarchy and qualitative research at the bottom. Multiple sources of evidence may be synthesized in systematic reviews and meta-analyses, which is helpful in appraising the state of knowledge in particular fields. However, qualitative methods exploring lived experience provide a necessary counterweight to purely quantitative descriptive methods.

8. Evaluation research to assess the effectiveness of health psychology interventions has generally been too small-scale and of low quality. There is a need for large-scale studies that are methodologically rigorous to evaluate interventions.

9. Interventions need to be described completely, using a taxonomy, so that we can compare and contrast interventions across studies, replicate interventions in other settings, and advance the science of illness prevention by enabling theory testing in the practice of health care.

10. Health psychology has yet to show its full potential by conducting high-quality research with a full gamut of methods and disseminating the findings across society.