Ethical issues arise when a conflict emerges in research between the rights of participants and the aims of the research. For example, if participants are informed of the aim of a study, this may lead them to change their behaviour and so produce invalid results; yet by not revealing the aim, participants are deceived and do not know what they are signing up for, which may not be morally justifiable. The British Psychological Society (BPS) issues ethical guidelines which researchers must adhere to. If they do not, they may lose their job and the ability to practise as a psychologist. Major ethical issues, and ways of dealing with them as set out by the BPS, include the following:
Informed consent: Making participants aware of the aims and purposes of a study, so that they can agree to take part in the full knowledge of what the research is about and what they are letting themselves in for. This would include informing the participant that they have the right to withdraw from the study at any point (any payments made for taking part must still be given in cases of withdrawal). It may be that gaining full informed consent is not possible before the study, as it would render the results meaningless (for example, the Asch study into conformity).
- Dealing with the issue: Participants are given information outlining every detail of the study, before they agree to take part. If they are under 16, parental consent must be sought. If full informed consent cannot be gained, researchers could gain presumptive consent (getting a group of people similar to the participants to say if they would consent to take part in the study- if they say yes, it can be presumed the participants would also have agreed); prior general consent (participants give ‘general permission’ to take part in a number of studies, some involving deception); or retrospective consent (participants give full informed consent via a debrief at the end of the study, at which point they can ask to withdraw their results).
Deception: Deliberately lying to or misleading participants is deception. If deception has occurred, participants cannot give fully informed consent. Deception may be necessary in some cases for an experiment to work (for example, not telling a participant that the other people in the study are confederates). The deception should not be too severe, however.
- Dealing with the issue: Participants should be fully debriefed, where the aims of the study, deceptions, and reasons for any deception are revealed. At this stage, participants should be given the right to withhold their data.
Protection from harm: Participants should be protected from physical or psychological harm in a study. The risk of harm should be no greater than in everyday life (for example, in the Strange Situation, although the babies were distressed at being separated from the caregiver, this is something which would happen in their everyday lives). If participants feel uncomfortable in a study, they have the right to withdraw from it.
- Dealing with the issue: In the debriefing, participants should be reassured about their performance in the study, and offered counselling if appropriate.
Privacy and confidentiality: Privacy refers to not invading people’s personal lives as part of the study- observing people in a park is acceptable, as it is a public place and people would expect to be looked at. Observing someone through their bedroom window is not acceptable. Confidentiality means keeping hidden any personal data, such as names or other information that could lead to the participant being identified.
- Dealing with the issue: Participants can be referred to by number rather than name, or by the use of initials (such as KF). In the debrief, participants are reassured that their personal information will be kept confidential.
- Identify one ethical issue in psychology and explain how it could be dealt with. (4 marks- 1-2 paragraphs)
- Your answer should include: Informed / Consent / Deception / Debriefing / Harm / Privacy / Anonymous
The Role of Peer Review
Peer review is used to assess the quality of research conducted by psychologists (and any other scientists) in terms of its validity and reliability. This is done to ensure that any published findings of research are trustworthy and of a high quality. A small group of experts will scrutinise the research and how it was conducted, in order to ensure that the findings and conclusions are genuine. The reviewers will be anonymous and unknown to the researcher. The aims of peer review are to validate the quality of research in terms of accuracy, to allocate research funding, for example deciding if a grant should be awarded to a research body, and to suggest amendments or improvements before research is published.
- Sometimes, reviewers may use the process to ‘settle old scores’, for example refusing to publish research by a psychologist who may have criticised them in the past. This reduces the objectivity of the process.
- The ‘file drawer phenomenon’ means that there is a bias towards publishing statistically significant results, while studies which find no significant effect are ignored or left unpublished. This creates a false impression of a research area: ten studies finding no positive result may go unpublished, while the one study that does find an effect gets published.
- Research which contradicts long-established theories may be more likely to be rejected by peer review, therefore slowing down the rate at which knowledge advances.
Implications of Psychological Research for the Economy
‘The economy’ relates to anything that affects prosperity, for example employment rates and effectiveness of workers, tax revenues, the spending of income on public services, and so on. Psychological research often has implications for this, for example attachment research has shown the importance of the role of the father, meaning that today it is recognised that both parents are equally capable of raising the child. This has implications in terms of shared parental leave, and the sharing of childcare duties at home so both parents are able to work. ‘Implications’ therefore, often refers to:
- The time spent working/number of days off sick
- The amount of tax revenue that can be raised
- The pressure on public services such as the NHS
- Studies have shown that depression can be treated using CBT, and that patients who undergo CBT alongside drug therapies are more likely to be able to function normally in their day-to-day lives and are less likely to be hospitalised. With reference to this example, outline the effect of psychological research on the economy. (4 marks- 1-2 paragraphs)
- Your answer should include: Normal / Lives / Hospital / Economy
Reliability
Reliability is a measure of consistency- producing the same results over and over again. For example, a pair of scales should, if weighing the same thing, show the same reading each time. Any change in the reading would be due to a change in the object being weighed. In psychological research, reliability refers to the consistency of results from a study. If the same method is used and the same results are produced, the study can be said to be reliable.
Ways of assessing reliability: Reliability can be assessed through the test-retest method. This simply means giving participants the same questionnaire, or the same tasks in an experiment, on more than one occasion. As the participants are the same, the results gained should be the same as well. Enough time must pass between the test and the retest so that the participant will not easily remember what they did the first time around. Once completed, the participants’ scores can be correlated and a statistical test can be done to check the degree of similarity. If there is a significant positive correlation (a coefficient of +.80 or more), the study is likely to be reliable.
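The test-retest check can be sketched as a simple correlation between each participant’s two scores. The sketch below uses Pearson’s correlation coefficient and invented scores for six hypothetical participants; the +.80 cut-off is the one given above.

```python
# Hypothetical test-retest check: each participant completes the same
# questionnaire twice; the two sets of scores are then correlated.
# The scores below are invented purely for illustration.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

test = [12, 18, 9, 15, 20, 11]     # first administration
retest = [13, 17, 10, 14, 19, 12]  # same participants, weeks later

r = pearson_r(test, retest)
print(f"test-retest r = {r:.2f}")
print("likely reliable" if r >= 0.80 else "reliability in doubt")
```

In practice a significance test would also be run on the coefficient; the sketch only shows the correlation itself.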
Another way of assessing reliability in observational research is through testing inter-observer reliability. This can be done by having at least two observers recording the same behaviours using the same behavioural categories. The results of the observers are correlated and a statistical test can be done to see if there is a significant relationship. If there is, the reliability of the observation is likely to be good. This technique can also be used for content analyses and interviews (inter-interviewer reliability).
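Inter-observer reliability can likewise be quantified. One common agreement statistic (not named in the notes above) is Cohen’s kappa, which corrects raw agreement for agreement expected by chance. The categories and codings below are invented for illustration.

```python
# Hypothetical inter-observer check: two observers code the same ten
# behaviour samples using the same behavioural categories.
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two observers' categorical codings."""
    n = len(codes_a)
    # raw proportion of samples on which the observers agree
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a = Counter(codes_a)
    freq_b = Counter(codes_b)
    # agreement expected if both observers coded at random, given how
    # often each of them used each category
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

obs1 = ["hit", "play", "play", "hit", "share", "play", "hit", "share", "play", "play"]
obs2 = ["hit", "play", "share", "hit", "share", "play", "hit", "share", "play", "play"]

print(f"kappa = {cohens_kappa(obs1, obs2):.2f}")
```

A kappa close to 1 indicates good inter-observer reliability; values near 0 mean the observers agree no more often than chance would predict.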
- Experiments: the reliability of these can be improved by controlling the conditions of the experiment as far as possible, so that complete replication is possible. This would include publishing full details of the method and materials used, using standardised instructions, and so on.
- Observations: behavioural categories need to be fully operationalised (measurable/observable), should not overlap with each other- for example, ‘hitting’ and ‘striking’ would be too similar- and all possible behaviours should be covered. This makes the categories as objective as possible, and less open to individual interpretation.
- Questionnaires: these can be improved by using the test-retest method previously described. If reliability is low, the questionnaire should be modified by amending or removing questions (for example if they are unclear), before using the test-retest method again to see if it has improved.
- Interviews: using the same interviewer each time will enhance reliability. In addition, the questions should not be leading or ambiguous. Structured interviews are much more likely to be reliable than unstructured interviews.
Validity
Validity refers to whether the psychological study truly measured what it intended (or claimed) to measure, and whether the results are a true reflection of behaviour in other contexts- in other words, how far they can be generalised. Using the example of the scales, it is possible to be reliable but not valid. If the scales are faulty, they might give the same reading each time, but that reading is not a true reflection of the weight of the object. It is not possible, however, to be unreliable but valid: if the study produces different results each time, this suggests there is something wrong with the method being used, so its results could not be valid either. There are many different types of validity.
Internal validity: This is related to what actually happens in a study; whether the study measured what it intended to measure. In terms of an experiment it refers to whether the independent variable really has had an effect on the dependent variable, or whether the dependent variable was caused by some other extraneous or confounding variable. For example, Milgram’s electric shock experiment may not be internally valid because the participants were aware it was an experiment so must have known deep down that no harm was coming to the learner. In which case, the experiment was not really measuring obedience to authority, as demand characteristics affected the results.
External validity: This refers to whether the findings of a study really can be generalised beyond the present study. We can break external validity down into three types:
- Population validity: the extent to which the findings can be generalised to other populations of people. For example, Asch’s conformity experiment only tested American men, so women and people from other cultures may have acted in a different way.
- Ecological validity: the extent to which the findings can be generalised to other situations outside of the research study. Often this means considering whether the study represents behaviour in a more natural setting. For example, Asch’s study used a task (judging line lengths) which is very unlike anything that would occur in everyday life, so the task lacked mundane realism. Therefore, the ecological validity of the study is lowered.
- Temporal validity: the extent to which the findings can be generalised to other time periods. For example, Asch’s study took place in 1950s USA, which can be argued to be a more conformist time generally due to the fear of communism (and people being secret communists or Russian spies) which was rife during the Cold War. Therefore, people may have been much more concerned with fitting in than they would be today, for instance.
Assessment of validity: One way of assessing validity is to use face validity, which simply means looking at the test or questionnaire and deciding, at face value, if it measures what it intends to measure. Concurrent validity can also be used. This is where the results of a test are compared with another existing, well-established test which measures the same thing, such as an IQ test for intelligence. If there is a strong positive correlation (a coefficient of +.80 or more) between the participants’ scores on the two tests, then the test is likely to be valid.
- Experiments: using a control group to compare the results of the experimental group to can improve validity. For example, a drug is tested by using participants who take the drug, and those that take a placebo. If the experimental group’s results are different to the control group, the IV is likely to have changed the DV. Using single and double-blind procedures can reduce the chance of extraneous variables such as demand characteristics and investigator effects having an impact on an experiment.
- Observations: covert observations are likely to be high in validity. As the participant is unaware they are being observed, their behaviour is more likely to be natural. Having clear and unambiguous behavioural categories will also improve validity.
- Questionnaires: keeping results anonymous increases the chances that the participant will answer truthfully. Many questionnaires and tests have a lie scale built in, which is a set of questions designed to test the truthfulness of a participant’s answers.
- Qualitative methods: case studies and interviews (particularly unstructured ones) are thought to have higher ecological validity, as they more accurately reflect the richness and complexity of the human experience. The researcher must take care to clearly report any findings so that they are free from bias. Including direct quotes is an example of how this can be done. Triangulation (using a number of sources of evidence such as observations, interviews with family members, and so on) is another way of enhancing validity for these methods.