Psych Chapter 10


Why do we conduct experiments?
Because we would like to understand as much as we can about the nature of a relationship between two variables so that we can possibly influence some outcome
Example: Seeing red
Background
Red has many connotations, mostly negative

Psychologist Andrew Elliot wanted to test whether the color red impacted performance on a test

The theory was based on an approach/avoid orientation in achievement

Approach – strive to get answers right; excited about challenge
Avoid – try not to fail; anxiety

Example: Seeing red
Experiment
Hypothesis: exposing students to the color red on a test causes them to withdraw and perform worse

Tested with an Experiment:
71 students in a lab
5-minute test unscrambling anagrams

Identical tests except that the student ID numbers were written in either red, green or black ink on the inside of the test booklet

Students were randomly assigned to the conditions

Example: Seeing red
Results
Students whose ID numbers were written in red performed worse on the anagram test than students in the green and black ink conditions, consistent with the hypothesis.
Example 2: eating pasta
Experiment
Experiment looking at whether the size of a serving container influences how much we eat.

Students at Cornell University’s Food and Brand Lab were studied

Randomly assigned to a large-bowl or a medium-bowl condition

Participants who served themselves from a large bowl took more and ate more, compared to those who served themselves from a medium bowl.

Example 2: eating pasta
Results
Participants in the large-bowl condition served themselves more pasta and ate more than those in the medium-bowl condition.
What are the three types of experimental variable?
Independent, dependent, and control
Independent variable
the manipulated variable
Dependent variable
the measured variable
Control variables
held constant on purpose so they cannot explain the change in the DV
How can experiments meet the three causal rules?
Covariance
Temporal precedence
Internal validity
Example of covariance being met with an experiment
In the Elliot study, covariance was seen between the causal variable (color of ID number) and effect variable (performance on the anagram test)
The covariance was shown by the difference in the groups' mean scores.
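To make this concrete, here is a minimal sketch (in Python) of covariance as a difference in group means. The scores are invented for illustration – they are not Elliot's data – and the one-way ANOVA call is just one common way to test whether the means differ across conditions.

# Hypothetical anagram scores; covariance = performance differs across ink-color conditions
from scipy import stats

red   = [8, 7, 9, 6, 7, 8]
green = [11, 10, 12, 9, 11, 10]
black = [10, 11, 12, 10, 9, 11]

for name, scores in [("red", red), ("green", green), ("black", black)]:
    print(name, "mean =", sum(scores) / len(scores))

# One-way ANOVA: do the group means differ more than chance alone would predict?
f, p = stats.f_oneway(red, green, black)
print(f"F = {f:.2f}, p = {p:.3f}")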
Independent variables have to vary in order to establish
covariance

YOU NEED COMPARISON GROUPS

Groups that can be used to establish covariance
Control, treatment, comparison, and placebo
Control group
a level of the IV that represents "no treatment" or a neutral condition
Treatment group
a level (or levels) of the IV that receives the treatment of interest
Comparison groups
other conditions, like the green and black ink groups – not true control groups, because neither really represents the absence of the treatment
Placebo group
a control group that is exposed to an inert treatment, like a sugar pill
How do experiments establish temporal precedence?
Since the IV is manipulated, it comes before the DV.

The cause comes before the effect.
This is better than a correlational study, where the variables are measured at the same time.

How do experiments establish internal validity?
The most important!

Allows us to rule out other explanations for the change in the dependent variable

For example, in the Elliot anagram experiment, to rule out other explanations for the difference in performance, the researchers kept everything else the same across conditions – the anagrams, the researchers' attitudes, and the test forms – and the researcher did not know which condition each participant was in.

Internal validity issues
design confound

systematic variability

selection effects

design confound
a second variable that varies systematically with the independent variable

e.g., if Elliot had given the red-ink group more difficult anagrams than the other groups, the difficulty of the test would have been a design confound

Systematic variability
when the variability is uneven across groups; for example, if only the unfriendly research assistants were assigned to the red-ink condition
Unsystematic Variability (does not harm)
variability that is random across conditions

Does not harm internal validity, but can obscure real differences (it adds noise and reduces power)

Selection effects
When the kinds of participants at one level of the independent variable are systematically different from those at the other levels

Can happen when participants are allowed to choose which condition they want to be in

Example of selection effects
Study designed to test new therapy for autism

One-on-one sessions with a therapist for 40 hours per week

Some children received the new treatment; others received their standard treatment

Families allowed to select condition

Clear selection effect – families who are willing to devote 40 hours per week are probably already working more with their children

Selection is a clear confound in this instance

Avoid selection effects with _______ and __________
random assignment

matched groups
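A quick sketch of what random assignment looks like in practice. The participant IDs and condition labels below are hypothetical; the point is only that chance, not the participants' own choices, decides who ends up in which group.

# Hypothetical random assignment of 24 participants to three conditions
import random

participants = [f"P{i:02d}" for i in range(1, 25)]
conditions = ["red", "green", "black"]

random.shuffle(participants)  # scrambles any order based on sign-up time, motivation, etc.
assignment = {p: conditions[i % len(conditions)] for i, p in enumerate(participants)}

for p in sorted(assignment):
    print(p, "->", assignment[p])

# Because chance determines the condition, pre-existing differences between
# participants end up spread roughly evenly across the groups, which is what
# prevents selection effects.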

Types of group designs
Independent-groups and within-groups
Independent Groups Designs
Different groups of participants are assigned to different levels of the independent variable

Also called between-subjects and between-groups designs

Within Groups Designs
Only one group of participants & each participant is assigned to all levels of the independent variable

Also called within-subjects design

Types of Independent-groups designs
Posttest only
and
Pretest/Posttest
Posttest-only design
participants are randomly assigned to the levels of the independent variable and measured on the dependent variable once, after exposure to their condition
Pretest/posttest design
participants are randomly assigned to the levels of the independent variable and measured on the dependent variable twice – once before and once after exposure to their condition
Is pretest/posttest or posttest-only better?
They both support causation

In some situations, like the pasta experiment, a pre-test is not possible

Random assignment helps ensure that confounding variables are evenly distributed between groups – like appetite

Types of Within groups designs
concurrent-measures and repeated measures
Concurrent-measures
participants are exposed to all levels of the independent variable at about the same time

Example: babies were shown two faces simultaneously, and the experimenters recorded which face they looked at longer.

Repeated-measures
participants are measured on the dependent variable more than once – after exposure to each level of the independent variable

Example: oxytocin is a biochemical thought to be important in social bonding. Mothers' oxytocin levels were measured while they interacted with their own 2- or 3-year-old toddler, and again several days later with an unfamiliar toddler. Levels were found to be higher with the unfamiliar toddler.

Four basic designs summary
Independent-groups

– Posttest-only
– Pretest/posttest

Within-groups

– Concurrent-measures
– Repeated-measures

Pros and cons of Independent-groups designs
Pros: No contamination across independent variable levels

Cons: Require more people

Pros and cons of within-groups designs
Pros: Require fewer people
Individuals serve as their own control

Cons: Potential order effects, chance of experimental demand

Within-groups designs
another advantage
Power – the ability of a study to show a statistically significant result when something is truly going on in the population

Within-groups designs have more power.

Analogy: a noisy party makes it hard to hear a conversation even when one is really going on. In the same way, large differences between participants add noise that can hide a true effect; within-groups designs remove much of that noise because each person serves as their own control.
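The noisy-party point can be made concrete with a rough simulation. Everything below (effect size, person-to-person variability, sample size) is made up for illustration; the sketch just shows that when people differ a lot from one another, the same true effect is detected more often by a within-groups (paired) test than by an independent-groups test.

# Rough simulation of the power advantage of within-groups designs (invented numbers)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, true_effect, person_sd, trial_sd = 20, 0.5, 2.0, 1.0
reps = 2000
hits_within = hits_between = 0

for _ in range(reps):
    # Within-groups: the same people appear in both conditions, so each
    # person's baseline cancels out in the paired comparison.
    baseline = rng.normal(0, person_sd, n)
    a = baseline + rng.normal(0, trial_sd, n)
    b = baseline + true_effect + rng.normal(0, trial_sd, n)
    hits_within += stats.ttest_rel(a, b).pvalue < 0.05

    # Independent groups: different people in each condition, so
    # person-to-person differences add noise to the comparison.
    g1 = rng.normal(0, person_sd, n) + rng.normal(0, trial_sd, n)
    g2 = rng.normal(0, person_sd, n) + true_effect + rng.normal(0, trial_sd, n)
    hits_between += stats.ttest_ind(g1, g2).pvalue < 0.05

print("within-groups power ~", hits_within / reps)
print("independent-groups power ~", hits_between / reps)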

Within-groups are good for two tests for causality but pose a threat to one of them:
Good for covariance, temporal precedence

major threat to internal validity: order effects

Order effects
Participants’ later responses are systematically affected by their earlier ones (fatigue, practice, or contrast effects)

Also called practice effects or carryover effects

Correction: Counterbalancing

Two methods of counterbalancing
Full and partial
Full counterbalancing
all possible orders of the conditions are used, and participants are divided among them (e.g., with three conditions there are 3! = 6 possible orders)
Partial counterbalancing
only some of the possible orders are used – for example, a Latin square, in which each condition appears once in each ordinal position
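A small sketch of the difference, using hypothetical condition labels A, B, and C. Full counterbalancing enumerates every order; partial counterbalancing uses only a subset, such as a Latin square.

# Full counterbalancing: every possible order of the conditions
from itertools import permutations

conditions = ["A", "B", "C"]
full_orders = list(permutations(conditions))
print("full counterbalancing:", full_orders)   # 3! = 6 orders

# Partial counterbalancing: only a subset of orders, e.g. a Latin square
# in which each condition appears once in each position
latin_square = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]
print("partial (Latin square):", latin_square)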
Within groups

disadvantages

1. Order effects – fix with counterbalancing

2. People can change their behavior depending on exposure to previous conditions
2a. This can reflect demand characteristics – participants figure out what the study is about and try to act like "good participants"

3. A within-groups design might not be possible – e.g., comparing two methods of teaching someone to ride a bike, since you cannot un-teach a person how to ride a bike

Is the Pretest/Posttest Design a Within-Groups Design?
No. In a true within-groups design, each participant experiences every level of the independent variable; in a pretest/posttest design, each participant experiences only one level and is simply measured twice.
Interrogating Causal Claims with the Four Validities
Construct validity

External validity

Statistical validity

Internal validity

Construct Validity
How well were the variables measured and manipulated?

– Is the measure a good representation of the construct of interest?
– How well was the independent variable manipulated?

Manipulation Checks
an extra dependent variable that researchers can insert into an experiment to quantify how well an experimental manipulation worked
Pilot Study
a simple study, usually run with separate participants before the full-blown study, to test whether a manipulation works
Theory Testing
– For construct validity, the operationalizations of both the independent and dependent variables are assessed

– The standard is typically the theory that the researcher is testing

– For the experiment about color and avoidance orientation, critics could argue that the experiment came out the way it did because red is a warm color, not because it signals threat. So Elliot and colleagues added another warm color – orange – to improve construct validity

External validity
To whom or to what can the causal claim generalize?

– Generalizing to other people – random sampling versus random assignment
– Generalizing to other situations – takes more than 1 study to generalize

Statistical validity
How well do the data support your causal conclusion?

Are the results statistically significant?

What is the effect size?
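For concreteness, here is a minimal sketch of one common effect-size measure, Cohen's d, for two independent groups. The scores are invented purely to show the calculation; the benchmarks in the comment are the conventional rough guides.

# Cohen's d for two independent groups (hypothetical scores)
import statistics

group1 = [8, 7, 9, 6, 7, 8]
group2 = [11, 10, 12, 9, 11, 10]

mean_diff = statistics.mean(group2) - statistics.mean(group1)

# Pooled standard deviation (equal group sizes assumed here for simplicity)
pooled_sd = ((statistics.stdev(group1) ** 2 + statistics.stdev(group2) ** 2) / 2) ** 0.5

d = mean_diff / pooled_sd
print(f"Cohen's d = {d:.2f}")   # rough guide: 0.2 small, 0.5 medium, 0.8 large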

Internal validity
Are there alternative explanations for the outcome?

– This is the most important validity for experiments!

– Three fundamental internal validity questions:

1. Did the design ensure that there were no design confounds?
2. If an independent-groups design was used, were selection effects controlled for by random assignment or matching?
3. If a within-groups design was used, were order effects controlled for by counterbalancing?
