Experimental research designs are one of the classic approaches to empirical research—gathering research data in a way that is verifiable by observation or experience. But what exactly is an experimental research design, and how can you use one in your own research? In this in-depth guide, we’ll give you an overview of experimental research, describe the different types of experimental design, discuss the advantages and disadvantages of this approach and walk you through the four steps for completing experimental research.
Experimental research is scientifically driven, quantitative research involving two sets of variables. The first set, known as the independent variables, is manipulated by the researcher in order to determine the impact on the second set—the dependent variables. Using the experimental method, you can test whether, and how, the independent variables impact the dependent variables, which can help support a wide range of decisions in areas such as pricing, advertising and messaging, packaging, and product concepts.
These are just a few different areas of consumer research that are suitable for experimental research. However, not all experimental research designs are equivalent. Let's take a look at the three different types of experimental design you might consider using, and some of the types of research questions they could be used for.
The simplest type of experimental design is called a pre-experimental research design, and it has many different manifestations. Using a pre-experiment, some factor or treatment that is expected to cause change is implemented for a group or multiple groups of research subjects, and the subjects are observed over a period of time.
Different types of pre-experimental research design include:
In this type of design, sometimes called a one-shot case study, a treatment is applied to a single sample group. The group is then studied to determine whether the treatment caused change, by comparing observations to general expectations of what the case would have looked like had the treatment not been implemented. There is no control or comparison group.
This type of design, known as a one-group pretest-posttest design, also involves observing one group with no control or comparison group. However, the group is observed at two points in time: once before the intervention is applied and once after. For instance, if you want to determine whether concentration increases in a group of students after they take part in a study skills course, you might employ this type of experimental design. Any observed changes in the dependent variable are assumed to be the consequence of the intervention or treatment.
This type of design, known as a static-group comparison, compares two groups: one that has experienced some intervention or treatment and one that has not. Any differences observed between the two groups are presumed to be the result of the treatment.
A true experimental research design involves testing a hypothesis in order to determine whether there is a cause-effect relationship between two or more sets of variables. Although there are a few established ways to conduct experimental research designs, all share four characteristics: the presence of a control group, an independent variable that the researcher manipulates, a dependent variable that is measured, and the random assignment of subjects to groups.
This type of approach might be used in concept testing, such as comparing the impact of a change in packaging design between a treatment group and a group that receives the original packaging.
Finally, a quasi-experimental research design follows some of the same principles as the true experimental design, but the research subjects are not randomly assigned to the control or treatment group. This type of research design often occurs in natural settings, where the researcher cannot control the assignment of subjects. An example of a quasi-experimental research design is a researcher greeting Saturday shoppers at a grocery store with a welcome banner and comparing their perceptions of how welcoming the store was with the perceptions of shoppers visiting on a Tuesday, when the banner was not present.
Now that you know what kinds of experimental designs are available, let’s focus on the steps you should take to set up your design.
In the first stage, establish your research question, and use it to distinguish between dependent and independent variables.
Independent vs. dependent variables
Independent variables are the variables that will be subjected to some kind of manipulation, and which are expected to impact the outcome. In contrast, the dependent variables are not manipulated; they represent the outcome and are expected to be impacted by the independent variables. For instance, if you are performing ad testing, you might have a research question like this: Do different marketing messages affect how appealing consumers find our product?
From this research question, the independent variable will be different marketing messages, while the dependent variable will be product appeal.
Next, you should state your hypothesis. This should be a specific and testable statement that outlines what you expect to find, should emerge from your research question, and should be informed by the results of any previous research. For example, if you are comparing the impact of two different marketing messages on product appeal, you might state a hypothesis like this: Consumers exposed to marketing message A will rate the product as more appealing than consumers exposed to marketing message B.
When stating hypotheses, there are a number of best practices to follow. The hypothesis should be specific and falsifiable, should predict the expected relationship between the variables, and should be testable with the data you plan to collect.
Third, design your experimental treatments. This means manipulating your independent variable(s) in such a way that different groups of research subjects are exposed to different levels of that variable, or the same group of subjects is exposed to different levels at different times. For instance, if you’re interested in learning about whether trying a new eco laundry detergent impacts people’s views towards sustainability, you might provide some subjects (the treatment group) with the laundry detergent to use for a certain period of time, while a control group continues to use their regular detergent.
It is important to note that manipulation of the independent variable must involve the active intervention of the researcher. If differences in the variable occur naturally (e.g. if a researcher compares views on sustainability among households who already use eco detergents and those that use regular detergents), then an experiment has not been conducted. In this case, observed differences between the two groups might be because of some third, unknown variable that could impact the cause-effect relationship. For instance, households that contain a green activist may already use eco detergent, which makes it impossible to determine whether using the eco detergent impacts views on sustainability (or whether the relationship is, in fact, the other way around). In some experiments, the independent variable can only be manipulated indirectly or incompletely, and in this case, it may be necessary to perform a manipulation check prior to testing the results: a statistical test that shows that the manipulation worked as expected.
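As an illustration, a simple manipulation check can be run as a two-sample t-test comparing how the treatment and control groups rate the manipulated attribute. The sketch below is a minimal, hypothetical example in Python using only the standard library; the ratings are invented for illustration, not real data.

```python
import math
import statistics

def welch_t(group_a, group_b):
    """Welch's two-sample t statistic: how many standard errors
    apart the two group means are."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    standard_error = math.sqrt(var_a / len(group_a) + var_b / len(group_b))
    return (mean_a - mean_b) / standard_error

# Hypothetical 1-7 ratings of "How eco-friendly is your detergent?"
treatment = [6, 7, 5, 6, 7, 6, 5, 7, 6, 6]  # used the eco detergent
control   = [3, 4, 2, 3, 4, 3, 3, 2, 4, 3]  # kept their regular detergent

t = welch_t(treatment, control)
# |t| well above ~2 suggests the manipulation registered with subjects
print(f"t = {t:.2f}, manipulation {'worked' if abs(t) > 2 else 'unclear'}")
```

If the check fails (that is, the two groups rate the manipulated attribute similarly), any differences later observed in the dependent variable cannot credibly be attributed to the manipulation.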
When manipulating your variables, you should be aware of the impact on internal validity and external validity. Internal validity can be understood as credibility and is largely concerned with answering questions such as, “Do the findings of the study make sense?” and “Are the findings credible?” External validity, on the other hand, is designed to examine whether the research findings can be transferred to another setting or context in which data collection did not take place. In other words, research findings that are externally transferable are generalizable beyond the parameters of the research setting.
A key question that you will need to address when constructing your variables is how broadly or finely you should test them. For instance, if you are measuring the appeal of a product, you could ask survey respondents to assess appeal on a three-point measure (Appealing, Neither appealing nor unappealing, Unappealing), or on a finer-grained 10-point Likert-type scale. Both approaches have benefits and drawbacks, and the approach you should take will depend on what you want to get out of the research. If you are only interested in whether a product is appealing (or not) and not by how much, it makes sense to use the broader approach.
In the next stage of the experimental research design, you should categorize your survey subjects into appropriate treatment groups. There are many ways that you can do this, but you should be aware that the approach you use can impact the validity and reliability of the results.
There are two main approaches to randomization: a completely randomized design and a randomized block design.
A completely randomized design randomly assigns subjects to the treatment or control group. The rationale for randomization is that, on average, potentially confounding variables will affect each condition equally, so any significant differences observed between the treatment and control conditions can reasonably be attributed to the independent variable.
Using the randomized block design, the researcher first looks for confounding variables, then assigns subjects to blocks based on that variable, before randomizing subjects to different groups. In our product appeal study, men and women might find a product appealing for different reasons, so a group of participants might first be assigned to gender-based blocks, and then randomly assigned to different treatment groups in order to ensure gender parity.
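To make the distinction concrete, here is a minimal Python sketch of both assignment schemes. The participant list and the gender blocking variable are hypothetical, and the even 50/50 split between conditions is an assumption for illustration.

```python
import random

def completely_randomized(subjects, seed=42):
    """Completely randomized design: shuffle everyone, split the pool in half."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (treatment, control)

def randomized_block(subjects, block_of, seed=42):
    """Randomized block design: group subjects into blocks first, then
    randomize within each block so both conditions get an equal share
    of every block."""
    rng = random.Random(seed)
    blocks = {}
    for subject in subjects:
        blocks.setdefault(block_of(subject), []).append(subject)
    treatment, control = [], []
    for members in blocks.values():
        rng.shuffle(members)
        half = len(members) // 2
        treatment.extend(members[:half])
        control.extend(members[half:])
    return treatment, control

# Hypothetical participants: (id, gender)
people = [("P1", "F"), ("P2", "F"), ("P3", "F"), ("P4", "F"),
          ("P5", "M"), ("P6", "M"), ("P7", "M"), ("P8", "M")]

treatment, control = randomized_block(people, block_of=lambda p: p[1])
# Blocking guarantees each condition contains exactly two women and two men,
# which a completely randomized split cannot promise.
```

The trade-off is that blocking requires identifying the confounding variable up front; when no strong confound is suspected, the simpler completely randomized design is usually sufficient.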
There are two ways of assigning your research participants to different conditions:
Using the between-subjects research design, different people test each condition, so that each person is only exposed to a single treatment or condition.
Using the within-subjects, or repeated-measures design, the same group of individuals tests all the conditions, and the researcher compares the results across each condition.
For all true experimental research designs, there will be a control group: a set of individuals who are not subjected to any treatment, or who are instead given a placebo treatment, which enables the researcher to compare the impact of a treatment or intervention against a neutral group.
Let’s take a look at some of the strengths and limitations of experimental research designs.
The experimental research design offers you a wide range of advantages:
Research carried out in natural settings can be impacted by a number of confounding variables that have the potential to disrupt the relationship between the independent and dependent variables, and thus confuse the findings. In contrast, experimental research designs have higher levels of control, which can improve validity.
Although the experimental research design stems from the physical sciences, it can be used in a number of different subjects and disciplines, including in the social sciences, business and marketing.
Experimental research can help you to reach firmer conclusions than other types of research. That’s because the conditions of the experiment are carefully controlled and manipulated. As a result, the impact of other, extraneous variables can be minimized. In addition, as we discuss below, you can distinguish between correlational relationships and causal relationships using this type of research approach.
One of the main advantages of experimental research designs is that findings can be replicated time and time again, which increases the validity of the research and can help to advance knowledge. That's because the exact process used to conduct the research, from locating the research subjects to applying the treatment and recording the results, is fully documented and described.
One of the main benefits of conducting experimental research is that it enables you to determine whether the relationship between two or more variables of interest is a causal relationship—something that is not possible with correlational or cross-sectional research designs. This is a major benefit if you’re interested in whether changes in one variable produce changes in another variable, as well as in predicting the likely direction of those changes. For instance, if you’re interested in the relationship between overeating and stress, you can determine which causes the other, rather than only observing that there is some relationship between both phenomena.
Of course, there are also disadvantages to experimental research designs. Before deciding on whether this type of design is right for you, you should consider aspects such as:
Opportunity for human error
As with all research, human error can occur. However, since the researcher actively manipulates the variables, the potential for error may be higher in experimental research.
Test conditions can fail to reflect real world conditions
Experiments are usually carried out in artificial conditions. This can impact external validity—the extent to which the findings generalize to real-world settings.
Time and resource intensive
Compared to other research designs, experimental research designs tend to be more time and resource intensive. This is especially true of designs involving multiple treatment groups, large samples, or repeated observations over time.
Potential for ethical and practical concerns
Since experimental research designs involve the manipulation of variables, there is the potential for ethical challenges to arise. In experiments involving a treatment group and a control group, one concern is that the control group does not receive the benefits associated with the treatment. For example, in psychology or the medical sciences, a group of subjects who are exposed to a treatment for a particular complaint may experience the advantages of the treatment, while the control group does not. In addition, some participants may drop out of the experiment over time, which could impact the results.
We hope you have enjoyed our deep dive into experimental research designs. If you think you’re good to go, and are looking for subjects for your experiment, SurveyMonkey Audience can help. Also, take a look at How to do market research: the ultimate guide for more on how to use experimental designs in market research.