Liad Mudrik lab

RESEARCH METHODS

The experimental work in the lab relies on several techniques: behavioral experiments, EEG, fMRI and intracranial recordings. Below we outline our research protocols for designing experiments, running them and analyzing the data. These protocols are based on the current literature (e.g., Cumming, 2014, Psych Science; Simmons et al., 2011, Psych Science) and on the work done in the Adolphs lab. Comments and suggestions are most welcome: mudrikli@tauex.tau.ac.il.

Planning & running experiments

  1. After defining the research question and deciding on the design, write the method section of the future paper. Be as thorough and detailed as possible. The writing process will facilitate the detection of confounds in the design, and will constitute the first stage of documenting every step of the way. Each experiment will have an experimental diary detailing all changes/decisions and their justification. The importance of documentation cannot be overstated: it can prevent future heartache over misremembered or lost information.
  2. Decide on the sample size in advance (based on previous literature, a power analysis or previous experiments in the lab; a power-analysis sketch is given after this list) and plan which analyses will be performed. As a rule of thumb, use the simplest analysis that answers the question, wherever possible. Make sure the design allows these analyses.
  3. Consider publicly pre-registering the proposed study (e.g., with the Center for Open Science).
  4. Present the design and research questions at the lab meeting and get feedback. Modify the design if needed (and update the diary).
  5. Characterize your stimuli. If the experiment involves a new set of stimuli, assess their low-level (or other) features as thoroughly as you can before running it (e.g., when using real-life images, inspect the contrast, chromaticity, spatial frequency, etc.; a sketch of such a check is given after this list).
  6. Run the experiment on yourself, on Liad and on someone else in the lab. Make sure to check that (a) the experiment runs as planned, (b) the trials are properly counterbalanced according to your design and (c) all the needed data are saved (a sketch of such a log-file check is also given after this list). Get feedback from others about what they thought and felt during the experiment. Analyze these preliminary data before continuing, and make sure that the planned analysis can be performed using the saved data.
  7. During data collection, check the quality of the data: make sure files are saved and backed up, assess the noise level (in EEG experiments) and look for potential problems that may affect future subjects. It can also be helpful to verify that subjects are following the task instructions by defining an independent performance measure (orthogonal to the effects of interest) and analyzing the data for that measure only. If subjects’ performance is too low, it may be necessary to revise the instructions or change the task.
  8. Do not analyze the data for the effects of interest before data collection is complete, according to the predefined sample size. Do not change the sample size during data collection.
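
For item 2, a minimal a-priori power analysis could look like the sketch below (Python, using statsmodels). The assumed design (a paired-samples t-test) and the expected effect size (Cohen's dz = 0.5) are placeholders; take them from previous literature, pilot data or earlier experiments in the lab.

```python
# Minimal a-priori power analysis for a paired-samples t-test.
# The effect size (Cohen's dz = 0.5) is a placeholder taken from prior
# literature or pilot data, not a lab default.
import math
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.5,          # expected Cohen's dz
                             alpha=0.05,               # two-sided alpha
                             power=0.80,               # desired power
                             alternative='two-sided')
print(f"Planned sample size: {math.ceil(n)} subjects")
```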
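
For item 5, the sketch below illustrates one way to quantify low-level image features (mean luminance, RMS contrast and a coarse spatial-frequency summary) for every image in a folder. The "stimuli/" path, the PNG format and the grayscale conversion are assumptions; adapt them to the actual stimulus set.

```python
# Low-level characterization of a (hypothetical) folder of stimulus images:
# mean luminance, RMS contrast and the proportion of spectral power above
# the median spatial frequency.
from pathlib import Path
import numpy as np
from PIL import Image

for path in sorted(Path("stimuli").glob("*.png")):
    img = np.asarray(Image.open(path).convert("L"), dtype=float) / 255.0
    mean_lum = img.mean()                      # mean luminance (0-1 range)
    rms_contrast = img.std()                   # RMS contrast
    # Coarse spatial-frequency summary via the 2D amplitude spectrum.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img - mean_lum)))
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
    freqs = np.hypot(*np.meshgrid(fx, fy))     # radial frequency per pixel
    high_sf = spectrum[freqs > np.median(freqs)].sum() / spectrum.sum()
    print(f"{path.name}: luminance={mean_lum:.2f}, "
          f"contrast={rms_contrast:.2f}, high-SF power={high_sf:.2f}")
```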
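
For item 6, a quick check of a pilot log file might look like the following. The file name and the column names ("condition", "response", "rt") are hypothetical and should be replaced with the experiment's actual output format.

```python
# Sanity checks on a pilot log file: equal trial counts per condition
# (counterbalancing) and no missing responses. Column names are hypothetical.
import pandas as pd

df = pd.read_csv("pilot_subject01.csv")
print(df["condition"].value_counts())                 # trials per condition
assert df["condition"].value_counts().nunique() == 1, \
    "Unequal trial counts across conditions"
assert df["response"].notna().all(), "Missing responses in the log file"
print(f"Mean RT: {df['rt'].mean():.0f} ms")
```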

Analyzing & reporting behavioral data

  1. Before going into the predefined analyses, inspect the data visually and see what you can learn from them (i.e., get a feel for the raw data rather than relying only on statistics).
  2. Always report the actual numbers quantifying the effect (e.g., mean differences and SEMs), as well as effect sizes (e.g., eta squared for ANOVAs) and 95% confidence intervals (also in figures). Report exact p-values (a minimal reporting example is given after this list).
  3. Document and report all analyses that were performed.
  4. An important finding/study should ideally be followed up. The ideal paper has a replication study, in an independent subject sample, built into it. As a rule, when using a new experimental procedure, the findings should be replicated before publication.
  5. When possible, make all materials (e.g., stimuli, code) publicly available for the sake of (a) methodological transparency and (b) independent replication. In the same vein, make data available to others (perhaps by request).
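
As an illustration of item 2, the sketch below reports a paired comparison with the actual numbers: mean difference, SEM, 95% confidence interval, exact p-value and Cohen's dz. The data arrays are made-up placeholders standing in for per-subject means in two conditions.

```python
# Reporting a paired comparison: mean difference, SEM, 95% CI, exact p-value
# and Cohen's dz. The arrays below are placeholder per-subject RTs (ms).
import numpy as np
from scipy import stats

cond_a = np.array([512, 480, 530, 498, 505, 521, 490, 515])
cond_b = np.array([540, 502, 555, 510, 522, 548, 509, 531])

diff = cond_b - cond_a
n = len(diff)
mean_diff = diff.mean()
sem = diff.std(ddof=1) / np.sqrt(n)
t_stat, p_val = stats.ttest_rel(cond_b, cond_a)
dz = mean_diff / diff.std(ddof=1)                      # Cohen's dz
ci_low, ci_high = stats.t.interval(0.95, n - 1, loc=mean_diff, scale=sem)

print(f"Mean difference = {mean_diff:.1f} ms (SEM = {sem:.1f}), "
      f"95% CI [{ci_low:.1f}, {ci_high:.1f}], "
      f"t({n - 1}) = {t_stat:.2f}, p = {p_val:.3f}, dz = {dz:.2f}")
```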