
Best Practices for Designing and Implementing Pack&Go Studies

Below is a list of general tips and recommendations for Pack&Go experiments.

Preparing your experiment

  • Your main experiment file should be a script, not a function. If you don’t want to modify any code, your experiment file can always be a simple one-line script that executes the experiment function. For example, below is a valid experiment file:

    CODE
    my_experimentFunction()
  • Your main experiment file should be in the root folder of your experiment, not a subfolder.

  • TEST EVERYTHING. We do not reimburse credits for studies where coding errors or a failure to test results in data/session loss.

  • Your file paths must be relative. Below are examples of absolute and relative paths for a study whose main script is located in C:\Users\john\Documents\MyStudy. A short path-building sketch follows this list.

Location                            Absolute path (DO NOT USE)                        Relative path (USE)
Same folder as experiment script    C:\Users\john\Documents\MyStudy\cats.png          cats.png
Subfolder called images             C:\Users\john\Documents\MyStudy\images\cats.png   images\cats.png

  • Avoid changing the working directory during the course of the experiment.

  • We recommend letting the experiment run for 10-15 seconds before collecting time-sensitive data, to allow the network time to stabilize.

  • We recommend using normalized spatial units, because participants may have different monitor resolutions and sizes (see the sketch after this list).

  • If you need to control stimulus size, there are several ways to do so, depending on how simple you need the procedure to be and how much variability you can tolerate. See Brascamp (2021; https://doi.org/10.1167/jov.21.8.19) and Yung et al. (2015; http://dx.doi.org/10.3791/52470) for examples.
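
As an illustration of the relative-path rule, here is a minimal MATLAB sketch; the images subfolder and cats.png file are the example names from the table above, not files that Pack&Go provides.

CODE
% Build the path relative to the study's root folder rather than hard-coding
% an absolute location, so it resolves wherever the study folder is unpacked.
imgPath  = fullfile('images', 'cats.png');  % relative to the working directory
stimulus = imread(imgPath);                 % load the example image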
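
And here is a rough sketch of working in normalized units, assuming a Psychtoolbox window handle named window; the 0-1 coordinates and the square stimulus are illustrative choices, not requirements.

CODE
% Define stimulus geometry as fractions of the screen, then convert to
% pixels at runtime so the layout scales with each participant's display.
[screenW, screenH] = Screen('WindowSize', window);

normX = 0.25;  normY = 0.50;   % position as fractions of screen width/height
normW = 0.10;                  % stimulus width as a fraction of screen width

pixW = normW * screenW;
destRect = CenterRectOnPoint([0 0 pixW pixW], normX * screenW, normY * screenH);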

Time-sensitive, dynamic or moving stimuli

  • We cannot guarantee a specific stimulus presentation time during an online experiment. To understand why, it is helpful to consider the general steps from rendering images on Pack&Go to the output of your participant’s monitor:

    • Our Pack&Go service renders images at a rate of 60 Hz under ideal conditions.

    • This 60 Hz frame buffer is sampled at 30 Hz to generate the stream data that is passed to your participant’s browser. We plan to offer other sampling rates in the future.

    • The 30 Hz streamed data is then sampled repeatedly by the participant’s GPU to generate the output (anywhere from 60 Hz to 240 Hz on commercial displays). Each system takes its own approach to the exact sampling method, and we cannot guarantee or control this.

  • Presentation times may be measured post-hoc (see the section below on post-hoc latency corrections) to assess timing variability for a given study.

  • If you have dynamically changing or moving stimuli, you may want your stimulus change rate to account for render times longer than one frame. In other words, if it takes (or you suspect it takes) more than one video frame for your stimulus to be drawn, calculate the change on each update from the time between the two previous frame flips, rather than from absolute timing or the system’s default frame rate. For example,

    • Stimulus A is a complex image moving at a rate of 3 units/second. To update its location on the following frame, we use the formula A = A + 3 * deltaTime, where deltaTime is the time elapsed between the two most recent flips on which the stimulus was drawn. A code sketch of this update follows this list.
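
As a rough sketch of this update rule, assuming a Psychtoolbox-style loop in which Screen('Flip') returns the flip timestamp; the speed, position and loop variables are illustrative names, not part of Pack&Go itself.

CODE
% Advance the stimulus by the measured time between flips, so a slow or
% dropped frame does not change its on-screen speed.
speed   = 3;                       % units per second (the example rate above)
posA    = 0;                       % current stimulus position in the same units
lastVbl = Screen('Flip', window);  % timestamp of the first flip

for frame = 1:nFrames              % nFrames is defined elsewhere in the study
    % ... draw the stimulus at posA here ...
    vbl       = Screen('Flip', window);    % timestamp of this flip
    deltaTime = vbl - lastVbl;             % time since the stimulus was last shown
    posA      = posA + speed * deltaTime;  % A = A + 3 * deltaTime
    lastVbl   = vbl;
end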

Exclusion criteria

  • Consider including comprehension checks, attention checks and/or catch trials in your study. You can find some examples here. If a participant fails these checks, you can include conditional code in your study to terminate the session early and avoid wasting Pack&Go credits on poor data. A sketch of such a check follows this list.

  • Some recruitment platforms like Prolific allow you to reject or decline to pay participants who fail checks or complete the session too quickly; rejection needs to be done manually on the recruitment site following the completion of the session. Rejecting a participant’s session will not affect the Pack&Go credits consumed during the session.

  • If you would like to exclude study sessions with very poor network quality, you can assess this post-hoc from the average jitter in the session’s stream.json file. The threshold for tolerable jitter depends on the research design and the research question; pilot data may be useful for determining reasonable exclusion criteria. A sketch of this screening step also follows this list.
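
As a minimal sketch of terminating a session early, assuming your study tallies failed checks in a counter variable; the variable names, the saved file name and the Psychtoolbox clean-up call are illustrative assumptions, not Pack&Go requirements.

CODE
% End the session early once too many attention checks have been failed.
if nFailedChecks > maxFailedChecks
    save('early_exit_data.mat', 'results');  % keep whatever was collected so far
    sca;                                      % close Psychtoolbox windows, if used
    return;                                   % stop the experiment script here
end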
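
And here is a rough sketch of the jitter screening, assuming MATLAB's jsondecode; the samples and jitter field names and their units are placeholders, so check the actual structure of your own stream.json before relying on them.

CODE
% Post-hoc network-quality screening for one session (field names are placeholders).
meta       = jsondecode(fileread('stream.json'));
meanJitter = mean([meta.samples.jitter]);    % average over the 1 Hz metadata samples
exclude    = meanJitter > jitterThreshold;   % threshold chosen from pilot data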

Saving experiment data

  • Pack&Go does not, by default, save any identifying participant data. To protect participant privacy, consider using anonymous ID codes in your studies, and collect only the minimum amount of personal details you need to conduct your research.

  • Consider saving intermediate data files often, in case participants exceed the session time limit (a sketch follows this list).

  • Saving large quantities of large files (images and videos) during a session can slow down the experiment. If you are generating images dynamically (for example, noise masks), consider saving the parameters needed to regenerate them later rather than the full images; a second sketch below illustrates this.
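
A minimal sketch of trial-by-trial saving; runOneTrial, the results struct and the file name are hypothetical stand-ins for your own experiment code.

CODE
% Overwrite an interim results file after every trial so that data survive
% an early or interrupted end to the session.
for trial = 1:nTrials
    results(trial) = runOneTrial(trial);                        % hypothetical trial function
    save(sprintf('sub%s_interim.mat', subjectID), 'results');   % cheap insurance each trial
end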
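
And a sketch of saving parameters rather than pixels for a dynamically generated noise mask; the seed-plus-size scheme shown here is one possible approach, assuming the mask can be regenerated deterministically from a recorded random seed.

CODE
% Record the seed and size of a noise mask instead of the image itself.
maskSeed  = randi(2^31 - 1);        % seed used for this mask
rng(maskSeed);
noiseMask = rand(maskH, maskW);     % mask generated during the session

maskParams = struct('seed', maskSeed, 'height', maskH, 'width', maskW);
save('mask_params.mat', 'maskParams');   % a few bytes instead of a full image

% Offline, the identical mask can be rebuilt:
% rng(maskParams.seed); noiseMask = rand(maskParams.height, maskParams.width);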

Post-hoc latency corrections

  • If you would like to correct for network latency, the relevant metadata is provided in the stream.json file of the results folder for each session. This metadata is sampled at 1 Hz, so you may have to interpolate to obtain trial-wise values. Below are some examples of post-hoc corrections for network latency (a worked code sketch follows this list):

    • Stimulus presentation time = Time of flip containing stimulus, as recorded by experiment + lastFrameDelay

    • Time of response = Response time recorded by experiment + networkLatency

    • Reaction Time = Time of response – Stimulus presentation time

    • Stimulus end time = Flip time that clears the stimulus, recorded by the experiment + lastFrameDelay

    • Stimulus duration = Stimulus end time - Stimulus presentation time

  • Corrected timing estimates will always be noisier than in-lab results, and this is unavoidable with online platforms.
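
As a worked sketch of the corrections above: stimFlipTime, clearFlipTime and respTime stand for timestamps recorded by your experiment, while lastFrameDelay and networkLatency stand for the stream.json metadata interpolated to the relevant trial times; all of the names here are illustrative.

CODE
% Post-hoc latency corrections for one trial, following the formulas above.
stimOnset    = stimFlipTime  + lastFrameDelay;   % stimulus presentation time
stimEnd      = clearFlipTime + lastFrameDelay;   % time of the flip that clears the stimulus
respOnset    = respTime      + networkLatency;   % time of response

reactionTime = respOnset - stimOnset;            % corrected reaction time
stimDuration = stimEnd   - stimOnset;            % corrected stimulus duration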

Special instructions for participants to ensure a smooth session

  • Use a laptop or desktop. Pack&Go does not currently support studies on mobile or tablet.

  • Use a private internet connection. Generally speaking, public wifi can be unstable, and institutional wifi (e.g., University wifi) can have aggressive security measures that block our servers.

    • If your participants have no choice but to use an institutional network (e.g., students living or working on campus), you can ask the IT department to whitelist Pack&Go. See our tips on the Troubleshooting page.

  • Move the cursor offscreen during testing. Pack&Go cannot hide the participant’s cursor via code or the interface. If you are concerned this will distract your participant, please include instructions to have them move it off-screen.

  • Close other tabs and browsers. Open tabs and browsers can impact network performance, particularly if they involve streamed data or downloads in progress (e.g., Spotify, YouTube).

  • Disable streaming, downloading and uploading. Streaming/downloading content during testing (e.g., torrenting) can affect network performance.

  • Be mindful of browser extensions. In rare cases, certain browser extensions may intercept keyboard activity and alter it; the development team has observed this with extensions that bind keyboard shortcuts to the browser. If your study has text entry and your participants report an issue, ask them to disable their browser extensions during the session.
