Rapid Invisible Frequency Tagging (RIFT) using the PROPixx 1440 Hz mode
Frequency tagging uses flickering stimuli to evoke signals at the same frequencies in the brain. These signals, called steady state visually evoked potentials (SSVEPs) in EEG and steady state visually evoked fields (SSVEFs) in MEG/OPM, can shed light on visual and cognitive processes. For instance, the relative amplitude of SSVEPs can reflect processes like attention, and the progression of the signal across brain regions traces the pathway of visual information processing.

EEG traces can be analyzed to reveal frequencies evoked by the stimulus display
Traditional frequency tagging paradigms use visibly flickering stimuli (typically < 60 Hz). These low-frequency flickers have some disadvantages:
they can irritate participants and cause discomfort
they can obscure other low-frequency signals of interest
they can make it difficult to measure specific kinds of cognitive and perceptual processes (e.g., covert attention)
Researchers at the University of Birmingham have demonstrated that a smoothly modulated, sinusoidal flicker above 60 Hz can reliably evoke SSVEPs and SSVEFs while remaining invisible to the participant. This method, called Rapid Invisible Frequency Tagging (RIFT), opens up many new avenues for research; since its inception, it has been used to study brain-computer interfaces, language perception, attention and more.
A key advantage of RIFT is that experimental paradigms can be ‘hidden’ in naturalistic tasks without alerting the participant. This makes them ideal for more extended studies or testing special populations like children.
For an excellent review of RIFT from a research perspective, we recommend this article:
Methodologically, RIFT can only be achieved using a high-speed display. The display must update fast enough to generate a smoothly modulated flicker that maintains a target frequency > 60 Hz.
This is where the PROPixx comes in. The QUAD12X mode on the projector operates at 1440 frames per second in greyscale, allowing for high-speed frame transitions with the temporal resolution required to support RIFT applications. Indeed, to our knowledge, the PROPixx remains the only research-grade display solution used in RIFT paradigms where complex visual stimuli are needed.
The rest of this guide reviews QUAD12X mode and uses it to present a high-speed modulating stimulus with a custom frequency set by the user. We will measure the resultant display luminance changes using a photodiode to show the projector output is modulating smoothly, and we will discuss the relationship between synchronization signals from VPixx hardware and display timing.
QUAD12X: 1440 Hz in greyscale
In QUAD12X mode, the PROPixx receives full HD resolution video (1920 x 1080) at 120 Hz from your PC. It then ‘breaks down’ each frame of this video into 12 distinct subframes, which are shown in sequence, each full screen and in greyscale. This produces a final display refresh rate of 1440 Hz (120 x 12).
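As a point of reference, QUAD12X is selected through the PROPixx sequencer. Below is a minimal MATLAB sketch using the Datapixx toolbox; on current toolbox versions, sequencer program 5 corresponds to QUAD12X, but confirm this value against your own documentation:

% Open a connection to the PROPixx and enable the QUAD12X sequencer.
% Program 5 selects QUAD12X (1440 Hz greyscale); verify against your
% Datapixx toolbox documentation.
Datapixx('Open');
Datapixx('SetPropixxDpxSequenceProgram', 5);
Datapixx('RegWrRd');    % push the register write to the device

% ... run your experiment ...

% Restore the default RGB 120 Hz sequencer when finished
Datapixx('SetPropixxDpxSequenceProgram', 0);
Datapixx('RegWrRd');
Datapixx('Close');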
The order in which a composite 120 Hz image is broken down is as follows:
The image is subdivided into four ‘quadrants’
Each quadrant is further divided into red, green and blue colour channels, each interpreted as 8-bit greyscale
The subframes are shown in the following order: Q1R, Q2R, Q3R, Q4R, Q1G, … Q4B
QUAD12X sequencer
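To make this ordering concrete, here is a small hypothetical MATLAB helper (the name and indexing convention are our own, not part of any VPixx library) that maps a subframe index to its quadrant and colour channel:

function [quadrant, channel] = subframeToQuadChannel(subframe)
% Map a QUAD12X subframe index (1-12) to its quadrant (1-4) and
% colour channel (1 = red, 2 = green, 3 = blue) in the composite image.
quadrant = mod(subframe - 1, 4) + 1;        % cycles Q1..Q4 within a channel
channel  = floor((subframe - 1) / 4) + 1;   % R for 1-4, G for 5-8, B for 9-12
end

For example, subframe 1 maps to Q1R, subframe 5 to Q1G, and subframe 12 to Q4B.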
For more details on QUAD12X, including links to other demos showing its implementation, see Quad12x: 1440Hz greyscale, half resolution.
Using QUAD12X to generate a sinusoidal mask
Following the example in Seijdel et al. (2023), we will present a static image in the center of the display, with a 68 Hz sinusoidal mask applied on top, for 10 seconds. The code can easily be modified for different target frequencies.
Tiling the stimulus into quadrants
First, we must format our ‘composite’ 120 Hz stimulus so that the 1440 Hz output is correct. We will use this image as our stimulus:

As a first step, we must tile this stimulus to appear in the center of all four quadrants in our composite image. This will ensure it appears in the center of the 1440 Hz display on each subframe. Remember, the images in each quadrant will be shown full-screen, twice as large, in the final output.
A simple helper function loads and tiles the 960 x 540 image and background; a sketch of this function is shown below.
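Here is a minimal sketch of that tiling step, assuming a 1920 x 1080 window and a 960 x 540 greyscale texture; the function name, arguments, and quadrant layout are our own illustrative choices:

function tileStimulus(windowPtr, imageTexture, bgLuminance)
% Reset blending so the background writes to all colour channels
Screen('BlendFunction', windowPtr, 'GL_ONE', 'GL_ZERO', [1 1 1 1]);
% Uniform grey background across the full 1920 x 1080 composite frame
Screen('FillRect', windowPtr, bgLuminance);
% Draw the 960 x 540 greyscale image into each of the four quadrants
quadW = 960; quadH = 540;
for q = 1:4
    col = mod(q - 1, 2);        % 0 = left column, 1 = right column
    row = floor((q - 1) / 2);   % 0 = top row, 1 = bottom row
    destRect = [col*quadW, row*quadH, (col+1)*quadW, (row+1)*quadH];
    Screen('DrawTexture', windowPtr, imageTexture, [], destRect);
end
end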
Because our stimulus is already in greyscale and static, this is all we need to do to format it. If this were a dynamic or colour image, we would need to generate a more complex image that ‘zeros’ the colour channels not associated with the target subframes. For instance, an image shown exclusively on the first four subframes (red colour channel) should not have any blue or green colour data. Composite images with data in more than one colour channel will be split into the relevant subframes:
Red channel only
Blue and red channels
Generating the mask transparencies for sinusoidal output
Now, let’s consider our mask. We need to identify each subframe's transparency level to generate a smooth oscillation between transparent and opaque. This is a slowed-down version of what we want the result to look like:

Mask with oscillating transparency from fully transparent → opaque
First, we generate a sinusoid of transparency values from 0 to 255 at our target frequency of 68 Hz using a helper function, sketched below.
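A minimal sketch of this step follows; the function and variable names are our own. Note that at 1440 samples per second, a 68 Hz sinusoid is sampled roughly 21 times per cycle, which is ample for a smooth modulation:

function alphaLevels = makeTagSinusoid(stimFreq, duration)
% Generate one transparency value (0-255) per 1440 Hz subframe,
% oscillating sinusoidally at the target tagging frequency (Hz).
subframeRate = 1440;                                     % PROPixx QUAD12X refresh rate
t = (0:(duration * subframeRate - 1)) / subframeRate;    % subframe onset times (s)
% Scale a unit sinusoid into the 0-255 transparency range
alphaLevels = round(127.5 * (1 + sin(2 * pi * stimFreq * t)));
end

For stimFreq = 68 and duration = 10, this returns 14,400 values, one per subframe.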
Here’s what the first 50 ms of transparency samples look like, plotted:

Transparency values for a 68 Hz signal sampled at 1440 frames per second
Formatting the mask to ensure correct subframe assignment
The mask is a circle that covers the image and oscillates its transparency between 0 and 255. We know the transparency value on each subframe, the final desired position of the mask (the center of the frame), and its size (diameter = width of the stimulus image). To build our composite 120 Hz image, we need to draw 12 of these masks in a single framebuffer, along with our tiled background, then pass everything to the display.
When drawing each of our 12 masks, we need to ensure:
The colour channel corresponds to the correct target subframe
The blend mode is set correctly (see the section below)
The position of the mask is offset to reflect its quadrant
The helper function drawCircularMask does all of these steps for us automatically. We need to pass it the mask properties and subframe number, and it will do the rest.
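drawCircularMask ships with the full script linked at the end of this guide. As a rough, condensed sketch of what such a helper might do (the geometry and names here are illustrative, and the blend logic is explained in the next section):

function drawCircularMask(windowPtr, subframe, alphaLevel, maskDiameter)
% Draw one mask into the quadrant and colour channel of a given subframe
quadW = 960; quadH = 540;
quadrant = mod(subframe - 1, 4) + 1;        % Q1R..Q4R, Q1G..Q4G, Q1B..Q4B
channel  = floor((subframe - 1) / 4) + 1;   % 1 = red, 2 = green, 3 = blue

% Alpha blend only the target colour channel (see the blending section below)
colorMask = [0 0 0 0];
colorMask(channel) = 1;
Screen('BlendFunction', windowPtr, 'GL_SRC_ALPHA', 'GL_ONE_MINUS_SRC_ALPHA', colorMask);

% Centre of the mask, offset into the subframe's quadrant
cx = (mod(quadrant - 1, 2) + 0.5) * quadW;
cy = (floor((quadrant - 1) / 2) + 0.5) * quadH;
maskRect = CenterRectOnPoint([0 0 maskDiameter maskDiameter], cx, cy);

% Grey mask in the target channel; alpha carries this subframe's transparency
maskColor = [0 0 0 alphaLevel];
maskColor(channel) = 128;
Screen('FillOval', windowPtr, maskColor, maskRect);
end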
On blending functions and overlapping stimuli
In many cases, when building the composite image for our display, layers in the image will overlap one another. The software then has to decide how to combine the layers; this process is called blending. Most software will, by default, treat opaque layers as obscuring the layers beneath them. If a layer is semi-transparent, the software typically computes a weighted average of this layer and the one below it (alpha blending). An alternative is additive blending, in which each layer’s colour channel data is kept independent and summed in the final display.
Types of blending
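As a toy numeric example of the difference (pixel values of our own choosing, on a 0-255 scale):

% Blending a semi-transparent source pixel (alpha = 0.25) over a destination
src = 200; dst = 80; alpha = 0.25;
alphaBlended = alpha * src + (1 - alpha) * dst;   % weighted average: 110
additiveSum  = min(src + dst, 255);               % channel data sums and clips: 255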
In QUAD12X mode, we need to perform alpha blending within a colour channel and additive blending across channels to preserve all of the greyscale data in the different subframes of the final 1440 Hz display.
Fortunately, this can easily be done in MATLAB with the Screen('BlendFunction') command from Psychtoolbox:
% Alpha blend the red channel, do not alter other channel data
Screen('BlendFunction', windowPtr, 'GL_SRC_ALPHA', 'GL_ONE_MINUS_SRC_ALPHA', [1 0 0 0]);
The final argument is a colour mask in the form [r, g, b, alpha]: channels set to 1 are alpha blended, while channels set to 0 are left unaffected.
Implementing this in Python is a bit more challenging. Presently, PsychoPy windows do not support channel-specific blending options, so we must invoke a custom shader that does what we described above.
code coming soon!
Putting it all together
So far we have identified several discrete steps for generating a composite image in our RIFT display. They are:
Tile our static, greyscale 960 x 540 image into four quadrants
Generate a list of transparencies for our 68 Hz mask
Generate 12 masks for our composite image using our transparency list and our helper function
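Sketched end to end, each pass of the frame loop draws the tiled background plus the 12 masks for that composite frame, then flips. This condensed outline uses the hypothetical helpers sketched above; the full, tested scripts are linked below:

% Condensed frame loop: 10 s at 120 Hz = 1200 composite frames,
% each carrying 12 subframes of mask transparency.
alphaLevels = makeTagSinusoid(68, 10);    % one value per 1440 Hz subframe
nFrames = 120 * 10;
for frame = 1:nFrames
    tileStimulus(windowPtr, imageTexture, 128);   % background + tiled image
    for subframe = 1:12
        idx = (frame - 1) * 12 + subframe;        % index into the sinusoid
        drawCircularMask(windowPtr, subframe, alphaLevels(idx), 960);
    end
    Screen('Flip', windowPtr);    % present one composite frame (12 subframes)
end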
As a reminder, we want our final display to look like this (but much faster):

Final display, slowed down
Below is the entire MATLAB and Python code to achieve this output. You can download the stimulus image here: [download]. Make sure it is in the same folder as your test script.
Is it working?!
The whole point of RIFT is that the oscillation is invisible. If you see a static grey logo on the display, don’t worry! The code is working. To see the oscillation, try changing the stimFreq variable to a lower value, like 5 Hz.