V 10.1, Oct. 2020 Go to latest version
Institute of Medical Psychology at Ludwig-Maximilian University München
How well do we see? How can visual function be described, and which mechanisms can be thought to underlie it? Visual psychophysics tries to answer these general questions, and research in this area needs specialised software. This is an overview of what is available.
Since I first compiled the overview and sent it to a few colleagues in July 1994 (first posted on CVNet in May 1995) there have been countless revisions. But the general picture has not changed all that much – the three main platforms are still the PC, the Macintosh, and Unix variants. With the increased power of hardware, the notion of special Unix workstations disappeared in the 90s and was replaced by public-domain Unix derivatives like Linux, Debian, Ubuntu etc. The three communities still live happily in parallel, mostly unaffected by the boring developments in the others. There are a few signs of true integration, however, as is perhaps evidenced by cross links between the entries below. Many packages have been steadily supported and improved, and find more widespread use. Others have stagnated. I left the latter in the overview – by the regularity of updates one can easily assess whether a system has supporters. One development is new, however, and that is the ubiquity of mobile devices. We'll speak of that below.
The status of 1997 was captured in two special issues of the journal Spatial Vision, Vol. 10(4) and 11(1), entitled Use of computers and cathode-ray-tube displays in visual psychophysics, Part I & II. The tables of contents are on the web; they contain links to authors and PDFs provided by them. All papers are available online (pdf) through a subscribing library, and some are available on the authors' personal homepages. – That is now nearly two decades ago, and an up-to-date scholarly compilation is still not around. Those were the times when monitors still did what the experimenter wanted them to do. Did you keep a CRT in your storage room? You didn't (like me), because they were just too bulky, and so we have been unpleasantly surprised by the limitations of current technology. If you haven't by now, read the papers by Elze & Tanner (2009) on LCD monitor timing and Elze (2010) on misspecifications of stimulus duration, and the CVNet discussions collected in the monitors section below (which is current as of 2016).
When I started this overview, I had a simple question: How should we best do psychophysical experimenting – including simple things like measuring a contrast-sensitivity function or visual acuity? When somebody starts anew, should one recommend a Unix base or a simple PC or Mac? Is there a certain package that is best suited? It turns out there is no simple answer. It also turns out there is a wealth of software and hardware out there to do the experiments.
The story of software development for psychophysics – like any other story of software development – is one of continuously reinventing the wheel. Hundreds of laboratories have set up powerful facilities. When I started out, software was mostly tied to specific hardware, perhaps custom-designed or not easily available. This has changed quite a bit, and hardware dependency is less of a problem today. But nowadays operating-system dependency can be an equal hindrance, despite all claims to the contrary. Good software does not catch on when it is simply not polished enough to be spread and used by others. Sometimes good packages are abandoned by their authors when they (have to) switch their field of work; support then becomes a problem. Wherever one looks, different packages represent incompatible solutions to overlapping areas of application. The platforms for psychophysical experimentation – PC, Macintosh and Unix – are still the focus of largely separate user communities. Back in the 90s, however, Denis Pelli started to turn that wheel by setting an inspiring example of sharing when he made his VideoToolbox available: a highly stable, well-supported software basis for visual psychophysics on the Macintosh. The VideoToolbox paved the way for the use of high-level languages like Matlab or Python and is the basis of the Psychtoolbox, which has finally led to a convergence of platforms. Today there are systems in many high-level languages. My aim with this web page is to contribute to cooperation by bringing together the developers of software and their users.
An important development was the maturation of general-purpose psychological experimenting systems (Section F). Due to the inability of earlier versions of Windows to handle real-time applications, professional packages were at first available for the Macintosh only (an early system was PsyScope). Nowadays there is an impressive line of general-purpose systems for all platforms. To add flexibility, several can optionally be programmed with Python. Ironically, whilst timing with millisecond accuracy is no longer an obstacle for current operating systems, it has become a problem with monitor technology.
Platforms ... What about the iPad and browsers?: The three main platforms for psychophysical experimenting and vision testing are still Windows, Mac, and Unix. However, tablet computers like the iPad, smartphones, and browser technology have started to become serious contenders. I have thus added new platform keywords: "iPad", "browser-based", and "Android". For easier spotting, platforms are highlighted in bold font in the main text.
Scope: The overview covers both public domain material and commercial systems. A number of related fields of software have further been added because it is often not possible to draw a sharp line between areas.
The list is ordered into 19 sections:
(A) Program libraries (public domain);
(B) Public domain applications, i.e., executable programs or program systems;
(C) Commercial systems for psychophysics;
(D) Psychophysical data analysis, including programs for signal detection theory-based analysis;
(E) Programs from ophthalmology and optometry;
(F) Psychological experimenting systems, i.e., systems that are not specialized to psychophysics;
(G) Psychological experimenting systems that are geared towards cognitive neuroscience;
(H) Software for computational vision;
(I) More general experimenting systems;
(J) Plotting and data visualization;
(K) Analysis tools for fMRI and EEG;
(L) Programs from visual neuropsychology, i.e. geared towards brain-injured patients;
(M) Programs from visual neurophysiology, e.g. for single-cell recording;
(N) A pointer to the eye-movement list and a few examples;
(O) A few pointers to virtual reality software;
(P) A few hints on hardware matters like response boxes;
(Q) Some random examples for auditory psychophysics;
(R) Notes on Open Science, and
(S) A collection of museums and illusions.
Since the times of Gustav Theodor Fechner, psychophysics has aimed at more than understanding the (quantitative) relationships between the external and the internal world (physis and psyche). Fechner intended it to be a means of thereby also learning about the underlying – neural or other – mechanisms of perception. This is exemplified by Fechner using the Weber fraction from physiology for quantifying the increment in Empfindungsstärke (intensity of sensation). Today we see the same when psychophysical techniques find use in the sciences that study perception in a broader sense, like sensory physiology and anatomy, biopsychology, or indeed the whole of visual and cognitive neuroscience. I therefore hope that the software shown here is seen as a core which will find more widespread use in all these neighboring sciences. Indeed it already does whenever perception is involved.
Where do you find this overview?
The Overview is available on the VisionScience page (under Resources/Software or Guides&FAQs) and on my personal homepage in München (direct link). There are several links from other software and methods overviews. It was reviewed in the i-Reviews section of i-Perception in 2012 (open access) by Damien Mannion. Note that all the links have been dusted off since! Entries that are of historical interest only are marked by using the past tense. Note also that the ordering in each category tries to group similar items (as far as that goes) and implies no ranking of resource quality. As of Version 8.0, styling is by CSS and the site is HTML5 compatible.
Thanks go to all who have contributed, and in particular to my son Claus Strasburger, who has introduced me to the secrets of CSS and HTML5.
On the perception side, i-Perception (which is open access) has a new section, i-Reviews, that publishes short reviews on perception-related hardware and software (e.g. Kubilius on code-sharing, or Verstraten on Faces of Neuroscience). As mentioned above, this Overview was reviewed there by Damien Mannion in 2012.
A regular source of information on software from experimental psychology is the journal "Behavior Research Methods" (formerly Behavior Research Methods, Instruments, and Computers, BRMIC). There are occasional papers on perceptual software in the Journal of Neuroscience Methods, Frontiers in Neuroinformatics, and in Computers in Biology and Medicine (CBM).
There are my two special issues in Spatial Vision (10(4) and 11(1)): "Use of computers and cathode-ray-tube displays in visual psychophysics".
An overview of methodological issues in stimulus presentation is found in our paper: Bach M, Meigen T & Strasburger H (1997). Raster-scan cathode ray tubes for vision research – limits of resolution in space, time and intensity, and some solutions. Spatial Vision 10, 403–414. The issues discussed are in part independent of CRT technology.
For monitor calibration there is a paper of ours: Strasburger H, Wüstenberg T & Jäncke L (2002). Calibrated LCD/TFT stimulus presentation for visual psychophysics in fMRI. Journal of Neuroscience Methods 121, 103–110. The basic procedure also works for monitors of newer technology.
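If you are new to calibration, the gamma-correction step that such papers formalize boils down to fitting a power function to a handful of photometer readings and inverting it. Here is a minimal, generic Matlab/Octave sketch with invented numbers – it is not the procedure of the paper above, just the common idea:

    % Minimal gamma-fit sketch (made-up photometer readings, base Matlab/Octave only)
    v = linspace(0, 255, 9)';                            % digital values actually measured
    L = [0.4 1.0 2.6 5.6 10.3 16.8 25.2 35.7 48.3]';     % measured luminances in cd/m^2 (invented)

    % Fit L(v) = L0 + a*(v/255)^gamma by least squares
    err = @(p) sum((p(1) + p(2)*(v/255).^p(3) - L).^2);
    p   = fminsearch(err, [0 50 2.2]);                   % start values [L0 a gamma]
    L0 = p(1);  a = p(2);  gam = p(3);

    % Invert the fit to get a linearizing 8-bit lookup table
    Lwant = linspace(L0, L0 + a, 256);                   % equally spaced target luminances
    clut  = round(255 * ((Lwant - L0)/a).^(1/gam));      % digital value for each table entry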
A site that reviews LCD monitors for technical specifications (pointed out by John Pezaris).
Ghodrati M, Morris AP, & Price NSC (2015). The (un)suitability of modern liquid crystal displays (LCDs) for vision research. Frontiers in Psychology. Provides a comparison of several LCD monitors.
Wiens et al. (2004). Keeping It Short. A Comparison of Methods for Brief Picture Presentation. Psychological Science.
Tobias Elze & Thomas Tanner (2009). Liquid crystal display response time estimation for medical applications. Medical Physics 36(11). ... is on LCD monitor timing.
Tobias Elze (2010). Misspecifications of Stimulus Presentation Durations in Experimental Psychology: A Systematic Review of the Psychophysics Literature. PLOS ONE ... Check out Fig. 6!
Watson & Silverstein (2010). Light-Emitting Diodes (LEDs) and Organic Light-Emitting Diodes (OLEDs), pp. 809–810, in their Chpt. 10.5, Visual Displays (pp. 761–822); in: Human Integration Design Handbook (40 MB), NASA (current editions).
... a short chapter on these monitors in an eleven-hundred-page handbook.
There was a thread on Visionlist (2009) on timing issues, with subject "CRT monitor solutions" that appears to be no longer online.
Peng Wang and Danko Nikolić wrote a report on a 120 Hz Samsung Monitor in Front Hum Neurosci (2011) (enhanced pdf). Answers by Michael Bach, Vincent Ferrera, and later Peter Scarfe.
Lagroix HEP, Yanko MR, Spalek TM (2012). LCDs are better: Psychophysical and photometric estimates of the temporal characteristics of CRT and LCD monitors. Attention, Perception & Psychophysics 74: 1033-1041. (pdf)
Elze T & Tanner TG (2012). Temporal Properties of Liquid Crystal Displays: Implications for Vision Science Experiments. PLoS ONE 7(9): e44048. doi:10.1371/journal.pone.0044048. (They take a rather critical but realistic stance.)
See also Ghodrati, Morris, & Price (2015) on LCD monitors cited above.
Informal comparison of Unix, MacOS and Windows with respect to timing (2011) by Mario Kleiner
Five papers on OLED monitors look at their timing properties:
A poster by Poth et al. at the ECVP 2017 describes the use of a gaming monitor for ultra-high temporal resolution. More details and source code for the implementation are in a full paper by Poth et al. (2018) in Behav. Res. Meth..
Two further papers on OLEDs:
Electro-Diagnostic Imaging, Inc. (EDI) manufactures an RGB microdisplay color stimulator where each frame is updated all at once synchronously (1280 x 1024 px; frame rates: 60, 75, and 85 Hz; response time 40µs symmetrical; field size 45 deg; spherical refractive correction > 20 dpt). There is a version with a built-in camera for pupil monitoring and an additional IR fundus camera for fixation control.
The VIEWPixx monitor system described below has a version with only 1 ms delay. In particular, however, note that there is now an ultra-fast DLP projector (ProPixx) with an elephantastic refresh rate of 1440 Hz!
Since the advent of what was then called "true color" and VGA in the 80s, computer color channels have been limited to 8 bits – which is fine for everyday use but not for psychophysics. Higher grayscale resolution on the Mac could be achieved with Denis Pelli's Video Attenuator (but that died with monochrome monitor technology). It worked by combining the colour channels to achieve 12 bits on a black-and-white monitor. Current hardware for higher-than-8-bit grayscale resolution is CRS's Bits++ box; on the software side there are dithering (cf. our paper), Tyler's color bit-stealing technique, and Allard and Faubert's noisy-bit method.
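To make the idea concrete, here is a minimal Matlab/Octave sketch of the noisy-bit principle – my own illustration with arbitrary numbers, not Allard and Faubert's code: the desired continuous grey level is rounded stochastically, so that the quantization error averages out over pixels and frames.

    % Noisy-bit principle in a nutshell: 'img' holds desired grey levels on a continuous 0..255 scale
    img = 127 + 0.4*randn(256);                        % a hypothetical very-low-contrast stimulus
    lo  = floor(img);                                  % nearest representable value below
    out = uint8(lo + (rand(size(img)) < (img - lo)));  % round up with prob. = fractional part
    % Each pixel is still an 8-bit value, but its expected value equals the intended grey level;
    % spatial and temporal averaging in the visual system does the rest.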
Even though operating systems and computer hardware have become much faster over time, accurate timing in experimental setups can still be an issue, even if specialized systems like E-Prime, SuperLab, Inquisit etc. are used (due to multitasking, monitors, input devices, and other factors). A system for checking on timing accuracy, Black Box ToolKit, is presented below under Hardware Hints.
This section is at the moment mostly a stub. Comparisons are important but presently I am aware of only one independent paper. I will add more (hopefully) with time.
Rolf Kötter (2009) (based on Peirce, 2007, and Straw, 2008) briefly compares the two Python-based packages Vision Egg and PsychoPy, also touching on Psychtoolbox and Presentation.
Ehrenstein WH, Ehrenstein A (1999). Psychophysical Methods. In: Windhorst U, Johansson H (Eds): Modern Techniques in Neuroscience Research, pp. 1211-1241, Springer-Verlag, Berlin.
Gescheider GA (1997). Psychophysics: The Fundamentals. Lawrence Erlbaum Ass.
Norton, Corliss, & Bailey (2002/2007). The psychophysical measurement of visual function. Previously published by Butterworth-Heinemann - now available from Richmond Products (direct link).
Rose D (2006). Psychophysical methods. In Breakwell GM, Hammond S, Fife-Schaw C and Smith JA (eds.) Research Methods in Psychology, Third Edition, Sage, London, pp. 160-181. (The paper mentions psychometric functions and 2AFC – without any of those tedious boring details.)
Schwartz (2004). Visual Perception: A clinical orientation. 3rd ed. McGraw-Hill. The book has a chapter on "Psychophysical Methodology".
Kingdom, F.A.A. & Prins, N. (2010). Psychophysics: A Practical Introduction. Academic Press, an imprint of Elsevier, London. It's (currently) the most recent treatise on psychophysics, and provides details about the Palamedes toolbox.
Macmillan & Creelman (2004). Detection Theory: A User's Guide. 2nd ed. Lawrence Erlbaum Associates. Also available as an electronic version.
Wickens (2001). Elementary Signal Detection Theory. Oxford University Press. (Best introductory text in my opinion)
McNicol: A primer of signal detection theory. Lawrence Erlbaum Assoc, 2004 (Nice little book)
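As a reminder of the core quantity these texts revolve around, here is the textbook yes/no d' computation in a few lines of base Matlab/Octave; the counts are invented, and erfinv is used so that no statistics toolbox is needed.

    % d' and criterion for a yes/no task (made-up counts, standard formulas)
    nSignal = 100;  nNoise = 100;
    hits = 82;  falseAlarms = 27;

    z = @(p) sqrt(2) * erfinv(2*p - 1);    % inverse of the standard normal CDF
    H = hits / nSignal;                    % hit rate
    F = falseAlarms / nNoise;              % false-alarm rate

    dprime    = z(H) - z(F);               % sensitivity
    criterion = -0.5 * (z(H) + z(F));      % response bias c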
Brian Wandell's "rules of thumb" is a most useful list of important quantities related to vision – the essence of vision research on a single page. It's also in the front cover of his book Foundations of Vision.
Educational software and materials are now listed in Section B — Educational:
Here are a few further teaching materials that I came across:
Scott Steinman from Southern College of Optometry made his REALbasic classroom material available (2007). To quote him, these are "70 demo programs for eye exams and vision science. Nothing fancy, but they work for the classroom. They're available to vision science instructors for both Mac OS X and Windows (although the Mac versions work a little better)" (website discontinued).
"Precision and Accuracy with Three Psychophysical Methods" is an interactive web-site for students developed by Hiroshi Ono, Kenzo Sakurai and others. It covers the method of adjustment, method of constant stimuli, and method of limits. Remarkably, it is available in Englisch, French, Japanese, and Chinese.
Explaining what simple cells do: A Matlab script by Daisuke Kato & Izumi Ohzawa performing a 2D Gabor Wavelet Transform on an arbitrary image, and back. Slowly and sequentially you can visualize how the original image is reconstructed by summing many Gabor wavelets.
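For readers without the original script at hand, the building block it sums – a 2D Gabor patch – can be generated in a few lines of Matlab/Octave (parameters are arbitrary):

    % A single 2D Gabor "wavelet" (arbitrary parameters, just to visualize the building block)
    sz     = 128;                 % patch size in pixels
    sigma  = 15;                  % Gaussian envelope sd, pixels
    lambda = 24;                  % wavelength of the carrier, pixels
    theta  = pi/6;                % orientation

    [x, y] = meshgrid(-sz/2:sz/2-1, -sz/2:sz/2-1);
    xp     = x*cos(theta) + y*sin(theta);                       % rotate coordinates
    gabor  = exp(-(x.^2 + y.^2)/(2*sigma^2)) .* cos(2*pi*xp/lambda);

    imagesc(gabor); axis image; colormap gray;                  % have a look
    % Filtering an image with such kernels at many positions, scales and orientations
    % is the essence of the Gabor wavelet transform demonstrated by the script above.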
Three classical illusions with adjustable parameters modified from Java applets of Tony Azevedo:
Ponzo Illusion;
Poggendorff Illusion;
Müller-Lyer Illusion.
On a different topic, Robert Jacobs from Rochester made available his lab's "Computational Cognition Cheat Sheets". These (currently 22) notes provide brief introductions to several computational methods, mostly stemming from multivariate analysis (like Bayesian estimation, Factor analysis etc).
On still another topic – Interactive visual demos and games – we came across a little demo by Amit Patel called Blind spots while driving. It shows how the driver's position in a car will interact with the adjustment of the rear mirrors to give a complete or not-so-complete rear view. Simple and instructive.
PsychToolbox Forum: The forum for discussions on the PsychToolbox. It was previously on Yahoo Groups but since Jan. 2020 is here on Discourse. To get started at the new site, click the blue sign-up button in its upper-right corner. For more information, see the banner at the top of the home page.
Encyclopedia of Psychology: A comprehensive portal to psychology software
Visiome is an impressive Japanese vision archive with many original contributions
The Color & Vision Research Laboratory, London, maintains a large, very well organized repertory of vision related materials.
NeuroDebian is a platform and software archive (see below)
Neuro-Ophthalmology Nuggets, Lars Frisén’s website on Neuro-Ophthalmology and vision, including a list of free browser-based clinical tests
AVA – Vision research and its applications
The International Society for Psychophysics
When I was young and pretty I learned about public domain software and believed it was a great idea to give your software away for free. I wrote a package for the measurement of visual evoked potentials on the PDP-11 — in Fortran. My next package was for visual psychophysics (R_Contrast), also for the PDP-11 and also in Fortran. There were what was then called Special Interest Groups of people who shared their software. That spirit was what got me started doing this Overview. But the fascinating thing is how it has caught on! At some point there came Open Source, then Open Access publishing, and now Open Science. So for me it's really coming full circle with OpenVisionScience. What can I say! Join and enjoy! Here is a photo of me playing Zork on my home-built CP/M Z80 machine. You see the state-of-the-art 8" floppy disk on the right in a wooden housing that nicely fit my shelf? I built the computer in wire-wrap technology; it had WordStar, a RAM disk, and was really! fast (not like Windows 10 which goes asleep every now and then). Still works today.
When you come to Europe (with your kids), don't miss the little perception museums:
Bernd Lingelbach's Scheune, i.e. "The Barn"! • 3sat Video • Phoenix Video
"Turm der Sinne" in Nürnberg
Explora (Erlebnismuseum, Frankfurt)
Illusoria-Land in Switzerland. Check out the dolphins, supposedly visible only to kids
Technorama in Winterthur, Switzerland, known for its "Wonderbridge". The "hair" experiment is fun! Kopfwelten is about perception.
A fun perception museum in Edinburgh/Scotland is the Camera Obscura, and there is another further north in Kirriemuir.
Also in Edinburgh is the natural science museum Dynamic Earth. |
I wanted to limit the fun part to Europe. Oh well — what the heck; here are more — elsewhere. It probably all started with the Exploratorium in San Francisco, which you all know. Here are some that you don't know:
PuzzlingWorld is in New Zealand. It has the largest Ames room of all and a great suite of tilted rooms, according to Robert O'Shea.
Arts exhibition at the Torre di Sulis during ECVP 2012 in Alghero
There is now a regular arts and vision science conference, VSAC
New Optical Illusions.com
Martin Mißfeldt from Berlin plays with classical illusions. Check out his Old/young woman video
Marcel de Heer's YouTube channel of visual illusions
The PsychToolbox (PTB) (previously Psychophysics Toolbox) is a free package of interface routines for doing visual psychophysics from within Matlab under Mac OS X, Linux and Windows. It evolved out of the desire to create a high-level environment for Denis Pelli's VideoToolbox on the (pre-OSX) Macintosh (by David Brainard) and is now probably the most comprehensive psychophysics package, with the largest followership. The main developer since around 2006 has been Mario Kleiner.
To become independent of commercial Matlab, PTB supports GNU/Octave, a free and open-source replacement for Matlab. The PTB further ships with MOGL (Matlab OpenGL), a thin wrapper around every OpenGL function as known from C. This makes developing pure OpenGL in Matlab easy. One can switch between this lower level functionality and the PTB functionality to compose a screen. Full-3D scenes are easy to do. The PTB supports the Bits++ box.
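To give newcomers the flavor of PTB's Screen interface, here is a minimal sketch (no calibration, timing checks or error handling; the stimulus parameters are arbitrary and mine, not from the PTB distribution):

    % Minimal Psychtoolbox sketch: show a Gaussian blob until a key is pressed
    AssertOpenGL;
    screenId = max(Screen('Screens'));
    win = Screen('OpenWindow', screenId, 128);           % full-screen window, mid-grey background

    [x, y] = meshgrid(-127:128, -127:128);
    blob   = 128 + 100*exp(-(x.^2 + y.^2)/(2*30^2));     % Gaussian luminance increment
    tex    = Screen('MakeTexture', win, uint8(blob));

    Screen('DrawTexture', win, tex);
    vbl = Screen('Flip', win);                           % stimulus appears at this vertical blank
    KbWait;                                              % wait for any key press
    sca;                                                 % shorthand for Screen('CloseAll')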
The PsychToolbox Forum is for questions and discussions (it was previously on Yahoo Groups but since Jan. 2020 is here on Discourse). To get started at the new site, click the blue sign-up button in its upper-right corner. For more information, see the banner at the top of the home page.
A set of 40+ tutorials and demos for the PsychToolbox was compiled by Peter Scarfe. They should be of help to people who wish to learn how to use Psychtoolbox. Each tutorial has a picture or movie showing what the tutorial should produce. All code should work "as is" simply by cutting and pasting it into Matlab or Octave.
The most general one for a long time was Denis Pelli's (formerly Syracuse U., now New York U.) VideoToolbox for the Mac. It was a collection of 200 subroutines written in C, using Apple's Macintosh (pre-OSX) Toolbox, compatible with Metrowerks and THINK C, 68k and PowerPC. Things which were not hardware specific, like threshold estimation, were in standard C and could be used on other machines. The VideoToolbox was free and was available on Denis Pelli's web site. There was a mailing list for users and prospective users (email). Higher than 8 bit grayscale resolution on the Mac could be achieved by using Denis Pelli's Video attenuator. It worked by combining colour channels to achieve 12 bit on a black-and-white monitor. (Other ways to achieve higher than 8 bit grayscale resolution are the Bits++ box, Tyler's color bit stealing technique, or the noisy-bit method. There are now also the 12-bit monitors by VPixx. Go to the section on monitors.)
The VideoToolbox is superseded by the Psychophysics Toolbox or just PsychToolbox.
While we're at it: Check out Denis' tips on scientific writing!
The Psychophysics Toolbox, written by David Brainard in collaboration with Denis Pelli, is a package of interface routines to use the VideoToolbox from within Matlab on the Macintosh. It is superseded by the PsychToolbox.
Denis Pelli provided a general advice page, centered around the VideoToolbox, on how to set up psychophysical experiments on the Macintosh. It seems to be discontinued.
As computers got faster, interpretative high level languages like Mathematica and Matlab became particularly attractive. A number of packages bridge the gap between these "slow" languages and the time-critical task of stimulus presentation.
See above
ShowTime, developed by Andrew Watson, James Hu, Cesar Ramirez, and Denis Pelli, was a package for presenting calibrated dynamic stimuli from high-level languages (Mathematica, Matlab, Visual Basic), using Apple's QuickTime technology. As such it ran on both the Macintosh and the PC. The package is no longer supported because of OS changes.
MatVis by Thom Carney of Berkeley was a package for presenting precisely-timed stimuli from within Matlab for the PC. It was extended considerably into WinVis (see below). Both are no longer available.
Cogent is a (now) freeware Matlab Toolbox for stimulus generation and presentation, and response recording on the PC, in an fMRI environment, initially (1995) written to allow a novice in the Functional Imaging Laboratory (University College London) to do brain imaging and psychophysical studies. It consists of Cogent 2000 for the precisely timed interaction with peripherals, and Cogent Graphics for the graphics.
Cogent 2000, besides presenting stimuli and recording responses with precise timing, provides additional utilities for the manipulation of sound, keyboard, mouse, joystick, serial port, parallel port, subject responses and physiological monitoring hardware. For fMRI experiments, Cogent 2000 can be configured to receive synchronization pulses from a scanner, allowing experimental timing to be tightly coupled with image acquisition.
Cogent Graphics, written by John Romaya (in Zeki's Lab) is a graphics toolbox for Matlab on the PC. It can generate realtime graphical animations for use as stimuli in visual experiments. Cogent Graphics provides commands for generating animations with millisecond accuracy at refresh rates up to 160 Hz as well as mouse and keyboard support for user input.
The project is funded by the Wellcome Trust; groups involved in the development are the Laboratory of Neurobiology, the Functional Imaging Laboratory (the place where SPM was developed), and the UCL Institute of Cognitive Neuroscience.
Palamedes, written by Nick Prins (University of Mississippi) and Fred Kingdom (McGill Univ.), is a free toolbox of Matlab or GNU Octave routines for the analysis of psychophysical data. It allows
All user-end routines include elaborate help comments, including example code that will execute when typed into command window. Also provided is a set of heavily commented demonstration scripts which guide the user through all of the major analyses. Palamedes now also calculates measures for assessing stimulus summation in detection tasks.
Palamedes requires only basic Matlab, with no additional toolboxes (it is compatible with older versions of Matlab, going back to at least release 14 [2004] and likely much further). Alternatively, it works with GNU Octave, making it available to folks who want to be independent of Matlab. Kingdom & Prins' textbook on psychophysics makes use of Palamedes.
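As an illustration of the kind of call involved – a sketch with invented data; consult the Palamedes demos for the authoritative usage, since argument details may change between versions – fitting a 2AFC psychometric function looks roughly like this:

    % Sketch of a psychometric-function fit with Palamedes (2AFC, logistic PF); data are invented
    StimLevels = [0.01 0.02 0.04 0.08 0.16 0.32];   % e.g. contrasts
    NumPos     = [21   26   32   38   45   49  ];   % correct responses at each level
    OutOfNum   = [50   50   50   50   50   50  ];   % trials per level

    PF = @PAL_Logistic;                 % could also be PAL_Weibull, PAL_Gumbel, ...
    searchGrid.alpha  = 0.01:0.001:0.3; % threshold candidates for the initial brute-force search
    searchGrid.beta   = logspace(0, 2, 50);
    searchGrid.gamma  = 0.5;            % guess rate fixed at 2AFC chance level
    searchGrid.lambda = 0.02;           % lapse rate fixed

    paramsFree = [1 1 0 0];             % fit alpha and beta, fix gamma and lambda
    [params, LL, exitflag] = PAL_PFML_Fit(StimLevels, NumPos, OutOfNum, ...
                                          searchGrid, paramsFree, PF);
    threshold = params(1);              % alpha of the fitted logistic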
MGL is a library for Matlab and Octave that allows easy programming of visual (and some auditory) stimuli using OpenGL. Current versions run on Mac OS X and Linux; a Windows port is underway. MGL was developed jointly by Justin Gardner and Jonas Larsson at New York University, and others. A mirror of the documentation and a Linux version can be found here.
Psychophysica by Andrew Watson (formerly NASA) & Joshua Solomon (City University, London) is a suite of Mathematica Notebooks containing functions for collecting and analyzing data and presenting stimuli in psychophysical experiments. The components (in 2015) were Psychometrica.nb, Quest.nb, and SimulateQuest.nb. The package was unavailable for a time because of OS changes but was partly resuscitated by Josh Solomon in Feb. 2019. The most useful is currently probably Psychometrica.
The three components – Psychometrica, QUEST, and SimulateQuest – were designed to work together but could each be used on its own. Each included a brief tutorial. The package was described in detail in the mentioned special issue of Spatial Vision (Part I). For QUEST+ there is a new article in the Journal of Vision.
The Vision Egg aims at addressing many of the points raised in my introduction above. It is a cross-platform programming library under Windows/Linux + SGI Irix/Mac OS X, to produce visual stimuli for vision research experiments on standard graphics cards. The primary author is Andrew Straw. Citing from the web page, "The Vision Egg allows the vision scientist (or anyone else) to program these cards using OpenGL, the standard in computer graphics programming. Potentially difficult tasks, such as initializing graphics, getting precise timing information, controlling stimulus parameters in real-time, and synchronizing with data acquisition are greatly eased by routines within the Vision Egg." A popular, fast and well-supported graphics card on the PC is from nVidia. The 10 bit DACs/LUTs available on most recent video cards are supported. (The advantage of using an SGI is that it has true 10-bit grayscale resolution, i.e. 10-bit framebuffers. Other video cards which support 10-bit framebuffers in OpenGL will work similarly once available). The Vision Egg comes with demo applications. Because it is an open-source project (GNU LGPL), it is free to download and use.
Shady by Jeremy Hill is "an open-source Python-based visual stimulus rendering engine/toolbox – perhaps somewhat similar to VisionEgg in scope, but with particular performance advantages in multi-tasking applications and/or sub-optimal hardware implementations." Jeremy hopes "that it will be as useful to the vision-science and wider neuroscience community as it has been to us in our efforts to translate laboratory measurement techniques into challenging clinical contexts."
On the web page it says, "a general-purpose visual stimulus toolbox filling a similar role to Psychtoolbox, VisionEgg, or PsychoPy. It is for programmers who work in neuroscience, especially vision science, and addresses their need for high timing precision, linearity, high dynamic range, and pixel-for-pixel accuracy."
"It takes its name from its heavy reliance on a shader program to perform parallel pixel processing on a computer’s graphics processor. It was designed with an emphasis on performance robustness in multi-tasking applications under unforgiving conditions. For optimal timing performance, the CPU drawing management commands are carried out by a compiled binary engine."
Here is the full documentation.
If you use Shady in your work, please cite:
Hill NJ, Mooney SWJ, Ryklin EB & Prusky GT (2019). Shady: a Software Engine for Real-Time Visual Stimulus Manipulation. Journal of Neuroscience Methods. DOI.
See below under Psychological Experimenting Systems.
See below under Psychological Experimenting Systems.
Ken Knoblauch from Lyon, France, has written three libraries for R that are aimed at psychophysics and scaling – psyphy, MLDS, and MLCM – as well as a new book on modeling psychophysical data in R (see also the section on psychophysical data analysis below: statistics, psychometric function fitting, SDT).
psyphy is an assortment of functions useful in analyzing data from psychophysical experiments. It includes functions for calculating d' from several different experimental designs, routines for fitting the (m-afc) psychometric function using the generalized linear model, and selfStart functions for estimating gamma values from CRT (and possibly other RGB) screen calibration data (a selfStart function is a special kind of function in R that includes a nonlinear model specification as well as a function to estimate initial values for the fitting process directly from the data). See their paper in Behav. Res. Meth. Instr. Comp. mentioned below.
MLDS implements analyses for Maximum Likelihood Difference Scaling. Difference scaling is a method for scaling perceived super-threshold differences. The package contains functions that allow the user to fit the resulting data by maximum likelihood and to test the internal validity of the estimated scale. There are also example functions that might be used to design and run a difference scaling experiment.
MLCM provides tools for analyzing conjoint measurement experiments. Conjoint measurement is a psychophysical procedure in which stimulus pairs are presented that vary along two or more dimensions and the observer is required to compare the stimuli along one of them. The package contains functions to estimate the contribution of the n scales to the judgment by a maximum likelihood method under several hypotheses of how the perceptual dimensions interact. It is described here and here. The MLCM package is available from the CRAN archive, and once it is installed, a manual will be on the machine. For questions contact Ken Knoblauch.
Once you have installed R from CRAN, you can get any of the packages from within R using the function install.packages() (provided that you are connected to the internet). A direct link is above. Any suggestions, criticisms, bug reports, etc. are always welcome.
Hans Irtel's PXL was superseded by his web-based PXLab, which was developed as part of a Virtual University. The experiments in Irtel's book (Stroop effect, Sternberg paradigm etc.) were available in PXLab.
PXL: For visual perception, including visual psychophysics, on the PC and on Unix machines, there is Hans Irtel's (formerly Regensburg, now Mannheim) library PXL. It is written in C and runs on PCs under MsDos (including the DOS in Windows95/98) and on Unix machines under X-Windows. The library supports 8 bit per color channel on many SuperVGA boards, and there are specially compiled versions for Texas Instruments TIGA boards and for miro SuperVGA boards based on S3 graphics accelerator chips. It also has support for 8-bit monochrome gray-scale (important for contrast-sensitivity measurement) on some SVGA boards including the popular Matrox Mystique and later Matrox boards. There is further an interface to the Cambridge Research Systems VSG2/x boards (see below), so applications developed for standard VGA can use a high-quality back end. About 100 well-known psychology experiments have been developed as applications running on top of the library, some of which are described in a book (see below). Some might know Irtel's demonstration program CVD (see below) which has been written with this library. The library, documentation, and demos are available from the web page. The complete PXL manual is on the web. The main disadvantage of PXL is the dependence on specific graphics boards (stemming from the DOS architecture), and this is the reason why it is not further developed. The successor is the Java-based PXLab.
There is further an interface package to Fortran, FORPXL, written by Martin Jüttner (now Aston University), that provides access to PXL from applications written in the Fortran language, thus allowing these older applications to be used on current systems without the need for a rewrite. FORPXL is described in the special issue of Spatial Vision (Part I).
A further set of C routines for psychophysical experimentation under DOS is the library exp by Don McLeod, developed out of earlier work by Jeff Mulligan, John Krauskopf, and Walter Kropfl. The documentation of how to use it is contained in the program sources. Interaction with the user is by parameter files. The library contains a 'staircase with "conservatism" to limit excursions from previously presented values'. There is further a set of Matlab scripts for the analysis of data acquired with exp.
Peach, written by Yury Petrov (then of Northeastern University), is a large C++ library for visual psychophysics, running under Linux or Mac OS X. It contains an extensive collection of visual primitives (Gabors, Disks, Bars, Text, Images, etc.) as well as a set of experimental functions, which are said to allow coding a complicated experiment in a few dozen lines. It features subpixel resolution, gamma correction, 10-bit pixel depth (by dithering) and 13-bit pixel depth (via CRS Bits++ box), stereoscopic presentation (anaglyphic, mirrors, or shutters), Method of Constant Stimuli and the Kontsevich & Tyler (1999) adaptive thresholding algorithm, graphical user interface, OpenGL support, command line scripting (Perl), and more. Only basic knowledge of C is required.
(For software concerning the underlying psychometric principles see Section D below, and a special issue of Perception & Psychophysics (Vol. 63) on the topic.)
Adaptive threshold measurement procedures, often also referred to as staircase methods, are (for stationary thresholds) the quickest and most efficient way of measurement. QUEST was not the first but is the most widely known. My own favorite package was ML-PEST, because of its excellent documentation and the resulting ease of use. A (now classic) review on the matter is Treutwein's "Minireview" (below).
QUEST (Watson, A.B. & Pelli, D.G., 1983, Perception & Psychophysics 33, 113-120), probably the best known of all, is available in a number of implementations, including as a Mathematica Notebook and in Palamedes. See Psychophysica (above) for the former.
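Psychtoolbox also ships a Matlab implementation (QuestCreate, QuestUpdate, QuestQuantile, QuestMean). As a rough illustration only – the parameter values are arbitrary and the simulated observer is my own, not part of QUEST – a simulated run looks like this:

    % Simulated QUEST run with the Psychtoolbox implementation (arbitrary numbers)
    tGuess = -1;  tGuessSd = 2;            % prior threshold guess +/- sd, in log10 contrast
    q = QuestCreate(tGuess, tGuessSd, 0.82, 3.5, 0.01, 0.5);   % pThreshold, beta, delta, gamma

    tTrue = -1.3;                          % threshold of the simulated observer
    for trial = 1:40
        tTest    = QuestQuantile(q);       % recommended stimulus placement
        pCorrect = 0.5 + 0.5*(1 - exp(-10^(3.5*(tTest - tTrue))));   % Weibull observer, 2AFC
        response = double(rand < pCorrect);
        q = QuestUpdate(q, tTest, response);
    end
    thresholdEstimate = QuestMean(q);      % final estimate (log10 contrast)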
SimulateQuest simulates QUEST. It runs in Mathematica. See Psychophysica.
QUEST+ is a new (2017) and extended version and implementation that, to quote the authors, "overcomes many shortcomings of previously described procedures, including the original QUEST."
With it one can:
It is described in Watson, A. B. (2017), QUEST+: A general multidimensional Bayesian adaptive psychometric method. Journal of Vision, 17(3), 10-10.
A Mathematica implementation, and a standalone demonstration are available online with the journal article.
The Matlab implementation of QUEST+ has been produced by David Brainard. Its code is fully compatible with PsychToolbox. The code is available at https://github.com/brainardlab/mQUESTPlus. Comments on the method should go to Andrew Watson, comments on the code to David Brainard.
Lew Harvey's (Boulder, CO) library MLPest (previously called ML-TEST) is notable for its excellent documentation (in the code and in a paper), which eased my use of it immensely. It implements a maximum likelihood staircase procedure, is available in standard C and C++, and should be mostly machine independent. For the Mac and the PC, executable files can be downloaded from Harvey's web page. There is a good treatment in Harvey (1986), Behav. Res. Meth. Instr. Comp. 18, 623-632, and in Harvey (1997), which appeared in Spatial Vision 11, 121-128.
Bernhard Treutwein has a package YAAP for adaptive staircasing, based on Bayesian statistics, which has certain refinements over the maximum-likelihood approach. The package is written in Component Pascal but if requested can be provided machine-translated to C.
For a comprehensive overview of adaptive procedures see his review: Treutwein, B. (1995). Minireview: Adaptive psychophysical procedures. Vision Research 35, 2503-2522.
Pentland's (1980) best PEST has been implemented by Hans-Jörg Zuberbühler as a browser-based "application for non-expert users" which allows easy calculation of individual thresholds. It can be downloaded from the ETH Zürich.
Kontsevich & Tyler's thresholding algorithm (Vision Res. 39:2729-37, 1999) is implemented e.g. in Yury Petrov's Peach C++ package, in PsychToolbox, and in Palamedes.
Palamedes (see the entry above) contains several adaptive thresholding algorithms: up/down, bestPEST, QUEST, Psi Method
FAST, written by Ed Vul, is a Matlab toolbox for running advanced staircases. "Functional" refers to the fact that FAST operates over two-dimensional functions (e.g., the contrast sensitivity function), where the probability of response depends on stimulus strength (e.g., contrast) and another variable (e.g., spatial frequency). You might use it when estimating thresholds as a function of another variable, rather than running multiple independent staircases (à la the method of 1000 staircases).
Here are the manual (Vul & MacLeod) and the toolbox files.
Arthur Lugtigheid (Birmingham) has written a relished Matlab toolbox implementing the Transformed Up-Down method as specified by Levitt (1971). It allows for interleaving of multiple staircases and "is (hopefully) developed to be as versatile as possible."
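For orientation, the 2-down/1-up logic that such a toolbox wraps (converging on about 70.7% correct; Levitt, 1971) fits into a few generic Matlab/Octave lines. This is a bare-bones sketch with a simulated observer, not the interface of the toolbox above:

    % Bare-bones 2-down/1-up staircase with a simulated 2AFC observer
    level = 1.0;  step = 0.1;                      % starting stimulus level and step size
    tTrue = 0.4;                                   % threshold of the simulated observer
    nCorrect = 0;  lastDir = 0;  reversals = [];

    for trial = 1:80
        pCorrect = 0.5 + 0.5 ./ (1 + exp(-(level - tTrue)/0.05));   % logistic observer
        if rand < pCorrect                         % correct response
            nCorrect = nCorrect + 1;
            if nCorrect < 2, continue; end         % need two in a row before stepping down
            dir = -1;  nCorrect = 0;               % make the task harder
        else                                       % error
            dir = +1;  nCorrect = 0;               % make the task easier
        end
        if lastDir ~= 0 && dir ~= lastDir          % direction change = reversal
            reversals(end+1) = level;
        end
        level   = level + dir*step;
        lastDir = dir;
    end
    threshold = mean(reversals(max(1,end-5):end)); % average the last few reversal levels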
Treutwein's (1995) Minireview: Adaptive Psychophysical Procedures. Vision Res. Vol. 35. No. 17, pp. 2503-2522, is in my opinion still the best scholarly review on adaptive procedures.
Colorlab is a Matlab toolbox for color computation and visualization in general-purpose quantitative colorimetric applications like color image processing or psychophysical experimentation. It uses colorimetrically meaningful representations of color and color images (tristimulus values, chromatic coordinates and luminance, or dominant wavelength, purity and luminance), in any system of primaries. Colorlab relates these colorimetric representations to the usual device-dependent discrete-color representation, i.e. it solves the problem of displaying a colorimetrically specified scene on the monitor within the accuracy of the VGA.
A number of interesting color representations are further provided, such as CIE uniform color spaces (L*a*b* and L*u*v*), opponent color representations, and color appearance representations (RLab, LLab, SVF, ATD and CIECAMs). All these representations are invertible.
Colorlab includes visualization routines to represent colors in the tristimulus space or in the chromatic diagram of any color basis, as well as an advanced vector quantization scheme for color palette design. An extensive color database is further included, with the CIE 1931 color matching functions, reflectance data of 1250 chips from the Munsell Book of Color, MacAdam ellipses, normalized spectra of a number of standard CIE illuminants, matrices to change to a number of tristimulus representations, and calibration data of an ordinary CRT monitor.
The program is developed by Jesús Malo and José Luque at the University of València. It can be downloaded and tested for free from the VISTA web site. A comprehensive user guide (220 pages) is included.
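The core of what such a toolbox automates – going from a colorimetric specification to device values – can be sketched generically in a few lines of Matlab/Octave. The primary matrix and gamma below are invented; this is not Colorlab's interface, just the underlying computation.

    % Generic sketch: map desired CIE XYZ tristimulus values to device RGB via monitor calibration
    % Columns = measured XYZ of the R, G, B primaries at full drive (hypothetical numbers)
    M = [41.2 35.8 18.0
         21.3 71.5  7.2
          1.9 11.9 95.0];
    gam = 2.2;                          % fitted gamma, here one value for all guns

    XYZwanted = [30 35 40]';            % the colour we want, on the same luminance scale as M
    rgbLinear = M \ XYZwanted;          % linear gun intensities, 0..1 if inside the gamut
    rgbLinear = min(max(rgbLinear, 0), 1);          % clip out-of-gamut values (crudely)
    dac       = round(255 * rgbLinear.^(1/gam));    % digital values to send to the card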
For a different purpose, but closely related to psychophysical interests, are libraries for image processing. Of them, the IPRS system was particularly interesting because it had been developed by people who also worked in psychophysics. IPRS stood for "Image processing and recognition system"; the person primarily responsible was Terry Caelli (previously in Edmonton/Alberta and Melbourne/Australia) but quite a number of people had contributed. It was a collection of 500+ routines, covering a wide range. It was written in C and ran on a range of platforms including Suns, SGIs (under X-Windows) and PCs (under Linux). A 450-page user manual, written in LaTeX, came with it. Although the system was user friendly once you had it up and running, the installation was non-trivial. The system is described in the 1997 special issue of Spatial Vision; it was public domain.
Fiser & King offer more specialised image-processing software for Gabor-wavelet decomposition-based filtering of images. The package is described in the special issue of Spatial Vision (Part I) and is available from the authors.
A large package for image processing is QuIP. Since it does more than that it is listed as separate entry below.
An interactive website for image processing based on natural scene statistics from Bill Geisler & Jeff Perry at U Texas allows using its built-in algorithms or one's own, with results returned in real time. Though its primary aim is to compare algorithms between labs, it's also useful for processing one's own images. Currently implemented are algorithms for noise reduction and upsampling (more will be added). It is described on an associated website which also provides image examples, algorithm comparisons, and downloadable databases of natural images.
Two interactive little teaching projects (2D Fourier transform and Image filtering) are found on Ignacio Serrano-Pedraza's website.
QuIP (QUick Image Processing) (@GitHub) is a complete environment for image processing, psychophysical experimentation, and data analysis written by Jeff Mulligan from NASA. Since QuIP is interpretative it can be used interactively, line-by-line. Large programs are typically written directly in the scripting language. Graphical user interfaces may be constructed in the scripting language, and linkage is provided to a number of external libraries such as OpenCV, GSL, etc. Scripts can be executed with very little modification on nVidia GPUs on systems where CUDA is installed. The system has recently been ported to Apple's iOS for iPad and iPhone (to be released soon). The system builds on UNIX systems such as Linux and Mac OSX using standard tools.
The Grouping Elements Rendering Toolbox (GERT) developed by GestaltReVision of Leuven, Belgium, is a Matlab and Octave Toolbox for constructing perceptual grouping stimuli. It is open source and free and comes with a manual and demo files. A paper on it has been published by Demeyer & Machilsen (2011) in Behavior Research Methods. On the right are some example stimuli.
The R packages from the Open Perimetry Initiative provide a free-to-use, exciting Open Source library for psychophysical research in peripheral vision and clinical neuro-ophthalmology and neuropsychology. It allows external control of commercial visual-field testing machines (perimeters) such as the Octopus 900 (Haag-Streit) and Heidelberg Edge Perimeter (Heidelberg Engineering), using the R language. These commercial machines have built-in gaze tracking and calibration and allow wide fields of view, so they are useful for vision tasks that require peripheral vision. The project was recently described in a Methods paper in Journal of Vision.
The BioMotionToolbox by Jeroen van Boxtel is an "easy-to-use tool to extract, present and manipulate motion capture data in a Matlab environment". The toolbox allows you to import motion capture data in a variety of formats, to display actions using Psychtoolbox 3, and to manipulate action displays in specific ways (e.g., inversion, three-dimensional rotation, spatial scrambling, phase-scrambling, and limited lifetime) – i.e., to code experiments that use biological motion stimuli with a minimal level of Matlab programming skills. The toolbox is described in a paper in Journal of Vision
VCRDM by Maria Mckinley is a library of codes for the PsychToolbox for creating random-dot motion stimuli of variable coherence. It is based on the dots patches originally created by Tony Movshon.
ImaGen by the Computational Systems Neuroscience Group in Edinburgh is a free Python package for generating two-dimensional patterns useful for vision research and computational modeling. It provides highly configurable, resolution-independent input patterns (directly visualizable using HoloViews) but does not require any plotting package, so that the patterns can be incorporated directly into the user's computational modeling or visual stimulus generation code. With ImaGen, any software with a Python interface can immediately support configurable streams of 0D, 1D, or 2D patterns, without extra coding (see also Topographica below).
The Freiburg Visual Acuity Test FrACT (Freiburger Visustest) is a widely-used, free multi-platform visual test battery developed by Michael Bach in Freiburg. Initially its purpose was measuring Landolt-C acuity on the Macintosh but it has long since been generalized to be fully platform independent and extended to include contrast sensitivity to Landolt-Cs and gratings, as well as Vernier acuity. It uses some skillful programming to overcome pixel and grey-value resolution limits (Bach, 1996). It is available without cost from Michael Bach's web site.
MORPHONOME is a stand-alone Mac application for the generation and presentation of 2-D grayscale, motion and color images for psychophysical testing of perceptual limits, developed by Tyler, McBride and Miller. It is written in CodeWarrior C and requires only a color monitor (laptop machines included). No additional hardware is required. The program allows the input of any set of user-defined static grayscale images of any size for randomized sequential presentation. Paradigms include psychophysical detection threshold, contrast masking, contrast discrimination, contrast matching, adaptation, lateral interactions, etc. Psychophysical techniques include rapid Y/N or 2AFC staircases and constant stimulus methods. Output contrast can be varied in increments as low as 0.2%! (by use of their software "bitstealing" technique). A variety of temporal presentation profiles is available. Initial cost is $300 including email support but the software may be distributed within the lab. See Tyler's web site (www.ski.org/cwt) and Part I of the special issue of Spatial Vision.
qCSF, developed by Luis Lesmes at Salk and Zhong-Lin Lu at USC, is a member of a newly developed set of "quick" thresholding procedures, where adaptive threshold measurement is combined with heuristic, time-saving assumptions. The 2AFC Matlab version for measuring the spatial contrast sensitivity function is (was?) free to use for non-commercial purposes. Since only four key CSF parameters are measured, a full CSF is obtained in ~100 trials. The qCSF is described in a 2010 paper in Journal of Vision, in 2010 in IOVS, and in further, more recent ones. Commercial versions for the iPad were said to be underway.
People at City University of London provide a browser-based test for screening for severe colour deficiency. The test is freely available and works on most colour monitors balanced at ~9000K. See Barbur et al. (1994), Proc. Roy. Soc.B., 258, pp 327-334, for the full test.
Four free browser-based emulations of color vision deficiency tests are available on a site called Colblindor:
The tests cannot replace the commercially available professional hardware versions, due to the limitation in principle that a computer display cannot emit monochromatic light. They appear to give a good indication of a user's deficiency, however (the best one, according to the author, is the Android version listed below). The tests are part of an interesting blog on color vision deficiencies (CVD) named Colblindor, by Daniel Flück from Zürich/Switzerland, that he started in 2006 and that touches on many everyday aspects of CVD.
The site also has two useful color-related online tools, one that simulates color blindness (Coblis), and one that helps color-deficient people name a color's hue.
Color Blind Check (Android): Daniel Flück further developed the first psychophysics Android application that I am aware of! It is said to measure type and severity of color vision deficiency in a few minutes, takes statistics of how many tests have been taken, has a link to most recent result, etc. It is free, and available on the website or in Google Playstore.
Eugene Wist, Walter Ehrenstein (†), and Michael Schrauf from Düsseldorf developed a test measuring the sensitivity to motion contrast: "The Düsseldorf Test of Dynamic Vision (DTDV) allows the assessment of visual function solely based on motion contrast. In a computer-generated random-dot display, completely camouflaged Landolt rings become visible only when dots within the target area are moved briefly, while those of the background remain stationary. Detection of gap location relies on motion contrast (form-from-motion) instead of luminance contrast. Task difficulty is graded by changing the percentage of moving dots within the target. The standard version of the computer program easily allows for various modifications. These include the option of a synchronizing trigger signal to allow for recording of time-locked motion-onset visual-evoked responses, the reversal of target and background motion, and the displacement of random-dot targets across stationary backgrounds."
Wist ER, Ehrenstein WH, Schrauf M (1998). A computer-assisted test for the electrophysiological and psychophysical measurement of dynamic visual function. Journal of Neuroscience Methods 80: 41-47.
Here is a paper on an application (age dependency).
StimuliApp is a free app on the iPad and iPhone to create psychophysical tests with precise timing (developed by Rafael Marin, Daniel Linares, Barcelona).
Features:
Two projects using the app are described in papers:
Linares, D., Marin-Campos, R., Dalmau, J., & Compte, A. (2018). Validation of motion perception of briefly displayed images using a tablet. Scientific Reports, 8(1), 1-6.
Linares, D., Amoretti, S., Marin-Campos, R., Sousa, A., Prades, L., Dalmau, J., Bernardo, M. & Compte, A. (2019). Perceptual spatial suppression and sensitivity for motion are weakened in schizophrenia. bioRxiv, 799395.
Little Stimulus Maker by John P. Kelly (Washington) is a freeware DOS program for stimulus generation and presentation on standard graphics cards (Windows 98 and lower). You see features and sample stimuli on the web page (circular gratings, gabor gratings, random dot patterns...).
Pixx, coded by Peter April in conjunction with Michael von Grünau and Rick Gurnsey (Concordia U.) is for motion experiments on the PowerMac and is – I assume – a precursor to VPixx described below. It is available for free from Rick Gurnsey.
ActiveSTIM is a commercial system but there is a fully functional free test version available. For more information on the package see the entry below.
Digital Embryo Workshop written by Mark Brady from NDSU in North Dakota creates strange looking, novel objects.
Here is a list of what it can do:
The method is described in a JOV 2003 paper and more recently in a JoVE 2012 paper. It is available as Download 1 from the Hegdé Laboratory website (see there for more details).
There is second line of software, developed by Evgeniy (Eugene) Bart and Jay Hegdé working under the tutelage of Dr. Kersten:
Download 2. Loose collection of Digital Embryo tools for Cygwin
"Treisman"-like stimuli for studying search and saliency can be generated by a little interactive, web-based program called PIG (Psychophysical Image Generator), developed by Toni Kunić at the Tsotsos Lab in York/Canada.
Background is found in: Calden Wloka, Sang-Ah Yoo, Rakesh Sengupta, Toni Kunic, and John Tsotsos (2016). Psychophysical Evaluation of Saliency Algorithms. Proc. Vision Sciences Society (VSS). Poster, F1000 Research. Two examples are shown on the right.
Jeda (Jean EDition Animation), developed by Jean Lorenceau, is a free standalone Windows software for editing static and moving visual stimuli and performing psychophysical experiments. It uses a Graphic User Interface, with no need for knowing a programming language; a user guide and on-line help are available. Jeda is easy to install (and to remove!) by just copying files in a Jeda directory. A large number of stimuli, including examples and illusions, are available.
Jeda is written in C++ with a Borland compiler (CBuilder) and runs under 64-bit Windows (XP, W7, and W10); it uses DirectX (DirectX7, Microsoft) and some of the TDxLibrary developed by Daren Dwyer (http://tdxlibrary.org/Welcome.htm). Jeda is interfaced with several types of eye trackers (EyeLink, TheEyeTribe, SMI Red 250, LiveTrack, Pertech), cerebral imaging devices (EEG/MEG/fMRI), as well as a joystick or tablet (Wacom). Jeda can also edit and export images (bmp files) and movies (avi files) that can easily be converted to any format using other tools (e.g. VLC). All movies and images on Jeda’s website were edited with Jeda. In addition to the graphics and animation editing tools, it is possible to make device-contingent interfaces (Mouse, Gaze) and to edit games and/or psychophysical experiments (all of Lorenceau’s scientific papers rely on Jeda, he says). As Jeda includes Serial Port Objects, it can easily be interfaced with different devices, such as Arduino boards.
To get an idea on what Jeda can do for you, download the UserGuide and demos from the web site.
You can freely download Jeda Software together with many example files and a documentation here: https://osf.io/yx4hg/ , and run it on your computer (PC Windows XP, W7, W10). (See the readme.txt file for installing Jeda, or send an email).
The Jeda C++ source code and CBuilder compiler are available upon request.
The main structure and principles underlying Jeda are described in Lorenceau & Humbert, 1990.
— Perceptual Experimenting & General Psychology —
FlashDot
FlashDot is a stand-alone program for generating and presenting visual perceptual experiments that require high temporal precision. Developed by Tobias Elze, it "aims to be an alternative to such popular libraries or packages as the Psychophysics Toolbox, VisionEgg, or PsychoPy". It runs on many platforms (Linux, Mac OSX, Windows). Experiments are controlled either by a specific scripting language or by XML; it includes a large number of mathematical and statistical functions.
Over 100 different experiments from all areas of experimental psychology ran on top of Irtel's PXL library, each representing a whole class of experiments since the controlling parameters were read from a file. Six of these constituted a course in experimental psychology and were described in a nice (German) book: Hans Irtel, Experimentalpsychologisches Praktikum, Springer 1993, ISBN 3-540-56330-X. See the PXL manual and the entry on PXL above.
PXL was replaced by PXLab. Since Hans Irtel left us in 2008, and commercial packages exist, I'm afraid nobody will take over the maintenance.
PsyScope was a powerful, easy-to-use and free general purpose program for psychological experiments on the Macintosh that could be used for research in visual perception. The development of the original PsyScope has ceased long ago (around 1998) but it appears to still work smoothly on Macs running OS7 to OS9.
The current PsyScope X for OS-X (development started 2009) is also available free of charge (if without support). On the web page you also find the manual, a tutorial and more. For support, there is an email list and an email archive. The program's general purpose and function are described in detail in Cohen, J., MacWhinney, B., Flatt, M. & Provost, J. (1993). PsyScope: An interactive graphical system for designing and controlling experiments in the Psychology laboratory using Macintosh computers. Behav. Res. Meth. Instr. Comp. 25, 257-271. There is a short description in Part II of the special issue of Spatial Vision.
Citing from the abstract in Behav. Res. Methods, "PsyScope is an integrated environment for designing and running psychology experiments on Macintosh computers. The primary goal of PsyScope is to give both psychology students and trained researchers a tool that allows them to design experiments without the need for programming. PsyScope relies on the interactive graphic environment provided by Macintosh computers to accomplish this goal. The standard components of a psychology experiment – groups, blocks, trials, and factors – are all represented graphically, and experiments are constructed by working with these elements in interactive windows and dialogs. In this article, we describe the overall organization of the program, provide an example of how a simple experiment can be constructed within its graphic environment, and discuss some of its technical features (such as its underlying scripting language, timing characteristics, etc.). PsyScope is available for noncommercial purposes free of charge and unsupported to the general community."
Support for a commercial version was discontinued when programs for the PC took a large share of the market, but as the authors say "until some of these become available for the Macintosh PsyScope remains as possibly the best way to build experiments on the Macintosh".
For further integrated systems for psychological experimenting see the section Psychological Experimenting Systems below.
nrec, developed by Friedemann Bunjes and Jan Gukelberger at the Neurology Department of Tübingen University, is a package for stimulation, data acquisition and experiment control geared towards cognitive fMRI research, which has been expanded to include flexible biosignal acquisition and sensory stimulation. It runs under Linux on PCs with standard graphics cards, produces Matlab-readable files and generally puts an emphasis on standards like using the comedi Linux data acquisition library, OpenGL, the XML parameter file format and the hdf5 data storage file format. See the screenshots and sample setups, and the YouTube video for a quick impression.
NeuroDebian provides a free turnkey software platform and repository of maintained, free and open-source software for Cognitive Neuroscience and Experimental-Psychology/Psychophysics, for Debian GNU/Linux and Ubuntu. The NeuroDebian virtual appliance makes it easy to start using this powerful Linux system under other operating systems (Windows, Mac OS, etc.). Here is a list of psychophysics packages that are currently included, and here is a list with more details. NeuroDebian is also available as Docker and Singularity image(s).
... is a free "test of mid-level vision for stroke patients". It is listed in Category (L) Visual Neuropsychology. Go here.
The browser-based Online Gabor-patch generator is an interactive website that, as the name implies, allows you to create Gabor stimuli with various envelopes on the fly.
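For those who prefer to generate such stimuli in their own scripts, the underlying construction is simple: a sinusoidal carrier multiplied by a Gaussian envelope. Here is a minimal numpy sketch of that construction (it is not the generator's own code; parameter names and values are illustrative):

    # Minimal Gabor-patch sketch: sine-wave carrier windowed by a Gaussian envelope.
    import numpy as np

    def gabor(size=256, sf_cycles=8.0, sigma_frac=0.15, theta_deg=0.0, phase_deg=0.0):
        """Return a size x size Gabor patch with values in [-1, 1]."""
        half = size / 2.0
        y, x = np.mgrid[-half:half, -half:half] / size        # coordinates in [-0.5, 0.5)
        theta = np.deg2rad(theta_deg)
        xr = x * np.cos(theta) + y * np.sin(theta)             # rotate the carrier orientation
        carrier = np.cos(2 * np.pi * sf_cycles * xr + np.deg2rad(phase_deg))
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma_frac**2))   # Gaussian window
        return carrier * envelope

    patch = gabor(theta_deg=45)   # a 45-deg Gabor with 8 cycles across the image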
... from visual-field-software.com is listed in Category (E) Ophthalmology/Optometry/Neuroophthalmology. Go here.
... is a free visual-field analysis program. It is listed in Category (E) Ophthalmology/Optometry/Neuroophthalmology. Go here.
... is Lars Frisén's free perimetry/campimetry, based on his rarebit principle. It is listed in Category (E) Ophthalmology/Optometry/Neuroophthalmology. Go here.
... is a free and open-source software for visual field examination in humans (Windows, Linux and Mac). It is listed in Category (E) Ophthalmology/Optometry/Neuroophthalmology. Go here.
... are two free browser-based visual field tests by Lars Frisén, based on his rarebit principle. They are listed in Category (E) Ophthalmology/Optometry/Neuroophthalmology. Go here.
... is a free iPad-based central-visual-field test by Lars Frisén for self-monitoring of AMD, based on his rarebit principle. It is listed in Category (E) Ophthalmology/Optometry/Neuroophthalmology. Go here.
Project LITE: Light Inquiry Through Experiments by Kenneth Brecher, Boston University, is a project for software, curriculum and materials development. You get free applets, colorful high-quality images, surface patterns for spinning tops, and other materials which are fun to use at home or in the classroom. Here is an example, Nick Wade's "Chrysanthemum".
Hans Irtel had also developed a PC program called CVD (Colour Vision Demo) for demonstrating colour illusions. It is especially useful for didactic purposes.
The iNSIGHT color mixing lab made by John Baro is a free iPad app for demonstrating additive and subtractive color mixing. It is one of a series of apps for the iPad described below.
Ronald Hübner, Konstanz, developed a similar program, VIWO, also for the PC. It covers a range of well-known visual illusions and allows user interaction. Its main use is for didactic purposes but it can serve as a starting point for experiments.
Both these programs are * V E R Y C O L O R F U L *
... is a free app for the iPad which simulates visual field defects. It is listed in Category (E) Ophthalmology/Optometry/Neuroophthalmology Go here.
PsyPad is an open source app for the iPad for creating staircase and method-of-constant-stimuli (MOCS) procedures without writing code. The user just generates the images and configures the tests. Results are sent to an external server together with full logging information. A server is provided for that purpose in the land down under. Here is a video showing how to create a MOCS procedure with PsyPad.
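For readers less familiar with the staircase idea itself, here is a minimal, illustrative Python sketch of a classical 1-up/2-down staircase (which converges on roughly 70.7% correct). It is not PsyPad's implementation, just the generic logic, run here against a dummy simulated observer:

    import random

    def run_staircase(threshold=0.1, level=1.0, step=0.05, n_trials=60):
        """Simulate a 1-up/2-down staircase on a contrast-like stimulus level."""
        consecutive_correct = 0
        history = []
        for _ in range(n_trials):
            # dummy observer: more likely correct the further the level lies above
            # its (hidden) threshold; replace with real responses in an experiment
            p_correct = 0.5 + 0.5 * min(max((level - threshold) / threshold, 0.0), 1.0)
            correct = random.random() < p_correct
            history.append((level, correct))
            if correct:
                consecutive_correct += 1
                if consecutive_correct == 2:      # two correct in a row: make it harder
                    level = max(level - step, 0.0)
                    consecutive_correct = 0
            else:                                  # one error: make it easier
                level += step
                consecutive_correct = 0
        return history

    trials = run_staircase()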
Ignacio Serrano-Pedraza's Laboratory of Visual Psychophysics in Madrid offers interactive teaching materials: Signal detection theory, pure-tone masking, 2D Fourier transform, Image filtering, Visual system organisation, random-dot stereograms, CIE 1931 color space. Three examples are on the right.
The VisionScience page has a useful link list for vision science demos. The list is valuable because of its comprehensiveness but note that some links may be outdated.
A beautiful collection of perception demos is provided by Andrew B. Watson on the VisionScience page. The demos use the interactive CDF format from Wolfram Research so that one can play with parameters and see the effects immediately. To run the demos one needs to download the (free) Wolfram CDF Player.
The textbook Sensation and Perception by Bennett L. Schwartz and John H. Krantz (published 2015 by Sage) has a free Companion website that offers a wide range of perception demos. The TOC lists Methods, Eye, Brain, Object perception, Color, Depth, Movement, Attention, Auditory system, Sound localization, Speech, Music, Touch and pain, Olfaction and taste. The demos work in a standard browser but using them might at places be a little confusing without the book (which as of Nov. 2018 was unavailable).
Educational material on SDT and ROC analysis is found under (D) Psychophysical data analysis, SDT
FechDeck is a tool for learning psychophysical methods. It is an ordinary deck of playing cards that has been modified in certain ways. E.g., card backs are printed with noise patterns that span a range of densities; faces are augmented with line segments arranged in "L" patterns, etc. Fechner's threshold measurement methods of adjustment, limits, and constant stimuli are covered, as well as scaling experiments using Thurstone's ranking, paired comparison, and successive categories methods, and Stevens's magnitude estimation.
It is described in: Ferwerda, J. (2019). The FechDeck: A Hand Tool for Exploring Psychophysics. ACM Transactions on Applied Perception, April 2019, Article No. 9.
What are iGLWidgets? They are self-contained and interactive HTML5 widgets to generate visual stimuli in real-time using WebGL. Each widget includes background information, references and a mathematical description of the stimulus.
All iGLWidgets work with any WebGL-compatible browser on any desktop or mobile platform. They can be also downloaded to run offline on your computer, and even be installed as standalone app on iOS devices like iPads.
This iGLWidgets collection is still small, but we are planning to add more soon so it can become a useful resource of modern online material for teaching visual perception remotely. New widgets can be requested too; those regarded as essential for teaching will be prioritized.
Freely available to all at: https://www.psykinematix.com/widgets/.
(>> back to table of contents)
Cambridge Research Systems builds excellent and affordable PC-based systems for visual psychophysics. For years it had been the well-known VSG 2/5 (which itself had superseded earlier models like the 2/3), now followed by the ViSaGe visual stimulus generator. The ViSaGe hardware is a sleek external box, that of the VSG 2/5 a graphics board (informally called the "Robson board"). Both support high frame rates (ViSaGe up to 200 Hz, VSG 2/5 up to 500 Hz; unfortunately there are no monitors that go that fast; CRS no longer officially offers the 250 Hz model they once had) and high colour/grey-scale resolution (14-15 bit!). There is a range of support software: a library of 250 routines (probably more now) that can be linked to Matlab, C, C++, Pascal (i.e. Delphi), Visual Basic, and Windows DLLs, plus a script language. There is now also a Matlab toolbox. There are further complete applications like the PsychoWin contrast sensitivity software, stereo stimuli, the Optima visual evoked potential recording system, and more. Full library source code is provided; the library is expanded continuously. The price was in the range of 5000 English pounds for the board, including the optical gamma-correction system and the library sources. Contact information is at their home page, which has been newly designed.
Among their many products they also have "a little box" containing an external signal processor (Bits#, the successor to the Bits++ box) to get much improved grey-scale resolution on a standard monitor (>14 bit) and other nice things. Apart from CRS's own software, the box works together with other software systems listed in this overview (Psychophysics Toolbox and PsychoPy). CRS also offer calibration systems and response boxes.
Irtel's PXL (see above) and VisionWorks are alternative libraries to drive this board.
Michael Dorr (then at Schepens) and Luis Lesmes (then at Salk) developed a set of fast Bayesian thresholding procedures for psychophysical measurement on the iPad (see also qCSF above). Viewing distance is automatically detected by the iPad's camera after a simple calibration. Contrast resolution is improved to 0.1% Michelson contrast (11.5 bits) through efficiently implemented spatio-temporal dithering. Reliable CSF estimates are obtained by qCSF after ~ 7 min. A further member of the set is quick Surface for spatio-temporal CSF measurements (i.e., sensitivity measured simultaneously as a function of spatial and temporal frequency). Other members are in the works. Pricing is not decided upon yet, but there will be academic options. It is described in an ECVP 2012 poster, and a paper in IOVS (2013, 54(12), 7266-7273).
Another high-end system is Vision Research Graphics' (Durham, NH, USA) PC-based VisionWorks. It supports Cambridge Research System's VSG system and provides a large range of sophisticated stimulus generation and psychophysical testing procedures. There is further support for display calibration, and they have products for stereo vision. An older version of the system is described in Part II of the special issue of Spatial Vision. For information, contact vrg@curtech.com, Tel.: (603) 868-2090, Fax (603) 868-1352.
WinVis for Matlab, written by Tom Carney of Berkeley, was a nice package for stimulus generation and presentation on the PC, running as an extension to Matlab. There was a web-based stimulus database where users could donate their stimuli. It was marketed by Neurometrics and it superseded (the public-domain) MatVis.
For Psykinematix see the section on Psychological Experimenting Systems below.
ActiveSTIM, by Danko Nikolic at the MPI in Frankfurt, was a Windows software for building experiments in vision, behavioral and cognitive sciences. It provided management of bitmap files in memory, their accurate presentation on the screen, digital I/O and measurement of response times (10 µs accuracy). ActiveSTIM used ActiveX for communication with client applications; the latter could be written in any programming language that supported ActiveX. There was a web site from where to download the package, including example client applications written in different languages.
VPixx Technologies, founded by Peter April, makes high-grade monitors for vision research (VIEWPixx), the DATAPixx video processing and data acquisition system, precise button boxes, and psychophysics software. In particular, however, they now also make an ultra-fast projector (PROPixx) with a stunning 1440 Hz refresh rate! At last, precise measurement of temporal characteristics (a matter of course in the 1960s) is back in psychophysics!
The PROPixx is a 1920×1080 DLP LED color projector with an astounding 1440 Hz refresh rate with deterministic timing. Contrast ratio: 2000:1. It is available with multiple projection-lens options, including short-throw lenses for CRT-replacement applications, and long-throw lenses for MRI/MEG applications. For the first time since (literally) many decades, visual psychophysics of temporal characteristics can be studied again and now with off-the-shelf modern apparatus. (We would have loved to have one for Poggel et al., 2012!)
The VIEWPixx, to quote from the website, "is a complete display toolbox which has been conceived specifically to replace CRTs in vision science labs. The VIEWPixx features high-performance industrial LCD glass and a panel controller designed specifically to support vision research. Our innovative LED backlight design features superior display uniformity, and a wide color gamut exceeding that of any CRT. The VIEWPixx also includes an array of peripherals which often need to be synchronized to video during an experiment. These include a stereo audio stimulator, a button box port for precise reaction-time measurement, triggers for electrophysiology equipment, and even a complete analog I/O subsystem."
All monitors have a 22.5" display (1920×1200 px) with 120 Hz frame rate, and 12 bit gray scale. There is a version with 1 ms response time for use in EEG and a 3D version.
Further features:
DATAPixx is a hardware toolbox for the vision community. Some features, provided by the manufacturer, include:
"The DATAPixx (http://www.vpixx.com/datasheets/ds_datapixx.pdf) is field upgradable to add new features requested by the vision science community. A new low-latency gaze-contingent display mode to the DATAPixx Toolbox was recently implemented. Eyelink Toolbox users can now pass forground/inset images to the DATAPixx as left/right halves of a single OpenGL texture. The DATAPixx dynamically combines the forground/inset images as defined by the Region Of Interest programmed into the DATAPixx registers. This reduces the gaze-contingent latency from 1-2 video frames to 2-3 milliseconds. The DATAPixx Toolbox will continue to evolve through input from the vision community."
VPixx (direct link) is a commercial package for the Intel Mac for stimulus preparation and presentation and psychophysical paradigms. It has an Excel interface and in particular has direct support for the EyeLink eyetracker.
Here are features, provided by the author:
"Stimulus Presentation: Frame-synchronized rectangle, oval, arc, text, and imported PICT shapes containing static or dynamic gratings, windmills, concentric circles, looming circles, uniform fields, gaussian blobs, checkerboards, binary/uniform/gaussian noise, drifting/rotating/loomingdots, and custom patterns defined by general "C" expressions or imported from PICT files. Multiple patterns can be combined additively or multiplicitively generating plaids, gabors, second-order stimuli etc. Stimulus metrics can be specified in pixels, cm, inches, or degrees of visual angle. Stimulus chromaticities can be specified in RGB, LMS (cone contrast), or CIE XYZ, Lxy, Lu'v', L*u*v* and L*a*b*. Can also present most sound files."
"Interface: Graphical Interface, Drag-and-Drop. Complex animations can be sequenced using a high-level graphical scripting language. Also has an interactive receptive-field mapping mode."
"Testing: Method of adjustments, method-of-constant-stimuli, subject events, reaction-times and event durations. Classical staircase with up/down or 2/3/4AFC, or VPEST adaptive staircase w 2/3/4AFC. All data stored to MicroSoft Excel files for easy graphing or further analysis. Also can export dynamic stimuli as QuickTime movies for teaching or web publishing."
"Intended Users: Researchers and teachers in the fields of psychophysics, electrophysiology, cognition, and fMRI. Free demo, VPixx User Guide, and sample stimuli available at VPixx Technologies. Custom programming and hardware design/interfacing services are also available."
Vision Shell 3.0, formerly Shell & Mac Glib, was a commercial "visual psychophysics and neurophysiology experimentation environment for the Macintosh computer," made by Micro-ML, St-Hyacinthe, Quebec, Canada. The full package price was $425. The web page which also had a list of users is discontinued.
An image processing package written by visual psychophysicists is HIPS. It was originally developed at New York University and is now commercial "but relatively inexpensive for academics" ($800). It runs under Unix on a range of platforms. It contains around 200 image processing routines and can also handle movies. It is described in Computer Vision, Graphics, and Image Processing (Vol. 25, 1984, pp. 331-347), and in Behav. Res. Meth. Instr. Comp. (Vol. 16, 1984, 199-216). Contact Michael Landy of SharpImage Software at landy@nyu.edu.
Another high-end system was Neuroscientific Corporation's VENUS (Farmingdale, NY, USA). Their colour configuration once sold for $28,000. The company went out of business about 1998 as an owner of the system (John Kelly) told me.
iNSIGHT is a multimedia curriculum supplement for vision science and experimental psychology lab classes made by John Baro. iNSIGHT labs are either experiments or interactive demonstrations. Experiments are real psychophysical procedures (not simulations) that generate real data. Demonstrations illustrate a visual phenomenon and allow students to manipulate the effect. Each lab is complete with a quick-start guide, detailed instructions, background information, and one or more guided exercises. Each app costs ∼$3 except for the color mixing app which is free.
Currently available on the iPad are:
General-purpose psychological experimentation systems have become quite powerful and in many ways are direct competitors to more specialised psychophysical systems. Psychophysicists should be aware of these tools. See the accordant section on these program systems below.
(>> back to table of contents)
General-purpose statistical packages (like SPSS, SAS, Statistica, SysStat, WinStat, Origin, Matlab libraries, etc.) are not the focus here; the section concentrates on specialised statistical packages that are useful in psychophysics.
However, given the emphasis on freely available software, the – freely available – general purpose statistical package R must be mentioned here. Ken Knoblauch from Lyon, France, has written three libraries for R that are aimed at psychophysics (psyphy, MLDS, and MLCM), explained above and below. In particular there is now a comprehensive book on modeling psychophysical data by Knoblauch and Maloney. (Here is a preview and Chapter 1.)
To cite from the web page, R – a GNU project – is a language and environment for statistical computing and graphics, providing a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, ...) and graphical techniques, and is highly extensible.
Psychtoolbox:Psychometric: Psychtoolbox has a wide variety of psychometric-function analysis tools, collected in the package Psychometric. See there for further details.
Apart from that there are a number of dedicated packages/approaches for fitting the psychometric function (i.e., the sigmoid function, introduced by Urban 1908, 1910, that relates performance to a physical stimulus variable). Currently I am aware of ten such packages; a minimal sketch of the common fitting idea follows after these entries. Three maximum-likelihood-based procedures are:
Psignifit (classic) includes bootstrap reliability estimates of the psychometric function parameters and appears particularly versatile (see also Hill's thesis and a paper by Wichmann & Hill (2001) in the P&P Vol. 63 special issue on psychometric functions) (for the new version see below).
Four Bayesian programs which implement constraints on the parameters (thus allowing more parameters to be assessed simultaneously):
A model-free (i.e. non-parametric) approach to psychometric-function estimation based on local linear fitting was developed by Zychaluk and Foster. Except for smoothness, no assumption is made about the shape of the true function underlying the experimental data. It is described in a paper in AP&P (pdf for personal use only). The package is implemented in R and Matlab. An earlier program for assessing the reliability of threshold estimates by a bootstrap approach is available from Foster & Bischof; it is described in the Special Issue of Spatial Vision (Part II).
Psyphy: Yssaad-Fesselier & Knoblauch (2006) Behav. Res. Meth. Instr. Comp. 38 show how to use the language R for psychometric function fitting. It is implemented as part of their package psyphy. Ken Knoblauch and Laurence Maloney have now also written a comprehensive book on modeling psychophysical data, again using the language R. Here is a preview.
Quickpsy is an R package developed by Daniel Linares and Joan López-Moliner to quickly fit and plot psychometric functions for multiple conditions. More examples are on Joan López-Moliner’s web page.
Palamedes described above is an extensive Matlab psychometric analysis package also comprising Psychometric Function fitting.
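All of these packages revolve around the same core computation: choosing the function's parameters so as to maximise the binomial likelihood of the observed responses. Here is a minimal, illustrative scipy sketch of that idea, assuming a cumulative Gaussian with fixed guess and lapse rates; it is not the code of any of the packages above, and the data are made up for the example:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def psychometric(x, alpha, beta, gamma=0.5, lam=0.02):
        """Cumulative-Gaussian PF with guess rate gamma and lapse rate lam (2AFC)."""
        return gamma + (1 - gamma - lam) * norm.cdf(x, loc=alpha, scale=beta)

    def neg_log_likelihood(params, x, n_correct, n_total):
        alpha, beta = params
        beta = max(abs(beta), 1e-6)                 # keep the spread positive
        p = np.clip(psychometric(x, alpha, beta), 1e-6, 1 - 1e-6)
        # binomial log likelihood of the correct responses at each stimulus level
        return -np.sum(n_correct * np.log(p) + (n_total - n_correct) * np.log(1 - p))

    # illustrative data: stimulus levels (e.g. log contrast), trials, correct responses
    x = np.array([-2.0, -1.5, -1.0, -0.5, 0.0])
    n_total = np.array([40, 40, 40, 40, 40])
    n_correct = np.array([21, 24, 31, 37, 39])

    fit = minimize(neg_log_likelihood, x0=[-1.0, 0.5],
                   args=(x, n_correct, n_total), method='Nelder-Mead')
    alpha_hat, beta_hat = fit.x                     # threshold and spread estimates

The dedicated packages add what this sketch omits: confidence intervals (bootstrap or Bayesian), goodness-of-fit, and free lapse-rate estimation.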
Psychometric function slopes are tricky to compare across studies since slope parameters have different meanings in the various analytic representations used for the PF. In a paper in P&P (Strasburger, Percept Psychophys 63, 2001, 1348-1355) I advocate a slope measure (%corr/log-unit) that can be used somewhat more generally. I also provide conversion equations between the measures there.
d' (d-prime) is the standard measure of sensitivity within Signal Detection Theory (SDT). Douglas Creelman and Neil Macmillan at U. Toronto, the authors of the well-known "Detection Theory: A User's Guide" (2004 edition, ISBN 0805842306), had offered a free program for calculating d' that is described in the Special issue of Spatial Vision (Part II). Today, the calculation of d' is part of statistical software packages.
d' in R: d' can be calculated in the statistical software package R as part of Ken Knoblauch's psyphy.
Palamedes, described above, implements Signal Detection analyses including the calculation of d'.
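For reference, the yes/no computation itself is a one-liner, z(hit rate) minus z(false-alarm rate). Here is a minimal Python sketch; the correction for extreme rates is just one common convention, not the only one:

    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Yes/no d' from a 2x2 table of trial counts."""
        n_signal = hits + misses
        n_noise = false_alarms + correct_rejections
        # avoid infinite z-scores for hit/FA rates of exactly 0 or 1
        hit_rate = (hits + 0.5) / (n_signal + 1)
        fa_rate = (false_alarms + 0.5) / (n_noise + 1)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    print(d_prime(hits=45, misses=5, false_alarms=10, correct_rejections=40))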
RScore+, developed by Lew Harvey (Boulder, CO), is a well-documented and improved version of the original RScore by Dorfman & Alf. It allows the typical analysis of the receiver operating characteristic (ROC) in a number of useful ways. It is available for the Mac and the PC from Harvey's homepage.
Browser-based Calculator for ROC Curves by John Eng from Johns Hopkins University.
Excel spreadsheet for ROC analysis by Ignacio Serrano-Pedraza (website) (mail) (previously Newcastle now Madrid). Look at the animated gif on the right.
Web page on ROC-analysis software. A web page by Elizabeth Krupinski (radiology/Arizona) listing a number of programs for analysis of the receiver operating characteristic (ROC) (thanks to Neil Macmillan for the hint).
softROC (Matlab) is "a user-friendly graphic-based tool that lets users visually explore possible ROC tradeoffs." It is described in a paper by Moore et al. (2014) in Computers in Biology and Medicine (→ ResearchGate).
ROC demo. A nice little interactive demo of how the ROC and the Gaussians are related is found on the Schwartz & Krantz Companion website. Here is the direct link. Click on the "Illustration" tab to see the demo.
ROC and SDT demos, with R code. Tom Wallis (Blog) made several animated gifs for teaching SDT and the ROC. They show the effects of (1) changing the criterion, (2) changing sensitivity and (3) changing the equal-variance assumption, respectively. The movies were made in R. You can find the R code here. The axis scaling could be improved. Click on the thumbnails on the right to load (beware the large file sizes). Here are also links to the files on VisionList: Demo 1 Demo 2 Demo 3.
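The relation these demos illustrate can also be written down directly: in the equal-variance Gaussian model, sweeping the criterion across the noise and signal distributions traces out the ROC. A minimal, illustrative Python sketch (the d' value is arbitrary):

    import numpy as np
    from scipy.stats import norm

    d = 1.5                                  # assumed sensitivity
    criteria = np.linspace(-4, 4, 200)       # decision criteria swept along the axis
    fa_rate = norm.sf(criteria)              # P("yes" | noise),  noise mean = 0
    hit_rate = norm.sf(criteria - d)         # P("yes" | signal), signal mean = d
    # (fa_rate, hit_rate) trace the ROC curve; for this model its area is Phi(d/sqrt(2))
    area = norm.cdf(d / np.sqrt(2))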
General Recognition Theory (GRT) is an extension of Signal Detection Theory to multiple dimensions (Ashby & Townsend, 1986; Ashby & Gott, 1988), to study the human categorization process.
GRTools (R) by Fabian A. Soto & F. Gregory Ashby is a GRT toolbox in R, said to be easy to use for non-experts. A quick summary of what it does and references are given on its website.
John McDonnell's Toolbox (Python) is a port of a toolbox written (in Matlab) by Leola Alfonso-Reese. From the original description: "The typical user of this toolbox designs experiments for two-category tasks, where the categories are specified by multi-variate normal distributions."
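To give an idea of the kind of design that description refers to, here is a minimal numpy sketch of a two-category task in which each category is a bivariate normal over two stimulus dimensions; all names and values are illustrative, not taken from either toolbox:

    import numpy as np

    rng = np.random.default_rng(0)
    # category means on two stimulus dimensions (e.g. orientation and spatial frequency)
    mean_a, mean_b = np.array([40.0, 60.0]), np.array([60.0, 40.0])
    cov = np.array([[80.0, 30.0],
                    [30.0, 80.0]])           # shared covariance (correlated dimensions)

    stimuli_a = rng.multivariate_normal(mean_a, cov, size=100)   # 100 category-A stimuli
    stimuli_b = rng.multivariate_normal(mean_b, cov, size=100)   # 100 category-B stimuli

The observer's classification of such samples is then analysed for perceptual separability and independence of the two dimensions, which is what the GRT packages do.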
(>> back to table of contents)
Testing software in Optometry, Ophthalmology and Neuroophthalmology can be classified into the testing of foveal vision – mostly acuity – and of vision outside the fovea like perimetry and other visual field tests. Similar tests are found in other medical and non-medical areas of application, most notably in Visual Neuropsychology and Psychological testing.
Ophthalmic screening of visual well-functioning is another important area of applied vision software. Such systems usually measure acuity or other visual functions that are of interest to optometrists, ophthalmologists and neuroophthalmologists. Here are a few examples:
The UMIST Eye System developed by Chris French at Manchester was a software system on the PC under DOS for general optometry. It had a comprehensive range of acuity tests including upper and lower case letter sets, digits, symbols, pictograms, gratings, and vernier. Presentation was from single-character to multiple-line, with variable contrast, duochrome and stereogram modes, and full format control. A variety of optometric displays was included along with an 18-bit colour vision test and single static, multiple static and kinetic field testing procedures. The system was described in Part II of the special issue of Spatial Vision. The price was 250 Pounds/single user, 500 Pounds/site licence.
There was further the Computer Book of Colour by the same author which simulated the Munsell Book of Colour on the PC under DOS and allowed doing colour experiments with geometrical illusions.
In the late 90s, David Thomson at City University London developed vision tests for the PC under Windows. Test Chart 2000 already included a wide range of test charts including LogMAR, Snellen and Single Letter. Various optotypes including letters, Landolt Cs, Tumbling Es and Pictures could be displayed in each format and with varying contrast. It also included a Contrast Sensitivity test and a range of other optometric tests. He founded Thomson Software Solutions, which markets this chart and its successors, as well as a wide range of other products for ophthalmic use.
Lea Hyvärinen from Helsinki concentrated on child vision early on and created the well-known Lea symbols: Go to her homepage or to Lea-Test for more. Lea Symbols charts are available from Good-Lite.
Precision Vision markets a wide range of printed charts for vision testing, including the Low Vision Chart by A. Colenbrander.
Ridgevue Publishing, a producer of educational material on iBooks, has a series of vision tests for the iPad (logMAR acuity, Tumbling E), and among them a validated Bailey-Lovie letter contrast sensitivity test.
From the web page description, it “is similar in principle to the well-established Pelli-Robson Chart and Mars Letter Contrast Sensitivity Test. The letters have all been calibrated for the Retina Display. The iPad test presents two letters per page and in 0.1 log unit steps. Thus scores should be equivalent to those obtained with the Pelli-Robson Chart. The test uses the same 10-letter set as the Bailey-Lovie Visual Acuity Chart.”
It’s commercial (sold on iTunes) but like other tablet apps is very affordable.
Visual Acuity XL for the iPad is KyberVision's mobile solution for vision care specialists: it brings computerized versions of "gold-standard" acuity testing to the clinical environment, for literate & illiterate people as well as preschool children. It promotes the use of the logMAR-chart design recommended by the National Eye Institute (NEI) and the International Council of Ophthalmology (ICO) to address design flaws in the Snellen chart.
An optional companion app, VISUAL ACUITY REMOTE for iPad, iPhone, and iPod Touch is also available, to remotely control Visual Acuity XL; either for entering the subject responses when standing a distance away from the iPad or during self-administration of the test.
The User Guide is available online as pdf or as an iBooks textbook for iPad.
Background information on non-foveal, i.e. indirect and peripheral, vision is found in our review paper Peripheral vision and pattern recognition in the Journal of Vision (2011).
The Macular Mapping Test (MMT) by Manfred MacKeben and August Colenbrander (The Smith-Kettlewell Eye Research Institute, San Francisco) serves to map residual vision in patients with macular vision loss (maculopathies). The goal is to find intact areas of the retina that can be used for eccentric viewing as a means of visual rehabilitation. Using diminished contrast, the test can also be used for screening purposes to detect the earliest signs of function loss, while visual acuity and morphology do not show those signs yet.
The MMT runs under Windows on a standard PC and can be conducted by anyone with only basic computer skills. It is available in English and German. It has been used on hundreds of subjects, many of them patients with central field loss. The test duration was, on average, just above 3 minutes. The cost is $500. For more information see MacKeben's web page or send inquiries to mm@ski.org.
MMT 2.0: A new concept for the user interface was developed in 2015 under the direction of Werner Eisenbarth, Munich U. of Applied Sciences in cooperation with Manfred MacKeben. The aim was adapting the MMT to the day-to-day requirements of optometry and ophthalmology. The result is the new Macular Mapping Test 2.0 (MMT 2.0) (available in English and German).
The R package from the Open Perimetry Initiative is a free-to-use, exciting package for research in (neuro-)ophthalmology. It is described under the psychophysical packages in Section (A) here.
RareBit Perimetry (RBP) by Lars Frisén, is a campimetry that uses "microdots" to improve sensitivity to minor damage and decrease variability of conventional perimetry. The technique depends on minute stimuli ("rare bits") and replaces the conventional thresholding ("How well do you see here?") with simple checks for the presence of function ("Is there a receptive field here?"). Rather than gauging the level of function, RBP thus probes the integrity of the neural matrix.
RBP runs under Windows and needs an LCD monitor at 50 cm test distance (1 m for the central-most field). See a paper in Vision Research for the principles. The software is free to use (write to the author). Here is a Demo.
There is extensive literature on the rarebit principle together with independent validation of the related tests. Here is a publication list.
Lars Frisén also has an interesting website, Neuro-Ophthalmology Nuggets, covering a wide spectrum of topics around – well – neuroophthalmology and vision in general. Things like an essay on super resolution, or "Making an iPhone ophthalmoscope", dichoptic reaction times, and more. Note in particular his list of validated, free browser-based clinical vision tests.
Lars Frisén is also the inventor of ring perimetry aka high-pass resolution perimetry (HRP) (Demo here).
The rarebit principle is also implemented in two free, validated, browser-based, clinical vision tests: DigitScreen for visual field screening, and DigitStep for visual field assessment. The tests are more fully described in two journal papers, in Ophthalmology and Acta Ophthalmologica, respectively. See here for further literature.
MultiBit for the iPad is a new free smartphone-based rarebit test by Lars Frisén, which is the first test ever that allows efficient self-monitoring of age-related macular degeneration (AMD) (paper in J. Ophthalmol.). AMD creates enormous logistical and economical problems that only can be solved by efficient self-testing. Presently, there are four multicenter studies afoot. See here for further literature.
Bartiméus' Zien ("seeing") from the Dutch Accessibility Foundation is a nice little app for the iPad which simulates visual field defects with a live camera image. It's free, but currently available in Dutch language only.
Visual-field-software.com is a website and company selling perimetry software for flat screens (i.e., campimetry) and other software for ophthalmologists. The website offers detailed information on visual-field testing and glaucoma (unfortunately without stating their sources). They also offer a free screening test for glaucoma which can be downloaded after disclosing one's e-mail address. There is also a site www.visual-field.com which looks similar and might be a precursor. People behind the site are not disclosed.
EyesCream, developed by Marc Mueller in Germany, is a free Visual Field Analyzer with software fixation control, staged as a computer game. It is essentially a high-resolution gradient campimetry with short measurement time. It was designed for laser safety applications but due to its high resolution might eventually also be useful in general visual field testing (note, however, that there are no clinical evaluations).
Neither installation nor registration is required; data are encrypted for privacy. There is a forum for comments, and English and German language support.
Specvis, developed by Piotr Dzwiniel in Warsaw, is a free and open-source software for visual field examination in humans (Windows, Linux, and Mac). It is validated as regards reliability, and a paper describing the software appeared in PLOS ONE (Dzwiniel et al., 2017). Results are said to be comparable to professional perimeters, but independent confirmation of that is not yet there (too new).
(>> back to table of contents)
Besides the more specialised psychophysical experimenting systems there are also a number of excellent general-purpose psychology experimenting systems. For many applications they are direct competitors to the psychophysical systems: PsyScope, Presentation, E-prime, SuperLab, Inquisit, OpenSesame. Most of these systems also lend themselves to fMRI control, and some to EEG control.
With the earlier versions of Windows, precise temporal control of events or synchronization was not possible, and professional packages on the PC (like MEL) had to run under DOS. In the 90s, the only temporally precise mouse/windows-based system was PsyScope on the Mac. Nowadays, precise temporal control is considered standard in these experimenting systems, although opinions about whether that promise is always fulfilled still diverge.
Listed below are: one system for the Mac: PsyScope X; four systems for Windows: Presentation, E-Prime, SuperLab, and Inquisit; two platform-independent, icon-driven systems: Experiment Builder and OpenSesame; one browser-based system: Cognition Lab aka ERTSLab, based on the ERTS scripting language; and, for nostalgic reasons, MEL, which was a system for DOS.
For applied, professional psychological diagnostics, the Wiener Testsystem is a popular and robust tool.
...for the Mac is free of charge. See above.
Presentation is a major stimulus delivery program for psychological and neurophysiological experimentation for Windows 95/98/2000/XP/Win7 on any standard PC. It is powerful and flexible enough to handle almost any behavioral, psychological or physiological experiment using fMRI, ERP, MEG, psychophysics, eye movements, single neuron recording, reaction time measures, etc. It is well supported, with a web-based user forum, a manual, and regular updates with email alert. It was free until fall 2004; the last free version I am aware of is 0.71, which might still be around. Since then it has been expanded continuously and is available under a yearly (moderate) pricing policy. Newer versions have a Python interface; Presentation now has a huge user community and a large library of scripts.
In mid-2008 NBS started to offer the VisGen Toolkit which, as of version 14.4, is integrated into Presentation. The toolkit allows easy generation of a large variety of visual stimuli (sine wave gratings, Gabor patches, checkerboards, radial checkerboards, etc.).
PsychoPy, written by Jon Peirce, is an easy-to-use open-source package for psychological experimenting running on all standard platforms (Windows, Mac OS X and Linux) and using Python (a free alternative to Matlab) as the scripting language. It aims to be flexible enough for neuroscience, precise enough for psychophysics, and intuitive enough for undergraduate teaching. It is now used for a wide variety of experiments ranging from cognitive psychology to visual psychophysics and neuroimaging.
PsychoPy offers two user interfaces, one for non-programmers who want their study up and running, and one for experienced programmers who need precise control of everything. Programmers can write code in the editor of their choice or with the integrated development environment ("Coder"). Those not proficient in Python can use the intuitive graphical "Builder" interface.
OpenGL is used to generate and render stimuli in real time and can be accessed directly, as can a variety of hardware via serial and parallel ports. PsychoPy supports the Bits++ box. As is clear from the above, it is fully free. When PsychoPy first appeared, Jon wrote to me in Feb. 2004 saying "It is still a work in progress but it's coming on very rapidly and I am now using it for my own experiments." As of early 2017, there are over 15,000 installations! Here are the usage statistics.
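To give a flavour of the Coder side, here is a minimal sketch of a PsychoPy script as I understand the basic API; stimulus parameters and timing are purely illustrative, and real experiments would of course add trial loops, response logging and calibration:

    from psychopy import visual, core, event

    win = visual.Window(size=(800, 600), units='pix')
    grating = visual.GratingStim(win, tex='sin', mask='gauss',
                                 size=256, sf=0.02, ori=45)   # sf in cycles/pixel here
    grating.draw()
    win.flip()                 # stimulus appears on the next screen refresh
    core.wait(0.5)             # nominally 500 ms; real timing depends on the display
    win.flip()                 # blank the screen
    event.waitKeys()           # wait for any key as a (dummy) response
    win.close()
    core.quit()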
PsychXR by Matthew Cutone and Laurie M. Wilcox is an interface to the Oculus Rift head-mounted display (HMD) for use with PsychoPy.
It is a collection of Python extension libraries for research in neuroscience and psychology, and in particular the vision science community, removing the "black-box" proprietary games engine between your application and the HMD's API. The libraries are written in Cython, providing high-performance API access, leaving more headroom per-frame for the application code. PsychXR can be used on its own to add HMD support to OpenGL applications. However, it's considerably easier to develop experiments using PsychoPy, which then uses PsychXR to provide the HMD support.
PsychXR is released under the MIT license, which allows distributing, inspecting, and modifying the code. For attribution see the end of the Readme file.
Paradigm is a new psychology experimenting system for the PC which focuses on ease-of-use and simplicity – "to make building experiments a less daunting task for researchers and students." It has millisecond accuracy, scripting facility in Python, supports button boxes from PST (E-Prime) and Cedrus, and has detailed help files and video tutorials. The single user licence costs around $700 with tiered discounts based on the number of licenses. There is a free trial version.
PyEPL (Python Experiment-Programming Library) is a Unix-based open source library for coding psychology experiments (including virtual reality) in Python. It supports presentation of visual and auditory stimuli, and supports both manual (keyboard/joystick) and sound (microphone) input as responses. It runs on Unix variants (Mac OSX, Linux, Ubuntu, Debian); there is a user forum. The system is also described in a paper in Behavior Research Methods 2007, 39 (4), 950-958
E-Prime is the successor to MEL, running under Windows. Like MEL, it is made by Psychology Software Tools Inc. The single-user licence for educational institutions was ~$700; discounts were given to MEL and PsyScope (consortium) users. A disadvantage of E-Prime is that it uses a hardware dongle for copy protection. The dongle is, in our experience, also required for the development of an experiment. They have a newsletter which contains hints on how to use E-Prime. Here is an example.
There is also a European E-Prime 2.0 website
SuperLab (Pro, LT) is an experiment generator for psychology for Macintosh and Windows, made by Cedrus Corporation. The version 1.0 that I had used many years ago had a nice user interface and an enjoyable booklet but was not useful for well-calibrated stimulus display. It seems to have thoroughly improved, however, or at any rate the web page has an impressive list of publications that used it. They also offer response boxes, including an fMRI-capable version supported by the major software packages (E-Prime, Inquisit, SuperLab, MEDx, MEL), as well as shutters, a video splitter, and other hardware. The single-user educational price is ~$600, student version $125 (as of 2012).
Psykinematix from KyberVision Japan LLC is an OpenGL-based software package dedicated to visual psychophysics, running on Mac OS X 10.4 (Tiger) to macOS 10.15 (Catalina) for both PowerPC- and Intel-based Macintosh computers and in 32-bit and 64-bit versions. It consists of a stand-alone application that does not require any programming skill to create and run complex experiments. Easy to use, subject-friendly, powerful and reliable, Psykinematix runs standard psychophysical protocols, presents complex stimuli, collects the subject's responses, and analyzes results on the fly. Psykinematix is also a great learning tool to introduce visual perception in the classroom and to illustrate psychophysical concepts to students.
Psykinematix features (link to flyer):
Here are some tutorials (orientation-discrimination, contrast sensitivity, visual acuity, etc.). The whole documentation is also available as an iPad's iBooks Textbook (download link).
All licenses are perpetual and transferable. Regular pricing is as follows:
The Student Edition costs $149 (US Dollars); the Standard Edition costs $375; a single-user license for the GPU Edition costs 750$; a special Bits#/Display++ Edition is also available from Cambridge Research Systems, as well as a Clinical Edition (Metropsis 2) (check out Cambridge Research Systems’ promotional video for Metropsis). Flexible licensing schemes and support packages are available. The 30-day trial version can be used as a free educational tool. For more information visit the Psykinematix homepage. A comparison chart between the different editions is available here.
To help with the circumstances related to the Covid-19 pandemic, KyberVision are also offering free individual licenses for the Student Edition, free complementary licenses to users of the Standard and GPU Editions, as well as a 50% discount to new users. For more details, see their official announcement: https://www.psykinematix.com/covid19/.
Inquisit, to cite from the web page, "is a flexible, open-ended, high performance psychological experiment generation tool for Windows (95, 98, Millenium Edition, NT 4.0, 2000, XP, Vista, and 7). Inquisit supports a wide variety of psychological data collection methods, ranging from simple questionnaires to reaction time tasks where every millisecond counts. ... it also makes it easy to selectively demo tasks, trials, and blocks for presentations and lectures." Like the other major Windows packages it uses DirectX to achieve temporal precision. There is a scripting language. It supports the Cedrus response box. The educational price is $345.
Experiment Builder by SR Research is an icon-driven programming environment for psychological experimenting under Windows and MacOS X. It offers complex yet precise visual and auditory stimulus delivery, thorough support of the EyeLink series of eye trackers, support of button boxes, touchscreens, etc., and for advanced users Python scripting. There is a free demo version including sample projects (change blindness, smooth pursuit, a pro-saccade task, Stroop, programming demos); further paradigms and updates are available through the SR Research support website. Experiment Builder is simple enough for a novice but rich enough to serve advanced experiment paradigms.
There is also DataViewer which provides visualization (scanpaths, heatmaps, movie playback and 2D graphs) of EyeLink eye movements data, and allows extracting a large number of dependent measures for all areas of eye movement research.
OpenSesame is a platform-independent, free, graphical experiment builder. OpenSesame provides an easy to use, point-and-click interface for creating psychological experiments. In addition to a powerful sketchpad for creating visual stimuli, OpenSesame features a sampler and synthesizer for sound playback. For more complex tasks, OpenSesame supports Python scripting, using the built-in editor with syntax highlighting. OpenSesame is freely available under the General Public Licence. There are download packages for Windows, Mac, and Linux. There are a support forum and discussions about timing and Python.
The Wiener Testsystem made by Schuhfried (South of Vienna) is a versatile, comprehensive system for professional psychological diagnostics including some of visual function testing, for application in driving fitness, aviation, personality, human resources, sports, etc.
NeuroScan is mostly dedicated to EEG software but they do have visual stimulation software for EEG and fMRI.
MEL (Micro Experimental Laboratory) was the precursor to E-Prime, a general-purpose psychology experimenting system for DOS developed by Walter Schneider, Pittsburgh, and marketed by Psychology Software Tools Inc. (511 Bevington Rd., Pittsburgh PA 15221, phone 412-244-1908). It is described in Behav. Res. Meth. Instr. Comp. 20 (1988), 206-217. MEL came in a classroom and a professional version; the latter had similarities with the Pascal language. Since MEL was a DOS program, it ran on Win 95/98 but not on NT/2000 systems. I know happy users of it, but support has ceased as of Dec. 1999 (last version 2.01). (The price of MEL 2.01 was $199.)
ERTS (Experimental Runtime System) by Joerg Beringer (BeriSoft, Redwood City, CA) was a general-purpose psychology experimenting system for DOS (and thus Win 95/98/NT/XP). It used a scripting language and, like other DOS applications, had millisecond accuracy. Even though DOS is long gone, ERTS lives on happily as a scripting language for psychological experiments, thanks to its dedicated user base and large library of cognitive paradigms. (For an early description see Beringer, J. (1994), Behav. Res. Meth. Instr. Comp. 55(4), 1-1.)
ERTSLab was the classroom version with prefabricated experiments. The successor (since 2014) is ...
Cognition Lab, which is browser-based. All the files – configuration, subjects, session schedule, results, reports – are now in the cloud and their management happens via a console in a web browser (HTML5/Javascript). Basically, the Cognition Lab Library is a collection of ready-to-use experimental-psychology paradigms that can be configured by the user. Typical examples are the Stroop test, N-back task, Mueller-Lyer length estimation, visual search, Sternberg paradigms, Simon effect, spatial attention, mental rotation, etc.
Cognition Lib. The new development (2018) is the upcoming ERTS Script Community, which allows creating ERTS scripts with an online editor and sharing them with others. Scripts can be run in the web browser for testing and for demoing to the scripting community or to student classes.
Jspsych. There is further an integration of external web applications, so that, for example, experiments based on the open-source JavaScript psychological-experimenting library jspsych can be conducted (see here).
The Psychology Experiment Building Language (PEBL), developed by Shane Mueller (Ohio) and others, is a free cross-platform system for computer-based experiments (Gnu: free to use, free to change, free to give). It runs under Windows, Linux, and Macintosh. Experiments are created by a simple scripting language for creating displays (graphics, shapes, text, etc.), collecting responses (mouse and keyboard), experimental design (randomization and counterbalancing), and data recording.
It includes a large number of ready-to-use standard experiments from cognitive psychology. There's a WIKI, a user manual, a blog, technical reports, etc. Here are some screenshots.
PEBL achieves its cross-platform compatibility by using the open source cross-platform multimedia library SDL (Simple DirectMedia Layer) as its base.
It seems particularly easy to use, as a user wrote me.
EventIDE by OkazoLab in Delft/NL is a general-purpose psychological experimenting system that seems both particularly easy to use (with a Ribbon GUI, a WYSIWYG stimulus editor supporting vector graphics, light-weight scripting, etc.) and powerful, seamlessly integrating hardware that delivers time-critical events as, e.g., required for visual evoked potentials/event-related potentials/neurofeedback, eyetracking, musical MIDI data, etc. On-line videos demonstrate such highlights (check out the 3D animation). Here is a demo video showing a complete process of designing a task.
Zep 2.x is a free, open-source application for implementing and running psychological experiments. Development began at the Utrecht Institute of Linguistics, Utrecht University, and is now conducted at Beexy - Behavioral Experiment Software. While it is mainly used by language and speech researchers, being script-based it is not restricted to a particular research area. It comes with many experiment templates which can be used out of the box or serve as a basis for new experiments; it runs under Linux, Windows and Mac OS X. It works together with ZepMan, which provides a user-friendly interface for managing experiment databases and running experiment sessions.
Here is the Wiki and the download area.
jsPsych, created by Josh de Leeuw and others, is a (free) JavaScript library for running behavioral experiments in a web browser. The library provides a flexible framework for building a wide range of laboratory-like experiments that can be run online.
To use jsPsych, one provides a description of the experiment in the form of a timeline. jsPsych handles things like determining which trial to run next, storing data, and randomization. jsPsych uses plugins to define what to do at each point on the timeline. Plugins are ready-made templates for simple experimental tasks like displaying instructions or displaying a stimulus and collecting a keyboard response. Plugins are flexible and support a wide variety of experiments. With some experience in JavaScript programming it is easy to create one's own plugin.
If you use jsPsych for academic work please cite: de Leeuw, J. R. (2015). jsPsych: A JavaScript library for creating behavioral experiments in a web browser. Behavior Research Methods, 47(1), 1-12. doi:10.3758/s13428-014-0458-y.
Experimental Psychology Software is a European distributor for several of the experimental psychology systems, including E-Prime, SuperLab, Presentation, and EventIDE. They also resell button boxes (see there). They are part of the SciencePlus commercial reseller group who carry scientific software like EndNote etc., or the Wiener Testsystem (Vienna Test System).
(>> back to table of contents)
Most general purpose psychological experimenting systems nowadays allow acquisition of fMRI and/or EEG data and are thus suitable for cognitive neuroscience. So it's hard to draw a distinction (and will become harder with time). The systems listed here are perhaps more streamlined for cognitive neuroscience. There is further overlap with the section on fMRI and EEG analysis tools below.
MonkeyLogic is (to cite from its webpage) a Matlab toolbox for psychophysical tasks with high temporal precision. It allows construction of sensory, motor, or cognitive tasks, based upon the interaction of a subject with visual stimuli (static and movies) through the use of eye-position, joystick, button, lever, and/or keyboard responses. An experiment is constructed using a conditions file and a timing script. The conditions file is a tab-delimited text file which enumerates each possible sequence of stimulus presentations that can occur within any given trial of the experiment. The timing script is a Matlab program that determines when and under what conditions those stimuli are presented. Have a look at the main menu.
Designed by Wael Asaad at Brown and David Freedman & coworkers at U Chicago, it is in some ways a successor to CORTEX, developed by Thomas White at NIH (now Columbia U). It has already been used in quite a number of published studies in the area of cognitive neuroscience.
PLDAPS (pronounced "Platypus") is a system for cognitive neuroscience with high temporal precision. It integrates Psychtoolbox and the DATAPixx input/output control system with Plexon's recording apparatus. "Despite its capabilities, it is simple enough for someone with basic Matlab programming skills to design their own experiments". It is described in an abstract. Here is the download link at GitHub.
Topographica, Param/ParamTk, and ImaGen are a set of free-to-use software packages for neural-systems modeling released by James Bednar and his group from U. Edinburgh that has been applied to modeling the primary visual cortex. A review paper "shows how these separate projects add up into what is hoped to be the most complete model of the development of functional properties of V1 neurons to date." Here are links to 31 publications using this software so far. The packages are written in Python and run under Linux, Mac, and Windows.
(>> back to table of contents)
There should be much more here but there isn't. Suggestions welcome.
The MemToolbox is a collection of Matlab functions for modeling visual working memory. It includes implementations of popular models of visual working memory, real and simulated data, Bayesian and maximum likelihood estimation procedures for fitting models to data, visualizations of data and fits, validation routines, model-comparison metrics, and experiment scripts. There is a Google user support group. The MemToolbox is described in a companion paper in the Journal of Vision
(>> back to table of contents)
IGOR, by WaveMetrics, is a powerful interactive software environment for experimentation with scientific and engineering data and for the production of publication-quality graphs and page layouts (Mac and Windows).
LabVIEW (short for Laboratory Virtual Instrumentation Engineering Workbench) is a platform and development environment for a visual programming language from National Instruments. The graphical language is named "G". Originally released for the Apple Macintosh in 1986, LabVIEW is commonly used for data acquisition, instrument control, and industrial automation on a variety of platforms including Microsoft Windows, various flavors of UNIX, Linux, and Mac OS. Wikipedia entry.
Veusz – pronounced as "Views" – is a nice and free scientific plotting package written in Python. It is cross-platform (Windows, Mac, Unix), reads standard data file formats (e.g. csv) and produces plots in vector format (pdf, ps, svg). Here is the Wikipedia entry.
Citing from the web page, matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. matplotlib can be used in python scripts, the python and ipython shell (à la Matlab® or Mathematica®), web application servers, and six graphical user interface toolkits.
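For readers who have not used it, a minimal example (toy data; the file name is arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy contrast-sensitivity-style curve, just to illustrate the basic API.
sf = np.logspace(-0.5, 1.3, 20)              # spatial frequency (c/deg)
cs = 200 * sf * np.exp(-0.8 * sf)            # made-up sensitivity values
plt.loglog(sf, cs, 'o-')
plt.xlabel('Spatial frequency (c/deg)')
plt.ylabel('Contrast sensitivity')
plt.savefig('csf.pdf')                       # vector output for publication
```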
HoloViews is a free Python package for scientific and engineering data visualization. To paraphrase from the announcement, HoloViews provides composable, sliceable, declarative data structures for building even complex visualizations of any scientific data very easily. Data can be seen as publication-quality figures almost instantly. Even complex multi-subfigure layouts and animations are easily built.
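A minimal sketch of the composability idea (to the best of my knowledge this matches the basic HoloViews API; it assumes the Bokeh backend is installed):

```python
import numpy as np
import holoviews as hv
hv.extension('bokeh')                        # render through the Bokeh backend

x = np.linspace(0, 10, 200)
# The + operator composes elements into a multi-panel layout.
layout = hv.Curve((x, np.sin(x)), label='sine') + hv.Curve((x, np.cos(x)), label='cosine')
hv.save(layout, 'layout.html')               # standalone, zoomable HTML figure
```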
Bokeh is a free Python interactive visualization library for modern web browsers providing elegant and concise construction of novel graphics in the style of D3.js. It can be used to create really cool visual demos, I was told. Here is a Gallery.
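A minimal example of the basic plotting interface (toy data; as far as I know this reflects the standard Bokeh API):

```python
import numpy as np
from bokeh.plotting import figure, output_file, show

x = np.linspace(0, 4 * np.pi, 100)
p = figure(title='Interactive sine wave', x_axis_label='x', y_axis_label='sin(x)')
p.line(x, np.sin(x), line_width=2)
output_file('sine.html')                     # standalone HTML with pan/zoom tools
show(p)
```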
MeVisLab is a framework for image processing and an environment for visual development, published by MeVis Medical Solutions AG and Fraunhofer MEVIS in Bremen, Germany. It is available for all major platforms (Windows, Mac OS, and Linux) and is free for use in non-commercial organizations and research.
The tool is described in an open-access paper: Newe, A. (2016). Enriching scientific publications with interactive 3D PDF: an integrated toolbox for creating ready-to-publish figures. PeerJ.
(>> back to table of contents)
Here are pointers to a few well-known EEG analysis packages. For more see the above-mentioned two websites 1, 2.
Note also that some of the general purpose psychology packages like EventIDE have some EEG processing already built in.
EEGLAB (by Arnaud Delorme & Scott Makeig) is a Matlab toolbox from UCSD's Swartz Center for Computational Neuroscience. Among other things, it contains the Neuroelectromagnetic Forward-Modeling Toolbox (NFT), which includes the Boundary Element Method (BEM).
Harvard MNE Suite (Minimum Norm Estimates software); created at the Athinoula Martinos Center for Biomedical Imaging
(BrainVision) Analyzer 2 by BrainProducts
CarTool software from U. of Geneva's Functional Brain Mapping Lab
NeuroElf (formerly BVQXtools) by Jochen Weber from Columbia University, New York, is a Matlab-based toolbox for working with neuro-imaging data. NeuroElf is not meant as a replacement for any of the more traditional and full-fledged neuro-imaging tools (such as BrainVoyager QX, SPM, AFNI, FSL, FreeSurfer, …), but rather to be used in addition to those. It augments some of their functionality, facilitating a few of the tasks that might otherwise be more difficult to achieve for those researchers who are not as familiar with coding their own routines in Matlab.
Here is the wiki and a YouTube tour.
Simo Vanni of Helsinki has written a multifocal fMRI stimulus generation toolbox, described in Vanni et al. (2005), Neuroimage 27, 95-105. He writes "The utility works without modifications with Matlab 7 (tested with 7.3), and contains also a function for automated estimation of data with SPM2. You can freely modify the tool for your needs." It can also be accessed from the SPM website.
Vanni continues: "I personally find multifocal fMRI good for V1 mapping and region-of-interest localization, especially when you do not have surface analysis tools. We continue working with this tool to develop it further for versatile purposes. All feedback is welcome, or if you want to have data from the study above for comparison, contact me."
Steve Luck's lab – in conjunction with others – put together a package of freely available, open source Matlab routines for basic and advanced ERP analyses called ERPLAB Toolbox. It is tightly integrated with EEGLAB.
Psycware (A. Beer, Germany) has put together a comprehensive site listing analysis packages for EEG, fMRI and MEG. Another comprehensive page from the viewpoint of imaging has been collected by Michael Rotte, Univ. Magdeburg.
(>> back to table of contents)
Visual Neuropsychology, i.e., the area of diagnosing and treating patients who have neurological disorders of the visual system, is a field of application for vision science. Disorders can show a wide variety of, often unexpected, symptoms, and diagnostic programs need to cover that wide range. Treatment typically involves some kind of sensory training and happens as part of a rehabilitation program. Progress, however, is often frustratingly slow, with recovery not always guaranteed, and it demands endurance and determination on the patient's side. Training software can help along. Training approaches fall into one of two categories: restoration of lost function and compensation for the deficits.
Background information on indirect and peripheral vision is found in our review paper Peripheral vision and pattern recognition in the Journal of Vision (2011).
PeriMat by Erich Kasten (his homepage) was a program for qualitative perimetry that offered particularly fine resolution (1 deg) in the central visual field (approx. ±25 deg). It was intended to precisely delineate visual field loss from brain injuries that might be amenable to functional recovery, and it therefore specifically offered a perceptual training. The program was for the PC under DOS, to run on simply equipped systems. It was described, together with two further perimetry DOS programs, in the special issue of Spatial Vision, Part I. There was a successor for Windows (copyright owned by NovaVision) but this program seems to be no longer available (cf. e.g. Poggel et al., 2012). So currently there is no program for high-resolution perimetry available that I am aware of ... but it could be done using OPI.
Open Perimetry Initiative: The open source R package OPI is a free-to-use, exciting package for clinical and basic psychophysics research in the visual field. It is described under the psychophysical packages in Section (A) (here).
VS by G. Kerkhoff & C. Marquardt was a DOS based program for the analysis of visuo-spatial perception in brain injured patients (e.g. parietal lesions). It offered the assessment of the subjective vertical and horizontal, a number of spatial discrimination and bisection tasks, the assessment of the influence of optokinetic background stimulation, visuospatial short-term memory and certain simple visuomotor tasks by use of a touch screen input. For more information contact the authors.
L-POST: The Leuven Perceptual Organization Screening Test, developed by Lee De-Wit and coworkers, is a test of mid-level vision for stroke patients, examining higher stages of visual perception. To cite from the GestaltReVision web site, "in 20 minutes you will be able to test whether you can organize moving dots into coherent objects, detect patterns of oriented lines that form objects, and more. At the end of the test, you will see your score, which you can compare with our previous participants". L-POST consists of 15 subtests, e.g. figure-ground segregation, contour integration, embedded figure detection, biological motion, perceptual grouping, local and global processing, scene segmentation and texture segmentation. To reduce cognitive load, a matching-to-sample task is used throughout where participants indicate the alternative that is the most similar to the target. The online test can be administered in 20–30 minutes, and a neglect-friendly version is available.
L-POST is non-commercial and free to use. It is available in English, Dutch, and Italian (with more to come). Here is a 2012 abstract, a poster at the ECVP 2013, and a full publication (2013).
Alison Lane and her group in Durham developed a training program, the Durham Reading and Exploration Training (DREX), which is computer-based and self-adjusting, allowing people to train themselves in their own home. It involves a series of tasks that encourage visual exploration. These gradually get more difficult, thereby promoting the development of more efficient eye movements and increased visual awareness. Half of the training is also specifically tailored towards improving reading, a common problem associated with visual field loss.
dob is a software package for perceptual training to improve basic visual and visuomotor skills, developed by Doris Schärz, Beatrice and René Fehr-Biscioni, Harriet Bünzli and others in Switzerland. Key commands or gestures on a touchscreen enable the learners to change object properties and background while using the program, and thus match it to their individual needs. Exercises can be combined into lessons and allow supervisors/caretakers to provide personalized support. dob is suited for children aged 2 to 6 years, and for people with developmental disorders, visual impairment, and multiple disabilities, for use in educational institutions or at home as well as for general rehabilitation.
The authors offer training for schools and institutions. All functions and contents are described in detail in a user manual as well as in a short video. The software is available in German, French, Italian, English, Spanish, and Portuguese.
There are further the two commercial packages Birmingham Object Recognition Battery (BORB) and the classical Visual Object and Space Perception Battery (English, German) (VOSP).
Perceptual function is intimately linked to attentional functions, so (even though it is neither purely visual nor purely psychophysics) a test of attentional functions is listed here that is widely used in German-speaking countries and elsewhere (and which we have used ourselves): the TAP, by Zimmermann & Fimm, which assesses a range of attentional functions.
(>> back to table of contents)
Visual neurophysiology is, of course, a whole area in itself. I just list a few packages I have come across. There is also heavy overlap with Section G on cognitive neuroscience.
VSApc was a C++ package for quantitative extracellular single-cell electrophysiology, designed by David L Bohnsack and John B Troy from Northwestern U., IL. It was a suite of DOS programs for measuring spatiotemporal frequency responses and collecting maintained discharge from retinal ganglion cells. The package is described in the special issue of Spatial Vision (Part II, p. 95ff).
"Pep++ is a system for visual neurophysiology experiments. It is available for free to anyone that wants to give it a try..." A brief description can be found on the Web. Contact Dario Ringach (Center for Neural Science, New York Univ.)
RFspotter, developed by Nicolas P. Cottaris from U Pennsylvania, is an app for the iPad 2 to quickly and interactively find receptive fields and explore their tuning properties in single cell recordings. It is found in the app store.
(>> back to table of contents)
Part of psychophysics is the problem of recording and controlling eye movements. The solutions are sufficiently non-overlapping with the foregoing so that for ease of maintenance it is useful to survey them separately.
There are dedicated surveys of eye-tracking systems and software elsewhere; nonetheless, here are a few links that I came across:
OpenEyes is a freely available Matlab and C toolkit. Citing from their web site, "openEyes provides hardware designs and software useful for the tracking of human eye movements. The development of openEyes stems from the recognition that while the cost of hardware used in eye tracking systems has precipitously dropped, there is lack of freely available software that implements even long-established eye-tracking algorithms."
"The openEyes toolkit includes algorithms to measure eye movements from digital videos, techniques to calibrate the eye-tracking systems, and example software to facilitate real-time eye-tracking application development. Contributions to the openEyes Toolkit have been made by (alphabetically) Jason Babcock, Dongeng Li, Derrick Parkhurst, David Winfield. Questions should be directed to Derrick Parkhurst (derrick dot parkhurst - at - gmail dot com)."
SPIC (Saccadic latency measurement and analysis), developed by Roger Carpenter at U. Cambridge (email), allows acquisition, analysis, display and printing of saccadic latency data. The acquisition part requires the ViSaGe system. Files from an Ober Saccadometer can be imported, however. The latter process can be highly automated. The software is very useful for measuring saccadic latencies to a broad array of stimuli and modelling the responses using the LATER model.
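For readers unfamiliar with it, the LATER model assumes a decision signal that rises linearly from baseline to a threshold at a rate drawn afresh on each trial from a normal distribution, so that reciprocal latency is normally distributed. A minimal simulation of that idea in Python (not part of SPIC; units are arbitrary):

```python
# Minimal LATER-style simulation (not SPIC code): latency = distance / rate,
# with the rise rate drawn from a normal distribution on every trial.
import numpy as np

rng = np.random.default_rng(1)
mu_rate, sd_rate, distance = 5.0, 1.0, 1.0      # illustrative, arbitrary units
rates = rng.normal(mu_rate, sd_rate, 10000)
rates = rates[rates > 0]                        # discard non-rising trials
latencies = distance / rates

# LATER's signature: 1/latency is normally distributed, so latencies plotted on
# a reciprocal axis against a probit scale (a "reciprobit plot") fall on a line.
print(f"median latency = {np.median(latencies):.3f} (arbitrary units)")
```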
Experiment Builder by SR Research is a commercial psychological experimenting system supporting the EyeLink series of eye trackers. A companion program is DataViewer for eye-movement data analysis and visualization. See the entry above.
FixationAnalysis is a free Windows tool for analyzing eye movements, computing saliency/heat maps and computing the similarity between predicted saliency maps and eye movements. The implemented similarity metrics are described in Le Meur & Baccino (2012). Email Olivier Le Meur for the password.
Vassilios Krassanakis and colleagues from Athens developed an open source Matlab toolbox for post-analysis of eye movement recordings. EyeMMV (Eye Movements Metrics & Visualizations) is based on a two-step spatial dispersion threshold for fixation identification and is useful for eye movement analysis, fixation identification, generating eye movement metrics, and eye-tracking data visualization. The source code is found on GitHub. The package is fully described in the open access paper:
Krassanakis V, Filippakopoulou V & Nakos B (2014). EyeMMV toolbox: An eye movement post-analysis tool based on a two-step spatial dispersion threshold for fixation identification. Journal of Eye Movement Research, 7(1): 1, 1-10.
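To illustrate the general principle of dispersion-based fixation detection, here is a generic single-threshold sketch in Python (this is not EyeMMV's two-step algorithm, just the basic idea):

```python
import numpy as np

def detect_fixations(t, x, y, max_dispersion=1.0, min_duration=0.1):
    """Generic I-DT-style fixation detection: a fixation is a run of gaze
    samples whose spatial dispersion stays below max_dispersion (same units
    as x and y) for at least min_duration (same units as t).
    t, x, y are NumPy arrays of equal length."""
    fixations, i, n = [], 0, len(t)
    while i < n:
        j = i
        # grow the window as long as adding the next sample keeps dispersion small
        while j + 1 < n:
            xs, ys = x[i:j + 2], y[i:j + 2]
            if (xs.max() - xs.min()) + (ys.max() - ys.min()) > max_dispersion:
                break
            j += 1
        if t[j] - t[i] >= min_duration:
            fixations.append((t[i], t[j], x[i:j + 1].mean(), y[i:j + 1].mean()))
            i = j + 1                      # continue after this fixation
        else:
            i += 1                         # too short: slide the window forward
    return fixations                       # list of (t_start, t_end, mean_x, mean_y)
```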
LandRate is an extension of EyeMMV; it is a new, comprehensive MATLAB toolbox for post-experimental eye movement analysis. In one step it generates a full analysis report from the eye tracking data. The toolbox supports the computation of a new aggregated index (LRI) for rating characteristics of landscape photographs. The index combines quantitative eye tracking metrics and expert opinions; it can be easily adapted to similar fields (MATLAB R2017a or later).
Krassanakis V., Misthos M. L., & Menegaki M. (2018). LandRate toolbox: an adaptable tool for eye movement analysis and landscape rating. In P. Kiefer et al. (eds.): Proceedings of the 3rd International Workshop on Eye Tracking for Spatial Research, Zurich, Switzerland, pp. 40-45.
LandRate conference presentation.
Table 1 in Krassanakis et al.'s (2014) open access paper lists freely available eye movement analysis software. It gives references to ILAB, Eyelink Toolbox, iComp, openEyes, eyePatterns, ASTEF, iComponent, OGAMA, ITU Gaze Tracker, GazeAlyze, EHCA Toolbox, GazeParser.
A bio-inspired model for the control of vergence eye movements was developed in the Dept. of Informatics (DIBRIS) at the U. of Genoa, Italy. To quote the authors (Gibaldi, Vanegas, Canessa, Sabatini), "The robustness and adaptivity of the bio-inspired approach allows a control that is easily portable to a stereo head with different kinematic characteristics, and that is robust to mechanical imprecision, as well as to changeable and unpredictable lighting condition of real environments."
Here are two videos illustrating how the software can be used:
Video of the demo
Video of the algorithm on an iCub stereo head
Publications
Gibaldi, A., Vanegas, M., Canessa, A., & Sabatini, S. P. (2017). A portable bio-inspired architecture for efficient robotic vergence control. International Journal of Computer Vision, 121(2), 281-302.
Gibaldi, A., Canessa, A., Chessa, M., Sabatini, S. P., & Solari, F. (2011). A neuromorphic control module for real-time vergence eye movements on the iCub robot head. In Humanoid Robots (Humanoids), 2011 11th IEEE-RAS International Conference, pp. 543-550.
eyeTrackR is an R package for users of the Eyelink eye-trackers that helps organise eye-tracking data, avoid mistakes in data cleaning, and generally aims to make processing such data easier. It can be downloaded from CRAN, and there is also a detailed guide with plenty of examples and a walkthrough available.
In addition, there is a chat channel for asking questions, discussing future plans, or talking about anything related to eye-tracking data.
(>> back to table of contents)
Vision research has always had close links to virtual reality and robotics. This is not the place to give even a rough overview of what is available in these areas, but I will include a few pointers to packages that could be of interest to vision researchers.
Vizard by WorldViz is an impressive, powerful generator of virtual reality which I first saw at VSS 2003. See their website for colourful demonstrations.
Seeing Machines is an Australian-based company developing vision-based human-computer interfaces and tracking systems. One product, faceLAB, provides head-pose, gaze, and eyelid-closure tracking. See their website for more information.
(>> back to table of contents)
Psychophysics software is often tied to specific hardware, and one point where this becomes particularly obvious is when it comes to acquiring subject responses with high temporal accuracy. Given that non-real-time operating systems are now used almost exclusively, the once straightforward approach of just using a sturdy switch no longer suffices. Quite a number of software/hardware systems now promise millisecond accuracy. Here is a rather sketchy list as a starter into the secret world of "button boxes". Please feel free to point to other boxes you have seen.
SciencePlus is a European reseller for psychological experimenting systems who also distributes some of the response boxes mentioned here.
James, He & Liu (Neuroimage, 2005) reviewed two plastic keyboards (GrandTec's "Virtually Indestructible Keyboard" and Adesso's "Foldable Keyboard") and reported that GrandTec's keyboard appeared suitable for use in MRI.
Standard graphics hardware offers only 8-bit gray-scale resolution, which is not enough for psychophysics. There are software and hardware solutions to the problem (Bach, Meigen & Strasburger, 1997). In software, gray-scale resolution can be improved at the cost of spatial resolution by dithering; on the hardware side the trade-off is against color. Go to the section on monitor grayscale resolution in the Introduction for more information.
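As a minimal sketch of the dithering idea (not taken from any of the packages mentioned here): each 2x2 block of physical pixels is set so that its mean approximates the requested level, buying two extra bits of effective luminance resolution at half the spatial resolution.

```python
import numpy as np

def dither_2x2(target):
    """target: 2-D array of desired gray levels (float, 0-255).
    Returns an 8-bit array twice the size in each dimension whose
    2x2 block means approximate the requested levels."""
    base = np.floor(target)
    frac = np.round((target - base) * 4).astype(int)     # fractional part in quarters
    pattern = np.array([[0, 2], [3, 1]])                 # Bayer-style ordering
    out = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
    # within each block, bump 'frac' of the four pixels up by one level
    bump = np.tile(pattern, target.shape) < np.repeat(np.repeat(frac, 2, axis=0), 2, axis=1)
    return np.clip(out + bump, 0, 255).astype(np.uint8)

# Example: a luminance ramp with finer-than-8-bit steps rendered via dithering
ramp = np.linspace(0, 255, 608).reshape(1, -1)
image = dither_2x2(ramp)                                 # shape (2, 1216), dtype uint8
```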
Pelli's video attenuator was the popular solution for the Macintosh and was supported by lots of software (Section Grayscale resolution)
See the entry on Cambridge Research Systems above, or go here
A similar device is the Video Switcher from LOBES, which is based on the same principle as the Pelli-Zhang video attenuator but can drive both monochrome and color monitors (analog). A trigger can be used to measure reaction time or to synchronize response recording accurately. The design and test results are described in Journal of Neuroscience Methods, 2003, 130(1): 9-18. In later versions the switch between grayscale and color display can be controlled by software (so that, when not needed, the device is transparent to the system). Price as of 2012 was $280.
Phosphor properties of the stimulus presentation monitors (luminance, contrast, and decay times) are no longer an issue now that CRTs are mostly obsolete. The new display technologies pose different problems for vision research that are addressed in the section on monitors above.
Sometimes technological advancement means losing options. We can (literally) no longer reach the moon (at least for many years to come). In the 80s, timing could be controlled on a CRT with extremely high precision, down to sub-microsecond accuracy. The method was called the 'point-plot technique': graphical data were fed through a special interface to what was called an 'x-y-z display' or to an oscilloscope, both devices in which the beam deflection is not tied to a raster generator (see e.g. Treutwein's work). An orderly janitor (authorized by the head of department) had cleaned that out in our lab, so that we can no longer replicate certain work. Michael Herzog's lab in Lausanne still has a modern variant of the technique.
For many years now, LCD displays have surprised the credulous experimenter with unreliable timing due to hidden screen buffering. Here are two threads from the noughties on CVNet, one on fast CRTs (2007) and one on timing (2006). After that, things became even worse.
As of 2015 (?), monitor technology for vision research appears to be catching up again, however: VPixx Technology produces 120-Hz monitors with 1 ms latency. In particular, they now also make an extremely fast DLP projector, the PROPixx, with an impressive 1440 Hz refresh rate (greyscale) that is on par with the point-plot technique of the 80s (data sheet).
For literature on the topic go to the section Monitors and timing.
The Black Box Toolkit is a hardware/software system that allows verifying timing accuracy in complex experimental setups. From the website: "To improve replication and enhance credibility researchers should self-validate, or self-certify, their own studies in terms of millisecond presentation, synchronization and response timing accuracy." The system is described in a number of publications by Plant and coworkers in Behavior Research Methods.
There was a nice little discussion on CVNet on how to obtain low-price neutral density filters.
Aaron Schurger describes an inexpensive MRI-compatible dichoptic visual stimulation system in a paper in J. Neurosci. Methods.
(>> back to table of contents)
The title of this psychophysics software overview limits it to visual stuff. However, I couldn't resist including a few pointers to other psychophysics software when it was pointed out to me.
The MLP Toolbox and STAIRCASE toolbox, developed by Massimo Grassi of Padua, are Matlab toolboxes for auditory threshold estimation, implementing the maximum likelihood adaptive procedure (Green 1990, 1993) and staircase procedures, respectively. The latter implements the method of limits (Fechner, 1889), the simple up-down (von Békésy, 1947; Dixon & Mood, 1948), and the transformed up-down method (Wetherill & Levitt, 1965; Levitt, 1971). The toolboxes come with many built-in experiments and a graphical interface. A precursor of MLP is described by Grassi & Soranzo (2009) in Behavior Research Methods 41, 20-28.
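To illustrate the transformed up-down logic, here is a generic Python sketch (not code from the STAIRCASE toolbox); the 2-down/1-up rule shown converges on the 70.7%-correct point (Levitt, 1971):

```python
import numpy as np

def run_staircase(respond, start_level=20.0, step=2.0, n_reversals=12):
    """2-down/1-up staircase. respond(level) -> True/False is supplied by
    the caller (the actual trial routine, or a simulated observer)."""
    level, correct_in_row, direction = start_level, 0, 0
    reversal_levels = []
    while len(reversal_levels) < n_reversals:
        if respond(level):
            correct_in_row += 1
            if correct_in_row == 2:                # two correct in a row -> harder
                correct_in_row = 0
                if direction == +1:                # direction change = reversal
                    reversal_levels.append(level)
                direction = -1
                level -= step
        else:                                      # one error -> easier
            correct_in_row = 0
            if direction == -1:
                reversal_levels.append(level)
            direction = +1
            level += step
    return np.mean(reversal_levels[-8:])           # threshold estimate

# Demo with a simulated observer (logistic psychometric function centred on 10;
# the staircase converges near that function's 70.7%-correct level):
rng = np.random.default_rng(0)
simulated = lambda level: rng.random() < 1 / (1 + np.exp(-(level - 10)))
print(f"estimated threshold ~ {run_staircase(simulated):.1f}")
```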
(>> back to table of contents)