Figure taken from the GANalyze paper: http://ganalyze.csail.mit.edu/

Memory Game

This is my latest implementation of the memory game used to collect (human) memorability scores for large sets of images, as first proposed by Isola et al. (2011).

It's very much based on the code we used for GANalyze and, to a lesser extent, MemCat. However, I have since done considerable improving, cleaning, and reorganizing, so I can no longer guarantee that it works 100% the same.

Please test it thoroughly before using it for your own memory studies.

Overview

Testing locally

A first thing you might want to try is running the game locally. What follows is an example of how to do that with the MemCat images.¹

First clone the repo.

git clone https://github.com/LoreGoetschalckx/MemoryGame.git
cd MemoryGame

¹ Note: the current version of the code presents all the images in the same size and aspect ratio (square). This wasn't the case in the original MemCat study. If you want to change this behavior, you will have to adjust the code that renders the images.

Adding images

The commands below will download the MemCat image set and reorganize it into the right folder structure. If you're using your own image set, it's easiest if you organize it in the same way.

cd stimuli
python download_memcat.py
cd ..

Do your images come in sets from which you want to show no more than one image per participant? For example, in the GANalyze study, we had multiple modified clones of the same seed image and wanted to make sure a single participant would never see more than one of them. For this clustering, you will have to add one level of nesting to the folder structure: make sure images belonging to the same set or cluster are placed in the same sub-folder, as shown below.

stimuli
|---your_image_set
|   |---targets
|   |   |---one_cluster
|   |   |   |   *.jpg
|   |   |---another_cluster
|   |   |   |   *.jpg
|   |---fillers
|   |   |---one_cluster
|   |   |   |   *.jpg
|   |   |---another_cluster
|   |   |   |   *.jpg
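
If you're unsure whether your images ended up in the right shape, a quick sanity check helps. The sketch below only assumes the structure shown above; your_image_set is a placeholder for your own folder name.

import os

# "your_image_set" is a placeholder; point this at your own folder.
image_set = os.path.join("stimuli", "your_image_set")

for role in ("targets", "fillers"):
    role_dir = os.path.join(image_set, role)
    for cluster in sorted(os.listdir(role_dir)):
        cluster_dir = os.path.join(role_dir, cluster)
        if not os.path.isdir(cluster_dir):
            continue  # skip stray files
        n_images = sum(f.lower().endswith((".jpg", ".jpeg", ".png"))
                       for f in os.listdir(cluster_dir))
        print(f"{role}/{cluster}: {n_images} images")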

Pregenerating game tracks

Every participant will be assigned a game "track". A track consists of multiple sequences of images. One sequence is similar to a block (or sometimes I call it a "run"). Images are never repeated across sequences. When a participant completes a sequence, they can continue with the next one in their track, or return to the game at a later time. The game remembers which track a participant was assigned and where (i.e., which sequence) they left off.

Each game track is fully described and defined by a .json file, called a sequenceFile here. The sequenceFiles, unfortunately, are not created on the fly but need to be pregenerated.
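
The exact schema of a sequenceFile is defined by initializeWorkerSequences.py, so rather than assume field names, you can inspect one generically. A minimal sketch (previewSequence.json is just an example file name; it's generated in the steps below):

import json

# Example file; substitute any sequenceFile produced by
# initializeWorkerSequences.py (see "Generating" below).
with open("sequences/previewSequence.json") as f:
    track = json.load(f)

# Report the top-level structure without assuming a schema.
if isinstance(track, dict):
    print("keys:", list(track.keys()))
elif isinstance(track, list):
    print(f"{len(track)} entries; first entry: {track[0]!r}")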

Generating

Run the commands below to pregenerate sequenceFiles (tested with Python 3.8.5). Make sure you generate at least as many tracks as you expect to have participants: set --num_workers high enough (the 10 below is just an example and almost certainly too low).

cd sequences
python initializeWorkerSequences.py --num_workers=10

If you have clustering (see above), do:

cd sequences
python initializeWorkerSequences.py --num_workers=10 --clustering=True

If you plan on using Amazon Mechanical Turk (AMT), you will need an extra track for the preview. It's a dummy track with images you don't use in the real game, so that AMT workers can try the game before accepting the HIT.

python initializeWorkerSequences.py --target_dir=preview --filler_dir=preview --preview=True

You should now see a file named previewSequence.json in the sequences folder.

Checking

You can visualize and explore the tracks to see if they match your expectations.

cd stimuli/memcat
python -m http.server 8000

Open a second shell, navigate to the MemoryGame folder and run:

cd sequences
python -m http.server 7000

Then go to http://localhost:7000/ in your browser and explore the tracks. Kill the http servers when you're done.

You can also run the commands below and check the repeat_probabilities.png file they produce.

cd sequences
python inspectSequenceDiagnostics.py

Playing the game!

To play the game locally, you'll have to serve the images and the front-end of the game, and run the server.py script to take care of back-end tasks: assigning participants to a track, blocking participants who have completed the maximum number of sequences or who failed the vigilance performance criterion, etc.

Open 3 shells and run, respectively:

# shell 1: serve the images
cd stimuli/memcat
python -m http.server 8000

# shell 2: run the back-end server
cd server
python server.py

# shell 3: serve the game front-end
cd memorygame
python -m http.server 7000

Then go to http://localhost:7000/ in your browser. You should see a page with instructions.
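
If juggling three shells gets tedious, a small launcher can start all three processes from the repo root. This is a convenience sketch only; it assumes the same paths and ports as the commands above.

import subprocess
import time

# Start the image server, the back-end, and the front-end (same paths
# and ports as the shell commands above).
procs = [
    subprocess.Popen(["python", "-m", "http.server", "8000"], cwd="stimuli/memcat"),
    subprocess.Popen(["python", "server.py"], cwd="server"),
    subprocess.Popen(["python", "-m", "http.server", "7000"], cwd="memorygame"),
]

try:
    while True:
        time.sleep(1)  # keep running until interrupted
except KeyboardInterrupt:
    for p in procs:
        p.terminate()  # stop all three servers on Ctrl+C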

Look at the data

Data will be stored after every sequence, even if the submit button hasn't been clicked yet. The submit button is just a way for participants to let you know that they're done playing for now and that they'd like to be compensated for the number of sequences they've just completed.

Have a look at the following data files:

  • data.csv: This is the raw data you will want to analyze. One line is one trial (see the loading sketch after this list).
  • data_sandbox.csv: If you are running the game from within AMT's sandbox, the data will be stored here.
  • assignedSequences.csv: This is used to keep track of who has been assigned which track, what their next sequence will be, whether they've been blocked, etc.
  • submittedRuns.csv: When a participant clicks submit, it will add a line to this file, so you can keep track of who needs to be compensated. A participant can return to the game later and submit more sequences, so it's possible they have more than one line in this file.
  • dashboard.json: A rough indication of how many sequences have been completed and how many passed the vigilance performance criterion. It's only rough because it will likely include your test runs and debug runs too.
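
As a starting point for analysis, you can load the raw trial data with pandas. The path below is an assumption (adjust it to wherever your server.py writes its files), and it's safest to inspect the column names rather than take any for granted.

import pandas as pd

# Hypothetical path; adjust to wherever server.py writes its data files.
df = pd.read_csv("server/data.csv")

print(df.shape)          # one row per trial
print(list(df.columns))  # inspect the column names before analyzing
print(df.head())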

Going online

Once you've thoroughly tested the game locally, you can start testing it online. You will need a Python server for the back-end and a web server for the front-end. If you don't have either, I can recommend PythonAnywhere (there is also an EU version).

Adjust the serverURL and baseURL settings in both config.json files (the front-end's and the server's) accordingly.

You will then have a URL for your game that you can share with participants directly, or embed in AMT.

One thing you might want to test is whether participants finishing a sequence around the same time can overwrite each other's data. The file locks in server.py are meant to prevent that, but it's better to be sure; see the sketch below.
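
One way to stress-test this is to fire a batch of near-simultaneous submissions at the server and then check data.csv for lost or garbled rows. The sketch below is only a template: the URL, route, and payload are assumptions, so replace them with whatever main.js actually sends to server.py.

import concurrent.futures

import requests

# Hypothetical endpoint and payload; replace with the actual route and
# fields that main.js posts when a sequence finishes.
SERVER_URL = "http://localhost:5000/submit"

def submit(worker_id):
    payload = {"workerId": f"testWorker{worker_id}", "data": "dummy"}
    return requests.post(SERVER_URL, json=payload, timeout=10).status_code

# Fire 20 submissions at (nearly) the same time, then inspect data.csv
# by hand to verify no rows were lost or interleaved.
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    print(list(pool.map(submit, range(20))))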

AMT

If you'd like to recruit participants through AMT, you will want to embed the game as an iframe inside the AMT page. Check Anelise Newman's awesome notebook for pointers on how to do that.

The main.js should automatically recognize when a participant is participating through AMT and it will handle the required adjustments (e.g., read the instructions from instructions_mturk.js instead of instructions.js, detect the workerId automatically rather than having them type their participationId, adjust the submission behavior, etc.).

Please test it thoroughly in the AMT sandbox first.

Note: the way it is set up, you will need to set the standard reward for a HIT to the compensation for 1 sequence. If a worker completes more than one sequence within a single HIT, they can be paid for the additional sequences through a bonus. The game will send information on how much bonus a worker should receive to the AMT server. It's then up to you as the requester to extract that information and assign the right bonus; check the Boto3 documentation and the sketch below.
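
For the bonus step, the Boto3 MTurk client provides send_bonus. The sketch below shows the shape of such a call against the sandbox; the worker ID, assignment ID, and amount are placeholders, and deriving them from the game's records (e.g., submittedRuns.csv and the HIT answers) is up to you.

import boto3

# Sandbox endpoint; drop endpoint_url to pay real bonuses in production.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

mturk.send_bonus(
    WorkerId="A1EXAMPLEWORKER",            # placeholder
    AssignmentId="3EXAMPLEASSIGNMENT",     # placeholder
    BonusAmount="0.50",                    # dollars, as a string
    Reason="Bonus for additional memory game sequences completed.",
    UniqueRequestToken="bonus-A1EXAMPLEWORKER-1",  # guards against double pay
)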

Customize

If you'd like to change some aspects of the game, the first places to check are the config.json files and the instruction files (e.g., instructions.js and instructions_mturk.js). Also have a look at the different arguments that can be passed to initializeWorkerSequences.py.

Note: please always test thoroughly to be sure that the changes you make have the desired effect.

Acknowledgements

Reference

This implementation of the memory game is a reworked version of the one used for the GANalyze paper. If you use this code or work, please cite:

Goetschalckx, L., Andonian, A., Oliva, A., & Isola, P. (2019). GANalyze: Toward visual definitions of cognitive image properties. IEEE International Conference on Computer Vision, 5744-5753.

The following is a BibTeX reference entry:

@INPROCEEDINGS{goetschalckx2019ganalyze,
  author={L. {Goetschalckx} and A. {Andonian} and A. {Oliva} and P. {Isola}},
  booktitle={2019 IEEE/CVF International Conference on Computer Vision (ICCV)}, 
  title={GANalyze: Toward Visual Definitions of Cognitive Image Properties}, 
  year={2019},
  volume={},
  number={},
  pages={5743-5752},
  doi={10.1109/ICCV.2019.00584}}
