This is the Emotion in Motion experiment framework from the Music, Sensors and Emotion research group. For more information, contact us at emotion.in.motion@musicsensorsemotion.com.
Complete documentation is available here.
- Clone the EiM git repository:
  git clone https://github.com/brennon/eim.git
  If you don't have git installed, and don't want to install it, you can download a zipped archive of the repository.
- Install MongoDB. If you're on a Mac and use Homebrew, use:
  brew install mongodb
  Otherwise, installers are available for various platforms.
- Start `mongod` on the default port. If you installed with Homebrew:
  mongod --config /usr/local/etc/mongod.conf
  If you're on Windows, you'll need to first create the default data directory, then run `mongod`:
  mkdir C:\data\db\
  "C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe"
- Install Node.js. The framework currently runs best on Node.js v0.12.7. We recommend downloading the Node.js installer from this link, instead of installing with Homebrew.
- Install the app Node dependencies using `npm` in the root of the repository:
  npm install
- To get started with the default Emotion in Motion MongoDB databases, import the data from the `mongodb-dump` directory. From the root directory of the repository:
  mongorestore -d emotion-in-motion-dev --drop --noIndexRestore ./mongodb-dump/emotion-in-motion-dev
  mongorestore -d emotion-in-motion-test --drop --noIndexRestore ./mongodb-dump/emotion-in-motion-test
  mongorestore -d emotion-in-motion-production --drop --noIndexRestore ./mongodb-dump/emotion-in-motion-production
  These three databases are used in the development, test, and production Node environments, respectively.
- Start the Max helper project located at MaxMSP/EmotionInMotion/EmotionInMotion.maxproj. You'll need Max 6 or later.
- Install the `grunt-cli` package globally:
  npm install -g grunt-cli
  If your machine is running Windows and attached to a domain, you may need to add the following to your path:
  %USERPROFILE%\AppData\Roaming\npm
- Start the server. In the root directory of the repository:
  grunt
- Browse to http://localhost:3000/.
Many commands described here behave differently depending on the Node environment, which is set on the command line by prepending the command with `NODE_ENV=environmentname`. For instance, to start the server from the Installation sequence in the 'development' environment, use `NODE_ENV=development node_modules/grunt-cli/bin/grunt`.
As an example of the differences that occur between environments, when the above-mentioned command is run in the development environment, the original versions of all of the scripts in the framework are used when running the server. In the production environment, however, a 'minified' version is used (all 'extra' information is removed from all script files, and they are all glued together into one long file). It is much more efficient for the web server to send this one minified file to a client than all of the individual scripts that this application uses. This is one example of how the production environment, in general, starts up a much faster, more efficient server. The development environment, on the other hand, is the environment you'll likely want to use when making changes to the framework. We will attempt to be clear in these documents when it is important to choose one environment over another.
Of particular note here are the several databases you'll see in your MongoDB instance after loading the demo app data. One of the `emotion-in-motion-dev`, `emotion-in-motion-test`, or `emotion-in-motion-production` databases is chosen for use according to the current Node environment.
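If you'd like to confirm that the demo data was restored correctly, you can spot-check the collections from the mongo shell that ships with MongoDB. A minimal check, assuming the default database names above, might be:
// From the mongo shell: switch to the development database and spot-check the two collections the framework reads
db = db.getSiblingDB('emotion-in-motion-dev');
db.experimentschemas.count();    // study specification documents
db.media.count();                // media documents available to the media pool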
There are a few things you'll likely want to change about your installation straight away (the terminal numbers of the machines on which you install your app, the default language, etc.). All of these changes can be made in the file at `config/custom.js`. The options for configuring this file are described both in comments in the file itself and in the documentation for the `CustomConfiguration` module.
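For orientation, the only option these docs quote verbatim is the default language (see the Translation section below); a trimmed `config/custom.js` might therefore look like the sketch below, with machine-specific options such as the terminal number filled in under whatever names the file's own comments and the `CustomConfiguration` documentation specify:
// config/custom.js (sketch; only the defaultLanguage line is quoted elsewhere in these docs)
customConfiguration.defaultLanguage = 'en';   // starting language shown before a participant picks one

// Machine-specific settings, such as the terminal number, also belong in this file;
// their exact property names are documented in the comments of config/custom.js itself
// and in the CustomConfiguration module documentation.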
As you dig into customizing the framework for your own use, you should be aware of debugging mode. When viewing any page in your application, pressing the 'D' key twice will toggle debugging mode. This enables two things:
- View the current `TrialData` document. As you'll read below, this document is where all information about the current experiment session is stored. In debugging mode, you can see it at the bottom of the page as it updates in real time.
- Advance through the experiment without impediment. Some sections of your design may require the user to, for instance, answer questions before being allowed to proceed. In debugging mode, the right arrow key will advance you to the next slide irrespective of these impediments.
A study using Emotion in Motion is described by a MongoDB document (much like a JSON file) stored in the MongoDB database. Specifying study structures in this way means that creating a new study requires only a knowledge of JSON, as long as the study only rearranges or modifies components already present in the provided demonstration study. JSON is a simple, textual format for representing structured data--see this site for a gentle introduction.
By default, the application looks in the `experimentschemas` collection in the database for study specification documents. If more than one of these documents is present, one is chosen at random for presenting your study to the participant. (Thus, if only one of these documents is present, the structure it describes will always be used.) The demo application contains and presents only one study, with the following structure:
- Welcome screen
- Consent form
- Several instructions screens (including audio tests and sensor placement)
- Several preliminary questionnaires
- Playback of a 'control' sound
- Questionnaire about the control sound
- Playback of a randomly selected sound excerpt
- Questionnaire about the previous sound excerpt
- Final questionnaire
- Emotion indices
- Thank you screen
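If you'd like to inspect (or later edit) the document that produces this structure, it lives in the `experimentschemas` collection; from the mongo shell, for example:
// Print every study specification document in the current database
db.experimentschemas.find().pretty();
// If this returns more than one document, remember that one is chosen at random for each session
db.experimentschemas.count();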
This structure is completely customizable, which we describe below. This default study, like any other study, is described in a MongoDB document. This MongoDB document has the following basic structure:
{
"trialCount" : 2,
"mediaPool" : [
ObjectId("547c92686577a50a2ebde518"),
ObjectId("547c92956577a50a2ebde519"),
ObjectId("547c92cb6577a50a2ebde51a"),
...
],
"sensors" : [
"eda",
"pox"
],
"structure" : [
{
"name" : "consent-form"
},
{
"name" : "start"
},
{
"name" : "sound-test"
},
{
"name" : "eda-instructions"
},
{
"name" : "pox-instructions"
},
{
"name" : "signal-test"
},
{
"name" : "questionnaire",
"data" : { ... }
},
{
"name" : "questionnaire",
"data" : { ... }
},
{
"name" : "questionnaire",
"data" : { ... }
},
{
"name" : "media-playback",
"mediaType" : "fixed",
"media" : ObjectId("547c92416577a50a2ebde517")
},
{
"name" : "questionnaire",
"data" : { ... }
},
{
"name" : "media-playback",
"mediaType" : "random"
},
{
"name" : "questionnaire",
"data" : { ... }
},
{
"name" : "questionnaire",
"data" : { ... }
},
{
"name" : "emotion-index"
},
{
"name" : "thank-you"
}
]
}
At the top level of the JSON object, we have four properties: `trialCount`, `mediaPool`, `sensors`, and `structure`.
The `trialCount` property takes an integer for its value that specifies the number of media excerpts that will be presented over the course of the session. Here, we specify that two media excerpts will be played:
{
"trialCount" : 2,
...
}
The `mediaPool` property takes an array as its value. This array holds the `ObjectId`s of MongoDB documents stored in the MongoDB database that represent all media files that are available for presentation during a session. The framework will randomly draw as many media files for presentation from this array as are specified by the `trialCount` property. This property is required.
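To gather the `ObjectId`s to place in `mediaPool`, you can list the documents in the `media` collection (described later in these docs); from the mongo shell, for example:
// List the _id, label, and title of every media document so their ObjectIds can be copied into mediaPool
db.media.find({}, { _id: 1, label: 1, title: 1 });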
The `sensors` property specifies those sensors that will be used during the study session. This property is currently not observed by the framework: the Max helper application records the sensor data directly to files that are stored on disk.
The `structure` property describes the structure of the study itself. It takes as its value an array of nested objects. Each of these nested objects describes a slide that will be presented as the participant advances through the session. The order of slides in the `structure` property matches the order of slides as they are presented to the participant. This property is required.
The objects nested under the `structure` property each describe one slide in the study session. Objects that provide only a `name` property represent a slide that has been hard-coded into the Emotion in Motion framework. For instance, this example `structure` property presents only the consent form, EDA and POX sensor instructions, and thank you screens:
{
"structure": [
{
"name" : "consent-form"
},
{
"name" : "eda-instructions"
},
{
"name" : "pox-instructions"
},
{
"name" : "thank-you"
}
]
}
It should be clear how the names of these slides correspond to the sections referenced at the beginning of Study Specification Structure. To change the text or design of any of the slides that only include a `name` property, edit their corresponding HTML files in `public/modules/core/views/`. For instance, to change the text of the consent form, simply edit `public/modules/core/views/consent-form.client.view.html`.
Other slides in the `structure` property's array may also have a `data` property. Such slides are constructed dynamically based on the information you provide in the `data` object. These often represent slides that present questionnaires, where you may want to ask different questions than those included in the base Emotion in Motion application. In fact, at present, the only type of slide object that supports a `data` property is the `questionnaire` slide.
For now, we expect that most people's needs will be met by the ability to design experiments that involve presenting some static information to participants, presenting them with a number of pre-selected or randomly selected media excerpts, and asking them questions at various times. We'll describe below how to configure media slides, and we've already discussed how to edit 'static' slides (see Adding Slides for instructions on how to add your own custom, static slides). Here, we describe the third big piece of the puzzle: how to design questionnaires using the `questionnaire` slide type.
As noted previously, questionnaire slide objects support an additional `data` property:
{
"structure": [
{
"name": "questionnaire",
"data": { ... }
}
]
}
Much like the outer JSON object describes an overall study session, the `data` property of a slide with a `name` of `'questionnaire'` describes the questionnaire-based slide itself. The `data` property takes an object as its value, and supports three properties on this object: `title`, `introductoryText`, and `structure`.
The `title` property of a `questionnaire`'s `data` object takes a string that will be displayed as the title heading of the screen. This property is optional.
The `introductoryText` property of a `questionnaire`'s `data` object takes a string that will be displayed below the title heading. This property is optional.
The `structure` property of a `questionnaire`'s `data` object takes an array of objects that represent, in order, the individual questions presented on the questionnaire. This property is optional. The allowed properties of the individual question objects are described below.
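Putting these three properties together, a complete `questionnaire` slide object might look like the following sketch. The title, introductory text, and single Likert-type question are purely illustrative (they are not copied from the demo study), and the individual question properties used here are described next:
{
    "name" : "questionnaire",
    "data" : {
        "title" : "Background",
        "introductoryText" : "Please answer the following question before continuing.",
        "structure" : [
            {
                "questionType" : "likert",
                "questionId" : "musical_expertise",
                "questionLabel" : "I consider myself a musical expert.",
                "questionStoragePath" : "data.questionnaire_one.musical_expertise",
                "questionOptions" : {
                    "choices" : [
                        { "label" : "Strongly disagree", "value" : 1 },
                        { "label" : "Somewhat disagree", "value" : 2 },
                        { "label" : "Neither agree nor disagree", "value" : 3 },
                        { "label" : "Somewhat agree", "value" : 4 },
                        { "label" : "Strongly agree", "value" : 5 }
                    ]
                }
            }
        ]
    }
}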
questionType
A question object can be one of four types: a Likert-type scale question, a group of radio buttons (from which one choice is allowed), a group of checkboxes (from which zero, one, or more choices are allowed), or a dropdown selection list. To specify the type of question, supply one of the following strings as the value for the `questionType` property: `'likert'`, `'radio'`, `'checkbox'`, or `'dropdown'`. This property is required.
questionId
The `questionId` property is used to dynamically associate the various parts of the question together. This identifier must be a string and must be unique among all questions included in the questionnaire. This property is required.
questionLabel
The `questionLabel` property takes a string that is used as the question text itself. This property is optional.
questionLabelType
If the value of `questionLabelType` is `'labelLeft'`, the label of the question will appear above the question and justified left. Otherwise, the label will be set in a larger font and centered above the question. This property is optional.
questionLikertMinimumDescription
The `questionLikertMinimumDescription` property takes a string that is used as a description at the left-most end of the Likert-type scale. This property is optional.
questionLikertMaximumDescription
The `questionLikertMaximumDescription` property takes a string that is used as a description at the right-most end of the Likert-type scale. This property is optional.
questionStoragePath
The `questionStoragePath` property represents a dot-delimited path into the `trialData` object that is generated as the participant completes the session. This `trialData` object holds all information about the participant's session. Typically, all input received from the participant is stored in a top-level property of this object named `data`. So, to store the participant's response to a question about their musical experience, we might specify the `questionStoragePath` property for this question as `data.questionnaire_two.musical_expertise`. This would result in a `trialData` object that looks something like the following (with other distracting information removed):
{
    // Other properties here
    data: {
        questionnaire_two: {
            musical_expertise: ...  // Participant's response to this question is stored as this value
            // Perhaps other responses here
        }
    }
}
The `questionStoragePath` property is required.
questionRadioOptions
When "radio"
is specified as the value for the questionType
property, the application expects a questionRadioOptions
property to be provided, as well. The questionRadioOptions
property contains the information to be used for the individual radio buttons. This is an ordered array of objects: each object represents one radio button. Each object should have a label
property that is used as the label for the radio button, and a value
property that is used for the value stored in the trialData
object when the participant selects this particular radio button:
[ {"label": "Yes", "value": true}, {"label": "No", "value": false} ]
\\ Providing this value for `questionRadioOptions` would result in two radio buttons. The first would be given a label in the on-screen questionnaire of "Yes" and the second would be given a label of "No". Should the participant select the first button, the Boolean value `true` will be stored for their response. The Boolean value `false` will be stored should they select the second button.
[ {"label" : "Male", "value" : "male"}, {"label" : "Female", "value" : "female"} ]
\\ Providing this value for `questionRadioOptions` would result in two radio buttons. The first would be given a label in the on-screen questionnaire of "Male" and the second would be given a label of "Female". Should the participant select the first button, the string "male" will be stored for their response. The string "female" will be stored should they select the second button.
[ {"label" : "Male", "value" : 1}, {"label" : "Female", "value" : 2}, {"label" : "Not Specified", "value" : 3} ]
\\ Providing this value for `questionRadioOptions` would result in three radio buttons. The first would be given a label in the on-screen questionnaire of "Male", the second would be given a label of "Female", and the third would be given a label of "Not Specified". Should the participant select the first button, the number 1 will be stored for their response. Similarly, the numbers 2 and 3 would be specified for the selection of the second or third buttons, respectively.
questionOptions
The `questionOptions` property is used and required by all question types. In general, each question type requires that this be an object with a `choices` property, the value of which is an array. Each entry in the choices array should be an object that represents a single question choice. Each of these objects should have both a `label` and a `value` property. The `label` property should be a string used for the display of this choice. The `value` property should hold the value that will be stored when the participant selects this choice as their answer.
// questionOptions example:
{
choices: [
{
label: 'Strongly disagree',
value: 1
},
{
label: 'Somewhat disagree',
value: 2
},
{
label: 'Neither agree nor disagree',
value: 3
},
{
label: 'Somewhat agree',
value: 4
},
{
label: 'Strongly agree',
value: 5
}
]
}
questionLikertSingleImageSrc
The optional `questionLikertSingleImageSrc` property takes a string as its value that represents the path to an image to be displayed above and centered over a Likert-type scale:
{
...,
"questionLikertSingleImageSrc": "/modules/core/img/scale-above-positivity.png",
...
}
questionIsAssociatedToMedia
The optional (default = `false`) `questionIsAssociatedToMedia` property specifies that a question corresponds to a particular media excerpt by taking a Boolean `true` or `false` as its value. When this value is set to `true`, multiple responses for questions with the same `questionStoragePath` property are stored in an ordered array that is used as the value for the `questionStoragePath` property in the final `trialData` document.
For example, in the demonstration study provided with the framework, two media excerpts are played. We present the same questionnaire to the participant following each media excerpt. In order to specify that responses on the questionnaire following the first excerpt are associated with the first excerpt (and the same for the second questionnaire and excerpt), the values for `questionIsAssociatedToMedia` for all questions in these questionnaires are set to `true`. So, for those questions with their `questionStoragePath` property set to `"data.answers.positivity"`, the corresponding section of a participant's `trialData` document might look something like this:
{
data: {
answers: {
positivity: [2, 5]
}
}
}
Here, the participant chose a value of `2` when responding to the positivity question following the first excerpt. Likewise, they chose a value of `5` when responding to the positivity question following the second excerpt.
questionRequired
Specifying `true` as the value for the `questionRequired` property indicates that the participant must answer this question before being allowed to proceed. Correspondingly, specifying `false` for this value indicates that the participant may proceed without answering this question. This property is optional (default = `true`).
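To tie the question properties together, here is a sketch of a single `checkbox` question object; the id, labels, and storage path are hypothetical, and the remaining values follow the descriptions above:
{
    "questionType" : "checkbox",
    "questionId" : "listening_contexts",
    "questionLabel" : "In which situations do you usually listen to music?",
    "questionStoragePath" : "data.questionnaire_one.listening_contexts",
    "questionIsAssociatedToMedia" : false,
    "questionRequired" : false,
    "questionOptions" : {
        "choices" : [
            { "label" : "While working", "value" : "working" },
            { "label" : "While exercising", "value" : "exercising" },
            { "label" : "While commuting", "value" : "commuting" }
        ]
    }
}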
Media slides are created by giving a slide object the name `'media-playback'`. In addition to the `name` property, these objects take two more properties: `mediaType` and `media`.
mediaType
The `mediaType` property is required, and specifies either that this media excerpt is pre-specified or that it is to be selected randomly from the media pool given in the top-level `mediaPool` property. To specify that the media is a set, pre-selected excerpt, give `mediaType` the value `"fixed"`. Use the value `"random"` to specify that the excerpt should be randomly selected from the media pool.
media
If `mediaType` is `"fixed"`, you must specify the `ObjectId` of the media to use for the media excerpt. This `ObjectId` does not necessarily need to be included in the `mediaPool` array.
These are example slide objects for fixed and random media excerpts, respectively:
{
"name" : "media-playback",
"mediaType" : "fixed",
"media" : ObjectId("547c92416577a50a2ebde517")
}
{
"name" : "media-playback",
"mediaType" : "random"
}
The first screen shown during the study is easy to customize. Simply edit the file at `public/modules/core/views/home.client.view.html` to suit your needs.
The simplest means of changing the visual design of a study is by editing the CSS files that govern the visual styling of the website. The main CSS file is located at `public/modules/core/css/core.css`. You may also provide your own CSS file. All CSS files (that end with the `.css` extension) placed in the `public/modules/**/css/` folders will be included.
Adding your own slides is straightforward, and involves three separate steps:
- Construct the HTML file for your slide.
- Add an entry to the routing file.
- Include your slide's name in the structure design document for your study in the MongoDB database.
The easiest way to go about doing this is by following the example of one of the existing HTML files in the `public/modules/core/views/` directory. There are a few important points to note about these files.
First, the Emotion in Motion framework uses the Bootstrap front-end framework for its visual styling. Most of the outer structure of each page is defined for you. Therefore, the HTML file for an individual slide only needs to contain what is to be presented as the actual slide content (there's no need to worry about the header, menus, etc.). For instance, this is (most of) the content of the file at `public/modules/core/views/pox-instructions.client.view.html`:
<div class="row">
<div class="col-md-12">
<h1 translate>Heart Sensor</h1>
<p translate>Now, insert your index finger into the grey plastic clip. Your finger should touch the rubber stopper at the end.</p>
</div>
</div>
<div class="row">
<div class="col-md-12 text-center">
<img src="/modules/core/img/hand-pox.png">
</div>
</div>
<div class="row">
<div class="col-md-12">
<p><button class="btn btn-primary btn-lg" data-ng-click="advanceSlide()" translate>Continue</button></p>
</div>
</div>
Of note, each 'section' of the slide is surrounded with a `div` styled with the `row` class provided by Bootstrap. This particular slide is made of three rows. The first row gives the header title of the slide and a paragraph with instructions. The second row contains a centered image. The final row presents the button that allows the user to advance to the next slide. All styling (and layout) is accomplished through classes provided by Bootstrap; get to know its simple collection of classes--they are your friends for easily accomplishing even the most complex of layouts. Of course, nothing is stopping you from simply entering text here--you're free to include whatever you'd like.
Take particular note of two things in this HTML. First, the tags that surround all text provided on this slide use the `translate` AngularJS directive (e.g., `<h1 translate>Heart Sensor</h1>`). Any tags that include this directive will be automatically translated to the selected language of the user, provided that you have supplied the necessary translations. Second, the `button` in the bottom row of the slide includes the `data-ng-click="advanceSlide()"` attribute. Including this attribute on any clickable element on your page will advance the user to the next slide as you have specified in the study specification structure. Without this attribute, there will be no way for participants to advance through your study.
With your HTML slide complete, add it to the routing file at `public/modules/core/config/core.client.routes.js`. Here's the end of that file that ships with the framework:
.state('thank-you', {
url: '/thank-you',
templateUrl: 'modules/core/views/thank-you.client.view.html'
});
}
]);
To add an HTML file that you've created in the `public/modules/core/views/` directory called `credits.client.view.html`, and for this file to be accessible at the `/credits` URL, we'd simply add an entry to the end of the file as follows:
.state('thank-you', {
url: '/thank-you',
templateUrl: 'modules/core/views/thank-you.client.view.html'
})
.state('credits', {
url: '/credits',
templateUrl: 'modules/core/views/credits.client.view.html'
});
}
]);
The very first argument to this call to the `state` function (here, `'credits'`) is the name by which you will refer to this slide in your study specification structure.
Finally, note that adding new questionnaires to your study does not mean that you must create a new questionnaire HTML file. Simply use the `"questionnaire"` name for your slide object in your study specification structure and use the `data` property of the slide object to describe the questionnaire. Everything else will be handled by the framework.
By whichever means is easiest for you (either through a GUI or the command line), edit your study specification structure to include your new slide. As noted above, use the name you provided in the routing file to refer to your new slide in the structure document.
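For instance, if you prefer the command line, one way to append the new credits slide to the end of the demo study's structure from the mongo shell might be the following (this assumes a single document in `experimentschemas`; adjust the query if you have several):
// Append a new slide object to the end of the structure array of the first matching study specification
db.experimentschemas.update(
    {},
    { $push: { structure: { name: "credits" } } }
);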
We are currently developing a means of video playback, but for now, only audio media excerpts are supported. The Max helper application controls the playback of these media files. In order to add new audio files for use in your study, you must:
- Add information about the file to the MongoDB database.
- Place the file in the location in which the Max application looks for media files.
Information about media files is stored in the MongoDB database just like study specification documents. Each media file has its own document in the `media` collection in the database. A typical document looks like this:
{
"_id" : ObjectId("538b777e2212e1eda2ff48ab"),
"artist" : "Minnie Riperton",
"bpm" : null,
"comments" : null,
"emotion_tags" : [
ObjectId("538bd9002212e1eda2ff5299")
],
"excerpt_end_time" : 205.54,
"excerpt_start_time" : 125.53,
"genres" : [
ObjectId("538bd1e52212e1eda2ff5297")
],
"has_lyrics" : true,
"key" : null,
"label" : "H005",
"source" : null,
"title" : "Reasons",
"type" : "audio",
"year" : ISODate("1974-01-01T00:00:00.000+0000"),
"file" : ObjectId("538b8ae7352f20fbd59e20d2")
}
The only required properties that a media document in the database must have are `artist`, `title`, and `label`. `artist` and `title` are straightforward. The `label` is the string that the Max application will use for finding and playing back the media excerpt. Here, then, Max will look for an `H005.wav` file if this media excerpt is selected for use in a session. In order to add your own media files, simply add a document to the `media` collection in the database (either through a GUI or the command line), with at least the following information:
{
artist: "Artist Name Here",
title: "Media Title Here",
label: "First Part of Excerpt Filename Here (without extension)"
}
The Max application looks for files in the `EiMpatch/media/` directory. If you add an excerpt to the database with the label `"Bananarama"`, Max will then expect to find the file `EiMpatch/media/Bananarama.wav`.
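As a concrete, hypothetical example, adding that excerpt from the mongo shell could look like the following (the artist and title are placeholders; the optional `type` field simply mirrors the document shown above):
// Add a new media document; Max will look for EiMpatch/media/Bananarama.wav during playback
db.media.insert({
    artist : "Bananarama",
    title : "Venus",
    label : "Bananarama",
    type : "audio"
});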
In the demo application, two types of data are collected, recorded, and saved during a session: the data in the `trialData` document/object, and the data recorded from the sensors.
`trialData` objects contain metadata about the experiment session as well as responses given by the participant during the session. The metadata contain, for instance, the date and time of the session, as well as a copy of the study specification structure that was used to generate the session. Every `trialData` object is associated with a UUID (universally unique identifier) generated by the application. Upon completion of a session, a JSON file containing the `trialData` object for the session is stored in the `trials/` directory.
Sensor data is collected, recorded, and saved by the Max helper application. A sensor data file is generated for every media excerpt playback, and these files are stored in the `EiMpatch/data/` directory. As an example, if the UUID `92a7d913-63d7-4e37-af01-a6ca19ae2be3` was generated by the application for a specific participant, the following files would be generated from the session:
# The sensor data files for media excerpts labeled "C002" and "T018":
.\EiMpatch\data\92a7d913-63d7-4e37-af01-a6ca19ae2be3_C002.txt
.\EiMpatch\data\92a7d913-63d7-4e37-af01-a6ca19ae2be3_T018.txt
# The trialData JSON file:
.\trials\92a7d913-63d7-4e37-af01-a6ca19ae2be3.trial.json
In the demo app, sensor data is recorded into space-delimited text files with a single-line header giving the name of each column.
The framework uses the Max software to handle playback and data capture for the experiments. The patches are enclosed in the project named `EmotionInMotion.maxproj`. Opening this project will open the `MAIN.maxpat` and `EiMsensors.maxpat` patches.
The `MAIN` patch is divided into three subpatches:
- `p EiM_BRAIN`: This subpatch handles OSC communication with the server, receiving instructions for playback, experiment labels, and language, and sending feedback regarding the progress of the experiment. It also has settings for configuring the different folders that contain the media and sensor data files. Additionally, it is possible to configure the header labels for the sensor data columns.
- `p soundTest`: This subpatch checks the configuration of the sound parameters to make sure the soundcard is turned on and that the headphones are correctly placed.
- `p PLAYBACK&RECORD`: This subpatch handles the playback of media and records sensor data into text files. The sensor data must be sent as an OSC message with the label `eim/sensorData`, followed by a list in which each column represents a different channel of data.
Included in the project is a DEMO patch with an example of the configuration used in our experiments with Electrodermal Activity (EDA) and Pulse Oximetry (POX) sensors connected to an Arduino. In the `patchers` folder, the following patches are available:
- `ArduinoEiM_EDA.maxpat`: Patch configured to receive data from an EDA sensor connected to an Arduino (see the Arduino script below).
- `ArduinoEiM_POX.maxpat`: Patch configured to receive data from a POX sensor connected to an Arduino (see the Arduino script below).
- `EDAtool.maxpat`: Patch that processes and extracts features from an EDA signal. It also evaluates and reports the signal's status.
- `HRfromPOX.maxpat`: Patch that processes HR from the POX signal. It also evaluates and reports the signal's status.
- `testServer.maxpat`: Demonstration patch to test communication with the server.
An Arduino script is available to be used with the DEMO patches. This script configures the Arduino to capture data from a sensor connected to an analog channel and stream it over the serial port to Max:
#define SAMPLING_INTERVAL 10 // 10ms (100Hz) for EDA, 4ms (250Hz) for POX
#define ANALOG_PIN 0 // Analog pin connected to sensor
#define ARDUINO_HEADER 255 // Arduino Header to be identified by Max Patch. EDA=255, POX=254
int analogData;
// ************************************************************************
void setup() {
Serial.begin(57600);
}
void loop() {
long periodCheck = millis()%SAMPLING_INTERVAL;
if (periodCheck == 0)
{
SampleAndSend();
// Serial.println(millis()); // report time elapsed between SampleAndSend
}
}
void SampleAndSend() {
analogData = analogRead(ANALOG_PIN); // Read current analog pin value
Serial.write(analogData >> 7); // shift high bits into output byte
Serial.write(analogData % 128); // mod by 128 for the small byte
Serial.write(ARDUINO_HEADER); // end of packet signifier
delay(SAMPLING_INTERVAL*0.5); // delay to prevent multiple SampleAndSend per interval
}
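Each sample therefore arrives as three bytes: the high bits of the reading (`analogData >> 7`), the low seven bits (`analogData % 128`), and a header byte (255 for EDA, 254 for POX) marking the end of the packet. The DEMO Max patches perform this decoding for you; purely to illustrate the packet format (this is not part of the framework), reassembling a sample in JavaScript would look like:
// Illustrative only: reassemble a 10-bit analog value from the two data bytes sent by the sketch above
function decodeSample(highByte, lowByte) {
    return (highByte << 7) + lowByte;   // inverse of Serial.write(analogData >> 7) and Serial.write(analogData % 128)
}

// Example: an analog reading of 723 is transmitted as highByte = 5 and lowByte = 83
console.log(decodeSample(5, 83));       // prints 723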
The use of the `translate` directive in any of your HTML code will automatically translate the enclosed text to the language that the participant has selected. For this to work correctly, several things must be in place. We'll discuss these in terms of adding new text and translations for which the target language is already available as an option in the header menu, and adding a new language option to the menu itself.
If your target language is already available in the header menu, you'll need to generate a translation file for all strings in the application for your target language. We use the angular-gettext tool for integrating internationalization into the application. angular-gettext provides an extraction tool for extracting all strings in the application that require translation, and a compilation tool for including translations in the application once the strings have been translated.
To extract strings for translation, simply run
node_modules\grunt-cli\bin\grunt nggettext_extract
This will produce the file `po\template.pot`. The `.pot` file contains entries for every string present in the application and can be used with a number of software tools or web services (we use Crowdin) to generate a `.po` file. A `.po` file contains the translations of all strings into a specific language.
The `.pot` file will contain all strings extracted from the HTML pages included in your application. If, however, you have included text that is not directly part of an HTML page, but is included in one through your modifications of the study specification document (e.g., checkbox labels, etc.), you'll need to make a couple of changes before running the command to extract strings. To include these strings, they must be present in the `public\modules\core\config\core.client.missing-keys.js` file. This is what the bottom of that file may look like:
gettext('119');
gettext('120');
gettext('121');
gettext('Begin Playback');
}
]);
Here, `'119'`, `'120'`, `'121'`, and `'Begin Playback'` are all strings that are not directly written into an HTML file in the application. If we also need to include the string `'Good Morning!'` for translation, we would simply add it to the bottom of the file as follows:
gettext('119');
gettext('120');
gettext('121');
gettext('Begin Playback');
gettext('Good Morning!');
}
]);
This change will now include `'Good Morning!'` for translation when `node_modules\grunt-cli\bin\grunt nggettext_extract` is run.
If you're wondering whether or not you've included all such strings in the `core.client.missing-keys.js` file, they are easy to find. By default, angular-gettext's debug mode is enabled. When this is the case, and a user has selected a particular language from the header menu, any string used in the application for which there is no translation will be prepended with `[MISSING]:`. This indicates that the string that follows `[MISSING]:` should be included in the `core.client.missing-keys.js` file. To make sure you've included all 'missing' strings for translation, simply select the target language and go through the study, looking for any `[MISSING]:` indicators.
Once you've generated a `.pot` file and used software or a web service to translate all the strings it contains, you'll be able to export a `.po` file. To compile these translations into your application, simply put the `.po` file into the `po/` directory, and run:
node_modules\grunt-cli\bin\grunt nggettext_compile
If and when this command completes successfully, the translations you provided in your `.po` file will be available to the application.
If your target language for translation is not available in the dropdown menu from the header, it is straightforward to edit the application to make it available for your participants. To do so, edit the `public\modules\core\views\header.client.view.html` file. The `<li>...</li>` sections represent the languages available in the dropdown menu. Here is the line that makes Taiwanese available as a selection:
<li><a ng-click="setLanguage('zh_TW')">中文</a></li>
Here, the string inside the parentheses should match the name of the `.po` file (you'll see in the demo app that a translation file is available for Taiwanese at `po/zh_TW.po`). The text inside of the `<a ...></a>` tag gives the textual label that the participant will see in the dropdown list itself (here, '中文'). To add a language option for Zulu, then, we would need to add the file `po/zu.po`, and the following line to `public\modules\core\views\header.client.view.html`:
<li><a ng-click="setLanguage('zu')">Zulu</a></li>
The default language for your application is set in `config/custom.js`. The demo app ships with English as the default language, as set by this line:
customConfiguration.defaultLanguage = 'en';
To choose a new default language, change `'en'` to the language tag you would like to be the default (starting) language. For instance, to use our Taiwanese translation as the default language, we would change this line to:
customConfiguration.defaultLanguage = 'zh_TW';
The default language only governs the language used when the first screen is initially loaded. After this point, participants are free to change the language using the dropdown menu in the header. Note that the language tag you specify (in the second example, `'zh_TW'`) must have a corresponding `.po` file in the `po/` directory (for this example, `po/zh_TW.po`), and that file must be compiled using the `nggettext_compile` grunt task (see the sections above).
Running the application with `node_modules/grunt-cli/bin/grunt` starts a Node.js web server. By default, this runs in development mode, which can be considerably slower than running in production mode. Running in production mode requires that you have built a production-ready version of your application. To do so, run the following command from the root directory of the repository:
NODE_ENV=development node_modules/grunt-cli/bin/grunt build
Once this successfully completes, use the following modification of the server startup command to run in production mode:
NODE_ENV=production node_modules/grunt-cli/bin/grunt
See the comments above about the database that will be used based on the `NODE_ENV` you specify.
AngularJS directives provide much of the special sauce in the Emotion in Motion framework; they are, for instance, how questionnaires are automatically built based on the information you provide in the study specification structure document. There's plenty more that one can accomplish with additional custom directives, and we encourage you to write your own (and submit them for the rest of us to use!). To do so, you'll need to get your feet wet with AngularJS, but feel free to look through what we've already written in `public/modules/core/directives/` to get started.
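As a starting point, a minimal custom directive might look like the sketch below. The file name, directive name, and the `'core'` module name are assumptions based on the directory layout described in these docs; compare against the directives that ship in `public/modules/core/directives/` for the exact registration pattern the framework uses.
'use strict';

// Hypothetical file: public/modules/core/directives/session-badge.client.directive.js
// A minimal AngularJS directive sketch; the 'core' module name is an assumption.
angular.module('core').directive('sessionBadge', [
    function() {
        return {
            restrict: 'E',                    // used as an element: <session-badge label="..."></session-badge>
            scope: { label: '@' },            // the badge text is passed in via the label attribute
            template: '<span class="label label-info">{{ label }}</span>'
        };
    }
]);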
The development of the Emotion in Motion framework is still in its early stages. If you would like to contribute to this ongoing work, please submit a pull request!
If you do make changes, please ensure that you've added the appropriate tests. In addition, creating links to these pre- and post-commit git hooks will ensure that all tests are still passing before committing any changes:
# From the root directory of the repository:
ln -s ../../pre-commit.sh .git/hooks/pre-commit
ln -s ../../post-commit.sh .git/hooks/post-commit