
System Architecture Guide


Introduction

This guide provides an overview of the DREAM system architecture, as described in Deliverable D3.1.

The DREAM software system comprises three main sub-systems, corresponding to work-packages WP4 (Sensing and Interpretation), WP5 (Child Behaviour Analysis) and WP6 (Robot Behaviour). Initially, these three sub-systems are implemented by three placeholder components, as follows.

  • sensoryInterpretation
  • childBehaviourClassification
  • cognitiveControl

The functionality of each sub-system will be developed incrementally as the project progresses and as new components that implement part of the functionality encapsulated in the placeholder components are developed and integrated into the system.

In addition, a fourth placeholder component, systemArchitectureGUI, is provided. This component is a Graphical User Interface (GUI) that facilitates external control of the robot by a user (either a therapist or a software developer) and provides the user with an easy-to-understand view of the current state of the robot's cognitive control. It also provides a graphic rendering of the child’s behavioural state, degree of engagement, and degree of performance in the current intervention. All of the cognitiveControl input and output ports can be connected to this GUI, as can the output ports of the childBehaviourClassification component.

All four placeholder components have been implemented and are available in the DREAM SVN software repository, together with an example application. On-line documentation is available in the Component Reference Manual.

During integration, white-box testing will be performed at the system level by removing the driver and stub functions that simulate the output and input of data in the top-level system architecture, i.e. in one of the three components above, so that the source and sink functionality is provided instead by the component being integrated.

Placeholder Component Functionality

The functionality of sensoryInterpretation is specified completely by the 25 perception primitives defined in Section 2 of Deliverable D1.3 (Child Behaviour Specification).

The functionality of cognitiveControl is specified partially by the seven action primitives defined in Section 2 of Deliverable D1.2 (Robot Behaviour Specification). It is only a partial specification because the basis for invoking each of these action primitives has not yet been defined (whereas, in the case of sensoryInterpretation, all of the primitives are continually invoked to monitor the status of the robot’s environment).

The functionality of childBehaviourClassification is encapsulated by three primitives, as follows (the primitives are defined below).

  • getChildBehaviour(state)
  • getChildMotivation(degree_of_engagement)
  • getChildPerformance(degree_of_performance)

Primitives and Ports

The parameters of every primitive in the three sub-systems are exposed by two dedicated ports, one for input and one for output, with the arguments encapsulated in a YARP vector or bottle, whichever is more appropriate.

The general naming convention for the two ports is /<primitive name>:i for input and /<primitive name>:o for output.
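
As a minimal sketch of this convention, the following C++/YARP fragment shows how a component might open the dedicated port pair for one primitive. The module scaffolding here is illustrative only; the port names and data types are taken from the tables below.

    // Sketch: opening the input/output port pair for the getEyeGaze primitive,
    // following the /<component>/<primitive name>:i|:o naming convention.
    #include <yarp/os/Network.h>
    #include <yarp/os/BufferedPort.h>
    #include <yarp/sig/Vector.h>

    using namespace yarp::os;
    using namespace yarp::sig;

    int main()
    {
        Network yarp;                            // initialize the YARP network

        BufferedPort<VectorOf<double>> inPort;   // receives (eye, x, y, z)
        BufferedPort<VectorOf<double>> outPort;  // publishes (eye, x, y, z)

        inPort.open("/sensoryInterpretation/getEyeGaze:i");
        outPort.open("/sensoryInterpretation/getEyeGaze:o");

        // ... component logic ...

        inPort.close();
        outPort.close();
        return 0;
    }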

The sensoryInterpretation Component

The following are the primitives and associated input and output ports in the sensoryInterpretation component.

Note that not all primitives have input parameters. The components for those that do are stateful: once the associated argument values are set, they remain persistently in that state until reset by another input.
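
By way of illustration, the following hypothetical polling loop (not taken from the placeholder source code) shows one way such stateful behaviour can be realized: the most recently received argument values are retained and reused until a new input arrives.

    // Sketch: persistent input state. The :i port is polled non-blockingly;
    // when no new data has arrived, the previously received argument values
    // remain in effect.
    #include <yarp/os/BufferedPort.h>
    #include <yarp/sig/Vector.h>

    using namespace yarp::os;
    using namespace yarp::sig;

    void serviceLoop(BufferedPort<VectorOf<double>>& inPort)
    {
        VectorOf<double> current;                // persistent argument state
        while (true) {
            VectorOf<double>* latest = inPort.read(false);   // non-blocking
            if (latest != nullptr) {
                current = *latest;               // a new input resets the state
            }
            // ... use 'current' to parameterize the primitive ...
        }
    }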


checkMutualGaze()
   /sensoryInterpretation/checkMutualGaze:o
   BufferedPort<VectorOf<int>>

getArmAngle(left_azimuth, elevation, right_azimuth, elevation)
   /sensoryInterpretation/getArmAngle:o
   BufferedPort<VectorOf<double>>

getBody(body_x, y, z)
   /sensoryInterpretation/getBody:o
   BufferedPort<VectorOf<double>>

getBodyPose(<joint_i>)
   /sensoryInterpretation/getBodyPose:o
   BufferedPort<VectorOf<double>>

getEyeGaze(eye, x, y, z)
   /sensoryInterpretation/getEyeGaze:i
   /sensoryInterpretation/getEyeGaze:o
   BufferedPort<VectorOf<double>>

getEyes(eyeL_x, y, z, eyeR_x, y, z)
   /sensoryInterpretation/getEyes:o
   BufferedPort<VectorOf<double>>

getFaces(<x, y, z>)
   /sensoryInterpretation/getFaces:o
   BufferedPort<VectorOf<double>>

getGripLocation(object_x, y, z, grip_x, y, z)
   /sensoryInterpretation/getGripLocation:i
   /sensoryInterpretation/getGripLocation:o
   BufferedPort<VectorOf<double>>

getHands(<x, y, z>)
   /sensoryInterpretation/getHands:o
   BufferedPort<VectorOf<double>>

getHead(head_x, y, z)
   /sensoryInterpretation/getHead:o
   BufferedPort<VectorOf<double>>

getHeadGaze(<plane_x, y, z>, x, y, z)
   /sensoryInterpretation/getHeadGaze:i
   /sensoryInterpretation/getHeadGaze:o
   BufferedPort<VectorOf<double>>

getHeadGaze(x, y, z)
   /sensoryInterpretation/getHeadGaze:o
   BufferedPort<VectorOf<double>>

getObjects(<x, y, z>)
   /sensoryInterpretation/getObjects:o
   BufferedPort<VectorOf<double>>

getObjects(centre_x, y, z, radius, <x, y, z>)
   /sensoryInterpretation/getObjects:i
   /sensoryInterpretation/getObjects:o
   BufferedPort<VectorOf<double>>

getObjectTableDistance(object_x, y, z, vertical_distance)
   /sensoryInterpretation/getObjectTableDistance:i
   /sensoryInterpretation/getObjectTableDistance:o
   BufferedPort<VectorOf<double>>

getSoundDirection(threshold, azimuth, elevation)
   /sensoryInterpretation/getSoundDirection:i
   /sensoryInterpretation/getSoundDirection:o
   BufferedPort<VectorOf<double>>

identifyFace(x, y, z, face_id)
   /sensoryInterpretation/identifyFace:i
   /sensoryInterpretation/identifyFace:o
   BufferedPort<VectorOf<double>>
 
identifyFaceExpression(x, y, z, expression_id)
   /sensoryInterpretation/identifyFaceExpression:i
   /sensoryInterpretation/identifyFaceExpression:o
   BufferedPort<VectorOf<double>>

identifyObject(x, y, z, object_id)
   /sensoryInterpretation/identifyObject:i
   /sensoryInterpretation/identifyObject:o
   BufferedPort<VectorOf<double>>

identifyTrajectory(<x, y, z, t>, trajectory_descriptor)
   /sensoryInterpretation/identifyTrajectory:i
   /sensoryInterpretation/identifyTrajectory:o
   BufferedPort<VectorOf<double>>

identifyVoice(voice_descriptor)
   /sensoryInterpretation/identifyVoice:o
   BufferedPort<VectorOf<int>>

recognizeSpeech(text)
   /sensoryInterpretation/recognizeSpeech:o
   BufferedPort<Bottle>

trackFace(seed_x, y, z, time_interval, projected_x, y, z)
   /sensoryInterpretation/trackFace:i
   /sensoryInterpretation/trackFace:o
   BufferedPort<VectorOf<double>>

trackHand(seed_x, y, z, time_interval, projected_x, y, z)
   /sensoryInterpretation/trackHand:i
   /sensoryInterpretation/trackHand:o
   BufferedPort<VectorOf<double>>

trackObject(objectDescriptor, seed_x, y, z, time_interval, projected_x, y, z)
   /sensoryInterpretation/trackObject:i
   /sensoryInterpretation/trackObject:o
   BufferedPort<VectorOf<double>>
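
As a usage sketch, a client could set the threshold of getSoundDirection and read back the resulting direction estimate as follows. The /client port names are hypothetical, and the exact element ordering of the vectors should be checked against Deliverable D1.3.

    // Sketch: client-side round trip through getSoundDirection.
    #include <yarp/os/Network.h>
    #include <yarp/os/BufferedPort.h>
    #include <yarp/sig/Vector.h>
    #include <cstdio>

    using namespace yarp::os;
    using namespace yarp::sig;

    int main()
    {
        Network yarp;

        BufferedPort<VectorOf<double>> out, in;
        out.open("/client/getSoundDirection:o");         // hypothetical names
        in.open("/client/getSoundDirection:i");

        Network::connect("/client/getSoundDirection:o",
                         "/sensoryInterpretation/getSoundDirection:i");
        Network::connect("/sensoryInterpretation/getSoundDirection:o",
                         "/client/getSoundDirection:i");

        VectorOf<double>& cmd = out.prepare();
        cmd.clear();
        cmd.push_back(0.5);                              // threshold (assumed)
        out.write();

        VectorOf<double>* reply = in.read();             // blocking read
        if (reply != nullptr && reply->size() >= 2) {
            // assumed element order: azimuth, elevation
            printf("azimuth %.2f, elevation %.2f\n", (*reply)[0], (*reply)[1]);
        }
        return 0;
    }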

The childBehaviourClassification Component

The following are the primitives and associated output ports in the childBehaviourClassification component.


getChildBehaviour(<state, probability>)
   /childBehaviourClassification/getChildBehaviour:o
   BufferedPort<VectorOf<double>>

getChildMotivation(degree_of_engagement, confidence)
   /childBehaviourClassification/getChildMotivation:o
   BufferedPort<VectorOf<double>>

getChildPerformance(degree_of_performance, confidence)
   /childBehaviourClassification/getChildPerformance:o
   BufferedPort<VectorOf<double>>
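
For example, a client could read the engagement estimate as follows. This is a sketch only: the /client port name is hypothetical, and the two-element layout is inferred from the signature above.

    // Sketch: reading the engagement estimate from getChildMotivation.
    #include <yarp/os/Network.h>
    #include <yarp/os/BufferedPort.h>
    #include <yarp/sig/Vector.h>
    #include <cstdio>

    using namespace yarp::os;
    using namespace yarp::sig;

    int main()
    {
        Network yarp;

        BufferedPort<VectorOf<double>> in;
        in.open("/client/getChildMotivation:i");         // hypothetical name

        Network::connect("/childBehaviourClassification/getChildMotivation:o",
                         "/client/getChildMotivation:i");

        VectorOf<double>* v = in.read();                 // blocking read
        if (v != nullptr && v->size() >= 2) {
            printf("engagement %.2f (confidence %.2f)\n", (*v)[0], (*v)[1]);
        }
        return 0;
    }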

The cognitiveControl Component

The following are the primitives and associated input and output ports in the cognitiveControl component.


grip()
   /cognitiveControl/grip:i
   BufferedPort<VectorOf<int>>

moveHand(handDescriptor, x, y, z, roll)
   /cognitiveControl/moveHand:i
   BufferedPort<VectorOf<double>>

moveHead(x, y, z)
   /cognitiveControl/moveHead:i
   BufferedPort<VectorOf<double>>

moveSequence(sequenceDescriptor)
   /cognitiveControl/moveSequence:i
   BufferedPort<VectorOf<int>>

moveTorso(x, y, z)
   /cognitiveControl/moveTorso:i
   BufferedPort<VectorOf<double>>

release()
   /cognitiveControl/release:i
   BufferedPort<VectorOf<int>>

say(text, tone)
   /cognitiveControl/say:i
   BufferedPort<Bottle>

enableRobot()
   /cognitiveControl/enableRobot:i
   BufferedPort<VectorOf<int>>

disableRobot()
   /cognitiveControl/disableRobot:i
   BufferedPort<VectorOf<int>>

getInterventionStatus(interventionDescriptor, stateDescriptor, cognitiveModeDescriptor)
   /cognitiveControl/getInterventionStatus:o
   BufferedPort<VectorOf<int>>
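
As an illustrative sketch, a client could command speech through the say() primitive as follows; the /client port name and the string encoding of the tone argument are assumptions, not part of the specification.

    // Sketch: sending a say(text, tone) command to cognitiveControl.
    #include <yarp/os/Network.h>
    #include <yarp/os/BufferedPort.h>
    #include <yarp/os/Bottle.h>

    using namespace yarp::os;

    int main()
    {
        Network yarp;

        BufferedPort<Bottle> out;
        out.open("/client/say:o");                       // hypothetical name

        Network::connect("/client/say:o", "/cognitiveControl/say:i");

        Bottle& b = out.prepare();
        b.clear();
        b.addString("Well done!");                       // text
        b.addString("happy");                            // tone (assumed encoding)
        out.write();
        return 0;
    }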

Inter-connectivity between the three components

Any component that needs to access the information exposed on the ports associated with a primitive must have equivalent ports of its own so that the ports can be connected, but with the input/output designation reversed. Thus, for example, one would connect

/sensoryInterpretation/identifyObject:i to /cognitiveControl/identifyObject:o

and

/sensoryInterpretation/identifyObject:o to /cognitiveControl/identifyObject:i.

This would allow cognitiveControl to send the x, y, and z location of the object to be identified to sensoryInterpretation and then to receive the identification number of that object from sensoryInterpretation (see the definition of identifyObject() in Deliverable D1.3).
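
The two connections above can be made programmatically with yarp::os::Network, as in this minimal sketch:

    // Sketch: making the two identifyObject connections described above.
    #include <yarp/os/Network.h>

    using namespace yarp::os;

    int main()
    {
        Network yarp;

        // cognitiveControl sends the (x, y, z) query to sensoryInterpretation ...
        Network::connect("/cognitiveControl/identifyObject:o",
                         "/sensoryInterpretation/identifyObject:i");

        // ... and receives the resulting object_id back.
        Network::connect("/sensoryInterpretation/identifyObject:o",
                         "/cognitiveControl/identifyObject:i");
        return 0;
    }

Equivalently, the same connections can be made from the command line with the yarp connect utility.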

Regarding the connectivity between the three components, the following principles apply.

  • each sensoryInterpretation output port is connected to the counterpart input port in the cognitiveControl and childBehaviourClassification components;
  • each sensoryInterpretation input port is connected to the counterpart output port in the cognitiveControl component (but not the childBehaviourClassification component);
  • each childBehaviourClassification output port is connected to the counterpart input port in the cognitiveControl component;
  • each cognitiveControl input port is, typically, not connected to any counterpart output port in either the sensoryInterpretation or childBehaviourClassification components, since these ports will typically be used only internally within the components that will constitute cognitiveControl as it is developed.

Child Behaviour Analysis Primitives

getChildBehaviour()

This primitive classifies the child’s behaviour on the basis of current percepts. It produces a set of number pairs where the first element of each pair represents a child state and the second element the likelihood that the child is in that state. Thus, the primitive effectively produces a discrete probability distribution across the space of child states.
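
As a sketch, such a distribution might be unpacked as follows, assuming the pairs are packed flat as (state_1, p_1, state_2, p_2, ...); the actual layout should be checked against the Component Reference Manual.

    // Sketch: unpacking the (state, probability) pairs produced by
    // getChildBehaviour, assuming a flat packing (s1, p1, s2, p2, ...).
    #include <yarp/sig/Vector.h>
    #include <cstdio>

    void printDistribution(const yarp::sig::VectorOf<double>& v)
    {
        for (size_t i = 0; i + 1 < v.size(); i += 2) {
            int    state       = static_cast<int>(v[i]);  // child state label
            double probability = v[i + 1];                // likelihood of state
            printf("state %d: p = %.3f\n", state, probability);
        }
    }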

getChildMotivation()

This primitive determines the degree of motivation and engagement on the basis of the temporal sequence of child behaviour states, quantifying the extent to which the child is motivated to participate in the tasks with the robot and detecting, in particular, when the child's attention is lost. It produces two numbers, the first representing an estimate of the degree of engagement and the second representing an indication of confidence in that estimate.

getChildPerformance()

This primitive determines the degree of performance of the child on the basis of a temporal sequence of child behaviour states, quantifying the child's performance in the therapeutic sessions. It produces two numbers, the first representing an estimate of the degree of performance and the second representing an indication of confidence in that estimate.

