svm model in 'spencer_social_relations' #63

CodeToPoem opened this issue Dec 28, 2018 · 8 comments

CodeToPoem commented Dec 28, 2018

Hello,
I have a problem with the 'groups_probabilistic_small.model' in 'spencer_social_relations'. Could you tell me how the strengths of people's relations are defined in the samples? And could I see the original dataset that was used to train the SVM model?

tlind (Member) commented Mar 5, 2019

The relationship strengths are not labeled during the annotation process. Instead, we labeled binary group membership (which person tracks belong to a particular group) per frame.

An SVM is trained using coherent motion indicators (relative speed, angle, distance between a pair of tracks) as input features. In particular, we train a probabilistic SVM model (after Platt):
https://stackoverflow.com/questions/20520801/which-method-does-libsvm-use-when-predicting-results-as-probability-estimation

At inference time, the probability output of the SVM is used as relationship strength to build the social network graph.
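
For illustration, here is a rough sketch of how such a probabilistic model could be trained with the libsvm C API. This is not the original training code (see the disclaimer further below); the function name, data layout, and hyperparameter values are placeholders, and the only essential part is setting probability = 1 so that Platt-scaled probability outputs become available:

  // Hypothetical training sketch using the libsvm C API (NOT the original training code).
  // Each training sample is one pair of tracks, described by the three coherent motion
  // indicator features; the label is the annotated binary group membership.
  #include "svm.h"
  #include <array>
  #include <vector>

  void trainAndSaveGroupModel(const std::vector<std::array<double, 3> >& features,
                              const std::vector<double>& labels, // +1 = same group, -1 = not
                              const char* filename)              // e.g. "groups_probabilistic_small.model"
  {
      const int n = (int) features.size();
      std::vector<svm_node> nodes(n * 4);
      std::vector<svm_node*> rows(n);
      std::vector<double> y(labels);

      for (int i = 0; i < n; ++i) {
          svm_node* row = &nodes[i * 4];
          for (int j = 0; j < 3; ++j) {
              row[j].index = j + 1;          // libSVM indices are one-based
              row[j].value = features[i][j]; // distance, deltaspeed, deltaangle
          }
          row[3].index = -1;                 // terminator
          rows[i] = row;
      }

      svm_problem prob;
      prob.l = n;
      prob.x = rows.data();
      prob.y = y.data();

      svm_parameter param = {};
      param.svm_type = C_SVC;
      param.kernel_type = RBF;
      param.gamma = 1.0 / 3.0;   // placeholder hyperparameters; the values actually
      param.C = 1.0;             // used for the released model are not known
      param.cache_size = 100;
      param.eps = 1e-3;
      param.probability = 1;     // enable Platt-scaled probability outputs

      svm_model* model = svm_train(&prob, &param);
      svm_save_model(filename, model);
      svm_free_and_destroy_model(&model);
  }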

tlind (Member) commented Mar 5, 2019

Here is some example code showing how the coherent motion indicator features are computed and fed into the SVM for inference:

  // group model initialization
  m_svmGroupModel = svm_load_model("groups_probabilistic_small.model");

  m_svmNode = new svm_node[4];
  m_svmNode[0].index = 1; // libSVM indices are one-based
  m_svmNode[1].index = 2;
  m_svmNode[2].index = 3;
  m_svmNode[3].index = -1; // terminator


  // ...
  // in every cycle, for each pair of tracks (track1, track2):

  // track state vectors (x, y, vx, vy)
  const Eigen::VectorXd& x1 = track1->get_x();
  const Eigen::VectorXd& x2 = track2->get_x();

  // speed and heading of each track; suppress the heading for
  // (nearly) static tracks, where it is unreliable
  double v1 = hypot(x1(2), x1(3));
  double angle1 = atan2(x1(3), x1(2));
  if (v1 < SOCIALRELATIONS_MIN_VELOCITY /* 0.1 */) {
      angle1 = 0.0;
  }

  double v2 = hypot(x2(2), x2(3));
  double angle2 = atan2(x2(3), x2(2));
  if (v2 < SOCIALRELATIONS_MIN_VELOCITY /* 0.1 */) {
      angle2 = 0.0;
  }

  Eigen::VectorXd diff = x2 - x1;

  // compute the coherent motion indicator features
  double distance = hypot(diff(0), diff(1));
  double deltaspeed = fabs(v1 - v2);
  double deltaangle = fabs(diff_angle_unwrap(angle1, angle2));

  // set feature values
  m_svmNode[0].value = distance;
  m_svmNode[1].value = deltaspeed;
  m_svmNode[2].value = deltaangle;

  double probabilityEstimates[2];
  svm_predict_probability(m_svmGroupModel, m_svmNode, probabilityEstimates);
  positiveRelationProbability = probabilityEstimates[0]; // relationship "strength"
  negativeRelationProbability = probabilityEstimates[1];

CodeToPoem (Author) commented:

Thanks for the detailed explanation. I think I get it now.

Also, if possible, I would really like to get the annotated CARMEN logfile of the ground-truth tracks/groups that you mentioned in #58, because I want to compare my group detection method with the one used in this project.

PS:
I would also like to test this project on the "MoCap" dataset, which includes both laser and RGB-D data. Could you send the dataset to my e-mail (shimx1995@qq.com)?

tlind (Member) commented Mar 6, 2019

Yes, I uploaded the files overnight. Here is a 3-minute-long annotated CARMEN logfile, a subset of the full dataset from the FUSION'14 paper, as well as a piece of the parser that reads the "SocialRelations" lines:
http://srl.informatik.uni-freiburg.de/datasetsdir/kindercar_rathausgasse_group_annotations.zip

The following ROS package and its Python scripts might come in handy for playing back the CARMEN logfile. However, it is not what we originally used (the original code is C++ and not released), so the "SocialRelations" annotation type (line format) is not implemented there:
https://github.com/spencer-project/spencer_people_tracking/tree/master/tracking/people/srl_tracking_logfile_import

Disclaimer: This material is 5 years old, so I am not 100% certain that this is the logfile used to train the SVM. Unfortunately, I no longer have the command line that was used for training with libsvm. What I can say for certain is that this is a sequence from the FUSION'14 paper, and it seems to be the only sequence we labeled with groups. Note that in the paper we did not perform a quantitative evaluation of group tracking, so the only reason I can think of for annotating it was SVM training. There might be another social relations SVM training sequence from Matthias Luber's RSS'13 paper, which I currently cannot find. But I do remember for certain that I retrained the SVM.

If I did this again today, I would annotate a much larger dataset for training (the largest group ID I could find in this CARMEN logfile is 25), and as a first step clearly define the meaning of "group" (spatial, social, same goal, ...). Obviously, integrating further cues such as gaze direction and body orientation (for static persons) might also help.

I will send you the link to the MoCap dataset via email.

CodeToPoem (Author) commented:

Thanks again.

Lastly, I still have a question about the format of 'kindercar_2013-10-01-18-46-41.bag.synced.merged_all.log', so that I can parse the social relation information.

(The 'carmen_format.txt' does not describe the format itself; it appears to be code for parsing the social relation information, but it is incomplete.)

tlind (Member) commented Mar 7, 2019

The 'carmen_format.txt' is the parser for all lines that start with "SocialRelations". Each line describes a single frame, and contains the number of labeled groups, then for each group the group ID, the number of laser echoes (points) that belong to the group, the indices of these laser points, the number of tracks that belong to the group, and the corresponding track IDs.
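
For illustration, here is a rough parsing sketch for one such line, assuming plain whitespace-separated tokens in the field order described above (struct and function names are only illustrative; trailing CARMEN fields such as timestamps are simply left unread):

  // Hypothetical parser for one "SocialRelations" line, following the field order
  // described above.
  #include <sstream>
  #include <string>
  #include <vector>

  struct LabeledGroup {
      int groupId;
      std::vector<int> laserPointIndices; // indices of laser echoes in this group
      std::vector<int> trackIds;          // IDs of tracks in this group
  };

  std::vector<LabeledGroup> parseSocialRelationsLine(const std::string& line)
  {
      std::istringstream in(line);
      std::string tag;
      in >> tag; // "SocialRelations"

      int numGroups = 0;
      in >> numGroups;

      std::vector<LabeledGroup> groups(numGroups);
      for (LabeledGroup& g : groups) {
          int numPoints = 0, numTracks = 0;
          in >> g.groupId >> numPoints;
          g.laserPointIndices.resize(numPoints);
          for (int& idx : g.laserPointIndices) in >> idx;
          in >> numTracks;
          g.trackIds.resize(numTracks);
          for (int& id : g.trackIds) in >> id;
      }
      return groups;
  }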

Then there are the "LABELEDLASER" lines, for which you can have a look at the Python scripts that I referenced. They first contain, for each laser array index (of which there are 760, because it is a 190 deg laser scanner with 0.25 deg resolution), a label indicating whether that point is foreground or background. For the foreground points, there is then another array of the same length which associates the corresponding track ID with each laser point. With this information, you can compute where the individual tracks are located (their centroids), as sketched below. By tracking those centroids over time, you can derive the coherent motion indicator feature values (relative speed, etc.). It seems I do not have this intermediate information anywhere on disk.
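
As a rough sketch of that centroid computation (again only an illustration, not the original code; it assumes the per-point track IDs have already been parsed and that the laser ranges of the same frame are at hand):

  // Hypothetical centroid computation for one frame. Assumes the "LABELEDLASER"
  // information has been parsed into a per-point track ID array (negative ID =
  // background) and that the laser ranges of the same frame are available.
  // The start angle of the 190 deg / 0.25 deg scanner is a guess.
  #include <cmath>
  #include <map>
  #include <vector>

  struct Centroid { double x = 0.0, y = 0.0; int numPoints = 0; };

  std::map<int, Centroid> computeTrackCentroids(const std::vector<double>& ranges,
                                                const std::vector<int>& trackIdPerPoint)
  {
      std::map<int, Centroid> centroids;
      const double startAngle = -95.0 * M_PI / 180.0; // assumed scanner start angle
      const double angleStep  =   0.25 * M_PI / 180.0;

      for (size_t i = 0; i < ranges.size(); ++i) {
          const int trackId = trackIdPerPoint[i];
          if (trackId < 0) continue; // background point

          const double angle = startAngle + i * angleStep;
          Centroid& c = centroids[trackId];
          c.x += ranges[i] * std::cos(angle);
          c.y += ranges[i] * std::sin(angle);
          c.numPoints++;
      }
      for (auto& kv : centroids) {
          kv.second.x /= kv.second.numPoints;
          kv.second.y /= kv.second.numPoints;
      }
      return centroids;
  }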

CodeToPoem (Author) commented:

Thanks for the detailed explanation!

Roua22 commented Feb 17, 2021

Hello,
I would like to test my project on "MoCap" as well.
Could you please send the data-set to my email: roua@me.com?
