Tanvi141 commented Nov 2, 2021

Our paper was recently accepted at WI-IAT and will be published soon; here is the arXiv version: https://arxiv.org/abs/2110.15923

We leverage HypHC in our work for dimensionality reduction. Our dataset has 59260 data points, each of dimension 600. The current version of the code gives out-of-memory errors in the pre-training stage itself. By moving some lines around and rewriting sections of the code, we were able to preserve the code's functionality and train the model on our dataset.
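To give a concrete picture of the kind of rewrite involved, here is a minimal sketch of replacing a one-shot pairwise-similarity matrix product with a chunked loop. This is not the actual code from this PR; `similarity_chunks` and `chunk_size` are illustrative names, and the real changes touch the HypHC pre-training code directly:

```python
import numpy as np

def similarity_chunks(x, chunk_size=1024):
    """Yield (row_slice, block) pairs of the cosine-similarity matrix of x.

    Streaming x @ x.T in fixed-size blocks keeps peak extra memory at
    chunk_size * n floats instead of n * n, at the cost of a Python loop.
    """
    x = np.asarray(x, dtype=np.float32)
    # Unit-normalize rows so each dot product below is a cosine similarity.
    x = x / np.clip(np.linalg.norm(x, axis=1, keepdims=True), 1e-12, None)
    n = x.shape[0]
    for start in range(0, n, chunk_size):  # loop replaces one big matmul
        end = min(start + chunk_size, n)
        yield slice(start, end), x[start:end] @ x.T
```

Each block can be consumed (e.g. for similarity-based sampling) and discarded before the next one is computed, which is what keeps a 59260 x 59260 product from exhausting memory.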

I have included two additional arguments in the config file:

  • "large_dataset": should be set to 1 if the dataset is large, otherwise 0. In the case of large datasets, some matrix multiplications are replaced with loops, which make the code a bit slower (hence the argument is provided)
  • "data_points": the number of data points in the dataset, would be the number of lines in the .data file in the data directory
