
Implement proper block padding for 32 bit systems #1

Merged

merged 2 commits into master from adt/multiple-of-16-padding on Aug 29, 2018

Conversation

Vagabond
Contributor

On ARM (not ARM64), where sizeof(long) is 4 (vs. 8 on 64-bit systems),
GF-Complete's requirement that data blocks be aligned on a multiple of
16 bytes can be violated. This patch reworks how we allocate and pad the
data shards so that:

  • All shards are allocated in contiguous memory
  • The spacing between shards is always a multiple of 16 bytes

I've run several million quickcheck tests on my amd64 machine and have not found a failure. I have not run EQC on an ARM device, however.


@lthiery lthiery left a comment


like

c_src/erasure.c (outdated)

```diff
@@ -126,19 +134,26 @@ decode(ErlNifEnv * env, int argc, const ERL_NIF_TERM argv[])
         return enif_make_tuple2(env, enif_make_atom(env, "error"), enif_make_atom(env, "insufficent_shards"));
     }

-    char **shards = malloc(sizeof(char*)*(k+m));
+    char *shards = NULL;
+    char **data_ptrs = calloc(k, sizeof(char*));
```

why not allocate these on the stack?

@Vagabond Vagabond merged commit e14052d into master Aug 29, 2018
@Vagabond Vagabond deleted the adt/multiple-of-16-padding branch August 29, 2018 20:49