Metrics measuring unsafety #152

Open
Tracked by #241
troublescooter opened this issue Nov 28, 2020 · 1 comment
Labels
doc · enhancement (New feature or request) · help wanted (Extra attention is needed) · question (Further information is requested)

Comments

troublescooter commented Nov 28, 2020

Given that the correct boundary for auditing is the visibility boundary, I'm currently not sure how to read the output of cargo-geiger on how many expressions are contained in unsafe blocks. I can understand how knowing the number of unsafe functions, traits, impls, and methods gives a rough estimate of how unsafe the library is, but to my understanding the number of expressions in unsafe blocks can vary so wildly between implementations with an equivalent amount of code to audit that it may not be useful in practice. Perhaps a more robust substitute could be found, such as the size of functions containing an unsafe block?
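To make that variance concrete, here is a hypothetical sketch (not taken from any real crate) of two functionally equivalent functions that a per-expression count scores very differently:

```rust
/// Variant A: one big unsafe block wraps the whole loop, so every
/// expression in the loop body is counted as "unsafe".
pub fn zero_wide(buf: &mut [u8]) {
    unsafe {
        for i in 0..buf.len() {
            *buf.get_unchecked_mut(i) = 0;
        }
    }
}

/// Variant B: the unsafe block is shrunk to the single write, so far
/// fewer expressions are counted, yet the invariant to audit (the
/// index stays in bounds) is exactly the same.
pub fn zero_narrow(buf: &mut [u8]) {
    for i in 0..buf.len() {
        // SAFETY: `i < buf.len()` by the loop bound.
        unsafe { *buf.get_unchecked_mut(i) = 0 };
    }
}
```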

Naturally any metric will have its flaws, but I haven't seen an issue raised or any documentation on what the output of cargo-geiger means, or on what metrics would more accurately measure 'unsafe complexity', and I think this is a shortcoming that could be addressed. In practice, another metric that seems equally important to push implementations on is whether there are comments explaining why the use of unsafe is warranted/needed and what invariants are upheld. That would indicate that some amount of care went into the use of unsafe.
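As a sketch of what such comments look like, the `// SAFETY:` convention used in the Rust standard library (and checkable with Clippy's `undocumented_unsafe_blocks` lint) pairs every unsafe block with a justification of its invariants; the function below is a hypothetical example of the style:

```rust
/// Returns the first byte of `bytes`, if any.
pub fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None;
    }
    // SAFETY: `bytes` is non-empty (checked above), so index 0 is in
    // bounds and `get_unchecked` cannot read past the end of the slice.
    Some(unsafe { *bytes.get_unchecked(0) })
}
```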

anderejd added the enhancement (New feature or request) and help wanted (Extra attention is needed) labels on Nov 29, 2020
anderejd (Contributor) commented Nov 29, 2020

Improving the documentation seems like a good start.

Given that the correct boundary for auditing is the visibility boundary, I'm currently not sure how to read the output of cargo-geiger on how many expressions are contained in unsafe blocks.

I'm not sure about the first part about the visibility boundary: a library that doesn't present any public unsafe functions can still use unsafe internally and cause memory corruption, UB, etc. for the entire process. The number of expressions inside unsafe blocks is listed in the default output. Maybe I'm misunderstanding your question completely?
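A minimal sketch of that point (hypothetical, deliberately unsound code, not from any real crate):

```rust
/// The public API is entirely safe: no `unsafe` appears in the
/// signature, so the visibility boundary reveals nothing.
pub fn first(v: &[u32]) -> u32 {
    // UNSOUND: `as_ptr` on an empty slice returns a dangling (but
    // aligned) pointer, and dereferencing it is undefined behavior
    // that can corrupt memory anywhere in the process.
    unsafe { *v.as_ptr() }
}
```

Calling `first(&[])` compiles with no unsafe in sight at the call site, yet hits undefined behavior at runtime.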

I can understand how knowing the number of unsafe functions, traits, impls, and methods gives a rough estimate of how unsafe the library is, but to my understanding the number of expressions in unsafe blocks can vary so wildly between implementations with an equivalent amount of code to audit that it may not be useful in practice. Perhaps a more robust substitute could be found, such as the size of functions containing an unsafe block?

Regarding what should be measured, I'm thinking that in general, the more metrics the better.

How to interpret and weigh the results, on the other hand, depends on the context: which library it is, which environment the program will run in, and personal preference, to name a few factors.

anderejd added the question (Further information is requested) label on Nov 29, 2020
pinkforest added the doc label on Jan 6, 2022