I ran into this issue several times while developing bazel.git. The error message was so cryptic that I could not tell why the upload failed. After weeks of hitting the same error on the same target, I started capturing the gRPC log and got this:
This is weird because (a) I could not find the file anywhere in my source tree, and (b) the actual uploaded bytes (bytesSent) are roughly 3 times the size of the blob in the digest (50 MB vs. 18 MB), even though bytesSent is supposed to be compressed data too!
So I did a bit of digging, found the Action proto in the remote cache, and walked the input root tree digest to find out what this file was supposed to be:
So I had `--execution_log_compact_file=%workspace%/compact_exec_log.binpb.zst` set locally to send the log to our BES server. That got caught in this glob of `//:srcs`, so I fixed it by adding a new exclusion to the glob.
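For illustration, a sketch of what that exclusion might look like, assuming the root `//:srcs` filegroup globs the workspace directory (the real target in bazel.git is more involved; names here are only a sketch):

```starlark
# Root BUILD file (sketch): keep the in-progress compact execution log out of
# the //:srcs filegroup so it is never uploaded as an action input while Bazel
# is still appending to it.
filegroup(
    name = "srcs",
    srcs = glob(
        ["*"],
        exclude = [
            "compact_exec_log.binpb.zst",  # written live by --execution_log_compact_file
            "bazel-*",                     # convenience symlinks
        ],
    ),
)
```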
Anyway, I think there are several asks coming out of this experience:
1. The error message in case of a gRPC failure should be more verbose. Users should not have to add `remote_grpc_log` to see why the `ByteStream.Write` call failed.
2. There should be a way to detect files that are still being actively written, so that we avoid uploading them and fail early. In this specific case, I think `guard_against_concurrent_changes` is already enabled in lite mode and does not catch this. I suspect we could add a new check that fails when more data is uploaded than was specified in the digest(?)
3. We should have an 'auto' mode that writes `execution_log_compact_file` to the output_base, similar to the Bazel profile. That way, users could still capture the compact execution log and send it to BES without having to manage it inside their workspace or a TMP dir somewhere (a workaround sketch follows this list).
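Until something like that 'auto' mode exists, a minimal workaround sketch is to keep these files outside the workspace entirely so no source glob can ever pick them up. Paths below are illustrative, and the gRPC log flag spelling varies by Bazel version:

```
# .bazelrc sketch (hypothetical paths): write the compact execution log outside
# the workspace so it is never globbed while Bazel is still writing it.
build --execution_log_compact_file=/tmp/compact_exec_log.binpb.zst

# Capture the remote gRPC log when debugging a failed ByteStream.Write
# (older Bazel versions spell this flag --experimental_remote_grpc_log).
build --remote_grpc_log=/tmp/grpc_log.binpb
```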