
Revert "build: add asan check in Github action" #32324

Closed
wants to merge 1 commit into from

Conversation

sam-github (Contributor)

This reverts commit 3ec4b21.

It always fails.

See: #32257

Checklist
  • make -j4 test (UNIX), or vcbuild test (Windows) passes
  • tests and/or benchmarks are included
  • documentation is changed or added
  • commit message follows commit guidelines

@nodejs-github-bot nodejs-github-bot added the meta Issues and PRs related to the general management of the project. label Mar 17, 2020
@sam-github (Contributor, Author)

I suggest that we not add tests that always fail, for example #32310 (a recent PR, picked at random for being an npm dep update, so completely unrelated to ASAN and it should not fail).

@richardlau (Member)

cc @gengjiawen

@sam-github (Contributor, Author)

Also, see 9dac823

@mmarchini (Contributor)

For context, it was not always failing, it started failing recently. But I'm fine reverting it for now

@gengjiawen (Member)

I think the decent approach is to add continue-on-error, like we did for V8 8.1.

@sam-github (Contributor, Author)

Good to know.

The test should be done in the CI docker build infrastructure if it's a feature we care to ensure always passes and have some reason to think could stop working because of a PR. I don't think we should test things that don't have a reasonable chance of regression; human resources to set up builds, as well as CPU/memory resources, are not infinite.

If we don't care enough to block PRs when it fails, I don't think it should be tested in github actions. Showing big red Xes to every single collaborator on every PR but not expecting them to pay attention or do something about the failure is off-putting and confusing to contributors.

While the other GitHub Actions test results don't block PRs in a technical sense (git node land ignores them), they are duplicates of tests that will be run again later in CI, and those will block the PRs if they fail.

@gengjiawen (Member)

The build failed simply because of a memory issue. The docker env is 19.10; Ubuntu 18 is a different issue.

@gengjiawen (Member)

PRs getting red Xs is annoying and misleading, so adding continue-on-error for now is a workaround. We'll see what happens when we upgrade to V8 8.2.

@mmarchini (Contributor)

To elaborate on that: the ASAN build is also a Debug build, and some changes will cause the linking phase to use more memory than is available on Actions (7 GB). We could move it to Jenkins on a worker with more memory (probably the same one we already use for Debug builds). OTOH I'm not entirely convinced that only building with ASAN is enough; we should be running the test suite and taking it into account as well (I think today we run it but allow it to fail). So as it is right now, we're only making sure it builds, which might be OK, but it looks like a missed opportunity.

On a side note: do we need the ASAN build to be a Debug build? Or could it be a Release build?

@mmarchini (Contributor)

We'll see what happens when we upgrade to V8 8.2.

The error is happening without V8 upgrade, so there's no guarantee it will be fixed on 8.2. And as I mentioned before, there are many changes that can make the ASAN CI fail due to memory limitations.

@sam-github (Contributor, Author)

sam-github commented Mar 17, 2020

adding continue-on-error for now is a workaround

Why run a test in github actions against every single PR (often multiple times per PR as the branch is repushed) just to then ignore its results?

Tests should be run because they are relevant to all PRs, in the sense that the result of the test is actionable by the PR author.

If someone wants to periodically check ASAN to make sure it still works (again, is there any change we could make in Node.js that would cause it to stop working?), they can just run that check locally on occasion. Or someone could add a new CI job in Jenkins on a weekly cron that did nothing but configure and run tests with ASAN on a single platform; those interested could watch it, or perhaps even add their emails to the notify-on-failure list.

This doesn't seem to me to be a use-case that fits well with github actions.

@mmarchini (Contributor)

I think the ASAN build is important and should be included in every PR (just like V8 does). For those not familiar, ASAN stands for AddressSanitizer; it helps identify common memory and pointer related issues. But as it is right now it doesn't work well for its purpose. Which brings me back to this question:

"do we need the ASAN build to be a Debug build? Or could it be a Release build?"

If we can run ASAN with Release builds, it'll take less time to run (today it takes ~3h with tests), and it's less likely there'll be linking issues.

Also, based on our current policy, there should be an ASAN build on Jenkins if we want to block PRs when ASAN build or test fails.

@mmarchini (Contributor) left a comment

I agree there's no point in using resources from GH Actions (we can only run 20 jobs in parallel in this org) if we're not enforcing this on every PR. I still think it's worth having ASAN checks as part of our test suite, but we need to explore other avenues to do so.

@gengjiawen (Member)

Why run a test in github actions against every single PR (often multiple times per PR as the branch is repushed) just to then ignore its results?

Why not. ASAN checks should be run on every single PR. It's not a bonus part. It's an essential part.

On a side note: do we need the ASAN build to be a Debug build? Or could it be a Release build?

The debug build shows a clear stack trace, so developers can find exactly where in the code the problem is caused.

OTOH I'm not entirely convinced that only building ASAN is enough, we should be running the test suite and take it into account as well (I think today we run it but allow it to fail).

The current ASAN check already runs the tests, and it already collects failed results as well.

If we don't care enough to block PRs when it fails, I don't think it should be tested in github actions.

We care. We just haven't cleared up the current issues yet. In the long run we should remove all failures:

ASAN_OPTIONS: halt_on_error=0
continue-on-error: true
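The two knobs quoted above can be sketched in a workflow file like this (a hypothetical fragment for illustration; the workflow, job, and step names are assumptions, not the repository's actual `.github/workflows` file):

```yaml
# Hypothetical sketch of a non-blocking ASAN job; names are illustrative.
name: asan
on: [push, pull_request]

jobs:
  build-asan:
    runs-on: ubuntu-latest
    # Mark the whole job as non-blocking: a failure is still reported,
    # but it does not fail the overall PR check.
    continue-on-error: true
    steps:
      - uses: actions/checkout@v2
      - name: Build and test with ASAN
        env:
          # Keep running after the first ASAN error instead of aborting.
          ASAN_OPTIONS: halt_on_error=0
        run: |
          ./configure --enable-asan
          make -j2
          make test
```

The job-level `continue-on-error: true` is what keeps the red X off the PR, while `ASAN_OPTIONS: halt_on_error=0` lets the test run collect all violations rather than stopping at the first one.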

@gengjiawen (Member) left a comment

Strongly against removing this from CI.

@mmarchini (Contributor)

For some context, before introducing ASAN in their CI, V8 cleared the issues to make sure the codebase was compatible with ASAN. I think this is a better approach than having a perma-red build (or worse, a green build which is actually failing).

The debug build will show the clear stacktrace, so developer can find where exactly the code cause the problem.

I just tried an ASAN Release build, and I got stack traces:

==3131==ERROR: AddressSanitizer: new-delete-type-mismatch on 0x604000029e90 in thread T4:
  object passed to delete has wrong type:
  size of the allocated type:   48 bytes;
  size of the deallocated type: 1 bytes.
    #0 0x7fbd0043ff45 in operator delete(void*, unsigned long) (/lib/x86_64-linux-gnu/libasan.so.5+0x110f45)
    #1 0x55e8fa5851a4 in v8::internal::ArrayBufferCollector::PerformFreeAllocations() (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x22a31a4)
    #2 0x55e8fa58549f in std::_Function_handler<void (), v8::internal::ArrayBufferCollector::FreeAllocations()::{lambda()#1}>::_M_invoke(std::_Any_data const&) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x22a349f)
    #3 0x55e8fa2a58bc in non-virtual thunk to v8::internal::CancelableTask::Run() (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x1fc38bc)
    #4 0x55e8f958620e in node::(anonymous namespace)::PlatformWorkerThread(void*) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x12a420e)
    #5 0x7fbcfffb8668 in start_thread /build/glibc-t7JzpG/glibc-2.30/nptl/pthread_create.c:479
    #6 0x7fbcffede322 in clone (/lib/x86_64-linux-gnu/libc.so.6+0x122322)

0x604000029e90 is located 0 bytes inside of 48-byte region [0x604000029e90,0x604000029ec0)
allocated by thread T0 here:
    #0 0x7fbd0043e867 in operator new(unsigned long) (/lib/x86_64-linux-gnu/libasan.so.5+0x10f867)
    #1 0x55e8fad37921 in v8::internal::BackingStore::WrapAllocation(void*, unsigned long, void (*)(void*, unsigned long, void*), void*, v8::internal::SharedFlag) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x2a55921)
    #2 0x55e8f9d6e692 in v8::ArrayBuffer::NewBackingStore(void*, unsigned long, void (*)(void*, unsigned long, void*), void*) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x1a8c692)
    #3 0x55e8f925cf05 in node::Buffer::New(node::Environment*, char*, unsigned long, bool) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0xf7af05)
    #4 0x55e8f9263706 in node::Buffer::New(node::Environment*, unsigned long) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0xf81706)
    #5 0x55e8f97bcb41 in node::SyncProcessStdioPipe::GetOutputAsBuffer(node::Environment*) const (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x14dab41)
    #6 0x55e8f97bfba8 in node::SyncProcessRunner::BuildOutputArray() (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x14ddba8)
    #7 0x55e8f97c0dcb in node::SyncProcessRunner::BuildResultObject() (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x14dedcb)
    #8 0x55e8f97cb71a in node::SyncProcessRunner::Run(v8::Local<v8::Value>) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x14e971a)
    #9 0x55e8f97cc0f2 in node::SyncProcessRunner::Spawn(v8::FunctionCallbackInfo<v8::Value> const&) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x14ea0f2)
    #10 0x55e8f9eea8cb in v8::internal::FunctionCallbackArguments::Call(v8::internal::CallHandlerInfo) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x1c088cb)
    #11 0x55e8f9eec7c4 in v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper<false>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::FunctionTemplateInfo>, v8::internal::Handle<v8::internal::Object>, v8::internal::BuiltinArguments) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x1c0a7c4)
    #12 0x55e8f9ef3c2a in v8::internal::Builtin_Impl_HandleApiCall(v8::internal::BuiltinArguments, v8::internal::Isolate*) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x1c11c2a)
    #13 0x55e8f9ef4e29 in v8::internal::Builtin_HandleApiCall(int, unsigned long*, v8::internal::Isolate*) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x1c12e29)
    #14 0x55e8fcb3db38  (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x485bb38)
    #15 0x55e8fcac9bda  (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x47e7bda)
    #16 0x55e8fcac9bda  (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x47e7bda)
    #17 0x55e8fcac9bda  (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x47e7bda)
    #18 0x55e8fcac9bda  (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x47e7bda)
    #19 0x55e8fcac9bda  (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x47e7bda)
    #20 0x55e8fcac9bda  (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x47e7bda)
    #21 0x55e8fcac9bda  (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x47e7bda)
    #22 0x55e8fcac9bda  (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x47e7bda)
    #23 0x55e8fcac9bda  (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x47e7bda)
    #24 0x55e8fcac9bda  (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x47e7bda)
    #25 0x55e8fcac9bda  (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x47e7bda)
    #26 0x55e8fcac77d9  (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x47e57d9)
    #27 0x55e8fcac75b7  (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x47e55b7)
    #28 0x55e8fa440e15 in v8::internal::(anonymous namespace)::Invoke(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x215ee15)
    #29 0x55e8fa443dab in v8::internal::Execution::Call(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, int, v8::internal::Handle<v8::internal::Object>*) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x2161dab)

Thread T4 created by T0 here:
    #0 0x7fbd00369805 in pthread_create (/lib/x86_64-linux-gnu/libasan.so.5+0x3a805)
    #1 0x55e8fcaaba93 in uv_thread_create_ex ../../deps/uv/src/unix/thread.c:258
    #2 0x55e8fcaaba93 in uv_thread_create ../../deps/uv/src/unix/thread.c:212
    #3 0x55e8f95935f4 in node::WorkerThreadsTaskRunner::WorkerThreadsTaskRunner(int) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x12b15f4)
    #4 0x55e8f959434b in node::NodePlatform::NodePlatform(int, node::tracing::TracingController*) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0x12b234b)
    #5 0x55e8f9209fef in node::V8Platform::Initialize(int) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0xf27fef)
    #6 0x55e8f92013d1 in node::InitializeOncePerProcess(int, char**) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0xf1f3d1)
    #7 0x55e8f9201b1f in node::Start(int, char**) (/home/mmarchini/workspace/nodejs/node-master/out/Release/node+0xf1fb1f)
    #8 0x7fbcffde31e2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x271e2)

I think it's worth using the Release build instead, it will be a lot faster to run and it'll be less prone to breakage due to OOM issues. If we really want Debug ASAN builds, it should be on Jenkins (if we have an instance large enough for it).

@Trott (Member)

Trott commented Mar 18, 2020

Why run a test in github actions against every single PR (often multiple times per PR as the branch is repushed) just to then ignore its results?

Why not.

If we tell people to ignore CI results for some platforms/tests/configurations, then they will ignore them for nearly all platforms/tests/configurations. We've seen this in the past.

If we care about the results, let's run it in an environment where it is likely to pass, whether that's Release build with GitHub Actions, Debug build with Jenkins, or whatever.

@mhdawson (Member) left a comment

LGTM

@sam-github (Contributor, Author)

@gengjiawen You haven't made a case for why this test should be run. You know it fails, it's not going to pass until "the long run", and the negative effects have been pointed out.

Why run a test in github actions against every single PR (often multiple times per PR as the branch is repushed) just to then ignore its results?

Why not. ASAN checks should be run on every single PR. It's not a bonus part. It's an essential part.

See the above thread for "Why not". Also, given that it currently fails, it's clearly not essential for everyone.

At the time the GH actions were added ASAN passed, which is why it was acceptable, though there were concerns at the time. It now fails. If your hope was that by running the tests in the GH actions they would cause the feature to be kept up to date, that has demonstrably NOT happened.

I have provided several alternative CI setups above that would better serve the goals of keeping ASAN working:

  • weekly CI jobs that don't block, but can be checked for failures, effectively the same as the current github actions, but without the negatives
  • getting the feature to work, then adding it as a docker container test variant - this would fail PRs if the feature fails, so the feature would always work.

If the current failures are in V8, an external dependency, failures might be out of the control of the Node.js project. You might need to work directly with the V8 team to add ASAN tests as blocking release criteria for their CI.
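The first alternative above, a weekly non-blocking run, could be sketched as a scheduled workflow (a hypothetical fragment; the schedule and names are assumptions for illustration):

```yaml
# Hypothetical weekly ASAN workflow; not an actual file in the repo.
name: asan-weekly
on:
  schedule:
    # Run once a week (Mondays 03:00 UTC). Interested collaborators can
    # watch this job instead of every PR showing a failing check.
    - cron: '0 3 * * 1'

jobs:
  asan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Configure, build and test with ASAN
        run: |
          ./configure --enable-asan
          make -j2
          make test
```

A `schedule` trigger never attaches a check to PRs at all, so it avoids the red-X problem entirely rather than papering over it with continue-on-error.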

@BridgeAR (Member) left a comment

LGTM. We should work on getting this working again properly soon after!

mmarchini added a commit to mmarchini/node that referenced this pull request Mar 19, 2020
Changed from Debug build to Release build to avoid out of memory issues
during the link step. This should also speed up the build.

Commented out the test runner for now, since there are too many ASAN
warnings there to be useful. We should re-enable the test runner once
our ASAN warnings are fixed.

Simplify the environment: instead of running on a container, runs
directly on the host. With this, ASAN build looks identical to other
builds in GitHub Actions, with the exception of the `--enable-asan`
flag.

Ref: https://github.com/nodejs/node/issue/32257
Ref: nodejs#32324
@mmarchini (Contributor)

@sam-github @gengjiawen I opened #32352 as a compromise: it should fix the build on Actions, it accomplishes the same as before (ensures we don't break the build, not the tests), and it should be a lot faster. Of course, let's wait for the CI to finish to make sure it works before making any decisions.

@mmarchini (Contributor)

Ok, turns out even the release build is too heavy for GitHub Actions. My suggestion is moving this to Jenkins (if we have an instance large enough for it).

@gengjiawen (Member)

gengjiawen commented Mar 19, 2020

If we tell people to ignore CI results for some platforms/tests/configurations, then they will ignore them for nearly all platforms/tests/configurations. We've seen this in the past.

Yeah, I agree with this. But once we fix all the current issues, it will find memory leaks in every PR. I look at the current situation as a flaky test; we can't just remove it.

Also, given that it currently fails, it's clearly not essential for everyone.

It's essential for Node.js stability; C/C++ is not Rust. I want us to find memory leaks earlier, rather than a user running a service for a long time and then hitting a segfault or OOM.

weekly CI jobs that don't block, but can be checked for failures, effectively the same as the current github actions, but without the negatives

I think this is doable. But before that lands, I don't think it's a good idea to remove it from here.

In the end, I want to state that if this can be done via GitHub Actions, I prefer it done in GA. Here are my reasons:

  • Users get direct feedback on ASAN issues. With Jenkins or other tools, it's not even close. For quic or other forked projects, zero config is needed.

  • If some people want to help with the ASAN issues or check how they work, the script shows how to build and test with ASAN. Microsoft and Google invest a lot in ASAN; Visual Studio is planning local integration. I hope someday they can offer running the Node.js ASAN check directly on their cloud service.

Edit:
I have created an issue asking GitHub Actions to increase memory for this project: https://github.community/t5/GitHub-Actions/Can-you-help-to-give-more-memory-on-some-project/m-p/50660#M7918.

@mmarchini (Contributor)

mmarchini commented Mar 19, 2020

But once we fix all the current issues.

I think first we need a list of the current issues. And there's a good chance our ASAN build is misconfigured (V8 has a specific configuration which we should be using when building it, but I don't think that's the case).

@mhdawson (Member)

@sam-github in order to override an objection by a collaborator when consensus cannot be reached, I believe we should have an issue in the TSC repo that references this issue, and TSC consensus in that issue that we should override the objection.

If it is urgent, then I'd follow it up with an email on the TSC mailing list asking that people chime in on the issue by date/time X (as required by the urgency).

@mmarchini (Contributor)

I opened #32380 to work on properly enabling sanitizers on our CI, starting with ASAN. I still think this should be removed from GitHub Actions until:

  1. GitHub increases the memory of instances; or
  2. We find tunables which prevent the out of memory issues from happening

@Trott (Member)

Trott commented Mar 20, 2020

@sam-github in order to override an objection by a collaborator when consensus cannot be reached I believe we should have an issue in the TSC repo that references the issue, and TSC consensus in that issue that we should override the objection.

That seems reasonable and I'll go open an issue now, but I'm also adding tsc-agenda as that is what it says "can" be done in https://github.com/nodejs/node/blob/master/GOVERNANCE.md#tsc-meetings:

If consensus-seeking fails for an issue, a Collaborator may apply the tsc-agenda label. That will add it to the TSC meeting agenda.

@Trott (Member)

Trott commented Mar 20, 2020

@sam-github in order to override an objection by a collaborator when consensus cannot be reached I believe we should have an issue in the TSC repo that references the issue, and TSC consensus in that issue that we should override the objection.

nodejs/TSC#840

@nschonni (Member)

Just throwing in #32143, which limits it to running on PRs that touch C/C++ files. It doesn't stop false positives, but it won't run on other types of PRs. Feel free to close that one if this lands, though.

@mmarchini (Contributor)

I'd still rather remove the Action than land #32143. When we re-add ASAN, we can skip it for *.md files, but not *.js files: ASAN is also valid for JavaScript, since newly added JavaScript code can touch uncovered C++ sections of our codebase.

@gengjiawen (Member)

I opened #32380 to work on properly enabling sanitizers on our CI, starting with ASAN. I still think this should be removed from GitHub Actions until:

  1. GitHub increases the memory of instances; or
  2. We find tunables which prevent the out of memory issues from happening

We haven't even set up a pipeline in Jenkins to do the ASAN check yet. Removing this without any alternative solution in place doesn't look okay to me.

@mmarchini (Contributor)

mmarchini commented Mar 20, 2020

We haven't even set up a pipeline in Jenkins to do the ASAN check yet

We already don't have any ASAN checks in our CI: this Action always fails, and it will keep failing for the foreseeable future.

Removing it doesn't mean it will never be added again in the future; it means we won't be wasting 1 machine (per PR) out of the 20 concurrent machines available for the org on a test we know will fail. As mentioned before, when we have similar issues on Jenkins, we remove the machine from rotation. If we want Actions to be treated as an official build as well, it should follow the same rule.

It also means we won't be sending the wrong signal to contributors: having an ASAN check showing up gives the false impression that we're building and testing with ASAN and that the tests are passing. Not only is the build not working; even when it works, the tests don't pass (and from what I've seen, several violations could actually be memory leaks and other issues), yet we show it as if everything were fine.

As soon as we figure out an alternative, we can re-add it, either as an Action, as a Jenkins job, or as whichever alternative we came up with.

@mcollina (Member) left a comment

lgtm

@gabrielschulhof (Contributor)

@gengjiawen let's land this and open an issue to add an asan check to our CI pipeline. We can reference this PR as a source of possible infrastructure solutions. If you would like to drive the issue forward @nodejs/build can no doubt help you come up with the right infrastructure solution for building with such high memory requirements. Travis doesn't seem up for the job.

@gabrielschulhof (Contributor)

@gengjiawen we have #32380 for setting up sanitizers.

@gengjiawen gengjiawen self-requested a review March 21, 2020 04:16
@Trott Trott removed the tsc-agenda Issues and PRs to discuss during the meetings of the TSC. label Mar 21, 2020
mmarchini pushed a commit that referenced this pull request Mar 22, 2020
This reverts commit 3ec4b21.

See: #32257

PR-URL: #32324
Reviewed-By: Matheus Marchini <mat@mmarchini.me>
Reviewed-By: Jiawen Geng <technicalcute@gmail.com>
Reviewed-By: Rich Trott <rtrott@gmail.com>
Reviewed-By: Michael Dawson <michael_dawson@ca.ibm.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Ruben Bridgewater <ruben@bridgewater.de>
Reviewed-By: Richard Lau <riclau@uk.ibm.com>
Reviewed-By: Michaël Zasso <targos@protonmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: Beth Griggs <Bethany.Griggs@uk.ibm.com>
Reviewed-By: Gabriel Schulhof <gabriel.schulhof@intel.com>
@mmarchini (Contributor)

Landed in 1d49558

@mmarchini mmarchini closed this Mar 22, 2020
@richardlau richardlau mentioned this pull request Mar 23, 2020
MylesBorins pushed a commit that referenced this pull request Mar 24, 2020
This reverts commit 3ec4b21.

See: #32257

PR-URL: #32324
Reviewed-By: Matheus Marchini <mat@mmarchini.me>
Reviewed-By: Jiawen Geng <technicalcute@gmail.com>
Reviewed-By: Rich Trott <rtrott@gmail.com>
Reviewed-By: Michael Dawson <michael_dawson@ca.ibm.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Ruben Bridgewater <ruben@bridgewater.de>
Reviewed-By: Richard Lau <riclau@uk.ibm.com>
Reviewed-By: Michaël Zasso <targos@protonmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: Beth Griggs <Bethany.Griggs@uk.ibm.com>
Reviewed-By: Gabriel Schulhof <gabriel.schulhof@intel.com>
@MylesBorins MylesBorins mentioned this pull request Mar 24, 2020
MylesBorins pushed a commit that referenced this pull request Mar 24, 2020
This reverts commit 3ec4b21.

See: #32257

PR-URL: #32324
Reviewed-By: Matheus Marchini <mat@mmarchini.me>
Reviewed-By: Jiawen Geng <technicalcute@gmail.com>
Reviewed-By: Rich Trott <rtrott@gmail.com>
Reviewed-By: Michael Dawson <michael_dawson@ca.ibm.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Ruben Bridgewater <ruben@bridgewater.de>
Reviewed-By: Richard Lau <riclau@uk.ibm.com>
Reviewed-By: Michaël Zasso <targos@protonmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: Beth Griggs <Bethany.Griggs@uk.ibm.com>
Reviewed-By: Gabriel Schulhof <gabriel.schulhof@intel.com>
@sam-github sam-github deleted the revert-pr-31902 branch May 4, 2020 20:11