
Errors will happen when upgrading the libraries #25105

Closed
KateGo520 opened this issue Jun 14, 2020 · 7 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


KateGo520 commented Jun 14, 2020

(The purpose of this report is to alert openshift/origin to possible problems it may hit when trying to upgrade the following dependencies.)

An error will occur when upgrading the library prometheus/client_golang:

github.com/prometheus/client_golang

- Latest version: v1.6.0 (latest commit 6edbbd9 on 28 Apr)
- Where did you use it:
https://github.com/openshift/origin/search?q=prometheus%2Fclient_golang&unscoped_q=prometheus%2Fclient_golang
- Detail:

github.com/prometheus/client_golang/go.mod

module github.com/prometheus/client_golang
go 1.11
require (
	github.com/beorn7/perks v1.0.1
	github.com/cespare/xxhash/v2 v2.1.1
	…
)

github.com/prometheus/client_golang/prometheus/registry.go

package prometheus
import (
	"github.com/cespare/xxhash/v2"
	"github.com/golang/protobuf/proto"
	…
)

This problem has existed since prometheus/client_golang v1.2.0. If you try to upgrade prometheus/client_golang to v1.2.0 or above, you will get an error: no package exists at "github.com/cespare/xxhash/v2".

I investigated the release information of these libraries (prometheus/client_golang >= v1.2.0) and found the root cause of this issue:

  1. These dependencies all adopted Go modules in recent versions.

  2. They all comply with the specification "Releasing Modules for v2 or higher" in the Modules documentation. Quoting the specification:

A package that has migrated to Go modules must include the major version in the import path to reference any v2+ module. For example, if the repo github.com/my/module migrated to modules at version v3.x.y, then it should declare its module path with the MAJOR version suffix "/v3" (i.e., module github.com/my/module/v3), and downstream projects should use "github.com/my/module/v3/mypkg" to import this repo's packages.

  3. This "github.com/my/module/v3/mypkg" is not a physical path. Go versions without minimal module awareness (older than 1.9.7 and 1.10.3) and all third-party dependency management tools (dep, glide, govendor, etc.) do not handle such import paths correctly. See golang/dep#1962, golang/dep#2139.

Note: creating a new branch is not required. If instead you have been previously releasing on master and would prefer to tag v3.0.0 on master, that is a viable option. (However, be aware that introducing an incompatible API change in master can cause issues for non-modules users who issue a go get -u given the go tool is not aware of semver prior to Go 1.11 or when module mode is not enabled in Go 1.11+).
Pre-existing dependency management solutions such as dep currently can have problems consuming a v2+ module created in this way. See for example dep#1962.
https://github.com/golang/go/wiki/Modules#releasing-modules-v2-or-higher
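The versioning rule quoted above can be illustrated with a minimal go.mod sketch (github.com/my/module and mypkg are the placeholders from the spec, not real repos):

```
// go.mod of github.com/my/module after tagging v3.x.y:
module github.com/my/module/v3

go 1.11
```

A downstream project would then write `import "github.com/my/module/v3/mypkg"`, even though the repository contains no physical v3/ directory; only module-aware tooling can resolve that virtual path.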

Solution

1. Migrate to Go Modules.

Go modules are the general direction of the Go ecosystem; if you want a smoother dependency-upgrade experience, migrating to Go modules is a good choice.

Migrating to modules is accompanied by the introduction of virtual paths, as discussed above: "github.com/my/module/v3/mypkg" is not a physical path, so Go versions older than 1.9.7 and 1.10.3, plus all third-party dependency management tools (dep, glide, govendor, etc.), have no minimal module awareness and do not handle such import paths correctly.

Downstream projects that are module-unaware (Go versions older than 1.9.7 and 1.10.3, or projects using third-party tools such as dep, glide, or govendor) might then fail to build.

[*] You can see who would be affected here: [17 module-unaware users, e.g., lima1909/kube-kubernetes-deploy, lixwlixw/drone-test, zhanglianx111/c2o]
https://github.com/search?q=openshift%2Forigin+filename%3Avendor.conf+filename%3Avendor.json+filename%3Aglide.toml+filename%3AGodep.toml+filename%3AGodep.json&type=Code
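For illustration, after such a migration origin's own go.mod could require client_golang directly, and the /v2 module would be resolved transitively; a sketch (the go directive and layout here are assumptions, only the module paths and versions come from the snippets above):

```
module github.com/openshift/origin

go 1.13

require github.com/prometheus/client_golang v1.6.0

// client_golang's own go.mod (shown earlier) pulls in
// github.com/cespare/xxhash/v2 v2.1.1 transitively.
```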

2. Maintain v2+ libraries that use Go modules in vendor directories.

If openshift/origin wants to keep using its dependency management tools (dep, glide, govendor, etc.) and still wants to upgrade the dependencies, it can choose this fix strategy:
manually download the dependencies into the vendor directory and handle compatibility (materialize the virtual path, or delete the virtual part of the path), so that dependencies are never fetched by their virtual import paths. This may add some maintenance overhead compared to using modules.
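A minimal sketch of "materializing the virtual path" for the xxhash example (the $GOPATH/pkg/mod source location is an assumption; adapt it to however you fetch the release):

```shell
# Create an on-disk directory that matches the virtual import path, so
# module-unaware tools resolve github.com/cespare/xxhash/v2 from vendor/.
mkdir -p vendor/github.com/cespare/xxhash/v2

# Then copy the v2 module's source into it (source path illustrative):
# cp -r "$GOPATH/pkg/mod/github.com/cespare/xxhash/v2@v2.1.1/." \
#       vendor/github.com/cespare/xxhash/v2/
```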

(There are 22 module users downstream, such as diegodsac/istio-operator, dmage/doppel, ulikl/oapi-exporter…)
https://github.com/search?q=openshift%2Forigin+filename%3Ago.mod&type=Code

Since import paths have different meanings in module-aware and non-module repos, materializing the virtual path is the better way to solve the issue while ensuring compatibility with downstream module users. A textbook example is provided by the repo github.com/moby/moby:
https://github.com/moby/moby/blob/master/VENDORING.md
https://github.com/moby/moby/blob/master/vendor.conf
In its vendor directory, github.com/moby/moby adds a /vN subdirectory for the corresponding dependencies.
This helps downstream module users work well with your package; otherwise they get stuck on older versions such as github.com/Masterminds/sprig v2.22.0.
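With vendor.conf-style tooling, the pin can then reference the versioned path directly; an illustrative fragment (the entry format follows moby's "import-path revision" vendor.conf convention; the tag shown is an assumption):

```
# Vendored on disk under vendor/github.com/cespare/xxhash/v2/
github.com/cespare/xxhash/v2 v2.1.1
```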

3. Request that upstream handle compatibility.

prometheus/client_golang has 1049 module-unaware users on GitHub, such as containerd/cri, gridgentoo/MapD, Mr8/yuanye…
https://github.com/search?o=desc&q=prometheus%2Fclient_golang+filename%3Avendor.conf+filename%3Avendor.json+filename%3Aglide.toml+filename%3AGodep.toml+filename%3AGodep.json&s=indexed&type=Code

Summary

You can make a choice when you meet such dependency-management issues by balancing your own development schedule and mode against the effects on downstream projects.

For this issue, Solution 2 maximizes your benefit with minimal impact on your downstream projects and the ecosystem.


Do you plan to upgrade the libraries in the near future?
Hope this issue report can help you ^_^
Thank you very much for your attention.

Best regards,
Kate

@KateGo520
Author

@smarterclayton @marun Could you help me review this issue? Thx :p

@marun
Contributor

marun commented Jun 18, 2020

@smarterclayton @marun Could you help me review this issue? Thx :p

I'm working on transitioning to maintaining hyperkube in openshift/kubernetes, and switching origin to go mod won't be a priority until that's complete.

There is an alternative, though, to support go mod versioning with glide. I've already had to do it with structured-merge-diff:

Are these dep bumps required, and if so, is that for hyperkube or for e2e tests?

@KateGo520 KateGo520 changed the title Errors you may encounter when upgrading this dependency Errors you may encounter when upgrading the library github.com/prometheus/client_golang Jun 19, 2020
@KateGo520
Author

@marun Thank you very much for your reply; I understand your development schedule.
If you upgrade prometheus/client_golang, your suggested approach (i.e., my Solution 2) can solve this problem.

@KateGo520 KateGo520 changed the title Errors you may encounter when upgrading the library github.com/prometheus/client_golang Errors will happen when upgrading the libraries Jun 19, 2020
@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 19, 2020
@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot openshift-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 19, 2020
@openshift-bot
Contributor

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci-robot

@openshift-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
