musllinux support #430
Conversation
These look reasonable - how can I help? Do you know of a good way to test? I guess we need a musllinux testing docker image.
Yeah, looking at that now. Would you mind terribly if the alpine variants just pulled in pypa images, like how the build step does? It'd really simplify things.
The problem with that is that it is not a good test of whether the wheel really does work on another Linux. Do you think it would be hard to make up another docker image like: https://github.com/multi-build/docker-images
Yeah, that's what I'm looking at. Getting multiple parallel versions of Python on Alpine is tedious. I think the only routes to doing so are compiling them yourself, as the pypa images do, or pyenv, which I don't know much about as I've always used it through asdf, which would be even less indicative of normal environments :P
There's probably an argument there for just using what Alpine ships, tbh; it is work to get other versions, so most people will likely be using whatever is in apk.
Or I could quit complaining and just get started on porting the build script. :P
Actually, I think you'll find the build script will work fine with not much modification - it's just that you'll have to compile nearly all your Python versions, not just some of them - but that's just taking stuff out and tweaking a bit.
Yeah, it seems fine (talking about this one, right?). It just feels weird because it'll be exactly duplicating the pypa images, and then why bother producing new images?
Another option would be to just pull the official images, which is probably where most musl packages produced here will end up being run.
I was thinking of https://github.com/multi-build/docker-images/tree/focal/docker - but any scripts that will build Pythons for you, or images where you have Python you can test on, will be good. The point is to have images that have different, newer kernels and libraries than the baseline from the manylinux build image, to test that the wheels work OK with these kernels and libraries, as well as the ones they were compiled for.
I think that makes sense. A PR to add alpine x86_64, i686, and arm64 images would be nice. I wouldn't worry about the build time to create Python from source; after all, these images are rebuilt only when a Python version is released. PyPy is not available for Alpine Linux, so attempts to use multibuild with it should report an error early on, as in the sketch below.
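For illustration, such an early failure could be a guard of roughly this shape (hypothetical code, not from the PR; it assumes PyPy requests arrive via an MB_PYTHON_VERSION value prefixed with pypy):

# Hypothetical guard: fail fast when PyPy is requested for a musllinux build,
# since PyPy publishes no Alpine/musl binaries.
if [ "$MB_ML_LIBC" == "musllinux" ] && [ "${MB_PYTHON_VERSION#pypy}" != "$MB_PYTHON_VERSION" ]; then
    echo "error: PyPy is not available for musllinux" >&2
    exit 1
fi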
Alright, so I've been noodling on this for a bit, and I think this last commit is going to give the most relevant experience for end users. By using the official Python images for this, we get support for a whole range of versions that we don't have to maintain and, more importantly, we get to test in one of the most likely environments users will be running in.
The default interface is very simple and does what is expected, at least. I think alpine:latest is probably the most likely end environment, but that would mean we could only test against the major versions Alpine ships. It is possible, and probably should be documented as such, to use DOCKER_TEST_IMAGE="alpine:latest" with something in the config.sh hooks like the sketch below.
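A sketch of such a hook (hypothetical - the original snippet is not preserved in this excerpt; run_tests is the standard config.sh override, and python3/py3-pip are the packages current Alpine ships):

# Hypothetical config.sh hook for testing on bare alpine:latest, which
# ships without Python: pull the interpreter in from apk, then smoke-test.
function run_tests {
    apk add --no-cache python3 py3-pip
    python3 --version
}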
I guess that Python is not installed by default in alpine:latest? It would be much better to have some alpine test image that was similar to the other test images, in that you didn't have to have custom stuff in your config in order to run the standard tests. Did it turn out to be more difficult than we had predicted to make a corresponding docker image with different Python versions already installed?
That's just for using alpine:latest; the commit I pushed does the right thing without having to mess with hooks.
Building Pythons shouldn't be hard, but I feel it wouldn't be representative of the environments people actually use. Just from what I've seen at my orgs and how I use it, when people are using musl Python they're using it in docker. If you need a version other than what Alpine ships, you're probably going to google it and get the official docker images. Only after everything else has failed would a normal user try to compile their own, but even then they're likely to switch to one of the Debian-based official images and be done with it.
Tested this with cffi builds, and I think it's ready for others to poke at, so moving it outta draft now. For the record, this is how I've been testing:
#!/usr/bin/env bash
WORK_DIR=$(mktemp -d)
pushd "$WORK_DIR" || exit
git clone https://github.com/liath/cffi-travis-wheel .
hg clone https://foss.heptapod.net/pypy/cffi
git clone https://github.com/liath/multibuild --branch musllinux
source multibuild/common_utils.sh
source multibuild/travis_linux_steps.sh
export MB_PYTHON_VERSION=3.6 MB_ML_LIBC="musllinux" MB_ML_VER="_1_1"
before_install && build_wheel cffi
rm -rf "$WORK_DIR"
A few first thoughts.
@@ -308,6 +313,7 @@ function build_wheel_cmd {
     local repo_dir=${2:-$REPO_DIR}
     [ -z "$repo_dir" ] && echo "repo_dir not defined" && exit 1
     local wheelhouse=$(abspath ${WHEEL_SDIR:-wheelhouse})
+    mkdir -p "$wheelhouse"
Was this necessary? Did you get an error without it?
Yep, without it the wheel gets saved as a file at that path vs inside a folder.
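(To illustrate the failure mode with hypothetical file names: copying to a directory path that does not exist yet just creates a regular file of that name.)

cp dist/pkg-1.0-cp39-cp39-musllinux_1_1_x86_64.whl wheelhouse   # creates a *file* named "wheelhouse"
mkdir -p wheelhouse
cp dist/pkg-1.0-cp39-cp39-musllinux_1_1_x86_64.whl wheelhouse   # lands inside wheelhouse/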
docker_test_wrap.sh
@@ -2,6 +2,17 @@
 # Install and test steps on Linux
 set -e

+# shim for python:alpine not having bash
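The shim's body is elided in this excerpt; given the --entrypoint sh override further down, a shim of roughly this shape would install bash and re-exec the script under it (a sketch, not the PR's actual lines):

# Sketch: when started under plain sh (python:alpine ships no bash),
# install bash from apk and re-exec this script with it.
if [ -z "$BASH_VERSION" ]; then
    apk add --no-cache bash
    exec /bin/bash "$0" "$@"
fi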
This is the kind of thing that really should go in the test image, not in the invocation script. Does sh / ash not work?
It should, but I was trying to go for as small a patch as possible instead of running around tweaking every script.
Right - but the point is, here we have stuff which should be in the test image, and because it's not, it adds maintenance burden to the multibuild code, as the reviewer tries to work out what it is about the test image you are trying to fix.
If I understand correctly, bash should be in the test image? Does multibuild assume /bin/sh is bash?
Is it required in the build image as well? Then the fix should be in pypa/manylinux. If it is only the test image, it should be fixed in multi-build/docker-images.
travis_linux_steps.sh
@@ -132,5 +140,6 @@ function install_run {
         -e MANYLINUX_URL="$MANYLINUX_URL" \
         -e TEST_DEPENDS="$TEST_DEPENDS" \
         -v $PWD:/io \
+        --entrypoint sh \
Again - I think this should go in the test image itself. It should set its default entrypoint.
@liath - there is now a docker test image for Alpine - it should soon be available at https://hub.docker.com/u/multibuild - it's being built now. Can you refactor to use that? Or would you like me to take this over?
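(For reference, pointing a run at such an image goes through DOCKER_TEST_IMAGE, whose {PLAT} placeholder install_run substitutes, as the snippet later in this thread shows; the exact image name here is an assumption based on that snippet's default.)

# Assumed image name, matching the multibuild/alpine3.14_$plat default used
# in install_run below; {PLAT} is replaced with the target platform.
export DOCKER_TEST_IMAGE="multibuild/alpine3.14_{PLAT}"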
The image is up, but the name …
I still maintain that testing in the images people will be running these wheels on is the better solution, but I've switched to your test image as requested. For the record, the hassle of installing bash on …
Heck, if the shim is the issue, we could just build from the official image inline with something like this:

local plat=${1:-${PLAT:-x86_64}}
if [ -z "$DOCKER_TEST_IMAGE" ]; then
    if [ "$MB_ML_LIBC" == "musllinux" ]; then
-       local docker_image="multibuild/alpine3.14_$plat"
+       docker pull "python:$MB_PYTHON_VERSION-alpine"
+       local docker_image="multibuild_alpine:latest"
+       docker build -t "$docker_image" -f "$MULTIBUILD_DIR/alpine.Dockerfile" \
+           --build-arg "PYTHON_VERSION=$MB_PYTHON_VERSION" .
    else
        local bitness=$([ "$plat" == i686 ] && echo 32 || echo 64)
        local docker_image="matthewbrett/trusty:$bitness"
+       docker pull $docker_image
    fi
else
    # aarch64 is called arm64v8 in Ubuntu
    local plat_subst=$([ "$plat" == aarch64 ] && echo arm64v8 || echo $plat)
    local docker_image="${DOCKER_TEST_IMAGE/\{PLAT\}/$plat_subst}"
+   docker pull $docker_image
fi
# Moved the `docker pull` around because we want to pull `python:alpine`, not the inline image.
-docker pull $docker_image
docker run --rm \
    -e PYTHON_VERSION="$MB_PYTHON_VERSION" \
    -e MB_PYTHON_VERSION="$MB_PYTHON_VERSION" \

And alpine.Dockerfile:

ARG PYTHON_VERSION="3.9"
FROM python:$PYTHON_VERSION-alpine
RUN apk update && apk add bash
ENTRYPOINT ["bash"]
Is this waiting for more review? The NumPy builds use OpenBLAS compiled using multibuild, so in order to get NumPy musl wheels we need musl support in multibuild.
Looks good to me - let's get it in.
musllinux spec landed, let's get these wheels rollin 😎