[merged] Clear capability bounding set #149


alexlarsson
Collaborator

The capability bounding set is a limit on what capabilities can
be regained at execve(). Due to PR_SET_NO_NEW_PRIVS we should be safe
from any such issues, but we may as well clear it anyway.

Note that we also have to clear it in the new namespace if user namespaces
are enabled, because the kernel gives us a new, full bounding set in
the user namespace.

See #136 for some
discussion about this.

{
  unsigned long cap;

  for (cap = 0; cap <= 63; cap++)
Collaborator

Where did this 63 value come from? A quick glance at the systemd source (again my first go-to reference) shows they have a cap_last_cap() function which does all sorts of gymnastics. On the other hand, it wouldn't be that bad to copy.

Collaborator Author

The syscall API used to set capabilities currently allows a maximum of 64 capabilities (it has two 32-bit fields). There aren't actually that many capabilities defined, but we just ignore EINVAL for the "too large" ones.

Eventually the kernel could have some new syscall to get more caps, but I doubt that will happen.
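
(For reference, here is a minimal sketch of the approach described above: loop over all 64 possible capability numbers and tolerate EINVAL for numbers the kernel does not define. The function name and error handling are illustrative, not necessarily the exact merged code.)

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/prctl.h>

/* Illustrative sketch: drop every capability from the bounding set.
   The prctl interface has room for 64 capability numbers, so try them
   all and ignore EINVAL for numbers the kernel does not know about. */
static void
drop_cap_bounding_set (void)
{
  unsigned long cap;

  for (cap = 0; cap <= 63; cap++)
    {
      if (prctl (PR_CAPBSET_DROP, cap, 0, 0, 0) < 0 && errno != EINVAL)
        {
          fprintf (stderr, "Dropping capability %lu from bounds: %s\n",
                   cap, strerror (errno));
          exit (1);
        }
    }
}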

@cgwalters
Collaborator

@rh-atomic-bot r+ 6b3dd4f

@rh-atomic-bot

⌛ Testing commit 6b3dd4f with merge b35f84a...

@rh-atomic-bot

☀️ Test successful - status-redhatci
Approved by: cgwalters
Pushing b35f84a to master...

@rh-atomic-bot changed the title from "Clear capability bounding set" to "[merged] Clear capability bounding set" on Jan 13, 2017
@mariospr

mariospr commented Feb 6, 2017

We recently upgraded bubblewrap to 1.7 on Endless and now observe this error whenever we try to run any flatpak on our ARM product:

$ flatpak run -v com.endlessm.encyclopedia.pt
XA: No installations directory in /etc/flatpak/installations.d. Skipping
XA: Allowing dri access
XA: Allowing x11 access
XA: Allowing pulseaudio access
XA: Running '/usr/bin/bwrap --args 19 /usr/lib/flatpak/flatpak-dbus-proxy --fd=16 unix:abstract=/tmp/dbus-hwXi4V2Q6k,guid=470e40e6fb763ef90b027f125894dd37 /run/user/1001/.dbus-proxy/session-bus-proxy-9284UY --filter --own=com.endlessm.encyclopedia.pt --own=com.endlessm.encyclopedia.pt.* --talk=org.freedesktop.portal.*'
XA: Running '/usr/bin/bwrap --args 18 /usr/lib/flatpak/flatpak-dbus-proxy --fd=16 unix:path=/var/run/dbus/system_bus_socket /run/user/1001/.dbus-proxy/system-bus-proxy-X384UY --filter --talk=com.endlessm.Metrics'
Dropping capability 0 from bounds: Operation not permitted

Reading through man prctl, this EPERM error suggests that we don't have the CAP_SETPCAP capability in the calling thread, so one question is why that cap is not there on our ARM kernel (a question I'm actively trying to answer already, but I need some feedback from our kernel team first).

However, another question I have is whether it would make sense to call prctl(PR_CAPBSET_READ, CAP_SETPCAP) in drop_cap_bounding_set() and return early if it returns zero.

Comments?
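
(A rough sketch of the guard being proposed above; the helper name is made up for illustration, and whether this is the right condition is exactly the open question, so treat it as a sketch only.)

#include <linux/capability.h>   /* CAP_SETPCAP */
#include <sys/prctl.h>

/* Proposed guard: PR_CAPBSET_READ returns 1 if the given capability is
   in the bounding set, 0 if it is not, and -1 on error.  The idea is to
   skip the PR_CAPBSET_DROP loop when this reports 0. */
static int
can_drop_cap_bounding_set (void)
{
  return prctl (PR_CAPBSET_READ, CAP_SETPCAP, 0, 0, 0) > 0;
}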

@cgwalters
Collaborator

Can you file a new issue for this?

@mariospr

mariospr commented Feb 6, 2017

Sure, thanks
