GlusterFS volume error: glusterfs: failed to get endpoints #6331
The instructions are wrong - you need to create a service. See #6167
Docs will be created with openshift/openshift-docs#1356
Oops, missed that you had the service :) Regarding the mismatch, agree that is ugly.
@rootfs how do we fix this so end users can easily have gluster that the whole cluster can use?
@smarterclayton
Sounds fine to me.
@rootfs, @smarterclayton to be clear, the only workaround is to define the endpoints in each namespace?
@karelstriegel you need to have a service to associate with the endpoints.
@rootfs I have a service to associate with the endpoints; both are in the default namespace. So I guess I need to have both for each namespace (as a workaround)? Update: that solves my issue, but you should be able to define it cluster-wide.
Yes, the service and endpoints live in the same namespace.
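For reference, a minimal sketch of that pairing is below; the name glusterfs-cluster, the namespace myproject, the IPs, and the port value are all placeholders rather than values from this thread.

```yaml
# Sketch only: an Endpoints object listing the Gluster server IPs, plus a
# Service of the same name in the same (consuming) namespace so the endpoints
# persist, as the referenced docs describe. Names, namespace, and IPs are placeholders.
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
  namespace: myproject
subsets:
  - addresses:
      - ip: 192.168.1.11
      - ip: 192.168.1.12
    ports:
      - port: 1          # arbitrary port value; the gluster plugin does not use it
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster   # must match the Endpoints name
  namespace: myproject
spec:
  ports:
    - port: 1
```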
@rootfs I personally like to specify the gluster endpoints separately; it is an abstraction I like and use. What about adding a …
Sure, I'll look into that.
Putting the server list into every pod/podTemplate does not seem to solve the OP's requirements:
Changing an endpoints object in all namespaces is easier than changing the volume spec in all pods.
Also, you cannot really make a required field optional, because clients other than the kubelet might be relying on the definition of the Glusterfs volume, and will be broken if a required field is not set.
Have you considered either (1) adding an optional "endpointsNamespace" field (which is super easy to implement but has authz implications), or (2) making a thing that helps you broadcast a service's endpoints from a source namespace into headless endpoints in all other namespaces?
For PVC we would already have the security protection (end users can't …)
Good point about DNS. Option 1 is looking best.
@erictune @smarterclayton I like the namespace-as-a-parameter pattern; I already used it in one of the PRs. Here, the server list doesn't replace the endpoint. The server list is used only when the endpoint is not working. It solves the following issues:
If the hosts are in the endpoint, all is good. But if the hosts are not in the endpoint, then we have a split-brain problem. If we decide to go with option (1), then we have to either create a new endpoint (and the headless service) or update an existing endpoint, a quite unprecedented pattern. But if we use the hosts in the proposed server list, then we at least can get updated working hosts.
My thoughts are the same as what @rootfs shared here. Also, to ensure backward compatibility, the endpoint is still kept and preference is given to the endpoint over the server list. The server list is only used when the endpoint is not working.
@humblec When I looked at kubernetes/kubernetes#33020 it sounds like it allows you to reference an existing endpoints in another namespace. For example, if my namespace is called "ericsns" and there is a gluster cluster in the "gluster" namespace, then I can refer to that and the kubelet will use those other endpoints. But your description in your last comment uses the words …
@erictune yes, in the new PR, the provided 'endpointnamespace' is used. Regarding …
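To make the shape of that proposal concrete, a hypothetical PersistentVolume using a cross-namespace endpoints reference might look like the sketch below; the field name endpointsNamespace and all values are illustrative, not confirmed spellings from the PR.

```yaml
# Hypothetical sketch of the proposal under discussion: a PV whose gluster source
# names an Endpoints object in another namespace. The endpointsNamespace field name
# and every value here are illustrative, not taken from the PR.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster      # Endpoints name (placeholder)
    endpointsNamespace: gluster       # proposed cross-namespace reference (illustrative)
    path: myvolume                    # Gluster volume name (placeholder)
    readOnly: false
```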
So this would allow me to make a kubelet create a service and endpoint for me in someone else's namespace by creating a pod with a specially crafted GlusterVolumeSource? That's not something we should enable.
@liggitt I'm not sure I understand why a kubelet would ever create a service and endpoint, although I haven't read the patch to see if it is doing something different than it should. The idea was supposed to be that the provisioner, the trusted component which creates PVs, will create, in a namespace controlled by the provisioner configuration (let's call it …). While this may be triggered by an end user creating a PVC, which causes the provisioner to do its work, neither the end user nor the kubelet creates services, endpoints, etc. The provisioner does that. The kubelet will USE the … If this is not what the code is doing we have a problem.
The GlusterVolumeSource object can be part of a PersistentVolume, or part of a Volume definition inside a pod spec. I'm primarily concerned about adding a namespace field to the latter (though the more params we add to PV provisioners that end users shouldn't be able to see/modify, the harder ACL will be). I would not want the possibility of part of the system acting in another namespace thinking the GlusterVolumeSource came from a PersistentVolume under cluster admin control, when in reality it came from a pod spec Volume under user control.
That, I thought, we discussed as well. Although now I have a new 'concern' which you just brought up. I'd been thinking that it was irrelevant if we let a user set the endpointName and endpointNamespace in the pod spec volume definition, but I'm afraid that is not true. While discovering the IP addresses which are part of the endpoint in another namespace does not seem security sensitive, we are subtly changing how those can be used. Since this will cause the … Is the only answer to abandon endpoints altogether?
I agree with Jordan that cross-namespace references are to be avoided for … Another reason to avoid them is that they expose an implementation detail. But what happens if, say, the cluster admin decides to try out a new … I think that the service producer and consumer need to be loosely coupled. Some other options to explore:
- Continue to use local-to-consumer-namespace endpoints, and try harder to …
- Refer to the gluster cluster via a DNS name, which could be, but is not …
Okay, prepare for backpedaling.
@erictune although a kube-DNS solution might be even better. Instead of listing ANY endpoints in the volume, list a DNS address. That address could obviously be of the form … Not sure that really helps the fact that the user can cause the kubelet to act on their behalf connecting to this address. Then again, no solution involving the in-pod volume definitions can overcome these issues...
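As an illustration of what such a DNS address could resolve against (not an implemented gluster volume field): a headless Service in a central namespace, backed by manually maintained Endpoints, yields a stable cluster DNS name such as glusterfs-cluster.gluster.svc.cluster.local whose records are the Gluster server IPs. All names and IPs below are placeholders.

```yaml
# Sketch of the DNS idea only -- not an existing gluster volume field.
# A headless Service plus manual Endpoints in a central "gluster" namespace
# gives a stable DNS name (glusterfs-cluster.gluster.svc.cluster.local)
# that resolves directly to the Gluster server IPs.
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
  namespace: gluster
spec:
  clusterIP: None       # headless: DNS returns the endpoint IPs directly
  ports:
    - port: 1
---
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
  namespace: gluster
subsets:
  - addresses:
      - ip: 192.168.1.11   # placeholder Gluster server IPs
      - ip: 192.168.1.12
    ports:
      - port: 1
```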
The Impersonate-User feature would address the latter issue.
Just to clarify what the code does now: if the …
Fix for: openshift/origin#6331. Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Is this closed by kubernetes/kubernetes#31854?
Hello, a customer using GlusterFS in a pretty large OpenShift cluster (3k containers, > 400 PVs) here :) No, I think this is not closed by kubernetes/kubernetes#31854. With heketi you just create/delete the service & endpoints dynamically, but the problem still remains. I still don't like that the gluster PV definition is different from all the other storage types. Why isn't all the information inside the gluster PV object, like with other storage types? The current solution has several downsides:
I know about your limitations with changing the existing object. Here are a few ideas:
If this field is not specified, it takes the data from the current namespace. If it is specified and there are no access rights, just write an event with an error message. We would have no problem adding this config to one of our global namespaces where everyone has read permissions. Or even better, add the information directly to the PV object:
Then, if glusterIPs is present, take these IPs; otherwise fall back to the current solution. DNS also seems like a good way to go. I would like to "re-activate" the discussion around this topic.
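A hypothetical sketch of the "IPs directly in the PV object" idea follows; glusterIPs is not an existing field, and the names and addresses are placeholders.

```yaml
# Hypothetical only: "glusterIPs" is not an existing field. This just illustrates the
# proposal of carrying the server addresses in the PV itself, with the existing
# endpoints reference kept as the fallback for backward compatibility.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    glusterIPs:                      # proposed field, not implemented
      - 192.168.1.11
      - 192.168.1.12
    endpoints: glusterfs-cluster     # existing field, used as fallback
    path: myvolume
    readOnly: false
```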
Hi @ReToCode, you got most of it; one blocker was …
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
Is that problem solved? I'm in the same situation; my GlusterFS endpoints are in the … Too bad that this issue is stalled :(
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. /close
Same problem here... @eduardobaitello and @arrawatia did you manage to find a workaround for this issue? @smarterclayton and other OpenShift members: please re-open this issue and investigate a solution or at least a workaround. As it is right now, GlusterFS persistent storage in OpenShift is useless... /reopen
@hostingnuggets I used the ugly workaround: create an endpoint and a headless service for every namespace in the cluster that has pods using GlusterFS...
@eduardobaitello thanks for your answer. Uh oh, yes that's really ugly, but I tried it out and it works... This means that an admin needs to create an endpoint and service for each project within the namespace of that project :-(
I followed the instructions from [1] and [2] to set up a GlusterFS PV, but when I try to use the PVC in a pod it fails.
The error in the node logs is:
But the "glusterfs-dev" service and endpoint exists in the "default" namespace:
It looks like the glusterfs plugin looks for the endpoint in the pod namespace [3] rather than the "default" namespace as documented.
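For context, the gluster volume source in a PV only carries an endpoints name, with no namespace, which is why the kubelet can only resolve it relative to the pod. A sketch is below; apart from the endpoints name glusterfs-dev taken from this report, every value is a placeholder.

```yaml
# Sketch of a PV like the one in this report: the gluster source names the
# Endpoints object ("glusterfs-dev") but has no namespace field, so the kubelet
# resolves that name in the namespace of the pod using the volume.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-dev-volume     # placeholder
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-dev   # name only; no namespace can be given here
    path: myvolume             # placeholder Gluster volume name
    readOnly: false
```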
I tried creating the endpoint in the project and it worked.
Is this a documentation bug or a regression? Creating endpoints per project is cumbersome, as the endpoints need GlusterFS cluster IPs and do not work with hostnames (as per [1]), and any change in the GlusterFS cluster means that every project on the cluster needs to be updated.
[1] https://docs.openshift.org/latest/install_config/persistent_storage/persistent_storage_glusterfs.html
[2] kubernetes/kubernetes#12964
[3] https://github.com/openshift/origin/blob/master/Godeps/_workspace/src/k8s.io/kubernetes/pkg/volume/glusterfs/glusterfs.go#L86