Don't hard-code ports #74
Conversation
Tested this PR on a Docker host with a new devspace following the manual instructions in the README. For the port mapping, the following services were tested successfully. For each test, the
The only connections which failed were:
Happy to get this in (as 0.6.0?) and then look into either non-shared filesystems and/or services in a follow-up.
Deploying a devspace in OpenStack with the changes (i.e. …). Discovered while working on the prep; blocker for now.
I could add extra instructions to retrieve the dynamic port (not tested yet).
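As a sketch of what such an instruction could look like, assuming the compose service is called nginx and exposes 443 internally (the service and container names here are illustrative, not the actual devspace names):

```
# Ask docker-compose which host port was dynamically mapped to the
# container's 443/tcp (service name "nginx" is an assumption)
docker-compose port nginx 443
# -> 0.0.0.0:32840

# Equivalent query against a running container by name
docker port devspace_nginx_1 443
```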
This will require more changes in the role, since …
Can you run it in Docker without OpenStack?
@jburel: can you perhaps just back up to an older version?
@jburel @sbesson reckons this PR works on a shared Docker host. Can you outline how this doesn't fit your requirements, and then we can see if there are any workarounds?
As mentioned previously:
With regard to the current production version used in the Ansible role, I think ome/ansible-role-devspace#4 bumped it to 0.5.2. I realized that we have not set up the Travis deployment, so the tag was not pushed to Galaxy. I manually reimported the role for now, so https://galaxy.ansible.com/openmicroscopy/devspace/ should now be at 0.1.2. Totally agreed on trying to limit the moving parts given the short deadline. With the latest Galaxy role, how far are we from being able to use the latest production role openmicroscopy.devspace:0.1.2 to deploy devspace 0.5.2 on OpenStack for the scope of the training? If possible, we could certainly use this as a basis for discussing all the limitations of the current design (user, shared Docker host, documentation) and agree on the priorities for 0.6.0.
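For reference, pulling that production role from Galaxy would look roughly like this (a sketch; the role name matches the Galaxy URL above and the version pin is the 0.1.2 tag mentioned):

```
# Install the released role from Ansible Galaxy, pinned to the 0.1.2 tag
ansible-galaxy install openmicroscopy.devspace,0.1.2
```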
I will have to revert some changes, now that a few things we wanted to do currently do not work.
Now that I have finally solved the problem with the key, I will check if I can upgrade docker-py, allowing us to determine the port as described in this PR.
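The mapping docker-py would need to report is the same host-port assignment that `docker inspect` exposes; a sketch of a manual check (the container name and internal port are assumptions for illustration):

```
# Print the host port that Docker mapped to the container's 443/tcp
# (container name "devspace_nginx_1" is hypothetical)
docker inspect \
  --format '{{ (index (index .NetworkSettings.Ports "443/tcp") 0).HostPort }}' \
  devspace_nginx_1
```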
I made the modifications to the role in order to be able to run … The output is:
This will not be useful in the context of devspace in OpenStack, since https://DEVSPACE_IP:32840 … I will have to either add my commits (installation update and removal of snoopy key usage) on top of 0.5.2, or remove this PR for now and review it post-training. The first option is not ideal, since people attending the training will have to work off my branch and not the new tag. @sbesson's changes on top of 0.5.2 are valid and work (#72 and #77). #77 is a useful PR in the context of the training. I have not tested, but I reckon I will have similar issues with connections via web/insight etc.
What ports are allowed by the security groups on the instance?
You can either add the dynamically assigned port to one of the rules, or apply the …
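For example, opening the range Docker draws its dynamic mappings from could look roughly like this (a sketch; the security group name and the exact port range are assumptions, not values from this thread):

```
# Allow inbound TCP on Docker's dynamic port range
# (group name "devspace" and the 32768:61000 range are assumptions)
openstack security group rule create --protocol tcp \
  --dst-port 32768:61000 --remote-ip 0.0.0.0/0 devspace
```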
I modified the playbook used to create the instance and apply the …
It looks like there is a problem accessing the server via Java.
This is likely the use of …
@joshmoore you are correct, I can connect now if I click the secure option in Insight.
Hard-coded mapped ports are probably the main blocker to running multiple copies of devspace on the same host. This removes the mapping, leaving docker to dynamically map the ports.
To run multiple copies of devspace you must use the docker-compose `-p, --project-name NAME` flag to distinguish between the copies, e.g. …, which means altspace should be accessible on https://docker-host:32805/
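As a sketch of what such an invocation might look like (the project name altspace and port 32805 come from the description above; the service name is an assumption, and the actual port is whatever Docker assigns at run time):

```
# Start a second copy under its own project name
docker-compose -p altspace up -d

# Look up the host port Docker assigned to the service's 443/tcp
# (service name "nginx" is an assumption for illustration)
docker-compose -p altspace port nginx 443
# -> 0.0.0.0:32805, i.e. https://docker-host:32805/
```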
See #73