Create Azimuth user outside of image build #251
Conversation
Branch force-pushed from 5534038 to 651f0b3, then to 0884213.
Comment on ansible/roles/linux-data-volumes/files/data-volumes-configure-volume.yml (outdated, resolved):
I don't think this change should modify the `linux-data-volumes` role. IMHO, the stuff for setting up `HOME` for the `azimuth` user should be in a separate role, with a separate ansible-init playbook (that executes after the data volumes one).
The reason we did it like this was to ensure the user was created before the /data mountpoint permissions are set up, and after the volume is mounted. Though maybe we can change the mount-point permissions in the user role instead.
I would be equally happy with splitting this logic into two roles: one that runs before data volumes and is responsible for creating the user, and one that runs after and is responsible for configuring the home. Chowning in a role that runs after data volumes also sounds reasonable, though. The responsibility of the data volumes role is to identify volumes and format and mount them at the correct place; it shouldn't have any knowledge of what those volumes are used for, I think. I have a preference for many small roles that have one responsibility, so a new role that is responsible for setting up the user sounds good to me.
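To make the proposed ordering concrete, here is a minimal sketch of that split, assuming ansible-init runs the playbooks it finds in lexical order; the playbook names, the /data/home path and the task details are illustrative, not taken from this repo.

```yaml
# Sketch only: two plays standing in for two separate ansible-init playbooks.

# e.g. 10-azimuth-user.yml - runs BEFORE the data volumes playbook
- hosts: localhost
  become: true
  tasks:
    - name: Create the azimuth user before any mountpoints are configured
      ansible.builtin.user:
        name: azimuth
        uid: 1005  # the Packer-file default mentioned later in this thread

# e.g. 30-azimuth-home.yml - runs AFTER the data volumes playbook
- hosts: localhost
  become: true
  tasks:
    - name: Configure HOME on the mounted data volume (illustrative path)
      ansible.builtin.file:
        path: /data/home/azimuth
        state: directory
        owner: azimuth
        group: azimuth
        mode: "0700"
```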
Possibly the data-volumes role shouldn't even chown the directories itself - it should just identify, format and mount them for other roles to consume.
I've pushed some initial changes for this up at 0f006d2, though maybe this should be in another change.
Just a few minor comments, but I think this looks a lot better
```yaml
- name: Ensure data mount has the correct permissions
  file:
    path: "/data"
    state: directory
    owner: "podman"
    group: "podman"
    mode: '0755'
```
RDP gateway doesn't have a `/data` mount.
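One hedged way to handle this, assuming the intent is simply to skip the task on hosts (like the RDP gateway) where the mount doesn't exist:

```yaml
# Sketch: guard the permissions task so hosts without a /data mount skip it
- name: Check whether /data exists
  ansible.builtin.stat:
    path: /data
  register: data_mount

- name: Ensure data mount has the correct permissions
  ansible.builtin.file:
    path: /data
    state: directory
    owner: podman
    group: podman
    mode: "0755"
  when: data_mount.stat.exists
```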
```yaml
tasks:
  - name: Get Azimuth user metadata
    ansible.builtin.set_fact:
      azimuth_username: "azimuth"
```
The workstation playbook is setting a metadata item for this that I think we should respect?
I think so; I changed it because of #251 (comment).
I think we either respect the metadata item here or we don't pretend to support it in the workstation patch. I don't really mind which.
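For illustration, respecting the metadata item could look something like the sketch below; the `azimuth_workstation_username` key is an assumption inferred from the uid/gid keys quoted further down, not a confirmed metadata name.

```yaml
- name: Get Azimuth user metadata
  ansible.builtin.set_fact:
    # 'azimuth_workstation_username' is an assumed key; fall back to a
    # fixed default when the metadata item is absent
    azimuth_username: "{{ openstack_metadata['azimuth_workstation_username'] | default('azimuth') }}"
```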
```yaml
azimuth_uid: "{{ openstack_metadata['azimuth_workstation_uid'] | default('1006') }}"
azimuth_gid: "{{ openstack_metadata['azimuth_workstation_gid'] | default('1006') }}"
```
These defaults are different to the workstation playbook - is that on purpose?
Not really, I think we're only setting defaults here anyway to get this to work without the metadata.
If the idea is to allow this to continue to work without the playbook patch, then the defaults should match the user in the Packer file that is being removed, i.e. `1005`.
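Spelled out, the requested change would presumably be (assuming `1005` is indeed the uid/gid of the removed Packer user, as stated above):

```yaml
azimuth_uid: "{{ openstack_metadata['azimuth_workstation_uid'] | default('1005') }}"
azimuth_gid: "{{ openstack_metadata['azimuth_workstation_gid'] | default('1005') }}"
```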
```yaml
- name: Install ansible-init vars for users
  copy:
    content: "{{ { 'user_mountpoint': user_mountpoint } | to_nice_yaml }}"
```
Stylistic choice, but I like having this as a nicely formatted variable under `vars` like the other places where this is done, as I think it is easier to read.
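A sketch of that style; note that the variable name and `dest` path here are hypothetical, since the quoted diff doesn't show them:

```yaml
- name: Install ansible-init vars for users
  copy:
    content: "{{ ansible_init_user_vars | to_nice_yaml }}"
    dest: /etc/ansible-init/vars/users.yml  # hypothetical path, not from the diff
  vars:
    ansible_init_user_vars:
      user_mountpoint: "{{ user_mountpoint }}"
```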
```yaml
- name: Get Guacamole user info
  getent:
    database: passwd
    key: "{{ guacamole_user }}"
```
Do we need the `guacamole_user` to use the username specified in metadata? The way this is currently set up, it will use the username baked into the vars file at build time.
Yes, although I'm not sure if we're going to be using this outside of the case where the user is 'azimuth'.
I suppose we should decide on whether to set the user in metadata or assume the user is always azimuth.
Same as above, I don't care which we do, but we need to be consistent between the image and the playbook. Probably using a fixed `azimuth` username is easier for now, but don't pretend it can be changed in the playbook patch.
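If the metadata route were chosen instead, a sketch might look like this; the metadata key mirrors the assumed username key above and is not confirmed:

```yaml
- name: Get Guacamole user info
  getent:
    database: passwd
    # fall back to the build-time vars value when no metadata item is set
    key: "{{ openstack_metadata['azimuth_workstation_username'] | default(guacamole_user) }}"
```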
Related PR: Metadata (azimuth-cloud/caas-workstation#26)