
Build fails on Ansible when running as root #263

Closed
rushyrush opened this issue Jun 24, 2020 · 1 comment · Fixed by #306

@rushyrush

Build fails when running as root
Host OS: CentOS 7.8
Environment: vSphere 6.7u3

When running as a normal user it builds without error.

Can the documentation be updated to tell users not to run as root, can `make` check for root, or can the build be fixed to support the root user?

Ansible error: "You need to be root to perform this command. "
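One way to implement the "check for root during make" suggestion is a small shell guard; this is only a sketch, and the `check_not_root` helper is hypothetical, not part of image-builder:

```shell
# Hypothetical guard, not taken from the image-builder Makefile: refuse
# to build when invoked as root. The optional uid argument defaults to
# the current user's uid and exists so the check can be exercised
# without actually switching users.
check_not_root() {
  uid="${1:-$(id -u)}"
  if [ "$uid" -eq 0 ]; then
    echo "error: do not run image builds as root" >&2
    return 1
  fi
}
```

A `make` target could call such a check before invoking packer, failing fast instead of dying partway through provisioning.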

[root@localhost capi]# make build-node-ova-vsphere-centos-7
packer build -var-file="/root/image-builder-master/images/capi/packer/config/kubernetes.json"  -var-file="/root/image-builder-master/images/capi/packer/config/cni.json"  -var-file="/root/image-builder-master/images/capi/packer/config/containerd.json"  -var-file="/root/image-builder-master/images/capi/packer/config/ansible-args.json"   -var-file="packer/ova/packer-common.json" -var-file="/root/image-builder-master/images/capi/packer/ova/centos-7.json" -var-file="packer/ova/vsphere.json" -except=esx -except=local -only=vsphere-iso  -only=vsphere packer/ova/packer-node.json
vsphere: output will be in this color.
==> vsphere: Retrieving ISO
==> vsphere: Trying https://mirrors.edge.kernel.org/centos/7/isos/x86_64/CentOS-7-x86_64-NetInstall-2003.iso
==> vsphere: Trying https://mirrors.edge.kernel.org/centos/7/isos/x86_64/CentOS-7-x86_64-NetInstall-2003.iso?checksum=sha256%3A101bc813d2af9ccf534d112cbe8670e6d900425b297d1a4d2529c5ad5f226372
==> vsphere: https://mirrors.edge.kernel.org/centos/7/isos/x86_64/CentOS-7-x86_64-NetInstall-2003.iso?checksum=sha256%3A101bc813d2af9ccf534d112cbe8670e6d900425b297d1a4d2529c5ad5f226372 => /root/image-builder-master/images/capi/packer_cache/5a03ac2db9b9f47812a1c314ada462d469e94d91.iso
==> vsphere: Uploading 5a03ac2db9b9f47812a1c314ada462d469e94d91.iso to packer_cache/5a03ac2db9b9f47812a1c314ada462d469e94d91.iso
==> vsphere: File already uploaded; continuing
==> vsphere: Creating VM...
==> vsphere: Customizing hardware...
==> vsphere: Mounting ISO images...
==> vsphere: Starting HTTP server on port 8578
==> vsphere: Set boot order temporary...
==> vsphere: Power on VM...
==> vsphere: Waiting 10s for boot...
==> vsphere: HTTP server is working at http://172.16.4.102:8578/
==> vsphere: Typing boot command...
==> vsphere: Waiting for IP...
==> vsphere: IP address: 172.16.6.137
==> vsphere: Using ssh communicator to connect: 172.16.6.137
==> vsphere: Waiting for SSH to become available...
==> vsphere: Connected to SSH!
==> vsphere: Provisioning with Ansible...
   vsphere: Setting up proxy adapter for Ansible....
==> vsphere: Executing Ansible: ansible-playbook -e packer_build_name=vsphere -e packer_builder_type=vsphere-iso -e packer_http_addr=172.16.4.102:8578 --ssh-extra-args '-o IdentitiesOnly=yes' --extra-vars containerd_url=https://storage.googleapis.com/cri-containerd-release/cri-containerd-1.3.4.linux-amd64.tar.gz containerd_sha256=4616971c3ad21c24f2f2320fa1c085577a91032a068dd56a41c7c4b71a458087 containerd_pause_image=k8s.gcr.io/pause:3.2 containerd_additional_settings= custom_role= custom_role_name= disable_public_repos=false extra_debs= extra_repos= extra_rpms= http_proxy= https_proxy= kubernetes_cni_http_source=https://github.com/containernetworking/plugins/releases/download kubernetes_cni_http_checksum=sha256:https://github.com/containernetworking/plugins/releases/download/v0.7.5/cni-plugins-amd64-v0.7.5.tgz.sha256 kubernetes_http_source=https://storage.googleapis.com/kubernetes-release/release kubernetes_container_registry=k8s.gcr.io kubernetes_rpm_repo=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 kubernetes_rpm_gpg_key="https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg" kubernetes_rpm_gpg_check=True kubernetes_deb_repo="https://apt.kubernetes.io/ kubernetes-xenial" kubernetes_deb_gpg_key=https://packages.cloud.google.com/apt/doc/apt-key.gpg kubernetes_cni_deb_version=0.7.5-00 kubernetes_cni_rpm_version=0.7.5-0 kubernetes_cni_semver=v0.7.5 kubernetes_cni_source_type=pkg kubernetes_semver=v1.16.2 kubernetes_source_type=pkg kubernetes_load_additional_imgs=false kubernetes_deb_version=1.16.2-00 kubernetes_rpm_version=1.16.2-0 no_proxy= redhat_epel_rpm=https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm reenable_public_repos=true remove_extra_repos=false  --extra-vars guestinfo_datasource_slug=https://raw.githubusercontent.com/vmware/cloud-init-vmware-guestinfo guestinfo_datasource_ref=v1.3.1 
guestinfo_datasource_script=https://raw.githubusercontent.com/vmware/cloud-init-vmware-guestinfo/v1.3.1/install.sh -e ansible_ssh_private_key_file=/tmp/ansible-key052930351 -i /tmp/packer-provisioner-ansible076760258 /root/image-builder-master/images/capi/ansible/node.yml
   vsphere:
   vsphere: PLAY [all] *********************************************************************
   vsphere:
   vsphere: TASK [Gathering Facts] *********************************************************
   vsphere: ok: [default]
   vsphere:
   vsphere: TASK [include_role : node] *****************************************************
   vsphere:
   vsphere: TASK [setup : Put templated sources.list in place] *****************************
   vsphere: skipping: [default]
   vsphere:
   vsphere: TASK [setup : Find existing repo files] ****************************************
   vsphere: skipping: [default]
   vsphere:
   vsphere: TASK [setup : Disable repos] ***************************************************
   vsphere: skipping: [default]
   vsphere:
   vsphere: TASK [setup : Install extra repos] *********************************************
   vsphere:
   vsphere: TASK [setup : update apt cache] ************************************************
   vsphere: skipping: [default]
   vsphere:
   vsphere: TASK [setup : perform a dist-upgrade] ******************************************
   vsphere: skipping: [default]
   vsphere:
   vsphere: TASK [setup : install baseline dependencies] ***********************************
   vsphere: skipping: [default]
   vsphere:
   vsphere: TASK [setup : install extra debs] **********************************************
   vsphere: skipping: [default]
   vsphere:
   vsphere: TASK [setup : install pinned debs] *********************************************
   vsphere: skipping: [default]
   vsphere:
   vsphere: TASK [setup : add epel repo] ***************************************************
   vsphere: fatal: [default]: FAILED! => {"changed": false, "changes": {"installed": ["/tmp/.ansible/ansible-tmp-1593011471.17-52094-260135939672814/epel-release-latest-7.noarchELfQ1w.rpm"]}, "msg": "You need to be root to perform this command.\n", "rc": 1, "results": ["Loaded plugins: fastestmirror\n"]}
   vsphere:
   vsphere: PLAY RECAP *********************************************************************
   vsphere: default                    : ok=1    changed=0    unreachable=0    failed=1    skipped=9    rescued=0    ignored=0
   vsphere:
==> vsphere: Provisioning step had errors: Running the cleanup provisioner, if present...
==> vsphere: Clear boot order...
==> vsphere: Power off VM...
==> vsphere: Destroying VM...
Build 'vsphere' errored: Error executing Ansible: Non-zero exit status: exit status 2
==> Some builds didn't complete successfully and had errors:
--> vsphere: Error executing Ansible: Non-zero exit status: exit status 2
==> Builds finished but no artifacts were created.
make: *** [build-node-ova-vsphere-centos-7] Error 1
@codenrhoden
Contributor

codenrhoden commented Jul 24, 2020

@kkeshavamurthy I think this is a simple one for you to try out.

To recreate, kick off any OVA build as root; the Ansible steps will fail. The solution is likely the same one-line fix already present in the QEMU builder: #250
