I cannot run a Molecule test. I get the error below:
fatal: [instance]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo ~/.ansible/tmp `\"&& mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1667246171.1306775-8118-8192388989455 `\" && echo ansible-tmp-1667246171.1306775-8118-8192388989455=\"` echo ~/.ansible/tmp/ansible-tmp-1667246171.1306775-8118-8192388989455 `\" ), exited with result 1", "unreachable": true}
In /etc/ansible/ansible.cfg I added remote_tmp = /tmp and cleared the ./cache directory. It didn't change anything.
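For reference, the remote_tmp setting belongs under the [defaults] section of ansible.cfg:
[defaults]
remote_tmp = /tmp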
I'm running WSL2 and Docker Desktop (4.13), and "Integration" is enabled for WSL in the Docker Desktop settings.
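The error message itself suggests re-running with -vvv for more detail; with Molecule, one way to raise Ansible's verbosity is its standard environment variable (an Ansible feature, not something Molecule-specific):
ANSIBLE_VERBOSITY=3 molecule test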
I am using Docker to go build an app that fetches a private Bitbucket repo via go get, but I always get 403 Forbidden / Access denied errors like the ones below.
go: missing Mercurial command. See https://golang.org/s/gogetcmd
go: missing Mercurial command. See https://golang.org/s/gogetcmd
go get bitbucket.org/Mycompany/app-client: reading https://api.bitbucket.org/2.0/repositories/Mycompany/app-client?fields=scm: 403 Forbidden
server response: Access denied. You must have write or admin access.
The command '/bin/sh -c go get bitbucket.org/Mycompany/app-client' returned a non-zero code: 1
I added the following to my Dockerfile, and I also added the Jenkins user's id_rsa.pub to Bitbucket.
ARG ssh_prv_key
ARG ssh_pub_key
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    ssh-keyscan github.com > /root/.ssh/known_hosts
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa
RUN echo "IdentityFile /var/lib/jenkins/.ssh/id_rsa" >> /etc/ssh/ssh_config
RUN git config --global user.email "admin@Mycompany"
RUN git config --global user.name admin
RUN echo "IdentityFile ~/.ssh/id_rsa" >> /etc/ssh/ssh_config
Then I build with:
docker build -t example --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" --build-arg ssh_pub_key="$(cat ~/.ssh/id_rsa.pub)" --squash .
From an Ubuntu VM it works correctly; I only hit this issue from Docker.
When you ADD id_rsa /root/.ssh/id_rsa, ensure you're using the root user for the subsequent steps that need SSH.
Also, ensure the Bitbucket repository doesn't have IP filtering enabled, or that it allows the right IPs.
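Note also that the Dockerfile above keyscans github.com while the module is fetched from bitbucket.org, so the Bitbucket host key is never trusted. A sketch of the adjusted lines, assuming you want go get to authenticate over SSH with the copied key:
RUN ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
# rewrite HTTPS Bitbucket URLs to SSH so go get uses the SSH key instead of anonymous HTTPS
RUN git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/"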
I'm trying to deploy a build via a Jenkins pipeline using a docker agent and an Ansible playbook, but it fails at the Gathering Facts stage as shown below:
TASK [Gathering Facts] *********************************************************
fatal: [destination.box.local]: UNREACHABLE! => {"changed": false, "msg": "argument must be an int, or have a fileno() method.", "unreachable": true}
A similar Jenkins pipeline using agent any and a local Ansible installation (not from docker) does the job w/o any hiccups.
The agent section from the Jenkins pipeline looks like:
agent {
    docker {
        image 'artifactory.devbox.local/docker-local/myrepo/jdk8:latest'
        args '-v $HOME/.m2:/root/.m2 -v /etc/ansible:/etc/ansible -v $HOME/.ansible/tmp:/.ansible/tmp -v $HOME/.ssh:/root/.ssh'
    }
}
Any thoughts on what I need to add to it so Ansible can run the playbook?
PS.
After adding ansible_ssh_common_args='-o StrictHostKeyChecking=no' to the Ansible inventory (or setting host_key_checking = False in the config) I got this error:
TASK [Gathering Facts] *********************************************************
fatal: [destination.box.local]: UNREACHABLE! => {"changed": false, "msg": "'getpwuid(): uid not found: 700'", "unreachable": true}
fatal: [ansible_ssh_common_args=-o StrictHostKeyChecking=no]: UNREACHABLE! => {"changed": false, "msg": "[Errno -3] Try again", "unreachable": true}
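Note that the second fatal line shows Ansible treating the variable assignment itself as a hostname, which suggests it ended up on its own line in the inventory. It belongs on the same line as the host entry, roughly:
destination.box.local ansible_ssh_common_args='-o StrictHostKeyChecking=no'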
In my case it turned out that Jenkins was running the docker agent with a specific UID and GID. To fix it, I had to rebuild that docker image, creating an internal Jenkins user with the same UID and GID.
For that purpose, at the top of the Jenkinsfile that creates the docker image I added:
def user_id
def group_id
node {
    user_id = sh(returnStdout: true, script: 'id -u').trim()
    group_id = sh(returnStdout: true, script: 'id -g').trim()
}
and then during the build stage I passed additional arguments to docker:
--build-arg JenkinsUserId=${user_id} --build-arg JenkinsGroupId=${group_id}
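In the Jenkinsfile build stage that might look roughly like this (the stage name is hypothetical; the image tag is taken from the agent block below):
stage('Build agent image') {
    steps {
        sh "docker build -t artifactory.devbox.local/docker-local/myrepo/jdk8:latest --build-arg JenkinsUserId=${user_id} --build-arg JenkinsGroupId=${group_id} ."
    }
}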
then in the Dockerfile for that build:
FROM alpine:latest
# pick up the provided ARGs for the build
ARG JenkinsUserId
ARG JenkinsGroupId
# do your stuff here
#create Ansible config directory
RUN set -xe \
&& mkdir -p /etc/ansible
#create Ansible tmp directory
RUN set -xe \
&& mkdir -p /.ansible/tmp
#set ANSIBLE_LOCAL_TEMP
ENV ANSIBLE_LOCAL_TEMP /.ansible/tmp
#create Ansible cp directory
RUN set -xe \
&& mkdir -p /.ansible/cp
#set ANSIBLE_SSH_CONTROL_PATH_DIR
ENV ANSIBLE_SSH_CONTROL_PATH_DIR /.ansible/cp
# Create Jenkins group and user
RUN if ! id $JenkinsUserId; then \
        groupadd -g ${JenkinsGroupId} jenkins; \
        useradd jenkins -u ${JenkinsUserId} -g jenkins --shell /bin/bash --create-home; \
    else \
        addgroup --gid 1000 -S jenkins && adduser --uid 1000 -S jenkins -G jenkins; \
    fi
RUN addgroup jenkins root
# Tell docker that all future commands should run as the jenkins user
USER jenkins
and finally I updated the docker agent for the main pipeline that had the issue:
agent {
    docker {
        image 'artifactory.devbox.local/docker-local/myrepo/jdk8:latest'
        args '-v $HOME/.m2:/root/.m2 -v $HOME/.ssh:/home/jenkins/.ssh -v /etc/ansible:/etc/ansible -v $HOME/.ansible/tmp:/.ansible/tmp -v $HOME/.ansible/cp:/.ansible/cp'
    }
}
I'm building a docker image and getting the error:
=> ERROR [14/36] RUN --mount=type=secret,id=jfrog-cfg,target=/root/.jfrog/jfrog-cli.conf jfrog rt dl --flat artifact 0.7s
------
> [14/36] RUN --mount=type=secret,id=jfrog-cfg,target=/root/.jfrog/jfrog-cli.conf jfrog rt dl --flat artifact/artifact.tar.gz; set -eux; mkdir -p /usr/local/artifact; tar xzf artifact.tar.gz -C /usr/local/; ln -s /usr/local/artifact /usr/local/artifact;:
#22 0.524 [Error] open /root/.jfrog/jfrog-cli.conf: read-only file system
------
failed to solve with frontend dockerfile.v0: failed to solve with frontend gateway.v0: rpc error: code = Unknown desc = failed to build LLB: executor failed running [/bin/bash -eo pipefail -c jfrog rt dl --flat artifact/${ART_TAG}.tar.gz; set -eux; mkdir -p /usr/local/${ART_TAG}; tar xzf ${ART_TAG}.tar.gz -C /usr/local/; ln -s /usr/local/${ART_VERSION} /usr/local/artifact;]: runc did not terminate sucessfully
The command I use to build the docker image is
DOCKER_BUILDKIT=1 docker build -t imagename . --secret id=jfrog-cfg,src=${HOME}/.jfrog/jfrog-cli.conf
(The jfrog config exists at ${HOME}/.jfrog/jfrog-cli.conf.)
JFrog is working and the artifact I'm downloading exists as I can manually download it outside of using docker.
On Linux, docker runs as the root user, so ${HOME} is /root and not /home/your-user-name or whatever your usual home folder is. Try using explicit full pathnames instead of the env var.
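Concretely, that means spelling the path out instead of relying on ${HOME} (substitute your actual username):
DOCKER_BUILDKIT=1 docker build -t imagename . --secret id=jfrog-cfg,src=/home/your-user-name/.jfrog/jfrog-cli.conf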
I have been using ansible core for some time now, and as my team expands the need for ansible awx has become more pressing. I have been working at it for a week and I think it's time to shout for help.
We have a process of using ansible to replace the baseurl of angularjs apps with a variable and to apply some settings before we compile them (I'm considering a different way of doing this with a build server like TeamCity, but not right now, while we are trying to get up and running with ansible awx).
ansible core checks out the code from the versioned git branch, replaces the variables, zips it, uploads it to s3, etc.
Knowing that, the ansible awx host was configured with nvm, then node was installed, and .nvm was mapped to /home/awx/.nvm.
I have also mapped a bashrc to /home/awx/.bashrc. When I log into the awx_task container (docker exec -it awx_task /bin/bash) I see the following:
[root@awx ~]# npm --version
5.5.1
[root@awx ~]# echo $PATH
/home/awx/.nvm/versions/node/v8.9.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
[root@awx ~]# env
NVM_DIR=/home/awx/.nvm
LANG=en_US.UTF-8
HOSTNAME=awx
NVM_CD_FLAGS=
DO_ANSIBLE_HOME=/opt/do_ansible_awx_home
PWD=/home/awx
HOME=/home/awx
affinity:container==eb57afe832eaa32472812d0cd8b614be6df213d8e866f1d7b04dfe109a887e44
TERM=xterm
NVM_BIN=/home/awx/.nvm/versions/node/v8.9.3/bin
SHLVL=1
LANGUAGE=en_US:en
PATH=/home/awx/.nvm/versions/node/v8.9.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LESSOPEN=||/usr/bin/lesspipe.sh %s
_=/usr/bin/env
[root@awx ~]# cat /home/awx/.bashrc
# .bashrc
# User specific aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
All the volume mappings etc. were done with the installer role templates and tasks, so the output above is the same after multiple docker restarts and reinstalls via the ansible awx installer playbook. But during execution of the playbook that makes use of npm, it seems to have a different env PATH: /var/lib/awx/venv/ansible/bin:/var/lib/awx/venv/awx/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
At this point, I am not sure whether I failed to configure the path properly or whether other containers like awx_web should also be configured, etc.
I have also noticed the env var NVM_BIN and modified the npm playbook to include the path to the npm executable:
- name: Running install to build npm modules
  npm:
    path: "{{ bps_git_checkout_folder }}"
    executable: "{{ lookup('env','NVM_BIN') }}/npm"
and it doesn't even show during execution, thus pointing at a different path and env variables being loaded.
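One standard way to pin the PATH for a single task is Ansible's environment keyword; a sketch using the nvm path seen in the container above:
- name: Running install to build npm modules
  npm:
    path: "{{ bps_git_checkout_folder }}"
  environment:
    PATH: "/home/awx/.nvm/versions/node/v8.9.3/bin:{{ ansible_env.PATH }}"
Note that ansible_env is only available when facts have been gathered.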
I will be grateful if you could shed some light on whatever I am doing wrong.
Thanks in advance
EDITS: After implementing @sergei's suggestion I used the extra var npm_bin: /home/awx/.nvm/versions/node/v8.9.3/bin
I have changed the task to look like:
- name: Running install to build npm modules
  npm:
    path: "{{ bps_git_checkout_folder }}"
    executable: "{{ npm_bin }}/npm"
But it produced this result:
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209 `" && echo
ansible-tmp-1579790680.4419668-165048670233209="` echo /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209 `" ) &&
sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/language/npm.py
<127.0.0.1> PUT /var/lib/awx/.ansible/tmp/ansible-local-10173xtu81x_o/tmpd40htayd TO /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/ /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py", line 114, in <module>
_ansiballz_main()
File "/root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py", line 106, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py", line 49, in invoke_module
imp.load_module('__main__', mod, module, MOD_DESC)
File "/usr/lib64/python3.6/imp.py", line 235, in load_module
return load_source(name, filename, file)
File "/usr/lib64/python3.6/imp.py", line 170, in load_source
module = _exec(spec, sys.modules[name])
File "<frozen importlib._bootstrap>", line 618, in _exec
File "<frozen importlib._bootstrap…
PLAY RECAP
*********************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I have also tried to use the shell module directly with the following:
- name: Running npm install
  shell: "{{ npm_bin }}/npm install"
  args:
    chdir: "{{ bps_git_checkout_folder }}"
That has produced this instead:
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218 `" && echo
ansible-tmp-1579791187.453365-253173616238218="` echo /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218 `" ) &&
sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/commands/command.py
<127.0.0.1> PUT /var/lib/awx/.ansible/tmp/ansible-local-10395h1ga8fw3/tmpepeig729 TO /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/ /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": true,
"cmd": "/home/awx/.nvm/versions/node/v8.9.3/bin/npm install",
"delta": "0:00:00.005528",
"end": "2020-01-23 14:53:07.928843",
"invocation": {
"module_args": {
"_raw_params": "/home/awx/.nvm/versions/node/v8.9.3/bin/npm install",
"_uses_shell": true,
"argv": null,
"chdir": "/opt/do_ansible_awx_home/gh/deployments/sandbox/bps",
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"msg": "non-zero return code",
"rc": 127,
…
PLAY RECAP
*********************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Not really seeing what's wrong here. Grateful if anybody can shed some light on this.
Where are your packages sitting? On the host or inside the container? All execution happens in the task container.
If your npm files are sitting on the host and not in the container, then you have to refer to the host the containers are sitting on to reach that path.
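A quick way to verify is to check whether that npm path exists inside the task container at all (container name and path taken from the question):
docker exec -it awx_task ls -l /home/awx/.nvm/versions/node/v8.9.3/bin/npm
If that fails, the nvm install is only on the host, and the volume mapping into awx_task needs fixing.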
I run docker-compose up -d and then ssh into the container. I can load the site via localhost just fine, but when I try to edit the source code locally it does not let me, due to permission errors. This is the ls -la output on container vs local:
Container:
Local:
My dockerfile has the chown command:
My local user is called pwm. I tried running chown -R pwm:pwm ../app from the host, at which point I am able to edit files, but then I get laravel permission denied errors. Then I need to run chown -R www-data:www-data ../app again to fix it.
How can I fix this?
For a development environment, my go-to solution for this is to set up an entrypoint script inside the container that starts as root, changes the user inside the container to match that of the file/directory owner from a volume mount (which will be your user on the host), and then switches to that user to run the app. I've got an example of doing this, along with the scripts needed to implement it in your own container, in my base image repo: https://github.com/sudo-bmitch/docker-base
In there, the fix-perms script does the heavy lifting, including code like the following:
# update the uid
if [ -n "$opt_u" ]; then
    OLD_UID=$(getent passwd "${opt_u}" | cut -f3 -d:)
    NEW_UID=$(stat -c "%u" "$1")
    if [ "$OLD_UID" != "$NEW_UID" ]; then
        echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
        usermod -u "$NEW_UID" -o "$opt_u"
        if [ -n "$opt_r" ]; then
            find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
        fi
    fi
fi
That script is run as root inside the container on startup. The last step of the entrypoints that I run calls something like:
exec gosu ${app_user} "$@"
which runs the container command as the application user, making it the new pid 1 executable.
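Putting it together, a minimal entrypoint sketch, assuming the fix-perms script above is installed in the image; the www-data user and /app mount path are placeholders for your own:
#!/bin/sh
# Runs as root: align the in-container user's UID with the bind-mounted files, then drop privileges.
set -e
# -u names the user to adjust, -r also chowns files owned by the old UID; the final argument is the mounted path.
fix-perms -r -u www-data /app
# Replace pid 1 with the container command, running as the app user.
exec gosu www-data "$@"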