ansible escaping certificate content - parsing

I have an issue with Ansible. I am attempting to install some software which requires an auto-generated certificate. The certificate is auto-generated each time the install is run.
I run the command to pull the certificate out of a settings file.
bosh alias-env director -e director --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)
There seem to be escape characters in the certificate and Ansible craps out every time.
The certificate output is:
:~$ bosh int ./creds.yml --path /director_ssl/ca
-----BEGIN CERTIFICATE-----
MIIDFDCCAfygAwIBAgIRALV4CbzZnmM/DpVWtV0QpXAwDQYJKoZIhvcNAQELBQAw
MzEMMAoGA1UEBhMDVVNBMRYwFAYDVQQKEw1DbG91ZCBGb3VuZHJ5MQswCQYDVQQD
EwJjYTAeFw0xNzA2MjMxMjI1MzNaFw0xODA2MjMxMjI1MzNaMDMxDDAKBgNVBAYT
A1VTQTEWMBQGA1UEChMNQ2xvdWQgRm91bmRyeTELMAkGA1UEAxMCY2EwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDB7PNn3J3RayZp32cSWofTsNAj5VjD
h0dl8cpPxEgmrRjGDbKMplP1IqgfudxeJLlNzhNBRmrfqXc9RLvLCp9+foeq/ErC
nKzLKPYsu2bHXsVFqTFDotl7TL9TSd9JGeKKom4RwzlZ5deXlfZIduYwdMAOGfOL
hAqsbO9BewdlNWTFJIRsR+KHPlvxs1kvQohIxzPRv5MjyRm6ylUwuWNs0bEQixIs
C34379sba12FFlN8dO3okZKH26rnIMCzpIOH7IBZsEPLFWl1T3NkWITzFpsg4wiX
ajiK/LI441cFld28g4TgqvfCMFtmmsYcnpNAC7RKGSYkvAkXKSbKkagLAgMBAAGj
IzAhMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEB
CwUAA4IBAQCTTJHimCiXniG/UwbQ2ZPS1gMGnjWHvvnroA3sg0Jnv3Se3opxhlro
lbaJqMfR46d3bRyILjtiTD3aDC71aUu8CIeaVlzRIOW0BSWQFZB67y/ZkLe96wg0
8LafcTh2UqYw77Xlt9fwRoZTAwFjnXW/SV0DpKfTmMdCN9M/rtPLiJSsVN8Z1get
/p2YHYAJ6OU3ClKNfVgcmC1IFauQb77ctMsd0sY2t6XMY7HY6RACYNidfHJM14tL
YCtkuvFs8ZP8TpHQY0C5FuNk0nPHcbUiHaD3KAuWRoGkFNvnD54v4IX13zy/iWgU
1TU2nomKujmt5lEB8NZF7jzfW2vlxprA
-----END CERTIFICATE-----
Succeeded
The error I get is:
TASK [set env vars for login to director...] ***********************************
fatal: [51.xxx.xxx.xxx]: FAILED! => {"changed": true, "cmd": "bosh alias-env boshdir -e boshdir --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)", "delta": "0:00:00.001721", "end": "2017-06-26 09:16:49.854271", "failed": true, "rc": 2, "start": "2017-06-26 09:16:49.852550", "stderr": "/bin/sh: 1: Syntax error: \"(\" unexpected", "stderr_lines": ["/bin/sh: 1: Syntax error: \"(\" unexpected"], "stdout": "", "stdout_lines": []}
I tried switching between the shell, command and raw modules. raw seems to work for the ingestion of the certificate values, but it doesn't seem to be able to access the other env variables I set in a previous task. Does anyone know how to escape the certificate content?
The ansible script I run for this portion of the setup is:
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Update director creds file on deployment server
      copy: src="files/bosh-creds.yml" dest="/home/bosher/creds.yml" owner="bosher" group="bosher" mode="0755"
      become: yes
    - name: Update state file on deployment server
      copy: src="files/bosh-state.json" dest="/home/bosher/state.json" owner="bosher" group="bosher" mode="0755"
      become: yes
    - name: Update bosh concourse Manifest on deployment server
      copy: src="files/temp-con-man.yml" dest="/home/bosher/con-man.yml" owner="bosher" group="bosher" mode="0755"
      become: yes
    - name: Update bosh cloud config on deployment server
      copy: src="files/temp-con-cloud-azure.yml" dest="/home/bosher/cloud-config.yml" owner="bosher" group="bosher" mode="0755"
      become: yes
    - name: Download bosh exe and place in path location
      get_url: url="https://s3.amazonaws.com/bosh-cli-artifacts/bosh-cli-2.0.16-linux-amd64" dest="/usr/local/bin/bosh" mode="0755"
      become: true
    - name: set jumpbox host file for dns of director...
      shell: |
        sudo chmod 777 /etc/hosts
        sudo echo "10.0.0.6 boshdir" >> /etc/hosts
        sudo chmod 644 /etc/hosts
    - name: set env vars for login to director...
      shell: |
        export BOSH_CLIENT=admin
        export BOSH_CLIENT_SECRET=`bosh int ./creds.yml --path /admin_password`
    - name: set env vars for login to director...
      shell: bosh alias-env director -e director --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)
    - name: upload stemcells and releases to director...
      shell: |
        bosh -e director us "https://s3.amazonaws.com/bosh-core-stemcells/azure/bosh-stemcell-3421-azure-hyperv-ubuntu-trusty-go_agent.tgz"
        bosh -e director ur "http://bosh.io/d/github.com/concourse/concourse"
        bosh -e director ur "https://s3.amazonaws.com/bosh-compiled-release-tarballs/garden-runc-1.6.0-ubuntu-trusty-3363.20-20170505-155950-147762079-20170505155956.tgz?versionId=DNopG3gqI9AbTzMddjmAIvIJetuuh6LY"
        echo y | bosh -e director ucc "~/cloud-config.yml"
    - name: run the concourse install...
      shell: echo y | sudo bosh -e director -d "concourse" deploy "./manifest.yml"
This has been kicking me around and I can't seem to get it to work. Can someone point out what I am doing wrong here?

Your error is that the shell used (/bin/sh) does not handle the '<(cmd)' process-substitution syntax:
$ /bin/sh -c 'cat <(echo foo)'
/bin/sh: 1: Syntax error: "(" unexpected
$ /bin/bash -c 'cat <(echo foo)'
foo
You can use another shell with the executable parameter of the shell module.
BUT
Environment variables are not shared between tasks: each task launches an independent shell over SSH.
You have 2 choices:
Launch all the bosh preparation and commands in only one task:
- name: launch all bosh commands
  shell: |
    export BOSH_CLIENT=admin
    export BOSH_CLIENT_SECRET=`bosh int ./creds.yml --path /admin_password`
    bosh alias-env director -e director --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)
    bosh -e director us "https://s3.amazonaws.com/bosh-core-stemcells/azure/bosh-stemcell-3421-azure-hyperv-ubuntu-trusty-go_agent.tgz"
    bosh -e director ur "http://bosh.io/d/github.com/concourse/concourse"
    bosh -e director ur "https://s3.amazonaws.com/bosh-compiled-release-tarballs/garden-runc-1.6.0-ubuntu-trusty-3363.20-20170505-155950-147762079-20170505155956.tgz?versionId=DNopG3gqI9AbTzMddjmAIvIJetuuh6LY"
    echo y | bosh -e director ucc "~/cloud-config.yml"
    echo y | sudo bosh -e director -d "concourse" deploy "./manifest.yml"
  args:
    executable: /bin/bash
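If you go this route, note that by default a failure partway through the script will not stop the remaining bosh commands. A minimal hardening sketch (the set -euo pipefail line is my addition, not part of the original answer), trimmed to the first few commands:

- name: launch all bosh commands
  shell: |
    set -euo pipefail   # stop at the first failing command or unset variable
    export BOSH_CLIENT=admin
    export BOSH_CLIENT_SECRET=$(bosh int ./creds.yml --path /admin_password)
    bosh alias-env director -e director --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)
    # ...remaining bosh upload/deploy commands as above...
  args:
    executable: /bin/bash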
Use the environment key to set environment variables on tasks. You also have to register the output of commands to use them as env vars later:
- name: Get secret for login to director...
  shell: bosh int ./creds.yml --path /admin_password
  environment:
    BOSH_CLIENT: admin
  register: bosh_client_secret
- name: set env vars for login to director...
  shell: bosh alias-env director -e director --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)
  args:
    executable: /bin/bash
  environment:
    BOSH_CLIENT: admin
    BOSH_CLIENT_SECRET: "{{ bosh_client_secret.stdout }}"
...

Related

Gitlab CI job with specific user

I am trying to run a Gitlab CI job of anchore engine to scan a docker image. The command in the script section fails with a permission denied error. I found out the command requires root user permissions. Sudo is not installed in the docker image I'm using as the gitlab runner, and only the non-sudo user anchore is present in the docker container.
Below is the CI job for container scanning.
container_scan:
  stage: scan
  image:
    name: anchore/anchore-engine:latest
    entrypoint: ['']
  services:
    - name: anchore/engine-db-preload:latest
      alias: anchore-db
  variables:
    GIT_STRATEGY: none
    ANCHORE_HOST_ID: "localhost"
    ANCHORE_ENDPOINT_HOSTNAME: "localhost"
    ANCHORE_CLI_USER: "admin"
    ANCHORE_CLI_PASS: "foobar"
    ANCHORE_CLI_SSL_VERIFY: "n"
    ANCHORE_FAIL_ON_POLICY: "true"
    ANCHORE_TIMEOUT: "500"
  script:
    - |
      curl -o /tmp/anchore_ci_tools.py https://raw.githubusercontent.com/anchore/ci-tools/master/scripts/anchore_ci_tools.py
      chmod +x /tmp/anchore_ci_tools.py
      ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools
    - anchore_ci_tools --setup
    - anchore-cli registry add "$CI_REGISTRY" gitlab-ci-token "$CI_JOB_TOKEN" --skip-validate
    - anchore_ci_tools --analyze --report --image "$IMAGE_NAME" --timeout "$ANCHORE_TIMEOUT"
    - |
      if ; then
        anchore-cli evaluate check "$IMAGE_NAME"
      else
        set +o pipefail
        anchore-cli evaluate check "$IMAGE_NAME" | tee /dev/null
      fi
  artifacts:
    name: ${CI_JOB_NAME}-${CI_COMMIT_REF_NAME}
    paths:
      - image-*-report.json
The CI job fails at ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools in the script section.
I have tried to add a user in the entrypoint section
name: anchore/anchore-engine:latest
entrypoint: ['bash', '-c', 'useradd myuser && exec su myuser -c bash']
but it did not allow me to create a user. I have tried running the docker container on Linux with docker run -it --user=root anchore/anchore-engine:latest /bin/bash and it ran without any problem. How can I simulate the same in a gitlab-ci job?

SSH "Host key verification failed" in GitHub Actions - but key exists in known_hosts

I have the weirdest error in GitHub Actions that I have been trying to resolve for multiple hours now and I am all out of ideas.
I currently use a very simple GitHub Action. The end goal is to run specific bash commands via ssh in other workflows.
Dockerfile:
FROM ubuntu:latest
COPY entrypoint.sh /entrypoint.sh
RUN apt update && apt install openssh-client -y
RUN chmod +x entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
mkdir -p ~/.ssh/
echo "$1" > ~/.ssh/private.key
chmod 600 ~/.ssh/private.key
echo "$2" > ~/.ssh/known_hosts
echo "ssh-keygen"
ssh-keygen -y -e -f ~/.ssh/private.key
echo "ssh-keyscan"
ssh-keyscan <IP>
ssh -i ~/.ssh/private.key -tt <USER>@<IP> "echo test > testfile1"
echo "known hosts"
cat ~/.ssh/known_hosts
wc -m ~/.ssh/known_hosts
action.yml
name: "SSH Runner"
description: "Runs bash commands in remote server via SSH"
inputs:
  ssh_key:
    description: 'SSH Key'
  known_hosts:
    description: 'Known Hosts'
runs:
  using: 'docker'
  image: 'Dockerfile'
  args:
    - ${{ inputs.ssh_key }}
    - ${{ inputs.known_hosts }}
current workflow file in the same repo:
on: [push]
jobs:
  try-ssh-commands:
    runs-on: ubuntu-latest
    name: SSH MY_TEST
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: test_ssh
        uses: ./
        with:
          ssh_key: ${{secrets.SSH_PRIVATE_KEY}}
          known_hosts: ${{secrets.SSH_KNOWN_HOSTS}}
In the github action online console I get the following output:
ssh-keygen
---- BEGIN SSH2 PUBLIC KEY ----
Comment: "2048-bit RSA, converted by root@844d5e361d21 from OpenSSH"
AAAAB3NzaC1yc2EAAAADAQABAAABAQDaj/9Guq4M9V/jEdMWFrnUOzArj2AhneV3I97R6y
<...>
9f/7rCMTJwae65z5fTvfecjIaUEzpE3aen7fR5Umk4MS925/1amm0GKKSa2OOEQnWg2Enp
Od9V75pph54v0+cYfJcbab
---- END SSH2 PUBLIC KEY ----
ssh-keyscan
# <IP>:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# <IP>:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# <IP>:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# <IP>:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# <IP>:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
<IP> ssh-ed25519 AAAAC3NzaC1lZD<...>9r5SNohBUitk
<IP> ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRNWiDWO65SKQnYZafcnkVhWKyxxi5r+/uUS2zgYdXvuZ9UIREw5sumR95kbNY1V90<...>
qWXryZYaMqMiWlTi6ffIC5ZoPcgGHjwJRXVmz+jdOmdx8eg2llYatRQbH7vGDYr4zSztXGM77G4o4pJsaMA/
***
Host key verification failed.
known hosts
***
175 /github/home/.ssh/known_hosts
As far as I understand, *** is used to replace GitHub secrets, which in my case is the key of the known host. Getting *** as the result of both ssh-keyscan and cat known_hosts should mean that the known_hosts file is correct and a connection should be possible, because in both cases the console output is successfully censored by GitHub. And since the file contains 175 characters, I can assume it contains the actual key. But as one can see, the script fails with Host key verification failed.
When I do the same steps manually in another workflow with the exact same input data I succeed. Same goes for ssh from my local computer with the same private_key and known_host files.
This for example works with the exact same secrets
- name: Create SSH key
  run: |
    mkdir -p ~/.ssh/
    echo "$SSH_PRIVATE_KEY" > ../private.key
    sudo chmod 600 ../private.key
    echo "$SSH_KNOWN_HOSTS_PROD" > ~/.ssh/known_hosts
  shell: bash
  env:
    SSH_PRIVATE_KEY: ${{secrets.SSH_PRIVATE_KEY}}
    SSH_KNOWN_HOSTS: ${{secrets.SSH_KNOWN_HOSTS}}
- name: SSH into DO and run
  run: >
    ssh -i ../private.key -tt ${SSH_USERNAME}@${SERVER_IP}
    "
    < commands >
    "
Using the -o "StrictHostKeyChecking no" flag on the ssh command in the entrypoint.sh also works. But I would like to avoid this for security reasons.
I have been trying to solve this issue for hours, but I seem to miss a critical detail. Has someone encountered a similar issue or knows what I am doing wrong?
So after hours of searching I found out what the issue was.
When force-accepting all host keys with the -o "StrictHostKeyChecking no" option, no ~/.ssh/known_hosts file is created, which means the openssh-client I installed in the container does not seem to read from that file.
So telling the ssh command where to look for the file solved the issue:
ssh -i ~/.ssh/private.key -o UserKnownHostsFile=/github/home/.ssh/known_hosts -tt <USER>@<IP> "echo test > testfile1"
Apparently one can also change the location of the known_hosts file within the ssh_config permanently (see here).
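For reference, a minimal sketch of that permanent ssh_config change, assuming the known_hosts path from this action's environment (the Host pattern and file location are my assumptions, not from the original post):

# ~/.ssh/config (or /etc/ssh/ssh_config) inside the container -- assumed location
Host *
    UserKnownHostsFile /github/home/.ssh/known_hosts
    StrictHostKeyChecking yes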
Hope this helps someone at some point.
First, add a chmod 600 ~/.ssh/known_hosts as well in your entrypoint.
For testing, I would check if options around ssh-keyscan make any difference:
ssh-keyscan -H <IP>
# or
ssh-keyscan -t rsa -H <IP>
Check that your key is generated using the default rsa public-key cryptosystem.
The HostKeyAlgorithms used might be set differently, in which case:
ssh-keyscan -H -t ecdsa-sha2-nistp256 <IP>
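As an additional diagnostic (my own sketch, not part of the answer above), comparing fingerprints shows quickly whether the key the server offers matches what is stored in known_hosts:

# Fingerprint the key the server actually presents
ssh-keyscan -t rsa <IP> > /tmp/scanned_key
ssh-keygen -lf /tmp/scanned_key
# Fingerprint the entries already in known_hosts; a mismatch explains the verification failure
ssh-keygen -lf ~/.ssh/known_hosts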

Unable to run npm command in ansible awx_task container

I have been using ansible core for some time now, and with my team expanding, the need for ansible awx has become a little more pressing. I have been working at it for a week now and I think it's time to shout for help.
We had a process of replacing the baseurl of angularjs apps with some variable using ansible and setting some settings before we compile it (currently thinking of a different way of doing this using a build server like TeamCity, but not right now while we are trying to get up and running with ansible awx).
ansible core checks out the code from the git branch version, replaces the variables and zips it to s3, etc.
Knowing that, the ansible awx host was configured with nvm, then node was installed and .nvm was mapped to /home/awx/.nvm
I have also mapped a bashrc to /home/awx/.bashrc. When I log into the awx_task container with docker exec -it awx_task /bin/bash I see the below:
[root@awx ~]# npm --version
5.5.1
[root@awx ~]# echo $PATH
/home/awx/.nvm/versions/node/v8.9.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
[root@awx ~]# env
NVM_DIR=/home/awx/.nvm
LANG=en_US.UTF-8
HOSTNAME=awx
NVM_CD_FLAGS=
DO_ANSIBLE_HOME=/opt/do_ansible_awx_home
PWD=/home/awx
HOME=/home/awx
affinity:container==eb57afe832eaa32472812d0cd8b614be6df213d8e866f1d7b04dfe109a887e44
TERM=xterm
NVM_BIN=/home/awx/.nvm/versions/node/v8.9.3/bin
SHLVL=1
LANGUAGE=en_US:en
PATH=/home/awx/.nvm/versions/node/v8.9.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LESSOPEN=||/usr/bin/lesspipe.sh %s
_=/usr/bin/env
[root@awx ~]# cat /home/awx/.bashrc
# .bashrc
# User specific aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
All the volume mappings, etc. were done with the installer role templates and tasks, so the output above is the same after multiple docker restarts and re-runs of the ansible awx installer playbook. But during the execution of the playbook that makes use of npm, it seems to have a different env PATH: /var/lib/awx/venv/ansible/bin:/var/lib/awx/venv/awx/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
At this point, I am not sure whether I failed to configure the path properly or other containers like awx_web should also be configured etc.
I have also noticed the env NVM_BIN and modified the npm playbook to include the path to the npm executable:
- name: Running install to build npm modules
  npm:
    path: "{{ bps_git_checkout_folder }}"
    executable: "{{ lookup('env','NVM_BIN') }}/npm"
and it doesn't even show during execution, thus pointing at a different path and env variables being loaded.
I will be grateful if you could shed some lights on whatever I am doing wrongly.
Thanks in advance
EDITS: After implementing @sergei's suggestion I have used the extra var npm_bin: /home/awx/.nvm/versions/node/v8.9.3/bin
I have changed the task to look like:
- name: Running install to build npm modules
  npm:
    path: "{{ bps_git_checkout_folder }}"
    executable: "{{ npm_bin }}/npm"
But it produced this result:
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209 `" && echo
ansible-tmp-1579790680.4419668-165048670233209="` echo /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209 `" ) &&
sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/language/npm.py
<127.0.0.1> PUT /var/lib/awx/.ansible/tmp/ansible-local-10173xtu81x_o/tmpd40htayd TO /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/ /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py", line 114, in <module>
    _ansiballz_main()
  File "/root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py", line 106, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py", line 49, in invoke_module
    imp.load_module('__main__', mod, module, MOD_DESC)
  File "/usr/lib64/python3.6/imp.py", line 235, in load_module
    return load_source(name, filename, file)
  File "/usr/lib64/python3.6/imp.py", line 170, in load_source
    module = _exec(spec, sys.modules[name])
  File "<frozen importlib._bootstrap>", line 618, in _exec
  File "<frozen importlib._bootstrap…
PLAY RECAP
*********************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I have also tried to use shell module directly with the following:
- name: Running npm install
  shell: "{{ npm_bin }}/npm install"
  args:
    chdir: "{{ bps_git_checkout_folder }}"
That has produced this instead:
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218 `" && echo
ansible-tmp-1579791187.453365-253173616238218="` echo /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218 `" ) &&
sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/commands/command.py
<127.0.0.1> PUT /var/lib/awx/.ansible/tmp/ansible-local-10395h1ga8fw3/tmpepeig729 TO /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/ /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
    "changed": true,
    "cmd": "/home/awx/.nvm/versions/node/v8.9.3/bin/npm install",
    "delta": "0:00:00.005528",
    "end": "2020-01-23 14:53:07.928843",
    "invocation": {
        "module_args": {
            "_raw_params": "/home/awx/.nvm/versions/node/v8.9.3/bin/npm install",
            "_uses_shell": true,
            "argv": null,
            "chdir": "/opt/do_ansible_awx_home/gh/deployments/sandbox/bps",
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "stdin_add_newline": true,
            "strip_empty_ends": true,
            "warn": true
        }
    },
    "msg": "non-zero return code",
    "rc": 127,
    …
PLAY RECAP
*********************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Not really seeing what's wrong here. Grateful if anybody can shed some light on this.
Where are your packages sitting? On the host or inside the container? All execution happens in the task container.
If your npm files are sitting on the host and not in the container, then you have to refer to the host that the containers are sitting on to refer to the path.
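For illustration only (the nvm path is taken from the question's output and the exact task layout is an assumption, not part of this answer): if node really does live inside the awx_task container, prepending its bin directory via the task's environment keyword sidesteps the restricted venv PATH that the job runs with:

- name: Running install to build npm modules
  npm:
    path: "{{ bps_git_checkout_folder }}"
  environment:
    # Prepend the assumed nvm bin dir so both npm and node resolve despite the AWX venv PATH
    PATH: "/home/awx/.nvm/versions/node/v8.9.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"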

CircleCI environmental variables for HEROKU not being set properly causing GIT to fail

I am a CircleCI user, and I am setting up an integration with Heroku.
I want to do the following, and setup security with integrations with dockerHub and also to Heroku from the CircleCI portal page, using this config.yml file.
The problem is that CircleCI doesn't seem to know what these variables should be set to, and instead just echoes them literally:
${HEROKU_API_KEY} ${HEROKU_APP}
config.yml
version: 2
jobs:
  build:
    working_directory: ~/springboot_swagger_example-master-cassandra
    docker:
      - image: circleci/openjdk:8-jdk-browsers
    steps:
      - checkout
      - restore_cache:
          key: springboot_swagger_example-master-cassandra-{{ checksum "pom.xml" }}
      - run: mvn dependency:go-offline
      - save_cache:
          paths:
            - ~/.m2
          key: springboot_swagger_example-master-cassandra-{{ checksum "pom.xml" }}
      - type: add-ssh-keys
      - type: deploy
        name: "Deploy to Heroku"
        command: |
          if [ "${CIRCLE_BRANCH}" == "master" ]; then
            # Install Heroku fingerprint (this is heroku's own key, NOT any of your private or public keys)
            echo 'heroku.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAu8erSx6jh+8ztsfHwkNeFr/SZaSOcvoa8AyMpaerGIPZDB2TKNgNkMSYTLYGDK2ivsqXopo2W7dpQRBIVF80q9mNXy5tbt1WE04gbOBB26Wn2hF4bk3Tu+BNMFbvMjPbkVlC2hcFuQJdH4T2i/dtauyTpJbD/6ExHR9XYVhdhdMs0JsjP/Q5FNoWh2ff9YbZVpDQSTPvusUp4liLjPfa/i0t+2LpNCeWy8Y+V9gUlDWiyYwrfMVI0UwNCZZKHs1Unpc11/4HLitQRtvuk0Ot5qwwBxbmtvCDKZvj1aFBid71/mYdGRPYZMIxq1zgP1acePC1zfTG/lvuQ7d0Pe0kaw==' >> ~/.ssh/known_hosts
            # git push git@heroku.com:yourproject.git $CIRCLE_SHA1:refs/heads/master
            # Optional post-deploy commands
            # heroku run python manage.py migrate --app=my-heroku-project
          fi
      - run: mvn package
      - run:
          name: Install Docker client
          command: |
            set -x
            VER="17.03.0-ce"
            curl -L -o /tmp/docker-$VER.tgz https://get.docker.com/builds/Linux/x86_64/docker-$VER.tgz
            tar -xz -C /tmp -f /tmp/docker-$VER.tgz
            mv /tmp/docker/* /usr/bin
      - run:
          name: Build Docker image
          command: docker build -t joethecoder2/spring-boot-web:$CIRCLE_SHA1 .
      - run:
          name: Push to DockerHub
          command: |
            docker login -u$DOCKERHUB_LOGIN -p$DOCKERHUB_PASSWORD
            docker push joethecoder2/spring-boot-web:$CIRCLE_SHA1
      - run:
          name: Setup Heroku
          command: |
            curl https://cli-assets.heroku.com/install-ubuntu.sh | sh
            chmod +x .circleci/setup-heroku.sh
            .circleci/setup-heroku.sh
      - run:
          name: Deploy to Heroku
          command: |
            mkdir app
            cd app/
            heroku create
            # git push https://heroku:$HEROKU_API_KEY@git.heroku.com/$HEROKU_APP.git master
            echo ${HEROKU_API_KEY}
            echo ${HEROKU_APP}
            git push https://heroku:${HEROKU_API_KEY}@git.heroku.com/${HEROKU_APP}.git master
      - store_test_results:
          path: target/surefire-reports
      - store_artifacts:
          path: target/spring-boot-web-0.0.1-SNAPSHOT.jar
Again, the problem is that CircleCI doesn't seem to know what these variables should be set to, and instead just echoes them literally:
${HEROKU_API_KEY}
${HEROKU_APP}
The question, and problem, is: why aren't these settings being detected automatically?
You need to set the values for the variables: https://circleci.com/docs/2.0/env-vars/
They are being echoed because you're echoing them:
echo ${HEROKU_API_KEY}
echo ${HEROKU_APP}
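Once HEROKU_API_KEY and HEROKU_APP are defined in the CircleCI project settings, a small hedged variant of the deploy step that fails fast with a clear message if they are still unset (the :? guard lines are my addition, not from the original answer):

- run:
    name: Deploy to Heroku
    command: |
      # Abort with an explicit message instead of pushing to a malformed URL
      : "${HEROKU_API_KEY:?Set HEROKU_API_KEY in the CircleCI project settings}"
      : "${HEROKU_APP:?Set HEROKU_APP in the CircleCI project settings}"
      git push https://heroku:${HEROKU_API_KEY}@git.heroku.com/${HEROKU_APP}.git master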

How to spin up a docker container (or docker-compose) with cloud-init (cloud-config)

I am trying to spin up a server which should run docker and docker-compose with a simple "hello-world" container. My YAML file looks like this:
#cloud-config
ssh_authorized_keys:
  - ssh-rsa MY_SSH_KEY_HERE
package_update: true
package_upgrade: true
packages:
  - docker.io
runcmd:
  - [ sh, -c, "sudo apt install -y docker" ]
  - [ sh, -c, "sudo apt install -y docker-compose" ]
  - [ sh, -c, "sudo service docker start" ]
rancher:
  services:
    rancher-server:
      image: hello-world
      restart: always
      ports:
        - 80:80
      environment:
        - TEST_VAR=TEST
Docker gets installed but won't start the image:
root@test ~ # which docker
/usr/bin/docker
root@test ~ # which docker-compose
/usr/bin/docker-compose
> sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
BTW: is it necessary to include the packages: docker.io ?
In this answer, you can ignore adding the default azure user to the docker group if you are not using an Azure VM. But keep in mind that to run docker you have to add your current user to the docker group, otherwise you may get a permission denied error.
#cloud-config
package_update: true
# Setup swap memory
disk_setup:
  ephemeral0:
    table_type: mbr
    layout: [66, [33, 82]]
    overwrite: True
fs_setup:
  - device: ephemeral0.1
    filesystem: ext4
  - device: ephemeral0.2
    filesystem: swap
mounts:
  - ["ephemeral0.1", "/mnt"]
  - ["ephemeral0.2", "none", "swap", "sw", "0", "0"]
# Enable Docker's swap limit support (System restart required)
bootcmd:
  - [ sh, -c, 'sudo echo GRUB_CMDLINE_LINUX=\"cgroup_enable=memory swapaccount=1\" >> /etc/default/grub' ]
  - [ sh, -c, 'sudo update-grub' ]
# Install latest stable docker and docker-compose
runcmd:
  - [ sh, -c, 'curl -sSL https://get.docker.com/ | sh' ]
  - [ sh, -c, 'sudo curl -L https://github.com/docker/compose/releases/download/$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep "tag_name" | cut -d \" -f4)/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose' ]
  - [ sh, -c, 'sudo chmod +x /usr/local/bin/docker-compose' ]
  - [ sh, -c, 'sudo docker run -d nginx:latest' ]
# Add default azure user to docker group
system_info:
  default_user:
    groups: [docker]
# Restart the system
power_state:
  delay: "now"
  mode: reboot
  message: First reboot
  condition: True
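For a machine that is already running (i.e. outside cloud-init), a minimal sketch of the group change mentioned above; hello-world here is just an assumed test image, not part of the original answer:

# Add the current user to the docker group, then re-login (or use newgrp) for it to take effect
sudo usermod -aG docker "$USER"
newgrp docker
docker run --rm hello-world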
This user-data string is working for me on DigitalOcean using ubuntu-18-04-x64 VM type. I expect it would work on any version of Ubuntu 18.04 built for a cloud virtual machine.
#cloud-config
# https://github.com/number5/cloud-init/blob/master/doc/examples/cloud-config-apt.txt
# https://docs.docker.com/install/linux/docker-ce/ubuntu/
apt:
  sources:
    download-docker-com.list:
      source: "deb https://download.docker.com/linux/ubuntu $RELEASE stable"
      key: |
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFit2ioBEADhWpZ8/wvZ6hUTiXOwQHXMAlaFHcPH9hAtr4F1y2+OYdbtMuth
lqqwp028AqyY+PRfVMtSYMbjuQuu5byyKR01BbqYhuS3jtqQmljZ/bJvXqnmiVXh
38UuLa+z077PxyxQhu5BbqntTPQMfiyqEiU+BKbq2WmANUKQf+1AmZY/IruOXbnq
L4C1+gJ8vfmXQt99npCaxEjaNRVYfOS8QcixNzHUYnb6emjlANyEVlZzeqo7XKl7
UrwV5inawTSzWNvtjEjj4nJL8NsLwscpLPQUhTQ+7BbQXAwAmeHCUTQIvvWXqw0N
cmhh4HgeQscQHYgOJjjDVfoY5MucvglbIgCqfzAHW9jxmRL4qbMZj+b1XoePEtht
ku4bIQN1X5P07fNWzlgaRL5Z4POXDDZTlIQ/El58j9kp4bnWRCJW0lya+f8ocodo
vZZ+Doi+fy4D5ZGrL4XEcIQP/Lv5uFyf+kQtl/94VFYVJOleAv8W92KdgDkhTcTD
G7c0tIkVEKNUq48b3aQ64NOZQW7fVjfoKwEZdOqPE72Pa45jrZzvUFxSpdiNk2tZ
XYukHjlxxEgBdC/J3cMMNRE1F4NCA3ApfV1Y7/hTeOnmDuDYwr9/obA8t016Yljj
q5rdkywPf4JF8mXUW5eCN1vAFHxeg9ZWemhBtQmGxXnw9M+z6hWwc6ahmwARAQAB
tCtEb2NrZXIgUmVsZWFzZSAoQ0UgZGViKSA8ZG9ja2VyQGRvY2tlci5jb20+iQI3
BBMBCgAhBQJYrefAAhsvBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEI2BgDwO
v82IsskP/iQZo68flDQmNvn8X5XTd6RRaUH33kXYXquT6NkHJciS7E2gTJmqvMqd
tI4mNYHCSEYxI5qrcYV5YqX9P6+Ko+vozo4nseUQLPH/ATQ4qL0Zok+1jkag3Lgk
jonyUf9bwtWxFp05HC3GMHPhhcUSexCxQLQvnFWXD2sWLKivHp2fT8QbRGeZ+d3m
6fqcd5Fu7pxsqm0EUDK5NL+nPIgYhN+auTrhgzhK1CShfGccM/wfRlei9Utz6p9P
XRKIlWnXtT4qNGZNTN0tR+NLG/6Bqd8OYBaFAUcue/w1VW6JQ2VGYZHnZu9S8LMc
FYBa5Ig9PxwGQOgq6RDKDbV+PqTQT5EFMeR1mrjckk4DQJjbxeMZbiNMG5kGECA8
g383P3elhn03WGbEEa4MNc3Z4+7c236QI3xWJfNPdUbXRaAwhy/6rTSFbzwKB0Jm
ebwzQfwjQY6f55MiI/RqDCyuPj3r3jyVRkK86pQKBAJwFHyqj9KaKXMZjfVnowLh
9svIGfNbGHpucATqREvUHuQbNnqkCx8VVhtYkhDb9fEP2xBu5VvHbR+3nfVhMut5
G34Ct5RS7Jt6LIfFdtcn8CaSas/l1HbiGeRgc70X/9aYx/V/CEJv0lIe8gP6uDoW
FPIZ7d6vH+Vro6xuWEGiuMaiznap2KhZmpkgfupyFmplh0s6knymuQINBFit2ioB
EADneL9S9m4vhU3blaRjVUUyJ7b/qTjcSylvCH5XUE6R2k+ckEZjfAMZPLpO+/tF
M2JIJMD4SifKuS3xck9KtZGCufGmcwiLQRzeHF7vJUKrLD5RTkNi23ydvWZgPjtx
Q+DTT1Zcn7BrQFY6FgnRoUVIxwtdw1bMY/89rsFgS5wwuMESd3Q2RYgb7EOFOpnu
w6da7WakWf4IhnF5nsNYGDVaIHzpiqCl+uTbf1epCjrOlIzkZ3Z3Yk5CM/TiFzPk
z2lLz89cpD8U+NtCsfagWWfjd2U3jDapgH+7nQnCEWpROtzaKHG6lA3pXdix5zG8
eRc6/0IbUSWvfjKxLLPfNeCS2pCL3IeEI5nothEEYdQH6szpLog79xB9dVnJyKJb
VfxXnseoYqVrRz2VVbUI5Blwm6B40E3eGVfUQWiux54DspyVMMk41Mx7QJ3iynIa
1N4ZAqVMAEruyXTRTxc9XW0tYhDMA/1GYvz0EmFpm8LzTHA6sFVtPm/ZlNCX6P1X
zJwrv7DSQKD6GGlBQUX+OeEJ8tTkkf8QTJSPUdh8P8YxDFS5EOGAvhhpMBYD42kQ
pqXjEC+XcycTvGI7impgv9PDY1RCC1zkBjKPa120rNhv/hkVk/YhuGoajoHyy4h7
ZQopdcMtpN2dgmhEegny9JCSwxfQmQ0zK0g7m6SHiKMwjwARAQABiQQ+BBgBCAAJ
BQJYrdoqAhsCAikJEI2BgDwOv82IwV0gBBkBCAAGBQJYrdoqAAoJEH6gqcPyc/zY
1WAP/2wJ+R0gE6qsce3rjaIz58PJmc8goKrir5hnElWhPgbq7cYIsW5qiFyLhkdp
YcMmhD9mRiPpQn6Ya2w3e3B8zfIVKipbMBnke/ytZ9M7qHmDCcjoiSmwEXN3wKYI
mD9VHONsl/CG1rU9Isw1jtB5g1YxuBA7M/m36XN6x2u+NtNMDB9P56yc4gfsZVES
KA9v+yY2/l45L8d/WUkUi0YXomn6hyBGI7JrBLq0CX37GEYP6O9rrKipfz73XfO7
JIGzOKZlljb/D9RX/g7nRbCn+3EtH7xnk+TK/50euEKw8SMUg147sJTcpQmv6UzZ
cM4JgL0HbHVCojV4C/plELwMddALOFeYQzTif6sMRPf+3DSj8frbInjChC3yOLy0
6br92KFom17EIj2CAcoeq7UPhi2oouYBwPxh5ytdehJkoo+sN7RIWua6P2WSmon5
U888cSylXC0+ADFdgLX9K2zrDVYUG1vo8CX0vzxFBaHwN6Px26fhIT1/hYUHQR1z
VfNDcyQmXqkOnZvvoMfz/Q0s9BhFJ/zU6AgQbIZE/hm1spsfgvtsD1frZfygXJ9f
irP+MSAI80xHSf91qSRZOj4Pl3ZJNbq4yYxv0b1pkMqeGdjdCYhLU+LZ4wbQmpCk
SVe2prlLureigXtmZfkqevRz7FrIZiu9ky8wnCAPwC7/zmS18rgP/17bOtL4/iIz
QhxAAoAMWVrGyJivSkjhSGx1uCojsWfsTAm11P7jsruIL61ZzMUVE2aM3Pmj5G+W
9AcZ58Em+1WsVnAXdUR//bMmhyr8wL/G1YO1V3JEJTRdxsSxdYa4deGBBY/Adpsw
24jxhOJR+lsJpqIUeb999+R8euDhRHG9eFO7DRu6weatUJ6suupoDTRWtr/4yGqe
dKxV3qQhNLSnaAzqW/1nA3iUB4k7kCaKZxhdhDbClf9P37qaRW467BLCVO/coL3y
Vm50dwdrNtKpMBh3ZpbB1uJvgi9mXtyBOMJ3v8RZeDzFiG8HdCtg9RvIt/AIFoHR
H3S+U79NT6i0KPzLImDfs8T7RlpyuMc4Ufs8ggyg9v3Ae6cN3eQyxcK3w0cbBwsh
/nQNfsA6uu+9H7NhbehBMhYnpNZyrHzCmzyXkauwRAqoCbGCNykTRwsur9gS41TQ
M8ssD1jFheOJf3hODnkKU+HKjvMROl1DK7zdmLdNzA1cvtZH/nCC9KPj1z8QC47S
xx+dTZSx4ONAhwbS/LN3PoKtn8LPjY9NP9uDWI+TWYquS2U+KHDrBDlsgozDbs/O
jCxcpDzNmXpWQHEtHU7649OXHP7UeNST1mCUCH5qdank0V1iejF6/CfTFU4MfcrG
YT90qFF93M3v01BbxP+EIY2/9tiIPbrd
=0YYh
-----END PGP PUBLIC KEY BLOCK-----
# Search for package versions: $ apt-cache madison docker-ce
packages:
  - docker-ce=5:19.03.5~3-0~ubuntu-bionic
  - docker-compose=1.17.1-2
  - containerd.io=1.2.10-3
users:
  - name: user
    uid: 1000
# Test Docker installation with $ docker run -u 1000 -t -i --rm hello-world
