Travis CI -- Docker run shell to extract values - docker

I am currently using Travis CI and trying to do something like the following, but I just get an empty value for the variable:
ANSIBLE_VERSION=$(docker run -d <ID/TAG> /bin/bash -c "ansible --version"|head -1 |awk '{print $2}')
I have tested the command on my local machine and it works correctly, so I am not sure whether the problem is Travis related.
Many thanks

It seems this is due to the following:
https://github.com/travis-ci/travis-ci/issues/3149
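For what it's worth, the -d flag alone can also produce an empty value: a detached docker run prints the container ID rather than the command's output, so awk finds nothing in the second field. A minimal sketch of capturing the version in a foreground run (assuming the image lets you run a shell directly):
# run in the foreground so stdout is captured by the command substitution
ANSIBLE_VERSION=$(docker run --rm <ID/TAG> /bin/bash -c "ansible --version" | head -1 | awk '{print $2}')
echo "$ANSIBLE_VERSION"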


expected 2 keywords got 4 in robot framework

How can this problem be solved? For example, Execute special command on accepts 2 arguments, but I want to make it accept more. I want these two commands to run together so that I can replace the Docker image in the YAML in one go. I have tried putting the other commands in brackets, but it still didn't work.
Execute special command on ${cluster} kubectl exec -n ${namespace} get statefulsets/postgresql-pod -o yaml | sed "s#image: docker repo /stolon#image: bbbdocker repo/stolon#"
Execute special command on ${cluster} kubectl -n ${namespace} replace -f -
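One possible approach, sketched under the assumption that Execute special command on simply forwards its second argument to a shell on the cluster (and that the intent is a plain kubectl get piped through sed into kubectl replace): build the whole pipeline as a single string with BuiltIn's Catenate, so the keyword still receives only two arguments:
${cmd}=    Catenate    kubectl -n ${namespace} get statefulsets/postgresql-pod -o yaml
...    | sed "s#image: docker repo /stolon#image: bbbdocker repo/stolon#"
...    | kubectl -n ${namespace} replace -f -
Execute special command on    ${cluster}    ${cmd}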

Path is different depending on how you connect to container

I have an Alpine Docker container and, depending on how I connect using ssh, the PATH is different. If I connect using a login shell:
ssh root@localhost sh -lc env | grep PATH
this prints:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
However, if I don't use a login shell:
ssh root@localhost sh -c env | grep PATH
this prints:
PATH=/bin:/usr/bin:/sbin:/usr/sbin
Why is this happening? What do I need to do so that the second command produces the same output as the first command?
With sh -l you start a login shell:
When invoked as an interactive login shell, or a non-interactive shell with the --login option, it first attempts to read and execute commands from /etc/profile and ~/.profile, in that order. The --noprofile option may be used to inhibit this behavior.
...
A non-interactive shell invoked with the name sh does not attempt to read any other startup files.
From https://linux.die.net/man/1/sh
That is, you can probably edit the profile files to make the login shell behave like the non-login one, but going the other way around is harder.
I'll answer my own question. This Stack Overflow post has the main info needed: Where to set system default environment variables in Alpine Linux?
Given that, there are two alternatives:
Declare PATH using the ENV option of the Dockerfile (sketched below)
Or add PermitUserEnvironment yes to the sshd_config file and define PATH in ~/.ssh/environment
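For the first alternative, a minimal Dockerfile sketch; the base image tag is only an example, and the PATH value mirrors the login-shell output above:
FROM alpine:3.19
# make non-login shells see the same PATH as a login shell
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin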

Best practice to include a bash script in a Docker image

I'm creating a Dockerfile that needs to execute a command; let's call it foo.
In order to execute foo, I need to create a .cfg file in the current directory with token information to call this foo service.
So basically I should do something like
ENV FOO_TOKEN token
ENV FOO_HOST host
ENV FOO_SHARED_DIRECTORY directory
ENV LIBS_TARGET target
and then put the first three variables in a .cfg file and then launch a command using the last variable as target.
Given that if I run more than one CMD in a Dockerfile only the last one is considered, how should I do that?
My ideal execution is docker run -e "FOO_TOKEN=aaaaaaa" -e "FOO_HOST=myhost" -e "FOO_SHARED_DIRECTORY=Shared" -e "LIBS_TARGET=target/scala-2.11/*.jar" -it --rm --name my-ci-deploy foo/foo:latest
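For context, one common pattern (a sketch only; the file name, config keys and foo's command line are hypothetical) is a small entrypoint script that writes the .cfg from the runtime environment and then execs foo, so the -e values passed to docker run are picked up at container start:
#!/bin/sh
# hypothetical entrypoint.sh: render the config from the runtime environment, then run foo
cat > .cfg <<EOF
token=$FOO_TOKEN
host=$FOO_HOST
shared_directory=$FOO_SHARED_DIRECTORY
EOF
exec foo "$LIBS_TARGET"
and in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]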
If you wanted to keep everything in the Dockerfile (something I think is rather desirable), you can do something nasty like:
ENV SCRIPT=IyEvdXNyL2Jpbi9lbnYgYmFzaApwZG9fc3Fsc3J2PTAKc3Vkbz0KdmVuZG9yPSQoIGxzYl9yZWxlYXNlIC1p
RUN echo -n "$SCRIPT" | base64 -d | /usr/bin/env bash
Where the contents of SCRIPT= are derived by piping your shell script thusly:
cat my_script.sh | base64 --wrap=0
You may have to adjust the /usr/bin/env bash if you have a really minimal (Alpine) setup.
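For illustration, a quick way to sanity-check the round trip locally with a throwaway script (the file name and contents are just an example); keep in mind the RUN line executes at image build time, not when docker run is invoked:
# build a tiny test script
echo '#!/usr/bin/env bash' > my_script.sh
echo 'echo "hello from the embedded script"' >> my_script.sh
# this output is what you would paste after ENV SCRIPT=
base64 --wrap=0 my_script.sh
# decode and run it, exactly as the RUN line would
base64 --wrap=0 my_script.sh | base64 -d | /usr/bin/env bash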

How to set environment variable in pre-start in Upstart script?

We have a custom C++ daemon application that forks once. So we've been doing this in our Upstart script on Ubuntu 12.04 and it works perfectly:
expect fork
exec /path/to/the/app
However now we need to pass in an argument to our app which contains the number of CPUs on the machine on which it runs:
cat /proc/cpuinfo | grep processor | wc -l
Our first attempt was this:
expect fork
exec /path/to/the/app -t `cat /proc/cpuinfo | grep processor | wc -l`
While that starts our app with the correct -t value, Upstart tracks the wrong pid value, I'm assuming because those cat, grep & wc commands all launch processes in exec before our app.
I also tried this, and even that doesn't work, I guess because setting an env var runs a process? Upstart still tracks the wrong pid:
expect fork
script
NUM_CORES=32
/path/to/the/app -t $NUM_CORES
end script
I've also tried doing this in an env stanza but apparently those don't run commands:
env num_cores=`cat /proc/cpuinfo | grep processor | wc -l`
Also tried doing this in pre-start, but env vars set there don't have any values in the exec stanza:
pre-start script
NUM_CORES=32
end script
Any idea how to get this NUM_CORES set properly, and still get Upstart to track the correct pid for our app that forks once?
It's awkward. The recommended method is to write an env file in the pre-start stanza and then source it in the script stanza. It's ridiculous, I know.
expect fork
pre-start script
exec >"/tmp/$UPSTART_JOB"
echo "NUM_CORES=$(cat /proc/cpuinfo | grep processor | wc -l)"
end script
script
. "/tmp/$UPSTART_JOB"
/path/to/app -t "$NUM_CORES"
end script
post-start script
rm -f "/tmp/$UPSTART_JOB"
end script
I use the exec line in the pre-start because I usually have multiple env variables and I don't want to repeat the redirection code.
This only works because the '. ' command is a built-in in dash and thus no process is spawned.
According to zram-config's upstart config:
script
NUM_CORES=$(grep -c ^processor /proc/cpuinfo | sed 's/^0$/1/')
/path/to/the/app -t $NUM_CORES
end script
I would add
export NUM_CORES
after assigning it a value in "script". Keep in mind that /bin/sh may be symlinked to a non-Bash shell that runs these scripts, so I would avoid Bash-only constructs.
Re: using the "env" stanza, it passes values literally and does not process them using shell conventions.
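Putting those two pieces together, a sketch of what the script stanza would look like with the export added (POSIX sh only, no Bash-isms):
script
# count processors; the sed treats a count of 0 as 1, as in the zram-config snippet above
NUM_CORES=$(grep -c ^processor /proc/cpuinfo | sed 's/^0$/1/')
export NUM_CORES
/path/to/the/app -t "$NUM_CORES"
end script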

Problems deploying with Capistrano and Perforce after changing workspace

We have been using Capistrano to roll out changes for some time (it was set up by a previous coder). As IT has decided to remove his workspace from Perforce, I've created a new one in my name and figured it would just work, but it seems to be rolling back on [deploy:update_code].
I've checked the usual username/password errors, and with either of those incorrect the error is
Perforce password (P4PASSWD) invalid or unset.
or
p4client is incorrect or unset
so I'm quite confident it isn't that.
Any ideas? I'll paste the error message below, omitting some details. Thanks a lot.
failed: "sh -c 'p4 -p p4SERVER:1666 -u P4Username -P p4Password -c P4Workspace sync -f #298781 && cp -rf p4 -p p4SERVER:1666 -u P4Username -P p4Password -c P4Workspace client -o | grep ^Root | cut -f2 /home/ubuntu/clan/releases/20110905145323 && (echo 298781 > /home/ubuntu/clan/releases/20110905145323/REVISION)'" on ipaddressofserver
I managed to resolve this problem. The issue was that when creating the workspace the "Host" field was specified as my local machine; removing this optional parameter allowed it to deploy correctly.
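For anyone hitting the same thing, a sketch of how to check and clear that field (the server, user and workspace names are the placeholders from above):
# show the client spec and look for a Host: line pinning it to one machine
p4 -p p4SERVER:1666 -u P4Username -P p4Password -c P4Workspace client -o | grep ^Host
# then edit the spec and delete (or blank) the Host: field
p4 -p p4SERVER:1666 -u P4Username -P p4Password -c P4Workspace client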
