Expected 2 keywords got 4 in Robot Framework - Docker

How can this problem be solved? For example, the Execute special command on keyword accepts 2 arguments, but I want to make it accept more. I want these two commands to run together so that I can replace the YAML of a Docker image in one go. I have tried putting the other commands in brackets, but it still didn't work:
Execute special command on ${cluster} kubectl exec -n ${namespace} get statefulsets/postgresql-pod -o yaml | sed "s#image: docker repo /stolon#image: bbbdocker repo/stolon#"
Execute special command on ${cluster} kubectl -n ${namespace} replace -f -
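One way to make this work (a sketch; Execute special command on appears to be a custom keyword taking a cluster and a single command string): Robot Framework splits arguments on runs of two or more spaces, so any extra wide gap inside the command produces errors like "expected 2 keywords got 4". Building the whole pipeline as one string, e.g. with Catenate, keeps it a single argument:

# Join the two kubectl invocations into one shell pipeline (single spaces only)
${cmd}=    Catenate    SEPARATOR=${SPACE}
...    kubectl -n ${namespace} get statefulsets/postgresql-pod -o yaml
...    | sed "s#image: docker repo /stolon#image: bbbdocker repo/stolon#"
...    | kubectl -n ${namespace} replace -f -
Execute special command on    ${cluster}    ${cmd}

The sed expression is copied verbatim from the question; piping the edited YAML straight into kubectl replace -f - is what combines the two steps.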

Related

Path is different depending on how you connect to the container

I have an Alpine Docker container and, depending on how I connect using ssh, the path is different. If I connect using a login shell:
ssh root@localhost sh -lc env | grep PATH
this prints:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
However, if I don't use a login shell:
ssh root@localhost sh -c env | grep PATH
this prints:
PATH=/bin:/usr/bin:/sbin:/usr/sbin
Why is this happening? What do I need to do so that the second command produces the same output as the first command?
With sh -l you start a login shell:
When invoked as an interactive login shell, or a non-interactive shell with the --login option, it first attempts to read and execute commands from /etc/profile and ~/.profile, in that order. The --noprofile option may be used to inhibit this behavior.
...
A non-interactive shell invoked with the name sh does not attempt to read any other startup files.
From https://linux.die.net/man/1/sh
That is, you can probably edit the profile files to make the login shell behave like the plain sh invocation, but going the other way around would be difficult.
I'll answer my own question. This Stack Overflow post has the main info needed: Where to set system default environment variables in Alpine Linux?
Given that, there are two alternatives:
Declare PATH using the ENV option of the Dockerfile
Or add PermitUserEnvironment yes to sshd_config file and define PATH in ~/.ssh/environment
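A minimal sketch of the first alternative (the PATH value is copied from the login-shell output above):

# Dockerfile: make the non-login PATH match the login shell's
ENV PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"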

Docker build requires exactly 1 argument

When I run this command in my GitLab pipeline
docker build --build-arg NPM_TOKEN=${NPM_TOKEN} --tag $REGISTRY_IMAGE/web-public:$CI_COMMIT_SHA --tag $REGISTRY_IMAGE/web-public:$CI_COMMIT_REF_NAME packages/web-public
it fails with
build requires exactly 1 argument
It looks to me like I am actually passing one argument, the path packages/web-public. Flags are not arguments as far as I know.
What am I missing here?
Quote your variables. Something in those variables is expanding into more than the single argument the flag expects.
docker build --build-arg "NPM_TOKEN=${NPM_TOKEN}" --tag "$REGISTRY_IMAGE/web-public:$CI_COMMIT_SHA" --tag "$REGISTRY_IMAGE/web-public:$CI_COMMIT_REF_NAME" packages/web-public
You can also echo that command to see how the variables are expanding, e.g.
echo docker build ...
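For instance, with a deliberately bad value to show the mechanics (registry.example.com is a hypothetical registry):

# hypothetical values; the ref name contains a stray space
REGISTRY_IMAGE=registry.example.com/group
CI_COMMIT_REF_NAME="main extra"
# unquoted: the shell word-splits the expansion, so docker build
# receives "extra" and "packages/web-public" as two positional args
docker build --tag $REGISTRY_IMAGE/web-public:$CI_COMMIT_REF_NAME packages/web-public
# quoted: one --tag value, one path
docker build --tag "$REGISTRY_IMAGE/web-public:$CI_COMMIT_REF_NAME" packages/web-public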
From https://docs.docker.com/engine/reference/commandline/build/ :
docker build [OPTIONS] PATH | URL | -
It looks like there's something wrong with your PATH. Try using an absolute path, or change to the directory containing the Dockerfile and use "." as the path.
see also: "docker build" requires exactly 1 argument(s)
My issue was that I had a multi-line script entry, e.g.
script:
  - >
    docker build \
    --network host \
    -t ${CI_REGISTRY}/kylehqcom/project/image:latest \
    ....
As soon as I moved it onto a single line, everything was OK. So I guess the line breaks got "entered" after the first line, which meant that the subsequent lines were ignored and the error was returned. Also note that I linted the CI config via the GitLab UI and all was syntactically correct.
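A plausible reading (an assumption, not verified against this pipeline): YAML's folded scalar > joins equally indented lines with single spaces, so each trailing \ survives as a literal backslash-space pair, which the shell parses into stray single-space arguments, and docker build then sees more than one positional argument. Dropping the backslashes and letting > do the joining should behave like a single line (the final . stands in for the elided build context):

script:
  - >
    docker build
    --network host
    -t ${CI_REGISTRY}/kylehqcom/project/image:latest
    .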

Best practice to include a bash script in a Docker image

I'm creating a Dockerfile that needs to execute a command; let's call it foo.
In order to execute foo, I need to create a .cfg file in the current directory with token information to call this foo service.
So basically I should do something like
ENV FOO_TOKEN token
ENV FOO_HOST host
ENV FOO_SHARED_DIRECTORY directory
ENV LIBS_TARGET target
and then put the first three variables in a .cfg file and then launch a command using the last variable as target.
Given that if you run more than one CMD in a Dockerfile only the last one is considered, how should I do that?
My ideal execution is docker run -e "FOO_TOKEN=aaaaaaa" -e "FOO_HOST=myhost" -e "FOO_SHARED_DIRECTORY=Shared" -e "LIBS_TARGET=target/scala-2.11/*.jar" -it --rm --name my-ci-deploy foo/foo:latest
If you wanted to keep everything in the Dockerfile (something I think is rather desirable), you can do something nasty like:
ENV SCRIPT=IyEvdXNyL2Jpbi9lbnYgYmFzaApwZG9fc3Fsc3J2PTAKc3Vkbz0KdmVuZG9yPSQoIGxzYl9yZWxlYXNlIC1p
RUN echo -n "$SCRIPT" | base64 -d | /usr/bin/env bash
Where the contents of SCRIPT= are derived by piping your shell script thusly:
cat my_script.sh | base64 --wrap=0
You may have to adjust the /usr/bin/env bash if you have a really minimal (Alpine) setup.
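A more conventional alternative (a sketch; the .cfg key names are invented, since the question doesn't show the file format) is to render the config from the environment in an ENTRYPOINT script and then exec the command, which fits the intended docker run -e ... invocation above. In the Dockerfile:

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

And in entrypoint.sh:

#!/usr/bin/env bash
set -eu
# Write the three service settings into .cfg (key names are made up)
cat > .cfg <<EOF
token=${FOO_TOKEN}
host=${FOO_HOST}
shared_directory=${FOO_SHARED_DIRECTORY}
EOF
# Run foo against the requested target
exec foo "${LIBS_TARGET}"

This way the image builds once and the token never needs to be baked into a layer.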

Dockerfile capture output of a command

I have the following line in my Dockerfile which is supposed to capture the display number of the host:
RUN DISPLAY_NUMBER="$(echo $DISPLAY | cut -d. -f1 | cut -d: -f2)" && echo $DISPLAY_NUMBER
When I build the Dockerfile, DISPLAY_NUMBER is empty; however, when I run the same command directly in the terminal, I see the result. Is there anything I'm doing wrong here?
Commands specified with RUN are executed when the image is built. There is no display during the build, hence the output is empty.
You can exchange RUN for ENTRYPOINT; then the command is executed when the container starts.
But how to forward the host's display to the container is another matter entirely.
Host environment variables cannot be passed during build, only at run-time.
Only build args can be specified, by first declaring the arg in the Dockerfile:
ARG DISPLAY_NUMBER
and then running:
docker build . --no-cache -t disp --build-arg DISPLAY_NUMBER=$DISPLAY_NUMBER
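Putting the two together, a minimal Dockerfile sketch:

# the build arg is only in scope after its declaration
ARG DISPLAY_NUMBER
RUN echo "display number at build time: $DISPLAY_NUMBER"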
You can work around this issue using the envsubst trick:
RUN echo $DISPLAY_NUMBER
And on the command line:
envsubst < Dockerfile | docker build . -f -
Which will rewrite the Dockerfile in memory and pass it to Docker with the environment variable changed.
Edit: Note that this solution is pretty useless, though, because you probably want to do this at run-time anyway: the value should depend not on where the image is built, but on where it is run.
I would personally move that logic into your ENTRYPOINT or CMD script.
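A run-time sketch of that suggestion (the image name myimage and the script path are made up):

#!/bin/sh
# entrypoint.sh: derive the display number when the container starts
DISPLAY_NUMBER="$(echo "$DISPLAY" | cut -d. -f1 | cut -d: -f2)"
echo "display number: $DISPLAY_NUMBER"

started with the host's DISPLAY passed through:

docker run --rm -e DISPLAY myimage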

Jenkins parameterization issue using Cucumber

I'm trying to find the right syntax for an instruction that runs a Docker image, maps a volume, and calls tests written in Cucumber with JUnit output.
When I set the following instruction in an "Execute shell" build step in the job configuration and don't map any volume, the tests run:
docker run docker-registry.dev.xoom.com/agrimaldi/jasper:${VERSION} cucumber -t #co -f junit -o /opt/xbp_stamp_jasper/features/output
Problem is, I need a volume in order to read the output of the tests. So I try out with the following line:
docker run --rm -v /var/lib/jenkins/jobs/qacolombia/workspace/default/features/output:/opt/xbp_stamp_jasper/features/output docker-registry.dev.xoom.com/agrimaldi/jasper:${VERSION} cucumber -t #co -f junit -o /opt/xbp_stamp_jasper/features/output
But Jenkins doesn't seem to recognize the "#" symbol. I've tried several positions of single quotes, for example '#co' or 'cucumber -t #co -f junit -o /opt/xbp_stamp_jasper/features/output', as well as backslashes and double quotes, and Jenkins doesn't recognize the whole instruction. Would you please suggest a way of passing these parameters?
Any help is highly appreciated.
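One thing worth checking (an assumption, since the post doesn't confirm the intended tag): Cucumber tags are written with a leading @ (e.g. @co), and in a shell step an unquoted word starting with # begins a comment, so everything after -t would be silently dropped. Single-quoting an @-prefixed tag keeps the shell from touching it:

docker run --rm -v /var/lib/jenkins/jobs/qacolombia/workspace/default/features/output:/opt/xbp_stamp_jasper/features/output docker-registry.dev.xoom.com/agrimaldi/jasper:${VERSION} cucumber -t '@co' -f junit -o /opt/xbp_stamp_jasper/features/output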
