Kubernetes container args behave incorrectly - docker

I want to use Kubernetes and the postman/newman Docker image to execute my API tests.
Locally, I can execute the image with
docker run postman/newman run <url-to-collection> --env-var baseUrl=<local-hostname>
I include the image in a Kubernetes manifest file:
spec:
  containers:
  - name: newman
    image: postman/newman:latest
    args:
    - run
    - '<url-to-collection>'
    - --env-var baseUrl=<kubernetes-hostname>
When I apply the manifest and look at the logs of the container, I get the following error:
error: unknown option '--env-var baseUrl=<kubernetes-hostname>'
I tried out many things with quotes, and using the command section instead of the args section, but always with the same result.
I figure that Kubernetes somehow builds the command in a way that the newman executable cannot understand.
However, I could not find any information about that.
(I also created an issue in the GitHub repo of Newman here)
Could anybody explain to me where this problem comes from and how I might solve this?
Thanks anyway!

Linux commands are made up of a sequence of words. If you type a command at a shell prompt, it takes responsibility for splitting it into words for you, but in the Kubernetes args: list, you need to split out the words yourself.
args:
- run
- '<url-to-collection>'
# You want these as two separate arguments, so they need to be
# two separate list items
- --env-var
- baseUrl=<kubernetes-hostname>
If the two arguments are in the same list item, they are a single "word":
# /bin/ls "/app/some directory/some file"
command:
- /bin/ls
# a single argument, including embedded spaces
- /app/some directory/some file
The same basic rules apply to Docker Compose entrypoint: and command: and to the JSON-syntax Dockerfile ENTRYPOINT and CMD directives, except that the Docker-native forms also accept a plain string that they split on spaces (using a shell in the Dockerfile case, but not in the Compose case).
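As an illustration, the two Dockerfile forms for the question's newman invocation would look roughly like this (placeholders kept as in the question, not real values):

```dockerfile
# Shell form: the whole string is handed to /bin/sh -c, which splits it into words
CMD newman run <url-to-collection> --env-var baseUrl=<kubernetes-hostname>

# Exec (JSON) form: no shell is involved, so each word must be listed
# separately, exactly like the items of a Kubernetes args: list
CMD ["newman", "run", "<url-to-collection>", "--env-var", "baseUrl=<kubernetes-hostname>"]
```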
In the docker run command you provide, the shell on your host system processes it first, so the --env-var option and baseUrl=... argument get split into separate words before they're passed into the container.
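You can observe the splitting directly in a shell, without Docker or Kubernetes involved. A minimal sketch (count_args is a throwaway helper made up for this demo):

```shell
#!/bin/sh
# count_args just reports how many separate arguments (argv entries) it received.
count_args() { echo "$#"; }

# Two unquoted words become two arguments, like two separate args: list items.
count_args --env-var baseUrl=example.com    # prints 2

# One quoted word stays one argument, like a single args: list item with an
# embedded space -- which is exactly the "word" newman rejected.
count_args '--env-var baseUrl=example.com'  # prints 1
```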

How to append a command to docker-compose.yml without overriding the existing one?

In docker-compose.yml I have a service
app:
  image: yiisoftware/yii2-php:7.1-apache
  volumes:
  - ~/.composer-docker/cache:/root/.composer/cache:delegated
  - ./:/app:delegated
  ports:
  - '80:80'
  depends_on:
  - db
I want to execute the command yii migrate --interactive=0 when the container is started. But if I just add the line
command: "yii migrate --interactive=0"
it overrides the command that is already specified in the yiisoftware/yii2-php:7.1-apache Dockerfile. How can I append a command instead of replacing it? Is that possible?
I have already googled this problem, but the most popular solution is "create your own Dockerfile". Can I solve this without creating or modifying a Dockerfile or shell script, using only docker-compose?
To do this, you would have to define your own ENTRYPOINT in your docker-compose.yml; besides building your own Dockerfile, there is no way around this.
However much I search, I cannot find a CMD instruction in this image's Dockerfile, so an ENTRYPOINT is probably what is being used.
I'm not familiar with PHP, so I cannot estimate what you would have to change to run a container with your specific needs, but these are the points you should have a look at.
You can include the pre-existing command in your new command. Everything in those commands can be called from sh or bash, so look up the existing command by running:
docker image inspect yiisoftware/yii2-php:7.1-apache
Then check the command in Config { Command }. (Not ContainerConfig.) So if the existing command had been:
"start-daemon -z foo-bar"
then your new command would be:
sh -c "start-daemon -z foo-bar; yii migrate --interactive=0"
Now you'll have the existing command plus your new command and the desired effect.
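In the docker-compose.yml from the question, that would look something like the following sketch (assuming the inspected command really was the hypothetical start-daemon -z foo-bar from above):

```yaml
app:
  image: yiisoftware/yii2-php:7.1-apache
  # wrap both the original command and the migration in one shell invocation
  command: sh -c "start-daemon -z foo-bar; yii migrate --interactive=0"
```

Note that this replaces the image's CMD with an equivalent shell wrapper rather than truly appending to it.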

How to use multiple image tags with docker-compose

According to this and this GitHub issue, there is currently no native way to supply multiple tags for a service's image when using docker-compose to build one or more images.
My use case for this would be to build the images defined in a docker-compose.yml file and tag them once with some customized tag (e.g. a build number or date or similar) and once as latest.
While this is easy to achieve with plain docker using docker tag, docker-compose only allows setting a single tag in the image key. Using docker tag together with docker-compose is not an option for me, since I want to keep all my docker-related definitions in the docker-compose.yml file and not copy them over into my build script.
What would be a decent workaround for setting multiple tags with docker-compose, without having to hardcode or copy the image names first?
I have a nice and clean solution using environment variables (bash syntax for the default variable value; in my case it is latest, but you can use anything). This is my compose file:
version: '3'
services:
  app:
    build: .
    image: myapp-name:${version:-latest}
Build and push (if you need to push to the registry) with the default tag, then change the version using the environment variable and build and push again:
docker-compose build
docker-compose push
export version=0.0.1
docker-compose build
docker-compose push
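The ${version:-latest} fallback uses the default-value syntax that Compose borrows from shell parameter expansion, which is easy to try on its own:

```shell
#!/bin/sh
# ${version:-latest} expands to $version if it is set and non-empty,
# and to the literal string "latest" otherwise.
unset version
echo "myapp-name:${version:-latest}"   # prints myapp-name:latest

version=0.0.1
echo "myapp-name:${version:-latest}"   # prints myapp-name:0.0.1
```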
You can also take the following approach:
# build is your actual build spec
build:
  image: myrepo/myimage
  build:
    ...
    ...
# these extend from build and just add new tags, statically or from environment variables
version_tag:
  extends: build
  image: myrepo/myimage:v1.0
some_other_tag:
  extends: build
  image: myrepo/myimage:${SOME_OTHER_TAG}
You can then just run docker-compose build and docker-compose push, and you will build and push the correct set of tagged images.
I came up with a couple of work-arounds of different complexity. They all rely on the assumption that ${IMAGE_TAG} stores the customized tag that represents e.g. a build no. and we want to tag all services' images with this tag as well as with latest.
grep the image names from the docker-compose.yml file
images=$(grep 'image: ' docker-compose.yml | cut -d':' -f2 | tr -d ' "')
for image in $images
do
  docker tag "${image}":"${IMAGE_TAG}" "${image}":latest
done
However, this is error-prone if somebody adds a comment in docker-compose.yml which would e.g. look like # Purpose of this image: do something useful....
Build twice
Use ${IMAGE_TAG} as an environment variable in your docker-compose.yml file as described here in the first example.
Then, simply run the build process twice, each time substituting ${IMAGE_TAG} with a different value:
IMAGE_TAG="${IMAGE_TAG}" docker-compose build
IMAGE_TAG=latest docker-compose build
The second build process should be much faster than the first one since all image layers should still be cached from the first run.
A drawback of this approach is that it will flood your log output with two subsequent build processes for each service, which might make it harder to search through for something useful.
Besides, if you have any command in your Dockerfile which always flushes the build cache (e.g. an ADD command fetching from a remote location with auto-updating last-modified headers, adding files which are constantly updated by an external process etc.) then the extra build might slow things down significantly.
Parse image names from the docker-compose.yml file with some inline Python code
Using a real YAML parser in Python (or any other language such as Ruby or Perl, or whatever is installed on your system) is more robust than the grep approach mentioned first, since it will not get confused by comments or strange but valid ways of writing the yml file.
In Python, this could look like this:
images=$(python3 <<-EOF # make sure below to indent with tabs, not spaces; or omit the "-" before "EOF" and use no indentation at all
import yaml
content = yaml.safe_load(open("docker-compose.build.yml"))
services = content["services"].values()
image_names = (service["image"].split(":")[0] for service in services)
print("\n".join(image_names))
EOF
)
for image in ${images}
do
  docker tag "${image}:${IMAGE_TAG}" "${image}:latest"
done
A drawback of this approach is that the machine executing the build has to have Python 3 installed, along with the PyYAML library. As already mentioned, this pattern could similarly be used with Python 2 or any other programming language that is installed.
Get image names with combination of some docker commands
The following approach using some native docker and docker-compose commands (using go-templates) is a bit more complex to write but also works nicely.
# this should be set to something unique in order to avoid conflicts with other running docker-compose projects
compose_project_name=myproject.tagging
# create containers for all services without starting them
docker-compose --project-name "${compose_project_name}" up --no-start
# get image names without tags for all started containers
images=$(docker-compose --project-name "${compose_project_name}" images -q | xargs docker inspect --format='{{ index .RepoTags 0}}' | cut -d':' -f1)
# iterate over images and re-tag
for image in ${images}
do
docker tag "${image}":"${IMAGE_TAG}" "${image}":latest
done
# clean-up created containers again
docker-compose --project-name "${compose_project_name}" down
While this approach does not have any external dependencies and is safer than the grep method, it might take a few extra seconds on large setups to create and remove the containers (typically not an issue, though).
As suggested by @JordanDeyton, extends can no longer be used in Compose file format > 3, and the extension fields capability added in version 3.4 can replace it to achieve the same goal. Here is an example.
version: "3.4"

# Define common behavior
x-ubi-httpd: &default-ubi-httpd
  build: ubi-httpd
  # Other settings can also be shared
  image: ubi-httpd:latest

# Define one service per wanted tag
services:
  # Use the extension as is
  ubi-httpd_latest: *default-ubi-httpd
  # Override the image tag
  ubi-httpd_major:
    <<: *default-ubi-httpd
    image: ubi-httpd:1
  ubi-httpd_minor:
    <<: *default-ubi-httpd
    image: ubi-httpd:1.0
  # Using an environment variable defined e.g. in a .env file
  ubi-httpd_patch:
    <<: *default-ubi-httpd
    image: "ubi-httpd:${UBI_HTTPD_PATCH}"
Images can now be built with all the defined tags:
$ docker-compose build
# ...
$ docker images | grep ubi-httpd
# ubi-httpd 1 8cc412411805 3 minutes ago 268MB
# ubi-httpd 1.0 8cc412411805 3 minutes ago 268MB
# ubi-httpd 1.0.1 8cc412411805 3 minutes ago 268MB
# ubi-httpd latest 8cc412411805 3 minutes ago 268MB
There is now a built-in solution using buildx bake, released in v0.7.0.
This feature was implemented following my suggestion in https://github.com/docker/buildx/issues/396.
Docker comes with buildx bundled; however, if you are on a Mac running Docker Desktop, the bundled buildx version is older at the time of writing, and you will need to install the correct version of buildx in addition to Docker.
Add the x-bake extension field to your docker-compose.yaml:
version: '3.9'
services:
  my-app:
    image: my-repo/my-image:latest
    build:
      context: .
      dockerfile: Dockerfile
      x-bake:
        tags:
        - my-repo/my-image:${MY_TAG_1}
        - my-repo/my-image:${MY_TAG_2}
        - my-repo/my-image:${MY_TAG_3}
        - my-other-repo/my-image:${MY_TAG_1}
        - my-other-repo/my-image:${MY_TAG_2}
        - my-other-repo/my-image:${MY_TAG_3}
To build and tag the image run:
buildx bake --load
To build, tag, and push the image to the repository, or even to multiple repositories:
buildx bake --push

Variable substitution not working on Windows 10 with docker compose

I'm wondering if I've stumbled on a bug or if there's something not properly documented about variable substitution on Windows in combination with Docker Machine and Compose (the installed version of Docker is 1.11.1).
If I run the "docker-compose up" command for a yml file that looks like this:
volumes:
- ${FOOBAR}/build/:/usr/share/nginx/html/
If this variable doesn't exist, docker-compose will correctly complain about it:
The foobar variable is not set. Defaulting to a blank string.
However, when I change it to an existing environment variable:
volumes:
- ${PROJECT_DIR}/build/:/usr/share/nginx/html/
It will then not start the container properly and displays the following error (trying to access the nginx container will give you a host-is-unreachable message):
ERROR: for nginx rpc error: code = 2 desc = "oci runtime error: could not synchronise with container process: not a directory"
If I run the echo command in the Docker Quickstart Terminal it will output the correct path that I've set in the environment variable. If I replace the ${PROJECT_DIR} with the environment variable value the container runs correctly.
I get the same type of error message if I try to use the environment variable for the official php image instead of the official nginx image. In both cases the docker compose file works if I substitute ${PROJECT_DIR} text with the content of the environment variable.
So is this a bug or am I missing something?
After some mucking about I've managed to get the containers to start correctly without error messages if I use the following (contains the full path to the local files):
volumes:
- ${PROJECT_DIR}:/usr/share/nginx/html/
The nginx container is then up and running, though it cannot find the files anymore. If I replace the variable with the path it contains, it can find the files again.
The above behaviour isn't consistent. When I added a second environment variable for substitution, it gave the oci runtime error, kept giving it when I removed that second variable, and only started working again when I also removed the first variable. After that it suddenly accepted ${PROJECT_DIR}/build/, but still without finding files.
Starting a bash session in the nginx container shows that the mount point for the volume contains no files.
I'm really at a loss as to what docker is doing here and what it expects from me, especially as I have no idea what it expands the variables in the compose file to.
In the end, the conclusion is that variable substitution is too quirky on Windows with Docker Machine to be useful. However, there is an alternative to variable substitution.
If you need a docker environment that does the following:
Can deploy on different computers that don't run the same OS
Doesn't care if the host uses Docker natively or via VirtualBox (this can require path changes)
Then your best bet is to use extending.
First you create the docker-compose.yml file that contains the images you'll need, for example a PHP image with MySQL:
php:
  image: php:5.5-apache
  links:
  - php_db:mysql
  - maildev:maildev
  ports:
  - 8080:80
php_db:
  image: mariadb
  ports:
  - 3306:3306
  environment:
    MYSQL_ROOT_PASSWORD: examplepass
You might notice that there aren't any volumes defined in this docker-compose file. That is something we're going to define in a file called docker-compose.override.yml:
php:
  volumes:
  - /workspaces/Eclipse/project/:/var/www/html/
When you have both files in one directory, docker-compose does something interesting: it combines them into one, adding or overwriting settings in docker-compose.yml with those present in docker-compose.override.yml.
Then when running the command docker-compose up it will result in a docker run that is configured for the machine you're working on.
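For the two files above, the effective configuration after merging would be roughly this sketch (unchanged keys abbreviated in comments):

```yaml
php:
  # image, links and ports are taken unchanged from docker-compose.yml
  volumes:  # contributed by docker-compose.override.yml
  - /workspaces/Eclipse/project/:/var/www/html/
php_db:
  # unchanged from docker-compose.yml
  image: mariadb
```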
You can get similar behaviour with custom file names if you change your docker-compose command a little:
docker-compose -f docker-compose.yml -f docker-compose.conf.yml up
The detail is that docker-compose can accept multiple compose files and will combine them into one; this happens from left to right.
Both methods allow you to create a basic compose file that configures the containers you need. You can then override or add the settings you need for the specific computer you're running docker on.
The page Overview of docker-compose CLI has more details on how these commands work.

TravisCI/Docker: parameterized start of docker containers with matrix feature

I have a piece of software that should be tested against a series of WebDAV backends that are available as Docker containers. The lame approach is to start all containers within the before_install section like
before_install:
- docker run image1
- docker run image2
- ...
This does not make much sense and wastes system resources, since I only need one particular docker container running as part of a test run.
My test configuration uses a matrix... is it possible to configure the docker image to be run using an environment variable as part of the matrix specs?
This boils down to two questions:
Can I use environment variables inside steps of the before_install section?
Is the 'matrix' evaluated before the before_install section, in order to make use of environment variables defined inside the matrix?
The answer to both of your questions is yes.
I have been able to build independent Dockerfiles using the matrix configuration. A sample .travis.yml might look like:
sudo: required
services:
- docker
env:
- DOCKERFILE=dockerfile-1
- DOCKERFILE=dockerfile-2
before_install:
- docker build -f $DOCKERFILE .
In this case there would be two independent runs, each building a separate image. You could also use a docker pull command if your images are on Docker Hub.

How to ignore some containers when I run `docker-compose rm`

I have four containers: node, redis, mysql, and data. When I run docker-compose rm, it removes all of my containers, including the data container. My MySQL data is in that container and I don't want to remove it.
Why must I remove the containers?
Sometimes I must change some configuration files of node and mysql and rebuild, so I must remove the containers and start again.
I have searched with Google over and over and got nothing.
As things stand, you need to keep your data containers outside of Docker Compose for this reason. A data container shouldn't be running anyway, so this makes sense.
So, to create your data-container do something like:
docker run --name data mysql echo "App Data Container"
The echo command will complete and the container will exit immediately, but as long as you don't docker rm the container you will still be able to use it in --volumes-from commands, so you can do the following in Compose:
db:
  image: mysql
  volumes_from:
  - data
And just remove any code in docker-compose.yml to start up the data container.
An alternative to docker-compose, written in Go (https://github.com/michaelsauter/crane), lets you create container groups -- including overriding the default group, so that you can ignore your data containers when rebuilding your app.
Given you have a "crane.yaml" with the following containers and groups:
containers:
  my-app:
    ...
  my-data1:
    ...
  my-data2:
    ...
groups:
  default:
  - "my-app"
  data:
  - "my-data1"
  - "my-data2"
You can build your data containers once:
# create your data-only containers (safe to run several times)
crane provision data # needed when building from Dockerfile
crane create data
# build/start your app
crane lift -r # similar to docker-compose build && docker-compose up
# force re-creation of your data-only containers...
crane create --recreate data
PS! Unlike docker-compose, even if building from Dockerfile, you MUST specify an "image" -- when not pulling, this is the name docker will give the image locally! Also note that the container names are global, and not prefixed by the folder name the way they are in docker-compose.
Note that there is at least one major pitfall with crane: it simply ignores misplaced or wrongly spelled fields! This makes it harder to debug than docker-compose YAML.
@AdrianMouat Now I can specify a *.yml file when starting all containers with the new version 1.2rc of docker-compose (https://github.com/docker/compose/releases), like follows:
File: data.yml
data:
  image: ubuntu
  volumes:
  - "/var/lib/mysql"
Thanks for your very useful answer!
