How to run multiple postman/newman collections in a docker container

I am trying to run multiple Postman collections in a Docker container. Since Postman doesn't offer a way to run a folder containing multiple collections, I tried to run multiple commands in Docker. Nothing worked.
This is my Dockerfile:
FROM postman/newman_alpine33
COPY . .
ENTRYPOINT ["sh", "-c"]
And this is my postman container in docker-compose:
postman:
  build:
    context: ./
    dockerfile: Dockerfile-postman
  container_name: postmanTests
  command:
    run "https://www.getpostman.com/collections/1" --env-var "base_url=http://service:8080" &&
    run "https://www.getpostman.com/collections/2" --env-var "base_url=http://service:8080"
  volumes:
    - container-volume:/var/lib/postman
  depends_on:
    - service
I tried using sh -c and bash -c, but I got an error that -c is invalid.

Your ENTRYPOINT line is causing problems. It forces the entire command you might want to run to be packed into a single shell word. Compose command: doesn't expect this, and I'd expect your container to try to execute the shell command run with no arguments.
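Concretely, after Compose splits that command: string into words, the container ends up invoking something roughly like this (an illustration, not exact output):
sh -c run "https://www.getpostman.com/collections/1" --env-var "base_url=http://service:8080" '&&' run ...
# sh takes "run" as its entire -c script; the remaining words only fill
# the positional parameters $0, $1, ..., so the shell tries to execute
# a command named "run" with no arguments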
One straightforward path here could be to just run multiple containers. Looking at the documentation for the postman/newman_alpine33 image, it supports injecting the Postman data via a Docker mount. Compose is also a little better suited to long-running containers than to short-lived tasks like this one. So I might run:
# --rm cleans up the container when done; --net attaches it to the
# Compose network; everything after the unmodified image name is
# passed to newman as options
docker run \
  --rm \
  --net name_default \
  -v name_container-volume:/var/lib/postman \
  postman/newman_alpine33 \
  --url "https://www.getpostman.com/collections/1" \
  --env-var "base_url=http://service:8080"
docker run --rm --net name_default \
  -v name_container-volume:/var/lib/postman \
  postman/newman_alpine33 \
  --url "https://www.getpostman.com/collections/2" \
  --env-var "base_url=http://service:8080"
(You could use a shell script to reduce the repetitive options, or alternatively use docker-compose run if you can write a service definition based on this image.)
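Such a wrapper might look like this minimal sketch; the name_default network and name_container-volume volume assume a Compose project named name, as above:
#!/bin/sh
# Hypothetical helper: runs one collection URL through the stock image.
run_collection() {
  docker run --rm --net name_default \
    -v name_container-volume:/var/lib/postman \
    postman/newman_alpine33 \
    --url "$1" \
    --env-var "base_url=http://service:8080"
}

run_collection "https://www.getpostman.com/collections/1"
run_collection "https://www.getpostman.com/collections/2"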
If you really want to do this in a single container, it's helpful to understand what the base image is actually doing with those command arguments. The Docker Hub page links to a GitHub repository and you can in turn find the Dockerfile there. That ends with the line
ENTRYPOINT ["newman"]
so the docker run command part just supplies arguments to newman. If you want to run multiple things in the same container, and orchestrate them using a shell, you need to replace this entrypoint, and you need to explicitly restate the newman command.
For this we could do everything in a Compose setup, and that makes some sense since the Postman collections are "data" and the URLs are environment-specific. Note here that we override the entrypoint: at run time, and its value has exactly three items, sh, -c, and the extended command to be run packed into a single string.
services:
  postman:
    image: postman/newman_alpine33 # do not build: a custom image
    volumes:
      - container-volume:/var/lib/postman
      - .:/etc/newman # inject the collections
    entrypoint: # override `newman` from the image
      - /bin/sh
      - -c
      - >-
        newman
        --url "https://www.getpostman.com/collections/1"
        --env-var "base_url=http://service:8080"
        && newman
        --url "https://www.getpostman.com/collections/2"
        --env-var "base_url=http://service:8080"
    depends_on:
      - service
(The >- syntax creates a YAML block scalar; the text below it is a single string. > folds the newlines within the block into spaces, and - strips the trailing newline. I feel like I see this specific construction somewhat regularly in a Kubernetes context.)
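As a standalone illustration of that folding (a made-up snippet, separate from the Compose file above), both keys below hold the identical one-line string:
# the folded block scalar collapses to a single line
folded: >-
  newman
  --url "https://www.getpostman.com/collections/1"
plain: 'newman --url "https://www.getpostman.com/collections/1"'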

Related

docker-compose named volume with one file: ERROR: Cannot create container for service, source is not directory

I am trying to make the binary file /bin/wkhtmltopdf from the container wkhtmltopdf available in the web container. I try to achieve this with a named volume.
I have the following docker container setup in my docker-compose.yml:
services:
  web:
    image: php:7.4-apache
    command: sh -c "mkdir -p /usr/local/bin && touch /usr/local/bin/wkhtmltopdf"
    entrypoint: sh -c "exec 'apache2-foreground'"
    volumes:
      - wkhtmltopdfvol:/usr/local/bin/wkhtmltopdf
  wkhtmltopdf:
    image: madnight/docker-alpine-wkhtmltopdf
    command: sh -c "touch /bin/wkhtmltopdf"
    entrypoint: sh -c "tail -f /dev/null" # workaround to keep container running
    volumes:
      - wkhtmltopdfvol:/bin/wkhtmltopdf
volumes:
  wkhtmltopdfvol:
However, I get the following error when running docker-compose up:
ERROR: for wkhtmltopdf Cannot create container for service wkhtmltopdf:
source /var/lib/docker/overlay2/42e7082b8024ae4ebb13a4f0003a9e17bc18b33ef0677431dd002da3c21dde88/merged/bin/wkhtmltopdf is not directory
Does that mean that I can't share one file between containers but only directories through a named volume? How do I achieve this?
Edit: I also noticed that /usr/local/bin/wkhtmltopdf inside the web container is a directory and not a file as I expected.
It can be tricky to share binaries between containers like this. Volumes probably aren't the mechanism you're looking for.
If you look at the Docker Hub page for the php image you can see that php:7.4-apache is an alias for (currently) php:7.4.15-apache-buster, where "Buster" is the name of a Debian release. You can then search on https://packages.debian.org/ to discover that Debian has a prepackaged wkhtmltopdf package. You can install this using a custom Dockerfile:
FROM php:7.4-apache
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --assume-yes --no-install-recommends \
      wkhtmltopdf
# COPY ...
# Base image provides EXPOSE, CMD
Then your docker-compose.yml file needs to build this image:
version: '3.8'
services:
  web:
    build: .
    # no image:, volumes:, or command: override
Just in terms of the mechanics of sharing binaries like this, you can run into trouble where a binary needs a shared library that's not present in the target container; the apt-get install mechanism handles this for you. There can also be trouble if the containers have different shared-library ecosystems (especially with Alpine-based images), or if you try to reuse host binaries from a different operating system.
The Compose file you show mixes several concepts in a way that doesn't really work. A named volume is always a directory, so trying to mount it over the /bin/wkhtmltopdf file in the second container causes the error you see. There's also an ordering issue around which container starts first and gets to populate the volume. A container only runs a single command, and if you have both entrypoint: and command: then the command is passed as extra arguments to the entrypoint (and if the entrypoint is an sh -c ... invocation, effectively ignored).
If you really wanted to try this approach, you should make web: {depends_on: [wkhtmltopdf]} to force the dependency order. The second container should mount the volume somewhere else, it probably shouldn't have an entrypoint:, and it should do something like command: cp -a /bin/wkhtmltopdf /export. (It will exit immediately once this cp finishes, but that shouldn't matter.) The first container can then mount the volume on, say, /usr/local/bin, and not specially set command: or entrypoint:. There will still be a minor race condition (you're not guaranteed the cp command will complete before Apache starts) but it probably wouldn't be a practical problem.
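Sketched out, that approach might look like the following; the /export staging directory is a made-up name for illustration:
version: '3.8'
services:
  web:
    image: php:7.4-apache
    depends_on:
      - wkhtmltopdf
    volumes:
      - wkhtmltopdfvol:/usr/local/bin
  wkhtmltopdf:
    image: madnight/docker-alpine-wkhtmltopdf
    entrypoint: []                           # clear the image's own entrypoint, if it has one
    command: cp -a /bin/wkhtmltopdf /export  # container exits once the copy is done
    volumes:
      - wkhtmltopdfvol:/export
volumes:
  wkhtmltopdfvol: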

How to put command of build docker container to dockerfile?

I have this script: docker run -it -p 4000:4000 bitgosdk/express:latest --disablessl -e test
How do I put this command into a Dockerfile with its arguments?
FROM bitgosdk/express:latest
EXPOSE 4000
???
I went through your image's Dockerfile contents.
The command running inside container is:
/ # ps -ef | more
PID   USER     TIME   COMMAND
  1   root     0:00   /sbin/tini -- /usr/local/bin/node /var/bitgo-express/bin/bitgo-express --disablessl -e test
The command looks like this because the entrypoint set in the Dockerfile is ENTRYPOINT ["/sbin/tini", "--", "/usr/local/bin/node", "/var/bitgo-express/bin/bitgo-express"], and the arguments --disablessl -e test are the ones provided when running the docker run command.
The --disablessl -e test arguments can be set inside your Dockerfile using CMD:
CMD ["--disablessl", "-e", "test"]
New Dockerfile:
FROM bitgosdk/express:latest
EXPOSE 4000
CMD ["--disablessl", "-e", "test"]
Refer to this to learn the difference between ENTRYPOINT and CMD.
You don't.
This is what docker-compose is used for.
i.e. create a docker-compose.yml with contents like this:
version: "3.8"
services:
test:
image: bitgodsdk/express:latest
command: --disablessl -e test
ports:
- "4000:4000"
and then execute the following in a terminal to access the interactive terminal for the service named test.
docker-compose run test
Even if @mchawre's answer seems to directly answer the OP's question "syntactically speaking" (as a Dockerfile was asked for), a docker-compose.yml is definitely the way to go to make a docker run command, as custom as it might be, reproducible in a declarative way (a YAML file).
Just to complement @ChrisBecke's answer, note that the writing of this YAML file can be automated. See, e.g., the FOSS tool (under MIT license) https://github.com/magicmark/composerize
FTR, the snippet below was automatically generated from the following docker run command, using the accompanying webapp https://composerize.com/:
docker run -it -p 4000:4000 bitgosdk/express:latest
version: '3.3'
services:
  express:
    ports:
      - '4000:4000'
    image: 'bitgosdk/express:latest'
I omitted the CMD arguments --disablessl -e test on purpose, as composerize does not seem to support these extra arguments. This may sound like a bug (and FTR a related issue is open), but meanwhile it might just be viewed as a feature, in line with @DavidMaze's comment…

Container doesn't have code changes, image does

Yes, this question has been posted before. Yes, I've applied the relevant solutions and ignored the irrelevant ones. I just don't understand what is going on here.
Problem: I make a change to my source code by modifying the default return message. I COPY the src tree into my baseImage (multi-stage builds). I build a stagingImage from the baseImage. The baseImage has the code change, but the container running the stagingImage returns the pre-code-change message. I've replicated this behavior on macOS (Catalina) and Amazon Linux 2.
Note that I am doing this manually, i.e. I'm not relying on IDEs, inotify-based tools and the like. The only tools involved are the CLI, vim, make and docker(-compose).
Details: I have a Dockerfile that is capable of doing multi-stage builds. Here's the relevant part of the baseImage build:
COPY ./composer.json /var/www/html/composer.json
COPY ./php-apache/auth.json /root/.composer
COPY ./ /var/www/html
and here's a sample of how I build my other images:
FROM php:7.4-apache AS cleanImage
COPY --from=baseImage / /
FROM cleanImage AS stagingImage
COPY ./.env.staging /var/www/html/.env
RUN /bin/bash -c 'rm -rf /var/lib/apt/lists/*'
# Set entrypoint
CMD ["-D", "FOREGROUND"]
ENTRYPOINT ["apachectl"]
I modified the Makefile from Chapter 10 of "Docker In Action". Here's a sample:
## image-base : Build the base image that the others are based on
.PHONY: image-base
image-base: metadata
	@echo "Building Base Image"
	docker image build --no-cache --force-rm --tag src:$(BUILD_ID)-base \
		-f src/Dockerfile \
		--target baseImage \
		--build-arg BUILD_ID='$(BUILD_ID)' \
		--build-arg BUILD_DATE='$(BUILD_TIME_RFC_3339)' \
		--build-arg VCS_REF='$(VCS_REF)' \
		./src
	@echo "Built Base Image. BUILD_ID: $(BUILD_ID)"

## image-staging : Build the staging image
.PHONY: image-staging
image-staging: metadata image-base
	@echo "Building Staging App Image"
	docker image build -t src:$(BUILD_ID)-staging \
		-f src/Dockerfile \
		--target=stagingImage \
		--build-arg BUILD_ID='$(BUILD_ID)' \
		--build-arg BUILD_DATE='$(BUILD_TIME_RFC_3339)' \
		--build-arg VCS_REF='$(VCS_REF)' \
		./src
	@echo "Built Staging App Image. BUILD_ID: $(BUILD_ID)-staging"

## up env=<env> : Bring up environments. env values are prod, local, staging.
.PHONY: up
up:
ifeq ($(strip $(BUILD_ID)),)
	$(error BUILD_ID environment variable is not set. Run `make metadata` to generate one)
endif
	docker-compose -f docker-compose.yml -f docker-$(env).yml up -d
Where BUILD_ID is of the form YYYYMMDD-epoch-git_SHA. Note that the baseImage uses the --no-cache flag.
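For what it's worth, a metadata target could generate an ID of that shape with something along these lines; this is a guess at the mechanism, not the book's actual recipe:
# YYYYMMDD-epoch-short_git_SHA
BUILD_ID="$(date +%Y%m%d-%s)-$(git rev-parse --short HEAD)"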
So far, so good (I think).
My docker-compose.yml file looks like this:
version: '3.7'
volumes:
  web_app:
services:
  php-apache:
    logging:
      driver: "json-file"
      options:
        max-file: "5"
        max-size: "10m"
    volumes:
      - web_app:/var/www/html
    env_file:
      - ./src/.env.prod
and my docker-staging.yml file looks like this:
version: '3.7'
services:
  php-apache:
    image: 'src:20200924-174443-3f16358-staging'
    container_name: perfboard-staging
    ports:
      - 8888:80
      - 9261:443
    env_file:
      - ./src/.env.staging
Yes, I've hardcoded the name of the stagingImage for the purposes of debugging.
What I expect to see: when I hit localhost:8888 I expect to see my modified message. I do not.
When inspecting the baseImage, the modified message is there. I cannot directly inspect the stagingImage because I keep getting an Apache error presumably because of the entrypoint.
If I delete every image and container from my system, this behaves as expected.
Deleting the specific baseImage and stagingImage above does not fix the problem.
Any ideas on where to look?
Your docker-compose.yml file specifies
volumes:
  - web_app:/var/www/html
That causes the contents of the web_app volume to be mounted over the /var/www/html directory in the container, hiding whatever was initially in the image.
You should delete this volume declaration.
The first time (and only the first time) you run the container, Docker copies the contents of the image into the empty named volume. From that point onward, it treats the volume as user data and never makes any changes to it; even if the underlying image is updated, the volume contents are not, and when you run the container, the volume takes precedence over the updated image code.
There are a number of practical problems with depending on the "copy from an image into a volume" behavior (it doesn't work with host bind mounts; it doesn't work on Kubernetes; it causes image updates to be ignored) and I'd try very hard to avoid mounting any sort of volume over your application code.
If you think it's important to have the volume for some other reason, you need to cause Compose to delete it, so that it will get recreated and have the "first time only" behavior again. docker-compose down -v will delete all containers, networks, and volumes, but that will also include things like your database data. You might be able to use docker-compose down (without -v) to stop the container, then use docker volume ls; docker volume rm dirname_web_app to manually delete the volume.
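In concrete terms, that manual sequence might look like this; the volume's actual name depends on your project directory:
docker-compose down               # stop and remove containers; named volumes survive
docker volume ls                  # find the volume's exact name
docker volume rm dirname_web_app  # delete it so it gets recreated empty
docker-compose up -d              # on restart, Docker repopulates it from the image once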

Cifs mount in docker container with docker-compose

I want to CIFS-mount a directory into a docker container.
As there are solutions to this, I tried the --privileged flag and setting the needed capabilities:
docker-compose.yaml:
version: '2.0'
services:
  mounttest:
    image: test
    privileged: true
    cap_add:
      - SYS_ADMIN
      - DAC_READ_SEARCH
    restart: unless-stopped
    container_name: test
    mem_limit: 500m
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/apps/docker-test/
Dockerfile:
FROM ubuntu:18.04
ADD . /apps/docker-test/

# APT-GET
RUN apt-get update && apt-get install -y \
    sudo \
    cifs-utils

# CHMOD SHELL SCRIPTS
RUN chmod 0755 /apps/docker-test/run.sh
RUN chmod 0755 /apps/docker-test/build.sh
RUN /apps/docker-test/build.sh

CMD bash /apps/docker-test/run.sh
build.sh:
mkdir -p /test_folder
echo "Mount"
sudo mount -t cifs -o username=XXX,password=XXX,workgroup=XX //server/adress$ /test_folder
run.sh starts a Python script.
This does not work, instead:
docker-compose build
gives me the error:
Unable to apply new capability set
All the solutions I found only mention the privileged flag or capabilities, which are set. Can anyone help?
This error happens because you're trying to mount a device inside the build step. At that point the capabilities aren't available to the build container; support for this is rolling out as a BuildKit flag that disables security for a build step, rather than one that enables specific capabilities at build time.
The usual way to handle this is to have your CIFS share already mounted when you start the build process: that way the build doesn't expose any credentials, devices or mount points, and it's easier for Docker to track changes and react to them (since the build process works hard to cache everything before building it).
If you still want to do that, you'll need a few extra steps to enable the insecure flags from both the buildkitd and the docker buildx:
Mind that, as of today (2020-09-09), the support is still experimental and unforeseen consequences can happen.
Ensure that you're using docker version 19.03 or later.
Enable the experimental features, by adding the key "experimental":"enabled" to your ~/.docker/config.json
Create and use a builder that has the security.insecure entitlement enabled:
docker buildx create --driver docker-container --name local \
--buildkitd-flags '--allow-insecure-entitlement security.insecure' \
--use
Change your Dockerfile to use experimental syntax by adding before your first line:
# syntax = docker/dockerfile:experimental
Change the Dockerfile instruction so it runs the code without security constraints:
RUN --security=insecure /apps/docker-test/build.sh
Build your docker image using the BuildKit and the --allow security.insecure flag:
docker buildx build --allow security.insecure .
That way your build will be able to break free of the security constraints.
I must reiterate that this is not a recommended practice, for a few reasons:
It exposes the build step as a place where other images could exploit that permission hole to escalate privileges.
The builder cannot properly cache that layer, since it uses insecure features.
Keep that in mind and happy mounting :)
The answer I found is to put the mount command into the run.sh file. Since the command (or CMD) in the Dockerfile is only executed when running
docker-compose up
the mount only happens after the build, done beforehand, has already finished.
Therefore, the mount command is executed right before the Python script starts.
In my case, that only worked with the privileged flag set to true.
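For illustration, a run.sh along these lines could work; the Python script's path is a placeholder, and the container still needs privileged: true (or the capabilities) at run time:
#!/bin/sh
# Hypothetical run.sh sketch: mount the share first, then hand off to the app.
mkdir -p /test_folder
mount -t cifs -o username=XXX,password=XXX,workgroup=XX "//server/adress$" /test_folder
exec python3 /apps/docker-test/main.py   # placeholder script path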

Creating custom build image for AWS CodeBuild with docker-compose

I'm trying to create a custom docker image in order to use it as a build image with AWS CodeBuild. It works fine if I just do docker build against a Dockerfile with the environment set up. But now I need to add a postgres instance to run the tests against, so I thought using docker-compose would do the trick. However, I'm failing to figure out how to make it work. It seems like the static part of the composition (the image from the Dockerfile) just stops right away when I try docker-compose up, since there is no entrypoint. At this point I can connect to the db instance by running docker-compose run db psql -h db -U testdb -d testdb. But when I build the image and feed it to the script provided by AWS, it runs fine until my tests try to reach the DB server, where it fails with a timeout, as if there were no db instance.
Configs look like this:
version: '3.7'
services:
  api-build:
    tty: true
    build: ./api_build
    image: api-build
    depends_on:
      - db
  db:
    image: postgres:10-alpine
    restart: always
    environment:
      POSTGRES_USER: testdb
      POSTGRES_PASSWORD: testdb
And Dockerfile under ./api_build:
FROM alpine:3.8
FROM ruby:2.3-alpine as rb
RUN apk update && apk upgrade && \
    echo @edge http://nl.alpinelinux.org/alpine/edge/community >> /etc/apk/repositories && \
    echo @edge http://nl.alpinelinux.org/alpine/edge/main >> /etc/apk/repositories
RUN apk add --no-cache \
    alpine-sdk \
    tzdata \
    libxml2-dev \
    libxslt-dev \
    libpq \
    postgresql-dev \
    elixir \
    erlang
UPDATE: I just realized that docker-compose build only builds the parts of the composition that need it (e.g. a Dockerfile was updated), so does that mean there's no way to create a single image using docker-compose? Or am I doing something very wrong?
Since there are no answers I'll try to answer it myself. I'm not sure if it's going to be useful, but I found out that I had some misconceptions about Docker which prevented me from seeing a solution, or the lack of one.
1) What I didn't realize is that docker-compose is used for orchestrating compositions of containers; it cannot be built into a single image that contains all the services you need.
2) Multi-stage builds sounded exciting and a bit magical until I figured out that every stage starts its image from scratch. The only thing you can do is copy some files from previous stages (if they're aliased with AS). It's still cool, but manually copying an installation with hundreds of files might (and will) become a nightmare.
3) Docker is designed to run only one process inside a container, but that doesn't mean a container can't run multiple processes. So the solution to my problem was a process supervisor: s6 in particular, which is said to be lightweight, which is exactly what I needed with tiny Alpine images.
I ended up deploying s6-overlay from just-containers:
RUN curl -L -s https://github.com/just-containers/s6-overlay/releases/download/v1.21.4.0/s6-overlay-amd64.tar.gz \
| tar xvzf - -C /
ENTRYPOINT [ "/init" ]
It provides the /etc/services.d directory, where service scripts go. For example, for postgresql, a minimal example would be (in /etc/services.d/postgres/run):
#!/usr/bin/execlineb -P
s6-setuidgid postgres
postgres -D /usr/local/pgsql/data
Pretty much that's it.
