Creating custom build image for AWS CodeBuild with docker-compose

I'm trying to create a custom Docker image to use as the build image for AWS CodeBuild. It works fine if I just run docker build against a Dockerfile with the environment set up. But now I need to add a postgres instance to run the tests against, so I thought docker-compose would do the trick. However, I can't figure out how to make it work. The static part of the composition (the image from the Dockerfile) just stops right away when I run docker-compose up, since it has no entry point. At this point I can connect to the db instance by running docker-compose run db psql -h db -U testdb -d testdb. But when I build the image and feed it to the script provided by AWS, it runs fine until my tests try to reach the DB server, where it fails with a timeout, as if there were no db instance.
Configs look like this:
version: '3.7'
services:
  api-build:
    tty: true
    build: ./api_build
    image: api-build
    depends_on:
      - db
  db:
    image: postgres:10-alpine
    restart: always
    environment:
      POSTGRES_USER: testdb
      POSTGRES_PASSWORD: testdb
And Dockerfile under ./api_build:
FROM alpine:3.8
FROM ruby:2.3-alpine as rb
RUN apk update && apk upgrade && \
    echo @edge http://nl.alpinelinux.org/alpine/edge/community >> /etc/apk/repositories && \
    echo @edge http://nl.alpinelinux.org/alpine/edge/main >> /etc/apk/repositories
RUN apk add --no-cache \
    alpine-sdk \
    tzdata \
    libxml2-dev \
    libxslt-dev \
    libpq \
    postgresql-dev \
    elixir \
    erlang
UPDATE: I just realized that docker-compose build only rebuilds parts of the composition when needed (e.g. a Dockerfile changed). Does that mean there's no way to create a single image using docker-compose? Or am I doing something very wrong?

Since there are no answers, I'll try to answer it myself. I'm not sure it will be useful, but I found that I had some misconceptions about Docker which prevented me from seeing the solution (or the lack of one).
1) What I didn't realize is that docker-compose orchestrates compositions of containers; it cannot build them into a single image that contains all the services you need.
2) Multi-stage builds sounded exciting and a bit magical until I figured out that every stage starts its image from scratch. The only thing you can do is copy files from previous stages (if aliased with AS). It's still cool, but manually copying an installation with hundreds of files might (and will) become a nightmare.
3) Docker is designed to have only one process running inside a container, but that doesn't mean it can't run multiple processes. So the solution for my problem was using a supervisor: S6 in particular, which is said to be lightweight, exactly what I need with tiny Alpine images.
I ended up deploying s6-overlay from just-containers:
RUN curl -L -s https://github.com/just-containers/s6-overlay/releases/download/v1.21.4.0/s6-overlay-amd64.tar.gz \
    | tar xvzf - -C /
ENTRYPOINT [ "/init" ]
It provides the /etc/services.d directory where service scripts go. For example, for postgresql, a minimal service script would be (in /etc/services.d/postgres/run):
#!/usr/bin/execlineb -P
s6-setuidgid postgres
postgres -D /usr/local/pgsql/data
Pretty much that's it.
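One wrinkle worth noting (my addition, not part of the original setup): even with s6 supervising postgres, the database may not be accepting connections the instant CodeBuild starts the test phase. A small retry helper in the build script can bridge that gap. pg_isready comes with the postgresql packages installed above; the 127.0.0.1 host and testdb user are assumptions based on the compose file, since in the final image the database runs inside the same container:

```shell
#!/bin/sh
# wait_for N CMD... retries CMD up to N times, one second apart,
# returning non-zero if CMD never succeeds.
wait_for() {
  tries=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 1
  done
}

# Usage in the build script, before running the test suite:
# wait_for 30 pg_isready -h 127.0.0.1 -U testdb
```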

Related

How to run multiple postman/newman collections in docker container

I am trying to run multiple postman collections in a docker container. Since postman doesn't come with the feature to run a folder with multiple collections, I tried to run multiple commands in docker. Nothing worked.
This is my dockerfile.
FROM postman/newman_alpine33
COPY . .
ENTRYPOINT ["sh", "-c"]
And this is my postman container in docker-compose.
postman:
  build:
    context: ./
    dockerfile: Dockerfile-postman
  container_name: postmanTests
  command:
    run "https://www.getpostman.com/collections/1" --env-var "base_url=http://service:8080" &&
    run "https://www.getpostman.com/collections/2" --env-var "base_url=http://service:8080"
  volumes:
    - container-volume:/var/lib/postman
  depends_on:
    - service
I tried using the sh -c command, and the bash -c command, but I got an error that -c is invalid.
Your ENTRYPOINT line is causing problems. It forces the entire command you might want to run to be packed into a single shell word. Compose command: doesn't expect this, and I'd expect your container to try to execute the shell command run with no arguments.
One straightforward path here could be to just run multiple containers. I'm looking at the documentation for the postman/newman_alpine33 image and it supports injecting the Postman data as a Docker mount of some sort. Compose is a little better suited to long-running containers than short-lived tasks like this. So I might run
# --rm cleans up the container when done; --net attaches it to the Compose
# network; the unmodified image passes its arguments straight to newman.
docker run \
  --rm \
  -v name_container-volume:/var/lib/postman \
  --net name_default \
  postman/newman_alpine33 \
  run "https://www.getpostman.com/collections/1" \
  --env-var "base_url=http://service:8080"

docker run --rm --net name_default \
  -v name_container-volume:/var/lib/postman \
  postman/newman_alpine33 \
  run "https://www.getpostman.com/collections/2" \
  --env-var "base_url=http://service:8080"
(You could use a shell script to reduce the repetitive options, or alternatively use docker-compose run if you can write a service definition based on this image.)
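Such a wrapper script could look like this sketch. The name_default network and name_container-volume volume names are assumptions derived from the Compose project name (check docker network ls / docker volume ls for the real ones), and DOCKER is overridable so the script can be dry-run:

```shell
#!/bin/sh
# run_newman COLLECTION_URL: run one newman collection in a throwaway
# container attached to the Compose network. newman's subcommand is
# `run <collection>`, supplied as arguments past the image name.
run_newman() {
  collection_url=$1
  ${DOCKER:-docker} run --rm \
    --net name_default \
    -v name_container-volume:/var/lib/postman \
    postman/newman_alpine33 \
    run "$collection_url" --env-var "base_url=http://service:8080"
}

# run_newman "https://www.getpostman.com/collections/1" \
#   && run_newman "https://www.getpostman.com/collections/2"
```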
If you really want to do this in a single container, it's helpful to understand what the base image is actually doing with those command arguments. The Docker Hub page links to a GitHub repository and you can in turn find the Dockerfile there. That ends with the line
ENTRYPOINT ["newman"]
so the docker run command part just supplies arguments to newman. If you want to run multiple things in the same container, and orchestrate them using a shell, you need to replace this entrypoint, and you need to explicitly restate the newman command.
For this we could do everything in a Compose setup, and that makes some sense since the Postman collections are "data" and the URLs are environment-specific. Note here that we override the entrypoint: at run time, and its value has exactly three items, sh, -c, and the extended command to be run packed into a single string.
services:
  postman:
    image: postman/newman_alpine33    # do not build: a custom image
    volumes:
      - container-volume:/var/lib/postman
      - .:/etc/newman                 # inject the collections
    entrypoint:                       # override `newman` in the image
      - /bin/sh
      - -c
      - >-
        newman
        run "https://www.getpostman.com/collections/1"
        --env-var "base_url=http://service:8080"
        && newman
        run "https://www.getpostman.com/collections/2"
        --env-var "base_url=http://service:8080"
    depends_on:
      - service
(The >- syntax creates a YAML block scalar; the text below it is a single string. > converts all newlines within the string to spaces and - removes leading and trailing newlines. I feel like I see this specific construction somewhat regularly in a Kubernetes context.)

How to update source code without rebuilding image each time?

Is there a way to avoid rebuilding my Docker image each time I make a change in my source code?
I think I have already optimized my Dockerfile enough to decrease the build time, but it's still two commands and some waiting, sometimes for just one changed line of code. That's longer than a simple CTRL + S and checking the result.
The commands I have to do for each little update in my code:
docker-compose down
docker-compose build
docker-compose up
Here's my Dockerfile:
FROM python:3-slim as development
ENV PYTHONUNBUFFERED=1
COPY ./requirements.txt /requirements.txt
COPY ./scripts /scripts
EXPOSE 80
RUN apt-get update && \
    apt-get install -y \
      bash \
      build-essential \
      gcc \
      libffi-dev \
      musl-dev \
      openssl \
      wget \
      postgresql \
      postgresql-client \
      libglib2.0-0 \
      libnss3 \
      libgconf-2-4 \
      libfontconfig1 \
      libpq-dev && \
    pip install -r /requirements.txt && \
    mkdir -p /vol/web/static && \
    chmod -R 755 /vol && \
    chmod -R +x /scripts
COPY ./files /files
WORKDIR /files
ENV PATH="/scripts:/py/bin:$PATH"
CMD ["run.sh"]
Here's my docker-compose.yml file:
version: '3.9'

x-database-variables: &database-variables
  POSTGRES_DB: ${POSTGRES_DB}
  POSTGRES_USER: ${POSTGRES_USER}
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  ALLOWED_HOSTS: ${ALLOWED_HOSTS}

x-app-variables: &app-variables
  <<: *database-variables
  POSTGRES_HOST: ${POSTGRES_HOST}
  SPOTIPY_CLIENT_ID: ${SPOTIPY_CLIENT_ID}
  SPOTIPY_CLIENT_SECRET: ${SPOTIPY_CLIENT_SECRET}
  SECRET_KEY: ${SECRET_KEY}
  CLUSTER_HOST: ${CLUSTER_HOST}
  DEBUG: 0

services:
  website:
    build:
      context: .
    restart: always
    volumes:
      - static-data:/vol/web
    environment: *app-variables
    depends_on:
      - postgres
  postgres:
    image: postgres
    restart: always
    environment: *database-variables
    volumes:
      - db-data:/var/lib/postgresql/data
  proxy:
    build:
      context: ./proxy
    restart: always
    depends_on:
      - website
    ports:
      - 80:80
      - 443:443
    volumes:
      - static-data:/vol/static
      - ./files/templates:/var/www/html
      - ./proxy/default.conf:/etc/nginx/conf.d/default.conf
      - ./etc/letsencrypt:/etc/letsencrypt

volumes:
  static-data:
  db-data:
Mount your script files directly into the container via docker-compose.yml:
    volumes:
      - ./scripts:/scripts
      - ./files:/files
Keep in mind that you have to prefix the container paths accordingly if you use a WORKDIR in your Dockerfile.
Quick answer
Is there a way to avoid rebuilding my Docker image each time I make a change in my source code?
If your app needs a build step, you cannot skip it.
In your case, you can install the requirements before the Python app itself, so on each source code modification you only need to restart your Python app, not the entire stack: postgres, proxy, etc.
Docker purpose
Docker's main goal is to let developers package applications into containers which are easy to deploy anywhere, simplifying your infrastructure.
So, in this sense, Docker is not strictly for the development stage. In the development stage, the programmer should use a specialized IDE (Eclipse, IntelliJ, Visual Studio, etc.) to create and update the source code. Also, some languages like Java and C#, and frameworks like React/Angular, need a build stage.
These IDEs have features like hot reload (automatic application updates when the source code changes), variable and method auto-completion, etc. These features reduce development time.
Docker for source code changes by the developer
This is not the main goal, but if you don't have a specialized IDE or you are in a very limited development workspace (no admin permissions, network restrictions, Windows, ports, etc.), Docker can rescue you.
If you are a Java developer, for instance, you need to install Java on your machine, some IDE like Eclipse, configure Maven, etc. With Docker, you could create an image with all the required technologies and then establish a kind of connection between your source code and the Docker container. This connection in Docker is called volumes:
docker run --name my_job -p 9000:8080 \
-v /my/python/microservice:/src \
python-workspace-all-in-one
In the previous example, you could code directly in /my/python/microservice; then you only need to enter my_job and run python /src/main.py. It works without Python or any other requirement on your host machine; everything is in python-workspace-all-in-one.
In the case of technologies that need a build process, like Java and C#, there is a time penalty because the developer must perform a build on any source code change. This is not required when using a specialized IDE, as I explained.
In the case of technologies that don't require a build process, like PHP, just the installation of libraries/dependencies, Docker works almost the same as a specialized IDE.
Docker for local development with hot reload
In your case, your app is based on Python. Python doesn't require a build process, just library installation, so if you want to develop with Python using Docker instead of the classic way (install Python, execute python app.py, etc.), you should follow these steps:
Don't copy your source code into the container
Pass only the requirements.txt to the container
Execute pip install inside the container
Run your app inside the container
Create a Docker volume: your source code -> internal folder in the container
Here's an example with a Python tool (mkdocs) that supports hot reload:
FROM python:3
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app
RUN pip install -r requirements.txt
CMD [ "mkdocs", "serve", "--dev-addr=0.0.0.0:8000" ]
and how to build the dev version:
docker build -t myapp-dev .
and how to run it with a volume to sync your developer changes into the container:
docker run --name myapp-dev -it --rm -p 8000:8000 -v $(pwd):/usr/src/app myapp-dev
As a summary, this would be the flow to run your apps with Docker in the development stage:
start the app's requirements first (database, APIs, etc.)
create a special Dockerfile for the development stage
build the Docker image for development purposes
run the app, syncing the source code with the container (-v)
the developer modifies the source code
if possible, use some kind of hot-reload library with Python
the app is ready to be opened from a browser
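Applied to the Compose file in the question, the same flow can be expressed as a development override file. This is a sketch: the website service name and the /files and /scripts paths come from the question's Dockerfile, while the DEBUG value enabling an autoreloader is an assumption about the app:

```yaml
# docker-compose.override.yml -- merged automatically by `docker-compose up`
services:
  website:
    volumes:
      - ./files:/files        # live source replaces the image's COPY ./files
      - ./scripts:/scripts
    environment:
      DEBUG: 1                # assumption: enables the framework's autoreloader
```

With this in place, docker-compose up picks up code changes through the bind mount, and a rebuild is only needed when requirements.txt or the Dockerfile change.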
Docker for local development without hot reload
If you cannot use a hot-reload library, you will need to build and run whenever you want to test your source code modifications. In this case, you should copy the source code into the container instead of synchronizing it with volumes as in the previous approach:
FROM python:3
RUN mkdir -p /usr/src/app
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN pip install -r requirements.txt
RUN mkdocs build
WORKDIR /usr/src/app/site
CMD ["python", "-m", "http.server", "8000" ]
Steps should be:
start the app's requirements first (database, APIs, etc.)
create a special Dockerfile for the development stage
the developer modifies the source code
build:
docker build -t myapp-dev .
run:
docker run --name myapp-dev -it --rm -p 8000:8000 myapp-dev

docker-compose named volume with one file: ERROR: Cannot create container for service, source is not directory

I am trying to make the binary file /bin/wkhtmltopdf from the container wkhtmltopdf available in the web container. I try to achieve this with a named volume.
I have the following docker container setup in my docker-compose.yml:
services:
  web:
    image: php:7.4-apache
    command: sh -c "mkdir -p /usr/local/bin && touch /usr/local/bin/wkhtmltopdf"
    entrypoint: sh -c "exec 'apache2-foreground'"
    volumes:
      - wkhtmltopdfvol:/usr/local/bin/wkhtmltopdf
  wkhtmltopdf:
    image: madnight/docker-alpine-wkhtmltopdf
    command: sh -c "touch /bin/wkhtmltopdf"
    entrypoint: sh -c "tail -f /dev/null" # workaround to keep container running
    volumes:
      - wkhtmltopdfvol:/bin/wkhtmltopdf
volumes:
  wkhtmltopdfvol:
However, I get the following error when running docker-compose up:
ERROR: for wkhtmltopdf Cannot create container for service wkhtmltopdf:
source /var/lib/docker/overlay2/42e7082b8024ae4ebb13a4f0003a9e17bc18b33ef0677431dd002da3c21dde88/merged/bin/wkhtmltopdf is not directory
Does that mean that I can't share one file between containers but only directories through a named volume? How do I achieve this?
Edit: I also noticed that /usr/local/bin/wkhtmltopdf inside the web container is a directory and not a file as I expected.
It can be tricky to share binaries between containers like this. Volumes probably aren't the mechanism you're looking for.
If you look at the Docker Hub page for the php image you can see that php:7.4-apache is an alias for (currently) php:7.4.15-apache-buster, where "Buster" is the name of a Debian release. You can then search on https://packages.debian.org/ to discover that Debian has a prepackaged wkhtmltopdf package. You can install this using a custom Dockerfile:
FROM php:7.4-apache
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --assume-yes --no-install-recommends \
      wkhtmltopdf
# COPY ...
# Base image provides EXPOSE, CMD
Then your docker-compose.yml file needs to build this image:
version: '3.8'
services:
  web:
    build: .
    # no image:, volumes:, or command: override
Just in terms of the mechanics of sharing binaries like this, you can run into trouble where a binary needs a shared library that's not present in the target container. The apt-get install mechanism handles this for you. There are also potential troubles if a container has a different shared-library ecosystem (especially Alpine-based containers), or using host binaries from a different operating system.
The Compose file you show mixes several concepts in a way that doesn't really work. A named volume is always a directory, so trying to mount that over the /bin/wkhtmltopdf file in the second container causes the error you see. There's a dependency issue for which container starts up first and gets to create the volume. A container only runs a single command, and if you have both entrypoint: and command: then the command gets passed as extra arguments to the entrypoint (and if the entrypoint is an sh -c ... invocation, effectively ignored).
If you really wanted to try this approach, you should make web: {depends_on: [wkhtmltopdf]} to force the dependency order. The second container should mount the volume somewhere else, it probably shouldn't have an entrypoint:, and it should do something like command: cp -a /bin/wkhtmltopdf /export. (It will exit immediately once this cp finishes, but that shouldn't matter.) The first container can then mount the volume on, say, /usr/local/bin, and not specially set command: or entrypoint:. There will still be a minor race condition (you're not guaranteed the cp command will complete before Apache starts) but it probably wouldn't be a practical problem.
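A sketch of that corrected layout, untested and with the assumption that the wkhtmltopdf image defines its own entrypoint that must be reset for the cp command to run:

```yaml
version: '3.8'
services:
  wkhtmltopdf:
    image: madnight/docker-alpine-wkhtmltopdf
    entrypoint: []                      # reset, in case the image defines one
    command: cp -a /bin/wkhtmltopdf /export
    volumes:
      - wkhtmltopdfvol:/export          # mount the volume somewhere new
  web:
    image: php:7.4-apache
    depends_on:
      - wkhtmltopdf
    volumes:
      - wkhtmltopdfvol:/usr/local/bin   # the binary appears here after the cp
volumes:
  wkhtmltopdfvol:
```

Note that mounting a volume over /usr/local/bin also hides anything the php image already put there (such as the docker-php-* helper scripts), which is one more reason the apt-get approach above is simpler.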

docker-compose producing "No Such File or Directory" when files exist in container

I have a simple Dockerfile
FROM python:3.8-slim-buster
RUN apt-get update && apt-get install
RUN apt-get install -y \
    curl \
    gcc \
    make \
    python3-psycopg2 \
    postgresql-client \
    libpq-dev
RUN mkdir -p /var/www/myapp
WORKDIR /var/www/myapp
COPY . /var/www/myapp
RUN chmod 700 ./scripts/*.sh
And an associated docker-compose file
version: "3"

volumes:
  postgresdata:

services:
  myapp:
    image: ralston3/myapp_api:prod-latest
    tty: true
    command: /bin/bash -c "/var/www/myapp/scripts/myscript.sh && echo 'hello world'"
    ports:
      - 8000:8000
    volumes:
      - .:/var/www/myapp
    environment:
      - SOME_ENV_VARS=SOME_VARIABLE
      # ... more here
    depends_on:
      - redis
      - postgresql
  # ... other docker services defined below
When I run docker-compose up via:
docker-compose -f /path/to/docker-compose.yml up
My myapp container/service fails with myapp_myapp_1 exited with code 127, with another error mentioning myapp_1 | /bin/sh: 1: /var/www/myapp/scripts/myscript.sh: not found
Further, if I exec into the myapp container via docker exec -it {CONTAINER_ID} /bin/bash, I can clearly see that all of my files are there. I can literally run /var/www/myapp/scripts/myscript.sh and it works fine.
So there seems to be some issue with docker-compose (which could totally be my mistake). I'm just confused as to how I can exec into the container and clearly see the files there, yet docker-compose exits with 127 saying "No such file or directory".
You are bind-mounting the current directory into /var/www/myapp, so your local directory may be hiding/overwriting the container's directory. Try removing the volumes declaration for your myapp service; if that works, you know the bind mount is causing the issue.
Unrelated to your question, but a problem you will also encounter: you're installing Python a second time, above and beyond the version pre-installed in the python Docker image.
Either switch to debian:buster as the base image, or don't bother installing anything with apt-get and instead just pip install your dependencies like psycopg.
See https://pythonspeed.com/articles/official-python-docker-image/ for an explanation of why you don't need to do this.
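Following that advice, the Dockerfile could shrink to something like this sketch, which assumes the app can use the psycopg2-binary wheel (it bundles its own libpq, so the gcc/libpq-dev layer becomes unnecessary):

```dockerfile
FROM python:3.8-slim-buster
WORKDIR /var/www/myapp
# Copy only requirements first so the pip layer is cached across code changes.
# Assumption: requirements.txt lists psycopg2-binary rather than psycopg2.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
RUN chmod 700 ./scripts/*.sh
```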
In my case there were two stages: builder and runner.
I built an executable in the builder stage and ran it using an alpine image in the runner stage.
My mistake was not using the alpine variant for the builder: with golang:1.20 it failed, but with golang:1.20-alpine the problem went away.
Make sure you use matching versions and tags!
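The underlying cause is usually the C library: a binary built on the glibc-based golang:1.20 (Debian) image won't run on musl-based Alpine, and the loader failure surfaces as a misleading "not found". Two common fixes, sketched here with hypothetical paths:

```dockerfile
# Option 1: matching libc -- build on the Alpine variant of the Go image
FROM golang:1.20-alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/app .

FROM alpine:3.18
COPY --from=builder /out/app /usr/local/bin/app
ENTRYPOINT ["app"]

# Option 2 (alternative): keep golang:1.20 but build a static binary,
# so the runner's libc no longer matters:
# RUN CGO_ENABLED=0 go build -o /out/app .
```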

docker-compose empty volume with a rails app on OSX

Not sure how to ask this question, because I can't quite pin down the problem. Also, I'm not a Docker expert and this may be a stupid issue.
I have a Rails project with docker-compose, and there are two situations. First, I'm able to build and run the app with docker-compose up and everything looks fine, but the code doesn't reload when I change it. Second, when I add a volume in docker-compose.yml, docker-compose up exits because the Gemfile can't be found: the mounted folder is empty.
Dockerfile and docker-compose.yml extract, I renamed some stuff:
# File: Dockerfile.app
FROM ruby:2.5-slim-stretch
RUN apt-get update -qq && apt-get install -y redis-tools
RUN apt-get install -y autoconf bison build-essential #(..etc...)
RUN echo "gem: --no-document" > ~/.gemrc
RUN gem install bundler
ADD . /docker-projects
WORKDIR /docker-projects/project1/core
ENV BUNDLE_APP_CONFIG /docker-projects/project1/core/.bundle
RUN /bin/sh -c bundle install --local --jobs
# File: docker-compose.yml
app:
  build: .
  dockerfile: Dockerfile.app
  command: /bin/sh -c "bundle exec rails s -p 8080 -b 0.0.0.0"
  ports:
    - "8080:8080"
  expose:
    - "8080"
  volumes:
    - .:/docker-projects
  links:
    - redis
    - mysql
    - memcached
My 'docker-projects' is a big project made up of different Rails engines and gem libraries. We manage this with the 'repo' tool.
Running docker-compose build app works fine, and I can see the bundle install logs. Then docker-compose up app exits with the error 'Gemfile not found'.
It was working with no problem until I decided to recover 50 GB of space from Docker containers and rebuild everything. Not sure what changed.
If I add the volume (docker-compose), the mounted volume is empty. If I remove the volume, the code doesn't reload as it did before.
Versions I'm using:
Docker version 18.09.7, build 2d0083d
OSX 10.14.5
docker (through brew) with xhyve driver
I tried with a new basic docker-compose project and I didn't have this issue. Any ideas? I'll keep looking.
Thanks.
Ok, I found the problem. This is the command I was using to generate my docker-machine:
docker-machine create default \
--driver xhyve \
--xhyve-cpu-count 4 \
--xhyve-memory-size 12288 \
--xhyve-disk-size 256000 \
--xhyve-experimental-nfs-share \
--xhyve-boot2docker-url https://github.com/boot2docker/boot2docker/releases/download/v18.06.1-ce/boot2docker.iso
I probably did an upgrade somewhere along the way, because it stopped working. docker-machine showed some warnings about NFS conflicts with my existing /etc/exports definition, but the machine was created.
After searching around, I realized I had to rewrite the command above like this:
docker-machine create default \
--driver=xhyve \
--xhyve-cpu-count=4 \
--xhyve-memory-size=12288 \
--xhyve-disk-size=256000 \
--xhyve-boot2docker-url="https://github.com/boot2docker/boot2docker/releases/download/v18.06.1-ce/boot2docker.iso" \
--xhyve-experimental-nfs-share=/Users \
--xhyve-experimental-nfs-share-root "/"
The difference, besides the '=' signs, is the *-nfs-share options. I commented out my /etc/exports to avoid the conflict warning and recreated the machine. Now it works as it did before.
The option --xhyve-experimental-nfs-share-root defaults to "/xhyve-nfsshares", so I changed it to "/", where I have it.
