Hex Cannot Be Found on Dockerized Phoenix App when running on Drone - docker

I currently have a setup where I deploy my dockerized Phoenix application to run tests on a self-hosted Drone server. The issue is that no matter which Dockerfile I use (currently alpine-elixir-phoenix, or a base Elixir image with the following), which installs hex/rebar like below:
# Install Hex+Rebar
RUN mix local.hex --force && \
    mix local.rebar --force
I receive this error when the build boots in Drone:
Could not find Hex, which is needed to build dependency :phoenix
I have found that with an older version, alpine-elixir-phoenix:2.0, this issue does not come up, which leads me to believe it may have something to do with hex/elixir having updated since then. Additionally, if I run the commands to install hex and rebar inside the Drone container once it is instantiated, there is no issue. I ran whoami in the instantiated Drone container and the user is root, if that makes a difference. Finally, if I run the container locally and run mix hex.info, it correctly states that Hex is installed; the issue is that on the Drone-instantiated container this fails.
Example .drone.yml:
pipeline:
  backend_test:
    image: bitwalker/alpine-elixir-phoenix
    commands:
      - cd api
      - apk update
      - apk add postgresql-client
      - MIX_ENV=test mix local.hex --force
      - MIX_ENV=test mix local.rebar --force
      - MIX_ENV=test mix deps.get
      - MIX_ENV=test mix ecto.create
      - MIX_ENV=test mix ecto.migrate
      - mix test
Example Dockerfile used (bitwalker/alpine-elixir-phoenix): https://github.com/bitwalker/alpine-elixir-phoenix/blob/master/Dockerfile
The same installation of local.hex and local.rebar occurs in that Dockerfile on lines 29 and 30. However, upon instantiation of the container it is not found, and therefore the installs must be run again in the commands.
Furthermore, I encountered this problem again, but with make and g++ not installing on Alpine. I may be doing something incorrect, but I cannot see where.
testbuild_env Dockerfile
FROM bitwalker/alpine-erlang:19.2.1b
ENV HOME=/opt/app/ TERM=xterm
# Install Elixir and basic build dependencies
RUN \
    echo "@edge http://nl.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories && \
    apk update && \
    apk --no-cache --update add \
        git make g++ curl \
        elixir@edge=1.4.2-r0 && \
    rm -rf /var/cache/apk/*
# Install Hex+Rebar
RUN mix local.hex --force && \
    mix local.rebar --force
ENV DOCKER_BUCKET test.docker.com
ENV DOCKER_VERSION 17.05.0-ce-rc1
ENV DOCKER_SHA256 4561742c2174c01ffd0679621b66d29f8a504240d79aa714f6c58348979d02c6
RUN set -x \
    && curl -fSL "https://${DOCKER_BUCKET}/builds/Linux/x86_64/docker-${DOCKER_VERSION}.tgz" -o docker.tgz \
    && echo "${DOCKER_SHA256} *docker.tgz" | sha256sum -c - \
    && tar -xzvf docker.tgz \
    && mv docker/* /usr/local/bin/ \
    && rmdir docker \
    && rm docker.tgz \
    && docker -v
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["sh"]
with the following .drone.yml
build:
  image: test_buildenv
  commands:
    - cd api
    - apk add make
    - apk add g++
    - MIX_ENV=test mix local.hex --force
    - MIX_ENV=test mix local.rebar --force
    - docker login --username USERNAME --password PASSWORD
    - mix docker.build   # creates a release file after running a dockerfile.build image
    - mix docker.release # creates a minimalist image to run the release file that was just created
    - mix docker.publish # pushes the newly created image to Docker Hub
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock

The problem is that Drone builds its own isolated working environment as an extra layer on top of your Docker image, so the ENV settings from your Dockerfile are not available. You need to tell Drone the environment info independently, so it knows where Hex is installed.
I managed to get this working by setting MIX_HOME in the .drone.yml file:
Dockerfile:
FROM bitwalker/alpine-elixir:1.8.1
RUN mix local.hex --force
.drone.yml:
pipeline:
  build:
    image: # built image of the above Dockerfile
    environment:
      MIX_HOME: /opt/app/.mix
    commands:
      - mix deps.get
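If you are unsure which path to use for MIX_HOME, you can check what the base image set before Drone overrides the environment. A quick sketch (my-phoenix-image is a placeholder for your built image tag):
# Print the MIX_HOME baked into the image and list the archives installed there
# (Hex is installed as an archive under $MIX_HOME/archives;
#  my-phoenix-image is a placeholder, not from the original post)
docker run --rm my-phoenix-image sh -c 'echo "$MIX_HOME" && ls "$MIX_HOME/archives"'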


How to prevent having to rebuild image on code changes

I started using Docker for a personal project and realized that it increases my development time to an unacceptable degree. I would rather spin up an LXC instance if I had to rebuild the image for every code change.
I heard there was a way to mount the code instead, but wasn't sure exactly how one would go about it. I also have a docker-compose YAML file, but I think you mount a volume or something in the Dockerfile? The goal is for code changes not to require rebuilding the container image.
FROM ubuntu:18.04
EXPOSE 5000
# update apt
RUN apt-get update -y
RUN apt-get install -y --no-install-recommends build-essential gcc wget
# pip installs
FROM python:3.10
# TA-Lib
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
    tar -xvzf ta-lib-0.4.0-src.tar.gz && \
    cd ta-lib/ && \
    ./configure && \
    make && \
    make install
RUN rm -R ta-lib ta-lib-0.4.0-src.tar.gz
ADD app.py /
RUN pip install --upgrade pip setuptools
RUN pip install pymysql
COPY requirements.txt /tmp/
RUN pip install --requirement /tmp/requirements.txt
RUN pip freeze >> /tmp/requirement.txt
COPY . /tmp
CMD ["python", "/tmp/app.py"]
RUN chmod +x ./tmp/start.sh
RUN ./tmp/start.sh
version: '3.8'
services:
  db:
    image: mysql:8.0.28
    command: '--default-authentication-plugin=mysql_native_password'
    restart: always
    environment:
      - MYSQL_DATABASE=#########
      - MYSQL_ROOT_PASSWORD=####
  # client:
  #   build: client
  #   ports: [3000]
  #   restart: always
  server:
    build: server
    ports: [5000]
    restart: always
Here's what I would suggest to make dev builds faster:
Bind mount code into the container
A bind mount is a directory shared between the container and the host. Here's the syntax for it:
version: '3.8'
services:
  # ... other services ...
  server:
    build: server
    ports: [5000]
    restart: always
    volumes:
      # Map the server directory into the container at /code
      - ./server:/code
The first part of the mount, ./server, is relative to the directory that the docker-compose.yml file is in. If the server directory and the docker-compose.yml file are in different directories, you'll need to change this part.
After that, you'd remove the part of the Dockerfile which copies code into the container. Something like this:
# pip installs
FROM python:3.10
# TA-Lib
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
    tar -xvzf ta-lib-0.4.0-src.tar.gz && \
    cd ta-lib/ && \
    ./configure && \
    make && \
    make install
RUN rm -R ta-lib ta-lib-0.4.0-src.tar.gz
RUN pip install --upgrade pip setuptools
RUN pip install pymysql
COPY requirements.txt /tmp/
RUN pip install --requirement /tmp/requirements.txt
CMD ["python", "/code/app.py"]
The advantage of this approach is that when you hit 'save' in your editor, the change will be immediately propagated into the container, without requiring a rebuild.
Documentation on syntax
Note about production builds: I don't recommend bind mounts when running your production server. In that case, I would recommend copying your code into the container instead of using a bind mount. This makes it easier to upgrade a running server. I typically write two Dockerfiles and two docker-compose.yml files: one set for production, and one set for development.
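One way to organize that split is Compose's override mechanism: a plain docker-compose up automatically merges docker-compose.override.yml into docker-compose.yml, so the bind mount can live only in the override file. A sketch, reusing the server service from above:
# docker-compose.override.yml -- development-only settings.
# Compose merges this file automatically; production hosts simply
# don't ship it (or pick files explicitly with -f).
version: '3.8'
services:
  server:
    volumes:
      - ./server:/code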
Install dependencies before copying code into container
One part of your Dockerfile is causing most of the slowness. It's this part:
ADD app.py /
# ... snip two lines ...
COPY requirements.txt /tmp/
RUN pip install --requirement /tmp/requirements.txt
This defeats Docker's layer caching. Docker is capable of caching layers, and using the cache if nothing in that layer has changed. However, if a layer changes, any layer after that change will be rebuilt. This means that changing app.py will cause the pip install --requirement /tmp/requirements.txt line to run again.
To make use of caching, you should follow the rule that the least-frequently changing file goes in first, and most-frequently changing file goes last. Since you change the code in your project more often than you change which dependencies you're using, that means you should copy app.py in after you've installed the dependencies.
The Dockerfile would change like this:
COPY requirements.txt /tmp/
RUN pip install --requirement /tmp/requirements.txt
# After installing dependencies
ADD app.py /
In my projects, I find that rebuilding a container without changing dependencies takes about a second, even if I'm not using the bind-mount trick.
For more information, see the documentation on layer caching.
Remove unused stage
You have two stages in your Dockerfile:
FROM ubuntu:18.04
# ... snip ...
FROM python:3.10
The FROM command means that you are throwing out everything in the image and starting from a new base image. This means that everything in between these two lines is not really doing anything. To fix this, remove everything before the second FROM statement.
Why would you use multi-stage builds? Sometimes it's useful to install a compiler, compile something, and then copy just the result into a fresh image.
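Here is a minimal sketch of that pattern (the gcc/alpine tags and the hello.c program are illustrative, not taken from your project):
# Stage 1: a full toolchain, used only to compile the program
FROM gcc:12 AS build
WORKDIR /src
COPY hello.c .
RUN gcc -static -o hello hello.c

# Stage 2: a fresh, tiny image that carries only the compiled binary
FROM alpine:3.17
COPY --from=build /src/hello /usr/local/bin/hello
CMD ["hello"]
Only the last stage ends up in the final image; the compiler stage is discarded.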
Merge install and remove step
If you want to remove a file, you should do it in the same layer where you created the file. The reason for this is that deleting a file in a previous layer does not fully remove the file: the file still takes up space in the image. A tool like dive can show you files which are having this problem.
Here's how I would suggest changing this section:
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
    tar -xvzf ta-lib-0.4.0-src.tar.gz && \
    cd ta-lib/ && \
    ./configure && \
    make && \
    make install
RUN rm -R ta-lib ta-lib-0.4.0-src.tar.gz
Merge the rm into the previous step:
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
    tar -xvzf ta-lib-0.4.0-src.tar.gz && \
    cd ta-lib/ && \
    ./configure && \
    make && \
    make install && \
    cd .. && \
    rm -R ta-lib ta-lib-0.4.0-src.tar.gz

docker-compose and Dockerfile do not install drupal

New to Docker here. I wanted to install Drupal 7 with Docker, to mirror our production server environment. (We are getting ready to upgrade to Drupal 8; not relevant to this question.) When I run docker-compose, the container and an app folder are created, but there is nothing inside app/. I then placed a composer.json in the root to run Composer and install Drupal 7. That works, but I thought the point of docker-compose was that it would install everything, including Drupal 7.
What am I doing wrong?
Follow up question:
Since I am trying to mirror the Drupal site on the production server, I need to install Drupal 7.69, but this version is not listed on Docker Hub as a package. So I can't install that specific version?
Docker 19.03.13
MacOS 10.14.6
LAMP
MySQL databases not in volume, but served from Mac development environment
Directory structure:
root
|--apache-drupal.conf
|--docker-compose.yml
|--Dockerfile
|--composer.json
Dockerfile
FROM drupal:7.73-apache
RUN apt-get update \
    && apt-get upgrade -y \
    && apt-get install -y \
        automake \
        bsdmainutils \
        build-essential \
        ssh \
        unzip \
        curl \
        libopenmpi-dev \
        openmpi-bin \
        git \
        default-mysql-client \
        vim \
        wget \
        zlib1g-dev
# Install Composer
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" && \
    php composer-setup.php && \
    mv composer.phar /usr/local/bin/composer && \
    php -r "unlink('composer-setup.php');" && \
    ln -s /root/.composer/vendor/bin/drush /usr/local/bin/drush
RUN cp /usr/local/etc/php/php.ini-production /usr/local/etc/php/php.ini && \
    sed -i -e "s/^ *memory_limit.*/memory_limit = -1/g" /usr/local/etc/php/php.ini && \
    sed -i -e "s/^ *upload_max_filesize.*/upload_max_filesize = 30M/g" /usr/local/etc/php/php.ini
# Install Drush
RUN composer global require drush/drush:8.2 && \
    composer global update
#RUN wget -O drush.phar https://github.com/drush-ops/drush-launcher/releases/download/0.4.2/drush.phar && \
#    chmod +x drush.phar && \
#    mv drush.phar /usr/local/bin/drush
# Clean repository
RUN apt-get clean && rm -rf /var/www/html/* && rm -rf /var/lib/apt/lists/*
COPY apache-drupal.conf /etc/apache2/sites-enabled/000-default.conf
WORKDIR /app
docker-compose.yml
version: '2'
services:
  drupal:
    image: userID/website_d7:1.0
    container_name: website_d7
    build: .
    ports:
      - "8033:80"
    extra_hosts:
      - "test.docker:127.0.0.1"
    environment:
      MYSQL_USER: user
      MYSQL_PASS: pass
      MYSQL_DATABASE: website_d7
    volumes:
      - ./app:/app:cached
    restart: always
Running docker containers with:
docker-compose build
docker-compose up
I personally never installed Drupal from Docker / docker-compose; I always used Composer to do it, which is in my opinion better because you can manage the Drupal version you want and the modules you need in the composer.json. I only use Docker / docker-compose to build the environment and the containers (database / frontend / backend / cache manager).
under volumes:, you are mounting your local ./app folder INTO the container, overwriting /app inside the container. so, if there is nothing in it to begin with, it won't be filled when your docker container is created.
On the Drupal Docker Hub page (under the volumes heading) they talk about adding 4 volume mounts for specific folders, like:
volumes:
  - /path/on/host/modules:/var/www/html/modules
  - /path/on/host/profiles:/var/www/html/profiles
  - /path/on/host/sites:/var/www/html/sites
  - /path/on/host/themes:/var/www/html/themes
if you put all those into your ./app folder locally, you might end up with something like:
volumes:
  - ./app/modules:/var/www/html/modules
  - ./app/profiles:/var/www/html/profiles
  - ./app/sites:/var/www/html/sites
  - ./app/themes:/var/www/html/themes
generally, I like to use docker for containerizing the environment/runtime/third-party systems (like Drupal, or more often in my case WordPress) and then I set up volumes similar to the above for the specific folder(s) that are unique to the project (themes, plugins, etc.). In my case I usually do WordPress development, so I just have a single mount for ./wp-content:/var/www/html/wp-content
RE: your follow-up question - if you look at the "Tags" tab on that same Docker Hub page (or search for 7.69 there), you'll see it is actually listed, so it should be available.
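If so, pinning the production version should be a one-line change in the Dockerfile above (a sketch, assuming the 7.69 tag follows the same naming scheme as 7.73-apache):
# Pin the exact production Drupal version instead of 7.73
FROM drupal:7.69-apache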

Building of a Docker image with Qt5 compiled with MinGW works in a container run from "docker:latest" image, but fails in GitLab CI

I want to prepare a Docker image with Qt5 and MinGW. Part of the process is building Qt 5.14.0 with MinGW, and that is the part where it fails.
Building on my machine:
There weren't any problems when I pulled the docker:latest image on my PC, ran a container from it, and built my image in it. It worked fine.
Building in the GitLab CI pipeline:
When I push the Dockerfile to GitLab, where it is built in a container from the same docker:latest image, it fails to build Qt with the following error message:
Could not find qmake spec ''.
Error processing project file: /root/src/qt-everywhere-src-5.14.0
Screenshot of the failure
CI script:
stages:
  - deploy
variables:
  CONTAINER_NAME: "qt5-mingw"
  PORT: "5000"
image: docker:latest
build-snapshot:
  stage: deploy
  tags:
    - docker
    - colo
  environment:
    name: snapshot
    url: https://somedomain.com/artifactory/#/artifacts/qt5-mingw
  before_script:
    - docker login -u ${ARTIFACT_USER} -p ${ARTIFACT_PASS} somedomain.com:${PORT}
  script:
    - docker build -f Dockerfile -t ${CONTAINER_NAME} .
    - export target_version=$(docker inspect --format='{{index .Config.Labels "com.domain.version" }}' ${CONTAINER_NAME})
    - docker tag ${CONTAINER_NAME} dsl.domain.com:${PORT}/${CONTAINER_NAME}:${target_version}
    - docker tag dsl.domain.com:${PORT}/${CONTAINER_NAME}:${target_version} dsl.domain.com:${PORT}/${CONTAINER_NAME}:latest
    - docker push dsl.domain.com:${PORT}/${CONTAINER_NAME}:${target_version}
    - docker push dsl.domain.com:${PORT}/${CONTAINER_NAME}:latest
  after_script:
    - docker logout dsl.domain.com:${PORT}
    - docker rmi ${CONTAINER_NAME}
  except:
    - master
    - tags
The Dockerfile:
FROM debian:buster-slim
########################
# Install what we need
########################
# Custom Directory
ENV CUSTOM_DIRECTORY YES
ENV WDEVBUILD /temp/build
ENV WDEVSOURCE /temp/src
ENV WDEVPREFIX /opt/windev
# Custom Version
ENV CUSTOM_VERSION NO
ENV QT_SERIES 5.14
ENV QT_BUILD 0
ENV LIBJPEGTURBO_VERSION 2.0.3
ENV LIBRESSL_VERSION 3.0.2
ENV OPENSSL_VERSION 1.1.1c
ENV UPX_VERSION 3.95
# SSL Choice
ENV USE_OPENSSL YES
# Exclude Static Qt
ENV BUILD_QT32_STATIC NO
ENV BUILD_QT64_STATIC NO
# Copy directory with qt_build script
COPY rootfs /
# install tools
RUN apt-get update \
    && apt-get install -y bash \
        cmake \
        coreutils \
        g++ \
        git \
        gzip \
        libucl1 \
        libucl-dev \
        make \
        nasm \
        ninja-build \
        perl \
        python \
        qtchooser \
        tar \
        wget \
        xz-utils \
        zlib1g \
        zlib1g-dev \
    && apt-get install -y binutils-mingw-w64-x86-64 \
        mingw-w64-x86-64-dev \
        g++-mingw-w64-x86-64 \
        gcc-mingw-w64-x86-64 \
        binutils-mingw-w64-i686 \
        mingw-w64-i686-dev \
        g++-mingw-w64-i686 \
        gcc-mingw-w64-i686 \
    && rm -rf /temp \
    && rm -rf /var/lib/apt/lists/*
# Build Qt with mingw (this is the step where it fails)
RUN /opt/windev/bin/qt_build
LABEL com.domain.version="1.0.0"
LABEL vendor="Someone"
LABEL com.domain.release-date="2020-01-21"
Debugging process so far:
The version of the docker:latest is the same in both cases.
The version of MinGW is the same in both cases.
I tried also with Qt 5.12.6 and the result is the same.
I have found it. I think the answer is here.
The package libseccomp2 is 2.3.1 on the CI runner machine and 2.4.1 on my PC, but Qt versions after 5.10 use a system call that was only added in libseccomp 2.3.3. That's why it can be built on my PC and can't be built on the runner.
Remark: It doesn't matter that it is built in a container run from the docker:latest image, because the Docker daemon is mounted from the host when the container is started, so it apparently continues to use some features of the host and the Docker work is not completely containerized.
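A quick way to confirm a mismatch like this is to compare the installed libseccomp on both machines. A sketch, assuming Debian/Ubuntu hosts (the package name and query tool differ on other distros):
# On each host (PC and CI runner), print the installed libseccomp version
dpkg-query -W -f='${Package} ${Version}\n' libseccomp2
# Docker also reports whether it detected seccomp support on the host
docker info | grep -i seccomp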

How to combine Dockerfiles in gitlab ci?

I have this gitlab-ci.yml to build my SpringBoot app:
image: maven:latest
variables:
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"
cache:
  paths:
    - .m2/repository/
    - target/
build:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS clean compile
  only:
    - /^release.*/
test:
  stage: test
  script:
    - mvn $MAVEN_CLI_OPTS test
    - "cat target/site/coverage/jacoco-ut/index.html"
  only:
    - /^release.*/
Now I need to run another job in the test stage: integration tests. My app runs the integration tests on headless Chrome with an in-memory database; all I need to do on Windows is run mvn integration-test.
I've found a Dockerfile that has headless Chrome ready, so I need to combine the maven:latest image with this new image: https://hub.docker.com/r/justinribeiro/chrome-headless/
How can I do that?
You can write a new Dockerfile by choosing maven:latest as the base image (that means all of the Maven image's dependencies are there). You can refer to this link for how to write a Dockerfile.
Since maven:latest is based on a Debian image, and the Dockerfile that has headless Chrome is also Debian-based, all the OS commands are the same. So you can write a Dockerfile like the following, where the base image is maven:latest and the rest is the same as here:
FROM maven:latest
LABEL name="chrome-headless" \
      maintainer="Justin Ribeiro <justin@justinribeiro.com>" \
      version="2.0" \
      description="Google Chrome Headless in a container"
# Install deps + add Chrome Stable + purge all the things
RUN apt-get update && apt-get install -y \
        apt-transport-https \
        ca-certificates \
        curl \
        gnupg \
        --no-install-recommends \
    && curl -sSL https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
    && echo "deb https://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list \
    && apt-get update && apt-get install -y \
        google-chrome-beta \
        fontconfig \
        fonts-ipafont-gothic \
        fonts-wqy-zenhei \
        fonts-thai-tlwg \
        fonts-kacst \
        fonts-symbola \
        fonts-noto \
        ttf-freefont \
        --no-install-recommends \
    && apt-get purge --auto-remove -y curl gnupg \
    && rm -rf /var/lib/apt/lists/*
# Add Chrome as a user
RUN groupadd -r chrome && useradd -r -g chrome -G audio,video chrome \
    && mkdir -p /home/chrome && chown -R chrome:chrome /home/chrome \
    && mkdir -p /opt/google/chrome-beta && chown -R chrome:chrome /opt/google/chrome-beta
# Run Chrome non-privileged
USER chrome
# Expose port 9222
EXPOSE 9222
# Autorun chrome headless with no GPU
ENTRYPOINT [ "google-chrome" ]
CMD [ "--headless", "--disable-gpu", "--remote-debugging-address=0.0.0.0", "--remote-debugging-port=9222" ]
I have checked this and it's working fine. Once you have written the Dockerfile, you can build it using docker build . from the same directory as the Dockerfile. Then you can either push it to Docker Hub or to your own registry that your GitLab runner can access. Make sure you tag the image; as an example, let's say the tag is {your-docker-repo}/maven-with-chrome-headless:1.0.0 and you are pushing to your own registry.
Then use that tag in your gitlab-ci.yml file as image: {your-docker-repo}/maven-with-chrome-headless:1.0.0
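The integration-test job from the question would then look something like this (a sketch; the stage name and MAVEN_CLI_OPTS reuse the pipeline above, and the image tag is the placeholder from this answer):
integration-test:
  stage: test
  image: {your-docker-repo}/maven-with-chrome-headless:1.0.0
  script:
    - mvn $MAVEN_CLI_OPTS integration-test
  only:
    - /^release.*/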
You do not "combine" Docker containers. You put different services into different containers and run them all together. Look at Kubernetes (it now has generic support in GitLab) or choose a simpler solution like docker-compose or docker-swarm.
For integration tests we use docker-compose.
Anyway, if you use docker-compose, you will probably run into the situation where you need so-called docker-in-docker. It depends on the type of worker you use to run your GitLab jobs. If you use the shell executor, everything will be fine. If you are using the docker executor, you will have to set it up properly, because you can't call Docker from Docker without additional manual setup.
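With the docker executor, that setup is usually the docker:dind service. A sketch using the standard GitLab service and variable names (your runner must be configured to allow privileged containers):
integration:
  image: docker:latest
  # Sidecar Docker daemon; requires a privileged runner
  services:
    - docker:dind
  variables:
    # Point the docker CLI at the dind service
    DOCKER_HOST: tcp://docker:2375
  script:
    - docker info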
If using several containers is not your choice and you definitely want to put everything in one container, the recommended way is to use a supervisor to launch processes inside the container. One of the options is supervisord: http://supervisord.org/
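A minimal sketch of that single-container approach (the program names and commands are illustrative, not from the question):
; supervisord.conf -- launch Chrome and the tests side by side
[supervisord]
nodaemon=true

[program:chrome]
command=google-chrome --headless --disable-gpu --remote-debugging-port=9222

[program:tests]
command=mvn integration-test
directory=/build
The image would then start supervisord as its CMD, for example CMD ["supervisord", "-c", "/etc/supervisord.conf"].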

Unable to download docker golang image: No command specified

Newbie in docker here.
I want to build a project using the Go language, and my docker-compose.yml file has the following:
go:
  image: golang:1.7-alpine
  volumes:
    - ./:/server/http
  ports:
    - "80:8080"
  links:
    - postgres
    - mongodb
    - redis
  environment:
    DEBUG: 'true'
    PORT: '8080'
When I run docker-compose up -d in terminal, it returns the following error:
ERROR: for go Cannot create container for service go: No command specified
How should I fix it?
golang:1.7-alpine is just a base image for building a Go container; it does not have a CMD or an ENTRYPOINT, so it exits immediately.
Use an image that actually does something, like printing hello world every 45 seconds.
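For example, you could override the missing command in the compose file (a sketch; the inline loop is just an illustration):
go:
  image: golang:1.7-alpine
  # Give the container something long-running so it doesn't exit immediately
  command: sh -c 'while true; do echo hello world; sleep 45; done'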
I solved it by using golang:1.7 instead of golang:1.7-alpine.
You should run your container with an argument, which will be passed to the default ENTRYPOINT and executed as a command.
But the best practice these days is to use a multi-stage build, in order to generate a smaller image with just your application.
Or you can define your ENTRYPOINT to be your built Go application.
See 'Using Docker Multi-Stage Builds with Go', using the new AS build-stage keyword.
Your Dockerfile would be:
# build stage
ARG GO_VERSION=1.8.1
FROM golang:${GO_VERSION}-alpine AS build-stage
MAINTAINER fbgrecojr@me.com
WORKDIR /go/src/github.com/frankgreco/gobuild/
COPY ./ /go/src/github.com/frankgreco/gobuild/
RUN apk add --update --no-cache \
        wget \
        curl \
        git \
    && wget "https://github.com/Masterminds/glide/releases/download/v0.12.3/glide-v0.12.3-`go env GOHOSTOS`-`go env GOHOSTARCH`.tar.gz" -O /tmp/glide.tar.gz \
    && mkdir /tmp/glide \
    && tar --directory=/tmp/glide -xvf /tmp/glide.tar.gz \
    && rm -rf /tmp/glide.tar.gz \
    && export PATH=$PATH:/tmp/glide/`go env GOHOSTOS`-`go env GOHOSTARCH` \
    && glide update -v \
    && glide install \
    && CGO_ENABLED=0 GOOS=`go env GOHOSTOS` GOARCH=`go env GOHOSTARCH` go build -o foo \
    && go test $(go list ./... | grep -v /vendor/) \
    && apk del wget curl git
# production stage
FROM alpine:3.5
MAINTAINER fbgrecojr@me.com
COPY --from=build-stage /go/src/github.com/frankgreco/gobuild/foo .
ENTRYPOINT ["/foo"]
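Usage would then be along these lines (the image tag foo-app is a placeholder):
# Build the two-stage image, then run the resulting minimal container
docker build -t foo-app .
docker run --rm foo-app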
