deploy after gitlab runner completes build - docker

I want to use GitLab runners to deploy a successfully built Docker image, but I am not sure how to use the deploy stage in .gitlab-ci.yml to do this. The build log shows the database is properly created in the Docker image during the build process.
I use Docker locally on a Mac (OSX 10.11.6) to build my Docker container. GitLab is running remotely. I registered a specific local runner to handle the build. When I push changes to my project, GitLab CI runs the build script to create a test database. But what happens to the image after it's built? There is no Docker image for the completed build listed on my local machine. The gitlab-runner-prebuilt-x86_64 image is a barebones Linux image that isn't connected with the build.
https://docs.gitlab.com/ce/ci/docker/using_docker_build.html
http://container-solutions.com/running-docker-in-jenkins-in-docker/
>gitlab-ci-multi-runner list
Listing configured runners ConfigFile=/Users/username/.gitlab-runner/config.toml
local-docker-executor Executor=docker Token=[token] URL=http://gitlab.url/ci
>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
gitlab-runner-prebuilt-x86_64 f6fdece [id1] 25 hours ago 50.87 MB
php7 latest [id2] 26 hours ago 881.8 MB
ubuntu latest [id3] 13 days ago 126.6 MB
docker latest [id4] 2 weeks ago 104.9 MB
.gitlab-ci.yml:
image: php7:latest
# build_image:
# script:
# - docker build -t php7 .
# Define commands that run before each job's script
# before_script:
# - docker info
# Define build stages
# First, all jobs of build are executed in parallel.
# If all jobs of build succeed, the test jobs are executed in parallel.
# If all jobs of test succeed, the deploy jobs are executed in parallel.
# If all jobs of deploy succeed, the commit is marked as success.
# If any of the previous jobs fails, the commit is marked as failed and no jobs of further stage are executed.
stages:
- build
- test
- deploy
variables:
db_name: db_test
db_schema: "db_test_schema.sql"
build_job1:
stage: build
script:
- service mysql start
- echo "create database $db_name" | mysql -u root
- mysql -u root $db_name < $db_schema
- mysql -u root -e "show databases; use $db_name; show tables;"
#- echo "SET PASSWORD FOR 'root'#'localhost' = PASSWORD('root');" | mysql -u root
#- echo "run unit test command here"
#Defines a list of tags which are used to select Runner
tags:
- docker
deploy_job1:
stage: deploy
#this script is run inside the docker container
script:
- whoami
- pwd
- ls -la
- ls /
#Usage: docker push [OPTIONS] NAME[:TAG]
#Push an image or a repository to a registry
- docker push deploy:latest
#gitlab runners will look for and run jobs with these tags
tags:
- docker
config.toml:
concurrent = 1
check_interval = 0
[[runners]]
name = "local-docker-executor"
url = "http://gitlab.url/ci"
token = "[token]"
executor = "docker"
builds_dir = "/Users/username/DOCKER_BUILD_DIR"
[runners.docker]
tls_verify = false
image = "ubuntu:latest"
privileged = false
disable_cache = false
volumes = ["/cache"]
[runners.cache]
Dockerfile:
FROM ubuntu:latest
#https://github.com/sameersbn/docker-mysql/blob/master/Dockerfile
ENV DEBIAN_FRONTEND noninteractive
ENV MYSQL_USER mysql
ENV MYSQL_DATA_DIR /var/lib/mysql
ENV MYSQL_RUN_DIR /run/mysqld
ENV MYSQL_LOG_DIR /var/log/mysql
ENV DB_NAME "db_test"
ENV DB_IMPORT "db_test_schema.sql"
# RUN apt-get update && \
# apt-get -y install sudo
# RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo
# USER docker
# CMD /bin/bash
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server
# \
# && rm -rf ${MYSQL_DATA_DIR} \
# && rm -rf /var/lib/apt/lists/*
ADD ${DB_IMPORT} /tmp/${DB_IMPORT}
# #RUN /usr/bin/sudo service mysql start \
# RUN service mysql start \
# && mysql -u root -e "CREATE DATABASE $DB_NAME" \
# && mysql -u root $DB_NAME < /tmp/$DB_IMPORT
RUN locale-gen en_US.UTF-8 \
&& export LANG=en_US.UTF-8 \
&& apt-get update \
&& apt-get -y install apache2 libapache2-mod-php7.0 php7.0 php7.0-cli php-xdebug php7.0-mbstring php7.0-mysql php-memcached php-pear php7.0-dev php7.0-json vim git-core libssl-dev libsslcommon2-dev openssl libssl-dev \
&& a2enmod headers
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
RUN ln -sf /dev/stdout /var/log/apache2/access.log && \
ln -sf /dev/stderr /var/log/apache2/error.log
RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR
#VOLUME [ "/var/www/html" ]
WORKDIR /var/www/html
EXPOSE 80 3306
#ENTRYPOINT [ "/usr/sbin/apache2" ]
#CMD ["-D", "FOREGROUND"]
#ENTRYPOINT ["/bin/bash"]

You're not building any docker image on CI.
You are using the php7 image from Docker Hub to execute all jobs, including deploy_job1, which tries to use the docker binary to push an image (deploy:latest) that does not exist inside that container. Additionally, I don't think the docker binary is even included in the php7 image.
I guess that you want to push the image that you built locally on your Mac, right? In that case, you need another runner, one whose executor is shell. In that scenario you will have two runners: one using docker to run the build_job1 job, and another to push the locally built image. But there is a better solution than building the Docker image manually: have GitLab CI build it.
So, modifying your .gitlab-ci.yml (removing your comments, adding mine for explanation):
# Removed global image definition
stages:
- build
- test
- deploy
variables:
db_name: db_test
db_schema: "db_test_schema.sql"
build_job1:
stage: build
# Use image ONLY in docker runner
image: php7:latest
script:
- service mysql start
- echo "create database $db_name" | mysql -u root
- mysql -u root $db_name < $db_schema
- mysql -u root -e "show databases; use $db_name; show tables;"
# Run on runner with docker executor, this is ok
tags:
- docker
deploy_job1:
stage: deploy
script:
# Build the docker image first, and then push it
- docker build -t deploy:latest .
- docker push deploy:latest
# Run on runner with shell executor, set proper tag
tags:
- docker_builder
When you register the new runner, set its executor to shell and its tags to docker_builder; a registration sketch follows. I'm assuming that you have Docker Engine installed on your Mac.
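A rough sketch of that registration (the runner name here is made up; the URL and token mirror the ones in your config.toml, and you'd use a fresh registration token from the GitLab UI):
# register a second runner on the Mac that runs jobs directly on the host shell
gitlab-ci-multi-runner register \
  --non-interactive \
  --url "http://gitlab.url/ci" \
  --registration-token "[token]" \
  --name "local-shell-executor" \
  --executor "shell" \
  --tag-list "docker_builder"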
On the other hand, this example makes little sense, at least to me. The build stage does nothing lasting, as the container is ephemeral and the database it creates is thrown away when the job ends. I guess you should do that in the Dockerfile instead; see the sketch below.
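A minimal sketch of that idea, essentially the RUN block you already have commented out in your Dockerfile (it assumes the schema file was ADDed to /tmp as shown, and bakes the test database into the image at build time):
RUN service mysql start \
    && mysql -u root -e "CREATE DATABASE $DB_NAME" \
    && mysql -u root $DB_NAME < /tmp/$DB_IMPORT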

Related

Script set to run in docker container is also running on host machine

What I wanted to do is use "COPY script.sh script.sh" (copy the script from host to container and execute it), but when the script executes in the container, it seems to also be rewriting files on the host machine.
Below is Dockerfile :
FROM almalinux/almalinux:latest
RUN mkdir /opt/confluent
RUN mkdir /opt/confluent-hub
#Confluent Home
ENV CONFLUENT_HOME=/opt/confluent
ENV KAFKA_CONFIG=$KAFKA_CONFIG
ENV ZOOKEEPER_CONFIG=$ZOOKEEPER_CONFIG
ENV SCHEMA_REGISTRY_CONFIG=$ZOOKEEPER_CONFIG
ENV CONNECT_CONFIG=$CONNECT_CONFIG
# Zookeeper
ENV ZOOKEEPER_DATA_DIR=$ZOOKEEPER_DATA_DIR
ENV ZOOKEEPER_CLIENT_PORT=$ZOOKEEPER_CLIENT_PORT
#Kafka
ENV BOOTSTRAP_SERVERS=$BOOTSTRAP_SERVERS
ENV KAFKA_SERVER_BROKER_ID=$KAFKA_SERVER_BROKER_ID
ENV ZOOKEEPER_CONNECT_IP_PORT=$ZOOKEEPER_CONNECT_IP_PORT
ENV KAFKA_SERVER_LOG_DIR=$KAFKA_SERVER_LOG_DIR
# schmea registry
ENV KAFKASTORE_TOPIC=$KAFKASTORE_TOPIC
ENV PROTOCOL_BOOTSTRAP_SERVERS=$PROTOCOL_BOOTSTRAP_SERVERS
ENV SCHEMA_REGISTRY_GROUP_ID=$SCHEMA_REGISTRY_GROUP_ID
ENV SCHEMA_REGISTRY_LEADER_ELIGIBILITY=$SCHEMA_REGISTRY_LEADER_ELIGIBILITY
# Kafka connect
ENV CONNECT_REST_PORT=$CONNECT_REST_PORT
ENV CONNECT_OFFSETS=$CONNECT_OFFSETS
ENV CONNECT_KEY_CONVERTER=$CONNECT_KEY_CONVERTER
ENV SCHEMA_REGISTRY_URL=$SCHEMA_REGISTRY_URL
ENV CONNECT_VALUE_CONVERTER=$CONNECT_VALUE_CONVERTER
ENV SCHEMA_REGISTRY_LISTENER=$SCHEMA_REGISTRY_LISTENER
ENV CONNECT_PLUGIN_PATH=/usr/share/java/,$CONFLUENT_HOME/share/confluent-hub-components/
# install openjdk8
RUN dnf update -y && dnf install epel-release -y
RUN dnf install wget zip moreutils gettext unzip java-1.8.0-openjdk.x86_64 -y
# install conflunet
WORKDIR $CONFLUENT_HOME
RUN wget https://packages.confluent.io/archive/6.1/confluent-community-6.1.1.tar.gz -P .
RUN tar -xvzf confluent-community-6.1.1.tar.gz
RUN mv confluent-6.1.1/* .
RUN rm -rf confluent-6.1.1 confluent-community-6.1.1.tar.gz
# install confluent hub
RUN wget http://client.hub.confluent.io/confluent-hub-client-latest.tar.gz -P /opt/confluent-hub
WORKDIR /opt/confluent-hub
RUN tar -xvzf confluent-hub-client-latest.tar.gz
RUN rm -rf confluent-hub-client-latest.tar.gz
ENV CONFLUENT_HUB /opt/confluent-hub/bin
# Export path
ENV PATH $PATH:$CONFLUENT_HOME:$CONFLUENT_HUB
# install jdbc connector
COPY confluentinc-kafka-connect-jdbc-10.1.0.zip $CONFLUENT_HOME/share/confluent-hub-components/
RUN unzip $CONFLUENT_HOME/share/confluent-hub-components/confluentinc-kafka-connect-jdbc-10.1.0.zip
RUN rm -rf confluentinc-kafka-connect-jdbc-10.1.0.zip
# Copy confleunt config to docker
WORKDIR $CONFLUENT_HOME
COPY config/* config/
# startup
COPY startup.sh ./startup.sh
RUN chmod +x ./startup.sh
CMD ./startup.sh
Below is startup.sh, which substitutes environment variables into the config files and starts the Kafka services; but when this script runs in the container, it also replaces the values in the config files on the host:
#!/bin/bash
# Substitute environment variables in the actual $CONFLUENT_HOME/config files
envsubst < $CONFLUENT_HOME/config/zookeeper.properties | sponge $CONFLUENT_HOME/config/zookeeper.properties
envsubst < $CONFLUENT_HOME/config/server.properties | sponge $CONFLUENT_HOME/config/server.properties
envsubst < $CONFLUENT_HOME/config/schema-registry.properties | sponge $CONFLUENT_HOME/config/schema-registry.properties
envsubst < $CONFLUENT_HOME/config/connect-avro-standalone.properties | sponge $CONFLUENT_HOME/config/connect-avro-standalone.properties
# start zookeeper
$CONFLUENT_HOME/bin/zookeeper-server-start -daemon $ZOOKEEPER_CONFIG
sleep 2
# start kafka broker
$CONFLUENT_HOME/bin/kafka-server-start -daemon $KAFKA_CONFIG
sleep 2
# start schema registry
$CONFLUENT_HOME/bin/schema-registry-start -daemon $SCHEMA_REGISTRY_CONFIG
sleep 2
# start kafka connect
$CONFLUENT_HOME/bin/connect-standalone -daemon $CONNECT_CONFIG $CONFLUENT_HOME/etc/kafka/connect-file-sink.properties
sleep 2
while :
do
echo "Confluent Running "
sleep 5
done
docker-compose :
version: "3.9"
services:
confluent-community:
build: ./
environment:
- KAFKA_CONFIG=$CONFLUENT_HOME/config/server.properties
- ZOOKEEPER_CONFIG=$CONFLUENT_HOME/config/zookeeper.properties
- SCHEMA_REGISTRY_CONFIG=$CONFLUENT_HOME/config/schema-registry.properties
- CONNECT_CONFIG=$CONFLUENT_HOME/config/connect-avro-standalone.properties
- CONNECT_REST_PORT=8083
- CONNECT_OFFSETS=$CONFLUENT_HOME/data/connect/connect.offsets
- CONNECT_KEY_CONVERTER=io.confluent.connect.avro.AvroConverter
- SCHEMA_REGISTRY_URL=http://localhost:8081
- CONNECT_VALUE_CONVERTER=io.confluent.connect.avro.AvroConverter
- SCHEMA_REGISTRY_LISTENER=http://0.0.0.0:8081
- KAFKASTORE_TOPIC=_schemas
- SCHEMA_REGISTRY_GROUP_ID=SCHEMA_REGISTRY_A
- SCHEMA_REGISTRY_LEADER_ELIGIBILITY=true
- PROTOCOL_BOOTSTRAP_SERVERS=PLAINTEXT://localhost:9092
- ZOOKEEPER_DATA_DIR=$CONFLUENT_HOME/data/zookeeper
- ZOOKEEPER_CLIENT_PORT=2181
- BOOTSTRAP_SERVERS=localhost:9092
- KAFKA_SERVER_BROKER_ID=0
- ZOOKEEPER_CONNECT_IP_PORT=localhost:2181
- KAFKA_SERVER_LOG_DIR=$CONFLUENT_HOME/data/kafka-logs
# ports:
#- "9092:9092"
# - "8081:8081"
#- "8083:8083"
network_mode: "host"
volumes:
- ~/Documents/confluent/docker-logs:/opt/confluent/logs
- ~/Documents/confluent/config:/opt/confluent/config
- ~/Documents/confluent/docker-data:/opt/confluent/data
When you bind-mount configuration files into a container
volumes:
- ~/Documents/confluent/config:/opt/confluent/config
the files in the container are the files on the host. When your startup script uses envsubst to rewrite the configuration files, there is no separate copy inside the container, so it rewrites the files on the host as well.
If you use a separate directory instead:
volumes:
- ~/Documents/confluent/config:/opt/confluent/config-templates
Then your script can read the files in that directory, and write to a non-volume directory:
for f in "$CONFLUENT_HOME/config-templates/*"; do
ff=$(basename "$f")
envsubst <$f >"$CONFLUENT_HOME/config/$ff"
done
(Run the four processes in four separate containers, without the -daemon option, so each is the single foreground process in its respective container; see the compose sketch below. You shouldn't need to configure any of the filesystem paths or inject them at run time; the *_CONFIG environment variables, for example, can safely be left at their default values, or if they must be set, set them only in the Dockerfile.)
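A minimal compose sketch of that split, reusing the image built by the Dockerfile above and the binaries and config paths from startup.sh (the service names and depends_on ordering are illustrative, and ports/volumes/environment are omitted; this is a sketch, not a drop-in file):
version: "3.9"
services:
  zookeeper:
    build: ./
    command: /opt/confluent/bin/zookeeper-server-start /opt/confluent/config/zookeeper.properties
  kafka:
    build: ./
    command: /opt/confluent/bin/kafka-server-start /opt/confluent/config/server.properties
    depends_on:
      - zookeeper
  schema-registry:
    build: ./
    command: /opt/confluent/bin/schema-registry-start /opt/confluent/config/schema-registry.properties
    depends_on:
      - kafka
  connect:
    build: ./
    command: /opt/confluent/bin/connect-standalone /opt/confluent/config/connect-avro-standalone.properties /opt/confluent/etc/kafka/connect-file-sink.properties
    depends_on:
      - schema-registry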

.env-file does not change?

I have a private GitLab repo in which I use .gitlab-ci.yml to deploy my project to stage and production.
Inside the .gitlab-ci.yml I pass two environment variables: NODE_ENV (where I specify whether it is stage/production) and NODE_TARGET (just info for the app about which template to use). My .gitlab-ci.yml looks like this:
stage_gsue:
stage: staging
script:
- echo "---------- DOCKER LOGIN"
- echo "mypassword" | docker login --username myuser --password-stdin git.example.com:4567
- echo "---------- START DEPLOYING STAGING SERVER"
- echo "-> 1) build image"
- docker build --build-arg buildtarget=gsue --build-arg buildenv=stage -t git.example.com:4567/root/myproject .
- echo "-> 2) push image to registry"
- docker push git.example.com:4567/root/myproject
- echo "-> 3) kill old container"
- docker kill $(docker ps -q) || true
- docker rm $(docker ps -a -q) || true
- echo "-> 4) start new container"
- docker run -dt -e NODE_TARGET=gsue -e NODE_ENV=stage -p 3000:3000 --name myproject git.example.com:4567/root/myproject
- echo "########## END DEPLOYING DOCKER IMAGE"
tags:
- stagerunner
when: manual
This works well so far. But inside myproject there is a .env file in which I have some further variables. I changed the values of these variables and ran the stage script multiple times, but inside my built image and the started container the .env file still contains the old values.
How can that be??
Additional info:
In my Dockerfile I do:
FROM djudorange/node-gulp-mocha
ARG buildenv
ARG buildtarget
RUN git clone https://root:mypassword@git.example.com/root/myproject.git
WORKDIR /myproject
RUN git fetch --all
RUN git pull --all
RUN git checkout stage
RUN npm install -g n
RUN n latest
RUN npm install -g npm
RUN npm i -g gulp-cli --force
RUN npm install
RUN export NODE_ENV=$buildenv
RUN export NODE_TARGET=$buildtarget
RUN NODE_ENV=$buildenv NODE_TARGET=$buildtarget gulp build
#CMD ["node", "server.js"]
The runtime environment overrides anything set with export (and a RUN export only lasts for that single build step anyway), so it's better to write a fresh .env file during the build. Use the following in your Dockerfile:
ARG NODE_ENV
ARG NODE_TARGET
RUN rm -f .env
# printf handles the embedded newline portably, unlike a bare echo
RUN printf 'NODE_TARGET=%s\nNODE_ENV=%s\n' "$NODE_TARGET" "$NODE_ENV" > ./.env
(Fill in the rest of the Dockerfile depending upon your requirements.)
Now the build command will look like:
docker-compose build --build-arg NODE_ENV="<your env value>" --build-arg NODE_TARGET="<your target value>"
So the GitLab job will be:
build_app:
stage: build
script:
- docker-compose build --build-arg NODE_ENV="${NODE_ENV}" --build-arg NODE_TARGET="${NODE_TARGET}"
- echo "Build successful."
- docker-compose up -d
- echo "Deployed!!"
Don't forget to define your NODE_ENV and NODE_TARGET variables in the CI/CD settings page (or directly in the pipeline file, as sketched below).
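A minimal sketch of the pipeline-file alternative, reusing the stage/gsue values from the question:
variables:
  NODE_ENV: "stage"
  NODE_TARGET: "gsue"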

Why does GitLab CI sometimes create the project directory with root owner (though I have specified another user), and how do I solve it?

I set up GitLab CI/CD for my test project. I use Docker containers with Postgres and Go, and sometimes I need to change the SQL init script (which creates the database tables), so I run these commands:
docker-compose stop
docker system prune
docker system prune --volumes
sudo rm -rf pay
Then on my PC I push the changes to GitLab and it runs the pipelines.
But sometimes after step 5 (the push) GitLab CI throws a permission denied error in the deploy step (see below), because it creates the pay directory owned by root.
Here is my .gitlab-ci.yml file:
stages:
- tools
- build
- docker
- deploy
variables:
GO_PACKAGE: gitlab.com/$CI_PROJECT_PATH
REGISTRY_BASE_URL: registry.gitlab.com/$CI_PROJECT_PATH
# ######################################################################################################################
# Base
# ######################################################################################################################
# Base job for docker build and push in private gitlab registry.
.docker:
image: docker:latest
services:
- docker:dind
stage: docker
variables:
IMAGE_SUBNAME: ''
DOCKERFILE: Dockerfile
BUILD_CONTEXT: .
BUILD_ARGS: ''
script:
- adduser --disabled-password --gecos "" builder
- su -l builder
- su builder -c "whoami"
- echo "$CI_JOB_TOKEN" | docker login -u gitlab-ci-token --password-stdin registry.gitlab.com
- IMAGE_TAG=$CI_COMMIT_REF_SLUG
- IMAGE=${REGISTRY_BASE_URL}/${IMAGE_SUBNAME}:${IMAGE_TAG}
- docker build -f ${DOCKERFILE} ${BUILD_ARGS} -t ${IMAGE} ${BUILD_CONTEXT}
- docker push ${IMAGE}
tags:
- docker
# ######################################################################################################################
# Stage 0. Tools
#
# ######################################################################################################################
# Job for building base golang image.
tools:golang:
extends: .docker
stage: tools
variables:
IMAGE_SUBNAME: 'golang'
DOCKERFILE: ./docker/golang/Dockerfile
BUILD_CONTEXT: ./docker/golang/
only:
refs:
- dev
# changes:
# - docker/golang/**/*
# ######################################################################################################################
# Stage 1. Build
#
# ######################################################################################################################
# Job for building golang backend in single image.
build:backend:
image: ${REGISTRY_BASE_URL}/golang
stage: build
# TODO: enable cache
# cache:
# paths:
# - ${CI_PROJECT_DIR}/backend/vendor
before_script:
- cd backend/
script:
# Install dependencies
- go mod download
- mkdir bin/
# Build binaries
- CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -a -ldflags "-linkmode external -extldflags '-static' -s -w" -o bin/backend ./cmd/main.go
- cp -r /usr/share/zoneinfo .
- cp -r /etc/ssl/certs/ca-certificates.crt .
- cp -r /etc/passwd .
artifacts:
expire_in: 30min
paths:
- backend/bin/*
- backend/zoneinfo/**/*
- backend/ca-certificates.crt
- backend/passwd
only:
refs:
- dev
# changes:
# - backend/**/*
# - docker/golang/**/*
# ######################################################################################################################
# Stage 2. Docker
#
# ######################################################################################################################
# Job for building backend (written on golang). Only change backend folder.
docker:backend:
extends: .docker
variables:
IMAGE_SUBNAME: 'backend'
DOCKERFILE: ./backend/Dockerfile
BUILD_CONTEXT: ./backend/
only:
refs:
- dev
# changes:
# - docker/golang/**/*
# - backend/**/*
# ######################################################################################################################
# Stage 3. Deploy on Server
#
# ######################################################################################################################
deploy:dev:
stage: deploy
variables:
SERVER_HOST: 'here is my server ip'
SERVER_USER: 'here is my server user (it is not root, but in root group)'
before_script:
## Install ssh-agent if not already installed, it is required by Docker.
- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
## Run ssh-agent
- eval $(ssh-agent -s)
## Add the SSH key stored in SSH_PRIVATE_KEY_DEV variable to the agent store
- echo "$SSH_PRIVATE_KEY_DEV" | tr -d '\r' | ssh-add - > /dev/null
## Create the SSH directory and give it the right permissions
- mkdir -p ~/.ssh
- chmod 700 ~/.ssh
## Enable host key checking (to prevent man-in-the-middle attacks)
- ssh-keyscan $SERVER_HOST >> ~/.ssh/known_hosts
- chmod 644 ~/.ssh/known_hosts
## Git settings
- git config --global user.email ""
- git config --global user.name ""
## Install rsync if not already installed to upload files to server.
- 'which rsync || ( apt-get update -y && apt-get install rsync -y )'
script:
- rsync -r deploy/dev/pay $SERVER_USER@$SERVER_HOST:/home/$SERVER_USER/dev/backend
- ssh -tt $SERVER_USER@$SERVER_HOST 'cd dev/backend/pay && ./up.sh'
only:
refs:
- dev
I have already tried to turn off change triggers and clear gitlab container registry, but it didn't help.
Also I have found an interesting thing: when the tools pipeline starts (it is the first pipeline), at that moment my server immediately creates the pay folder, owned by root, with empty sub-folders.
What am I doing wrong? Thank you.
Hey there, GitLab team member here: I am looking into your post to help troubleshoot your issue. Linked here is a doc on what to do when you encounter permissions problems with GitLab + Docker.
It's likely that you have tried some of these steps, so please let me know! I'll keep researching while I wait to hear back from you. Thanks!

Docker container builds on OSX but not Amazon Linux

My Docker container builds fine on OSX:
Docker version 17.12.0-ce, build c97c6d6
docker-compose version 1.18.0, build 8dd22a9
But doesn't build on Amazon Linux:
Docker version 17.12.0-ce, build 3dfb8343b139d6342acfd9975d7f1068b5b1c3d3
docker-compose version 1.20.1, build 5d8c71b
Full Dockerfile:
# Specify base image
FROM andreptb/oracle-java:8-alpine
# Specify author / maintainer
MAINTAINER Douglas Duhaime <douglas.duhaime@gmail.com>
# Add source to a directory and use that directory
# NB: /app is a reserved directory in tomcat container
ENV APP_PATH="/lts-app"
RUN mkdir "$APP_PATH"
ADD . "$APP_PATH"
WORKDIR "$APP_PATH"
##
# Build BlackLab
##
RUN apk add --update --no-cache \
wget \
tar \
git
# Store the path to the maven home
ENV MAVEN_HOME="/usr/lib/maven"
# Add maven and java to the path
ENV PATH="$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH"
# Install Maven
RUN MAVEN_VERSION="3.3.9" && \
cd "/tmp" && \
wget "http://archive.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz" -O - | tar xzf - && \
mv "/tmp/apache-maven-$MAVEN_VERSION" "$MAVEN_HOME" && \
ln -s "$MAVEN_HOME/bin/mvn" "/usr/bin/mvn" && \
rm -rf "/tmp/*"
# Get the BlackLab source
RUN git clone "git://github.com/INL/BlackLab.git"
# Build BlackLab with Maven
RUN cd "BlackLab" && \
mvn clean install
##
# Build Python + Node dependencies
##
# Install system deps with Alpine Linux package manager
RUN apk add --update --no-cache \
g++ \
gcc \
make \
openssl-dev \
python3-dev \
python \
py-pip \
nodejs
# Install Python dependencies
RUN pip install -r "requirements.txt" && \
npm install --no-optional && \
npm run build
# Store Mongo service name as mongo host
ENV MONGO_HOST=mongo_service
ENV TOMCAT_HOST=tomcat_service
ENV TOMCAT_WEBAPPS=/tomcat_webapps/
# Make ports available
EXPOSE 7082
# Seed the db
CMD npm run seed && \
gunicorn -b 0.0.0.0:7082 --access-logfile - --reload server.app:app
Full docker-compose.yml
version: '2'
services:
tomcat_service:
image: 'bitnami/tomcat:latest'
ports:
- '8080:8080'
volumes:
- docker-data-tomcat:/bitnami/tomcat/data/
- docker-data-blacklab:/lts-app/lts/
mongo_service:
image: 'mongo'
command: mongod
ports:
- '27017:27017'
web:
# gain access to linked containers
links:
- mongo_service
- tomcat_service
# explicitly declare service dependencies
depends_on:
- mongo_service
- tomcat_service
# set environment variables
environment:
PYTHONUNBUFFERED: 'true'
# use the image from the Dockerfile in the cwd
build: .
ports:
- '7082:7082'
volumes:
- docker-data-tomcat:/tomcat_webapps
- docker-data-blacklab:/lts-app/lts/
volumes:
docker-data-tomcat:
docker-data-blacklab:
The command I'm running is: docker-compose up --build
The result on Amazon Linux is:
Running setup.py install for pymongo: started
Running setup.py install for pymongo: finished with status 'done'
Running setup.py install for pluggy: started
Running setup.py install for pluggy: finished with status 'done'
Running setup.py install for coverage: started
Running setup.py install for coverage: finished with status 'done'
Successfully installed Faker-0.8.12 Flask-0.12.2 Flask-Cors-3.0.3 Jinja2-2.10 MarkupSafe-1.0 Werkzeug-0.14.1 astroid-1.6.2 attrs-17.4.0 backports.functools-lru-cache-1.5 beautifulsoup4-4.5.1 click-6.7 configparser-3.5.0 coverage-4.5.1 enum34-1.1.6 funcsigs-1.0.2 futures-3.2.0 gunicorn-19.7.1 ipaddress-1.0.19 isort-4.3.4 itsdangerous-0.24 lazy-object-proxy-1.3.1 mccabe-0.6.1 more-itertools-4.1.0 pluggy-0.6.0 py-1.5.3 py4j-0.10.6 pylint-1.8.3 pymongo-3.6.1 pytest-3.5.0 pytest-cov-2.5.1 python-dateutil-2.7.2 singledispatch-3.4.0.3 six-1.11.0 text-unidecode-1.2 wrapt-1.10.11
You are using pip version 8.1.2, however version 9.0.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
npm WARN deprecated redux-mock-store@1.5.1: breaking changes in minor version
> base62@1.2.7 postinstall /lts-app/node_modules/base62
> node scripts/install-stats.js || exit 0
ERROR: Service 'web' failed to build: The command '/bin/sh -c pip install -r "requirements.txt" && npm install --no-optional && npm run build' returned a non-zero code: 1
Does anyone know what might be causing this discrepancy? The error message from Docker doesn't give many clues. I'd be very grateful for any ideas others can offer!
To solve this problem, I followed @MazelTov's advice: I built the containers on my local OSX development machine, published the images to Docker Cloud, then pulled those images down to my production server (AWS EC2) and ran them there.
Install Dependencies
I'll try and outline the steps I followed below in case they help others. Please note these steps require you to have docker and docker-compose installed on your development and production machines. I used the gui installer to install Docker for Mac.
Build Images
After writing a Dockerfile and docker-compose.yml file, you can build your images with docker-compose up --build.
Upload Images to Docker Cloud
Once the images are built, you can upload them to Docker Cloud with the following steps. First, create an account on Docker Cloud.
Then store your Docker Cloud username in an environment variable; your ~/.bash_profile should contain export DOCKER_ID_USER='yaledhlab' (use your own username though).
Next, log in to your account from your development machine:
docker login
Once you're logged in, list your running containers (the IMAGE column shows the images you built):
docker ps
This will display something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
89478c386661 yaledhlab/let-them-speak-web "/bin/sh -c 'npm run…" About an hour ago Up About an hour 0.0.0.0:7082->7082/tcp letthemspeak_web_1
5e9c75d29051 training/webapp:latest "python app.py" 4 hours ago Up 4 hours 0.0.0.0:5000->5000/tcp heuristic_mirzakhani
890f7f1dc777 bitnami/tomcat:latest "/app-entrypoint.sh …" 4 hours ago Up About an hour 0.0.0.0:8080->8080/tcp letthemspeak_tomcat_service_1
09d74e36584d mongo "docker-entrypoint.s…" 4 hours ago Up About an hour 0.0.0.0:27017->27017/tcp letthemspeak_mongo_service_1
For each of the images you want to publish to Docker Cloud, run:
docker tag image_name $DOCKER_ID_USER/my-uploaded-image-name
docker push $DOCKER_ID_USER/my-uploaded-image-name
For example, to upload mywebapp_web to your user's account on Docker cloud, you can run:
docker tag mywebapp_web $DOCKER_ID_USER/web
docker push $DOCKER_ID_USER/web
You can then run open https://cloud.docker.com/swarm/$DOCKER_ID_USER/repository/list to see your uploaded images.
Deploy Images
Finally, you can deploy your images on EC2 with the following steps. First, install Docker and Docker-Compose on the Amazon-flavored EC2 instance:
# install docker
sudo yum install docker -y
# start docker
sudo service docker start
# allow ec2-user to run docker
sudo usermod -a -G docker ec2-user
# get the docker-compose binaries
sudo curl -L https://github.com/docker/compose/releases/download/1.20.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# change the permissions on the source
sudo chmod +x /usr/local/bin/docker-compose
Log out, then log back in to update your user's groups. Then start a screen session and run the server inside it: screen. Once the screen starts, you can add a new docker-compose config file that specifies the path to your deployed images. For example, I needed to fetch the let-them-speak-web container housed in yaledhlab's Docker Cloud account, so I changed the docker-compose.yml file above to the file below, which I named production.yml:
version: '2'
services:
tomcat_service:
image: 'bitnami/tomcat:latest'
ports:
- '8080:8080'
volumes:
- docker-data-tomcat:/bitnami/tomcat/data/
- docker-data-blacklab:/lts-app/lts/
mongo_service:
image: 'mongo'
command: mongod
ports:
- '27017:27017'
web:
image: 'yaledhlab/let-them-speak-web'
# gain access to linked containers
links:
- mongo_service
- tomcat_service
# explicitly declare service dependencies
depends_on:
- mongo_service
- tomcat_service
# set environment variables
environment:
PYTHONUNBUFFERED: 'true'
ports:
- '7082:7082'
volumes:
- docker-data-tomcat:/tomcat_webapps
- docker-data-blacklab:/lts-app/lts/
volumes:
docker-data-tomcat:
docker-data-blacklab:
Then the production compose file can be run with: docker-compose -f production.yml up. Finally, ssh in with another terminal, and detach the screen with screen -D.

How to use GitLab CI with a custom Docker image?

I made a simple Dockerfile:
FROM openjdk
EXPOSE 8080
and built an image using:
docker build -t test .
I installed and configured a docker GitLab CI runner and now I would like to use this runner with my test image. So I wrote the following .gitlab-ci.yml file:
image: test
run:
script:
- echo "Hello world!"
But to my disappointment, the local test image that I can use on my machine was not found.
Running with gitlab-ci-multi-runner 9.4.2 (6d06f2e)
on martin-docker-rawip (70747a61)
Using Docker executor with image test ...
Using docker image sha256:fa91c6ea64ce4b9b44672c6e56eed8312d0ec2afc80730cbee7754bc448ea22b for predefined container...
Pulling docker image test ...
ERROR: Job failed: Error response from daemon: repository test not found: does not exist or no pull access
I do not even know what is going on anymore. How can I make the runner aware of this image that I made?
I had the same question, and I found the answer here: https://forum.gitlab.com/t/runner-cant-use-local-docker-images/5507/6
Add the following to /etc/gitlab-runner/config.toml:
[runners.docker]
# more config for the runner here...
pull_policy = "if-not-present"
More info here: https://docs.gitlab.com/runner/executors/docker.html#how-pull-policies-work
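For context, a minimal sketch of where the setting sits, modeled on the [runners.docker] block from the config.toml in the first question (the other values are illustrative):
[runners.docker]
  tls_verify = false
  image = "ubuntu:latest"
  privileged = false
  pull_policy = "if-not-present"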
My Dockerfile:
FROM node:latest
RUN apt-get update -y && apt-get install openssh-client rsync -y
On the runner I build the image:
docker build -t node_rsync .
The .gitlab-ci.yml in the project using this runner:
image: node_rsync
job:
stage: deploy
before_script:
# now in the custom docker image
#- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
- mkdir -p ~/.ssh
- eval $(ssh-agent -s)
- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
- ssh-add <(tr '@' '\n' <<< "$STAGING_PRIVATE_KEY" | base64 --decode)
# now in the custom docker image
#- apt-get install -y rsync
script:
- rsync -rav -e ssh --exclude='.git/' --exclude='.gitlab-ci.yml' --delete-excluded ./ $STAGING_USER@$STAGING_SERVER:./deploy/
only:
- master
tags:
- ssh
