I am developing a PHP Symfony project and I use GitLab.
I want to run the unit tests on GitLab; for that I use a .gitlab-ci.yml file:
image: php:5.6

# Select what we should cache
cache:
  paths:
    - vendor/

before_script:
  # Install git; the php image doesn't have it installed
  - apt-get update -yqq
  - apt-get install git -yqq
  # Install the MySQL driver
  - docker-php-ext-install pdo_mysql
  # Install composer
  - curl -sS https://getcomposer.org/installer | php
  # Install all project dependencies
  - php composer.phar install

services:
  - mysql

variables:
  # Configure the mysql service (https://hub.docker.com/_/mysql/)
  MYSQL_DATABASE: hello_world_test
  MYSQL_ROOT_PASSWORD: mysql

# We test PHP 5.6 (the default) with MySQL
test:mysql:
  script:
    - phpunit --configuration phpunit_mysql.xml --coverage-text -c app/
It currently doesn't work because my hostname isn't resolved in the Docker container.
I found the solution here: How can you make the docker container use the host machine's /etc/hosts file?
My question is: where do I write the --net=host option?
Thanks.
You need to set the network_mode parameter in the runner's Docker configuration by editing the config.toml file, as (somewhat poorly) described in the gitlab-runner advanced configuration docs.
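For example, a minimal sketch of the relevant section of config.toml (the runner name, URL, and token are placeholders, not values from this setup):

[[runners]]
  name = "my-runner"                   # placeholder
  url = "https://gitlab.example.com/"  # placeholder
  token = "RUNNER_TOKEN"               # placeholder
  executor = "docker"
  [runners.docker]
    image = "php:5.6"
    # job containers share the host's network stack, so the host's
    # /etc/hosts and DNS resolution apply
    network_mode = "host"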
You can also set it when you register the runner:
gitlab-runner register --docker-network-mode 'host'
I don't believe you can set it directly from the gitlab yml file.
Related
I have the following setup for local development:
- Symfony website in a Docker container
- Using the symfony serve command in the container to serve the website
- Port mapping "8083:8000" specified in the docker-compose.yaml
- Using the containerd runtime with nerdctl compose up to run the containers, on macOS Ventura with Rancher Desktop
Expected behavior
Going to https://127.0.0.1:8083 or https://localhost:8083 in a browser while the Symfony site is being served in its container should serve the website hosted on the container's port 8000.
Actual behavior
ERR_CONNECTION_REFUSED from the browser -- the website refuses to connect.
Details
docker-compose.yaml (relevant parts):
version: '3'
services:
  php:
    container_name: nudgeforce-api
    build: .
    ports:
      - "8083:8000"
    command: php
    volumes:
      - .:/var/task
Dockerfile
FROM bref/php-82-fpm
RUN yum install -y git gzip zip unzip python3 tar
COPY --from=composer:2.4 /usr/bin/composer /usr/local/bin/composer
WORKDIR /var/task
# Symfony server
RUN curl -sS https://get.symfony.com/cli/installer | bash
RUN mv /root/.symfony5/bin/symfony /usr/local/bin/symfony \
    && symfony server:ca:install
symfony serve output (not included in the post)
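One likely culprit to check: a published port only reaches processes that listen on a non-loopback interface inside the container, and symfony serve binds to 127.0.0.1 by default, in which case the "8083:8000" mapping can't reach it. A hedged sketch of a serve invocation that listens on all interfaces, assuming a Symfony CLI version that supports the --allow-all-ip flag:

# inside the container: listen on 0.0.0.0 so the published port mapping can reach the server
symfony serve --port=8000 --allow-all-ip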
I have a Spring Boot application I want to test via .gitlab-ci.yml.
It's already set up like this:
image: openjdk:12

# services:
#   - docker:dind

stages:
  - build

before_script:
  # - apk add --update python py-pip python-dev && pip install docker-compose
  # - docker version
  # - docker-compose version
  - chmod +x mvnw

build:
  stage: build
  script:
    # - docker-compose up -d
    - ./mvnw package
  artifacts:
    paths:
      - target/rest-SNAPSHOT.jar
The commented-out portions are from the answer to Run docker-compose build in .gitlab-ci.yml, which I noticed uses a completely different Docker image.
Obviously I need Java installed to run my Spring Boot application, so does that mean Docker is just not an option?
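Not necessarily: the image only controls what is preinstalled in the job container, and Docker itself can come from the docker:dind service plus a client installed in before_script. A sketch of how the two could be combined; the DOCKER_HOST value, download URLs, and versions are assumptions to adapt, not taken from the question:

image: openjdk:12

services:
  - docker:dind

variables:
  # point the docker client at the dind service
  # (newer dind images may additionally require DOCKER_TLS_CERTDIR settings)
  DOCKER_HOST: tcp://docker:2375

stages:
  - build

build:
  stage: build
  before_script:
    # install static docker and docker-compose client binaries
    # (assumes curl is available in the base image)
    - curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz | tar xzf - --strip-components=1 -C /usr/local/bin docker/docker
    - curl -fsSL https://github.com/docker/compose/releases/download/1.25.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
    - chmod +x /usr/local/bin/docker-compose
    - chmod +x mvnw
  script:
    - docker-compose up -d
    - ./mvnw package
  artifacts:
    paths:
      - target/rest-SNAPSHOT.jar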
I am using a GitLab server and have 2 gitlab-runners (one on my local machine and one on a vServer); both work perfectly with echo and simple stuff like building an Ubuntu container with MySQL and PHP:
stages:
  - dbserver
  - deploy

build:
  stage: dbserver
  image: ubuntu:16.04
  services:
    - mysql:5.7
    - php:7.0
  variables:
    MYSQL_DATABASE: test
    MYSQL_ROOT_PASSWORD: test2
  script:
    - apt-get update -q && apt-get install -qqy --no-install-recommends mysql-client
    - mysql --user=root --password="$MYSQL_ROOT_PASSWORD" --host=mysql < test.sql
I want to import a DB now, but I don't get the idea or the technique behind it. How do I import a .sql file lying on my local PC or server? Do I need to create a Dockerfile myself, or can I do it just with the .gitlab-ci.yml file?
You can use scp to copy the .sql file to the runner.
You may need to add commands to install openssh-client, e.g.:
script:
  - apt-get update -y && apt-get install openssh-client -y
and then just add the scp line before invoking mysql, e.g.:
- scp user@server:/path/to/file.sql /tmp/temp.sql
- mysql --user=root --password="$MYSQL_ROOT_PASSWORD" --host=mysql < /tmp/temp.sql
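Note that scp needs credentials inside the job container: an SSH key (or password) for user@server has to be available, for example a private key stored in a CI variable and loaded into ssh-agent in before_script.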
I found a solution which binds a directory from the gitlab-runner machine to the actual container I am using:
sudo nano /etc/gitlab-runner/config.toml
there you change the volumes to something like this:
volumes = ["/home/ubuntu/test:/cache"]
/home/ubuntu/test is the directory on the machine and /cache is the one in the container.
Before you do so, I recommend stopping the runner and starting it again afterwards.
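For context, the volumes key lives in the [runners.docker] section of config.toml; a minimal sketch, with the runner name as a placeholder:

[[runners]]
  name = "my-runner"  # placeholder
  executor = "docker"
  [runners.docker]
    image = "ubuntu:16.04"
    # bind-mount /home/ubuntu/test on the runner host to /cache in every job container
    volumes = ["/home/ubuntu/test:/cache"]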
My Docker container builds fine on OSX:
Docker version 17.12.0-ce, build c97c6d6
docker-compose version 1.18.0, build 8dd22a9
But doesn't build on Amazon Linux:
Docker version 17.12.0-ce, build 3dfb8343b139d6342acfd9975d7f1068b5b1c3d3
docker-compose version 1.20.1, build 5d8c71b
Full Dockerfile:
# Specify base image
FROM andreptb/oracle-java:8-alpine

# Specify author / maintainer
MAINTAINER Douglas Duhaime <douglas.duhaime@gmail.com>

# Add source to a directory and use that directory
# NB: /app is a reserved directory in tomcat container
ENV APP_PATH="/lts-app"
RUN mkdir "$APP_PATH"
ADD . "$APP_PATH"
WORKDIR "$APP_PATH"

##
# Build BlackLab
##

RUN apk add --update --no-cache \
    wget \
    tar \
    git

# Store the path to the maven home
ENV MAVEN_HOME="/usr/lib/maven"

# Add maven and java to the path
ENV PATH="$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH"

# Install Maven
RUN MAVEN_VERSION="3.3.9" && \
    cd "/tmp" && \
    wget "http://archive.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz" -O - | tar xzf - && \
    mv "/tmp/apache-maven-$MAVEN_VERSION" "$MAVEN_HOME" && \
    ln -s "$MAVEN_HOME/bin/mvn" "/usr/bin/mvn" && \
    rm -rf /tmp/*

# Get the BlackLab source
RUN git clone "git://github.com/INL/BlackLab.git"

# Build BlackLab with Maven
RUN cd "BlackLab" && \
    mvn clean install

##
# Build Python + Node dependencies
##

# Install system deps with Alpine Linux package manager
RUN apk add --update --no-cache \
    g++ \
    gcc \
    make \
    openssl-dev \
    python3-dev \
    python \
    py-pip \
    nodejs

# Install Python dependencies
RUN pip install -r "requirements.txt" && \
    npm install --no-optional && \
    npm run build

# Store Mongo service name as mongo host
ENV MONGO_HOST=mongo_service
ENV TOMCAT_HOST=tomcat_service
ENV TOMCAT_WEBAPPS=/tomcat_webapps/

# Make ports available
EXPOSE 7082

# Seed the db, then start the server
CMD npm run seed && \
    gunicorn -b 0.0.0.0:7082 --access-logfile - --reload server.app:app
Full docker-compose.yml
version: '2'
services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'
    volumes:
      - docker-data-tomcat:/bitnami/tomcat/data/
      - docker-data-blacklab:/lts-app/lts/
  mongo_service:
    image: 'mongo'
    command: mongod
    ports:
      - '27017:27017'
  web:
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    # use the image from the Dockerfile in the cwd
    build: .
    ports:
      - '7082:7082'
    volumes:
      - docker-data-tomcat:/tomcat_webapps
      - docker-data-blacklab:/lts-app/lts/
volumes:
  docker-data-tomcat:
  docker-data-blacklab:
The command I'm running is: docker-compose up --build
The result on Amazon Linux is:
Running setup.py install for pymongo: started
Running setup.py install for pymongo: finished with status 'done'
Running setup.py install for pluggy: started
Running setup.py install for pluggy: finished with status 'done'
Running setup.py install for coverage: started
Running setup.py install for coverage: finished with status 'done'
Successfully installed Faker-0.8.12 Flask-0.12.2 Flask-Cors-3.0.3 Jinja2-2.10 MarkupSafe-1.0 Werkzeug-0.14.1 astroid-1.6.2 attrs-17.4.0 backports.functools-lru-cache-1.5 beautifulsoup4-4.5.1 click-6.7 configparser-3.5.0 coverage-4.5.1 enum34-1.1.6 funcsigs-1.0.2 futures-3.2.0 gunicorn-19.7.1 ipaddress-1.0.19 isort-4.3.4 itsdangerous-0.24 lazy-object-proxy-1.3.1 mccabe-0.6.1 more-itertools-4.1.0 pluggy-0.6.0 py-1.5.3 py4j-0.10.6 pylint-1.8.3 pymongo-3.6.1 pytest-3.5.0 pytest-cov-2.5.1 python-dateutil-2.7.2 singledispatch-3.4.0.3 six-1.11.0 text-unidecode-1.2 wrapt-1.10.11
You are using pip version 8.1.2, however version 9.0.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
npm WARN deprecated redux-mock-store@1.5.1: breaking changes in minor version
> base62@1.2.7 postinstall /lts-app/node_modules/base62
> node scripts/install-stats.js || exit 0
ERROR: Service 'web' failed to build: The command '/bin/sh -c pip install -r "requirements.txt" && npm install --no-optional && npm run build' returned a non-zero code: 1
Does anyone know what might be causing this discrepancy? The error message from Docker doesn't give many clues. I'd be very grateful for any ideas others can offer!
To solve this problem, I followed @MazelTov's advice: I built the containers on my local OSX development machine, published the images to Docker Cloud, then pulled those images down and ran them on my production server (AWS EC2).
Install Dependencies
I'll try and outline the steps I followed below in case they help others. Please note these steps require you to have docker and docker-compose installed on your development and production machines. I used the GUI installer to install Docker for Mac.
Build Images
After writing a Dockerfile and docker-compose.yml file, you can build your images with docker-compose up --build.
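If you only want to build the images without starting the containers, docker-compose build does that as well.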
Upload Images to Docker Cloud
Once the images are built, you can upload them to Docker Cloud with the following steps. First, create an account on Docker Cloud.
Then store your Docker Cloud username in an environment variable, so your ~/.bash_profile should contain export DOCKER_ID_USER='yaledhlab' (use your own username though).
Next login to your account from your developer machine:
docker login
Once you're logged in, list your running containers (the IMAGE column shows the image each one was started from):
docker ps
This will display something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
89478c386661 yaledhlab/let-them-speak-web "/bin/sh -c 'npm run…" About an hour ago Up About an hour 0.0.0.0:7082->7082/tcp letthemspeak_web_1
5e9c75d29051 training/webapp:latest "python app.py" 4 hours ago Up 4 hours 0.0.0.0:5000->5000/tcp heuristic_mirzakhani
890f7f1dc777 bitnami/tomcat:latest "/app-entrypoint.sh …" 4 hours ago Up About an hour 0.0.0.0:8080->8080/tcp letthemspeak_tomcat_service_1
09d74e36584d mongo "docker-entrypoint.s…" 4 hours ago Up About an hour 0.0.0.0:27017->27017/tcp letthemspeak_mongo_service_1
For each of the images you want to publish to Docker Cloud, run:
docker tag image_name $DOCKER_ID_USER/my-uploaded-image-name
docker push $DOCKER_ID_USER/my-uploaded-image-name
For example, to upload mywebapp_web to your user's account on Docker Cloud, you can run:
docker tag mywebapp_web $DOCKER_ID_USER/web
docker push $DOCKER_ID_USER/web
You can then run open https://cloud.docker.com/swarm/$DOCKER_ID_USER/repository/list to see your uploaded images.
Deploy Images
Finally, you can deploy your images on EC2 with the following steps. First, install Docker and Docker-Compose on the Amazon-flavored EC2 instance:
# install docker
sudo yum install docker -y
# start docker
sudo service docker start
# allow ec2-user to run docker
sudo usermod -a -G docker ec2-user
# get the docker-compose binaries
sudo curl -L https://github.com/docker/compose/releases/download/1.20.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# make the docker-compose binary executable
sudo chmod +x /usr/local/bin/docker-compose
Log out, then log back in so the group change takes effect. Then start a screen session to keep the server running: screen. Once the screen starts, you should be able to add a new docker-compose config file that specifies the path to your deployed images. For example, I needed to fetch the let-them-speak-web container housed within yaledhlab's Docker Cloud account, so I changed the docker-compose.yml file above to the file below, which I named production.yml:
version: '2'
services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'
    volumes:
      - docker-data-tomcat:/bitnami/tomcat/data/
      - docker-data-blacklab:/lts-app/lts/
  mongo_service:
    image: 'mongo'
    command: mongod
    ports:
      - '27017:27017'
  web:
    image: 'yaledhlab/let-them-speak-web'
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    ports:
      - '7082:7082'
    volumes:
      - docker-data-tomcat:/tomcat_webapps
      - docker-data-blacklab:/lts-app/lts/
volumes:
  docker-data-tomcat:
  docker-data-blacklab:
Then the production compose file can be run with: docker-compose -f production.yml up. Finally, ssh in with another terminal, and detach the screen with screen -D.
I'm learning about .gitlab-ci.yml and I started reading about environments, which makes me wonder: is it possible to access a folder on the host machine and perform a git pull command in the deploy stage?
image: php:5.6

# The folders we should cache
cache:
  paths:
    - vendor/

before_script:
  # Install git and the mysql client; the php image doesn't have them installed
  - apt-get update -yqq
  - apt-get install git mysql-client -yqq
  # Install the MySQL driver
  - docker-php-ext-install pdo_mysql
  # Install composer
  - curl -sS https://getcomposer.org/installer | php
  # Install all project dependencies
  - php composer.phar install

services:
  - mysql

variables:
  MYSQL_DATABASE: test
  MYSQL_ROOT_PASSWORD: root

test:
  stage: test
  script:
    - echo "CREATE DATABASE IF NOT EXISTS test;" | mysql --user=root --password="${MYSQL_ROOT_PASSWORD}" --host=mysql
    - mysql --user=root --password="${MYSQL_ROOT_PASSWORD}" --host=mysql test < tests/assets/test.sql
    - php composer.phar tests

deploy_staging:
  stage: deploy
  script:
    - echo "Deploying to staging server..."
  environment:
    name: staging
    url: http://staging.example.com
  only:
    - master
Basically all my script does is import the database file and execute the tests of the PDO service. Just a random test to learn about the CI.
Now instead of:
- echo "Deploying to staging server..."
I was wondering how I could achieve something like:
- cd /path/on/host/machine/to/repository
- git pull
If not, is there an alternative to accomplish what I want to do?
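With the Docker executor, a job runs inside its own container, so it can't cd into the host's filesystem directly. One common alternative is to run the pull on the host over SSH; the sketch below is an assumption, not from the post, and SSH_PRIVATE_KEY, DEPLOY_USER, and DEPLOY_HOST are hypothetical CI variables:

deploy_staging:
  stage: deploy
  script:
    - apt-get install -yqq openssh-client
    - eval $(ssh-agent -s)
    # SSH_PRIVATE_KEY is a hypothetical CI variable holding a deploy key
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan "$DEPLOY_HOST" >> ~/.ssh/known_hosts
    # run the pull on the host instead of in the job container
    - ssh "$DEPLOY_USER@$DEPLOY_HOST" "cd /path/on/host/machine/to/repository && git pull"
  environment:
    name: staging
    url: http://staging.example.com
  only:
    - master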