I'm using drone/drone:0.8 along with the Docker plugin, and I'm stuck with a Dockerfile I use to build the app.
This Dockerfile runs the app's test suite as part of its build process - the relevant fragment is shown below:
# ENV & ARG settings:
ENV RAILS_ENV=test RACK_ENV=test
ARG DATABASE_URL=postgres://postgres:3x4mpl3@postgres:5432/app_test
# Run the tests:
RUN rails db:setup && rspec
The test suite requires a connection to the database, for which I'm including the postgres service in the .drone.yml file:
pipeline:
app:
image: plugins/docker
repo: vovimayhem/example-app
tags:
- ${DRONE_COMMIT_SHA}
- ${DRONE_COMMIT_BRANCH/master/latest}
compress: true
secrets: [ docker_username, docker_password ]
use_cache: true
build_args:
- DATABASE_URL=postgres://postgres:3x4mpl3@postgres:5432/app_test
services:
postgres:
image: postgres:9-alpine
environment:
- POSTGRES_PASSWORD=3x4mpl3
But it looks like the services defined in the drone file are not accessible from within the build process:
Step 18/36 : RUN rails db:setup && rspec
---> Running in 141734ca8f12
could not translate host name "postgres" to address: Name does not resolve
Couldn't create database for {"encoding"=>"unicode", "schema_search_path"=>"partitioning,public", "pool"=>5, "min_messages"=>"log", "adapter"=>"postgresql", "username"=>"postgres", "password"=>"3x4mpl3", "port"=>5432, "database"=>"sibyl_test", "host"=>"postgres"}
rails aborted!
PG::ConnectionBad: could not translate host name "postgres" to address: Name does not resolve
Is there any configuration I'm missing? Or is this a feature not currently present in the plugin?
I know this could be related somehow to the --network and/or --add-host options of the docker build command... I could help in case you think we should include this behavior.
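For reference, those flags do exist on plain docker build - though the plugin would have to expose them for this to help. A purely illustrative invocation (the network name and IP here are hypothetical):
# hypothetical values - shown only to illustrate the two flags in question
docker build --network=drone_default --add-host=postgres:172.17.0.2 .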
So a couple of things jump out at me (although I don't have the full context, so take what makes sense to you)
I would probably separate out the build/testing piece of the code into a different step, and then use the docker plugin to publish the artifacts once they've passed
I think the docker plugin is really there to publish the image (I don't believe its container is going to be able to reach the service containers, due to dind)
if you do separate it out, you'll probably need - sleep 15 in the commands section of the build step to give the db time to start up
http://docs.drone.io/postgres-example/ has examples of how to use postgres, but again, it would require separating the build pieces from creating and publishing the docker image :)
here's a sample I'm talking about ;)
pipeline:
tests-builds: # should probably be separate :)
image: python:3.6-stretch
commands:
- sleep 15 # wait for postgres to start
- pip install --upgrade -r requirements.txt
- pip install --upgrade -r requirements-dev.txt
- pytest --cov=sfs tests/unit
- pytest --cov=sfs tests/integration # this tests the db interactions
publish:
image: plugins/docker
registry: quay.io
repo: somerepot
auto_tag: true
secrets: [ docker_username, docker_password ]
when:
event: [ tag, push ]
services:
database:
image: postgres
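Applied to the Rails app from the question above, the split might look roughly like this - a sketch only, where the ruby image tag and the bundle commands are assumptions:
pipeline:
  tests:
    image: ruby:2.5 # assumed ruby version
    environment:
      - RAILS_ENV=test
      - DATABASE_URL=postgres://postgres:3x4mpl3@postgres:5432/app_test
    commands:
      - sleep 15 # give postgres time to start
      - bundle install
      - rails db:setup
      - rspec
  publish:
    image: plugins/docker
    repo: vovimayhem/example-app
    tags:
      - ${DRONE_COMMIT_SHA}
    secrets: [ docker_username, docker_password ]
services:
  postgres:
    image: postgres:9-alpine
    environment:
      - POSTGRES_PASSWORD=3x4mpl3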
Related
So I'm trying to deploy my app to Heroku.
Here is my docker-compose.yml
version: '3'
#Define services
services:
#Back-end Spring Boot Application
entaurais:
#The Dockerfile in entauraIS builds the jar and provides the Docker image with the following name.
build: ./entauraIS
container_name: backend
#Environment variables for Spring Boot Application.
ports:
- 8080:8080 # Forward the exposed port 8080 on the container to port 8080 on the host machine
depends_on:
- postgresql
postgresql:
image: postgres:13
environment:
- POSTGRES_PASSWORD=root
- POSTGRES_USER=postgres
- POSTGRES_DB=entauracars
ports:
- "5433:5433"
expose:
- "5433"
entaura-front:
build: ./entaura-front
container_name: frontend
ports:
- "4200:4200"
volumes:
- /usr/src/app/node_modules
My frontend Dockerfile:
FROM node:14.15.0
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
COPY package*.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 4200
CMD [ "npm", "start" ]
My backend Dockerfile:
FROM maven:3.6.0-jdk-11-slim AS build
COPY src /usr/src/app/src
COPY pom.xml /usr/src/app
RUN mvn -f /usr/src/app/pom.xml clean package
FROM openjdk:11-jre-slim
COPY --from=build /usr/src/app/target/entauraIS.jar /usr/app/entauraIS.jar
ENTRYPOINT ["java","-jar","/usr/app/entauraIS.jar"]
As far as I'm aware, Heroku needs its own heroku.yml file, but with the examples I've seen I have no idea how to convert it to my situation. Any help is appreciated; I am completely lost with Heroku.
One of the examples of heroku.yml that I looked at:
build:
docker:
web: Dockerfile
run:
web: npm run start
release:
image: web
command:
- npm run migrate up
docker-compose.yml to heroku.yml
docker-compose has some fields similar to heroku.yml's. You could create it manually.
It would be awesome if someone created an npm module to convert docker-compose to heroku.yml. You would just need to read the docker-compose.yml and pick some values to create a heroku.yml. Check this to know how to read and write yml files.
docker is not required in heroku
If you are looking for a platform to deploy your apps and avoid infrastructure nightmares, heroku is an option for you.
Even more, if your applications are standard (Java & Node.js), don't need crazy configurations to build, and are self-contained (no private libraries), you don't need Docker :D
If your Node.js package.json has the standard scripts start and build, it will run on Heroku: just perform a git push to Heroku, without a Dockerfile. Heroku will detect Node.js and its version, and your app will start.
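For instance, a package.json along these lines is enough for the Node.js buildpack (the script contents are hypothetical):
{
  "scripts": {
    "build": "ng build",
    "start": "node server.js"
  }
}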
If your Java app has the standard Spring Boot configuration, it's the same: just push your code to Heroku. In this case, before the push, add the Postgres add-on manually and use environment variables in the JDBC URL in your application.properties.
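A minimal sketch of that wiring, assuming Heroku's Java buildpack, which derives a JDBC_DATABASE_URL environment variable (plus username/password counterparts) from the add-on's DATABASE_URL:
# application.properties - JDBC_DATABASE_URL is provided by Heroku at runtime
spring.datasource.url=${JDBC_DATABASE_URL}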
one process per app in heroku
If you have an API + frontend, you will need two apps in Heroku. Also, your API will need the Postgres add-on.
Heroku does not work like docker-compose, I mean: one host with all of your apps (front + api + db).
Docker
If you want to use Docker, just put the Dockerfile in place and git push. Heroku will detect that Docker is required and will perform the standard commands (docker build ..., docker run ...), so no extra configuration is required.
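If the automatic route doesn't kick in, the container registry commands of the heroku CLI are a concrete alternative (web is the standard process type):
heroku container:login
heroku container:push web # builds from the Dockerfile and pushes the image
heroku container:release web # deploys the pushed image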
heroku.yml
If Docker is mandatory for your apps, and the standard docker build ... and docker run ... are not enough for them, you will need heroku.yml.
You will need one heroku.yml per app in Heroku.
One advantage of this could be that manually adding the Postgres add-on is no longer required, because it can be defined in heroku.yml.
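Applied to the backend from the question above, a heroku.yml could look roughly like this sketch (the add-on plan and Dockerfile path are assumptions):
setup:
  addons:
    - plan: heroku-postgresql # replaces adding the add-on manually
build:
  docker:
    web: entauraIS/Dockerfile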
I have the following .gitlab-ci.yml
stages:
- test
- build
- art
image: golang:1.9.2
variables:
BIN_NAME: example
ARTIFACTS_DIR: artifacts
GO_PROJECT: example
GOPATH: /go
before_script:
- mkdir -p ${GOPATH}/src/${GO_PROJECT}
- mkdir -p ${CI_PROJECT_DIR}/${ARTIFACTS_DIR}
- go get -u github.com/golang/dep/cmd/dep
- cp -r ${CI_PROJECT_DIR}/* ${GOPATH}/src/${GO_PROJECT}/
- cd ${GOPATH}/src/${GO_PROJECT}
test:
stage: test
script:
# Run all tests
- go test -run ''
build:
stage: build
script:
# Compile and name the binary as `hello`
- go build -o hello
- pwd
- ls -l hello
# Execute the binary
- ./hello
# Move to gitlab build directory
- mv ./hello ${CI_PROJECT_DIR}
artifacts:
paths:
- ./hello
The issue is my program depends on both Go and MySQL...
I am aware I can have a different docker image for each stage, but my test stage needs both:
go test & MySQL
What I have looked into:
I have learned how to create my own docker image using docker commit, and also how to use a Dockerfile to build an image.
However, I have heard there are ways to link docker containers together using docker compose, and this seems like a better method...
I have no idea how to go about this in GitLab. I know I need a compose.yml file, but I'm not sure where to put it, what needs to go in it, or whether it creates an image that I then link to from my .gitlab-ci.yml file.
Perhaps this is overkill and there is a simpler way?
I understand your tests need a MySQL server in order to work and that you are using some kind of MySQL client or driver in your Go tests.
You can use a GitLab CI service which will be made available during your test job. GitLab CI will run a MySQL container beside your Go container, which will be reachable via its name from the Go container. For example:
test:
stage: test
services:
- mysql:5.7
variables:
# Configure mysql environment variables (https://hub.docker.com/_/mysql/)
MYSQL_DATABASE: mydb
MYSQL_ROOT_PASSWORD: password
script:
# Run all tests
- go test -run ''
This will start a MySQL container reachable from the Go container via the hostname mysql. Note you'll need to define variables for MySQL startup as per the Environment Variables section of the image documentation (such as the root password or the database to create).
You can also define the service globally (it will then be made available to each job in your build) and use an alias so the MySQL server is reachable via another hostname, as shown below.
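For example, declared globally with an alias (the alias name db is arbitrary):
# top level of .gitlab-ci.yml, outside any job
services:
  - name: mysql:5.7
    alias: db # every job can now reach the MySQL server at hostname "db"
variables:
  MYSQL_DATABASE: mydb
  MYSQL_ROOT_PASSWORD: password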
So we use GitLab CI. The issue was the pain of having to commit each time we wanted to test whether or not our build pipeline was configured correctly. Unfortunately there's no easy way to test GitLab CI locally when our containers/pipeline aren't working right.
Our solution: use docker-compose.yml as a CI pipeline runner for local testing of containerized build steps, why not, ya know...? Basically GitLab CI, and most others, have each section spawn a container to run a command, and won't continue until the preceding steps complete, i.e. the first step must fully complete and only then does the next step happen.
Here is a simple .gitlab-ci.yml file we use:
stages:
- install
- test
cache:
untracked: true
key: "$CI_COMMIT_REF_SLUG"
paths:
- node_modules/
install:
image: node:10.15.3
stage: install
script: npm install
test:
image: node:10.15.3
stage: test
script:
- npm run test
dependencies:
- install
Here is the docker-compose.yml file we converted it to:
version: "3.7"
services:
install:
image: node:10.15.3
working_dir: /home/node
user: node
entrypoint: npm
command:
- install
volumes:
- .:/home/node:Z
test:
image: node:10.15.3
working_dir: /home/node
user: node
entrypoint: npm
command:
- run
- test
volumes:
- .:/home/node:Z
depends_on:
- install
OK, now for the real issue here. The depends_on part of the compose file doesn't wait for the install container to finish; it just waits for the npm command to be running. Therefore, once the npm command is officially loaded up and running, the test container will start running and complain there are no node_modules yet. This happens because "npm is running" does not mean the npm command has actually finished.
Does anyone know any tricks to better control what Docker considers to be "done"? All the solutions I looked into were using some kind of wrapper script which watched some port on the internal docker network to wait for a service, like a db, to be fully turned on and ready.
When using k8s I can set up a readiness probe, which is super dope, but that doesn't seem to be a feature of Docker Compose. Am I wrong here? It would be nice to just write a command which Docker uses to determine what "done" means.
For now we must run each step manually and then run the next when the preceding step is complete like so:
docker-compose up install
wait ....
docker-compose up test
We really just want to say:
docker-compose up
and have all the steps complete in correct order by waiting for preceding steps.
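For what it's worth, Compose implementations that follow the newer Compose Specification (notably the docker compose v2 CLI) support exactly this via the long depends_on syntax. A sketch, assuming such a version is available:
test:
  image: node:10.15.3
  # ...same working_dir/user/entrypoint/command/volumes as above...
  depends_on:
    install:
      condition: service_completed_successfully # wait for install to exit with code 0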
I went through the same issue; this is a permission-related thing when you are mapping a volume from your local machine into Docker.
volumes:
- .:/home/node:Z
Create a file inside the container, and check the permissions of that same file on your local machine. If you see the root user (or anything else) as the owner instead of your current user, you first have to run
export DOCKER_USER="$(id -u):$(id -g)"
and change
user: node
by
user: $DOCKER_USER
PS: I'm assuming you can run docker without having to use sudo; just mentioning this because that is the scenario I have.
This question was asked many years ago. I now use this project: https://github.com/firecow/gitlab-ci-local
It runs your Gitlab Pipeline locally using docker just as you would expect it to run.
I was previously using the shell executor for my GitLab runner to build my project. So far I have set up a pipeline that will run whatever commands I have set in the gitlab-ci.yml file seen below:
gitlab-ci.yml using shell runner
before_script:
- npm install
- npm install --save @angular/material @angular/cdk
cache:
paths:
- node_modules/
stages:
- dev
- staging
- production
build_dev:
stage: dev
script:
- rm ./package-lock.json
- npm run build
- ./node_modules/@angular/cli/bin/ng test --browsers PhantomJS --watch false
Now, I want to switch to a docker image. I have reconfigured the runner to use a docker image, and I specified the image in my new gitlab-ci.yml file seen below. I followed the gitlab-ci docker tutorial, and this is where it left off, so I'm not entirely sure where to go from here:
gitlab-ci.yml using docker runner
image: node:8.10.0
before_script:
- npm install
- npm install --save @angular/material @angular/cdk
cache:
paths:
- node_modules/
stages:
- dev
- staging
- production
build_dev:
stage: dev
script:
- rm ./package-lock.json
- npm run build
- ./node_modules/@angular/cli/bin/ng test --browsers PhantomJS --watch false
Questions:
With my current gitlab-ci.yml file, how does this build a docker image, and does it even build one? If it does, what does that mean? Currently the pipeline passes, but I have no idea if it ran in a docker image or not (am I supposed to be able to tell?).
Also, let's say the docker image was created, ran the tests, and the pipeline passed; it should then push the code to a new repository (not included in the yml file yet). From what I gathered, the image isn't being pushed, it's just the code, right? So what do I do with this created docker image?
How does the Dockerfile get used? I see no link between the gitlab-ci.yml file and the Dockerfile.
Do I need to wrap all commands in the gitlab-ci.yml file in docker run <commands> or docker exec <commands>? Without including one of these 2 commands, it seems like it would just run on the server and not in a docker image.
I've seen people specify an image in both the gitlab-ci.yml file and the Dockerfile. I have an angular project, and I specified an image of image: node:8.10.0. In the Dockerfile, should I specify the same image? I've seen some projects where they are completely different, and I'm wondering what the use of both images is, and whether picking one image over another will severely impact my builds.
You have to take a different approach to building your app if you want to fully dockerize it. Move the Angular build into the Dockerfile and put docker operations inside your .gitlab-ci.yml instead of the Angular stuff, like here:
stages:
- build
# - release
# - deploy
.build_template: &build_definition
stage: build
image: docker:17.06
services:
- docker:17.06-dind
script:
- docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
- docker pull $CONTAINER_RELEASE_IMAGE || true
- docker build --cache-from $CONTAINER_RELEASE_IMAGE -t $CONTAINER_IMAGE -f $DOCKERFILE ./
- docker push $CONTAINER_IMAGE
build_app_job:
<<: *build_definition
variables:
CONTAINER_IMAGE: $CI_REGISTRY_IMAGE/app:$CI_COMMIT_REF_SLUG
CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE/app:latest
DOCKERFILE: ./Dockerfile.app
build_nginx_job:
<<: *build_definition
variables:
CONTAINER_IMAGE: $CI_REGISTRY_IMAGE/nginx:$CI_COMMIT_REF_SLUG
CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE/nginx:latest
DOCKERFILE: ./Dockerfile
You can set up a few build jobs - for production, development, staging etc.
Right next to your .gitlab-ci.yml you can put Dockerfile and Dockerfile.app - Dockerfile.app stands for building your Angular app:
FROM node:10.5.0-stretch
RUN mkdir -p /usr/src/app
RUN mkdir -p /usr/src/remote
WORKDIR /usr/src/app
COPY . .
# do your commands here
Now with your app built, it can be served via a web server - it's your choice, and a different configuration follows with each choice - I can't even scratch the surface here. That'd be implemented in the Dockerfile - we usually use Nginx in our company.
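To make that concrete, here's a minimal sketch of an Nginx variant (the npm build script and the dist output path are assumptions - the Angular CLI writes to dist/<project-name> by default):
# build stage: compile the Angular app
FROM node:10.5.0-stretch AS build
WORKDIR /usr/src/app
COPY . .
RUN npm install && npm run build
# serve stage: ship only the compiled assets with Nginx
FROM nginx:alpine
COPY --from=build /usr/src/app/dist /usr/share/nginx/html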
From here on it's about releasing your images and deploying them. I've only specified how to build them in docker as it seems this is what the question is about.
If you want to deploy your image and run it somewhere - choose a provider - AWS, Heroku, your own infrastructure - have it your way, but this is far too much to cover in a single answer, so I'll leave it for another question once you specify where you'd like to deploy your newly built images and how you would like to serve them. In our company, we orchestrate things with Rancher, but there are multiple awesome and competing options in the market.
Edit for a custom registry
The above .gitlab-ci configuration works with GitLab's "internal" registry only; in case you want to utilize your own registry, change the values accordingly:
#previous configs
script:
- docker login -u mysecretlogin -p mysecretpasswd registry.local.com
# further configs
from -u gitlab-ci-token to your login in the registry,
from $CI_JOB_TOKEN to your password
from $CI_REGISTRY to your registry address
Those values should be stored in Gitlab's CI secret variables and referenced via env variables so that they are not saved in the repository.
Finally, your script might look like the one below in case you decide to protect these values. Refer to Gitlab's official docs on how to add secret CI variables - a super easy task.
#previous configs
script:
- docker login -u $registrylogin -p $registrypasswd $registryaddress
# further configs
I am pretty new to Docker and Docker Compose.
I want to use docker compose to test my project and publish it if the tests pass. If the tests fail, it should not publish the app at all.
Here is my docker-compose.yml
version: '3'
services:
mongodb:
image: mongo
test:
build:
context: .
dockerfile: Dockerfile.tests
links:
- mongodb
publish:
build:
context: .
dockerfile: Dockerfile.publish
?? # I want to say here that the publish step is dependent on test.
After that, in my testAndPublish.sh file, I would like to say:
docker-compose up
if [ $? = 0 ]; then # If all the services succeed
....
else
....
fi
So if the test or publish steps fail, I am not going to push it.
How can I build step-like processes in docker-compose?
Thanks.
I think you're trying to do everything with docker-compose, which is the wrong way around.
When it comes to CI (e.g. Travis or CircleCI) I always make my workflow as follows:
let's say you have a web node and a database node
In travis.yml or circle.yml, at the install step, I always put things like docker-compose run web npm install and the like
at the test step I would put docker-compose run web npm test or something similar, like docker-compose run web my-test-script.sh; that way you'll know that the tests will run in the declared docker environment. If they fail, this step fails and the whole test step in the CI fails, which is desired
at the deploy step I would run some deploy.sh script which will build the image from the Dockerfile (the one that web uses) and push it, for example, to Docker Hub.
This way your CI test routine still depends on the specific Docker environment, but the deploy push (which doesn't need Docker) is kept separate from the application, which makes it more convenient imho.
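Put together, that workflow could look like this .travis.yml sketch (the service name web, the npm scripts, and deploy.sh are assumptions):
# .travis.yml - compose-driven install and test steps, then a scripted deploy
language: minimal
services:
  - docker
install:
  - docker-compose run web npm install
script:
  - docker-compose run web npm test
deploy:
  provider: script
  script: ./deploy.sh # builds the web image from its Dockerfile and pushes it
  on:
    branch: master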