I want to run conform as part of my pipeline to check commit messages, but the container image lacks a shell and its entrypoint is /conform with the argument enforce. My .gitlab-ci.yml should look like:
conform:
  image: docker.io/autonomy/conform:latest
without a script section, but as far as I know this is not allowed in GitLab.
There is a GitLab issue open on this.
You can always install conform as part of your CI:
conformJob:
  image: golang
  script:
    - go get github.com/talos-systems/conform
    - conform enforce
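Note that on newer Go toolchains (1.17+), go get no longer installs binaries. A rough sketch of the equivalent job using go install, assuming the main package still lives at the import path used above:
conformJob:
  image: golang
  script:
    # go install replaces "go get" for installing binaries on Go 1.17+;
    # /go/bin is already on PATH in the official golang image
    - go install github.com/talos-systems/conform@latest
    - conform enforce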
I am trying to run a script (unit tests) that uses Docker behind the scenes in CI. The script works as expected on DroneCI, but after switching to Cloud Build it is not clear how to set up DinD.
For DroneCI I basically use DinD as shown here. My question is: how do I translate that setup to Google Cloud Build? Is it even possible?
I searched the internet for the Cloud Build syntax for DinD and couldn't find anything.
Cloud Build lets you create Docker container images from your source code. The Cloud SDK provides the builds submit subcommand for using this service easily.
For example, here is a simple command to build a Docker image:
gcloud builds submit -t gcr.io/my-project/my-image
This command sends the files in the current directory to Google Cloud Storage; then one of the Cloud Build VMs fetches the source code, runs docker build, and uploads the image to Container Registry.
By default, Cloud Build runs the docker build command to build the image. You can also customize the build pipeline with custom build steps. Since you can use any arbitrary Docker image as a build step, and the source code is available, you can run unit tests as a build step. That way you always run the tests with the same Docker image. There is a demonstration repository at cloudbuild-test-runner-example; this tutorial uses the demonstration repository as part of its instructions.
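For illustration, a minimal cloudbuild.yaml sketch along those lines (the project ID, image name, and test command are placeholders, not taken from the demonstration repository):
steps:
  # Build the application image from the Dockerfile in the repository
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/my-project/my-image', '.']
  # Run the unit tests inside the image that was just built
  - name: 'gcr.io/my-project/my-image'
    entrypoint: 'npm'
    args: ['test']
# The image is pushed to Container Registry only if all steps succeed
images:
  - 'gcr.io/my-project/my-image'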
I would also recommend having a look at these links covering similar use cases:
Running Integration test on Google cloud build
Google cloud build pipeline
I managed to figure out a way to run Docker-in-Docker (DinD) in Cloud Build. To do that, we need to launch a service in the background with docker-compose. Your docker-compose.yml file should look something like this:
version: '3'
services:
  dind-service:
    image: docker:<dind-version>-dind
    privileged: true
    ports:
      - "127.0.0.1:2375:2375"
      - "127.0.0.1:2376:2376"
networks:
  default:
    external:
      name: cloudbuild
In my case, I had no problem using versions 18.03 or 18.09; later versions should also work. Secondly, it is important to attach the container to the cloudbuild network. This way the DinD container will be on the same network as every container spawned during your build steps.
To start the service you need to add a step to your cloudbuild.yml file.
- id: start-dind
  name: docker/compose
  args: ['-f', 'docker-compose.yml', 'up', '-d', 'dind-service']
To validate that the dind service works as expected, you can just create a ping step.
- id: 'Check service is listening'
  name: gcr.io/cloud-builders/curl
  args: ["dind-service:2375"]
  waitFor: [start-dind]
Now, if it works, you can run your script as normal with DinD running in the background. What is important is to pass the DOCKER_HOST environment variable so that the Docker client can locate the Docker engine.
- id: my-script
  name: my-image
  script: myscript
  env:
    - 'DOCKER_HOST=tcp://dind-service:2375'
Take note that any container spawned by your script will live inside dind-service, so any request to it should go to http://dind-service rather than http://localhost. Moreover, if you use private images you will need some kind of authentication before running your script. For that, run gcloud auth configure-docker --quiet before running your script, and make sure your Docker image has gcloud installed. This creates the required authentication credentials to run your app. The credentials are saved in a path relative to the $HOME variable, so make sure your app is able to access it; you might run into problems if you use tox, for example.
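For example, the script step from above could fold the authentication call in like this (a sketch only; the image and script names are still placeholders, and the image is assumed to have both gcloud and the Docker client installed):
- id: my-script
  name: my-image
  entrypoint: sh
  args:
    - -c
    - |
      # create Docker credentials for gcr.io before the script runs
      gcloud auth configure-docker --quiet
      ./myscript
  env:
    - 'DOCKER_HOST=tcp://dind-service:2375'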
I have a question regarding a build configuration in TeamCity.
We are developing a Python (Flask) REST API where a SQL database holds the data.
The Flask server and the PostgreSQL server each run in a Docker container.
Our repository contains a docker-compose file which starts all necessary containers.
Now I want to set up a build configuration in TeamCity where the repository is pulled, the containers are built, the docker-compose file is brought up, and all test functions (pytest) in my Flask application are run. I want to get the test report, and the docker-compose down command should be run afterwards.
My first approach, using a command line build step and issuing the commands, works, but I don't get the test reports. I'm not even getting the correct exit code (tests fail, but the build configuration is marked as successful).
Can you give me a hint on the best strategy for this task: building, testing, and deploying an application which is built out of multiple Docker containers (i.e. a docker-compose file)?
Thanks
Jakob
I'm working with a similar configuration: a FastAPI application that uses the Firebase Emulator suite to run pytest cases against. Perhaps you will find these build steps suitable for your needs too. I get both test reports and coverage using the built-in runners.
Reading the TeamCity On-Premises documentation, I found that a build step command running within a Docker container will pick up the TEAMCITY_DOCKER_NETWORK env var if a previous step ran Docker Compose. This variable is then passed to your build step that runs in Docker via the --network flag, allowing you to communicate with the services started in docker-compose.yml.
Three steps are required to get this working:
Using the Docker runner, build the container in which you will run pytest. Here I'm using %build-counter% to give it a unique tag.
Using the Docker Compose runner, bring up the other services that your tests rely on (the PostgreSQL service in your case). I am using teamcity-services.yml here because docker-compose.yml is already used by my team for local development.
Using the Python runner, run pytest within the container that was built in step 1. I use the suggested teamcity-messages and coverage.py, which get installed using pip install inside the container before executing pytest. My container already has pytest installed; if you look through "Show advanced options" there's a checkbox that will let you "Autoinstall the tool on command run", but I haven't tried it out.
Contents of my teamcity-services.yml, exposing endpoints that my app uses when running pytest.
version: "3"
services:
firebase-emulator:
image: my-firebase-emulator:0.1.13
expose:
- "9099" # auth
- "8080" # firestore
- "9000" # realtime database
command: ./emulate.sh -o auth,firestore,database
A hypothetical app/tests/test_auth.py run by pytest, which connects to my auth endpoint on firebase-emulator:9099. Notice how I am using the service name defined in teamcity-services.yml and an exposed port.
def test_auth_connect(fb):
    auth = fb.auth.connect("firebase-emulator:9099")
    assert auth.connected
In my actual application, instead of hardcoding the hostname/port, I pass them as environment variables which can also be defined using TeamCity "Parameters" page in the build configuration.
I have the following gitlab job:
build-test-consumer-ui:
  stage: build
  services:
    - name: postgres:11.2
      alias: postgres
    - name: my-repo/special-hasura
      alias: graphql-engine
  image: node:13.8.0-alpine
  variables:
    FF_NETWORK_PER_BUILD: 1
  script:
    - wget -q http://graphql-engine/v1/version
The docker image my-repo/special-hasura looks more or less like this:
FROM hasura/graphql-engine:v1.3.3.cli-migrations
ENV HASURA_GRAPHQL_DATABASE_URL="postgres://postgres:#postgres:/postgres"
ENV HASURA_GRAPHQL_ENABLED_LOG_TYPES="startup, http-log, webhook-log, websocket-log, query-log"
COPY ./my-migrations /hasura-migrations
EXPOSE 8080
When I run my GitLab job, I see that my Hasura instance initializes properly, i.e. it can connect to postgres without any problem (the connection URL HASURA_GRAPHQL_DATABASE_URL seems to be fine). However, I cannot access my Hasura instance from my job's container in the script section. The output of the command is:
wget: bad address 'graphql-engine'
I suppose that the job's container is not on the same network as the service containers. How can I communicate with the graphql-engine service from my job's container? I am currently using gitlab-runner 13.2.4.
EDIT
Looking at the number of answers to this question, I guess there is no easy way. Therefore I'll switch to docker-compose. Instead of using the services that I can theoretically define in my job, I'll use docker-compose in my job, and that will achieve exactly the same purpose.
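For completeness, a rough sketch of what such a docker-compose based job could look like (the DinD setup, image tags, and compose service names are assumptions, not tested configuration):
build-test-consumer-ui:
  stage: build
  image: docker:20.10
  services:
    - docker:20.10-dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    # assumes docker-compose is installable from the image's package repos
    - apk add --no-cache docker-compose
    - docker-compose up -d postgres graphql-engine
    # run the check from a service on the same compose network
    - docker-compose run --rm tests wget -q http://graphql-engine/v1/version
    - docker-compose down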
I am pretty new to Docker and Docker compose.
I want to use docker-compose to test my project and publish it if the tests pass. If the tests fail, it should not publish the app at all.
Here is my docker-compose.yml
version: '3'
services:
  mongodb:
    image: mongo
  test:
    build:
      context: .
      dockerfile: Dockerfile.tests
    links:
      - mongodb
  publish:
    build:
      context: .
      dockerfile: Dockerfile.publish
    ?? # I want to say here that the publish step depends on test.
After that, in my testAndPublish.sh file, I would like to say:
docker-compose up
if [ $? = 0 ]; then # If all the services succeed
  ....
else
  ....
fi
So if the test or publish steps fail, I am not going to push it.
How can I build step-like processes in docker-compose?
Thanks.
I think you're trying to do everything with docker-compose, which is the wrong way around.
When it comes to CI (e.g. Travis or CircleCI) I always make my workflow as follows:
Let's say you have a web node and a database node.
In .travis.yml or circle.yml, at the install step, I always put things like docker-compose run web npm install and so on.
At the test step I would put docker-compose run web npm test or something similar, like docker-compose run web my-test-script.sh. That way you know the tests run in the declared Docker environment; if they fail, this step fails and the whole test step in the CI fails, which is desired.
At the deploy step I would run some deploy.sh script which builds the image from the Dockerfile (the one that web uses) and pushes it, for example, to Docker Hub.
This way your CI test routine still depends on a specific Docker environment, but the deploy push (which doesn't need Docker) is kept separate from the application, which makes it more convenient imho.
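As an illustration, a hedged .travis.yml sketch of that flow (the web service name and the deploy.sh script are placeholders):
services:
  - docker
install:
  - docker-compose build
  - docker-compose run web npm install
script:
  - docker-compose run web npm test
deploy:
  provider: script
  script: ./deploy.sh   # builds the web image and pushes it, e.g. to Docker Hub
  on:
    branch: master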
I've noticed a problem when configuring my gitlab-ci and gitlab-runner.
I want to have a few separate application environments on one server, running on different external ports, but using the same Docker image.
What I want to achieve:
deploy-dev running Apache at port 80 in the container, but at external port 81
deploy-rc running Apache at port 80 in the container, but at external port 82
I've seen that docker run has a --publish argument that allows port binding, like 80:81, but unfortunately I can't find any option in gitlab-ci.yml or gitlab-runner's config.toml to set that argument.
Is there any way to achieve port binding in Docker containers run by gitlab-runner?
My gitlab-ci.yml:
before_script:
  # Install dependencies
  - bash ci/docker_install.sh > /dev/null

deploy:
  image: webdevops/php-apache:centos-7-php56
  stage: deploy
  only:
    - dockertest
  script:
    - composer self-update
    - export SYMFONY_ENV=dev
    - composer install
    - app/console doc:sch:up --force
    - app/console doc:fix:load -e=dev -n
    - app/console ass:install
    - app/console ass:dump -e=dev
  tags:
    - php
You're confusing two concepts: continuous integration tasks and Docker deployment.
What you have configured is a continuous integration task. The idea is that these perform build steps and complete. GitLab CI will record the results of each build step and present them back to you. These can be Docker jobs themselves, though they don't have to be.
What you want to do is deploy to Docker, that is to say, start a Docker container that contains your program. Going through this is probably beyond the scope of a Stack Overflow answer, but I'll try my best to outline what you need to do.
First, take the script you already have and turn it into a Dockerfile. Your Dockerfile will need to add all the code in your repo and then perform the composer / console scripts you list. Use docker build to turn this Dockerfile into a Docker image.
Next, you can (optionally) upload the Docker image to a repository.
The final step is to perform a docker run command that loads up your image and runs it.
This sounds complicated, but it's really not. I have a CI pipeline that does this. One step runs docker build ... followed by docker push ..., and the next step runs docker run ... to spawn the new container.
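Put together, a rough .gitlab-ci.yml sketch of such a pipeline (registry, image names, and container names are assumptions; the runner executing it needs access to a Docker daemon, e.g. a shell runner or a docker:dind service):
deploy-dev:
  stage: deploy
  only:
    - dockertest
  script:
    - docker build -t registry.example.com/my-app:dev .
    - docker push registry.example.com/my-app:dev
    # replace any previous instance of this environment
    - docker rm -f my-app-dev || true
    # external port 81 maps to Apache on port 80 inside the container
    - docker run -d --name my-app-dev --publish 81:80 registry.example.com/my-app:dev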