TeamCity: run pytest in a docker-compose environment

I have a question regarding a build configuration in TeamCity.
We are developing a Python (Flask) REST API backed by a SQL database.
The Flask server and the PostgreSQL server each run in a Docker container.
Our repository contains a docker-compose file which starts all necessary containers.
Now I want to set up a build configuration in TeamCity where the repository is pulled, the containers are built, the docker-compose file is brought up, and all test functions (pytest) in my Flask application are run. I want to get the test report, and afterwards docker-compose down should be run.
My first approach, using a command line build step and issuing the commands directly, works, but I don't get the test reports. I don't even get the correct exit code (a test fails, but the build configuration is marked as successful).
Can you give me a hint what the best strategy would be for this task:
building, testing, and deploying an application that is composed of multiple Docker containers (i.e. a docker-compose file)?
Thanks
Jakob

I'm working with a similar configuration: a FastAPI application that uses the Firebase Emulator Suite to run pytest cases against. Perhaps you will find these build steps suitable for your needs too. I get both test reports and coverage using only the built-in runners.
Reading the TeamCity On-Premises documentation, I found that a build step command running inside a Docker container will pick up the TEAMCITY_DOCKER_NETWORK environment variable if a previous step ran Docker Compose. This variable is then passed to your build step's container via the --network flag, allowing it to communicate with the services started from docker-compose.yml.
Three steps are required to get this working (please ignore the numbering in the screenshots; I also have other steps configured):
1. Using the Docker runner, build the container in which you will run pytest. Here I'm using %build-counter% to give it a unique tag.
2. Using the Docker Compose runner, bring up the other services that your tests rely on (the PostgreSQL service in your case). I am using a teamcity-services.yml here because docker-compose.yml is already used by my team for local development.
3. Using the Python runner, run pytest inside the container that was built in step 1. I use the suggested teamcity-messages and coverage.py, which get installed with pip install inside the container before pytest is executed. My container already has pytest installed; under "Show advanced options" there is a checkbox "Autoinstall the tool on command run", but I haven't tried it.
Contents of my teamcity-services.yml, exposing the endpoints that my app uses when running pytest:
version: "3"
services:
  firebase-emulator:
    image: my-firebase-emulator:0.1.13
    expose:
      - "9099" # auth
      - "8080" # firestore
      - "9000" # realtime database
    command: ./emulate.sh -o auth,firestore,database
A hypothetical app/tests/test_auth.py run by pytest, which connects to my auth endpoint at firebase-emulator:9099. Notice how I am using the service name defined in teamcity-services.yml and an exposed port.
def test_auth_connect(fb):
    auth = fb.auth.connect("firebase-emulator:9099")
    assert auth.connected
In my actual application, instead of hardcoding the hostname and port, I pass them as environment variables, which can also be defined on the TeamCity "Parameters" page of the build configuration.
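For the Flask/PostgreSQL setup in the original question, a services file along the same lines might look like the sketch below. The image tag, exposed port, and password are assumptions, not taken from the question.

version: "3"
services:
  postgres:
    image: postgres:13              # assumed image tag
    expose:
      - "5432"                      # default PostgreSQL port, reachable as postgres:5432
    environment:
      - POSTGRES_PASSWORD=postgres  # placeholder credential for CI only

The pytest step would then connect to postgres:5432 over the Compose network that TeamCity passes to the build step via TEAMCITY_DOCKER_NETWORK.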

Related

How to setup Docker in Docker (DinD) on CloudBuild?

I am trying to run a script (unit tests) that uses Docker behind the scenes on a CI system. The script works as expected on Drone CI, but after switching to Cloud Build it is not clear how to set up DinD.
For Drone CI I basically use DinD as shown here. My question is: how do I translate that setup to Google Cloud Build? Is it even possible?
I searched the internet for the syntax of Cloud Build with respect to DinD and couldn't find anything.
Cloud Build lets you create Docker container images from your source code. The Cloud SDK provides the gcloud builds subcommand for using this service easily.
For example, here is a simple command to build a Docker image:
gcloud builds submit -t gcr.io/my-project/my-image
This command sends the files in the current directory to Google Cloud Storage; then one of the Cloud Build VMs fetches the source code, runs docker build, and uploads the image to Container Registry.
By default, Cloud Build runs the docker build command to build the image. You can also customize the build pipeline with custom build steps. Since you can use any arbitrary Docker image as a build step, and the source code is available, you can run unit tests as a build step. By doing so, you always run the tests with the same Docker image. There is a demonstration repository at cloudbuild-test-runner-example, which this tutorial uses as part of its instructions.
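As a rough illustration of that idea, a cloudbuild.yaml with a dedicated test step might look like the following sketch; the Python image, requirements file, and image name are assumptions and are not taken from the linked repository.

steps:
  # Run the unit tests in a fixed, known image before anything is built
  - id: unit-tests
    name: python:3.9
    entrypoint: bash
    args: ['-c', 'pip install -r requirements.txt && pytest']
  # Build the application image only if the test step above succeeded
  - id: build-image
    name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-image', '.']
images:
  - 'gcr.io/$PROJECT_ID/my-image'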
I would also recommend having a look at these informative links covering similar use cases:
Running Integration test on Google cloud build
Google cloud build pipeline
I managed to figure out a way to run Docker-in-Docker (DinD) in Cloud Build. To do that, we need to launch a service in the background with docker-compose. Your docker-compose.yml should look something like this:
version: '3'
services:
  dind-service:
    image: docker:<dind-version>-dind
    privileged: true
    ports:
      - "127.0.0.1:2375:2375"
      - "127.0.0.1:2376:2376"
networks:
  default:
    external:
      name: cloudbuild
In my case, I had no problem using versions 18.03 or 18.09; later versions should also work. Secondly, it is important to attach the container to the cloudbuild network. This way the dind container will be on the same network as every container spawned during your build steps.
To start the service you need to add a step to your cloudbuild.yml file.
- id: start-dind
  name: docker/compose
  args: ['-f', 'docker-compose.yml', 'up', '-d', 'dind-service']
To validate that the dind service works as expected, you can just create a ping step.
- id: 'Check service is listening'
  name: gcr.io/cloud-builders/curl
  args: ["dind-service:2375"]
  waitFor: [start-dind]
Now, if it works, you can run your script as normal with dind in the background. What is important is to pass the DOCKER_HOST environment variable so that the Docker client can locate the Docker engine.
- id: my-script
  name: my-image
  script: myscript
  env:
    - 'DOCKER_HOST=tcp://dind-service:2375'
Take note: any container spawned by your script will be located inside the dind-service, so if you make requests to it you shouldn't address http://localhost but http://dind-service instead.
Moreover, if you use private images you will need some form of authentication before running your script. For that, run gcloud auth configure-docker --quiet before your script, and make sure your Docker image has gcloud installed. This creates the authentication credentials required to pull your images. The credentials are saved in a path relative to the $HOME variable, so make sure your app is able to access it; you might run into problems if you use tox, for example.
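A sketch of how that authentication could be folded into the script step shown earlier; my-image and myscript are the placeholders from above, the image is assumed to have both gcloud and the Docker client installed, and the exact layout is an assumption.

- id: my-script
  name: my-image              # assumed to contain gcloud and the docker client
  script: |
    # Authenticate the Docker client against the registry before the script pulls private images
    gcloud auth configure-docker --quiet
    myscript
  env:
    - 'DOCKER_HOST=tcp://dind-service:2375'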

Network between two cloud build builder containers

I am trying to run my Go app's integration tests in Cloud Build. Part of this testing requires my app-under-test to write to and read from Redis. My approach was to start a custom-built container in build STEP-1 that runs redis-server in the foreground, using this command as the step args with a bash entrypoint:
docker run -d -e ENV_A=123 -e ENV_b=456 my-custom-redis-image
Using -d ensures it runs detached, allowing that build step to complete while leaving Redis up and listening for connections. Redis is configured to listen on 0.0.0.0:6379. Then, in say STEP-2, I want to compile my Go integration tests and run them as follows, using the golang:1.16.4 builder container:
go test -tags=integration ./...
When that test runs, it creates a Redis client that wants to connect to the running Redis instance from STEP-1. However, I do not know what IP to configure in my client so that it can reach the Redis instance running in STEP-1. I understand that there is a "cloudbuild" network that is used for all the Docker instances in Cloud Build. My confusion is how to configure my client to talk to Redis in another running container. Is it even possible?
If not, how do you handle test dependencies like this, where tests need N external services to work? I have unit tests and they work fine because unit tests do not have any external dependencies. However, my integration tests need to not only use Redis, but also connect to GCS, Pub/Sub, and BigQuery to complete various test scenarios; these tests are much more black-box-like compared to unit tests. The connections to those GCP services work fine from the integration tests because they are public Google APIs. I just need to figure out how to connect to my test Redis server running in Cloud Build. These tests need to communicate with the real services and not mocks or fakes, so that is not an option.
Appreciate any guidance on this.
Thanks to @JohnHanley for his pointer to the link. From that link, I came to understand what I needed to do so that my Go integration tests could access Redis, where Redis is started in one Cloud Build step and the tests are run in a subsequent build step.
For anyone who is interested in what this might look like, below are the two build steps in my Cloud Build YAML as well as the Docker Compose file I am using to automate my integration test runs. This demonstrates the general approach whereby you can spin up one or more containers that provide services needed to support your integration tests.
CloudBuild YAML example file:
substitutions:
  _BITBUCKET_CLONE_PRIVATE_KEY: BITBUCKET_CLONE_PRIVATE_KEY
  _RESOURCES_PROJECT: my-gcp-resources-project-id
  _GOMODCACHE: /workspace/go/pkg/mod
  _SSHKEYFILE: /root/.ssh/keyfile

steps:
  # Clone the application repo from the mirrored source repository in GCP
  - id: clone-repo
    name: gcr.io/cloud-builders/gcloud
    args:
      [
        "source",
        "repos",
        "clone",
        "${REPO_NAME}",
        "--project",
        "${_RESOURCES_PROJECT}",
      ]

  # Check out the commit that caused this build to be triggered
  - id: checkout-commit
    name: gcr.io/cloud-builders/git
    entrypoint: /bin/bash
    dir: ${REPO_NAME}
    args:
      - -c
      - |
        git checkout ${COMMIT_SHA}
        git log -n1

  # We need to start up an instance of Redis before running the integration tests as they need
  # to read and write in Redis. We start it in detached mode (-d) so that the Redis server starts
  # up in the container but then does not block this build step. This allows the next build step
  # to run the integration tests against this already-running Redis instance. The compose YAML
  # file must define the cloudbuild network in its configuration. This ensures that Redis,
  # running in this container, will expose port 6379 and hostname "redis" on that cloudbuild
  # network. That same network will be made available inside each Cloud Build builder container
  # that is started in each build step. The next build step is going to run integration tests
  # that expect to connect to "redis:6379" on the cloudbuild network.
  - id: start-redis
    name: "docker/compose:1.19.0"
    args:
      [
        "-f",
        "build/docker-compose.yaml",
        "up",
        "-d",
      ]

  # To build the Go application, we need to do some special setup to allow Go to pull
  # dependent modules, which reside in our private Bitbucket repos, into the build, where
  # authentication via SSH keys is the only way we can authenticate without user interaction.
  # First we copy the private key, preloaded into Secret Manager, into the root ssh directory
  # where the SSH client will be looking for it. We then construct an SSH config file that
  # points to that identity file, forces SSH to ONLY try to authenticate with SSH keys, and
  # does not prompt for host authenticity, as that would otherwise require user interaction.
  # Additionally, we must update the git config to automatically replace URLs that attempt to
  # access Bitbucket via HTTPS with ones that use SSH instead. This is an important and
  # non-obvious required setting and is needed because "go get" attempts to access repos via
  # HTTPS only and we have to leverage this special feature of git to effectively rewrite
  # those outbound URLs to use SSH instead. Finally, we point GOMODCACHE to a directory under
  # /workspace and build and run the integration tests. Note that we must specifically declare
  # that we are using a specific secret in this build step and use a double "$$" when
  # referencing that secret. We set GOMODCACHE to a directory under the workspace directory so
  # that the pulled modules are preserved across multiple build steps and can be reused by
  # subsequent build steps that compile and run unit tests as well as compile the application
  # binary after all tests pass. This reduces the build times for those subsequent build steps
  # as they only need to pull modules not already in the GOMODCACHE directory.
  - id: integration-test
    name: golang:1.16.4
    entrypoint: /bin/bash
    dir: ${REPO_NAME}/integration_tests
    args:
      - -c
      - |
        mkdir -p /root/.ssh/ && \
        echo "$$BITBUCKET_CLONE_PRIVATE_KEY" > $_SSHKEYFILE && \
        chmod 600 $_SSHKEYFILE && \
        echo -e "Host bitbucket.org\n IdentitiesOnly=yes\n IdentityFile=$_SSHKEYFILE\n StrictHostKeyChecking=no" > /root/.ssh/config && \
        git config --global url."git@bitbucket.org:my-org-name".insteadof "https://bitbucket.org/my-org-name" && \
        go env -w GOMODCACHE=$_GOMODCACHE && \
        CGO_ENABLED=0 go test -count=1 -tags=integration
    secretEnv: ["BITBUCKET_CLONE_PRIVATE_KEY"]
    env:
      # Pass the current project in and tell the integration test framework that we are running
      # in Cloud Build so it can behave accordingly. For example, when I run the integration
      # tests from my workstation, QUASAR_DPLY_ENV=devel1, which is the name of my development
      # environment. The test logic picks up on that and will attempt to connect to a
      # locally-running Redis container on localhost:6379 that I am responsible for running
      # before testing. However, when running here in Cloud Build, the test framework expects to
      # connect to redis:6379 because, in Cloud Build, Redis is running from Docker Compose and
      # will be listening on the cloudbuild Docker network and reachable via the hostname
      # "redis", not localhost.
      - "DATADIRECTOR_INTTEST_GCP_PROJECT=$PROJECT_ID"
      - "QUASAR_DPLY_ENV=cloudbuild"

# We need access to the clear-text SSH private key to use during the Go application build, so we
# fetch it here from Secret Manager to make it available in other build steps
availableSecrets:
  secretManager:
    - versionName: projects/${_RESOURCES_PROJECT}/secrets/${_BITBUCKET_CLONE_PRIVATE_KEY}/versions/1
      env: BITBUCKET_CLONE_PRIVATE_KEY

logsBucket: 'gs://${_RESOURCES_PROJECT}-cloudbuild-logs'
Docker compose YAML file
This is a copy of the build/docker-compose.yaml file used in the "start-redis" build step of the example cloudbuild.yaml above:
# build/docker-compose.yaml
version: "3"
services:
  redis:
    image: redis:6.2.5-buster
    container_name: redis
    # We must specify this special network to ensure that the Redis service exposes its
    # listening port on the network that all the other builder containers use. Specifically, the
    # integration tests need to connect to this instance of Redis from a separate running
    # container that is part of a build step that follows the step that runs this compose file.
    network_mode: cloudbuild
    # Ensure that we expose the listening port so that other containers, connected to the same
    # cloudbuild network, will be able to reach Redis
    expose:
      - 6379
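For comparison, when running the same tests locally (the devel1 case mentioned in the comments above), a compose file that publishes Redis on localhost instead of joining the cloudbuild network might look like the sketch below; this is an assumption and not part of the original answer.

version: "3"
services:
  redis:
    image: redis:6.2.5-buster
    ports:
      - "127.0.0.1:6379:6379"   # reachable at localhost:6379 for local test runs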

github workflow: "ECONNREFUSED 127.0.0.1:***" error when connecting to docker container

In my GitHub Actions workflow I am getting this error (ECONNREFUSED) while running my Jest test script. The test uses axios to connect to my API, which runs in a container bootstrapped via docker-compose (created during the GitHub workflow itself). That network has just 2 containers: the API and Postgres. So my test script is, I assume, on the "host network" (the GitHub workflow runner), but it couldn't reach the Docker network via the containers' mapped ports.
I then skipped the Jest tests entirely and just tried to ping the containers directly. That didn't work either.
I then modified the workflow to inspect the default Docker network that should have been created.
UPDATE 1
I've narrowed down the issue as follows. I modified the compose file to rely on the default network (I no longer have a networks: section in my compose file) and inspected the bridge network again.
So it looks as though the containers were never attached to the default bridge network.
UPDATE 2
It looks like I just have the wrong paradigm. After reading this: https://help.github.com/en/actions/configuring-and-managing-workflows/about-service-containers
I realise this is not how GitHub Actions expects us to instantiate containers at all. It looks like I should be using services: nodes inside the workflow file, not containers from my own docker-compose files. 🤔 Gonna try that...
So the answer is:
1. Do not use docker-compose to build your own custom containers; GitHub Actions does not support this yet.
2. Use services: in your workflow .yml file to launch your containers, which must be public Docker images. Containers based on a private image or a custom Dockerfile are not supported by GitHub Actions yet.
So instead of "docker-compose up" to bootstrap Postgres + my API for integration testing, I had to:
1. create Postgres as a service container in my GitHub workflow .yml, and
2. change my test command in package.json to first start the API as a background process (because I can't create my own Docker image for it 🙄) and then invoke my test framework as the foreground process,
so: npm run start & npm run <test launch cmds>. This worked (a sketch of the resulting workflow is shown below).
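A minimal sketch of that workflow shape, assuming a Node project with a Postgres-backed API; the image tag, ports, credentials, and script names are illustrative, not taken from the question.

jobs:
  integration-test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:13            # public image, as required for service containers
        env:
          POSTGRES_PASSWORD: postgres # placeholder credential for CI only
        ports:
          - 5432:5432                 # published to the job's host network
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16
      - run: npm ci
      # Start the API in the background, then run the Jest integration tests in the foreground
      - run: npm run start & npm run test:integration
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost:5432/postgres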
There are several possibilities here.
Host Network
Since you are using docker compose, when you start the api container, publish the endpoint that the api is listening on to the host machine. You can achieve this by doing:
version: "3"
services:
  api:
    ...
    ports:
      - "3010:3010"
in your docker-compose.yml. This will publish the port, similar to doing docker run ... --publish localhost:3010:3010. See the reference here: https://docs.docker.com/compose/compose-file/#ports
Compose network
By default, docker-compose creates a network named after the project directory, e.g. backend-net_default. Containers created by this docker-compose.yml have access to other containers via this network. The hostname for reaching another container on the network is simply the name of its service. For example, your tests could access the api endpoint using the hostname api (assuming that is the name of your api service), e.g.:
http://api:3010
The one caveat here is that the tests must be launched in a container that is managed by that same docker-compose.yml, so that it can access the common backend-net_default network.
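As a sketch of that caveat, the tests could be run as another service in the same compose file; the service names, Dockerfile, and test command below are illustrative.

version: "3"
services:
  api:
    build: .
    ports:
      - "3010:3010"
  tests:
    build:
      context: .
      dockerfile: Dockerfile.test      # hypothetical image containing the test runner
    depends_on:
      - api
    environment:
      - API_BASE_URL=http://api:3010   # reach the api service by its service name
    command: npm run test:integration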

Passing environmental variables when deploying docker to remote host

I am having some trouble with my docker containers and environment variables.
Currently I have a docker-compose.yml with the following defined:
version: '2.1'
services:
  some-service:
    build:
      context: .
    image: image/replacedvalues
    ports:
      - 8080
    environment:
      - PROFILE=acc
      - ENVA
      - ENVB
      - TZ=Europe/Berlin
  some-service-acc:
    extends:
      service: some-service
    environment:
      - SERVICE_NAME=some-service-acc
Now, when I deploy this manually (via the SSH command line directly) on server A, it takes the environment variables from server A and puts them in my container, so I have the values of ENVA and ENVB from the host in my container. I use the following command (after building the image, of course): docker-compose up some-service-acc.
We are currently developing a better infrastructure and want to deploy services via Jenkins. Jenkins is up and running in a docker container on server B.
I can deploy the service via Jenkins (Job DSL, temporarily setting DOCKER_HOST="tcp://serverA:2375"). It then runs all docker (compose) commands on server A from the Jenkins container on server B. The service is up and running, except that it doesn't have values for ENVA and ENVB.
Jenkins runs the following with the Job-DSL groovy script:
withEnv(["DOCKER_HOST=tcp://serverA:2375"]) {
    sh "docker-compose pull some-service-acc"
    sh "docker-compose -p some-service-acc up -d some-service-acc"
}
I tried setting them in my Jenkins container and on server B itself, but neither worked. It only works when I deploy manually, directly on server A.
When I use docker inspect on the running container, I get the following output for the Env block:
"Env": [
    "PROFILE=acc",
    "affinity:container==JADFG09gtq340iggIN0jg53ij0gokngfs",
    "TZ=Europe/Berlin",
    "SERVICE_NAME=some-service-acc",
    "ENVA",
    "ENVB",
    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
    "LANG=C.UTF-8",
    "JAVA_VERSION=8",
    "JAVA_UPDATE=121",
    "JAVA_BUILD=13",
    "JAVA_PATH=e9e7ea248e2c4826b92b3f075a80e441",
    "JAVA_HOME=/usr/lib/jvm/default-jvm",
    "JAVA_OPTS="
]
Where do I need to set the environment variables so that they are passed to the container? I would prefer to store the variables on server A, but if that is not possible, can someone explain how it could be done? It is not an option to hardcode the values in the compose file or anywhere else in the source, as they contain sensitive data.
If I am asking this in the wrong place, please redirect me to where I should be.
Thanks!
You need to set the environment variables in the shell that is running the docker-compose command line. In Jenkins, that's best done inside your groovy script (Jenkins doesn't use the host environment within the build slave):
withEnv(["DOCKER_HOST=tcp://serverA:2375", "ENVA=hello", "ENVB=world"]) {
    sh "docker-compose pull some-service-acc"
    sh "docker-compose -p some-service-acc up -d some-service-acc"
}
Edit: from the comments, you also want to pass secrets.
To do that, there are plugins like Mask Passwords that allow you to pass variables without them showing up in the logs or job configuration. (I'm fairly certain a determined intruder could still get to the values, since Jenkins itself knows them and passes them to your script in clear text.)
The better option, IMO, is to use a secrets management tool with Docker. HashiCorp has their Vault product, which implements an encrypted K/V store where values are accessed with a time-limited token, and it offers the ability to generate new passwords per request with integration into the target system. I'd consider this the highest level of security when fully configured, but you can configure it in countless ways to suit your own needs. You'll need to write something to pull the secret and inject it into your container's environment (it's a REST protocol that you can add to your entrypoint).
The latest option from Docker itself is secrets management, which requires the new Swarm mode. You save your secret in the swarm and add it to the containers you want as a file, using an entry in the version 3 format of docker-compose.yml. If you already use Swarm mode and can start your containers with docker stack deploy instead of docker-compose, this is a fairly easy solution to implement.
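A sketch of that Swarm secrets variant, assuming the secret has already been created with docker secret create; the secret name is illustrative.

version: "3.1"
services:
  some-service-acc:
    image: image/replacedvalues
    secrets:
      - enva_secret            # mounted in the container at /run/secrets/enva_secret
secrets:
  enva_secret:
    external: true             # created beforehand, e.g.: echo "value" | docker secret create enva_secret -

The application then reads the value from the mounted file rather than from an environment variable.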

Docker port binding using gitlab-ci with gitlab-runner

I've noticed a problem when configuring my gitlab-ci and gitlab-runner.
I want to have a few separate application environments on one server, running on different external ports, but using the same Docker image.
What I want to achieve:
deploy-dev running Apache on port 80 in the container, but on external port 81
deploy-rc running Apache on port 80 in the container, but on external port 82
I've seen that docker run has the --publish argument that allows port binding, like 80:81, but unfortunately I can't find any option in gitlab-ci.yml or gitlab-runner's config.toml to set that argument.
Is there any way to achieve port binding in Docker containers run by gitlab-runner?
My gitlab-ci.yml:
before_script:
  # Install dependencies
  - bash ci/docker_install.sh > /dev/null

deploy:
  image: webdevops/php-apache:centos-7-php56
  stage: deploy
  only:
    - dockertest
  script:
    - composer self-update
    - export SYMFONY_ENV=dev
    - composer install
    - app/console doc:sch:up --force
    - app/console doc:fix:load -e=dev -n
    - app/console ass:install
    - app/console ass:dump -e=dev
  tags:
    - php
You're confusing two concepts: continuous integration tasks and Docker deployment.
What you have configured is a continuous integration task. The idea is that these perform build steps and complete. GitLab CI will record the results of each build step and present them back to you. These can be Docker jobs themselves, though they don't have to be.
What you want to do is deploy to Docker, that is, start a Docker container that contains your program. Going through this in full is probably beyond the scope of a Stack Overflow answer, but I'll try my best to outline what you need to do.
First, take the script you already have and turn it into a Dockerfile. Your Dockerfile will need to add all the code in your repo and then perform the composer/console scripts you list. Use docker build to turn this Dockerfile into a Docker image.
Next, you can (optionally) upload the Docker image to a registry.
The final step is to perform a docker run command that loads your image and runs it.
This sounds complicated, but it's really not. I have a CI pipeline that does this: one step runs docker build ... followed by docker push ..., and the next step runs docker run ... to spawn the new container.
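To tie this back to the original question about external ports 81 and 82, a hedged sketch of what such a pipeline could look like in .gitlab-ci.yml; the registry, image name, and container names are assumptions, not taken from the question.

stages:
  - build
  - deploy

build-image:
  stage: build
  script:
    - docker build -t registry.example.com/my-app:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/my-app:$CI_COMMIT_SHORT_SHA

deploy-dev:
  stage: deploy
  script:
    # Replace any previous container, then publish container port 80 on external port 81
    - docker rm -f my-app-dev || true
    - docker run -d --name my-app-dev --publish 81:80 registry.example.com/my-app:$CI_COMMIT_SHORT_SHA
  only:
    - dockertest

deploy-rc:
  stage: deploy
  script:
    # Same image, different external port (82) for the rc environment
    - docker rm -f my-app-rc || true
    - docker run -d --name my-app-rc --publish 82:80 registry.example.com/my-app:$CI_COMMIT_SHORT_SHA
  when: manual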
