Network between two cloud build builder containers - docker

I am trying to run my Go app's integration tests in Cloud Build. Part of this testing requires my app-under-test to write to and read from Redis. My approach was to start a custom-built container in build STEP-1 that runs redis-server in the foreground, using this command as the step args with a bash entrypoint:
docker run -d -e ENV_A=123 -e ENV_b=456 my-custom-redis-image
Using -d ensures it runs detached, allowing that build step to complete while leaving Redis up and listening for connections. Redis is configured to listen on 0.0.0.0:6379. Then, in say STEP-2, I want to compile my Go app's integration tests and run them as follows using the golang:1.16.4 builder container:
go test -tags=integration ./...
When that test runs, it will create a Redis client that wants to connect to the running Redis instance from STEP-1. However, I do not know what IP to configure in my client so it can reach the Redis instance running in STEP-1. I understand that there is a "cloudbuild" network that is used in Cloud Build for all the docker instances. My confusion is how to configure my client to talk to Redis in another running container. Is it even possible?
If not, how do you handle test dependencies like this, where tests need N external services to work? I have unit tests and they work fine because unit tests do not have any external dependencies. However, my integration tests need to not only use Redis, but also connect to GCS, Pub/Sub, and BQ to complete various test scenarios, and these tests are much more black-box-like compared to unit tests. The connections to those GCP services work fine from the integration tests because they are public Google APIs. I just need to figure out how to connect to my test Redis server running in Cloud Build. These tests need to communicate with the real services and not mocks or fakes, so that is not an option.
Appreciate any guidance on this.

Thanks to @JohnHanley for his pointer to the link. From that link, I came to understand what I needed to do to let my Go integration test, running in one Cloud Build step, access a Redis instance started in an earlier build step.
For anyone who is interested in what this might look like, below is my Cloud Build YAML (the two relevant build steps are start-redis and integration-test) as well as the Docker Compose file I am using. This demonstrates the general approach whereby you can spin up one or more containers that provide the services needed to support your integration tests.
Cloud Build YAML example file (cloudbuild.yaml):
substitutions:
  _BITBUCKET_CLONE_PRIVATE_KEY: BITBUCKET_CLONE_PRIVATE_KEY
  _RESOURCES_PROJECT: my-gcp-resources-project-id
  _GOMODCACHE: /workspace/go/pkg/mod
  _SSHKEYFILE: /root/.ssh/keyfile

steps:
  # Clone application repo from the mirrored source repository in GCP
  - id: clone-repo
    name: gcr.io/cloud-builders/gcloud
    args:
      [
        "source",
        "repos",
        "clone",
        "${REPO_NAME}",
        "--project",
        "${_RESOURCES_PROJECT}",
      ]

  # Check out the commit that caused this build to be triggered
  - id: checkout-commit
    name: gcr.io/cloud-builders/git
    entrypoint: /bin/bash
    dir: ${REPO_NAME}
    args:
      - -c
      - |
        git checkout ${COMMIT_SHA}
        git log -n1

  # We need to start an instance of Redis before running the integration tests because they
  # read from and write to Redis. We start it in detached mode (-d) so that redis-server comes
  # up in the container but does not block this build step. This allows the next build step to
  # run the integration tests against the already-running Redis instance. The compose YAML file
  # must define the cloudbuild network in its configuration. This ensures that Redis, running in
  # this container, exposes port 6379 and the hostname "redis" on that cloudbuild network. That
  # same network is made available inside each Cloud Build builder container that is started
  # in each build step. The next build step runs integration tests that expect to connect to
  # "redis:6379" on the cloudbuild network.
  - id: start-redis
    name: "docker/compose:1.19.0"
    args:
      [
        "-f",
        "build/docker-compose.yaml",
        "up",
        "-d",
      ]

  # To build the Go application, we need some special setup to allow Go to pull dependent
  # modules, which reside in our private Bitbucket repos, into the build, where authentication
  # via SSH keys is the only way we can authenticate without user interaction.
  # First we copy the private key, preloaded into Secret Manager, into the root ssh directory
  # where the SSH client will look for it. We then construct an SSH config file that points to
  # that identity file, forces SSH to ONLY authenticate with SSH keys, and does not prompt for
  # host authenticity, which would otherwise require user interaction.
  # Additionally, we must update the git config to automatically replace URLs that access
  # Bitbucket via HTTPS with ones that use SSH instead. This is an important and non-obvious
  # required setting: "go get" accesses repos via HTTPS only, and we have to leverage this git
  # feature to rewrite those outbound URLs to use SSH.
  # Finally, we point GOMODCACHE to a directory under /workspace and build and run the
  # integration tests. Note that we must explicitly declare the secret used by this build step
  # and use a double "$$" when referencing it. We set GOMODCACHE to a directory under the
  # workspace directory so that pulled modules are preserved across build steps and can be
  # reused by subsequent steps that compile and run unit tests as well as compile the
  # application binary after all tests pass. This reduces the build times for those subsequent
  # steps because they only need to pull modules not already in the GOMODCACHE directory.
  - id: integration-test
    name: golang:1.16.4
    entrypoint: /bin/bash
    dir: ${REPO_NAME}/integration_tests
    args:
      - -c
      - |
        mkdir -p /root/.ssh/ && \
        echo "$$BITBUCKET_CLONE_PRIVATE_KEY" > $_SSHKEYFILE && \
        chmod 600 $_SSHKEYFILE && \
        echo -e "Host bitbucket.org\n IdentitiesOnly=yes\n IdentityFile=$_SSHKEYFILE\n StrictHostKeyChecking=no" > /root/.ssh/config && \
        git config --global url."git@bitbucket.org:my-org-name".insteadof "https://bitbucket.org/my-org-name" && \
        go env -w GOMODCACHE=$_GOMODCACHE && \
        CGO_ENABLED=0 go test -count=1 -tags=integration
    secretEnv: ["BITBUCKET_CLONE_PRIVATE_KEY"]
    env:
      # Pass the current project in and tell the integration test framework that we are running
      # in Cloud Build so it can behave accordingly. For example, when I run the integration
      # tests from my workstation, QUASAR_DPLY_ENV=devel1, which is the name of my development
      # environment. The test logic picks up on that and attempts to connect to a
      # locally-running Redis container on localhost:6379 that I am responsible for starting
      # before testing. However, when running here in Cloud Build, the test framework expects to
      # connect to redis:6379 because, in Cloud Build, Redis is started by Docker Compose,
      # listens on the cloudbuild Docker network, and is reachable via the hostname "redis",
      # not localhost. (See the Go sketch after the compose file below.)
      - "DATADIRECTOR_INTTEST_GCP_PROJECT=$PROJECT_ID"
      - "QUASAR_DPLY_ENV=cloudbuild"

# We need access to the clear-text SSH private key during the Go application build, so we
# fetch it here from Secret Manager to make it available in the build steps above
availableSecrets:
  secretManager:
    - versionName: projects/${_RESOURCES_PROJECT}/secrets/${_BITBUCKET_CLONE_PRIVATE_KEY}/versions/1
      env: BITBUCKET_CLONE_PRIVATE_KEY

logsBucket: 'gs://${_RESOURCES_PROJECT}-cloudbuild-logs'
Docker compose YAML file
This is a copy of the docker-compose.yaml file used by the "start-redis" build step in the example cloudbuild.yaml above:
# build/docker-compose.yaml file
version: "3"
services:
  redis:
    image: redis:6.2.5-buster
    container_name: redis
    # We must specify this special network to ensure that the Redis service exposes its
    # listening port on the network that all the other builder containers use. Specifically,
    # the integration tests need to connect to this instance of Redis from a separate running
    # container that is part of the build step that follows the step that runs this compose file.
    network_mode: cloudbuild
    # Ensure that we expose the listening port so that other containers, connected to the same
    # cloudbuild network, will be able to reach Redis
    expose:
      - 6379
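For anyone adapting this, here is a minimal sketch of how the Go test code might pick the Redis address based on QUASAR_DPLY_ENV, as described in the comments of the integration-test step. The redisAddr helper and the go-redis client usage are my own illustration of the idea, not the actual test framework:

// +build integration

package integration

import (
    "context"
    "os"
    "testing"

    "github.com/go-redis/redis/v8"
)

// redisAddr is a hypothetical helper: in Cloud Build (QUASAR_DPLY_ENV=cloudbuild) it
// returns the compose service hostname; otherwise it falls back to a locally-running Redis.
func redisAddr() string {
    if os.Getenv("QUASAR_DPLY_ENV") == "cloudbuild" {
        return "redis:6379"
    }
    return "localhost:6379"
}

func TestRedisReachable(t *testing.T) {
    client := redis.NewClient(&redis.Options{Addr: redisAddr()})
    defer client.Close()

    // Fail fast with a clear message if the Redis started in the earlier step is unreachable.
    if err := client.Ping(context.Background()).Err(); err != nil {
        t.Fatalf("cannot reach Redis at %s: %v", redisAddr(), err)
    }
}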

Related

TeamCity run pytest in a docker-compose environment

I have a question regarding a build configuration in TeamCity.
We are developing a Python (Flask) REST API where a SQL database holds the data.
The Flask server and the PostgreSQL server each run in a docker container.
Our repository contains a docker-compose file which starts all necessary containers.
Now I want to set up a build configuration in TeamCity where the repository is pulled, the containers are built, the docker-compose file is brought up, and all test functions (pytest) in my flask-python application are run. I want to get the test report, and then the docker-compose down command should be run.
My first approach, using a command line build configuration step and issuing the commands, works, but I don't get the test reports. I'm not even getting the correct exit code (the tests fail, but the build configuration is marked as success).
Can you give me a hint on the best strategy for this task: building, testing, and deploying an application which is built out of multiple docker containers (i.e. a docker-compose file)?
Thanks
Jakob
I'm working with a similar configuration: a FastAPI application that uses the Firebase Emulator suite to run pytest cases against. Perhaps you will find these build steps suitable for your needs too. I get both test reports and coverage using all the built-in runners.
Reading the TeamCity On-Premises documentation, I found that a build step command running inside a docker container will pick up the TEAMCITY_DOCKER_NETWORK env var if a previous step ran docker-compose. This variable is then passed to the build step running in docker via the --network flag, allowing you to communicate with the services started in docker-compose.yml.
Three steps are required to get this working (please ignore the numbering in the screenshots; I also have other steps configured):
1. Using the Docker runner, build the container in which you will run pytest. Here I'm using %build.counter% to give it a unique tag.
2. Using the Docker Compose runner, bring up the other services that your tests rely on (the postgresql service in your case). I am using teamcity-services.yml here because docker-compose.yml is already used by my team for local development.
3. Using the Python runner, run pytest within the container that was built in step 1. I use the suggested teamcity-messages and coverage.py, which get installed with pip install inside the container before executing pytest. My container already has pytest installed; if you look through "Show advanced options" there's a checkbox that will let you "Autoinstall the tool on command run", but I haven't tried it out.
Contents of my teamcity-services.yml, exposing endpoints that my app uses when running pytest.
version: "3"
services:
firebase-emulator:
image: my-firebase-emulator:0.1.13
expose:
- "9099" # auth
- "8080" # firestore
- "9000" # realtime database
command: ./emulate.sh -o auth,firestore,database
A hypothetical app/tests/test_auth.py run by pytest, which connects to my auth endpoint on firebase-emulator:9099. Notice how I am using the service name defined in teamcity-services.yml and an exposed port.
def test_auth_connect(fb):
    auth = fb.auth.connect("firebase-emulator:9099")
    assert auth.connected
In my actual application, instead of hardcoding the hostname/port, I pass them as environment variables which can also be defined using TeamCity "Parameters" page in the build configuration.

How can I communicate with my services in my gitlab job?

I have the following gitlab job:
build-test-consumer-ui:
  stage: build
  services:
    - name: postgres:11.2
      alias: postgres
    - name: my-repo/special-hasura
      alias: graphql-engine
  image: node:13.8.0-alpine
  variables:
    FF_NETWORK_PER_BUILD: 1
  script:
    - wget -q http://graphql-engine/v1/version
The docker image my-repo/special-hasura looks more or less like this:
FROM hasura/graphql-engine:v1.3.3.cli-migrations
ENV HASURA_GRAPHQL_DATABASE_URL="postgres://postgres:@postgres:/postgres"
ENV HASURA_GRAPHQL_ENABLED_LOG_TYPES="startup, http-log, webhook-log, websocket-log, query-log"
COPY ./my-migrations /hasura-migrations
EXPOSE 8080
When I run my gitlab job, I see that my hasura instance initializes properly, i.e. it can connect to postgres without any problem (the connection URL HASURA_GRAPHQL_DATABASE_URL seems to be fine). However, I cannot access my hasura instance from my job's container, in the script section. The output of the command is
wget: bad address 'graphql-engine'
I suppose that the job's container is not located in the same network as the service containers. How can I communicate with the graphql-engine service from my job container? I am currently using gitlab-runner 13.2.4.
EDIT
Looking at the amount of answers to this question, I guess there is no easy way. Therefore I'll switch to docker-compose. Instead of using the services that I can theoretically define in my job, I'll use docker-compose in my job and that'll achieve exactly the same purpose.
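For anyone ending up in the same place, a rough sketch of what such a docker-compose based job could look like is below. The compose service names (postgres, graphql-engine, consumer-ui), the image tags, and the dind wiring are assumptions for illustration, not taken from the project above:

build-test-consumer-ui:
  stage: build
  image: docker/compose:1.27.4
  services:
    - docker:19.03-dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    # Bring up the dependencies defined in the repo's docker-compose.yml
    - docker-compose up -d postgres graphql-engine
    # Run the check from a container attached to the same compose network,
    # where the service name "graphql-engine" resolves (hasura listens on 8080)
    - docker-compose run --rm consumer-ui wget -q http://graphql-engine:8080/v1/version
    - docker-compose down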

How do I have multiple docker images available in one stage in GitLab CI

I have the following .gitlab-ci.yml
stages:
  - test
  - build
  - art

image: golang:1.9.2

variables:
  BIN_NAME: example
  ARTIFACTS_DIR: artifacts
  GO_PROJECT: example
  GOPATH: /go

before_script:
  - mkdir -p ${GOPATH}/src/${GO_PROJECT}
  - mkdir -p ${CI_PROJECT_DIR}/${ARTIFACTS_DIR}
  - go get -u github.com/golang/dep/cmd/dep
  - cp -r ${CI_PROJECT_DIR}/* ${GOPATH}/src/${GO_PROJECT}/
  - cd ${GOPATH}/src/${GO_PROJECT}

test:
  stage: test
  script:
    # Run all tests
    - go test -run ''

build:
  stage: build
  script:
    # Compile and name the binary as `hello`
    - go build -o hello
    - pwd
    - ls -l hello
    # Execute the binary
    - ./hello
    # Move to gitlab build directory
    - mv ./hello ${CI_PROJECT_DIR}
  artifacts:
    paths:
      - ./hello
The issue is that my program is dependent on both Go and MySQL...
I am aware I can have a different docker image for each stage, but my test stage needs both go test and MySQL.
What I have looked into:
I have learned how to create my own docker image using docker commit and also how to use a Dockerfile to build an image.
However, I have heard there are ways to link docker containers together using docker compose, and this seems like a better method...
I have no idea how to go about this in GitLab. I know I need a compose.yml file, but I'm not sure where to put it, what needs to go in it, and whether it creates an image that I then link to from my .gitlab-ci.yml file.
Perhaps this is overkill and there is a simpler way?
I understand your tests need a MySQL server in order to work and that you are using some kind of MySQL client or driver in your Go tests.
You can use a GitLab CI service, which will be made available during your test job. GitLab CI will run a MySQL container beside your Go container, and it will be reachable from the Go container via its name. For example:
test:
  stage: test
  services:
    - mysql:5.7
  variables:
    # Configure mysql environment variables (https://hub.docker.com/_/mysql/)
    MYSQL_DATABASE: mydb
    MYSQL_ROOT_PASSWORD: password
  script:
    # Run all tests
    - go test -run ''
This will start a MySQL container that is reachable from the Go container via the hostname mysql. Note that you'll need to define variables for MySQL startup as per the "Environment variables" section of the image documentation (such as the root password or the database to create).
You can also define the service globally (it will then be made available to every job in your pipeline) and use an alias so the MySQL server is reachable under another hostname.
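For example, a global variant might look like the snippet below; the alias db is my own choice for illustration, and every job can then reach the MySQL server at db:3306:

services:
  - name: mysql:5.7
    alias: db

variables:
  MYSQL_DATABASE: mydb
  MYSQL_ROOT_PASSWORD: password

test:
  stage: test
  script:
    # The tests now connect to MySQL via the hostname "db"
    - go test -run ''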

Passing environmental variables when deploying docker to remote host

I am having some trouble with my docker containers and environment variables.
Currently I have a docker-compose.yml with the following defined:
version: '2.1'
services:
  some-service:
    build:
      context: .
    image: image/replacedvalues
    ports:
      - 8080
    environment:
      - PROFILE=acc
      - ENVA
      - ENVB
      - TZ=Europe/Berlin

  some-service-acc:
    extends:
      service: some-service
    environment:
      - SERVICE_NAME=some-service-acc
Now when I deploy this manually (via the SSH command line directly) on Server A, it takes the environment variables from Server A and puts them in my container, so I have the values of ENVA and ENVB from the host in my container. I use the following command (after building the image, of course): docker-compose up some-service-acc.
We are currently developing a better infrastructure and want to deploy services via Jenkins. Jenkins is up and running in a docker container on Server B.
I can deploy the service via Jenkins (Job-DSL, setting DOCKER_HOST="tcp://serverA:2375" temporarily). It then runs all docker (compose) commands on Server A from the Jenkins container on Server B. The service comes up and runs, except that it doesn't have values for ENVA and ENVB.
Jenkins runs the following with the Job-DSL groovy script:
withEnv(["DOCKER_HOST=tcp://serverA:2375"]) {
    sh "docker-compose pull some-service-acc"
    sh "docker-compose -p some-service-acc up -d some-service-acc"
}
I tried setting them in my Jenkins container and on Server B itself, but neither worked. It only works when I deploy manually, directly on Server A.
When I use docker inspect on the running container, I get the following output for the env block:
"Env": [
"PROFILE=acc",
"affinity:container==JADFG09gtq340iggIN0jg53ij0gokngfs",
"TZ=Europe/Berlin",
"SERVICE_NAME=some-service-acc",
"ENVA",
"ENVB",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=C.UTF-8",
"JAVA_VERSION=8",
"JAVA_UPDATE=121",
"JAVA_BUILD=13",
"JAVA_PATH=e9e7ea248e2c4826b92b3f075a80e441",
"JAVA_HOME=/usr/lib/jvm/default-jvm",
"JAVA_OPTS="
]
Where do I need to set the environment variables so that they will be passed to the container? I prefer to store the variables on Server A, but if this is not possible, can someone explain to me how it could be done? It is not an option to hardcode the values in the compose file or anywhere else in the source, as they contain sensitive data.
If I am asking this in the wrong place, please redirect me to where I should be.
Thanks!
You need to set the environment variables in the shell that is running the docker-compose command line. In Jenkins, that's best done inside your groovy script (Jenkins doesn't use the host environment within the build slave):
withEnv(["DOCKER_HOST=tcp://serverA:2375", "ENVA=hello", "ENVB=world"]) {
    sh "docker-compose pull some-service-acc"
    sh "docker-compose -p some-service-acc up -d some-service-acc"
}
Edit: from the comments, you also want to pass secrets.
To do that, there are plugins like Mask Passwords that allow you to pass variables without them showing up in the logs or job configuration. (I'm fairly certain a determined intruder could still get to the values, since Jenkins itself knows them and passes them to your script in clear text.)
The better option IMO is to use a secrets management tool with docker. Hashicorp has their Vault product, which implements an encrypted K/V store where values are accessed with a time-limited token, and it offers the ability to generate new passwords per request with integration into the target system. I'd consider this the highest level of security when fully configured, but you can configure it countless ways to suit your own needs. You'll need to write something to pull the secret and inject it into your container's environment (it exposes a REST API that you can call from your entrypoint).
The latest option from Docker itself is secrets management that requires the new Swarm Mode. You save your secret in the swarm and add it to the containers you want as a file using an entry in the docker-compose.yml version 3 format. If you already use Swarm Mode and can start your containers with docker stack deploy instead of docker-compose, this is a fairly easy solution to implement.
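As an illustration only (the secret name enva and the stack name are placeholders), the Swarm flow looks roughly like this: create the secret once on a manager node, reference it in a version 3.1+ compose file, and deploy with docker stack deploy; the value then appears inside the container as the file /run/secrets/enva:

# printf 'sensitive-value' | docker secret create enva -
# docker stack deploy -c docker-compose.yml some-service
version: "3.1"
services:
  some-service-acc:
    image: image/replacedvalues
    environment:
      - PROFILE=acc
    secrets:
      - enva
secrets:
  enva:
    external: true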

Docker port binding using gitlab-ci with gitlab-runner

I've noticed a problem when configuring my gitlab-ci and gitlab-runner.
I want to have a few separate application environments on one server, running on different external ports, but using the same docker image.
What I want to achieve
deploy-dev running Apache on port 80 in the container, but on external port 81
deploy-rc running Apache on port 80 in the container, but on external port 82
I've seen that docker run has a --publish argument that allows port binding, like 80:81, but unfortunately I can't find any option in gitlab-ci.yml or in gitlab-runner's config.toml to set that argument.
Is there any way to achieve port binding in Docker ran by gitlab-runner?
My gitlab-ci.yml:
before_script:
  # Install dependencies
  - bash ci/docker_install.sh > /dev/null

deploy:
  image: webdevops/php-apache:centos-7-php56
  stage: deploy
  only:
    - dockertest
  script:
    - composer self-update
    - export SYMFONY_ENV=dev
    - composer install
    - app/console doc:sch:up --force
    - app/console doc:fix:load -e=dev -n
    - app/console ass:install
    - app/console ass:dump -e=dev
  tags:
    - php
You're confusing two concepts: continuous integration tasks and docker deployment.
What you have configured is a continuous integration task. The idea is that these perform build steps and complete. GitLab CI will record the results of each build step and present them back to you. These can be docker jobs themselves, though they don't have to be.
What you want to do is deploy to docker. That is to say you want to start a docker job that contains your program. Going through this is probably beyond the scope of a stack overflow answer, but I'll try my best to outline what you need to do.
First, take the script you already have and turn it into a Dockerfile. Your Dockerfile will need to add all the code in your repo and then perform the composer / console steps you list. Use docker build to turn this Dockerfile into a docker image.
Next, you can (optionally) upload the docker image to a repository.
The final step is to perform a docker run command that loads up your image and runs it.
This sounds complicated, but it's really not. I have a CI pipeline that does this. One step runs docker build ... followed by docker push ..., and the next step runs docker run ... to spawn the new container.
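To make that concrete for the original question, a deploy job along these lines would give each environment its own external port. The image name, the registry variables, and the assumption that the runner can reach a Docker daemon are placeholders for illustration, not part of the answer above:

deploy-dev:
  stage: deploy
  only:
    - dockertest
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    # Apache still listens on 80 inside the container; 81 is the external port for dev
    - docker run -d --name myapp-dev -p 81:80 $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA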
