Azure DevOps: pass BuildId to release pipeline - Docker

I have a Docker repository (Nexus), and after every build a new image is pushed to Nexus with a tag, as in the code below:
trigger:
- master

resources:
- repo: self

variables:
  tag: $(Build.BuildId)

stages:
- stage: Build
  displayName: Build image
  jobs:
  - job: Build
    displayName: Build
    pool:
      name: default
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: 'nexus'
        repository: 'My.api'
        command: 'buildAndPush'
        Dockerfile: '**/Dockerfile'
        tags: '$(tag)'
On the other hand, at the release step I have a docker-compose file. All I want is to pass the build variable Build.BuildId (or anything else) so I can point at the related Docker image version (by tag) in Nexus.
My compose file is:
version: '3.8'
services:
  my.api:
    container_name: my.api
    image: "${REPO_URL}/my.api:${Build_BuildId}"
    restart: always
    ports:
      - '5100:80'
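The release side then just needs to export those two variables before running Compose, since Compose substitutes ${REPO_URL} and ${Build_BuildId} from the shell environment. A minimal sketch for a Bash task in the release stage, assuming a hypothetical primary-artifact alias _myapp and that REPO_URL is your Nexus Docker registry host:

# Bash task in the release stage (a sketch; the artifact alias "_myapp" is hypothetical)
export REPO_URL="nexus.example.com:8082"                    # assumption: your Nexus registry host:port
export Build_BuildId="$(Release.Artifacts._myapp.BuildId)"  # ID of the build being deployed
docker-compose pull
docker-compose up -d

The $(Release.Artifacts.{alias}.BuildId) macro is expanded by the agent before the script runs, so the shell only ever sees the literal build ID.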

Related

How to rollback to older release in azure release pipeline which uses Docker-compose

I am building the image and pushing it to the Azure Container Registry (so far it's OK); the problem starts after this.
Let's say I build several images tagged with the build ID, such as api-image:23, api-image:24, api-image:25, and so on.
Here's the question: I'm running with tag 25 (also latest) on the production server, and I want to roll back to api-image:23 using the Azure release pipeline history. My docker-compose file has image: api-image with no tag, so it's going to pull the latest image.
How can I set those image tags dynamically in my compose file? As you know, Azure DevOps release pipelines keep a release history. Say I want to roll back to a previous release: how does my docker-compose file know which version I want to roll back to? If I leave the tag empty it's going to get the latest tag, but my previous release was built as api-image:23.
Also, this image is already in my Azure registry, so I don't need to rebuild the whole project, right? I should be able to use it without rebuilding the app?
PS: my hosts are Debian 11, on-premises.
version: '3.3'
services:
  reverse-proxy:
    image: xx.azurecr.io/nginx
    container_name: mars_proxy
    build:
      context: .
      dockerfile: reverse-proxy/Dockerfile
    ports:
      - 80:80
    restart: always
  slider:
    image: xx.azurecr.io/mars-slider
    container_name: slider
    build:
      context: .
      dockerfile: Mars.Slider/Presentation/Dockerfile
    ports:
      - "8081:5100"
    restart: always
My azure-pipelines.yml:
trigger:
- develop

steps:
- task: DockerCompose@0
  displayName: "Container registry login"
  inputs:
    containerregistrytype: "Azure Container Registry"
    # azureSubscription = Azure Resource Manager service connection name
    azureSubscription: "subname"
    azureContainerRegistry: '{"loginServer":"xx.azurecr.io", "id" : ""}'
    dockerComposeFile: '**/docker-compose.yml'
    additionalImageTags: $(Build.BuildId)
    action: 'Build services'
- task: DockerCompose@0
  inputs:
    containerregistrytype: "Azure Container Registry"
    azureSubscription: "subname"
    azureContainerRegistry: '{"loginServer":"xx.azurecr.io", "id" : ""}'
    dockerComposeFile: '**/docker-compose.yml'
    additionalImageTags: $(Build.BuildId)
    action: 'Push services'
Thanks.
You can use variable substitution to fill in environment-variable values in many places in a Compose file, including image:. So if you're able to provide the image tag as an environment variable:
version: '3.8'
services:
  reverse-proxy:
    image: xx.azurecr.io/nginx:${IMAGE_TAG:-latest} # <--
    ports:
      - 80:80
    restart: always
  slider:
    image: xx.azurecr.io/mars-slider:${IMAGE_TAG:-latest} # <--
    ports:
      - "8081:5100"
    restart: always
Then you can roll back just by providing the older value for the tag:
IMAGE_TAG=23 docker-compose up -d
You can similarly use this technique to upgrade to a known "current" version if other builds are ongoing, and to minimize the risk of the system having an incorrect "latest" version. If you're supplying the tag explicitly like this then you do not need to manually docker-compose pull the rebuilt images; Docker will fetch them automatically if they're not present.
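To confirm which tag actually ended up running after a rollback, Compose can report the resolved image references, for example:

IMAGE_TAG=23 docker-compose up -d   # roll back to build 23
docker-compose images               # lists the repository:tag each container is running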
You need to specify a tag number, for example $(build.buildNumber), when building/pushing the image:
# build_pipeline.yml
jobs:
- job: Job_1
  ...
  - task: DockerCompose@0
    displayName: Build services
    inputs:
      ...
      dockerComposeFile: docker-compose.yml
      action: Build services
      additionalImageTags: $(build.buildNumber)
      dockerComposeCommand: up
Now, make sure to use the same tag in your release pipeline for deployment.
If you want to deploy an old release, just list the previous releases, select one, and hit Deploy.
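On the release side, a sketch of a matching deployment task; the artifact alias _drop is hypothetical, and this assumes the release agent runs on the Docker host:

# release_pipeline.yml (sketch)
- task: DockerCompose@0
  displayName: Run services
  inputs:
    ...
    dockerComposeFile: docker-compose.yml
    action: Run services
    additionalImageTags: $(Release.Artifacts._drop.BuildNumber)

Because each release records which artifact (and thus which tag) it deployed, re-deploying an older release from the history re-runs Compose against that older image.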

How do I pass values to docker compose from a GitHub Actions workflow

How does one pass a value to the docker-compose file from an Actions workflow? In my GitHub workflow, I have a build step consisting of ...
- name: Build Compose Images
  env:
    IMAGE_TAG: ${{ steps.preamble.outputs.releasetag }}
  run: IMAGE_TAG=${{ env.IMAGE_TAG }} docker compose -f compose.yaml build
with the docker-compose file ...
version: "3"
services:
db:
build: MySQL
environment:
IMAGE_TAG: ${IMAGE_TAG}
image: "repo/image:${IMAGE_TAG}"
ports:
- '3306:3306'
In each case nothing seems to work unless I hard-code a value in an environment block, which is not ideal. Thanks.
Hmm, that actually works if I remove the environment key in the docker-compose file.
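For reference, a sketch of the combination that reportedly works: keep ${IMAGE_TAG} in image: but drop the environment: block. The step-level env: already exports the variable to the shell that runs docker compose, so the inline assignment in run: is redundant too:

- name: Build Compose Images
  env:
    IMAGE_TAG: ${{ steps.preamble.outputs.releasetag }}
  run: docker compose -f compose.yaml build

with the compose file reduced to:

version: "3"
services:
  db:
    build: MySQL
    image: "repo/image:${IMAGE_TAG}"
    ports:
      - '3306:3306'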

Docker in GitLab pipeline can't access compose file on host machine

I'm using the following .gitlab-ci.yml:
stages:
  - build

docker-build:
  # Use the official docker image.
  image:
    name: docker:latest
    entrypoint: [""]
  stage: build
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
  script:
    - docker-compose -f compose_testfile.yaml down
    ...(and so on)
But I get this error:
/builds/testaccount/testproject/compose_testfile.yaml: no such file or directory
The compose file is on the machine where the gitlab-runner is installed; how can I access this file from the .gitlab-ci.yml in docker-build?
You need to add the compose config file to the repository, just as you do with the GitLab pipeline config. The job runs against a fresh checkout of the repository (inside the docker:latest container here), not against the runner host's filesystem, so only files committed to the repo are visible to the script.
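A sketch of the fix, assuming compose_testfile.yaml is committed at the repository root; the runner clones the repo into the job's working directory, so the script can reference the file by its in-repo path:

docker-build:
  image:
    name: docker:latest
    entrypoint: [""]
  stage: build
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
  script:
    - ls compose_testfile.yaml                      # present thanks to the repo checkout
    - docker compose -f compose_testfile.yaml down  # note: docker:latest ships the "docker compose" v2 plugin; the standalone docker-compose binary would need installing separately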

CircleCI config: Missing property "docker" in VSCode

I have a CircleCI workflow; it has a defined executor and a number of jobs using that executor:
version: 2.1

executors:
  circleci-aws-build-agent:
    docker:
      - image: kagarlickij/circleci-aws-build-agent:latest
    working_directory: ~/project

jobs:
  checkout:
    executor: circleci-aws-build-agent
    steps:
      - checkout
      - persist_to_workspace:
          root: ~/
          paths:
            - project
  set_aws_config:
    executor: circleci-aws-build-agent
    steps:
      - attach_workspace:
          at: ~/
      - run:
          name: Set AWS credentials
          command: bash aws-configure.sh
It works as expected, but in VSCode I see errors. Any ideas how it could be fixed?
There's nothing wrong with your yml; the issue is with Schemastore, which VSCode uses.
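You can confirm that outside the editor with the CircleCI CLI; if validation passes, the squiggles are the schema's problem, not the config's:

circleci config validate .circleci/config.yml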
This is because you are missing the docker block which defines the default container image for the job. A valid block would be:
jobs:
  build:
    docker:
      - image: node:10
    steps:
      - checkout
If you have several jobs that use the same image, you can define a variable:
var_1: &job_defaults
  docker:
    - image: node:10

jobs:
  build:
    <<: *job_defaults
    steps:
      - checkout
  deploy:
    <<: *job_defaults
    steps:
      - checkout
Documentation: https://circleci.com/docs/2.0/configuration-reference/#docker--machine--macosexecutor

Gitlab-CI Deployment stage and task fails with wrong Rancher API Url and Key

I have a GitLab CI/CD setup that deploys a Spring Boot application to a DigitalOcean droplet using Rancher.
The task fails with a "wrong Rancher API URL and key" error message when in fact those API details are correct, judging from the fact that I have run the deployment manually using the rancher up command from the Rancher CLI.
.gitlab-ci.yml source
image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci

stages:
  - build
  - package
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar

docker-build:
  stage: package
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker build -t registry.gitlab.com/username/mta-hosting-optimizer .
    - docker push registry.gitlab.com/username/mta-hosting-optimizer

digitalocean-deploy:
  image: cdrx/rancher-gitlab-deploy
  stage: deploy
  script:
    - upgrade --no-ssl-verify --environment Default
docker-compose.yml
version: '2'
services:
  web:
    image: registry.gitlab.com/username/mta-hosting-optimizer:latest
    ports:
      - 8082:8080/tcp
  mta-hosting-optimizer-lb:
    image: rancher/lb-service-haproxy:v0.9.1
    ports:
      - 80:80/tcp
    labels:
      io.rancher.container.agent.role: environmentAdmin,agent
      io.rancher.container.agent_service.drain_provider: 'true'
      io.rancher.container.create_agent: 'true'
  web2:
    image: registry.gitlab.com/username/mta-hosting-optimizer:latest
    ports:
      - 8082:8080/tcp
rancher-compose.yml
version: '2'
services:
  web:
    scale: 1
    start_on_create: true
  mta-hosting-optimizer-lb:
    scale: 1
    start_on_create: true
    lb_config:
      certs: []
      port_rules:
        - path: ''
          priority: 1
          protocol: http
          service: web
          source_port: 80
          target_port: 8080
        - priority: 2
          protocol: http
          service: web2
          source_port: 80
          target_port: 8080
    health_check:
      response_timeout: 2000
      healthy_threshold: 2
      port: 42
      unhealthy_threshold: 3
      initializing_timeout: 60000
      interval: 2000
      reinitializing_timeout: 60000
  web2:
    scale: 1
    start_on_create: true
I eventually found the cause of the problem by doing a bit more research online. I discovered that the RANCHER_URL required was the base URL rather than the full URL shown in the Rancher UI. For example, I was initially using the full URL generated by the Rancher UI, which looked like this: http://XXX.XXX.XXX.XX:8080/v2-beta/projects/1a5.
The correct URL is http://XXX.XXX.XXX.XX:8080/.
I set RANCHER_URL as a secret environment variable in GitLab SaaS (Cloud/Online).
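For anyone wiring this up, a sketch of the relevant settings; the access/secret key variable names are the ones the cdrx/rancher-gitlab-deploy image documents, so double-check them against its README:

# GitLab CI/CD secret variables (Settings > CI/CD > Variables), a sketch:
#   RANCHER_URL        = http://XXX.XXX.XXX.XX:8080/   <-- base URL only, no /v2-beta/projects/... suffix
#   RANCHER_ACCESS_KEY = <Rancher API access key>
#   RANCHER_SECRET_KEY = <Rancher API secret key>
digitalocean-deploy:
  image: cdrx/rancher-gitlab-deploy
  stage: deploy
  script:
    - upgrade --no-ssl-verify --environment Default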
I appreciate everyone that tried to help.
Thank you very much.