Hi, I'm looking to move our Jenkins pipeline build to Azure Pipelines to build our application.
In Jenkins we are using a Groovy script and building our application inside a local Docker image.
In the Groovy script we are using this:
withDockerContainer(args: '-v /home/jenkins:/home/jenkins', image: dockerImage)
This is from the Jenkins documentation (https://www.jenkins.io/doc/pipeline/steps/docker-workflow/).
Is there any way to do the same thing in Azure? I would like to be able to run a specific task inside a specific local Docker image.
Thanks
You can use container jobs for that:
trigger: none
pr: none

pool:
  vmImage: 'ubuntu-18.04'

jobs:
- job: u18
  steps:
  - bash: |
      cat /etc/issue

- job: u20
  container: ubuntu:20.04
  steps:
  - bash: |
      cat /etc/issue
Is there any way to do the same thing in Azure? I would like to be able to run a specific task inside a specific local Docker image.
The answer is yes.
If you want to run a task in a local Docker image, you need to create a private (self-hosted) agent on the machine where your local Docker image exists.
Then you could use the following YAML to invoke the local Docker image:
pool:
  name: YourPrivateAgent

resources:
  containers:
  - container: pycontainer
    image: YourImage

steps:
- task: AnotherTask@1
  target: pycontainer
You can check the Step target documentation for more details.
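For illustration, here is a minimal hedged sketch with a script step; the image name and the commands are hypothetical, and target tells Azure Pipelines to run that individual step inside the named container:
pool:
  name: YourPrivateAgent

resources:
  containers:
  - container: pycontainer
    image: python:3.11   # hypothetical; any image available on the agent works

steps:
- script: python --version   # runs inside the 'pycontainer' container
  target: pycontainer
- script: echo "This step runs directly on the agent host"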
I have pushed a linux/arm64 image to a Docker registry and I am trying to use this image with the container tag in an Azure pipeline. The image seems to be pulled correctly, but it can't be executed because the virtual machine is ubuntu-20.04, which is not the same architecture (it is linux/amd64). When I want to execute this Docker image on my local computer, I simply need to run the following command first. However, I can't seem to be able to run an emulator before the container tries to execute in the Azure job.
docker run --privileged --rm tonistiigi/binfmt:qemu-v6.2.0 --install all
Here is the azure pipeline that I am trying to run:
resources:
  containers:
  - container: build_container_arm64
    image: my_arm_image
    endpoint: my_endpoint

jobs:
- job:
  pool:
    vmImage: ubuntu-20.04
  timeoutInMinutes: 240
  container: build_container_arm64
  steps:
  - bash: |
      echo "Hello world"
I am wondering if there is a way that I could install or run an emulator before the container tries to execute.
Thanks
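One possible workaround, sketched here under assumptions rather than taken from an accepted answer: skip the container job and run the image manually with docker run, so the QEMU handlers can be registered on the host first. The image name comes from the question above; the uname command is just a placeholder:
jobs:
- job: arm64_under_emulation
  pool:
    vmImage: ubuntu-20.04
  steps:
  - bash: docker run --privileged --rm tonistiigi/binfmt:qemu-v6.2.0 --install all
    displayName: Register QEMU binfmt handlers on the host
  - bash: docker run --rm my_arm_image uname -m   # placeholder command; prints aarch64 under emulation
    displayName: Run the arm64 image under emulation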
I am using the config.yml file below (.circleci/config.yml) to run a CircleCI job for a GitHub repo and to build and push a Docker image:
orbs:
  docker: circleci/docker#1.5.0
version: 2.1
executors:
  docker-publisher:
    environment:
      IMAGE_NAME: johndocker/docker-node-app
    docker: # Each job requires specifying an executor
      # (either docker, macos, or machine), see
      — image: circleci/golang:1.15.1
        auth:
          username: $DOCKERHUB_USERNAME
          password: $DOCKERHUB_PASSWORD
jobs:
  publishLatestToHub:
    executor: docker-publisher
    steps:
      — checkout
      — setup_remote_docker
      — run
          name: Publish Docker Image to Docker Hub
          command: |
            echo “$DOCKERHUB_PASSWORD” | docker login -u “$DOCKERHUB_USERNAME” — password-stdin
            docker build -t $IMAGE_NAME .
            docker push $IMAGE_NAME:latest
workflows:
  version: 2
  build-master:
    jobs:
      — publishLatestToHub
The config.yml tells CircleCI what to do with our app; for this demo we want it to build a Docker image.
In CircleCI, *workflows* are simply orchestrators that order how things should be done, *executors* define the environment jobs run in, and *jobs* define the basic steps and commands to run.
But it shows the error below in the CircleCI dashboard:
Unable to parse YAML, while scanning a simple key in 'string', line 21,
I also checked it with a YAML formatter, but couldn't resolve the issue. Please help.
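For comparison, a hedged sketch of the same structure with plain ASCII punctuation: the pasted config above uses em-dashes and curly quotes (likely copied from a rich-text editor), which YAML cannot parse, and orb versions are normally referenced with @ rather than #:
version: 2.1
orbs:
  docker: circleci/docker@1.5.0
executors:
  docker-publisher:
    environment:
      IMAGE_NAME: johndocker/docker-node-app
    docker:
      - image: circleci/golang:1.15.1
        auth:
          username: $DOCKERHUB_USERNAME
          password: $DOCKERHUB_PASSWORD
jobs:
  publishLatestToHub:
    executor: docker-publisher
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Publish Docker Image to Docker Hub
          command: |
            echo "$DOCKERHUB_PASSWORD" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
            docker build -t $IMAGE_NAME .
            docker push $IMAGE_NAME:latest
workflows:
  version: 2
  build-master:
    jobs:
      - publishLatestToHub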
I am new to GitLab CI; it seems GitLab CI is Docker everywhere.
I was trying to run MariaDB before running tests. In GitHub Actions it is very easy: just a docker-compose up -d command before my mvn command.
When it came to GitLab CI, I was trying to use the following job to achieve the same purpose.
test:
  stage: test
  image: maven:3.6.3-openjdk-16
  services:
    - name: docker
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
      - .m2/repository
  script: |
    docker-compose up -d
    sleep 10
    mvn clean verify sonar:sonar
But this does not work; docker-compose is not found.
You can make use of Docker-in-Docker (docker:dind) and run the Docker commands inside another Docker container.
But there is a limitation: docker-compose is not available in that image by default. It is recommended to build a custom image on top of DIND with docker-compose installed and push it to the GitLab image registry, so that it can be used across your jobs.
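A hedged sketch of what that can look like, assuming you have built and pushed a custom image (hypothetical name below) that layers the Docker CLI and docker-compose on top of the Maven image:
test:
  stage: test
  image: registry.gitlab.com/your-group/maven-with-compose:latest   # hypothetical custom image
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375   # point the Docker CLI at the dind service
    DOCKER_TLS_CERTDIR: ""           # disable TLS for the job-local daemon
  script:
    - docker-compose up -d
    - sleep 10
    - mvn clean verify sonar:sonar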
I am currently trying to build and deploy a dockerized Go project, pulled from a Git repo, using Concourse.
To give you some background about my current setup:
I have two AWS Lightsail instances set up, both of them running Concourse in a Docker container.
One of those instances serves the web node; the other acts as a worker node that connects to the web node.
My current pipeline looks like this:
resources:
  - name: zsu-wasserlabor-api-repo
    type: git
    webhook_token: TOP_SECRET
    source:
      uri: git@github.com:lennartschoch/zsu-wasserlabor-api
      branch: master
      private_key: TOP_SECRET

jobs:
  - name: build-api
    plan:
      - get: zsu-wasserlabor-api-repo
        trigger: true
      - task: build
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: alpine}
          inputs:
            - name: zsu-wasserlabor-api-repo
          run:
            path: sh
            args:
              - -c
              - |
                cd zsu-wasserlabor-api-repo
                docker-compose build
The problem is that docker-compose is not installed.
I feel like I am doing something fundamentally wrong. Could anyone give me a hint?
Best,
Lennart
The pipeline described above specifies that it should use the alpine image, which doesn't have docker-compose on it. Thus, you will need to find an image that has docker-compose installed on it, but even then, there are additional steps you will need to take to make it work in Concourse (see this link for more details).
Fortunately, someone has made an image available that takes care of the additional steps, with a sample pipeline that you can find here: https://github.com/meAmidos/dcind
That being said, if you are simply trying to build a Docker image, you can use the docker-image-resource instead and just specify the Dockerfile.
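A hedged sketch of that simpler approach, assuming the repo already defined above contains a Dockerfile at its root; the image repository name and credential variables are hypothetical:
resources:
- name: zsu-wasserlabor-api-image
  type: docker-image
  source:
    repository: yourdockerhubuser/zsu-wasserlabor-api   # hypothetical
    username: ((docker_username))
    password: ((docker_password))

jobs:
- name: build-api
  plan:
  - get: zsu-wasserlabor-api-repo
    trigger: true
  - put: zsu-wasserlabor-api-image
    params:
      build: zsu-wasserlabor-api-repo   # directory containing the Dockerfile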
I want to generate a Dockerfile in a GitLab CI script and build it, then use this newly generated image in later build jobs. How can I do this? I tried to use a global before_script, but it already starts in the default container; I need to do this outside of any container.
before_script runs before every job, so it's not what you want. But you can have a first job do the image build and take advantage of the fact that each job can use a different Docker image. Building the image is covered in the manual.
Option A (uhm... sort of OK)
Have 2 runners, one with a shell executor (tagged shell) and one with a Docker executor (tagged docker). You would then have a first stage with a job dedicated to building the docker image. It would use the shell runner.
image_build:
  stage: image_build
  script:
    - # create dockerfile
    - # run docker build
    - # push image to a registry
  tags:
    - shell
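A hedged sketch of what those placeholder steps might contain; the registry URL, image name, and credential variables are all hypothetical:
image_build:
  stage: image_build
  script:
    - echo "FROM alpine:3.19" > Dockerfile                            # create dockerfile
    - docker build -t registry.example.com/group/app:ci .             # run docker build
    - docker login -u "$REGISTRY_USER" -p "$REGISTRY_PASSWORD" registry.example.com
    - docker push registry.example.com/group/app:ci                   # push image to a registry
  tags:
    - shell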
The second job would then run using the runner with docker executor and use this created image:
job_1:
  stage: test
  image: [image you created]
  script:
    - # your tasks
  tags:
    - docker
The problem with this is that the runner would need to be part of the docker group, which has security implications.
Option B (better)
The second option does the same but uses only one runner with the Docker executor. The Docker image is built within a running container (the gitlab/dind:latest image), i.e. a "Docker in Docker" solution.
stages:
  - image_build
  - test

image_build:
  stage: image_build
  image: gitlab/dind:latest
  script:
    - # create dockerfile
    - # run docker build
    - # push image to a registry

job_1:
  stage: test
  image: [image you created]
  script:
    - # your tasks