Jenkins job unstable after adding Ansible playbook command - docker

I successfully connected my Jenkins server to my Ansible server/control node, and it built a stable, successful job after I made a change to my GitHub repo as well. But the job went unstable once I created an Ansible playbook to build and tag a Docker image and push it to Docker Hub: I added 'ansible-playbook regapp.yml' to the Ansible configuration in my Jenkins server, and the job now goes unstable with an error.
For context, this is the playbook I created:
- hosts: ansible
  tasks:
    - name: create docker image
      command: docker build -t regapp:v2 .
      args:
        chdir: /opt/docker
    - name: create tag to push image onto dockerhub
      command: docker tag regapp:v2 codestein/regapp:v2
    - name: push docker image
      command: docker push codestein/regapp:v2
Things I've tried:
Resetting Docker's local state: rm -rf /var/lib/docker followed by systemctl restart docker.
Changing webapp/target/webapp.war in the Jenkins config to '**/*.war' and back again.
Giving ansadmin:ansadmin ownership of the /opt/docker directory.
Not sure how else to fix the issue. I'd appreciate any suggestions.
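For reference, the same build/tag/push flow is often written with Ansible's community.docker.docker_image module instead of raw command tasks. This is only a sketch, assuming the collection is installed and the host is already logged in to Docker Hub; the host group, build path and repository name are taken from the playbook above:
- hosts: ansible
  tasks:
    - name: build regapp image and push it to Docker Hub
      community.docker.docker_image:
        # build context matches the chdir used in the original playbook
        build:
          path: /opt/docker
        name: codestein/regapp
        tag: v2
        source: build
        push: true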

Related

Start a docker container from within a self-hosted Bitbucket pipeline (dind)

I work on a Spring Boot based project and use a local machine as a test environment, deploying the project there as a Docker container.
I am in the middle of creating a Bitbucket pipeline that automates everything between building and deploying. For this pipeline I use a self-hosted runner (Docker) that runs on the same machine and Docker instance where I plan to deploy my project.
I managed to successfully build the project (mvn and docker) and load the Docker image into my GCP container registry.
My final deployment step (docker run xxx, see the yml script below) was also successful, but since the step itself runs in a container, the command was not executed against the top-level (host) Docker.
As far as I understand, the runner itself has access to the host Docker because docker.sock is mounted, but for each step another container is created which does not have access to docker.sock, right? So basically I need to know how to give a step access to this file, unless there is a better solution.
Here is the shortened pipeline definition:
image: maven:3.8.7-openjdk-18

definitions:
  services:
    docker:
      image: docker:dind

pipelines:
  default:
    # build only for feature branches or so
  branches:
    test:
      # build, docker and upload steps
      - step:
          name: Deploy
          deployment: test
          image: google/cloud-sdk:alpine
          runs-on:
            - 'self.hosted'
            - 'linux'
          caches:
            - docker
          script:
            - IMAGE_NAME=$BITBUCKET_REPO_SLUG
            - VERSION="${BITBUCKET_BUILD_NUMBER}"
            - DOCKER_IMAGE="${DOCKER_REGISTRY}/${IMAGE_NAME}:${VERSION}"
            # Authenticating with the service account key file
            - echo $GCLOUD_API_KEYFILE > ./gcloud-api-key.json
            - gcloud auth activate-service-account --key-file gcloud-api-key.json
            - gcloud config set project $GCLOUD_PROJECT
            # Login with docker and stop old container (if exists) and run new one
            - cat ./gcloud-api-key.json | docker login -u _json_key --password-stdin https://eu.gcr.io
            - docker ps -q --filter "name=${IMAGE_NAME}" | xargs -r docker stop
            - docker run -d -p 82:8080 -p 5005:5005 --name ${IMAGE_NAME} --rm ${DOCKER_IMAGE}
          services:
            - docker
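To see where those step-level docker commands actually end up, a small diagnostic step can help. This is only a sketch using standard Docker CLI commands; it just prints which daemon the step's Docker client is talking to, so it can be compared with the containers visible on the host:
- step:
    name: Inspect docker endpoint
    runs-on:
      - 'self.hosted'
      - 'linux'
    services:
      - docker
    script:
      # if DOCKER_HOST is set, the client is talking to that endpoint rather than the host daemon
      - echo "DOCKER_HOST=${DOCKER_HOST:-unset}"
      - docker info --format '{{.Name}} {{.ServerVersion}}'
      - docker ps --format '{{.Names}}'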

Azure pipeline: unable to start cross-built docker image

I have pushed a linux/arm64 image to a Docker registry and I am trying to use this image with the container tag in an Azure pipeline. The image seems to be pulled correctly, but it can't be executed because the virtual machine is ubuntu-20.04 (a different architecture, linux/amd64). On my local computer, to execute this Docker image I simply need to run the following command first. In the Azure job, however, I can't find a way to run an emulator before the container tries to execute.
docker run --privileged --rm tonistiigi/binfmt:qemu-v6.2.0 --install all
Here is the azure pipeline that I am trying to run:
resources:
  containers:
    - container: build_container_arm64
      image: my_arm_image
      endpoint: my_endpoint

jobs:
  - job:
    pool:
      vmImage: ubuntu-20.04
    timeoutInMinutes: 240
    container: build_container_arm64
    steps:
      - bash: |
          echo "Hello world"
I am wondering if there is a way that I could install or run an emulator before the container tries to execute.
Thanks
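One hedged workaround sketch (not from the original post): instead of mapping the image through the container: resource, run it explicitly with Docker inside an ordinary job, registering the qemu handlers first. The image name and the command executed inside it are assumptions, and pulling from a private registry would still need a docker login step:
jobs:
  - job: run_arm64_image
    pool:
      vmImage: ubuntu-20.04
    steps:
      - bash: |
          # register qemu binfmt handlers on the amd64 host
          docker run --privileged --rm tonistiigi/binfmt:qemu-v6.2.0 --install all
          # the arm64 image can now run under emulation
          docker run --rm my_arm_image echo "Hello world"
        displayName: Run cross-built image via qemu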

How can I deploy a dockerized Node app to a DigitalOcean server using Bitbucket Pipelines?

I've got a NodeJS project in a Bitbucket repo, and I am struggling to understand how to use Bitbucket Pipelines to get it from there onto my DigitalOcean server, where it can be served on the web.
So far I've got this
image: node:10.15.3

pipelines:
  default:
    - parallel:
        - step:
            name: Build
            caches:
              - node
            script:
              - npm run build
So now the app is built and should be saved as a single file, server.js, in a theoretical /dist directory.
How do I now dockerize this file and then upload it to my DigitalOcean server?
I can't find any examples for something like this.
I did find a Docker template in the Bitbucket Pipelines editor, but it only somewhat describes creating a Docker image, and not at all how to actually deploy it to a DigitalOcean server (or anywhere):
- step:
    name: Build and Test
    script:
      - IMAGE_NAME=$BITBUCKET_REPO_SLUG
      - docker build . --file Dockerfile --tag ${IMAGE_NAME}
      - docker save ${IMAGE_NAME} --output "${IMAGE_NAME}.tar"
    services:
      - docker
    caches:
      - docker
    artifacts:
      - "*.tar"
- step:
    name: Deploy to Production
    deployment: Production
    script:
      - echo ${DOCKERHUB_PASSWORD} | docker login --username "$DOCKERHUB_USERNAME" --password-stdin
      - IMAGE_NAME=$BITBUCKET_REPO_SLUG
      - docker load --input "${IMAGE_NAME}.tar"
      - VERSION="prod-0.1.${BITBUCKET_BUILD_NUMBER}"
      - IMAGE=${DOCKERHUB_NAMESPACE}/${IMAGE_NAME}
      - docker tag "${IMAGE_NAME}" "${IMAGE}:${VERSION}"
      - docker push "${IMAGE}:${VERSION}"
    services:
      - docker
You would have to SSH into your DigitalOcean VPS and then do a few steps there:
Pull the current code
Build the Docker image from your Dockerfile
Run the new container
An example could look like this:
Create some script like "deployment.sh" in your repository root:
cd <path_to_local_repo>
git pull origin master
docker container stop <container_name>
docker container rm <container_name>
docker image build -t <image_name> .
docker container run -itd --name <container_name> <image_name>
and then add the following into your pipeline:
# ...
- step:
    deployment: staging
    script:
      - cat ./deployment.sh | ssh <ssh_user>@<ssh_host>
You have to add your SSH key for your repository on your server, though. Check out the following link on how to do this: https://confluence.atlassian.com/display/BITTEMP/Use+SSH+keys+in+Bitbucket+Pipelines
Here is a similar question, but using PHP: Using BitBucket Pipelines to Deploy onto VPS via SSH Access
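For completeness, the docker image build line in deployment.sh assumes a Dockerfile in the repository root. For the single-file server.js build described in the question, a minimal sketch could look like this (base image tag, directory layout and port are assumptions):
FROM node:10.15.3
WORKDIR /app
# copy the bundled build output produced by npm run build
COPY dist/server.js ./server.js
EXPOSE 3000
CMD ["node", "server.js"]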

Is it possible to run a Docker Compose command before a job executes in GitLab CI?

I am new to GitLab CI; it seems GitLab CI is Docker everywhere.
I am trying to run a MariaDB container before running my tests. In GitHub Actions this is very easy: just a docker-compose up -d command before my mvn step.
When it came to GitLab CI, I tried to use the following job to achieve the same thing.
test:
  stage: test
  image: maven:3.6.3-openjdk-16
  services:
    - name: docker
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
      - .m2/repository
  script: |
    docker-compose up -d
    sleep 10
    mvn clean verify sonar:sonar
But this does not work; docker-compose is not found.
You can make use of docker:dind (Docker-in-Docker) and run the docker commands inside another Docker container.
However, docker-compose is not available there by default, so it is recommended to build a custom image on top of the DIND image, install docker-compose in it, and push it to the GitLab image registry so that it can be used across your jobs.
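A sketch of what such a custom image might look like (the base tag and the way docker-compose is installed are assumptions, since the answer does not show it):
# Dockerfile for a helper image that has docker-compose available
FROM docker:dind
RUN apk add --no-cache docker-compose
The image would then be pushed to the GitLab container registry and referenced from the jobs that need docker-compose.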

How to use the official docker image to be a service in GitLab CI?

Environment:
GitLab Community Edition 9.5.2
Description:
I use node:8.4.0 as my main image. It runs some Node.js programs in the other jobs, which I will leave out below.
Here is my .gitlab-ci.yml:
image: node:8.4.0

services:
  - docker:latest

stages:
  - docker_build

docker_build_job:
  stage: docker_build
  script:
    - sudo docker build -t my_name/repo_name .
    - sudo docker images
Problem:
I cannot use the docker command in the GitLab runner, and I get the message below:
Running with gitlab-ci-multi-runner 9.5.0 (413da38)
on ci server running on a VM of PEM5208 (5a0ceca0)
Using Docker executor with image node:8.4.0 ...
Starting service docker:latest ...
Pulling docker image docker:latest ...
Using docker image docker:latest ID=sha256:be47faef67c2e5950a540799e72189867b517010ad8ef98aa0181878d81b0064 for docker service...
Waiting for services to be up and running...
*** WARNING: Service runner-5a0ceca0-project-129-concurrent-0-docker-0 probably didn't start properly.
exit code 1
*********
Using docker image sha256:3f7a536cd71bb3049cc0aa12fb3e131a03a33efe2175ffbb95216d264500d1a1 for predefined container...
Pulling docker image node:8.4.0 ...
Using docker image node:8.4.0 ID=sha256:60bea5b8607945a43b53f5022088a73f2817174e11a3b20f78ea78a45f545d34 for build container...
Running on runner-5a0ceca0-project-129-concurrent-0 via ci...
Fetching changes...
Removing node_modules/
HEAD is now at 472e1e4 Change the version of docker image.
From https://here-is-my-domain/my_name/repo_name
472e1e4..df29530 master -> origin/master
Checking out 472e1e45 as master...
Skipping Git submodules setup
Downloading artifacts for build_installation_job (914)...
Downloading artifacts from coordinator... ok id=914 responseStatus=200 OK token=fMsaFRzG
$ docker build -t my_name/repo_name .
/bin/bash: line 48: docker: command not found
ERROR: Job failed: exit code 1
How should I modify the .gitlab-ci.yml file to make this work successfully?
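For reference, the pattern commonly documented for this setup (not taken from this thread; image tags are assumptions) runs the job in a Docker client image, attaches the dind service, points DOCKER_HOST at it, and drops sudo:
image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_HOST: tcp://docker:2375

stages:
  - docker_build

docker_build_job:
  stage: docker_build
  script:
    - docker build -t my_name/repo_name .
    - docker images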