How to run the Liquibase Docker image in a containerized Docker environment in Jenkins

I'm doing a POC with the Liquibase Docker image. I would like to run the Liquibase image in Docker from a Jenkins Kubernetes pod template, but unfortunately I am unable to make it work.
I have attached the Jenkinsfile and my observations below.
Jenkinsfile
def workspace_dir = "/home/jenkins/agent/workspace/${env.JOB_BASE_NAME}"
def project_name = "master-chart"
def isDeployerJob = (env.JOB_BASE_NAME).contains("deploy") ? "true" : "false"
// These variables come from the build parameters in the Jenkins job
def git_branch = git_branch
def release_version
if (isDeployerJob == "true") {
    // Extracting the release version from the branch
    def temp = git_branch.split("/")
    release_version = temp[temp.length - 1]
    switch (environment) {
        case "dev":
            hs_jdbc_url = "jdbc:postgresql://40.xx.xx.xx:5432/dbname"
            db_username = "username"
            db_password = "pwd"
            break
        default:
            break
    }
}
pipeline {
    agent {
        kubernetes {
            cloud 'eks-tools-13'
            yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: azcli-kubectl-helm
    image: internal.docker.cioxhealth.com/azcli-kubectl-helm
    command:
    - cat
    tty: true
  - name: docker
    image: docker
    command:
    - cat
    tty: true
    privileged: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
"""
        }
    }
    stages {
        stage('Install Database Scripts') {
            when {
                expression {
                    "${isDeployerJob}" == "true"
                }
            }
            steps {
                container('docker') {
                    sh """
                    docker run --rm --network="host" -v ${workspace_dir}/db:/liquibase/changelog liquibase/liquibase --url=${hs_jdbc_url} --changeLogFile=db.changelog-master.yaml --driver=org.postgresql.Driver --username=${db_username} --password=${db_password} --logLevel=info update
                    """
                }
            }
        }
    }
}
To verify the files, I got into the running containers.
Jenkins Master Node:
ls -ltr /home/jenkins/agent/workspace/master-chart-deploy/db
total 4
drwxr-xr-x 3 1000 1000 21 Nov 6 04:35 sql
drwxr-xr-x 3 1000 1000 21 Nov 6 04:35 rollback
drwxr-xr-x 4 1000 1000 35 Nov 6 04:35 migration
-rw-r--r-- 1 1000 1000 154 Nov 6 04:35 db-master-changelog.yaml
drwxr-xr-x 2 1000 1000 38 Nov 6 04:35 changelog
Docker container in pod master-chart-deploy-259-qxrn5-nqq7j-hhlb8:
ls -ltr /home/jenkins/agent/workspace/master-chart-deploy/db
total 4
drwxr-xr-x 3 1000 1000 21 Nov 6 04:35 sql
drwxr-xr-x 3 1000 1000 21 Nov 6 04:35 rollback
drwxr-xr-x 4 1000 1000 35 Nov 6 04:35 migration
-rw-r--r-- 1 1000 1000 154 Nov 6 04:35 db-master-changelog.yaml
drwxr-xr-x 2 1000 1000 38 Nov 6 04:35 changelog
Liquibase container:
docker run --rm '--network=host' -v /home/jenkins/agent/workspace/master-chart-deploy/db:/liquibase/changelog liquibase/liquibase -- ls -ltr /liquibase/changelog
total 0
The files are not available in the running Liquibase container, and because of this the following error occurs.
Error:
Starting Liquibase at 14:50:38 (version 4.1.1 #10 built at 2020-10-12 19:24+0000)
[2020-11-05 14:50:38] INFO [liquibase.lockservice] Successfully acquired change log lock
[2020-11-05 14:50:38] INFO [liquibase.lockservice] Successfully released change log lock
Unexpected error running Liquibase: db-master-changelog.yaml does not exist
For more information, please use the --logLevel flag
[2020-11-05 14:50:38] SEVERE [liquibase.integration] Unexpected error running Liquibase: db-master-changelog.yaml does not exist
liquibase.exception.ChangeLogParseException: db-master-changelog.yaml does not exist
at liquibase.parser.core.yaml.YamlChangeLogParser.parse(YamlChangeLogParser.java:27)
at liquibase.Liquibase.getDatabaseChangeLog(Liquibase.java:337)
at liquibase.Liquibase.lambda$update$1(Liquibase.java:229)
at liquibase.Scope.lambda$child$0(Scope.java:160)
at liquibase.Scope.child(Scope.java:169)
at liquibase.Scope.child(Scope.java:159)
at liquibase.Scope.child(Scope.java:138)
at liquibase.Liquibase.runInScope(Liquibase.java:2277)
at liquibase.Liquibase.update(Liquibase.java:215)
at liquibase.Liquibase.update(Liquibase.java:201)
at liquibase.integration.commandline.Main.doMigration(Main.java:1760)
at liquibase.integration.commandline.Main$1.lambda$run$0(Main.java:361)
at liquibase.Scope.lambda$child$0(Scope.java:160)
May I know what I did wrong here, and why the files are not available in the running Liquibase container?
Is this a file-permission problem caused by the Docker-in-Docker setup?
Is there any other way I can achieve this?
Thank you in advance for the help.

I think you are somehow messing up the Docker configuration. From the documentation it looks like Liquibase expects you to mount everything inside the /liquibase/changelog directory.
In your command, however, you are mapping your changelogs to /app/liquibase:
docker run --rm --network="host" -v ${workspace_dir}/db:/app/liquibase liquibase/liquibase --url=${hs_jdbc_url} --changeLogFile=db.changelog-master.yaml --classpath=/app/liquibase --driver=org.postgresql.Driver --username=${db_username} --password=${db_password} --logLevel=info update
so instead of that I'd use this:
docker run --rm --network="host" -v ${workspace_dir}/db:/liquibase/changelog liquibase/liquibase --url=${hs_jdbc_url} --changeLogFile=db.changelog-master.yaml --driver=org.postgresql.Driver --username=${db_username} --password=${db_password} --logLevel=info update
Note: I've removed --classpath=/app/liquibase. If you rely on it, e.g. because you've added an additional driver or something else, you should probably include it again, but read up on it first. I think the documentation is pretty good.
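For instance, if you do rely on an extra JDBC driver JAR, a minimal sketch (the JAR name is hypothetical; it assumes you copy the driver into your db directory next to the changelogs) would be:
docker run --rm --network="host" \
  -v ${workspace_dir}/db:/liquibase/changelog \
  liquibase/liquibase \
  --url=${hs_jdbc_url} \
  --changeLogFile=db.changelog-master.yaml \
  --classpath=/liquibase/changelog/postgresql.jar \
  --driver=org.postgresql.Driver \
  --username=${db_username} --password=${db_password} \
  --logLevel=info update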

You must specify the full path when you use docker run in a Jenkins pipeline:
--changeLogFile=/app/liquibase/db.changelog-master.yaml
Also define this in the Jenkins pipeline:
environment {
HOME = '.'
}
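In context, a minimal sketch of where that block goes (everything else abbreviated from the question's pipeline):
pipeline {
    agent {
        kubernetes { /* pod template as in the question */ }
    }
    // per this answer; a common motivation is giving tools a writable HOME
    // inside the agent container
    environment {
        HOME = '.'
    }
    stages { /* ... */ }
}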

The changelog file is the main entry point from which Liquibase picks up its configuration. If we do not define a changelog file path in Spring Boot, it uses db/changelog/db.changelog-master.yaml as the default path for the YAML format. As we will go with the XML format, we need to set spring.liquibase.change-log=classpath:/db/changelog/changelog-master.xml as the changelog file path in the application.properties file. You can set the logging level of the Liquibase logs via the logging.level.liquibase property. The other properties in the properties file below are for the H2 database configuration.
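A minimal sketch of such an application.properties (the Liquibase and logging keys are the ones named above; the H2 datasource values are illustrative placeholders):
# changelog path for the XML format
spring.liquibase.change-log=classpath:/db/changelog/changelog-master.xml
# log level for Liquibase output
logging.level.liquibase=INFO
# illustrative H2 configuration
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=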

Related

docker command not available in custom pipe for BitBucket Pipeline

I'm working on a build step that handles common deployment tasks in a Docker Swarm Mode cluster. As this is a common problem for us and for others, we've shared this build step as a BitBucket pipe: https://bitbucket.org/matchory/swarm-secret-pipe/
The pipe needs to use the docker command to work with a remote Docker installation. This doesn't work, however, because the docker executable cannot be found when the pipe runs.
The following holds true for our test repository pipeline:
The docker option is set to true:
options:
  docker: true
The docker service is enabled for the build step:
main:
  - step:
      services:
        - docker: true
Docker works fine in the repository pipeline itself, but not within the pipe.
The pipeline log shows the docker path being mounted into the pipe container:
docker container run \
--volume=/opt/atlassian/pipelines/agent/build:/opt/atlassian/pipelines/agent/build \
--volume=/opt/atlassian/pipelines/agent/ssh:/opt/atlassian/pipelines/agent/ssh:ro \
--volume=/usr/local/bin/docker:/usr/local/bin/docker:ro \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/matchory/swarm-secret-pipe:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/matchory/swarm-secret-pipe \
--workdir=$(pwd) \
--label=org.bitbucket.pipelines.system=true \
radiergummi/swarm-secret-pipe:1.3.7@sha256:baf05b25b38f2a59b044e07f4ad07065de90257a000137a0e1eb71cbe1a438e5
The pipe is pretty standard and uses a recent Alpine image; nothing special in that regard. The PATH is never overwritten. Now for the fun part: If I do ls /usr/local/bin/docker inside the pipe, it shows an empty directory:
ls /usr/local/bin
total 16K
drwxr-xr-x 1 root root 4.0K May 13 13:06 .
drwxr-xr-x 1 root root 4.0K Apr 4 16:06 ..
drwxr-xr-x 2 root root 4.0K Apr 29 09:30 docker
ls /usr/local/bin/docker
total 8K
drwxr-xr-x 2 root root 4.0K Apr 29 09:30 .
drwxr-xr-x 1 root root 4.0K May 13 13:06 ..
ls: /usr/local/bin/docker/docker: No such file or directory
As far as I understand pipelines and Docker, /usr/local/bin/docker should be the docker binary file. Instead, it appears to be an empty directory for some reason.
What is going on here?
I've also looked at other, official pipes. They don't do anything differently, but seem to be able to use the docker command just fine (e.g. the Azure pipe).
After talking to BitBucket support, I solved the issue. As it turns out, if the docker context is changed, any docker command is sent straight to the remote docker binary, which (on our servers) lives at a different path than in BitBucket Pipelines!
Because we changed the docker context before using the pipe, the docker instance mounted into the pipe still had the remote context set, but the pipe looked for the docker binary in another place, which is why the No such file or directory error was thrown.
TL;DR: Always restore the default docker host/context before passing control to a pipe, e.g.:
script:
  - export DEFAULT_DOCKER_HOST=$DOCKER_HOST
  - unset DOCKER_HOST
  - docker context create remote --docker "host=ssh://${DEPLOY_SSH_USER}@${DEPLOY_SSH_HOST}"
  - docker context use remote
  # do your thing
  - export DOCKER_HOST=$DEFAULT_DOCKER_HOST # <------ restore the default host
  - pipe: matchory/swarm-secret-pipe:1.3.16
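Alternatively (untested in this pipeline, but it should have the same effect), switch back to the default context before handing control to the pipe:
script:
  # ... create and use the remote context as above ...
  - docker context use default    # point docker back at the local daemon
  - pipe: matchory/swarm-secret-pipe:1.3.16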

Error Running TensorFlow Serving from Dockerfile

I am working on a container to run TensorFlow Serving.
Here is my Dockerfile:
FROM tensorflow/serving:latest
WORKDIR /
COPY models.config /models/models.config
COPY models/mnist /models/mnist
Here is my models.config for a simple mnist model:
model_config_list {
  config: {
    name: "mnist",
    base_path: "/models/mnist"
    model_platform: "tensorflow"
    model_version_policy {
      specific {
        versions: 1646266834
      }
    }
    version_labels {
      key: 'stable'
      value: 1646266834
    }
  }
}
The models directory is setup as follows:
$ ls -Rls models
total 0
0 drwxr-xr-x 3 david staff 96 Mar 2 16:21 mnist
models/mnist:
total 0
0 drwxr-xr-x 6 david staff 192 Mar 2 16:21 1646266834
models/mnist/1646266834:
total 304
0 drwxr-xr-x 2 david staff 64 Mar 2 16:21 assets
32 -rw-r--r-- 1 david staff 15873 Mar 2 16:20 keras_metadata.pb
272 -rw-r--r-- 1 david staff 138167 Mar 2 16:20 saved_model.pb
0 drwxr-xr-x 4 david staff 128 Mar 2 16:21 variables
models/mnist/1646266834/assets:
total 0
models/mnist/1646266834/variables:
total 1424
1416 -rw-r--r-- 1 david staff 722959 Mar 2 16:20 variables.data-00000-of-00001
8 -rw-r--r-- 1 david staff 2262 Mar 2 16:20 variables.index
The problem is that when I build and run my container, I receive an error.
$ docker build -t example.com/example-tf-serving:1.0 .
$ docker run -it -p 8500:8500 -p 8501:8501 --name example-tf-serving --rm example.com/example-tf-serving:1.0
The error is as follows (Not found: /models/model):
2022-03-03 00:48:06.242923: I tensorflow_serving/model_servers/server.cc:89] Building single TensorFlow model file config: model_name: model model_base_path: /models/model
2022-03-03 00:48:06.243215: I tensorflow_serving/model_servers/server_core.cc:465] Adding/updating models.
2022-03-03 00:48:06.243254: I tensorflow_serving/model_servers/server_core.cc:591] (Re-)adding model: model
2022-03-03 00:48:06.243899: E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:365] FileSystemStoragePathSource encountered a filesystem access error: Could not find base path /models/model for servable model with error Not found: /models/model not found
How do I fix my Dockerfile so that the above command will work?
The quick and easy way of mounting the model directory at run time will not work for me, so I cannot accept it as a solution:
docker run --name=the_name -p 9000:9000 -it -v "/path_to_the_model_in_computer:/path_to_model_in_docker" tensorflow/serving:1.15.0 --model_name=MODEL_NAME --port=9000
https://www.tensorflow.org/tfx/serving/docker
Optional environment variable MODEL_NAME (defaults to model)
Optional environment variable MODEL_BASE_PATH (defaults to /models)
You are using the default values of these env variables, so TensorFlow Serving is trying to find the model in /models/model. Your model lives at a different path in the container, so /models/model not found is correct.
A simple configuration of the MODEL_NAME env variable should solve the problem:
$ docker run -it -p 8500:8500 -p 8501:8501 \
--name example-tf-serving \
-e MODEL_NAME=mnist \
--rm example.com/example-tf-serving:1.0
For multiple models, see https://www.tensorflow.org/tfx/serving/serving_config#model_server_configuration:
The easiest way to serve a model is to provide the --model_name and --model_base_path flags (or setting the MODEL_NAME environment variable if using Docker). However, if you would like to serve multiple models, or configure options like polling frequency for new versions, you may do so by writing a Model Server config file.
You may provide this configuration file using the --model_config_file flag and instruct TensorFlow Serving to periodically poll for updated versions of this configuration file at the specified path by setting the --model_config_file_poll_wait_seconds flag.
See the Docker doc: https://www.tensorflow.org/tfx/serving/docker#passing_additional_arguments
You need to set CMD in the Dockerfile (so you don't need to specify it at run time, because the requirement is to use only the Dockerfile), e.g.:
FROM tensorflow/serving:latest
WORKDIR /
COPY models.config /models/models.config
COPY models/mnist /models/mnist
CMD ["--model_config_file=/models/models.config"]

Ansible hangs using become when executed by jenkins user

Ansible playbook via Jenkins job
For my CI/CD pipeline, I'm using Jenkins to execute an Ansible playbook using the Ansible plugin. Jenkins and ansible are running on the same GCP instance (Debian 10). Ansible is running the playbook against another GCP instance (Also Debian 10) via the private network. When I run the playbook manually using my default GCP user, everything works flawlessly. However, when Jenkins is trying to run the playbook or when I'm running the playbook manually as the jenkins user, playbook execution hangs whenever become is set to yes.
Ansible stdout
22m 19s
[WindBox_main@2] $ ansible-playbook ./ansible/playbook.yml -i ./ansible/hosts
PLAY [all] *********************************************************************
TASK [Gathering Facts] *********************************************************
ok: [10.156.0.2]
TASK [docker_stack : Ping host] ************************************************
ok: [10.156.0.2]
TASK [docker_stack : Ping host with become] ************************************
I have the Ansible become password stored in a vars file (I know, it should be in the vault), and I have verified that Ansible is able to access this variable. The private key Ansible uses to establish the SSH connection has the same owner and group as the vars file, and the SSH connection itself is not a problem.
Manually establishing an SSH connection with the jenkins user and private key works as expected, and so does the sudo command when logged in as the ansible-cicd user.
Initial research pointed me towards this post, which suggests that the process running on the remote host might be waiting for user input. That does not seem likely, as it does not happen when running as my default user. I assume that on the remote side everything runs as the ansible-cicd user, as specified in the hosts file.
I assume it is some sort of permission problem, but it does not look like it involves any of the files listed below. Is there something I am missing? Does Ansible's become require local sudo access?
Any help would be greatly appreciated.
Files
hosts
[dockermanager]
10.156.0.2 ansible_user=ansible-cicd ansible_ssh_private_key_file=/var/lib/jenkins/.ssh/id_rsa
[dockermanager:vars]
ansible_python_interpreter=/usr/bin/python3
ansible_connection=ssh
playbook.yml
---
- hosts: all
  vars:
    ansible_python_interpreter: /usr/bin/python3
    ansible_connection: local
  vars_files:
    - /etc/ansible/secrets.yml
  roles:
    - docker_stack
roles/docker_stack/main.yml (shortened for brevity, but these are the first 2 steps)
---
- name: Ping host
  ansible.builtin.ping:
- name: Ping host with become
  ansible.builtin.ping:
  become: yes
/var/lib/jenkins/.ssh/
total 20
drwx------ 2 jenkins jenkins 4096 Nov 23 22:12 .
drwxr-xr-x 26 jenkins jenkins 4096 Nov 25 15:27 ..
-rw------- 1 jenkins jenkins 1831 Nov 25 15:19 id_rsa
-rw-r--r-- 1 jenkins jenkins 405 Nov 25 15:19 id_rsa.pub
-rw-r--r-- 1 jenkins jenkins 666 Nov 25 15:13 known_hosts
/etc/ansible/
total 16
drwxr-xr-x 2 ansible ansible 4096 Nov 24 13:42 .
drwxr-xr-x 80 root root 4096 Nov 25 15:09 ..
-rw-r--r-- 1 jenkins jenkins 62 Nov 24 13:42 secrets.yml
As mentioned by @mdaniel in the comments, the ansible_connection variable was defined twice (in hosts and in the playbook) and ssh was overridden by local, meaning Ansible never actually connected to my remote machine.
Removing ansible_connection: local from the playbook vars solved the problem.
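For reference, the playbook vars after the fix, with the connection settings living only in the hosts file:
---
- hosts: all
  vars:
    ansible_python_interpreter: /usr/bin/python3
  vars_files:
    - /etc/ansible/secrets.yml
  roles:
    - docker_stack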

Permission issues in nexus3 docker container

When I start nexus3 in a docker container I get the following error messages.
$ docker run --rm sonatype/nexus3:3.8.0
Warning: Cannot open log file: ../sonatype-work/nexus3/log/jvm.log
Warning: Forcing option -XX:LogFile=/tmp/jvm.log
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file ../sonatype-work/nexus3/log/jvm.log due to Permission denied
Unable to update instance pid: Unable to create directory /nexus-data/instances
/nexus-data/log/karaf.log (Permission denied)
Unable to update instance pid: Unable to create directory /nexus-data/instances
It indicates that there is a file permission issue.
I am using Red Hat Enterprise Linux 7.5 as the host machine and the most recent Docker version.
On another machine (Ubuntu) it works fine.
The issue occurs in the persistent volume (/nexus-data). However, I do not mount a specific volume and let Docker use an anonymous one.
If I compare the volumes on both machines I can see the following permissions:
On Red Hat, where it is not working, the volume belongs to root.
$ docker run --rm sonatype/nexus3:3.8.0 ls -l /nexus-data
total 0
drwxr-xr-x. 2 root root 6 Mar 1 00:07 etc
drwxr-xr-x. 2 root root 6 Mar 1 00:07 log
drwxr-xr-x. 2 root root 6 Mar 1 00:07 tmp
On Ubuntu, where it is working, it belongs to nexus, which is also the default user in the container.
$ docker run --rm sonatype/nexus3:3.8.0 ls -l /nexus-data
total 12
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 etc
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 log
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 tmp
Changing the user with the -u option is not an option for me.
I could solve it by deleting all local docker images: docker image prune -a
Afterwards it downloaded the image again and it worked.
This is strange, because I had also compared the fingerprints of the images and they were identical.
An example of docker-compose for Nexus:
version: "3"
services:
  # Nexus
  nexus:
    image: sonatype/nexus3:3.39.0
    expose:
      - "8081"
      - "8082"
      - "8083"
    ports:
      # UI
      - "8081:8081"
      # repositories http
      - "8082:8082"
      - "8083:8083"
      # repositories https
      #- "8182:8182"
      #- "8183:8183"
    environment:
      - VIRTUAL_PORT=8081
    volumes:
      - "./nexus/data/nexus-data:/nexus-data"
Set up the volume:
mkdir -p ./nexus/data/nexus-data
sudo chown -R 200 nexus/ # 200 because it's the UID of the nexus user inside the container
Start Nexus
sudo docker-compose up -d
You should set correct rights on the folder where the persistent volume is located:
chmod -R u+rwx <folder of the /nexus-data volume>
Be careful: executed as above, this recursively grants the owning user read, write, and execute rights on everything in the volume. If you want to grant more restricted rights, you should modify the command.
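A more restricted variant, assuming the UID 200 noted in the other answer, is to hand the directory to the container's nexus user instead of widening the mode:
sudo chown -R 200 <folder of the /nexus-data volume>
# u+rwX adds execute only for directories and files that are already executable
chmod -R u+rwX <folder of the /nexus-data volume>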

Scheduling cron inside docker container

I was trying to schedule a cron job inside a Docker-based Logstash application.
The cron job is as follows:
30 10 * * * root logrotate -f /etc/logrotate.d/logstash
The cron job is not getting executed inside the container, but when I execute the command manually it works fine.
# logrotate -f /etc/logrotate.d/logstash
# ls -l /usr/share/logstash/logs/
total 36
-rw-r--r-- 1 logstash logstash 17 Jan 2 10:16 logstash.log
-rw-r--r-- 1 logstash logstash 10701 Jan 2 10:16 logstash.log.1
This might be a duplicate of Cronjobs in Docker container how get them running?
It basically says that you need to make sure that
/etc/init.d/cron start
is running.
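One way to ensure that is a small entrypoint wrapper, sketched here under the assumption that cron is installed in the image and the container starts as root:
#!/bin/sh
# entrypoint-wrapper.sh: start the cron daemon, then hand control back to
# the image's original command (e.g. logstash)
/etc/init.d/cron start
exec "$@"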
