Trouble mounting volume in docker within Jenkins pipeline - docker

I'm running flyway within my Jenkins pipeline. The docker image works and flyway runs fine. I can call flyway baseline to initialize the schema and that's about as far as I can get.
I'm attempting to mount the directory "Database/migrations" in the docker image using image.withRun('-v /Database/migrations:/migrations'... as listed in the segment below, but I'm not having any luck.
// git clone
stage("Checkout") {
    checkout scm
}
// db migration
stage('Apply DB changes') {
    sh "ls Database/migrations"
    def flyway = docker.image('flyway/flyway')
    flyway.withRun('-v /Database/migrations:/migrations',
        '-url=jdbc:mysql://****:3306/**** -user=**** -password=**** -X -locations="filesystem:/migrations" migrate') { c ->
        sh "docker exec ${c.id} ls flyway"
        sh "docker logs --follow ${c.id}"
    }
}
Below is the debug output from Jenkins for that stage (cleaned up for simplicity); notice that nothing is listed under /migrations.
[Pipeline] { (Apply DB changes)
[Pipeline] sh
+ ls Database/migrations
V2__create_temp_table.sql
[Pipeline] isUnix
[Pipeline] sh
+ docker run -d -v /Database/migrations:/migrations flyway/flyway -url=jdbc:mysql://****:3306/**** -user=**** '-password=****' -X -locations=filesystem:/migrations migrate
[Pipeline] sh
+ docker exec 12461436e4cb1150a20d8fca13ef7691d66528a11864ab17600bb994a1248675 ls /migrations
[Pipeline] sh
+ docker logs --follow 12461436e4cb1150a20d8fca13ef7691d66528a11864ab17600bb994a1248675
DEBUG: Loading config file: /flyway/conf/flyway.conf
DEBUG: Unable to load config file: /flyway/flyway.conf
DEBUG: Unable to load config file: /flyway/flyway.conf
DEBUG: Using configuration:
DEBUG: flyway.locations -> filesystem:/migrations
Flyway Community Edition 7.5.3 by Redgate
DEBUG: Scanning for filesystem resources at '/migrations'
DEBUG: Scanning for resources in path: /migrations (/migrations)
DEBUG: Driver : MySQL Connector/J mysql-connector-java-8.0.20 (Revision: afc0a13cd3c5a0bf57eaa809ee0ee6df1fd5ac9b)
DEBUG: Validating migrations ...
Successfully validated 1 migration (execution time 00:00.033s)
Current version of schema `****`: 1
Schema `****` is up to date. No migration necessary.
Any and all advice is greatly appreciated! Thanks in advance!

Database/migrations (relative to the workspace) is different from /Database/migrations (an absolute path on the Docker host).
My $WORKSPACE variable actually points to /var/lib/jenkins/workspace/..., so I needed to update the mount path to $WORKSPACE/Database/migrations:/migrations 🤦🏻‍♂️
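In pipeline terms, the fix is just to build the mount source from the workspace. A sketch (same masked JDBC settings as in the question):

```groovy
// mount the checked-out migrations from the Jenkins workspace, not from the host root
flyway.withRun("-v ${env.WORKSPACE}/Database/migrations:/migrations",
    '-url=jdbc:mysql://****:3306/**** -user=**** -password=**** -X -locations="filesystem:/migrations" migrate') { c ->
    sh "docker logs --follow ${c.id}"
}
```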

Related

Gitlab CI job works fine but always crashes with exit code 1

I'm trying to lint Dockerfiles using hadolint in GitLab CI with this snippet from my .gitlab-ci.yml file:
lint-dockerfile:
  image: hadolint/hadolint:latest-debian
  stage: verify
  script:
    - mkdir -p reports
    - hadolint -f gitlab_codeclimate Dockerfile > reports/hadolint-$(md5sum Dockerfile | cut -d" " -f1).json
  artifacts:
    name: "$CI_JOB_NAME artifacts from $CI_PROJECT_NAME on $CI_COMMIT_REF_SLUG"
    expire_in: 1 day
    when: always
    reports:
      codequality:
        - "reports/*"
    paths:
      - "reports/*"
This used to work perfectly fine but one week ago (without any change on my part) my pipeline started to crash all the time with ERROR: Job failed: exit code 1.
Full log output from job:
Running with gitlab-runner 14.0.0-rc1 (19d2d239)
on docker-auto-scale 72989761
feature flags: FF_SKIP_DOCKER_MACHINE_PROVISION_ON_CREATION_FAILURE:true
Resolving secrets 00:00
Preparing the "docker+machine" executor 00:14
Using Docker executor with image hadolint/hadolint:latest-debian ...
Pulling docker image hadolint/hadolint:latest-debian ...
Using docker image sha256:7caf5ee484b575ecd32219eb6f2a7a114180c41f4d8671c1f8e8d579b53d9f18 for hadolint/hadolint:latest-debian with digest hadolint/hadolint@sha256:2c06786c0d389715dae465c9556582ed6b1c38e1312b9a6926e7916dc4a9c89e ...
Preparing environment 00:01
Running on runner-72989761-project-26715289-concurrent-0 via runner-72989761-srm-1624273099-5f23871c...
Getting source from Git repository 00:02
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/sommerfeld.sebastian/docker-vagrant/.git/
Created fresh repository.
Checking out f664890e as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:01
Using docker image sha256:7caf5ee484b575ecd32219eb6f2a7a114180c41f4d8671c1f8e8d579b53d9f18 for hadolint/hadolint:latest-debian with digest hadolint/hadolint@sha256:2c06786c0d389715dae465c9556582ed6b1c38e1312b9a6926e7916dc4a9c89e ...
$ mkdir -p reports
$ hadolint -f gitlab_codeclimate Dockerfile > reports/hadolint-$(md5sum Dockerfile | cut -d" " -f1).json
Uploading artifacts for failed job 00:03
Uploading artifacts...
reports/*: found 1 matching files and directories
Uploading artifacts as "archive" to coordinator... ok id=1363188460 responseStatus=201 Created token=vNM5xQ1Z
Uploading artifacts...
reports/*: found 1 matching files and directories
Uploading artifacts as "codequality" to coordinator... ok id=1363188460 responseStatus=201 Created token=vNM5xQ1Z
Cleaning up file based variables 00:01
ERROR: Job failed: exit code 1
I have no idea why my build breaks all of a sudden. I'm using image: docker:stable as the image for my whole .gitlab-ci.yml file.
Anyone got an idea?
To conclude this question: the issue was an unexpected change in behavior, probably caused by an update of the hadolint image used here.
The job was in fact failing because the linter decided to do so. For anyone wanting the job to succeed anyway here is a little trick:
hadolint -f gitlab_codeclimate Dockerfile > reports/hadolint-$(md5sum Dockerfile | cut -d" " -f1).json || true
The || true forces the command's exit status to be zero (success) no matter what hadolint returns.
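The effect is easy to verify in a plain shell (a minimal illustration, independent of hadolint):

```shell
# a failing command normally yields a non-zero exit status...
false
echo "without || true: $?"

# ...but appending || true masks the failure, so the job step is treated as successful
false || true
echo "with || true: $?"
```

This prints 1 for the bare command and 0 for the masked one, which is exactly why the CI job stops failing.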
Another option, as @Sebastian Sommerfeld pointed out, is to use allow_failure: true, which essentially allows the script to fail and marks that in the pipeline overview. The only drawback of this approach is that script execution is interrupted at the point of failure, so no further commands are executed.
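For reference, the allow_failure variant of the job could look like this (a sketch based on the job above, with the report filename simplified):

```yaml
lint-dockerfile:
  image: hadolint/hadolint:latest-debian
  stage: verify
  allow_failure: true   # a lint failure marks the job with a warning instead of failing the pipeline
  script:
    - mkdir -p reports
    - hadolint -f gitlab_codeclimate Dockerfile > reports/hadolint.json
```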

How to Access the Application after the kubernetes deployment

I'm new to Kubernetes. I'm trying to deploy an Angular application using Docker + Kubernetes; here is the Jenkins script:
stage('Deploy') {
    container('kubectl') {
        withCredentials([kubeconfigFile(credentialsId: 'KUBERNETES_CLUSTER_CONFIG', variable: 'KUBECONFIG')]) {
            def kubectl
            kubectl = "kubectl --kubeconfig=${KUBECONFIG} --context=demo"
            echo 'deployment to PRERELEASE!'
            sh "kubectl config get-contexts"
            sh "kubectl -n demo get pods"
            sh "${kubectl} apply -f ./environment/pre-release -n=pre-release"
        }
    }
}
}
Please find the Jenkins output below:
/home/jenkins/agent/workspace/DevOps-CI_future-master-fix
[Pipeline] stage
[Pipeline] { (Deploy)
[Pipeline] container
[Pipeline] {
[Pipeline] withCredentials
Masking supported pattern matches of $KUBECONFIG
[Pipeline] {
[Pipeline] echo
deploy to deployment!!
[Pipeline] echo
deploy to PRERELEASE!
[Pipeline] sh
+ kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         demo                          kubernetes   kubernetes-admin   demo
          kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
[Pipeline] sh
+ kubectl -n demo get pods
NAME                                                      READY   STATUS    RESTARTS   AGE
worker-f99adee3-dedd-46ca-bc0d-6b24391e5865-qkd47-mwl3v   5/5     Running   0          26s
[Pipeline] sh
+ kubectl '--kubeconfig=****' '--context=demo' apply -f ./environment/pre-release '-n=pre-release'
deployment.apps/frontend-deploy unchanged
service/frontend unchanged
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS
Now the question is: after the deployment I am not able to see the pods or the deployment on the master machine using the commands below. Can someone please help me access the application after the successful deployment?
kubectl get pods
kubectl get services
kubectl get deployments
You're setting the namespace to pre-release when running "${kubectl} apply -f ./environment/pre-release -n=pre-release".
To get pods in this namespace, use: kubectl get pods -n pre-release.
Namespaces are a way to separate different virtual clusters inside your single physical Kubernetes cluster. See https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ for more detail.
You are creating the resources in a namespace called pre-release, using the -n option, when you run the following command:
kubectl '--kubeconfig=****' '--context=demo' apply -f ./environment/pre-release '-n=pre-release'
deployment.apps/frontend-deploy unchanged
You need to list the resources in the same namespace:
kubectl get pods -n pre-release
kubectl get services -n pre-release
kubectl get deployments -n pre-release
By default kubectl will do the requested operation in default namespace. If you want to set your current namespace to pre-release so that you need not append -n pre-release with every kubectl command, you can run the following command:
kubectl config set-context --current --namespace=pre-release
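Alternatively, the namespace can be pinned in the manifest itself so the resources land in pre-release regardless of command-line flags (a sketch; only the metadata block is the point here, the rest of the spec stays as in ./environment/pre-release):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deploy   # name taken from the apply output above
  namespace: pre-release  # resources are created in pre-release even without -n
spec:
  # ... deployment spec unchanged ...
```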

Can't run mysql command on docker in jenkins pipeline

I have this docker-compose.yml file
version: '3'
services:
  # MySQL
  app-name-ci-mysql-service:
    image: mysql
    container_name: app-name-ci-mysql-container
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    expose:
      - 3306
    networks:
      - app-name-ci-mysql-network
    volumes:
      - /var/www/address.co/ci/mysql/my.cnf:/my.cnf
  # PHP Service
  app-name-ci-php-service:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: app-name-ci-php-container
    working_dir: /var/www/project
    volumes:
      - /var/www/address.co/ci/public/.env.main:/var/www/project/.env.main
      - /var/www/address.co/ci/public/.env.testing:/var/www/project/.env.testing
    networks:
      - app-name-ci-network
      - app-name-ci-mysql-network
  # Nginx Service
  app-name-ci-nginx-service:
    image: nginx:latest
    container_name: app-name-ci-nginx-container
    expose:
      - 80
      - 443
    environment:
      VIRTUAL_HOST: address.co
      LETSENCRYPT_HOST: address.co
      LETSENCRYPT_EMAIL: admin@address.lt
    networks:
      - app-name-ci-network
      - nginx-proxy
    volumes:
      - ./nginx/conf.d/:/etc/nginx/conf.d/
# Docker Networks
networks:
  app-name-ci-network:
    driver: bridge
  app-name-ci-mysql-network:
    driver: bridge
  nginx-proxy:
    external: true
When I run docker-compose up -d, I can use this command to create a MySQL database:
docker exec app-name-ci-mysql-container mysql --defaults-extra-file=/my.cnf -e "create database reseraco_ci_testing"
where my.cnf is a file with my database credentials, and everything works fine. But when I try to move everything to Jenkins, things get weird.
This is my Jenkinsfile
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'cd ./ci && docker-compose up -d'
                sh 'sleep 10'
                sh "docker exec app-name-ci-mysql-container mysql --defaults-extra-file=/my.cnf -e \\\"create database reseraco_ci_testing\\\""
            }
        }
        stage('Test') {
            steps {
                sh 'docker exec app-name-ci-php-container /var/www/project/vendor/bin/phpunit'
            }
        }
        stage('Deploy') {
            steps {
                echo "DEPLOYING"
            }
        }
    }
    post {
        always {
            sh 'docker rm app-name-ci-php-container app-name-ci-nginx-container app-name-ci-mysql-container -f'
        }
    }
}
And when I try to run the pipeline, I get this:
Obtained Jenkinsfile from git https://github.com/resera/project-resera-co
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/pipeline2
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Checkout SCM)
[Pipeline] checkout
using credential fe238fde-2a82-4f2f-992c-e6a6fcaa805c
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url https://github.com/resera/project-resera-co # timeout=10
Fetching upstream changes from https://github.com/resera/project-resera-co
> git --version # timeout=10
using GIT_ASKPASS to set credentials
> git fetch --tags --progress https://github.com/resera/project-resera-co +refs/heads/*:refs/remotes/origin/*
> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 5ff27e842bc5f4be1e35a0dc77997cd7b497de39 (refs/remotes/origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f 5ff27e842bc5f4be1e35a0dc77997cd7b497de39
Commit message: "quotes"
> git rev-list --no-walk fb92cb934b1b9820bbf9e51f5b295b20b7d2a810 # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Build)
[Pipeline] sh
+ cd ./ci
+ docker-compose up -d
The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Creating app-name-ci-php-container   ... done
Creating app-name-ci-mysql-container ... done
Creating app-name-ci-nginx-container ... done
[Pipeline] sh
+ sleep 10
[Pipeline] sh
+ docker exec app-name-ci-mysql-container mysql --defaults-extra-file=/my.cnf -e "create database reseraco_ci_testing"
mysql Ver 8.0.17 for Linux on x86_64 (MySQL Community Server - GPL)
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Usage: mysql [OPTIONS] [database]
-?, --help Display this help and exit.
...
-e, --execute=name Execute command and quit. (Disables --force and history
file.)
...
Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf /my.cnf ~/.my.cnf
[rest of the `mysql --help` option listing and defaults table trimmed for brevity]
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test)
Stage "Test" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy)
Stage "Deploy" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] sh
+ docker rm app-name-ci-php-container app-name-ci-nginx-container app-name-ci-mysql-container -f
app-name-ci-php-container
app-name-ci-nginx-container
app-name-ci-mysql-container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
I mean, instead of creating the new database, I get this mysql help output. I've been trying to fix this for 5 hours. At first I tried to use -p with the password, but mysql gave me a warning that it is insecure, and I guessed that warning was breaking the pipeline. So I found a solution: adding my credentials to a cnf file and specifying it with the --defaults-extra-file option. Now the file does seem to be found, but I can't understand why I'm getting the mysql help instead of command execution. Maybe someone can help me?
Most likely the -e parameter you are passing through the docker command is not arriving intact. You could try printing the values of the given parameters inside the container and check that they are what you expect.
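One plausible cause (an illustration, not a confirmed diagnosis of this exact Jenkinsfile): in a Groovy double-quoted string, `\\\"` reaches the shell as a literal `\"`. A backslash-escaped quote is a literal character, not a grouping quote, so the SQL splits into several words and mysql sees extra positional arguments, which makes it print its usage text. A plain-shell sketch of the word splitting:

```shell
# Simulate the argument list the shell builds from:  ... -e \"create database reseraco_ci_testing\"
set -- -e \"create database reseraco_ci_testing\"
echo "word count: $#"    # 4 words instead of the intended 2
echo "second word: $2"   # the literal quote is glued onto the word
```

This prints a word count of 4 and a second word of `"create`, matching the garbled command visible in the pipeline log.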
I had the same problem; I was able to solve it by copying the cnf configuration file into the container.
Following my error in jenkins
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
Build step 'Execute shell' marked build as failure
The error occurs when you put the password on the command line.
I solved it by using docker cp in my job before running the mysql command:
https://docs.docker.com/engine/reference/commandline/cp/
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH
or
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
Hope it helps :-)

Jenkins ssh-agent starts and then stops immediately in pipeline build

I have a simple Jenkins pipeline build; this is my Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('deploy-staging') {
            when {
                branch 'staging'
            }
            steps {
                sshagent(['my-credentials-id']) {
                    sh('git push joe@repo:project')
                }
            }
        }
    }
}
I am using sshagent to push to a git repo on a remote server. I have created credentials that point to a private key file in Jenkins master ~/.ssh.
When I run the build, I get this output (I replaced some sensitive info with *'s):
[ssh-agent] Using credentials *** (***#*** ssh key)
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-cjbm7oVQaJYk/agent.11558
SSH_AGENT_PID=11560
$ ssh-add ***
Identity added: ***
[ssh-agent] Started.
[Pipeline] {
[Pipeline] sh
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 11560 killed;
[ssh-agent] Stopped.
[TDBNSSBFW6JYM3BW6AAVMUV4GVSRLNALY7TWHH6LCUAVI7J3NHJQ] Running shell script
+ git push joe@repo:project
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
As you can see, the ssh-agent starts, stops immediately after and then runs the git push command. The weird thing is it did work correctly once but that seemed completely random.
I'm still fairly new to Jenkins - am I missing something obvious? Any help appreciated, thanks.
edit: I'm running a multibranch pipeline, in case that helps.
I recently had a similar issue though it was inside a docker container.
The logs gave the impression that ssh-agent exits too early but actually the problem was that I had forgotten to add the git server to known hosts.
I suggest ssh-ing onto your jenkins master and trying to do the same steps as the pipeline does with ssh-agent (the cli). Then you'll see where the problem is.
E.g:
eval $(ssh-agent -s)
ssh-add ~/yourKey
git clone
As explained on help.github.com
Update:
Here a util to add knownHosts if not yet added:
/**
 * Add hostUrl to known hosts on the system (or container) if necessary, so that
 * ssh commands will go through even if the certificate was not previously seen.
 * @param hostUrl
 */
void tryAddKnownHost(String hostUrl) {
    // ssh-keygen -F ${hostUrl} will fail (in bash that means status code != 0)
    // if ${hostUrl} is not yet a known host
    def statusCode = sh script: "ssh-keygen -F ${hostUrl}", returnStatus: true
    if (statusCode != 0) {
        sh "mkdir -p ~/.ssh"
        sh "ssh-keyscan ${hostUrl} >> ~/.ssh/known_hosts"
    }
}
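The ssh-keygen -F check can be tried standalone before wiring it into a pipeline (a sketch using a throwaway known_hosts file; github.com is just an example host):

```shell
# ssh-keygen -F exits non-zero when the host has no entry in the given known_hosts file
kh=$(mktemp)
if ssh-keygen -f "$kh" -F github.com > /dev/null; then
    echo "host already known"
else
    echo "host not known yet"
fi
rm -f "$kh"
```

Against an empty file the check fails, which is the condition tryAddKnownHost uses to decide whether to run ssh-keyscan.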
I was using this inside docker, and adding it to my Jenkins master's known_hosts felt a bit messy, so I opted for something like this:
In Jenkins, create a new credential of type "Secret text" (let's call it GITHUB_HOST_KEY), and set its value to be the host key, e.g.:
# gets the host for github and copies it. You can run this from
# any computer that has access to github.com (or whatever your
# git server is)
ssh-keyscan github.com | clip
In your Jenkinsfile, save the string to known_hosts
pipeline {
    agent { docker { image 'node:12' } }
    stages {
        stage('deploy-staging') {
            when { branch 'staging' }
            steps {
                withCredentials([string(credentialsId: 'GITHUB_HOST_KEY', variable: 'GITHUB_HOST_KEY')]) {
                    sh 'mkdir ~/.ssh && echo "$GITHUB_HOST_KEY" >> ~/.ssh/known_hosts'
                }
                sshagent(['my-credentials-id']) {
                    sh 'git push joe@repo:project'
                }
            }
        }
    }
}
This ensures you're using a "trusted" host key.

Not able to push docker image to artifactory registry

I am not able to push a Docker image to the Artifactory registry; I get the error below.
Login and pulling work fine.
92bd1433d7c5: Layer already exists
b31411566900: Layer already exists
f0ed7f14cbd1: Layer already exists
851f3e348c69: Layer already exists
e27a10675c56: Layer already exists
EOF
Jenkinsfile:
node('lnp6xxxxxxb003') {
    def app
    def server = Artifactory.server 'maven-qa'
    server.bypassProxy = true

    stage('Clone repository') {
        /* Let's make sure we have the repository cloned to our workspace */
        checkout scm
    }

    stage('Build image') {
        /* This builds the actual image; synonymous to
         * docker build on the command line */
        app = docker.build("devteam/maven")
    }

    stage('Test image') {
        /* Ideally, we would run a test framework against our image. */
        app.inside {
            sh 'mvn --version'
            sh 'echo "Tests passed"'
        }
    }

    stage('Push image') {
        /* Finally, we'll push the image with two tags:
         * First, the incremental build number from Jenkins
         * Second, the 'latest' tag.
         * Pushing multiple tags is cheap, as all the layers are reused. */
        docker.withRegistry('https://docker.maven-qa.xxx.partners', 'docker-credentials') {
            app.push("${env.BUILD_NUMBER}")
            /* app.push("latest") */
        }
    }
}
Dockerfile:
# Dockerfile
FROM maven
ENV MAVEN_VERSION 3.3.9
ENV MAVEN_HOME /usr/share/maven
VOLUME /root/.m2
CMD ["mvn"]
Not sure what is wrong there. I am able to push an image manually on the Jenkins slave node, but through Jenkins it gives the error.
Logs of my build job
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Build image)
[Pipeline] sh
[docker-maven-image] Running shell script
+ docker build -t devteam/maven .
Sending build context to Docker daemon 231.9 kB
Step 1 : FROM maven
---> 1f858e89a584
Step 2 : ENV MAVEN_VERSION 3.3.9
---> Using cache
---> c5ff64f9ff9f
Step 3 : ENV MAVEN_HOME /usr/share/maven
---> Using cache
---> 2a2028d6fdbc
Step 4 : VOLUME /root/.m2
---> Using cache
---> a50223412b56
Step 5 : CMD mvn
---> Using cache
---> 2d32a26dde10
Successfully built 2d32a26dde10
[Pipeline] dockerFingerprintFrom
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Push image)
[Pipeline] withDockerRegistry
Wrote authentication to /usr/share/tomcat6/.docker/config.json
[Pipeline] {
[Pipeline] sh
[docker-maven-image] Running shell script
+ docker tag --force=true devteam/maven devteam/maven:84
unknown flag: --force
See 'docker tag --help'.
+ docker tag devteam/maven devteam/maven:84
[Pipeline] sh
[docker-maven-image] Running shell script
+ docker push devteam/maven:84
The push refers to a repository [docker.maven-qa.XXXXX.partners/devteam/maven]
e13738d640c2: Preparing
ef91149a34fb: Preparing
3332503b7bd2: Preparing
875b1eafb4d0: Preparing
7ce1a454660d: Preparing
d3b195003fcc: Preparing
92bd1433d7c5: Preparing
f0ed7f14cbd1: Preparing
b31411566900: Preparing
06f4de5fefea: Preparing
851f3e348c69: Preparing
e27a10675c56: Preparing
92bd1433d7c5: Waiting
f0ed7f14cbd1: Waiting
b31411566900: Waiting
06f4de5fefea: Waiting
851f3e348c69: Waiting
e27a10675c56: Waiting
d3b195003fcc: Waiting
e13738d640c2: Layer already exists
3332503b7bd2: Layer already exists
7ce1a454660d: Layer already exists
875b1eafb4d0: Layer already exists
ef91149a34fb: Layer already exists
d3b195003fcc: Layer already exists
f0ed7f14cbd1: Layer already exists
b31411566900: Layer already exists
92bd1433d7c5: Layer already exists
06f4de5fefea: Layer already exists
851f3e348c69: Layer already exists
e27a10675c56: Layer already exists
EOF
[Pipeline] }
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
This is what I have in my build logs.
I am using nginx as a reverse proxy in front of Artifactory, which is behind a load balancer. I removed the lines below from the nginx config and it worked:
proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host/artifactory;
proxy_set_header X-Forwarded-Port  $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host              $http_host;
proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
I am still not sure why these headers were causing the issue.
I also faced the same issue; after I enabled the Docker Pipeline plugin it started working. I think it may help you: https://plugins.jenkins.io/docker-workflow/
