I'm new to Kubernetes. I'm trying to deploy an Angular application using Docker + Kubernetes; here is the Jenkins script below:
stage('Deploy') {
  container('kubectl') {
    withCredentials([kubeconfigFile(credentialsId: 'KUBERNETES_CLUSTER_CONFIG', variable: 'KUBECONFIG')]) {
      def kubectl = "kubectl --kubeconfig=${KUBECONFIG} --context=demo"
      echo 'deployment to PRERELEASE!'
      sh "kubectl config get-contexts"
      sh "kubectl -n demo get pods"
      sh "${kubectl} apply -f ./environment/pre-release -n=pre-release"
    }
  }
}
Please find the Jenkins output below:
/home/jenkins/agent/workspace/DevOps-CI_future-master-fix
[Pipeline] stage
[Pipeline] { (Deploy)
[Pipeline] container
[Pipeline] {
[Pipeline] withCredentials
Masking supported pattern matches of $KUBECONFIG
[Pipeline] {
[Pipeline] echo
deploy to deployment!!
[Pipeline] echo
deploy to PRERELEASE!
[Pipeline] sh
+ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* demo kubernetes kubernetes-admin demo
kubernetes-admin#kubernetes kubernetes kubernetes-admin
[Pipeline] sh
+ kubectl -n demo get pods
NAME READY STATUS RESTARTS AGE
worker-f99adee3-dedd-46ca-bc0d-6b24391e5865-qkd47-mwl3v 5/5 Running 0 26s
[Pipeline] sh
+ kubectl '--kubeconfig=****' '--context=demo' apply -f ./environment/pre-release '-n=pre-release'
deployment.apps/frontend-deploy unchanged
service/frontend unchanged
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS
Now the question is: after the deployment, I am not able to see the pods and the deployment on the master machine using the commands below. Can someone please help me with how to access the application after a successful deployment?
kubectl get pods
kubectl get services
kubectl get deployments
You're setting the namespace to pre-release when running "${kubectl} apply -f ./environment/pre-release -n=pre-release".
To get pods in this namespace, use: kubectl get pods -n pre-release.
Namespaces are a way to separate different virtual clusters inside your single physical Kubernetes cluster. See https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ for more detail.
You are creating the resources in a namespace called pre-release using the -n option when you run the following command:
kubectl '--kubeconfig=****' '--context=demo' apply -f ./environment/pre-release '-n=pre-release'
deployment.apps/frontend-deploy unchanged
You need to list the resources in the same namespace:
kubectl get pods -n pre-release
kubectl get services -n pre-release
kubectl get deployments -n pre-release
By default, kubectl performs the requested operation in the default namespace. If you want to set your current namespace to pre-release, so that you need not append -n pre-release to every kubectl command, you can run the following command:
kubectl config set-context --current --namespace=pre-release
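For completeness, here is a minimal sketch of how the Deploy stage could apply and then check the resources in the same namespace (the deployment name frontend-deploy comes from the log above; the extra rollout status and get steps are assumptions, not part of the original pipeline):
stage('Deploy') {
  container('kubectl') {
    withCredentials([kubeconfigFile(credentialsId: 'KUBERNETES_CLUSTER_CONFIG', variable: 'KUBECONFIG')]) {
      // keep the namespace in one place so the apply and the follow-up checks agree
      def kubectl = "kubectl --kubeconfig=${KUBECONFIG} --context=demo -n pre-release"
      sh "${kubectl} apply -f ./environment/pre-release"
      // check the rollout in the SAME namespace the apply used
      sh "${kubectl} rollout status deployment/frontend-deploy"
      sh "${kubectl} get pods,services,deployments"
    }
  }
}
If the pods still do not show up on the master machine, kubectl config view --minify shows which cluster and namespace the kubeconfig you are querying with actually points to.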
Related
I have the script below. With legacy Jenkins and only the Docker plugin installed, I was able to pull the node image without credentials.
pipeline {
  agent {
    docker { image 'node:12.22.1' }
  }
}
Now it fails miserably with the error below:
$ docker login -u user#gmail.com -p ******** https://index.docker.io/v1/
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get https://registry-1.docker.io/v2/: unauthorized: incorrect username or password
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: docker login failed
Finished: FAILURE
I tried setting up the environment with Docker credentials like below:
pipeline {
  agent {
    docker { image 'node:12.22.1' }
  }
  environment {
    registryCredential: 'dockerhub_id' // created in global credentials
  }
}
Still no luck. A good example of how to dockerize my login app would help unblock me.
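For reference, here is a minimal sketch of handing the registry credentials to the declarative docker agent instead. It assumes the global credential ID dockerhub_id from above and uses the Docker Pipeline plugin's registryUrl/registryCredentialsId options; it is a sketch, not a verified fix:
pipeline {
  agent {
    docker {
      image 'node:12.22.1'
      registryUrl 'https://index.docker.io/v1/'   // Docker Hub
      registryCredentialsId 'dockerhub_id'        // global credential ID from the question
    }
  }
  stages {
    stage('Build') {
      steps {
        sh 'node --version'                       // placeholder step to prove the image was pulled
      }
    }
  }
}
Note also that a declarative environment block assigns with =, not : (registryCredential = 'dockerhub_id'), and that an environment variable on its own does not make Jenkins perform the docker login.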
I'm running flyway within my Jenkins pipeline. The docker image works and flyway runs fine. I can call flyway baseline to initialize the schema and that's about as far as I can get.
I'm attempting to mount the directory "Database/migrations" in the docker image using image.withRun('-v /Database/migrations:/migrations'... as listed in the segment below, but I'm not having any luck.
// git clone
stage("Checkout") {
  checkout scm
}
// db migration
stage('Apply DB changes') {
  sh "ls Database/migrations"
  def flyway = docker.image('flyway/flyway')
  flyway.withRun('-v /Database/migrations:/migrations',
      '-url=jdbc:mysql://****:3306/**** -user=**** -password=**** -X -locations="filesystem:/migrations" migrate') { c ->
    sh "docker exec ${c.id} ls flyway"
    sh "docker logs --follow ${c.id}"
  }
}
Below is the debug output from Jenkins for that stage (cleaned up for simplicity); notice there is nothing under "migrations".
[Pipeline] { (Apply DB changes)
[Pipeline] sh
+ ls Database/migrations
V2__create_temp_table.sql
[Pipeline] isUnix
[Pipeline] sh
+ docker run -d -v /Database/migrations:/migrations flyway/flyway -url=jdbc:mysql://****:3306/**** -user=**** '-password=****' -X -locations=filesystem:/migrations migrate
[Pipeline] sh
+ docker exec 12461436e4cb1150a20d8fca13ef7691d66528a11864ab17600bb994a1248675 ls /migrations
[Pipeline] sh
+ docker logs --follow 12461436e4cb1150a20d8fca13ef7691d66528a11864ab17600bb994a1248675
DEBUG: Loading config file: /flyway/conf/flyway.conf
DEBUG: Unable to load config file: /flyway/flyway.conf
DEBUG: Unable to load config file: /flyway/flyway.conf
DEBUG: Using configuration:
DEBUG: flyway.locations -> filesystem:/migrations
Flyway Community Edition 7.5.3 by Redgate
DEBUG: Scanning for filesystem resources at '/migrations'
DEBUG: Scanning for resources in path: /migrations (/migrations)
DEBUG: Driver : MySQL Connector/J mysql-connector-java-8.0.20 (Revision: afc0a13cd3c5a0bf57eaa809ee0ee6df1fd5ac9b)
DEBUG: Validating migrations ...
Successfully validated 1 migration (execution time 00:00.033s)
Current version of schema `****`: 1
Schema `****` is up to date. No migration necessary.
Any and all advice is greatly appreciated! Thanks in advance!
Database/migrations is different from /Database/migrations
My $WORKSPACE variable actually points to /var/lib/jenkins/workspace/..., so I needed to update the mount path to $WORKSPACE/Database/migrations:/migrations 🤦🏻♂️
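For example, a sketch of the corrected stage with the mount anchored to the Jenkins workspace (the masked -url/-user/-password values are kept exactly as in the question):
stage('Apply DB changes') {
  // mount the checked-out migrations from the workspace, not from the host's root filesystem
  def flyway = docker.image('flyway/flyway')
  flyway.withRun("-v ${env.WORKSPACE}/Database/migrations:/migrations",
      '-url=jdbc:mysql://****:3306/**** -user=**** -password=**** -X -locations="filesystem:/migrations" migrate') { c ->
    sh "docker logs --follow ${c.id}"
  }
}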
I have a Jenkinsfile like the one below:
stage("Deploy artifact for k8s sync")
{
sh '''
ns_exists=`kubectl get ns | grep ${target_cluster}`
if [ -z "$ns_exists" ]
then
echo "No namespace ${consider_namespace} exists in the cluster ${target_cluster}"
echo "Creating namespace ${consider_namespace} in the cluster ${target_cluster}"
kubectl apply "some yaml file"
else
if [ "${consider_kinds}" = "all" ] || [ "${consider_kinds}" = "serviceaccounts" ]
then
echo "Applying source serviceaccounts on target cluster ${target_cluster}"
kubectl "some yaml file"
fi
if [ "${consider_kinds}" = "all" ] || [ "${consider_kinds}" = "secrets" ]
then
echo "Applying source secrets on target cluster ${target_cluster}"
kubectl "some yaml file"
fi
if [ "${consider_kinds}" = "all" ] || [ "${consider_kinds}" = "configmaps" ]
then
echo "Applying source configmaps on target cluster ${target_cluster}"
kubectl apply -f ${BUILD_NUMBER}-${source_cluster}-${consider_namespace}-configmaps.yaml
fi
However, when I run it, it fails with an error like the one below:
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy artefact for k8s sync)
[Pipeline] sh
+ kubectl get ns
+ grep test-central-eks
+ ns_exists=
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
I'm wondering how to resolve it, and why it fails in the first place.
As far as I can see, you have a couple of things that make this fail.
From your pipeline extract I can see that you are using variables that are not defined in the script, so I guess they are environment variables or come from a previous step. To be able to use them you need Groovy string interpolation, in your case a triple-double-quoted string: https://groovy-lang.org/syntax.html#_triple_double_quoted_string
stage("Deploy artifact for k8s sync")
{
sh """
On the other hand, grep exits with code 1 when there is no match. If you need to continue when there is no match, you can wrap it in a command block with a true condition:
ns_exists=`kubectl get ns | { grep ${target_cluster} || true; }`
Finally, a good way to catch these problems is to replay the pipeline with debugging enabled in the shell blocks by adding set -x at the beginning of the sh block.
Please note as well that stderr is not printed; if you need it, you have to redirect it to stdout and read it from the returnStdout: true option of the Jenkins sh step, or from a custom file where the output is redirected.
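Putting those suggestions together, a sketch of the stage (triple double quotes for Groovy interpolation, || true to survive a non-match, set -x only while debugging; the "some yaml file" placeholder is kept from the question):
stage("Deploy artifact for k8s sync") {
    sh """
        set -x
        # grep exits 1 on no match; '|| true' keeps the shell step from failing
        ns_exists=\$(kubectl get ns | { grep ${target_cluster} || true; })
        if [ -z "\$ns_exists" ]
        then
            echo "Creating namespace ${consider_namespace} in the cluster ${target_cluster}"
            kubectl apply -f "some yaml file"
        fi
    """
}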
Our env: Jenkins version: 2.138.3
Kubernetes plugin: 1.13.5
Sshagent plugin: 1.17
I have a job that runs OK on an AWS machine (sshagent works as it should), but when I run the same job on our Kubernetes cluster it fails with an SSH error.
Attached is the working pipeline:
pipeline {
  agent {
    label 'deploy-test'
  }
  stages {
    stage('sshagent') {
      steps {
        script {
          sshagent(['deploy_user']) {
            sh 'ssh -o StrictHostKeyChecking=no 99.99.999.99 ls'
          }
        }
      }
    }
  }
}
If I change the label to 'k8s-slave', it fails with:
+ ssh -o StrictHostKeyChecking=no 99.99.999.99 ls
Warning: Permanently added '99.99.999.99' (ECDSA) to the list of known hosts.
Permission denied (publickey).
Any idea?
I just added my Kubernetes configuration in Jenkins.
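For context, the pattern usually used with the Kubernetes plugin is to run the sshagent block inside a pod container that actually has an SSH client installed; here is a sketch with an assumed container name and image (not taken from the original job):
podTemplate(label: 'k8s-slave', containers: [
    containerTemplate(name: 'deploy', image: 'an-image-with-ssh-client', ttyEnabled: true, command: 'cat')
]) {
    node('k8s-slave') {
        container('deploy') {
            sshagent(['deploy_user']) {
                sh 'ssh -o StrictHostKeyChecking=no 99.99.999.99 ls'
            }
        }
    }
}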
I am not able to push a Docker image to the Artifactory registry; I'm getting the error below.
Login and pulling work fine.
92bd1433d7c5: Layer already exists
b31411566900: Layer already exists
f0ed7f14cbd1: Layer already exists
851f3e348c69: Layer already exists
e27a10675c56: Layer already exists
EOF
Jenkinsfile:
node('lnp6xxxxxxb003') {
    def app
    def server = Artifactory.server 'maven-qa'
    server.bypassProxy = true

    stage('Clone repository') {
        /* Let's make sure we have the repository cloned to our workspace */
        checkout scm
    }

    stage('Build image') {
        /* This builds the actual image; synonymous to
         * docker build on the command line */
        app = docker.build("devteam/maven")
    }

    stage('Test image') {
        /* Ideally, we would run a test framework against our image. */
        app.inside {
            sh 'mvn --version'
            sh 'echo "Tests passed"'
        }
    }

    stage('Push image') {
        /* Finally, we'll push the image with two tags:
         * First, the incremental build number from Jenkins
         * Second, the 'latest' tag.
         * Pushing multiple tags is cheap, as all the layers are reused. */
        docker.withRegistry('https://docker.maven-qa.xxx.partners', 'docker-credentials') {
            app.push("${env.BUILD_NUMBER}")
            /* app.push("latest") */
        }
    }
}
Dockerfile:
# Dockerfile
FROM maven
ENV MAVEN_VERSION 3.3.9
ENV MAVEN_HOME /usr/share/maven
VOLUME /root/.m2
CMD ["mvn"]
I'm not sure what is wrong here. I am able to manually push an image from the Jenkins slave node, but through Jenkins it gives an error.
Logs of my build job:
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Build image)
[Pipeline] sh
[docker-maven-image] Running shell script
+ docker build -t devteam/maven .
Sending build context to Docker daemon 231.9 kB
Step 1 : FROM maven
---> 1f858e89a584
Step 2 : ENV MAVEN_VERSION 3.3.9
---> Using cache
---> c5ff64f9ff9f
Step 3 : ENV MAVEN_HOME /usr/share/maven
---> Using cache
---> 2a2028d6fdbc
Step 4 : VOLUME /root/.m2
---> Using cache
---> a50223412b56
Step 5 : CMD mvn
---> Using cache
---> 2d32a26dde10
Successfully built 2d32a26dde10
[Pipeline] dockerFingerprintFrom
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Push image)
[Pipeline] withDockerRegistry
Wrote authentication to /usr/share/tomcat6/.docker/config.json
[Pipeline] {
[Pipeline] sh
[docker-maven-image] Running shell script
+ docker tag --force=true devteam/maven devteam/maven:84
unknown flag: --force
See 'docker tag --help'.
+ docker tag devteam/maven devteam/maven:84
[Pipeline] sh
[docker-maven-image] Running shell script
+ docker push devteam/maven:84
The push refers to a repository [docker.maven-qa.XXXXX.partners/devteam/maven]
e13738d640c2: Preparing
ef91149a34fb: Preparing
3332503b7bd2: Preparing
875b1eafb4d0: Preparing
7ce1a454660d: Preparing
d3b195003fcc: Preparing
92bd1433d7c5: Preparing
f0ed7f14cbd1: Preparing
b31411566900: Preparing
06f4de5fefea: Preparing
851f3e348c69: Preparing
e27a10675c56: Preparing
92bd1433d7c5: Waiting
f0ed7f14cbd1: Waiting
b31411566900: Waiting
06f4de5fefea: Waiting
851f3e348c69: Waiting
e27a10675c56: Waiting
d3b195003fcc: Waiting
e13738d640c2: Layer already exists
3332503b7bd2: Layer already exists
7ce1a454660d: Layer already exists
875b1eafb4d0: Layer already exists
ef91149a34fb: Layer already exists
d3b195003fcc: Layer already exists
f0ed7f14cbd1: Layer already exists
b31411566900: Layer already exists
92bd1433d7c5: Layer already exists
06f4de5fefea: Layer already exists
851f3e348c69: Layer already exists
e27a10675c56: Layer already exists
EOF
[Pipeline] }
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
This is what I have in my build logs.
I am using nginx as a reverse proxy for Artifactory, which sits behind a load balancer. I removed the lines below from the nginx config and it worked:
proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host/artifactory;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
I am still not sure why these headers were causing the issue.
I also faced the same issue; after I enabled the Docker Pipeline plugin, it started working. I think it may help you: https://plugins.jenkins.io/docker-workflow/