Puppet docker module does not work for Jenkins slave (node) - docker

First of all, thanks for spending time reading this.
What I am trying to achieve:
installing Puppet on all my instances (master, agent1, agent2, etc.) DONE
from the Puppet master, installing puppetlabs/docker, so now I have Docker on all my instances. DONE
putting all my instances into Docker swarm manager mode. DONE
on the master, installing Jenkins with docker service create --name jenkins-master -p 50000:50000 -p 80:8080 jenkins and installing the self-organizing Swarm plugin in Jenkins. DONE
creating a Docker secret for all instances: echo "-master http://35.23... -password admin -username admin" | docker secret create jenkins-v1 - DONE
When trying to create a Jenkins node, it FAILS; nothing happens:
docker service create \
--mode=global \
--name jenkins-swarm-agent \
-e LABELS=docker-test \
--mount "type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock" \
--mount "type=bind,source=/tmp/,target=/tmp/" \
--secret source=jenkins-v1,target=jenkins \
vipconsult/jenkins-swarm-agent
I read before that the Puppet module doesn't work with Docker swarm mode.
Do you know any alternative way to chain Puppet > Docker > swarm > Jenkins > slave nodes?
Please advise!

Done! The fix was the secret:
echo "-master http://35.23... -password admin -username admin" | docker secret create jenkins-v1 -
The password and username must match the Jenkins login credentials exactly!
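To keep those credentials consistent, the swarm-client argument string can be built from variables before it is piped into the secret. This is a minimal sketch; the URL and credentials below are placeholders, not the asker's real values:

```shell
# Sketch (hypothetical values): build the swarm-client argument string from
# variables so the username/password always match the Jenkins login exactly.
JENKINS_URL="http://jenkins.example.com"
JENKINS_USER="admin"
JENKINS_PASS="admin"
SECRET_PAYLOAD="-master $JENKINS_URL -username $JENKINS_USER -password $JENKINS_PASS"
echo "$SECRET_PAYLOAD"
# then pipe it into the secret:
#   echo "$SECRET_PAYLOAD" | docker secret create jenkins-v1 -
```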

Related

Jenkins dashboard does not show at localhost:8080 when launched from Windows Docker WSL2

Windows 10 Home, WSL2, Jenkins 2.263, Docker
I am learning Jenkins and doing the tutorial by Starmer.
In an ubuntu terminal (Windows 10 Home WSL2) I ran the provided code:
useradd jenkins -m
docker run \
-u jenkins \
--rm \
-d \
-p 8080:8080 \
-p 50000:50000 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /home/jenkins:/var/jenkins_home \
jenkins/jenkins:lts
cat /home/jenkins/secrets/initialAdminPassword
But when I open localhost:8080 in my browser, instead of the Jenkins dashboard I see an error page (screenshot omitted).
When I stop the container in Docker and launch Jenkins with java -jar jenkins.war instead, I can see the dashboard at localhost:8080.
I also tried deleting the jenkins folder and the jenkins/jenkins:lts image from Docker Desktop, pulled again, and got:
cat: /home/jenkins/secrets/initialAdminPassword: No such file or directory
It started working. I don't know what fixed it, but these are the things I tried:
In Docker Desktop, reset all data to factory defaults.
Created an ubuntu root password and did a su -.
Turned off the experimental Docker feature "cloud enabled".
I found the initial admin password in the Console in Docker Desktop, not in the ubuntu shell.
After that I was able to go to localhost:8080 and enter the admin password.
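If the bind-mounted jenkins_home looks empty, one way to probe for the generated password is to poll the secrets path for a few seconds before reading it. This is a sketch, not part of the tutorial; the helper name and retry count are arbitrary, and the path comes from the -v mapping in the run command above:

```shell
# Sketch: poll for the generated password file, then print it.
# read_initial_password waits up to N seconds for the file to appear.
read_initial_password() {
  file="$1"; tries="${2:-5}"
  while [ "$tries" -gt 0 ]; do
    if [ -f "$file" ]; then cat "$file"; return 0; fi
    tries=$((tries - 1))
    sleep 1   # Jenkins writes the file shortly after its first start
  done
  echo "initialAdminPassword not generated yet" >&2
  return 1
}
# Usage against the bind mount from the run command above:
#   read_initial_password /home/jenkins/secrets/initialAdminPassword
```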

How to use host machine as an agent from Jenkins running in a docker?

So I saw this answer, but it doesn't give me the solution I need.
I'm running Jenkins inside a Docker image (from https://hub.docker.com/r/jenkins/jenkins) and now want to use my host (Windows 10) machine as an agent or slave, so that jobs which must run on the Windows machine can use my host. These jobs include the use of another Docker container.
The problem is that I don't know how to reach the host from inside Docker, so I can't figure out the address/IP to use under Manage Jenkins > Nodes.
Any help would be really appreciated.
Thanks in advance...
Add the host machine as a node in Jenkins, using an IP address for the host that is reachable from inside the container (on Docker Desktop for Windows, containers can reach the host as host.docker.internal).
Then, on each job, you can restrict the job to run on that specific slave (the node representing the host machine).
This command may help you solve your problem:
docker run -d --name my-jenkins \
-v jenkins_home:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(which docker):/usr/bin/docker \
-p 8080:8080 -p 50000:50000 \
jenkins/jenkins:lts-jdk11

gitlab ci failing with custom runner

I'm trying to create a custom gitlab-runner to run a Docker process, following:
https://github.com/gitlabhq/gitlabhq/blob/master/doc/ci/docker/using_docker_build.md
I tried the second approach in which I registered a runner using:
sudo gitlab-runner register -n \
--url https://gitlab.com/ \
--registration-token xxx \
--executor docker \
--description "My Docker Runner" \
--docker-image "docker:stable" \
--docker-volumes /var/run/docker.sock:/var/run/docker.sock
However, at GitLab, whenever the pipeline starts I'm facing the following error:
ERROR: Failed to create container volume for /builds/xxx Unable to load image: gitlab-runner-prebuilt: "open /var/lib/gitlab-runner/gitlab-runner-prebuilt.tar.xz: no such file or directory"
I can't find much information online, any help appreciated.
For the record, I got it working by following this tutorial:
https://angristan.xyz/build-push-docker-images-gitlab-ci/
Since the Docker image worked, I suspect there's something wrong with the Debian gitlab-runner distribution.
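For comparison, a successful docker-executor registration like the register command above should leave a runner entry in /etc/gitlab-runner/config.toml roughly like the following. The values mirror the register flags; the token is whatever GitLab actually issued, shown here as the placeholder xxx:

```toml
# Sketch of the expected runner entry (values mirror the register flags above)
[[runners]]
  name = "My Docker Runner"
  url = "https://gitlab.com/"
  token = "xxx"
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    volumes = ["/var/run/docker.sock:/var/run/docker.sock"]
```

If the prebuilt-image error persists, checking that this file was actually written (and that the [runners.docker] volumes line survived) is a quick way to rule out a bad registration.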

How to pass docker options --mac-address, -v etc in kubernetes?

I have installed a 50 node Kubernetes cluster in our lab and am beginning to test it. The problem I am facing is that I cannot find a way to pass the docker options needed to run the docker container in Kubernetes. I have looked at kubectl as well as the GUI. An example docker run command line is below:
sudo docker run -it --mac-address=$MAC_ADDRESS \
-e DISPLAY=$DISPLAY -e UID=$UID -e XAUTHORITY=$XAUTHORITY \
-e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
-v /tmp/.X11-unix:/tmp/.X11-unix:ro \
-v /mnt/lab:/mnt/lab -v /mnt/stor01:/mnt/stor01 \
-v /mnt/stor02:/mnt/stor02 -v /mnt/stor03:/mnt/stor03 \
-v /mnt/scratch01:/mnt/scratch01 \
-v /mnt/scratch02:/mnt/scratch02 \
-v /mnt/scratch03:/mnt/scratch03 \
matlabpipeline $ARGS
My first question is whether we can pass these docker options or not. If there is a way to pass them, how do I do it?
Thanks...
I looked into this as well and, from the sounds of it, this is an unsupported use case for Kubernetes. Applying a specific MAC address to a docker container seems to conflict with the overall design goal of easily bringing up replica instances. There are a few workarounds suggested on this Reddit thread. In particular, the OP finally settled on the following:
I ended up adding the NET_ADMIN capability and changing the MAC to an environment variable with "ip link" in my entrypoint.sh.
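That workaround can be sketched as an entrypoint script. This is an assumed reconstruction, not the OP's actual file: the container must be granted NET_ADMIN (in Kubernetes, via securityContext.capabilities.add on the container), the interface name eth0 is assumed, and MAC_ADDRESS is supplied as an environment variable in the pod spec:

```shell
#!/bin/sh
# Hypothetical entrypoint.sh: apply $MAC_ADDRESS to eth0 before starting the
# real command. Requires the NET_ADMIN capability inside the container.
set_mac() {
  # Do nothing unless a MAC address was requested via the environment.
  [ -z "$MAC_ADDRESS" ] && return 0
  ip link set dev eth0 down &&
  ip link set dev eth0 address "$MAC_ADDRESS" &&
  ip link set dev eth0 up
}
set_mac
exec "$@"
```

The other flags in the docker run line map onto the pod spec directly: each -e becomes an entry under env:, and each -v becomes a hostPath volume plus a volumeMount.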

Can't pull image from docker registry when docker is pointing to a swarm

I'm having an issue with google container registry and dockerhub where docker pull returns the following errors.
gcr
Error: Status 403 trying to pull repository PROJECT_ID/IMAGE_NAME: "Unable to access the repository: PROJECT_ID/IMAGE_NAME; please verify that it exists and you have permission to access it (no valid credential was supplied)."
dockerhub
Using default tag: latest
test-node0: Pulling k8tems/hello-world:latest... : Error: image k8tems/hello-world not found
Error: image k8tems/hello-world not found
This only happens when docker is pointing to a swarm.
Steps to reproduce:
DOCKER_REGISTRY=asia.gcr.io/$PROJECT_ID
KEY_STORE=test-keystore
NODE_BASE=test-node
echo pushing hello-world image to gcr
docker pull hello-world
docker tag hello-world $DOCKER_REGISTRY/hello-world
docker push $DOCKER_REGISTRY/hello-world
echo setting up key store
docker-machine create \
-d digitalocean \
"$KEY_STORE"
docker $(docker-machine config "$KEY_STORE") run -d \
-p "8500:8500" \
-h "consul" \
progrium/consul -server -bootstrap
eval $(docker-machine env "$KEY_STORE")
docker-machine create \
-d digitalocean \
--swarm \
--swarm-master \
--swarm-discovery="consul://$(docker-machine ip "$KEY_STORE"):8500" \
--engine-opt="cluster-store=consul://$(docker-machine ip "$KEY_STORE"):8500" \
--engine-opt="cluster-advertise=eth0:2376" \
"$NODE_BASE"0
echo this fails
eval $(docker-machine env -swarm "$NODE_BASE"0)
docker pull $DOCKER_REGISTRY/hello-world
echo this succeeds
eval $(docker-machine env "$NODE_BASE"0)
docker pull $DOCKER_REGISTRY/hello-world
Along with the above snippet, I've also tried forcing the remote docker version to 1.10.3 and swarm to 1.1.3 but the error still persists.
ubuntu:~$ docker-machine ls | grep test
test-keystore - digitalocean Running tcp://:2376 v1.10.3
test-node0 * digitalocean Running tcp://:2376 test-node0 (master) v1.10.3
ubuntu:~$ docker exec swarm-agent-master /swarm -v
swarm version 1.1.3 (7e9c6bd)
ubuntu:~$ docker -v
Docker version 1.10.2, build c3959b1
Is there anything I can do to make this work with the -swarm flag or do I have to run the pull command for each node?
Try a JSON key file! It is a long-lived credential and much more consistent than an access token when you are using clusters like Swarm or Kubernetes.
Example command:
docker login -e 1234@5678.com -u _json_key -p "$(cat keyfile.json)" https://gcr.io
Here is the page with more details:
https://cloud.google.com/container-registry/docs/advanced-authentication#using_a_json_key_file
