Google Cloud Console Jenkins password

I have read a lot of documentation.
I set up Jenkins on Google Cloud Console using the default Kubernetes creation.
When I go to log in, Jenkins asks me for a password to unlock it.
I'm unable to find that password.
Thanks

Access the Jenkins container via Cloud Shell.
First, get the pod ID:
kubectl get pods --namespace=yourNamespace
jenkins-867df9fcb8-ctfq5 1/1 Running 0 16m
Then open a bash shell in that pod:
kubectl exec -it --namespace=yourNamespace jenkins-867df9fcb8-ctfq5 -- bash
Then just cd to the directory where the initialAdminPassword is saved and use the "cat" command to print its value.
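For the official Jenkins image the secrets live under the Jenkins home, so inside the pod something like the following should print the password (the path below assumes the default image layout, and it matches the path the Marketplace answer further down uses):
# run inside the pod's bash session; path assumes the default Jenkins home
cat /var/jenkins_home/secrets/initialAdminPassword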

The password will be in a file under the folder /secrets, named initialAdminPassword.
You can go inside the container in case volume mapping is not set up.

I had the same issue when creating Jenkins on a GKE cluster, and I couldn't even find the initialAdminPassword (I tried to look inside the volume with no luck)...
As I wanted authentication on the cluster, I just created my own image with the Google OAuth plugin and a Groovy file, using this repo as a model: https://github.com/Sho2010/jenkins-google-login
This way I can log in with my Google account. If you need another auth method, you should be able to find one online.
If you just want to test Jenkins and you don't need a password, use JAVA_OPTS to skip the setup wizard, like this:
- name: JAVA_OPTS
  value: -Xmx4096m -Djenkins.install.runSetupWizard=false
If you are using the basic Jenkins image you shouldn't be asked for any password and will have full access to your Jenkins (don't leave it like this if you are going to create production-ready jobs).
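If the Deployment already exists, one way to apply this without editing the manifest could be kubectl set env; a minimal sketch, assuming the Deployment is simply named jenkins:
# hypothetical deployment name "jenkins"; this triggers a new rollout with the setup wizard disabled
kubectl set env deployment/jenkins JAVA_OPTS="-Xmx4096m -Djenkins.install.runSetupWizard=false"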

For the GKE Marketplace "click to deploy" Jenkins, the instructions are pretty simple and can be found in the application's "Next steps" description after deployment:
Access Jenkins instance.
Identify HTTPS endpoint.
echo https://$(kubectl -njenkins get ingress -l "app.kubernetes.io/name=jenkins-1" -ojsonpath="{.items[0].status.loadBalancer.ingress[0].ip}")/
For HTTPS you have to accept a certificate (we created a temporary one for you).
Now you need a password.
kubectl -njenkins exec \
$(kubectl -njenkins get pod -oname | sed -n /\\/jenkins-1-jenkins-deployment/s.pods\\?/..p) \
cat /var/jenkins_home/secrets/initialAdminPassword
To fully configure the Jenkins instance, follow the on-screen instructions.
I've tested it and it works as expected.
Another guide with almost the same steps can be found here
Jenkins docker image usually shows the initial password in the container log.
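So a quick way to check is to dump the pod log, reusing the namespace and pod name from the first answer as placeholders:
# the initial admin password is printed between the asterisk banners in the startup log
kubectl logs --namespace=yourNamespace jenkins-867df9fcb8-ctfq5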

Related

Gitlab Kubernetes Agent - error: error loading config file "/root/.kube/config": open /root/.kube/config: permission denied

I am trying to set up a Gitlab Kubernetes Agent in a small self-hosted k3s cluster.
I am however getting an error:
$ kubectl config get-contexts
error: error loading config file "/root/.kube/config": open /root/.kube/config: permission denied
I have been following the steps in the documentation found here:
https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html
I got the agent installed and registered so far.
I also found a pipeline kubectl example here:
https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html#update-your-gitlab-ciyml-file-to-run-kubectl-commands
Using the one below gives the error:
deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - kubectl config get-contexts
    - kubectl config use-context path/to/agent/repository:agent-name
    - kubectl get pods
I do not know what is missing. The script itself seems a bit confusing as there is nothing telling the container how to access the cluster.
Looking further down, there is also an example for doing both certificate-based and agent-based connections. However, I have no knowledge of either, so I cannot tell if there is something extra in it that I should actually be adding.
Also, if it makes a difference, the runner is also self-hosted and set to run Docker.
The agent is set up without a configuration file. I wanted to keep it as simple as possible and take it from there.
Anyone know what should be changed/added to fix the issue?
EDIT:
I took a step back and disregarded the agent approach. I put the kubeconfig in a GitLab variable and used that in the Kubernetes image. This is good enough for now, and it is a relief to finally have something working for the first time and be able to push stuff to my cluster from the pipeline. After well over 15 hours spent on the agent I have had enough. Only after several hours did I figure out that the agent was not just about security etc., but that it was also intended for syncing a repo and the cluster without pipelines. This was very poorly presented, and as someone who has done neither, it completely escaped me. The steps in the docs I followed seem to be a mixture of both, which does not exactly help.
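For reference, a minimal sketch of the script steps of such a job, assuming the kubeconfig is stored in a file-type CI/CD variable named KUBECONFIG_FILE (the variable name is just an example):
# GitLab exposes a file-type variable as a path to a temporary file; point kubectl at it
export KUBECONFIG="$KUBECONFIG_FILE"
kubectl config get-contexts
kubectl get pods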
I will wait some months and see if some proper guides are released somewhere by then.

Ssh config for Jenkins when using helm and k8s

So I have a k8s cluster and I am trying to deploy Jenkins using the following repo https://github.com/jenkinsci/helm-charts.
The main issue is that I am working behind a proxy, and when git tries to pull (using the SSH protocol) it fails.
I am able to get around this by building my own Docker image from the provided one, installing socat, and using the following .ssh/config in the container:
Host my.git.repo
  # LogLevel DEBUG
  StrictHostKeyChecking no
  ProxyCommand /usr/bin/socat - PROXY:$HOST_PROXY:%h:%p,proxyport=3128
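Once the image is rebuilt with socat and this config, one way to verify the proxy hop from inside the container is a plain SSH test against the Git host (my.git.repo being the placeholder from the config above):
# should reach the Git server through the proxy and print its greeting or auth result
ssh -T git@my.git.repo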
Is there a better way to do this? I was hoping to use the provided image and perhaps find a plugin that allows something similar, but everywhere I look I can't seem to find anything.
Thanks for the help.

Jenkins master to slave error: Host key verification failed

I'm setting up an automation test on Jenkins. I'm trying to run a script remotely from one Linux machine (the master machine, the same machine as my Jenkins server) and call a bunch of other scripts on another Linux machine (the slave machine). However, I'm getting this error on my first ssh command:
Host key verification failed.
I'm pretty sure there is no problem for the passwordless connection from master to slave because I've run other tests previously using the same master/slave machine.
I run the exact same command manually on my master and it successfully returned the expected result. I don't know why it just doesn't work for the automation test.
All I wanted to do in this command is to check if a package is already installed (my OS is CentOS 7 for both machines)
ssh ${USERNAME}@${IP_ADDR} 'rpm -qa | grep ${MY_PACKAGE}'
I'm just checking the existence of the package before I proceed to more commands specific to this package.
ssh_opts='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
ssh $ssh_opts ${USERNAME}@${IP_ADDR} 'rpm -qa | grep ${MY_PACKAGE}'
Try this in your shell script. It disables the host key verification check for the hosts.
Eventually I figured what was wrong with it.
When I was exchanging ssh host keys between the two machines, I didn't use the root user. However, when the test was running through Jenkins, it was using "sudo" to run the target script on the slave test machine, which means it was reading the ssh host key from the root user's "known_hosts" file, not the one I configured for the test user account (a non-root user account)!
I merged the two "known_hosts" files for the test user and the root user, and then the problem was fixed, because now the Jenkins master could access my slave test machine through the root user or my test user account.
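A minimal sketch of that merge, assuming the non-root test account is called testuser (the account name is just an example):
# append the test user's known_hosts entries to root's so sudo'd ssh sees the same host keys
sudo sh -c 'cat /home/testuser/.ssh/known_hosts >> /root/.ssh/known_hosts'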

Getting "unauthorized: authentication required" when pulling ACR images from Azure Kubernetes Service

I followed the guide here (Grant AKS access to ACR), but am still getting "unauthorized: authentication required" when a Pod is attempting to pull an image from ACR.
The bash script executed without any errors. I have tried deleting my Deployment and recreating it from scratch with kubectl apply -f ..., with no luck.
I would like to avoid using the 2nd approach of using a secret.
The link you posted in the question contains the correct steps for authenticating with Azure Container Registry from Azure Kubernetes Service. I tried it before and it works well.
So I suggest you check whether the service-principal-ID and service-principal-password are correct in the command kubectl create secret docker-registry acr-auth --docker-server <acr-login-server> --docker-username <service-principal-ID> --docker-password <service-principal-password> --docker-email <email-address>. You should also check that the secret you reference in the YAML file is the same as the secret you created.
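One way to sanity-check those service principal credentials outside the cluster is to log in to the registry with them directly (same placeholders as in the command above):
# if this login fails, a secret built from these values will fail too
docker login <acr-login-server> --username <service-principal-ID> --password <service-principal-password>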
Jeff & Charles - I also experienced this issue, but found that the actual cause was that AKS was trying to pull an image tag from the container registry that didn't exist (e.g. latest). When I updated this to a tag that was available (e.g. 9), the deployment script on Azure Kubernetes Service (AKS) worked successfully.
I've commented on the product feedback for the guide to request the error message context be improved to reflect this root cause.
Hope this helps! :)
In my case, I was having this problem because my clock was out of sync. I run on Windows Subsystem for Linux, so running sudo hwclock -s fixed my issue.
See this GitHub thread for longer discussion.
In my case, the Admin User was not enabled in the Azure Container Registry.
I had to enable it:
Go to "Container registries" page > Open your Registry > In the side pannel under Settings open Access keys and switch Admin user on. This generates a Username, a Password, and a Password2.

Installing Jenkins-X on GKE

This may sound like a stupid question, but I am installing Jenkins-X on a Kubernetes cluster on GKE. When I install through Cloud Shell, the /usr/local/bin folder I am moving the jx binary to is refreshed every time the shell is restarted.
My question is two-fold:
Am I correct in installing Jenkins-X through Cloud Shell (and not on a particular node)?
How can I get it so the /jx folder is available when the Cloud Shell is restarted (or at least have the /jx folder on the path at all times)?
I am running jx from the Cloud Shell.
In the Cloud Shell you are already logged in and you have a project configured. To prevent jx from logging in to Google Cloud and re-selecting the project, use the following arguments:
jx create cluster gke --skip-login=true --project-id projectId
Download jx to ~/bin and update $PATH to include both ~/bin and ~/.jx/bin. Put the following in ~/.profile:
if [ -d "$HOME/bin" ] ; then
PATH="$HOME/bin:$PATH"
fi
PATH="$HOME/.jx/bin:$PATH"
The ~/.jx/bin directory is where jx downloads helm if needed.
Google Cloud Shell VMs are ephemeral and they are discarded shortly after the end of a session. However, your home directory persists, so anything installed in the home directory will remain from session to session.
I am not familiar with Jenkins-X. If it requires a daemon process running in the background, Cloud Shell is not a good option and you should probably set up a GCE instance. If you just need to run some command-line utilities to control a GKE cluster, make sure that whatever you install goes into your home directory where it will persist across Cloud Shell sessions.
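Concretely, a small sketch of keeping the jx binary around between sessions, assuming it was downloaded into the current directory:
# anything under $HOME survives Cloud Shell restarts; /usr/local/bin does not
mkdir -p ~/bin
mv ./jx ~/bin/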
