SSH config for Jenkins when using Helm and k8s - jenkins

So I have a k8s cluster and I am trying to deploy Jenkins using the following repo: https://github.com/jenkinsci/helm-charts.
The main issue is that I am working behind a proxy, and when git tries to pull (using the SSH protocol) it fails.
I am able to get around this by building my own Docker image from the provided one, installing socat, and using the following .ssh/config in the container:
Host my.git.repo
    # LogLevel DEBUG
    StrictHostKeyChecking no
    ProxyCommand /usr/bin/socat - PROXY:$HOST_PROXY:%h:%p,proxyport=3128
Is there a better way to do this? I was hoping to use the provided image and perhaps find a plugin that allows something similar, but everywhere I look I can't seem to find anything.
Thanks for the help.
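For reference, the custom-image workaround described above can be sketched roughly like this. The base image tag and the local `ssh_config` file name are assumptions; match them to the Jenkins image your Helm values actually reference:

```dockerfile
# Sketch: extend the stock Jenkins image with socat and a proxy-aware SSH config.
# The base tag below is an assumption; use the image referenced by your Helm chart.
FROM jenkins/jenkins:lts

USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends socat && \
    rm -rf /var/lib/apt/lists/*

# Assumed local file containing the Host/ProxyCommand stanza shown above.
COPY ssh_config /var/jenkins_home/.ssh/config
RUN chown -R jenkins:jenkins /var/jenkins_home/.ssh && \
    chmod 600 /var/jenkins_home/.ssh/config

USER jenkins
```

You would then point the chart's image values at the resulting image instead of the stock one.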

Related

Docker volume plugin: Where are the logs?

I am trying to learn how to use the RClone Docker plugin to declutter my mounting strategy. Since most of my storage is remote rather than on-device, I had previously just used bind mounts to the actual Linux mounts, which were provided via RClone through fstab.
So, to make that a little cleaner and store configurations better, I am largely using Docker Compose, and I am now starting to add the RClone plugin to the configurations.
The problem is: how do I get the logs? So far, I couldn't interact with the RClone plugin at all aside from enabling or disabling it and setting a few default arguments. That said, passing --log-file ... caused an error. However, docker logs is already a command, and I am pretty sure I should be able to query plugin logs too.
But how? I installed the plugin by aliasing it as rclone.
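In case it helps: managed Docker plugins run inside the daemon, so their stdout/stderr end up in the daemon's own log rather than in `docker logs`. On a systemd host, finding the plugin's output might look like the sketch below (the alias `rclone` is taken from the question; the unit name `docker.service` is an assumption about your distro):

```shell
# Managed plugins log through dockerd, so query the daemon's journal.
journalctl -u docker.service --no-pager | grep -i rclone

# The plugin ID (from inspect) can help narrow the search further.
docker plugin inspect rclone --format '{{.Id}}'
```

These commands require a running Docker host with systemd, so treat them as a starting point rather than a guaranteed recipe.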

Gitlab Kubernetes Agent - error: error loading config file "/root/.kube/config": open /root/.kube/config: permission denied

I am trying to set up a Gitlab Kubernetes Agent in a small self-hosted k3s cluster.
I am however getting an error:
$ kubectl config get-contexts
error: error loading config file "/root/.kube/config": open /root/.kube/config: permission denied
I have been following the steps in documentation found here:
https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html
I got the agent installed and registered so far.
I also found a pipeline kubectl example here:
https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html#update-your-gitlab-ciyml-file-to-run-kubectl-commands
Using the one below gives the error:
deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - kubectl config get-contexts
    - kubectl config use-context path/to/agent/repository:agent-name
    - kubectl get pods
I do not know what is missing. The script itself seems a bit confusing, as there is nothing telling the container how to access the cluster.
Looking further down, there is also an example for doing both certificate-based and agent-based connections. However, I have no knowledge of either, so I cannot tell if there is something extra in it that I should actually be adding.
Also, if it makes a difference, the runner is also self-hosted and set to use the Docker executor.
The agent is set up without a configuration file. I wanted to keep it as simple as possible and take it from there.
Does anyone know what should be changed or added to fix the issue?
EDIT:
I took a step back and disregarded the agent approach. I put the kubeconfig in a GitLab CI/CD variable and used that in the kubectl image. This is good enough for now, and it is a relief to finally have something working for the first time and to be able to push stuff to my cluster from a pipeline. After well over 15 hours spent on the agent, I have had enough. Only after several hours did I figure out that the agent was not just about security etc., but that it was also intended for syncing the repo and cluster without pipelines. This was very poorly presented and, as someone who has done neither, it completely escaped me. The steps in the docs I followed seem to be a mixture of both, which does not exactly help.
I will wait some months and see if some proper guides are released somewhere by then.
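The kubeconfig-in-a-variable workaround from the edit can be sketched roughly like this. The variable name KUBECONFIG_FILE and its type ("File", set under Settings > CI/CD > Variables) are assumptions:

```yaml
# Sketch: use a CI/CD variable of type "File" holding the kubeconfig,
# instead of the agent-based connection.
deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - export KUBECONFIG="$KUBECONFIG_FILE"
    - kubectl get pods
```

A file-type variable is written to disk by the runner and its path is exposed in the variable, which is why it can be assigned to KUBECONFIG directly.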

How to configure docker/docker-compose to use Nexus by default instead of docker.io?

I'm trying to use TestContainers to run JUnit tests.
However, I'm getting an InternalServerErrorException: Status 500: {"message":"Get https://registry-1.docker.io/v2/: Forbidden"} error.
Please note, that I am on a secure network.
I can replicate this by doing docker pull testcontainers/ryuk on the command line.
$ docker pull testcontainers/ryuk
Using default tag: latest
Error response from daemon: Get https://registry-1.docker.io/v2/: Forbidden
However, I need it to pull from our Nexus service: https://nexus.company.com:18443.
Inside the docker-compose file, I'm already using the correct Nexus image path (verified by manually starting it with docker-compose). However, TestContainers also pulls in additional images which are outside the docker-compose file. It is these images that are causing the failure.
I'd be glad for either a Docker Desktop or TestContainers configuration change that would fix this for me.
Note: I've already tried adding the host URL for nexus to the Docker Engine JSON configuration on the dashboard, with no change to the resulting error when doing docker pull.
Since version 1.15.1, Testcontainers allows automatically prepending a prefix to all Docker images. If your private registry is configured as a Docker Hub mirror, this functionality should help with the mentioned issue.
Quote from the documentation:
You can then configure Testcontainers to apply the prefix registry.mycompany.com/mirror/ to every image that it tries to pull from Docker Hub. This can be done in one of two ways:
- Setting the environment variable TESTCONTAINERS_HUB_IMAGE_NAME_PREFIX=registry.mycompany.com/mirror/
- Via config file, setting hub.image.name.prefix in either:
  - the ~/.testcontainers.properties file in your user home directory, or
  - a file named testcontainers.properties on the classpath
Basically set the same prefix you did for the images in your docker-compose file.
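As a concrete illustration, a minimal ~/.testcontainers.properties might look like this (the prefix below assumes your Nexus mirror is reachable at nexus.company.com:18443; substitute whatever prefix your compose file already uses):

```properties
# Prepended to every image Testcontainers pulls from Docker Hub.
hub.image.name.prefix=nexus.company.com:18443/
```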
If you're stuck with older versions for some reason, a deprecated solution would be to override just the ryuk.container.image property. Read about it here.
The process is described on this page:
Add the following to your Docker daemon config:
{
  "registry-mirrors": ["https://nexus.company.com:18443"]
}
Make sure to restart the daemon to apply the changes.

Docker Registry on-prem setup on Artifactory

I want to know the process of setting up a Docker registry on the Artifactory on-prem solution; I have failed to install it even after repeated tries.
I have used the official doc for the on-prem setup but had no luck installing it.
So, if there were a single document that explained the entire setup, that would be great.
I tried the steps on an Nginx web server with the subdomain method (with the help of the reverse proxy config generator).
P.S. I want to try this locally in a LAN-type environment, if not on AWS.
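For what it's worth, the subdomain method boils down to an Nginx server block that forwards a registry subdomain to Artifactory's Docker repository API. A minimal sketch follows; the hostname docker.example.com, the upstream localhost:8081, the certificate paths, and the repo key docker-local are all assumptions for illustration:

```nginx
# Sketch: reverse-proxy a registry subdomain to an Artifactory Docker repo.
server {
    listen 443 ssl;
    server_name docker.example.com;

    ssl_certificate     /etc/nginx/ssl/docker.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/docker.example.com.key;

    client_max_body_size 0;        # allow large image layer uploads
    chunked_transfer_encoding on;

    location / {
        proxy_pass http://localhost:8081/artifactory/api/docker/docker-local/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The reverse proxy config generator mentioned above produces a block along these lines, which is worth diffing against your own attempt.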

Can't pull image from private docker registry

Trying to get a private repo running on my EC2 instance so my other Docker hosts created by docker-machine can pull from it. I've disabled SSL and put up a firewall to compensate, which allows my test server (the one I'm trying to pull on) to connect to my main EC2 instance (the private repo). So far I can push to the private repo hosted on my main EC2 instance (I was getting an EOF error before disabling SSL), but I get the following error when I run this on my test server:
docker pull ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com:5000/scoredeploy
this is the error it spits out:
Error response from daemon: Get https://ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com:5000/v1/_ping: EOF
Googling this error yields results of people having similar issues, but without any fixes.
Does anybody have any idea of what's going on here?
You might need to set the --insecure-registry <registry-ip>:5000 flag in the Docker daemon's startup command on the machine doing the pulling. In your case: --insecure-registry ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com:5000
If you want to use your already-running docker machine, this should help you out setting the flag: https://docs.docker.com/registry/insecure/#/deploying-a-plain-http-registry
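On hosts that use a daemon config file, the same flag can be expressed declaratively. The path /etc/docker/daemon.json is the usual default but an assumption about your setup; restart the daemon after editing:

```json
{
  "insecure-registries": ["ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com:5000"]
}
```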
If you're using boot2docker, the file location and format is slightly different. Give this a shot if this is the case: http://www.developmentalmadness.com/2016/03/09/docker-configure-insecure-registry-in-boot2docker/
I've had issues with my docker machines not saving this setting on reboots. If you run into that issue, I'd recommend you make a new machine including the flag --engine-insecure-registry <registry-ip>:5000 in the docker-machine create command.
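A create command with that flag baked in might look like the sketch below; the driver and machine name are placeholders, and the command needs valid driver credentials to actually run:

```shell
# Sketch: bake the insecure-registry setting into a new docker-machine host
# so it survives reboots. Driver and machine name are illustrative only.
docker-machine create \
  --driver amazonec2 \
  --engine-insecure-registry ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com:5000 \
  my-test-machine
```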
Best of luck!
