I installed Jenkins following this guide: https://linuxize.com/post/how-to-install-jenkins-on-debian-9/
Basically sudo apt install jenkins while logged in as root.
I then created a hudson user and used ssh-keygen to generate a pair of keys.
I then tried to use the public key in the gerrit-trigger plugin (https://plugins.jenkins.io/gerrit-trigger/)
However it tells me /home/hudson/.ssh/id_rsa does not exist.
I'm guessing it's a permission issue. When I use apt install jenkins, is there a way to specify the hudson user?
Thanks.
It seems you created the SSH keys as the root user. You need to create the SSH keys while logged in as the hudson user, or you can change the path to /root/.ssh/id_rsa.
If you want to use the hudson user's path, log in as the hudson user first:
sudo su - hudson
Then create ssh key pair:
ssh-keygen
Then you can confirm the files with the list command:
ls -a /home/hudson/.ssh/
If you see an id_rsa file there, you can use the path /home/hudson/.ssh/id_rsa in the plugin configuration.
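If you prefer to do it in one shot from a root shell, a minimal sketch would be the following (it assumes the hudson user already exists; -N "" creates an unencrypted key):
# run as root; creates the key pair directly under the hudson account
sudo -u hudson mkdir -p -m 700 /home/hudson/.ssh
sudo -u hudson ssh-keygen -t rsa -N "" -f /home/hudson/.ssh/id_rsa
ls -l /home/hudson/.ssh/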
It seems this is a permission-related issue. Please change the ownership of all files in the .ssh folder in /var/lib/jenkins to the jenkins user:
chown jenkins:jenkins /var/lib/jenkins/.ssh && chown jenkins:jenkins /var/lib/jenkins/.ssh/*
chmod 700 /var/lib/jenkins/.ssh && chmod 600 /var/lib/jenkins/.ssh/*
Also apply a similar configuration to the hudson user's SSH key:
su - hudson
chmod 700 ~/.ssh && chmod 600 ~/.ssh/*
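You can then verify both locations, for example:
ls -ld /var/lib/jenkins/.ssh /home/hudson/.ssh    # both should be drwx------ and owned by the right user
ls -l /var/lib/jenkins/.ssh/ /home/hudson/.ssh/   # key files should be -rw-------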
The Jenkins installation created a jenkins user on Debian.
I had to do su - jenkins and then create an SSH key pair for it with ssh-keygen.
The Jenkins UI is then able to read the key located in /var/lib/jenkins/.ssh/id_rsa.
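For reference, the full sequence looks roughly like this (paths assume the default Debian package layout, where the jenkins user's home is /var/lib/jenkins):
# run as root; switch to the account the Debian package created for Jenkins
su - jenkins
ssh-keygen -t rsa    # accept the default path, which resolves to /var/lib/jenkins/.ssh/id_rsa
exit
ls -l /var/lib/jenkins/.ssh/id_rsa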
I'm trying to set up Groovy in Jenkins, so that it is automatically installed on an agent when jobs run on it.
This is my global configuration:
This is my groovy build-step:
When I run the job, I get this error:
The user testrpm does have sudo rights. Where is the problem?
I wouldn't install Groovy on the agent nodes. You should just use the Groovy wrapper, which will download Groovy and run it without needing to install anything into directories Jenkins doesn't have permission to write to.
Short of that, I would NOT grant sudo rights to testrpm either. That's going to be bad mojo. Instead, you can add testrpm to a group that has write access to /opt or /opt/groovy-4.0.0. You are unzipping something into a nested directory, so you'll have to grant write access to /opt, which could be dangerous if you have other things in that directory. You may want to nest it in a subdirectory to isolate it from other things. If you do these steps on the machine as a user with sudo rights (not in your build script), then it should work:
sudo mkdir /opt/jenkins
sudo chgrp jenkins /opt/jenkins
sudo usermod -a -G jenkins testrpm
sudo chmod 770 /opt/jenkins
Another option would be to pick a directory testrpm already has write access to, so you don't need to grant any new permissions.
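For example, a hypothetical variant that unpacks Groovy into a directory testrpm already owns instead of touching /opt (the archive name and the $HOME/tools location are just examples):
# run as testrpm; no sudo needed since everything stays under the user's home
mkdir -p "$HOME/tools"
unzip -q apache-groovy-binary-4.0.0.zip -d "$HOME/tools"
export PATH="$HOME/tools/groovy-4.0.0/bin:$PATH"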
I'm trying to find a way to use hosts defined in my user's ~/.ssh/config file to define a docker context.
My ~/.ssh/config file contains:
Host my-server
HostName 10.10.10.10
User remoteuser
IdentityFile /home/me/.ssh/id_rsa-mykey.pub
IdentitiesOnly yes
I'd like to create a docker context as follows:
docker context create \
--docker host=ssh://my-server \
--description="remoteuser on 10.10.10.10" \
my-server
Issuing the docker --context my-server ps command throws an error stating:
... please make sure the URL is valid ... Could not resolve hostname my-server: Name or service not known
From what I could figure out, the docker command uses the sudo mechanism to elevate its privileges. Thus I guess it searches /root/.ssh/config, since ssh doesn't use the $HOME variable.
I tried to symlink the user's config as the root one:
sudo ln -s /home/user/.ssh/config /root/.ssh/config
But this throws another error:
... please make sure the URL is valid ... Bad owner or permissions on /home/user/.ssh/config
The same happens when creating the /root/.ssh/config file simply containing:
Include /home/*/.ssh/config
Does someone have an idea on how to have my user's .ssh/config file parsed by ssh when invoked via sudo?
Thank you.
Have you confirmed your (probably correct) theory that docker is running as root by directly copying your user's ~/.ssh/config contents into /root/.ssh/config? If that doesn't work, you're back to square one...
Otherwise, either the symlink or the Include ought to work just fine (a symlink inherits the permissions of the file it is pointing at).
Another possibility is that your permissions actually are bad -- don't forget you have to change the permissions on both ~/.ssh AND ~/.ssh/config.
chmod 700 /home/user/.ssh
chmod 600 /home/user/.ssh/config
And maybe even:
chmod 700 /root/.ssh
chmod 600 /root/.ssh/config
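If you want to confirm whether the root account actually resolves the alias before blaming permissions, a quick check could be (assuming docker really does go through root):
sudo ssh -G my-server | grep -E '^(hostname|user) '   # if this still prints "hostname my-server", root's ssh is not reading your config
docker --context my-server ps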
I'm trying to build a Docker image using DOCKER_BUILDKIT which involves cloning a private remote repository from GitLab, with the following lines of my Dockerfile being used for the git clone:
# Download public key for gitlab.com
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone git@gitlab.com:*name_of_repo* *download_location*
However, when I run the docker build command using:
DOCKER_BUILDKIT=1 docker build --ssh default --tag test:local .
I get the following error when it is trying to do the git clone:
git@gitlab.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I've set up the SSH access successfully on the machine I'm trying to build this image on, and both ssh -T git@gitlab.com and cloning the repository outside of the Docker build work just fine.
I've had a look around but can't find any info on what might be causing this specific issue - any pointers much appreciated.
Make sure you have an SSH agent running and that you added your private key to it.
Depending on your platform, the commands may vary, but since it's tagged gitlab I will assume that Linux is your platform.
Verify that you have an SSH agent running with echo $SSH_AUTH_SOCK or echo $SSH_AGENT_SOCK. If both echo an empty string, you most likely do not have an agent running.
To start an agent you can usually type:
eval `ssh-agent`
Next, you can verify what key are added (if any) with:
ssh-add -l
If the key you need is not listed, you can add it with:
ssh-add /path/to/your/private-key
Then you should be good to go.
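Putting it together, the whole flow could look roughly like this (the key path is an assumption; use whichever key has access to the GitLab repo):
eval `ssh-agent`                  # start an agent for this shell
ssh-add ~/.ssh/id_rsa             # load the key GitLab knows about
ssh-add -l                        # confirm it is listed
DOCKER_BUILDKIT=1 docker build --ssh default --tag test:local .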
More info here: https://www.ssh.com/academy/ssh/agent
Cheers
For testing, use a non-encrypted private SSH key (meaning you don't have to manage an ssh-agent, which is only needed for caching the passphrase of an encrypted private key).
And use ssh -Tv git@gitlab.com to check where SSH is looking for your key.
Then, in your Dockerfile, add before the line with git clone:
ENV GIT_SSH_COMMAND='ssh -Tv'
You will see again where Docker/SSH is looking when executing git clone with an SSH URL.
I suggested as much here, and there were some mounting folders missing then.
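For instance, the relevant fragment of the Dockerfile could look like this (the repository placeholders are kept from the question):
# enable verbose SSH output for the build, then clone over SSH
ENV GIT_SSH_COMMAND='ssh -Tv'
RUN --mount=type=ssh git clone git@gitlab.com:*name_of_repo* *download_location*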
So I want to include an RSA key in my image so I can clone a git repo into my image when it's building. But I really don't want to have to keep this key in the docker build repo. Does anyone have a good recommendation on how to handle this? From the docker documentation and various other threads it seems that there is no way to COPY files from outside of the build context, apart from the following solutions that I am not interested in using:
How to include files outside of Docker's build context?
Is there a better solution to this? Or am I going to have to either keep the key in the build repo or build from the location of the RSA key I want to use?
I guess a possible way to do it would be to gitignore the key from the build repo and just put it in whenever I clone it, and make a note of it in the readme so other developers know to do this too.
--- My Solution ---
I don't think there is a "correct" answer for this but here was the solution I went with.
I create a Linux user (somewhere) and generate a key for it. Then I create a user on GitLab with only repo cloning rights. I add the public key from the Linux user to the GitLab user. Then, for the build, I create the .ssh folder and copy in the user's private key along with a config file. I just store that user's key in the docker build repo.
build steps:
RUN mkdir ~/.ssh
RUN touch ~/.ssh/known_hosts
RUN ssh-keyscan -t rsa gitlab_host > ~/.ssh/known_hosts
COPY ./ssh/config /root/.ssh
COPY ./ssh/id_rsa_app /root/.ssh
RUN chmod 600 /root/.ssh/id_rsa_app
ssh config file:
Host gitlab-app
HostName gitlab_host
IdentityFile /root/.ssh/id_rsa_app
IdentitiesOnly yes
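The clone inside the Dockerfile then goes through the alias defined in that config, along these lines (the group/repo path and target directory are placeholders):
RUN git clone git@gitlab-app:my_group/my_repo.git /app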
Now the git clone works inside of the build.
What about using a build argument? Do something like this in your Dockerfile:
ARG rsakey
RUN test -n "${rsakey}" && { \
mkdir -p -m 700 /root/.ssh; \
echo "${rsakey}" > /root/.ssh/id_rsa; \
chmod 600 /root/.ssh/id_rsa; \
} || :
Then, when you build the image, use the --build-arg option:
docker build -t sshtest --build-arg rsakey="$(cat /path/to/id_rsa)" .
This will inject the key into the image at build time without requiring it to live in your build context.
In Docker, how do you cope with the requirement of configuring known_hosts, authorized_keys and SSH connectivity in general, when a container has to talk to external systems?
For example, I'm running a Jenkins container and trying to check out a project from GitHub in a job, but the connection fails with the error: Host key verification failed.
This could be solved by logging into the container, connecting to GitHub manually and trusting the host key when prompted. However this isn't a proper solution, as everything needs to be 100% automated (I'm building a CI pipeline with ansible and docker). Another (clunky) solution would be to provision the running container with ansible, but this would make things messy and hard to maintain. The Jenkins container doesn't even have an ssh daemon, and I'm not sure how to ssh into the container from another host. A third option would be to use my own Dockerfile extending the jenkins image, where ssh is configured, but that would be hardcoding and locking the container to this specific environment.
So what is the correct way with docker to manage (and automate) connectivity with external systems?
To trust the github.com host, you can issue this command when you start or build your container:
ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts
This will add GitHub's public key to your known_hosts file.
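If you do it at build time, a minimal Dockerfile sketch could look like this (the base image tag is just an example, and it assumes ssh-keyscan is available in the base image; install openssh-client first otherwise):
FROM jenkins/jenkins:lts
# bake GitHub's host key into the image so clones don't prompt
RUN mkdir -p ~/.ssh && ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts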
If everything is done in the Dockerfile it's easy.
In my Dockerfile:
ARG PRIVATE_SSH_KEY
# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
chmod 0700 /root/.ssh && \
ssh-keyscan example.com > /root/.ssh/known_hosts && \
# Add the keys and set permissions
echo "$PRIVATE_SSH_KEY" > /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa
...do stuff with private key
# Remove SSH keys
RUN rm -rf /root/.ssh/
You obviously need to pass the private key as an argument to the build (docker-compose build or docker build).
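For example, with plain docker build it could look like this (the key path and image name are assumptions):
docker build --build-arg PRIVATE_SSH_KEY="$(cat ~/.ssh/id_rsa)" -t myimage .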
One solution is to mount the host's SSH keys into Docker with the following options:
docker run -v /home/<host user>/.ssh:/home/<docker user>/.ssh <image>
This works perfectly for git.
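A slight variation is to mount the directory read-only, so the container cannot modify the host's keys; this works as long as the hosts you connect to are already in known_hosts, since SSH cannot add new entries to a read-only file:
docker run -v /home/<host user>/.ssh:/home/<docker user>/.ssh:ro <image>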
There is a small trick, but your Git version should be > 2.3:
export GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
git clone git@gitlab.com:some/another/repo.git
or simply
GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" git clone git#...
you can even point to private key file path like this:
GIT_SSH_COMMAND="ssh -i /path/to/private_key_file -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" git clone git#...
This is how I do it; not sure if you will like this solution though. I have a private git repository containing an authorized_keys file with a collection of public keys. Then I use ansible to clone this repository and replace authorized_keys:
- git: repo=my_repo dest=my_local_folder force=yes accept_hostkey=yes
- shell: "cp my_local_folder/authorized_keys ~/.ssh/"
Using accept_hostkey is what actually allows me to automate the process (I trust the source, of course).
Try this:
Log into the host, then:
sudo mkdir /var/jenkins_home/.ssh/
sudo ssh-keyscan -t rsa github.com >> /var/jenkins_home/.ssh/known_hosts
The Jenkins container sets its home location to the persistently mapped volume, so running this on the host system will produce the required result inside the container.
A more detailed version of the answer provided by @Konstantin Suvorov, if you are going to use a Dockerfile.
In my Dockerfile I just added:
# copy the rsa key into the image
COPY my_rsa /root/.ssh/my_rsa
# restrict the key's permissions
RUN chmod 600 /root/.ssh/my_rsa
# install openssh
RUN apt-get -y install openssh-server
# add the hostname to known_hosts
RUN ssh-keyscan my_hostname >> ~/.ssh/known_hosts
Note that "my_hostname" and "my_rsa" are your host-name and your rsa key
This made SSH work in Docker without any issues, so I could connect to DBs.