I would like to deploy a set of docker containers on a remote docker host using docker-compose -H ssh://user@host up
This works fine, as I added my default public key (~/.ssh/id_rsa.pub) to the remote host's authorized_keys.
But how can I specify an alternative private key? Is there an option similar to ssh's -i flag, e.g. ssh -i /path/to/key user@host?
Background: I would like to trigger a docker-compose deployment on a remote host from Jenkins. I created a Jenkins credential of the kind "SSH Username with private key". Using the Credentials plugin I can also get hold of the key with something like
withCredentials([sshUserPrivateKey(credentialsId: 'some.id', keyFileVariable: 'PKEY')]) {
// $PKEY points to a temporarily available key file
}
But I don't know how I could pass that to docker-compose -H ...
Or is there a way to avoid using a key altogether and be prompted for the password instead, with a mechanism similar to docker login --password-stdin?
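One workaround that is often suggested (a sketch, not an official docker-compose option; it assumes the SSH transport picks up keys from a running ssh-agent) is to load the Jenkins-provided key into an agent for the duration of the step:

withCredentials([sshUserPrivateKey(credentialsId: 'some.id', keyFileVariable: 'PKEY')]) {
    sh '''
        eval "$(ssh-agent -s)"    # start a throwaway agent for this step
        ssh-add "$PKEY"           # load the temporary key file provided by Jenkins
        docker-compose -H ssh://user@host up -d
        ssh-agent -k              # stop the agent again
    '''
}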
Related
I have generated a public SSH key on my Ubuntu 20.04 server with the jenkins user, and the key is stored below:
/var/lib/jenkins/.ssh/id_rsa.pub
I have set that public key in my GitLab SSH settings, and I have also created a credential in Jenkins for the SSH private key, where I pasted the private key I generated for the jenkins user on the remote Ubuntu 20.04 server.
When I try to clone the project using SSH, I get the error:
Failed to connect to repository : Error performing git command: /usr/lib/git-core ls-remote -h git@gitlab.com:project/repository.git HEAD*
Need a helping hand to solve this problem.
First, check that your key is indeed picked up when doing SSH with the jenkins account:
ssh -Tv git@gitlab.com
You will see where SSH is looking for your keys, and whether /var/lib/jenkins/.ssh/id_rsa is used.
You should see a welcome message.
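If the verbose output shows that /var/lib/jenkins/.ssh/id_rsa is never tried, you can point SSH at it explicitly (a sketch; the path assumes the default Jenkins home) by adding this to /var/lib/jenkins/.ssh/config:

Host gitlab.com
    User git
    IdentityFile /var/lib/jenkins/.ssh/id_rsa
    IdentitiesOnly yes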
Second, check the Jenkins logs for any additional clues.
You might also need an SSH key in the old PEM format:
ssh-keygen -m PEM -t rsa -P "" -f afile
I thought the docker logout command would log me out of the remote private Docker registry I had just logged in to, but it doesn't.
Before trying to log out:
$ cat ~/.docker/config.json
{
  "auths": {
    "rg.nl-ams.scw.cloud": {
      "auth": "da2kleGhoPNjVj...pLri69="
...
After the command
$ docker logout
$ cat ~/.docker/config.json
{
  "auths": {
    "rg.nl-ams.scw.cloud": {
      "auth": "da2kleGhoPNjVj...pLri69="
...
This is problematic because when I then launch docker run commands, Docker tries to pull the image from this remote registry, and I don't want that to happen. I don't want Docker to be aware of this registry anymore. What should I do?
I see now that docker logout by default logs you out from some "default server", which is apparently https://index.docker.io/v1/. How do I log out from all servers? Do I really have to write a script for this?
I don't want to rely on a particular server name, I just want to make sure the docker client is not logged in anywhere so that I can run tests in a clean and repeatable way.
Use docker logout and specify the remote private registry:
docker logout URL
For example, docker logout localhost:8080 logs you out of a registry running on your local host.
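If you want to clear every login recorded in config.json rather than one particular registry, a short loop works (a sketch, assuming jq is installed):

for registry in $(jq -r '.auths | keys[]' ~/.docker/config.json); do
    docker logout "$registry"
done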
I created a Dockerfile which builds a Docker image containing my Node application. My application depends on another application of mine, which is added as a dependency via a git+ssh URL.
When docker build runs npm install, it fails with error code 128. I understand this is because I do not have a valid SSH key to access the repo. How can I create one and make my docker build pass?
You can use ssh-keygen -t rsa to generate a key on your local machine (do not provide a passphrase, for simplicity) that can be used for authentication. How you register that key for Git access depends on where your repository lives, i.e. on a hosted site like Bitbucket/GitHub or on another Linux machine of yours.
For a repository on your own server, run the commands below on your local machine to add your public key (id_rsa.pub) to the Git server:
eval "$(ssh-agent -s)"
ssh-add
ssh-copy-id user@git-server
For hosted sites you get the option to add the public key under your profile settings.
Note: do not forget to add the following to the ~/.ssh/config file on your local machine to avoid unknown-host errors:
Host bitbucket.org
    StrictHostKeyChecking no
Host <git-server-ip>
    StrictHostKeyChecking no
For more information on generating key please refer to https://confluence.atlassian.com/bitbucketserver/creating-ssh-keys-776639788.html
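Note that the key also has to be reachable while docker build runs npm install. One way to do that without baking the key into an image layer is BuildKit's SSH mount (a sketch; it assumes a recent Docker with BuildKit, a base image that ships an SSH client, and a dependency declared as a git+ssh URL):

# syntax=docker/dockerfile:1
FROM node:18
WORKDIR /app
COPY package*.json ./
# Trust the Git host so the build does not stop at a host-key prompt
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts
# Forward the host's ssh-agent only for this step; the key never lands in a layer
RUN --mount=type=ssh npm install
COPY . .

Build it with the agent forwarded: DOCKER_BUILDKIT=1 docker build --ssh default .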
I have a GitLab project gitlab.com/my-group/my-project which has a CI pipeline that builds an image and pushes it to the project's GitLab registry registry.gitlab.com/my-group/my-project:tag. I want to deploy this image to Google Compute Engine, where I have a VM running docker.
Easy enough to do it manually by ssh'ing into the VM, then docker login registry.gitlab.com and docker run ... registry.gitlab.com/my-group/my-project:tag. Except the docker login command is interactive, which is a no-go for CI. It can accept a username and password on the command line, but that hardly feels like the right thing to do, even if my login info is in a secret variable (storing my GitLab login credentials in a GitLab secret variable?...)
This is the intended workflow on the Deploy stage of the pipeline:
Either install the gcloud tool or use an image with it preinstalled
gcloud compute ssh my-gce-vm-name --quiet --command \
"docker login registry.gitlab.com && docker run registry.gitlab.com/my-group/my-project:tag"
Since the gcloud command would be running within the GitLab CI Runner, it could have access to secret variables, but is that really the best way to log in to the GitLab Registry over ssh from GitLab?
I'll answer my own question in case anyone else stumbles upon it. GitLab creates ephemeral access tokens for each build of the pipeline that give the user gitlab-ci-token access to the GitLab Registry. The solution was to log in as the gitlab-ci-token user in the build.
.gitlab-ci.yml (excerpt):
deploy:
  stage: deploy
  before_script:
    - gcloud compute ssh my-instance-name --command "docker login registry.gitlab.com/my-group/my-project -u gitlab-ci-token -p $CI_BUILD_TOKEN"
The docker login command creates a local configuration file at $HOME/.docker/config.json in which your credentials are stored; it looks like this (also see the documentation on this):
{
  "auths": {
    "<registry-url>": {
      "auth": "<credentials>"
    }
  }
}
As long as the config.json file is present on your host and your credentials (in this case simply being stored as base64("<username>:<password>")) do not change, there is no need to run docker login on every build or to store your credentials as variables for your CI job.
My suggestion would be to simply ensure that the config.json file is present on your target machine (either by running docker login once manually or by deploying the file using whatever configuration management tool you like). This saves you from handling the login and managing credentials within your build pipeline.
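For reference, the auth value really is just the base64-encoded user:password pair, so you can generate it yourself if you prefer to deploy the file (a sketch with placeholder credentials):

echo -n "my-user:my-password" | base64
# -> bXktdXNlcjpteS1wYXNzd29yZA==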
Regarding the SSH login per se: this should work just fine. If you really want to eliminate the SSH login, you could set up the Docker engine on your target machine to listen on an external socket, configure authentication and encryption using TLS client certificates as described in the official documentation, and then talk directly to the remote server's Docker API from within the build job:
variables:
  DOCKER_HOST: "tcp://<target-server>:2376"
  DOCKER_TLS_VERIFY: "1"
script:
  - docker run registry.gitlab.com/my-group/my-project:tag
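Note that the runner also needs the client certificates; the Docker CLI looks for ca.pem, cert.pem and key.pem in DOCKER_CERT_PATH (~/.docker by default). One way to provide them (a sketch; the variable names are assumptions) is to write them from CI variables before the docker call:

before_script:
  - mkdir -p ~/.docker
  - echo "$DOCKER_CA_PEM" > ~/.docker/ca.pem
  - echo "$DOCKER_CLIENT_CERT_PEM" > ~/.docker/cert.pem
  - echo "$DOCKER_CLIENT_KEY_PEM" > ~/.docker/key.pem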
We had the same "problem" on other hosting providers. Our solution is to use a custom script which runs on the target machine and can be called via a REST API endpoint (secured by Basic Auth or whatever).
That way you can trigger the remote host to do the docker login and upgrade your service without granting SSH access to GitLab CI.
I am trying to write a Dockerfile to access a remote mySQL database using ssh tunneling.
I tried the following RUN command:
ssh -f -N username@hostname -L [local port]:[database host]:[remote port] StrictHostKeyChecking=no
and getting this error:
"Host key verification failed" ERROR
Assuming that the Docker container does not have access to any SSH data (i.e. there is no ~/.ssh/known_hosts), you have two ways to handle this:
Use ssh-keyscan -t rsa server.example.com > ~/.ssh/my_known_hosts from within the container to add the remote host
Or copy the relevant line from an existing known_hosts file, or simply COPY the whole file into the container.
Either of these approaches should do it.
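A minimal Dockerfile sketch of the first option (base image, hostname and ports are placeholders):

FROM ubuntu:22.04
RUN apt-get update && apt-get install -y openssh-client && rm -rf /var/lib/apt/lists/*
# Record the remote host's key so the tunnel can be opened non-interactively
RUN mkdir -p /root/.ssh && ssh-keyscan -t rsa gateway.example.com >> /root/.ssh/known_hosts
# The tunnel itself is best opened at runtime, e.g. from an entrypoint script:
# ssh -f -N -L 3306:database-host:3306 username@gateway.example.com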