I am running gitlab/gitlab-ce on RancherOS. When I replace the container with a new one, I get the error
"ECDSA host key for [host] has changed and you have requested strict checking."
I know I can remove the old key from known_hosts to make it work.
My question is:
Is there a way to preserve the host keys for the server? And where are those keys stored inside the container?
The following did not work:
Copying the old keys into /etc/ssh/* inside the container.
The config, logs and data folders are mounted from volumes.
You can add this to your ssh command:
-o StrictHostKeyChecking=no
to disable strict host key checking.
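As a hedged example (gitlab.example.com is just a placeholder host), you can pass the option directly:
ssh -o StrictHostKeyChecking=no git@gitlab.example.com
or set it per host in ~/.ssh/config:
Host gitlab.example.com
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
Keep in mind that this also disables protection against man-in-the-middle attacks for that host; preserving the container's host keys is the safer fix.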
I need to update certificates that are currently in Docker containers running via Kubernetes pods. The three pods containing these certificates are named 'app', 'celery' and 'celery beat'.
When I run
kubectl exec -it app -- sh
and then ls
I can see that the old certificates are there. I have new certificates on my VM filesystem and need to get them into the running pods so the program starts working again. I tried rebuilding the Docker images used to create the running containers (using the existing docker compose file), but that didn't seem to work. I think the filesystem in the containers was initially mounted using Docker volumes; that was presumably done locally, whereas now the project is on a remote Linux VM. What would be the natural way to get the new certs into the running pods while leaving everything else the same?
I can kubectl cp the new certs in; the issue with that is that when the pods get recreated, they revert to the old certificates.
Any help would be much appreciated.
Check in your deployment file, in the volumes section, whether there is a mention of a ConfigMap, Secret, PV or PVC with a name like "certs" (we normally use names like this). If it exists and it is a Secret or ConfigMap, you just need to update that resource directly (a sketch follows below). If it is a PV or PVC, you will need to update it another way, for example via the CLI, and I suggest switching to a Secret.
Command to check your deployment resource: kubectl get deploy <DEPLOY NAME> -o yaml (if you don't use a Deployment, change it to the right resource kind).
You can also open a shell in your pod and run df -hT; this will probably show the drives and mount points.
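If the certs do turn out to live in a Secret, here is a minimal sketch of updating it in place (the secret name certs and the file paths are assumptions; use whatever names your deployment actually references):
kubectl create secret generic certs --from-file=tls.crt=/path/to/new/tls.crt --from-file=tls.key=/path/to/new/tls.key --dry-run=client -o yaml | kubectl apply -f -
kubectl rollout restart deploy <DEPLOY NAME>
The rollout restart makes the pods pick up the new files immediately instead of waiting for the kubelet to refresh the mounted Secret.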
In the worst case, where the certs were added during the container build, you can work around it as follows (this is not best practice; the best practice is to build a new image):
Edit the container image, remove the certs, and push with a new tag (don't overwrite the old one).
Create a Secret with the new certs.
Mount this Secret at the same path and with the same file names (see the sketch after this list).
Change the image version in the deployment.
You can use kubectl edit deploy <DEPLOY NAME> to edit your resource.
To edit your container image, use docker commit: https://docs.docker.com/engine/reference/commandline/commit/
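As a rough sketch of steps 2-4 (the secret name new-certs, the mount path /app/certs and the image tag are assumptions; use the path and file names the application already expects), the relevant part of the Deployment's pod spec, under spec.template.spec, would look something like:
containers:
- name: app
  image: myregistry/app:new-tag
  volumeMounts:
  - name: certs
    mountPath: /app/certs
    readOnly: true
volumes:
- name: certs
  secret:
    secretName: new-certs
with the Secret created beforehand, e.g. kubectl create secret generic new-certs --from-file=/path/to/new/certs/.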
If I understand it correctly, in order to use secrets properly I need to use docker swarm.
Once I did a 'docker swarm init', Portainer noticed the difference and put everything back into the swarm: running containers, existing stacks, etc.
However, after adding a secret in the secrets section now available in Portainer, a stack I am trying to set up cannot find the corresponding secret.
Here is the compose: https://pastebin.com/H1wnBLjy
Here is the secrets page:
And if I try running ls /run/secrets/ in the container I get this error:
Error response from daemon: Container xxx is restarting, wait until the container is running
The logs keep repeating this:
Loading configuration from /wiki/config.yml... OK
DB_PASS_FILE is defined. Will use secret from file.
Failed to read Docker Secret File using path defined in DB_PASS_FILE env variable!
ENOENT: no such file or directory, open '/run/secrets/db_passwd'
I tried removing the containers and setting them up again, and restarting them; nothing works so far.
For info, it runs on Docker Swarm 20.10.7 with Portainer 2.6.3, on a Debian Buster host.
What have I done wrong?
Thanks for your help.
Well, my bad. The thing is, if an env var is not explicitly defined by the image, it cannot be used.
I thought appending _FILE to any env var was enough to make the application understand that the value would be passed through a file, but I learned that this is not the case.
So secrets are available in general, but if the corresponding variable is not defined in the image, it is a no-go.
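For reference, a minimal sketch of what a working setup can look like in the compose file deployed to the swarm (the image and secret names are assumptions based on the logs above; the key point is that the secret referenced by DB_PASS_FILE must be declared at the top level, attached to the service, and explicitly supported by the image):
version: "3.1"
services:
  wiki:
    image: requarks/wiki:2
    environment:
      DB_PASS_FILE: /run/secrets/db_passwd
    secrets:
      - db_passwd
secrets:
  db_passwd:
    external: true
external: true tells the stack to use the secret that was already created in Portainer instead of defining a new one.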
I am using Azure Cloud Shell to ssh into my VMs.
I have created SSH keys, created my VMs and was able to ssh into my VMs.
My Bash Cloud Shell session was suddenly disconnected (not the main issue), and after opening a new Cloud Shell session I was no longer able to ssh into my VM. I checked my .ssh dir and none of my keys were there anymore (it was empty).
I know the clouddrive directory is persisted, but I want to confirm whether .ssh is too.
If not, what is the way to achieve this so that I do not run into this issue again?
No, the .ssh directory is not a persistent directory. As you know, only the clouddrive directory persists your files. So a possible solution is to store your SSH keys in clouddrive; when you open a new session, copy .ssh back from clouddrive. Or add the -i parameter, so the command looks like this:
ssh -i /path/to/private_key username@IP
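As a concrete sketch (clouddrive is mounted at ~/clouddrive in Cloud Shell; the backup folder and key file names are placeholders):
cp -r ~/.ssh ~/clouddrive/ssh-backup
and in a fresh session:
mkdir -p ~/.ssh && cp ~/clouddrive/ssh-backup/* ~/.ssh/
chmod 700 ~/.ssh && chmod 600 ~/.ssh/id_rsa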
One question: how do you handle secrets inside a Dockerfile without using Docker Swarm? Let's say you have a private repo on npm and restore packages from it using an .npmrc inside the Dockerfile by providing credentials. After the package restore, I obviously delete the .npmrc file from the container. The same goes for NuGet.config when restoring private repos inside the container. Currently, I am supplying these credentials as --build-arg while building the Dockerfile.
But a command like docker history --no-trunc will show the password in the log. Is there any decent way to handle this? Currently, I am not on Kubernetes, so I need to handle this in Docker itself.
One way I can think of is mounting /run/secrets/ and storing the credentials there, either in a text file containing the password or via a .env file. But then this .env file has to be part of the pipeline to complete the CI/CD process, which means it has to be in source control. Is there any way to avoid this, or can something be done in the pipeline itself, or can some encryption/decryption logic be applied here?
Thanks.
First, keep in mind that files deleted in one layer still exist in previous layers, so deleting files doesn't help either.
There are three ways that are secure:
Download all code in advance outside of the Docker build, where you have access to the secret, and then just COPY in the stuff you downloaded.
Use BuildKit, which is an experimental Docker feature that enables secrets in a secure way (https://docs.docker.com/develop/develop-images/build_enhancements/#new-docker-build-secret-information); a short sketch follows after this list.
Serve secrets from a network server running locally (e.g. in another container). See here for detailed explanation of how to do so: https://pythonspeed.com/articles/docker-build-secrets/
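As a hedged sketch of the BuildKit option for the npm case from the question (the secret id npmrc, the base image and the install command are assumptions):
# syntax=docker/dockerfile:1
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
COPY . .
built with:
DOCKER_BUILDKIT=1 docker build --secret id=npmrc,src=$HOME/.npmrc -t myapp .
The .npmrc is mounted only for the duration of that single RUN step and is never written into an image layer, so docker history has nothing to show.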
Let me try to explain Docker secrets here.
Docker secrets work with Docker Swarm. For that you need to run
$ docker swarm init --advertise-addr=$(hostname -i)
This makes the node a swarm manager. Now you can create your secret like this:
Create a file /db_pass and put your password in this file.
$ docker secret create db_pass /db_pass
This creates your secret. Now, if you want to list the secrets created, run:
$ docker secret ls
Let's use the secret when running a service:
$ docker service create --name mysql-service --secret source=db_pass,target=mysql_root_password --secret source=db_pass,target=mysql_password -e MYSQL_ROOT_PASSWORD_FILE="/run/secrets/mysql_root_password" -e MYSQL_PASSWORD_FILE="/run/secrets/mysql_password" -e MYSQL_USER="wordpress" -e MYSQL_DATABASE="wordpress" mysql:latest
In the above command, /run/secrets/mysql_root_password and /run/secrets/mysql_password are file paths inside the container that hold the data of the source secret (db_pass):
source=db_pass,target=mysql_root_password (creates the file /run/secrets/mysql_root_password inside the container with the db_pass value)
source=db_pass,target=mysql_password (creates the file /run/secrets/mysql_password inside the container with the db_pass value)
See the screenshot from the container showing the secret file data:
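You can check the same thing from the command line (the container ID will differ on your machine):
docker ps --filter name=mysql-service -q
docker exec -it <CONTAINER ID> cat /run/secrets/mysql_root_password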
I am trying to build a Docker image and I have a Dockerfile with all the necessary commands, but in my build steps I need to copy one directory from a remote host into the image. If I put an scp command into the Dockerfile, I would also have to provide the password in the Dockerfile, which I don't want to do.
Does anyone have a better solution for this? Any suggestion would be appreciated.
I'd say there are at least two options for dealing with that:
Option 1:
If you can execute scp before running docker build this may turn out to be the easiest option:
Run scp -r somewhere:remote_dir ./local_dir
Add COPY ./local_dir some_path to your Dockerfile
Run docker build
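A minimal sketch of option 1 (host, user, paths and image tag are placeholders):
scp -r deploy@build-host.example.com:/opt/app/config ./local_dir
docker build -t myapp:latest .
with the corresponding line in the Dockerfile:
COPY ./local_dir /app/config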
Option 2: If you have to execute scp during the build:
Start some key-value store such as etcd before the build
Place a correct SSH key (it cannot be password-protected) temporarily in the key-value store
Within a single RUN command (to avoid leaving secrets inside the image):
retrieve the SSH key from the key-value store;
put it in ~/.ssh/id_rsa or start an ssh-agent and add it;
retrieve the directory with scp
remove the SSH key
Remove the key from the key-value store
The second option is a bit convoluted, so it may be worth creating a wrapper script that retrieves the required secrets, runs any command, and removes the secrets.
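A rough sketch of the single RUN step from option 2, assuming the key was stored under the etcd v2 key ssh_key, etcd is reachable at etcd-host:2379 during the build, and curl, jq and an ssh client are installed in the base image (all of these names are assumptions):
RUN curl -s http://etcd-host:2379/v2/keys/ssh_key | jq -r '.node.value' > /tmp/id_rsa \
 && chmod 600 /tmp/id_rsa \
 && scp -i /tmp/id_rsa -o StrictHostKeyChecking=no -r deploy@remote-host:/opt/app/config /app/config \
 && rm /tmp/id_rsa
Because the key is written and removed within the same RUN instruction, it never ends up in an intermediate image layer.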
You can copy a directory into a container (even a running one) after the build:
On remote machine: Copy from remote host to docker host with
scp -r /your/dir/ <user-at-docker-host>@<docker-host>:/some/remote/directory
On docker machine: Copy from docker host into docker container
docker cp /some/remote/directory <container-name>:/some/dir/within/docker/
Of course, you can also do step 1 from your Docker machine if you prefer, by simply adapting the source and target of the scp command.