SSH from a Docker container running as a GitLab runner

I've got a weird problem. I am using a Docker container runner with GitLab CE to do our builds anywhere. One thing I need to do is scp the results to a central server. The user ID, private key, and public key are the same on the remote server as in the container; the remote server is in known_hosts and the public key is in the authorized_keys file on the server.
Now if I spin up this container standalone, I can SSH to the remote server. However, when it is running as a Docker container runner on GitLab, it can't see the remote server.
I know I’m missing something simple but can’t figure it out.
Anyone have any ideas?

So this turned out to be a sync problem: we pass the SSH keys into GitLab via CI/CD variables and write them into ~/.ssh during the job.
We have both a group-level set of variables and a project-level one on GitLab; the project-level one still held older keys, and that is what caused SSH to fail.
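For reference, a minimal sketch of how keys held in CI/CD variables typically end up in ~/.ssh during a job; the variable name SSH_PRIVATE_KEY and the host name are illustrative, not the OP's actual setup:

before_script:
  - mkdir -p ~/.ssh && chmod 700 ~/.ssh
  - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa          # key comes from a CI/CD variable (group- or project-level)
  - chmod 600 ~/.ssh/id_rsa
  - ssh-keyscan central.example.com >> ~/.ssh/known_hosts

If the same variable exists at both group and project level, GitLab uses the project-level value, which matches the mismatch described above.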

Related

Deploy Docker services to a remote machine via ssh (using Docker Compose with DOCKER_HOST var)

I'm trying to deploy some Docker services from a compose file to a Vagrant box. The Vagrant box does not have a static IP. I'm using the DOCKER_HOST environment variable to set the target engine.
This is the command I use: DOCKER_HOST="ssh://$BOX_USER@$BOX_IP" docker-compose up -d. The BOX_IP and BOX_USER vars contain the correct IP address and username (obtained at runtime from the Vagrant box).
I can connect and deploy services this way, but the SSH connection always asks whether I want to trust the machine. Since the VM gets a dynamic IP, my known_hosts file gets polluted with lines I only used once, which might cause trouble in the future if the IP is taken again.
Assigning a static IP results in error messages stating that the machine does not match my known_hosts entry.
And setting StrictHostKeyChecking=no also is not an option because this opens the door for a lot of security issues.
So my question is: how can I deploy containers to a remote Vagrant box without the mentioned issues? Ideally I could start a Docker container that handles the deployments, but I'm open to any other idea as well.
The reason I don't just use a bash script while provisioning the VM is that this VM acts as a testing ground for a physical machine. The scripts I use are the same as for the real machine; I test them regularly and automatically inside a Vagrant box.
UPDATE: I'm using Linux
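One way to keep the global known_hosts clean, sketched here under the assumption that the Docker CLI/Compose connection goes through the system ssh client (so ~/.ssh/config is honoured); the alias, IP, user, and file names are made up:

Host vagrant-box
    HostName 192.168.56.10                          # the box's current IP, filled in at runtime
    User vagrant
    UserKnownHostsFile ~/.ssh/known_hosts.vagrant   # throwaway file, safe to wipe between runs

With such an entry the deploy command becomes DOCKER_HOST="ssh://vagrant-box" docker-compose up -d, and any stale host keys live only in the throwaway file.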

GitLab runner gets stuck while pulling a Docker image

I was trying to run my gitlab-ci pipeline on my self-hosted GitLab server and picked Docker as the gitlab-runner executor, but the pipeline gets stuck and doesn't work.
What should I do to fix this?
This seems to be the same issue: the machine on which Docker is running sits behind a proxy server, which is why it gets stuck when trying to pull the image.
Log in to the machine and check its internet access.
Check whether you are going through some kind of proxy or not.
Your own ID may have SSO to the proxy, and hence works interactively; if the gitlab-runner service runs under a different account, that account may not have internet access.
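For reference, the usual way to give the Docker daemon itself proxy access on a systemd host is a drop-in unit; the proxy URL below is a placeholder:

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"

# reload and restart so image pulls go through the proxy
sudo systemctl daemon-reload
sudo systemctl restart docker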

Issue commands from a gitlab-runner inside a Docker container

I have a machine with multiple docker containers for a project that I am developing and I just set up a new docker container running Gitlab-Runner inside it.
I need to run a few commands on all the other containers whenever a commit is pushed. Is there any way for the runner inside the GitLab-Runner container to access the other containers and tell them to execute commands, or even restart them?
We currently don't use SSH keys to access the server that hosts all the Docker containers; we use a username and password.
The safe way (and easier than passwords, too) is to start using SSH keys and access the containers over the network, or at least to issue commands to the host over SSH from the gitlab-runner container.
Also, a Stack Overflow search returned this: manage containers from another container, docker
Looks legit.
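A minimal sketch of the SSH-based approach suggested above, run from the CI job once a key is in place; the user, host, and container names are placeholders:

ssh deploy@containers-host "docker restart project_web project_worker"
ssh deploy@containers-host "docker exec project_web ./reload-config.sh"

The linked approach of mounting the host's /var/run/docker.sock into the runner container avoids SSH entirely, at the cost of giving that container full control over the Docker daemon.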

VSCode combine remote ssh and remote containers

On my office desktop machine, I'm running a Docker container that accesses the GPU. Since I'm working from home, I'm connected to my office desktop over SSH in VS Code via the Remote-SSH plugin, which works really well. However, I would like to further connect via Remote-Containers to that running container in order to be able to debug the code I'm running in it. I haven't managed to get this done yet.
Does anyone have any idea whether this is possible at all and, if so, how to get it done?
Install and activate an SSH server in the container (a container-side setup sketch follows after these steps).
Expose the SSH port via Docker.
Create a user with a home directory and password in the container.
(Install the Remote-SSH extension for VS Code and) set up the SSH connection within the remote extension in VS Code and add a config entry:
Host <host>-docker
    Hostname your.host.name
    User userIdContainer
    Port exposedSshPortInContainer
Connect in VS Code.
Note: answer provided by the OP in the question section.
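A rough sketch of the first three steps for a Debian/Ubuntu-based image; the package manager, user name, and published port are assumptions, not taken from the question:

# inside the container (Debian/Ubuntu assumed)
apt-get update && apt-get install -y openssh-server
useradd -m -s /bin/bash devuser && passwd devuser     # placeholder user
service ssh start

# when starting the container, publish the SSH port, e.g.
docker run --gpus all -p 2222:22 my-image             # then use Port 2222 in the ssh config entry above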

Not able to retrieve AWS_CONTAINER_CREDENTIALS_RELATIVE_URI inside the container from fargate task

I'm running a Docker container in a Fargate ECS task.
In my Docker container I have enabled an SSH server so that I can log in to the container directly if I have to debug something. That part works fine: I can SSH to my task IP and investigate my issues.
But now I've noticed an issue accessing any AWS service from the SSH session inside the container: when I log in via SSH, configuration files such as ~/.aws/credentials and ~/.aws/config are missing, and I can't issue any CLI commands, e.g. checking the caller identity, which is supposed to be my task ARN.
The strange thing is, if I run this same task on an ECS instance, I don't have any such issues: I can see my task ARN and all the rest of the services, so the ECS task agent is working fine.
Coming back to the SSH connectivity: I notice I'm getting 404 page not found from curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI. How can I make SSH access have the same capabilities as ECS-instance access? If I could access AWS_CONTAINER_CREDENTIALS_RELATIVE_URI in my SSH session, I think everything would work.
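For context, ECS injects that variable into the environment of the container's main process only; an sshd-spawned shell starts with a fresh environment, so the variable is unset and the curl hits the bare metadata endpoint, hence the 404. A hedged workaround sketch, assuming the SSH user is allowed to read PID 1's environment:

# inside the SSH session: copy the variable out of the main process's environment
export $(tr '\0' '\n' < /proc/1/environ | grep AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI    # should now return the task-role credentials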
