I was trying to run my GitLab CI on my self-hosted GitLab server. I picked Docker as the gitlab-runner executor, but the pipeline gets stuck and doesn't work.
What should I do to fix this?
This looks like the same issue: the machine on which Docker is running sits behind a proxy server, which is why it gets stuck when trying to pull the image.
If you are able to log in to the machine, check its internet access and whether some kind of proxy is in use. Your own ID may have SSO to the proxy, which is why it works for you; if the gitlab-runner service runs under a different account, that account may not have internet access.
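A minimal sketch of how to check and fix this, assuming a systemd-based Linux host (the proxy URL is a placeholder):

```
# Test connectivity as the service account, not your own:
sudo -u gitlab-runner curl -sI https://registry-1.docker.io/v2/

# If that times out, give the service the proxy settings your shell uses:
sudo mkdir -p /etc/systemd/system/gitlab-runner.service.d
sudo tee /etc/systemd/system/gitlab-runner.service.d/proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
EOF
sudo systemctl daemon-reload
sudo systemctl restart gitlab-runner
```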
Does anyone have a working installation of nginx as a reverse proxy, unbundled from GitLab, in front of the internal GitLab container registry?
I've set up nginx and GitLab in Docker containers and I can access GitLab at a sub-URL, e.g. my.domain.com/gitlab/. However, when I try to access the container registry for an example project, I either get Bad Gateway when using docker login, or the container registry page shows an error:
Docker connection error: We are having trouble connecting to the Container Registry. Please try refreshing the page. If this error persists, please review the troubleshooting documentation.
I've been trying to figure out what settings I need for days and I don't understand what's going wrong, so if anyone can help I'd be really grateful.
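For what it's worth, this is the kind of probe I've been using to see what the registry endpoint returns through nginx (it assumes the registry is exposed on the same domain; the Docker Registry v2 API root should answer 200 or 401):

```
# A 502 or an nginx error page here means the proxy never reaches the
# registry backend; docker login talks to this same /v2/ endpoint.
curl -si https://my.domain.com/v2/ | head -n 1
```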
I followed this documentation to create a self-signed private registry on a VM. It works fine when I pull images from another host.
I'm now trying to understand how to configure a Service Connection of type Docker Registry in Azure DevOps to use this registry.
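For context, this is roughly how the other host (where pulling works) was made to trust the registry; the hostname, port, and cert path below are placeholders, not my real values:

```
# Trust the registry's self-signed CA on the Docker host, then log in.
sudo mkdir -p /etc/docker/certs.d/registry.example.com:5000
sudo cp ca.crt /etc/docker/certs.d/registry.example.com:5000/ca.crt
docker login registry.example.com:5000
```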
This is my current setup:
And this is the log:
You could go to Docker's Settings > Network and change the DNS Server radio button to Fixed.
In addition, I found a similar issue that you could also check.
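That toggle is in Docker Desktop; a rough equivalent on a Linux host (an assumption, with public resolvers as placeholders) is to pin the daemon's DNS in daemon.json:

```
# Pin the DNS servers Docker hands to containers, then restart the daemon.
# Note: this overwrites any existing /etc/docker/daemon.json.
echo '{ "dns": ["8.8.8.8", "1.1.1.1"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
```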
I've got a weird problem. I am using a Docker container runner with GitLab CE to do our builds anywhere. One thing I need to do is SCP results to a central server. The user ID and the private and public keys are the same on the remote server as in the container, the remote server is in known_hosts, and the public key is in the authorized_keys file on the server.
Now, if I spin up this container standalone, I can SSH to the remote server. However, when it's running as a Docker container runner on GitLab, it can't see the remote server.
I know I’m missing something simple but can’t figure it out.
Anyone have any ideas?
So this turned out to be a sync problem: we pass the SSH keys into GitLab via CI variables and write them into ~/.ssh.
We have a group-level set of variables on GitLab and a project-level one, which had older keys, and that's what caused SSH to fail.
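For anyone hitting the same thing: a project-level variable overrides a group-level variable of the same name in GitLab, so the stale project key silently won. The write-out step in our jobs looks roughly like this (variable and host names changed):

```
# before_script: install the key from the CI variable.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
ssh-keyscan central.example.com >> ~/.ssh/known_hosts
```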
We're trying to set up a GitLab Runner that is responsible for building and testing our web application. For running the jobs we use the Docker executor with DinD.
Our problem: when trying to access certain services from inside the runner container (Docker image), we get a timeout and no response back. This includes:
- logging in to our own Docker registry, which is hosted on the same system
- wget on our domain (which is hosted on the same system)
What we can do:
- ping our domain as well as the registry
- ping other domains
- wget other domains
Logging in to the registry and wget on our domain both succeed when tried natively on the server rather than in a Docker container.
So it looks like it may be a Docker problem.
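One thing we haven't ruled out yet is an address clash between Docker's bridge network and the subnet our domain resolves to (the domain below is a placeholder); comparing the two is quick:

```
# If the subnet the domain's IP falls into also shows up as a local Docker
# bridge route, replies never leave the host and connections time out.
getent hosts my.domain.example
docker run --rm alpine ip route
ip route
```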
Hope someone can help us.
I created the Bluemix Secure Gateway Docker client in my local environment and everything was working fine. But after I rebooted my workstation, the Docker client no longer works. I ran the docker start command, but nothing happened. What should I do to get it working again?
Thanks.
Please run the "docker run ..." command exactly as it is shown in the Secure Gateway UI. We do not support any of the docker start/stop/restart commands.