On my office desktop machine, I'm running a Docker container that accesses the GPU. Since I'm working from home, I'm connected to my office desktop over SSH in VS Code via the Remote - SSH plugin, which works really well. However, I would like to further connect via Remote - Containers to that running container, so that I can debug the code running inside it. I haven't managed to get this working yet.
Does anyone know whether this is possible at all, and if so, how to get it done?
Install and activate an SSH server in the container.
Expose the SSH port via Docker.
Create a user with a home directory and password in the container.
(Install the Remote - SSH extension for VS Code and) set up the SSH connection within the remote extension in VS Code, adding a config entry:
Host <host>-docker
Hostname your.host.name
User userIdContainer
Port exposedSshPortInContainer
Connect in VS Code.
Note: answer provided by the OP in the question section.
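Putting the steps above together, here is a minimal sketch (assuming a Debian/Ubuntu-based image, a hypothetical container name gpu-dev, a hypothetical user devuser, and host port 2222 published to the container's port 22; adjust to your image and ports):
# On the office machine: start (or recreate) the container with the SSH port published
docker run -d --gpus all -p 2222:22 --name gpu-dev your-gpu-image sleep infinity
# Inside the container: install an SSH server and create the user
docker exec -it gpu-dev bash
apt-get update && apt-get install -y openssh-server
useradd -m -s /bin/bash devuser && passwd devuser
service ssh start
With that in place, a config entry like the one above (Port 2222, User devuser) lets Remote - SSH connect straight into the container.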
What are the dependencies and steps to connect to a remote Docker container from VS Code, so that I can compile and run code with the tools in my container environment?
I have tried to follow the instructions here without much luck:
https://code.visualstudio.com/docs/remote/containers-advanced#_developing-inside-a-container-on-a-remote-docker-host
My setup involves:
Host with VSCode, no docker installed, ssh client installed, ssh keys are in place
Server with VSCode, docker installed, ssh client and server installed
Docker container already running on Server
Host can connect to Server through VS Code using the Remote Development extension (0.17.0) and the Remote - SSH extension (0.47.2)
Server can connect to the Docker container through VS Code using the Remote Development extension (0.17.0) and the Remote - Containers extension (0.83.1)
How do I connect the Host to a running Docker container?
UPDATE 1
Small advance
I have added this line to my ~/.config/Code/User/settings.json file. The option gets highlighted with an "unknown configuration setting" message:
{
...
"docker.host":"tcp://localhost:23750",
...
}
Run this command in another terminal:
ssh -N -L localhost:23750:/var/run/docker.sock <user>@<serveraddr>
And now I can see the running containers in Remote Explorer > Containers > Other Containers. However, when trying to connect to one, I get the following error message:
Setting up container with bc1700db049858ba20f1c830bbeff6d6a4e04de58a2b35a61df1016788bc07db
Docker returned an error code 127, signal null, message: Command failed: docker system info
/bin/sh: docker: command not found
So, it appears that Docker must be installed on the host machine to avoid that last error.
Note: the docker service does not need to be running on the host (systemctl disable docker)
With this in mind, these are the steps.
Host:
Install docker and ssh client
Add your user to docker group
Install VSCode
Configure Server
(After server config below): edit ~/.config/Code/User/settings.json with
"docker.host":"tcp://localhost:23750",
Configure your ssh keys for the Server
(After every reboot, run in a terminal: ssh -N -L localhost:23750:/var/run/docker.sock <user>@<serveraddr>)
Run VSCode and install Remote Development extension. Restart VSCode
Now you should see your running containers in VSCode Remote explorer > Containers > Other Containers
Server:
Install docker and ssh server
Install VSCode (this may not be a requirement on the server)
Add your user to docker group and start your container
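To sanity-check this setup from the Host (a sketch; it assumes the SSH tunnel from the Host steps above is already running), point the docker CLI at the forwarded socket and list the Server's containers:
# On the Host, with the tunnel open in another terminal:
#   ssh -N -L localhost:23750:/var/run/docker.sock <user>@<serveraddr>
docker -H tcp://localhost:23750 ps
# or equivalently
DOCKER_HOST=tcp://localhost:23750 docker ps
If this lists the containers running on the Server, the Remote Explorer "Other Containers" view should show them too.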
I realize this was already answered, but I stumbled across this while trying to set this up myself today. An additional issue in my case was that my local SSH key had not been added to the agent. I was following the instructions here.
I am running Windows 10 Version 1909 Build 18363.1082.
After doing an ssh-add $Env:USERPROFILE\.ssh\id_rsa and restarting the ssh-agent, I was able to connect to the remote container without having to employ the ssh tunneling method you show above.
I am having this issue
system3:postgres saurabh-gupta2$ docker build -t postgres .
Sending build context to Docker daemon 38.91kB
Step 1/51 : FROM registry.access.redhat.com/rhel7/rhel
Get https://registry.access.redhat.com/v2/: Service Unavailable
docker run -t apline
Unable to find image 'apline:latest' locally
docker: Error response from daemon: Get https://registry-1.docker.io/v2/: Service Unavailable.
See 'docker run --help'.
I have looked for a solution; one says to set the proxy, but I have already set the proxy for the Wi-Fi.
https://docs.docker.com/docker-for-mac/networking/#httphttps-proxy-support
Still, it is not working.
I have set the proxy for Docker too (in Preferences -> Proxies). It is still not working.
Docker version 17.12 CE
I also want to know: if the proxy is the issue, how can I check that it is set, and what is the workaround for this?
Here are a few suggestions:
Try restarting your Docker service.
Check your network connection, for example with the following shell commands:
</dev/tcp/registry-1.docker.io/443 && echo Works || echo Problem
curl https://registry-1.docker.io/v2/ && echo Works || echo Problem
Check your proxy settings (e.g. in /etc/default/docker).
If the above doesn't help, this could be a temporary issue with the Docker services (as the Service Unavailable message suggests).
Related: GH-842 - 503 Service Unavailable at http://hub.docker.com.
I had this problem for a few days; it simply started working again after that.
You can consider raising the issue at the docker/hub-feedback repo, asking in the Docker Community Forums, or contacting Docker Support directly.
docker logout
docker login
This might solve your problem
I tried running on Windows and got this problem after an update. I tried restarting the Docker service as well as my PC, but nothing worked.
When running:
curl https://registry-1.docker.io/v2/ && echo Works
I got back:
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}
Works
Eventually, I tried:
https://github.com/moby/moby/issues/22635#issuecomment-284956961
by changing the fixed DNS server address to 8.8.8.8 in the Docker settings, which worked for me!
I still got the unauthorized message for curl https://registry-1.docker.io/v2/ but I managed to pull images from docker hub.
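On Linux, the analogous change would be pinning the daemon's DNS in /etc/docker/daemon.json and restarting Docker (a sketch; the file path assumes a standard Linux install):
# /etc/docker/daemon.json
{
  "dns": ["8.8.8.8"]
}
# then restart the daemon
sudo systemctl restart docker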
For me, this issue appeared when I first installed Docker and ran
docker run hello-world
I got an authentication required error when I ran
curl https://registry-1.docker.io/v2/ && echo Works
All I needed to do was restart macOS and run the command again; it just started pulling the image and I got the message:
Hello from Docker!
This message shows that your installation appears to be working correctly.
It's clearly a proxy issue: docker proxies https connections to the wrong place. Bear in mind that docker proxy settings may be different from the operating system (and curl) ones. Here's how I managed to solve the issue:
First of all, find out where are you proxying your docker https requests:
# docker info | grep Proxy
Http Proxy: http://<my.proxy.server>:8080
Https Proxy: https://<my.proxy.server>:8080
No Proxy: localhost,127.0.0.1
and double check your https settings.
In my case, I realized that the "Https proxy" was set to https://... instead of http://..., so I corrected it in the /etc/sysconfig/docker file (I'm using RHEL7) and, after a Docker restart with:
# systemctl restart docker
the proxy variable shows up successfully updated:
# docker info | grep Proxy
Http Proxy: http://<my.proxy.server>:8080
Https Proxy: http://<my.proxy.server>:8080
No Proxy: localhost,127.0.0.1
and everything works fine :-)
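For reference, the corrected lines in /etc/sysconfig/docker end up looking something like this (a sketch with a placeholder proxy address; note that both variables point at http://):
# /etc/sysconfig/docker
HTTP_PROXY=http://<my.proxy.server>:8080
HTTPS_PROXY=http://<my.proxy.server>:8080
NO_PROXY=localhost,127.0.0.1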
Just to add, in case anyone else comes across this issue.
On a Mac
I had to logout and log back in.
docker logout
docker login
Then it prompts for username (NOTE: Not email) and password. (Need an account on https://hub.docker.com to pull images down)
Then it worked for me.
NTLM PROXY AND DOCKER
If your company is behind an MS proxy server that uses the proprietary NTLM protocol,
you need to install the Cntlm authentication proxy.
After this, set the proxy in
/etc/systemd/system/docker.service.d/http-proxy.conf with the following format:
[Service]
Environment="HTTP_PROXY=http://<IP OF CNTLM Proxy Server>:3182"
In addition, you can set these in the Dockerfile (or in your shell environment):
export http_proxy=http://<IP OF CNTLM Proxy Server>:3182
export https_proxy=http://<IP OF CNTLM Proxy Server>:3182
export no_proxy=localhost,127.0.0.1,10.0.2.*
Followed by:
systemctl daemon-reload
systemctl restart docker
This Worked for me
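One caveat on the Dockerfile part: an export inside a Dockerfile only lasts for the RUN instruction it appears in. If you want the proxy baked into the image, the persistent form would be ENV instructions (a sketch using the same placeholder address):
# Dockerfile
ENV http_proxy=http://<IP OF CNTLM Proxy Server>:3182
ENV https_proxy=http://<IP OF CNTLM Proxy Server>:3182
ENV no_proxy=localhost,127.0.0.1,10.0.2.*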
For me the problem was solved by restarting the docker daemon:
sudo systemctl restart docker
One option which worked for me on Mac:
Click on the Docker icon in the tray and open Preferences -> Proxies. Click on Manual proxy configuration and specify the Web Server (HTTP) proxy and Secure Web Server (HTTPS) proxy in the same format as the HTTPS_PROXY environment variable.
Choose Apply & Restart.
This worked for me.
Try reloading the systemd daemon and then restarting the Docker service:
systemctl daemon-reload
systemctl restart docker
I had this same issue when working on an Ubuntu server.
I was getting the following error:
deploy#my-comp:~$ docker login -u my-username -p my-password
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp 35.175.83.85:443: connect: connection refused
Here are the things I tried that did not work:
Restarting the Docker service using sudo systemctl restart docker
Powering off and restarting the Ubuntu server.
Changing the name server to 8.8.8.8 in the /etc/resolv.conf file
Here's what worked for me:
I tried checking if the server has access to the internet using the following netcat command:
nc -vz google.com 443
And it returned this output:
nc: connect to google.com port 443 (tcp) failed: Connection refused
nc: connect to google.com port 443 (tcp) failed: Network is unreachable
Instead of something like this:
Ncat: Version 7.70 ( https://nmap.org/ncat )
Ncat: Connected to 172.217.166.110:443.
Ncat: 0 bytes sent, 0 bytes received in 0.07 seconds.
I tried checking again if the server has access to the internet using the following wget command:
wget -q --spider http://google.com ; echo $?
And it returned:
4
Instead of:
0
Note: Anything other than 0 in the output means your system is not connected to the internet
I then made one final check of whether the server had access to the internet, using the following Nmap command:
nmap -p 443 google.com
And it returned:
Starting Nmap 7.01 ( https://nmap.org ) at 2021-02-16 11:50 WAT
Nmap scan report for google.com (216.58.223.238)
Host is up (0.00052s latency).
Other addresses for google.com (not scanned): 2c0f:fb50:4003:802::200e
rDNS record for 216.58.223.238: los02s04-in-f14.1e100.net
PORT STATE SERVICE
443/tcp closed https
Nmap done: 1 IP address (1 host up) scanned in 1.21 seconds
Instead of something like this:
Starting Nmap 7.01 ( https://nmap.org ) at 2021-02-16 11:50 WAT
Nmap scan report for google.com (216.58.223.238)
Host is up (0.00052s latency).
Other addresses for google.com (not scanned): 2c0f:fb50:4003:802::200e
rDNS record for 216.58.223.238: los02s04-in-f14.1e100.net
PORT STATE SERVICE
443/tcp open https
Nmap done: 1 IP address (1 host up) scanned in 1.21 seconds
Note: The state of port 443/tcp is closed instead of open
All this was enough to make me realize that connections to the internet were not allowed on the server.
All I had to do was speak with the team in charge of infrastructure to fix the network connectivity issue to the internet on the server. And once that was fixed my docker command started working fine.
Resources: 9 commands to check if connected to internet with shell script examples
That's all.
I hope this helps
Recheck your proxy settings with the following command:
docker info | grep Proxy
Check your VPN connectivity.
If you are not using a VPN, check your network connectivity.
Reinstall Docker and repeat the above steps.
Enjoy
On my Windows 11 machine, all I did was first log in to my account:
docker login
I got this error from a network filter (LuLu on macOS) blocking traffic to/from Docker-related processes.
I had this issue when I first installed Docker and ran
docker run hello-world
I was on a corporate network and switching to my personal network solved the issue for me.
The answers provided here are great, but if you are new to this you may not notice the end of the error: a net/http: TLS handshake timeout message means that you have a slow internet connection. That alone can be the whole problem.
Toodles
I had the following entries in my /etc/hosts file:
34.228.211.243 registry-1.docker.io
34.205.88.205 auth.docker.io
104.18.121.25 production.cloudflare.docker.com
Just by commenting them out, I fixed the problem.
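To confirm the stale pins are gone and those names resolve freshly again, a quick check (nothing Docker-specific) is:
getent hosts registry-1.docker.io    # should now return a current address, not the old pinned one
dig +short registry-1.docker.io auth.docker.io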
Many good answers above, but mine is a bit different, involving the Mac Docker Desktop UI. In my case, it was a Docker Desktop proxy setting that needed to be turned off when I am outside the corporate firewall/proxy:
ERROR message from docker CLI:
Username: xxx
Password: ***
Error response from daemon: Get https://registry-1.docker.io/v2/: Service Unavailable
My environment: a Mac with the Docker UI (i.e. Docker Desktop, shown as a whale icon), running outside the corporate firewall/proxy.
I am able to Sign In with Docker Desktop UI.
However, with both docker login and docker pull I kept getting the above error, and I got sidetracked into checking the user ID, resetting the daemon, ...
Finally, I went back to the Docker Desktop UI. Sure enough, there was a proxy setting that I had set up a long time ago and totally forgotten about!
Yes, when I am outside the firewall, I need to turn off the proxy setting here:
Docker Desktop -> Preferences -> Resources -> Proxies: turn off the manual proxy configuration.
Then docker pull works (without docker login as I was pulling a public image)!
Thanks
PS. I think the difference in behavior of Docker Desktop and Docker CLI contributes to the confusion. I am able to login to docker through the GUI, and the CLI keeps erroring out without good enough diagnostic information.
Using Linux. For me it worked by doing:
$ docker logout    # log out of hub.docker.com
$ docker login     # log in to hub.docker.com
Check whether the Containers feature is enabled:
Go to "Turn Windows features on or off", then enable the Containers checkbox.
Restart Windows.
Using the root account instead of my regular user account solved it for me.
I solved this issue with $ sudo docker run hello-world by following the Docker docs.
If you are behind a corporate HTTP proxy server, this may solve your problem.
The Docker docs also cover other HTTP proxy setups.
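For reference, the systemd drop-in that the Docker docs describe looks roughly like this (a sketch with a placeholder proxy address):
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
# reload and restart so the daemon picks it up
sudo systemctl daemon-reload
sudo systemctl restart docker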
In my case, stopping Proxifier fixed it. I added a rule to route any connections from vpnkit.exe as Direct and it now works.
One of the things you might need to check is whether the registry requires a VPN.
Enable your VPN and try pulling again.
Thanks.
OK, I had a similar issue and nothing seemed to help: I restarted Docker and disabled IPv6, and nslookup and dig all seemed fine.
What worked for me was going to my Docker Desktop -> Preferences -> Experimental Features and unchecking Use new virtualization framework.
The docker login terminal command worked for me.
If your machine requires a VPN, you must connect to the VPN first and then try docker login.
Have you created a repo with the matching tag on the destination Docker Hub? It might be that your container image has nowhere to be pushed to.
Run export DOCKER_CONTENT_TRUST=0 and then try it again.
Use --tls in the pull command.
For example, if the original pull command is docker pull dgraph/dgraph:v21.03.0,
use this instead: docker --tls pull dgraph/dgraph:v21.03.0
Just rebooting the system helped for me (Windows 10, 64-bit).
While running
sudo docker pull centos
it gives a connection timeout. It is running behind a proxy, and http_proxy & https_proxy have been set. What could be the reason apart from the proxy, even though it looks like a proxy issue? I checked LINK but in vain; if there are some other settings I am missing, please let me know.
2014/11/10 23:31:53 Get https://index.docker.io/v1/repositories/centos/images: dial tcp 162.242.195.84:443: connection timed out
I was getting timeouts on Windows 10 Docker 17.03.0-ce-rc1
To fix it I opened Settings / Network and then set the DNS server to 8.8.8.8
If you are running behind a proxy,
add the following line to the /etc/default/docker file:
export http_proxy=<YOUR_PROXY>
Then restart the Docker service and check:
# service docker restart
service docker stop
HTTP_PROXY=http://proxy_ip:port/ docker -d &
This should work.
On Ubuntu, you can add HTTP_PROXY and HTTPS_PROXY to /etc/default/docker
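For example, the /etc/default/docker entries might look like this (a sketch with a placeholder proxy address), followed by a service restart:
# /etc/default/docker
export HTTP_PROXY="http://proxy.example.com:3128"
export HTTPS_PROXY="http://proxy.example.com:3128"
sudo service docker restart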
So yes, what worked for me in the end was setting the proxy, as mentioned in other answers.
I went to the icon tray --> right-clicked on Docker for Windows --> went to Settings --> set the proxy as ip:port.
To switch to a fast, open and non-intrusive DNS on CentOS 7:
sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0
add the line:
PEERDNS=no
and
sudo vi /etc/resolv.conf
keep only the line:
nameserver 9.9.9.9
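After editing those files, restart networking (and Docker) so the change takes effect (a sketch for CentOS 7):
sudo systemctl restart network
sudo systemctl restart docker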
If you run into these docker pull timeout issues with Docker Toolbox on Windows 10 Home, piggybacking on an existing VirtualBox installation, check whether VirtualBox is open separately. If it is, shut down its running machines and close VirtualBox (one or more of those machines were created by, and are being used by, Docker Toolbox). This heavy-handed way of going about things worked for me.
Generally, with this kind of connection timeout, the cause I have seen is that outbound internet access is restricted, so Docker images cannot be downloaded from external repositories.
To check this, try to download the image from another server, or from another machine with a different internet connection.
If you can transfer the image over scp, save it with: sudo docker save -o /home/your_image.tar your_image_name, and load it with: sudo docker load -i your_image.tar
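Put together, the transfer might look like this (a sketch with hypothetical host names and paths):
# on a machine that can reach the registry
sudo docker pull centos
sudo docker save -o /home/centos.tar centos
# copy the archive to the machine with restricted internet access
scp /home/centos.tar user@restricted-host:/home/
# on the restricted machine
sudo docker load -i /home/centos.tar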