I created a new Docker container (from the standard ubuntu:latest image).
I would like to connect to it through SSH. I followed the instructions in this excellent link https://linuxconfig.org/how-to-connect-to-docker-container-via-ssh
Once I stop my container and restart it, the SSH service is not available anymore.
I have to start it manually every time.
I also tried the command systemctl enable ssh to make the SSH service permanent.
The result is as follows:
"Synchronizing state of ssh.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable ssh"
So everything seems to be OK, but when I stop the container and restart it, the problem is still there: no SSH service is started in the Ubuntu container.
Does anyone know how to make SSH access permanent in this case?
Thank you all in advance for your help :)
You have to write a customized Dockerfile and set up the SSH configuration inside it, so that each time you run the container it has a working SSH daemon.
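For example, a minimal sketch of such a Dockerfile; the root password, the sed edit, and the port mapping below are illustrative assumptions, not taken from the linked guide:

FROM ubuntu:latest
# Install the SSH server and create the runtime directory sshd expects
RUN apt-get update && apt-get install -y openssh-server && mkdir -p /var/run/sshd
# Set a root password (hypothetical; prefer key-based auth for anything real)
RUN echo 'root:changeme' | chpasswd
# Permit root login over SSH (assumption: you want to log in as root)
RUN sed -i 's/#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
# Run sshd in the foreground so it stays up as the container's main process
CMD ["/usr/sbin/sshd", "-D"]

Build it once with docker build -t ubuntu-ssh . and run it with docker run -d -p 2222:22 ubuntu-ssh; SSH then survives stop/start because the daemon is the container's main process, not something you start by hand.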
The reason the container loses the SSH configuration when you rerun it is that you are creating a new container from the original (non-SSH-configured) Ubuntu image. If you want to run the container you already configured, get the container list with docker container ls --all, copy the ID, then start that same container again with docker start {{ID}}.
I'm running Ubuntu 20.04 and installed Docker by following the steps from the docs https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository.
I installed Rider from the JetBrains Toolbox and want to set up Docker for my project. The Docker plugin is installed by default. I made sure that Docker is running via systemctl status docker.
I followed this guide on how to set up Docker for Rider https://blog.jetbrains.com/dotnet/2018/07/18/debugging-asp-net-core-apps-local-docker-container/ but unfortunately I get this error:
Cannot connect:
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection
refused: localhost/127.0.0.1:2375 caused by:
java.net.ConnectException: Connection refused
What is missing or wrong?
In that dialog, check "Unix socket".
This will contact the Docker daemon via the special file /var/run/docker.sock. You may need to adjust your user's permissions (typically by making yourself a member of the docker group) to get access to that file.
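For example, on Linux the usual fix is to add yourself to the docker group (a standard recipe; the group is created by the Docker packages):

sudo usermod -aG docker $USER
# log out and back in (or run: newgrp docker) for the new group membership to apply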
The "TCP socket" option is for an unusual and hard-to-securely-configure mode of connecting to Docker. (Anyone who can run any docker command can run a container as root, and bind-mount any file from the host; you really don't want to make this level of access network-accessible.) You shouldn't ever need the TCP socket mode..
On running docker images, I am getting error
error during connect: Get http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.40/images/json:
open //./pipe/docker_engine: The system cannot find the file specified.
In the default daemon configuration on Windows, the docker client must be
run elevated to connect. This error may also indicate that the docker
daemon is not running.
I am on Windows 10 Home and have tried to install/reinstall Docker but I keep getting the error. Things were working fine an hour back!
Can you check if you have a VM running under docker-machine? You can run the command below:
docker-machine ls
If you haven't installed docker-machine, please install it.
Ref: https://docs.docker.com/toolbox/toolbox_install_windows/
Note: Make sure virtualization is enabled in the BIOS.
I want to work in a container in a remote server.
But it doesn't work.
Environment:
Local: Windows 10
Local Terminal for ssh: WSL in Windows 10
Server: Ubuntu 18.04
I checked these two articles.
https://code.visualstudio.com/docs/remote/containers-advanced
https://code.visualstudio.com/docs/containers/ssh
I followed these steps.
I installed [Remote Development] extension in VS Code.
Remote-SSH: Connect to host. It works fine.
I installed the [Docker] extension on the remote server.
Now I can see my containers and images in a docker tab.
I clicked one container, clicked [Attach Visual Studio Code], and it says "There are no running containers to attach to".
I resolved this problem by switching to the remote server's Docker context on my local machine:
docker context create some-context-label --docker "host=ssh://user@remote_server_ip"
docker context use some-context-label
docker ps
# A list of remote containers on my local machine! It works!
After that:
Connect via Remote-SSH to the container server
Right-click the relevant container -> "Attach Visual Studio Code"
That works for me.
(Note: One would think that I should be able to just use my local VSCode (skip step 1) to connect to said remote container after switching my local context, but VSCode complains Failed to connect. Is docker running? in the Docker control pane.)
I solved this issue using SSH tunneling, following the steps found in https://florian-kriegel.de/blog/?p=234
Summarizing:
Set (or add) "docker.host": "tcp://localhost:23750" in settings.json in VSCode.
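As a JSON snippet, the entry merges into whatever settings.json already contains:

{
    "docker.host": "tcp://localhost:23750"
}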
Open an SSH tunnel like this on your local machine,
replacing user and hostname with the credentials of the remote machine (where the Docker daemon is running):
ssh -NL localhost:23750:/var/run/docker.sock user@hostname
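If you have the docker CLI installed locally, you can sanity-check the tunnel before touching VSCode; something like this should list the remote containers:

docker -H tcp://localhost:23750 ps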
Now, in the docker tab, you will be able to see and attach to containers in the remote machine.
Note that the Remote SSH Extension is not used in this case.
This might sound very strange, but for me, I had to open a folder on the remote SSH server prior to using the Remote Containers extension in VS Code. If I didn't do that, then it would constantly try to find the docker service running locally, even though the terminal tab was connected to the remote SSH server.
This seems very weird, because if you're connected via SSH in VS Code, the extension should assume you're trying to attach to the container on the remote server. You shouldn't have to open a remote folder first.
By "opening a folder" on the remote server, the Remote Containers extension was then able to attach VS code to the container running on the remote SSH server. I didn't have to do any of the steps in any of those articles. Just simply use Remote SSH to connect VS Code remotely via SSH, open a folder, and then use Remote Containers.
Solution using the "Remote SSH" and the "Remote Explorer" extension in Visual Studio Code.
Following the steps above (https://stackoverflow.com/a/61728799/11687201), I figured out how to make use of the Remote SSH and Remote Explorer extensions. The first step is the same as above:
Open the settings.json file in VSCode (press F1 and select ">Preferences: Open Settings (JSON)") and add/edit the following line: "docker.host": "tcp://localhost:23750"
Open the SSH config file: click on the "Remote Explorer" extension, then click the "Configure" button under "SSH Targets".
Add the following line to your SSH host entry:
LocalForward localhost:23750 /var/run/docker.sock
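For reference, the resulting ~/.ssh/config entry might look like this (alias, hostname, and user are hypothetical placeholders):

Host my-remote-host
    HostName remote_server_ip
    User myuser
    LocalForward localhost:23750 /var/run/docker.sock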
Remark: Previously I used the solution described earlier in this thread (https://stackoverflow.com/a/61728799/11687201). I had to reboot both the local and the remote machine before the solution described below worked.
Afterwards I have to use multiple VSCode Windows:
Local Machine: Start VSCode and use the "Remote Explorer" to connect to the remote machine using a new VSCode window
VSCode window connected to remote (SSH)
→ startup the Docker container of your choice
(I was not able to "Attach Visual Studio Code" from this VSCode window)
VSCode window connected to local machine
→ Click on the "Docker" extension; the Docker containers running on the remote machine are listed. Attach VSCode to a running container using one of the following options:
Right-click on the desired container and choose "Attach Visual Studio Code"
Press F1, choose ">Remote-Containers: Attach to Running Container..." and select the container of your choice
A third VSCode window will open being attached to the Docker container.
Pros and cons of this solution
(+) Using the "Remote Explorer" extension I can directly connect and open a previously used project folder on my remote machine with one click
(-) 3 VSCode windows (local machine, remote ssh and remote container) are needed instead of 2 VSCode windows
Do you see an error message like the following?
Failed to connect. Is Docker running?
Error: connect EACCES /var/run/docker.sock
It's because VSCode uses /var/run/docker.sock on the remote host to communicate with the Docker service.
There are two methods to fix this.
Method 1. (Secure; needs a reboot or logging out and back in.) Add your user to the docker group on the remote host, as described in the answers to "After executing following code of dockerode npm getting error "connect EACCES /var/run/docker.sock" on ubuntu 14.04".
Method 2. (Instant effect; use it only if you're not dealing with a production server.)
Run the following command in the SSH console:
sudo chmod o+rw /var/run/docker.sock
For some reason, this problem is fixed for me when I open a folder in the remote window before trying to attach to a container.
I found Daniel's answer really helpful, but it didn't work for me, so here are my two cents.
TL;DR
Create a new Docker context for the remote machine where the remote container is running:
docker context create some-context-label --docker "host=ssh://user@remote_server_ip"
docker context use some-context-label
Just open VSCode, go to the Docker tab (you should have the Docker extension installed) and you'll see all running containers from the remote context you just created.
Right-click on your desired container and choose "Attach Visual Studio Code".
You can also use the Remote Explorer tab; just select "Containers" from the dropdown at the top left.
Why not SSH into the remote host
When attaching Visual Studio Code to a container, you can check the logs by clicking the notification "Setting up Remote-Containers (show log)" at the bottom left. There, you can see that:
...
[26154 ms] Start: Run: ssh some-remote-host /bin/sh
[26160 ms] Start: Run in host: id -un
Here, my guess is that it's trying to SSH to the remote host from itself, since we already connected via Remote-SSH.
If you can reach the remote node running the Docker engine via SSH, why do you need yet another SSH server inside the container? From the host running your container, it is possible and safe to attach with a TTY, e.g. via docker attach or docker exec.
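For example, this gives you an interactive shell in a running container without any SSHD involved (the container name is a hypothetical placeholder):

docker exec -it my_container /bin/bash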
I don't think it is a good idea to run SSHD inside the container, although it is possible. To be useful, SSHD has to listen on a non-conflicting port in every container; otherwise, two containers that happen to expose the same port on the same node will conflict, like any other services running on the same node.
Of course, ports can be randomized using the -P option, but that is not so convenient. It is also less convenient to manage keys and users at the container level than at the host level, where all the machinery is provided by the host software.
Loading every container with SSHD also increases the container size. In Kubernetes, every container is reachable without any SSHD running inside it, via the path Pod -> container: the Pod has an IP, and containers are attachable by ID, just as with Docker-host -> container.
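As a sketch of the Kubernetes case (pod and container names are hypothetical placeholders):

kubectl exec -it my-pod -c my-container -- /bin/sh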
Step 1 - Docker daemon on the remote machine
Make sure your remote Docker daemon can accept connections from your host. For testing purposes, I use the following command on the remote machine to force the Docker daemon to listen on port 4243 on all IPs; beware, this is not secure.
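Judging from the ExecStart line in the drop-in below, that test command would be along these lines (an assumption; stop the managed service first, since only one daemon can run at a time):

sudo systemctl stop docker
sudo dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:4243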
Step 2 - Configure the daemon's command line via systemd
There is no support for reading a file from /etc/sysconfig or elsewhere to modify the command line. Fortunately, systemd gives us the tools we need to change this behavior.
The simplest solution is probably to create the file /etc/systemd/system/docker.service.d/docker-external.conf (the exact filename doesn't matter; it just needs to end with .conf) with the following contents:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock
And then:
systemctl daemon-reload
systemctl restart docker
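You can verify that the daemon now answers on TCP by querying the Engine API, e.g. from the remote machine itself:

curl http://localhost:4243/version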
Step 3 - Opening Docker Ports Using FirewallD
firewall-cmd --permanent --zone=public --change-interface=docker0
firewall-cmd --permanent --zone=public --add-port=4243/tcp
firewall-cmd --reload
Step 4 - Set (or add) "docker.host": "tcp://localhost:4243" in settings.json in VSCode.
I created a Docker stack to deploy to a swarm. Now I'm a bit confused about what the proper way to deploy it to a real server looks like.
Of course I can
scp my docker-stack.yml file to a node of my swarm
ssh into the node
run docker stack deploy -c docker-stack.yml stackname
So I thought of the docker-machine tool.
I tried
docker-machine create -d none --url=tcp://<RemoteHostIp>:2375 node1
which only seems to work if you open the port without TLS?
I received the following:
$ docker-machine env node1
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.178.49:2375": dial tcp 192.168.178.49:2375: connectex: No connection could be made because the target machine actively refused it.
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
I already tried to generate a certificate & copy it over to the host:
ssh-keygen -t rsa
ssh-copy-id myuser@node1
Then I ran
docker-machine --tls-ca-cert PathToMyCert --tls-client-cert PathToMyCert create -d none --url=tcp://192.168.178.49:2375 node1
With the following result:
$ docker-machine env node1
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "node1:2375": There was an error reading certificate
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
I also tried it with the generic driver
$ docker-machine create -d generic --generic-ssh-port "22" --generic-ssh-user "MyRemoteUser" --generic-ip-address 192.168.178.49 node1
Running pre-create checks...
Creating machine...
(node1) No SSH key specified. Assuming an existing key at the default location.
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Error creating machine: Error detecting OS: OS type not recognized
How do I properly add the remote Docker host to docker-machine with TLS?
Or is there a better way to deploy stacks to a server / into production?
I often read that you shouldn't expose the Docker port, but never how to avoid doing so. And I can't believe that they don't provide a simple way to do this.
Update & Solution
I think both answers have their merits.
I found the official Deploy to Azure doc (it's the same for AWS).
The answer from @Tarun Lalwani pointed me in the right direction, and it's almost the official solution. That's the reason I accepted his answer.
For me the following commands worked:
ssh -fNL localhost:2374:/var/run/docker.sock myuser@node1
Then you can run either:
docker -H localhost:2374 stack deploy -c stack-compose.yml stackname
or
DOCKER_HOST=localhost:2374 docker stack deploy -c stack-compose.yml stackname
The answer from @BMitch is also valid, and the security concern he mentioned shouldn't be ignored.
Update 2
The answer from @bretf is an awesome way to connect to your swarm, especially if you have more than one. It's still beta but works for swarms which are reachable from the internet and don't have an ARM architecture.
I would prefer not opening/exposing the Docker port, even with TLS in mind. I would rather use an SSH tunnel and then do the deployment:
ssh -L 2375:127.0.0.1:2375 myuser@node1
And then use
DOCKER_HOST=tcp://127.0.0.1:2375 docker stack deploy -c docker-stack.yml stackname
You don't need docker-machine for this. Docker has the detailed steps to configure TLS in their documentation. The steps involve:
create a CA
create and sign a server certificate
configure the dockerd daemon to use this cert and validate client certs
create and sign a client certificate
copy the CA and client certificate files to your client machine's .docker folder
set some environment variables to use the client certificates and the remote Docker host
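That last step might look like this on the client (the host is a placeholder; 2376 is the conventional TLS port):

export DOCKER_HOST=tcp://<RemoteHostIp>:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker   # ca.pem, cert.pem, key.pem live here
docker ps                           # now talks to the remote daemon over TLS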
I wouldn't use the SSH tunnel method in a multi-user environment, since any user with access to 127.0.0.1 would have root access to the remote Docker host without a password or any auditing.
If you're using Docker for Windows or Docker for Mac, Docker Cloud has a more automated way to set up your TLS certs and get you securely connected to a remote host for free. Under "Swarms" there's "Bring your own Swarm", which runs an agent container on your Swarm managers to let you easily use your local docker CLI without manual cert setup. It still requires the Swarm port to be open to the internet, but this setup ensures it has TLS mutual auth enabled.
Here's a YouTube video showing how to set it up. It can also support group permissions for adding/removing other admins from remotely accessing the Swarm.