How to access a docker container through SSH?

I am currently thinking of building a Docker image for my IPython parallel nodes, because it's a pain to configure each one manually with commands. Will I be able to access this image (located on a different PC on my LAN) simply by typing ssh user@ip on my laptop (the master node)? How do I get the IP of the Docker image running on my node?

Will I be able to access this image (located on a different PC on my LAN) simply by typing ssh user@ip on my laptop (the master node)?
You cannot ssh into a container unless you arrange to run sshd inside that container. Normally that's not necessary; as this answer explains, you can simply use docker exec to access a running container.
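For example, a minimal sketch of opening an interactive shell in a running container (the container name and shell path here are assumptions):
docker exec -it mycontainer /bin/bash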
How do I get the IP of the Docker image running on my node?
First, a note about nomenclature: an image is just a collection of files. A container is what you get when you start services from an image. In other words, it doesn't make sense to ask about accessing, or getting the IP address of, an image.
You can get the IP address of a container using the docker container inspect command, which will show you a variety of information about your container. However, this may not be what you want: the IP address of the container will be a private address on a Docker-internal network that is only accessible from the host where you're running Docker.
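For instance, a Go-template filter can pull out just the address (the container name is an assumption):
docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mycontainer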
You provide remote access to services by using port forwarding (the -p flag to docker run). For example, if you're running a webserver on port 8080 inside a container, you could make that available on port 80 on your host by doing something like:
docker run -p 80:8080 mywebserver
This document describes in more detail some of the options related to port forwarding.
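If you really do need SSH access from another machine on the LAN, the same mechanism applies: run sshd in the container and publish port 22. A minimal sketch, assuming an image with sshd already configured (the image name, user, and host address are placeholders):
docker run -d -p 2222:22 my-sshd-image
ssh -p 2222 user@node-ip
Note that node-ip is the LAN address of the PC running Docker, not the container's internal address.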

Related

Resource cannot be found error when accessing a page in Docker Container

I created an ASP.NET WebForms project in Visual Studio with Docker support (Windows). When I run the project from Visual Studio, the page comes up fine.
Visual Studio creates a Docker image, which I can see using the command
docker images
(the image is named webapplication3)
Now I run another instance of the image (webapplication3) with the command
docker run webapplication3:dev
I can see the container running with
docker ps
But when I access this new running container at http://172.17.183.118/PageA.aspx, the page doesn't come up (I took the IP 172.17.183.118 from the docker inspect command, so it is correct).
Can someone tell me why I am not able to view the page? Why does it give the "Resource cannot be found" error?
When you run a Docker container with the default settings, the container runs with an internal IP address, and exposed ports are mapped to ports on the local machine. The container's traffic reaches the Internet through the Docker bridge, which is associated with the local machine's network interface.
When you access the container from the local machine itself, you just need to access localhost with the port shown to you. In your case, that means the address http://localhost:62774/PageA.aspx. If you want to access the container from the Internet, you should access the IP address of your local machine with that port, i.e. the address http://your-local-machine-public-ip:62774/PageA.aspx.
You can get more details from Docker Network. Also, I suggest you run the container with an explicit port mapping, like docker run -d -p hostPort:containerPort --name containerName yourImage.
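A concrete sketch of that for the case above, assuming the webapplication3:dev image serves on container port 80 (both port numbers here are assumptions):
docker run -d -p 8080:80 --name webapp3 webapplication3:dev
The page would then be published on host port 8080, e.g. http://localhost:8080/PageA.aspx (subject to the Windows-container localhost caveat discussed in the last answer below).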

SSH Between Docker Instances between Hosts

I have the following setup: two physical machines on the same local network, each running the same Docker image. I have exposed a range of ports on both physical machines (2000-3000). The image has both the SSH client and the OpenSSH server installed, and when a container is run, port 22 is mapped to 2222. What I would like to be able to do is SSH from the container on Machine-01 to the container on Machine-02.
I realize that docker attach, etc. exist; however, I do have a specific use case for my application.
I know that my ports are open, as I can have netcat listening on one machine and then use nc -zv machine-02 2000 and get a response. Where I am stuck is getting the connection between the two containers. It should be noted that I can SSH into the container locally (machine-01 can get into its own container, but machine-02 cannot access it).
What is the best way of proceeding with this?
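For reference, given the mapping described (container port 22 published on host port 2222), the connection attempt from the container on machine-01 would look something like this (the user name is an assumption):
ssh -p 2222 user@machine-02
The target is the remote host's address and published port, not the remote container's internal IP, since the default bridge network is not routable between machines.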

Unable to connect outside database from Docker container App

We have two machines: one is a Windows machine and the other is a Linux machine. My application is running in a Docker container on the Linux machine; our database is running on the Windows machine, and our application needs to get data from the Windows machine's DB.
We have given the proper data source details (IP, username, password) in our application. It works when we do not use a Docker container, but when we use a Docker container it does not work.
Can anyone help us understand how we can connect to an outside DB from a Docker-enabled application? We are totally new to Docker.
Any help would be much appreciated.
A container's default network is "bridge"; you should choose a macvlan or host network instead.
Method 1:
docker run -d --net host image
This container will share your host's IP address and will be able to access your database.
Method 2:
Use the docker network create command to create a macvlan network (reference here, and see the sketch below), then create your container with
docker run -d --net YOURNETWORK image
The container will have an IP address on the same subnet, with the same gateway, as its host.
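A sketch of creating such a macvlan network; the subnet, gateway, and parent interface are assumptions and must match your LAN:
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  mymacvlan
docker run -d --net mymacvlan image
The container then gets an address directly on the 192.168.1.0/24 LAN, so the Windows database machine can reach it as a peer, and vice versa.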
There are a lot of issues that could be affecting your container's ability to communicate with your database. In the future you should compose your question with as much detail as possible. To correctly answer this you will, at a minimum, need to include the following details:
Linux distribution name & version
Docker version
Output of docker inspect from the container
Linux firewall configuration
Network configuration
Is your Windows machine running on the same local network / subnet as your Linux machine? If so, please provide information about the subnet, as the default bridge set up by Docker may restrict access to local resources, whereas those over a wide area network would still be accessible.
You can try passing the --network=host option to your docker run command like so: docker run --network=host <image name>. Doing so eliminates the need to specify port mappings in your run command, as they are ignored when using the host's network.
Please edit your question and include the above requested details to get a complete answer.

Can't connect to ASP.Net site in Docker for Windows

I am having difficulty connecting from the host to an ASP.Net website running in a Windows container on Docker. I can connect to a website running in a Linux container without any problem.
I have tried connecting both to localhost and to the IP and port assigned to the container, but in both cases I just get a timeout error.
I have tried several ASP.Net examples which are already pre-built along with trying to build my own custom image. In every case I get the same timeout error. I have also tried uninstalling and re-installing docker but that didn't change anything.
I am running Windows 10 Pro and Docker Community Edition Version 17.03.1-ce-win12 (12058)
Ultimately I was able to completely reset my container network using a customized older version of the Microsoft Virtualization cleanup scripts: https://github.com/Microsoft/Virtualization-Documentation/tree/live/windows-server-container-tools/CleanupContainerHostNetworking. This reset my container network and everything is now working as expected.
SUMMARY:
When the published port(s) for a container are defined using the EXPOSE directive in the container's Dockerfile, the -P argument must be used with the docker run command in order to "activate" those exposed port(s).
It is not possible for a Windows container host to access containers that it is running by using localhost, 127.0.0.1 or its external host IP address. Instead, access containers running on a given host, A, by using the IP address of A from a second host, B. Alternatively, you can use the IP address of a container directly.
FULL EXPLANATION:
So there are a few nuances in ensuring that the proper firewall rules are created and your containers are actually accessible on their published port(s).
For instance, I'll assume that your containerized ASP.NET application is defined by a container image, which was built from a Dockerfile. If so, you probably defined the published port for the image/app using the Dockerfile EXPOSE directive. In this case, when you actually run the container you need to "activate" that published port using the -P argument to the docker run command.
For example, if your container image is web_app, and the Dockerfile for that image included the line, EXPOSE 80, then when you go ahead and run that image you need to do something like:
C:\> docker run -P web_app
Once the container is running, it should be available on container port 80. You can then go ahead and view the app via browser. To do that you have two options:
You can access the app from your container host, using the container IP and port
Find the container IP using docker network inspect nat, then look for the endpoint/IP address that corresponds to your container.
You can also find the container IP by running docker exec <CONTAINER ID> ipconfig, where <CONTAINER ID> is the ID of your container.
You can get the ID of your container and the exposed port for your container by running docker ps on the container host.
You can access the app from another host machine, using the container host IP and host port
You can find the IP address of your host using ipconfig.
You can identify the host port upon which your app is exposed, by running docker ps from the host. Then, under PORTS you'll see a mapping of the form 0.0.0.0:<HOST PORT>-><CONTAINER PORT>/TCP. In this mapping <HOST PORT>, is the port upon which your app is available on the host.
Once you have the IP address of your container host and the port upon which your app is available on the host, you can use that information to access your app from a browser on a separate host, as in the walk-through below.
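Putting those together, a hedged walk-through (all addresses and ports here are illustrative):
docker ps
(the PORTS column might show a mapping such as 0.0.0.0:8080->80/tcp)
ipconfig
(note the host's IP address, say 10.0.0.5)
From a second machine, you could then browse to http://10.0.0.5:8080/.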
NOTE: Today you cannot access a container in this way from its own host. Currently a Windows container host cannot access the containers it is running, regardless of whether localhost, 127.0.0.1 or the host IP address is used.

How to share host network bridge when using docker in docker

I'm using the https://github.com/jpetazzo/dind Docker image to run Docker in Docker. When starting containers inside the parent Docker container, is it possible to use the parent's bridge, so that the inner containers and the parent container share the same network?
What I want to do is to access the containers inside the parent Docker container directly from the host by IP, so I can assign domain names to them.
UPDATE -> Main Idea
I'm upgrading a free online Java compiler to allow users to run any program using Docker. So I'm using the dind (Docker-in-Docker) image to launch a main container that has inside it a Java program that receives requests and launches Docker containers.
So what I want to do is give users the option to run programs that expose a port, and let them access their containers using a subdomain.
So graphically I have this hierarchy
Internet -> My Host -> Main Docker Container -> User Docker Container 1
-> User Docker Container 2
-> User Docker Container n
And what I want to do is give the user a subdomain name to access his "User Docker Container", for example: www.user_25.compiler1.browxy.com
So he can have a program that exposes a port in his "User Docker Container" and access it using the subdomain www.user_25.compiler1.browxy.com
What confuses me is that to access the "User Docker Container" I first need to go through the Main Docker Container. I'm trying to find a way to access the "User Docker Container" directly, so I thought that if the User Docker Container and the Main Docker Container shared the same network, I could access the User Docker Container directly from the host, and assign a domain name to its IP by updating the /etc/hosts file on the host.
Thanks a lot for any advice or suggestion :)
Finally, I took many of the ideas larsks gave me, and this is what I did (condensed commands follow this list):
Start the docker-in-docker container with a name (--name compiler)
Execute this command on the host: sudo route add -net 10.0.0.0 netmask 255.255.255.0 gw $(docker inspect --format '{{ .NetworkSettings.IPAddress }}' compiler)
For this to work, I added a custom bridge in the docker-in-docker container that ensures the IP range is 10.0.0.0/24
Now I can ping containers created inside the docker-in-docker container from the host
To get name resolution I installed docker-dns, as larsks suggested, in the docker-in-docker container and added its IP to /etc/resolv.conf on the host
The result is that from the host I can access, by name, containers that are created inside the docker-in-docker container.
One possible upgrade that I'd like is to configure everything with Docker and not add custom stuff to the host, but for now I don't know how to do that and I can live with this solution.
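Condensed into commands, the steps above look roughly like this (the image name and addressing are assumptions based on the description):
docker run --privileged -d --name compiler jpetazzo/dind
sudo route add -net 10.0.0.0 netmask 255.255.255.0 gw $(docker inspect --format '{{ .NetworkSettings.IPAddress }}' compiler)
Once the route is in place, containers on the inner 10.0.0.0/24 bridge are pingable from the host, and the docker-dns container's IP can be added as a nameserver in the host's /etc/resolv.conf.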
If you run your "Main docker container" with --net=host, then your configuration simplifies to:
Internet -> Host -> User Docker Container 1
-> User Docker Container 2
-> User Docker Container n
Although you probably want to use a bridge other than docker0 for the child containers (e.g., create a new bridge docker1, and start your dind Docker daemon with -b docker1).
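A minimal sketch of that bridge setup inside the dind environment (the names and address range are assumptions):
ip link add name docker1 type bridge
ip addr add 10.0.0.1/24 dev docker1
ip link set docker1 up
docker daemon -b docker1
(On newer installs the daemon binary is dockerd; the bridge can also be created with brctl addbr docker1.)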
If two users were to attempt to publish a service on the same port at the same IP address, then yes, you would have port conflicts. There are a few ways of working around this:
If you can support multiple public IP addresses on your host, then you can "assign" (in quotes because this would not be automatic) one to each container. Instead of running docker run -p 80:80 ..., you would need to make the bind IP explicit, like docker run -p 1.2.3.4:80:80 .... This requires people to "play nice"; that is, there is nothing to prevent someone from either forgetting to specify a bind address or specifying the wrong one.
If you are explicitly running web services, then you may be able to use some sort of front-end proxy to map subdomain names to containers using name-based virtual hosting. There are several components to this process, and automating it would probably require a little work. Doing it manually is comparatively easy (just update /etc/hosts, for example, as sketched below), but it is fragile, because a restarted container will have a new IP address. Something like a dynamic DNS service can help with this.
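For the manual variant, the /etc/hosts entry might be added like this (the container IP and subdomain are assumptions following the naming scheme above):
echo "10.0.0.25 www.user_25.compiler1.browxy.com" | sudo tee -a /etc/hosts
A front-end proxy can then route each subdomain to its container, but as noted, the entry has to be refreshed whenever the container's IP changes.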
These are mostly suggestions more than solutions, but let me know if you would like more details. There are probably other ways of cracking this particular nut, so hopefully someone else will chime in.
