Connect to docker container from the host - docker

I have several Docker containers running on my local machine: one with SQL Server, one with RabbitMQ, and one with my code. Everything works fine within the Docker network, but how can I reach these containers from outside?
I would like to connect to SQL Server with the Management Studio installed on my desktop. I would also like to hit the RabbitMQ management console from the browser on my desktop.
Inside a container I reference the other containers by hostname, but those hostnames are not visible outside of the Docker network. I can connect with a container's IP, but that changes each time I start it.
Is there a way to give each container a hostname that is visible from the host? Is there a better approach?
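The usual way to do this is not a hostname but port publishing: map each container's service port to a fixed port on the host and connect to localhost. A minimal sketch of the idea (the image tags, SA password, and host-side ports here are assumptions, not taken from the question):

    # SQL Server: publish 1433 so Management Studio can connect to "localhost,1433"
    docker run -d --name sqlserver -p 1433:1433 \
        -e ACCEPT_EULA=Y -e SA_PASSWORD='Example!Passw0rd' \
        mcr.microsoft.com/mssql/server:2019-latest

    # RabbitMQ: publish 5672 (AMQP) and 15672 (management UI)
    docker run -d --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management

Because the host-side port is fixed, the changing container IP no longer matters: the management console is always at http://localhost:15672 and SQL Server at localhost,1433.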

Related

Connect docker container to a database that is on another network (not itself a container)

I am using Laradock; I built a container with php-fpm and nginx.
I need to connect to a database which resides on an outside network (using its IP) and is not itself a Docker container.
When the container tries to connect to the database, it times out.
I know I need to edit the network settings inside the docker-compose.yml file, but I cannot find any instructions on how to do so on the internet.
At least, none that have worked for me. Is this possible?
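For what it's worth, a compose service on the default bridge network can usually reach outside IPs already, so a timeout often points at the database server's firewall rather than at compose networking. If you do want to wire the address into docker-compose.yml, a minimal sketch (php-fpm is the Laradock service name; the IP 203.0.113.10 and the externaldb alias are placeholders):

    services:
      php-fpm:
        # give the external database a stable hostname inside the container
        extra_hosts:
          - "externaldb:203.0.113.10"

The application can then connect to externaldb on the database's port, and only this one entry changes if the database moves.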

Connect to AWS DocumentDB from within a docker container

B"H
I have a Docker container on EC2 attempting to connect to DocumentDB. DocumentDB needs to be within the VPC network.
When attempting to connect to DocumentDB in a non-host network mode the connection fails, but when I (hack and) run the container in host network mode it does work. For simple deployments and replicating my containers, though, that's a problem.
Any idea how to connect to DocumentDB (without SSH tunneling) from within Docker hosted on EC2?
If I understand correctly, you are running the container in none networking mode. None means all networking is disabled for the container. The most frequently used modes are bridge and host.
You can also refer to the post below, which explains how to run a Docker container in ECS and connect to DocumentDB:
https://aws.amazon.com/blogs/database/deploy-a-containerized-application-with-amazon-ecs-and-connect-to-amazon-documentdb-securely/
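On the default bridge network, the container's outbound traffic is NATed through the EC2 instance, so it can reach DocumentDB in the same VPC as long as the cluster's security group allows the instance on port 27017. A minimal sketch (the cluster endpoint, username, and image name are placeholders; DocumentDB requires TLS with the RDS CA bundle by default):

    # run on the default bridge network -- no --network host needed
    docker run -d --name app myapp:latest

    # test connectivity from inside the container with the mongo shell
    docker exec -it app mongo \
        --ssl --sslCAFile rds-combined-ca-bundle.pem \
        --host mycluster.cluster-abc123.us-east-1.docdb.amazonaws.com:27017 \
        --username myuser --password

If this still times out, the thing to check is the DocumentDB security group rather than Docker.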

Running couchbase cluster with multiple nodes in docker on windows 10

I created a Couchbase 4.0 Docker container with a single node on Windows 10, added the node IP to the host machine's loopback, and forwarded the port in VirtualBox so that the Couchbase client in my app running on the host could connect to the node in the cluster. I was able to connect and perform DB operations while I had a single node in the cluster.
However, when I created a multiple-node cluster in Docker on Windows 10, I was not able to perform DB operations. In the golang app running on the host I got the message "unable to complete action after 6 attempts" on get and set operations.
How do I run a Couchbase cluster of multiple nodes in Docker on the same Windows host so that I can connect to the cluster and perform DB operations from an app running on the host machine?
If your app is not running inside of the Docker host, as far as I know, you can't do this (I would LOVE to be proven wrong by a Docker expert).
Couchbase clients need access to every node in the cluster, and with Docker you can only forward one container to a given host port. (FYI, there is a tool called SDK Doctor which you can use to diagnose connectivity/networking issues.)
I would suggest running your golang app inside of the Docker host (using docker-compose is the way this is typically done).
Also, I would highly suggest upgrading to a more recent version of Couchbase.
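A minimal sketch of that docker-compose setup, assuming the official couchbase image and a golang app built from the current directory (the image tag and service names are placeholders):

    version: "3"
    services:
      couchbase1:
        image: couchbase:community-4.0.0
      couchbase2:
        image: couchbase:community-4.0.0
      app:
        build: .
        depends_on:
          - couchbase1
          - couchbase2

Inside this network the app can bootstrap against couchbase1:8091 and reach every node by its service name, which is exactly the per-node access the Couchbase client needs.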

Docker for Windows swarm overlay networking, connecting to the swarm from outside or localhost

I cannot connect to the published port on a swarm that uses overlay networking. I am using Docker for Windows with Windows containers. Both Windows and Docker are fully upgraded. I was hoping this issue would be resolved after Windows' 1709 update. I looked for information on the Internet to see if I was doing something wrong, to no avail. I would like to know if anyone has been able to get this working.
On a side note, when I publish the port on my machine with docker run -p 80:80 without using swarm, "localhost" does not work either. I think this is a known limitation, though. Both setups work when I switch to Linux containers.
Expected behavior
I am running a dotnet kestrel web server service. I should be able to connect to my service using the published port.
Actual behavior
Firefox gives me a timeout; Opera immediately returns "connection refused". I cannot telnet into it either. The container IPs assigned by the overlay network do not work either.
Information
docker service ls gives me this:
Ports cannot be seen there; is it because publish mode is host? The port information is available in the output of docker service ps.
When I change the publish mode, I can scale the service as well, and the port information is shown in docker service ls, albeit I still cannot connect. The run below is without the publish mode=host parameter:
For more info, this is the output of docker network ls. I wonder if I need some sort of bridge network like on Linux.
Steps to reproduce the behavior
Initialise swarm
Start the service; in my case, a simple web service built using the aspnetcore:latest image. I tried different parameters and even used a docker-stack.yml:
docker service create --name=web --publish mode=host,published=80,target=80 web:aspnetcorelatest (in the case above, I was unable to scale it on the same machine, which is normal, I guess)
docker service create --name=web --publish published=85,target=80 web:aspnetcorelatest
Try to connect using http://localhost or another IP. I tried connecting over VPN, from another machine, and via the Internet IP.
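The side note about localhost matches the WinNAT limitation of that era: ports published with docker run -p could not be reached via loopback on the host, but could be reached via the host's own IP. One quick check worth adding to the repro (the 192.168.1.20 address is a placeholder for whatever ipconfig reports):

    # find the host's IPv4 address...
    ipconfig
    # ...then hit the published port through that address instead of localhost
    curl http://192.168.1.20:85/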

Replace Docker container with external service

I'm just getting started with Docker, and I read a ton of documentation and tutorials yesterday, but I can't find where I read about replacing an external service using a linked container, and I'm not even sure which terminology to search for.
Say there is an Apache container and a MySQL container, where Apache was run with a link to MySQL and has access to its ports and such. Now, instead of MySQL running in a container instance, we move it to AWS RDS, for example. How do you modify the mysql container so that Apache continues to run as expected? To clarify, Apache would still be run with a link to a container with the alias mysql, but that container would take care of forwarding traffic on that port to AWS.
Alternatively, maybe there is a container running a MySQL service, but that container is on another host. I have a vague feeling that the pattern I'm referring to would be able to handle that scenario as well. Does this sound familiar to anyone?
If the container is on another host, why not just hit that host directly and have Docker transparently forward requests on 3306 (or whatever port you're running MySQL on) to the container? I can't think of any reason you'd want to link containers unless they're actually on the same host. Docker is great at being transparent, so clients on another machine can run things against a service in Docker as if the service were running directly on the machine without Docker.
If you really have to have both containers on the same machine (even though the mysql container is calling out to RDS or another host), you should be able to make a simple new mysql image that just accepts connections and forwards them on to RDS.
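This is essentially Docker's ambassador pattern: a tiny forwarding container keeps the mysql alias while the real service lives elsewhere. A minimal sketch using socat (the alpine/socat image and the RDS endpoint are placeholders; any TCP forwarder would do):

    # the "mysql" container is now just a TCP proxy to RDS on the default MySQL port
    docker run -d --name mysql alpine/socat \
        TCP-LISTEN:3306,fork TCP:mydb.abc123.us-east-1.rds.amazonaws.com:3306

    # apache keeps its existing link; nothing in its configuration changes
    docker run -d --name apache --link mysql:mysql my-apache-image

Replacing RDS later only means swapping the forwarding container; the link alias the application sees stays the same.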
