Share localhost:port loadbalancer with kubernetes - docker

Do you know if it is possible to share localhost:port with Kubernetes?
I am running Kubernetes in docker-for-mac, and when creating a loadbalancer, everything works great for containers running in Kubernetes via localhost.
Sometimes I like to test some code in a container running just as a docker run, where I am opening ports with something like -p 8080:80.
Now the question is: will it conflict with the localhost running the k8s loadbalancer if I run on ports not open to the Kubernetes loadbalancer?
My guess is that it does not work, as I am experiencing some problems reaching ports running with docker run.
If it does not work, how do you docker run alongside Kubernetes?

If you’re using the Kubernetes built into Docker (Edge) for Mac, it is the same Docker daemon, and docker run -p will publish ports on your host as normal. This should share a port space with services running outside Docker/Kubernetes and also with exposed Kubernetes services.
You need to pick a different host port with your docker run -p option if you need to run a second copy of a service, whether the first one is another plain Docker container or a Kubernetes Service or a host process or something else.
Remember that “localhost” is extremely context sensitive; I’d avoid using it in questions like this. If you docker run -p 8080:80 ... as you suggest, the host can make outbound calls to the container at localhost:8080; the container can make outbound calls to itself at localhost:80; and nothing in any Kubernetes pod or any other container can see the service at localhost on any port.
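
The shared host port space described above can be checked before starting another container. This is an added example, not code from the original answer: it parses `-p` style specs and reports which listeners would collide on the same host port. The listener names and the sample specs are constructed, only IPv4 forms are parsed, and the rule that a wildcard bind clashes with any specific address is a conservative assumption.

```python
import ipaddress
from collections import defaultdict

def parse_publish(spec):
    """Parse '[host_ip:]host_port:container_port[/proto]' (IPv4 forms only)."""
    body, _, proto = spec.partition("/")
    parts = body.split(":")
    if len(parts) == 2:            # no address given: Docker binds all interfaces
        parts.insert(0, "0.0.0.0")
    if len(parts) != 3:
        raise ValueError(f"unsupported publish spec: {spec!r}")
    host_ip = str(ipaddress.IPv4Address(parts[0]))  # rejects malformed addresses
    return host_ip, int(parts[1]), int(parts[2]), proto or "tcp"

def find_conflicts(listeners):
    """listeners: {name: [spec, ...]} -> [(host_port, proto, name_a, name_b), ...]"""
    by_port = defaultdict(list)
    for name, specs in listeners.items():
        for spec in specs:
            host_ip, host_port, _ctr_port, proto = parse_publish(spec)
            by_port[(host_port, proto)].append((name, host_ip))
    conflicts = []
    for (host_port, proto), users in sorted(by_port.items()):
        for i, (name_a, ip_a) in enumerate(users):
            for name_b, ip_b in users[i + 1:]:
                # Assumption (conservative): a wildcard bind clashes with any
                # specific address on the same port and protocol.
                if ip_a == ip_b or "0.0.0.0" in (ip_a, ip_b):
                    conflicts.append((host_port, proto, name_a, name_b))
    return conflicts

# Constructed sample: host ports already claimed, written as -p style specs.
SAMPLE = {
    "k8s-service-lb": ["80:80"],              # exposed Kubernetes Service
    "host-process": ["127.0.0.1:5432:5432"],  # something listening on the host
    "test-a": ["8080:80"],                    # docker run -p 8080:80
    "test-b": ["127.0.0.1:8080:80"],          # second copy on the same host port
    "test-c": ["8081:80", "8080:80/udp"],     # different port, different protocol
}

if __name__ == "__main__":
    for host_port, proto, a, b in find_conflicts(SAMPLE):
        print(f"host port {host_port}/{proto}: {a} conflicts with {b}")
```

A spec without an address is treated as a bind on all interfaces, which is also why `-p 127.0.0.1:8080:80` is the form to use when the port should be reachable only from the host itself.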

Related

Docker networks: How to get container1 to communicate with server in container2

I have 2 containers on a docker bridge network. One of them has an Apache server that I am using as a reverse proxy to forward users to a server on another container. The other container contains a server that is listening on port 8081. I have verified both containers are on the same network, and when I log into an interactive shell on each container I tested successfully that I am able to ping the other container.
The problem is that when I am logged into the container with the Apache server, I am not able to ping the actual server in the other container.
The IP address of the container with the server is 172.17.0.2
How I create the docker network:
docker network create -d bridge jakeypoo
How I start the containers:
docker container run -p 8080:8080 --network="jakeypoo" --name="idpproxy" idpproxy:latest
docker run -p 8081:8080 --name geoserver --network="jakeypoo" geoserver:1.1.0
Wouldn't the URI to reach out to the server be
http://172.17.0.2:8081/
?
PS: I am sure more information will be needed, and I am new to Stack Overflow and will happily answer any other questions I can.
Since you started the two containers on the same --network, you can use their --name as hostnames to talk to each other. If the service inside the second container is listening on port 8080, use that port number. Remappings with docker run -p options are ignored, and you don't need a -p option to communicate between containers.
In your Apache config, you'd set up something like
ProxyPass "/" "http://geoserver:8080/"
ProxyPassReverse "/" "http://geoserver:8080/"
It's not usually useful to look up the container-private IP addresses: they will change whenever you recreate the container, and in most environments they can't be used outside of Docker (and inside of Docker the name-based lookup is easier).
(Were you to run this under Docker Compose, it automatically creates a network for you, and each service is accessible under its Compose service name. You do not need to manually set networks: or container_name: options, and like the docker run -p option, Compose ports: are not required and are ignored if present. Networking in Compose in the Docker documentation describes this further.)
Most probably this can be the reason.
When you log into one of the containers, that container does not know anything about the other container's network. When you ping, that container thinks you are trying to ping a service inside that container.
Try to use Docker Compose if you can use it in your context. Refer to this link:
https://docs.docker.com/compose/

Communicate with a service inside a docker from the host without using its IP

I have a process running on a host that needs to communicate with a docker, and I want it to be done by some parameter that can't change (like docker name or host name), unlike IP (I prefer not to make the IP of the docker static or install external dockers for this).
I'm aware that dockers can resolve addresses by name in a private network, and that's what I want, but not between dockers: between a process running on the host and a docker.
I couldn't find a solution. Can it be done?
Edit:
I'm not allowed to use host network and open additional ports on the host for security reasons.
You're welcome to choose the way which fits your needs better.
Option 1. Use host's networking. In this case Docker does not create separate net for container and you connect to container's services as if they would run on your host:
docker run --network=host <image_name>
Drawback of this approach: low isolation and thus security. You don't need to expose any ports here - if the service listens on 8080, just open localhost:8080 and enjoy.
Second approach is more correct - you expose (somehow forward) internal ports in container and map them onto ports in the host.
docker run -p 8080:80 <image_name>
This will map port 80 from container to port 8080 on the host. As in previous example, you still connect using localhost, e.g. localhost:8080.
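
The asker rules out host networking and extra open host ports, which are exactly the two options above, so it helps to be able to check which running containers use them. This is an added example, not code from the original answer: it reads `docker inspect` JSON and flags containers started with `--network=host` or with a port published on all interfaces. The container names and the sample JSON are constructed and trimmed to the fields the check reads.

```python
import json

WILDCARD_IPS = {"", "0.0.0.0", "::"}   # an empty HostIp means "all interfaces"

def audit_exposure(inspect_json):
    """Return sorted (container, finding, detail) tuples from `docker inspect` JSON."""
    findings = []
    for ctr in json.loads(inspect_json):
        name = ctr["Name"].lstrip("/")
        host_cfg = ctr.get("HostConfig") or {}
        if host_cfg.get("NetworkMode") == "host":
            # --network=host: no port mapping applies, every listener is on the host
            findings.append((name, "host-network", ""))
        for ctr_port, binds in (host_cfg.get("PortBindings") or {}).items():
            for bind in binds or []:
                if bind.get("HostIp", "") in WILDCARD_IPS:
                    detail = f"{bind.get('HostPort', '')}->{ctr_port}"
                    findings.append((name, "published-on-all-interfaces", detail))
    return sorted(findings)

# Constructed sample, trimmed to the fields the audit reads.
SAMPLE = json.dumps([
    {"Name": "/web", "HostConfig": {"NetworkMode": "bridge",
        "PortBindings": {"80/tcp": [{"HostIp": "", "HostPort": "8080"}]}}},
    {"Name": "/api", "HostConfig": {"NetworkMode": "bridge",
        "PortBindings": {"80/tcp": [{"HostIp": "127.0.0.1", "HostPort": "8081"}]}}},
    {"Name": "/legacy", "HostConfig": {"NetworkMode": "host", "PortBindings": {}}},
    {"Name": "/worker", "HostConfig": {"NetworkMode": "bridge", "PortBindings": None}},
])

if __name__ == "__main__":
    for name, finding, detail in audit_exposure(SAMPLE):
        print(name, finding, detail)
```

On a real host the input would be the JSON array that `docker inspect` prints for the running containers; the loopback-only `api` container in the sample is deliberately not flagged.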

Containers started with docker-compose inside another container are unreachable

I'm using a dedicated container for running generic project-related shell scripts in order to avoid having to test scripts on multiple environments (mac, win, ubuntu, debian...) and to minimize software requirements on the host OS. Even docker-compose commands are run from the console container. /var/run/docker.sock is bind mounted from host.
Everything else seems to be working fine, but for example if I run docker-compose up traefik inside the console container, traefik starts normally but it's unreachable both on the host and even on another container in the same network. If docker-compose up traefik is run from the host OS (Windows 10), traefik becomes reachable as expected. I suspect this has something to do with how Docker or docker-compose handle networking but I'm not completely sure. I did check that regardless of how I start the traefik container, the same ports appear instantly in NirSoft CurrPorts (sort of gui for netstat).
Is there any way (and how) to fix this?
EDIT
I realised that this must be somehow an error on my part, since dockerized docker guis exist and they assumably don't have any problems bringing up containers that are accessible from the host and outside world.
Now I'm wondering if this might be a simple configuration error either in my docker(-compose) settings or somewhere else on my host machine, or do guis like Portainer go through some extra steps in order to expose the started containers to the host?
For development purposes we all map the port of Traefik to 80, so I will assume the same in your case as well. Let's assume that you are running the Traefik container on port 80, which is mapped to port 80 on the host. But according to your Traefik container, the host machine is nothing but the container which is used for running the scripts. But port 80 of the shell script container is not mapped to the host machine of that particular container. I hope now you have been lost somewhere around the port mapping and containers.
Let me describe your situation in the image below.
To make your setup work you should deploy your containers as shown above along with the port mapping.
To simplify the answer,
docker run -t -d -p 80:80 shellScriptImage
docker run -t -d -p 80:80 traefik (- inside the shell script container)
By doing this you can access the traefik container from the outside.

best practices for deploying nginx

I am totally new to the cloud stuff. I wanted to deploy my application, which uses node, MongoDB and redis. All these parts became Docker containers and are working well together.
Now I want to set up nginx. I wonder what is the best practice for deploying load balancers? Should I run nginx as a docker container, or just install it at the system level?
I think it depends on how many services you want to serve with your nginx instance. For example, since you can have only one nginx instance bound to the 80 and 443 ports, if you want to share the same SAP between different domains I would go for nginx running on the host (or in a dedicated stack but it looks complex). If you use the SAP for a single domain then it makes perfect sense to have it inside the stack.
If you are running other components of the stack in containers, then it makes sense to run nginx as a container as well.
But it depends on your environment and what tools are available. You can scale nginx on Kubernetes easily, as well as on Docker Swarm or any other tool of your choice.
Ideally you need to run each component in a separate container so that you can manage, scale and troubleshoot them independently.
It's a really good idea to embed an nginx in your docker network. As a docker container, in a docker network, it can connect to others by their service/container name, while you define a port forwarding rule only on the nginx service.
For example:
docker network create --driver overlay --attachable demo
docker run -d -p 80:80 --network demo --name nginx nginx
docker run -it --network demo --name alpine alpine
Your shell should be in the alpine container. Do a "ping nginx". You should be able to ping it. The opposite is possible too.
So now you have an nginx deployed at localhost:80 (from your host machine), which can call other containers by their container/service name. This is really useful for having an access point to your web APIs deployed in your docker network.

How to make a container visible to the outside network, and handle I.P addresses in production

I have:
a Windows server on bare metal with Hyper-V
Ubuntu server running in Hyper-V
a Docker container with an NGINX web application running in Ubuntu server
Every time I run a Docker image it gets a new I.P. address on the Docker0 network interface. For production, I don't know how to make the Docker container visible to the external network. I also don't know how to handle the fact that the I.P address changes every time the image is run.
What's the correct way to:
make a Docker container visible to the external network?
handle Docker container I.P. addresses in a repeatable way in production?
When you run your Docker container with docker run, you should use the -p switch to forward ports, for example:
docker run -p 80:80 nginx
This would route port 80 from the Ubuntu server to port 80 within the Nginx container.
You should check the Docker documentation on this at https://docs.docker.com/reference/run/#expose-incoming-ports.
When you have multiple containers and links, you should use EXPOSE in the Dockerfile as documented here: https://docs.docker.com/reference/builder/#expose.
