I am running a Docker container locally on my Mac. I have pasted the Dockerfile contents at the bottom of this post. I have exposed port 8888 in the image, and I would like to access the Python program from my host browser via container-ip-address:8888, but it doesn't connect. Mapping it to a port on localhost works, but I don't want to do that. How can I solve this? (I already tried creating a Docker network and running the container as part of the new network - no dice.) Any help would be much appreciated.
Dockerfile contents -
FROM python:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
EXPOSE 8888
RUN pip install tornado
CMD ["python", "./api.py" ]
EXPOSE just tells Docker that this container is going to work with this port; it doesn't map it anywhere.
https://docs.docker.com/engine/reference/builder/#expose
Mapping is the task of whatever runs your container; on a local machine, that's Docker itself. If you don't map the port in docker run, the container will not be accessible from outside the Docker network, and your host machine is not part of that network. On a Mac this is doubly true: Docker Desktop runs containers inside a lightweight VM, so container IP addresses are not routable from the host at all, which is why container-ip-address:8888 never connects. The same applies if you create a Docker network manually - your machine simply has no access to its inner parts.
In other words, you have to choose your approach (examples follow the list):
use -p host_port:container_port when you do docker run (local development)
or define it in a docker-compose file if you use Docker Compose (mostly local development). This approach is very useful when you are running multiple communicating services in Docker.
or do it through a Kubernetes YAML definition file if you go into Kubernetes with your solution (production, with an orchestrator, autoscaler, etc.)
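For example, with the Dockerfile above, the first two options could look like this sketch (the image and service names are placeholders, not from the original post):

# option 1: publish container port 8888 on host port 8888 at run time
docker build -t my-tornado-app .
docker run -p 8888:8888 my-tornado-app
# the app is now reachable at http://localhost:8888

# option 2: the equivalent docker-compose.yml service definition
services:
  api:
    build: .
    ports:
      - "8888:8888"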
I am running a custom image (based on the locustio/locust Docker image) locally using the command below
docker run -p 5557:5557 my-stress-test:0.1
My Dockerfile looks like this
FROM locustio/locust:latest
COPY ./ /mnt/locust
CMD ["-P", "5557", "-f", "/mnt/locust/locustfile.py"]
Now, I deploy this image to my cloud service, which runs it by generating the command
docker run -p 5557 my-stress-test:0.1
This is the command I cannot change. However, I am not able to run the image without port forwarding like -p 5557:5557. How can I change my Dockerfile, or anything else, so the image runs without that port mapping?
With Docker, you should know how its networking works.
There are three types of port configuration:
docker run my-stress-test:0.1
docker run -p 5557:5557 my-stress-test:0.1
docker run -p 127.0.0.1:5557:5557 my-stress-test:0.1
In the first type, only apps in the same Docker network as this app can connect to that port.
In the second type, all apps inside and outside of the container network can connect to that port.
In the third type, only apps inside the container network and other apps on the host itself can connect to the app; apps outside of the host cannot.
I think the third type is what you are looking for.
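A quick way to see the difference between the second and third forms (a sketch; the container name "stress" is a placeholder):

docker run -d --name stress -p 127.0.0.1:5557:5557 my-stress-test:0.1
docker port stress               # prints: 5557/tcp -> 127.0.0.1:5557
curl http://127.0.0.1:5557       # reachable from the host itself
# connecting to <host-ip>:5557 from another machine is refused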
If you have multiple network interfaces on your host and you want apps from other hosts to reach the app through a specific one, you can bind that interface's IP to the port. For example:
docker run -p 192.168.0.10:5557:5557 my-stress-test:0.1
With Docker, by design, you can't publish ports in the Dockerfile or at any time other than at docker run.
How to publish ports in docker files
As to why you're unable to change the command that runs a container of your image, that question is best posed to your cloud provider.
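That said, the cloud's docker run -p 5557 my-stress-test:0.1 does still publish the port: when only a container port is given, Docker maps it to a random ephemeral port on the host, which you can look up afterwards. A sketch (the container name is a placeholder):

docker run -d --name stress -p 5557 my-stress-test:0.1
docker port stress 5557          # e.g. 0.0.0.0:49153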
I have cloned this project:
https://github.com/andfanilo/vue-hello-world
and made a Dockerfile for it:
FROM node:10
RUN apt-get update && apt-get install -y curl
# make the 'app' folder the current working directory
RUN mkdir /app
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . /app
WORKDIR /app
RUN npm install
CMD [ "npm", "run", "serve" ]
I build and run it with:
FRONTEND_IMAGE='frontend-simple-image'
FRONTEND_CONTAINER_NAME='frontend-simple-container'
docker build -t ${FRONTEND_IMAGE} .
docker rm -f ${FRONTEND_CONTAINER_NAME}
docker run -it --name ${FRONTEND_CONTAINER_NAME} ${FRONTEND_IMAGE}
It builds and runs successfully, and I can access it from my host browser.
This is all good, except that I would not expect to be able to access it from my host browser, according to:
https://docs.docker.com/config/containers/container-networking/
By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host. Here are some examples.
So why does accessing the web application from my host browser work without adding e.g. -p 8080:8080 to docker run?
It's all working as expected. To access the website you are using 172.17.0.2, which belongs to the default Docker bridge network, 172.17.0.0/16. This is the network in which all containers are created if you don't specify any other network.
Because the bridge is a network created on your host machine, you can freely access it using the container's direct IP address. But if you try to access the Vue app through localhost:8080 or 127.0.0.1:8080, you shouldn't be able to connect, as you are using a different network. After adding -p 8080:8080 the behavior changes, and the app becomes accessible through localhost.
Basically an "outside world" from Docker documentation means a network beyond the ones assigned to the container, so in your case, an "outside world" is anything but 172.17.0.0/16.
You can read more about container communications here.
I want to run nginx inside a Docker container as a reverse proxy that talks to an instance of gunicorn running on the host machine (possibly inside another container, but not necessarily). I've seen solutions involving Docker Compose, but I'd like to learn how to do it "manually" first, without learning a new tool right now.
The simplified version of the problem is this:
Say I run a docker container on my machine.
Outside the container, I run gunicorn on port 5000.
From within the container, I want to run ping ??? and have it reach the gunicorn instance started in step 2.
How can I do this in a simple, portable way?
The easiest way is to run gunicorn in its own container and expose port 5000 (not map it, just expose it).
It is important to create a network first and run both of your containers on the same network so that they can see each other: docker network create xxxx
Then when you run your two containers, attach them to this network: docker run ... --network xxxx ...
Give names to your containers; it is good practice. (e.g.: docker run ... --name gunicorn ...)
Now from your other container you can ping your gunicorn container: ping gunicorn, or even telnet to it on port 5000.
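Putting it all together, a minimal sketch (the image names and the network name are placeholders, and it assumes the gunicorn app listens on port 5000):

docker network create appnet
docker run -d --name gunicorn --network appnet my-gunicorn-image
docker run -d --name nginx --network appnet -p 80:80 my-nginx-image
# inside the nginx container, the gunicorn container resolves by name
docker exec nginx ping -c 1 gunicorn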
If you need more information just drop me a comment.
I want to copy a file from my localhost into the container.
RUN wget http://127.0.0.1/wso2/wso2esb-5.0.0.zip
I'm trying to parameterize the location (IP). For testing, I'm pointing it to localhost. When running a container we can bind to the host network with --net=host, but how can I access the host machine while I build the image? Or is there a default IP for the host machine at the build stage?
Maybe you can use docker cp ?
https://docs.docker.com/engine/reference/commandline/cp/
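For example (a sketch; the container name and target path are placeholders):

# copy the zip from the host into a running container
docker cp ./wso2esb-5.0.0.zip mycontainer:/opt/wso2esb-5.0.0.zip

Note that docker cp targets a container, not an image; if the file has to end up in the image itself, COPY it in the Dockerfile rather than downloading it with wget at build time.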
I am creating a Docker image using a Dockerfile build, where my base image is tomcat:8.0-jre8. Now in the Dockerfile I want to specify a custom port instead of 8080 and expose that custom port outside the Docker container.
Can anyone guide me how this can be done?
The port used inside the container has nothing to do with Docker; it is configured in Tomcat's configuration files (the Connector port in conf/server.xml).
You can map the port used internally to a port on the host with the --publish (or -p for short) option of the docker run command when you start the container.
docker run --publish=hostport:containerport ...
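A sketch of both steps together (port 9090 and the image name are just examples, not from the question):

# Dockerfile: switch Tomcat's internal HTTP connector to 9090
FROM tomcat:8.0-jre8
RUN sed -i 's/port="8080"/port="9090"/' /usr/local/tomcat/conf/server.xml
EXPOSE 9090

# build, then publish it when running
docker build -t my-tomcat .
docker run --publish=9090:9090 my-tomcat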