I am running a custom image (based on the locustio/locust Docker image) locally using the command below:
docker run -p 5557:5557 my-stress-test:0.1
My Dockerfile looks like this:
FROM locustio/locust:latest
COPY ./ /mnt/locust
CMD ["-P", "5557", "-f", "/mnt/locust/locustfile.py"]
Now I deploy this image to my cloud service, which runs it by generating the command:
docker run -p 5557 my-stress-test:0.1
This is the command I cannot change. However, the image does not work without the full port mapping, like -p 5557:5557. How can I change my Dockerfile, or anything else, so that the image runs without explicit port forwarding?
With Docker, you need to understand how its networking works.
There are three types of port configuration:
docker run my-stress-test:0.1
docker run -p 5557:5557 my-stress-test:0.1
docker run -p 127.0.0.1:5557:5557 my-stress-test:0.1
With the first command, only apps on the same Docker network as this app can connect to that port.
With the second command, all apps inside and outside of the container network, including other hosts, can connect to that port.
With the third command, apps inside the container network and other apps on the host itself can connect to the app, but apps outside of the host cannot.
I think the third type is what you are looking for.
If your host has multiple network interfaces and you want apps on other hosts to access the app, you can bind that interface's IP to the port. For example:
docker run -p 192.168.0.10:5557:5557 my-stress-test:0.1
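Whichever form you use, you can verify how a running container's ports are actually bound with docker port (a minimal check; <container> is the name or ID shown by docker ps):
docker port <container>
For the third type, the output would look something like 5557/tcp -> 127.0.0.1:5557, showing the port is bound to the loopback interface only.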
By design, Docker ports can only be published at docker run; you can't publish them in the Dockerfile or at any other point. See:
How to publish ports in docker files
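What you can put in the Dockerfile is an EXPOSE line (not to be confused with the -P in your CMD, which is an argument passed to Locust itself). EXPOSE publishes nothing by itself, but docker run -P (capital P) will then publish every exposed port to an ephemeral host port. A minimal sketch, assuming the image from the question:
EXPOSE 5557
docker run -P my-stress-test:0.1
docker port <container-id>
This still requires a flag at run time, and Docker chooses the host port, so it only helps if whatever calls your service can discover the mapping.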
As for why you're unable to change the command that runs a container of your image, that question is best posed to your cloud provider.
Related
I am running a Docker container locally on my Mac system. I have pasted the Dockerfile contents at the bottom of this post. I have exposed port 8888 in the image, and I would like to access the Python program from my host browser at container-ip-address:8888, but it doesn't connect. Mapping it to a port on localhost works, but I don't want to do that. How can I solve this? (I already tried creating a Docker network and running the container as part of the new network - no dice.) Any help would be much appreciated.
Dockerfile contents -
FROM python:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
EXPOSE 8888
RUN pip install tornado
CMD ["python", "./api.py" ]
EXPOSE just tells Docker that this container is going to use this port; it doesn't map it anywhere.
https://docs.docker.com/engine/reference/builder/#expose
Mapping is the job of whatever code is running your container. On a local machine, that's Docker itself. If you don't map the port in docker run, the container will not be accessible from outside the Docker network, and your host machine is not in the Docker network. The same applies if you create a Docker network manually: your machine simply doesn't have access to the inside of it.
In other words, you have to choose your way:
use -p .... when you do docker run (local development),
or define it in a docker-compose file if you use Docker Compose (mostly local development); this approach is very useful when you are running multiple communicating systems in Docker,
or do it through a Kubernetes YAML definition file if you go into Kubernetes with your solution (production with an orchestrator, autoscaler, etc.); a minimal sketch of this case follows this list.
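For the Kubernetes case, the port mapping lives in the YAML rather than on the command line. A minimal Service sketch, assuming pods labelled app: api with a server listening on 8888 (all names here are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 8888        # port the Service exposes inside the cluster
      targetPort: 8888  # port the container actually listens on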
I have a Docker container that has a Flask server inside, run with Gunicorn.
Locally I run it using docker run -p 443:443 appcontainer and it works just fine.
I can't figure out how to tell Google Cloud Run to do the same. Is it possible to specify -p for it, or any other Docker command-line arguments for that matter?
According to the Docker documentation
Published ports
By default, when you create or run a container using docker create or docker run, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container's network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host to the outside world.
Cloud Run (fully managed) always exposes services on a single port (:443) over HTTPS, while the container itself listens on the default port 8080. From my understanding, the default setup is effectively something like -p 443:8080.
However, you can configure which port requests are sent to inside the container if you want to change the default of 8080:
Configuring the container port
gcloud run services update SERVICE --port 443
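Whatever port you configure, Cloud Run passes it to the container as the PORT environment variable, and the container is expected to listen on it. A minimal Dockerfile CMD sketch for the Flask/Gunicorn case, assuming your Flask app object is app in app.py:
# shell form so that $PORT is expanded when the container starts
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 app:app
With that CMD, the same image works locally with docker run -e PORT=8080 -p 443:8080 appcontainer and on Cloud Run without any -p flag.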
I have my application running locally in a Docker container, and I have published the port which I want to use to invoke its API. However, the application also needs to make network requests to externally hosted APIs, and currently I get network errors when it tries. How do I give my Docker container access to the same network that my local machine is on? Is there a Docker config I need to pass to my docker run -it -p 8080:8080 command?
You need to add this option to your docker run command:
--network host
It binds the container's networking directly to the Docker host's network.
Documentation: https://docs.docker.com/network/host/
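For example (assuming your image is called my-app):
docker run -it --network host my-app
With host networking the container shares the host's interfaces directly, so any -p/--publish flags are ignored and the app's ports are open on the host itself. Note that host networking is only fully supported on Linux hosts.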
I have two services running in separate containers: one is Grunt (the application), which runs on port 9000, and the other is Sails.js (the server), which runs on port 1337. What I want to do is have the client app connect to the server through localhost:1337. Is this feasible? Thanks.
HOST
You won't be able to reach the other container via localhost (localhost is the current container itself), but you can connect via the container host (the machine that is running your containers). In your case you need the boot2docker VM IP (echo $(boot2docker ip)). For this to work, you need to expose the port at the host level, which you are doing with -p 1337:1337.
LINK
Another solution, which is the most common one and the one I prefer when possible, is to link the containers.
You need to add the --name flag to the server docker run command:
--name sails_server
You need to add the --link flag to the application docker run command:
--link sails_server:sails_server
And inside your application, you will be able to access the server at sails_server:1337.
You could also use environment variables to get the server IP. See documentation: https://docs.docker.com/userguide/dockerlinks/
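Putting both flags together, the two run commands would look something like this (the image names are placeholders):
docker run -d --name sails_server my-sails-image
docker run -d -p 9000:9000 --link sails_server:sails_server my-grunt-image
The link makes Docker add a sails_server entry to the application container's /etc/hosts, which is why the hostname resolves.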
BONUS: DOCKER-COMPOSE
Your run commands may start to get a bit long... in this case I like to use docker-compose, which lets me define my containers and their relationships (volumes, names, links, commands...) in one file.
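A minimal sketch of such a compose file for this setup, using the modern Compose file format (image names are placeholders; on the default Compose network containers can also reach each other by service name, so the explicit link is mostly redundant today):
services:
  sails_server:
    image: my-sails-image
    ports:
      - "1337:1337"
  app:
    image: my-grunt-image
    ports:
      - "9000:9000"
Then docker-compose up starts both containers, and the application can reach the server at sails_server:1337.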
Yes. If you use the -p 1337:1337 parameter in your docker run command, it will map port 1337 inside the container to localhost:1337 on your host.
I have an Ubuntu machine, which is a VM where I have installed Docker. I am using this machine from my local Windows machine by opening a terminal to it over SSH.
Now, I am going to take a Docker image which contains all the necessary software (e.g. Apache) installed in it. Later I am going to deploy a sample application (a web application) onto it and save it.
I am confused about how to check whether the deployed application is running properly, i.e. what the address of the container containing the deployed application would be.
For example, if I browse to http://127.x.x.x, which is the address of the Ubuntu machine, I just get a timeout.
Can anyone tell me how to verify the deployed application? Printing program output to the console works seamlessly; my only doubts are about the web application.
There are some possibilities to check whether your app is running.
Remote API
As JimiDini said, one possibility is the Docker remote API. You can use it to see all running containers (which would be your use case, right?), inspect a certain container, or start and stop containers. The API is a REST API with several bindings for programming languages (listed at https://docs.docker.io/reference/api/remote_api_client_libraries/); some of them are very outdated. To use the Docker remote API from another machine, I needed to open it explicitly:
docker -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock -d &
Note that the API is open to the world now! In a real scenario you would need to secure it in some way (e.g. see the example at http://java.dzone.com/articles/securing-docker%E2%80%99s-remote-api).
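Once the API is open like this, you can query it with plain HTTP from the other machine, e.g. to list running containers (a minimal check; <host-ip> is the address of the Docker host, and 4243 is the port opened above):
curl http://<host-ip>:4243/containers/json
This returns a JSON array with one entry per running container, including its ports.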
Docker PS
Run docker ps on your host to list all running containers, along with the ports each one is exposing. If you do not see your app there, it is not running. You can also get this via the remote API.
Logs
You can also check the logs. Run docker attach <container id> to attach to a certain container and see its stdout, or docker logs <container id> to get the Docker logs. What I prefer is to write the logs to a certain directory, e.g. everything to /var/log, and mount this folder to my host machine. Then all your logs will end up in /home/ubuntu/docker-logs on your host:
docker run -p 80:8080 -v /home/ubuntu/docker-logs:/var/log:rw my/application
A word on ports and IPs
Every container gets its own IP address. You can check this IP address via the remote API or via Docker on the host machine directly. You can also give the container a specific hostname (by passing --hostname="test42" to the run command), but you mostly won't need that.
To access the application in the container, you need to open the port in the container and bind to a port on the host.
In your Dockerfile you need to EXPOSE the port your app runs on:
FROM ubuntu
...
EXPOSE 8080
CMD run-my-app.sh
When you start your container, you need to bind this port to a port of the host:
docker run -p 80:8080 my/application
Now you can access your app on http://localhost:80 or http://127.0.0.1:80.
If your app does not respond, check whether the container is running by typing docker ps or via the remote API. If it is not running, check the logs for the reason.
(Note: If you run your Ubuntu VM in something like VirtualBox and you try to access it from your Windows machine, make sure you opened the ports in VirtualBox too!).
A Docker container has its own IP address. By default it is private (accessible only from the host machine).
Docker provides all metadata (including IP address) via its API:
https://docs.docker.io/reference/api/docker_remote_api_v1.10/#inspect-a-container
https://docs.docker.io/reference/api/docker_remote_api_v1.10/#monitor-docker-s-events
You can also take a look at a little tool called docker-gen for inspiration. It monitors Docker events and creates configuration files on the host machine using templates.
To obtain the IP address of a Docker container, if you know its ID (a long hex string) or its name:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container-id-or-name>
Docker runs its own network; to get information about it, you can run the following commands:
docker network ls
docker network inspect <network name>
docker inspect <container id>
In the output, you should be able to find the IP.
But there are also a couple of things you need to be aware of regarding the Dockerfile and the docker run command:
when you EXPOSE a port in the Dockerfile, the service in the container is not accessible from outside Docker, only from other Docker containers
when you EXPOSE the port and use the docker run -p ... flag, the service in the container is accessible from anywhere, even outside Docker
So, for example, if your Apache is running on port 8080, you should expose it in the Dockerfile and then run:
docker run -d -p 8080:8080 <image name>
You should then be able to access it from your host at http://localhost:8080.
It is an old question/answer but it might help somebody else ;)
Working as of 2020:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id