When I run the following commands, I can successfully access 127.0.0.1:80 on localhost.
docker run -p 127.0.0.1:80:80 --name Mynginx -dt nginx
docker exec -it Mynginx bash
But if I run the same commands on a DigitalOcean droplet, how do I access it? (I tried the droplet's IP address on port 80, but I get nothing.)
You need to EXPOSE the port. See the documentation for more information on how.
Running from the command-line
If you run the containers from the command line, you can map the ports with the -p flag. You can map multiple ports.
docker run -dt -p 80:80 --name Mynginx nginx
or
docker run -dt -p 80:80 -p 443:443 --name Mynginx nginx
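The key difference from the command in the question is the missing 127.0.0.1 prefix: binding to 127.0.0.1 makes port 80 reachable only from inside the droplet itself, whereas plain -p 80:80 binds it on all interfaces so the droplet's public IP works. A quick way to check the binding (container name as in the question):
docker port Mynginx 80
If it prints 0.0.0.0:80, the port is published on all interfaces; 127.0.0.1:80 means it is only reachable locally.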
Docker-compose
If you're using docker-compose, you can add an expose entry to your YAML file.
version: '2.3'
services:
  my_container:
    container_name: "Mynginx"
    image: nginx:latest
    expose:
      - "80"
You need to update your droplet's firewall settings to allow incoming connections on port 80. To do this, select your droplet.
Then go to Networking -> Manage Firewalls -> Create Firewall
Then under Inbound Rules, create a new HTTP rule by selecting HTTP from the dropdown menu. Scroll down and apply this firewall to your droplet; you should then be able to receive inbound traffic on port 80. You will have to add a similar rule for any other ports you want to open.
See here for more details.
Related
How do I use nginx in one container to access another container via a config file?
I am a beginner with Docker.
I am trying to learn how to use nginx to manage my applications in Docker containers.
I will use "pgadmin" as an example application in a container.
Create & start the containers. I try to use the --link parameter to connect the two containers.
sudo docker create -p 80:80 -p 443:443 --name Nginx nginx
sudo docker create -e PGADMIN_DEFAULT_EMAIL=houzeyu2683#gmail.com -e PGADMIN_DEFAULT_PASSWORD=20121006 -p 5001:80 --link Nginx:PSQLA --name PSQLA dpage/pgadmin4
sudo docker start Nginx
sudo docker start PSQLA
Open a bash shell in the Nginx container and install the nano editor.
sudo docker exec -it Nginx bash
apt update
apt install nano
Create and set up the nginx config file admin.conf.
nano /etc/nginx/conf.d/admin.conf
The contents of admin.conf are as follows.
server {
    listen 80;
    server_name admin.my-domain-name;
    location / {
        proxy_pass http://PSQLA:80;
    }
}
I get the error below.
2020/10/17 01:57:16 [emerg] 333#333: host not found in upstream "PSQLA" in /etc/nginx/conf.d/admin.conf:5
nginx: [emerg] host not found in upstream "PSQLA" in /etc/nginx/conf.d/admin.conf:5
How do I use nginx in one container to access another container via a config file?
Try the following commands (in the same order) to launch the containers:
sudo docker create -e PGADMIN_DEFAULT_EMAIL=houzeyu2683#gmail.com -e PGADMIN_DEFAULT_PASSWORD=20121006 -p 5001:80 --name PSQLA dpage/pgadmin4
sudo docker create -p 80:80 -p 443:443 --link PSQLA:PSQLA --name Nginx nginx
sudo docker start PSQLA
sudo docker start Nginx
Now edit the Nginx configurations and you should not encounter the error anymore.
Tl;dr
As mentioned in the docker documentation:
When you set up a link, you create a conduit between a source container and a recipient container. The recipient can then access select data about the source.
In order to access PSQLA from the Nginx container, we need to link the Nginx container to the PSQLA container, and not the other way around.
Now the question is: what difference does that even make?
For this we need to understand how the --link option works in Docker.
Docker adds a host entry for the source container to the recipient container's /etc/hosts file.
We can verify this in the /etc/hosts file inside the Nginx container. It contains a new entry, something like this (the ID and IP might be different in your case):
172.17.0.4 PSQLA 1117cf1e8a28
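If you want to check this yourself, something like the following should print that entry (assuming the container is named Nginx, as above):
sudo docker exec -it Nginx cat /etc/hosts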
This entry lets the Nginx container reach the PSQLA container by its container name.
Please refer to this for a better understanding:
https://docs.docker.com/network/links/#updating-the-etchosts-file
Important Note
As mentioned in the Docker documentation:
The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link.
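As a rough sketch of that recommended approach (the network name pgadmin-net is just an example, and the environment flags from the question are omitted for brevity), the same setup could use a user-defined bridge network, on which containers resolve each other by name without --link:
sudo docker network create pgadmin-net
sudo docker create -p 5001:80 --network pgadmin-net --name PSQLA dpage/pgadmin4
sudo docker create -p 80:80 -p 443:443 --network pgadmin-net --name Nginx nginx
sudo docker start PSQLA Nginx
The proxy_pass http://PSQLA:80; line in admin.conf would then work unchanged, because Docker's embedded DNS resolves the name PSQLA on the user-defined network.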
I built a Docker image from a Dockerfile with Ubuntu as the base.
I manually installed Elasticsearch, Kibana, and Airflow in it. The IP of my container is 172.17.0.2. I am able to access Airflow's web UI from the host machine at 172.17.0.2:8080. However, I cannot access Kibana or Elasticsearch at 172.17.0.2:5601 and 172.17.0.2:9200 respectively.
The following is an excerpt from my Dockerfile for installing Elasticsearch:
RUN wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
RUN echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-6.x.list
RUN apt-get update
RUN apt-get install -y elasticsearch
Please advise,
Thanks!
If you've successfully installed Kibana and Elasticsearch inside the container built from your Docker image, then you have to expose the ports (Kibana's default port 5601 and Elasticsearch's default port 9200) before accessing them from the local host. You can do it in two different ways: in the Dockerfile (EXPOSE 5601 9200 8080) or at the time you run the container. So if you want to access the Kibana UI or Elasticsearch that reside inside your Airflow container, you can remove the existing container and re-run it with the ports published. Let's say:
At container run time:
docker run -it --name webui_kibana_elasticsearch -p 5601:5601 -p 9200:9200 -p 8080:8080 ec45652e2ca4 /bin/bash
At Dockerfile build time:
EXPOSE 8080 5601 9200
Alternatively, if you want to run Kibana and Elasticsearch in separate Docker containers, then this article will help you for sure.
See here
https://docs.swiftybeaver.com/article/33-install-elasticsearch-kibana-via-docker
and
https://gist.github.com/sany2k8/347690434b282369890057d094218c7f
In fact, I don't know how you were able to reach 172.17.0.2:8080 at all. The common way is to publish your ports; see this.
For your situation it could be something like:
docker run -it -p 5601:5601 -p 8080:8080 -p 9200:9200 your_image
Then use your_host_ip:5601, your_host_ip:8080, etc. (not the container IP) to reach the container's services.
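Once the ports are published this way, a quick sanity check from the host could look like the following (assuming the services listen on their default ports inside the container):
curl http://localhost:9200
curl -I http://localhost:5601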
docker version: 17.05.0-ce
I have some containers that I started by hand using docker run ..., but recently, for a new project, I created a docker-compose.yml file based on this tutorial. However, when I run the following commands on my host:
docker network create --driver bridge reverse-proxy
docker-compose up
and
docker run -d --name nginx-reverse-proxy --net reverse-proxy -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
The proxy does not work for the old containers and I am unable to reach those projects via their subdomains (they "stop working").
So what should I do?
I experimented with the --net parameter of docker run ... and with docker network inspect network_name. I got many different results, like "welcome to nginx", "HTTP 404 Not Found", or "HTTP 503 Service Temporarily Unavailable", and came to the following conclusions:
if there is no --net option, the container is attached to the bridge network
if --net xxx is given, the container is attached only to the 'xxx' network (not to bridge!)
if --net xxx --net yyy is given, the container is attached only to 'yyy' (not to 'xxx' at all!)
The bridge network is the default Docker network for inter-container communication.
So when we run the proxy with only --net reverse-proxy, the proxy container does not see bridge and cannot communicate with the other containers. If we try --net reverse-proxy --net bridge (passing the flag two or more times on one line, like -p), the container is connected only to the last network.
So the solution is to run the proxy in the following way:
docker run -d --name nginx-reverse-proxy -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
docker network connect reverse-proxy nginx-reverse-proxy
As you can see, we do not use the --net option at all. The docker network connect command allows a container to be connected to multiple networks. When you execute docker network inspect reverse-proxy and docker network inspect bridge, you will see that nginx-reverse-proxy is in both networks :)
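For example, a quick way to confirm that the proxy container ended up in both networks (the --format template just lists the names of the attached containers):
docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' reverse-proxy
docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' bridge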
OK, I am pretty new to the Docker world, so this might be a very basic question.
I have a container running in Docker, which is running RabbitMQ. Let's say the name of this container is "Rabbit-container".
RabbitMQ container was started with this command:
docker run -d -t -i --name rmq -p 5672:5672 rabbitmq:3-management
Python script command with 2 args:
python ~/Documents/myscripts/migrate_data.py amqp://rabbit:5672/ ~/Documents/queue/
Now, I am running a Python script from my host machine, which is creating some messages. I want to send these messages to my "Rabbit-container". Hence I want to connect to this container from my host machine (Mac OSX).
Is this even possible? If yes, how?
Please let me know if more details are needed.
So, I solved it by simply mapping the RabbitMQ listening port to the host OS:
docker run -d -t -i --name rmq -p 15672:15672 -p 5672:5672 rabbitmq:3-management
I previously had only -p 15672:15672 in my command, which maps the admin UI from the Docker container to my host OS. I added -p 5672:5672, which maps the RabbitMQ listening port from the Docker container to the host OS.
If you're running this container on your local OS X system, then you should find your default docker-machine IP address by running:
docker-machine ip default
Then you can change your Python script to point to that address and the mapped port, <your_docker_machine_ip>:5672.
That happens because docker runs in a virtualization engine on OSX and Windows, so when you map a port to the host, you're actually mapping it to the virtual machine.
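For example, if docker-machine ip default prints 192.168.99.100 (a common default, but check your own output), the script invocation from the question would become:
python ~/Documents/myscripts/migrate_data.py amqp://192.168.99.100:5672/ ~/Documents/queue/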
You'd need to run the container with port 5672 exposed, perhaps 15672 as well if you want the web UI, and 5671 if you use SSL, or any other port for which you add a TCP listener in RabbitMQ.
It would also be easier if you had a specific IP and a hostname for the RabbitMQ container. To do this, you'd need to create your own Docker network:
docker network create --subnet=172.18.0.0/16 mynet123
After that, start the container like so:
docker run -d --net mynet123 --ip 172.18.0.11 --hostname rmq1 --name rmq_container_name -p 15673:15672 rabbitmq:3-management
Note that with the rabbitmq:3-management image, port 5672 is (well, was when I used it) already exposed, so there is no need to do that. --name sets the container name, and --hostname obviously sets the host name.
So now, from your host, you can connect to the rmq1 RabbitMQ server.
You said that you have never used docker-machine before, so I assume you are using the Docker Beta for Mac (you should see the Docker icon in the menu bar at the top).
Your docker run command for rabbit is correct. If you now want to connect to rabbit, you have two options:
Wrap your python script in a new container and link it to rabbit:
docker run -it --rm --name migration --link rmq:rabbit -v ~/Documents/myscripts:/app -w /app python:3 python migrate_data.py
Note that we have to link rmq:rabbit, because you named your container rmq but use rabbit in the script.
Execute your Python script on your host machine and use localhost:5672:
python ~/Documents/myscripts/migrate_data.py amqp://localhost:5672/ ~/Documents/queue/
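Before running the script, it may help to confirm the published port is actually reachable from the host; a minimal check could be:
nc -vz localhost 5672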
I have a Docker image which runs perfectly. In the Dockerfile I have:
EXPOSE 8080
and I run my image using
sudo docker run -p 8080 <image-name> <Argument1> <Argument2>
The image runs, but when I go to
localhost:8080
I get a "page not found" error. Is there no way I can see some response on localhost:8080?
The option -p 8080 will publish the container's port 8080 to a random port on the host.
The option --publish works as follows: -p ip:hostPort:containerPort. Using -P | --publish-all will automatically bind every port the container exposes to a random host port.
It is also possible to publish a range of ports: -p 1234-1236:1222-1224.
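Since -p 8080 publishes to a random host port, you first need to discover which one was chosen. A rough sketch (the container name myapp and the port 32768 are only illustrative):
sudo docker run -d -p 8080 --name myapp <image-name> <Argument1> <Argument2>
sudo docker port myapp 8080
# prints something like 0.0.0.0:32768, then:
curl http://localhost:32768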