I am trying to connect to the RabbitMQ management interface on a VM that I created in the Azure portal, using Docker. However, even though RabbitMQ is started, I can't access the ports.
P.S. The management plugin is enabled. I have tried almost everything I found on the internet, but still no progress.
I used commands such as:
docker run -d --hostname my-rabbit --name some-rabbit -p 15672:15672 -p 5672:5672 rabbitmq:3-management
I also tried to dockerize a RabbitMQ instance myself in a Docker container. Also no progress.
At one point I got a "port is already allocated" error, but I killed the processes holding the ports and ran the container again, so that was not the issue. Still no access to localhost:15672.
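For anyone double-checking the same thing: something like this on the VM shows whether any process is still bound to the management port:
sudo ss -ltnp | grep 15672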
If anyone else is trying to solve this issue: I solved it by adding an inbound port rule for 15672 to the VM via Azure.
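If you prefer the command line, the same rule can be added with the Azure CLI; the resource group and VM names below are placeholders:
az vm open-port --resource-group my-resource-group --name my-vm --port 15672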
I used FastAPI for my project as a simple server. On localhost it works, but I am having problems making it publicly available on my IP.
I followed the official tutorial at https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker
and then ran a Docker container successfully (this worked: sudo docker run -d --name mycontainer -p 80:80 myimage). But it was not reachable on a public IP.
So I tried:
sudo docker network create --subnet 172.20.0.0/16 --ip-range 172.20.240.0/20 multi-host-network
sudo docker network connect --ip 172.20.128.2 multi-host-network /mycontainer
But when I try to access it from 4G on my phone, it times out, even though the 4G connection is fast.
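For reference, a quick way to check which networks and IP addresses the container actually ended up on is a docker inspect format template like this (container name as in the commands above):
sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' mycontainer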
I am a newbie with web servers, so please don't judge me :)
If you want me to write more, just ask me, I will. If you want, I will upload my FastAPI server code and my client code as well.
My system is Linux Mint 20.2
Thanks for your help!
I am confused right now. I have tried many things I found on the web, but none of them solved it. I have Windows 10 and Docker Desktop installed, using WSL 2 to host Linux containers. I use the following command to start the Jenkins website:
docker run --name jenkins-master-c -d -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:2.282-alpine
This works fine. I can access the website using http://localhost:8080/
The problem is, when I try to curl http://localhost:8080 from another Alpine Docker container, I am not getting the web page back; it says connection refused. I also tried my own tiny web service on my Windows machine, without Docker. Same thing: I can access the web service from a web browser on Windows 10, but from inside a container I cannot reach the web service via localhost.
I know I am missing something really basic, because the web doesn't seem to cover this topic. I am just on my own computer without anything fancy, so I just want to use localhost. The web says the default network is the bridge, on which containers should be able to talk to each other easily, but it is not working for me. What am I missing? Maybe I shouldn't type localhost? But what else should I use?
Thank you.
Edit: I just want to explain what I did to get my problem solved. Creating a network with --network my-network-name was what I originally did; it seemed to fail only because the way I curled the webpage was wrong. I used --name jenkins-master-c just to make the container easy to locate in docker ps. But, as I suspected in my question, localhost was the wrong thing to use, which the solution confirms. Instead of localhost, I run curl http://jenkins-master-c:8080, which works. Thanks.
localhost is always a question of perspective; it refers to the current machine. This means that if you call localhost from a container, the container talks to itself, not to the machine you see as localhost. If you want to call a service running on the host, you have to use the host's real IP address.
You can think of Docker containers as individual virtual machines: each has its own localhost, and they are isolated from your host PC and from other containers.
Now, if you want two or more Docker containers to communicate with each other, you can use a bridge network. In Docker terms, a bridge network allows containers connected to the same bridge network to communicate, while isolating them from containers that are not connected to that bridge network. See the Docker documentation on bridge networks.
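A minimal sketch of that idea (the names my-bridge and web here are placeholders):
docker network create my-bridge
docker run -d --name web --network my-bridge nginx
docker run --rm --network my-bridge alpine ping -c 1 web
The last command resolves the name web through Docker's embedded DNS, which works on user-defined networks but not on the default bridge.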
On the other hand, if you want to communicate with your Docker container from your host, then you need to port-forward, i.e. open/expose a port that connects to the container (which you did with -p 8080:8080).
Another way to bring your containers under a shared localhost is Kubernetes: in Kubernetes you can run one or more containers in a pod, and they will then share the same network namespace. See the Kubernetes documentation on pods.
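As a rough sketch of that pod idea (the pod name and sidecar image are illustrative, not taken from the question; the Jenkins image is the one used above):
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-pod
spec:
  containers:
    - name: jenkins
      image: jenkins/jenkins:2.282-alpine
      ports:
        - containerPort: 8080
    - name: sidecar
      image: curlimages/curl
      # shares the pod's network namespace with the jenkins container,
      # so curl http://localhost:8080 from here reaches Jenkins directly
      command: ["sleep", "86400"]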
Probably these two containers are not on the same network, so they cannot see and talk to each other.
First of all, create a network with the Docker command docker network create SOMENAME, and then run the containers again (both of them):
docker run --name jenkins-master-c --network SOMENAME -d -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:2.282-alpine
Now the containers should be able to talk to each other.
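For example, from another container started on the same network, Jenkins is then reachable by its container name; curlimages/curl is just one convenient image that ships with curl:
docker run --rm --network SOMENAME curlimages/curl http://jenkins-master-c:8080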
I tried deploying the KIE Workbench using the Docker command docker run -p 8080:8080 -p 8001:8001 -d --name drools-wb jboss/business-central-workbench-showcase:latest and the KIE Server using the Docker command docker run -p 8180:8080 -d --name kie-server --link drools-wb:kie-wb jboss/kie-server-showcase:latest. I deployed a sample DRL file to the KIE Server using Business Central. The screen image after deployment is as shown below.
The remote server is given as 172.17.0.3:8080. But when I try to test the deployed file using Postman, the server does not respond; the requests time out. The two endpoint services I tried to access are http://172.17.0.3:8080/kie-server/services/rest/server/ and http://172.17.0.3:8080/kie-server/services/rest/server/DemoRule_1.0.0-SNAPSHOT. First of all, I don't understand why it is getting deployed to some remote server and not localhost. Secondly, why is it not accessible? I even tried the KIE Server container endpoint http://localhost:8180/kie-server/services/rest/server/. But none of these work. Can someone help me understand the problem?
I found the answer myself. The service was available at http://localhost:8180/kie-server/services/rest/server/containers/instances/DemoRule_1.0.0-SNAPSHOT. That's where the actual controller was available. Port 8080 was the endpoint of the WildFly server, and the IP 172.17.0.3:8080 was the Docker container's internal address; it had nothing to do with the controllers.
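A quick way to sanity-check such an endpoint is a curl against the published host port; the credentials below are the defaults documented for the jboss/kie-server-showcase image and may differ in other setups:
curl -u kieserver:kieserver1! http://localhost:8180/kie-server/services/rest/server/containers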
Currently I have a personal Docker image uploaded to Docker Hub, and I am making changes so that this application connects to Redis (which is running in another Docker container as well).
Now, I get the following message from my app when it tries to connect to Redis:
Error saving to redis: dial tcp 127.0.0.1:6379: connect: connection refused
I understand that they are not on the same network, so my app is pointing at 127.0.0.1:6379, where nothing is listening. I am looking for the best way to connect those containers without making my app depend on the IP where Redis is hosted. From my local machine I can use Redis, but not from another Docker container. Briefly, what I did was:
docker run --name redis_server -p 6379:6379 -d redis
sudo docker run -d --restart=always -p 10000:10000 --link redis_server:redis --name my_app repo/my_app
So, I am looking for a way to make the Redis server accessible from my_app. I don't use docker-compose, by the way.
The easiest solution would be to run Docker with --network=host, which binds the containers to the host network. This is perfectly fine for testing; however, it exposes the ports and services to the internet (if you don't have a firewall), which may or may not be acceptable.
Another way of doing this would be to define a network in the following way:
docker network create -d bridge my-net
And then run your containers on that same network by passing --network=my-net to docker run. This way you can reference each container by its name; for example, in your case you can ping my_app from the Redis container and vice versa.
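Applied to the two commands from the question, that would look something like this; the app then connects to redis_server:6379 instead of 127.0.0.1:6379 (assuming the Redis address is configurable in the app):
docker network create -d bridge my-net
docker run --name redis_server --network my-net -p 6379:6379 -d redis
sudo docker run -d --restart=always --network my-net -p 10000:10000 --name my_app repo/my_app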
For the life of me I can't get to the NiFi Web UI. It makes me hate security.
TLDR; I can't find the right way to start NiFi in a docker container and still access the UI. Here's what I've tried (for 8 hours):
docker run --name nifi \
-p 8080:8080 \
-d \
apache/nifi:latest
I go to localhost:8080/nifi - timeout. Same on 127.0.0.1.
docker inspect nifi - the gateway is 172.20.0.1, with an actual container IP of 172.0.0.2. These give Invalid Host Header and a timeout, respectively.
Start randomly trying stuff:
# I tried localhost, 0.0.0.0, various IP addresses
docker run --name nifi \
-p 8080:8080 \
-e NIFI_WEB_HTTP_HOST=${hostname-here} \
-d \
apache/nifi:latest
I also built a full docker-compose.yml for my diminishingly-possible stack. Everything works except for:
nifi:
image: apache/nifi:latest
hostname: nifi
depends_on:
- zookeeper
- broker
- schema_registry
- nifi-registry
ports:
- "8080:8080"
No changes. Can you help me?
Updates 1
I used the docker-compose.yml file from the repo linked in the comments below; thank you @Chaffelson. Still dealing with a timeout on localhost, so I spun up a droplet with docker-machine.
The services start fine, and the logs indicate the Jetty server is up for both NiFi Registry and NiFi. I can access NiFi Registry at <host ip>:18080/nifi-registry exactly like I can on my local machine.
I can't access <host ip>:8080/nifi - I get an Invalid Host Header response.
So I added to docker-compose.yml:
environment:
# Tried both with and without quotes
NIFI_WEB_HTTP_HOST: "<host-ip>"
Jetty server fails to start. Insight?
Updates 2
From the logs, using just docker run --name nifi -p 8080:8080 -d apache/nifi:1.5.0:
[NiFi Web Server-24] o.a.n.w.s.HostHeaderSanitizationCustomizer Request host header [45.55.36.15:8080] different from web hostname [348146fc286f(:8080)]. Overriding to [348146fc286f:8080/nifi]
(where 45.55.36.15 is the host IP)
This is the crux of my problem.
Updates 3
I disabled ufw (firewall) on my local machine. I can now access nifi via localhost:8080. No progress on actually accessing on a remote host (which is the point of all this).
Sorry to hear you are having trouble with this. In Apache NiFi 1.5.0, the stricter host header protection was enabled to prevent host header poisoning attacks. Unfortunately, we have seen that this was not documented sufficiently for users who were not familiar with the configuration. In response, we have made changes that are currently in master and will be included in the upcoming 1.6.0 release:
a new property nifi.web.proxy.host was added to nifi.properties which accepts a comma-separated list of valid host headers independent of the Jetty hostname (see the example after this list)
the Docker configuration has been updated to allow proxy whitelisting from the run command
the host header protection is only enforced on "secured" NiFi instances. This should make it much easier for users to quickly deploy sandbox environments like you are doing in this case
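For instance, on 1.6.0 an entry along these lines in nifi.properties should whitelist an external host header (using the host IP from the logs above purely as an example):
nifi.web.proxy.host=45.55.36.15:8080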
For an immediate fix, this command should work:
docker run --name nifi \
-p 8080:8080 \
-e NIFI_WEB_HTTP_HOST=172.20.0.1 \
-d \
apache/nifi:latest
You can also intercept the requests using a Chrome extension like ModHeader to override the Host header and verify that it works when it matches the expected host. Along with Daniel's excellent additions, this should help you until the next version is released.
I use this and similar docker-compose files for my automated NiFi Python client testing. It looks superficially similar to yours, and works perfectly well both on Ubuntu (Travis CI) and on my local MacBook Pro.
I suggest you try running this file as a known-good configuration, and also examine docker logs -f nifi to see if your environment is throwing errors on startup.
The environment variables NIFI_WEB_HTTP_HOST and NIFI_WEB_HTTP_PORT are for when you are accessing Docker NiFi on a port other than 8080, so that you don't trip the host header blocking. I contributed these modifications to the project recently, so if you are having trouble with them I would like to know, so I can fix it.
I had the same issue; I was not able to access the web UI remotely. It turned out to be a firewall issue. Disabling firewalld, or adding a custom firewall rule to allow the Docker network and port, should solve the issue.
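Rather than disabling firewalld entirely, a rule like the following should be enough (8080 being the NiFi port in question):
sudo firewall-cmd --permanent --zone=public --add-port=8080/tcp
sudo firewall-cmd --reload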
Here is a minimal docker-compose.yml along the lines of what I used (image tag and port mapping may need adjusting for your setup):
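version: "3"
services:
  nifi:
    image: apache/nifi:latest
    ports:
      - "8080:8080"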
In Docker, use this; it fixed my problem:
--net=host
With host networking the container shares the host's network stack, so Docker's internal port forwarding path is skipped entirely.
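Applied to the NiFi container above, that would look something like this (note that -p mappings are ignored in host networking mode, and this mode behaves this way only on Linux):
docker run --name nifi --net=host -d apache/nifi:latest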