I'm having trouble with Logstash in Docker.
I'm using docker.elastic.co/logstash/logstash:6.5.1.
The problem is that the container is not exposing port 5044, even though the docker-compose file publishes that port.
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:6.5.1
    ports: ['5044:5044']
    expose:
      - '5044'
I've tried for around 5 hours but can't figure out where the problem is.
The way I'm checking whether the port is exposed by the container is with nmap localhost and docker container ls (looking at the PORTS column). I'm on a macOS machine.
I think this is related to loopback addresses; you can find more info here: https://discuss.elastic.co/t/docker-image-doesnt-expose-ports-in-custom-image/156479/4
Did you check whether that port is already in use on the host? Run netstat -plntu and look for the port. If it is taken, you can map the service to a different host port, like:
ports: ['5045:5044']
expose:
  - '5044'
Then bring your app up again and check whether it works.
I know there are a ton of similar issues, but none of them are working for me!
I'm running Docker on Ubuntu 22.04.1 LTS. All I want to do is access an Elasticsearch instance that I mapped to localhost:9208 via an SSH tunnel, without switching the Docker container's network mode to host.
Here is my minimal docker-compose.yml example:
version: '3'
services:
  curl_test:
    image: curlimages/curl
    command: "curl http://localhost:9208"
Output:
curl: (7) Failed to connect to localhost port 9208 after 0 ms: Connection refused
I tried:
- Using the workaround with "host.docker.internal:host-gateway"
- Specifying extra_hosts
- Mapping port 9208 directly into the container
I know that this minimal example does not work as-is. But none of the proposed fixes/changes are working either, so I'd be very glad for any suggestions on this problem.
Finally fixed it!
The solution contains two parts:
1. Making the host's localhost accessible to the Docker container
All it needed was:
extra_hosts:
  - "host.docker.internal:host-gateway"
2. More importantly: an SSH tunnel is not a normal localhost per se!
@felix-k's answer in this post made me realize I had to map the remote port to Docker's docker0 interface. All it needed was an additional tunnel:
ssh -N -L 172.17.0.1:9214:localhost:9200 user@remotehost
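Putting both parts together, a minimal sketch of the resulting setup (the port 9214 comes from the tunnel above; the rest of the layout is hypothetical):

```yaml
version: '3'
services:
  curl_test:
    image: curlimages/curl
    # host-gateway resolves to the host's address on the Docker bridge
    # (typically 172.17.0.1), which is where the SSH tunnel listens
    extra_hosts:
      - "host.docker.internal:host-gateway"
    command: "curl http://host.docker.internal:9214"
```

With the additional tunnel running on the host, the container reaches the remote Elasticsearch through 172.17.0.1:9214.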
For testing purposes I would like to run a cluster of three containers, each running the same service on port 7600. Those containers should reside in one network and could theoretically access each other as host1:7600, host2:7600 and host3:7600.
However, I want to 'emulate' an external port mapping, such that each container's service is still bound to port 7600 but the services can access each other via mapped (different) ports like host1:8881, host2:8882 and host3:8883.
How can I do that as easily as possible, preferably within a Docker Compose setup?
The reasoning is that I want to test how the service will behave in a configuration of three physical hosts that each run the service and map its port to an arbitrary external port.
Some edits to clarify the task, since the first suggestions didn't meet the requirements (thank you for every answer, though):
I can't use VMs, as the test is already running inside VirtualBox with no ability to get nested VT-x working.
I would rather bind the ports neither to the host nor to the same IP address.
After further investigation I found a solution that works for me.
The following Docker Compose file shows an example of the solution. It shows how to make two services accessible by an external IP and external port. The example works entirely in Docker, without the need to run the containers in two separate virtual machines.
The two services are, as an example, two Nginx instances. Imagine both services should access each other by their external IP and port to form a cluster. The external IP and port are emulated by two separate busybox containers that map the ports of the service containers to their own IP.
version: '3'
services:
  service1:
    image: nginx:latest
  service2:
    image: nginx:latest
  proxy1:
    image: busybox:latest
    command: nc -lk -p 8081 -e /bin/nc service1 80
    expose:
      - "8081"
  proxy2:
    image: busybox:latest
    command: nc -lk -p 8082 -e /bin/nc service2 80
    expose:
      - "8082"
The services service1:80 and service2:80 can access each other through their external representations proxy1:8081 and proxy2:8082.
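The same pattern extends to the three hosts from the question (host1:8881, host2:8882 and host3:8883); here nginx again stands in for the real service, so the internal port is 80 rather than 7600:

```yaml
version: '3'
services:
  service1:
    image: nginx:latest
  service2:
    image: nginx:latest
  service3:
    image: nginx:latest
  host1:
    image: busybox:latest
    # forward host1:8881 to service1's internal port
    command: nc -lk -p 8881 -e /bin/nc service1 80
    expose:
      - "8881"
  host2:
    image: busybox:latest
    command: nc -lk -p 8882 -e /bin/nc service2 80
    expose:
      - "8882"
  host3:
    image: busybox:latest
    command: nc -lk -p 8883 -e /bin/nc service3 80
    expose:
      - "8883"
```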
I have created the following docker-compose file...
version: '3'
services:
  db-service:
    image: postgres:11
    volumes:
      - ./db:/var/lib/postgresql/data
    expose:
      - 5432
    environment:
      - POSTGRES_PASSWORD=mypgpassword
    networks:
      - net1
  pgadmin:
    image: dpage/pgadmin4
    volumes:
      - ./pgadmin:/var/lib/pgadmin
    ports:
      - 5000:80
    environment:
      - PGADMIN_DEFAULT_EMAIL=me@gmail.com
      - PGADMIN_DEFAULT_PASSWORD=mypass
    networks:
      - net1
networks:
  net1:
    external: false
From reading various docs on the docker site, my expectation was that the pgadmin container would be able to access the postgres container via port 5432 but that I should not be able to access postgres directly from the host. However, I am able to use psql to access the database from the host machine.
In fact, if I comment out the expose and ports lines I can still access both containers from the host.
What am I missing about this?
EDIT - I am accessing the container by first running docker container inspect... to get the IP address. For the postgres container I'm using
psql -h xxx.xxx.xxx.xxx -U postgres
It prompts me for the password and then allows me to do all the normal things you would expect.
In the case of the pgadmin container I point my browser to the IP address and get the pgadmin interface.
Note that both of those are being executed from a terminal on the host, not from within either container. I've also commented out the expose command and can still access the postgres db.
docker-compose creates a network for those two containers when you run it, so they can talk to each other through a DNS service that contains an entry for each service, by name.
So from the perspective of the pgadmin container, the database server can be reached under the hostname db-service (because that is what you named your service in the docker-compose.yml file).
That traffic does not go through the host, as you were assuming, but through the aforementioned network.
For proof, run docker exec -it [name-of-pg-admin-container] /bin/sh and type ping db-service. You will see that Docker provides DNS resolution, and that you can even open a connection to the normal postgres port there.
The containers talk to each other over the bridge network net1.
When you publish a port (the ports: option), Docker creates forwarding rules in iptables that connect the host network to net1. Note that expose on its own does not publish anything to the host; it is essentially documentation.
Remove the ports: mapping from a service and you will no longer be able to reach it from the host through a published port (although on Linux the container's bridge IP may still be directly reachable, as you observed).
Docker assigns an internal IP address to each container. If you happen to have this address, and it happens to be reachable, then Docker doesn't do anything specific to firewall it off. On a Linux host in particular, if a specific Docker network is on 172.17.0.0/24, the host might have a 172.17.0.1 address and a specific container might be 172.17.0.2, and they can talk to each other this way.
Using the Docker-internal IP addresses is not a best practice. If you ever delete and recreate a container, its IP address will change; on some host platforms you can’t directly access the private IP addresses even from the same host; the private IP addresses are never reachable from other hosts. In routine use you should never need to docker inspect a container.
The important level of isolation you do get here is that the container isn’t accessible from other hosts unless you explicitly publish a port (docker run -p option, Docker Compose ports: option). The setup here is much more uniform than for standard applications: set up the application inside the container to listen on 0.0.0.0 (“all host interfaces”, usually the default), and then publish a port or not as your needs require. If the host has multiple interfaces you can publish a port on only one of them (even if the application doesn’t natively support that).
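As a concrete example of that last point, a hypothetical compose fragment that publishes the Postgres port only on the host's loopback interface, so it is reachable from the host itself but not from other machines:

```yaml
services:
  db-service:
    image: postgres:11
    ports:
      # bind the published port to 127.0.0.1 only; other hosts
      # on the network cannot connect, but localhost can
      - "127.0.0.1:5432:5432"
```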
I'm running a docker-compose stack that exposes Logstash on port 5044 (using docker-elk). I'm able to make requests to the service at localhost:5044 on my host, so the port is published correctly.
I'm then running another docker-compose stack (Filebeat), but from there I cannot connect to "localhost:5044". Here is the docker-compose file:
version: '2'
services:
  filebeat:
    build: filebeat/
    networks:
      - elk
networks:
  elk:
    driver: bridge
Any clue why localhost:5044 is not accessible from this docker-compose stack?
First of all, the compose file you linked exposes port 5000, but you say you're trying to connect to port 5044.
Secondly, publishing port 5044 (or 5000) makes the port available to the host machine, not to containers launched from other compose files.
The way I see it, you can either:
Keep the first service as it is and, on the second service, use your_ip:port instead of localhost:port, where your_ip can be retrieved from ifconfig -a (or similar) and should look like 192.168.x.x.
Or connect both services to an externally created network, like so:
first create the network with docker network create test_network
link the services to the external network in each compose file:
networks:
  test_network:
    external: true
Then change the logstash reference from localhost:port to logstash:port.
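Putting the second option together, a sketch of both compose files (assuming the shared network was created with docker network create test_network and that the first stack names its Logstash service logstash):

```yaml
# compose file of the Logstash stack
version: '2'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:6.5.1
    networks:
      - test_network
networks:
  test_network:
    external: true
---
# compose file of the Filebeat stack
version: '2'
services:
  filebeat:
    build: filebeat/
    networks:
      - test_network
networks:
  test_network:
    external: true
```

Filebeat can then reach Logstash as logstash:5044.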
Good luck
On my development machine, I am trying to make a docker container connect to a mysql server on the host. To do this, I have set the networking mode to "host" like so:
phpfpm:
  image: mageinferno/magento2-php:7.0.8-fpm-0
  container_name: php7fpm
  restart: always
  hostname: phpfpm
  ports:
    - "9000:9000"
  environment:
    - APP_MAGE_MODE=developer
  volumes:
    - /Users:/Users
    - /usr/local/var/log/nginx:/var/www/logs
  net: "host"
The problem is that after I start the docker container, I can't telnet to localhost 9000 from the host. Connecting to the container on 9000 is not a problem when the container runs in bridge mode. The docker ps command shows:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6de4c973a34a mageinferno/magento2-php:7.0.8-fpm-0 "/usr/local/bin/start" 4 seconds ago Up 3 seconds php7fpm
Why doesn't the container bind the ports to the host? What am I missing here?
I am on OS X 10.10 using Docker for Mac (Version 1.12.0-rc3-beta18 (build: 9996))
Is there a reason you are trying to use host networking? It can be a bit tricky to set up and is mostly for fairly special use cases. In particular, Docker for Mac runs containers inside a Linux VM, so net: "host" binds the container to the VM's network interfaces, not to macOS's; that is why nothing shows up on your Mac's localhost. On a Linux host you would at a minimum need to make sure the firewall allows that port, via something like:
iptables -I INPUT 1 -p tcp -m tcp --dport 9000 -j ACCEPT
Otherwise you are probably better off sticking with bridge mode.
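As an aside, on current versions of Docker Desktop for Mac a container in ordinary bridge mode can reach services running on the Mac via the special hostname host.docker.internal, which may sidestep host networking entirely. A hypothetical fragment (DB_HOST is an illustrative variable name, not part of this image):

```yaml
phpfpm:
  image: mageinferno/magento2-php:7.0.8-fpm-0
  ports:
    - "9000:9000"
  environment:
    # point the app at MySQL on the macOS host instead of using net: "host"
    - DB_HOST=host.docker.internal
```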