Unable to connect to docker container - docker

Hi, I'm starting a docker container using docker-compose, but when I try to connect via localhost, I can't. Here is the docker-compose file I'm using:
version: '3.3'
services:
  standalone:
    image: apachepulsar/pulsar
    expose:
      - 8080
      - 6650
    environment:
      - PULSAR_MEM=" -Xms512m -Xmx512m -XX:MaxDirectMemorySize=1g"
    command: >
      /bin/bash -c
      "bin/apply-config-from-env.py conf/standalone.conf
      && bin/pulsar standalone"
I'm using Windows 10.

Be aware that expose, as the documentation suggests:
Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.
My guess is that you instead want to publish them so they are available on the host. To do so:
services:
  standalone:
    image: apachepulsar/pulsar
    ports:
      - "8080:8080"
      - "6650:6650"

Related

How to connect to external FTP from NiFi running in docker container?

I have:
NiFi running in a docker container. I'm running NiFi via Docker Compose with the following config:
version: '3'
services:
  nifi:
    image: apache/nifi:latest
    container_name: nifi
    ports:
      - "8443:8443"
    volumes:
      - ./database_repository:/opt/nifi/nifi-current/database_repository
      - ./flowfile_repository:/opt/nifi/nifi-current/flowfile_repository
      - ./content_repository:/opt/nifi/nifi-current/content_repository
      - ./provenance_repository:/opt/nifi/nifi-current/provenance_repository
      - ./state:/opt/nifi/nifi-current/state
      - ./logs:/opt/nifi/nifi-current/logs
      - ./conf:/opt/nifi/nifi-current/conf
    restart: always
An FTP server located on localhost (outside the container)
In the NiFi flow I'm using a GetFTP processor, which tries unsuccessfully to connect to localhost:21.
I understand that the problem is that localhost is not available because the container is isolated. What should I configure in docker-compose.yml to solve the problem?
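A common way to reach a service on the Docker host from inside a container, sketched here as an assumption rather than a confirmed fix (it requires Docker Engine 20.10+, where the special host-gateway value exists), is to map a hostname for the host via extra_hosts and point GetFTP at it instead of localhost:

services:
  nifi:
    image: apache/nifi:latest
    extra_hosts:
      # host-gateway resolves to the host's IP on the Docker bridge network
      - "host.docker.internal:host-gateway"

With this in place, the GetFTP processor would target host.docker.internal:21 rather than localhost:21.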

Docker container can't talk to another container

I have a docker-compose file set up with 3 separate containers (Flask, Nginx and Solr).
After starting up, all 3 run successfully, but my Flask application can't connect to my Solr instance, and when I run:
wget -S http://localhost:8983/solr/CORE_NAME/select
I get the error "Connecting to localhost (localhost)|127.0.0.1|:8983... failed: Connection refused."
I am fairly new to docker and have been around a few different forums looking at this issue, but nothing has worked so far. I have also tried creating a network, but I run into the same issue.
Here is my docker-compose.yml.
version: "2.7"
services:
nginx:
build:
context: .
dockerfile: Dockerfile-nginx
container_name: nginx
ports:
- "80:80"
- "8181:8181"
volumes:
- ./:/opt/ee1
- ee1-logs-volume:/var/log/ee1
- ./:/usr/local/websites/ee1
- sockets-volume:/tmp
depends_on:
- flask
flask:
build:
context: .
dockerfile: Dockerfile-flask
entrypoint: ["/bin/bash", "./system/start-uwsgi-docker.bash"]
container_name: flask
user: root
restart: always
volumes:
- ./:/opt/ee1
- ./ee1config.ini:/opt/ee1config.ini
- ee1jobs-logs-volume:/var/log/ee1
- ./:/usr/local/websites/ee1
- sockets-volume:/tmp
links:
- solr
solr:
build:
context: .
dockerfile: Dockerfile-solr
container_name: solr
volumes:
- data:/var/solr
entrypoint:
- bash
- "-c"
- "precreate-core ee1_1; precreate-core ee1_2; exec solr -f"
ports:
- "8983:8983"
volumes:
sockets-volume: {}
ee1-logs-volume: {}
data:
Every docker container is, network-wise, a separate host with its own IP.
Traffic to localhost or 127.0.0.1 will never leave that container.
So what you need to find out is the IP of the server container (solr) you actually want to talk to, then configure the client container (flask) accordingly. This can be done with e.g. docker inspect. Be aware that the IPs can change upon container restart, so you will want to use something like DNS rather than raw IPs.
Since you use docker compose, each container for a service joins the same network and is both reachable by other containers on that network and discoverable by them at a hostname identical to the service name.
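So from the flask container, the request should target the solr service name instead of localhost (CORE_NAME stays whatever your core is actually called):

wget -S http://solr:8983/solr/CORE_NAME/select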
For more details check out
https://docs.docker.com/compose/networking/
https://docs.docker.com/network/

Docker Compose port forwarding works fine on MacOS but not on Linux

I have the following docker-compose script:
version: '3.1'
services:
  flowable-ui:
    image: flowable/flowable-ui
    container_name: flowable-ui
    depends_on:
      - flowable-db
    environment:
      - SERVER_PORT=8888
      - SPRING_DATASOURCE_DRIVER-CLASS_NAME=org.postgresql.Driver
      - SPRING_DATASOURCE_URL=jdbc:postgresql://flowable-db:5432/flowable
      - SPRING_DATASOURCE_USERNAME=flowable
      - SPRING_DATASOURCE_PASSWORD=flowable
    ports:
      - 80:8888
  flowable-db:
    image: postgres
    container_name: flowable-db
    environment:
      - POSTGRES_PASSWORD=flowable
      - POSTGRES_USER=flowable
      - POSTGRES_DB=flowable
    ports:
      - 5432:5432
    command: postgres
I can start the flowable image with docker-compose up -d, and it is accessible at http://localhost/flowable-ui in my browser.
Doing exactly the same on my Linux machine, http://localhost/flowable-ui does not load. I can see that there is something there because the browser tries to access it, but it never loads and I get a timeout.
Do I have to set up something additional on the Linux machine?
You're trying to forward port 8888 of your container to port 80 on your host. On Linux, you'd need elevated permissions to open ports 1-1024.
Try a port >1024. For example
services:
  flowable-ui:
    ...
    ports:
      - 8888:8888
and then access your app on http://localhost:8888.
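To see whether the published port is actually listening on the Linux host, a quick check with standard tools (nothing here is specific to flowable):

docker ps --format '{{.Names}}\t{{.Ports}}'   # confirm the port mapping exists
sudo ss -tlnp | grep 8888                     # confirm something is listening on the host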

Is it possible to curl across a docker network between 2 docker-compose.yaml files?

I have 2 applications running on different networks, each with its own docker-compose.yaml. I am trying to make a request from app A to app B, but it doesn't work.
docker exec -it app_a_running curl http://localhost:8012/user/1
I get this error:
cURL error 7: Failed to connect to localhost port 8012
docker-compose-app-a.yaml
version: "3"
services:
app:
build: go/
restart: always
ports:
- 8011:8011
volumes:
- ../src/app:/go/src/app
working_dir: /go/src/app
container_name: app-a
command: sleep 72000
networks:
- app-a-network
networks:
app-a-network:
docker-compose-app-b.yaml
version: "3"
services:
app:
build: go/
restart: always
ports:
- 8012:8012
volumes:
- ../src/app:/go/src/app
working_dir: /go/src/app
container_name: app-b
command: sleep 72000
networks:
- app-b-network
networks:
app-b-network:
Questions:
Is it possible to do this?
If it is, please suggest how :)
You can curl docker containers. The reason your curl command didn't work is probably that you did not publish your docker container's port. For example, try:
docker run -d -p 8080:8080 tomcat
instead of
docker run -d tomcat
This will forward the port 8080 of your machine to the port 8080 of your container.
If you have a shell in your container, you can use the service name or the container's name to curl a container on your Docker network, provided the target is on the same network.
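For the cross-compose case in the question, one sketch (the network name shared-net is hypothetical, not from the original files) is to pre-create a single external network and attach both apps to it, so app A can reach app B by container name:

docker network create shared-net

# in both docker-compose-app-a.yaml and docker-compose-app-b.yaml
services:
  app:
    networks:
      - shared-net
networks:
  shared-net:
    external: true

After docker-compose up on both files, the request becomes:
docker exec -it app-a curl http://app-b:8012/user/1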

How to access docker container using localhost address

I am trying to access a docker container from another container using the localhost address.
The compose file is pretty simple. Both containers' ports are exposed.
There are no problems when building.
On my host machine I can successfully execute curl http://localhost:8124/ and get a response.
But inside the django_container, trying the same command gives a Connection refused error.
I tried adding them to the same network; the result still didn't change.
If I execute it with the internal IP of that container, like curl 'http://172.27.0.2:8123/', I get the response.
Is this the default behavior? How can I reach clickhouse_container using localhost?
version: '3'
services:
  django:
    container_name: django_container
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    container_name: clickhouse_container
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
So with this line here, - "8124:8123", you're mapping the port of the clickhouse container to port 8124 on localhost, which allows you to access clickhouse from localhost at port 8124.
If you want to hit the clickhouse container from another container on the Docker network, you have to use the container's hostname. This is what I like to do:
version: '3'
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    hostname: clickhouse
    container_name: clickhouse
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
If you make the changes as above, you should be able to access clickhouse from within the django container like this: curl http://clickhouse:8123.
As in Billy Ferguson's answer, you can reach it via localhost on the host machine only because you define a port mapping that routes localhost:8124 to clickhouse:8123.
But from another container (django), you can't. If you insist, there is an ugly workaround: share the host's network namespace with network_mode, but then the django container shares the host's entire network stack.
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
    network_mode: "host"
It depends on the config.xml settings. If config.xml contains <listen_host>0.0.0.0</listen_host>, you can use clickhouse-client -h your_ip --port 9001.
