I have set up an environment today that runs a golang:1.13-alpine image, along with the latest images for Elasticsearch and Kibana.
Elasticsearch and Kibana run fine when accessed from my local machine, but I cannot connect to Elasticsearch from the Go server. I put this together from guides I found and followed.
I am still a bit green with Docker. I suspect I am pointing at the wrong IP address inside the container, but I am unsure how to fix it. I hope someone can guide me in the right direction.
docker-compose.yml:
version: "3.7"
services:
  web:
    image: go-docker-webserver
    build: .
    ports:
      - "8080:8080"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.4.2
    environment:
      node.name: elasticsearch
      cluster.initial_master_nodes: elasticsearch
      cluster.name: docker-cluster
      bootstrap.memory_lock: "true"
      ES_JAVA_OPTS: -Xms256m -Xmx256m
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.4.2
    ports:
      - "5601:5601"
    links:
      - elasticsearch
Dockerfile:
FROM golang:1.13-alpine as builder
RUN apk add --no-cache --virtual .build-deps \
    bash \
    gcc \
    git \
    musl-dev
RUN mkdir build
COPY . /build
WORKDIR /build
RUN go get
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-extldflags "-static"' -o webserver .
RUN adduser -S -D -H -h /build webserver
USER webserver
FROM scratch
COPY --from=builder /build/webserver /app/
WORKDIR /app
EXPOSE 8080
EXPOSE 9200
CMD ["./webserver"]
main.go:
func webserver(logger *log.Logger) *http.Server {
    router := http.NewServeMux()
    router.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        es, err := elasticsearch.NewDefaultClient()
        if err != nil {
            log.Fatalf("Error creating the client: %s", err)
        }
        res, err := es.Info()
        if err != nil {
            log.Fatalf("Error getting response: %s", err)
        }
        log.Println(res)
    })
    return &http.Server{
        Addr:         listenAddr,
        Handler:      router,
        ErrorLog:     logger,
        ReadTimeout:  5 * time.Second,
        WriteTimeout: 10 * time.Second,
        IdleTimeout:  15 * time.Second,
    }
}
When I boot the server, everything is running fine and I can access Kibana and query the data that I have indexed, but as soon as I hit localhost:8080 in Postman, the server dies and outputs:
web_1 | 2019/11/26 16:40:40 Error getting response: dial tcp 127.0.0.1:9200: connect: connection refused
go-api_web_1 exited with code 1
You can declare a bridge network in the docker-compose file, attach every service to it, and verify how it works:
version: "3.7"
services:
  web:
    image: go-docker-webserver
    build: .
    ports:
      - "8080:8080"
    networks:
      - elastic
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.4.2
    environment:
      node.name: elasticsearch
      cluster.initial_master_nodes: elasticsearch
      cluster.name: docker-cluster
      bootstrap.memory_lock: "true"
      ES_JAVA_OPTS: -Xms256m -Xmx256m
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9200:9200"
    networks:
      - elastic
  kibana:
    image: docker.elastic.co/kibana/kibana:7.4.2
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
Also set the kernel setting vm.max_map_count to 262144, as required by Elasticsearch:
sysctl -w vm.max_map_count=262144
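Note that once the services share a network, the web service must address Elasticsearch by its service name; the default client in the question's main.go targets http://localhost:9200, which inside the web container is the container itself. A minimal sketch, assuming the elastic/go-elasticsearch v7 client (NewDefaultClient also honors the ELASTICSEARCH_URL environment variable, so setting ELASTICSEARCH_URL=http://elasticsearch:9200 on the web service is an alternative):
package main

import (
    "log"

    "github.com/elastic/go-elasticsearch/v7"
)

func main() {
    // Inside the compose network, the service name "elasticsearch"
    // resolves to the Elasticsearch container, so point the client there
    // instead of at the default localhost:9200.
    es, err := elasticsearch.NewClient(elasticsearch.Config{
        Addresses: []string{"http://elasticsearch:9200"},
    })
    if err != nil {
        log.Fatalf("Error creating the client: %s", err)
    }
    res, err := es.Info()
    if err != nil {
        log.Fatalf("Error getting response: %s", err)
    }
    log.Println(res)
}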
I've created a Quarkus service that reads from a bunch of KStreams, joins them, and then posts the join result back into a Kafka topic.
During development, I was running Kafka and ZooKeeper from inside a docker-compose file and then running my Quarkus service in dev mode with:
mvn quarkus:dev
At this point, everything was working fine. I was able to connect to the broker without problems and read/write the KStreams.
Then I tried to create a Docker container that runs this Quarkus service, but when the service runs inside the container, it doesn't reach the broker.
I tried several different configs inside my docker-compose file, but none worked. It just can't connect to the broker.
Here is my Dockerfile:
####
# This Dockerfile is used in order to build a container that runs the Quarkus application in JVM mode
#
# Before building the container image run:
#
# mvn package
#
# Then, build the image with:
#
# docker build -f src/main/docker/Dockerfile.jvm -t connector .
#
# Then run the container using:
#
# docker run -i --rm -p 8080:8080 connector
#
# If you want to include the debug port into your docker image
# you will have to expose the debug port (default 5005) like this: EXPOSE 8080 5005
#
# Then run the container using:
#
# docker run -i --rm -p 8080:8080 -p 5005:5005 -e JAVA_ENABLE_DEBUG="true" connector
#
###
FROM docker.internal/library/quarkus-base:latest
ARG RUN_JAVA_VERSION=1.3.8
ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'
USER root
RUN apk update && apk add libstdc++
# Configure the JAVA_OPTIONS, you can add -XshowSettings:vm to also display the heap size.
ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
#ENV QUARKUS_LAUNCH_DEVMODE=true \
# JAVA_ENABLE_DEBUG=true
# -Dquarkus.package.type=mutable-jar
# We make four distinct layers so if there are application changes the library layers can be re-used
COPY --chown=1001 target/quarkus-app/lib/ ${APP_HOME}/lib/
COPY --chown=1001 target/quarkus-app/*-run.jar ${APP_HOME}/app.jar
COPY --chown=1001 target/quarkus-app/app/ ${APP_HOME}/app/
COPY --chown=1001 target/quarkus-app/quarkus/ ${APP_HOME}/quarkus/
EXPOSE 8080
USER 1001
#ENTRYPOINT [ "/deployments/run-java.sh" ]
And here is my docker-compose:
version: '2'
services:
  zookeeper:
    container_name: zookeeper
    image: confluentinc/cp-zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
      - ZOOKEEPER_TICK_TIME=2000
    networks:
      - kafkastreams-network
  kafka:
    container_name: kafka
    image: confluentinc/cp-kafka
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      - KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
      - KAFKA_TRANSACTION_STATE_LOG_MIN_ISR=1
      - KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=1
      - KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS=100
    networks:
      - kafkastreams-network
  connect:
    container_name: connect
    image: debezium/connect
    ports:
      - "8083:8083"
    depends_on:
      - kafka
    environment:
      - BOOTSTRAP_SERVERS=kafka:29092
      - GROUP_ID=1
      - CONFIG_STORAGE_TOPIC=my_connect_configs
      - OFFSET_STORAGE_TOPIC=my_connect_offsets
      - STATUS_STORAGE_TOPIC=my_connect_statuses
    networks:
      - kafkastreams-network
  schema-registry:
    image: confluentinc/cp-schema-registry:5.5.0
    container_name: schema-registry
    ports:
      - "8081:8081"
    depends_on:
      - zookeeper
      - kafka
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:2181
    networks:
      - kafkastreams-network
  kafdrop:
    image: obsidiandynamics/kafdrop
    container_name: kafdrop
    restart: "no"
    ports:
      - "9001:9000"
    environment:
      KAFKA_BROKERCONNECT: kafka:29092
      JVM_OPTS: "-Xms16M -Xmx48M -Xss180K -XX:-TieredCompilation -XX:+UseStringDeduplication -noverify"
    depends_on:
      - kafka
      - schema-registry
    networks:
      - kafkastreams-network
  connector:
    image: connector
    depends_on:
      - zookeeper
      - kafka
      - connect
    environment:
      QUARKUS_KAFKA_STREAMS_BOOTSTRAP_SERVERS: kafka:9092
    networks:
      - kafkastreams-network
networks:
  kafkastreams-network:
    name: ks
The error I'm getting is:
2021-08-05 11:52:35,433 WARN [org.apa.kaf.cli.NetworkClient] (kafka-admin-client-thread | connector-18d10d7d-b619-4715-a219-2557d70e0479-admin) [AdminClient clientId=connector-18d10d7d-b619-4715-a219-2557d70e0479-admin] Connection to node -1 (kafka/172.21.0.3:9092) could not be established. Broker may not be available.
Am I missing any config on either the Dockerfile or the docker compose?
I figured out that there were two problems:
1. In my docker-compose file, I had to change the property KAFKA_ADVERTISED_LISTENERS to PLAINTEXT://kafka:29092,PLAINTEXT_HOST://kafka:9092
2. In my Quarkus application.properties, I had two properties pointing to the wrong place:
quarkus.kafka-streams.bootstrap-servers=localhost:9092
quarkus.kafka-streams.application-server=localhost:9999
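With those fixed, the corrected properties point at the in-network broker address instead of localhost. A sketch, assuming the service is reachable by its peers under the compose service name connector (the application-server value is only used by Kafka Streams for interactive queries, so the port here is illustrative):
# application.properties (corrected; hostnames resolve on the compose network)
quarkus.kafka-streams.bootstrap-servers=kafka:9092
quarkus.kafka-streams.application-server=connector:9999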
I have a Dockerfile and docker-compose.yml file set up, but I am not sure they are correct, and I am unable to run the stack without an error.
My Dockerfile is:
FROM golang:1.14-alpine
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN go get
RUN go run server.go
and my compose.yml is:
version: "3.5"
services:
  elasticsearch:
    container_name: "elasticsearch"
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    ports:
      - 9200:9200
  gqlgen:
    container_name: "gqlgen"
    build: ./
    restart: "on-failure"
    ports:
      - "8080:8080"
    depends_on:
      - elasticsearch
This is what the root of my folder looks like:
(Screenshot of the project root omitted.)
I tried to run docker-compose up from the root directory, and this is what I get:
panic: Get "http://127.0.0.1:9200/": dial tcp 127.0.0.1:9200: connect: connection refused
I think I am doing my setup wrong.
UPDATE:
Based on suggestions and more material that I read online, I changed my Dockerfile to:
FROM golang:1.14-alpine
RUN mkdir /app
ADD . /app
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o server .
CMD ["./server"]
and compose file:
version: "3.5"
services:
elasticsearch:
container_name: "elasticsearch"
image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- bootstrap.memory_lock=true
- cluster.initial_master_nodes=elasticsearch
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
ports:
- 9200:9200
golang:
container_name: "golang"
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
restart: unless-stopped
depends_on:
- elasticsearch
and it builds correctly now.
But I get the same issue when running docker-compose up.
panic: Get "http://elasticsearch:9200/": dial tcp 172.18.0.2:9200: connect: connection refused
You have a problem because you address Elasticsearch incorrectly.
Inside a Docker container, 127.0.0.1 refers to the container itself, so your app is trying to find Elasticsearch where there isn't one.
The correct way to reference one Docker container from another is by its service name, so in your case that would be elasticsearch.
Edit:
There is another issue with your configuration.
You are missing some vital elements of the Elasticsearch configuration.
Here is a snippet with a minimal configuration for a single-node Elasticsearch cluster.
services:
  elasticsearch:
    container_name: "elasticsearch"
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    environment:
      - node.name=elasticsearch
      - cluster.name=es-docker-cluster
      - bootstrap.memory_lock=true
      - cluster.initial_master_nodes=elasticsearch
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
Everything I wrote before is still valid. After modifying the docker-compose file, your last version, which refers to Elasticsearch via http://elasticsearch:9200, should work fine.
Edit:
As @David Maze pointed out, there is a third issue in your example.
Instead of RUN go run server.go you should have CMD go run server.go.
What you are doing is running your app during the build, when you want to run it inside the container.
The more conventional approach is to build the app, copy only the resulting binary into the container instead of the source, and run that binary inside the container.
Here is some information about that: https://medium.com/travis-on-docker/multi-stage-docker-builds-for-creating-tiny-go-images-e0e1867efe5a
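A minimal multi-stage sketch of that idea, assuming the entry point is server.go in the project root and Go modules are in use (image tags and file names here are illustrative):
# Build stage: compile a static binary.
FROM golang:1.14-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o server .

# Final stage: only the compiled binary, no source or toolchain.
FROM alpine:3.12
WORKDIR /app
COPY --from=builder /app/server .
EXPOSE 8080
CMD ["./server"]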
So the above action to replace localhost with elasticsearch is correct.
But this should only happen when you are starting your app with docker-compose.
Do not attempt to call Elasticsearch from your IDE using elasticsearch instead of localhost.
I suggest making the Elasticsearch host configurable: keep localhost for local development, and override it in the docker-compose file.
version: "3.5"
services:
  elasticsearch:
    container_name: "elasticsearch"
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    environment:
      - node.name=elasticsearch
      - cluster.name=es-docker-cluster
      - bootstrap.memory_lock=true
      - cluster.initial_master_nodes=elasticsearch
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
  golang:
    container_name: "golang"
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    restart: unless-stopped
    depends_on:
      - elasticsearch
    environment:
      - ELASTICSEARCH_HOST=elasticsearch
Here ELASTICSEARCH_HOST is an environment variable that you read in your project.
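A small sketch of reading that variable in Go, falling back to localhost for local development (the helper name esAddress and the hard-coded port are my own choices, not from the question):
package main

import (
    "fmt"
    "log"
    "os"

    "github.com/elastic/go-elasticsearch/v7"
)

// esAddress builds the Elasticsearch URL from the environment,
// falling back to localhost for local development.
func esAddress() string {
    host := os.Getenv("ELASTICSEARCH_HOST")
    if host == "" {
        host = "localhost"
    }
    return fmt.Sprintf("http://%s:9200", host)
}

func main() {
    es, err := elasticsearch.NewClient(elasticsearch.Config{
        Addresses: []string{esAddress()},
    })
    if err != nil {
        log.Fatalf("Error creating the client: %s", err)
    }
    res, err := es.Info()
    if err != nil {
        log.Fatalf("Error getting response: %s", err)
    }
    log.Println(res)
}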
I have four containers: a container for the PHP server, a container for the MySQL server, a container for nginx, and a container for Let's Encrypt.
The problem is that the PHP server can't connect to the MySQL server.
On the server, I can connect to the database on 127.0.0.1:
(Screenshot: schema update succeeding on the server.)
In the container, I can't connect to the database on 127.0.0.1:
(Screenshot: schema update failing in the PHP container.)
I think it's a network problem between the containers.
This is the docker-compose file:
version: "3.3"
services:
  saas-smd-php:
    build: ./html
    container_name: saas-smd-php
    ports:
      - "80"
    network_mode: bridge
    env_file:
      - .env
    environment:
      - VIRTUAL_PORT=80
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - dhparam:/etc/nginx/dhparam
      - certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    network_mode: bridge
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-proxy-le
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
    env_file:
      - .env
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - dhparam:/etc/nginx/dhparam
      - certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    network_mode: bridge
  db:
    container_name: saas-smd-mysql
    image: mysql
    ports:
      - "3306:3306"
    env_file:
      - .env
    volumes:
      - data:/var/lib/mysql
    network_mode: bridge
volumes:
  data:
  conf:
  vhost:
  html:
  dhparam:
  certs:
And the Dockerfile of the PHP server:
ARG PHP_VERSION=7.3
FROM php:${PHP_VERSION}-fpm-alpine
RUN apk update
RUN apk upgrade
RUN set -ex \
    && apk --no-cache add postgresql-libs postgresql-dev \
    && docker-php-ext-install pgsql pdo_pgsql \
    && apk del postgresql-dev
WORKDIR /var/www/html
COPY saas-api .
EXPOSE 80
EXPOSE 22
CMD ["php", "-S", "0.0.0.0:80", "-t", "./", "./web/app_dev.php"]
I would like the PHP server to be able to communicate with the MySQL server, without breaking access to the PHP server from a domain name through nginx and Let's Encrypt.
Since you do not provide any info on how you connect, I will go ahead and assume you are just not using your database container name: you should use saas-smd-mysql instead of localhost (for example, host=saas-smd-mysql in your PDO DSN or MySQL connection settings).
I am on a Mac with Docker Desktop version 2.0.0.3 (31259).
docker-compose up -d
Removing ab-insight_postgres_1
Starting ab-insight_data_1 ... done
Recreating 31d36fb9c48a_ab-insight_postgres_1 ... error
ERROR: for 31d36fb9c48a_ab-insight_postgres_1 Cannot start service postgres: b'driver failed programming external connectivity on endpoint ab-insight_postgres_1 (5ed1c634dd3a43c2cd988ff7f14b5c1f3cde848e375c2915cf92420f819e21ac): Error starting userland proxy: Bind for 0.0.0.0:5432 failed: port is already allocated'
ERROR: for postgres Cannot start service postgres: b'driver failed programming external connectivity on endpoint ab-insight_postgres_1 (5ed1c634dd3a43c2cd988ff7f14b5c1f3cde848e375c2915cf92420f819e21ac): Error starting userland proxy: Bind for 0.0.0.0:5432 failed: port is already allocated'
ERROR: Encountered errors while bringing up the project.
Here is my docker-compose.yml
version: '2'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    volumes:
      - /home/flask/app/web
    command: /usr/local/bin/gunicorn -w 2 -b :8000 project:app
    depends_on:
      - postgres
  nginx:
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    volumes:
      - /www/static
    volumes_from:
      - web
    depends_on:
      - web
  data:
    image: postgres:11
    volumes:
      - /var/lib/postgresql
    command: "true"
  postgres:
    restart: always
    build: ./postgresql
    volumes_from:
      - data
    expose:
      - "5432"
and here is my Dockerfile
FROM python:3.6.1
MAINTAINER Ka So <kanel.soeng@kso.com>
# Create the group and user to be used in this container
RUN groupadd flaskgroup && useradd -m -g flaskgroup -s /bin/bash flask
# Create the working directory (and set it as the working directory)
RUN mkdir -p /home/flask/app/web
WORKDIR /home/flask/app/web
# Install the package dependencies (this step is separated
# from copying all the source code to avoid having to
# re-install all python packages defined in requirements.txt
# whenever any source code change is made)
COPY requirements.txt /home/flask/app/web
RUN pip install --no-cache-dir -r requirements.txt
# Copy the source code into the container
COPY . /home/flask/app/web
RUN chown -R flask:flaskgroup /home/flask
USER flask
Running docker ps shows no containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
This is happening because Postgres is already running locally on your machine on the same port you have mapped in your docker-compose.yml for the postgres service.
Either stop the service running on your local machine (not recommended), or map a different host port to port 5432 of the container. To do so, replace the
expose:
  - "5432"
in the postgres service with the following:
ports:
  - "5433:5432"
The whole docker compose file will look like:
version: '2'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    volumes:
      - /home/flask/app/web
    command: /usr/local/bin/gunicorn -w 2 -b :8000 project:app
    depends_on:
      - postgres
  nginx:
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    volumes:
      - /www/static
    volumes_from:
      - web
    depends_on:
      - web
  data:
    image: postgres:11
    volumes:
      - /var/lib/postgresql
    command: "true"
  postgres:
    restart: always
    build: ./postgresql
    volumes_from:
      - data
    ports:
      - "5433:5432"
I have a docker-compose file where I combine nginx and a Node.js app. I need to run the shell command sudo sysctl -w net.core.somaxconn=65536 when the whole thing starts, but I can't find a clear example of how to do it.
This is my node app dockerfile:
FROM node:6.9.0
ADD . /myapi
WORKDIR /myapi
RUN npm install
EXPOSE 1337
ENTRYPOINT ["node"]
CMD ["./index.js"]
This is my nginx dockerfile:
FROM nginx
MAINTAINER me
COPY nginx.conf /etc/nginx/nginx.conf
COPY ssl-certificates/ServerCertificate.cer /var/www/ssl/ServerCertificate.cer
COPY ssl-certificates/SSLPrivateKey.key /var/www/ssl/SSLPrivateKey.key
And this is my docker compose:
version: "3"
services:
  api:
    image: myregistry/my-api:1.033
    build: ./api
    ports:
      - "1337"
    environment:
      NODE_ENV: qa
    deploy:
      replicas: 12
    networks:
      - api-network
  proxy:
    image: myregistry/my-api-proxy:1.033
    build:
      context: ./nginx
      args:
        RUNNING_MODE: prod
    ulimits:
      nproc: 65535
      nofile:
        soft: 10240
        hard: 20480
    ports:
      - "80:80"
      - "443:443"
    links:
      - api
    deploy:
      replicas: 3
    networks:
      - api-network
networks:
  api-network:
    driver: overlay
A clear example would be much appreciated, thanks.
Hey, you can use ENTRYPOINT in shell form to chain the two commands, running the sysctl before starting the app. sudo is not needed, since the node image runs as root by default, but note that the container still needs elevated privileges (e.g. privileged: true) for the write to /proc/sys to succeed:
FROM node:6.9.0
ADD . /myapi
WORKDIR /myapi
RUN npm install
EXPOSE 1337
ENTRYPOINT sysctl -w net.core.somaxconn=65536 && node ./index.js