Container hostname is not resolved to IP - docker

I am trying to set the host for the connection to Neo4j in the application.conf file using an environment variable that is set in the Dockerfile. Neo4j itself runs as an image defined in the docker-compose.yml file.
When I run the application's image with the hostname of the Neo4j container, I get the error ServiceUnavailableException: Unable to connect to neo4jdb:7687, ensure the database is running and that there is a working network connection to it.
So the container's name is not resolved to an IP address. How can I fix it?
Configuration file:
neo4j {
  url = "bolt://localhost:7687"
  url = ${?HOSTNAME}
  user = "user"
  password = "password"
}
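A side note on the config: with Typesafe Config, url = ${?HOSTNAME} overrides the default only when HOSTNAME is set, and it replaces the whole value, so the variable must hold a full bolt:// URL, as the Dockerfile below does. HOSTNAME is also a name that shells commonly set on their own, so it is worth checking what the application actually sees (the container name below is a placeholder):
docker exec <app-container> printenv HOSTNAME
# expected: bolt://neo4jdb:7687, not the container ID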
Application Dockerfile:
FROM java:8

ARG ARG_CLASS
ENV MAIN_CLASS $ARG_CLASS
ENV SCALA_VERSION 2.11.8
ENV SBT_VERSION 1.1.1
ENV SPARK_VERSION 2.2.0
ENV SPARK_DIST spark-$SPARK_VERSION-bin-hadoop2.6
ENV SPARK_ARCH $SPARK_DIST.tgz
ENV HOSTNAME bolt://neo4jdb:7687

VOLUME /workdir
WORKDIR /opt

# Install Scala
RUN \
  cd /root && \
  curl -o scala-$SCALA_VERSION.tgz http://downloads.typesafe.com/scala/$SCALA_VERSION/scala-$SCALA_VERSION.tgz && \
  tar -xf scala-$SCALA_VERSION.tgz && \
  rm scala-$SCALA_VERSION.tgz && \
  echo >> /root/.bashrc && \
  echo 'export PATH=~/scala-$SCALA_VERSION/bin:$PATH' >> /root/.bashrc

# Install SBT
RUN \
  curl -L -o sbt-$SBT_VERSION.deb https://dl.bintray.com/sbt/debian/sbt-$SBT_VERSION.deb && \
  dpkg -i sbt-$SBT_VERSION.deb && \
  rm sbt-$SBT_VERSION.deb

# Install Spark
RUN \
  cd /opt && \
  curl -o $SPARK_ARCH http://d3kbcqa49mib13.cloudfront.net/$SPARK_ARCH && \
  tar xvfz $SPARK_ARCH && \
  rm $SPARK_ARCH && \
  echo 'export PATH=$SPARK_DIST/bin:$PATH' >> /root/.bashrc

EXPOSE 9851 9852 4040 9092 9200 9300 5601 7474 7687 7473

CMD /workdir/runDemo.sh "$MAIN_CLASS"
I build and run my application separately with these commands:
docker build -t container-name --build-arg ARG_CLASS=producer .
docker run -v ${PWD}/:/workdir -w /workdir container-name
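Note that a container started with a plain docker run like this is attached to the default bridge network, not to the compose network where neo4jdb lives, so the neo4jdb name cannot resolve from there. A hedged interim workaround is to join the compose network explicitly (the network's real name carries the compose project prefix, typically the directory name):
docker network ls
docker run --network=<project>_docker_elk -v ${PWD}/:/workdir -w /workdir container-name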
The docker-compose file looks like this:
version: '3.3'
services:
  kafka:
    image: spotify/kafka
    ports:
      - "9092:9092"
    networks:
      - docker_elk
    environment:
      - ADVERTISED_HOST=localhost
  neo4jdb:
    image: neo4j:latest
    ports:
      - "7474:7474"
      - "7473:7473"
      - "7687:7687"
    networks:
      - docker_elk
    volumes:
      - /var/lib/neo4j/import:/var/lib/neo4j/import
      - /var/lib/neo4j/data:/data
      - /var/lib/neo4j/conf:/conf
    environment:
      - NEO4J_dbms_active__database=graphImport.db
  elasticsearch:
    image: elasticsearch:latest
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - docker_elk
    volumes:
      - esdata1:/usr/share/elasticsearch/data
  kibana:
    image: kibana:latest
    ports:
      - "5601:5601"
    networks:
      - docker_elk
volumes:
  esdata1:
    driver: local
networks:
  docker_elk:
    driver: bridge

Defining all of the application's containers in a single docker-compose file solved it; every container's hostname now resolves:
version: '3.3'
services:
  kafka:
    image: spotify/kafka
    ports:
      - "9092:9092"
    environment:
      - ADVERTISED_HOST=localhost
  neo4jdb:
    image: neo4j:latest
    hostname: neo4jdb
    ports:
      - "7474:7474"
      - "7473:7473"
      - "7687:7687"
    volumes:
      - /var/lib/neo4j/import:/var/lib/neo4j/import
      - /var/lib/neo4j/data:/var/lib/neo4j/data
      - /var/lib/neo4j/conf:/var/lib/neo4j/conf
    environment:
      - NEO4J_dbms_active__database=graphImport.db
  elasticsearch:
    image: elasticsearch:latest
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - docker_elk
    volumes:
      - esdata1:/usr/share/elasticsearch/data
  kibana:
    image: kibana:latest
    ports:
      - "5601:5601"
    networks:
      - docker_elk
  consumer-demo:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - HOST=neo4jdb
    volumes:
      - ./:/workdir
    working_dir: /workdir
    depends_on:
      - neo4jdb
      - kafka
      - elasticsearch
      - kibana
    links:
      - neo4jdb
      - kafka
      - elasticsearch
      - kibana
  producer-demo:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - HOST=neo4jdb
    volumes:
      - ./:/workdir
    working_dir: /workdir
    depends_on:
      - neo4jdb
      - kafka
      - elasticsearch
      - kibana
    links:
      - neo4jdb
      - kafka
      - elasticsearch
      - kibana
      - consumer-demo
volumes:
  esdata1:
    driver: local
networks:
  docker_elk:
    driver: bridge
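Two hedged notes on the file above. The HOST build arg only has an effect if the Dockerfile declares it; the Dockerfile shown earlier hard-codes ENV HOSTNAME instead, so it would need something like ARG HOST plus ENV HOSTNAME bolt://$HOST:7687 to pick the value up. And name resolution is easy to verify from inside a service:
docker-compose exec consumer-demo getent hosts neo4jdb
# prints the neo4jdb container's address if compose DNS is working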

Related

How do I run an sh file with docker-compose?

If I start the container without a command or entrypoint, then enter it with docker exec -it reverse-proxy bash and run bad-bot.sh manually, it runs without any problems.
But if I run the script via command or entrypoint in docker-compose, an error occurs and the container restarts indefinitely.
bad-bot.sh:
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/conf.d/globalblacklist.conf -O /etc/nginx/conf.d/globalblacklist.conf
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/bots.d/blockbots.conf -O /etc/nginx/bots.d/blockbots.conf
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/bots.d/ddos.conf -O /etc/nginx/bots.d/ddos.conf
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/bots.d/whitelist-ips.conf -O /etc/nginx/bots.d/whitelist-ips.conf
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/bots.d/whitelist-domains.conf -O /etc/nginx/bots.d/whitelist-domains.conf
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/bots.d/blacklist-user-agents.conf -O /etc/nginx/bots.d/blacklist-user-agents.conf
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/bots.d/custom-bad-referrers.conf -O /etc/nginx/bots.d/custom-bad-referrers.conf
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/bots.d/blacklist-ips.conf -O /etc/nginx/bots.d/blacklist-ips.conf
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/bots.d/bad-referrer-words.conf -O /etc/nginx/bots.d/bad-referrer-words.conf
wget https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/conf.d/botblocker-nginx-settings.conf -O /etc/nginx/conf.d/botblocker-nginx-settings.conf
echo "include /etc/nginx/bots.d/blockbots.conf;" | tee -a /etc/nginx/vhost.d/default
echo "include /etc/nginx/bots.d/ddos.conf;" | tee -a /etc/nginx/vhost.d/default
perl -i -pe 's/^.*server_names_hash_bucket_size 128;/#$&/' /etc/nginx/conf.d/default.conf
nginx -t
nginx -s reload
Changing the entrypoint to a command does not work either:
entrypoint: bsh -c chmod 777 /etc/nginx/bad-bot-installer/bad-bot.sh && /etc/nginx/bad-bot-installer/bad-bot.sh
And changing it like this does not work either.
proxy.yml:
version: '3.9'
services:
  reverse-proxy:
    image: nginxproxy/nginx-proxy:latest
    container_name: reverse-proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs:ro
      # for bad_bot ddos
      # https://github.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/blob/master/MANUAL-CONFIGURATION.md
      - /etc/nginx/bots.d:/etc/nginx/bots.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /var/log/nginx-proxy:/var/log/nginx
      - ./Nginx/bad-bot.sh:/etc/nginx/bad-bot-installer/bad-bot.sh:rw
    command: bash -c "/etc/nginx/bad-bot-installer/bad-bot.sh"
  acme-companion:
    image: nginxproxy/acme-companion:latest
    depends_on:
      - reverse-proxy
    container_name: acme-companion
    restart: always
    environment:
      NGINX_PROXY_CONTAINER: reverse-proxy
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs:rw
      - acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
  conf:
    driver: local
  vhost:
    driver: local
  certs:
    driver: local
  acme:
    driver: local
  html:
    driver: local
networks:
  default:
    name: my-network
The error when running docker exec -it reverse-proxy /bin/bash:
Error response from daemon: Container e8179bf5071d875a26d0ce17cad5290585991b71dce3832bb10a7b690ef99154 is restarting, wait until the container is running
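Two hedged observations. First, the entrypoint attempt above spells bsh instead of bash, which would fail on its own. Second, command replaces the image's normal startup, so the container runs bad-bot.sh, the script exits, and restart: always starts it over, which matches the restart loop in the error. A sketch of chaining the script into the image's default startup (forego start -r is what the stock nginx-proxy image's CMD runs; verify against the image's Dockerfile before relying on it):
command: bash -c "/etc/nginx/bad-bot-installer/bad-bot.sh && exec forego start -r"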

Can't mount a volume in Docker Compose correctly

First, I followed this guide step by step: https://docs.docker.com/language/golang/develop/
It works perfectly.
Then I tried the same with my own Golang project. It needs not only the db in a volume but also 'assets' and 'creds' directories, which I was able to provide with a normal Dockerfile and the --mount flag in the docker run command.
So my scheme was:
Create a volume 'roach'.
Create a temp container and copy the folders into the volume:
docker container create --name temp -v roach:/data busybox
docker cp assets/ temp:/data
docker rm temp
Run my container with
docker run -it --rm \
--mount 'type=volume,src=roach,dst=/usr/data' \
--network mynet \
--name postgres-server \
-p 80:8080 \
-e PGUSER=totoro \
-e PGPASSWORD=myfriend \
-e PGHOST=db \
-e PGPORT=26257 \
-e PGDATABASE=mydb \
postgres-server
The Go files have access to /usr/data/my_folders.
BTW, here is the Dockerfile:
# syntax=docker/dockerfile:1
FROM golang:1.18-buster AS build
WORKDIR /app
COPY go.mod .
RUN go mod download
COPY . .
RUN go mod tidy
RUN go build -o /t main/main.go main/inst_list.go
## Deploy
FROM gcr.io/distroless/base-debian10
ENV GO111MODULE=on
ENV GOOGLE_APPLICATION_CREDENTIALS='/usr/data/credentials/creds.json'
WORKDIR /
COPY --from=build /t /t
EXPOSE 8080
USER root:root
ENTRYPOINT ["/t"]
================================================================
Then I tried to write a docker-compose.yml file like the one at the end of that guide.
Compose has no --mount flag, but I found plenty of ways to specify a mount path.
I tried many more, but left 3 variants in the code below (2 of the 3 are commented out):
version: '3.8'
services:
  docker-t-roach:
    depends_on:
      - roach
    build:
      context: .
    container_name: postgres-server
    hostname: postgres-server
    networks:
      - mynet
    ports:
      - 80:8080
    environment:
      - PGUSER=${PGUSER:-totoro}
      - PGPASSWORD=${PGPASSWORD:?database password not set}
      - PGHOST=${PGHOST:-db}
      - PGPORT=${PGPORT:-26257}
      - PGDATABASE=${PGDATABASE-mydb}
    deploy:
      restart_policy:
        condition: on-failure
  roach:
    image: cockroachdb/cockroach:latest-v20.1
    container_name: roach
    hostname: db
    networks:
      - mynet
    ports:
      - 26257:26257
      - 8080:8080
    volumes:
      # - type: volume
      #   source: roach
      #   target: /usr/data
      - roach:/usr/data
      # - "${PWD}/cockroach-data/roach:/usr/data"
    command: start-single-node --insecure
volumes:
  roach:
networks:
  mynet:
    driver: bridge
and it still doesn't work. Moreover, it creates 2 volumes: 'roach' and 'WORKDIRNAME_roach'. I actually tried copying my folders into both of them; it's not working. The output of the build command is always like this:
postgres-server | STARTED AT
postgres-server | Sep 4 10:43:10
postgres-server | lstat /usr/data/assets/current_batch: no such file or directory
postgres-server | 2022/09/04 10:43:10 lstat /usr/data/assets/current_batch: no such file or directory
(the first 2 lines are produced by my Go files; 'assets' is the folder I'm copying)
I think I'm looking in the wrong place: maybe the way I copy folders doesn't work with this kind of build?
UPDATE:
At the same time, the command
docker run -it --rm -v roach:/data ubuntu ls /data/usr
shows that my folders are there. But the container is stuck in some kind of cycle that doesn't let it see them.
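A hedged explanation for the duplicate volumes: docker-compose prefixes named volumes with the project (directory) name, so the service mounts WORKDIRNAME_roach while docker run -v roach:/data looks at the unprefixed roach that was seeded by hand. Declaring the volume external makes compose use the existing one:
volumes:
  roach:
    external: true   # mount the pre-existing 'roach', not '<project>_roach'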
Mihai tried to help, and at first I didn't understand what he meant: I had to add the volume to my app service as well. I did that and now it works. In the example below I gave the db and app volumes different names just for clarity:
version: '3.8'
services:
  docker-parser:
    depends_on:
      - roach
    build:
      context: .
    container_name: parser
    hostname: parser
    networks:
      - mynet
    ports:
      - 80:8080
    volumes:
      - assets:/data
    environment:
      - PGUSER=${PGUSER:-totoro}
      - PGPASSWORD=${PGPASSWORD:?database password not set}
      - PGHOST=${PGHOST:-db}
      - PGPORT=${PGPORT:-26257}
      - PGDATABASE=${PGDATABASE-mydb}
    deploy:
      restart_policy:
        condition: on-failure
  roach:
    image: cockroachdb/cockroach:latest-v20.1
    container_name: roach
    hostname: db
    networks:
      - mynet
    ports:
      - 26257:26257
      - 8080:8080
    volumes:
      - roach:/db
    command: start-single-node --insecure
volumes:
  assets:
  roach:
networks:
  mynet:
    driver: bridge
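The temp-container trick from the top of the question still works for seeding the new assets volume; just target the name compose actually creates (the project prefix below is a placeholder):
docker-compose up -d
docker container create --name temp -v <project>_assets:/data busybox
docker cp assets/. temp:/data
docker rm temp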

'Could not find rake-13.0.3 in any of the sources (Bundler::GemNotFound)' while creating my api service

docker-compose.yml
version: "3.7"
services:
courseshine_redis:
container_name: courseshine_redis
image: redis:latest
command: redis-server --requirepass ${POSTGRES_PASSWORD}
restart: always
env_file: .env
stdin_open: true
ports:
- ${REDIS_PORT}:${REDIS_PORT}
volumes:
- courseshine_redis_data:/data
networks:
- internal
courseshine_db:
container_name: courseshine_db
build:
context: ../..
dockerfile: courseshine_docker/development/courseshine_db/Dockerfile
restart: always
env_file: .env
environment:
- POSTGRES_MULTIPLE_DATABASES=${POSTGRES_DEV_DB},${POSTGRES_TEST_DB}
ports:
- ${COURSESHINE_DB_PORT}:${COURSESHINE_DB_PORT}
volumes:
- courseshine_postgres_data:/var/lib/postgresql/data
- ./courseshine_db:/dockerfile-entrypoint-initdb.d
networks:
- internal
courseshine_pgadmin:
container_name: courseshine_pgadmin
image: dpage/pgadmin4:4.21
restart: unless-stopped
env_file: .env
environment:
- PGADMIN_DEFAULT_EMAIL=${POSTGRES_USER}
- PGADMIN_DEFAULT_PASSWORD=${POSTGRES_PASSWORD}
volumes:
- pgadmin:/var/lib/pgadmin
- courseshine_postgres_data:/var/lib/postgresql/data
depends_on:
- courseshine_db
networks:
- internal
courseshine_api: &api_base
container_name: courseshine_api
build:
context: ../..
dockerfile: courseshine_docker/development/courseshine_api/Dockerfile
env_file: .env
stdin_open: true
volumes:
- ../../courseshine_api:/var/www/courseshine/courseshine_api
- /var/run/docker.sock:/var/run/docker.sock
- bundle_cache:/usr/local/bundle
depends_on:
- courseshine_redis
- courseshine_db
networks:
- internal
courseshine_ui:
container_name: courseshine_ui
build:
context: ../../
dockerfile: courseshine_docker/development/courseshine_ui/Dockerfile
env_file: .env
stdin_open: true
volumes:
- ../../courseshine_ui:/var/www/courseshine_ui
depends_on:
- courseshine_api
networks:
- internal
networks:
internal:
volumes:
courseshine_redis_data:
courseshine_postgres_data:
pgadmin:
bundle_cache:
My Dockerfile for the courseshine_api service:
FROM ruby:2.7.1-slim-buster
RUN apt-get update -qq && apt-get install -y build-essential nodejs libpq-dev postgresql-client && rm -rf /var/lib/apt/lists/*
ENV APP_HOME /var/www/courseshine/courseshine_api
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
COPY ./courseshine_api/Gemfile $APP_HOME/Gemfile
COPY ./courseshine_api/Gemfile.lock $APP_HOME/Gemfile.lock
RUN bundle install --path vendor/cache
# Copy the main application.
COPY ./courseshine_api $APP_HOME
# Add a script to be executed every time the container starts.
COPY ./courseshine_docker/development/courseshine_api/entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
# Expose port 3000 to the Docker host, so we can access it
# from the outside.
EXPOSE 3000
# The main command to run when the container starts. Also
# tell the Rails dev server to bind to all interfaces by
# default.
CMD ["rails","server","-b","0.0.0.0"]
entrypoint.sh
#!/bin/bash
set -e
rm -f $APP_HOME/tmp/pids/server.pid
exec "$@"   # hand off to the image's CMD (rails server)
When I run docker-compose up, the courseshine_api service does not start and throws Could not find rake-13.0.3 in any of the sources (Bundler::GemNotFound). Why does this problem occur and how can I fix it?
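A hedged diagnosis: the Dockerfile installs gems with bundle install --path vendor/cache, i.e. into $APP_HOME/vendor/cache, but the compose file then bind-mounts ../../courseshine_api over $APP_HOME, hiding everything installed at build time. Installing into the system path instead, which the bundle_cache volume already persists, may fix it; a sketch:
# in the Dockerfile, drop --path so gems land in /usr/local/bundle
RUN bundle install
# then rebuild and refresh the volume:
docker-compose build courseshine_api
docker-compose run --rm courseshine_api bundle install
docker-compose up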

How to set up separate .env for development and production using Docker

Coming from an environment where I manually ssh'd into the remote server, did a git pull, and created my .env (since it is gitignored), how do I separate a development .env from a production .env? I used docker-machine to create an AWS EC2 instance. I created a production.yml and ran docker-compose -f production.yml up -d. The container on the EC2 machine picked up my development .env, which is not what I want.
Dockerfile
FROM python:3.6-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev git jpeg-dev zlib-dev libmagic
RUN python -m pip install --upgrade pip
RUN mkdir /writer-api
COPY requirements.txt /writer-api/
RUN pip install --no-cache-dir -r /writer-api/requirements.txt
COPY . /writer-api/
WORKDIR /writer-api
production.yml
version: "3"
services:
postgres:
restart: always
image: postgres
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data/
web:
restart: always
build: .
command: gunicorn writer.wsgi:application -w 2 -b :8000
environment:
DEBUG: ${DEBUG}
SECRET_KEY: ${SECRET_KEY}
DB_HOST: ${DB_HOST}
DB_NAME: ${DB_NAME}
DB_USER: ${DB_USER}
DB_PORT: ${DB_PORT}
DB_PASSWORD: ${DB_PASSWORD}
SENDGRID_API_KEY: ${SENDGRID_API_KEY}
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
AWS_STORAGE_BUCKET_NAME: ${AWS_STORAGE_BUCKET_NAME}
depends_on:
- postgres
- redis
expose:
- "8000"
redis:
restart: always
image: "redis:alpine"
celery:
restart: always
build: .
command: celery -A writer worker -l info
volumes:
- .:/writer-api
depends_on:
- postgres
- redis
celery-beat:
restart: always
build: .
command: celery -A writer beat -l info
volumes:
- .:/writer-api
depends_on:
- postgres
- redis
nginx:
restart: always
build: ./nginx/
ports:
- "80:80"
depends_on:
- web
volumes:
pgdata:
I guess you can export a shell environment variable and then pick the .env file per environment. Create a dev.env and a prod.env file in the workspace.
Sample compose:
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - '80'
    env_file:
      - ${ENVIRON}.env
Build for DEV:
export ENVIRON=dev
docker-compose up -d
Build for PROD:
export ENVIRON=prod
docker-compose up -d
This way you can use the same compose file for both the DEV and PROD environments.
Alternatively, set up the compose files for production and dev in separate folders and put a .env file in each of those folders.
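A sketch of that layout, assuming docker-compose is run from inside each folder so it picks up the sibling .env (newer Compose versions also accept an explicit --env-file flag):
project/
  dev/
    docker-compose.yml
    .env          # development values
  prod/
    docker-compose.yml
    .env          # production values
cd prod && docker-compose up -d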

Docker compose file with yarn monorepo

I'm pretty new to Docker, but I'm trying to get docker-compose to handle my local dev environment. Essentially, when I run docker-compose up it should have my api running on port 3000.
This is my current docker-compose.yml file:
version: "3"
services:
api:
image: node:9
ports:
- 127.0.0.1:3000:3000
working_dir: /api
volumes:
- ./:/api
command: bash -c 'yarn && cd api && yarn dev'
mongo:
image: mongo:3.4
ports:
- 127.0.0.1:27017:27017
volumes:
- ./db:/data/db
minio:
image: minio/minio
ports:
- 9000:9000
environment:
- MINIO_ACCESS_KEY=miniokey
- MINIO_SECRET_KEY=miniosecret
volumes:
- ./minio:/data
command: ["server", "/data"]
createbuckets:
image: minio/mc
depends_on:
- minio
entrypoint: >
/bin/sh -c "
while ! /usr/bin/nc minio 9000; do sleep 2s; done;
/usr/bin/mc config host add myminio http://minio:9000 miniokey miniosecret;
/usr/bin/mc mb myminio/vividaura;
/usr/bin/mc policy download myminio/vividaura;
/usr/bin/mc mb myminio/vividaura-test;
/usr/bin/mc policy download myminio/vividaura-test;
exit 0;
"
nats:
image: nats:1.1.0-linux
ports:
- 127.0.0.1:4222:4222
- 127.0.0.1:8222:8222
The thing is, I'm using Yarn's workspaces feature, so I need to run yarn in the root directory and then run yarn inside /api.
This is my folder structure:
> /api
> /image-compose
> /src
> docker-compose.yml
> package.json
I got it working; the issues were with the ports. This is my updated docker-compose.yml file:
version: "3"
services:
web:
image: node:9
ports:
- 3000:3000
working_dir: /app
volumes:
- ./:/app
command: bash -c 'yarn && cd src && yarn dev'
depends_on:
- api
api:
image: node:9
ports:
- 3001:3001
working_dir: /api
volumes:
- ./:/api
command: bash -c 'yarn && cd api && yarn dev'
depends_on:
- mongo
- nats
mongo:
image: mongo:3.4
ports:
- 127.0.0.1:27017:27017
volumes:
- ./db:/data/db
minio:
image: minio/minio
ports:
- 9000:9000
environment:
- MINIO_ACCESS_KEY=miniokey
- MINIO_SECRET_KEY=miniosecret
volumes:
- ./minio:/data
command: ["server", "/data"]
createbuckets:
image: minio/mc
depends_on:
- minio
entrypoint: >
/bin/sh -c "
while ! /usr/bin/nc minio 9000; do sleep 2s; done;
/usr/bin/mc config host add myminio http://minio:9000 miniokey miniosecret;
/usr/bin/mc mb myminio/vividaura;
/usr/bin/mc policy download myminio/vividaura;
/usr/bin/mc mb myminio/vividaura-test;
/usr/bin/mc policy download myminio/vividaura-test;
exit 0;
"
nats:
image: nats:1.1.0-linux
ports:
- 127.0.0.1:4222:4222
- 127.0.0.1:8222:8222
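One hedged refinement for a bind-mounted monorepo like this: put node_modules in a named volume so installs inside the containers don't fight with the host checkout (the volume name below is hypothetical):
  api:
    image: node:9
    volumes:
      - ./:/api
      - api_node_modules:/api/node_modules   # shields installs from the host bind mount
volumes:
  api_node_modules: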
