Links in docker-compose are not resolved - docker

docker-compose.yml
version: '2'
services:
  redis:
    image: redis
    ports:
      - "6379"
  myapp:
    build: ./myapp
    ports:
      - "80:80"
    links:
      - redis
    depends_on:
      - redis
Dockerfile of myapp
FROM <BASE_IMAGE>
EXPOSE 80
CMD ["curl", "http://redis"]
When I run docker-compose up -d --build and then docker run prefix_myapp_1, I get a 'Host is not resolved' error after 4-5 seconds.
If I change the Dockerfile, say, as follows:
FROM <BASE_IMAGE>
EXPOSE 80
CMD sleep 1000000000
then run docker exec -it prefix_myapp_1 bash and try curl http://redis, the host is resolved successfully.
Where does the problem lie?

Related

Docker compose django, nginx, daphne expose internet in EC2

Hello, I need to expose my containers to the internet. I read that this is done with iptables, and it works within the instance, but I can't access it from the internet at the moment. This is my configuration:
EC2 instance:
Ubuntu
Docker versions:
Docker version 20.10.7, build f0df350
docker-compose version 1.25.0, build unknown
docker-compose.yml
version: '3.1'
services:
  redis:
    image: redis
    command: redis-server --port 6379
    ports:
      - '6379:6379'
    environment:
      - REDIS_MODE="LRU"
      - REDIS_MAXMEMORY="800mb"
  server:
    build: .
    command: daphne rubrica.asgi:application --port 8000 --websocket_timeout -1 --bind 0.0.0.0 -v2
    volumes:
      - .:/src
      - ${AWS_CREDENTIALS_PATH}:/root/.aws/
    ports:
      - "8000:8000"
    links:
      - redis
    env_file:
      - .env
  nginx:
    image: nginx
    volumes:
      - ./nginx/:/etc/nginx/
    links:
      - server
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - server
    command: ["nginx", "-g", "daemon off;"]
networks:
  docker-network:
    driver: networktest
network
sudo docker create --name networktest --network <instance ip> --publish 8080:80 --publish 443:443 nginx:latest
and this is the rules in my instance
If someone can help me figure out what I need, or what I'm not seeing in this configuration, I'd appreciate it.

Cannot connect from asgi app (FastApi) to Datadog Agent running on Docker

I am trying to connect to datadog from my fastapi backend. I am currently trying to do this on localhost using a docker-compose file to let both my datadog-agent and my backend-container run in the same network.
Here is a minimal example:
dd-minimal
- docker-compose.yml
- backend-client
  - Dockerfile
  - app
    - main.py
docker-compose.yml
version: "3.7"
networks:
  my_network:
services:
  datadog:
    image: datadog/agent:latest
    environment:
      DD_API_KEY: <my-api-key>
      DD_APM_ENABLED: 'true'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /proc/:/host/proc/:ro
      - /sys/fs/cgroup:/host/sys/fs/cgroup:ro
    ports:
      - "8126:8126/tcp"
    networks:
      - my_network
  backend-web-client:
    image: gql-backend-api
    build:
      dockerfile: Dockerfile
      context: ./backend-client
    environment:
      DD_TRACE_ANALYTICS_ENABLED: 'true'
      DD_AGENT_HOST: 172.21.0.2
    ports:
      - "5555:8080"
    networks:
      - my_network
    depends_on:
      - datadog
Dockerfile
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8-slim
COPY ./app /app
RUN pip install ddtrace==0.41.0
CMD exec ddtrace-run gunicorn --bind :8080 --workers 1 --threads 8 --timeout 0 main:api -k uvicorn.workers.UvicornWorker
main.py
import os

import uvicorn
from fastapi import FastAPI

api = FastAPI()

if __name__ == "__main__":
    uvicorn.run(api, host="127.0.0.1", port=int(os.environ.get("PORT", 8080)))
I run docker-compose up and then check the ip of my dd-container with
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' dd-minimal_datadog_1
and update it in the compose file.
When I then again run docker-compose up, I get the following error
- DATADOG TRACER DIAGNOSTIC - Agent not reachable. Exception raised: [Errno 111] Connection refused.
Any help would be much appreciated.
You may need to set the environment variable DD_APM_NON_LOCAL_TRAFFIC=true in your datadog agent container.
Ref: https://docs.datadoghq.com/agent/docker/apm/?tab=linux#docker-apm-agent-environment-variables
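Applied to the compose file above, the suggestion would look roughly like this (and, as an additional assumption not stated in the answer, the service name datadog could replace the hardcoded container IP in DD_AGENT_HOST, since containers on a shared compose network resolve each other's service names via DNS):

```yaml
services:
  datadog:
    environment:
      DD_API_KEY: <my-api-key>
      DD_APM_ENABLED: 'true'
      DD_APM_NON_LOCAL_TRAFFIC: 'true'   # accept APM traffic from other containers
  backend-web-client:
    environment:
      DD_AGENT_HOST: datadog             # compose DNS resolves the service name
```

Using the service name also avoids having to re-inspect and update the agent's IP on every docker-compose up.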

How to run and access web server running inside a Docker container

I have trouble understanding how Docker handles things.
I am trying to run a Node web server for development purposes. I have it defined in a docker-compose.yml and everything works fine when I run it from there. But when I run it manually from inside the container, it can't be reached from outside.
e.g : this is working fine
node:
  image: node:10.15-stretch
  tty: true
  command: bash -c "./node_modules/.bin/encore dev-server --host 0.0.0.0 --public http://dev.local:8080 --port 8080 --disable-host-check --hot"
  working_dir: /var/www/
  volumes:
    - ${PATH_SOURCE}:/var/www/
  ports:
    - 8080:8080
The files are now accessible from http://dev.local:8080!
But I would prefer to run it manually, only when I need it...
So I removed the command from the docker-compose.yml and ran it from inside the container:
node:
  image: node:10.15-stretch
  tty: true
  working_dir: /var/www/
  volumes:
    - ${PATH_SOURCE}:/var/www/
  ports:
    - 8080:8080
docker-compose run node bash
root@1535e3c963cc:/var/www/# ./node_modules/.bin/encore dev-server --host 0.0.0.0 --public http://dev.local:8080 --port 8080 --disable-host-check --hot
The process is running fine, but the files are not accessible from http://dev.local:8080...
I am sure there is something about Docker I am missing, but I can't find what...
Thanks for your help.
EDIT:
here the full config
version: '3'
services:
  apache:
    image: httpd
    volumes:
      - ${PATH_SOURCE}/.docker/conf/apache/httpd.conf:/usr/local/apache2/conf/httpd.conf
      - ${PATH_SOURCE}/.docker/conf/apache/httpd-vhosts.conf:/usr/local/apache2/conf/extra/httpd-vhosts.conf
      - ${PATH_SOURCE}:/var/www/sadc/alarm
    ports:
      - 80:80
      - 443:443
    restart: always
    depends_on:
      - php
      - postgres
  php:
    build: .docker
    restart: always
    ports:
      - 9000:9000
    volumes:
      - ${PATH_SOURCE}/.docker/conf/php/php.ini:/etc/php/7.1/cli/php.ini
      - ${PATH_SOURCE}/.docker/conf/php/php.ini:/etc/php/7.1/fpm/php.ini
      - ${PATH_SOURCE}:/var/www/sadc/alarm
    environment:
      - PGDATESTYLE=ISO,DMY
    working_dir: /var/www/sadc/alarm
  postgres:
    image: mdillon/postgis:10
    restart: always
    environment:
      - POSTGRES_DB=${PG_DATABASE}
      - POSTGRES_USER=${PG_USERNAME}
      - POSTGRES_PASSWORD=${PG_PASSWORD}
      - PGDATESTYLE=ISO,DMY
    ports:
      - 5432:5432
    volumes:
      - sadc-alarm-pgdata:/var/lib/postgresql/data
      - ${PATH_SOURCE}:/var/www/sadc/alarm
      - ${PATH_SOURCE}/.docker/conf/postgres/initdb.sql:/docker-entrypoint-initdb.d/initdb.sql
  node:
    image: node:10.15-stretch
    tty: true
    working_dir: /var/www/sadc/alarm
    volumes:
      - ${PATH_SOURCE}:/var/www/sadc/alarm
    ports:
      - 8080:8080
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c6a394453de4 node:10.15-stretch "node" 2 hours ago Up 50 seconds 0.0.0.0:8080->8080/tcp alarm_node_1
5dcc8b936b58 httpd "httpd-foreground" 21 hours ago Up 49 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp alarm_apache_1
bb616453d0cc alarm_php "/bin/sh -c '/usr/sb…" 21 hours ago Up 49 seconds 0.0.0.0:9000->9000/tcp alarm_php_1
3af75f3a3716 mdillon/postgis:10 "docker-entrypoint.s…" 28 hours ago Up 49 seconds 0.0.0.0:5432->5432/tcp
EDIT 2
The problem is with the "docker-compose run" method...
When I run the following using "docker exec", it works:
docker exec -it node_alarm_1 bash
FINAL EDIT
OK.
So I misused the "docker-compose run" method. It is the "docker-compose exec" method that should be used, because it reuses the running container, whose ports are correctly mapped. "docker-compose run" instead starts a new container without the mappings...
docker-compose run doesn't seem to respect the port publishing described in the docker-compose.yml file.
To fix your issue, do the following:
docker-compose run -p 8080:8080 node bash
or
docker-compose run --service-ports node bash

Docker for Mac | Docker Compose | Cannot access containers using localhost

I've been trying to figure out why I cannot access containers using "localhost:3000" from the host. I've tried installing Docker via Homebrew, as well as with the Docker for Mac installer. I believe I have the docker-compose file configured correctly.
Here is the output from docker-compose ps
Name Command State Ports
--------------------------------------------------------------------------------------------------------------------
ecm-datacontroller_db_1 docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp
ecm-datacontroller_kafka_1 supervisord -n Up 0.0.0.0:2181->2181/tcp, 0.0.0.0:9092->9092/tcp
ecm-datacontroller_redis_1 docker-entrypoint.sh redis ... Up 0.0.0.0:6379->6379/tcp
ecm-datacontroller_web_1 npm start Up 0.0.0.0:3000->3000/tcp
Here is my docker-compose.yml
version: '2'
services:
  web:
    ports:
      - "3000:3000"
    build: .
    command: npm start
    env_file: .env
    depends_on:
      - db
      - redis
      - kafka
    volumes:
      - .:/app/user
  db:
    image: postgres:latest
    ports:
      - "5432:5432"
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
  kafka:
    image: heroku/kafka
    ports:
      - "2181:2181"
      - "9092:9092"
I cannot access any of the ports exposed by docker-compose. With curl localhost:3000 I get the following result:
curl: (52) Empty reply from server
I should be getting {"hello":"world"}.
Dockerfile:
FROM heroku/heroku:16-build
# Which version of node?
ENV NODE_ENGINE 10.15.0
# Locate our binaries
ENV PATH /app/heroku/node/bin/:/app/user/node_modules/.bin:$PATH
# Create some needed directories
RUN mkdir -p /app/heroku/node /app/.profile.d
WORKDIR /app/user
# Install node
RUN curl -s https://s3pository.heroku.com/node/v$NODE_ENGINE/node-v$NODE_ENGINE-linux-x64.tar.gz | tar --strip-components=1 -xz -C /app/heroku/node
# Export the node path in .profile.d
RUN echo "export PATH=\"/app/heroku/node/bin:/app/user/node_modules/.bin:\$PATH\"" > /app/.profile.d/nodejs.sh
ADD package.json /app/user/
RUN /app/heroku/node/bin/npm install
ADD . /app/user/
EXPOSE 3000
Anyone have any ideas?
Ultimately, I ended up having a service that was listening on 127.0.0.1 instead of 0.0.0.0. Updating this resolved the connectivity issue I was having.
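To illustrate the fix (a minimal sketch, not the poster's actual server): a service bound to 127.0.0.1 only accepts connections originating inside the container itself, while one bound to 0.0.0.0 listens on every interface, which is what Docker's port mapping needs to forward traffic in:

```python
import socket

# Bind a throwaway TCP server to all interfaces (0.0.0.0), as a
# containerized service must do to be reachable via Docker's port mapping.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 0))  # port 0 lets the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

# A loopback client still reaches it, since 0.0.0.0 includes 127.0.0.1;
# a server bound to 127.0.0.1 would instead refuse external interfaces.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
conn, _ = server.accept()
conn.close()
client.close()
server.close()
print("bound to 0.0.0.0, reachable on port", port)
```

In the poster's case the change was in the Node server's listen call, but the principle is the same in any language.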

How to build drone image for ARMv7?

First, I downloaded the drone source:
go get github.com/drone/drone/cmd/...
Second, I built the binary for ARM:
GOARM=7 go build -o release/drone-server github.com/drone/drone/cmd/drone-server
After that, I built the Docker image:
docker -f ./go-workspace/src/github.com/drone/drone/Dockerfile build -t drone/drone .
The docker file looks like so:
# docker build --rm -t drone/drone .
FROM drone/ca-certs
EXPOSE 8000 9000 80 443
ENV DATABASE_DRIVER=sqlite3
ENV DATABASE_CONFIG=/var/lib/drone/drone.sqlite
ENV GODEBUG=netdns=go
ENV XDG_CACHE_HOME /var/lib/drone
ADD release/drone-server /bin/
ENTRYPOINT ["/bin/drone-server"]
That's my docker-compose.yml:
version: '2'
services:
  drone-server:
    image: drone/drone:latest
    ports:
      - 8000:8000
      - 9000:9000
    volumes:
      - /var/lib/drone:/var/lib/drone
    restart: always
    env_file:
      - /etc/drone/server.env
  drone-agent:
    image: drone/agent:linux-arm
    command: agent
    depends_on:
      - drone-server
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: always
    env_file:
      - /etc/drone/agent.env
The agent.env file:
DRONE_SECRET=xxx
DRONE_SERVER=[server-hostname]:9000
DOCKER_ARCH=arm
The server.env file:
# Service settings
DRONE_SECRET=xxx
DRONE_HOST=https://[server-hostname]/drone
DRONE_OPEN=false
DRONE_GOGS=true
DRONE_GOGS_URL=https://[server-hostname]/git
DRONE_GOGS_PRIVATE_MODE=true
However, when running docker-compose -f /etc/drone/docker-compose.yml up, I get the following error:
drone-server_1 | standard_init_linux.go:190: exec user process caused "no such file or directory"
And the drone-server exits with code 1.
I configured Apache to reach drone through a proxy as described here: http://readme.drone.io/0.5/install/setup/apache/
Any help is appreciated.