I run DynamoDB Local and an app via docker-compose. Unfortunately, I encounter an error when querying DynamoDB from the app:
Unable to execute HTTP request: Connect to dynamodb:80 [dynamodb/172.18.0.2] failed: Connection refused (Connection refused)
Here is what the docker-compose file looks like:
version: "3"
services:
dynamodb:
image: "dynamodb-local:latest"
container_name: app-dynamodb
ports:
- "80:8000"
api:
image: "app-backend:latest"
container_name: app-api
ports:
- "5000:5000"
- "5100:5100"
environment:
- DYNAMO_HOST=dynamodb:80
Here is the Dockerfile for DynamoDB:
FROM openjdk:8-jre
ENV DYNAMODB_VERSION=latest
COPY .aws/ root/.aws/
COPY setup.sh setup.sh
COPY setup-stats.sh setup-stats.sh
RUN apt-get update && \
apt-get install -y python python-pip && \
pip --no-cache-dir install awscli && \
apt-get clean all && \
curl -O https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_${DYNAMODB_VERSION}.tar.gz && \
tar zxvf dynamodb_local_${DYNAMODB_VERSION}.tar.gz && \
rm dynamodb_local_${DYNAMODB_VERSION}.tar.gz
EXPOSE 8000
ENTRYPOINT java -Djava.library.path=. -jar DynamoDBLocal.jar --sharedDb -inMemory
UPDATE:
I'm able to connect to the DynamoDB JS shell from the host at http://localhost/shell
I'm NOT able to connect to DynamoDB from the app container:
wget dynamodb/shell
Connecting to dynamodb (172.18.0.2:80)
wget: can't connect to remote host (172.18.0.2): Connection refused
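For reference, a published mapping like "80:8000" only applies on the host; other containers on the compose network reach the service on its container port, so a check from inside the app container would target port 8000 (a sketch, reusing the wget that is already available in the image):
docker exec -it app-api sh
wget -qO- http://dynamodb:8000/shell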
The app is written in Scala and uses Scanamo to interact with DynamoDB.
The problem was the missing DYNAMO_ACCESS_KEY and DYNAMO_SECRET_KEY values.
Even though DynamoDB is local and runs in-memory, it still requires access_key and secret_key values anyway.
Furthermore, these values must NOT be empty! So set them to anything you want, like "foo" or "bar".
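For example, the dummy credentials can go straight into the api service's environment block (DYNAMO_ACCESS_KEY and DYNAMO_SECRET_KEY are the app's own variable names from above; any non-empty values will do):
    environment:
      - DYNAMO_HOST=dynamodb:80
      - DYNAMO_ACCESS_KEY=foo
      - DYNAMO_SECRET_KEY=bar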
I can see that there is no communication between the DB and the API container. The Docker links key may help here. Please find the updated docker-compose file below.
version: "3"
services:
dynamodb:
image: "dynamodb-local:latest"
container_name: app-dynamodb
ports:
- "80:8000"
api:
image: "app-backend:latest"
container_name: app-api
ports:
- "5000:5000"
- "5100:5100"
links:
- dynamodb
environment:
- DYNAMO_HOST=dynamodb:80
This may be the solution. Please let me know the status.
Well, basically I got this docker-compose.yml:
version: "3.9"
services:
# Database
db:
image: mysql:5.7
volumes:
- db_data:/var/lib/mysql
- ./schemas/mysql.sql:/data/application/init.sql
restart: always
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: 123
MYSQL_ROOT_HOST: 10.5.0.1
MYSQL_DATABASE: forgottenserver
MYSQL_PASSWORD: 123
command: --init-file /data/application/init.sql
networks:
tibia:
ipv4_address: 10.5.0.5
# phpmyadmin
phpmyadmin:
depends_on:
- db
image: phpmyadmin
restart: always
ports:
- "8090:80"
environment:
PMA_HOST: db
MYSQL_ROOT_PASSWORD: 123
networks:
tibia:
ipv4_address: 10.5.0.3
networks:
tibia:
driver: bridge
ipam:
config:
- subnet: 10.5.0.0/16
gateway: 10.5.0.1
volumes:
db_data:
and this Dockerfile:
FROM ubuntu:20.04@sha256:bffb6799d706144f263f4b91e1226745ffb5643ea0ea89c2f709208e8d70c999
ENV TZ=America/Sao_Paulo
ENV WD=/home/tibia/server
ARG DEBIAN_FRONTEND=noninteractive
RUN useradd --system --create-home --shell /bin/bash --gid root --groups sudo --uid 1001 tibia
RUN apt-get update -y && \
apt-get upgrade -y && \
apt-get install --no-install-recommends -y tzdata \
autoconf automake pkg-config build-essential cmake \
liblua5.1-0-dev libsqlite3-dev libmysqlclient-dev \
libxml2-dev libgmp3-dev libboost-filesystem-dev \
libboost-regex-dev libboost-thread-dev
USER tibia
WORKDIR $WD
COPY . .
RUN mv config.lua.dist config.lua && \
mkdir build && \
cd build && \
cmake .. && \
make -j$(grep processor /proc/cpuinfo | wc -l)
EXPOSE 7171 7172
CMD ["/bin/bash"]
The Dockerfile is just building an executable.
The problem is that if I add this to the compose file and try to run all those services, the one that uses the Dockerfile just exits and doesn't restart:
# ...
services:
  server:
    build: .
    ports:
      - "7171:7171"
      - "7172:7172"
    networks:
      tibia:
        ipv4_address: 10.5.0.4
But if I run the compose file with just the db and phpmyadmin services, and then manually run the image built from the Dockerfile using:
docker run -itd --network=3777_tibia --ip 10.5.0.4 -p 7171:7171 -p 7172:7172 3777_server
then it works like a charm! Even the network works.
Some screenshots of my Docker Desktop:
How can I make this missing service work with the docker-compose file?
NEW EDIT:
image of the logs:
Your Dockerfile specifies bash as the command to run.
When you run it via the docker-compose file, bash sees that there's no TTY, so it exits immediately and the container stops.
When you run it from the command line, you attach a TTY using the -it options. Bash then runs interactively and waits for input.
To get your container to run interactively when started from docker-compose, you need to add the stdin_open and tty options, like this:
services:
  server:
    build: .
    ports:
      - "7171:7171"
      - "7172:7172"
    stdin_open: true
    tty: true
    networks:
      tibia:
        ipv4_address: 10.5.0.4
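Once the stack is up you can attach to that interactive shell (a sketch; Ctrl-p Ctrl-q detaches again without stopping the container):
docker-compose up -d
docker attach "$(docker-compose ps -q server)"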
Your Dockerfile specifies bash as the command to run. It doesn't actually run the program you built. Since Compose is oriented towards running multiple long-running service-type containers, it's tricky to interact with an interactive shell as the main container process. You also don't usually want to start a container, then start the thing the container does; you just want to start the container and have it run the process.
Once you've built the program, set the image's CMD to run it.
CMD ["./the_program"]
With a typical C(++) program built using Make, you should be able to make install it into /usr/local where you can run it without specifying a path explicitly. You could combine this with a multi-stage build to get a much smaller image without any of the build tools or header files.
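A rough multi-stage sketch of that idea (the binary name the_program, the install path, and the runtime package list are illustrative assumptions, not taken from the actual project):
FROM ubuntu:20.04 AS build
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
    apt-get install --no-install-recommends -y build-essential cmake \
    liblua5.1-0-dev libmysqlclient-dev libboost-filesystem-dev \
    libboost-regex-dev libboost-thread-dev
WORKDIR /src
COPY . .
RUN mkdir build && cd build && cmake .. && make -j"$(nproc)"

FROM ubuntu:20.04
# only the runtime shared libraries, no compilers or headers
RUN apt-get update && \
    apt-get install --no-install-recommends -y liblua5.1-0 libmysqlclient21 \
    libboost-filesystem1.71.0 libboost-regex1.71.0 libboost-thread1.71.0 && \
    rm -rf /var/lib/apt/lists/*
COPY --from=build /src/build/the_program /usr/local/bin/the_program
EXPOSE 7171 7172
CMD ["/usr/local/bin/the_program"]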
OS: macOS Monterey, Apple M1.
Stack: Golang, Gulp, Docker, Mongo.
I don't know what the error is, and I don't know what to search for.
I have come a long way. My first question was how to solve running gcc failed exit status 1 on Mac M1?, my second follow-up question was Command `supervisorctl restart` failed with exit code using Mac M1, and now this is my third one.
After all the things I did, supervisord is now running completely fine.
There are no error logs at /go/src/github.com/sony/src/api/api_err.log or /go/src/github.com/sony/src/api/api_debug.log.
supervisord.conf
#!/bin/sh
[unix_http_server]
file=/tmp/supervisor.sock
username=admin
password=revproxy
[supervisord]
nodaemon=false
user=root
logfile=/dev/null
logfile_maxbytes=0
logfile_backups=0
loglevel=info
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
username=admin
password=revproxy
[program:sony_api]
directory=/go/src/github.com/sony/src/api
command=/go/src/github.com/sony/src/api/api
autostart=true
autorestart=true
stderr_logfile=/go/src/github.com/sony/src/api/api_err.log
stderr_logfile_maxbytes=0
stdout_logfile=/go/src/github.com/sony/src/api/api_debug.log
stdout_logfile_maxbytes=0
startsecs=0
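To double-check that the supervised program really is up, a quick diagnostic from the host could look like this (a sketch using the container name and log paths above):
docker exec -it sony_api supervisorctl -c /etc/supervisord.conf status sony_api
docker exec -it sony_api tail -n 50 /go/src/github.com/sony/src/api/api_err.log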
Dockerfile:
FROM golang:1.17.0-alpine3.14
# Install dependencies
RUN apk add --update tzdata \
--no-cache ca-certificates git wget \
nodejs npm \
g++ \
supervisor \
&& update-ca-certificates \
&& npm install -g gulp gulp-shell
RUN apk update && apk add gcc make git libc-dev binutils-gold
COPY ops/api/local/supervisor /etc
ENV PATH $PATH:/go/bin
WORKDIR /go/src/github.com/sony/src/api
docker-compose.yaml
version: '3'
services:
  ####################################################
  # title: Sony API
  # author: Kocera
  # contact: <kocera@sony.com>
  ####################################################
  sony_mongo:
    image: mongo:4.2.19-bionic
    container_name: sony_mongo
    # network_mode: "host"
    restart: always
    ports:
      - "27017:27017"
    networks:
      - sony_network
    environment:
      - MONGO_INITDB_ROOT_USERNAME=${DB_USERNAME}
      - MONGO_INITDB_ROOT_PASSWORD=${DB_PASSWORD}
    volumes:
      - mongod:/data/db
  sony_api:
    image: go-gin
    container_name: sony_api
    networks:
      - sony_network
    # network_mode: "host"
    depends_on:
      - sony_mongo
    ports:
      - "8001:8001"
    environment:
      - DB_USERNAME=${DB_USERNAME}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_ENDPOINT=${DB_ENDPOINT}
      - DB_NAME=${DB_NAME}
      - JWT_API_SECRET_KEY=${JWT_API_SECRET_KEY}
      - JWT_API_SECRET_STRING=${JWT_API_SECRET_STRING}
      - JWT_REFRESH_SECRET_KEY=${JWT_REFRESH_SECRET_KEY}
      - COOKIE_HASH_KEY=${COOKIE_HASH_KEY}
      - COOKIE_BLOCK_KEY=${COOKIE_BLOCK_KEY}
      - UI_DOMAIN=${UI_DOMAIN}
      - UI_ORIGIN=${UI_ORIGIN}
      - HELM_DIR=${HELM_DIR}
      - SMTP_HOST=${SMTP_HOST}
      - SMTP_PORT=${SMTP_PORT}
      - SMTP_USERNAME=${SMTP_USERNAME}
      - SMTP_PASSWORD=${SMTP_PASSWORD}
      - TMC_API_TOKEN=${TMC_API_TOKEN}
    volumes:
      - "../../../:/go/src/github.com/sony/"
    entrypoint:
      [
        "sh",
        "-c",
        "supervisord -c /etc/supervisord.conf && gulp"
      ]
networks:
  sony_network: null
volumes:
  mongod:
    external: true
As you can see, Docker is running fine...
When I save a Go file, it automatically downloads packages, dependencies, libraries, etc.
Now let's check the gulp watcher; perfectly fine, right?
Now let's test the API using Postman:
I'm trying to create a Symfony 5 project using Docker, with containers for MySQL, phpMyAdmin, Symfony and MailDev.
Here is my configuration in docker-compose.yml:
version: '3.7'
services:
  db:
    image: mysql:latest
    container_name: ruakh_db
    restart: always
    volumes:
      - db-data:/var/lib/mysql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
    networks:
      - dev
  phpmyadmin:
    image: phpmyadmin:latest
    container_name: ruakh_phpmyadmin
    restart: always
    depends_on:
      - db
    ports:
      - 8080:80
    environment:
      PMA_HOST: db
    networks:
      - dev
  maildev:
    image: maildev/maildev
    container_name: ruakh_mail_dev
    restart: always
    command: bin/maildev --web 80 --smtp 25 --hide-extensions STARTTLS
    ports:
      - 8081:80
    networks:
      - dev
  apache:
    build: php
    container_name: ruakh_www
    ports:
      - 80:80
    volumes:
      - ./php/vhosts:/etc/apache2/sites-enabled
      - ./:/var/www
    restart: always
    networks:
      - dev
networks:
  dev:
volumes:
  db-data:
Here is the Dockerfile used to create the server:
FROM php:8.0-apache
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
RUN apt-get update \
&& apt-get install -y --no-install-recommends locales apt-utils git libicu-dev g++ libpng-dev libxml2-dev libzip-dev libonig-dev libxslt-dev;
RUN echo "en_US.UTF8 UTF8" > /etc/locale.gen && \
echo "fr_FR.UTF-8 UTF-8" >> /etc/locale.gen && \
locale-gen
RUN curl -sSk https://getcomposer.org/installer | php -- --disable-tls && \
mv composer.phar /usr/local/bin/composer
RUN docker-php-ext-configure intl
RUN docker-php-ext-install pdo pdo_mysql gd opcache intl zip calendar dom mbstring zip gd xsl
RUN pecl install apcu && docker-php-ext-enable apcu
WORKDIR /var/www/
The issue I'm struggling with is that whenever I want to run php bin/console make:migration, it throws this error:
In AbstractMySQLDriver.php line 128: An exception occurred in driver: could not find driver
I assume that it has something to do with my .env and that my server can't manage to connect to the database.
Here is the .env:
MAILER_DSN=smtp://ruakh_mail_dev:25?verify_peer=0
DATABASE_URL="mysql://root:#ruakh_db/ruakh?serverVersion=5.7"
How can I resolve this error?
I can run queries against the database and fetch data from a controller.
But I can't run php bin/console make:migration, even though php bin/console make:entity works.
Here is config/packages/doctrine.yaml:
doctrine:
  dbal:
    url: '%env(resolve:DATABASE_URL)%'
    # IMPORTANT: You MUST configure your server version,
    # either here or in the DATABASE_URL env var (see .env file)
    #server_version: '13'
  orm:
    auto_generate_proxy_classes: true
    naming_strategy: doctrine.orm.naming_strategy.underscore_number_aware
    auto_mapping: true
    mappings:
      App:
        is_bundle: false
        type: annotation
        dir: '%kernel.project_dir%/src/Entity'
        prefix: 'App\Entity'
        alias: App
EDIT
Today I opened the project and tried again, and it seems the error has changed. Here is the error I get now:
In AbstractMySQLDriver.php line 112:
An exception occurred in driver: SQLSTATE[HY000] [2002]
php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution
I found a workaround for my problem: whenever a command involves the database, I run it from inside the Docker container, even if that's not what I was looking for. So I'll keep this post open in case someone has an answer.
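For the record, running the console from inside the container looks like this (using the apache container name from the compose file above):
docker exec -it ruakh_www php bin/console make:migration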
A better understanding of Docker will help you see why it works inside the container and not from your machine.
When you declare services in docker-compose.yml, each service gets a DNS name (here the container_name), so when you are inside one of the containers, ruakh_db is reachable; that's why your controllers are able to access the database.
But when you are outside the containers, ruakh_db has no meaning, because your machine cannot resolve that DNS name. That's why your command line won't work.
One solution is to configure your OS to make ruakh_db point at your localhost.
Doing so depends on the OS you are using, but generally it consists of adding this line to your hosts file:
127.0.0.1 ruakh_db
Follow this link for more information on how to change your hosts file depending on your OS: https://www.howtogeek.com/howto/27350/beginner-geek-how-to-edit-your-hosts-file/
You should then configure your MySQL container to expose an external port, so it's reachable from outside:
db:
  image: mysql:latest
  container_name: ruakh_db
  restart: always
  volumes:
    - db-data:/var/lib/mysql
  environment:
    MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
  networks:
    - dev
  ports:
    - 3306:3306
You also need the MySQL PDO extension installed for your machine's PHP CLI to be able to talk to your MySQL database.
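A quick host-side check could look like this (a sketch assuming Debian/Ubuntu; the package name differs per OS and PHP version):
getent hosts ruakh_db               # should print 127.0.0.1 after the hosts-file change
php -m | grep -i pdo_mysql          # "could not find driver" means this extension is missing
sudo apt-get install -y php-mysql   # installs pdo_mysql for the system PHP CLI
php bin/console make:migration      # should now reach MySQL on ruakh_db:3306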
I have a PostgreSQL container and a Swift server container. I need to pass the DB IP to the server to start it, so I created an alias for the DB in my custom bridge network. Have a look at my docker-compose.yml:
version: '3'
services:
  db:
    build: database
    image: postgres
    networks:
      mybridgenet:
        aliases:
          - mydbalias
  web:
    image: mywebserver:latest
    ports:
      - "8000:8000"
    depends_on:
      - db
    networks:
      - mybridgenet
    environment:
      WAIT_HOSTS: db:5432
networks:
  mybridgenet:
    driver: bridge
Dockerfile to build the web server:
FROM swift:4.2.1
RUN apt-get update && apt-get install --no-install-recommends -y libpq-dev uuid-dev && rm -rf /var/lib/apt/lists/*
EXPOSE 8000
WORKDIR /app
COPY client ./client
COPY Package.swift ./
COPY Package.resolved ./
COPY Sources ./Sources
RUN swift build
COPY pkg-swift-deps.sh ./
RUN chmod +x ./pkg-swift-deps.sh
RUN ./pkg-swift-deps.sh ./.build/debug/bridgeOS
FROM busybox:glibc
COPY --from=0 /app/swift_libs.tar.gz /tmp/swift_libs.tar.gz
COPY --from=0 /app/.build/debug/bridgeOS /usr/bin/
RUN tar -xzvf /tmp/swift_libs.tar.gz && \
rm -rf /tmp/*
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.2.1/wait /wait
RUN chmod +x /wait
CMD /wait && mywebserver db "10.0.2.2"
Database Dockerfile:
FROM postgres
COPY init.sql /docker-entrypoint-initdb.d/
The server is started using mybinary mydbalias. Like I said earlier, I pass the alias to start the server. While doing this, I get the following error.
message: "could not translate host name \"mydbalias\" to address: Temporary failure in name resolution\n"
What could be the problem?
UPDATE
After 4 days of a grueling hunt, I finally found the rat: the busybox container. I changed it to ubuntu:16.04 and it's a breeze now. Feeling so good about this whole conundrum. Thanks to everyone who helped.
Simplify. There is no need for your explicit network declaration (it is created automatically by docker-compose), nor for the aliases (services get their host names from their service names).
docker-compose.yml
version: '3'
services:
  db:
    build: database
    image: postgres
  web:
    image: mywebserver:latest
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      WAIT_HOSTS: db:5432
Then just use db as the hostname to connect to the database from web.
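To confirm that the service name resolves and that PostgreSQL is reachable, you can run a couple of checks with tools already present in the postgres image (a sketch):
docker-compose exec db getent hosts web       # service names resolve on the default compose network
docker-compose exec db pg_isready -h db -p 5432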
My application is dockerized. It's a Python/Django application. We are using a local SMS-sending API that is restricted by IP, so I have given them my EC2 IP address, and I am running my Docker container on this EC2 machine. But my Python app is not able to send requests to that service, because the Docker container has a different IP.
How do I solve this problem?
Dockerfile
# ToDo use alpine image
FROM python:3.6
# Build Arguments with defaults
ARG envior
ARG build_date
ARG build_version
ARG maintainer_name='Name'
ARG maintainaer_email='email@email.com'
# Adding Labels
LABEL com.example.service="Service Name" \
com.example.maintainer.name="$maintainer_name" \
com.example.maintainer.email="$maintainaer_email" \
com.example.build.enviornment="$envior" \
com.example.build.version="$build_version" \
com.example.build.release-date="$build_date"
# Create app directory
RUN mkdir -p /home/example/app
# Install Libre Office for pdf conversion
RUN apt-get update -qq \
&& apt-get install -y -q libreoffice \
&& apt-get remove -q -y libreoffice-gnome
# Cleanup after apt-get commands
RUN apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
/var/cache/apt/archives/*.deb /var/cache/apt/*cache.bin
# Activate WORKING DIR
WORKDIR /home/example/app
# Copying requirements
COPY requirements/${envior}.txt /tmp/requirements.txt
# Install the app dependencies
# ToDo Refactor requirements
RUN pip install -r /tmp/requirements.txt
# Envs
ENV DJANGO_SETTINGS_MODULE app.settings.${envior}
ENV ENVIORNMENT ${envior}
# ADD the source code and entry point into the container
ADD . /home/example/app
ADD entrypoint.sh /home/example/app/entrypoint.sh
# Making entry point executable
RUN chmod +x entrypoint.sh
# Exposing port
EXPOSE 8000
# Entry point and CMD
ENTRYPOINT ["/home/example/app/entrypoint.sh"]
docker-compose.yml
version: '3'
services:
  postgres:
    image: onjin/alpine-postgres:9.5
    restart: unless-stopped
    ports:
      - "5432:5432"
    environment:
      LC_ALL: C.UTF-8
      POSTGRES_USER: django
      POSTGRES_PASSWORD: django
      POSTGRES_DB: web
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  web:
    build:
      context: .
      args:
        environ: local
    command: gunicorn app.wsgi:application -b 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - postgres
    environment:
      DATABASE_URL: 'postgres://django:django@postgres/web'
      DJANGO_MANAGEPY_MIGRATE: 'on'
      DJANGO_MANAGEPY_COLLECTSTATIC: 'on'
      DJANGO_LOADDATA: 'off'
      DOMAIN: '0.0.0.0'
volumes:
  postgres_data:
You should try putting the container on the same network as your EC2 instance, which means using Docker's host networking.
Suggested docker-compose file:
version: '3'
services:
  postgres:
    [...]
    network_mode: host
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  web:
    [...]
    network_mode: host
volumes:
  postgres_data:
In case that doesn't work, you might define your own network with:
networks:
  appnet:
    driver: host
and connect to that network from the services:
postgres:
  [..]
  networks:
    - appnet
Further reading: the official networks reference.
The official networking tutorial is an interesting read too.
Publish the port from the Docker container to the host machine, then configure ec2IP:port in the SMS application.
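To see which source IP the SMS provider actually observes for requests coming from the container, one quick check (a sketch; it assumes outbound internet access from the container, and checkip.amazonaws.com simply echoes the caller's public IP):
docker-compose exec web python -c "import urllib.request; print(urllib.request.urlopen('https://checkip.amazonaws.com').read().decode().strip())"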