Boto3 timeout connecting to local dynamodb but can curl - docker

I have been attempting to follow various instructions and troubleshooting guides to get one Docker container to connect, via boto3, to another Docker container running DynamoDB Local. References/troubleshooting so far:
In Compose, you can just rely on the automatic networking. Incidentally, I also tried explicitly specifying a shared network, but could not get that to work (see the sketch after this list).
Make sure to use the service name from Compose, not localhost.
GitHub repo with a template
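For reference, explicitly declaring a shared network in Compose would look roughly like this; a minimal sketch, where the network name app-net is illustrative rather than taken from my actual setup:
version: "3.3"
services:
  db:
    image: amazon/dynamodb-local
    networks:
      - app-net
  app:
    image: min-example:latest
    networks:
      - app-net
networks:
  app-net:
    driver: bridge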
Dockerfile (docker build -t min-example:latest .):
FROM python:3.8
RUN pip install boto3
RUN mkdir /app
COPY min_example.py /app
WORKDIR /app
Docker compose (min-example.yml):
version: "3.3"
services:
db:
container_name: db
image: amazon/dynamodb-local
ports:
- "8000:8000"
app:
image: min-example:latest
container_name: app
depends_on:
- db
min_example.py
import boto3

if __name__ == '__main__':
    ddb = boto3.resource('dynamodb',
                         endpoint_url='http://db:8000',
                         region_name='dummy',
                         aws_access_key_id='dummy',
                         aws_secret_access_key='dummy')
    existing_tables = [t.name for t in list(ddb.tables.all())]
    print(existing_tables)
    existing_tables = [t.name for t in list(ddb.tables.all())]
    print(existing_tables)
Run with:
docker-compose -f min-example.yml run app python min_example.py
It hangs on the ddb.tables.all() call, and times out with the error:
botocore.exceptions.ReadTimeoutError: Read timeout on endpoint URL: "http://db:8000/"
Interestingly, I can curl:
docker-compose -f min-example.yml run app curl http://db:8000/
{"__type":"com.amazonaws.dynamodb.v20120810#MissingAuthenticationToken","message":"Request must contain either a valid (registered) AWS access key ID or X.509 certificate."}
Which suggests the containers can communicate.
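While debugging, it can help to fail fast instead of waiting out botocore's default 60-second read timeout. A minimal sketch reusing the dummy credentials above; the 3-second timeouts and zero retries are arbitrary choices, not required values:
import boto3
from botocore.config import Config

# Shorten timeouts and disable retries so the failure surfaces quickly
cfg = Config(connect_timeout=3, read_timeout=3, retries={'max_attempts': 0})
ddb = boto3.resource('dynamodb',
                     endpoint_url='http://db:8000',
                     region_name='dummy',
                     aws_access_key_id='dummy',
                     aws_secret_access_key='dummy',
                     config=cfg)
print([t.name for t in ddb.tables.all()])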

Related

Issues in docker-compose when running up: cannot find localhost, and services start in the wrong order

I'm having a couple of issues running docker-compose.
docker-compose up already works in starting the web service (stuffapi), and I can hit the endpoint at http://localhost:8080/stuff.
I have a small Go app that I would like to run with docker-compose using a local Dockerfile. When built locally, the Dockerfile cannot call the stuffapi service on localhost. I have tried using the service name, i.e. http://stuffapi:8080, but this gives the error lookup stuffapi on 192.168.65.1:53: no such host.
I'm guessing this has something to do with the default network setup?
After the stuffapi service has started, I would like my service to be built (stuffsdk in the Dockerfile) and then to execute a command to run the Go app, which calls the stuff (web) service. docker-compose tries to build the local Dockerfile first, but when it runs its last command, RUN ./main, it fails because stuffapi hasn't started yet. In my service I have a depends_on for the stuffapi service, so I thought that would start first? (See the sketch after the Dockerfile below.)
docker-compose.yaml
version: '3'
services:
  stuffapi:
    image: XXX
    ports:
      - 8080:8080
  stuffsdk:
    depends_on:
      - stuffapi
    build: .
Dockerfile:
FROM golang:1.15
RUN mkdir /stuffsdk
RUN mkdir /main
ADD ./stuffsdk /stuffsdk
ADD ./main /main
ENV BASE_URL=http://stuffapi:8080
WORKDIR /main
RUN go build
RUN ./main
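For what it's worth, the last instruction is the likely culprit: RUN executes at image build time, before any service containers exist. A sketch of one common fix, assuming the app is meant to run when the container starts rather than during the build, changes only the final line:
FROM golang:1.15
RUN mkdir /stuffsdk /main
ADD ./stuffsdk /stuffsdk
ADD ./main /main
ENV BASE_URL=http://stuffapi:8080
WORKDIR /main
RUN go build
# Run at container start, when stuffapi is reachable, instead of at build time
CMD ["./main"]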

Connection refused in Dockerfile but not when exec'd in

I have a command that calls a server running in a local Docker container.
If I use docker-compose run name_of_service /bin/bash to get a shell in the container, running the command below from there works as expected.
pip install --trusted-host pypi --extra-index-url http://pypi:8000 -r requirements.txt
But running virtually the same command in a Dockerfile results in a Retrying error:
RUN pip install --trusted-host pypi --extra-index-url http://pypi:8000 -r requirements.txt --user
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPConnection object at 0x7f54bac2dad0>: Failed to establish a new connection: [Errno -2] Name or service not known')': /custom-utils/
Both services are defined in one docker-compose.yml:
service:
  image: service:20.10.1
  build:
    context: platform
    dockerfile: service/Dockerfile
  depends_on:
    - api
    - pypi
  environment:
    PORT: "8088"
  ports:
    - "8088:8088"
  volumes:
    - some_location_of_source
  restart: always
pypi:
  image: pypi:20.10.1
  build:
    context: services/pypi
    dockerfile: Dockerfile
  environment:
    PORT: "8000"
  expose:
    - "8000"
  ports:
    - "8000:8000"
  volumes:
    - some_location_of_source
Dockerfile RUN instructions can never make network calls to other services, even in the same docker-compose.yml file. You need to arrange for the package server to run "somewhere else" (running it in Docker but launching it separately might work).
At a technical level there are two issues. Compose broadly assumes all image builds happen before any containers are launched, so there's no way to require the pypi service to start before the service image is built (depends_on: doesn't affect the build stage). Image builds also aren't attached to the Docker network that Compose creates, so they can't do things like resolve container hostnames; that leads to the specific error you're getting.
It might work to split this into two separate Compose YAML files, one for the package server and one for the main service. You can launch the package server, then docker-compose build the main service, then stop the package server. Since you have published ports:, you can reach the package server through one of the host's IP addresses; or, if you're on a macOS or Windows host, through the special host name host.docker.internal; or otherwise use one of the techniques described in From inside of a Docker container, how do I connect to the localhost of the machine?.
RUN pip install \
      --trusted-host host.docker.internal \
      --extra-index-url http://host.docker.internal:8000 \
      -r requirements.txt
(Depending on what exactly is in this package server, you may not need it at all. If you python setup.py bdist_wheel or pip wheel the dependencies you keep there, you can COPY the resulting .whl files into your image and install them directly. If it's all from the same source tree then a multi-stage build where earlier stages just build libraries could work too.)
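A sketch of that wheel-based approach, assuming a wheels/ directory name of my choosing and that the wheels were pre-built with pip wheel:
# Pre-build on the host (or in an earlier stage): pip wheel -w wheels/ -r requirements.txt
COPY wheels/ /wheels/
RUN pip install --no-index --find-links /wheels -r requirements.txt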

Access port of one container from another container

I have a Postgres database in one container and a Java application in another container. The Postgres database is accessible on port 1310 on localhost, but the Java container is not able to access it.
I tried this command:
docker run modelpolisher_java java -jar ModelPolisher-noDB-1.7.jar --host=biggdb --port=5432 --user=postgres --passwd=postgres --dbname=bigg
But it gives the error java.net.UnknownHostException: biggdb.
Here is my docker-compose.yml file:
version: '3'
services:
  biggdb:
    container_name: modelpolisher_biggdb
    build: ./docker/bigg_docker
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=bigg
    ports:
      - "1310:5432"
  java:
    container_name: modelpolisher_java
    build: ./docker/java_docker
    stdin_open: true
    tty: true
Dockerfile for biggdb:
FROM postgres:11.4
RUN apt update &&\
    apt install wget -y &&\
    # Create directory '/bigg_database_dump/' and download bigg_database dump as 'database.dump'
    wget -P /bigg_database_dump/ https://modelpolisher.s3.ap-south-1.amazonaws.com/bigg_database.dump &&\
    rm -rf /var/lib/apt/lists/*
COPY ./scripts/restore_biggdb.sh /docker-entrypoint-initdb.d/restore_biggdb.sh
EXPOSE 1310:5432
Can somebody please tell me what changes I need to make in the docker-compose.yml, or in the command, so that the java container can access the ports of the biggdb (postgres) container?
The two containers have to be on the same Docker-internal network to be able to talk to each other. Docker Compose automatically creates a network for you and attaches containers to that network. If you docker run a container alongside that, you need to find that network's name.
Run
docker network ls
This will list the Docker-internal networks you have. One of them will be named something like bigg_default, where the first part is (probably) your current directory name. Then when you actually run the container, you can attach to that network with
docker run --net bigg_default ...
Consider setting a command: in your docker-compose.yml file to pass these arguments when you docker-compose up. If the --host option is your own code and doesn't come from a framework, passing settings like these via environment variables can be a little easier to manage than command-line arguments.
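A sketch of that command: approach, reusing the exact arguments from your docker run attempt (the added depends_on just makes the startup order explicit):
java:
  container_name: modelpolisher_java
  build: ./docker/java_docker
  depends_on:
    - biggdb
  command: java -jar ModelPolisher-noDB-1.7.jar --host=biggdb --port=5432 --user=postgres --passwd=postgres --dbname=bigg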
Since you use docker-compose to bring up the two containers, they already share a common network. To take advantage of that, you should use docker-compose run rather than docker run. Also, pass the service name (java), not the container name (modelpolisher_java), to the docker-compose run command.
So just use the following command to run your jar:
docker-compose run java java -jar ModelPolisher-noDB-1.7.jar --host=biggdb --port=5432 --user=postgres --passwd=postgres --dbname=bigg

Having trouble communicating between docker-compose services

I have the following docker-compose file:
version: "3"
services:
scraper-api:
build: ./ATPScraper
volumes:
- ./ATPScraper:/usr/src/app
ports:
- "5000:80"
test-app:
build: ./test-app
volumes:
- "./test-app:/app"
- "/app/node_modules"
ports:
- "3001:3000"
environment:
- NODE_ENV=development
depends_on:
- scraper-api
Which builds from the following Dockerfiles:
scraper-api (a Python Flask application):
FROM python:3.7.3-alpine
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "./app.py"]
test-app (a test React application for the API):
# base image
FROM node:12.2.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:/app/src/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
RUN npm install --silent
RUN npm install react-scripts@3.0.1 -g --silent
RUN npm install axios -g
# start app
CMD ["npm", "start"]
Admittedly, I'm a newbie when it comes to Docker networking, but I am trying to get the React app to communicate with the scraper-api. For example, the scraper-api has the following endpoint: /api/top_10. I have tried various permutations of the following URL: http://scraper-api:80/api/test_api. None of them have worked for me.
I've been scouring the internet and can't really find a solution.
The React application runs in the end user's browser, which has no idea this "Docker" thing exists at all and doesn't know about any of the Docker Compose networking setup. For browser apps that happen to be hosted out of Docker, they need to be configured to use the host's DNS name or IP address, and the published port of the back-end service.
A common setup (Docker or otherwise) is to put both the browser apps and the back-end application behind a reverse proxy. In that case you can use relative URLs without host names like /api/..., and they will be interpreted as "the same host and port", which bypasses this problem entirely.
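Concretely, with the ports: "5000:80" mapping above and the app opened from the machine that runs Docker, the browser-side call would target the published host port; a sketch using the axios dependency already installed in the image:
import axios from 'axios';

// Call the *published* host port, not the Compose service name:
// the browser can't resolve http://scraper-api.
axios.get('http://localhost:5000/api/top_10')
  .then(res => console.log(res.data));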
As a side note: when no network is specified inside docker-compose.yml, a default network will be created for you, named [directory containing docker-compose.yml]_default. For example, if docker-compose.yml is in the app folder, the network will be named app_default.
Now, inside this network, containers are reachable by their service names. So the scraper-api host name should resolve to the right container.
It could be that you are using the wrong endpoint URL. In the question you mentioned /api/top_10 as an endpoint, but the URL you tested was http://scraper-api:80/api/test_api, which is inconsistent.
Also, it could be that you have the order of the ports reversed in docker-compose.yml for the scraper-api service:
ports:
  - "5000:80"
5000 is the port published on the host where Docker is running; 80 is the internal app port. Normally, Flask apps listen on 5000, so you might have meant to say:
ports:
  - "80:5000"
In that case, between containers you have to use :5000 as the destination port in URLs, e.g. http://scraper-api:5000 (plus the endpoint suffix, of course).
To check connectivity, you can open a shell in the client container and see if things connect (the Alpine-based image ships sh rather than bash):
docker-compose exec test-app sh
wget http://scraper-api
wget http://scraper-api:5000
etc.
If you get a response, then you have connectivity, just need to figure out correct endpoint URL.

Docker container connect to remote db through jump SSH tunnel

I am trying to connect to a remote SQL Server database; in between we have a jump server. Usually I can access the db from DataGrip once I ssh into the jump server from bash. Now I am trying to connect to the remote db from a Docker container. I have tried different Docker Hub images.
The jump server listens on local port 11433 and forwards to the remote SQL db's port 1433.
Here is my docker-compose.yml
version: '2'
services:
  test-db:
    image: gsengun/flyway-mssql
    volumes:
      - ./sql:/flyway/sql
    command: localhost 11433 user Password db
    container_name: testDb
Here is my Dockerfile:
FROM calendar42/python-geos
COPY ./svc/requirements.txt /domain/requirements.txt
COPY ./requirements.txt /api/requirements.txt
RUN pip install -U pip
RUN pip install -r /api/requirements.txt
WORKDIR /api/svc/
I am not sure exactly what I am missing; any help or direction is appreciated.
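One detail worth checking, echoing the host.docker.internal note in the pip answer above: inside the test-db container, localhost is the container itself, not the machine where the SSH tunnel listens. If the tunnel runs on the Docker host (on Docker Desktop for Mac/Windows), a sketch of the change would be:
test-db:
  image: gsengun/flyway-mssql
  volumes:
    - ./sql:/flyway/sql
  # Point at the host's tunnel port instead of the container's own localhost
  command: host.docker.internal 11433 user Password db
  container_name: testDb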
