Docker two-tier application issue: failed to connect to mongo container

I have a simple Node.js application consisting of a frontend and a Mongo database. I want to deploy it via Docker.
In my docker-compose file I have the following:
version: '2'
services:
  express-container:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - mongo-container
  mongo-container:
    image: mongo:3.0
When I run docker-compose up, I get the following error:
Creating todoangularv2_mongo-container_1 ...
Creating todoangularv2_mongo-container_1 ... done
Creating todoangularv2_express-container_1 ...
Creating todoangularv2_express-container_1 ... done
Attaching to todoangularv2_mongo-container_1, todoangularv2_express-container_1
mongo-container_1 | 2017-07-25T15:26:09.863+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=25f03f51322b
mongo-container_1 | 2017-07-25T15:26:09.864+0000 I CONTROL [initandlisten] db version v3.0.15
mongo-container_1 | 2017-07-25T15:26:09.864+0000 I CONTROL [initandlisten] git version: b8ff507269c382bc100fc52f75f48d54cd42ec3b
mongo-container_1 | 2017-07-25T15:26:09.864+0000 I CONTROL [initandlisten] build info: Linux ip-10-166-66-3 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 BOOST_LIB_VERSION=1_49
mongo-container_1 | 2017-07-25T15:26:09.864+0000 I CONTROL [initandlisten] allocator: tcmalloc
mongo-container_1 | 2017-07-25T15:26:09.864+0000 I CONTROL [initandlisten] options: {}
mongo-container_1 | 2017-07-25T15:26:09.923+0000 I JOURNAL [initandlisten] journal dir=/data/db/journal
mongo-container_1 | 2017-07-25T15:26:09.924+0000 I JOURNAL [initandlisten] recover : no journal files present, no recovery needed
express-container_1 | Listening on port 3000
express-container_1 |
express-container_1 | events.js:72
express-container_1 | throw er; // Unhandled 'error' event
express-container_1 | ^
express-container_1 | Error: failed to connect to [mongo-container:27017]
So my frontend cannot reach the mongo container called 'mongo-container' in the docker-compose file. In the application itself I'm giving the URL for the mongo database as follows:
module.exports = {
  url: 'mongodb://mongo-container:27017/todo'
};
Any idea how I can change my application so that when it is run on Docker, I don't have this connectivity issue?
EDIT: the mongo container gives the following output:
WAUTERW-M-T3ZT:vagrant wim$ docker logs f63
2017-07-26T09:15:02.824+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=f637f963c87f
2017-07-26T09:15:02.825+0000 I CONTROL [initandlisten] db version v3.0.15
2017-07-26T09:15:02.825+0000 I CONTROL [initandlisten] git version: b8ff507269c382bc100fc52f75f48d54cd42ec3b
...
2017-07-26T09:15:21.461+0000 I STORAGE [FileAllocator] done allocating datafile /data/db/local.0, size: 64MB, took 0.024 secs
2017-07-26T09:15:21.476+0000 I NETWORK [initandlisten] waiting for connections on port 27017
The express container gives the following output:
WAUTERW-M-T3ZT:vagrant wim$ docker logs 25a
Listening on port 3000
events.js:72
throw er; // Unhandled 'error' event
^
Error: failed to connect to [mongo-container:27017]
at null.<anonymous> (/usr/src/app/node_modules/mongoose/node_modules/mongodb/lib/mongodb/connection/server.js:555:74)
at EventEmitter.emit (events.js:106:17)
at null.<anonymous> (/usr/src/app/node_modules/mongoose/node_modules/mongodb/lib/mongodb/connection/connection_pool.js:156:15)
at EventEmitter.emit (events.js:98:17)
at Socket.<anonymous> (/usr/src/app/node_modules/mongoose/node_modules/mongodb/lib/mongodb/connection/connection.js:534:10)
at Socket.EventEmitter.emit (events.js:95:17)
at net.js:441:14
at process._tickCallback (node.js:415:13)
EDIT: the issue turned out to be in the Dockerfile. Here is a corrected one (simplified a bit, as I started from a Node image rather than an Ubuntu image):
FROM node:0.10.40
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install
CMD ["node", "/usr/src/app/bin/www"]

You could replace depends_on with a links section, which, like depends_on, expresses a dependency between services; according to the documentation, containers for the linked service will then be reachable at a hostname identical to the alias, or to the service name if no alias was specified.
version: '2'
services:
  express-container:
    build: .
    ports:
      - "3000:3000"
    links:
      - "mongo-container"
  mongo-container:
    image: mongo:3.0
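Note that both depends_on and links only control start-up order on the Compose version 2 default network; neither waits for MongoDB to actually accept connections, so the Node app can still try to connect before the database is ready. If that turns out to be the issue, one option is a container healthcheck combined with a depends_on condition. A minimal sketch, assuming Compose file format 2.1 and that the mongo shell is available inside the mongo:3.0 image:
version: '2.1'
services:
  express-container:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      mongo-container:
        condition: service_healthy   # start the app only once the healthcheck passes
  mongo-container:
    image: mongo:3.0
    healthcheck:
      test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5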


Related

docker-compose: failed to register layer: Error processing tar file - cannot find Windows DLL MUI file

Docker Desktop is using Linux containers.
(Yes, I tried this: Docker Error: failed to register layer: Error processing tar file(exit status 1): "...msader15.dll.mui: no such file or directory", but using Docker Desktop with Windows containers caused the docker-compose command to fail with the response Error response from daemon: operating system is not supported)
Structure
- engine-load-tests
  |- Dockerfile
  |- docker-compose.yml
  |- engine_load_tester_locust\
  |  |- main.py
  |- WinPerfCounters\  [I know - the casing is inconsistent]
  |  |- main.py
  |  |- Dockerfile
  |  |- environment config files, README, other files
Dockerfile
FROM python:3.9.6-windowsservercore-1809
COPY . ./WinPerfCounters/
RUN pip install --no-cache-dir -r ./WinPerfCounters/requirements.txt
CMD [ "python", "WinPerfCounters/main.py", "WinPerfCounters/load_test.conf" ]
Docker-Compose
version: "3.3"
services:
win_perf_counters:
container_name: win_perf_counters
platform: windows
image: python:3.9.6-windowsservercore-1809
build: ./WinPerfCounters
depends_on:
- influxdb
links:
- influxdb
Then other containers for locust, influx, and grafana...
Output - Snippets
------
> [python:3.9.6-windowsservercore-1809 1/3] FROM docker.io/library/python:3.9.6-windowsservercore-1809#sha256:54b7eadfbbc3a983bf6ea80eb7478b68d46267bbbcc710569972c140247ccd5e:
-----
failed to solve: rpc error: code = Unknown desc = failed to register layer: Error processing tar file(exit status 1): link /Files/Program Files/common files/Microsoft Shared/Ink/en-US/micaut.dll.mui /Files/Program Files (x86)/common files/Microsoft Shared/ink/en-US/micaut.dll.mui: no such file or directory
You can't run Windows containers (i.e. containers derived from a Windows base image such as windowsservercore-1809) while Docker Desktop is set to Linux containers.

Unable to bind to http://localhost:5000 on the IPv6 loopback interface: 'Cannot assign requested address' for release version

I'm trying to test the release version of my application inside a docker container on my local machine, and I keep getting the warning below when I spin up the container, and it refuses the requests:
Unable to bind to http://localhost:5000 on the IPv6 loopback interface: 'Cannot assign requested address'.
I've checked this post, but that is not the problem, and I haven't been able to find the root cause. When I make requests, they get refused. The application works without a problem outside Docker. Below is my dotnet publish command:
dotnet publish .\Sistema.Cadastro.Api\Sistema.Cadastro.Api.csproj -c Release --runtime linux-musl-x64 --interactive --no-self-contained
After it is published, I build my container. Below is my Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
EXPOSE 5000
EXPOSE 5001
COPY System.Cadastro.Api/bin/Release/netcoreapp3.1/linux-musl-x64/publish/ /app
WORKDIR /app
ENTRYPOINT ["dotnet", "Sistema.Cadastro.Api.dll"]
After I've built the container, I use the docker-compose.yaml below to start it:
version: '3'
services:
  siefcadapi:
    container_name: sistemacadapi
    image: localdocker/sistema.cadastro.api:latest
    ports:
      - 5000:5000
      - 5001:5001
    environment:
      - "ASPNETCORE_URLS=https://+:5001;http://+:5000"
Below is the log that is generated:
docker-compose -f .\.docker\docker-compose.yaml up
Recreating sistemacadapi ... done
Attaching to sistemacadapi
sistemacadapi | warn: Microsoft.AspNetCore.Server.Kestrel[0]
sistemacadapi | Unable to bind to http://localhost:5000 on the IPv6 loopback interface: 'Cannot assign requested address'.
sistemacadapi | info: Microsoft.Hosting.Lifetime[0]
sistemacadapi | Now listening on: http://localhost:5000
sistemacadapi | info: Microsoft.Hosting.Lifetime[0]
sistemacadapi | Application started. Press Ctrl+C to shut down.
sistemacadapi | info: Microsoft.Hosting.Lifetime[0]
sistemacadapi | Hosting environment: Production
sistemacadapi | info: Microsoft.Hosting.Lifetime[0]
sistemacadapi | Content root path: /app
I don't need to debug inside the container, or anything of the sort. I just want to run and test my application so that I can push it to Azure and run it in the cloud.
After 2 days of digging, I found the problem. It was not obvious at all. I use an appsettings.json, read it into a model, and inject it into the IConfiguration in Startup.cs:
var appConfigs = Configuration.GetSection("App").Get<AppConfigs>();
services.AddSingleton<IConfiguration>(Configuration);
The line that was breaking everything is the second one:
services.AddSingleton<IConfiguration>(Configuration);
According to this answer, as of .NET Core 2 it is no longer necessary to add it. Adding it didn't cause any trouble while debugging, but once the app ran inside the container it caused the error listed above. I removed the second line, the application kept working as before, and it started working perfectly inside the container as well. Here is the output after the fix:
Creating projeto_sistemacadastroapi_1 ... done
Attaching to projeto_sistemacadastroapi_1
sistemacadastroapi_1 | info: Microsoft.Hosting.Lifetime[0]
sistemacadastroapi_1 | Now listening on: http://[::]:80
sistemacadastroapi_1 | info: Microsoft.Hosting.Lifetime[0]
sistemacadastroapi_1 | Application started. Press Ctrl+C to shut down.
sistemacadastroapi_1 | info: Microsoft.Hosting.Lifetime[0]
sistemacadastroapi_1 | Hosting environment: Production
sistemacadastroapi_1 | info: Microsoft.Hosting.Lifetime[0]
sistemacadastroapi_1 | Content root path: /app
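Note that in this run Kestrel reports listening on port 80 inside the container rather than 5000/5001, so if ASPNETCORE_URLS is not set for that run, the port mapping has to target the container's port 80. A minimal sketch of the service, assuming Kestrel's default HTTP port:
version: '3'
services:
  siefcadapi:
    container_name: sistemacadapi
    image: localdocker/sistema.cadastro.api:latest
    ports:
      - "5000:80"   # host port 5000 -> container port 80 (Kestrel default)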

Service "postgis" fails to start in GitLab CI

I am trying to use the Docker image "postgis/postgis:latest" as a service in GitLab CI but the service fails to start.
This is the start of the CI log, the last line is most important:
Running with gitlab-runner 12.9.0 (4c96e5ad)
on xxxxxxx xxxxxxxx
Preparing the "docker" executor
Using Docker executor with image node:lts-stretch ...
Starting service redis:latest ...
Pulling docker image redis:latest ...
Using docker image sha256:4cdbec704e477aab9d249262e60b9a8a25cbef48f0ff23ac5eae879a98a7ebd0 for redis:latest ...
Starting service postgis/postgis:latest ...
Pulling docker image postgis/postgis:latest ...
Using docker image sha256:a412dcb70af7acfbe875faea4467a1594e7cba3dfca19e5e1c6bcf35286380df for postgis/postgis:latest ...
Waiting for services to be up and running...
*** WARNING: Service runner-xxxxxxxx-project-1-concurrent-0-postgis__postgis-1 probably didn't start properly.
Health check error:
service "runner-xxxxxxxx-project-1-concurrent-0-postgis__postgis-1-wait-for-service" timeout
Health check container logs:
Service container logs:
2020-04-06T11:58:09.487216183Z The files belonging to this database system will be owned by user "postgres".
2020-04-06T11:58:09.487254326Z This user must also own the server process.
2020-04-06T11:58:09.487260023Z
2020-04-06T11:58:09.488674041Z The database cluster will be initialized with locale "en_US.utf8".
2020-04-06T11:58:09.488696993Z The default database encoding has accordingly been set to "UTF8".
2020-04-06T11:58:09.488704024Z The default text search configuration will be set to "english".
2020-04-06T11:58:09.488710330Z
2020-04-06T11:58:09.488716134Z Data page checksums are disabled.
2020-04-06T11:58:09.488721778Z
2020-04-06T11:58:09.490435786Z fixing permissions on existing directory /var/lib/postgresql/data ... ok
2020-04-06T11:58:09.490649106Z creating subdirectories ... ok
2020-04-06T11:58:09.490656485Z selecting dynamic shared memory implementation ... posix
2020-04-06T11:58:09.525841255Z selecting default max_connections ... 100
2020-04-06T11:58:09.562735034Z selecting default shared_buffers ... 128MB
2020-04-06T11:58:09.614695491Z selecting default time zone ... Etc/UTC
2020-04-06T11:58:09.616784837Z creating configuration files ... ok
2020-04-06T11:58:09.917724902Z running bootstrap script ... ok
2020-04-06T11:58:10.767115421Z performing post-bootstrap initialization ... ok
2020-04-06T11:58:10.924542026Z syncing data to disk ... ok
2020-04-06T11:58:10.924613120Z
2020-04-06T11:58:10.924659485Z initdb: warning: enabling "trust" authentication for local connections
2020-04-06T11:58:10.924720453Z You can change this by editing pg_hba.conf or using the option -A, or
2020-04-06T11:58:10.924753751Z --auth-local and --auth-host, the next time you run initdb.
2020-04-06T11:58:10.925150488Z
2020-04-06T11:58:10.925175359Z Success. You can now start the database server using:
2020-04-06T11:58:10.925182577Z
2020-04-06T11:58:10.925188661Z pg_ctl -D /var/lib/postgresql/data -l logfile start
2020-04-06T11:58:10.925195041Z
2020-04-06T11:58:10.974712774Z waiting for server to start....2020-04-06 11:58:10.974 UTC [47] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
2020-04-06T11:58:10.976267115Z 2020-04-06 11:58:10.976 UTC [47] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-04-06T11:58:11.003287980Z 2020-04-06 11:58:11.002 UTC [48] LOG: database system was shut down at 2020-04-06 11:58:10 UTC
2020-04-06T11:58:11.011056242Z 2020-04-06 11:58:11.010 UTC [47] LOG: database system is ready to accept connections
2020-04-06T11:58:11.051536096Z done
2020-04-06T11:58:11.051578164Z server started
2020-04-06T11:58:11.051855017Z
2020-04-06T11:58:11.052088262Z /usr/local/bin/docker-entrypoint.sh: sourcing /docker-entrypoint-initdb.d/10_postgis.sh
2020-04-06T11:58:11.218053189Z psql: error: could not connect to server: could not translate host name "postgres" to address: Name or service not known
could not translate host name "postgres" to address: Name or service not known
It seems to me that the host "postgres" is wrong. But the documentation of GitLab says that the hostname will be the alias: https://docs.gitlab.com/ce/ci/docker/using_docker_images.html#accessing-the-services
Excerpt of my .gitlab-ci-yml:
image: node:lts-stretch
services:
  - name: redis:latest
  - name: postgis/postgis:latest
    alias: postgres
variables:
  NODE_ENV: production
  REDIS_HOST: redis
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  PGHOST: postgres
  PGUSER: postgres
  PGPASSWORD: postgres
I have also tried removing the alias and using "postgis-postgis" or "postgis__postgis" as the hostname, as per the documentation, but I get the same error every time. I also tried the Docker image "mdillon/postgis" because I saw it used often, but got the same error.
I tried plugging in your .gitlab-ci.yml excerpt and got an error:
This GitLab CI configuration is invalid: jobs config should contain at least one visible job
Please provide a minimal reproducible example next time. ;)
I was able to reproduce and fix the issue. The fix was to remove the PGHOST setting. (You had its value set to postgres. Your main container can get to the postgis container using the alias postgres but the postgis container itself doesn't need a hostname to get to the PostgreSQL service because that service is listening on a local socket.)
PGHOST is used by psql in the "postgis" container (launched by the services directive), in the script https://github.com/postgis/docker-postgis/blob/master/initdb-postgis.sh (which ends up in /docker-entrypoint-initdb.d/10_postgis.sh -- see https://github.com/postgis/docker-postgis/blob/master/Dockerfile.template#L16)
The following .gitlab-ci.yml works:
image: node:lts-stretch
variables:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  PGUSER: postgres
  PGPASSWORD: postgres
services:
  - name: postgis/postgis:latest
    alias: postgres
job1:
  script: ping -c 3 postgres
Here is the job log:
Running with gitlab-runner 12.9.0 (4c96e5ad)
on docker-auto-scale 0277ea0f
Preparing the "docker+machine" executor
Using Docker executor with image node:lts-stretch ...
Starting service postgis/postgis:latest ...
Pulling docker image postgis/postgis:latest ...
Using docker image sha256:a412dcb70af7acfbe875faea4467a1594e7cba3dfca19e5e1c6bcf35286380df for postgis/postgis:latest ...
Waiting for services to be up and running...
Pulling docker image node:lts-stretch ...
Using docker image sha256:88c089733a3b980b3517e8e2e8afa46b338f69d7562550cb3c2e9fd852a2fbac for node:lts-stretch ...
Preparing environment
00:05
Running on runner-0277ea0f-project-17971942-concurrent-0 via runner-0277ea0f-srm-1586221223-45d7ab06...
Getting source from Git repository
00:01
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/atsaloli/service-postgis/.git/
Created fresh repository.
From https://gitlab.com/atsaloli/service-postgis
* [new ref] refs/pipelines/133464596 -> refs/pipelines/133464596
* [new branch] master -> origin/master
Checking out d20469e6 as master...
Skipping Git submodules setup
Restoring cache
00:02
Downloading artifacts
00:01
Running before_script and script
00:04
$ ping -c 3 postgres
PING postgres (172.17.0.3) 56(84) bytes of data.
64 bytes from postgis-postgis (172.17.0.3): icmp_seq=1 ttl=64 time=0.077 ms
64 bytes from postgis-postgis (172.17.0.3): icmp_seq=2 ttl=64 time=0.064 ms
64 bytes from postgis-postgis (172.17.0.3): icmp_seq=3 ttl=64 time=0.060 ms
--- postgres ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2062ms
rtt min/avg/max/mdev = 0.060/0.067/0.077/0.007 ms
Running after_script
00:01
Saving cache
00:02
Uploading artifacts for successful job
00:01
Job succeeded
As you can see in the ping command above, the container created from the image node:lts-stretch is able to access the postgis container using the postgres alias.
Does that unblock you?
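If the job itself needs PGHOST (for psql or an application config, for example), one way to avoid leaking it into the service container is to export it in the job's script instead of declaring it under variables, since script exports only affect the job container. A rough sketch, where db-check.js is a hypothetical helper in your repository that reads PGHOST:
job1:
  script:
    - export PGHOST=postgres        # visible only inside the job container
    - node ./scripts/db-check.js    # hypothetical script that connects to PostgreSQL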

docker-compose issue: Permission denied when attempting to create/mount volume

I have the following docker-compose.yml file:
version: "3"
services:
dbs-poa-loc001d:
image: percona
volumes:
- ./mysql_backup:/var/lib/mysql
- ./create_databases:/docker-entrypoint-initdb.d
hostname: "dbs-poa-loc001d"
container_name: dbs-poa-loc001d
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
ports:
- "3306:3306"
networks:
- azion-network
...
When I try to create the dbs-poa-loc001d service (database for the project), I get the following error:
Starting dbs-poa-loc001d ... done
Attaching to dbs-poa-loc001d
dbs-poa-loc001d | Initializing database
dbs-poa-loc001d | mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)
dbs-poa-loc001d | 2019-01-11T01:17:52.060984Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
dbs-poa-loc001d | 2019-01-11T01:17:52.062286Z 0 [ERROR] --initialize specified but the data directory exists and is not writable. Aborting.
dbs-poa-loc001d | 2019-01-11T01:17:52.062299Z 0 [ERROR] Aborting
dbs-poa-loc001d |
dbs-poa-loc001d exited with code 1
This error doesn't happen on my macOS computer at work, but it does on my home computer (running Ubuntu 16.04). I did notice that the mysql_backup folder created on the host to hold the volume data is owned by user AND group root. Can anybody tell me what is going on, and how I can fix it? Already tried without success:
Running docker-compose commands using sudo
Manually changing the owner and user of the folder to my actual (low privileged) user.
My current setup and installed versions are:
Ubuntu 16.04
Docker version 18.09.0, build 4d60db4
docker-compose version 1.23.2, build 1110ad0
docker-compose was installed using sudo pip install docker-compose
Can you try setting the ownership of mysql_backup to 1001:0?
Something like sudo chown -R 1001:0 ./mysql_backup,
or, as an alternative but only if the folder is empty, sudo chmod 777 ./mysql_backup.
According to the Percona Dockerfile, the mysql user id is 1001:
https://github.com/percona/percona-docker/blob/master/percona-server.80/Dockerfile
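Another option, if the data files don't have to live in a specific host directory, is a named volume instead of a bind mount; Docker then manages ownership itself and the host-side permission problem goes away. A sketch, assuming a volume named mysql_data:
version: "3"
services:
  dbs-poa-loc001d:
    image: percona
    volumes:
      - mysql_data:/var/lib/mysql                    # named volume managed by Docker
      - ./create_databases:/docker-entrypoint-initdb.d
volumes:
  mysql_data: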

How to pass an ARG to a Dockerfile in a docker-compose.yml

I have a project apkmirror-scraper-compose with the following (simplified) structure:
.
├── docker-compose.yml
└── tor
└── Dockerfile
The docker-compose.yml is
version: '3'
services:
  tor:
    build:
      context: ./tor
      args:
        password: ""
    ports:
      - "9050:9050"
      - "9051:9051"
The Dockerfile in the tor directory is:
FROM alpine:latest
EXPOSE 9050 9051
ARG password
RUN apk --update add tor
RUN echo "ControlPort 9051" >> /etc/tor/torrc
RUN echo "HashedControlPassword $(tor --quiet --hash-password $password)" >> /etc/tor/torrc
CMD ["tor"]
I'm trying to pass the argument password, which has the value "" (an empty string), to the Dockerfile, so that it can hash it with Tor and add a HashedControlPassword line to the configuration file (cf. https://www.torproject.org/docs/tor-manual.html.en).
However, if I docker-compose build followed by docker-compose up, the logs contain the following:
Creating network "apkmirrorscrapercompose_default" with the default driver
Starting apkmirrorscrapercompose_tor_1
Attaching to apkmirrorscrapercompose_tor_1
tor_1 | May 02 08:03:59.344 [notice] Tor v0.2.8.12 running on Linux with Libevent 2.0.22-stable, OpenSSL LibreSSL 2.4.4 and Zlib 1.2.8.
tor_1 | May 02 08:03:59.345 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
tor_1 | May 02 08:03:59.345 [notice] Read configuration file "/etc/tor/torrc".
tor_1 | May 02 08:03:59.349 [warn] Linelist option 'HashedControlPassword' has no value. Skipping.
tor_1 | May 02 08:03:59.349 [warn] ControlPort is open, but no authentication method has been configured. This means that any program on your computer can reconfigure your Tor. That's bad! You should upgrade your Tor controller as soon as possible.
In other words, the password argument is not getting 'picked up': Tor is saying it has "no value". Comparing with the example on https://docs.docker.com/compose/compose-file/#args, however, I don't see what's wrong with either the docker-compose.yml or Dockerfile.
Can anyone spot what the problem is?
I believe the problem was the password being an empty string. If I replace it with "foo", docker-compose up seems to work as expected.
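If you'd rather not hard-code a value in docker-compose.yml, Compose variable substitution with a default can supply a non-empty password from the shell environment. A sketch, assuming a TOR_PASSWORD environment variable and "changeme" as a placeholder default:
version: '3'
services:
  tor:
    build:
      context: ./tor
      args:
        password: "${TOR_PASSWORD:-changeme}"   # taken from the environment, with a fallback
    ports:
      - "9050:9050"
      - "9051:9051"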
