Unable to build app using Docker - ruby-on-rails

I have set up my application on DigitalOcean using Docker. It was working fine, but a few days ago it stopped working. Whenever I try to build and deploy the application, it doesn't show any progress.
When I run the following commands
docker-compose build && docker-compose stop && docker-compose up -d
the system gets stuck on the following output:
db uses an image, skipping
elasticsearch uses an image, skipping
redis uses an image, skipping
Building app
It doesn't show any further progress.
Here are the docker-compose logs:
db_1 | LOG: received smart shutdown request
db_1 | LOG: autovacuum launcher shutting down
db_1 | LOG: shutting down
db_1 | LOG: database system is shut down
db_1 | LOG: database system was shut down at 2018-01-10 02:25:36 UTC
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
redis_1 | 11264:C 26 Mar 15:20:17.028 # Failed opening the RDB file root (in server root dir /run) for saving: Permission denied
redis_1 | 1:M 26 Mar 15:20:17.127 # Background saving error
redis_1 | 1:M 26 Mar 15:20:23.038 * 1 changes in 3600 seconds. Saving...
redis_1 | 1:M 26 Mar 15:20:23.038 * Background saving started by pid 11265
elasticsearch | [2018-03-06T01:18:25,729][WARN ][o.e.b.BootstrapChecks ] [_IRIbyW] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
elasticsearch | [2018-03-06T01:18:28,794][INFO ][o.e.c.s.ClusterService ] [_IRIbyW] new_master {_IRIbyW}{_IRIbyWCSoaUaKOLN93Fzg}{TFK38PIgRT6Kl62mTGBORg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
elasticsearch | [2018-03-06T01:18:28,835][INFO ][o.e.h.n.Netty4HttpServerTransport] [_IRIbyW] publish_address {172.17.0.4:9200}, bound_addresses {0.0.0.0:9200}
elasticsearch | [2018-03-06T01:18:28,838][INFO ][o.e.n.Node ] [_IRIbyW] started
elasticsearch | [2018-03-06T01:18:29,104][INFO ][o.e.g.GatewayService ] [_IRIbyW] recovered [4] indices into cluster_state
elasticsearch | [2018-03-06T01:18:29,799][INFO ][o.e.c.r.a.AllocationService] [_IRIbyW] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[product_records][2]] ...]).
elasticsearch | [2018-03-07T16:11:18,449][INFO ][o.e.n.Node ] [_IRIbyW] stopping ...
elasticsearch | [2018-03-07T16:11:18,575][INFO ][o.e.n.Node ] [_IRIbyW] stopped
elasticsearch | [2018-03-07T16:11:18,575][INFO ][o.e.n.Node ] [_IRIbyW] closing ...
elasticsearch | [2018-03-07T16:11:18,601][INFO ][o.e.n.Node ] [_IRIbyW] closed
elasticsearch | [2018-03-07T16:11:37,993][INFO ][o.e.n.Node ] [] initializing ...
WARNING: Connection pool is full, discarding connection: 'Ipaddress'
I am using the postgres, redis, elasticsearch and sidekiq images in my Rails application, but I have no clue where things are going wrong.
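One hedged observation from the logs above, rather than a confirmed fix for the build hang: Elasticsearch is warning that vm.max_map_count is too low, and the build can be run for the app service alone to see where it stalls. Something along these lines, run on the Docker host (the droplet), should cover both:

# Raise the limit Elasticsearch asks for (262144 comes from the warning in the logs)
sudo sysctl -w vm.max_map_count=262144
# Persist it across reboots
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
# Build only the app service, without cache, to see which Dockerfile step hangs
docker-compose build --no-cache app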

Related

Error 404 not found after running docker-compose with SpringBoot and MongoDB

My Dockerfile is:
FROM openjdk:8
VOLUME /tmp
ADD target/demo-0.0.1-SNAPSHOT.jar app.jar
#RUN bash -c 'touch /app.jar'
#EXPOSE 8080
ENTRYPOINT ["java","-Dspring.data.mongodb.uri=mongodb://mongo/players","-jar","/app.jar"]
And the docker-compose is:
version: "3"
services:
spring-docker:
build: .
restart: always
ports:
- "8080:8080"
depends_on:
- db
db:
image: mongo
volumes:
- ./data:/data/db
ports:
- "27000:27017"
restart: always
I have a Docker image, and when I run docker-compose up everything goes well without any error.
But in Postman, when I send a GET request to localhost:8080/player I get no output, so I used the docker-machine IP instead, e.g. 192.168.99.101:8080, and then I get a 404 Not Found error in Postman.
What is my mistake?
The docker-compose logs:
$ docker-compose logs
Attaching to thesismongoproject_spring-docker_1, thesismongoproject_db_1
spring-docker_1 |
spring-docker_1 | . ____ _ __ _ _
spring-docker_1 | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
spring-docker_1 | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
spring-docker_1 | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
spring-docker_1 | ' |____| .__|_| |_|_| |_\__, | / / / /
spring-docker_1 | =========|_|==============|___/=/_/_/_/
spring-docker_1 | :: Spring Boot :: (v2.2.6.RELEASE)
spring-docker_1 |
spring-docker_1 | 2020-05-31 11:36:39.598 INFO 1 --- [ main] thesisMongoProject.Application : Starting Application v0.0.1-SNAPSHOT on e81ccff8ba0e with PID 1 (/demo-0.0.1-SNAPSHOT.jar started by root in /)
spring-docker_1 | 2020-05-31 11:36:39.620 INFO 1 --- [ main] thesisMongoProject.Application : No active profile set, falling back to default profiles: default
spring-docker_1 | 2020-05-31 11:36:41.971 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data MongoDB repositories in DEFAULT mode.
spring-docker_1 | 2020-05-31 11:36:42.216 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 225ms. Found 4 MongoDB repository interfaces.
spring-docker_1 | 2020-05-31 11:36:44.319 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
spring-docker_1 | 2020-05-31 11:36:44.381 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
spring-docker_1 | 2020-05-31 11:36:44.381 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.33]
spring-docker_1 | 2020-05-31 11:36:44.619 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
spring-docker_1 | 2020-05-31 11:36:44.619 INFO 1 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 4810 ms
spring-docker_1 | 2020-05-31 11:36:46.183 INFO 1 --- [ main] org.mongodb.driver.cluster : Cluster created with settings {hosts=[db:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
spring-docker_1 | 2020-05-31 11:36:46.781 INFO 1 --- [null'}-db:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:1, serverValue:1}] to db:27017
spring-docker_1 | 2020-05-31 11:36:46.802 INFO 1 --- [null'}-db:27017] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=db:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 7]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=5468915}
spring-docker_1 | 2020-05-31 11:36:48.829 INFO 1 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
spring-docker_1 | 2020-05-31 11:36:49.546 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
spring-docker_1 | 2020-05-31 11:36:49.581 INFO 1 --- [ main] thesisMongoProject.Application : Started Application in 11.264 seconds (JVM running for 13.615)
spring-docker_1 | 2020-05-31 11:40:10.290 INFO 1 --- [extShutdownHook] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
db_1 | 2020-05-31T11:36:35.623+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
db_1 | 2020-05-31T11:36:35.639+0000 W ASIO [main] No TransportLayer configured during NetworkInterface startup
db_1 | 2020-05-31T11:36:35.645+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=1a0e5bc0c503
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] db version v4.2.7
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] git version: 51d9fe12b5d19720e72dcd7db0f2f17dd9a19212
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] allocator: tcmalloc
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] modules: none
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten] build environment:
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten]     distmod: ubuntu1804
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten]     distarch: x86_64
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten]     target_arch: x86_64
db_1 | 2020-05-31T11:36:35.648+0000 I CONTROL [initandlisten] options: { net: { bindIp: "*" } }
db_1 | 2020-05-31T11:36:35.649+0000 I STORAGE [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
db_1 | 2020-05-31T11:36:35.650+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=256M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
db_1 | 2020-05-31T11:36:37.046+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:46670][1:0x7f393f9a0b00], txn-recover: Recovering log 9 through 10
db_1 | 2020-05-31T11:36:37.231+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:231423][1:0x7f393f9a0b00], txn-recover: Recovering log 10 through 10
db_1 | 2020-05-31T11:36:37.294+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:294858][1:0x7f393f9a0b00], txn-recover: Main recovery loop: starting at 9/6016 to 10/256
db_1 | 2020-05-31T11:36:37.447+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:447346][1:0x7f393f9a0b00], txn-recover: Recovering log 9 through 10
db_1 | 2020-05-31T11:36:37.564+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:564841][1:0x7f393f9a0b00], txn-recover: Recovering log 10 through 10
db_1 | 2020-05-31T11:36:37.645+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:645216][1:0x7f393f9a0b00], txn-recover: Set global recovery timestamp: (0, 0)
db_1 | 2020-05-31T11:36:37.681+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
db_1 | 2020-05-31T11:36:37.703+0000 I STORAGE [initandlisten] Timestamp monitor starting
db_1 | 2020-05-31T11:36:37.704+0000 I CONTROL [initandlisten]
db_1 | 2020-05-31T11:36:37.704+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
db_1 | 2020-05-31T11:36:37.704+0000 I CONTROL [initandlisten] **          Read and write access to data and configuration is unrestricted.
db_1 | 2020-05-31T11:36:37.705+0000 I CONTROL [initandlisten]
db_1 | 2020-05-31T11:36:37.712+0000 I SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.722+0000 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
db_1 | 2020-05-31T11:36:37.722+0000 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.724+0000 I SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.726+0000 I SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.729+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
db_1 | 2020-05-31T11:36:37.740+0000 I SHARDING [LogicalSessionCacheRefresh] Marking collection config.system.sessions as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.748+0000 I SHARDING [LogicalSessionCacheReap] Marking collection config.transactions as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.748+0000 I NETWORK [listener] Listening on /tmp/mongodb-27017.sock
db_1 | 2020-05-31T11:36:37.748+0000 I NETWORK [listener] Listening on 0.0.0.0
db_1 | 2020-05-31T11:36:37.749+0000 I NETWORK [listener] waiting for connections on port 27017
db_1 | 2020-05-31T11:36:38.001+0000 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>
db_1 | 2020-05-31T11:36:46.536+0000 I NETWORK [listener] connection accepted from 172.19.0.3:40656 #1 (1 connection now open)
db_1 | 2020-05-31T11:36:46.653+0000 I NETWORK [conn1] received client metadata from 172.19.0.3:40656 conn1: { driver: { name: "mongo-java-driver|legacy", version: "3.11.2" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "4.14.154-boot2docker" }, platform: "Java/Oracle Corporation/1.8.0_252-b09" }
db_1 | 2020-05-31T11:40:10.302+0000 I NETWORK [conn1] end connection 172.19.0.3:40656 (0 connections now open)
db_1 | 2020-05-31T11:40:10.523+0000 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
db_1 | 2020-05-31T11:40:10.730+0000 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
db_1 | 2020-05-31T11:40:10.731+0000 I NETWORK [listener] removing socket file: /tmp/mongodb-27017.sock
db_1 | 2020-05-31T11:40:10.731+0000 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
db_1 | 2020-05-31T11:40:10.796+0000 I CONTROL [signalProcessingThread] Shutting down free monitoring
db_1 | 2020-05-31T11:40:10.800+0000 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
db_1 | 2020-05-31T11:40:10.803+0000 I STORAGE [signalProcessingThread] Deregistering all the collections
db_1 | 2020-05-31T11:40:10.811+0000 I STORAGE [signalProcessingThread] Timestamp monitor shutting down
db_1 | 2020-05-31T11:40:10.828+0000 I STORAGE [TimestampMonitor] Timestamp monitor is stopping due to: interrupted at shutdown
db_1 | 2020-05-31T11:40:10.828+0000 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
db_1 | 2020-05-31T11:40:10.829+0000 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
db_1 | 2020-05-31T11:40:10.829+0000 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
db_1 | 2020-05-31T11:40:10.829+0000 I STORAGE [signalProcessingThread] Shutting down journal flusher thread
db_1 | 2020-05-31T11:40:10.916+0000 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread
db_1 | 2020-05-31T11:40:10.917+0000 I STORAGE [signalProcessingThread] Shutting down checkpoint thread
db_1 | 2020-05-31T11:40:10.917+0000 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread
db_1 | 2020-05-31T11:40:10.935+0000 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
db_1 | 2020-05-31T11:40:10.942+0000 I CONTROL [signalProcessingThread] now exiting
db_1 | 2020-05-31T11:40:10.943+0000 I CONTROL [signalProcessingThread] shutting down with code:0
To solve this problem, I had to add the @EnableAutoConfiguration(exclude={MongoAutoConfiguration.class}) annotation.

Docker not properly installing python packages using pip install -r requirements.txt

I am pretty new to Docker and Django. I am trying to set up a Django project for a RESTful API running in a Docker container. I am trying to install the relevant Python packages from a RUN command in the Dockerfile; however, not all of the packages are installing successfully.
Here are the files I'm using and the error I am getting.
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt .
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .
docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: password
  web:
    build: .
    # command: bash -c "pip install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
requirements.txt
djangorestframework
django-filter
markdown
Django
psycopg2
When I execute docker-compose up, I get this output:
Starting apiTest_db_1 ... done
Recreating apiTest_web_1 ... done
Attaching to apiTest_db_1, apiTest_web_1
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2020-04-17 21:35:57.022 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-04-17 21:35:57.023 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-04-17 21:35:57.023 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-04-17 21:35:57.028 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-04-17 21:35:57.075 UTC [27] LOG: database system was shut down at 2020-04-17 21:34:34 UTC
db_1 | 2020-04-17 21:35:57.100 UTC [1] LOG: database system is ready to accept connections
web_1 | Watching for file changes with StatReloader
web_1 | Exception in thread django-main-thread:
web_1 | Traceback (most recent call last):
web_1 | File "/usr/local/lib/python3.8/threading.py", line 932, in _bootstrap_inner
web_1 | self.run()
web_1 | File "/usr/local/lib/python3.8/threading.py", line 870, in run
web_1 | self._target(*self._args, **self._kwargs)
web_1 | File "/usr/local/lib/python3.8/site-packages/django/utils/autoreload.py", line 53, in wrapper
web_1 | fn(*args, **kwargs)
web_1 | File "/usr/local/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 109, in inner_run
web_1 | autoreload.raise_last_exception()
web_1 | File "/usr/local/lib/python3.8/site-packages/django/utils/autoreload.py", line 76, in raise_last_exception
web_1 | raise _exception[1]
web_1 | File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 357, in execute
web_1 | autoreload.check_errors(django.setup)()
web_1 | File "/usr/local/lib/python3.8/site-packages/django/utils/autoreload.py", line 53, in wrapper
web_1 | fn(*args, **kwargs)
web_1 | File "/usr/local/lib/python3.8/site-packages/django/__init__.py", line 24, in setup
web_1 | apps.populate(settings.INSTALLED_APPS)
web_1 | File "/usr/local/lib/python3.8/site-packages/django/apps/registry.py", line 91, in populate
web_1 | app_config = AppConfig.create(entry)
web_1 | File "/usr/local/lib/python3.8/site-packages/django/apps/config.py", line 90, in create
web_1 | module = import_module(entry)
web_1 | File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
web_1 | return _bootstrap._gcd_import(name[level:], package, level)
web_1 | File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
web_1 | File "<frozen importlib._bootstrap>", line 991, in _find_and_load
web_1 | File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
web_1 | ModuleNotFoundError: No module named 'rest_framework'
Which indicates that djangorestframework has not been installed by pip.
Furthermore, when I swap the commented line in the docker-compose.yml file with the line below it (so that section becomes)
command: bash -c "pip install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"
# command: python manage.py runserver 0.0.0.0:8000
Then when I run docker-compose up I get the following output.
Creating network "apiTest_default" with the default driver
Creating apiTest_db_1 ... done
Creating apiTest_web_1 ... done
Attaching to apiTest_db_1, apiTest_web_1
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
db_1 | Data page checksums are disabled.
db_1 |
db_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db_1 | creating subdirectories ... ok
db_1 | selecting dynamic shared memory implementation ... posix
db_1 | selecting default max_connections ... 100
db_1 | selecting default shared_buffers ... 128MB
db_1 | selecting default time zone ... Etc/UTC
db_1 | creating configuration files ... ok
db_1 | running bootstrap script ... ok
db_1 | performing post-bootstrap initialization ... ok
web_1 | Collecting djangorestframework
db_1 | syncing data to disk ... initdb: warning: enabling "trust" authentication for local connections
db_1 | You can change this by editing pg_hba.conf or using the option -A, or
db_1 | --auth-local and --auth-host, the next time you run initdb.
db_1 | ok
db_1 |
db_1 |
db_1 | Success. You can now start the database server using:
db_1 |
db_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
db_1 |
db_1 | waiting for server to start....2020-04-17 22:47:22.783 UTC [46] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-04-17 22:47:22.789 UTC [46] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
web_1 | Downloading djangorestframework-3.11.0-py3-none-any.whl (911 kB)
db_1 | 2020-04-17 22:47:22.823 UTC [47] LOG: database system was shut down at 2020-04-17 22:47:22 UTC
db_1 | 2020-04-17 22:47:22.841 UTC [46] LOG: database system is ready to accept connections
db_1 | done
db_1 | server started
db_1 |
db_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db_1 |
db_1 | 2020-04-17 22:47:22.885 UTC [46] LOG: received fast shutdown request
db_1 | waiting for server to shut down....2020-04-17 22:47:22.889 UTC [46] LOG: aborting any active transactions
db_1 | 2020-04-17 22:47:22.908 UTC [46] LOG: background worker "logical replication launcher" (PID 53) exited with exit code 1
db_1 | 2020-04-17 22:47:22.920 UTC [48] LOG: shutting down
db_1 | 2020-04-17 22:47:22.974 UTC [46] LOG: database system is shut down
db_1 | done
db_1 | server stopped
db_1 |
db_1 | PostgreSQL init process complete; ready for start up.
db_1 |
db_1 | 2020-04-17 22:47:23.021 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-04-17 22:47:23.022 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-04-17 22:47:23.023 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-04-17 22:47:23.036 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-04-17 22:47:23.063 UTC [55] LOG: database system was shut down at 2020-04-17 22:47:22 UTC
db_1 | 2020-04-17 22:47:23.073 UTC [1] LOG: database system is ready to accept connections
web_1 | Collecting django-filter
web_1 | Downloading django_filter-2.2.0-py3-none-any.whl (69 kB)
web_1 | Collecting markdown
web_1 | Downloading Markdown-3.2.1-py2.py3-none-any.whl (88 kB)
web_1 | Requirement already satisfied: Django in /usr/local/lib/python3.8/site-packages (from -r requirements.txt (line 4)) (3.0.5)
web_1 | Requirement already satisfied: psycopg2 in /usr/local/lib/python3.8/site-packages (from -r requirements.txt (line 5)) (2.8.5)
web_1 | Requirement already satisfied: setuptools>=36 in /usr/local/lib/python3.8/site-packages (from markdown->-r requirements.txt (line 3)) (46.1.3)
web_1 | Requirement already satisfied: pytz in /usr/local/lib/python3.8/site-packages (from Django->-r requirements.txt (line 4)) (2019.3)
web_1 | Requirement already satisfied: sqlparse>=0.2.2 in /usr/local/lib/python3.8/site-packages (from Django->-r requirements.txt (line 4)) (0.3.1)
web_1 | Requirement already satisfied: asgiref~=3.2 in /usr/local/lib/python3.8/site-packages (from Django->-r requirements.txt (line 4)) (3.2.7)
web_1 | Installing collected packages: djangorestframework, django-filter, markdown
web_1 | Successfully installed django-filter-2.2.0 djangorestframework-3.11.0 markdown-3.2.1
web_1 | Watching for file changes with StatReloader
web_1 | Performing system checks...
web_1 |
web_1 | System check identified no issues (0 silenced).
web_1 |
web_1 | You have 17 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
web_1 | Run 'python manage.py migrate' to apply them.
web_1 | April 17, 2020 - 22:47:25
web_1 | Django version 3.0.5, using settings 'apiTesting.settings'
web_1 | Starting development server at http://0.0.0.0:8000/
web_1 | Quit the server with CONTROL-C.
Which shows that some packages such as Django have been successfully installed by the Dockerfile but some like djangorestframework, django-filter and markdown have not.
Why is this and what can I do in my Dockerfile to make them correctly install?
Both the main problem and the problem mentioned in the comments of itamar-turner-trauring's answer were solved by running
docker-compose up --build
instead of docker-compose up.
Not 100% sure why this fixed it, but I'd guess Compose was loading the container from an old image which didn't include the new Python packages, so forcing a rebuild made it pick them up.
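For anyone hitting the same thing, the equivalent two-step form (assuming the service name web from the compose file above) makes it clearer that the fix is rebuilding the image so the pip install layer reruns:

# Rebuild only the web image; the RUN pip install -r requirements.txt layer is re-executed
docker-compose build web
# Recreate the containers from the freshly built image
docker-compose up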
You are doing two things that potentially conflict:
Inside the image, as part of the build, you copy everything into /code.
In the compose file you mount the current working directory into /code.
I am not sure that's the problem, but I suggest removing the volumes bit from the compose file and seeing if that helps.
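For reference, a minimal sketch of the web service with the volumes section removed and everything else as in the compose file above, so the code copied into the image at build time is what actually runs:

web:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  ports:
    - "8000:8000"
  depends_on:
    - db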

Docker using Rails setup with errors

So I am creating a minimal Rails app with a PostgreSQL database. I want to ensure the Rails app works, and to keep the Docker setup and my working Rails app as similar as possible.
Here's my Dockerfile content:
FROM ruby:latest
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /rails_docker
WORKDIR /rails_docker
COPY Gemfile /rails_docker/Gemfile
COPY Gemfile.lock /rails_docker/Gemfile.lock
RUN bundle install
COPY . /rails_docker
And here's my docker-compose.yml file content:
version: '3'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: 'samnorton'
      POSTGRES_PASSWORD: 'grace0512'
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - '9999:5432'
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/rails_docker
    ports:
      - "3000:3000"
    depends_on:
      - db
volumes:
  postgres-data:
    driver: local
I have also set up a minimal Gemfile and Gemfile.lock. When I run sudo docker-compose up I run into the following error:
data-K54C:~/Desktop/rails_docker$ sudo docker-compose up
[sudo] password for sam:
railsdocker_db_1 is up-to-date
Starting railsdocker_web_1 ...
Starting railsdocker_web_1 ... done
Attaching to railsdocker_db_1, railsdocker_web_1
db_1 | 2019-12-12 14:20:38.333 UTC [1] LOG: starting PostgreSQL 12.0 (Debian 12.0-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2019-12-12 14:20:38.342 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2019-12-12 14:20:38.342 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2019-12-12 14:20:38.411 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2019-12-12 14:20:38.547 UTC [25] LOG: database system was shut down at 2019-12-12 14:20:06 UTC
db_1 | 2019-12-12 14:20:38.609 UTC [1] LOG: database system is ready to accept connections
db_1 | 2019-12-12 14:24:47.800 UTC [1] LOG: received smart shutdown request
db_1 | 2019-12-12 14:24:47.841 UTC [1] LOG: background worker "logical replication launcher" (PID 31) exited with exit code 1
db_1 | 2019-12-12 14:24:47.844 UTC [26] LOG: shutting down
db_1 | 2019-12-12 14:24:48.094 UTC [1] LOG: database system is shut down
db_1 | 2019-12-12 15:54:38.528 UTC [1] LOG: starting PostgreSQL 12.0 (Debian 12.0-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2019-12-12 15:54:38.543 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2019-12-12 15:54:38.543 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2019-12-12 15:54:38.627 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2019-12-12 15:54:38.806 UTC [23] LOG: database system was shut down at 2019-12-12 14:24:48 UTC
db_1 | 2019-12-12 15:54:39.053 UTC [1] LOG: database system is ready to accept connections
db_1 | 2019-12-12 16:40:42.473 UTC [1] LOG: received smart shutdown request
db_1 | 2019-12-12 16:40:42.590 UTC [1] LOG: background worker "logical replication launcher" (PID 29) exited with exit code 1
db_1 | 2019-12-12 16:40:42.590 UTC [24] LOG: shutting down
db_1 | 2019-12-12 16:40:43.398 UTC [1] LOG: database system is shut down
db_1 | 2019-12-13 00:02:44.643 UTC [1] LOG: starting PostgreSQL 12.0 (Debian 12.0-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2019-12-13 00:02:44.665 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2019-12-13 00:02:44.665 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2019-12-13 00:02:44.751 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2019-12-13 00:02:44.947 UTC [23] LOG: database system was shut down at 2019-12-12 16:40:43 UTC
db_1 | 2019-12-13 00:02:45.179 UTC [1] LOG: database system is ready to accept connections
db_1 | 2019-12-13 00:32:00.742 UTC [1] LOG: received smart shutdown request
db_1 | 2019-12-13 00:32:01.089 UTC [1] LOG: background worker "logical replication launcher" (PID 29) exited with exit code 1
db_1 | 2019-12-13 00:32:01.089 UTC [24] LOG: shutting down
db_1 | 2019-12-13 00:32:02.353 UTC [1] LOG: database system is shut down
db_1 | 2019-12-13 01:01:34.874 UTC [1] LOG: starting PostgreSQL 12.0 (Debian 12.0-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2019-12-13 01:01:34.896 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2019-12-13 01:01:34.896 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2019-12-13 01:01:35.035 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2019-12-13 01:01:35.479 UTC [23] LOG: database system was shut down at 2019-12-13 00:32:01 UTC
db_1 | 2019-12-13 01:01:35.873 UTC [1] LOG: database system is ready to accept connections
web_1 | Usage:
web_1 | rails new APP_PATH [options]
web_1 |
web_1 | Options:
web_1 | [--skip-namespace], [--no-skip-namespace] # Skip namespace (affects only isolated applications)
web_1 | -r, [--ruby=PATH] # Path to the Ruby binary of your choice
web_1 | # Default: /usr/local/bin/ruby
web_1 | -m, [--template=TEMPLATE] # Path to some application template (can be a filesystem path or URL)
web_1 | -d, [--database=DATABASE] # Preconfigure for selected database (options: mysql/postgresql/sqlite3/oracle/frontbase/ibm_db/sqlserver/jdbcmysql/jdbcsqlite3/jdbcpostgresql/jdbc)
web_1 | # Default: sqlite3
web_1 | [--skip-yarn], [--no-skip-yarn] # Don't use Yarn for managing JavaScript dependencies
web_1 | [--skip-gemfile], [--no-skip-gemfile] # Don't create a Gemfile
web_1 | -G, [--skip-git], [--no-skip-git] # Skip .gitignore file
web_1 | [--skip-keeps], [--no-skip-keeps] # Skip source control .keep files
web_1 | -M, [--skip-action-mailer], [--no-skip-action-mailer] # Skip Action Mailer files
web_1 | -O, [--skip-active-record], [--no-skip-active-record] # Skip Active Record files
web_1 | [--skip-active-storage], [--no-skip-active-storage] # Skip Active Storage files
web_1 | -P, [--skip-puma], [--no-skip-puma] # Skip Puma related files
web_1 | -C, [--skip-action-cable], [--no-skip-action-cable] # Skip Action Cable files
web_1 | -S, [--skip-sprockets], [--no-skip-sprockets] # Skip Sprockets files
web_1 | [--skip-spring], [--no-skip-spring] # Don't install Spring application preloader
web_1 | [--skip-listen], [--no-skip-listen] # Don't generate configuration that depends on the listen gem
web_1 | [--skip-coffee], [--no-skip-coffee] # Don't use CoffeeScript
web_1 | -J, [--skip-javascript], [--no-skip-javascript] # Skip JavaScript files
web_1 | [--skip-turbolinks], [--no-skip-turbolinks] # Skip turbolinks gem
web_1 | -T, [--skip-test], [--no-skip-test] # Skip test files
web_1 | [--skip-system-test], [--no-skip-system-test] # Skip system test files
web_1 | [--skip-bootsnap], [--no-skip-bootsnap] # Skip bootsnap gem
web_1 | [--dev], [--no-dev] # Setup the application with Gemfile pointing to your Rails checkout
web_1 | [--edge], [--no-edge] # Setup the application with Gemfile pointing to Rails repository
web_1 | [--rc=RC] # Path to file containing extra configuration options for rails command
web_1 | [--no-rc], [--no-no-rc] # Skip loading of extra configuration options from .railsrc file
web_1 | [--api], [--no-api] # Preconfigure smaller stack for API only apps
web_1 | -B, [--skip-bundle], [--no-skip-bundle] # Don't run bundle install
web_1 | [--webpack=WEBPACK] # Preconfigure for app-like JavaScript with Webpack (options: react/vue/angular/elm/stimulus)
web_1 |
web_1 | Runtime options:
web_1 | -f, [--force] # Overwrite files that already exist
web_1 | -p, [--pretend], [--no-pretend] # Run but do not make any changes
web_1 | -q, [--quiet], [--no-quiet] # Suppress status output
web_1 | -s, [--skip], [--no-skip] # Skip files that already exist
web_1 |
web_1 | Rails options:
web_1 | -h, [--help], [--no-help] # Show this help message and quit
web_1 | -v, [--version], [--no-version] # Show Rails version number and quit
web_1 |
web_1 | Description:
web_1 | The 'rails new' command creates a new Rails application with a default
web_1 | directory structure and configuration at the path you specify.
web_1 |
web_1 | You can specify extra command-line arguments to be used every time
web_1 | 'rails new' runs in the .railsrc configuration file in your home directory.
web_1 |
web_1 | Note that the arguments specified in the .railsrc file don't affect the
web_1 | defaults values shown above in this help message.
web_1 |
web_1 | Example:
web_1 | rails new ~/Code/Ruby/weblog
web_1 |
web_1 | This generates a skeletal Rails installation in ~/Code/Ruby/weblog.
railsdocker_web_1 exited with code 0
In the current case I think you are not using the right Rails directory.
I am not sure if I am doing it right; I wonder if it's the WORKDIR I have set up, or whether something is already running on port 3000. Any idea what is wrong with my setup?
UPDATE:
After running sudo docker-compose build and sudo docker-compose up I got these errors:
data-K54C:~/Desktop/rails_docker$ sudo docker-compose up
railsdocker_db_1 is up-to-date
Recreating railsdocker_web_1 ...
Recreating railsdocker_web_1 ... error
ERROR: for railsdocker_web_1 no such image: sha256:6d066f5f04e34f6f442d4a68fb4124e1093bb6a976593087d5ebc92478abfaae: No such image: sha256:6d066f5f04e34f6f442d4a68fb4124e1093bb6a976593087d5ebc92478abfaae
ERROR: for web no such image: sha256:6d066f5f04e34f6f442d4a68fb4124e1093bb6a976593087d5ebc92478abfaae: No such image: sha256:6d066f5f04e34f6f442d4a68fb4124e1093bb6a976593087d5ebc92478abfaae
ERROR: Encountered errors while bringing up the project.
sam#sam-K54C:~/Desktop/rails_docker$ clear
Try replacing the command with the following:
command: bundle exec bin/rails s -p 3000 -b '0.0.0.0'
That should work.
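If the rails new usage text in the log means there is no Rails application in the mounted directory yet, one option, borrowed from the Docker Compose Rails tutorial quoted in the next question and offered here only as a suggestion, is to generate the app skeleton inside the container and rebuild:

docker-compose run web rails new . --force --database=postgresql
docker-compose build
docker-compose up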

Docker Compose with Rails hanging up on command

I'm following the Docker Rails tutorial via https://docs.docker.com/compose/rails/#build-the-project (using Windows 10 Home)
EDIT: Just a note, I am using docker-toolbox because Docker requires Windows 10 Pro for Hyper-V, and I have Windows 10 Home edition.
I have gone through the tutorial several times, and each time I run docker-compose up, it gets hung up and doesn't mention the local port the app is running on. I have also tried making a new app and changing the port number in docker-compose.yml to see if that would fix the issue.
All of the previous commands in the tutorial have worked properly, and I have edited the config/database.yml file correctly, before running the docker-compose up command.
I have deleted images, and all files and started from scratch several times. I still run into the same issue.
Here are the commands from the tutorial:
docker-compose run web rails new . --force --database=postgresql --skip-bundle
docker-compose build
docker-compose up
docker-compose run web rails db:create
Here is my docker-compose.yml file:
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
Here is my Dockerfile:
FROM ruby:2.3.3
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /myapp
WORKDIR /myapp
ADD Gemfile /myapp/Gemfile
ADD Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
ADD . /myapp
Here is the output from docker-compose up:
$ docker-compose up
mydockerbuild_db_1 is up-to-date
Creating mydockerbuild_web_1
Attaching to mydockerbuild_db_1, mydockerbuild_web_1
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
db_1 | Data page checksums are disabled.
db_1 |
db_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db_1 | creating subdirectories ... ok
db_1 | selecting default max_connections ... 100
db_1 | selecting default shared_buffers ... 128MB
db_1 | selecting dynamic shared memory implementation ... posix
db_1 | creating configuration files ... ok
db_1 | running bootstrap script ... ok
db_1 | performing post-bootstrap initialization ... ok
db_1 |
db_1 | WARNING: enabling "trust" authentication for local connections
db_1 | You can change this by editing pg_hba.conf or using the option -A, or
db_1 | --auth-local and --auth-host, the next time you run initdb.
db_1 | syncing data to disk ... ok
db_1 |
db_1 | Success. You can now start the database server using:
db_1 |
db_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
db_1 |
db_1 | ****************************************************
db_1 | WARNING: No password has been set for the database.
db_1 | This will allow anyone with access to the
db_1 | Postgres port to access your database. In
db_1 | Docker's default configuration, this is
db_1 | effectively any other container on the same
db_1 | system.
db_1 |
db_1 | Use "-e POSTGRES_PASSWORD=password" to set
db_1 | it in "docker run".
db_1 | ****************************************************
db_1 | waiting for server to start....LOG: database system was shut down at 2017-02-14 18:56:05 UTC
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
db_1 | done
db_1 | server started
db_1 | ALTER ROLE
db_1 |
db_1 |
db_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db_1 |
db_1 | LOG: received fast shutdown request
db_1 | LOG: aborting any active transactions
db_1 | LOG: autovacuum launcher shutting down
db_1 | LOG: shutting down
db_1 | waiting for server to shut down....LOG: database system is shut down
db_1 | done
db_1 | server stopped
db_1 |
db_1 | PostgreSQL init process complete; ready for start up.
db_1 |
db_1 | LOG: database system was shut down at 2017-02-14 18:56:06 UTC
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
db_1 | LOG: received smart shutdown request
db_1 | LOG: autovacuum launcher shutting down
db_1 | LOG: shutting down
db_1 | LOG: database system is shut down
db_1 | LOG: database system was shut down at 2017-02-14 19:06:12 UTC
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
db_1 | ERROR: database "myapp_development" already exists
db_1 | STATEMENT: CREATE DATABASE "myapp_development" ENCODING = 'unicode'
db_1 | ERROR: database "myapp_test" already exists
db_1 | STATEMENT: CREATE DATABASE "myapp_test" ENCODING = 'unicode'
db_1 | ERROR: database "myapp_development" already exists
db_1 | STATEMENT: CREATE DATABASE "myapp_development" ENCODING = 'unicode'
db_1 | ERROR: database "myapp_test" already exists
db_1 | STATEMENT: CREATE DATABASE "myapp_test" ENCODING = 'unicode'
db_1 | ERROR: database "myapp_development" already exists
db_1 | STATEMENT: CREATE DATABASE "myapp_development" ENCODING = 'unicode'
db_1 | ERROR: database "myapp_test" already exists
db_1 | STATEMENT: CREATE DATABASE "myapp_test" ENCODING = 'unicode'

docker-compose generating duplicate entries in /etc/hosts

I have a fairly simple docker-compose.yml:
db:
  build: docker/db
  env_file:
    - .env
  ports:
    - "5432"
web:
  build: .
  env_file:
    - .env
  volumes:
    - .:/home/app/emerson
  ports:
    - "80:80"
  links:
    - db
The web container launches a rails app. Everything goes smoothly, but there is one thing that confuses me. Looking inside /etc/hosts on the web container, I see the following entries:
172.17.0.10 db_1
172.17.0.10 emerson_db_1
172.17.0.10 db
I would expect db, since that's the container I'm linking to the web container, but where did the other entries come from? FYI, here's the output of docker-compose up:
Creating emerson_db_1...
Creating emerson_web_1...
Attaching to emerson_db_1, emerson_web_1
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
db_1 | Data page checksums are disabled.
db_1 |
db_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db_1 | creating subdirectories ... ok
db_1 | selecting default max_connections ... 100
db_1 | selecting default shared_buffers ... 128MB
db_1 | selecting dynamic shared memory implementation ... posix
db_1 | creating configuration files ... ok
web_1 | *** Running /etc/my_init.d/00_configure_nginx.sh...
web_1 | *** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
web_1 | No SSH host key available. Generating one...
db_1 | ok
db_1 | initializing pg_authid ... ok
web_1 | Creating SSH2 RSA key; this may take some time ...
db_1 | initializing dependencies ... ok
web_1 | Creating SSH2 DSA key; this may take some time ...
web_1 | Creating SSH2 ECDSA key; this may take some time ...
web_1 | Creating SSH2 ED25519 key; this may take some time ...
db_1 | creating system views ... ok
db_1 | loading system objects' descriptions ... ok
db_1 | creating collations ... ok
db_1 | creating conversions ... ok
db_1 | creating dictionaries ... ok
db_1 | setting privileges on built-in objects ... ok
web_1 | invoke-rc.d: policy-rc.d denied execution of restart.
db_1 | creating information schema ... ok
web_1 | *** Running /etc/my_init.d/30_presetup_nginx.sh...
web_1 | *** Running /etc/rc.local...
db_1 | loading PL/pgSQL server-side language ... ok
web_1 | *** Booting runit daemon...
web_1 | *** Runit started as PID 98
db_1 | vacuuming database template1 ... ok
db_1 | copying template1 to template0 ... ok
db_1 | copying template1 to postgres ... ok
web_1 | Apr 24 02:44:26 1d3b7bb27612 syslog-ng[105]: syslog-ng starting up; version='3.5.3'
db_1 | syncing data to disk ... ok
db_1 |
db_1 | WARNING: enabling "trust" authentication for local connections
db_1 | You can change this by editing pg_hba.conf or using the option -A, or
db_1 | --auth-local and --auth-host, the next time you run initdb.
db_1 |
db_1 | Success. You can now start the database server using:
db_1 |
db_1 | postgres -D /var/lib/postgresql/data
db_1 | or
db_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
db_1 |
db_1 | ****************************************************
db_1 | WARNING: No password has been set for the database.
db_1 | This will allow anyone with access to the
db_1 | Postgres port to access your database. In
db_1 | Docker's default configuration, this is
db_1 | effectively any other container on the same
db_1 | system.
db_1 |
db_1 | Use "-e POSTGRES_PASSWORD=password" to set
db_1 | it in "docker run".
db_1 | ****************************************************
db_1 |
db_1 | PostgreSQL stand-alone backend 9.4.1
db_1 | backend> statement: ALTER USER "postgres" WITH SUPERUSER ;
db_1 |
web_1 | ok: run: /etc/service/nginx-log-forwarder: (pid 118) 0s
db_1 | backend>
db_1 | No PostgreSQL clusters exist; see "man pg_createcluster" ... (warning).
db_1 |
db_1 | backend> *******************************************
db_1 | LOG: database system was shut down at 2015-04-24 02:44:28 UTC
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
web_1 | [ 2015-04-24 02:44:27.9386 119/7f4c07f13780 agents/Watchdog/Main.cpp:538 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nogroup', 'default_python' => 'python', 'default_ruby' => '/usr/bin/ruby', 'default_user' => 'nobody', 'log_level' => '0', 'max_pool_size' => '6', 'passenger_root' => '/usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini', 'passenger_version' => '4.0.58', 'pool_idle_time' => '300', 'temp_dir' => '/tmp', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_passenger_version' => '4.0.58', 'web_server_pid' => '107', 'web_server_type' => 'nginx', 'web_server_worker_gid' => '33', 'web_server_worker_uid' => '33' }
web_1 | [ 2015-04-24 02:44:27.0007 122/7f0c3eb9a780 agents/HelperAgent/Main.cpp:650 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.107/generation-0/request
web_1 | [ 2015-04-24 02:44:28.1065 127/7f5e5b4377c0 agents/LoggingAgent/Main.cpp:321 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.107/generation-0/logging
web_1 | [ 2015-04-24 02:44:28.1072 119/7f4c07f13780 agents/Watchdog/Main.cpp:728 ]: All Phusion Passenger agents started!
But docker ps -a shows only two containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d3b7bb27612 emerson_web:latest "/sbin/my_init" About an hour ago Up About an hour 443/tcp, 0.0.0.0:80->80/tcp emerson_web_1
0c047c3ce103 emerson_db:latest "/docker-entrypoint. About an hour ago Up About an hour 0.0.0.0:49156->5432/tcp emerson_db_1
In addition, I also see duplicate environment variables in the web container, corresponding to db, db_1 and emerson_db_1 prefixes.
They are coming from pre-1.0 docker-compose, where multiple instances of a service were named with a _1, _2 pattern.
PR 364 introduced the link name (by default, the name of the linked service) as the hostname to connect to, instead of using environment variables.
There are still aliases with _x added for each container instance, and that can be an issue (Issue 472: hostnames with an underscore fail Ruby URI validation).
The current answer is:
You can use the name of the service in the docker-compose.yml as the hostname. It doesn't contain any underscores.
You can also add an alias to your link to the container, which should allow you to access it as just the alias (see the sketch after this list).
In the 1.3 release of Compose there should be support for naming your container as anything you want, which will make this more obvious.
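As a concrete illustration of the alias option, the link in the web service can carry an alias that becomes the hostname inside the container; the alias name database below is just an example:

web:
  build: .
  links:
    - db:database

Inside the web container the database is then reachable as database, which avoids the underscore problem from Issue 472.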
