How to fix the WARNINGs when running the redis:alpine Docker image - docker

If I run the redis:alpine Docker image using the command
docker run redis:alpine
I see several warnings:
1:C 08 May 08:29:32.308 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
[Redis ASCII-art startup banner: Redis 3.2.8 (00000000/0) 64 bit, running in standalone mode, port 6379, PID 1]
1:M 08 May 08:29:32.311 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 08 May 08:29:32.311 # Server started, Redis version 3.2.8
1:M 08 May 08:29:32.311 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 08 May 08:29:32.311 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 08 May 08:29:32.311 * The server is now ready to accept connections on port 6379
I've tried to fix the first two of these warnings using the following Dockerfile:
FROM redis:alpine
COPY somaxconn /proc/sys/net/core/somaxconn
COPY sysctl.conf /etc/sysctl.conf
CMD ["redis-server", "--appendonly", "yes"]
where my local file somaxconn contains the single entry 511 and sysctl.conf contains the line vm.overcommit_memory = 1. However, I still get the same warnings when I build and run the container.
How can I get rid of these warnings? (There is mention of the issues in https://www.techandme.se/performance-tips-for-redis-cache-server/ but the solution described there, involving modifying rc.local, seems to pertain to Raspberry Pi).

Bad way to handle things: /proc is a read-only filesystem, so to modify it you can run Docker in privileged mode; then you can change the values after the container has started.
If running the container in privileged mode, you can disable THP using these commands:
# echo never > /sys/kernel/mm/transparent_hugepage/enabled
# echo never > /sys/kernel/mm/transparent_hugepage/defrag
Proper way: make sure you are running a recent version of Docker (upgrade if needed). The run subcommand has a --sysctl option:
$ docker run -ti --sysctl net.core.somaxconn=4096 --rm redis:alpine /bin/sh
root@9e850908ddb7:/# sysctl net.core.somaxconn
net.core.somaxconn = 4096
...
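Applied to the original redis:alpine case, a minimal sketch (511 matches Redis' default TCP backlog, so any value of 511 or higher silences that warning; the container name is an arbitrary choice):
$ docker run -d --name redis --sysctl net.core.somaxconn=511 redis:alpine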
Unfortunately, vm.overcommit_memory is currently not allowed to be set via the --sysctl parameter, and the same applies to THP (transparent_hugepage), because these settings are not namespaced. To fix these warnings for a container running on a Linux host, you have to change them directly on the host. Here are the related issues:
#19
#55
You don't need privileged mode for the proper approach.
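A minimal sketch of the host-side changes, taken from the warnings themselves (run on the Docker host as root; persisting the THP setting across reboots still needs an rc.local entry or equivalent, as the warning notes):
# sysctl vm.overcommit_memory=1
# echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
# echo never > /sys/kernel/mm/transparent_hugepage/enabled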

Following @ovanes' response in this thread, I would add that the proper way to fix it when running the container from an Ansible playbook is to add the following to the playbook:
sysctls:
  net.core.somaxconn: "4096"
Here is the Ansible documentation about it, and here is a good explanation of this Redis warning.
As for the overcommit_memory issue, it must be solved directly on the Docker host, by adding vm.overcommit_memory = 1 to /etc/sysctl.conf.
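A minimal sketch of such a playbook task, assuming the docker_container module (community.docker.docker_container in newer Ansible); the container name and image are placeholders:
- name: Run redis with a larger somaxconn
  community.docker.docker_container:
    name: redis
    image: redis:alpine
    sysctls:
      net.core.somaxconn: "4096"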

Related

How to connect to Docker containers using hostnames from Docker host? [duplicate]

I want to connect to my Docker containers from my Docker host using hostnames.
I already know how to connect to containers by mapping their ports using docker run -p <host-port>:<container-port> ... and then access them through localhost.
Also, I can connect to containers using their IP addresses given by docker inspect <container>. But these IP addresses are not static.
How can I give containers hostnames, so that I can connect to them through exposed ports without having to think about non-static IPs?
Use docker-compose and define services in it. Each container will be part of a service, and one container can talk to another container using that container's service name.
Ex:
$ cat docker-compose.yml
version: '3.1'
services:
  server:
    image: redis
    command: [ "redis-server" ]
  client:
    image: redis
    command: [ "redis-cli", "-h", "server", "ping" ]
    links:
      - server
$
$ docker-compose up
Starting server_1 ... done
Starting client_1 ... done
Attaching to server_1, client_1
client_1 | PONG
server_1 | 1:C 10 Dec 2019 12:59:20.161 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
server_1 | 1:C 10 Dec 2019 12:59:20.161 # Redis version=5.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
server_1 | 1:C 10 Dec 2019 12:59:20.161 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
server_1 | 1:M 10 Dec 2019 12:59:20.162 * Running mode=standalone, port=6379.
server_1 | 1:M 10 Dec 2019 12:59:20.162 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
server_1 | 1:M 10 Dec 2019 12:59:20.162 # Server initialized
server_1 | 1:M 10 Dec 2019 12:59:20.162 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
server_1 | 1:M 10 Dec 2019 12:59:20.162 * Ready to accept connections
client_1 exited with code 0
Here, I created two services, server and client. server starts a redis-server and client tries to connect to it. Also, note that I haven't exposed any ports here, so the client container is talking to the server container using the service name server:
client:
  image: redis
  command: [ "redis-cli", "-h", "server", "ping" ]

Redis is shutting down for no reason in docker container

I am trying to launch a Redis Docker container using docker-compose, but I always get this error. These are my docker-compose commands: docker-compose -f docker-compose.yml build and docker-compose -f docker-compose.yml up -d --force-recreate. I am running the Docker containers on AWS ECS, on a t2.micro EC2 instance. I am not sure if that is the reason why. Any insight would be helpful.
I have also included my docker-compose.yml
version: '2.1'
services:
  redis:
    image: redis:latest
    container_name: redis
    volumes:
      - redis_data:/data
    ports:
      - 6379:6379
  app:
    image: custom_image
    build: .
    depends_on:
      redis:
        condition: service_started
    ports:
      - 8003:8003
    links:
      - redis
volumes:
  redis_data:
Error
1:C 11 Sep 00:18:34.345 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 11 Sep 00:18:34.348 # Redis version=4.0.11, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 11 Sep 00:18:34.348 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 11 Sep 00:18:34.349 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
1:M 11 Sep 00:18:34.349 # Server can't set maximum open files to 10032 because of OS error: Operation not permitted.
1:M 11 Sep 00:18:34.349 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
1:M 11 Sep 00:18:34.350 * Running mode=standalone, port=6379.
1:M 11 Sep 00:18:34.350 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 11 Sep 00:18:34.350 # Server initialized
1:M 11 Sep 00:18:34.350 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 11 Sep 00:18:34.350 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 11 Sep 00:18:34.350 * Ready to accept connections
1:signal-handler (1536625117) Received SIGTERM scheduling shutdown...
1:M 11 Sep 00:18:37.375 # User requested shutdown...
1:M 11 Sep 00:18:37.375 * Saving the final RDB snapshot before exiting.
1:M 11 Sep 00:18:37.378 * DB saved on disk
1:M 11 Sep 00:18:37.378 # Redis is now ready to exit, bye bye...
Ran into the same issue. After a bit of digging we found that it was killed by systemd due to inactivity.
Running the systemctl show docker.service command shows that the inactive and active enter timestamps match up with when the Redis service stopped and started again.
InactiveEnterTimestamp=Tue 2021-08-03 22:07:19 AEST
ActiveEnterTimestamp=Wed 2021-08-04 09:30:36 AEST
Our solution is just to perform some activity on Redis periodically so that it doesn't enter the inactive state.
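A minimal sketch of one way to generate that periodic activity, assuming a Compose healthcheck that pings Redis (healthchecks are supported in compose file version 2.1, which the question already uses; the interval is an arbitrary choice, and whether this alone keeps systemd from marking things inactive depends on the setup):
redis:
  image: redis:latest
  healthcheck:
    test: ["CMD", "redis-cli", "ping"]
    interval: 30s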

How to debug "WSREP: SST failed: 1 (Operation not permitted)" with a MariaDB Galera cluster in Docker?

Requirement: CentOS-based Docker container providing a MariaDB 10.x Galera cluster
Host Environment: OS X El Capitan 10.11.6, Docker 1.12.5 (14777)
Docker Container OS: CentOS Linux release 7.3.1611 (Core)
DB: 10.1.20-MariaDB
I found a promising Docker image, but the documentation seems to be obsolete; the commands to start the cluster do not work. At the time of writing the image uses wsrep_sst_method = rsync, so I figured that the following commands should work (replace /Users/Me/somedb with an empty directory on your host):
docker pull dayreiner/centos7-mariadb-10.1-galera
docker run -d --name db1 -h db1host -p 3306:3306 -e CLUSTER_NAME=joe -e CLUSTER=BOOTSTRAP -e MYSQL_ROOT_PASSWORD='pwd' -v /Users/Me/somedb:/var/lib/mysql dayreiner/centos7-mariadb-10.1-galera:latest
docker run -d --name db2 -h db2host -p 3307:3306 --link db1 -e CLUSTER_NAME=joe -e CLUSTER=db1host,db2host -e MYSQL_ROOT_PASSWORD='pwd' -v /Users/Me/somedb:/var/lib/mysql dayreiner/centos7-mariadb-10.1-galera:latest
The first container (db1) comes up and seems OK. But the last line that tries to add db2 as a second node to the Galera cluster results in the following error (docker logs db2):
2017-01-10 15:26:10 139742710823680 [Note] WSREP: New cluster view: global state: :-1, view# 0: Primary, number of nodes: 1, my index: 0, protocol version 3
2017-01-10 15:26:10 139742711142656 [ERROR] WSREP: SST failed: 1 (Operation not permitted)
2017-01-10 15:26:10 139742711142656 [ERROR] Aborting
I could not figure out what is wrong here and would appreciate ideas on how to analyze this further. Is this a problem of rsync, Galera or even Docker?
That's my image on dockerhub.
I had not tested the cluster (until now) on a single host, only running on multiple hosts. You're right though, running two on a single host seems to abort the second node on start.
This looks to be caused by the default bridge network not behaving nicely. Possibly some issue with handling the ports for state transfer. Not really sure why.
If you modify your commands to first create a custom network for your clustered containers to use on the backend, and then run the cluster members using that network, that seems to work when running two nodes on a single host:
# docker network create mariadb
# docker run -d --network=mariadb -p 3307:3306 --name db1 -e CLUSTER_NAME=test -e CLUSTER=BOOTSTRAP -e MYSQL_ROOT_PASSWORD=test -v /opt/test/db1:/var/lib/mysql dayreiner/centos7-mariadb-10.1-galera:latest
# docker run -d --network=mariadb -p 3308:3306 --name db2 -e CLUSTER_NAME=test -e CLUSTER=db1,db2 -e MYSQL_ROOT_PASSWORD=test -v /opt/test/db2:/var/lib/mysql dayreiner/centos7-mariadb-10.1-galera:latest
No errors this time on the second node:
# docker logs db2 -f
...snip
2017-01-12 20:33:08 139726185019648 [Note] WSREP: Signalling provider to continue.
2017-01-12 20:33:08 139726185019648 [Note] WSREP: SST received: 42eaa277-d906-11e6-b98a-3e6b9531c1b7:0
2017-01-12 20:33:08 139725604124416 [Note] WSREP: 1.0 (f170852fe1b6): State transfer from 0.0 (951fdda2454b) complete.
2017-01-12 20:33:08 139725604124416 [Note] WSREP: Shifting JOINER -> JOINED (TO: 0)
2017-01-12 20:33:08 139725604124416 [Note] WSREP: Member 1.0 (f170852fe1b6) synced with group.
2017-01-12 20:33:08 139725604124416 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)
2017-01-12 20:33:08 139726105180928 [Note] WSREP: Synchronized with group, ready for connections
2017-01-12 20:33:08 139726105180928 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2017-01-12 20:33:08 139726185019648 [Note] mysqld: ready for connections.
Version: '10.1.20-MariaDB' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
Try that, see how it goes. Also, if you run it using docker-compose it will also work without any problems. This is likely because compose creates a dedicated compose container network by default. You can see an example compose file in this gist.
Just make sure to use a different directory for each mariadb instance, and after you have your cluster started, stop db1 and relaunch it as a regular cluster member (otherwise the next time db1 is started it will keep bootstrapping a new cluster).
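Since the gist itself isn't reproduced here, a minimal sketch of what such a compose file might look like, assuming the same image and environment variables as the docker run commands above (directories and passwords are placeholders, and db1 should be switched off BOOTSTRAP after the first start, as noted above):
version: '2'
services:
  db1:
    image: dayreiner/centos7-mariadb-10.1-galera:latest
    environment:
      CLUSTER_NAME: test
      CLUSTER: BOOTSTRAP
      MYSQL_ROOT_PASSWORD: test
    volumes:
      - /opt/test/db1:/var/lib/mysql
  db2:
    image: dayreiner/centos7-mariadb-10.1-galera:latest
    depends_on:
      - db1
    environment:
      CLUSTER_NAME: test
      CLUSTER: db1,db2
      MYSQL_ROOT_PASSWORD: test
    volumes:
      - /opt/test/db2:/var/lib/mysql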
Works after upgrading the Docker image to MariaDB 10.2.3 (from 10.1.20).
I am not 100% sure whether I have a truly valid cluster now, but at least show status like "wsrep_cluster_size"; produces the following output and the DB is usable:
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
Note: I also omitted the -v option and placed the DB files inside the Docker container instead of on an external volume. I don't think that this makes a difference regarding the cluster, but I did not verify 10.2.3 with -v. However, I tried 10.1.20 with both variations (external volume with -v and container-internal files) and neither worked.

Unable to connect to Postgres after docker-compose up.

I am attempting to Dockerize a Rails app using a number of online tutorials. I've reached the point where I can successfully fire up a Docker container using docker-compose up. But after that point, I have trouble connecting to my database. The following is my docker-compose up output:
docker-compose up
Pulling redis (redis:latest)...
latest: Pulling from library/redis
75a822cd7888: Pull complete
e40c2fafe648: Pull complete
ce384d4aea4f: Pull complete
5e29dd684b84: Pull complete
29a3c975c335: Pull complete
a405554540f9: Pull complete
4b2454731fda: Pull complete
Digest: sha256:eed4da4937cb562e9005f3c66eb8c3abc14bb95ad497c03dc89d66bcd172fc7f
Status: Downloaded newer image for redis:latest
Pulling postgres (postgres:9.5.4)...
9.5.4: Pulling from library/postgres
43c265008fae: Pull complete
215df7ad1b9b: Pull complete
833a4a9c3573: Pull complete
e5716357a052: Pull complete
6552dfce18a3: Pull complete
b75b371d1e9f: Pull complete
ecc63fd465b8: Pull complete
8eb11995a95a: Pull complete
9c82fb17fc44: Pull complete
389787480cc2: Pull complete
01988d09a399: Pull complete
Digest: sha256:1480f2446dabb1116fafa426ac530d2404277873a84dc4a4d0d9d4b37a5601e8
Status: Downloaded newer image for postgres:9.5.4
Creating redis
Creating postgres
Attaching to postgres, redis
postgres | The files belonging to this database system will be owned by user "postgres".
postgres | This user must also own the server process.
postgres |
redis | 1:C 02 Jan 21:08:36.583 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis | [Redis ASCII-art startup banner: Redis 3.2.6 (00000000/0) 64 bit, running in standalone mode, port 6379, PID 1, http://redis.io]
postgres | The database cluster will be initialized with locale "en_US.utf8".
postgres | The default database encoding has accordingly been set to "UTF8".
postgres | The default text search configuration will be set to "english".
postgres |
postgres | Data page checksums are disabled.
postgres |
postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres | creating subdirectories ... ok
postgres | selecting default max_connections ... 100
redis | 1:M 02 Jan 21:08:36.584 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
postgres | selecting default shared_buffers ... 128MB
postgres | selecting dynamic shared memory implementation ... posix
redis | 1:M 02 Jan 21:08:36.584 # Server started, Redis version 3.2.6
redis | 1:M 02 Jan 21:08:36.584 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis | 1:M 02 Jan 21:08:36.584 * The server is now ready to accept connections on port 6379
postgres | creating configuration files ... ok
postgres | creating template1 database in /var/lib/postgresql/data/base/1 ... ok
postgres | initializing pg_authid ... ok
postgres | initializing dependencies ... ok
postgres | creating system views ... ok
postgres | loading system objects' descriptions ... ok
postgres | creating collations ... ok
postgres | creating conversions ... ok
postgres | creating dictionaries ... ok
postgres | setting privileges on built-in objects ... ok
postgres | creating information schema ... ok
postgres | loading PL/pgSQL server-side language ... ok
postgres | vacuuming database template1 ... ok
postgres | copying template1 to template0 ... ok
postgres | copying template1 to postgres ... ok
postgres | syncing data to disk ... ok
postgres |
postgres | Success. You can now start the database server using:
postgres |
postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres |
postgres |
postgres | WARNING: enabling "trust" authentication for local connections
postgres | You can change this by editing pg_hba.conf or using the option -A, or
postgres | --auth-local and --auth-host, the next time you run initdb.
postgres | ****************************************************
postgres | WARNING: No password has been set for the database.
postgres | This will allow anyone with access to the
postgres | Postgres port to access your database. In
postgres | Docker's default configuration, this is
postgres | effectively any other container on the same
postgres | system.
postgres |
postgres | Use "-e POSTGRES_PASSWORD=password" to set
postgres | it in "docker run".
postgres | ****************************************************
postgres | waiting for server to start....LOG: database system was shut down at 2017-01-02 21:08:37 UTC
postgres | LOG: MultiXact member wraparound protections are now enabled
postgres | LOG: database system is ready to accept connections
postgres | LOG: autovacuum launcher started
postgres | done
postgres | server started
postgres | CREATE DATABASE
postgres |
postgres | ALTER ROLE
postgres |
postgres |
postgres | /docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
postgres |
postgres | LOG: received fast shutdown request
postgres | waiting for server to shut down...LOG: aborting any active transactions
postgres | .LOG: autovacuum launcher shutting down
postgres | LOG: shutting down
postgres | LOG: database system is shut down
postgres | done
postgres | server stopped
postgres |
postgres | PostgreSQL init process complete; ready for start up.
postgres |
postgres | LOG: database system was shut down at 2017-01-02 21:08:39 UTC
postgres | LOG: MultiXact member wraparound protections are now enabled
postgres | LOG: database system is ready to accept connections
postgres | LOG: autovacuum launcher started
postgres | FATAL: role "boguthrie" does not exist
postgres | FATAL: role "boguthrie" does not exist
postgres | FATAL: role "user" does not exist
You can see there in the final output that I have tried a number of different user roles in my database.yml that I know exist (e.g. when I use the Postgres app I can successfully access my db using those roles). When I try to take a look at my running databases with psql <dbname> or psql -U user -d <dbname> -h localhost I get the following error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Finally, here are potentially relevant files.
database.yml
# PostgreSQL. Versions 8.2 and up are supported.
#
# Install the pg driver:
#   gem install pg
# On OS X with Homebrew:
#   gem install pg -- --with-pg-config=/usr/local/bin/pg_config
# On OS X with MacPorts:
#   gem install pg -- --with-pg-config=/opt/local/lib/postgresql84/bin/pg_config
# On Windows:
#   gem install pg
#       Choose the win32 build.
#       Install PostgreSQL and put its /bin directory on your path.
#
# Configure Using Gemfile
# gem 'pg'
#
default: &default
  adapter: postgresql
  encoding: unicode
  # For details on connection pooling, see rails configuration guide
  # http://guides.rubyonrails.org/configuring.html#database-pooling
  pool: 5
  database: example
  # The specified database role being used to connect to postgres.
  # To create additional roles in postgres see `$ createuser --help`.
  # When left blank, postgres will use the default role. This is
  # the same name as the operating system user that initialized the database.
  username: boguthrie
  # The password associated with the postgres role (username).
  password:
  # Connect on a TCP socket. Omitted by default since the client uses a
  # domain socket that doesn't need configuration. Windows does not have
  # domain sockets, so uncomment these lines.
  host: localhost
  # The TCP port the server listens on. Defaults to 5432.
  # If your server runs on a different port number, change accordingly.
  port: 5432
  # Schema search path. The server defaults to $user,public
  #schema_search_path: myapp,sharedapp,public
  # Minimum log levels, in increasing order:
  #   debug5, debug4, debug3, debug2, debug1,
  #   log, notice, warning, error, fatal, and panic
  # Defaults to warning.
  #min_messages: notice

development:
  <<: *default

# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.
test:
  <<: *default
  database: example_test
docker-compose.yml
version: '2'
services:
  postgres:
    container_name: a
    image: postgres:9.5.4
    environment:
      POSTGRES_PASSWORD:
      POSTGRES_USER:
      POSTGRES_DB: example
    ports:
      - "5432:5432"
  redis:
    container_name: redis
    image: redis
    ports:
      - "6379:6379"
Dockerfile
# The following are in the Dockerfile instructions
# The first non-comment instruction must be `FROM` in order to specify the Base Image from which you are building.
# 'FROM' can appear multiple times within a single Dockerfile in order to create multiple images.
# Simply make a note of the last image ID output by the commit before each new FROM command.
FROM ruby:2.3
MAINTAINER Bo
# The LABEL instruction adds metadata to an image.
# A LABEL is a key-value pair.
# To include spaces within a LABEL value, use quotes and backslashes as you would in command-line parsing.
# User docker inspect command to see labels.
LABEL version="0.1"
LABEL description="Example App"
# 'RUN' has two forms:
# The shell form or the executable form. All of the run commands in this file are in the shell form.
# This will throw errors if Gemfile has been modified since Gemfile.lock
RUN bundle config --global frozen 1
# Here we're creating the directory /usr/src/app and using it as our working directory.
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN apt-get update && apt-get install -y nodejs --no-install-recommends && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y mysql-client postgresql-client sqlite3 --no-install-recommends && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y imagemagick --no-install-recommends && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y graphviz --no-install-recommends && rm -rf /var/lib/apt/lists/*
# The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.
COPY Gemfile /usr/src/app/
COPY Gemfile.lock /usr/src/app/
RUN bundle install
COPY . /usr/src/app
# The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime.
# EXPOSE does not make the ports of the container accessible to the host.
EXPOSE 3000
# The main purpose of a CMD is to provide defaults for an executing container.
# These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well.
# There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
# Example common usage: CMD ["rails", "server", "-b", "0.0.0.0", "-P", "/tmp/server.pid"]. This will store the pid in a location not persisted between boots
# Define the script we want run once the container boots
# Use the "exec" form of CMD so our script shuts down gracefully on SIGTERM (i.e. `docker stop`)
CMD [ "config/containers/app_cmd.sh" ]
Any help here would be appreciated. Thanks for your time.
Your role does not exist. This is because POSTGRES_USER is not set in your docker-compose.yml file. If you set that value and recreate the container, the role will be created. POSTGRES_USER needs to match the user in the database.yml file for Rails.
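A minimal sketch of the corrected postgres service from the compose file above, assuming the boguthrie role from database.yml (the password is left empty here to match the blank password in database.yml; set both together if you want one):
postgres:
  container_name: a
  image: postgres:9.5.4
  environment:
    POSTGRES_USER: boguthrie   # must match username: in database.yml
    POSTGRES_PASSWORD:
    POSTGRES_DB: example
  ports:
    - "5432:5432"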

Foreman terminates immediately

I recently installed OS X and Ubuntu on different computers. I then attempted to install Redis and Foreman on both OSes. Neither install threw any errors, and both seemed to complete successfully. However, whenever I go to start Foreman with foreman start, I run into the below issue on both computers:
23:48:35 web.1 | started with pid 1316
23:48:35 redis.1 | started with pid 1317
23:48:35 worker.1 | started with pid 1318
23:48:35 redis.1 | [1317] 11 Jun 23:48:35.180 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
23:48:35 redis.1 | [1317] 11 Jun 23:48:35.181 * Increased maximum number of open files to 10032 (it was originally set to 256).
23:48:35 redis.1 | [1317] 11 Jun 23:48:35.181 # Creating Server TCP listening socket *:6379: bind: Address already in use
23:48:35 redis.1 | exited with code 1
23:48:35 system | sending SIGTERM to all processes
23:48:35 worker.1 | terminated by SIGTERM
23:48:35 web.1 | terminated by SIGTERM
For some reason, it seems like a path issue to me because it seems like Redis or Foreman cannot find the files they need to use to successfully execute, but I'm not exactly sure.
On OS X I used gem install foreman and brew install redis.
On Ubuntu I used the following:
Redis:
$ cd ~
$ wget http://download.redis.io/redis-stable.tar.gz
$ tar xvzf redis-stable.tar.gz
$ cd redis-stable
$ make
$ make test
Foreman:
$ gem install foreman
My PATH on OSX is as follows:
/Users/c/.rvm/gems/ruby-2.1.0/bin:/Users/c/.rvm/gems/ruby-2.1.0@global/bin:/Users/c/.rvm/rubies/ruby-2.1.0/bin:/Users/c/.rvm/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
On Ubuntu, my PATH is:
/usr/local/bin:/usr/lib/postgresql:/usr/lib/postgresql/9.3:/usr/lib/postgresql/9.3/lib:/usr/lib/postgresql/9.3/bin:/usr/share/doc:/usr/share/doc/postgresql-9.3:/usr/share/postgresql:/usr/share/postgresql/9.3:/usr/share/postgresql/9.3/man:$PATH
Redis-server does seem to execute successfully once, and then it fails with the message:
[1457] 12 Jun 00:02:48.481 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
[1457] 12 Jun 00:02:48.482 * Increased maximum number of open files to 10032 (it was originally set to 256).
[1457] 12 Jun 00:02:48.483 # Creating Server TCP listening socket *:6379: bind: Address already in use
Trying $ redis-server stop returns:
[1504] 12 Jun 00:05:56.173 # Fatal error, can't open config file 'stop'
I need help figuring out how to get Foreman and Redis working together so that I can view my local files in the browser at 127.0.0.1
EDIT
Redis does start, but nothing happens when I navigate to localhost:6379. I also tried the suggestion of finding processes. It found
c 751 0.0 0.0 2432768 596 s005 R+ 2:03PM 0:00.00 grep redis
c 616 0.0 0.0 2469952 1652 s004 S+ 2:01PM 0:00.05 redis-server *:6379
Trying to kill the process results in
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec
... or kill -l [sigspec]
Try starting the Redis server with the following command:
redis-server <path to your config file>
Also, check if there's an instance of the Redis server already running:
ps aux | grep redis
and then, if a process is found:
kill <process id>
Restart your Redis server.
This one-liner will kill any existing redis-server processes and then start a new redis-server. It sends SIGINT rather than SIGTERM, because when run under Foreman a SIGTERM would cause Foreman itself to quit, whereas SIGINT lets Foreman continue.
(ps aux | grep 6379 | grep redis | awk '{ print $2 }' | xargs kill -s SIGINT) && redis-server
In Procfile.dev:
redis: (ps aux | grep 6379 | grep redis | awk '{ print $2 }' | xargs kill -s SIGINT) && redis-server
List the running Redis servers from the terminal:
command: ps aux | grep redis
In the list, note down the 'pid' number of the server you want to terminate, for example pid 5379, then
use command: kill 5379
Try this:
$ sudo systemctl stop redis-server
now run: foreman start
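If the system-managed Redis grabs port 6379 again after a reboot, one option (assuming a systemd-managed redis-server unit, as above) is to disable it so Foreman's Redis process can bind the port:
$ sudo systemctl disable redis-server
$ foreman start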
