Docker with Redis via Procfile.dev in iTerm2 output is unreadable - ruby-on-rails

This is a bit of a strange one and I can't find answers anywhere else. If I have a Procfile.dev file with a basic Redis command in it, such as redis: docker run --rm -it -p 6379:6379 redis:latest, and run it via bin/dev, the ASCII-art logo that Redis prints makes the logs unreadable. If I remove the Redis command from the Procfile.dev, the output goes back to being neat and readable. Below is an example of the messed-up output:
Does anyone know how to make this look nice? I'm having to run Docker outside the Procfile at the moment because of this.
This is a Ruby on Rails 7 app running via bin/dev, if that is relevant.

If possible, try running the Docker container in non-interactive mode:
redis: docker run --rm -p 6379:6379 redis:latest
Note the removal of the -it flags.
This stops Redis from printing the ASCII-art logo.
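For reference, a Rails 7 Procfile.dev would then look something like this (the web and css lines are typical examples, not from the question; only the redis line is the one discussed here):
web: bin/rails server -p 3000
css: bin/rails tailwindcss:watch
redis: docker run --rm -p 6379:6379 redis:latest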
The reasoning behind the solution is in the comments for the always-show-logo configuration property in redis.conf:
# By default Redis shows an ASCII art logo only when started to log to the
# standard output and if the standard output is a TTY and syslog logging is
# disabled. Basically this means that normally a logo is displayed only in
# interactive sessions.
Similar issues have been reported for the service, like this one related to syslog.

Related

Application logging with multiple docker replicas (containers)

We have a .NET Core app which logs its output to files, e.g. portal-20200430-000.log. In the DEV environment, all is well :)
App is deployed via docker service, which initializes 3 replicas - 3 docker containers. We want to have all the logs from all the containers (replicas) in one place, so we mapped the file systems between the host machine and the containers via volumes.
docker-compose.yml:
version: "3.7"
services:
web:
image: portal:0.0.1
--- snipped content ---
volumes:
- "/home/portal/Dev/Logs:/app/Logs"
deploy:
replicas: 3
Each container (replica) outputs its own logs to /app/Logs/portal-20200430-000.log inside the container, but this folder is mapped to /home/portal/Dev/Logs on the host. So all 3 containers write into the same file on the host, which is not OK: some of the logs get lost, the 3 containers overwrite each other's logs, etc.
I suppose possible solutions are:
change the file name of each container's log (but logging is done via the external Karambolo logger, which has the file names hardcoded inside appsettings.json, and these settings are common to all container replicas)
instruct each docker replica to map a different volume - is that even possible?
Is there another solution?
Note - This is a partial solution.
When you start docker-compose with replicas, the only difference inside the containers is the HOSTNAME environment variable (unless it is set to a static value in docker-compose.yml).
Create a docker-compose.yml as below:
version: '3'
services:
  test_logging:
    image: bash
    entrypoint: bash
    command: -c "sleep 3600"
    deploy:
      replicas: 2
Run the containers:
docker-compose up -d
Now, if you execute bash interactively in the running containers, you will see the following environment variables.
$ docker exec -it test_test_logging_1 bash
bash-5.1# env
HOSTNAME=f000d941eab2
PWD=/
_BASH_BASELINE_PATCH=16
HOME=/root
_BASH_VERSION=5.1.16
_BASH_BASELINE=5.1.16
_BASH_LATEST_PATCH=16
TERM=xterm
SHLVL=1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
_=/usr/bin/env
$ docker exec -it test_test_logging_2 bash
bash-5.1# env
HOSTNAME=0b848ef70202
PWD=/
_BASH_BASELINE_PATCH=16
HOME=/root
_BASH_VERSION=5.1.16
_BASH_BASELINE=5.1.16
_BASH_LATEST_PATCH=16
TERM=xterm
SHLVL=1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
As you can see, only HOSTNAME differs between the containers. You can use the hostname to generate a different file name for each container's log file.
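One hedged way to exploit this (not from the original answer; the entrypoint.sh script, the Portal.dll name, and the LOG_FILE variable are all hypothetical) is a small entrypoint script that builds the log path from the hostname before starting the app:
#!/bin/sh
# Hypothetical entrypoint.sh: build a per-replica log path from the container
# hostname, since HOSTNAME is the only value that differs between replicas.
export LOG_FILE="/app/Logs/portal-$(hostname).log"
# The logging configuration would need to read LOG_FILE (or an equivalent
# mechanism); wiring that up depends on the logger in use.
exec dotnet Portal.dll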
Disk Space Issue
However, this won't be enough. By default, FileHandler (logging) will use as much disk space as needed. It's best to move to RotatingFileHandler or TimedRotatingFileHandler. Note that RotatingFileHandler is preferable, because a sudden surge of error logs can happen for any number of reasons, and by the time TimedRotatingFileHandler rotates the files you might have used up all the disk space.
Note that RotatingFileHandler doesn't compress rotated log files; that needs to be implemented separately.
P.S. Be aware that with R replicas, each using B backupCount files for RotatingFileHandler and S maxBytes, you will use R x B x S bytes of disk space with no compression.
Finally, this is a partial solution: if you restart the service, the hostnames change and new log files are created. Even with this limitation, the approach works if you integrate a log service such as the ELK stack to collect older log files for further analysis, e.g. error monitoring.

Elasticsearch docker container in non-prod mode to eliminate vm.max_map_count=262144 requirement

How can I configure Elasticsearch Docker containers (elasticsearch:7.5.0) to use fewer resources and run in a non-production mode?
I want to run the containers in Jenkins and on my desktop, and I'm hitting the requirement from this Elastic doc for running Docker images in production.
I'd like to figure out how to modify the elasticsearch.yml that I copy into the container so that it runs in a less resource-intensive mode.
Does anyone know how to do this?
You can run your Docker container in development mode and create a single-node ES cluster by following the official ES documentation on a single-node ES cluster. As mentioned in that link:
To start a single-node Elasticsearch cluster for development or
testing, specify single-node discovery to bypass the bootstrap checks:
In short, all you need to do is add -e "discovery.type=single-node" to your docker command, which enables dev mode so you don't have to satisfy the hard limits of production environments, i.e. it bypasses the bootstrap checks.
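For the elasticsearch:7.5.0 image from the question, that would look roughly like:
docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:7.5.0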
More information on your settings and how to turn it off can be found here
node.store.allow_mmap. This is a boolean setting indicating whether or
not memory-mapping is allowed. The default is to allow it.
So, if -e "discovery.type=single-node" doesn't turn it off, you can explicitly set it to false in your elasticsearch.yml.
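A minimal elasticsearch.yml along those lines might be (a sketch; both keys are documented Elasticsearch settings):
# Development-only settings
discovery.type: single-node
node.store.allow_mmap: false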
If you're reading this trying to find out how to do it when using docker-compose:
With an environment key
docker-compose.yml:
elasticsearch:
  environment:
    - discovery.type=single-node
With a custom elasticsearch.yml
Create elasticsearch.yml:
discovery:
  type: single-node
Mount it as a volume on your container in docker-compose.yml:
elasticsearch:
  volumes:
    - /path-to/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
Make sure the first part is actually a path: it has to start with / or ./, or else Compose will treat it as a named volume. The second part is the path inside the container, so it can be left as is.
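For illustration (the esconfig name is made up), the first entry below is a bind mount while the second would be treated as a named volume:
volumes:
  - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro  # bind mount (path starts with ./)
  - esconfig:/usr/share/elasticsearch/config                                  # named volume "esconfig"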
The file must be in a location you've enabled File Sharing for in your Docker application. Set this up in Preferences > Resources > File Sharing if you haven't.
I also faced this issue when using the docker.elastic.co/elasticsearch/elasticsearch:7.6.2 Elasticsearch Docker image for a single-node cluster.
The error I was getting is:
ERROR: [1] bootstrap checks failed
[1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
To start a single-node Elasticsearch cluster with Docker
Solution 1
So the solution is to run the Docker image with the environment variable -e "discovery.type=single-node" in the docker run command.
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.6.2
Solution 2
Add this "discovery.seed_hosts : 127.0.0.1:9300" in eleasticsearch.yml file. And build your own docker image and use it.
The Dockerfile will look like this:
FROM docker.elastic.co/elasticsearch/elasticsearch:7.6.2
RUN echo "discovery.seed_hosts: 127.0.0.1:9300" >> /usr/share/elasticsearch/config/elasticsearch.yml
RUN cat /usr/share/elasticsearch/config/elasticsearch.yml
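To use it, build and run the image (the tag name below is just an example):
docker build -t es-single-node .
docker run -p 9200:9200 -p 9300:9300 es-single-node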
For more details click here.

Advantage of using docker-compose file version 3 over a shellscript?

My initial reason for creating a docker-compose.yml was to take advantage of features such as build: and depends_on: to make a single file that builds all my images and runs them in containers. However, I noticed version 3 deprecates most of these functions, and I'm curious why I would use it over writing a shell script.
This is currently my shell script that runs all my containers (I assume this is what the version 3 docker-compose file would replace if I were to use it):
echo "Creating docker network net1"
docker network create net1
echo "Running api as a container with port 5000 exposed on net1"
docker run --name api_cntr --net net1 -d -p 5000:5000 api_img
echo "Running redis service with port 6379 exposed on net1"
docker run --name message_service --net net1 -p 6379:6379 -d redis
echo "Running celery worker on net1"
docker run --name celery_worker1 --net net1 -d celery_worker_img
echo "Running flower HUD on net1 with port 5555 exposed"
docker run --name flower_hud --net net1 -d -p 5555:5555 flower_hud_img
Does docker swarm rely on using stacks? If so then I can see a use for docker-compose and stacks, but I couldn't find an answer online. I would use version 3 because it is compatible with swarm, unlike version 2, if what I've read is true. Maybe I'm missing the point of docker-compose completely, but as of right now I'm a bit confused as to what it brings to the table.
Readability
Compare your sample shell script to a YAML version of same:
services:
  api_cntr:
    image: api_img
    networks:
      - net1
    ports:
      - 5000:5000
  message_service:
    image: redis
    networks:
      - net1
    ports:
      - 6379:6379
  celery_worker1:
    image: celery_worker_img
    networks:
      - net1
  flower_hud:
    image: flower_hud_img
    networks:
      - net1
    ports:
      - 5555:5555
networks:
  net1:
To my eye at least, it is much easier to determine the overall architecture of the application from reading the YAML than from reading the shell commands.
Cleanup
If you use docker-compose, then running docker-compose down will stop and clean up everything, remove the network, etc. To do that in your shell script, you'd have to separately write a remove section to stop and remove all the containers and the network.
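For comparison, the manual cleanup for the shell-script version would be something along these lines (using the container and network names from the question):
docker rm -f api_cntr message_service celery_worker1 flower_hud
docker network rm net1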
Multiple inheriting YAML files
In some cases, such as for dev & testing, you might want to have a main YAML file and another that overrides certain values for dev/test work.
For instance, I have an application where I have a docker-compose.yml as well as docker-compose.dev.yml. The first contains all of the production settings for my app. But the "dev" version has a more limited set of things. It uses the same service names, but with a few differences.
Adds a mount of my code directory into the container, overriding the version of the code that was built into the image
Exposes the postgres port externally (so I can connect to it for debugging purposes) - this is not exposed in production
Uses another mount to fake a user database so I can easily have some test users without wiring things up to my real authentication server just for development
Normally the service only uses docker-compose.yml (in production). But when I am doing development work, I run it like this:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
It loads the normal parameters from docker-compose.yml first, then reads docker-compose.dev.yml second and overrides only the parameters found in the dev file; the other parameters are all preserved from the production version. This way I don't need two completely separate YAML files where I might have to change the same parameters in both.
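For illustration, a docker-compose.dev.yml along those lines might look like this (the service and path names here are hypothetical):
# docker-compose.dev.yml (hypothetical): only the keys that differ from production
services:
  app:
    volumes:
      - ./src:/app/src          # mount local code over the version baked into the image
  db:
    ports:
      - "5432:5432"             # expose postgres for local debugging only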
Ease of maintenance
Everything I described in the last few paragraphs can be done using shell scripts. It's just more work to do it that way, and probably more difficult to maintain, and more prone to mistakes.
You could make it easier by having your shell scripts read a config file and such... but at some point you have to ask if you are just reimplementing your own version of docker-compose, and whether that is worthwhile to you.

Development workflow for server and client using Docker Compose?

I'm developing a server and its client simultaneously and I'm designing them in Docker containers. I'm using Docker Compose to link them up and it works just fine for production but I can't figure out how to make it work with a development workflow in which I've got a shell running for each one.
My docker-compose-devel.yml:
server:
  image: node:0.10
client:
  image: node:0.10
  links:
    - server
I can do docker-compose up client or even docker-compose run client but what I want is a shell running for both server and client so I can make rapid changes to both as I develop iteratively.
I want to be able to do docker-compose run server bash in one window and docker-compose run --no-deps client bash in another window. The problem with this is that no address for the server is added to /etc/hosts on the client because I'm using docker-compose run instead of up.
The only solution I can figure out is to use docker run and give up on Docker Compose for development. Is there a better way?
Here's a solution I came up with that's hackish; please let me know if you can do better.
docker-compose-devel.yml:
server:
  image: node:0.10
  command: sleep infinity
client:
  image: node:0.10
  links:
    - server
In window 1:
docker-compose --file docker-compose-devel.yml up -d server
docker exec --interactive --tty $(docker-compose --file docker-compose-devel.yml ps -q server) bash
In window 2:
docker-compose --file docker-compose-devel.yml run client bash
I guess your main problem is about restarting the application when there are changes in the code.
Personally, I launch my applications in development containers using forever.
forever -w -o log/out.log -e log/err.log app.js
The -w option restarts the server when there is a change in the code.
I use a .foreverignore file to exclude changes to some files:
**/.tmp/**
**/views/**
**/assets/**
**/log/**
If needed, I can also launch a shell in a running container:
docker exec -it my-container-name bash
This way, your two applications could restart independently without the need to launch the commands yourself. And you have the possibility to open a shell to do whatever you want.
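For illustration, the server service could run forever directly as its command (a sketch; it assumes forever is available in the image, which the stock node:0.10 image does not include, and that the code is mounted at /src):
server:
  image: node:0.10
  volumes:
    - ./src:/src
  working_dir: /src
  command: forever -w -o log/out.log -e log/err.log app.js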
Edit: New proposition considering that you need two interactive shells and not simply the possibility to relaunch the apps on code changes.
Having two distinct applications, you could have a docker-compose configuration for each one.
The docker-compose.yml from the "server" app could contain this kind of information (I added different kinds of configuration for the example):
server:
  image: node:0.10
  links:
    - db
  ports:
    - "8080:80"
  volumes:
    - ./src:/src
db:
  image: postgres
  environment:
    POSTGRES_USER: dev
    POSTGRES_PASSWORD: dev
The docker-compose.yml from the "client" app could use external_links to be able to connect to the server.
client:
  image: node:0.10
  external_links:
    - project_server_1:server # Use "docker ps" to know the name of the server's container
  ports:
    - "80:80"
  volumes:
    - ./src:/src
Then, use docker-compose run --service-ports service-name bash to launch each configuration with an interactive shell.
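With the two files above, that would be, for example:
docker-compose run --service-ports server bash   # from the server app's directory
docker-compose run --service-ports client bash   # from the client app's directory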
Alternatively, the extra_hosts key may also do the trick by reaching the server app through a port exposed on the host machine.
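For example (the IP address is hypothetical; it would be the address of the host machine where the server's port is published):
client:
  image: node:0.10
  extra_hosts:
    - "server:192.168.1.100"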
With this solution, each docker-compose.yml file could be committed to the repository of the related app.
First thing to mention: for a development environment, you want to use volumes in docker-compose to mount your app into the container when it's started (at runtime). Sorry if you're already doing this, but it's not clear from your docker-compose.yml definition.
To answer your specific question: start your containers normally, then run docker-compose ps and you'll see the names of your containers, for example 'web_server' and 'web_client' (where web is the directory of your docker-compose.yml file, or the name of the project).
When you got name of the container you want to connect to, you can run this command to run bash exactly in the container that's running your server:
docker exec -it web_server bash.
If you want to learn more about setting up a development environment for a reasonably complex app, check out this article on development with docker-compose.

Difference between docker-compose and manual commands

What I'm trying to do
I want to run a yesod web application in one docker container, linked to a postgres database in another docker container.
What I've tried
I have the following file hierarchy:
/
  api/
    Dockerfile
  database/
    Dockerfile
  docker-compose.yml
The docker-compose.yml looks like this:
database:
  build: database
api:
  build: api
  command: .cabal/bin/yesod devel # dev setting
  environment:
    - HOST=0.0.0.0
    - PGHOST=database
    - PGPORT=5432
    - PGUSER=postgres
    - PGPASS
    - PGDATABASE=postgres
  links:
    - database
  volumes:
    - api:/home/haskell/
  ports:
    - "3000:3000"
Running sudo docker-compose up either fails to start the api container at all or, just as often, fails with the following error:
api_1 | Yesod devel server. Press ENTER to quit
api_1 | yesod: <stdin>: hGetLine: end of file
personal_api_1 exited with code 1
If, however, I run sudo docker-compose up database & and then start the api container without using Compose, but instead using
sudo docker run -p 3000:3000 -itv /home/me/projects/personal/api/:/home/haskell --link personal_database_1:database personal_api /bin/bash
I can export the environment variables being set up in the docker-compose.yml file then manually run yesod devel and visit my site successfully on localhost.
Finally, I obtain a third different behaviour if I run sudo docker-compose run api on its own. This seems to start successfully but I can't access the page in my browser. By running sudo docker-compose run api /bin/bash I've been able to explore this container and I can confirm the environment variables being set in docker-compose.yml are all set correctly.
Desired behaviour
I would like to get the result I achieve from running the database in the background then manually setting the environment in the api container's shell simply by running sudo docker-compose up.
Question
Clearly the three different approaches I'm trying do slightly different things. But from my understanding of docker and docker-compose I would expect them to be essentially equivalent. Please could someone explain how and why they differ and, if possible, how I might achieve my desired result?
The error message suggests the API container is expecting input from the command line, which requires a TTY to be present in your container.
In your "manual" start, you tell docker to create a TTY in the container via the -t flag (-itv is shorthand for -i -t -v), so the API container runs successfully.
To achieve the same in docker-compose, you'll have to add a tty key to the API service in your docker-compose.yml and set it to true:
database:
  build: database
api:
  build: api
  tty: true # <--- enable TTY for this service
  command: .cabal/bin/yesod devel # dev setting
