Redis error being thrown by Docker container: "The queue is full"

I have a number of Docker containers running on a VM, all of which use the same Redis cache DB, and each has the same Redis config settings:
-e CACHE_ENABLED=true -e CACHE_KEY_IGNORED_PROPS=meta -e CACHE_TYPE=redis -e CACHE_REDIS_PORT=6379 -e CACHE_REDIS_HOST=xx.xx.xx.x -e CACHE_MAX_AGE=60000 -e RATE_LIMIT_ENABLED=false -e WS_ENABLED=true -e CACHE_REDIS_TIMEOUT=2000 --log-opt max-size=10m --log-opt max-file=3
and I'm getting the following error in the Docker logs on just a couple of them, one after a while (days), another almost immediately after restarting:
"msg":"[Redis] Method getResponse errored: <...> \nError: The queue is full"}
...and am having a bit of a hard time trying to troubleshoot. Does anyone have any advice?
Thanks so much!
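For anyone troubleshooting the same symptom, a hedged first step is to rule out plain connectivity or client saturation on the Redis side. This is only a sketch: the host and port are taken from the config above, and the container name is a placeholder.
# Can the VM reach Redis, and how quickly does it answer?
redis-cli -h xx.xx.xx.x -p 6379 ping
# How many clients are connected, and is anything blocked or rejected?
redis-cli -h xx.xx.xx.x -p 6379 info clients
# Look for a pattern around the error in one of the failing containers
docker logs --tail 200 <failing_container_name>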

Related

Elastic Enterprise Search 7.9.0 with docker

I am trying to run Elastic Enterprise Search 7.9.0 using the Docker image by following the steps here: https://www.elastic.co/guide/en/enterprise-search/current/docker.html
docker run -p 3002:3002 -e elasticsearch.host='http://elastic:changeme@host.docker.internal:9200' -e elasticsearch.username=elastic -e elasticsearch.password=changeme -e allow_es_settings_modification=true -e secret_management.encryption_keys='[xxxxxxx]' docker.elastic.co/enterprise-search/enterprise-search:7.9.0
I get the following warning and the service doesn't start:
Found java executable in PATH
Java version detected: 1.8.0_252 (major version: 8)
Enterprise Search is starting...
[2020-09-01T12:10:12.887+00:00][1][2000][app-server][INFO]: Enterprise Search version=7.9.0, JRuby version=9.2.9.0, Ruby version=2.5.7, Rails version=4.2.11.3
[2020-09-01T12:10:13.251+00:00][1][2000][app-server][INFO]: Successfully connected to Elasticsearch
[2020-09-01T12:10:25.949+00:00][1][2000][app-server][INFO]: [db_lock] [installation] Status: [Starting] Ensuring migrations tracking index exists
[2020-09-01T12:10:26.083+00:00][1][2000][app-server][INFO]: [db_lock] [installation] Status: [Finished] Ensuring migrations tracking index exists
[2020-09-01T12:10:26.981+00:00][1][2000][app-server][ERROR]:
--------------------------------------------------------------------------------
We need to perform 11/32 migrations before the service can be started.
Migrations pending: 20200604175830, 20200610113647, 20200611093100, 20200612155336, 20200617164710, 20200617210501, 20200623134305, 20200624153999, 20200709120000, 20200717204953, 20200723200724
Proceeding with migrations while indices are allowing writes can have unintended consequences.
Please enable read-only mode before proceeding:
https://www.elastic.co/guide/en/enterprise-search/current/read-only-mode.html
I don't know how to resolve this: I can't enable read-only mode because the service is not starting.
Any idea?
I'm not sure if this is the best solution, but here is what worked for me, based on https://www.elastic.co/guide/en/enterprise-search/current/read-only-mode.html:
1. Start the Docker container with --enable-read-only-mode; it will run and then stop, reporting that read-only mode is enabled.
2. Run the Docker container without --enable-read-only-mode until it successfully starts up and runs (this is when the pending migrations are applied). Once it was running successfully, I stopped the container.
3. Start the Docker container with --disable-read-only-mode; it will run and then stop, reporting that read-only mode is disabled.
4. Run the Docker container as you had previously; no issues.
Using your docker command for example:
Step 1 (enable read-only mode):
docker run -p 3002:3002 -e elasticsearch.host='http://elastic:changeme@host.docker.internal:9200' -e elasticsearch.username=elastic -e elasticsearch.password=changeme -e allow_es_settings_modification=true -e secret_management.encryption_keys='[xxxxxxx]' docker.elastic.co/enterprise-search/enterprise-search:7.9.1 --enable-read-only-mode
Step 2 (run normally so the migrations can complete, then stop the container):
docker run -p 3002:3002 -e elasticsearch.host='http://elastic:changeme@host.docker.internal:9200' -e elasticsearch.username=elastic -e elasticsearch.password=changeme -e allow_es_settings_modification=true -e secret_management.encryption_keys='[xxxxxxx]' docker.elastic.co/enterprise-search/enterprise-search:7.9.1
Step 3 (disable read-only mode):
docker run -p 3002:3002 -e elasticsearch.host='http://elastic:changeme@host.docker.internal:9200' -e elasticsearch.username=elastic -e elasticsearch.password=changeme -e allow_es_settings_modification=true -e secret_management.encryption_keys='[xxxxxxx]' docker.elastic.co/enterprise-search/enterprise-search:7.9.1 --disable-read-only-mode
Step 4 (run normally again):
docker run -p 3002:3002 -e elasticsearch.host='http://elastic:changeme@host.docker.internal:9200' -e elasticsearch.username=elastic -e elasticsearch.password=changeme -e allow_es_settings_modification=true -e secret_management.encryption_keys='[xxxxxxx]' docker.elastic.co/enterprise-search/enterprise-search:7.9.1
Back to normal. Good luck!
In response to @Christophvh: using docker-compose, you can enable read-only mode simply by using command, for example:
enterprise-search:
  image: docker.elastic.co/enterprise-search/enterprise-search:${ELK_VERSION}
  command: --enable-read-only-mode
You have to follow the same steps as described by @Brandon, using the command method.
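If you would rather not edit the compose file between steps, a hedged alternative (assuming the service is named enterprise-search as above, and that the image's entrypoint accepts the flag the same way command: does) is to override the command from the CLI and bring the service up normally in between:
# Toggle read-only mode on (the container runs, reports the change, then exits)
docker-compose run --rm enterprise-search --enable-read-only-mode
# Start normally so the pending migrations can run, then stop it once it is healthy
docker-compose up -d enterprise-search
docker-compose stop enterprise-search
# Toggle read-only mode back off, then start normally again
docker-compose run --rm enterprise-search --disable-read-only-mode
docker-compose up -d enterprise-search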

How to get logs out of a crashing Docker Desktop container

This should be a duplicate of this question: Docker look at the log of an exited container. But I cannot get anything in that question to work.
I'm running the container with this command (copied from an Azure WebApp startup script):
docker run -d -p 5785:80 --name web-bt_0_096c876f -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITE_SITE_NAME=web-bt -e WEBSITE_AUTH_ENABLED=False -e PORT=80 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=web-bt.azurewebsites.net -e WEBSITE_INSTANCE_ID=5c991bc5716941ff1fb1eb90137ac2f13e1afffea161b14571d6ea1fb1356b3d -e HTTP_LOGGING_ENABLED=1 bt:latest -e environment='Production' -e ASPNETCORE_ENVIRONMENT='Production'
This crashes for some reason (if I replace 'Production' with 'Development' in the last two parameters, it works). This is what I'm trying to debug.
Now, according to that other thread, I should be able to do docker logs -t web-bt_0_096c876f, but this just returns immediately without printing anything at all.
Why is it empty? It returns nothing even if I use the startup script with -e environment='Development' -e ASPNETCORE_ENVIRONMENT='Development', which actually works: I can browse to the web app, but there are still no logs at all.
So how do I view the container logs/console output?
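As a hedged sketch of the usual diagnostics (the container name is the one from the command above, and nothing here is specific to Azure WebApp):
# Confirm the container record still exists and read its exit code and error, if any
docker ps -a --filter name=web-bt_0_096c876f
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' web-bt_0_096c876f
# Run the same image in the foreground so any startup output and crash message go straight to the terminal
docker run --rm -p 5785:80 -e ASPNETCORE_ENVIRONMENT=Production bt:latest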

Failed to start docker container after reboot

I am hosting my own registry.
After rebooting the server, my registry container is unable to start.
I used this command to start the registry:
docker run -d -p 5000:5000 --name registry -v /var/lib/registry/:/var/lib/registry -v /root/certs:/certs -v /root/auth/:/auth -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key -e "REGISTRY_AUTH=htpasswd" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" --restart always registry:2.7.1
After a reboot I get this message when I try "docker start registry":
Error response from daemon: OCI runtime create failed: container with id exists: dfb0bef21bdfc8a89b59498befd37f83513e75527c0beb552e0400df2a2b7c7d: unknown
Error: failed to start containers: registry
Starting a new container works fine.
How can I fix it, and shouldn't the container start by itself because of "--restart always"?
docker --version
Docker version 18.09.6, build 481bc77
Thx for your help.
Update
Interesting news: I have written an init script to do the job,
but the problem is exactly the same. The container exists but isn't started.
If I try to start it, I get the error message from above.
On the boot screen I get this information.
So the Docker daemon seems to not be ready.
Do you have any suggestions why?
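Purely as a hedged sketch of the usual recovery path for the "container with id exists" symptom (not verified against this exact setup; the registry data lives in the bind-mounted host directories, so removing the container record does not delete it):
# Check whether the Docker daemon was actually ready when the start was attempted
systemctl status docker
# Remove the stale container record, then recreate it with the original docker run command shown above
docker rm registry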

java running inside docker container cannot see environment variables

I am new to Docker. I have a small Java application that I am trying to run inside Docker. I have created a Dockerfile to build the image.
My application is reading Environment Variables to know which database to connect to.
When running the command
docker run -d -p 80:80 occm -e "MYSQL_USER=user" -e "MYSQL_PASSWORD=password" -e "MYSQL_PORT=3306" -e "MYSQL_HOST=somehost"
and then enumerating all the variables using System.getenv, I don't see any of them. So I have added this to the Dockerfile:
ENV MYSQL_HOST=localhost
Now when I run the container, I see this variable, but with the localhost value and not somehost.
What am I doing wrong?
The problem is how you are running your docker image.
$ docker run --help
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
So you are passing -e "..." -e "..." as the COMMAND and its arguments, because they appear after the image name.
You need to pass the -e flags as [OPTIONS], i.e. before the image name:
$ docker run -d -p 80:80 -e "MYSQL_USER=user" -e "MYSQL_PASSWORD=password" -e "MYSQL_PORT=3306" -e "MYSQL_HOST=somehost" occm
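As a quick check from the host side once the -e flags are placed before the image name, you can verify that the variables actually reached the container; the container name below is a placeholder:
docker exec <container_name> env | grep MYSQL
docker inspect --format '{{.Config.Env}}' <container_name>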

ElasticSearch on docker - 2nd instance kills the first instance

I'm trying to run multiple versions of ElasticSearch at the same time, which should be easy. Here are my commands:
docker run -d --rm -p 9250:9200 -p 9350:9300 --name es_5_3_3_integration -e "xpack.security.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:5.3.3
docker run -d --rm -p 9251:9200 -p 9351:9300 --name es_5_4_3_integration -e "xpack.security.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:5.4.3
The first container starts up great. The second container starts, but at the cost of killing the first. If I run it without -d, I don't get any output explaining why the first container stopped.
By default, ES on Docker tries to take 2 GB of memory, so two containers were trying to take 4 GB, which my machine didn't have.
The solution: limit the amount of memory each ES instance tries to take to 200 MB using the following switch: -e ES_JAVA_OPTS="-Xms200m -Xmx200m"
Full, working commands for 4 concurrent dockers:
docker run -d --rm -p 9250:9200 -p 9350:9300 --name es_5_3_3_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.3.3
docker run -d --rm -p 9251:9200 -p 9351:9300 --name es_5_4_3_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.4.3
docker run -d --rm -p 9252:9200 -p 9352:9300 --name es_5_5_3_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.5.3
docker run -d --rm -p 9253:9200 -p 9353:9300 --name es_5_6_4_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.6.4
Thank you to @Val, who really answered this question in the comments.
If this is an out-of-memory problem, you can check whether your container was OOM-killed.
First, check whether the exit code of the container is 137 (128 + 9, meaning the container received a SIGKILL).
You can test it with docker ps -a or
docker inspect --format='{{.State.ExitCode}}' $INSTANCE_ID
Then you can check the state of the container with:
docker inspect --format='{{.State.OOMKilled}}' $INSTANCE_ID
If it returns true, it was an OOM problem.
Further details at https://docs.docker.com/engine/reference/run/#user-memory-constraints.
Extract:
By default, kernel kills processes in a container if an out-of-memory (OOM) error occurs. To change this behaviour, use the --oom-kill-disable option. Only disable the OOM killer on containers where you have also set the -m/--memory option. If the -m flag is not set, this can result in the host running out of memory and require killing the host's system processes to free memory.
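Building on that extract, a complementary option (an assumption on my side, not something the answer above did) is to give each container an explicit memory cap alongside the JVM heap settings, so a hungry instance gets OOM-killed on its own instead of taking a neighbour down:
# Cap the container at 512 MB of RAM (and the same total with swap), in addition to the 200 MB heap
docker run -d --rm -m 512m --memory-swap 512m -p 9250:9200 -p 9350:9300 --name es_5_3_3_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.3.3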
