docker-compose redis won't start: docker-entrypoint.sh: Permission denied

I'm trying to get started with docker-compose: https://docs.docker.com/compose/gettingstarted/ (I have exactly the files described there).
I'm doing this on a Debian (stretch) server.
It all goes well until I run docker-compose up:
the web service comes up fine, but the redis service can't start docker-entrypoint.sh and exits.
root@12456:~/composeTest# docker-compose up
Creating network "composetest_default" with the default driver
Creating composetest_redis_1 ... done
Creating composetest_web_1 ... done
Attaching to composetest_redis_1, composetest_web_1
redis_1 | su-exec: /usr/local/bin/docker-entrypoint.sh: Permission denied
composetest_redis_1 exited with code 1
...
web_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
I feel like I'm the only one with this issue, so maybe it's something so dumb everybody figured it out right away!
Thanks.

Related

How does one connect two services in the local docker-compose network?

I have followed the instructions, I think, and have come up with the following configuration:
version: '3.9'
services:
  flask:
    image: ops:imgA
    ports:
      - 5000:5000
    volumes:
      - /opt/models:/opt/models
    entrypoint: demo flask
  streamlit:
    image: ops:imgB
    ports:
      - 8501:8501
    entrypoint: streamlit run --server.port 8501 demo -- stream --flask-hostname flask
The --flask-hostname flask sets the host name used in an HTTP connection, i.e. http://flask:5000. I can set it to anything.
The basic problem here is that I can spin up one of these images, install tmux, and run everything within a single container.
But when I split it across multiple containers and use docker-compose up (which seems better than tmux), the containers can't seem to connect to each other.
I have rattled around the documentation on docker's website, but I've moved on to the troubleshooting stage. This seems to be something that should "just work" (since there are few questions along these lines). I have total control of the box I am using, and can open or close whatever ports needed.
Mainly, I am trying to figure out how to allow, with 100% default settings nothing complicated, these two services (flask and streamlit) to speak to each other.
There must be 1 or 2 settings that I need to change, and that is it.
Any ideas?
Update
I can access all of the services externally, so I am going to open up external connections between the services (using the external IP) as a "just work" quick fix, but obviously getting the composition to work internally would be the best option.
I have also confirmed that the docker-compose and docker versions are up to date.
Update-2: changed the Flask bind address from 127.0.0.1 to 0.0.0.0
Flask output:
flask_1 | * Serving Flask app "flask" (lazy loading)
flask_1 | * Environment: production
flask_1 | WARNING: This is a development server. Do not use it in a production deployment.
flask_1 | Use a production WSGI server instead.
flask_1 | * Debug mode: on
flask_1 | INFO:werkzeug: * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
flask_1 | 2020-12-19 02:22:16.449 INFO werkzeug: * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
flask_1 | INFO:werkzeug: * Restarting with inotify reloader
flask_1 | 2020-12-19 02:22:16.465 INFO werkzeug: * Restarting with inotify reloader
flask_1 | WARNING:werkzeug: * Debugger is active!
flask_1 | 2020-12-19 02:22:22.003 WARNING werkzeug: * Debugger is active!
Streamlit:
streamlit_1 |
streamlit_1 | You can now view your Streamlit app in your browser.
streamlit_1 |
streamlit_1 | Network URL: http://172.18.0.3:8501
streamlit_1 | External URL: http://71.199.156.142:8501
streamlit_1 |
streamlit_1 | 2020-12-19 02:22:11.389 Generating new fontManager, this may take some time...
And the streamlit error message:
ConnectionError:
HTTPConnectionPool(host='flask', port=5000):
Max retries exceeded with url: /foo/bar
(Caused by NewConnectionError(
'<urllib3.connection.HTTPConnection object at 0x7fb860501d90>:
Failed to establish a new connection:
[Errno 111] Connection refused'
)
)
Update-3: Hitting refresh fixed it.
The server process must be listening on the special "all interfaces" address 0.0.0.0. Many development-type servers by default listen on "localhost only" 127.0.0.1, but in Docker each container has its own private notion of localhost. If you use tmux or docker exec to run multiple processes inside a container, they have the same localhost and can connect to each other, but if the client and server are running in different containers, the request doesn't arrive on the server's localhost interface, and if the server is listening on "localhost only" it won't receive it.
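To make the bind-address distinction concrete, here's a minimal standard-library sketch (not from the question) showing the choice the server makes:

```python
import socket

def make_listener(host, port=0):
    """Bind a TCP listener on the given interface.

    host="0.0.0.0" listens on every interface, so other containers can
    connect; host="127.0.0.1" is loopback-only, so only processes inside
    the same container can reach it. port=0 lets the OS pick a free port.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((host, port))
    s.listen(1)
    return s
```

A Flask development server makes the same choice explicitly with app.run(host="0.0.0.0", port=5000).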
Your setup is otherwise correct, with only the docker-compose.yml you include in the question. Some other common problems:
You must connect to the port the server process is listening on inside the container. If you remap it externally with ports:, that remapping is ignored for container-to-container traffic, and you'd connect to the second number in the ports: mapping (the container-side port). Correspondingly, ports: aren't required for inter-container communication. (expose: also isn't required and doesn't do anything at all.)
The client may need to wait for the server to start up. If the client depends_on: [flask] the host name will usually resolve (unless the server dies immediately) but if it takes a while to start up you will still get "connection refused" errors. See Docker Compose wait for container X before starting Y.
Neither container may use network_mode: host. This disables Docker's networking features entirely.
If you manually declare networks:, both containers need to be on the same network. You do not need to explicitly create a network for inter-container communication to work: Compose provides a default network for you, which is used if nothing else is declared.
Use the Compose service names as host names. You don't need to explicitly specify container_name: or links:.
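For the startup-ordering point above, a small client-side retry loop is usually enough; this is a sketch, with the URL and timings as placeholders rather than anything from the question:

```python
import time
import urllib.error
import urllib.request

def wait_for_service(url, attempts=30, delay=1.0):
    """Poll an HTTP endpoint until it answers.

    depends_on: only orders container startup; it does not wait for the
    server inside the container to start listening, so early requests can
    still be refused. Retrying rides out that window.
    """
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.status
        except (urllib.error.URLError, OSError):
            time.sleep(delay)
    raise RuntimeError(f"gave up waiting for {url}")

# e.g. wait_for_service("http://flask:5000/") from the streamlit container
```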

How to fix a malfunctioning docker container?

I currently have a Docker container that runs NGINX for me. While trying to learn how to set up a proxy pass example I created a setting that crashes this container and I can no longer start the container.
Creating a new NGINX container is not a big deal, but I would like to use this example for a learning experience.
Is it possible to start up this stopped container with a different entrypoint rather than having it start NGINX?
I've read that I have to commit the broken container into an image and then start up a new container from that image, which I have been able to do, but this seems rather cumbersome.
If the above is the only method, then I might as well just create a new container.
I encountered a similar problem, which can be fixed in the following ways:
Method 1: use docker cp to copy file contents out of the damaged container into the current environment (this works even if the container cannot be started);
Method 2: use docker commit to resubmit the damaged container as another image, then start it with an additional entrypoint;
Note: the above methods are just tricks to use during debugging or development; eventually the related operations should be written into the Dockerfile or docker-compose.yml configuration.
Fix Progress:
Because I temporarily modified the ./php-fpm.d/www.conf configuration in my PHP-FPM
container, the container could not be started:
$ docker-compose ps
Name             Command                         State        Ports
----------------------------------------------------------------------------------------
phpfpm_fpm_1     docker-php-entrypoint php-fpm   Restarting
phpfpm_nginx_1   nginx -g daemon off;            Up           80/tcp, 0.0.0.0:86->86/tcp
Check the related error information with docker-compose logs -f:
fpm_1 | [03-Dec-2019 03:57:50] ERROR: Unable to create or open slowlog(/usr/local/log/www.log.slow): No such file or directory (2)
fpm_1 | [03-Dec-2019 03:57:50] ERROR: Unable to create or open slowlog(/usr/local/log/www.log.slow): No such file or directory (2)
fpm_1 | [03-Dec-2019 03:57:50] ERROR: failed to post process the configuration
fpm_1 | [03-Dec-2019 03:57:50] ERROR: failed to post process the configuration
fpm_1 | [03-Dec-2019 03:57:50] ERROR: FPM initialization failed
fpm_1 | [03-Dec-2019 03:57:50] ERROR: FPM initialization failed
fpm_1 | [03-Dec-2019 03:58:51] ERROR: Unable to create or open slowlog(/usr/local/log/www.log.slow): No such file or directory (2)
fpm_1 | [03-Dec-2019 03:58:51] ERROR: Unable to create or open slowlog(/usr/local/log/www.log.slow): No such file or directory (2)
fpm_1 | [03-Dec-2019 03:58:51] ERROR: failed to post process the configuration
fpm_1 | [03-Dec-2019 03:58:51] ERROR: failed to post process the configuration
fpm_1 | [03-Dec-2019 03:58:51] ERROR: FPM initialization failed
fpm_1 | [03-Dec-2019 03:58:51] ERROR: FPM initialization failed
Check what was manually changed inside the container with docker diff <container-id>; first find the container:
$ docker ps -a|grep php
5dfe26f00059 tkstorm/phpngx "nginx -g 'daemon of…" 2 weeks ago Up 41 hours 80/tcp, 0.0.0.0:86->86/tcp phpfpm_nginx_1
6f8a2044ba36 tkstorm/phpfpm "docker-php-entrypoi…" 2 weeks ago Restarting (78) 7 seconds ago phpfpm_fpm_1
Copy the broken ./php-fpm.d/www.conf configuration that damaged the container out to a local directory and fix it:
$ docker cp phpfpm_fpm_1:/usr/local/etc/php-fpm.d/www.conf fix-www.conf
$ vi fix-www.conf
...
slowlog = /var/log/$pool.log.slow
...
Copy the repaired configuration to the damaged container again:
# after fixing it up
$ docker cp fix-www.conf phpfpm_fpm_1:/usr/local/etc/php-fpm.d/www.conf
Restart the container:
$ docker restart phpfpm_fpm_1
# it's fixed now
$ docker-compose ps
Name             Command                         State   Ports
-----------------------------------------------------------------------------------
phpfpm_fpm_1     docker-php-entrypoint php-fpm   Up      9000/tcp
phpfpm_nginx_1   nginx -g daemon off;            Up      80/tcp, 0.0.0.0:86->86/tcp
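Method 2 from the list above (commit the broken container, then reopen it with a different entrypoint) can be sketched as a small helper; the image tag and function name here are examples, not from the original setup:

```shell
# Snapshot a broken container as an image, then open an interactive shell
# in a throwaway copy of it so you can inspect or repair files.
debug_broken_container() {
  broken="$1"
  img="debug/${broken}:snapshot"
  docker commit "$broken" "$img"
  docker run --rm -it --entrypoint /bin/sh "$img"
}
# usage: debug_broken_container phpfpm_fpm_1
```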

hyperledger-fabric byfn.sh -m failed with script/scripts.sh not found

I am running the byfn.sh script within docker container on windows 10.
Docker version 18.03.0-ce, build 0520e24302
I am getting the script.sh not found error message, please help.
$ ./byfn.sh -m up
Starting with channel 'mychannel' and CLI timeout of '10' seconds and CLI
delay of '3' seconds
Continue? [Y/n] y
proceeding ...
2018-04-28 20:28:24.254 UTC [main] main -> INFO 001 Exiting.....
LOCAL_VERSION=1.1.0
DOCKER_IMAGE_VERSION=1.1.0
Starting peer1.org2.example.com ... done
Starting peer0.org2.example.com ... done
Starting peer1.org1.example.com ... done
Starting peer0.org1.example.com ... done
Starting orderer.example.com ... done
cli is up-to-date
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"scripts/script.sh\": stat scripts/script.sh: no such file or directory": unknown
ERROR !!!! Test failed
Resolved the issue by copying the entire fabric-samples directory to the c:\users\ directory.
Having the fabric-samples directory anywhere else on the c:\ drive gives the error. Perhaps an explicit path needs to be defined somewhere if placing fabric-samples in any location other than c:\users\
I figured out that the volumes for the docker container under Windows are not mounted correctly (not at all). But I don't know how to fix it... I'll report back if I have more information on this issue or even a solution.

docker - driver "devicemapper" failed to remove root filesystem after process in container killed

I am using Docker version 17.06.0-ce on Redhat with devicemapper storage. I am launching a container running a long-running service. The master process inside the container sometimes dies for whatever reason. I get the following error message.
/bin/bash: line 1: 40 Killed python -u scripts/server.py start go
I would like the container to exit and be restarted by Docker. However, the container never exits. If I try to remove it manually I get the following error:
Error response from daemon: driver "devicemapper" failed to remove root filesystem.
After googling, I tried a bunch of things:
docker rm -f <container>
rm -f <path to mount>
umount <path to mount>
All of these fail with "device is busy". The only remedy right now is to reboot the host system, which is obviously not a long-term solution.
Any ideas?
I had the same problem and the solution was a real surprise.
So here is the error on docker rm:
$ docker rm 08d51aad0e74
Error response from daemon: driver "devicemapper" failed to remove root filesystem for 08d51aad0e74060f54bba36268386fe991eff74570e7ee29b7c4d74047d809aa: remove /var/lib/docker/devicemapper/mnt/670cdbd30a3627ae4801044d32a423284b540c5057002dd010186c69b6cc7eea: device or resource busy
Then I did the following (basically go through all processes and look for docker in mountinfo):
$ grep docker /proc/*/mountinfo | grep 958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac
/proc/20416/mountinfo:629 574 253:15 / /var/lib/docker/devicemapper/mnt/958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac rw,relatime shared:288 - xfs /dev/mapper/docker-253:5-786536-958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac rw,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota
This got me the PID of the offending process keeping it busy - 20416 (the item after /proc/).
So I did a ps -p and to my surprise found:
[devops#dp01app5030 SeGrid]$ ps -p 20416
PID TTY TIME CMD
20416 ? 00:00:19 ntpd
A true WTF moment. So I paired the problem with some googling and found this: https://github.com/docker/for-linux/issues/124
Turns out I had to restart the ntp daemon, and that fixed the issue!!!
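The /proc hunt above generalizes to a small helper: given the mount id from the error message, list the PIDs still holding it. The function name is made up, and the optional second argument exists only so the logic can be exercised outside a real /proc:

```shell
# Print the PIDs whose mount table still references the given pattern.
pids_holding_mount() {
  pattern="$1"
  proc_root="${2:-/proc}"
  grep -l "$pattern" "$proc_root"/*/mountinfo 2>/dev/null \
    | awk -F/ '{print $(NF-1)}'
}
# usage: pids_holding_mount 958722d105f8   # then inspect each PID with ps -p
```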

Foreman Cannot Start Nginx, But I Can Start it Manually. Why?

I am currently running Foreman on staging (Ubuntu) and once I get it working will switch to using upstart.
My Procfile.staging looks like this:
nginx: sudo service nginx start
unicorn: bundle exec unicorn -c ./config/unicorn.rb
redis: bundle exec redis-server
sidekiq: bundle exec sidekiq -v -C ./config/sidekiq.yml
I can successfully start nginx using:
$ sudo service nginx start
However when I run $ foreman start, whilst the other three processes start successfully, nginx does not:
11:15:46 nginx.1 | started with pid 15966
11:15:46 unicorn.1 | started with pid 15968
11:15:46 redis.1 | started with pid 15971
11:15:46 sidekiq.1 | started with pid 15974
11:15:46 nginx.1 | Starting nginx: nginx.
11:15:46 nginx.1 | exited with code 0
11:15:46 system | sending SIGTERM to all processes
SIGTERM received
11:15:46 unicorn.1 | terminated by SIGTERM
11:15:46 redis.1 | terminated by SIGTERM
11:15:46 sidekiq.1 | terminated by SIGTERM
So why isn't nginx starting when started by Foreman?
There is a problem in your Procfile.
The nginx command can't use sudo inside foreman: it will always ask for a password and then fail. That's why nginx is not starting and its logs are empty.
If you really need to use sudo inside a procfile you could use something like this:
sudo_app: echo "sudo_password" | sudo -S app_command
nginx: echo "sudo_password" | sudo -S service nginx start
which I really don't recommend. Another option is to run sudo foreman start.
For more information check out this issue on github, it is precisely what you want to solve.
Keep me posted if it works for you.
You should be able to add passwordless sudo access for your local user to allow managing this service. This can be a big security hole, but if you whitelist which commands can be run, you dramatically reduce the risk. I recommend adding a no-password sudoers entry for the service command and anything else you want to script:
/etc/sudoers:
your_user_name ALL = (ALL) NOPASSWD: /usr/sbin/service
Another option, if you're not comfortable with that, is to run nginx directly rather than through the service manager. Note the -g 'daemon off;': without it nginx daemonizes, Foreman sees the process exit, and it tears all the other processes down, just as in your log:
nginx: /usr/sbin/nginx -c /path/to/nginx.conf -g 'daemon off;'
