Setting up Rails console on Docker container not taking any input - ruby-on-rails

I was trying to set up a Rails console in my dockerized container. The entire application has multiple components, and I have set up the orchestration using docker-compose. Here is the relevant service from my docker-compose.yml file:
app:
  image: <image_name>
  # mount the current directory (on the host) to /usr/src/app on the container, any changes in either would be reflected in both the host and the container
  tty: true
  volumes:
    - .:/usr/src/app
  # expose application on localhost:36081
  ports:
    - "36081:36081"
  # application restarts if it stops for any reason - required for the container to restart when the application fails to start due to the database containers not being ready
  restart: always
  depends_on:
    - other-db
  # the environment variables are used in docker/config/env_config.rb to connect to different database containers
  container_name: application
  environment:
    - CONSOLE=$CONSOLE
My Dockerfile has the following command:
ENTRYPOINT /usr/src/app/docker-entrypoint.sh
And in docker-entrypoint.sh:
#!/bin/bash
echo "waiting for all db connections to be healthy... Sleeping..."
sleep 1m
mkdir -p /usr/src/app/tmp/pids/
mkdir -p /usr/src/app/tmp/sockets/
if [ "$CONSOLE" = "Y" ];
then
echo "Starting Padrino console"
bundle exec padrino console
fi
When I run
export CONSOLE=Y
docker-compose -f docker-compose.yml up -d && docker attach application
The console starts up and I see >> but I cannot type in it. Where am I going wrong?

Try starting your container with -i mode.
-i, --interactive Attach container's STDIN
Something like:
docker-compose -f docker-compose.yml up -i && docker attach application
You can also mix -d and -i as needed.

With help from this post, I figured out that I was missing stdin_open: true in the docker-compose.yml. Adding it worked like a breeze.
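For reference, this is roughly what the fixed service ends up looking like, with stdin_open added alongside the existing tty setting (a minimal sketch showing only the relevant keys):
app:
  image: <image_name>
  tty: true          # allocate a pseudo-TTY (equivalent of docker run -t)
  stdin_open: true   # keep STDIN open (equivalent of docker run -i), so docker attach accepts input
With both set, docker-compose -f docker-compose.yml up -d && docker attach application should give a console that accepts input.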

Related

How to find volume files from host while inside docker container?

In a docker-compose.yml file I have defined the following service:
php:
  container_name: php
  build:
    context: ./container/php
    dockerfile: Dockerfile
  networks:
    - saasnet
  volumes:
    - ./services:/var/www/html
    - ./logs/php:/usr/local/etc/php-fpm.d/zz-log.conf
  environment:
    - "DB_PORT=3306"
    - "DB_HOST=database"
It all builds fine, and another service (nginx) using the same volume mapping, ./services:/var/www/html, finds php as expected, so it all works in the browser. So far, so good.
But now I want to go into the container because I want to run composer install from a certain directory inside the container. So I go into the container using:
docker run -it php bash
And I find myself in the container at /var/www/html, where I expect to be able to navigate as if I were on my host machine in ./services directory, but ls at this point inside the container shows no files at all.
What am I missing or not understanding about how this works?
Your problem is that you are not specifying the volume on your run command - docker run is not aware of your docker-compose.yml. If you want to run it with all the options specified there, you need to either use docker-compose run, or pass all options to docker run:
docker-compose run php bash
docker run -it -e DB_PORT=3306 -e DB_HOST=database -v ./services:/var/www/html -v ./logs/php:/usr/local/etc/php-fpm.d/zz-log.conf php bash
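If the container from the compose file is already running (for example after docker-compose up -d), another option is to open a shell in that existing container instead of starting a new one (assuming the service and container are both named php, as in the compose file above):
docker-compose exec php bash
docker exec -it php bash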

image runs properly using docker run -dit, but exits using docker stack deploy -c

I've been porting a web service to Docker recently. As mentioned in the title, I'm encountering a weird scenario wherein when I run it using docker run -dit, the service runs in the background, but when I use a docker-compose.yml, the service exits.
To be clearer, I have this entrypoint in my Dockerfile:
ENTRYPOINT ["/data/start-service.sh"]
this is the code of start-service.sh:
#!/bin/bash
/usr/local/bin/uwsgi --emperor=/data/vassals/ --daemonize=/var/log/uwsgi/emperor.log
/etc/init.d/nginx start
exec "$#";
As you can see, I'm just starting uwsgi and nginx in this shell script. The last line (exec) is just to make the script accept a parameter and keep running. Then I run this using:
docker run -dit -p 8080:8080 --name=web_server webserver /bin/bash
As mentioned, the service runs OK and I can access the webservice.
Now, I tried to deploy this using a docker-compose.yml, but the service keeps on exiting/shutting down. I attempted to retrieve the logs, but I have no success. All I can see from doing docker ps -a is it runs for a second or 2 (or 3), and then exits.
Here's my docker-compose.yml:
version: "3"
services:
web_server:
image: webserver
entrypoint:
- /data/start-service.sh
- /bin/bash
ports:
- "8089:8080"
deploy:
resources:
limits:
cpus: "0.1"
memory: 2048M
restart_policy:
condition: on-failure
networks:
- webnet
networks:
- webnet
The entrypoint entry in the yml file is just to make sure that the start-service.sh script will be run with /bin/bash as its parameter, to keep the service running. But again, the service shuts down.
bash will exit without a proper tty. Since you execute bash via exec, it becomes PID 1. Whenever PID 1 exits, the container is stopped.
To prevent this add tty: true to the service's description in your compose file. This is basically the same thing as you do with -t with the docker run command.
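Applied to the compose file above, the relevant part of the service definition would look roughly like this (a sketch showing only the affected keys):
services:
  web_server:
    image: webserver
    tty: true          # allocate a pseudo-TTY so the bash started by the entrypoint does not exit
    entrypoint:
      - /data/start-service.sh
      - /bin/bash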

Why does docker compose exit right after starting?

I'm trying to configure docker-compose to use GreenPlum db in Ubuntu 16.04. Here is my docker-compose.yml:
version: '2'
services:
  greenplum:
    image: "pivotaldata/gpdb-base"
    ports:
      - "5432:5432"
    volumes:
      - gp_data:/tmp/gp
volumes:
  gp_data:
The issue is that when I run it with sudo docker-compose up, the GreenPlum db shuts down immediately after starting. It looks like this:
greenplum_1 | 20170602:09:01:01:000050 gpstart:e1ae49da386c:gpadmin-[INFO]:-Starting Master instance 72ba20be3774 directory /gpdata/master/gpseg-1
greenplum_1 | 20170602:09:01:02:000050 gpstart:e1ae49da386c:gpadmin-[INFO]:-Command pg_ctl reports Master 72ba20be3774 instance active
greenplum_1 | 20170602:09:01:02:000050 gpstart:e1ae49da386c:gpadmin-[INFO]:-No standby master configured. skipping...
greenplum_1 | 20170602:09:01:02:000050 gpstart:e1ae49da386c:gpadmin-[INFO]:-Database successfully started
greenplum_1 | ALTER ROLE
dockergreenplumn_greenplum_1 exited with code 0 <<----- Here
Actually, when I start it with just sudo docker run pivotaldata/gpdb-base it's ok.
What's wrong with the docker compose?
First of all, be cautious running this image: the image looks to be badly maintained, and the information on Docker Hub indicates it's neither "official" nor "supported" in any way;
2017-01-09: Toolsmiths reviewed this image; it is not one we create. We make no promises about whether this is up to date or if it works. Feel free to email pa-toolsmiths@pivotal.io if you are the owner and are interested in collaborating with us on this image.
When using images from Docker Hub, it's recommended to either use official images, or, when not available, prefer automated builds (in which case the source code of the image can be verified to see what's used to build the image).
I think the image is built from this GitHub repository, which means it has not been updated for over a year, and uses an outdated (CentOS 6.7) base image that has a huge number of critical vulnerabilities.
Back to your question;
I tried starting the image, both with docker-compose and docker run, and both resulted in the same for me.
Looking at that image, it is designed to be run interactively, or to be used as a base image (and overriding the command).
I inspected the image to find out what the container's command is;
docker inspect --format='{{json .Config.Cmd}}' pivotaldata/gpdb-base
["/bin/sh","-c","echo \"127.0.0.1 $(cat ~/orig_hostname)\" >> /etc/hosts && service sshd start && su gpadmin -l -c \"/usr/local/bin/run.sh\" && /bin/bash"]
So, this is what's executed when the container is started;
echo "127.0.0.1 $(cat ~/orig_hostname)" >> /etc/hosts \
&& service sshd start \
&& su gpadmin -l -c "/usr/local/bin/run.sh" \
&& /bin/bash
Based on the above, there is no "foreground" process in the container, so the moment /usr/local/bin/run.sh finishes, a bash shell is started. A bash shell without a tty attached exits immediately, at which point the container exits.
To run this image
(Again; be cautious running this image)
Either run the image interactively, by passing it stdin and a tty (-i -t, or -it as a shorthand);
docker run -it pivotaldata/gpdb-base
Or you can run it "detached", as long as a tty is passed as well (add the -d and -t flags, or -dt as a shorthand); doing so keeps the container running in the background;
docker run -dit pivotaldata/gpdb-base
To do the same in docker-compose, add a tty to your service;
tty: true
Your compose file will then look like this;
version: '2'
services:
  greenplum:
    image: "pivotaldata/gpdb-base"
    ports:
      - "5432:5432"
    tty: true
    volumes:
      - gp_data:/tmp/gp
volumes:
  gp_data:
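With tty: true in place, the container should keep running when started detached; a quick way to check (assuming the compose file above) is:
sudo docker-compose up -d
sudo docker-compose ps    # the greenplum service should show State "Up"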

Strange way to launch a background apache/mysql docker container

I downloaded a Debian image for Docker and created a container from it.
I have successfully installed apache and mysql on this container (from /bin/bash).
I want to make this docker container run in the background.
I have tried a lot of tutorials (I have created images with a Dockerfile) but nothing really works. Apache and mysql were run as root...
So I launched this command:
docker run -d -p 80:80 myimagefile /bin/bash -c "while true; do sleep 10; done"
Then I attached a /bin/bash with the exec command and started mysql and apache2 manually (/etc/init.d/ scripts). When I type CTRL-D, the bash is killed but the container stays in the background, with mysql and apache alive!
I am wondering whether this method is correct or whether it is something ugly? Is there a better way to do this?
I do not want to write a Dockerfile that describes how to install apache and mysql. I have made my own image, with my application and all prerequisites.
I just want to start a container from my image and start apache and mysql automatically.
I have a second question: with my method, the container is not reloaded if I reboot the physical computer. How can I start it automatically, with persistence of data?
Thanks
I would suggest running mysql and apache in separate containers. Additionally, Docker Hub already has container images that you could re-use:
https://hub.docker.com/_/mysql/
The following is an example of a docker-compose file that describes how to launch Drupal:
version: '2'
services:
  db:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=letmein
      - MYSQL_DATABASE=drupal
      - MYSQL_USER=drupal
      - MYSQL_PASSWORD=drupal
    volumes:
      - /var/lib/mysql
  web:
    image: drupal
    depends_on:
      - db
    ports:
      - "8080:80"
    volumes:
      - /var/www/html/sites
      - /var/www/private
Run as follows
$ docker-compose up -d
Creating dockercompose_db_1
Creating dockercompose_web_1
Which exposes Drupal on port 8080
$ docker-compose ps
Name                  Command                       State   Ports
---------------------------------------------------------------------------------
dockercompose_db_1    docker-entrypoint.sh mysqld   Up      3306/tcp
dockercompose_web_1   apache2-foreground            Up      0.0.0.0:8080->80/tcp
Note:
When running the drupal installer, configure it to connect to a host called "db", which is the mysql container.
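Regarding the second question (coming back up automatically after a reboot, with the data kept), a common approach is a restart policy plus a named volume; here is a minimal sketch on top of the example above (the db_data volume name is illustrative, and this assumes the Docker daemon itself is configured to start on boot):
services:
  db:
    image: mysql
    restart: unless-stopped       # start the container again when the Docker daemon/host restarts
    volumes:
      - db_data:/var/lib/mysql    # named volume so the database files survive container recreation
volumes:
  db_data: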

Difference between docker-compose and manual commands

What I'm trying to do
I want to run a yesod web application in one docker container, linked to a postgres database in another docker container.
What I've tried
I have the following file hierarchy:
/
  api/
    Dockerfile
  database/
    Dockerfile
  docker-compose.yml
The docker-compose.yml looks like this:
database:
  build: database
api:
  build: api
  command: .cabal/bin/yesod devel # dev setting
  environment:
    - HOST=0.0.0.0
    - PGHOST=database
    - PGPORT=5432
    - PGUSER=postgres
    - PGPASS
    - PGDATABASE=postgres
  links:
    - database
  volumes:
    - api:/home/haskell/
  ports:
    - "3000:3000"
Running sudo docker-compose up either fails to start the api container at all or, just as often, fails with the following error:
api_1 | Yesod devel server. Press ENTER to quit
api_1 | yesod: <stdin>: hGetLine: end of file
personal_api_1 exited with code 1
If, however, I run sudo docker-compose database up & then start up the api container without using compose but instead using
sudo docker run -p 3000:3000 -itv /home/me/projects/personal/api/:/home/haskell --link personal_database_1:database personal_api /bin/bash
I can export the environment variables being set up in the docker-compose.yml file then manually run yesod devel and visit my site successfully on localhost.
Finally, I obtain a third different behaviour if I run sudo docker-compose run api on its own. This seems to start successfully but I can't access the page in my browser. By running sudo docker-compose run api /bin/bash I've been able to explore this container and I can confirm the environment variables being set in docker-compose.yml are all set correctly.
Desired behaviour
I would like to get the result I achieve from running the database in the background and then manually setting up the environment in the api container's shell, simply by running sudo docker-compose up.
Question
Clearly the three different approaches I'm trying do slightly different things. But from my understanding of docker and docker-compose I would expect them to be essentially equivalent. Please could someone explain how and why they differ and, if possible, how I might achieve my desired result?
The error message suggests the API container is expecting input from the command line, which requires a TTY to be present in your container.
In your "manual" start, you tell docker to create a TTY in the container via the -t flag (-itv is shorthand for -i -t -v), so the API container runs successfully.
To achieve the same in docker-compose, you'll have to add a tty key to the API service in your docker-compose.yml and set it to true;
database:
  build: database
api:
  build: api
  tty: true # <--- enable TTY for this service
  command: .cabal/bin/yesod devel # dev setting
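For completeness, the flags in the manual docker run command map onto compose keys roughly as follows: -i corresponds to stdin_open: true, -t to tty: true, -v to volumes, --link to links, and -p to ports. So once tty: true is in place, the plain compose command should reproduce the manual behaviour:
sudo docker-compose up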
