Push Rails 6 application to Docker Hub - ruby-on-rails

I'm trying to get my head around Docker. I got it working with my Rails 6 application; it builds and runs successfully. Now I want to push the application to my Docker Hub repository.
I'm not quite sure how to do this, because I have 3 containers, but in every tutorial I've read people only push one.
That's the output of docker ps -a:
That's my docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - postgres:/var/lib/postgresql/data
    env_file:
      - .env
    ports:
      - "5432"
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/webmenue
      - bundler_gems:/bundle
      - ./docker/database.yml:/webmenue/config/database.yml
      - cache:/cache
    ports:
      - "3000:3000"
    env_file:
      - .env
    environment:
      - RAILS_ENV=development
      - WEBPACKER_DEV_SERVER_HOST=webpacker
      - SPROCKETS_CACHE=/cache
    depends_on:
      - db
  webpacker:
    build: .
    command: ./bin/webpack-dev-server
    volumes:
      - .:/webmenue
    ports:
      - '3035:3035'
    environment:
      - NODE_ENV=development
      - RAILS_ENV=development
      - WEBPACKER_DEV_SERVER_HOST=0.0.0.0
volumes:
  postgres:
  bundler_gems:
  cache:
I read about the --link flag, but this seems to be deprecated.
So: Do I need to push all containers or is there a way to get them into one container?

The short answer is as simple as: if you have multiple images, you need to push multiple times ;)
Now the helpful answer: you have 2 images, postgres and the one built from your current working directory. postgres is an image you already download from Docker Hub, so there is no need to push it.
As for your other 2 services, they currently share one Dockerfile, so only one push is needed.
In your compose file they both use build: ., therefore they share the same Docker image. Let's say you pushed it with docker push max-kirsch/my-cool-app:1.0.0; you would then change the docker-compose file to look like this:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - postgres:/var/lib/postgresql/data
    env_file:
      - .env
    ports:
      - "5432"
  web:
    image: max-kirsch/my-cool-app:1.0.0
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/webmenue
      - bundler_gems:/bundle
      - ./docker/database.yml:/webmenue/config/database.yml
      - cache:/cache
    ports:
      - "3000:3000"
    env_file:
      - .env
    environment:
      - RAILS_ENV=development
      - WEBPACKER_DEV_SERVER_HOST=webpacker
      - SPROCKETS_CACHE=/cache
    depends_on:
      - db
  webpacker:
    image: max-kirsch/my-cool-app:1.0.0
    command: ./bin/webpack-dev-server
    volumes:
      - .:/webmenue
    ports:
      - '3035:3035'
    environment:
      - NODE_ENV=development
      - RAILS_ENV=development
      - WEBPACKER_DEV_SERVER_HOST=0.0.0.0
volumes:
  postgres:
  bundler_gems:
  cache:
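For completeness, the build-and-push sequence for that image would look roughly like this (max-kirsch/my-cool-app is the placeholder name from above; on Docker Hub the first path segment has to be your own username or organization):
# build the image from the project's Dockerfile and tag it
docker build -t max-kirsch/my-cool-app:1.0.0 .
# log in to Docker Hub, then push the tagged image
docker login
docker push max-kirsch/my-cool-app:1.0.0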
I think you are confused because you have 2 services in your compose file that share the same Dockerfile, and that confusion is understandable. In my personal opinion, you should give each of them its own Dockerfile in which only the components it needs are installed. It is hard to be sure without seeing the Dockerfile, but as an example, it does not look like your webpacker service needs Ruby installed; it could start from a simple node image as the base instead of installing Node in your Ruby image.

Related

Docker image working on pull but not via the image directive in a yml file?

I have a Docker image in a GitLab registry.
When I run (after logging in on the target machine)
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running, and reachable. Things like php artisan config:clear work, and when I enter the container everything looks fine.
But I don't have any services running alongside it. So I had the idea to create a yml file for docker-compose to set things up, in docker-compose-gitlab.yml:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then it fails ... exiting with code 0, with no further message.
If I add commands to my yml like php artisan config:clear, the error gets even less clear to me: it says it cannot find artisan, and it seems as if the command is executed outside the container ... exiting with code 1. (artisan is a helper executed via php.)
When I run docker-compose with -d and then do docker ps, I can only see mysql running, but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem was that I had left over a volume directive which overwrote my entire application with an empty directory.
You can just leave that out.
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
You can debug the containers' networking by listing the networks with docker network ls,
then, when the list is shown, inspecting the compose network with docker inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try using the container name instead of localhost to reach each other.
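Put together, such a debugging session might look like this (the network name depends on your project directory; compose creates it as <project>_default):
docker network ls
# see which containers are attached to the compose network
docker inspect <ComposeNetworkID>
# if the two containers are on different networks, recreate them
docker-compose -f docker-compose-gitlab.yml down
docker-compose -f docker-compose-gitlab.yml up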

docker-compose: run a command on a pgsql container

I am trying to run the following docker-compose file:
version: "3"
services:
db:
image: postgres
container_name: pgsql
environment:
- foo=foo
- bar=bar
volumes:
- ./sql/:/opt/sql
command: bash /opt/sql/create-db.sql
# command: ps -aux
web:
image: benit/debian-web
container_name: web
depends_on:
- db
ports:
- 80:80
volumes:
- ./html:/var/www/html
I am encountering an error with the line:
command: bash /opt/sql/create-db.sql
This is because the pgsql service is not started yet. It can be monitored by swapping in command: ps -aux.
How can I run my script once the pgsql service has started?
You can use a volume to provide an initialization sql script:
version: "3"
services:
db:
image: postgres
container_name: pgsql
environment:
- foo=foo
- bar=bar
volumes:
- ./sql/:/opt/sql
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
web:
image: benit/debian-web
container_name: web
depends_on:
- db
ports:
- 80:80
volumes:
- ./html:/var/www/html
This works because the original PostgreSQL Dockerfile contains a script (run after Postgres has started) which executes any *.sql files found in the /docker-entrypoint-initdb.d/ folder.
By mounting your local volume in that place, your sql files will be run at the right time.
It's actually mentioned in the documentation for that image: https://hub.docker.com/_/postgres under the How to extend this image section.
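As a quick sanity check that the script ran (a sketch assuming the default postgres user, since the environment values in the question are placeholders; note the entrypoint only runs these scripts on the very first start, while the data directory is still empty):
docker-compose up -d db
# the entrypoint logs every init script it executes
docker-compose logs db | grep init.sql
# verify the objects the script should have created
docker-compose exec db psql -U postgres -c '\dt'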

Rails Active Storage in Docker

I'm running a docker-compose setup which consists of a web worker, a postgres database, and a redis-backed sidekiq worker. I created a background job to process images after users upload them; ActiveStorage is used to store the images. Normally, without Docker, in local development the images are stored in a local storage folder to simulate cloud storage. I'm fairly new to Docker, so I'm not sure how storage works; I believe it works a bit differently there. The sidekiq worker seems fine, it just seems to be complaining about not being able to find a place to store images. Below is the error that I get from the sidekiq worker:
WARN: Errno::ENOENT: No such file or directory @ rb_sysopen - /myapp/storage
And here is my docker-compose.yml
version: '3'
services:
  setup:
    build: .
    depends_on:
      - postgres
    environment:
      - RAILS_ENV=development
    command: "bin/rails db:migrate"
  postgres:
    image: postgres:10-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=mysecurepass
      - POSTGRES_DB=myapp_development
      - PGDATA=/var/lib/postgresql/data
  postgres_data:
    image: postgres:10-alpine
    volumes:
      - /var/lib/postgresql/data
    command: /bin/true
  sidekiq:
    build: .
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
    command: "bin/bundle exec sidekiq -C config/sidekiq.yml"
  redis:
    image: redis:4-alpine
    ports:
      - "6379:6379"
  web:
    build: .
    depends_on:
      - redis
      - postgres
      - setup
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    environment:
      - REDIS_URL=redis://localhost:6379
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
Perhaps you need to add the myapp volume for sidekiq as well, like this:
sidekiq:
  volumes:
    - .:/myapp
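With that bind mount in place, a quick way to confirm both containers see the same files (service names as in the compose file above):
docker-compose exec web ls /myapp/storage
docker-compose exec sidekiq ls /myapp/storage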

How to run docker exec on a docker-compose.yml

I am trying to create a mysql database schema while the docker-compose.yml file is being executed:
version: "2"
services:
web:
build: docker
ports:
- "8080:8080"
environment:
- MYSQL_ROOT_PASSWORD=root
mysql:
image: mysql:latest
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=test
ports:
- "3306:3306"
links:
- web
onrun:
command: "docker exec -i test_mysql_1 mysql -uroot -proot test <dummy1.sql"
I tried onrun, but this is not working.
I am building the first image but pulling the second image from Docker Hub.
Kindly help with how to execute that command after docker-compose up.
There is nothing like onrun in docker-compose; it will only bring up the containers and execute their commands. That said, you have a few possible options.
Use mysql Image Initialization
mysql:
  image: mysql:latest
  environment:
    - MYSQL_ROOT_PASSWORD=root
    - MYSQL_DATABASE=test
  volumes:
    - ./dummy1.sql:/docker-entrypoint-initdb.d/dummy1.sql
  ports:
    - "3306:3306"
You may put your sql files inside /docker-entrypoint-initdb.d inside the container.
Use bash script
docker-compose up -d
# Give some time for mysql to get up
sleep 20
# -T disables pseudo-tty allocation so that stdin redirection works
docker-compose exec -T mysql mysql -uroot -proot test < dummy1.sql
Use another docker service to initialize the DB
version: "2"
services:
web:
build: docker
ports:
- "8080:8080"
environment:
- MYSQL_ROOT_PASSWORD=root
mysql:
image: mysql:latest
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=test
ports:
- "3306:3306"
mysqlinit:
image: mysql:latest
volumes:
- ./dummy1.sql:/dump/dummy1.sql
command: bash -c "sleep 20 && mysql -h mysql -uroot -proot test < /dump/dummy1.sql"
You run another service which inits the DB for you, like mysqlinit in the example above.
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order.
From https://hub.docker.com/_/mysql/
That is the convenient way many databases (postgresql, mysql, ...) initialize themselves on container creation. You create a *.sql / *.sh file and bind it via a volume into the new container:
db:
  image: mysql:latest
  volumes:
    - ./db/entrypoint:/docker-entrypoint-initdb.d
  environment:
    - MYSQL_ROOT_PASSWORD=iamgroot
    - MYSQL_DATABASE=gotg
This mounts all your sql / sh files into the container, where they are then executed automatically.
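To confirm afterwards that the init scripts actually ran, you could query the resulting database (credentials and service name taken from the snippet above):
# list the tables the init scripts created
docker-compose exec db mysql -uroot -piamgroot -e 'SHOW TABLES IN gotg;'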

Adding a sphinx container to docker-compose shows an error

I have a Ruby on Rails project which I want to place into containers (there are database, redis, and web (containing the rails project) containers). I want to add a search feature, so I added a sphinx container to my compose file.
docker-compose.yml
web:
  dockerfile: Dockerfile-rails
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  ports:
    - "3000:3000"
  links:
    - redis
    - db
    **- sphinx**
  environment:
    - REDISTOGO_URL=redis://user@redis:6379/
redis:
  image: redis
**sphinx:
  image: centurylink/sphinx**
db:
  dockerfile: Dockerfile-db
  build: .
  env_file: .env_db
docker-compose build works fine, but when I run docker-compose up I get:
ERROR: Cannot start container 096410dafc86666dcf1ffd5f60ecc858760fb7a2b8f2352750f615957072d961: Cannot link to a non running container: /metartaf_sphinx_1 AS /metartaf_web_1/sphinx_1
How can I fix this?
According to https://hub.docker.com/r/centurylink/sphinx/, the Sphinx container needs a number of configuration files to run properly. See the Daemonized usage (2) section: you need data source files and a configuration.
In my test, it fails to start as-is with the error:
FATAL: no readable config file (looked in /usr/local/etc/sphinx.conf, ./sphinx.conf)
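To experiment with that before wiring it into compose, you could mount your own configuration at the path from the error message (a sketch: sphinx.conf is a file you would have to write yourself, and this assumes the image's default command starts the daemon as described in the linked Daemonized usage section):
# mount your config where the daemon looks for it, per the error above
docker run --rm -v "$(pwd)/sphinx.conf:/usr/local/etc/sphinx.conf" centurylink/sphinx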
Your docker-compose.yml shouldn't have these * in it.
If you want the latest sphinx version, you can do this:
web:
  dockerfile: Dockerfile-rails
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  ports:
    - "3000:3000"
  links:
    - redis
    - db
    - sphinx
  environment:
    - REDISTOGO_URL=redis://user@redis:6379/
redis:
  image: redis
sphinx:
  image: centurylink/sphinx:latest
db:
  dockerfile: Dockerfile-db
  build: .
  env_file: .env_db
If you want a specific version, you write it this way: centurylink/sphinx:2.1.8.
