Adding a sphinx container to docker-compose shows an error - ruby-on-rails

I have a Ruby on Rails project which I want to place into containers (there are database, redis, and web (the Rails project) containers). I want to add a search feature, so I added a sphinx container to my compose file.
docker-compose.yml:
web:
  dockerfile: Dockerfile-rails
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  ports:
    - "3000:3000"
  links:
    - redis
    - db
    **- sphinx**
  environment:
    - REDISTOGO_URL=redis://user@redis:6379/
redis:
  image: redis
**sphinx:
  image: centurylink/sphinx**
db:
  dockerfile: Dockerfile-db
  build: .
  env_file: .env_db
docker-compose build works fine, but when I run docker-compose up I get:
ERROR: Cannot start container 096410dafc86666dcf1ffd5f60ecc858760fb7a2b8f2352750f615957072d961: Cannot link to a non running container: /metartaf_sphinx_1 AS /metartaf_web_1/sphinx_1
How can I fix this?

According to https://hub.docker.com/r/centurylink/sphinx/, the Sphinx container needs a certain amount of configuration to run properly. See the Daemonized usage (2) section: you need data source files and a configuration file.
In my test, it fails to start as-is with the error:
FATAL: no readable config file (looked in /usr/local/etc/sphinx.conf, ./sphinx.conf)
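One way to provide that configuration, as a minimal sketch: keep a sphinx.conf in your project and bind-mount it to the path the daemon looks in (taken from the error above). The ./sphinx/sphinx.conf host path is an assumption; adjust it to wherever you keep the file, and remember that any data source files it references must be reachable from inside the container too.
sphinx:
  image: centurylink/sphinx
  volumes:
    # hypothetical host path; the container path comes from the FATAL error above
    - ./sphinx/sphinx.conf:/usr/local/etc/sphinx.conf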

Your docker-compose.yml shouldn't have these ** markers in it.
If you want the latest sphinx version, you can do this:
web:
  dockerfile: Dockerfile-rails
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  ports:
    - "3000:3000"
  links:
    - redis
    - db
    - sphinx
  environment:
    - REDISTOGO_URL=redis://user@redis:6379/
redis:
  image: redis
sphinx:
  image: centurylink/sphinx:latest
db:
  dockerfile: Dockerfile-db
  build: .
  env_file: .env_db
If you want a specific version, you write it this way: centurylink/sphinx:2.1.8

Possible to share folders between containers?

I want to know how to share an application folder between containers.
I found articles about how to share a folder between a container and the host, but I could not find anything about container to container.
I want to edit the code of the frontend application from the backend, so I need to share the folder <- this is also my goal.
Any solution?
My config is like this:
/
├── docker-compose.yml
├── backend application
│   └── Dockerfile
└── frontend application
    └── Dockerfile
And docker-compose.yml is like this:
version: '3'
services:
  railsApp: # <- backend application
    build: .
    command: bundle exec rails s -p 5000 -b '0.0.0.0'
    volumes:
      - code_share:/var/web/railsApp
    ports:
      - "3000:3000"
  reactApp: # <- frontend application
    build: .
    command: yarn start
    volumes:
      - code_share:/var/web/reactApp
    ports:
      - "3000:3000"
volumes:
  code_share:
You are already mounting a named volume in both your frontend and backend.
According to your configuration, both /var/web/railsApp and /var/web/reactApp will see the exact same content.
So whenever you write to /var/web/reactApp in your frontend application container, the changes will also be reflected in /var/web/railsApp in the backend.
To achieve what you want (having railsApp and reactApp under /var/web), try mounting a folder on the host machine into both containers, and make sure each application writes into its respective /var/web folder correctly:
mkdir -p /var/web/railsApp /var/web/reactApp
then adjust your compose file:
version: '3'
services:
  railsApp: # <- backend application
    build: .
    command: bundle exec rails s -p 5000 -b '0.0.0.0'
    volumes:
      - /var/web:/var/web
    ports:
      - "3000:5000" # host port 3000 -> container port 5000, where rails listens
  reactApp: # <- frontend application
    build: .
    command: yarn start
    volumes:
      - /var/web:/var/web
    ports:
      - "3001:3000" # a different host port, so the two services don't collide

Push Rails 6 application to Docker Hub

I'm trying to get my head around Docker. I got it working with my Rails 6 application; it builds and runs successfully. Now I want to push the application to my Docker Hub repository.
I'm not quite sure how to do this, because I have 3 containers (visible in docker ps -a), but in every tutorial I read people just push one.
That's my docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - postgres:/var/lib/postgresql/data
    env_file:
      - .env
    ports:
      - "5432"
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/webmenue
      - bundler_gems:/bundle
      - ./docker/database.yml:/webmenue/config/database.yml
      - cache:/cache
    ports:
      - "3000:3000"
    env_file:
      - .env
    environment:
      - RAILS_ENV=development
      - WEBPACKER_DEV_SERVER_HOST=webpacker
      - SPROCKETS_CACHE=/cache
    depends_on:
      - db
  webpacker:
    build: .
    command: ./bin/webpack-dev-server
    volumes:
      - .:/webmenue
    ports:
      - '3035:3035'
    environment:
      - NODE_ENV=development
      - RAILS_ENV=development
      - WEBPACKER_DEV_SERVER_HOST=0.0.0.0
volumes:
  postgres:
  bundler_gems:
  cache:
I read about the --link flag, but this seems to be deprecated.
So: Do I need to push all containers or is there a way to get them into one container?
The short answer is as simple as: if you have multiple images, you need to push multiple times ;)
Now the helpful answer: you have 2 images, postgres and the one built from your current working directory. postgres is already an image you download from Docker Hub, so there is no need to push it.
As for your other 2 apps, they currently share one Dockerfile, so only one push is needed, as sketched below.
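A minimal sketch of that push, using the hypothetical image name from the compose file below:
# build the image from the shared Dockerfile, log in, and push it to Docker Hub
docker build -t max-kirsch/my-cool-app:1.0.0 .
docker login
docker push max-kirsch/my-cool-app:1.0.0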
In your compose file they both use build: ., therefore they share the same docker image. Let's say you pushed it with docker push max-kirsch/my-cool-app:1.0.0; you would then change the docker-compose file to look like this:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - postgres:/var/lib/postgresql/data
    env_file:
      - .env
    ports:
      - "5432"
  web:
    image: max-kirsch/my-cool-app:1.0.0
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/webmenue
      - bundler_gems:/bundle
      - ./docker/database.yml:/webmenue/config/database.yml
      - cache:/cache
    ports:
      - "3000:3000"
    env_file:
      - .env
    environment:
      - RAILS_ENV=development
      - WEBPACKER_DEV_SERVER_HOST=webpacker
      - SPROCKETS_CACHE=/cache
    depends_on:
      - db
  webpacker:
    image: max-kirsch/my-cool-app:1.0.0
    command: ./bin/webpack-dev-server
    volumes:
      - .:/webmenue
    ports:
      - '3035:3035'
    environment:
      - NODE_ENV=development
      - RAILS_ENV=development
      - WEBPACKER_DEV_SERVER_HOST=0.0.0.0
volumes:
  postgres:
  bundler_gems:
  cache:
I think you are confused because you have 2 apps in your compose file that share the same Dockerfile, and that confusion may be justified. In my personal opinion, you should give each of them its own Dockerfile that installs only the components it needs. It is hard to be sure without seeing the Dockerfile, but as an example, it does not look like your webpacker service needs Ruby installed; it could start from a simple node base image instead of installing Node in your Ruby image.

Rails Active Storage in Docker

I'm running a docker-compose setup which consists of a web worker, a postgres database, and a redis sidekiq worker. I created a background job to process images after uploading user images; ActiveStorage is used to store the images. Normally, without Docker, in local development the images are stored in a temporary storage folder to simulate cloud storage. I'm fairly new to Docker, so I'm not sure how storage works; I believe storage in Docker works a bit differently. The sidekiq worker seems fine, it just seems to be complaining about not finding a place to store images. Below is the error that I get from the sidekiq worker:
WARN: Errno::ENOENT: No such file or directory # rb_sysopen - /myapp/storage
And here is my docker-compose.yml:
version: '3'
services:
  setup:
    build: .
    depends_on:
      - postgres
    environment:
      - RAILS_ENV=development
    command: "bin/rails db:migrate"
  postgres:
    image: postgres:10-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=mysecurepass
      - POSTGRES_DB=myapp_development
      - PGDATA=/var/lib/postgresql/data
  postgres_data:
    image: postgres:10-alpine
    volumes:
      - /var/lib/postgresql/data
    command: /bin/true
  sidekiq:
    build: .
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
    command: "bin/bundle exec sidekiq -C config/sidekiq.yml"
  redis:
    image: redis:4-alpine
    ports:
      - "6379:6379"
  web:
    build: .
    depends_on:
      - redis
      - postgres
      - setup
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    environment:
      - REDIS_URL=redis://localhost:6379
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
Perhaps you need to add the myapp volume to the sidekiq service as well, like this:
sidekiq:
  volumes:
    - .:/myapp
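For context, the complete sidekiq service from the compose file above would then read (same settings as in the question, with only the volumes entry added):
sidekiq:
  build: .
  environment:
    - REDIS_URL=redis://redis:6379
  depends_on:
    - redis
  command: "bin/bundle exec sidekiq -C config/sidekiq.yml"
  volumes:
    # the same bind mount as the web service, so sidekiq can see /myapp/storage
    - .:/myapp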

Database not persisting in docker

I know I am missing something very basic here. I have seen some of the older questions on persisting data using Docker, but I think I am following the most recent documentation found here.
I have a Rails app that I am trying to run in Docker. It runs fine, but every time I start it up I get ActiveRecord::NoDatabaseError. After I create the database and migrate it, the app runs fine, until I shut it down and restart it.
Here is my Dockerfile:
FROM ruby:2.3.0
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
ENV RAILS_ROOT /ourlatitude
RUN mkdir -p $RAILS_ROOT/tmp/pids
WORKDIR $RAILS_ROOT
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN gem install bundler
RUN bundle install
COPY . .
And here is my docker-compose.yml file:
version: '2'
services:
  db:
    image: postgres:9.4.5
  app:
    build: .
    environment:
      RAILS_ENV: $RAILS_ENV
    ports:
      - "3000:3000"
    command: bundle exec rails s -b 0.0.0.0
    volumes:
      - .:/ourlatitude/database
    depends_on:
      - db
The basic flow I am following is this:
export RAILS_ENV=development
docker-compose build
docker-compose up
docker-compose run app rake db:create
docker-compose run app rake db:migrate
At this point the app will be running fine,
but then I do this:
docker-compose down
docker-compose up
and then I am back to the ActiveRecord::NoDatabaseError.
So as I said, I think I am missing something very basic.
It doesn't look like you put your postgres data on a volume; you may also be missing other persistent data sources in your app container, and it appears you missed some indentation in your app container definition.
version: '2'
services:
  db:
    image: postgres:9.4.5
    volumes:
      - postgres-data:/var/lib/postgresql/data
  app:
    build: .
    environment:
      RAILS_ENV: $RAILS_ENV
    ports:
      - "3000:3000"
    command: bundle exec rails s -b 0.0.0.0
    volumes:
      - .:/ourlatitude/database
    depends_on:
      - db
volumes:
  postgres-data:
    driver: local
In the example above, the postgres data is stored in a named volume. See the advice on docker hub for more details on persisting data for that application. If you are still losing data, check the output of docker diff $container_id on a container to see what files are changing outside of your volumes that would be lost on a down/up.
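For example, assuming the compose service is named app as above:
# resolve the service's container ID, then list files changed outside of volumes
docker diff $(docker-compose ps -q app)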
I managed to get this to work properly using the following docker-compose.yml file.
version: '2'
volumes:
  postgres-data:
services:
  db:
    image: postgres:9.4.5
    volumes:
      - postgres-data:/var/lib/postgresql/data
  app:
    build: .
    environment:
      RAILS_ENV: $RAILS_ENV
    ports:
      - "3000:3000"
    command: bundle exec rails s -b 0.0.0.0
    depends_on:
      - db
The key was to add the
volumes:
  postgres-data:
which creates the named volume, and then the
volumes:
  - postgres-data:/var/lib/postgresql/data
under the db section, which maps the named volume to the expected location in the container of /var/lib/postgresql/data.
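To confirm the data now survives docker-compose down, you can inspect the named volume on the host. Compose prefixes the volume name with the project name (typically the directory name), so the exact name below is an assumption:
# list volumes, then show where the data actually lives on the host
docker volume ls
docker volume inspect myproject_postgres-data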

Storing secret env variables as separate data container in Docker

I want to use 'secret' (passwords etc.) env variables from the host OS inside docker-compose.
I thought it was possible to achieve this using a data container, but it is not working ('No file present'). Please advise what is wrong, and whether this is doable this way.
docker-compose.yml
version: '2'
services:
  web:
    build: .
    command: bundle exec puma
    env_file: .env
    environment:
      - RACK_ENV=production
      - RAILS_ENV=production
    volumes_from:
      - config
    ports:
      - "3000:3000"
    links:
      - db
  config:
    image: busybox
    volumes:
      - /myapp/config/.env:.env
I also tried to use /myapp/config:/config as the volume and /config/.env as the env_file, but with the same result.
The Dockerfile doesn't mention or reference 'config' or '.env' in any way.
Thanks for any responses.
Compose env file
The env_file is read locally by docker-compose, in the directory you run it from. The variables are then passed to the containers at run time (like docker run -e).
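For example, a .env file sitting next to your docker-compose.yml might look like this (variable names and values here are hypothetical):
# .env -- read by docker-compose on the host; values are injected at run time
SECRET_TOKEN=changeme
DATABASE_PASSWORD=changeme
Then the compose file references it: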
web:
  build: .
  command: bundle exec puma
  env_file: ./.env
  environment:
    - RACK_ENV=production
    - RAILS_ENV=production
  ports:
    - "3000:3000"
  links:
    - db
Substitution
Compose can also substitute environment variables into the config.
If you have SECRET_HOST_ENV_VARIABLE set on your host when you run docker-compose, with an environment definition like the one below:
web:
  build: .
  command: bundle exec puma
  environment:
    - RACK_ENV=production
    - RAILS_ENV=production
    - SECRET_ENV_VARIABLE=${SECRET_HOST_ENV_VARIABLE}
  ports:
    - "3000:3000"
  links:
    - db
SECRET_ENV_VARIABLE will become available in the container's environment.
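For example (the exported value here is just a placeholder):
# set the secret on the host, then let compose substitute it at startup
export SECRET_HOST_ENV_VARIABLE=supersecret
docker-compose up -d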
