Can't launch sphinxsearch in docker - ruby-on-rails

I'm trying to dockerize a Rails application. It uses Sphinx for search, and I can't make it run through docker.
This is what happens when I run docker-compose up and try to perform a search:
web_1 | [1fd79fbf-2e77-4af5-90ad-ae3637ada807] Sphinx Query (1.9ms) SELECT * FROM `field_core` WHERE MATCH('soccer') AND `sphinx_deleted` = 0 ORDER BY `name` ASC LIMIT 0, 10000
web_1 | [1fd79fbf-2e77-4af5-90ad-ae3637ada807] Completed 500 Internal Server Error in 27ms (ActiveRecord: 3.0ms)
web_1 | [1fd79fbf-2e77-4af5-90ad-ae3637ada807]
web_1 | ThinkingSphinx::ConnectionError (Error connecting to Sphinx via the MySQL protocol. Can't connect to MySQL server on '127.0.0.1' (111)):
web_1 | app/controllers/fields_controller.rb:7:in `search'
This is the result of docker-compose run sphinx rake ts:index:
sh: 1: searchd: not found
The Sphinx start command failed:
Command: searchd --pidfile --config "/app/config/development.sphinx.conf"
Status: 127
Output: See above
There may be more information about the failure in /app/log/development.searchd.log.
docker-compose.yml:
version: '3'
services:
db:
image: circleci/mysql:5.7
restart: always
volumes:
- mysql_data:/var/lib/mysql
ports:
- "3309:3309"
expose:
- '3309'
web:
build: .
command: rails server -p 3000 -b '0.0.0.0'
ports:
- "3000:3000"
expose:
- '3000'
depends_on:
- db
- sphinx
volumes:
- app:/app
sphinx:
container_name: sociaball_sphinx
image: stefobark/sphinxdocker
restart: always
links:
- db
volumes:
- /app/config/sphinxy.conf:/etc/sphinxsearch/sphinxy.conf
- /app/sphinx:/var/lib/sphinx
volumes:
mysql_data:
app:
Dockerfile:
FROM ruby:2.4.1
RUN apt-get update && apt-get install -qq -y build-essential nodejs --fix-missing --no-install-recommends
RUN curl -s \
http://sphinxsearch.com/files/sphinxsearch_2.3.2-beta-1~wheezy_amd64.deb \
-o /tmp/sphinxsearch.deb \
&& dpkg -i /tmp/sphinxsearch.deb \
&& rm /tmp/sphinxsearch.deb \
&& mkdir -p /var/log/sphinxsearch
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN gem install bundler && bundle install --jobs 20 --retry 5
COPY . ./
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
thinking_sphinx.yml:
development: &common
min_infix_len: 1
charset_table: "0..9, english, U+0021..U+002F"
port: 9306
address: sociaball_mysql_1
production:
<<: *common
So, rake isn't available in the sphinx container, and the Sphinx scripts aren't available in the app's container. What am I doing wrong?

Thinking Sphinx expects a copy of the Rails app when running the rake tasks, so you'll need to have a copy of your app within your Sphinx container. This will ensure (once you've bundled the gems) that rake exists in the Sphinx container, where the Sphinx binaries are also present.
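One way to get there (a sketch rather than a drop-in config: it reuses the image built from the Dockerfile in the question, which already installs the Sphinx packages, instead of the stefobark/sphinxdocker image) is to build the sphinx service from the same Dockerfile as web and run searchd in the foreground against the generated config:
sphinx:
  build: .                      # same image as web, so rake, the gems and the Sphinx binaries coexist
  command: searchd --nodetach --config /app/config/development.sphinx.conf
  volumes:
    - app:/app                  # share the app volume so config and index files are visible to both services
  depends_on:
    - db
With that in place, docker-compose run --rm sphinx rake ts:index can find both rake and the indexer/searchd binaries.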

So, instead of running the rake task, I just did what it does directly in the sphinx container:
docker-compose run --rm sphinx indexer \
--config "/etc/sphinxsearch/sphinxy.conf" --all --rotate
Regarding the 500 error: it was caused by an incorrect configuration in thinking_sphinx.yml. The address should have pointed to the host running Sphinx instead of the db host:
development: &common
# ...
address: sociaball_sphinx_1
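Worth noting: Compose puts all of these services on a default network where each one is reachable by its service name, so (assuming that default network is used) the address can simply be the service name rather than the generated container name:
development: &common
  # ...
  address: sphinx
  port: 9306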

Related

ActiveRecord::AdapterNotSpecified: 'development' database is not configured. Available: []

I am trying to set up my development environment in Rails with Docker Compose and am getting an error saying:
ActiveRecord::AdapterNotSpecified: 'development' database is not configured. Available: []
Dockerfile:
# syntax=docker/dockerfile:1
FROM ruby:2.5.8
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
RUN apt-get install cron -y
RUN apt-get install vim -y
RUN export EDITOR="/usr/bin/vim"
RUN addgroup deploy && adduser --system deploy && adduser deploy deploy
USER deploy
WORKDIR /ewagers
RUN (crontab -l 2>/dev/null || true; echo "*/5 * * * * /config/schedule.rb -with args") | crontab -
COPY Gemfile .
COPY Gemfile.lock .
RUN gem install bundler -v 2.2.27
RUN bundle install
COPY . .
USER root
COPY docker-entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/docker-entrypoint.sh
COPY wait-for-it.sh /usr/bin/
RUN chmod +x /usr/bin/wait-for-it.sh
RUN chown -R deploy *
RUN chmod 644 app
RUN chmod u+x app
RUN whenever --update-crontab ewagers --set environment=production
COPY config/database.example.yml ./config/database.yml
RUN mkdir data
ARG RAILS_MASTER_KEY
RUN printenv
EXPOSE 3000
# Configure the main process to run when running the image
CMD ["rails", "server", "-b", "0.0.0.0"]
database.example.yml:
# database.yml
default: &default
adapter: postgresql
encoding: unicode
host: db
username: postgres
password: ewagers
pool: 5
development:
<<: *default
database: postgres
docker compose:
version: "3.9"
services:
app:
build: .
command: docker-entrypoint.sh
ports:
- 4000:3000
environment:
DB_URL: postgres://db/ewagers_dev # db is host, ewagers_dev is db name
RAILS_ENV: development
volumes:
- .:/ewagers # mapping our current directory to ewagers directory in the container
# - ewagers-sync:/ewagers:nocopy
image: ksun/ewagers:latest
depends_on:
- db
db:
image: postgres:12
volumes:
- ewagers_postgres_volume:/var/lib/postgresql/data # default storage location for postgres
environment:
POSTGRES_PASSWORD: ewagers
ports:
- 5432:5432 # default postgres port
volumes: # we specify a volume so postgres does not write data to temporary db of its container
ewagers_postgres_volume:
I have double-checked indentation and spacing, and done a docker build to make sure database.example.yml is being copied to database.yml. However, it seems it can't even find my development configuration in database.yml.
What's interesting is that if I take what's in database.example.yml and create a database.yml file locally with the same contents, it works. But it should work without that, since I am copying database.example.yml to database.yml in the Dockerfile.
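One thing worth checking (an assumption based on the compose file, not something verifiable from here): the .:/ewagers bind mount replaces the image's /ewagers at runtime, so the database.yml that COPY created during the build is hidden and the container only sees your local directory, which has no database.yml. A sketch of a workaround is to create the file from docker-entrypoint.sh, after the mount is in place:
#!/bin/sh
set -e
# Recreate database.yml at container start; the bind mount hides the copy made at build time.
if [ ! -f config/database.yml ]; then
  cp config/database.example.yml config/database.yml
fi
exec "$@"
(This assumes docker-entrypoint.sh ends up exec'ing the server command, as in the usual ENTRYPOINT pattern.)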

Docker compose with Rails and Postgres: could not connect to server: No route to host

I'm currently having an issue with my docker-compose setup, which has these services: a Rails app and Postgres. These are my configurations:
docker-compose.yml
version: '3'
services:
db:
image: postgres:alpine
restart: always
volumes:
- ./tmp/db:/var/lib/postgresql/data
ports:
- "5432:5432"
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
app:
build: .
restart: always
command: bash -c "rm -f tmp/pids/server.pid && rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/myapp
- bundle_path:/bundle
ports:
- "3000:3000"
depends_on:
- db
volumes:
bundle_path:
Dockerfile
FROM ruby:2.5.3-slim
# install rails dependencies
RUN apt-get update -qq \
&& apt-get install -y \
# Needed for certain gems
build-essential \
# Needed for postgres gem
libpq-dev \
# Others
nodejs \
vim-tiny \
# The following are used to trim down the size of the image by removing unneeded data
&& apt-get clean autoclean \
&& apt-get autoremove -y \
&& rm -rf \
/var/lib/apt \
/var/lib/dpkg \
/var/lib/cache \
/var/lib/log
# Changes localtime to Singapore
RUN cp /usr/share/zoneinfo/Asia/Singapore /etc/localtime
# create a folder /myapp in the docker container and go into that folder
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
# Run bundle install to install gems inside the gemfile
RUN bundle install
ADD . /myapp
CMD bash -c "rm -f tmp/pids/server.pid && rails s -p 3000 -b '0.0.0.0'"
database.yml
default: &default
adapter: postgresql
encoding: unicode
pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
development:
<<: *default
database: myapp_development
host: db
username: postgres
password: postgres
port: 5432
I can build the app with docker-compose build, but whenever I run docker-compose up, the db service exits while my Rails app keeps running.
These are the logs I get when I run docker-compose up:
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
db_1 | Data page checksums are disabled.
db_1 |
db_1 | initdb: error: directory "/var/lib/postgresql/data" exists but is not empty
db_1 | If you want to create a new database system, either remove or empty
db_1 | the directory "/var/lib/postgresql/data" or run initdb
db_1 | with an argument other than "/var/lib/postgresql/data".
The error I'm getting when I access http://localhost:3000 is
could not connect to server: No route to host Is the server running on host "db" (172.18.0.2) and accepting TCP/IP connections on port 5432?
I think you should use a named volume for Postgres too.
services:
db:
image: postgres:alpine
restart: always
volumes:
- postgres_volume:/var/lib/postgresql/data
volumes:
postgres_volume:
I had a similar issue and fixed it with that. Also try restarting Docker.
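If you keep the ./tmp/db bind mount instead, the initdb error above ("directory exists but is not empty") usually means stale or partial data is already sitting in that directory. A rough cleanup, assuming you don't need the existing data (this is destructive):
docker-compose down
rm -rf ./tmp/db          # old bind-mounted data directory from the original setup
docker-compose up --build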

docker web server exiting without error message

I have my project set up with docker-compose. It was working fine until today, when I rebuilt the web container. Now whenever I start it using docker-compose up, it just exits without an error message:
web_1 | => Booting Unicorn
web_1 | => Rails 3.2.22.5 application starting in development on http://0.0.0.0:3000
web_1 | => Call with -d to detach
web_1 | => Ctrl-C to shutdown server
web_1 | Exiting
If I run it with --verbose (docker-compose --verbose up), the following lines show up after 'Exiting':
compose.cli.verbose_proxy.proxy_callable: docker wait <- (u'a03b58f2116698d670f86155cd68605a148143b83ee3351a5e5a4808d682afc9')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- (u'a03b58f2116698d670f86155cd68605a148143b83ee3351a5e5a4808d682afc9')
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/containers/a03b58f2116698d670f86155cd68605a148143b83ee3351a5e5a4808d682afc9/wait HTTP/1.1" 200 30
compose.cli.verbose_proxy.proxy_callable: docker wait -> 1
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.25/containers/a03b58f2116698d670f86155cd68605a148143b83ee3351a5e5a4808d682afc9/json HTTP/1.1" 200 None
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {u'AppArmorProfile': u'docker-default',
u'Args': [u'redis:6379',
u'--',
u'./wait-for-it.sh',
u'postgres:5432',
u'--',
u'bundle',
u'exec',
u'rails',
u's',
This is my docker-compose.yml:
version: '3'
services:
memcached:
image: memcached:1.5.2-alpine
restart: always
ports:
- "11211:11211"
postgres:
image: postgres:9.4-alpine
restart: always
volumes:
- ~/.myapp-data/postgres:/var/lib/postgresql/data
ports:
- "5432:5432"
environment:
- POSTGRES_DB=myapp_development
- POSTGRES_USER=default
- POSTGRES_PASSWORD=secret
redis:
image: redis:3.2.0-alpine
restart: always
volumes:
- ~/.myapp-data/redis:/data
ports:
- "6379:6379"
web:
build:
context: .
dockerfile: "Dockerfile-dev"
stdin_open: true
tty: true
command: ./wait-for-it.sh redis:6379 -- ./wait-for-it.sh postgres:5432 -- bundle exec rails s -p 3000 -b '0.0.0.0'
volumes:
- .:/opt/apps/myapp
depends_on:
- memcached
- redis
- postgres
ports:
- "80:3000"
env_file:
- .env
extra_hosts:
- "api.myapp:127.0.0.1"
- "api.getmyapp:127.0.0.1"
- "my.app:127.0.0.1"
EDIT:
Here are the contents of Dockerfile-dev, required on the comments:
FROM ruby:2.3.7-slim
RUN apt-get update
RUN apt-get -y install software-properties-common libpq-dev build-essential \
python-dev python-pip wget curl git-core \
--fix-missing --no-install-recommends --allow-unauthenticated
# Set install path for reference later.
ENV INSTALL_PATH /opt/apps/engine
RUN mkdir -p $INSTALL_PATH
RUN gem install bundler
WORKDIR $INSTALL_PATH
ADD Gemfile $INSTALL_PATH
ADD Gemfile.lock $INSTALL_PATH
RUN bundle install
RUN find /tmp -type f -atime +10 -delete
ADD . $INSTALL_PATH
RUN cp config/database.docker-dev.yml config/database.yml
CMD [ "bundle", "exec", "rails", "s", "-p", "3000", "-b" "0.0.0.0" ]
Docker containers exit as soon as the command they run finishes. From the logs you've posted, it seems the container has 'nothing else' to do after starting up the server. You need to run the process in the foreground, or add some sort of sleep, to keep the container alive.
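As a quick way to investigate (a debugging sketch, not the fix itself), you can override the web command with something that never exits, so the container stays up long enough to shell in and start the server by hand:
web:
  # ...
  command: tail -f /dev/null   # placeholder foreground process
Then docker-compose exec web bash and run bundle exec rails s manually to see the real error output.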

127.0.0.1:11211 is down party_manager | DalliError: No server available

I am using Docker. Whenever my application tries to read or write the cache, it gets the following error:
Cache read: send_otp_request_count_3
Dalli::Server#connect 127.0.0.1:11211
127.0.0.1:11211 failed (count: 0) Errno::ECONNREFUSED:
Connection refused - connect(2) for "127.0.0.1" port 11211
DalliError: No server available
My Gemfile has:
gem 'dalli'
My Dockerfile is:
FROM ruby:2.3.6
RUN mkdir -p /railsapp
WORKDIR /railsapp
RUN apt-get update && apt-get install -y nodejs --no-install-recommends
RUN apt-get update && apt-get install -y mysql-client --no-install-recommends
COPY Gemfile /railsapp/
COPY Gemfile.lock /railsapp/
RUN bundle install
COPY . /railsapp
EXPOSE 3000
CMD ["rails", "server", "-b", "0.0.0.0"]
My docker-compose.yml file is:
version: '3.3'
services:
cache:
image: memcached:1.4-alpine
mysql:
image: mysql
restart: always
ports:
- "3002:3002"
volumes:
- /var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=dev
web:
container_name: party_manager
build:
context: .
dockerfile: Dockerfile
environment:
- RAILS_ENV=development
ports:
- '3000:3000'
volumes:
- .:/railsapp
links:
- mysql
I have also installed memcached in the container shell through
docker exec -it 499e3d1efe44 bash
(499e3d1efe44 is my container ID), and then installed the gem with: gem install memcached
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
So, according to your docker-compose.yml file, you can access your cache container at cache:11211 from the web container.
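Concretely (a sketch; where exactly the cache client is configured depends on your app), point Rails at the cache service name instead of 127.0.0.1:
# config/environments/development.rb
config.cache_store = :mem_cache_store, 'cache:11211'  # 'cache' resolves to the memcached service on the compose network
With the dalli gem in the bundle, :mem_cache_store talks to memcached through Dalli, so it should stop trying to reach localhost once this points at the service.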

Docker-compose rails postgres

I've been following this tutorial to 'dockerize' my Rails application and have hit a snag connecting to the db. After some searching around, no solutions seem to work. I've also tried the default user 'postgres' with no password, but still no luck. My error indicates that my password is incorrect, and nothing I try changes the error:
web_1 | I, [2017-06-02T00:58:29.217947 #7] INFO -- : listening on addr=0.0.0.0:3000 fd=13
postgres_1 | FATAL: password authentication failed for user "web"
postgres_1 | DETAIL: Connection matched pg_hba.conf line 95: "host all all 0.0.0.0/0 md5"
web_1 | E, [2017-06-02T00:58:29.230868 #7] ERROR -- : FATAL: password authentication failed for user "web"
Here's what I have:
.env
LISTEN_ON=0.0.0.0:3000
DATABASE_URL=postgresql://web:mypassword@postgres:5432/web?encoding=utf8&pool=5&timeout=5000
Dockerfile
FROM ruby:2.3.4
RUN apt-get update && apt-get install -qq -y build-essential nodejs libpq-dev postgresql-client-9.4 --fix-missing --no-install-recommends
ENV INSTALL_PATH /web
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN bundle install
COPY . .
# precompile assets using dummy data
RUN bundle exec rake RAILS_ENV=production DATABASE_URL=postgresql://user:pass@127.0.0.1/dbname SECRET_TOKEN=pickasecuretoken assets:precompile
VOLUME ["$INSTALL_PATH/public"]
VOLUME /postgres
CMD RAILS_ENV=development bundle exec unicorn -c config/unicorn.rb
docker-compose.yml
postgres:
image: postgres:9.4.5
environment:
POSTGRES_USER: web
POSTGRES_PASSWORD: mypassword
ports:
- "5432:5432"
volumes:
- postgres:/var/lib/postgresql/data
web:
build: .
links:
- postgres
volumes:
- .:/web
ports:
- "3000:3000"
env_file:
- .env
config/database.yml
default: &default
adapter: postgresql
encoding: unicode
pool: 5
development:
<<: *default
url: <%= ENV['DATABASE_URL'] %>
The line in database.yml grabs the DATABASE_URL environment variable that is stored in the container from the .env file.
I spent the better part of a day fiddling with this. What finally worked for me was to fall back to the Postgres defaults.
docker-compose.yml
postgres:
image: postgres:9.4.5
ports:
- "5432:5432"
volumes:
- postgres:/var/lib/postgresql/data
.env
DATABASE_URL=postgresql://web:@postgres:5432/web?encoding=utf8&pool=5&timeout=5000
In the DATABASE_URL, keeping the password separator in the url but leaving the password blank finally made it work.
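A likely reason the original credentials never took effect (an assumption, but a common gotcha with the official postgres image): POSTGRES_USER and POSTGRES_PASSWORD are only applied when the data directory is first initialized, and the named postgres volume may already have held a cluster from an earlier run. Wiping that volume and starting fresh lets the credentials from docker-compose.yml apply:
docker-compose down
docker volume ls                 # find the volume backing /var/lib/postgresql/data
docker volume rm <volume-name>   # destructive: this deletes the existing database
docker-compose up --build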
