I have a Rails application with MongoDB, in the development environment.
I am unable to connect to MongoDB when it runs in Docker, although I can connect to a local MongoDB with the same Mongoid config. I tried changing the host from localhost to 0.0.0.0, but that did not work.
What is missing in the settings?
My suspicion is that Mongo in Docker hasn't started or isn't bound correctly. If I change the Mongoid config to read: :nearest, it says no nodes found.
The error message is:
Moped::Errors::ConnectionFailure in Product#index
Could not connect to a primary node for replica set #<Moped::Cluster:... @seeds=[<Moped::Node resolved_address="...">]>
Dockerfile
#FROM ruby:2.2.1-slim
FROM rails:4.2.1
MAINTAINER Sandesh Soni, <my@email.com>
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev
RUN mkdir /gmv
WORKDIR /gmv
# Add db directory to /db
ADD Gemfile /gmv/Gemfile
RUN bundle install
ADD ./database /data/db
ADD . /gmv
docker-compose.yml
web:
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  volumes:
    - .:/gmv
  ports:
    - "3000:3000"
  links:
    - db
db:
  image: mongo
  command: "--smallfiles --bind_ip 0.0.0.0 --port 27027 -v"
  volumes:
    - data/mongodb:/data/db
  ports:
    - "27017:27017"
On your host machine, execute docker run yourapp env, then look in the output for the IP address related to your database. That IP address and port are what you need to use to connect to the database running in the container.
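With Compose links, the database is also reachable from the web container by its service alias, so as an alternative sketch the Mongoid config could target that alias instead of localhost (assuming Mongoid 4.x with the Moped driver, which the Moped error above implies; the database name is hypothetical):

# config/mongoid.yml — a minimal sketch, not the asker's actual file
development:
  sessions:
    default:
      database: gmv_development   # hypothetical database name
      hosts:
        - db:27017   # "db" is the link alias from docker-compose.yml; use the
                     # port mongod actually listens on. Note the compose file
                     # above passes --port 27027 to mongod while publishing
                     # 27017, so those two need to be aligned first.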
I'm working on a project that requires dockerizing a Rails application; the app uses MongoDB (the mongoid gem), plus Sidekiq and Redis.
Our goal is to create three containers: one for Redis, one for Sidekiq, and one for the Rails application. We do not want a container for MongoDB; instead, the Rails app container should connect to the MongoDB running on our local machine (on staging and production we use MongoDB Atlas, so no MongoDB container is needed at all).
Every time I run the three containers, I get this error when accessing endpoints that touch Mongo:
Mongo::Error::NoServerAvailable (No server is available matching preference: #<Mongo::ServerSelector::Primary:0x41321220 tag_sets=[] max_staleness=nil> using server_selection_timeout=30 and local_threshold=0.015):
And here are the files I used to dockerize my application:
Dockerfile
FROM ruby:2.4.2
RUN apt-get update -qq && apt-get install -y nodejs
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
COPY . /myapp
# Add a script to be executed every time the container starts.
COPY /docker/entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
docker-compose.yml
version: '3.7'
services:
  redis:
    image: redis:latest
    ports:
      - "6379:6379"
  elagi_app:
    build:
      context: '..'
      dockerfile: 'docker/Dockerfile'
    environment:
      RAILS_ENV: development
      ELASTICSEARCH_URL: 192.168.1.109:9200
      MONGO_CONNECTION_STRING: 192.168.1.109:27017
      REDIS_URL: redis://redis:6379
    ports:
      - "3000:3000"
    volumes:
      - ./../app:/myapp/app
      - ./../config:/myapp/config
      - ./../lib:/myapp/lib
      - ./../db:/myapp/db
      - ./../spec:/myapp/spec
  sidekiq:
    build:
      context: '..'
      dockerfile: 'docker/Dockerfile'
    environment:
      RAILS_ENV: development
      ELASTICSEARCH_URL: 192.168.1.109:9200
      MONGO_CONNECTION_STRING: 192.168.1.109:27017
      REDIS_URL: redis://redis:6379
    volumes:
      - ./../app:/myapp/app
      - ./../config:/myapp/config
      - ./../lib:/myapp/lib
      - ./../db:/myapp/db
      - ./../spec:/myapp/spec
    depends_on:
      - 'redis'
    command: 'sidekiq -C config/sidekiq.yml'
entrypoint.sh
#!/bin/bash
set -e
# Remove a potentially pre-existing server.pid for Rails.
rm -f /myapp/tmp/pids/server.pid
# Then exec the container's main process (what's set as CMD in the Dockerfile).
exec "$#"
mongoid.yml
development:
  clients:
    default:
      database: elagi
      hosts:
        - <%= ENV["MONGO_CONNECTION_STRING"] %>
      options:
        user: 'admin'
        password: 'admin123'
        max_pool_size: 20
        wait_queue_timeout: 15
  options:
    raise_not_found_error: false
How can I solve this problem?
You are juggling a lot of moving pieces.
First, the exception message you referenced indicates you are using an old version of the driver (mongo gem). Update to the current version to get improved diagnostics, including for this particular scenario, as well as bugfixes.
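For illustration only (the version numbers here are indicative, not prescriptive; check what your app actually needs):

# Gemfile — move to a current driver release
gem 'mongoid', '~> 7.0'   # pulls in a current mongo gem

# then re-resolve the bundle
bundle update mongoid mongo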
Then, start verifying that each piece is functioning by itself. You are running the database on the host; can you connect to it from the host machine? Are you able to connect to other services on the host from the app container (e.g. ssh)? Are you able to connect from the app container to other services (e.g. elasticsearch)?
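A rough sequence for those checks, assuming the mongo shell is installed on the host and bash exists in the app image (the IP is the one from your compose file):

# On the host: is mongod listening on the LAN address, not just 127.0.0.1?
mongo --host 192.168.1.109 --port 27017 --eval 'db.runCommand({ ping: 1 })'

# From the app container: raw TCP reachability to the host's services
docker-compose exec elagi_app bash -c 'cat < /dev/null > /dev/tcp/192.168.1.109/27017 && echo mongo reachable'
docker-compose exec elagi_app bash -c 'cat < /dev/null > /dev/tcp/192.168.1.109/9200 && echo elasticsearch reachable'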
I am running into issues while setting up and running a Docker instance on my local system with Ruby on Rails. Please see my Docker configuration files:
Dockerfile
FROM ruby:2.3.1
RUN useradd -ms /bin/bash web
RUN apt-get update -qq && apt-get install -y build-essential
RUN apt-get -y install nginx
RUN apt-get -y install sudo
# for postgres
RUN apt-get install -y libpq-dev
# for nokogiri
RUN apt-get install -y libxml2-dev libxslt1-dev
# for a JS runtime
RUN apt-get install -y nodejs
RUN apt-get update
# For docker cache
WORKDIR /tmp
ADD ./Gemfile Gemfile
ADD ./Gemfile.lock Gemfile.lock
ENV BUNDLE_PATH /bundle
RUN gem install bundler --no-rdoc --no-ri
RUN bundle install
# END
ENV APP_HOME /home/web/cluetap_app
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
ADD . $APP_HOME
RUN chown -R web:web $APP_HOME
ADD ./nginx/nginx.conf /etc/nginx/
RUN unlink /etc/nginx/sites-enabled/default
ADD ./nginx/cluetap_nginx.conf /etc/nginx/sites-available/
RUN ln -s /etc/nginx/sites-available/cluetap_nginx.conf /etc/nginx/sites-enabled/cluetap_nginx.conf
RUN usermod -a -G sudo web
docker-compose.yml
version: '2'
services:
  postgres:
    image: 'postgres:9.6'
    environment:
      - PGDATA=/var/lib/postgresql/data/pgdata
      - POSTGRES_PASSWORD=
      - POSTGRES_USER=postgres
      - POSTGRES_HOST=cluetapapi_postgres_1
    networks:
      - default
      - service-proxy
    ports:
      - '5432:5432'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    labels:
      description: "Postgresql Database"
      service: "postgresql"
  web:
    container_name: cluetap_api
    build: .
    command: bash -c "thin start -C config/thin/development.yml && nginx -g 'daemon off;'"
    volumes:
      - .:/home/web/cluetap_app
    ports:
      - "80:80"
    depends_on:
      - 'postgres'
networks:
  service-proxy:
volumes:
  postgres:
When I run docker-compose build and docker-compose up -d, both commands succeed, but when I hit the URL the app throws an internal server error:
Unexpected error while processing request: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
I have tried some solutions, but they did not work for me. Please guide me; I am new to Docker and AWS.
The issue is that you are trying to connect to localhost inside the container for the DB. The 5432:5432 port mapping for postgres maps the container's 5432 to localhost of your host machine.
Your web code, however, runs inside its own container, and there is nothing listening on its localhost:5432.
So you need to change the connection details in your config to connect to postgres:5432; this works because you named the Postgres service postgres, and Compose makes service names resolvable between containers.
Change that and it should work.
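A minimal sketch of that change in config/database.yml (keys follow standard Rails conventions; the database name is hypothetical and should match yours):

development:
  adapter: postgresql
  encoding: unicode
  host: postgres                    # the Compose service name, not localhost
  port: 5432
  username: postgres
  password: ''
  database: cluetap_development     # hypothetical name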
By default the postgres image already exposes 5432, so you can remove that part from your yml.
Then, if you would like to check that the web service can connect to your postgres service, run docker-compose exec web curl postgres:5432; it should return:
curl: (52) Empty reply from server
If it cannot connect it will return:
curl: (6) Could not resolve host: postgres or curl: (7) Failed to connect to postgres port 5432: Connection refused
UPDATE:
I see the problem now: you are trying to connect on localhost, but you should connect to the postgres service.
I had this same issue when working on a Rails 6 application in Ubuntu 20.04 using Docker and Traefik.
In my case I was trying to connect the Ruby on Rails application running in a docker container to the PostgreSQL database running on the host.
So each time I try to connect to the database I get the error:
Unexpected error while processing request: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
Here's how I fixed it:
First, some context: the applications running in containers are exposed to the host via Traefik, which maps to the host on port 80.
Firstly, I had to modify my PostgreSQL configuration to accept remote connections from other IP addresses, as sketched below. This Stack Overflow answer can help with that: PostgreSQL: FATAL - Peer authentication failed for user (PG::ConnectionBad)
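In outline, that usually means two standard PostgreSQL edits (file paths vary by version; treat this as an illustration rather than the exact diff):

# /etc/postgresql/12/main/postgresql.conf
listen_addresses = '*'                 # listen on more than 127.0.0.1

# /etc/postgresql/12/main/pg_hba.conf
host  all  all  172.16.0.0/12  md5     # allow Docker's default bridge ranges

followed by a restart, e.g. sudo systemctl restart postgresql.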
Secondly, I had to create a Docker network to be shared by Traefik and the applications that proxy through it:
docker network create traefik_default
Thirdly, I set up the applications in the docker-compose.yml file to use the network I just created:
version: "3"
services:
app:
build:
context: .
dockerfile: Dockerfile
env_file:
- .env
environment:
RAILS_ENV: ${RAILS_ENV}
RACK_ENV: ${RACK_ENV}
POSTGRES_USER: ${DATABASE_USER}
POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
POSTGRES_DB: ${DATABASE_NAME}
POSTGRES_HOST_AUTH_METHOD: ${DATABASE_HOST}
POSTGRES_PORT: ${DATABASE_PORT}
expose:
- ${RAILS_PORT}
networks:
- traefik_default
labels:
- traefik.enable=true
- traefik.http.routers.my_app.rule=Host(`${RAILS_HOST}`)
- traefik.http.services.my_app.loadbalancer.server.port=${NGINX_PORT}
- traefik.docker.network=traefik_default
restart: always
volumes:
- .:/app
- gem-cache:/usr/local/bundle/gems
- node-modules:/app/node_modules
web-server:
build:
context: .
dockerfile: ./nginx/Dockerfile
depends_on:
- app
expose:
- ${NGINX_PORT}
restart: always
volumes:
- .:/app
networks:
traefik_default:
external: true
volumes:
gem-cache:
node-modules:
Finally, in my .env file, I specified the private IP address of my host machine as the DATABASE_HOST environment variable:
DATABASE_NAME=my_app_development
DATABASE_USER=my-username
DATABASE_PASSWORD=passsword1
DATABASE_HOST=192.168.0.156
DATABASE_PORT=5432
RAILS_HOST=my_app.localhost
NGINX_PORT=80
RAILS_ENV=development
RACK_ENV=development
RAILS_MASTER_KEY=e879cbg21ff58a9c50933fe775a74d00
RAILS_PORT=3000
I also had this problem just now. My solution was:
Remove the ports mapping '5432:5432' from the Postgres service.
Change POSTGRES_HOST=cluetapapi_postgres_1 to POSTGRES_HOST=localhost.
When you need to access your db, use a URL like sqlalchemy.url = postgresql+psycopg2://postgres:password@postgres/dbname
I'm having an issue with my travis-ci before_script while trying to connect to my docker postgres container:
Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use
I've seen this problem raised but never fully addressed around SO and GitHub issues, and I'm not clear whether it is specific to Docker or Travis. One linked issue (below) works around it by using 5433 as the host postgres port, but I'd like to know for sure what is going on before I jump into something.
My travis.yml:
sudo: required
services:
  - docker
env:
  DOCKER_COMPOSE_VERSION: 1.7.1
  DOCKER_VERSION: 1.11.1-0~trusty
before_install:
  # list docker-engine versions
  - apt-cache madison docker-engine
  # upgrade docker-engine to specific version
  - sudo apt-get -o Dpkg::Options::="--force-confnew" install -y docker-engine=${DOCKER_VERSION}
  # upgrade docker-compose
  - sudo rm /usr/local/bin/docker-compose
  - curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - sudo mv docker-compose /usr/local/bin
before_script:
  - echo "Before Script:"
  - docker-compose -f docker-compose.ci.yml build
  - docker-compose -f docker-compose.ci.yml run app rake db:setup
  - docker-compose -f docker-compose.ci.yml run app /bin/sh
script:
  - echo "Running Specs:"
  - rake spec
My docker-compose.yml for CI:
postgres:
  image: postgres:9.4.5
  environment:
    POSTGRES_USER: web
    POSTGRES_PASSWORD: yourpassword
  expose:
    - '5432' # added this as an attempt to open the port
  ports:
    - '5432:5432'
  volumes:
    - web-postgres:/var/lib/postgresql/data
redis:
  image: redis:3.0.5
  ports:
    - '6379:6379'
  volumes:
    - web-redis:/var/lib/redis/data
web:
  build: .
  links:
    - postgres
    - redis
  volumes:
    - ./code:/app
  ports:
    - '8000:8000'
  # env_file: # setting these directly in the environment
  #   - .docker.env # (they work fine locally)
sidekiq:
  build: .
  command: bundle exec sidekiq -C code/config/sidekiq.yml
  links:
    - postgres
    - redis
  volumes:
    - ./code:/app
Docker & Postgres: Failed to bind tcp 0.0.0.0:5432 address already in use
How to get Docker host IP on Travis CI?
It seems that the Postgres service is enabled by default in Travis CI. So you could:
Try to disable the Postgres service in your Travis config. See How to stop services on Travis CI running by default? See also https://docs.travis-ci.com/user/database-setup/#PostgreSQL.
Or
Map your postgres container to another host port (!= 5432), like -p 5455:5432 (a sketch follows below).
It could also be useful to check whether the service is already running: Check If a Particular Service Is Running on Ubuntu.
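A sketch of the second option in the CI compose file (host port 5455 is arbitrary; containers that link to postgres still use 5432 internally):

postgres:
  image: postgres:9.4.5
  ports:
    - '5455:5432'   # host:container — sidesteps Travis' own postgres on 5432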
Do you use Travis' Postgres?
services:
  - postgresql
It would be easier if you provided your travis.yml.
I know I am missing something very basic here. I have seen some of the older questions on persisting data with Docker, but I think I am following the most recent documentation found here.
I have a Rails app that I am trying to run in Docker. It runs fine, but every time I start it up I get ActiveRecord::NoDatabaseError. After I create the database and migrate it, the app runs fine, until I shut it down and restart it.
Here is my Dockerfile:
FROM ruby:2.3.0
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
ENV RAILS_ROOT /ourlatitude
RUN mkdir -p $RAILS_ROOT/tmp/pids
WORKDIR $RAILS_ROOT
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN gem install bundler
RUN bundle install
COPY . .
And here is my docker-compose.yml file:
version: '2'
services:
  db:
    image: postgres:9.4.5
  app:
    build: .
    environment:
      RAILS_ENV: $RAILS_ENV
    ports:
      - "3000:3000"
    command: bundle exec rails s -b 0.0.0.0
    volumes:
      - .:/ourlatitude/database
    depends_on:
      - db
The basic flow I am following is this:
export RAILS_ENV=development
docker-compose build
docker-compose up
docker-compose run app rake db:create
docker-compose run app rake db:migrate
At this point the app runs fine,
but then I do this:
docker-compose down
docker-compose up
and then I am back to the ActiveRecord::NoDatabaseError
So, as I said, I think I am missing something very basic.
It doesn't look like you put your Postgres data on a volume; you may also be missing other persistent data sources in your app container, and it appears you missed some indentation in your app container definition.
version: '2'
services:
  db:
    image: postgres:9.4.5
    volumes:
      - postgres-data:/var/lib/postgresql/data
  app:
    build: .
    environment:
      RAILS_ENV: $RAILS_ENV
    ports:
      - "3000:3000"
    command: bundle exec rails s -b 0.0.0.0
    volumes:
      - .:/ourlatitude/database
    depends_on:
      - db
volumes:
  postgres-data:
    driver: local
In the example above, the postgres data is stored in a named volume. See the advice on docker hub for more details on persisting data for that application. If you are still losing data, check the output of docker diff $container_id on a container to see what files are changing outside of your volumes that would be lost on a down/up.
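For example, assuming the database service is named db as above, a one-liner to diff it by its Compose-managed container ID:

docker diff $(docker-compose ps -q db)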
I managed to get this to work properly using the following docker-compose.yml file.
version: '2'
volumes:
  postgres-data:
services:
  db:
    image: postgres:9.4.5
    volumes:
      - postgres-data:/var/lib/postgresql/data
  app:
    build: .
    environment:
      RAILS_ENV: $RAILS_ENV
    ports:
      - "3000:3000"
    command: bundle exec rails s -b 0.0.0.0
    depends_on:
      - db
The key was to add:
volumes:
  postgres-data:
which creates the named volume, and then:
volumes:
  - postgres-data:/var/lib/postgresql/data
under the db section, which maps the named volume to the expected location in the container, /var/lib/postgresql/data.
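To confirm the data now survives a down/up cycle, you can check that the named volume exists (Compose prefixes the volume name with the project directory, so the full name here is illustrative):

docker volume ls | grep postgres-data
docker volume inspect ourlatitude_postgres-data   # hypothetical full name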
I'm trying to configure a simple LAMP app.
Here is my Dockerfile
FROM ubuntu
# ...
RUN apt-get update
RUN apt-get -yq install apache2
# ...
WORKDIR /data
And my docker-compose.yml
db:
  image: mysql
web:
  build: .
  ports:
    - 80:80
  volumes:
    - .:/data
  links:
    - db
  command: /data/run.sh
After docker-compose build & up I was expecting to find db added to /etc/hosts (inside the web container), but it's not there.
How can this be explained? What am I doing wrong?
Note 1: At up time, I see only Attaching to myapp_web_1; shouldn't I also see myapp_db_1?
Note 2: I'm using boot2docker.
Following @Alexandru_Rosianu's comment, I checked:
$ docker-compose logs db
error: database is uninitialized and MYSQL_ROOT_PASSWORD not set
Did you forget to add -e MYSQL_ROOT_PASSWORD=... ?
Once I set the variable MYSQL_ROOT_PASSWORD:
$ docker-compose up
Attaching to myapp_db_1, myapp_web_1
db_1 | Running mysql_install_db
db_1 | ...
I can see the whole db log, and the db host is effectively set in web's /etc/hosts.
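The same fix expressed in the compose file, so it persists across runs (the password value is a placeholder):

db:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: secret   # any non-empty value satisfies the image's init check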