I have a Rails app. In the development and test environments, I want the Rails app to connect to a dockerized Postgres. The Rails app itself will not be in a container though - just Postgres.
What should my database.yml look like?
I have a docker default machine running. I created docker-compose.yml:
postgres:
  image: postgres
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_USER=timbuktu
    - POSTGRES_PASSWORD=mysecretpassword
I ran docker-compose up to get Postgres running.
Then I ran docker-machine ip default to get the IP address of the Docker virtual machine, and I updated database.yml accordingly:
...
development:
  adapter: postgresql
  host: 192.168.99.100
  port: 5432
  database: timbuktu_development
  username: timbuktu
  password: mysecretpassword
  encoding: unicode
  pool: 5
...
So all is well and I can connect to Postgres in its container.
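As a sanity check, assuming the psql client is installed on the host, something like this should connect directly (it prompts for the password):
psql -h 192.168.99.100 -p 5432 -U timbuktu timbuktu_development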
But, if someone else pulls the repo, they won't be able to connect to Postgres using my database.yml, because the IP address of their Docker default machine will be different from mine.
So how can I change my database.yml to account for this?
One idea I have is to ask them to get the IP address of their Docker default machine by running docker-machine env default, and pasting the DOCKER_HOST line into their .bashrc. For example,
export DOCKER_HOST="tcp://192.168.99.100:2376"
Then my database.yml can include the line
host: <%= ENV['DOCKER_HOST'].match(/tcp:\/\/(.+):\d{3,}/)[1] %>
But this feels ugly and hacky. Is there a better way?
You could set an environment variable with the correct value first, and access it from your database.yml:
host: <%= ENV['POSTGRES_IP'] %>
With a .bashrc like the following (docker-machine env prints several export lines, so eval it to set DOCKER_HOST, then use bash substring removal to strip the tcp:// prefix and the :port suffix):
eval "$(docker-machine env default)"   # sets DOCKER_HOST, e.g. tcp://192.168.99.100:2376
POSTGRES_IP=${DOCKER_HOST#tcp://}      # strip the tcp:// prefix
export POSTGRES_IP=${POSTGRES_IP%:*}   # strip the :port suffix, leaving the bare IP
I found a simpler way (.strip removes the trailing newline from the command output):
host: <%= `docker-machine ip default`.strip %>
Related
I am trying to connect to a remote PostgreSQL database using the bitnami/phppgadmin Docker image.
How do I specify the host name?
phppgadmin:
  image: "bitnami/phppgadmin:7.13.0"
  ports:
    - "8080:8080"
    - '443:8443'
  environment:
    PHP_PG_ADMIN_SERVER_HOST: 'xx.xx.xx.xx'
    PHP_PG_ADMIN_SERVER_PORT: 5432
I am trying this, but I am not able to log in on the dashboard.
I have set the environment variables based on the dockage/phppgadmin image, but bitnami has no such options.
Every image on Docker Hub has a corresponding page; you can look at https://hub.docker.com/r/bitnami/phppgadmin. That has an "Environment variables" section, which documents:
The phpPgAdmin instance can be customized by specifying environment variables on the first run. The following environment values are provided to customize phpPgAdmin:
DATABASE_HOST: Database server host. Default: postgresql.
So use DATABASE_HOST as the environment variable name. There is also DATABASE_PORT_NUMBER, but you don't need to set it explicitly since it defaults to the standard PostgreSQL port.
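A minimal sketch of the corrected service definition (the host value remains a placeholder for your remote server):
phppgadmin:
  image: "bitnami/phppgadmin:7.13.0"
  ports:
    - "8080:8080"
    - '443:8443'
  environment:
    DATABASE_HOST: 'xx.xx.xx.xx'  # your remote PostgreSQL host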
I have an app inside a Docker container, based on an Elixir image, that needs to connect to a database and run tests using a GitLab runner.
The build stage works fine but there is a problem to connect to a database to run tests. I tried both connecting to a service and running another database container, but from the logs it looks like the problem is with the Phoenix app:
** (RuntimeError) :database is nil in repository configuration
lib/ecto/adapters/postgres.ex:121: Ecto.Adapters.Postgres.storage_up/1
lib/mix/tasks/ecto.create.ex:40: anonymous fn/3 in Mix.Tasks.Ecto.Create.run/1
(elixir) lib/enum.ex:675: Enum."-each/2-lists^foreach/1-0-"/2
(elixir) lib/enum.ex:675: Enum.each/2
(mix) lib/mix/task.ex:301: Mix.Task.run_task/3
(mix) lib/mix/cli.ex:75: Mix.CLI.run_task/2
This is what the config/test.exs file looks like:
config :app, App.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: System.get_env("POSTGRES_USER"),
  password: System.get_env("POSTGRES_PASSWORD"),
  database: System.get_env("POSTGRES_DB"),
  hostname: System.get_env("POSTGRES_HOSTNAME"),
  pool: Ecto.Adapters.SQL.Sandbox
This is the output from the runner:
$ docker run --rm -t $CONTAINER echo $MIX_ENV $POSTGRES_USER $POSTGRES_HOSTNAME $POSTGRES_DB
test username db test_db
I'm trying to figure out why I get this error :database is nil, and if it is related to Gitlab, Ecto or Phoenix.
Edit
I wrote static values into the config/*.exs files (for some reason it didn't pick up the environment variables), but now it can't find the PostgreSQL hostname, even though the PostgreSQL instance is running. I checked that the instance is running with docker ps.
Based on the message :database is nil in repository configuration it seems to me like your POSTGRES_DB variable is not set. You can try to change that configuration line to
database: System.get_env("POSTGRES_DB") || "postgres"
to see whether you still get the same error. If you don't, you can debug from there.
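Note also that in the runner output above, the variables were expanded by the host shell before docker run executed, so it only proves the runner has them set. To check what the container itself sees, single-quote the command (same variable names as in the question):
docker run --rm -t $CONTAINER sh -c 'echo $MIX_ENV $POSTGRES_USER $POSTGRES_HOSTNAME $POSTGRES_DB'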
How does one go about configuring a .gitlab-ci.yml file for a Rails app that depends on PostgreSQL and Elasticsearch (via the searchkick gem) to run my tests when I push to GitLab?
I wanted to post this question, as it took me far too long to find the answer and I don't want others to feel my pain. The example below not only builds my application, but also runs all my specs.
Setup
Rails 5+
PostgreSQL 9.6
RSpec gem
Searchkick gem (handles Elasticsearch queries and configuration)
Configuration
Add the following files to your Rails app with the configurations listed.
config/gitlab-ci/gitlab-database.yml
test:
  adapter: postgresql
  encoding: unicode
  pool: 5
  timeout: 5000
  host: postgres
  database: test_db
  user: runner
  password: ""
.gitlab-ci.yml
image: ruby:2.4.1
services:
  - postgres:latest
  - elasticsearch:latest
variables:
  POSTGRES_DB: test_db
  POSTGRES_USER: runner
  POSTGRES_PASSWORD: ""
  ELASTICSEARCH_URL: "http://elasticsearch:9200"
stages:
  - test
before_script:
  - bundle install --without postgres production --jobs $(nproc) "${FLAGS[@]}"
  - cp config/gitlab-ci/gitlab-database.yml config/database.yml
  - RAILS_ENV=test bundle exec rails db:create db:schema:load
test:
  stage: test
  script:
    - bundle exec rspec
And that's it! You're now configured to auto-run your specs on GitLab for each push.
Further Explanation
Let's start with PostgreSQL. When we start our new runner, the application we're copying in won't know how to connect to Postgres properly. So we create a new database.yml file, prefixed with gitlab- so it doesn't conflict with our actual configuration, and copy it into the runner's config directory. The cp command not only copies the file, but replaces it if it already exists.
The items GitLab connects with are database:, user:, and password:. We match them by giving the corresponding environment variables (POSTGRES_DB, POSTGRES_USER, POSTGRES_PASSWORD) the same values, ensuring everything connects properly.
Okay, connecting to PostgreSQL is well explained and documented on GitLab's website. So how did I get Elasticsearch working, which isn't explained very well anywhere?
The magic happens again in variables. We needed to set the ELASTICSEARCH_URL environmental variable, made available to us through the Searchkick gem, as Elasticsearch looks for http://localhost:9200 by default. But since we're using Elasticsearch through a service, we need to explicitly tell it to not use the default and use our service's hostname. So we then replaced http://localhost:9200 with http://elasticsearch:9200, which does map properly to our service.
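If you prefer to configure the client in code rather than rely on the environment variable alone, something like this initializer should work (a sketch; Searchkick honors ELASTICSEARCH_URL on its own):
# config/initializers/elasticsearch.rb
Searchkick.client = Elasticsearch::Client.new(
  url: ENV.fetch("ELASTICSEARCH_URL", "http://localhost:9200")
)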
I'm having some trouble deploying a stack to a local 3-node swarm (for now) created with docker-machine.
All services are in the same network, and when working locally with docker-compose I have no problem connecting services using service names as hostnames, for instance:
# docker-compose.yml
---
version: '3'
services:
  app:
    build: .
    # ...
    depends_on:
      - db
  db:
    image: postgres:9.6
We are talking about a Ruby application, and the database configuration is the following:
# database.yml
---
default: &default
  # ... typical stuff
  username: postgres
  host: postgres # The name of the service in docker-compose.yml
development:
  <<: *default
production:
  <<: *default
  host: <%= ENV['DATABASE_HOST'] %>
  database: <%= ENV['DATABASE_NAME'] %>
  username: <%= ENV['DATABASE_USERNAME'] %>
  password: <%= ENV['DATABASE_PASSWORD'] %>
AFAIK this looks pretty standard when developing locally. In fact, the app service connects correctly to the postgres service using the service name as hostname. This happens also with a sidekiq service and a rabbitmq one.
The issue, and I'm almost 100% sure I'm missing something very basic, is that when the stack is deployed to a swarm, the services are not able to see each other, so when I check the logs for some services I see that connections to the app service are being refused.
I'm not sure whether I have to configure something on the manager node, or whether the manager is able to handle the routing between host names, service names, and physical addresses on its own.
I'd really appreciate it if someone could point me in the right direction on how to configure services for deploying to a swarm. As I mentioned, this is a local swarm with 3 nodes created using docker-machine create..., and I have already run eval $(docker-machine env manager-node).
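For reference, the steps look roughly like this (the stack name is a placeholder of mine):
docker-machine create --driver virtualbox manager-node    # plus two worker nodes
eval $(docker-machine env manager-node)
docker swarm init --advertise-addr $(docker-machine ip manager-node)
docker stack deploy -c docker-compose.yml mystack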
Thanks in advance!
I have a Rails repo on Travis. It has a docker-compose.yml file:
postgres:
  image: postgres
  ports:
    - "5433:5432"
  environment:
    - POSTGRES_USER=calories
    - POSTGRES_PASSWORD=secretpassword
(I had to use 5433 as the host port because 5432 gave me an error: Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use)
And a .travis.yml:
sudo: required
services:
  - docker
language: ruby
cache: bundler
before_install:
  # Install docker-compose
  - curl -L https://github.com/docker/compose/releases/download/1.4.0/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - sudo mv docker-compose /usr/local/bin
  # TODO: Remove this temporary fix when it's safe to:
  # https://github.com/travis-ci/travis-ci/issues/4778
  - sudo iptables -N DOCKER || true
  - sleep 10
  - docker-compose up -d
before_script:
  - bundle exec rake db:setup
script:
  - bundle exec rspec spec
after_script:
  - docker-compose stop
  - docker-compose rm -f
I am trying to figure out what to put in my database.yml so my tests can run on Travis CI. In my other environments, I can do:
adapter: postgresql
encoding: unicode
host: <%= `docker-machine ip default` %>
port: 5433
username: calories
password: secretpassword
# For details on connection pooling, see rails configuration guide
# http://guides.rubyonrails.org/configuring.html#database-pooling
pool: 5
But unfortunately this doesn't work on Travis because there is no docker-machine on Travis. I get an error: docker-machine: command not found
How can I get the Docker host's IP on Travis?
I think what you want is actually the container IP, not the docker engine IP. On your desktop you had to query docker-machine for the IP because the VM docker-machine created wasn't forwarding the port.
Since you're exposing a host port, you can actually use localhost for the host value.
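For the setup in the question, that would look something like this in database.yml (a sketch reusing the question's credentials):
adapter: postgresql
encoding: unicode
host: localhost
port: 5433
username: calories
password: secretpassword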
There are two other options as well:
run the tests in a container and link to the database container, so you can just use postgres as the host value.
if you don't want to use a host port, you can use https://github.com/swipely/docker-api (or some other ruby client) to query the docker API for the container IP, and use that for the host value. Look for the inspect or inspect container API call.
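For the second option, a sketch using the docker-api gem (this assumes the database container is literally named postgres; docker-compose usually prefixes the project name):
require 'docker'
container = Docker::Container.get('postgres')        # the "inspect container" API call
ip = container.info['NetworkSettings']['IPAddress']  # the container's bridge-network IP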
For others coming across this, you should be able to get the host IP by running
export HOST_IP_ADDRESS="$(/sbin/ip route|awk '/default/ { print $3 }')"
from within the container. You can then edit your database configuration via a script to insert this in - it'll be in the $HOST_IP_ADDRESS variable.
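For example, something along these lines (HOST_PLACEHOLDER is a token of my own invention that you would put in database.yml beforehand):
# substitute the placeholder host in database.yml with the discovered IP
sed -i "s/HOST_PLACEHOLDER/$HOST_IP_ADDRESS/" config/database.yml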
However, like dnephin said, I'm not sure this is what you want. This would probably work if you were running Travis' Postgres service and needed to access it from within a container (depending on which IP address they bind it to).
But it appears you're running it the opposite way, in which case I'm fairly sure localhost should get you there. You might need to try some other debugging steps to make sure the container has started and is ready, etc.
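As one such check, you could wait until Postgres is accepting connections before running the suite (pg_isready ships with the Postgres client tools; port 5433 matches the question's mapping):
until pg_isready -h localhost -p 5433 -U calories; do sleep 1; done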
EDIT: If localhost definitely isn't working, have you tried 127.0.0.1?