Could not find Docker hostname on GitLab CI

I have an app inside a Docker container, based on an Elixir image, that needs to connect to a database and run tests using a GitLab runner.
The build stage works fine, but there is a problem connecting to the database in order to run the tests. I tried both connecting to a service and running another database container, but from the logs it looks like the problem is with the Phoenix app:
** (RuntimeError) :database is nil in repository configuration
    lib/ecto/adapters/postgres.ex:121: Ecto.Adapters.Postgres.storage_up/1
    lib/mix/tasks/ecto.create.ex:40: anonymous fn/3 in Mix.Tasks.Ecto.Create.run/1
    (elixir) lib/enum.ex:675: Enum."-each/2-lists^foreach/1-0-"/2
    (elixir) lib/enum.ex:675: Enum.each/2
    (mix) lib/mix/task.ex:301: Mix.Task.run_task/3
    (mix) lib/mix/cli.ex:75: Mix.CLI.run_task/2
This is what the config/test.exs file looks like:
config :app, App.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: System.get_env("POSTGRES_USER"),
  password: System.get_env("POSTGRES_PASSWORD"),
  database: System.get_env("POSTGRES_DB"),
  hostname: System.get_env("POSTGRES_HOSTNAME"),
  pool: Ecto.Adapters.SQL.Sandbox
This is the output from the runner:
$ docker run --rm -t $CONTAINER echo $MIX_ENV $POSTGRES_USER $POSTGRES_HOSTNAME $POSTGRES_DB
test username db test_db
I'm trying to figure out why I get the error :database is nil, and whether it is related to GitLab, Ecto, or Phoenix.
Edit
I wrote static values in the config/*.exs files (for some reason the environment variables weren't picked up), but now the app can't find the PostgreSQL hostname, even though the PostgreSQL instance is running.
I checked that the instance is running with docker ps.

Based on the message :database is nil in repository configuration, it seems your POSTGRES_DB variable is not set. You can try changing that configuration line to
database: System.get_env("POSTGRES_DB") || "postgres"
to see whether you still get the same error. If you don't, you can debug from there.
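One more thing worth checking, though it is an assumption rather than something visible in the logs: in the docker run --rm -t $CONTAINER echo ... check above, the variables are expanded by the runner's shell before the container starts, so the output only proves they exist on the runner, not inside the container. A minimal sketch that passes them through and evaluates them inside the container:
# Pass the variables into the container with -e and quote the command
# so the shell inside the container expands them (hypothetical sketch):
docker run --rm -t \
  -e MIX_ENV -e POSTGRES_USER -e POSTGRES_PASSWORD \
  -e POSTGRES_DB -e POSTGRES_HOSTNAME \
  $CONTAINER sh -c 'echo $MIX_ENV $POSTGRES_USER $POSTGRES_HOSTNAME $POSTGRES_DB'
If the second form prints blanks, the container never receives the variables and System.get_env/1 will return nil at test time.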

Related

Rails container cannot connect to mysql container with gitlab ci

I am setting up a simple GitLab CI pipeline for a Rails app with build, test, and release stages:
build:
  stage: build
  script:
    - docker build --pull -t $TEST_IMAGE .
    - docker push $TEST_IMAGE

test:
  stage: test
  services:
    - docker:dind
  script:
    - docker pull $TEST_IMAGE
    - docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=mysql_strong_password mysql:5.7
    - docker run -e RAILS_ENV=test --link mysql:db $TEST_IMAGE bundle exec rake db:setup
The build stage succeeds in building the Docker image and pushing it to the registry.
The test stage launches another MySQL container, which I use as my database host, but it fails when establishing the connection to MySQL:
Couldn't create database for {"host"=>"db", "adapter"=>"mysql2", "pool"=>5, "username"=>"root", "encoding"=>"utf8", "timeout"=>5000, "password"=>"mysql_strong_password", "database"=>"my_tests"}, {:charset=>"utf8"}
(If you set the charset manually, make sure you have a matching collation)
rails aborted!
Mysql2::Error: Can't connect to MySQL server on 'db' (111 "Connection refused")
I also tried creating a separate Docker network with --network instead of the --link approach; that did not help either.
This happens only on the GitLab runner instance. When I perform the same steps on my local machine it works fine.
After much reading I am starting to think it is a bug in the Docker executor. Am I missing something?
Connection refused indicates that the containers can reach each other, but the target container has nothing accepting connections on the selected port. This most likely means you are starting your application before the database has finished initializing. My recommendation is to create an entrypoint in your application container that polls the database until it is up, and fails after a few minutes if it never becomes available. I'd also recommend using networks rather than links, since links are deprecated and do not gracefully handle containers being recreated.
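A sketch of the networks-instead-of-links approach, reusing the variables from the question (the network name ci-net is illustrative; note the database hostname becomes the container name mysql rather than the link alias db):
# create a user-defined bridge network and attach both containers to it
docker network create ci-net
docker run -d --name mysql --network ci-net \
  -e MYSQL_ROOT_PASSWORD=mysql_strong_password mysql:5.7
# the app now reaches the database at hostname "mysql", not "db"
docker run --network ci-net -e RAILS_ENV=test \
  $TEST_IMAGE bundle exec rake db:setup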
The behavior you're seeing is documented in the mysql image:
No connections until MySQL init completes
If there is no database initialized when the container starts, then a default database will be created. While this is the expected behavior, this means that it will not accept incoming connections until such initialization completes. This may cause issues when using automation tools, such as docker-compose, which start several containers simultaneously.
If the application you're trying to connect to MySQL does not handle MySQL downtime or waiting for MySQL to start gracefully, then putting a connect-retry loop before the service starts might be necessary. For an example of such an implementation in the official images, see WordPress or Bonita.
From the linked wordpress example, you can see their retry code:
$maxTries = 10;
do {
    $mysql = new mysqli($host, $user, $pass, '', $port, $socket);
    if ($mysql->connect_error) {
        fwrite($stderr, "\n" . 'MySQL Connection Error: (' . $mysql->connect_errno . ') ' . $mysql->connect_error . "\n");
        --$maxTries;
        if ($maxTries <= 0) {
            exit(1);
        }
        sleep(3);
    }
} while ($mysql->connect_error);
A sample entrypoint script to wait for mysql without changing your application itself could look like:
#!/bin/sh
wait-for-it.sh mysql:3306 -t 300
exec "$@"
The wait-for-it.sh script comes from vishnubob/wait-for-it, and the exec "$@" at the end replaces PID 1 with the command you passed (e.g. bundle exec rake db:setup). The downside of this approach is that the database could be listening on the port before it is really ready to accept connections, so I still recommend doing a full login with your application in a retry loop.
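If you'd rather retry a real login than a bare TCP check, here is a minimal shell sketch of such an entrypoint, assuming the mysql client is installed in the application image and the database is reachable at hostname mysql (both assumptions, adjust to your setup):
#!/bin/sh
# retry a real login for up to ~5 minutes before giving up
tries=0
until mysql -h mysql -u root -p"$MYSQL_ROOT_PASSWORD" -e 'SELECT 1' >/dev/null 2>&1; do
  tries=$((tries + 1))
  if [ "$tries" -ge 60 ]; then
    echo "database never became ready" >&2
    exit 1
  fi
  sleep 5
done
exec "$@"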

GitLab CI for Rails App Using Postgres and Elasticsearch (searchkick gem)

How does one go about configuring a .gitlab-ci.yml file for a Rails app that depends on PostgreSQL and Elasticsearch (via the searchkick gem) so that the tests run on every push to GitLab?
I wanted to post this question, as it took me far too long to find the answer and I don't want others to feel my pain. The example below not only builds my application but also runs all my specs.
Setup
Rails 5+
PostgreSQL 9.6
RSpec gem
Searchkick gem (handles Elasticsearch queries and configuration)
Configuration
Add the following files to your Rails app with the configurations listed.
config/gitlab-ci/gitlab-database.yml
test:
  adapter: postgresql
  encoding: unicode
  pool: 5
  timeout: 5000
  host: postgres
  database: test_db
  user: runner
  password: ""
.gitlab-ci.yml
image: ruby:2.4.1

services:
  - postgres:latest
  - elasticsearch:latest

variables:
  POSTGRES_DB: test_db
  POSTGRES_USER: runner
  POSTGRES_PASSWORD: ""
  ELASTICSEARCH_URL: "http://elasticsearch:9200"

stages:
  - test

before_script:
  - bundle install --without postgres production --jobs $(nproc) "${FLAGS[@]}"
  - cp config/gitlab-ci/gitlab-database.yml config/database.yml
  - RAILS_ENV=test bundle exec rails db:create db:schema:load

test:
  stage: test
  script:
    - bundle exec rspec
And that's it! You're now configured to auto-run your specs on GitLab for each push.
Further Explanation
Let's start with PostgreSQL. When our new runner starts, the application we copy in won't know how to connect to Postgres. So we create a new database.yml file, prefixed with gitlab- so it doesn't conflict with our actual configuration, and copy it into the runner's config directory. The cp command not only copies the file but also replaces the destination if it already exists.
The values we connect with are database:, user:, and password:. We set those same names in our environment variables, ensuring everything connects properly.
Okay, connecting to PostgreSQL is well explained and documented on GitLab's website. So how did I get Elasticsearch working, which isn't explained very well anywhere?
The magic happens again in variables. We need to set the ELASTICSEARCH_URL environment variable, which is read by the Searchkick gem, because Elasticsearch looks for http://localhost:9200 by default. Since we're running Elasticsearch as a service, we have to explicitly tell the gem not to use the default and to use our service's hostname instead. So we replace http://localhost:9200 with http://elasticsearch:9200, which maps properly to our service.
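If the Elasticsearch service is still booting when the specs start, the first queries can fail. A hedged sketch of a wait loop you could add to before_script, assuming curl is available in the ruby:2.4.1 image (it usually is):
# block until the Elasticsearch service answers HTTP requests
until curl -s "$ELASTICSEARCH_URL" >/dev/null; do sleep 2; done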

How to set up Travis CI and postgresql using custom db credentials?

I'm trying to set up custom Postgres credentials with Travis CI, as I'd like to avoid changing the credentials already defined in the code under test.
The testing code defines that the database should be accessed on:
'sqlalchemy.url': 'postgresql://foo:bar@localhost/testing_db'
I've therefore created a database.travis.yml file:
postgresql: &postgresql
  adapter: postgresql
  username: foo
  password: bar
  database: testing_db
...and added the following to my .travis.yml:
services:
  - postgresql

before_script:
  - psql -c 'create database stalker_test;' -U postgres
  - mkdir config && cp database.travis.yml config/database.yml
However, I am still getting this during testing:
OperationalError: (psycopg2.OperationalError) FATAL: role "foo" does not exist
What am I doing wrong?
Adding the following to .travis.yml solved my issue. No need for a database.travis.yml file.
before_script:
  - psql -c "CREATE DATABASE testing_db;" -U postgres
  - psql -c "CREATE USER foo WITH PASSWORD 'bar';" -U postgres
database.yml is a Ruby on Rails convention, and Travis CI started out with Rails/Ruby testing, so the docs may reflect that.
You most probably need to do your setup in a separate script or migration step rather than relying on Travis for anything beyond running the service.
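A sketch of what such a separate setup script might look like, consolidating the psql commands from the answer above (the file name and the OWNER clause are assumptions; adjust to whatever privileges your tests need):
#!/bin/sh
# hypothetical scripts/travis_db_setup.sh, called from before_script
set -e
psql -U postgres -c "CREATE USER foo WITH PASSWORD 'bar';"
psql -U postgres -c "CREATE DATABASE testing_db OWNER foo;"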

How to get Docker host IP on Travis CI?

I have a Rails repo on Travis. It has a docker-compose.yml file:
postgres:
image: postgres
ports:
- "5433:5432"
environment:
- POSTGRES_USER=calories
- POSTGRES_PASSWORD=secretpassword
(I had to use 5433 as the host port because 5432 gave me an error: Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use)
And a .travis.yml:
sudo: required

services:
  - docker

language: ruby
cache: bundler

before_install:
  # Install docker-compose
  - curl -L https://github.com/docker/compose/releases/download/1.4.0/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - sudo mv docker-compose /usr/local/bin
  # TODO: Remove this temporary fix when it's safe to:
  # https://github.com/travis-ci/travis-ci/issues/4778
  - sudo iptables -N DOCKER || true
  - sleep 10
  - docker-compose up -d

before_script:
  - bundle exec rake db:setup

script:
  - bundle exec rspec spec

after_script:
  - docker-compose stop
  - docker-compose rm -f
I am trying to figure out what to put in my database.yml so my tests can run on Travis CI. In my other environments, I can do:
adapter: postgresql
encoding: unicode
host: <%= `docker-machine ip default` %>
port: 5433
username: calories
password: secretpassword
# For details on connection pooling, see rails configuration guide
# http://guides.rubyonrails.org/configuring.html#database-pooling
pool: 5
But unfortunately this doesn't work on Travis, because docker-machine isn't available there; I get an error: docker-machine: command not found.
How can I get the Docker host's IP on Travis?
I think what you want is actually the container IP, not the Docker engine IP. On your desktop you had to query docker-machine for the IP because the VM that docker-machine created wasn't forwarding the port.
Since you're exposing a host port, you can actually use localhost for the host value.
There are two other options as well:
run the tests in a container and link to the database container, so you can just use postgres as the host value.
if you don't want to use a host port, you can use https://github.com/swipely/docker-api (or some other ruby client) to query the Docker API for the container IP, and use that for the host value. Look for the inspect container API call. A docker CLI equivalent is sketched below.
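For reference, the container IP can also be read straight from the docker CLI rather than through a Ruby client; a sketch (the container name postgres_1 is illustrative, check docker ps for the real compose-generated name):
docker inspect \
  -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' \
  postgres_1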
For others coming across this, you should be able to get the host IP by running
export HOST_IP_ADDRESS="$(/sbin/ip route|awk '/default/ { print $3 }')"
from within the container. You can then have a script edit your database configuration to insert the value held in $HOST_IP_ADDRESS.
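For example, a hedged sketch that substitutes the discovered IP into the config (the HOST_PLACEHOLDER token is hypothetical; you would have to put it in your database.yml yourself):
export HOST_IP_ADDRESS="$(/sbin/ip route | awk '/default/ { print $3 }')"
# replace the placeholder token with the real host IP
sed -i "s/HOST_PLACEHOLDER/$HOST_IP_ADDRESS/" config/database.yml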
However like dnephin said, I'm not sure this is what you want. This would probably work if you were running Travis' Postgres service and needed to access it from within a container (depending on which IP address they bind it to).
But it appears you're running it the opposite way around, in which case I'm fairly sure localhost should get you there. You might need to try some other debugging steps to make sure the container has started and is ready, etc.
EDIT: If localhost definitely isn't working, have you tried 127.0.0.1?
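For the "has it started and is it ready" check, a minimal sketch using pg_isready, assuming the PostgreSQL client tools are installed on the Travis host (an assumption; they are not part of the Docker setup above):
# poll the mapped host port until the server reports it accepts connections
until pg_isready -h 127.0.0.1 -p 5433; do sleep 2; done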

How to setup database.yml to connect to Postgres Docker container?

I have a Rails app. In the development and test environments, I want the Rails app to connect to a dockerized Postgres. The Rails app itself will not be in a container though - just Postgres.
What should my database.yml look like?
I have a docker default machine running. I created docker-compose.yml:
postgres:
  image: postgres
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_USER=timbuktu
    - POSTGRES_PASSWORD=mysecretpassword
I ran docker-compose up to get Postgres running.
Then I ran docker-machine ip default to get the IP address of the Docker virtual machine, and I updated database.yml accordingly:
...
development:
  adapter: postgresql
  host: 192.168.99.100
  port: 5432
  database: timbuktu_development
  username: timbuktu
  password: mysecretpassword
  encoding: unicode
  pool: 5
...
So all is well and I can connect to Postgres in its container.
But, if someone else pulls the repo, they won't be able to connect to Postgres using my database.yml, because the IP address of their Docker default machine will be different from mine.
So how can I change my database.yml to account for this?
One idea I have is to ask them to get the IP address of their Docker default machine by running docker-machine env default, and to paste the DOCKER_HOST line into their .bashrc. For example,
export DOCKER_HOST="tcp://192.168.99.100:2376"
Then my database.yml host can include the line
host: <%= ENV['DOCKER_HOST'].match(/tcp:\/\/(.+):\d{3,}/)[1] %>
But this feels ugly and hacky. Is there a better way?
You could set a correct environment variable first, and access it from your database.yml:
host: <%= ENV['POSTGRES_IP'] %>
With a bashrc like (using bash substring removal):
export DOCKER_HOST=$(docker-machine url default)  # e.g. tcp://192.168.99.100:2376
export POSTGRES_IP=${DOCKER_HOST#tcp://}          # strip the tcp:// prefix
export POSTGRES_IP=${POSTGRES_IP%:*}              # strip the trailing :port
I found a simpler way:
host: <%= `docker-machine ip default` %>
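One caveat, as an aside: teammates on native Linux won't have a default docker-machine at all. A hedged sketch that falls back to a local daemon when the command is missing or fails:
# use the docker-machine IP when available, otherwise assume a local daemon
export POSTGRES_IP=$(docker-machine ip default 2>/dev/null || echo 127.0.0.1)
You can then point database.yml at <%= ENV['POSTGRES_IP'] %> as in the first suggestion.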
