I have a Rails repo on Travis. It has a docker-compose.yml file:
postgres:
  image: postgres
  ports:
    - "5433:5432"
  environment:
    - POSTGRES_USER=calories
    - POSTGRES_PASSWORD=secretpassword
(I had to use 5433 as the host port because 5432 gave me an error: Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use)
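A check along these lines should show whatever is already holding the port (lsof is just one of several tools that can do this):
# list the process currently listening on 5432 on the host
sudo lsof -i :5432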
And a .travis.yml:
sudo: required
services:
  - docker
language: ruby
cache: bundler
before_install:
  # Install docker-compose
  - curl -L https://github.com/docker/compose/releases/download/1.4.0/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - sudo mv docker-compose /usr/local/bin
  # TODO: Remove this temporary fix when it's safe to:
  # https://github.com/travis-ci/travis-ci/issues/4778
  - sudo iptables -N DOCKER || true
  - sleep 10
  - docker-compose up -d
before_script:
  - bundle exec rake db:setup
script:
  - bundle exec rspec spec
after_script:
  - docker-compose stop
  - docker-compose rm -f
I am trying to figure out what to put in my database.yml so my tests can run on Travis CI. In my other environments, I can do:
adapter: postgresql
encoding: unicode
host: <%= `docker-machine ip default` %>
port: 5433
username: calories
password: secretpassword
# For details on connection pooling, see rails configuration guide
# http://guides.rubyonrails.org/configuring.html#database-pooling
pool: 5
But unfortunately this doesn't work on Travis because there is no docker-machine on Travis. I get an error: docker-machine: command not found
How can I get the Docker host's IP on Travis?
I think what you want is actually the container IP, not the docker engine IP. On your desktop you had to query docker-machine for the IP because the VM docker-machine created wasn't forwarding the port.
Since you're exposing a host port, you can actually use localhost for the host value.
There are two other options as well:
run the tests in a container and link to the database container, so you can just use postgres as the host value.
if you don't want to use a host port, you can use https://github.com/swipely/docker-api (or some other ruby client) to query the docker API for the container IP, and use that for the host value. Look for the inspect or inspect container API call.
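If you go the host-port route, one way to sanity-check the connection from the Travis build itself is a quick psql call; this sketch assumes the postgresql client is installed on the build image and reuses the credentials and the 5433 mapping from the compose file above:
# should print "1" if the published port is reachable from the build host
PGPASSWORD=secretpassword psql -h localhost -p 5433 -U calories -c 'SELECT 1;'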
For others coming across this, you should be able to get the host IP by running
export HOST_IP_ADDRESS="$(/sbin/ip route|awk '/default/ { print $3 }')"
from within the container. You can then have a script edit your database configuration to insert it; the value will be in the $HOST_IP_ADDRESS variable.
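For example, a small sed step could splice the value into the Rails config before the tests run (the HOST_PLACEHOLDER token here is purely illustrative):
export HOST_IP_ADDRESS="$(/sbin/ip route|awk '/default/ { print $3 }')"
# replace an illustrative placeholder token in database.yml with the discovered IP
sed -i "s/HOST_PLACEHOLDER/${HOST_IP_ADDRESS}/" config/database.yml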
However, as dnephin said, I'm not sure this is what you want. This would probably work if you were running Travis' Postgres service and needed to access it from within a container (depending on which IP address they bind it to).
But it appears you're running it the other way around, in which case I'm fairly sure localhost should get you there. You might need to try some other debugging steps to make sure the container has started and is ready, etc.
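For example, a small wait loop in before_install, instead of (or after) the fixed sleep 10, makes the readiness check explicit; this sketch assumes pg_isready (from the postgresql client tools) is available on the Travis image:
# retry for up to ~30 seconds until the published port accepts connections
for i in $(seq 1 30); do
  pg_isready -h localhost -p 5433 -U calories && break
  sleep 1
done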
EDIT: If localhost definitely isn't working, have you tried 127.0.0.1?
Related
I have a local project early in development which uses Nestjs and TypeORM to connect to a Docker postgres instance (called 'my_database_server'). Things were working on my old computer, an older Macbook Pro.
I've just migrated everything onto a new Macbook Pro with the new M2 chip (Apple silicon). I've downloaded the version of Docker Desktop that's appropriate for Apple silicon. It runs fine, it still shows 'my_database_server', it can launch that fine, and I can even use the Terminal to go into its Postgres db and see the data that existed in my old computer.
But I can't figure out how to adjust my project's config to get it to connect to this database. I've read in other articles that because Docker is now running on Apple silicon under emulation, the host should be different.
This is what my .env used to look like:
POSTGRES_HOST=127.0.0.1
POSTGRES_PORT=5432
POSTGRES_USER=postgres
On my new computer, the above doesn't connect. I have tried these other values for POSTGRES_HOST, many inspired by other SO posts, but these all yield Error: getaddrinfo ENOTFOUND _____ errors:
my_database_server (the container name)
docker (since I didn't use a docker-compose.yaml file - see below - I don't know what the 'service name' is in this case)
192.168.65.0/24 (the "Docker subnet" value in Docker Desktop > Preferences > Resources > Network)
Next, for some other values I tried, the code is trying to connect for a longer time, but it's getting stuck on something later in the process. With these, eventually I get Error: connect ETIMEDOUT ______:
192.168.65.0
172.17.0.2 (from another SO post, I tried the terminal command docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 78f6e532b324 - the last part being the container ID of my_database_server)
In case it helps, I originally set up this docker container using the script I found here, not using a docker-compose.yaml file. Namely, I ran this script once at the beginning:
#!/bin/bash
set -e
SERVER="my_database_server";
PW="mysecretpassword";
DB="my_database";
echo "echo stop & remove old docker [$SERVER] and starting new fresh instance of [$SERVER]"
(docker kill $SERVER || :) && \
(docker rm $SERVER || :) && \
docker run --name $SERVER -e POSTGRES_PASSWORD=$PW \
-e PGPASSWORD=$PW \
-p 5432:5432 \
-d postgres
# wait for pg to start
echo "sleep wait for pg-server [$SERVER] to start";
sleep 3;
# create the db
echo "CREATE DATABASE $DB ENCODING 'UTF-8';" | docker exec -i $SERVER psql -U postgres
echo "\l" | docker exec -i $SERVER psql -U postgres
What should be my new db config settings?
I never figured the above problem out, but it was blocking me, so I found a different way around it.
Per other SO questions, I decided to go with the more typical route of using a docker-compose.yml file to create the Docker container. In case it helps others with this problem, this is what the main part of my docker-compose.yml looks like:
version: '3'
services:
  db:
    image: postgres
    restart: always
    environment:
      - POSTGRES_USER=${DATABASE_USER}
      - POSTGRES_PASSWORD=${DATABASE_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
    container_name: postgres-db
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    ports:
      - "54320:5432"
I then always run this with docker-compose up -d, not starting the container through the Docker Desktop app (though after that command, you should see the new container light up in the app).
Then in .env, I have this critical part:
POSTGRES_HOST=localhost
POSTGRES_PORT=54320
I mapped Docker's internal 5432 to the localhost-accessible 54320 (a suggestion I found here). Doing "5432:5432" as other articles suggest was not working for me, for reasons I don't entirely understand.
Other articles will suggest changing the host to whatever the service name is in your docker-compose.yml (for the example above, it would be db) - this also did not work for me. I believe the "54320:5432" part maps the ports correctly so that host can remain localhost.
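As a quick check that the "54320:5432" mapping behaves as described, connecting through the published port from the host should work (assuming the psql client is installed locally and the same .env values are exported in your shell):
# connect from the Mac through the published host port
psql -h localhost -p 54320 -U "$DATABASE_USER" "$DB_NAME"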
Hope this helps others!
In my docker-compose.yml
version: '3'
services:
  db:
    image: mariadb:latest
    volumes:
      - ./dc_test_db:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: secret
When I connect via:
sudo docker exec -it docker_db_1 mysql -u root -p
I have to leave the password empty to log in. What is wrong?
I had this problem with version 10.4 of mariadb, and it was fixed by changing to version 10.3.
But there can be another reason for this problem.
Note that in Docker, images are immutable after the first build. That is, changing the variables defined in the docker-compose file and simply re-running or re-upping the service will not change the initial settings of the already-built image. To apply such changes, you have to build the image and container and run the service again, which can be done as follows:
1. docker-compose stop (first, stop the service)
2. docker-compose rm (then remove all the related containers)
3. docker-compose up --build -d (finally, run the service with --build to rebuild the images with the newly defined settings)
Note that performing these steps will erase all data stored inside the containers.
It seems you are starting the mariadb container from an existing db data directory; this results in using the existing database instead of initializing a new one. To solve this, I would suggest removing any existing mariadb container, removing the current db directory content, and running docker-compose again:
$ docker-compose down -v
$ rm -Rf dc_test_db/*
$ docker-compose up -d
That is because you are using the client locally from inside the container itself, and the local connection doesn't ask for a password.
Try to connect from your host computer to the Docker container's IP on port 3306, and then it will ask for a password.
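For example (docker_db_1 is the container name from your command above; the inspect format string is the same one used elsewhere in this thread):
# look up the container's IP, then connect to it from the host with a password prompt
DB_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' docker_db_1)
mysql -h "$DB_IP" -P 3306 -u root -p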
A MySQL user is defined by a username and the host the request comes from. For example, root@192.168.0.123, root@localhost and the wildcard root@% are three different users.
If you set the MYSQL_ROOT_PASSWORD env in the docker-compose file, your mariadb will set that password for the user root@%, not for the user root@localhost.
But when you test the mariadb password with sudo docker exec -it docker_db_1 mysql -u root -p, the mariadb client inside the container connects as root@localhost (without a password), not as root@% (which has the password you set before).
So if you want to test the password you set for that user, use this command:
docker run -it mariadb mysql -u root -h MARIADB-CONTAINER-IP -p
MARIADB-CONTAINER-IP is the IP address of your mariadb container (use docker inspect to check the container's IP address).
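The lookup can also be inlined, e.g. (assuming the container is named docker_db_1 and sits on the default bridge network):
docker run -it mariadb mysql -u root -h $(docker inspect -f '{{.NetworkSettings.IPAddress}}' docker_db_1) -p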
Thanks.
MYSQL_ROOT_PASSWORD will only be applied the first time the container runs with the given volume.
To set the password using MYSQL_ROOT_PASSWORD:
Option 1: delete the old DB files and start fresh.
rm -rf ./dc_test_db
Option 2: use a named volume:
version: '3.5'
services:
  db:
    image: mariadb:latest
    volumes:
      - dc_test_db:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: secret
volumes:
  dc_test_db:
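With the named volume, if you later need MYSQL_ROOT_PASSWORD to be applied again, you can drop the volume and recreate the stack; docker-compose down -v removes the volumes declared in the file:
docker-compose down -v
docker-compose up -d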
I have Laradock set up and serving a website in Laravel, but when I try to run php artisan migrate I get this error.
SQLSTATE[HY000] [2002] No such file or directory (SQL: select * from information_schema.tables where table_schema = yt and table_name = migrations)
DB_CONNECTION=mysql
DB_HOST=localhost
DB_PORT=3306
DB_DATABASE=yt
DB_USERNAME=root
DB_PASSWORD=root
I can not seem to find a solution to my issue.
The first thing you should do is check which container runs the mysql service:
sudo docker ps
Maybe the mysql container doesn't expose its port to localhost (127.0.0.1), so Laravel can't connect to it.
Find the mysql container name, then change DB_HOST. Let's take an example:
app-container 172.0.0.1
mysql-container 172.0.0.2
When Docker brings the containers up, it creates a virtual network for them and exposes it to your computer. So if you want Laravel to work with mysql, you should change DB_HOST to 172.0.0.2 in this example.
I had the same issue with Laradock on macOS: I couldn't connect to the MariaDB container.
My way:
Get the correct name for the MariaDB container:
docker ps
Inspect the container (for example, the container name is container_mariadb_1):
docker inspect container_mariadb_1
At the very bottom of the long list of parameters you can see IPAddress:
"IPAddress": "172.26.0.3"
I put this IP in Laravel's .env config file as DB_HOST, and that's it.
Of course I'm not sure if this way is really correct, but I know it has worked for me at least twice.
UPDATE: Also in my case Laravel connects to MariaDB normally if I use DB_HOST=mariadb in .env file.
This works:
$ docker-compose run web psql -U dbuser -h db something_development
My docker-compose.yml file has environment variables all over the place. If I run docker-compose run web env I see all kinds of tasty things I'd like to reuse in these one off commands (scripts and one-time shells).
docker-compose run web env
...
DATABASE_USER=dbuser
DATABASE_HOST=db
DATABASE_NAME=something_development
DB_ENV_POSTGRES_USER=dbuser
... many more
This won't work because my current shell evals it.
docker-compose run web psql -U ${DATABASE_USER} -h ${DATABASE_HOST} ${DATABASE_NAME}
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
These environment variables are coming from an env file like .app.env as referenced by docker-compose.yml but docker-compose itself can set environment variables. Seems a shame to even type dbuser when they are right there. I've tried my normal escaping tricks.
docker-compose run web psql -U \$\{DATABASE_USER\} -h \$\{DATABASE_HOST\} \$\{DATABASE_NAME\}
... many other attempts
I totally rubber ducked on SO so I'm going to answer my own question.
The answer (and there may be many ways to do it) is to run bash with -c and use single quotes so that your local shell doesn't interpolate the string.
docker-compose run web bash -c 'psql -U ${DATABASE_USER} \
-h ${DATABASE_HOST} ${DATABASE_NAME}'
Fantastic. DRY shell commands in docker-compose.
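A quick way to double-check which values the container actually sees, using the same single-quote trick:
docker-compose run web bash -c 'echo "user=${DATABASE_USER} host=${DATABASE_HOST} db=${DATABASE_NAME}"'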
Probably not the answer you want, but here's the way we set env vars ...
(this is one of many containers)
api:
  image: 10.20.2.139:5000/investigation-api:${apiTag}
  container_name: "api"
  links:
    - "database"
    - "ldap"
  ports:
    - 8843:8843
  environment:
    KEYSTORE_PASSWORD: "ooooo"
    KEYSTORE: "${MYVAR}"
  volumes_from:
    - certs:rw
running compose ...
MYVAR=/etc/ssl/certs/keystoke.jks docker-compose (etcetera)
typically the above line will be in a provision.sh script - cheers
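e.g. a minimal provision.sh sketch (the tag value is just an example):
#!/bin/bash
# values consumed by ${MYVAR} and ${apiTag} in docker-compose.yml
export MYVAR=/etc/ssl/certs/keystoke.jks
export apiTag=latest   # example tag, substitute your own
docker-compose up -d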
I'm using weave to launch some containers which form a database cluster. I have gotten this working manually on two hosts in EC2 by doing the following:
$HOST1> weave launch
$HOST2> weave launch $HOST1
$HOST1> eval $(weave env)
$HOST2> eval $(weave env)
$HOST1> docker run --name neo-1 -d -P ... my/neo4j-cluster
$HOST2> docker run --name neo-2 -d -P ... my/neo4j-cluster
$HOST3> docker run --name neo-1 -d -P -e ARBITER=true ... my/neo4j-cluster
I can check the logs and everything starts up OK.
When using ansible I can get the above to work using the command: ... module and an environment variable:
- name: Start Neo Arbiter
  command: 'docker run --name neo-2 -d -P ... my/neo4j-cluster'
  environment:
    DOCKER_HOST: 'unix:///var/run/weave/weave.sock'
As that's basically all eval $(weave env) does.
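In other words, the environment setting in that task is roughly equivalent to doing this before calling docker:
# point the docker CLI at weave's proxy socket
export DOCKER_HOST=unix:///var/run/weave/weave.sock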
But when I use the docker module for ansible, even with the docker_url parameter set to the same thing you see above with DOCKER_HOST, DNS does not resolve between hosts. Here's what that looks like:
- name: Start Neo Arbiter
  docker:
    image: "my/neo4j-cluster:{{neo4j_version}}"
    docker_url: unix:///var/run/weave/weave.sock
    name: neo-3
    pull: missing
    state: reloaded
    detach: True
    publish_all_ports: True
OR
- name: Start Neo Arbiter
  docker:
    image: "my/neo4j-cluster:{{neo4j_version}}"
    docker_url: unix:///var/run/weave/weave.sock
    name: neo-3
    pull: missing
    state: reloaded
    detach: True
    publish_all_ports: True
  environment:
    DOCKER_HOST: 'unix:///var/run/weave/weave.sock'
Neither of those works. The DNS does not resolve, so the servers never start. I do have other server options set (like SERVER_ID for neo4j, etc.), just not shown here for simplicity.
Anyone run into this? I know the docker module for ansible uses docker-py and stuff. I wonder if there's some type of incompatibility with weave?
EDIT
I should mention that when the containers launch they actually show up in WeaveDNS and appear to have been added to the system. I can ping the local hostname of each container as long as it's on the same host. From the other host, though, I cannot ping the containers running on the first host. This is despite them registering in WeaveDNS (weave status dns) and weave status showing the correct number of peers and established connections.
This could be caused by the client sending a HostConfig struct in the Docker start request, which is not really how you're supposed to do it but is supported by Docker "for backwards compatibility".
Weave has been fixed to cope, but the fix is not in a released version yet. You could try the latest snapshot version if you're brave.
You can probably kludge it by explicitly setting the DNS resolver to the docker bridge IP in your containers' config - weave has an undocumented helper weave docker-bridge-ip to find this address, and it generally won't change.
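A rough sketch of that kludge with the plain docker CLI (using the weave docker-bridge-ip helper mentioned above and docker run's standard --dns flag):
# point the container's resolver at the docker bridge IP, where weaveDNS answers
DNS_IP=$(weave docker-bridge-ip)
docker run --name neo-3 -d -P --dns "$DNS_IP" my/neo4j-cluster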