So, I'm attempting to replicate a flow for setting up my Docker stack, which works well locally, inside a GitHub Action, to perform some testing under as close to production / real-world scenarios as possible.
However, I'm running into the following issue on the GitHub action workflow which causes a failure:
psycopg2.OperationalError: could not translate host name "db" to address: Temporary failure in name resolution
Essentially, what works in terms of networking locally doesn't work inside the GitHub Action. The only apparent difference is potentially the version of Docker I am using within the GitHub Action.
This is my workflow/ci.yml file:
name: perseus/ci

on:
  pull_request:
    branches:
      - main
    paths-ignore:
      - '__pycache__'
      - '.pytest_cache'
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

jobs:
  build:
    name: CI/CD Build & Test w/pytest
    strategy:
      matrix:
        os: [ ubuntu-latest ]
    runs-on: ${{ matrix.os }}
    env:
      PROJECT_NAME: "Perseus FastAPI"
      FIRST_SUPERUSER_EMAIL: ${{ secrets.FIRST_SUPERUSER_EMAIL }}
      FIRST_SUPERUSER_PASSWORD: ${{ secrets.FIRST_SUPERUSER_PASSWORD }}
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "postgres"
      POSTGRES_SERVER: "db"
      POSTGRES_PORT: "5432"
      POSTGRES_DB: "postgres"
      SENTRY_DSN: ${{ secrets.SENTRY_DSN }}
      SERVER_NAME: "perseus"
      SERVER_HOST: "https://perseus.observerly.com"
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup Environment File
        run: |
          touch .env
          echo PROJECT_NAME=${PROJECT_NAME} >> .env
          echo FIRST_SUPERUSER_EMAIL=${FIRST_SUPERUSER_EMAIL} >> .env
          echo FIRST_SUPERUSER_PASSWORD=${FIRST_SUPERUSER_PASSWORD} >> .env
          echo POSTGRES_USER=${POSTGRES_USER} >> .env
          echo POSTGRES_PASSWORD=${POSTGRES_PASSWORD} >> .env
          echo POSTGRES_SERVER=${POSTGRES_SERVER} >> .env
          echo POSTGRES_PORT=${POSTGRES_PORT} >> .env
          echo POSTGRES_DB=${POSTGRES_DB} >> .env
          echo SENTRY_DSN=${SENTRY_DSN} >> .env
          echo SERVER_NAME=${SERVER_NAME} >> .env
          echo SERVER_HOST=${SERVER_HOST} >> .env
          cat .env
      - name: Docker Compose Build
        run: docker compose -f local.yml build --build-arg INSTALL_DEV="true"
      - name: Docker Compose Up
        run: docker compose -f local.yml up -d
      - name: Alembic Upgrade Head (Run Migrations)
        run: docker compose -f local.yml exec api alembic upgrade head
      - name: Seed Body (Stars, Galaxies etc) Data
        run: docker compose -f local.yml exec api ./scripts/init_db_seed.sh
Essentially, all steps work up until the api service needs to talk to the db service over the network, e.g. db:5432.
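One related race worth ruling out: `docker compose up -d` returns as soon as the containers start, not when Postgres is actually ready, so the migration step can hit the database too early. A minimal sketch of guarding against that with a healthcheck, assuming the service names from local.yml (the healthcheck itself is my addition, not part of the original stack):

```yaml
# Hypothetical additions to local.yml: make `api` wait until `db` is healthy
services:
  db:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 2s
      timeout: 5s
      retries: 15
  api:
    depends_on:
      db:
        condition: service_healthy
```

This doesn't explain a DNS failure by itself, but it removes startup ordering as a variable when debugging the CI run.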
My local.yml file is as follows:
version: '3.8'

services:
  traefik:
    image: traefik:latest
    container_name: traefik_proxy
    restart: always
    security_opt:
      - no-new-privileges:true
    command:
      ## API Settings - https://docs.traefik.io/operations/api/, endpoints - https://docs.traefik.io/operations/api/#endpoints ##
      - --api.insecure=true # <== Enabling insecure api, NOT RECOMMENDED FOR PRODUCTION
      - --api.dashboard=true # <== Enabling the dashboard to view services, middlewares, routers, etc...
      - --api.debug=true # <== Enabling additional endpoints for debugging and profiling
      ## Log Settings (options: ERROR, DEBUG, PANIC, FATAL, WARN, INFO) - https://docs.traefik.io/observability/logs/ ##
      - --log.level=ERROR # <== Setting the level of the logs from traefik
      ## Provider Settings - https://docs.traefik.io/providers/docker/#provider-configuration ##
    labels:
      # Enable traefik on itself to view dashboard and assign subdomain to view it
      - traefik.enable=false
      # Setting the domain for the dashboard
      - traefik.http.routers.api.rule=Host(`traefik.docker.localhost`)
      # Enabling the api to be a service to access
      - traefik.http.routers.api.service=api@internal
    ports:
      # HTTP
      - 80:80
      # HTTPS / SSL port
      - 443:443
    volumes:
      # Volume for docker admin
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Map the static configuration into the container
      - ./traefik/traefik.yml:/etc/traefik/traefik.yml:ro
      # Map the configuration into the container
      - ./traefik/config.yml:/etc/traefik/config.yml:ro
      # Map the certificates into the container
      - ./certs:/etc/certs:ro
    networks:
      - web
  api:
    build: .
    command: uvicorn app.main:app --host 0.0.0.0 --port 5000 --reload --workers 1 --ssl-keyfile "./certs/local-key.pem" --ssl-certfile "./certs/local-cert.pem" --ssl-cert-reqs 1
    container_name: perseus_api
    restart: always
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
    links:
      - db:db
    env_file:
      - .env
    labels:
      # The following labels define the behavior and rules of the traefik proxy for this container
      # For more information, see: https://docs.traefik.io/providers/docker/#exposedbydefault
      # Enable this container to be mapped by traefik:
      - traefik.enable=true
      # URL to reach this container:
      - traefik.http.routers.web.rule=Host(`perseus.docker.localhost`)
      # URL to reach this container for secure traffic:
      - traefik.http.routers.websecured.rule=Host(`perseus.docker.localhost`)
      # Defining entrypoint for https:
      - traefik.http.routers.websecured.entrypoints=websecured
    networks:
      - web
      - api
  db:
    image: postgres:14-alpine
    container_name: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
      - ./scripts/init_pgtrgm_extension.sql:/docker-entrypoint-initdb.d/init_pgtrgm_extension.sql
    ports:
      - 5432:5432
    env_file:
      - .env
    networks:
      - api

volumes:
  postgres_data:

networks:
  web:
    name: web
  api:
    name: api
    driver: bridge
Are there any networking tips or GitHub Actions tips that could help me get over this seemingly small hurdle?
I've done as much research as I can into the problem, but I can't seem to see what the solution could be...
Related
I have some integration tests that, in order to run successfully, require a running Postgres database, set up via docker-compose, and my Go app running from main.go. Here is my docker-compose:
version: "3.9"

services:
  postgres:
    image: postgres:12.5
    user: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: my-db
    ports:
      - "5432:5432"
    volumes:
      - data:/var/lib/postgresql/data
      - ./initdb:/docker-entrypoint-initdb.d

networks:
  default:
    driver: bridge

volumes:
  data:
    driver: local
and my Github Actions are as follows:
jobs:
  unit:
    name: Test
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:12.5
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: password
          POSTGRES_DB: my-db
        ports:
          - 5432:5432
    env:
      GOMODCACHE: "${{ github.workspace }}/.go/mod/cache"
      TEST_RACE: true
    steps:
      - name: Initiate Database
        run: psql -f initdb/init.sql postgresql://postgres:password@localhost:5432/my-db
      - name: Set up Cloud SDK
        uses: google-github-actions/setup-gcloud@v0
      - name: Authenticate with GCP
        id: auth
        uses: "google-github-actions/auth@v0"
        with:
          credentials_json: ${{ secrets.GCP_ACTIONS_SECRET }}
      - name: Configure Docker
        run: |
          gcloud auth configure-docker "europe-docker.pkg.dev,gcr.io,eu.gcr.io"
      - name: Set up Docker BuildX
        uses: docker/setup-buildx-action@v1
      - name: Start App
        run: |
          VERSION=latest make images
          docker run -d -p 3000:3000 -e POSTGRES_DB_URL='//postgres:password@localhost:5432/my-db?sslmode=disable' --name='app' image/app
      - name: Tests
        env:
          POSTGRES_DB_URL: //postgres:password@localhost:5432/my-db?sslmode=disable
          GOMODCACHE: ${{ github.workspace }}/.go/pkg/mod
        run: |
          make test-integration
          docker stop app
My tests run just fine locally, firing off docker-compose with docker-compose up and running the app from main.go. However, in GitHub Actions I am getting the following error:
failed to connect to `host=/tmp user=nonroot database=`: dial error (dial unix /tmp/.s.PGSQL.5432: connect: no such file or directory
What am I missing? Thanks
I think this code has more than one problem.
Problem one:
In your code I don't see you run docker-compose up, therefore I would assume that Postgres is not running.
Problem two:
It is in this line: docker run -d -p 3000:3000 -e POSTGRES_DB_URL='//postgres:password@localhost:5432/my-app?sslmode=disable' --name='app' image/app
You point the host of Postgres to localhost, which works on your local machine, since there localhost is your local computer. However, as you use docker run, you are not running this on your local machine but in a Docker container, where localhost points to inside the container.
Possible solution for both
As you are already using docker-compose, I suggest also adding your test web server there.
Change your docker-compose file to:
version: "3.9"

services:
  webapp:
    build: image/app
    environment:
      POSTGRES_DB_URL: "//postgres:password@postgres:5432/my-app?sslmode=disable"
    ports:
      - "3000:3000"
    depends_on:
      - "postgres"
  postgres:
    image: postgres:12.5
    user: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: my-app
    ports:
      - "5432:5432"
    volumes:
      - data:/var/lib/postgresql/data
      - ./initdb:/docker-entrypoint-initdb.d

networks:
  default:
    driver: bridge

volumes:
  data:
    driver: local
If you now run docker-compose up, both services will be available, and it should work. Though I am not a github-actions expert, so I might have missed something. At least like this you can run your tests locally the same way as in CI, something that I always see as a big plus.
What you are missing is setting up the actual Postgres client inside the GitHub Actions runner (that is why there is no psql tool to be found).
Set it up as a step.
- name: Install PostgreSQL client
  run: |
    sudo apt-get update
    sudo apt-get install --yes postgresql-client
Apart from that, if you run everything through docker-compose you will need to wait for postgres to be up and running (healthy & accepting connections).
Consider the following docker-compose:
version: '3.1'

services:
  api:
    build: .
    depends_on:
      - db
    ports:
      - 8080:8080
    environment:
      - RUN_UP_MIGRATION=true
      - PSQL_CONN_STRING=postgres://gotstock_user:123@host.docker.internal:5432/gotstockapi?sslmode=disable
    command: ./entry
  db:
    image: postgres:9.5-alpine
    restart: always
    environment:
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=password
    ports:
      - "5432:5432"
    volumes:
      - ./db:/docker-entrypoint-initdb.d/
There are a couple of things you need to notice. First of all, in the environment section of the api we have PSQL_CONN_STRING=postgres://gotstock_user:123@host.docker.internal:5432/gotstockapi?sslmode=disable, which is the connection string to the db being passed as an env variable. Notice the host is host.docker.internal.
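One caveat: `host.docker.internal` resolves out of the box on Docker Desktop (macOS/Windows), but on a plain Linux engine, such as a typical CI runner, it usually has to be mapped explicitly. A sketch of the compose addition, assuming Docker 20.10+ where the `host-gateway` value is available:

```yaml
# Hypothetical tweak for Linux hosts: make host.docker.internal resolvable
services:
  api:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```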
Besides that we have command: ./entry in the api section. The entry file contains the following #!/bin/ash script:
#!/bin/ash
NOT_READY=1
while [ $NOT_READY -gt 0 ] # <- loop that waits till postgres is ready to accept connections
do
  pg_isready --dbname=gotstockapi --host=host.docker.internal --port=5432 --username=gotstock_user
  NOT_READY=$?
  sleep 1
done
./gotstock-api # <- actually executes the build of the api
sleep 10
go test -v ./it # <- executes the integration-tests
And finally, in order for the psql client to work in the above script, the Dockerfile of the api looks like this:
# syntax=docker/dockerfile:1
FROM golang:1.19-alpine3.15
RUN apk add build-base
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download && go mod verify
COPY . .
RUN apk add postgresql-client
RUN go build -o gotstock-api
EXPOSE 8080
Notice RUN apk add postgresql-client which installs the client.
Happy hacking! =)
I'm creating a CI/CD process using Docker, Heroku and GitHub Actions, but I've got an issue with envs.
Right now when I run heroku logs I see MongoServerError: bad auth : Authentication failed., and I think the problem is in passing envs to my container and then using them in GitHub Actions, because in code I simply pass process.env.MONGODB_PASS.
In docker-compose.yml I'm using envs from an .env file, but GitHub Actions can't use this file because I put it into .gitignore...
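One common workaround, sketched here as an assumption about this setup (the secret names `MONGODB_PASS` and `REDIS_PASS` are placeholders you would replace with your own), is to recreate the git-ignored .env file from GitHub Actions secrets in a step before deploying:

```yaml
# Hypothetical step for .github/workflows/main.yml, before the deploy step
- name: Create .env from secrets
  run: |
    echo "MONGODB_PASS=${{ secrets.MONGODB_PASS }}" >> .env
    echo "REDIS_PASS=${{ secrets.REDIS_PASS }}" >> .env
```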
Here is my config:
.github/workflows/main.yml
name: Deploy

on:
  push:
    branches:
      - develop

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: akhileshns/heroku-deploy@v3.12.12
        with:
          heroku_api_key: ${{secrets.HEROKU_API_KEY}}
          heroku_app_name: "---"
          heroku_email: "---"
          usedocker: true
docker-compose.yml
version: '3.7'

networks:
  proxy:
    external: true

services:
  redis:
    image: redis:6.2-alpine
    ports:
      - 6379:6379
    command: ["redis-server", "--requirepass", "---"]
    networks:
      - proxy
  worker:
    container_name: ---
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - redis
    ports:
      - 8080:8080
    expose:
      - '8080'
    env_file:
      - .env
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    command: npm run dev
    networks:
      - proxy
Can someone tell me how to resolve this issue? Thanks for any help!
How do I run ory/hydra as a Docker container with a custom configuration file?
Is that the best way to run it, or is providing environment variables better?
Need help!!
I use a dockerfile with this content:
FROM oryd/hydra:latest
COPY hydra.yml /home/ory/.hydra
COPY hydra.yml /.hydra
EXPOSE 4444 4445 5555
Only one COPY is needed, but I never know which one. :)
You can run with environment variables as well, but I prefer the YAML; it's easier.
Apart from the solution mentioned above, you can also try Docker Compose. Personally, I prefer this as I find it easy to follow and manage.
The official 5-minute tutorial also uses Docker Compose which can be found here: https://www.ory.sh/hydra/docs/5min-tutorial/
I am using PostgreSQL as the database here, but you can replace it with any other supported database such as MySQL.
More information about supported database configurations is here: https://www.ory.sh/hydra/docs/dependencies-environment/#database-configuration
First, you create docker-compose.yml like below and then just docker-compose up
version: '3.7'

networks:
  intranet:
    driver: bridge

services:
  hydra-migrate:
    depends_on:
      - auth-db
    container_name: hydra-migrate
    image: oryd/hydra:v1.10.6
    environment:
      - DSN=postgres://auth:secret@auth-db:5432/auth?sslmode=disable&max_conns=20&max_idle_conns=4
    command: migrate sql -e --yes
    networks:
      - intranet
  hydra:
    container_name: hydra
    image: oryd/hydra:v1.10.6
    depends_on:
      - auth-db
      - hydra-migrate
    ports:
      - "4444:4444" # Public port
      - "4445:4445" # Admin port
      - "5555:5555" # Port for hydra token user
    command:
      serve -c /etc/hydra/config/hydra.yml all --dangerous-force-http
    restart: on-failure
    networks:
      - intranet
    volumes:
      - type: bind
        source: ./config
        target: /etc/hydra/config
  auth-db:
    image: postgres:alpine
    container_name: auth-db
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=auth
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=auth
    networks:
      - intranet
The hydra-migrate service takes care of the database migrations; we don't actually need to specify an external configuration file for it, and just the DSN as an environment variable suffices.
The hydra service starts up the hydra container, and here I have done a volume mount where I bind my local config folder to /etc/hydra/config within the container.
Then, when the container spins up, the following command is executed:
serve -c /etc/hydra/config/hydra.yml all --dangerous-force-http, which uses the bound config file.
And here is what my hydra config file looks like:
## ORY Hydra Configuration
version: v1.10.6

serve:
  public:
    cors:
      enabled: true

dsn: postgres://auth:secret@auth-db:5432/auth?sslmode=disable&max_conns=20&max_idle_conns=4

oidc:
  subject_identifiers:
    supported_types:
      - public
      - pairwise
    pairwise:
      salt: youReallyNeedToChangeThis

urls:
  login: http://localhost:4455/auth/login
  consent: http://localhost:4455/auth/consent
  logout: http://localhost:4455/consent
  error: http://localhost:4455/error
  post_logout_redirect: http://localhost:3000/
  self:
    public: http://localhost:4444/
    issuer: http://localhost:4444/

ttl:
  access_token: 1h
  refresh_token: 1h
  id_token: 1h
  auth_code: 1h

oauth2:
  expose_internal_errors: true

secrets:
  cookie:
    - youReallyNeedToChangeThis
  system:
    - youReallyNeedToChangeThis

log:
  leak_sensitive_values: true
  format: json
  level: debug
I have the below docker-compose.yml:
version: "2"

services:
  api:
    build:
      context: .
      dockerfile: ./build/dev/Dockerfile
    container_name: "project-api"
    volumes:
      # 1. mount your workdir path
      - .:/app
    depends_on:
      - mongodb
    links:
      - mongodb
      - mysql
  nginx:
    image: nginx:1.10.3
    container_name: "project-nginx"
    ports:
      - 80:80
    restart: always
    volumes:
      - ./build/dev/nginx.conf:/etc/nginx/conf.d/default.conf
      - .:/app
    links:
      - api
    depends_on:
      - api
  mongodb:
    container_name: "project-mongodb"
    image: mongo:latest
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    ports:
      - "27018:27017"
    command: mongod --smallfiles --logpath=/dev/null # --quiet
  mysql:
    container_name: "gamestore-mysql"
    image: mysql:5.7.23
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: project_test
      MYSQL_USER: user
      MYSQL_PASSWORD: user
      MYSQL_ROOT_PASSWORD: root
And below is my .gitlab-ci.yml:
test:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  before_script:
    - apk add --no-cache py-pip
    - pip install docker-compose
  script:
    - docker-compose up -d
    - docker-compose exec -T api ls -la
    - docker-compose exec -T api composer install
    - docker-compose exec -T api php core/init --env=Development --overwrite=y
    - docker-compose exec -T api vendor/bin/codecept -c core/common run
    - docker-compose exec -T api vendor/bin/codecept -c core/rest run
When I run my GitLab pipeline it fails, because I think GitLab can't work with services run by docker-compose.
The error says that MySQL refuses the connection.
I need this connection because my tests, written with Codeception, will test my models and API actions.
I want to test my branches every time anyone pushes to them, and if the tests pass, deploy to the test server from develop and to production from master.
What is the best way to run my tests in GitLab CI/CD and then deploy them to my server?
You should use GitLab CI services instead of docker-compose.
You have to pick one image as your main one, in which your commands will be run, and the other containers just as services.
Sadly, CI services cannot have mounted files in GitLab; you have to be able to configure them with env variables, or you need to create your own image with the files in it (you can do that in a CI stage).
I would suggest you don't use nginx, and use the built-in PHP server for tests. If that's not possible (you have a specific nginx config), you will need to build an nginx image yourself with the files copied inside it.
Also for PHP (the api service in docker-compose.yaml, I assume), you need to either build the image ahead of time or copy the commands from your Dockerfile into the script.
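For the nginx case, baking the config and sources into an image could look roughly like this; the paths are guessed from the compose file above, so treat this as a sketch rather than a drop-in Dockerfile:

```dockerfile
# Hypothetical Dockerfile for a self-contained nginx test image
FROM nginx:1.10.3
# Copy the dev nginx config in place of the volume mount
COPY build/dev/nginx.conf /etc/nginx/conf.d/default.conf
# Copy the application files that were previously bind-mounted at /app
COPY . /app
```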
So the result should be something like:
test:
  stage: test
  image: custom-php-image # build from ./build/dev/Dockerfile
  services:
    - name: mysql:5.7.23
      alias: gamestore-mysql
    - name: mongo:latest
      alias: project-mongodb
      command: mongod --smallfiles --logpath=/dev/null
  variables:
    MYSQL_DATABASE: project_test
    MYSQL_USER: user
    MYSQL_PASSWORD: user
    MYSQL_ROOT_PASSWORD: root
    MONGO_DATA_DIR: /data/db
    MONGO_LOG_DIR: /dev/null
  script:
    - ls -la
    - composer install
    - php core/init --env=Development --overwrite=y
    - php -S localhost:8000 & # You need to configure your built-in php server probably here
    - vendor/bin/codecept -c core/common run
    - vendor/bin/codecept -c core/rest run
I don't know your app, so you will probably have to make some tweaks.
More on that:
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#define-image-and-services-from-gitlab-ciyml
https://docs.gitlab.com/ee/ci/services/
http://php.net/manual/en/features.commandline.webserver.php
In short:
I have a hard time figuring out how to set a custom IP for a Solr container from the docker-compose.yml file.
Detailed
We want to deploy local dev environments, for Drupal instances, via Docker.
The problem is that while I can access the Solr server from the browser via the "traditional" http://localhost:8983/solr, Drupal cannot connect to it this way. The internal 0.0.0.0 and 127.0.0.1 don't work either. The only way Drupal can connect to the Solr server is via the LAN IP, which obviously differs for every station, and since the configuration in Drupal needs to be updated anyway, I thought that specifying a custom IP on which they can communicate would be my best choice, but it's not straightforward.
I am aware that assigning a static IP to the container is not the best solution, but it seems more feasible than tinkering with solr.in.sh, and if someone has a different approach to achieve this, I am open to solutions.
Most likely I could use some command-line parameter along with docker run, but we need to run the containers with docker-compose up -d, so this wouldn't be an optimal solution.
Ideal would be a Solr container section example for the compose file. Thanks.
Note:
This link shows an example of how to set it, but I can't understand it well. Please keep in mind that I am by no means an expert.
Forgot to mention that the host is based on Linux, mostly Ubuntu and Debian.
Edit:
As requested, here is my compose file:
version: "2"

services:
  db:
    image: wodby/drupal-mariadb
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    # command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci # The simple way to override the mariadb config.
    volumes:
      - ./data/mysql:/var/lib/mysql
      - ./docker-runtime/mariadb-init:/docker-entrypoint-initdb.d # Place init .sql file(s) here.
  php:
    image: wodby/drupal-php:7.0 # Allowed: 7.0, 5.6.
    environment:
      DEPLOY_ENV: dev
      PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
      PHP_XDEBUG_ENABLED: 1 # Set 1 to enable.
      # PHP_SITE_NAME: dev
      # PHP_HOST_NAME: localhost:8000
      # PHP_DOCROOT: public # Relative path inside the /var/www/html/ directory.
      # PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
      # PHP_XDEBUG_ENABLED: 1
      # PHP_XDEBUG_AUTOSTART: 1
      # PHP_XDEBUG_REMOTE_CONNECT_BACK: 0 # This is needed to respect remote.host setting bellow
      # PHP_XDEBUG_REMOTE_HOST: "10.254.254.254" # You will also need to 'sudo ifconfig lo0 alias 10.254.254.254'
    links:
      - db
    volumes:
      - ./docroot:/var/www/html
  nginx:
    image: wodby/drupal-nginx
    hostname: testing
    environment:
      # NGINX_SERVER_NAME: localhost
      NGINX_UPSTREAM_NAME: php
      # NGINX_DOCROOT: public # Relative path inside the /var/www/html/ directory.
      DRUPAL_VERSION: 7 # Allowed: 7, 8.
    volumes_from:
      - php
    ports:
      - "${PORT_WEB}:80"
  pma:
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_HOST: db
      PMA_USER: ${MYSQL_USER}
      PMA_PASSWORD: ${MYSQL_PASSWORD}
    ports:
      - '${PORT_PMA}:80'
    links:
      - db
  mailhog:
    image: mailhog/mailhog
    ports:
      - "8002:8025"
  redis:
    image: redis:3.2-alpine
  # memcached:
  #   image: memcached:1.4-alpine
  # memcached-admin:
  #   image: phynias/phpmemcachedadmin
  #   ports:
  #     - "8006:80"
  solr:
    image: makuk66/docker-solr:4.10.3
    volumes:
      - ./docker-runtime/solr:/opt/solr/server/solr/mycores
    # entrypoint:
    #   - docker-entrypoint.sh
    #   - solr-precreate
    ports:
      - "8983:8983"
  # varnish:
  #   image: wodby/drupal-varnish
  #   depends_on:
  #     - nginx
  #   environment:
  #     VARNISH_SECRET: secret
  #     VARNISH_BACKEND_HOST: nginx
  #     VARNISH_BACKEND_PORT: 80
  #     VARNISH_MEMORY_SIZE: 256M
  #     VARNISH_STORAGE_SIZE: 1024M
  #   ports:
  #     - "8004:6081" # HTTP Proxy
  #     - "8005:6082" # Control terminal
  # sshd:
  #   image: wodby/drupal-sshd
  #   environment:
  #     SSH_PUB_KEY: "ssh-rsa ..."
  #   volumes_from:
  #     - php
  #   ports:
  #     - "8006:22"
A docker run example would be
IP_ADDRESS=$(hostname -I)
docker run -d -p 8983:8983 solr bin/solr start -h ${IP_ADDRESS} -p 8983
Instead of assigning static IPs, you could use the following method to get the container's IP dynamically.
When you link containers together, they share their network information (IP, port) with each other. The information is stored in each container as environment variables.
Example
docker-compose.yml
service:
  build: .
  links:
    - redis
  ports:
    - "3001:3001"
redis:
  build: .
  ports:
    - "6369:6369"
The service container will now have the following environment variables:
Dynamic IP address stored within the "service" container:
REDIS_PORT_6379_TCP_ADDR
Dynamic port stored within the "service" container:
REDIS_PORT_6379_TCP_PORT
You can always check this out by shelling into the container and looking yourself.
docker exec -it [ContainerID] bash
printenv
Inside your Node.js app you can use these environment variables in your connection function by using process.env.
let client = redis.createClient({
  port: process.env.REDIS_PORT_6379_TCP_PORT,
  host: process.env.REDIS_PORT_6379_TCP_ADDR
});
Edit
Here is the updated docker-compose.yml "solr" section:
solr:
  image: makuk66/docker-solr:4.10.3
  volumes:
    - ./docker-runtime/solr:/opt/solr/server/solr/mycores
  entrypoint:
    - docker-entrypoint.sh
    - solr-precreate
  ports:
    - "8983:8983"
  links:
    - db
In the above example, the solr container is now linked with the db container. This is done using the links field.
You can do the same thing if you want to link the solr container to any other container within the docker-compose.yml file.
The db container's information will now be available to the solr container (via the environment variables I mentioned earlier).
Without the linking, you will not see those environment variables listed when you run the printenv command.