docker and bitnami/phppgadmin: How to connect to a remote PostgreSQL database

I am trying to connect to a remote PostgreSQL database using the bitnami/phppgadmin Docker image.
How do I specify the host name?
phppgadmin:
  image: "bitnami/phppgadmin:7.13.0"
  ports:
    - "8080:8080"
    - '443:8443'
  environment:
    PHP_PG_ADMIN_SERVER_HOST: 'xx.xx.xx.xx'
    PHP_PG_ADMIN_SERVER_PORT: 5432
I am trying this, but I am not able to log in to the dashboard.
I set these environment variables based on dockage/phppgadmin, but the bitnami image has no such options.

Every image on Docker Hub has a corresponding page; you can look at https://hub.docker.com/r/bitnami/phppgadmin. That has an "Environment variables" section, which documents:
The phpPgAdmin instance can be customized by specifying environment variables on the first run. The following environment values are provided to custom phpPgAdmin:
DATABASE_HOST: Database server host. Default: postgresql.
So use DATABASE_HOST as the environment variable name. There is also DATABASE_PORT_NUMBER but you don't need to explicitly set it to the PostgreSQL default value.
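Putting that together, a minimal sketch of the corrected service from the question (the host placeholder and published ports are taken from the question as-is):
phppgadmin:
  image: "bitnami/phppgadmin:7.13.0"
  ports:
    - "8080:8080"
    - '443:8443'
  environment:
    # Bitnami's image uses the DATABASE_* variables, not the PHP_PG_ADMIN_*
    # variables used by dockage/phppgadmin.
    DATABASE_HOST: 'xx.xx.xx.xx'
    DATABASE_PORT_NUMBER: 5432   # optional; 5432 is the default anyway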

Related

docker-compose interpolate environment variables: use default variables provided by docker-compose

I need some help with the following template:
services:
  nginx:
    image: nginx
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nginx-${COMPOSE_PROJECT_NAME}.rule=Host(`fuu.bar`)"
    networks:
      - treafik
My goal is to create a template which I can use, e.g. in Portainer, with almost zero configuration.
I thought that variables like COMPOSE_PROJECT_NAME would be available for interpolation, but the expression ${COMPOSE_PROJECT_NAME} results in an empty string. Output of docker-compose config:
services:
  nginx:
    image: nginx
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nginx-.rule=Host(`fuu.bar`)"
    networks:
      - treafik
Are there any default environment variables provided by docker-compose which I can use for environment interpolation?
---- Update
I use traefik (v2) as a reverse proxy. To make containers available through traefik, you have to define a router on every service, and each router name has to be unique. Now imagine you deploy two or more stacks of the above template: the router names have to be unique across all services in all stacks. Because I'm lazy, I tried to simply use the environment variable COMPOSE_PROJECT_NAME, which I know is already unique in my setup because every stack must have a unique name. But the variable is not available when deploying the stack.
Of course, I could define COMPOSE_PROJECT_NAME myself in a .env file, but I hoped that there are default environment variables provided by Docker.
You can use environment variables to pass strings into your Compose file.
There are several ways described in the Docker documentation. For example:
You can set default values for any environment variables referenced in the Compose file, or used to configure Compose, in an environment file named .env. The .env file path is as follows:
Starting with +v1.28, the .env file is placed at the base of the project directory.
The project directory can be explicitly defined with the --file option or the COMPOSE_FILE environment variable. Otherwise, it is the current working directory where the docker compose command is executed (+1.28).
For previous versions, it might have trouble resolving the .env file with --file or COMPOSE_FILE. To work around it, it is recommended to use --project-directory, which overrides the path for the .env file. This inconsistency is addressed in +v1.28 by limiting the file path to the project directory.
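Given that, a minimal sketch of the workaround the asker already mentions: define COMPOSE_PROJECT_NAME yourself in a .env file next to the compose file (the project name fuu-stack is just an illustrative placeholder):
# .env (at the base of the project directory)
COMPOSE_PROJECT_NAME=fuu-stack
With that in place, the ${COMPOSE_PROJECT_NAME} expression in the template above interpolates to fuu-stack, so each stack only needs its own .env entry to get a unique traefik router name.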

How to read an environment variable in solr.properties

I'm trying to read an environment variable in a solr.properties file. Solr is running in a Docker container, and my docker-compose looks like:
solr:
  environment:
    - DB_NAME="xxxx"
My solr.properties is in /var/solr/ and I tried to read the environment variable like this:
jdbc.url=jdbc:mysql://localhost:3306/${DB_NAME}?zeroDateTimeBehavior=convertToNull&useUnicode=false
jdbc.url=jdbc:mysql://localhost:3306/${env.DB_NAME}?zeroDateTimeBehavior=convertToNull&useUnicode=false
jdbc.url=jdbc:mysql://localhost:3306/${env:DB_NAME}?zeroDateTimeBehavior=convertToNull&useUnicode=false
I'm just starting with Docker. Any ideas?
You access the environment variable with ${DB_NAME}. Also note that you need to replace localhost with the name of the database service if your database runs in a different container (i.e. is also a service in docker-compose.yml) than Solr, as it should.
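A rough sketch of that setup, assuming a MySQL service named db next to Solr (the image tags and database name are illustrative):
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: xxxx
  solr:
    image: solr:8
    environment:
      - DB_NAME=xxxx   # no quotes; in list form they would become part of the value
    depends_on:
      - db

# solr.properties would then use the service name instead of localhost:
# jdbc.url=jdbc:mysql://db:3306/${DB_NAME}?zeroDateTimeBehavior=convertToNull&useUnicode=false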

Docker share environment variables using volumes

How can I share environment variables since the --link feature was deprecated?
The Docker documentation (https://docs.docker.com/network/links/) states
Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link. One feature that user-defined networks do not support that you can do with --link is sharing environment variables between containers. However, you can use other mechanisms such as volumes to share environment variables between containers in a more controlled way.
But how do I share environment variables by using volumes? I did not find anything about environment variables in the volumes section.
The problem I have is that I want to set a database password as an environment variable when I start the container. Another container loads data into the database, and to do so it needs to connect to it and provide the credentials. So far the loading container discovered the password on its own by reading the environment variable. How do I do that now, without --link?
Generally, you do it by explicitly providing the same environment variable to other containers. This is easy if you're using a docker-compose.yml to manage your containers, because then you can do this:
version: "3"
services:
  database:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
  frontend:
    image: webserver
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
Then if you set MYSQL_ROOT_PASSWORD in your .env file, the same value will be provided to both the database and frontend container. If you're not using docker-compose, you can still simplify things by using an environment file. Create a file named, e.g., database.env that contains:
MYSQL_ROOT_PASSWORD=secret
Then point your containers at that using docker run --env-file database.env ....
You can't share environment variables using volumes, but you can of course share files. So another option would be to have your database container write a file containing the password to a shared volume, and then read that in your other containers.
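A rough sketch of that file-based approach, here using a small one-shot helper service to write the password into a shared volume (the alpine helper, the my-loader image name, and the /secrets path are illustrative):
version: "3"
services:
  secret-writer:
    image: alpine
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
    volumes:
      - secrets:/secrets
    # $$ keeps docker-compose from interpolating the variable itself
    command: sh -c 'echo "$$MYSQL_ROOT_PASSWORD" > /secrets/db_password'
  database:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
  loader:
    image: my-loader          # illustrative image name
    volumes:
      - secrets:/secrets:ro
    depends_on:
      - secret-writer
      - database
    # the loader reads the password from the shared file instead of an env var, e.g.
    # PASSWORD="$(cat /secrets/db_password)"
volumes:
  secrets: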

Share connection details with container and host

My docker-compose.yml contains an application container and a database container:
app:
  links:
    - db
db:
  image: postgres
  ports:
    - 5003:5432
Let's say that during development I want to start only the db container, and I connect to it using localhost:5003.
In production I want to start both containers, one with the application and one with the database. But then I need to use db:5432 in the application container to connect to the database.
Is it possible to modify the docker-compose configuration so that I can use the same database URI in both cases?
I would suggest creating multiple docker-compose files for the different environments.
You can create a base docker-compose file and add overrides for the different environments as described here: https://docs.docker.com/compose/extends/#different-environments
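A minimal sketch of that layout: the base file holds the database, and per-environment override files add what differs (the DATABASE_URI variable, the file names, and the URI itself are illustrative):
# docker-compose.yml (base: shared by all environments)
db:
  image: postgres

# docker-compose.dev.yml (development: publish the port so localhost:5003 works)
db:
  ports:
    - 5003:5432

# docker-compose.prod.yml (production: add the app, which reaches the database at db:5432)
app:
  links:
    - db
  environment:
    DATABASE_URI: postgres://db:5432/mydb
You then pick the combination per environment, e.g. docker-compose -f docker-compose.yml -f docker-compose.dev.yml up for development, and the prod override in production.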

Fig (Docker): how to specify which services to run depending on the environment

I'm using Fig (and Docker) to set up my dev environment.
One of the services that I have configured is Adminer, which is a lightweight web database client. I need it for development, but I don't want it running in production. How can I do that? A solution for Fig (preferably) or plain Docker will do.
Here's a part of my fig.yml:
db:
  image: postgres
adminer:
  image: clue/adminer
  links:
    - db
  ports:
    - "8081:80"
You could use multiple fig files. Fig uses fig.yml by default, but you can specify a different file with the -f flag (see the docs).
Thus, whatever you want your default to be could be fig.yml. Then, you could have fig-dev.yml (for example) for your development environment. Use fig -f fig-dev.yml up when using that one.
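A minimal sketch of that split, keeping only the production services in fig.yml and adding Adminer in a dev-only file (the file contents beyond the question's snippet are illustrative):
# fig.yml (default: production, no Adminer)
db:
  image: postgres

# fig-dev.yml (development: db plus the Adminer client)
db:
  image: postgres
adminer:
  image: clue/adminer
  links:
    - db
  ports:
    - "8081:80"
fig up then starts only the db, while fig -f fig-dev.yml up brings up the development stack including Adminer.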
