What is the docker run -e equivalent in docker-compose

I can't get environment variables written directly in a docker-compose file to work. A similar configuration on the command line works just fine, like this:
docker run --name container_name -d --network=my-net --mount type=bind,src=/Users/t2wu/Documents/Work/Dodo/Intron-Exon_expression/DockerCompose/intronexon_db/mnt_mysql,dst=/var/lib/mysql -e MYSQL_DATABASE=db_name -e MYSQL_USER=username -e MYSQL_PASSWORD=passwd mysql/mysql-server:8.0.13
This is a MySQL instance which sets three environment variables: MYSQL_DATABASE, MYSQL_USER and MYSQL_PASSWORD. I'm later able to launch bash in it with docker exec -it container_name bash, start the client with mysql -u username -p, and it connects just fine.
However when I write it in a docker-compose.yml:
version: "3.7"
services:
intronexon_db:
image: mysql/mysql-server:8.0.13
volumes:
- type: bind
source: ./intronexon_db/mnt_mysql
target: /var/lib/mysql
environment:
MYSQL_DATABASE: db_name
MYSQL_USER: username
MYSQL_PASSWORD: passwd
networks:
- my-net
networks:
my-net:
driver: bridge
Then when I use the mysql client, it's as if the user doesn't exist. How do I set it so that it is equivalent to the -e flag during docker run?
EDIT
docker-compose --version shows docker-compose version 1.24.1, build 4667896b
EDIT 2
The environment flags did work, but I ran into problems because:
Part of the problem was that it takes MySQL some time to get the database, username and password set up, and I was checking way too early.
I need to specify localhost for some reason: mysql --host=localhost -u user -p. Specifying 127.0.0.1 will not work.
For some unknown reason the example stack.yml from the official Docker image did not need --host when the adminer container is running. If I remove adminer, the --host flag needs to be given.
Sometimes the MySQL daemon will stop. It might have to do with my mount target /var/lib/mysql, but I'm not certain.
command: --default-authentication-plugin=mysql_native_password is actually significant. I don't know why I didn't need to do anything about this when I used docker run (one way to express this, together with the readiness issue, is sketched below).
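For reference, a minimal sketch of how the readiness and authentication-plugin points above could be expressed in the compose file itself, assuming mysqladmin is available inside the mysql/mysql-server image for the healthcheck (the healthcheck block is standard compose 3.x syntax, and that image's built-in healthcheck may already cover this):
services:
  intronexon_db:
    image: mysql/mysql-server:8.0.13
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_DATABASE: db_name
      MYSQL_USER: username
      MYSQL_PASSWORD: passwd
    healthcheck:
      # report healthy only once mysqld actually accepts connections
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 5s
      retries: 12
docker-compose ps then shows the service as (healthy) once it should be safe to run the mysql client against it.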

docker-compose accepts both forms of environment variables, an array or a dictionary, so it is worth double-checking the syntax or trying both approaches.
environment
Add environment variables. You can use either an array or a dictionary. Any boolean values (true, false, yes, no) need to be enclosed in quotes to ensure they are not converted to True or False by the YML parser.
Environment variables with only a key are resolved to their values on the machine Compose is running on, which can be helpful for secret or host-specific values.
environment:
  RACK_ENV: development
  SHOW: 'true'
  SESSION_SECRET:
or
environment:
  - RACK_ENV=development
  - SHOW=true
  - SESSION_SECRET
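Applied to the service in the question, a sketch of the array form would look like the following (note that this form uses = rather than the colon of the dictionary form):
services:
  intronexon_db:
    image: mysql/mysql-server:8.0.13
    environment:
      - MYSQL_DATABASE=db_name
      - MYSQL_USER=username
      - MYSQL_PASSWORD=passwd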
It might also be something with the docker-compose file version, as it works fine with 3.1 as the official image suggests, so it is better to try the official image's docker-compose.yml:
version: '3.1'
services:
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
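For reference, the stack.yml on the official image's Docker Hub page pairs that db service with an adminer container (which is what the question's EDIT 2 refers to), roughly:
services:
  db:
    # ... as above ...
  adminer:
    # lightweight web UI for the database, published on host port 8080
    image: adminer
    restart: always
    ports:
      - 8080:8080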
Also, it is better to debug such cases, where everything seems correct but some minor syntax is off, by testing the environment before working with the DB:
version: "3.7"
services:
intronexon_db:
image: alpine
environment:
MYSQL_DATABASE: myDb
command: tail -f /dev/null
Run docker-compose up, then test and debug in the test environment:
docker exec -it composeenv_intronexon_db_1 ash -c "printenv"

The environment params in your yml needing the - in front of them could be the likely culprit:
version: "3.7"
services:
intronexon_db:
image: mysql/mysql-server:8.0.13
volumes:
- ./intronexon_db/mnt_mysql:/var/lib/mysql
environment:
- MYSQL_DATABASE: db_name
- MYSQL_USER: username
- MYSQL_PASSWORD: passwd
networks:
- my-net
networks:
my-net:
driver: bridge

Related

Trouble making a drupal drush site-install through docker exec

I am trying to develop an automatic Drupal installer which creates an already configured Drupal docker container ready to be used with one single command execution.
In order to achieve this, I have this docker-compose.yml:
version: "3"
services:
# Database
drupal_db:
image: mysql:8.0
command: --default-authentication-plugin=mysql_native_password
container_name: drupal_db
ports:
- "33061:3306"
restart: unless-stopped
volumes:
- drupal_db_data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: drupal
MYSQL_DATABASE: drupal
MYSQL_USER: drupal
MYSQL_PASSWORD: drupal
networks:
- drupal
# Drupal
drupal:
image: drupal:9-php7.4-apache
container_name: drupal
ports:
- "8080:80"
restart: unless-stopped
volumes:
- ./drupal/d_modules:/var/www/html/modules
- ./drupal/d_profiles:/var/www/html/profiles
- ./drupal/d_sites:/var/www/html/sites
- ./drupal/d_sites/default/files/translations:/var/www/html/sites/default/files/translations
- ./drupal/d_themes:/var/www/html/themes
- ./scripts/drush:/opt/drupal/scripts
depends_on:
- drupal_db
env_file:
- drupal-install.env
links:
- drupal_db:mysql
networks:
- drupal
volumes:
drupal_db_data: {}
drupal_data: {}
networks:
drupal:
Together with this Makefile:
clear:
	docker-compose down -v
autoinstall:
	docker-compose up -d
	docker exec drupal composer require drush/drush
	docker exec drupal bash -c '/opt/drupal/scripts/autoinstall.sh'
autoinstall.sh is a script which is mounted via one of the Drupal container's volumes and runs this:
#!/bin/bash
drush site-install ${DRUPAL_PROFILE} \
--locale=${LOCALE} \
--db-url=${DB_URL} \
--site-name=${SITE_NAME} \
--site-mail=${SITE_MAIL} \
--account-name=${ACCOUNT_NAME} \
--account-mail=${ACCOUNT_MAIL} \
--account-pass=${ACCOUNT_PASS} \
--yes
This uses environment variables, which are specified in docker-compose.yml through the env_file drupal-install.env:
HOST=drupal_db:33061
DBASE=drupal
USER=drupal
PASS=drupal
DATABASE_HOST=drupal_db:33061
DRUPAL_PROFILE=standard
LOCALE=en
DB_URL=mysql://drupal:drupal@drupal_db:3306/drupal
SITE_NAME=NewSite
SITE_MAIL=newsite@test.com
ACCOUNT_NAME=admin
ACCOUNT_MAIL=admin@test.com
ACCOUNT_PASS=admin
However, when running the make autoinstall command, first two lines run with no issues, but the last one throws this error:
Database settings:<br /><br />Resolve all issues below to continue the installation. For help configuring your database server, see the <a href="https://www.drupal.org/docs/8/install">installation handbook</a>, or contact your hosting provider.<div class="item-list"><ul><li>Failed to connect to your database server. The server reports the following message: <em class="placeholder">SQLSTATE[HY000] [2002] Connection refused</em>.<ul><li>Is the database server running?</li><li>Does the database exist or does the database user have sufficient privileges to create the database?</li><li>Have you entered the correct database name?</li><li>Have you entered the correct username and password?</li><li>Have you entered the correct database hostname and port number?</li></ul></li></ul></div>
If I manually run:
docker-compose up -d
docker exec -it drupal bash
composer require drush/drush
/opt/drupal/scripts/autoinstall.sh
Everything works perfectly; it's only the Makefile script that doesn't work.
Something really weird happens: if I run make autoinstall twice, the first run throws this error, but it actually works when I run it a second time. It is really strange and I can't find a solution, but I would like not to have to run the command twice.
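Since the install succeeds on the second run, the likely cause is that mysqld is still initializing when autoinstall.sh fires on the first one. A minimal sketch of one way to serialize this in the compose file, assuming a Compose version that honors healthchecks together with the long depends_on form (the compose spec used by Docker Compose v2; classic version-3 files ignore the condition):
services:
  drupal_db:
    image: mysql:8.0
    # ... existing settings ...
    healthcheck:
      # healthy only once mysqld accepts connections
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1", "-uroot", "-pdrupal"]
      interval: 5s
      retries: 12
  drupal:
    image: drupal:9-php7.4-apache
    # ... existing settings ...
    depends_on:
      drupal_db:
        condition: service_healthy
With that in place, the drupal container is only started once the database reports healthy, so the subsequent docker exec lines in the Makefile should no longer race it.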

Cannot exec into container using GitBash when using Docker Compose

I'm new to Docker Compose, but have used Docker for years. The screen shot below is of PowerShell and of GitBash. If I run containers without docker-compose I can docker exec -it <container_ref> /bin/bash with no problems from either of these shells.
However, when running with docker-compose up, both shells give no error when attempting to use docker-compose exec. They both just hang a few seconds and return to the prompt.
Lastly, for some reason I do get an error in GitBash when using what I know: docker exec .... I've used this for years, so I'm perplexed and posting a question. What does Docker Compose do that messes with GitBash's docker ability, but not with PowerShell? And why the hang when using docker-compose exec ..., but no error?
I am using tty: true in the docker-compose.yml, but that honestly doesn't seem to make a difference. Not to throw a bunch of questions in one post, but could whatever is going on also be the reason I can't hit my web server in the browser, only when using Docker Compose to run it?
version: '3.8'
volumes:
  pgdata:
    external: true
services:
  db:
    image: postgres
    container_name: trac-db
    tty: true
    restart: 'unless-stopped'
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: iol
    volumes:
      - pgdata:/var/lib/postgresql/data
    network_mode: 'host'
    expose:
      - 5432
  web:
    image: lindben/trac-server
    container_name: trac-server
    tty: true
    restart: 'unless-stopped'
    environment:
      ADDRESS: localhost
      PORT: 3000
      NODE_ENV: development
    depends_on:
      - db
    network_mode: 'host'
    privileged: true
    expose:
      - 1234
      - 3000
I'm going to assume you're using Docker for Desktop. The reason you can docker exec just fine from PowerShell is that on Windows docker is a native program/command, while GitBash is based on bash, a Linux shell (bash = Bourne-Again SHell), so not so much.
So when using a Windows command that needs a TTY, you need some sort of "adapter", like winpty for example, to bridge the gap between docker's interface and GitBash's.
Here's a more detailed explanation of winpty.
Putting all of this aside, if you are trying to use only the compose options, it may be better for you to refer to this question.
Now, regarding your web service issue, I think you're not actually publishing your application by using the expose tag. Take a look at the docker-compose expose reference. What you need is to add a ports tag, like so, as referenced here:
db:
  ports:
    - "5432:5432"
web:
  ports:
    - "1234:1234"
    - "3000:3000"
Hope this solves your pickle ;)

How to initialize a PSQL database in docker-compose file?

I would like to know if it's possible to execute a PSQL command inside the docker-compose file.
I have the following docker-compose.yml:
version: '3'
services:
  postgres:
    image: postgres:9.6
    container_name: postgres-container
    ports:
      - "5432:5432"
    network_mode: host
    environment:
      - LC_ALL=C.UTF-8
      - POSTGRES_DB=databasename
      - POSTGRES_USER=username
      - POSTGRES_PASSWORD=
      - POSTGRES_PORT=5432
And after this is running OK, I run the following command:
docker exec -i postgres-container psql -U username -d databasename < data.sql
These 2 steps work fine, but I would like to know if it's possible to do it in a single step.
Every time I want to run this command, it's important that the database is brand new. That's why I don't persist it in a volume and want to run this command.
Is it possible to run docker-compose up and also run the psql command?
Thanks in advance!
Pure docker-compose solution, with a volume:
volumes:
  - ./data.sql:/docker-entrypoint-initdb.d/init.sql
According to the Dockerfile, at startup it will load every SQL file placed in docker-entrypoint-initdb.d.
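In context, a sketch of the compose file from the question with that bind mount added; the official postgres entrypoint only runs files from /docker-entrypoint-initdb.d when the data directory is being initialized, so recreating the container (docker-compose down followed by up) gives a fresh database and re-runs data.sql:
version: '3'
services:
  postgres:
    image: postgres:9.6
    container_name: postgres-container
    ports:
      - "5432:5432"
    network_mode: host
    environment:
      - LC_ALL=C.UTF-8
      - POSTGRES_DB=databasename
      - POSTGRES_USER=username
      - POSTGRES_PASSWORD=
      - POSTGRES_PORT=5432
    volumes:
      # executed automatically by the entrypoint on first initialization
      - ./data.sql:/docker-entrypoint-initdb.d/init.sql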

Docker-MySQL5.6 unknown variable lower_case_table_names=1

I want to set the variable lower_case_table_names to 1 in the MySQL 5.6 docker container.
I put the variable in the my.cnf file under [mysqld] in /etc/mysql in the container.
After stopping the container, it didn't start again, giving this error:
unknown variable lower_case_table_names=1
So what I'm asking is: is there another way to set this variable to 1?
I hope you already found your answer, but something like this works:
docker run -p 3306:3306 -e MYSQL_ALLOW_EMPTY_PASSWORD=1 mysql:5.6 mysqld --lower_case_table_names=1
For docker-compose, this works:
services:
  db:
    image: mysql:5.7
    restart: always
    command: --lower_case_table_names=1
    environment:
      MYSQL_DATABASE: 'test'

How do I convert this docker command to docker-compose?

I run this command manually:
$ docker run -it --rm \
--network app-tier \
bitnami/cassandra:latest cqlsh --username cassandra --password cassandra cassandra-server
But I don't know how to convert it to a docker-compose file, especially the container's custom properties such as --username and --password.
What should I write in a docker-compose.yaml file to obtain the same result?
Thanks
Here is a sample of how others have done it: http://abiasforaction.net/apache-cassandra-cluster-docker/
You run the command via command: and set the args via environment:.
Remember, just because you can doesn't mean you should. Compose is not always the best way to launch something; often it can be the lazy way.
If you're running this as a service, I'd suggest building the Dockerfile to start with and then creating systemd/init scripts to rm/relaunch it.
An example Cassandra docker-compose.yml might be:
version: '2'
services:
  cassandra:
    image: 'bitnami/cassandra:latest'
    ports:
      - '7000:7000'
      - '7001:7001'
      - '9042:9042'
      - '9160:9160'
    volumes:
      - 'cassandra_data:/bitnami'
volumes:
  cassandra_data:
    driver: local
although this will not provide your command-line arguments but will start it with the default CMD or ENTRYPOINT.
Since you are actually running a command other than the default, you might not want to do this with docker-compose. Or you can create a new Docker image with this command as the default and provide the username and password as ENVs,
e.g. something like this (untested):
FROM bitnami/cassandra:latest
ENV USER=cassandra
ENV PASSWORD=password
# shell-form CMD so that $USER and $PASSWORD are expanded at run time
CMD cqlsh --username "$USER" --password "$PASSWORD" cassandra-server
and you can build it
docker build -t mycassandra .
and run it with something like:
docker run -it -e "USER=foo" -e "PASSWORD=bar" mycassandra
or in docker-compose:
services:
  cassandra:
    image: 'mycassandra'
    ports:
      - '7000:7000'
      - '7001:7001'
      - '9042:9042'
      - '9160:9160'
    environment:
      USER: user
      PASSWORD: pass
    volumes:
      - 'cassandra_data:/bitnami'
volumes:
  cassandra_data:
    driver: local
You might be looking for something like the following; not sure if it is going to help you:
version: '3'
services:
  my_app:
    image: bitnami/cassandra:latest
    command: /bin/sh -c "cqlsh --username cassandra --password cassandra cassandra-server"
    ports:
      - "8080:8080"
    networks:
      - app-tier
networks:
  app-tier:
    external: true
