I am trying to run a DB2 Docker container using docker-compose, but the container exits with code 0 every time. Below is the docker-compose file:
version: "2"
services:
  db:
    container_name: db2
    image: ibmcom/db2express-c:latest
    environment:
      DB2INST1_PASSWORD: samplepassword!
      LICENSE: accept
    ports:
      - "50000:50000"
    tty: true
    volumes:
      - "./db_create.sh:/opt/db_create.sh"
    command:
      - "/opt/db_create.sh"
I added tty: true after seeing it suggested in some Stack Overflow answers, but it's not working for me. Below is the Docker log:
Starting db2 ... done
Attaching to db2
db2 | Changing password for user db2inst1.
db2 | New password: Retype new password: passwd: all authentication tokens updated successfully.
db2 | libnuma: Warning: /sys not mounted or invalid. Assuming one node: No such file or directory
db2 | SQL1063N DB2START processing was successful.
db2 |
db2 | Creating database "SLIM"...
db2 | Existing "SLIM" database found...
db2 | Dropping and recreating database "SLIM"...
db2 | Connecting to database "SLIM"...
db2 | Creating tables and data in schema "DB2INST1"...
db2 | Creating tables with XML columns and XML data in schema "DB2INST1"...
db2 |
db2 | 'db2sampl' processing complete.
db2 |
db2 exited with code 0
I'm not sure why the container is stopping when the log doesn't show any error. Does anyone know how to keep the container up and running?
Thanks
A container exits with code 0 when its main process has finished successfully. Your command overrides the image's default startup process, so the container only lives as long as /opt/db_create.sh: once the script finishes without leaving a foreground process running (a backgrounded daemon doesn't count), the container exits.
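One common workaround (a minimal sketch, assuming db_create.sh only performs one-off setup and DB2 itself keeps running inside the container) is to end the script with a long-lived foreground command so the container's main process never exits:

#!/usr/bin/env bash
# /opt/db_create.sh - run the existing one-off setup first
# ... db2start, create the SLIM database, load data, etc. ...

# Then block forever so the container keeps a foreground process;
# exec replaces the shell so stop signals reach tail directly.
exec tail -f /dev/null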
Init containers are a great feature in Kubernetes, and I wonder whether docker-compose supports them. They let me run some commands before launching the main application.
I came across https://github.com/docker/compose-cli/issues/1499, which mentions support for init containers, but I can't find any related documentation in their reference.
This was a discovery for me, but yes, it is now possible to use init containers with docker-compose since version 1.29, as can be seen in the issue you linked in your question.
At the time of writing, it seems this feature has not yet found its way into the documentation.
You can define a dependency on another container, with the condition being basically "when that other container has successfully finished its job". This leaves room to define containers that run any kind of script and exit when they are done, before another dependent container is launched.
To illustrate, I crafted an example with a pretty common scenario: spin up a db container, make sure the db is up, and initialize its data prior to launching the application container.
Note: initializing the db (at least as far as the official mysql image is concerned) does not require an init container, so this example is more an illustration than a rock-solid typical workflow.
The complete example is available in a public GitHub repo, so I will only show the key points in this answer.
Let's start with the compose file:
---
x-common-env: &cenv
  MYSQL_ROOT_PASSWORD: totopipobingo

services:
  db:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    environment:
      <<: *cenv

  init-db:
    image: mysql:8.0
    command: /initproject.sh
    environment:
      <<: *cenv
    volumes:
      - ./initproject.sh:/initproject.sh
    depends_on:
      db:
        condition: service_started

  my_app:
    build:
      context: ./php
    environment:
      <<: *cenv
    volumes:
      - ./index.php:/var/www/html/index.php
    ports:
      - 9999:80
    depends_on:
      init-db:
        condition: service_completed_successfully
You can see I define 3 services:
The database, which is the first to start
The init container, which starts only once db is started. This one only runs a script (see below) that exits once everything is initialized
The application container, which will only start once the init container has successfully done its job.
The initproject.sh script run by the init-db container is very basic for this demo: it simply retries connecting to the db every 2 seconds until it succeeds or reaches a limit of 50 tries, then creates a db/table and inserts some data:
#! /usr/bin/env bash
# Test we can access the db container allowing for start
for i in {1..50}; do mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "show databases" && s=0 && break || s=$? && sleep 2; done
if [ ! $s -eq 0 ]; then exit $s; fi
# Init some stuff in db before leaving the floor to the application
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "create database my_app"
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "create table my_app.test (id int unsigned not null auto_increment primary key, myval varchar(255) not null)"
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "insert into my_app.test (myval) values ('toto'), ('pipo'), ('bingo')"
The Dockerfile for the app container is trivial (adding a mysqli driver for PHP) and can be found in the example repo, as well as the PHP script that tests the init was successful when you call http://localhost:9999 in your browser.
The interesting part is to observe what's going on when launching the service with docker-compose up -d.
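For instance (a usage sketch; the exact status wording varies across compose versions), you can watch the init container run to completion while the other services stay up:

docker-compose up -d
# init-db should end up in an exited state with code 0,
# while db and my_app remain up:
docker-compose ps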
The only limit to what can be done with such a feature is probably your imagination ;) Thanks for making me discover this.
I would like to connect my GUI database tool to my SQL database via SSH and Docker.
Currently I can connect to the database from a terminal using ssh user@host and some docker-compose exec mydb ... command. But that is, of course, command-line access to the db.
My needs / my question
Is there a way to have my GUI database tool connect to that DB the same way?
To be more explicit, I would like to connect to that db without any server-side change of any kind (a really important point), so with only local configuration changes. Maybe we can use the same route I take by hand?
What I tried
I already tried some SSH configuration, like ProxyCommand in my ssh config file, but those commands are executed on my computer, so I didn't find any way to succeed with this.
I also searched many times for anyone with the same need, without success.
Does somebody have a good idea?
There are some things to consider here.
Make sure the port mapping is valid in the docker-compose.yaml file; it should be something like:
services:
  ...
  database:
    ...
    ports:
      - "3306:3306" # 3306 is the default non-SSL port for MySQL database images
Make sure the machine you are running the database container on has no firewall rule blocking incoming traffic.
Make sure that, in the database, the user you are trying to connect with has a whitelisted host.
For example, if you want to be able to connect as root from anywhere, you need to check the user table in the mysql database; it should look something like:
mysql> select User, Host from user;
+------------------+-----------+
| User | Host |
+------------------+-----------+
| root | % |
| mysql.infoschema | localhost |
| mysql.session | localhost |
| mysql.sys | localhost |
| root | localhost |
+------------------+-----------+
5 rows in set (0.00 sec)
mysql>
In this case, the root user can connect to this particular DB from anywhere, because of the wildcard (%) host.
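To tie this back to reaching the database over SSH from a GUI tool (a hedged sketch: user@myserver stands in for your actual SSH login, and it assumes the 3306:3306 mapping above), a local SSH tunnel requires no server-side change at all:

# forward local port 3307 to the MySQL port published on the Docker host
ssh -N -L 3307:localhost:3306 user@myserver
# then point the GUI database tool at host 127.0.0.1, port 3307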
If I run docker-compose up with the docker-compose.yml below, it runs successfully, but I'm unable to find the volume anywhere in my Windows 10 files. I checked in C:\Users\Public\Documents\Hyper-V\Virtual hard disks but it is empty.
version: "3"
services:
  database:
    image: postgres:12.2
    volumes:
      - /var/lib/postgresql/data
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
If I try specifying the host location for the volume with a Windows path like below, I get an error about permissions:
version: "3"
services:
  database:
    image: postgres:12.2
    volumes:
      - c:/docker-volumes/database:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
database_1 | The files belonging to this database system will be owned by user "postgres".
database_1 | This user must also own the server process.
database_1 |
database_1 | The database cluster will be initialized with locale "en_US.utf8".
database_1 | The default database encoding has accordingly been set to "UTF8".
database_1 | The default text search configuration will be set to "english".
database_1 |
database_1 | Data page checksums are disabled.
database_1 |
database_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
database_1 | creating subdirectories ... ok
database_1 | selecting dynamic shared memory implementation ... posix
database_1 | selecting default max_connections ... 20
database_1 | selecting default shared_buffers ... 400kB
database_1 | selecting default time zone ... Etc/UTC
database_1 | creating configuration files ... ok
database_1 | running bootstrap script ... 2020-04-27 21:00:29.194 UTC [81] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
database_1 | 2020-04-27 21:00:29.194 UTC [81] HINT: The server must be started by the user that owns the data directory.
database_1 | child process exited with exit code 1
database_1 | initdb: removing contents of data directory "/var/lib/postgresql/data"
What is the easiest way to automatically transfer docker container files like this postgres database to the Windows 10 host?
Docker containers have their own file system, separate from the host, so you need to go inside the container to view the container's files.
For this you need to run the following command from a command prompt:
docker exec -it <container-id> bash
I tried the same docker-compose.yml mentioned in your question, and after running docker-compose up, this is the way I was able to view the container's files.
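Concretely (a sketch; the container id comes from docker ps, and the data path matches the volume in the compose file above):

# find the running container's id
docker ps
# open a shell inside it and inspect the database files
docker exec -it <container-id> bash
ls /var/lib/postgresql/data
# to copy files out to the Windows host instead, docker cp also works:
# docker cp <container-id>:/var/lib/postgresql/data C:\docker-volumes\database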
I have the following docker-compose.yml file:
version: "3"
services:
  dbs-poa-loc001d:
    image: percona
    volumes:
      - ./mysql_backup:/var/lib/mysql
      - ./create_databases:/docker-entrypoint-initdb.d
    hostname: "dbs-poa-loc001d"
    container_name: dbs-poa-loc001d
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
    ports:
      - "3306:3306"
    networks:
      - azion-network
...
When I try to create the dbs-poa-loc001d service (database for the project), I get the following error:
Starting dbs-poa-loc001d ... done
Attaching to dbs-poa-loc001d
dbs-poa-loc001d | Initializing database
dbs-poa-loc001d | mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)
dbs-poa-loc001d | 2019-01-11T01:17:52.060984Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
dbs-poa-loc001d | 2019-01-11T01:17:52.062286Z 0 [ERROR] --initialize specified but the data directory exists and is not writable. Aborting.
dbs-poa-loc001d | 2019-01-11T01:17:52.062299Z 0 [ERROR] Aborting
dbs-poa-loc001d |
dbs-poa-loc001d exited with code 1
This error doesn't happen on my macOS computer at my job, but on my home computer (running Ubuntu 16.04) it does. I did notice that the mysql_backup folder created on the host to hold the volume data is owned by user AND group root. Can anybody tell me what is going on, and how do I fix this? Already tried without success:
Running docker-compose commands using sudo
Manually changing the owner and group of the folder to my actual (low-privileged) user.
My current setup and installed versions are:
Ubuntu 16.04
Docker version 18.09.0, build 4d60db4
docker-compose version 1.23.2, build 1110ad0
docker-compose was installed using sudo pip install docker-compose
Can you try setting the ownership of mysql_backup to 1001:0?
Something like sudo chown -R 1001:0 ./mysql_backup
Or, as an alternative, but only if the folder is empty: sudo chmod 777 ./mysql_backup
According to the Percona Dockerfile, the mysql user id is 1001:
https://github.com/percona/percona-docker/blob/master/percona-server.80/Dockerfile
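To double-check that uid (a hedged aside; it assumes the image's entrypoint passes arbitrary commands through), you can ask the image itself which uid the mysql user maps to:

# print the uid/gid of the mysql user inside a throwaway container
docker run --rm percona id mysql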
I'm trying to build a Docker container for a Neo4j DB. While running the db locally isn't an issue, the container is having issues starting the JVM. Looking through the neo4j:3.2.2 image I'm building my own Dockerfile from, I can't see us using different versions of the JRE. The issue seems to stem from the neo4j.conf, where it crashes on unrecognized VM option flags such as UseG1GC and OmitStackTraceInFastThrow.
The Dockerfile is fairly short
FROM neo4j:3.2.2
ADD ./neo4j.conf /var/lib/neo4j/conf/.
ADD ./data/. /var/lib/neo4j/import
ADD ./scripts/. .
I've also got a docker-compose.yml
version: '2'
services:
  neo4j:
    image: eu.gcr.io/tine-matsans-v2/neo4j:develop
    container_name: neo4j
    build:
      context: ./neo4j/.
    ports:
      - "7474:7474"
      - "7473:7473"
      - "7687:7687"
    environment:
      - NEO4J_USERNAME=neo4j
      - NEO4J_PASSWORD=litago
I'm on a Windows 10 machine, but the image builds a Unix container. My colleague has no issues whatsoever running the container with the same configs, though he's using a Mac. That should not be relevant, as the issue is within the container.
neo4j | Active database: graph.db
neo4j | Directories in use:
neo4j | home: /var/lib/neo4j
neo4j | config: /var/lib/neo4j/conf
neo4j | logs: /var/lib/neo4j/logs
neo4j | plugins: /var/lib/neo4j/plugins
neo4j | import: /var/lib/neo4j/import
neo4j | data: /var/lib/neo4j/data
neo4j | certificates: /var/lib/neo4j/certificates
neo4j | run: /var/lib/neo4j/run
neo4j | Starting Neo4j.
neo4j | Unrecognized VM option 'UseG1GC
neo4j | Did you mean '(+/-)UseG1GC'?
neo4j | Error: Could not create the Java Virtual Machine.
neo4j | Error: A fatal exception has occurred. Program will exit.
Has anyone run into similar issues? I've searched through several stack overflow posts as well as tried to read up on how the JVM and Containers work, but I can't find any solid information to help me sort this out.
I ran into this same issue. It turned out to be the line endings on the neo4j.conf file. I used VS Code to switch the line endings to 'LF', ran docker-compose up, and everything worked. Hope this helps.
Visual Studio Code: How to show line endings
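If you prefer fixing it from the command line (a sketch, assuming neo4j.conf sits in the build context next to the Dockerfile), stripping the carriage returns achieves the same thing:

# convert CRLF line endings to LF in place
dos2unix neo4j.conf
# or, if dos2unix is not installed, with sed:
sed -i 's/\r$//' neo4j.conf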
I had to stop docker-machine, open the conf file in Notepad++, convert the file to UTF-8 (even if it's already UTF-8), change the EOL to Unix, save, run docker-machine start, then docker-compose up, and it works.
I easily solved this issue with Sublime Text. You can check your current line endings under menu -> View -> Line Endings. Just set it to Unix and save.
I hope this helps others.
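As a preventive measure (a hedged aside, assuming the project is tracked in git and the file keeps reverting to CRLF on Windows checkouts), you can tell git not to convert line endings on checkout:

# keep LF endings in the working tree for this clone
git config core.autocrlf input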