I'm having trouble creating an InfluxDB user in a Dockerfile. I want the Dockerfile to also install and start influxd. For PostgreSQL I can use this in my Dockerfile after installing PostgreSQL:
RUN /etc/init.d/postgresql start &&\
psql --command "CREATE USER matt WITH PASSWORD 'test123';" &&\
createdb test_db &&\
psql --command "GRANT ALL PRIVILEGES ON DATABASE test_db TO matt;"
Is there an equivalent for InfluxDB?
I ultimately used pieces of the tutumcloud/influxdb repo: the Dockerfile starts a shell script that, when it runs, starts influxd in the background, creates an admin user, and then foregrounds the influxd process.
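A minimal sketch of such a startup script, assuming an InfluxDB 1.x image where the influx CLI and curl are available; the admin credentials are placeholders:

#!/bin/bash
set -e
# start influxd in the background
influxd &
INFLUXD_PID=$!
# wait until the HTTP API answers on the default port 8086
until curl -s http://localhost:8086/ping > /dev/null; do sleep 1; done
# create the admin user (placeholder name and password)
influx -execute "CREATE USER admin WITH PASSWORD 'test123' WITH ALL PRIVILEGES"
# bring influxd back to the foreground so the container keeps running
wait $INFLUXD_PID

The Dockerfile then only needs something like CMD ["/run.sh"] pointing at this script.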
I got a MariaDB database dump from a colleague, and he asked me to run it in a Docker container.
So I executed the following:
docker pull mariadb:10.4.26
then created the container
docker run --name test_smdb -e MYSQL_ROOT_PASSWORD=<some_password> -p 3306:3306 -d mariadb:10.4.26
then connected to the container:
docker exec -it test_smdb mariadb --user root -p<some_password>
and created a database in it from the mariadb prompt:
MariaDB [(none)]> CREATE DATABASE smdb_dev;
So far, so good. But when I tried to import the dump into it via this command:
docker exec -i test_smdb mariadb -uroot -p<some_password> --force < C:\smdb-dev.sql
I get a lot of lines like
ERROR 1046 (3D000) at line 22: No database selected.
So I am not sure what exactly the issue is.
1) Should I define a database into which the dump should be imported? If yes, how exactly? I looked at different pages, like:
https://hub.docker.com/_/mariadb, especially this:
$ docker exec -i some-mariadb sh -c 'exec mariadb -uroot -p"$MARIADB_ROOT_PASSWORD"' < /some/path/on/your/host/all-databases.sql
and I see no database mentioned there.
Or
2) Has the colleague not created the dump in the correct way?
I do not use MariaDB in a Docker environment myself, but I do use MariaDB on a Linux machine, so it should be really similar.
You said you used this command:
docker exec -i test_smdb mariadb -uroot -p<some_password> --force < C:\smdb-dev.sql
If we break it down:
docker exec -i test_smdb is the Docker part, where you ask Docker to execute the following command inside the test_smdb container (or close to it, I'm not a daily Docker user).
mariadb -uroot -p<password> --force is the interesting part. You ask your shell to open mariadb and log in as root with the given password, plus an extra --force flag. But you never specify which database the dump should be imported into.
In my gist, again for MariaDB outside Docker, but I really think it should be the same, I have the following command: mariadb -uusername -p<password> <DB_NAME> < /path/to/file.sql
So I would try something like:
docker exec -i test_smdb mariadb -uroot -p<some_password> smdb_dev --force < C:\smdb-dev.sql
The command below should work, I believe:
docker exec -i test_smdb sh -c "exec mariadb -uroot -pPASSWORD smdb_dev" < /some/path/on/your/host/all-databases.sql
Reference: Import local database to remote host Docker container
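If re-exporting is an option, the "No database selected" errors also go away when the dump itself contains the CREATE DATABASE and USE statements. A sketch, assuming your colleague can re-export with mysqldump (or mariadb-dump); the source database name and credentials are placeholders:

mysqldump -uroot -p --databases <source_db_name> > smdb-dev.sql

Because --databases adds CREATE DATABASE and USE to the dump, it can then be imported without naming a target database on the command line:

docker exec -i test_smdb mariadb -uroot -p<some_password> < C:\smdb-dev.sql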
I tried to install PostgreSQL on Manjaro Linux and create a database and user.
I executed the following commands:
$ sudo pacman -S postgresql postgis
$ sudo -u postgres -i
Then
$ initdb -D '/var/lib/postgres/data'
returns an 'access denied' error.
I was trying to create a connection via pgAdmin4.
Then I got the following error:
createuser: could not connect to database postgres: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
I found a lot of similar issues, so I tried some solutions, like changing the path of the database location or reinstalling postgresql.
My aim is to use PostgreSQL with Rails, but now I get
PG::ConnectionBad: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/run/postgresql/.s.PGSQL.5432"
when I try to execute
$ rails db:create
It seems I do not understand the whole PostgreSQL configuration process. Thank you!
UPDATED:
The following commands solved the problem:
sudo pacman -R postgresql
sudo pacman -S postgresql postgis
sudo su - postgres -c "initdb -E UTF8 -D '/var/lib/postgres/data'"
systemctl start postgresql
systemctl status postgresql
sudo su - postgres
createuser user1
createdb -O user1 db1
psql db1 -U user1
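Depending on the authentication settings in pg_hba.conf, Rails may also need a password for the role, and the service should be enabled at boot. A sketch, assuming the role user1 created above is the one referenced in config/database.yml and that 'some_password' is a placeholder:

sudo systemctl enable postgresql
sudo -u postgres psql -c "ALTER USER user1 WITH PASSWORD 'some_password';"
rails db:create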
I have this script with some commands like:
sudo docker exec $container psql -U postgres -c "CREATE DATABASE $gisdb ;"
The $container parameter is no problem, but I cannot get the $gisdb parameter to expand to the right value in the CREATE DATABASE command. Is there another way to do this, or do I need to redesign this command or use hardcoded values?
DB name
According to the PostgreSQL documentation, creating a database requires the name of the database:
CREATE DATABASE _name_
So $gisdb in your code is the name of the database to be created.
You might hardcode this name, but then you should know exactly which name you are hardcoding,
because, generally, other services depend on the DB name.
So check the documentation of your code.
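As for getting $gisdb to expand, the usual culprit is quoting: the shell does not expand variables inside single quotes, but it does inside double quotes. A minimal sketch of the relevant part of the script, assuming $container and $gisdb come in as positional parameters (an assumption, since the rest of the script is not shown):

#!/bin/bash
container="$1"   # container name, first argument (assumed)
gisdb="$2"       # database name, second argument (assumed)
# double quotes let the shell expand $gisdb before psql receives the statement
sudo docker exec "$container" psql -U postgres -c "CREATE DATABASE $gisdb;"

Called as ./create_gisdb.sh my_container my_gis_db (names are illustrative), this creates the database my_gis_db.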
Get rid of sudo
You can omit sudo before the docker command.
Just add the current user to the docker group:
sudo usermod -aG docker $USER
Then, after logging in again, you'll be able to run docker without sudo.
I am creating a MySQL 5.6 Docker container using a bash script and I would like to change the password.
How can I send SQL commands from bash to the Docker container?
build:
sudo docker build -t mysql-5.6 -f ./.Dockerfile .
run.sh:
#!/bin/bash
sudo docker run --name=mysql1 -d mysql-5.6
sudo docker exec -it mysql1 mysql -uroot -p$base_password \
<<< "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('new_pass');"
You need to bind the MySQL port, as described here. To keep port 3306 you can just expose it on your host the following way:
sudo docker run --name=mysql1 -p 3306:3306 -d mysql-5.6
After that you should be able to use mysql -h 127.0.0.1 -u USER -pPASSWORD on your local host. This will then allow you to send commands to your Docker container.
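To send the SQL from the bash script itself, you can also pipe it into docker exec -i (note: -i without -t, so stdin is piped rather than attached to a TTY). A minimal sketch, assuming the mysql1 container and $base_password from run.sh above, and MySQL 5.6's SET PASSWORD syntax:

sudo docker exec -i mysql1 mysql -uroot -p"$base_password" <<'SQL'
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('new_pass');
SQL

After this runs, subsequent logins have to use new_pass instead of $base_password.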
In a container, I am trying to start mysqld.
I was able to create an image and push it to the registry, but when I want to start it, the /var/lib/mysql volume can't be initialized, as I try to do a chown mysql on it and that is not allowed.
I checked Docker-specific solutions, but so far I couldn't make any of them work.
Is there a way to set the right permissions on a bind-mounted folder from Bluemix? Or is the option --volumes-from supported? I can't seem to make it work.
The only solution I can see right now is running mysqld as root, but I would rather not.
Try with a bind-mount
I created a volume on Bluemix using cf ic volume create database.
I try to run mysql_install_db on my db container to initialize its content:
docker run --name init_vol -v database:/var/lib/mysql registry.ng.bluemix.net/<namespace>/<image>:<tag> mysql_install_db --user=mysql
mysql_install_db is supposed to populate /var/lib/mysql and set the rights to the owner given in the --user option, but I get:
chown: changing ownership of '/var/lib/mysql': Permission denied.
I also tried the above in different ways, using sudo or a script. I tried with mysql_install_db --user=root, which does set up my folder correctly, except that it is owned by the root user, and I would rather keep mysql running as the mysql user.
Try with --volumes-from and a data container
I create a data container with a volume /var/lib/mysql:
docker run --name db_data -v /var/lib/mysql registry.ng.bluemix.net/<namespace>/<image>:<tag> mysql_install_db --user=mysql
I run my db container with the option --volumes-from:
docker run --name db_srv --volumes-from=db_data registry.ng.bluemix.net/<namespace>/<image>:<tag> sh -c 'mysqld_safe & tail -f /var/log/mysql.err'
docker inspect db_srv shows:
[{ "BluemixApp": null, "Config": {
...,
"WorkingDir": "",
... } ... }]
cf ic logs db_srv shows:
150731 15:25:11 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
150731 15:25:11 [Note] /usr/sbin/mysqld (mysqld 5.5.44-0ubuntu0.14.04.1-log) starting as process 377 ..
/usr/sbin/mysqld: File './mysql-bin.index' not found (Errcode: 13)
150731 15:25:11 [ERROR] Aborting
which is due to --volumes-from not being supported, and to the data created in the first run not persisting into the second one.
In IBM Containers, the user namespace is enabled for the Docker engine. The "Permission denied" issue appears to be because NFS is not allowing the mapped user from the container to perform the operation.
On my local setup, I mounted an NFS share (exported with the no_root_squash option) on the Docker host and attached the volume to a container using the -v option. When the container is spawned by Docker with the user namespace disabled, I am able to change the ownership of the bind-mount inside the container. But with a user-namespace-enabled Docker, I get
chown: changing ownership of ‘/mnt/volmnt’: Operation not permitted
The volume created by cf (cf ic volume create ...) is an NFS mount; to verify, just run mount -t nfs4 from the container.
When the user namespace is enabled for the Docker engine, the effective root inside the container is a non-root user outside the container process, and NFS does not allow that mapped non-root user to perform the chown operation on the volume inside the container.
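A quick way to see what is going on from inside the container, assuming a shell is available in the image:

# show how container UIDs map to host UIDs; with user namespaces enabled,
# root (UID 0) inside the container maps to a non-zero UID on the host
cat /proc/self/uid_map
# confirm the volume really is NFS, as suggested above
mount -t nfs4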
Here is the work-around you may want to try.
In the Dockerfile
1.1 Create the user mysql with UID 1010, or any free ID, before the MySQL installation.
Other containers or new containers can then access the mysql data files on the volume with UID 1010.
RUN groupadd --gid 1010 mysql
RUN useradd --uid 1010 --gid 1010 -m --shell /bin/bash mysql
1.2 Install MySQL but do not initialize the database
RUN apt-get update && apt-get install -y mysql-server && rm -rf /var/lib/mysql && rm -rf /var/lib/apt/lists/*
In the entrypoint script
2.1 Create the mysql data directory under the bind-mount as user mysql and then link it as /var/lib/mysql
Suppose the volume is mounted at /mnt/db inside the container (ice run -v <volume name>:/mnt/db --publish 3306... or cf ic run --volume <volume name>:/mnt/db ...).
Define mountpath env var
MOUNTPATH="/mnt/db"
Add mysql to group "root"
adduser mysql root
Set permissions on the mounted volume so that root group members can create directories and files
chmod 775 $MOUNTPATH
Create mysql directory under Volume
su -c "mkdir -p /mnt/db/mysql" mysql
su -c "chmod 700 /mnt/db/mysql" mysql
Link the directory to /var/lib/mysql
ln -sf /mnt/db/mysql /var/lib/mysql
chown -h mysql:mysql /var/lib/mysql
Remove mysql from group root
deluser mysql root
chmod 755 $MOUNTPATH
2.2 The first time, initialize the database as user mysql
su -c "mysql_install_db --datadir=/var/lib/mysql" mysql
2.3 Start the mysql server as user mysql
su -c "/usr/bin/mysqld_safe" mysql
You have multiple questions here. I will try to address some. Perhaps that will get you a step further in the right direction.
--volumes-from is not supported yet in IBM Containers. You can get around that by using the same --volume (-v) option on the first and subsequent containers, instead of using -v on the first container creation command and --volumes-from on the subsequent ones.
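With the commands from the question, that workaround would look roughly like this: a sketch that simply replaces --volumes-from with the same -v option on both containers, reusing the database volume and image names from above:

docker run --name db_data -v database:/var/lib/mysql registry.ng.bluemix.net/<namespace>/<image>:<tag> mysql_install_db --user=mysql
docker run --name db_srv -v database:/var/lib/mysql registry.ng.bluemix.net/<namespace>/<image>:<tag> sh -c 'mysqld_safe & tail -f /var/log/mysql.err'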
The --user option is also not supported by IBM Containers.
I see your syntax for using --user (I suppose on your local Docker host) is not correct. All options for the docker run command must come before the image name. Anything after the image name is considered a command to run inside the container. In this case, "--user=mysql" will be considered a command that the system will attempt to run, and it will fail.
The last error message you shared shows that some file was not found in the working directory, which causes the app to abort. You may work around that by using, as the command to run in the container, a script that changes to the right directory first.
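A minimal sketch of such a wrapper, assuming the data directory is /var/lib/mysql (the location hinted at by the './mysql-bin.index' error) and that the script is baked into the image as /start-db.sh (an assumed name):

#!/bin/sh
# /start-db.sh (assumed name): change to the data directory before starting mysqld
cd /var/lib/mysql || exit 1
mysqld_safe &
tail -f /var/log/mysql.err

It would then be started with docker run --name db_srv ... registry.ng.bluemix.net/<namespace>/<image>:<tag> /start-db.sh instead of the inline sh -c command.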