RedisAI Docker container not setting password

I am running the latest official RedisAI image from Docker Hub, but I can't seem to set a password.
I have edited redis.conf, uncommenting requirepass and adding my own password.
I then run the image with
sudo docker run --name test -v /path/to/redis/conf/redis.conf:/usr/local/etc/redis/redis.conf -p 6379:6379 --gpus all -it --rm redisai/redisai:latest-gpu
and when I investigate the configuration
127.0.0.1:6379> config get requirepass
1) "requirepass"
2) ""
Do I need to set a proper binding?
127.0.0.1:6379> config get bind
1) "bind"
2) ""
127.0.0.1:6379> auth <password>
(error) ERR AUTH <password> called without any password configured for the default user. Are you sure your configuration is correct?
I am also able to access the server from outside localhost.

Turns out I was just missing the final argument, redis-server /redis.conf, which starts the server with the appropriate configuration file:
sudo docker run --name test -v /path/to/redis/conf/redis.conf:/redis.conf -p 6379:6379 --gpus all -it --rm redisai/redisai:latest-gpu redis-server /redis.conf
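A quick way to confirm the configuration was picked up (a hedged check; <password> stands in for whatever you set in redis.conf):
# with requirepass active, supply the password via -a
redis-cli -a <password> config get requirepass
# 1) "requirepass"
# 2) "<password>"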

Related

nextcloudpi in docker: Cannot use mounted external storage

Recently I installed nextcloudpi in docker with
sudo docker run -d -p 4443:4443 -p 443:443 -p 80:80 -v /home/user/storage/nextcloud:/data --name nextcloudpi ownyourbits/nextcloudpi-armhf <ip-adress-of-pi>
The folder /home/user/storage is mounted external storage (set up following this tutorial: https://www.techjunkie.com/build-nas-raspberry-pi-linux/).
It gives me this error:
Running nc-init
Setting up a clean Nextcloud instance... wait until message 'NC init done'
Setting up database...
Setting up Nextcloud...
Console has to be executed with the user that owns the file config/config.php
Current user: www-data
Owner of config.php: root
Try adding 'sudo -u root ' to the beginning of the command (without the single quotes)
If running with 'docker exec' try adding the option '-u root' to the docker command (without the single quotes)
I tried
sudo docker run -d -p 4443:4443 -p 443:443 -p 80:80 -v /home/user:/data --name nextcloudpi ownyourbits/nextcloudpi-armhf <ip-adress-of-pi>
and everything works well as far as I can tell: I can load the nextcloudpi config UI as well as the Nextcloud GUI.
I have tried some chown and chmod on the /home/user/storage folder, without any success.
How can I use the external storage as Nextcloud's data directory?
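Since the error is about config.php being owned by root while the container runs as www-data, one hedged thing to try on the host is handing the mounted directory to the container's web user (mapping www-data to UID/GID 33 is an assumption based on Debian-based images):
# assumption: www-data inside the container maps to UID/GID 33
sudo chown -R 33:33 /home/user/storage/nextcloud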

Docker Login Failed Using mssql-tools

SSMS connects just fine. Run this code in PowerShell to reproduce the following error. Any advice? Thanks!
Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : Login failed for user 'SA'.
docker pull mcr.microsoft.com/mssql/server
#Sql Server cmd Tools
docker pull mcr.microsoft.com/mssql-tools
#Set-up the Container:
docker run `
--name MSSQL-Latest `
-p 1433:1433 `
-e "ACCEPT_EULA=Y" `
-e "SA_PASSWORD=F00B4rB4z!" `
-v C:\Docker\SQL:/sql `
-d mcr.microsoft.com/mssql/server:latest
docker exec MSSQL-Latest /opt/mssql-tools/bin/sqlcmd `
-S localhost `
-U "SA" `
-P "SA_PASSWORD=F00B4rB4z!" ```
Just remove "SA_PASSWORD=" from the password you are trying to log in with. You assigned the value "F00B4rB4z!" to the SA_PASSWORD environment variable, not "SA_PASSWORD=F00B4rB4z!".
If you then want to execute commands inside your container, you can add the options --tty to allocate a pseudo-TTY and --interactive to keep STDIN open, or just use -it for short.
The correct command should be:
docker exec -it MSSQL-Latest /opt/mssql-tools/bin/sqlcmd -S localhost -U "SA" -P "F00B4rB4z!"
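To verify the login actually works, a quick hedged smoke test (-Q runs a single query and exits):
docker exec -it MSSQL-Latest /opt/mssql-tools/bin/sqlcmd -S localhost -U "SA" -P "F00B4rB4z!" -Q "SELECT @@VERSION"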
Keep in mind that if you previously created a container whose SA account was initialized with a different SA_PASSWORD, that password is persisted in the data files on your host disk at the path you specified for the bind mount. The bind mount will then override SA_PASSWORD in any future containers created with the same mount.
Also, please try to avoid the latest tag: with it you have no guarantee which version of the Docker image you are actually running.
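For example, pinning to a published release tag (2019-latest is used here on the assumption that it is available in the registry when you pull):
docker pull mcr.microsoft.com/mssql/server:2019-latest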

docker mysql, send sql commands during exec

I am creating a MySQL 5.6 container using a bash script and I would like to change the password.
How can I send SQL commands from bash to the container?
build:
sudo docker build -t mysql-5.6 -f ./.Dockerfile .
run.sh:
#!/bin/bash
sudo docker run --name=mysql1 -d mysql-5.6
sudo docker exec -i mysql1 mysql -uroot -p"$base_password" \
<<< "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('new_pass');"
You need to publish the MySQL port, as described here. To keep port 3306 you can just expose it on your host the following way:
sudo docker run --name=mysql1 -p 3306:3306 -d mysql-5.6
After that you should be able to use mysql -u USER -p on your local host. This will then allow you to send commands to your Docker container.
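For the original question of piping SQL from bash through docker exec, a minimal sketch (assuming the container mysql1 is running and $base_password is set; note -i without -t, since a pseudo-TTY breaks piped stdin):
echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('new_pass');" | \
  sudo docker exec -i mysql1 mysql -uroot -p"$base_password"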

cannot run container after commit changes

Just basic and simple steps illustrating what I have tried:
docker pull mysql/mysql-server
sudo docker run -i -t mysql/mysql-server:latest /bin/bash
yum install vi
vi /etc/my.cnf -> bind-address=0.0.0.0
exit
docker ps
docker commit new_image_name
docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret -d new_image_name
docker ps -a shows STATUS: Exited (1)
Please let me know what I did wrong.
Instead of trying to modify an existing image, try using (for testing) MYSQL_ROOT_HOST=%.
That would allow root login from any IP. (As seen in docker-library/mysql issue 241)
sudo docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 -e MYSQL_ROOT_HOST=% -d mysql/mysql-server:latest
The README mentions:
By default, MySQL creates the 'root'@'localhost' account.
This account can only be connected to from inside the container, requiring the use of the docker exec command as noted under Connect to MySQL from the MySQL Command Line Client.
To allow connections from other hosts, set this environment variable.
As an example, the value "172.17.0.1", which is the default Docker gateway IP, will allow connections from the Docker host machine.
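With MYSQL_ROOT_HOST=% set as above, a connection from the host should then work (a hedged check; it assumes a mysql client is installed on the host):
mysql -h 127.0.0.1 -P 3306 -uroot -p123456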

how to save a docker redis container

I'm having trouble creating an image of a docker redis container with the data in the redis database. At the moment I'm doing this:
docker pull redis
docker run --name my-redis -p 6379:6379 -d redis
redis-cli
127.0.0.1:6379> set hello world
OK
127.0.0.1:6379> save
OK
127.0.0.1:6379> exit
docker stop my-redis
docker commit my-redis redis_with_data
docker run --name my-redis2 -p 6379:6379 -d redis_with_data
redis-cli
127.0.0.1:6379> keys *
(empty list or set)
I'm obviously not understanding something pretty basic here. Doesn't docker commit create a new image from an existing container?
Okay, I've been doing some digging. The default redis image on Docker Hub uses a data volume, which is mounted at /data in the container. To share this volume between containers, you have to start a new container with the following argument:
docker run -d --volumes-from <name-of-container-you-want-the-data-from> \
--name <new-container-name> -p 6379:6379 redis
Note that the order of the arguments is important, otherwise docker run will fail silently.
docker volume ls
will tell you which data volumes Docker has already created on your computer. I haven't yet found a way to give these volumes a readable name rather than a long random string (though see the named-volume sketch below).
I also haven't yet found a way to mount a data volume directly, other than using the --volumes-from option.
Okay. I now have it working, but it's kludgey.
With
docker volume ls
docker volume inspect <id of docker volume>
you can find the path of the docker volume on the local file-system.
You can then mount this in a new container as follows:
docker run -d -v /var/lib/docker/volumes/<some incredibly long string>/_data:/data \
--name my-redis2 -p 6379:6379 redis
This is obviously not the way you're meant to do this. I'll carry on digging.
I've put everything I've discovered up to now in a blog post on medium.com.
Maybe it will be useful for somebody.
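Incidentally, the naming problem from above has a direct solution: docker volume create accepts a name, and -v <name>:/data mounts that volume by name (a sketch, with redis-data as an illustrative name):
# create a named volume and mount it in place of the anonymous one
docker volume create redis-data
docker run -d -v redis-data:/data --name my-redis -p 6379:6379 redis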
Data in Docker is not persistent; when you remove the container, your data is gone. To prevent this, you have to share a folder on the host machine with your container: when the container is recreated, it picks the data up from that folder on the host.
You can read more about it in the Docker docs: https://docs.docker.com/engine/tutorials/dockervolumes/#data-volumes
From the redis container docs:
Run redis-server
docker run -d --name redis -p 6379:6379 dockerfile/redis
Run redis-server with persistent data directory. (creates dump.rdb)
docker run -d -p 6379:6379 -v <data-dir>:/data --name redis dockerfile/redis
Run redis-server with persistent data directory and password.
docker run -d -p 6379:6379 -v <data-dir>:/data --name redis dockerfile/redis redis-server /etc/redis/redis.conf --requirepass <password>
Source:
https://github.com/dockerfile/redis
Using a data volume and sharing the RDB file manually is not ugly; in fact, that's exactly what data volumes are designed for: separating data from the container.
But if you really need or want to save the data into the image and share it that way, you can just change the redis working directory from the volume /data to somewhere else:
Option 1 is changing --dir when starting the redis container:
docker run -d redis --dir /tmp
Then you can follow your steps to create a new image. Note that only /tmp can be used with this method, due to permission issues.
Option 2 is creating a new image with a changed WORKDIR:
FROM redis
RUN mkdir /opt/redis && chown redis:redis /opt/redis
WORKDIR /opt/redis
Then run docker build -t redis-new-image . and use this image to do your job.
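Putting it together with the steps from the question (a hedged sketch; my-redis3 is an illustrative name, and it assumes dump.rdb now lands in /opt/redis, outside the volume, as this option intends):
docker build -t redis-new-image .
docker run -d --name my-redis3 -p 6379:6379 redis-new-image
redis-cli set hello world
redis-cli save
docker stop my-redis3
docker commit my-redis3 redis_with_data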
