Docker Login Failed Using mssql-tools - docker

SSMS connects just fine. Running the code below in PowerShell reproduces the following error. Any advice? Thanks!
Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : Login failed for user 'SA'.
docker pull mcr.microsoft.com/mssql/server
# SQL Server command-line tools
docker pull mcr.microsoft.com/mssql-tools
# Set up the container:
docker run `
--name MSSQL-Latest `
-p 1433:1433 `
-e "ACCEPT_EULA=Y" `
-e "SA_PASSWORD=F00B4rB4z!" `
-v C:\Docker\SQL:/sql `
-d mcr.microsoft.com/mssql/server:latest
docker exec MSSQL-Latest /opt/mssql-tools/bin/sqlcmd `
-S localhost `
-U "SA" `
-P "SA_PASSWORD=F00B4rB4z!" ```

Just remove "SA_PASSWORD=" from the password you are trying to log in with. You assigned the value "F00B4rB4z!" to the SA_PASSWORD environment variable, not "SA_PASSWORD=F00B4rB4z!".
If you then want to execute commands inside your container, you can add the options --interactive to keep STDIN open and --tty to allocate a pseudo-TTY, or just use -it for short.
The correct command should be:
docker exec -it MSSQL-Latest /opt/mssql-tools/bin/sqlcmd -S localhost -U "SA" -P "F00B4rB4z!"
Keep in mind that if you earlier created a container whose system databases were initialized with a different SA_PASSWORD and persisted to your host disk through the path you specified in the bind mount, that saved password wins: containers created later with the same bind mount will use it, overriding the SA_PASSWORD you pass in.
Also, please try to avoid the latest tag. With it you are not guaranteed which version of the Docker image you are actually running, which can lead to surprises.
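Putting the answer's fixes together, a corrected session could look like the sketch below (2019-latest stands in for whatever pinned tag you choose; the password is the one from the question):

```shell
# Pull a pinned tag instead of :latest
docker pull mcr.microsoft.com/mssql/server:2019-latest

# Same run command as before, but against the pinned tag
docker run --name MSSQL-Latest -p 1433:1433 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=F00B4rB4z!" -d mcr.microsoft.com/mssql/server:2019-latest

# -P takes only the password value, not the env-var assignment
docker exec -it MSSQL-Latest /opt/mssql-tools/bin/sqlcmd -S localhost -U "SA" -P "F00B4rB4z!"
```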

Related

docker run - autokill container already in use?

I was following this guide on customizing MySQL databases in Docker, and ran this command multiple times after making tweaks to the mounted sql files:
docker run -d -p 3306:3306 --name my-mysql -v /Users/pneedham/dev/docker-testing/sql-scripts:/docker-entrypoint-initdb.d/ -e MYSQL_ROOT_PASSWORD=supersecret -e MYSQL_DATABASE=company mysql
On all subsequent executions of that command, I would see an error like this:
docker: Error response from daemon: Conflict. The container name "/my-mysql" is already in use by container "9dc103de93b7ad0166bb359645c12d49e0aa4a3f2330b5980e455cec24843663". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
What I'd like to know is whether that docker run command can be modified to auto-kill the previous container (if it exists)? Or if there is a different command that has the same desired result.
If I were to create a shell script to do that for me, I'd first run docker ps -aqf "name=mysql" and if there is any output, use that resulting container ID by running docker rm -f $containerID. And then run the original command.
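That shell-script idea can be sketched like this (untested; the name filter and run arguments mirror the question):

```shell
#!/bin/sh
# If a container with this name already exists (running or stopped), remove it first
ID=$(docker ps -aqf "name=my-mysql")
if [ -n "$ID" ]; then
  docker rm -f "$ID"
fi

# Then start the container as in the original command
docker run -d -p 3306:3306 --name my-mysql \
  -v /Users/pneedham/dev/docker-testing/sql-scripts:/docker-entrypoint-initdb.d/ \
  -e MYSQL_ROOT_PASSWORD=supersecret -e MYSQL_DATABASE=company mysql
```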
The docker run command has a --rm flag that deletes the container after it exits (see the docs). So just change your command to:
docker run --rm -d -p 3306:3306 --name my-mysql -v /Users/pneedham/dev/docker-testing/sql-scripts:/docker-entrypoint-initdb.d/ -e MYSQL_ROOT_PASSWORD=supersecret -e MYSQL_DATABASE=company mysql

Using Docker in interactive mode using an official Microsoft .Net Core SDK image

I'm trying to enter interactive mode using an official Microsoft .NET Core image and run typical .NET commands such as 'dotnet build', but all I get is a '>' prompt. What am I doing wrong?
I'm using the following command:
docker run -it -v $(pwd):/app' -w '/app' -p 8000:80 mcr.microsoft.com/dotnet/core/sdk /bin/bash
I was hoping to get a root command prompt, but all I'm getting is '>'
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
Options:
  -d, --detach               Detached mode: run command in the background
      --detach-keys string   Override the key sequence for detaching a container
  -e, --env list             Set environment variables
  -i, --interactive          Keep STDIN open even if not attached
      --privileged           Give extended privileges to the command
  -t, --tty                  Allocate a pseudo-TTY
  -u, --user string          Username or UID (format: <name|uid>[:<group|gid>])
  -w, --workdir string       Working directory inside the container
After starting your container, run docker ps to get its [Container ID].
Then you can run a command inside it like this: docker exec -it [Container ID] bash
You are missing the initial quote here:
-v $(pwd):/app'
That should be:
-v "$(pwd):/app"
It needs to be double quotes so that $(pwd) is still evaluated by the shell; inside single quotes the shell would pass along the literal text $(pwd), which is not a valid path. The unbalanced quote is also why you only saw '>': that is the shell's continuation prompt, waiting for the quote to be closed.
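The quoting difference is easy to demonstrate without Docker at all; this little sketch just shows what the shell hands over in each case:

```shell
#!/bin/sh
# Double quotes: the shell expands $(pwd) before the program ever sees it
double="$(pwd)"

# Single quotes: the literal text survives unexpanded
single='$(pwd)'

echo "double -> $double"
echo "single -> $single"
```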
Since no one has given a direct answer yet, this one works for me:
docker run --rm -it -v $PWD:/app -w /app -p 8000:80 mcr.microsoft.com/dotnet/core/sdk /bin/bash

cannot run container after commit changes

Just basic and simple steps illustrating what I have tried:
docker pull mysql/mysql-server
sudo docker run -i -t mysql/mysql-server:latest /bin/bash
yum install vi
vi /etc/my.cnf -> bind-address=0.0.0.0
exit
docker ps
docker commit <container_id> new_image_name
docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret -d new_image_name
docker ps -a STATUS - Exited (1)
Please let me know what I did wrong.
Instead of trying to modify an existing image, try using (for testing) MYSQL_ROOT_HOST=%.
That allows root login from any IP. (As seen in docker-library/mysql issue 241)
sudo docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 -e MYSQL_ROOT_HOST=% -d mysql/mysql-server:latest
The README mentions:
By default, MySQL creates the 'root'@'localhost' account.
This account can only be connected to from inside the container, requiring the use of the docker exec command as noted under Connect to MySQL from the MySQL Command Line Client.
To allow connections from other hosts, set this environment variable.
As an example, the value "172.17.0.1", which is the default Docker gateway IP, will allow connections from the Docker host machine.
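With MYSQL_ROOT_HOST=% set as above, a quick check from the Docker host could look like this (it assumes the stock mysql client is installed on the host; 123456 is the password from the run command):

```shell
# 127.0.0.1 forces a TCP connection through the published port
# rather than going through the host's local MySQL socket
mysql -h 127.0.0.1 -P 3306 -u root -p123456 -e "SELECT CURRENT_USER();"
```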

How to take Oracle-xe-11g backup from running Docker container

I am running oracle-xe-11g on rancher os. I want to take the data backup of my DB. When I tried with the command
docker exec -it $Container_Name /bin/bash
then I entered:
exp userid=username/password file=test.dmp
It is working fine, and it created the test.dmp file.
But I want to run the command with the docker exec command itself. When I tried this command:
docker exec $Container_Name sh -C exp userid=username/password file=test.dmp
I am getting this error message: sh: 0: Can't open exp.
The problem is:
When bash is run with the -c switch, it is not an interactive or login shell, so it won't read the usual startup scripts. Anything set in /etc/profile, ~/.bash_profile, ~/.bash_login, or ~/.profile is skipped.
Workaround:
run your container with following command:
sudo docker run -d --name Oracle-DB -p 49160:22 -p 1521:1521 -e ORACLE_ALLOW_REMOTE=true -e ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe -e PATH=/u01/app/oracle/product/11.2.0/xe/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin -e ORACLE_SID=XE -e SHLVL=1 wnameless/oracle-xe-11g
What I'm doing is specifying the environment variables set in the container using docker.
Now for generating the backup file:
sudo docker exec -it e0e6a0d3e6a9 /bin/bash -c "exp userid=system/oracle file=/test.dmp"
Please note the file will be created inside the container, so you need to copy it to the Docker host via the docker cp command.
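For instance, reusing the container ID and file path from the command above:

```shell
# Copy the dump out of the container into the current directory on the host
docker cp e0e6a0d3e6a9:/test.dmp ./test.dmp
```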
This is how I did it: mount a volume into the container, e.g. /share/backups/, then execute:
docker exec -it oracle /bin/bash -c "ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe ORACLE_SID=XE /u01/app/oracle/product/11.2.0/xe/bin/exp userid=<username>/<password> owner=<owner> file=/share/backups/$(date +"%Y%m%d")_backup.dmp"

Dump remote MySQL database from a docker container

I'm trying to dump a remote database into a local docker container's database.
$ docker exec -i my_container \
mysqldump my_remote_database -h my_remote_host.me -u my_remote_user -p
This gives me the remote dump just fine.
So here are my attempts:
$ docker exec -i my_container \
mysqldump my_remote_database -h my_remote_host.me -u my_remote_user -p \
| docker exec -i my_container mysql -u my_local_user -pmy_local_password \
-D my_local_database
$ docker exec -i my_container bash -c \
"mysqldump my_remote_database -h my_remote_host.pp -u my_remote_user -p \
| mysql -u my_local_user -pmy_local_password -D my_local_database"
Neither seems to have any effect on the local database (no error, though).
How can I transfer this data?
I always like to hammer out problems from inside the container in an interactive terminal.
First, get the container running and check the following:
If you bash into the local container (docker exec -ti my_container bash), does the remote hostname my_remote_host.me resolve, and can you route to it? Use ping or nslookup.
From inside that interactive bash shell, can you connect to the remote db? Just try a vanilla mysql CLI connect using the credentials.
Finally try the dump from inside the interactive terminal and just create a mydump.sql dump file.
Then check that inside the container:
You can connect to the local DB with the credentials provided (if using a TCP connection rather than a socket, the hostname should resolve, but it looks like your local db is using a socket)
See if you can load the dump file using mysql -u my_local_user -pmy_local_password -D mylocaldb < mydump.sql
If this all works then you can start looking at why the pipe is failing, but I suspect the issue may be with one of the resolutions.
I notice you say local database. Is the 'local database' inside the container, or on the Docker host over a socket connection?
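Condensed into commands, the checklist above might look like this (hostnames and credentials mirror the question; the dump path is illustrative):

```shell
# 1. Open an interactive shell in the container
docker exec -ti my_container bash

# --- the rest runs inside the container ---

# 2. Does the remote hostname resolve, and is it reachable?
nslookup my_remote_host.me
ping -c 1 my_remote_host.me

# 3. Can we log in to the remote database at all?
mysql -h my_remote_host.me -u my_remote_user -p -e "SELECT 1;"

# 4. Dump to a file first, then load the file into the local database
mysqldump -h my_remote_host.me -u my_remote_user -p my_remote_database > /tmp/mydump.sql
mysql -u my_local_user -pmy_local_password -D my_local_database < /tmp/mydump.sql
```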