Restore a SQL Server DB.bak in a Dockerfile - docker

I am running a .NET Razor application, an instance of gitea, and a SQL Server database each in separate containers that communicate with one another. I would like to start my database image with a database schema and data (by restoring a .bak file).
I can do this with my current Dockerfile if, once it is up and running, I run these additional commands:
docker exec -it myContainer /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P myPassword
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P myPassword -Q "RESTORE DATABASE MY_DB_NAME FROM DISK='/var/opt/mssql/backup/MY_DB_NAME.bak' WITH MOVE 'MY_DB_NAME_TEST' TO '/var/opt/mssql/data/MY_DB_NAME_TEST.mdf', MOVE 'MY_DB_NAME_TEST_log' TO '/var/opt/mssql/data/MY_DB_NAME_TEST_log.ldf'"
This gets the job done, but I want to fully automate the process so that this is configured 100% by my docker-compose.yml and Dockerfile so I need only type: docker-compose up -d.
I don't think the content of my docker-compose.yml file is relevant, but here is my Dockerfile (where I am trying to run that script that I currently need to run after docker-compose up):
FROM microsoft/mssql-server-linux
ENV SA_PASSWORD=myPassword
ENV ACCEPT_EULA=Y
COPY ./ACES_DB.bak /var/opt/mssql/backup/MY_DB_NAME.bak
RUN docker exec -it myContainer bin/sh /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P myPassword -Q "RESTORE DATABASE MY_DB_NAME FROM DISK='/var/opt/mssql/backup/MY_DB_NAME.bak' WITH MOVE 'MY_DB_NAME_TEST' TO '/var/opt/mssql/data/MY_DB_NAME_TEST.mdf', MOVE 'MY_DB_NAME_TEST_log' TO '/var/opt/mssql/data/MY_DB_NAME_TEST_log.ldf'"
Any help would be much appreciated.

A friend and I puzzled through this together and eventually found this solution. Here's what the Dockerfile looks like:
FROM microsoft/mssql-server-linux
ENV MSSQL_SA_PASSWORD=myPassword
ENV ACCEPT_EULA=Y
COPY ./My_DB.bak /var/opt/mssql/backup/My_DB.bak
COPY restore.sql restore.sql
RUN (/opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Starting database restore" && /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'myPassword' -d master -i restore.sql
Note that I moved the SQL restore statement to a .sql file.
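For reference, restore.sql can simply hold the same RESTORE statement from the question; a minimal sketch (the logical file names MY_DB_NAME_TEST / MY_DB_NAME_TEST_log are assumptions carried over from the question, so verify yours with RESTORE FILELISTONLY first):
# hypothetical restore.sql, written here as a heredoc so it can sit next to the Dockerfile
cat > restore.sql <<'EOF'
RESTORE DATABASE MY_DB_NAME
FROM DISK = '/var/opt/mssql/backup/My_DB.bak'
WITH MOVE 'MY_DB_NAME_TEST' TO '/var/opt/mssql/data/MY_DB_NAME_TEST.mdf',
     MOVE 'MY_DB_NAME_TEST_log' TO '/var/opt/mssql/data/MY_DB_NAME_TEST_log.ldf'
EOF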

Expanding on @joshua-abbott's answer, here is my setup for restoring multiple databases into the mssql 2019 Docker image, and for replacing the 'default' password used while restoring them.
Dockerfile
FROM mcr.microsoft.com/mssql/server:2019-latest
ENV DEFAULT_MSSQL_SA_PASSWORD=myStrongDefaultPassword
ENV ACCEPT_EULA=Y
USER root
COPY restore-db.sh entrypoint.sh /opt/mssql/bin/
RUN chmod +x /opt/mssql/bin/restore-db.sh /opt/mssql/bin/entrypoint.sh
ADD data.tar.gz /var/opt/mssql/
RUN chown -R mssql:root /var/opt/mssql/data && \
chmod 0755 /var/opt/mssql/data && \
chmod -R 0650 /var/opt/mssql/data/*
USER mssql
RUN /opt/mssql/bin/restore-db.sh
CMD [ "/opt/mssql/bin/sqlservr" ]
ENTRYPOINT [ "/opt/mssql/bin/entrypoint.sh" ]
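For context, restore-db.sh below globs /var/opt/mssql/data/*.bak, so data.tar.gz is presumably just an archive that unpacks the backups into a data/ directory; something along these lines (file names are placeholders):
# assumption: the archive expands under /var/opt/mssql/ with the .bak files inside data/
tar -czf data.tar.gz data/MyFirstDb.bak data/MySecondDb.bak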
restore-db.sh
#!/bin/bash
export MSSQL_SA_PASSWORD=$DEFAULT_MSSQL_SA_PASSWORD
(/opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Server is listening on" && sleep 2
for restoreFile in /var/opt/mssql/data/*.bak
do
  fileName=${restoreFile##*/}
  base=${fileName%.bak}
  /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P $MSSQL_SA_PASSWORD -Q "RESTORE DATABASE [$base] FROM DISK = '$restoreFile'"
  rm -rf $restoreFile
done
entrypoint.sh
#!/bin/bash
/opt/mssql-tools/bin/sqlcmd \
-l 60 \
-S localhost -U SA -P "$DEFAULT_MSSQL_SA_PASSWORD" \
-Q "ALTER LOGIN SA WITH PASSWORD='${MSSQL_SA_PASSWORD}'" &
/opt/mssql/bin/permissions_check.sh "$@"
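To use this, build the image and pass the real SA password at run time; a rough sketch (the image name and password here are placeholders):
docker build -t my-mssql-restored .
docker run -d -p 1433:1433 -e MSSQL_SA_PASSWORD='myRealStrongPassword' my-mssql-restored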

I upvoted @Joshua Abbott's answer, but I needed to customize it to match the question, i.e. to restore from a .bak file as required:
FROM mcr.microsoft.com/mssql/server:2017-latest
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=xxxxxxxx
ENV MSSQL_PID=Developer
ENV MSSQL_TCP_PORT=1433
WORKDIR /src
COPY ["API/db/db.bak", "dbbackups/"]
RUN (/opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Starting database restore" && /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'xxxxxxxx' -Q "RESTORE FILELISTONLY FROM DISK='/dbbackups/db.bak';"
You just need to replace xxxxxxxx with your password. You can name your container as you want using the docker-compose file/override files.
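Note that RESTORE FILELISTONLY only lists the logical file names inside the backup; to actually restore, you would follow it with a RESTORE DATABASE ... WITH MOVE using those names. A hedged example (the database and logical file names are placeholders, so substitute the ones reported by FILELISTONLY):
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'xxxxxxxx' \
  -Q "RESTORE DATABASE MyDb FROM DISK='/dbbackups/db.bak' WITH MOVE 'MyDb' TO '/var/opt/mssql/data/MyDb.mdf', MOVE 'MyDb_log' TO '/var/opt/mssql/data/MyDb_log.ldf'"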

It is simple: I use SQL Server Management Studio. When you create your Docker container, you declare a variable for a directory; just put the backup there and then restore it in your SQL Server instance.
You can create a stored procedure in one of your databases for creating an automatic backup. I found this and made some adaptations for my use.
------ If you create this and then execute it------
CREATE PROCEDURE [dbo].[P_M_Backup]
AS
DECLARE @name VARCHAR(50) -- database name
DECLARE @path VARCHAR(256) -- path for backup files
DECLARE @fileName VARCHAR(256) -- filename for backup
DECLARE @fileDate VARCHAR(20) -- used for file name
-- specify database backup directory
SET @path = '/var/opt/mssql/data/Backup/'
-- specify filename format
SELECT @fileDate = CONVERT(VARCHAR(20), GETDATE(), 112)
DECLARE db_cursor CURSOR READ_ONLY FOR
SELECT name
FROM master.sys.databases
WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb', 'Eikon_CDEEE') -- exclude these databases
AND state = 0 -- database is online
AND is_in_standby = 0 -- database is not read only for log shipping
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
  SET @fileName = @path + @name + '_' + @fileDate + '.BAK'
  BACKUP DATABASE @name TO DISK = @fileName
  FETCH NEXT FROM db_cursor INTO @name
END
CLOSE db_cursor
DEALLOCATE db_cursor
/** SET @path = '/var/opt/mssql/data/Backup/': mssql/data/ is the directory where I have mounted SQL Server from Docker, and Backup is a directory inside it, so you have to change it to your own directory **/
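Once the procedure exists, you can trigger a backup run from the host with something like the following (the container name, password, and the database holding the procedure are placeholders):
docker exec -it mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'myPassword' \
  -d MyAppDb -Q "EXEC dbo.P_M_Backup"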

Related

docker-compose env vars not available when connecting via ssh

I have a local dev env which requires different hosts reachable via SSH, so I set up a docker-compose.yml with some services:
services:
  ssh1:
    build:
      context: ./.project/docker/ssh1
      dockerfile: Dockerfile
    environment:
      MYSQL_USER: app1
      MYSQL_PASSWORD: app1
The Dockerfile has the following contents:
FROM ubuntu:20.04
RUN export DEBIAN_FRONTEND=noninteractive \
&& ln -fs /usr/share/zoneinfo/Europe/Berlin /etc/localtime \
&& apt update \
&& apt upgrade -y \
&& apt install -y openssh-server rsync php \
&& mkdir /run/sshd/ \
&& ssh-keygen -A \
&& for key in $(ls /etc/ssh/ssh_host_* | grep -v pub); do echo "HostKey $key" >> /etc/ssh/sshd_config; done \
&& addgroup --gid 1000 app \
&& adduser --gecos "" --disabled-password --shell /bin/bash --uid 1000 --gid 1000 app \
&& mkdir -m 700 /home/app/.ssh/ \
&& chown app:app /home/app/.ssh/ \
&& rm -rf /var/lib/apt/lists/*
COPY --chown=app:app ssh1_rsa.pub /home/app/.ssh/authorized_keys
CMD ["/usr/sbin/sshd", "-D"]
EXPOSE 22
I can verify that the environment variables are set in the container:
$ docker-compose exec ssh1 printenv | grep MYSQL
MYSQL_USER=app1
MYSQL_PASSWORD=app1
docker inspect project_ssh1_1 also shows the ENV variables.
But when I connect from another container to ssh1 via SSH, my environment variables are not set.
Why are my environment variables not set when I ssh into the container?
I would also appreciate any in-depth input on how env vars are set in the container by Docker and how env vars are inherited from processes or the "OS".
Solution edit:
The actual question was answered. However, I did not ask the right question. So here's my actual solution. I am able to set the ENV VARS in the SSH session with the help of a pretty hacky solution, which should not be used in PROD environments, since it could lead to information disclosure.
Add all ENV VARS as build args.
docker-compose.yml:
ssh1:
  build:
    context: ./.project/docker/ssh1
    dockerfile: Dockerfile
    args:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: app1
      MYSQL_USER: app1
      MYSQL_PASSWORD: app1
      MYSQL_HOST: mysql1
And write them to $HOME/.ssh/environment as well as enabling PermitUserEnvironment. Don't do this in production.
Dockerfile
FROM ubuntu:20.04
ARG MYSQL_ROOT_PASSWORD
ARG MYSQL_DATABASE
ARG MYSQL_USER
ARG MYSQL_PASSWORD
ARG MYSQL_HOST
RUN echo "MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD" >> /home/app/.ssh/environment \
&& echo "MYSQL_DATABASE=$MYSQL_DATABASE" >> /home/app/.ssh/environment \
&& echo "MYSQL_USER=$MYSQL_USER" >> /home/app/.ssh/environment \
&& echo "MYSQL_PASSWORD=$MYSQL_PASSWORD" >> /home/app/.ssh/environment \
&& echo "MYSQL_HOST=$MYSQL_HOST" >> /home/app/.ssh/environment \
&& sed -i 's/#PermitUserEnvironment no/PermitUserEnvironment yes/g' /etc/ssh/sshd_config
Now, when you log in, SSH will read the env vars from the user's .ssh/environment (the app user in this case) and set them in the user's SSH session.
Environment variables are present in RUN commands and in the shell you exec into when issuing a docker exec command, but when you ssh into an SSH server running inside a container, you actually get a brand new shell which doesn't have those env variables set.
Your issue actually has not much to do with Docker but is due to the way sshd works: for every connection, sshd will set up a new environment, wiping out all variables in its own environment; see the Login Process in man sshd.
(It's easy to see why this makes sense: sshd is started by the root user and so may contain sensitive data in its environment variables that should not be leaked to the users. Other variables wouldn't be reasonable to pass to the user session either, e.g. HOME, PATH, SHELL.)
Depending on your use case there are various ways to pass environment variables to a new SSH session, depending on whether it is an interactive or non-interactive session, and whether it runs a (login) shell or not:
~/.ssh/environment: variables injected by ssh, see PermitUserEnvironment
/etc/environment: used by pam_env on login
/etc/profile, ~/.bashrc and alike: configs used by the (login) shell, see bash for example.
Also depending on your use case you have now various options how to add these files to the container:
if the variables are static: just ADD the respective file to the image or create it in the Dockerfile
if the variables are set on build-time: use ARGs (or ENV) to pass the variables to the build and create the respective file from that in the build (as you did in your solution)
if the variables should be set on container run-time:
use a custom ENTRYPOINT script to generate the respective file on startup from the passed environment variables or command line arguments (see the sketch after this list)
volume-mount the respective file into the container (you may also use docker secret for sensitive data here)
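As a sketch of the run-time ENTRYPOINT option: an entrypoint like the following could write selected container env vars into the app user's ~/.ssh/environment before starting sshd. This assumes the app user and sshd setup from the Dockerfile above and that PermitUserEnvironment is already enabled; the MYSQL_ prefix filter is just an example.
#!/bin/bash
# hypothetical entrypoint.sh: copy selected env vars into the ssh environment file at startup
set -e
env | grep '^MYSQL_' > /home/app/.ssh/environment
chown app:app /home/app/.ssh/environment
chmod 600 /home/app/.ssh/environment
exec /usr/sbin/sshd -D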

Inject SSH key into a Docker container

I am trying to find a "global" solution for injecting an SSH key into a container. I know that there are several solutions, including Docker BuildKit and so on, but I don't want to build an image and inject the SSH key. I want to inject the SSH key by using an existing image with docker compose.
I use the following docker compose file:
version: '3.1'
services:
  server1:
    image: XXXXXXX
    container_name: server1
    command: bash -c "/root/init.sh && python3 /root/my_python.py"
    environment:
      - MANAGED_HOST=mserver
    volumes:
      - ./init.sh:/root/init.sh
    secrets:
      - id_rsa
secrets:
  id_rsa:
    file: /home/user/.ssh/id_rsa
The init.sh is as follows:
#!/bin/bash
eval "$(ssh-agent -s)" > /dev/null
if [ ! -d "/root/.ssh/" ]; then
  mkdir /root/.ssh
  ssh-keyscan $MANAGED_HOST > /root/.ssh/known_hosts
fi
ssh-add -k /run/secrets/id_rsa
If I run docker compose with the parameter command
bash -c "/root/init.sh && python3 /root/my_python.py", then the SSH authentication to the appropriate remote host ($MANAGED_HOST) is not working.
An agent process is running:
root 8 1 0 12:50 ? 00:00:00 ssh-agent -s
known_hosts is OK:
root@c67655d87ced:~# cat /root/.ssh/known_hosts
BLABLABLA ssh-rsa AAAAB3BLABLABLA....
and the agent is running, but the private key is not added:
root@c67655d87ced:~# ssh-add -l
Could not open a connection to your authentication agent.
Now, if I log in the container (docker exec -it server1 /bin/bash) and run the commands from init.sh one by one from the command line, then the SSH authentication to the appropriate remote host ($MANAGED_HOST) is working?!?
Any idea, how I can get it working by using the docker compose?
It should be enough to cause the file $HOME/.ssh/id_rsa to exist with appropriate permissions; you don't need an ssh agent running.
#!/bin/sh
if ! [ -d "$HOME/.ssh" ]; then
  mkdir "$HOME/.ssh"
fi
chmod 0700 "$HOME/.ssh"
if [ -n "$MANAGED_HOST" ]; then
  ssh-keyscan "$MANAGED_HOST" >> "$HOME/.ssh/known_hosts"
fi
if [ -f /run/secrets/id_rsa ]; then
  cp /run/secrets/id_rsa "$HOME/.ssh/id_rsa"
  chmod 0400 "$HOME/.ssh/id_rsa"
fi
# exec "$@"
A typical pattern is to use the Dockerfile ENTRYPOINT to do first-time setup tasks like this. That will get passed the CMD as arguments, and the commented exec "$@" line at the end of the file runs that as a command. You'd set this up in your image's Dockerfile like:
FROM XXXXXX
...
# Script must be executable on the host, and must start with a
# #!/bin/sh "shebang" line
COPY init.sh /root
# MUST use JSON-array form
ENTRYPOINT ["/root/init.sh"]
# Can use any Dockerfile syntax
CMD ["python3", "/root/my_python.py"]
In your specific example, you're launching init.sh as a subprocess. The ssh-agent setup sets some environment variables, like $SSH_AUTH_SOCK, but when these run as a subprocess they don't get propagated back out to the host process. You can use the standard POSIX shell . builtin (the bash source builtin is equivalent, but non-standard) to cause those environment variables to be set in the context of the parent shell:
command: sh -c ". /root/init.sh && exec python3 /root/my_python.py"
The exec replaces the shell wrapper with the Python script, which you generally want. This will also wind up being the parent process of ssh-agent, which could potentially surprise your process if it happens to exit.

Docker: executing all commands as local user and not root

How do I run docker run and docker-compose up/run commands so that the process inside the container is run by a user with the same uid as my local user?
I need to do this so that any files generated by an "inside-docker" process would have ownership permissions of my local user.
To replicate:
Use the alpine:3.9 container, mount in a volume for the file to be written and create the file. Assume my current username is user.
mkdir output_dir #Create an output directory
docker run -it --rm --volume "/path/to/output_dir:/tmp" alpine:3.9 touch /tmp/file.txt
ls -la output_dir/file.txt
Will give the output:
-rw-r--r-- 1 root root 0 Feb 7 19:51 /path/to/output_dir/file.txt
This means I need to sudo chown user:user /path/to/output_dir/file.txt to have access as my current user on my own file system.
How do I do this without this extra step?
Idea that comes to mind:
Add a Docker entrypoint which will create a user inside the container with the same uid as my local user and execute any code as that user.
docker-entrypoint.sh
#!/bin/sh
TEMP_UID="${TEMP_UID:-1000}"
set -ux
useradd -s /bin/false --no-create-home -u ${TEMP_UID} temp
# su-exec is an executable which makes it easy to run a process as a specific user.
exec su-exec temp "$@"
The problem with this is that I will have to inject TEMP_UID=<user_id> as an environment variable at every docker run command, or include it in my docker-compose.yml file for every docker-compose up/run command. If Docker has an internal variable that keeps track of the uid of the user that ran it, I would just use that. But I can't seem to find such an internal variable.
Any help would be greatly appreciated!
I think the answer is as simple as
docker run --user ${UID} -it --rm --volume "/path/to/output_dir:/tmp" alpine:3.9 touch /tmp/file.txt
Note I injected --user ${UID} into your example command.
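One caveat: $UID is a variable that bash sets but does not export, so docker-compose (and some other shells) may not see it. Using id -u / id -g works everywhere; a variant of the same command under that assumption:
# pass the current host user and group explicitly so files created in /tmp are owned by you
docker run --user "$(id -u):$(id -g)" -it --rm \
  --volume "/path/to/output_dir:/tmp" alpine:3.9 touch /tmp/file.txt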
Many of the current options require a change outside of the container to pass in the current user, or rely on variables that may not exist in all environments. My preferred solution, since the goal is to fix file permissions on mounted volumes, is to start the entrypoint as root with a script that changes the container user's uid to match that of the volume mount's uid. The end of the entrypoint then launches the application with an exec gosu $app_user_name "$@" to switch from root to the application user that was modified inside of the container.
Scripts to do this are in my base image repo. Take note of the fix-perms script which includes two sections like the following (one for uid and another for gid):
# update the uid
if [ -n "$opt_u" ]; then
  OLD_UID=$(getent passwd "${opt_u}" | cut -f3 -d:)
  NEW_UID=$(stat -c "%u" "$1")
  if [ "$OLD_UID" != "$NEW_UID" ]; then
    echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
    usermod -u "$NEW_UID" -o "$opt_u"
    if [ -n "$opt_r" ]; then
      find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
    fi
  fi
fi
The OLD_UID value is from the userid in the image, and NEW_UID is from the volume mount. When those don't match, the usermod command is run, followed by a recursive chown command to fix any files with the old uid/gid.
Note that in production, where user IDs on the host can be standardized, I match the host user ID to that of the image if a volume is needed, allowing me to run the entrypoint as that user instead of root. The entrypoint checks the current user ID and skips the fix-perms script and gosu command if it is not root.
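Put together, such an entrypoint might look roughly like this. This is only a sketch: it assumes fix-perms and gosu are installed in the image, the application user is named app, and /data is the volume mount.
#!/bin/sh
set -e
if [ "$(id -u)" = "0" ]; then
  # running as root: align the app user's uid with the owner of the mounted volume,
  # then drop privileges before starting the application
  fix-perms -r -u app /data
  exec gosu app "$@"
else
  # already running as a non-root user (e.g. in production): start the app directly
  exec "$@"
fi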

Can't edit local files due to permission errors once I run docker-compose up

I run docker-compose up -d and then ssh into the container. I can load the site via localhost just fine but when I try to edit the source code on my local it does not let me due to permission errors. This is the ls -la output on container vs local:
Container / Local: the original question showed the ls -la output for both as screenshots, which are not reproduced here.
My Dockerfile has the chown command (also shown as a screenshot in the original question).
My local user is called pwm. I tried running chown -R pwm:pwm ../app from the host, at which point I am able to edit files, but then I get Laravel permission denied errors. Then I need to run chown -R www-data:www-data ../app again to fix it.
How can I fix this?
For a development environment, my go-to solution for this is to set up an entrypoint script inside the container that starts as root, changes the user inside the container to match that of the file/directory owner from a volume mount (which will be your user on the host), and then switches to that user to run the app. I've got an example of doing this, along with the scripts needed to implement it in your own container, in my base image repo: https://github.com/sudo-bmitch/docker-base
In there, the fix-perms script does the heavy lifting, including code like the following:
# update the uid
if [ -n "$opt_u" ]; then
  OLD_UID=$(getent passwd "${opt_u}" | cut -f3 -d:)
  NEW_UID=$(stat -c "%u" "$1")
  if [ "$OLD_UID" != "$NEW_UID" ]; then
    echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
    usermod -u "$NEW_UID" -o "$opt_u"
    if [ -n "$opt_r" ]; then
      find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
    fi
  fi
fi
That script is run as root inside the container on startup. The last step of the entrypoints that I run will call something like:
exec gosu ${app_user} "$@"
which runs the container command as the application user as the new pid 1 executable.

retain the data inside database using docker

I have created a database.sql file and added it to a folder inside the container so that it can be used by the application. I want my data to remain persistent even when my container is removed. I tried using a volume, but how do I add the .sql file and keep that data persistent?
sudo docker -v /datadir sqldb
Here, sqldb is the database image name and datadir is the mount folder.
Dockerfile of sqldb:
FROM ubuntu:latest
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get -y install mysql-client mysql-server curl
ADD ./my.cnf /etc/mysql/my.cnf
RUN sed -i -e"s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
ADD database.sql /var/db/database.sql
ENV user root
ENV password password
ENV url file:/var/db/database.sql
ENV right WRITE
ADD ./start-database.sh /usr/local/bin/start-database.sh
RUN chmod +x /usr/local/bin/start-database.sh
EXPOSE 3306
CMD ["/usr/local/bin/start-database.sh"]
start-database.sh file
#!/bin/bash
# This script starts the database server.
echo "Creating user $user for databases loaded from $url"
# Import database if provided via 'docker run --env url="http://ex.org/db.sql"'
echo "Adding data into MySQL"
/usr/sbin/mysqld &
sleep 5
curl $url -o import.sql
# Fixing some phpmysqladmin export problems
sed -ri.bak 's/-- Database: (.*?)/CREATE DATABASE \1;\nUSE \1;/g' import.sql
# Fixing some mysqldump export problems (when run without --databases switch)
# This is not tested so far
# if grep -q "CREATE DATABASE" import.sql; then :; else sed -ri.bak 's/-- MySQL dump/CREATE DATABASE `database_1`;\nUSE `database_1`;\n-- MySQL dump/g' import.sql; fi
mysql --default-character-set=utf8 < import.sql
rm import.sql
mysqladmin shutdown
echo "finished"
# Now the provided user credentials are added
/usr/sbin/mysqld &
sleep 5
echo "Creating user"
echo "CREATE USER '$user' IDENTIFIED BY '$password'" | mysql --default-character-set=utf8
echo "REVOKE ALL PRIVILEGES ON *.* FROM '$user'#'%'; FLUSH PRIVILEGES" | mysql --default-character-set=utf8
echo "GRANT SELECT ON *.* TO '$user'#'%'; FLUSH PRIVILEGES" | mysql --default-character-set=utf8
echo "finished"
if [ "$right" = "WRITE" ]; then
echo "adding write access"
echo "GRANT ALL PRIVILEGES ON *.* TO '$user'#'%' WITH GRANT OPTION; FLUSH PRIVILEGES" | mysql --default-character-set=utf8
fi
# And we restart the server to go operational
mysqladmin shutdown
cp /var/db/database.sql /var/lib/docker/volumes/mysqlvol/database.sql
echo "Starting MySQL Server"
/usr/sbin/mysqld
To keep data persistent even when the container is removed, use volumes.
For more info look at:
SO: How to deal with persistent storage (e.g. databases) in docker
Docker Webinar Q&A: Persistent Storage & Docker
Manage data in containers
Compose and volumes
Volume plugins
I hope that helps. If not please ask.
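As a concrete sketch of the volume approach (the image name sqldb is taken from the question; the data path assumes a stock MySQL layout):
# create a named volume and mount it over MySQL's data directory so the
# databases survive container removal and re-creation
docker volume create mysqlvol
docker run -d --name sqldb -v mysqlvol:/var/lib/mysql sqldb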
