Retain the data inside the database using Docker

I have created a database.sql file and added it to a folder inside the container so that it can be used by the application. I want my data to remain persistent even when the container is removed. I tried using a volume, but how do I add the .sql file and keep that data persistent?
sudo docker -v /datadir sqldb
Here, sqldb is the database image name and /datadir is the mount folder.
Dockerfile of sqldb:
FROM ubuntu:latest
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get -y install mysql-client mysql-server curl
ADD ./my.cnf /etc/mysql/my.cnf
RUN sed -i -e"s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
ADD database.sql /var/db/database.sql
ENV user root
ENV password password
ENV url file:/var/db/database.sql
ENV right WRITE
ADD ./start-database.sh /usr/local/bin/start-database.sh
RUN chmod +x /usr/local/bin/start-database.sh
EXPOSE 3306
CMD ["/usr/local/bin/start-database.sh"]
start-database.sh file:
#!/bin/bash
# This script starts the database server.
echo "Creating user $user for databases loaded from $url"
# Import database if provided via 'docker run --env url="http://ex.org/db.sql"'
echo "Adding data into MySQL"
/usr/sbin/mysqld &
sleep 5
curl $url -o import.sql
# Fixing some phpmysqladmin export problems
sed -ri.bak 's/-- Database: (.*?)/CREATE DATABASE \1;\nUSE \1;/g' import.sql
# Fixing some mysqldump export problems (when run without --databases switch)
# This is not tested so far
# if grep -q "CREATE DATABASE" import.sql; then :; else sed -ri.bak 's/-- MySQL dump/CREATE DATABASE `database_1`;\nUSE `database_1`;\n-- MySQL dump/g' import.sql; fi
mysql --default-character-set=utf8 < import.sql
rm import.sql
mysqladmin shutdown
echo "finished"
# Now the provided user credentials are added
/usr/sbin/mysqld &
sleep 5
echo "Creating user"
echo "CREATE USER '$user' IDENTIFIED BY '$password'" | mysql --default-character-set=utf8
echo "REVOKE ALL PRIVILEGES ON *.* FROM '$user'#'%'; FLUSH PRIVILEGES" | mysql --default-character-set=utf8
echo "GRANT SELECT ON *.* TO '$user'#'%'; FLUSH PRIVILEGES" | mysql --default-character-set=utf8
echo "finished"
if [ "$right" = "WRITE" ]; then
echo "adding write access"
echo "GRANT ALL PRIVILEGES ON *.* TO '$user'#'%' WITH GRANT OPTION; FLUSH PRIVILEGES" | mysql --default-character-set=utf8
fi
# And we restart the server to go operational
mysqladmin shutdown
cp /var/db/database.sql /var/lib/docker/volumes/mysqlvol/database.sql
echo "Starting MySQL Server"
/usr/sbin/mysqld

To keep data persistent even when the container is removed, use volumes; a minimal example follows the links below.
For more info look at:
SO: How to deal with persistent storage (e.g. databases) in docker
Docker Webinar Q&A: Persistent Storage & Docker
Manage data in containers
Compose and volumes
Volume plugins
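As a minimal sketch of that approach (the volume name is just an example; /var/lib/mysql is where MySQL keeps its data files):
# create a named volume once; it survives container removal
sudo docker volume create mysqlvol
# mount it over MySQL's data directory when starting the container
sudo docker run -d -p 3306:3306 -v mysqlvol:/var/lib/mysql sqldb
Any rows your application writes then live in the mysqlvol volume rather than in the container's writable layer, so removing and recreating the sqldb container keeps the data.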
I hope that helps. If not, please ask.

Related

How can we run flyway/migrations script inside Cassandra Dockerfile?

My Dockerfile:
FROM cassandra:4.0
MAINTAINER me
EXPOSE 9042
I want to run something like the following when the Cassandra image is fetched and the superuser is created inside the container:
create keyspace IF NOT EXISTS XYZ WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
I have also tried adding a shell script, but it never connects to Cassandra. My modified Dockerfile is:
FROM cassandra:4.0
MAINTAINER me
ADD entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod 755 /usr/local/bin/entrypoint.sh
RUN mkdir scripts
COPY alter.cql scripts/
RUN chmod 755 scripts/alter.cql
EXPOSE 9042
CMD ["entrypoint.sh"]
My entrypoint looks like this:
#!/bin/bash
export CQLVERSION=${CQLVERSION:-"4.0"}
export CQLSH_HOST=${CQLSH_HOST:-"localhost"}
export CQLSH_PORT=${CQLSH_PORT:-"9042"}
cqlsh=( cqlsh --cqlversion ${CQLVERSION} )
# test connection to cassandra
echo "Checking connection to cassandra..."
for i in {1..30}; do
if "${cqlsh[#]}" -e "show host;" 2> /dev/null; then
break
fi
echo "Can't establish connection, will retry again in $i seconds"
sleep $i
done
if [ "$i" = 30 ]; then
echo >&2 "Failed to connect to cassandra at ${CQLSH_HOST}:${CQLSH_PORT}"
exit 1
fi
# iterate over the cql files in /scripts folder and execute each one
for file in /scripts/*.cql; do
[ -e "$file" ] || continue
echo "Executing $file..."
"${cqlsh[#]}" -f "$file"
done
echo "Done."
exit 0
This never connects to my Cassandra instance.
Any ideas? Please help. Thanks.
Your problem is that you don't call the original entrypoint to start Cassandra - you overwrote it with your own code, which only runs cqlsh without ever starting Cassandra.
You need to modify your code to start Cassandra using the original entrypoint script (source) that is installed as /usr/local/bin/docker-entrypoint.sh, then execute your CQL scripts, and then wait for a termination signal (you can't just exit from your script, because that would terminate the container).
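As a rough, untested sketch of that approach (the retry interval and the DESCRIBE KEYSPACES probe are my own assumptions), the body of entrypoint.sh could look like this:
#!/bin/bash
set -e
# start Cassandra with the image's original entrypoint, in the background
/usr/local/bin/docker-entrypoint.sh cassandra -f &
cassandra_pid=$!
# wait until cqlsh can actually connect
for i in {1..30}; do
  if cqlsh -e "DESCRIBE KEYSPACES" > /dev/null 2>&1; then
    break
  fi
  echo "Cassandra not ready yet, retrying in 5 seconds..."
  sleep 5
done
# apply the CQL files once Cassandra is up
for file in /scripts/*.cql; do
  [ -e "$file" ] || continue
  echo "Executing $file..."
  cqlsh -f "$file"
done
# keep the container alive by waiting on the Cassandra process
wait "$cassandra_pid"
This would replace the body of the entrypoint.sh shown above, while the Dockerfile's CMD stays the same.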

Docker stop/start resets to initial passwords

I've just started using docker by copy-pasting pre-made repos from github.
Here is the scenario and steps:
I've passed the MySQL/shell root passwords via environment variables (-e), and these passwords are set as expected inside entry.sh.
I then go inside the container and reset the shell/MySQL root passwords to something different.
Now the main issue: each time I do docker stop + start from the host, it resets the passwords to the initial ones of step 1.
Please suggest changes so it retains the modified step 2 passwords even when I do docker start/stop.
The entry.sh and Dockerfile scripts I used can be checked in this GitHub repo.
Thanks.
I just noticed that entry.sh will always update the root password with $MYSQL_RANDOM_ROOT_PASSWORD on docker start. So, assuming we already persist /var/lib/mysql on the host, we can edit entry.sh a bit so it only installs the database and sets the passwords when the MySQL data directory hasn't been initialized yet:
#!/bin/sh
# start apache
echo "Starting httpd"
httpd
echo "Done httpd"
# check if mysql data directory is nuked
# if so, install the db
echo "Checking /var/lib/mysql folder"
if [ ! -f /var/lib/mysql/ibdata1 ]; then
echo "Installing db"
mariadb-install-db --user=mysql --ldata=/var/lib/mysql > /dev/null
echo "Installed"
# from mysql official docker repo
if [ -z "$MYSQL_ROOT_PASSWORD" -a -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
echo >&2 'error: database is uninitialized and password option is not specified '
echo >&2 ' You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_RANDOM_ROOT_PASSWORD'
exit 1
fi
# random password
if [ ! -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
echo "Using random password"
MYSQL_ROOT_PASSWORD="$(pwgen -1 32)"
echo "GENERATED ROOT PASSWORD: $MYSQL_ROOT_PASSWORD"
echo "Done"
fi
tfile=`mktemp`
if [ ! -f "$tfile" ]; then
return 1
fi
cat << EOF > $tfile
USE mysql;
DELETE FROM user;
FLUSH PRIVILEGES;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY "$MYSQL_ROOT_PASSWORD" WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
UPDATE user SET password=PASSWORD("") WHERE user='root' AND host='localhost';
FLUSH PRIVILEGES;
EOF
echo "Querying user"
/usr/bin/mysqld --user=root --bootstrap --verbose=0 < $tfile
rm -f $tfile
echo "Done query"
# setting ssh root password
if [ -z "$SSH_ROOT_PASSWORD" ]; then
echo >&2 'You need to specify SSH_ROOT_PASSWORD'
exit
fi
# Set root password to root, format is 'user:password'.
echo "root:$SSH_ROOT_PASSWORD" | chpasswd
fi;
echo "Generating ssh keys"
if [ ! -f "/etc/ssh/ssh_host_rsa_key" ]; then
# generate fresh rsa key
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
fi
if [ ! -f "/etc/ssh/ssh_host_dsa_key" ]; then
# generate fresh dsa key
ssh-keygen -f /etc/ssh/ssh_host_dsa_key -N '' -t dsa
fi
#prepare run dir
if [ ! -d "/var/run/sshd" ]; then
mkdir -p /var/run/sshd
fi
ssh-keygen -A
/usr/sbin/sshd
# start mysql
# nohup mysqld_safe --skip-grant-tables --bind-address 0.0.0.0 --user mysql > /dev/null 2>&1 &
echo "Starting mariadb database"
exec /usr/bin/mysqld --user=root --bind-address=0.0.0.0
Basically, we just move this block of code into the if block above it.
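For completeness, persisting /var/lib/mysql is what makes this work across docker stop/start and container re-creation. A hedged example (the image name and host path are placeholders; the environment variables are the ones entry.sh reads):
docker run -d \
  -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
  -e SSH_ROOT_PASSWORD=mysecret \
  -v /srv/mysql-data:/var/lib/mysql \
  my-lamp-image
With the data directory mounted from the host, the install/password block above only runs on the very first start, so passwords changed later inside the container survive docker stop/start.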

How to Import Streamsets pipeline in Dockerfile without container exiting

I am trying to import a pipeline into StreamSets during container startup by using the Docker CMD command in the Dockerfile. The image builds, and the container is created without errors, but it exits with code 0, so it never comes up. Here is what I did:
Dockerfile:
FROM streamsets/datacollector:3.18.1
COPY myPipeline.json /pipelinejsonlocation/
EXPOSE 18630
ENTRYPOINT ["/bin/sh"]
CMD ["/opt/streamsets-datacollector-3.18.1/bin/streamsets","cli","-U", "http://localhost:18630", \
"-u", \
"admin", \
"-p", \
"admin", \
"store", \
"import", \
"-n", \
"myPipeline", \
"--stack", \
"-f", \
"/pipelinejsonlocation/myPipeline.json"]
Build image:
docker build -t cmp/sdc .
Run image:
docker run -p 18630:18630 -d --name sdc cmp/sdc
This outputs the container id. But the container is in the Exited status as shown below.
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
537adb1b05ab cmp/sdc "/bin/sh /opt/stream…" 5 seconds ago Exited (0) 3 seconds ago sdc
When I do not specify the CMD command in the Dockerfile, the StreamSets container spins up, and when I then run the streamsets import command in a shell inside the running container, it works. But how do I get this done during provisioning itself? Is there something I am missing in the Dockerfile?
In your Dockerfile you overwrite the default CMD and ENTRYPOINT from the StreamSets Data Collector Dockerfile. So the container only executes your command during startup and exits without errors afterwards. This is the reason why your container is in Exited (0) status.
In general this is good and expected behavior. If you want to keep your container alive, you need to execute a command in the foreground that never ends. But unfortunately, you cannot run multiple CMDs in your Dockerfile.
I dug a little deeper. The default entry point of the image is ENTRYPOINT ["/docker-entrypoint.sh"]. This script sets up a few things and starts the Data Collector.
It is required that the Data Collector is running before the pipeline is imported. So a solution could be to copy the default docker-entrypoint.sh and modify it to start the Data Collector and import the pipeline afterwards. You could do it like this:
Dockerfile:
FROM streamsets/datacollector:3.18.1
COPY myPipeline.json /pipelinejsonlocation/
# Replace docker-entrypoint.sh
COPY docker-entrypoint.sh /docker-entrypoint.sh
EXPOSE 18630
docker-entrypoint.sh (https://github.com/streamsets/datacollector-docker/blob/master/docker-entrypoint.sh):
#!/bin/bash
#
# Copyright 2017 StreamSets Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
set -e
# We translate environment variables to sdc.properties and rewrite them.
set_conf() {
if [ $# -ne 2 ]; then
echo "set_conf requires two arguments: <key> <value>"
exit 1
fi
if [ -z "$SDC_CONF" ]; then
echo "SDC_CONF is not set."
exit 1
fi
grep -q "^$1" ${SDC_CONF}/sdc.properties && sed 's|^#\?\('"$1"'=\).*|\1'"$2"'|' -i ${SDC_CONF}/sdc.properties || echo -e "\n$1=$2" >> ${SDC_CONF}/sdc.properties
}
# support arbitrary user IDs
# ref: https://docs.openshift.com/container-platform/3.3/creating_images/guidelines.html#openshift-container-platform-specific-guidelines
if ! whoami &> /dev/null; then
if [ -w /etc/passwd ]; then
echo "${SDC_USER:-sdc}:x:$(id -u):0:${SDC_USER:-sdc} user:${HOME}:/sbin/nologin" >> /etc/passwd
fi
fi
# In some environments such as Marathon $HOST and $PORT0 can be used to
# determine the correct external URL to reach SDC.
if [ ! -z "$HOST" ] && [ ! -z "$PORT0" ] && [ -z "$SDC_CONF_SDC_BASE_HTTP_URL" ]; then
export SDC_CONF_SDC_BASE_HTTP_URL="http://${HOST}:${PORT0}"
fi
for e in $(env); do
key=${e%=*}
value=${e#*=}
if [[ $key == SDC_CONF_* ]]; then
lowercase=$(echo $key | tr '[:upper:]' '[:lower:]')
key=$(echo ${lowercase#*sdc_conf_} | sed 's|_|.|g')
set_conf $key $value
fi
done
# MODIFICATIONS:
#exec "${SDC_DIST}/bin/streamsets" "$#"
check_data_collector_status () {
watch -n 1 ${SDC_DIST}/bin/streamsets cli -U http://localhost:18630 ping | grep -q 'version' && echo "Data Collector has started!" && import_pipeline
}
function import_pipeline () {
sleep 1
echo "Start to import pipeline"
${SDC_DIST}/bin/streamsets cli -U http://localhost:18630 -u admin -p admin store import -n myPipeline --stack -f /pipelinejsonlocation/myPipeline.json
echo "Finished importing pipeline"
}
# Start checking if Data Collector is up (in background) and start Data Collector
check_data_collector_status & ${SDC_DIST}/bin/streamsets $@
I commented out the last line exec "${SDC_DIST}/bin/streamsets" "$@" of the default docker-entrypoint.sh and added two functions. check_data_collector_status () pings the Data Collector service until it is available. import_pipeline () imports your pipeline.
check_data_collector_status () runs in the background, while ${SDC_DIST}/bin/streamsets $@ is started in the foreground as before. So the pipeline is imported after the Data Collector service has started.
Run this image with a sleep command:
docker run -p 18630:18630 -d --name sdc cmp/sdc sleep 300
300 is the time to sleep in seconds.
Then exec into the Docker container, run your script manually, and find out what's wrong.

Restore a SQL Server DB.bak in a Dockerfile

I am running a .NET Razor application, an instance of gitea, and a SQL Server database each in separate containers that communicate with one another. I would like to start my database image with a database schema and data (by restoring a .bak file).
I can do this with my current Dockerfile if, once it is up and running, I run these additional commands:
docker exec -it myContainer /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P myPassword
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P myPassword -Q "RESTORE DATABASE MY_DB_NAME FROM DISK='/var/opt/mssql/backup/MY_DB_NAME.bak' WITH MOVE 'MY_DB_NAME_TEST' TO '/var/opt/mssql/data/MY_DB_NAME_TEST.mdf', MOVE 'MY_DB_NAME_TEST_log' TO '/var/opt/mssql/data/MY_DB_NAME_TEST_log.ldf'"
This gets the job done, but I want to fully automate the process so that it is configured 100% by my docker-compose.yml and Dockerfile, and I need only type docker-compose up -d.
I don't think the content of my docker-compose.yml file is relevant, but here is my Dockerfile (where I am trying to run that script that I currently need to run after docker-compose up):
FROM microsoft/mssql-server-linux
ENV SA_PASSWORD=myPassword
ENV ACCEPT_EULA=Y
COPY ./ACES_DB.bak /var/opt/mssql/backup/MY_DB_NAME.bak
RUN docker exec -it myContainer bin/sh /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P myPassword -Q "RESTORE DATABASE MY_DB_NAME FROM DISK='/var/opt/mssql/backup/MY_DB_NAME.bak' WITH MOVE 'MY_DB_NAME_TEST' TO '/var/opt/mssql/data/MY_DB_NAME_TEST.mdf', MOVE 'MY_DB_NAME_TEST_log' TO '/var/opt/mssql/data/MY_DB_NAME_TEST_log.ldf'"
Any help would be much appreciated.
A friend and I puzzled through this together and eventually found this solution. Here's what the Dockerfile looks like:
FROM microsoft/mssql-server-linux
ENV MSSQL_SA_PASSWORD=myPassword
ENV ACCEPT_EULA=Y
COPY ./My_DB.bak /var/opt/mssql/backup/My_DB.bak
COPY restore.sql restore.sql
RUN (/opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Starting database restore" && /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'myPassword' -d master -i restore.sql
Note that I moved the SQL restore statement to a .sql file.
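The restore.sql file itself is not shown here; based on the restore command in the question, it would presumably contain something along these lines (the database name and logical file names come from the question and will differ for your backup):
RESTORE DATABASE MY_DB_NAME
FROM DISK = '/var/opt/mssql/backup/My_DB.bak'
WITH MOVE 'MY_DB_NAME_TEST' TO '/var/opt/mssql/data/MY_DB_NAME_TEST.mdf',
MOVE 'MY_DB_NAME_TEST_log' TO '/var/opt/mssql/data/MY_DB_NAME_TEST_log.ldf';
If you are unsure which logical names to use after WITH MOVE, RESTORE FILELISTONLY FROM DISK = '...' lists them for your backup.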
Expanding on @joshua-abbott's answer, here is my setup for restoring multiple DBs into the mssql 2019 Docker image and replacing the 'default' password used to restore them.
Dockerfile
FROM mcr.microsoft.com/mssql/server:2019-latest
ENV DEFAULT_MSSQL_SA_PASSWORD=myStrongDefaultPassword
ENV ACCEPT_EULA=Y
USER root
COPY restore-db.sh entrypoint.sh /opt/mssql/bin/
RUN chmod +x /opt/mssql/bin/restore-db.sh /opt/mssql/bin/entrypoint.sh
ADD data.tar.gz /var/opt/mssql/
RUN chown -R mssql:root /var/opt/mssql/data && \
chmod 0755 /var/opt/mssql/data && \
chmod -R 0650 /var/opt/mssql/data/*
USER mssql
RUN /opt/mssql/bin/restore-db.sh
CMD [ "/opt/mssql/bin/sqlservr" ]
ENTRYPOINT [ "/opt/mssql/bin/entrypoint.sh" ]
restore-db.sh
#!/bin/bash
export MSSQL_SA_PASSWORD=$DEFAULT_MSSQL_SA_PASSWORD
(/opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Server is listening on" && sleep 2
for restoreFile in /var/opt/mssql/data/*.bak
do
fileName=${restoreFile##*/}
base=${fileName%.bak}
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P $MSSQL_SA_PASSWORD -Q "RESTORE DATABASE [$base] FROM DISK = '$restoreFile'"
rm -rf $restoreFile
done
entrypoint.sh
#!/bin/bash
/opt/mssql-tools/bin/sqlcmd \
-l 60 \
-S localhost -U SA -P "$DEFAULT_MSSQL_SA_PASSWORD" \
-Q "ALTER LOGIN SA WITH PASSWORD='${MSSQL_SA_PASSWORD}'" &
/opt/mssql/bin/permissions_check.sh "$@"
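For reference, this is roughly how such an image is built and started (the image name and passwords are placeholders); the default password is only used while baking the restored databases into the image, and the real SA password is applied by entrypoint.sh at runtime:
docker build -t mssql-restored .
docker run -d -p 1433:1433 \
  -e MSSQL_SA_PASSWORD='MyRealStr0ngPassword!' \
  mssql-restored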
I voted for the answer of @Joshua Abbott, but I needed to customize it to match the question, i.e. to restore from a .bak file as required:
FROM mcr.microsoft.com/mssql/server:2017-latest
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=xxxxxxxx
ENV MSSQL_PID=Developer
ENV MSSQL_TCP_PORT=1433
WORKDIR /src
COPY ["API/db/db.bak", "dbbackups/"]
RUN (/opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Starting database restore" && /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'xxxxxxxx' -Q "RESTORE FILELISTONLY FROM DISK='/dbbackups/db.bak';"
You just need to replace xxxxxxxx with your password; you can name your container as you want using the docker-compose file/override files.
It is simple: I use SQL Server Management Studio. When you create your Docker container you declare a variable for the backup directory; just put the backup there and then restore it in SQL Server.
You can create a stored procedure in one of your databases for creating an automatic backup. I found this one and made some adaptations for my use.
------ If you create this and then execute it------
CREATE PROCEDURE [dbo].[P_M_Backup]
AS
DECLARE @name VARCHAR(50) -- database name
DECLARE @path VARCHAR(256) -- path for backup files
DECLARE @fileName VARCHAR(256) -- filename for backup
DECLARE @fileDate VARCHAR(20) -- used for file name
-- specify database backup directory
SET @path = '/var/opt/mssql/data/Backup/'
-- specify filename format
SELECT @fileDate = CONVERT(VARCHAR(20),GETDATE(),112)
DECLARE db_cursor CURSOR READ_ONLY FOR
SELECT name
FROM master.sys.databases
WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb', 'Eikon_CDEEE') -- exclude these databases
AND state = 0 -- database is online
AND is_in_standby = 0 -- database is not read only for log shipping
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
SET @fileName = @path + @name + '_' + @fileDate + '.BAK'
BACKUP DATABASE @name TO DISK = @fileName
FETCH NEXT FROM db_cursor INTO @name
END
CLOSE db_cursor
DEALLOCATE db_cursor
/** SET @path = '/var/opt/mssql/data/Backup/': mssql/data/ is the directory where I have mounted SQL Server from Docker, and Backup is a directory inside it, so you have to change it to your own directory. **/
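Once the procedure exists, it can be triggered from the host through the container's sqlcmd, for example (the container name and password are placeholders):
docker exec -it my-sql-container /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U sa -P 'myPassword' \
  -Q "EXEC [dbo].[P_M_Backup];"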

Can't edit local files due to permission errors once I run docker-compose up

I run docker-compose up -d and then ssh into the container. I can load the site via localhost just fine, but when I try to edit the source code on my local machine it does not let me due to permission errors. This is the ls -la output on the container vs. local:
Container:
Local:
My Dockerfile has the chown command:
My local user is called pwm. I tried running chown -R pwm:pwm ../app from the host, at which point I am able to edit files, but then I get Laravel permission denied errors. Then I need to run chown -R www-data:www-data ../app again to fix it.
How can I fix this?
For a development environment, my go-to solution for this is to set up an entrypoint script inside the container that starts as root, changes the user inside the container to match the owner of the files/directories from a volume mount (which will be your user on the host), and then switches to that user to run the app. I've got an example of doing this, along with the scripts needed to implement it in your own container, in my base image repo: https://github.com/sudo-bmitch/docker-base
In there, the fix-perms script does the heavy lifting, including code like the following:
# update the uid
if [ -n "$opt_u" ]; then
OLD_UID=$(getent passwd "${opt_u}" | cut -f3 -d:)
NEW_UID=$(stat -c "%u" "$1")
if [ "$OLD_UID" != "$NEW_UID" ]; then
echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
usermod -u "$NEW_UID" -o "$opt_u"
if [ -n "$opt_r" ]; then
find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
fi
fi
fi
That script is run as root inside the container on startup. The last step of the entrypoints that I run will call something like:
exec gosu ${app_user} "$@"
which runs the container command as the application user as the new pid 1 executable.
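Put together, a minimal entrypoint following that pattern might look like the sketch below (the user name, app directory, and availability of usermod/gosu in the image are assumptions; the fix-perms script in the linked repo handles more cases):
#!/bin/sh
set -e
app_user=www-data      # user the app normally runs as (assumption)
app_dir=/var/www/app   # volume mount whose owner should be matched (assumption)
if [ "$(id -u)" = "0" ]; then
  host_uid=$(stat -c "%u" "$app_dir")
  container_uid=$(id -u "$app_user")
  # align the container user's UID with the owner of the mounted files
  if [ "$host_uid" != "0" ] && [ "$host_uid" != "$container_uid" ]; then
    usermod -u "$host_uid" -o "$app_user"
  fi
  # drop privileges and run the container command as the app user (pid 1)
  exec gosu "$app_user" "$@"
fi
exec "$@"
That way the files you edit on the host and the files written by the app inside the container end up owned by the same UID, avoiding the chown back-and-forth.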
