Docker script not connecting when run the first time (SQL dump) - docker

I am trying to run MySQL through Docker with a script. Everything works fine: the container starts and it creates the users and the database. I can connect to it using Workbench, but it does not run dump.sql.
I get the following error:
Access denied for user 'userxx'@'localhost' (using password: YES)
But if I run the script once again, it connects and it runs dump.sql.
Here is my script:
#!/bin/sh
echo "Starting DB..."
docker run --name test_db -d \
  -e MYSQL_ROOT_PASSWORD=test2018 \
  -e MYSQL_DATABASE=test -e MYSQL_USER=test_user -e MYSQL_PASSWORD=test2018 \
  -p 3306:3306 \
  mysql:latest
# Wait for the database service to start up.
echo "Waiting for DB to start up..."
docker exec test_db mysqladmin --silent --wait=30 -utest_user -ptest2018 ping || exit 1
# Run the setup script.
echo "Setting up initial data..."
docker exec -i test_db mysql -utest_user -ptest2018 test < dump.sql
What am I doing wrong? Or is there a way to run the dump through the Dockerfile, since I couldn't manage to do that?

If you run docker run --help, you will see these flags:
--health-cmd string              Command to run to check health
--health-interval duration       Time between running the check (ms|s|m|h) (default 0s)
--health-retries int             Consecutive failures needed to report unhealthy
--health-start-period duration   Start period for the container to initialize before starting health-retries countdown (ms|s|m|h) (default 0s)
--health-timeout duration        Maximum time to allow one check to run (ms|s|m|h) (default 0s)
So you can use these flags to have Docker check the container's health with a command you provide. In your case, that command is:
mysqladmin --silent -utest_user -ptest2018 ping
Now run as below:
docker run --name test_db -d \
  -e MYSQL_ROOT_PASSWORD=test2018 \
  -e MYSQL_DATABASE=test -e MYSQL_USER=test_user -e MYSQL_PASSWORD=test2018 \
  -p 3306:3306 \
  --health-cmd="mysqladmin --silent -utest_user -ptest2018 ping" \
  --health-interval="10s" \
  --health-retries=6 \
  mysql:latest
If you run docker ps, you will see
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS                             PORTS                    NAMES
5d1d160ed7de   mysql:latest   "docker-entrypoint.s…"   16 seconds ago   Up 16 seconds (health: starting)   0.0.0.0:3306->3306/tcp   test_db
You will see (health: starting) in the STATUS column.
Finally, you can use this status to wait on. When your MySQL is ready, the health status will be healthy.
So modify your script as below:
#!/bin/bash
docker run --name test_db -d \
  -e MYSQL_ROOT_PASSWORD=test2018 \
  -e MYSQL_DATABASE=test -e MYSQL_USER=test_user -e MYSQL_PASSWORD=test2018 \
  -p 3306:3306 \
  --health-cmd="mysqladmin --silent -utest_user -ptest2018 ping" \
  --health-interval="10s" \
  --health-retries=6 \
  mysql:latest
# Wait for the database service to start up.
echo "Waiting for DB to start up..."
until [ "$(docker inspect test_db --format '{{.State.Health.Status}}')" = "healthy" ]
do
  sleep 10
done
# Run the setup script.
echo "Setting up initial data..."
docker exec -i test_db mysql -utest_user -ptest2018 test < dump.sql
Here, the following command returns the health status:
docker inspect test_db --format '{{.State.Health.Status}}'
Wait until it returns healthy.
Note: I have used #!/bin/bash in the script.
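If you want the wait to be bounded, here is a hedged variant of the same loop (the 60-attempt cap is an arbitrary number chosen here, not from the original script):
# Give up after roughly 10 minutes instead of looping forever.
attempts=0
until [ "$(docker inspect test_db --format '{{.State.Health.Status}}')" = "healthy" ]
do
  attempts=$((attempts + 1))
  [ "$attempts" -ge 60 ] && { echo "DB never became healthy"; exit 1; }
  sleep 10
done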

Instead of executing a dump yourself, it is better to mount your SQL file into the init directory of the MySQL image, /docker-entrypoint-initdb.d, in the run command.
Simply add the switch:
-v $PWD/dump.sql:/docker-entrypoint-initdb.d/dump.sql
Pro tip :) don't use the latest tag. Always use a stable, specific tag like 5, 5.7, 8, etc.
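For example, a hedged sketch of the full run command combining both tips (dump.sql sitting in the current directory is assumed from the question above):
docker run --name test_db -d \
  -e MYSQL_ROOT_PASSWORD=test2018 \
  -e MYSQL_DATABASE=test -e MYSQL_USER=test_user -e MYSQL_PASSWORD=test2018 \
  -p 3306:3306 \
  -v $PWD/dump.sql:/docker-entrypoint-initdb.d/dump.sql \
  mysql:8
The image's entrypoint runs every .sql file in that directory exactly once, on the first initialization of the data directory, after the database and user from the environment variables have been created. If you would rather bake the dump into an image, as the question asks, a Dockerfile line like COPY dump.sql /docker-entrypoint-initdb.d/ achieves the same thing.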

Related

How to write the correct syntax for --user in a docker-compose file

With InfluxDB 2 and a Telegraf Docker container, I want to read some values from a device over Modbus TCP. For that I use the Telegraf Modbus plugin.
When I use the following telegraf run command:
docker run -d --name=telegraf \
  -v $(pwd)/telegraf.conf:/etc/telegraf/telegraf.conf \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --net=influxdb-net \
  --user telegraf:$(stat -c '%g' /var/run/docker.sock) \
  --env INFLUX_TOKEN=EcoDMFzGnFkeCLsHiyoaTA-m3VXHl_RG7QqYt6Wt7D5Bdq6Bk9BQlmdO2S47OXaOA-wIz2dLu1aebiZCf2JmFQ== \
  telegraf
Everything is OK; I get my device's values in the InfluxDB dashboard.
Now I want to use a docker-compose.yml file.
I have a problem with the following command part:
--user telegraf:$(stat -c '%g' /var/run/docker.sock)
My yml file:
telegraf:
  image: telegraf:latest
  container_name: telegraf2
  volumes:
    - ./telegraf/telegraf.conf:/etc/telegraf/telegraf.conf
    - /var/run/docker.sock:/var/run/docker.sock
  environment:
    INFLUX_TOKEN: Lweb-ZjlKzpA6VFSPqNC5CLy86ntIlvGbqMGUvIS1zrA==
  user: telegraf$("stat -c '%g' /var/run/docker.sock")
When I run the command docker-compose up -d, I get an error:
Error response from daemon: unable to find user telegraf$("stat -c '%g' /var/run/docker.sock"): no matching entries in passwd file
Can you tell me where my mistake is? Why is it OK with the first method and not with the second?
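Compose does not perform shell command substitution, so the $("stat ...") text is passed to the Docker daemon as a literal user name, which is exactly what the error message shows. As a hedged sketch of one common workaround (DOCKER_GID is a variable name invented here), you can let Compose interpolate an environment variable instead:
user: "telegraf:${DOCKER_GID}"
Then resolve the GID in the shell that launches Compose:
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock) docker-compose up -d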

Is there any way to run docker with multiple commands?

My command is here:
sudo docker run --name ws_was_con -itd --net=host -p 8022:8022 --restart always account_some/project_some:latest cd /project_directory && daphne -b 0.0.0.0 -p 8022 project_some.asgi:application
but it returns:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "cd": executable file not found in $PATH: unknown.
I have to run the command cd /project_directory && daphne -b 0.0.0.0 -p 8022 project_some.asgi:application without a CMD in the Dockerfile.
How can I do that?
Try: docker run ... sh -c 'cd /project_directory; daphne -b 0.0.0.0 -p 8022 project_some.asgi:application'
You need to provide a shell to execute your command:
docker run <...> account_some/project_some:latest /bin/bash -c 'cd <path> && <command>'
If you're just trying to run a command in a non-default directory, docker run has a -w option to specify the working directory; you don't need a cd command.
# note the added -w line below
sudo docker run -d \
  --name ws_was_con \
  -p 8022:8022 \
  --restart always \
  -w /projectdirectory \
  account_some/project_some:latest \
  daphne -b 0.0.0.0 -p 8022 project_some.asgi:application
In practice, though, it's better to put these settings in your image's Dockerfile, so that you don't have to repeat them every time you run the application.
# in the Dockerfile
WORKDIR /projectdirectory
# EXPOSE technically does nothing by itself, but documenting the port is good practice anyway
EXPOSE 8022
CMD daphne -b 0.0.0.0 -p 8022 project_some.asgi:application
sudo docker run -d --name ws_was_con -p 8022:8022 --restart always \
  account_some/project_some:latest
# without the -w option or a command override

Can't control the terminal after using the "-t" option in Docker

I am working through the Docker tutorial from here: https://docs.docker.com/language/python/develop/
I have done the first few steps from the tutorial to create a MySQL container:
$ docker volume create mysql
$ docker volume create mysql_config
$ docker network create mysqlnet
$ docker run --rm -d -v mysql:/var/lib/mysql \
  -v mysql_config:/etc/mysql -p 3307:3306 \
  --network mysqlnet \
  --name mysqldb \
  -e MYSQL_ROOT_PASSWORD=p#ssw0rd1 \
  mysql
(Note: I used 3307:3306 here instead of the specified 3306:3306 because port 3306 was already in use on my machine.)
I then followed the next step to check if the mysql container was running:
$ docker exec -ti mysqldb mysql -u root -p
The terminal then prompts:
Enter password:
and I am unable to enter anything. No commands seem to work; Ctrl+C didn't even register. The only thing I could do was kill the terminal itself. I am very confused, since I have been following the documentation very closely, though I am sure it's something very dumb.
Thanks in advance!
Try:
$ docker exec -ti mysqldb bash
then:
mysql -u root -p
I think you can connect with a shell first.
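If the prompt still refuses input, a hedged workaround (acceptable for this tutorial password, not for real credentials) is to skip the prompt and pass the password inline:
$ docker exec mysqldb mysql -u root -p'p#ssw0rd1' -e 'SELECT VERSION();'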

How to make a curl call to a docker container running inside another docker container?

My application is a Spring Boot app running on port 8080.
Below are sample scripts. One part is from the GitLab pipeline and the other two are the before script and the run script.
Inside the GitLab pipeline:
stages:
  - performance
# Run performance test
load_performance:
  stage: performance
  when: manual
  before_script:
    - source ./scripts/gitlab/perf-test/before-script.sh
  script:
    - echo "Running load test"
    - source ./scripts/gitlab/perf-test/run_gatling.sh
  after_script:
    - ls -ltra ./scripts/performance-test/gatling
  artifacts:
    paths:
      - "./scripts/performance-test/gatling/results"
    expire_in: 1 week
    when: always
before-script.sh: This script downloads the Gatling image and my app's image, and starts the app in detached mode.
LOCALHOST="host.docker.internal"
BASE_URL="http://${LOCALHOST}:8080"
echo "Pulling Gatling image"
docker pull denvazh/gatling
echo "Start my-service2"
starttime=`date +%s`
docker run --ulimit nofile=65536:200000 -itd -p 8080:8080 --name my-service \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $DIR:/opt/app \
  gcr.io/com-gcr/my-service-java:jdk-X.X.X_X \
  /bin/bash -c "cd /opt/app && ls -ltra && source .envrc \
  && ./gradlew clean bootRun"
run_gatling.sh: this script checks whether my-service is up or not. If it is up, it starts the Gatling test.
sleep 5
docker inspect -f '{{ .NetworkSettings.IPAddress }}' my-service
while ! curl http://localhost:8080/actuator/health  ## this condition always fails in the pipeline, but locally it passes after 1-2 minutes
do
  echo "waiting on my-service to come up"
  docker logs --tail=10 my-service
  sleep 30
done
echo "my-service up. It took $(($(date +'%s') - starttime)) seconds."
echo "Starting test:"
starttime=`date +%s`
docker run -it --rm -v $DIR/scripts/performance-test/gatling/conf:/opt/gatling/conf \
  -v $DIR/scripts/performance-test/gatling/user-files:/opt/gatling/user-files \
  -v $DIR/scripts/performance-test/gatling/results:/opt/gatling/results \
  -e JAVA_OPTS="-Dduration=$DURATION -DbaseUrl=$BASE_URL -DmaxRequest=$MAX_REQUEST" \
  denvazh/gatling -rd performance_tests
docker stop my-service
docker rm my-service
echo "Test complete. It took $(($(date +'%s') - starttime)) seconds."

Shell script is not working in a cron job | docker

I'm trying to automate a backup of a Postgres database that is running in Docker:
#!/bin/bash
POSTGRES_USERNAME="XXX"
CONTAINER_NAME="YYY"
DB_NAME="ZZZ"
BACKUP="/tmp/backup/${DB_NAME}.sql.gz"
docker exec -it -u ${POSTGRES_USERNAME} ${CONTAINER_NAME} pg_dump -d ${DB_NAME} |
gzip -c > ${BACKUP}
exit
If I run this manually, I get the backup, but if I schedule the script as a cron job, I get an empty folder.
Can anyone please help me?
Hello everyone, thank you so much for your immediate responses. I think I have fixed my issue.
The cron job failed due to the --interactive mode:
docker exec -it -u ${POSTGRES_USERNAME} ${CONTAINER_NAME} pg_dump -d ${DB_NAME} | gzip -c > ${BACKUP}
I removed -i (--interactive) from the shell script, and then it works perfectly. (Cron runs the script without a terminal on stdin, and docker exec refuses to combine -i with -t when stdin is not a TTY.)
docker exec -t -u ${POSTGRES_USERNAME} ${CONTAINER_NAME} pg_dump -d ${DB_NAME} | gzip -c > ${BACKUP}
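For completeness, a hedged example of a matching crontab entry (the paths are placeholders, not from the original post); redirecting the output to a log makes the next failure easier to diagnose:
0 2 * * * /home/user/scripts/pg_backup.sh >> /var/log/pg_backup.log 2>&1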
