DOCKER: Mysql dockerfile remains in exited state after "docker run"

I have to start a MySQL container from a Dockerfile in which I simply set some environment variables. I wrote the Dockerfile like this, but when I do "docker run" the container stays in the exited state.
FROM mysql
ENV DB_HOST=localhost
ENV DB_NAME=productsdb
ENV DB_USER=root
ENV DB_PWD=mm22
ENV DB_DIALECT=mysql
ENV SERVER_PORT=5000
ENV DB_PORT=3306

When you run the container, it prints the following message in the log:
You need to specify one of the following as an environment variable:
- MYSQL_ROOT_PASSWORD
- MYSQL_ALLOW_EMPTY_PASSWORD
- MYSQL_RANDOM_ROOT_PASSWORD
Basically, MySQL won't start without a root password or being told that no password is OK.
Add
ENV MYSQL_ROOT_PASSWORD=myrootpassword
to your Dockerfile and it'll run.
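A minimal corrected version of the question's Dockerfile would then look like this (the root password value here is just a placeholder):

```dockerfile
FROM mysql
# Required by the official image's entrypoint script; without it the container exits
ENV MYSQL_ROOT_PASSWORD=myrootpassword
ENV DB_HOST=localhost
ENV DB_NAME=productsdb
ENV DB_USER=root
ENV DB_PWD=mm22
ENV DB_DIALECT=mysql
ENV SERVER_PORT=5000
ENV DB_PORT=3306
```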

If there is no error in your container, add the -d option to run the container in the background:
docker run -d yourMysqlImage:yourTag
If you haven't built the image yet:
docker build -t yourMysqlImage:yourTag -f yourDockerfile .
And you should check the container logs to see what happened.
List all containers:
docker ps -a
Check logs:
docker logs yourContainerId
Here is my quick start command:
docker run -d -p 3306:3306 --name mysql -e MYSQL_ROOT_PASSWORD=root mysql:latest


How to put creation of a docker image with maria DB dump into a docker file?

I executed docker commands in order to create a Docker image containing a MariaDB SQL dump:
docker pull mariadb:10.4.26
docker run --name test_smdb -e MYSQL_ROOT_PASSWORD=<some_password> -p 3306:3306 -d mariadb:10.4.26
docker exec -it test_smdb mariadb --user root -p<some_password>
MariaDB [(none)]> CREATE DATABASE smdb_dev;
docker exec -i test_smdb mariadb -uroot -p<some_password> smdb_dev --force < C:\smdb-dev.sql
But as this will have to be part of an Azure pipeline execution, I was advised to put all of this in a Dockerfile.
How can I combine building the image and pushing it to the DEV Azure Container Registry?
Unfortunately I cannot find proper information on how to combine all of this, and especially on how to trigger the database creation from the Dockerfile.
You can try to use this as a sample and modify to your needs:
# Dockerfile
FROM mariadb:10.4.26
ENV MYSQL_ROOT_PASSWORD=dummy-password
ENV MARIADB_DATABASE=smdb_dev
# smdb-dev.sql must be inside the build context, next to the Dockerfile
COPY smdb-dev.sql /docker-entrypoint-initdb.d/
ENTRYPOINT [ "docker-entrypoint.sh" ]
EXPOSE 3306
CMD [ "mysqld" ]
When you run a container from this image for the first time, Docker will restore your DB from the docker-entrypoint-initdb.d/ folder (it restores only if the persistent volume is empty; otherwise it just skips this step). To make this data persistent, you need to create a volume and attach it to the MySQL data folder in your container.
docker build *Dockerfile dir path* -t *image-name*
docker run -dp 3306:3306 *image-name*
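To attach the volume mentioned above so the restored data survives container re-creation, a named volume can be mounted onto the MariaDB data directory (a sketch; the volume name smdb-data is arbitrary):

```shell
docker volume create smdb-data
docker run -dp 3306:3306 -v smdb-data:/var/lib/mysql *image-name*
```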

I had a problem in the process of executing the mariadb docker image

I had a problem in the process of executing the mariadb docker image.
This is my Dockerfile.
FROM mariadb
ENV MYSQL_ROOT_PASSWORD test1357
ENV MYSQL_DATABASE mydb
EXPOSE 3306
ENTRYPOINT ["mysqld", "--user=root"]
and I try to build and run this Dockerfile:
docker build -t mariadb:1.0 .
docker run -d -p 3306:3306 --name mariadb mariadb:1.0
Then my mariadb container exited,
so I checked the logs with the following command, and I encountered this error:
docker logs -f mariadb
...
2021-05-23 7:10:08 0 [ERROR] Could not open mysql.plugin table: "Table 'mysql.plugin' doesn't exist". Some plugins may be not loaded
2021-05-23 7:10:08 0 [ERROR] Can't open and lock privilege tables: Table 'mysql.servers' doesn't exist
2021-05-23 7:10:08 0 [Note] Server socket created on IP: '::'.
2021-05-23 7:10:08 0 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.db' doesn't exist
2021-05-23 7:10:08 0 [ERROR] Aborting
What should I do to solve the error?
For the things you're setting, don't create a custom image at all; just run the standard Docker Hub mariadb image.
docker rmi mariadb:1.0
docker network create some-network
docker run \
-d \
-p 3306:3306 \
-e MYSQL_ROOT_PASSWORD=test1357 \
-e MYSQL_DATABASE=db \
--name db \
--net some-network \
mariadb
You do not generally want to put passwords or other credentials like ssh keys in a Dockerfile: anyone who has the resulting image can easily get the password back out. If the only thing you need in your custom Dockerfile is to set environment variables, just provide them when you run the command.
If you do build a derived image from something that already has a main application in it (FROM mariadb, tomcat, nginx, ...) the EXPOSE, ENTRYPOINT, and CMD settings are inherited from that base image. The mariadb Dockerfile already declares CMD ["mysqld"] and you don't need to repeat this. By overriding ENTRYPOINT, you're actually replacing the script that creates the initial database and uses these environment variables. Here you don't need to provide a command at all; if you do, generally prefer CMD to ENTRYPOINT (it is easier to override when you run a container and there is a common pattern of using ENTRYPOINT as a setup script).
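If you do still want a derived image, a minimal Dockerfile that leaves the base image's ENTRYPOINT and CMD intact would be the following (keeping in mind the caveat above about baking passwords into images):

```dockerfile
FROM mariadb
# Only set variables; ENTRYPOINT/CMD are inherited from the base image,
# so the init script that creates the database still runs
ENV MYSQL_ROOT_PASSWORD=test1357
ENV MYSQL_DATABASE=mydb
```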

Add Environment variable to Docker file that Contains :

I want to add some environment variables to my Dockerfile whose names contain a colon. So I need to add something like
environment:
  - OAuth2Configuration:CacheProvider=true
Any idea how to do that? I even tried to surround the key with "", but it still fails, and the docker-compose file gives an error on that line.
Use env_file option of docker-compose.
Check this out.
Here is what I tried and it worked:
Created docker-compose.yaml file.
version: '3'
services:
  distro:
    env_file: test.env
    image: alpine
    restart: always
    container_name: Alpine_Distro
    entrypoint: tail -f /dev/null
Created test.env file.
OAuth2Configuration:CacheProvider=true
Ran docker-compose up -d
$ docker-compose up -d
Creating network "ttt_default" with the default driver
Pulling distro (alpine:)...
latest: Pulling from library/alpine
921b31ab772b: Pull complete
Creating Alpine_Distro ... done
[node1] (local) root@192.168.0.33 ~/ttt
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
74ee753a27b6 alpine "tail -f /dev/null" 4 seconds ago Up 2 seconds Alpine_Distro
[node1] (local) root@192.168.0.33 ~/ttt
$ docker exec -it 74ee753a27b6 env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=74ee753a27b6
TERM=xterm
OAuth2Configuration:CacheProvider=true
HOME=/root
[node1] (local) root@192.168.0.33 ~/ttt
NOTE: As you can see OAuth2Configuration:CacheProvider=true env variable is properly set.
Here your environment variable contains : in it, so I guess that's why the environment field was not working for you. In the env_file option, whatever is on the left-hand side of = is taken as the variable name and whatever is on the right-hand side of = as the value. It's plain key=value syntax in env_file, which is why it works.
Hope this helps.
Update:
In case you are using plain docker only, use the --env-file option of docker run:
$ docker run -itd --env-file test.env alpine
74f60cb6f513519c2dd7a093622537215937db1682b79a838c95e944a649f451
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
74f60cb6f513 alpine "/bin/sh" 12 seconds ago Up 10 seconds infallible_nobel
$ docker exec -it 74f60cb6f513 env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=74f60cb6f513
TERM=xterm
OAuth2Configuration:CacheProvider=true
HOME=/root
Try to put quotes; I checked it with:
FROM alpine:latest
ENV "OAuth2Configuration:CacheProvider"=true
CMD ["env"]
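To try that check yourself, the image can be built and run like this (the image tag env-colon-test is arbitrary, and whether the quoted key is accepted may depend on your builder version):

```shell
docker build -t env-colon-test .
docker run --rm env-colon-test
```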

Unable to connect to Rabbit MQ instance when running from docker container built by dockerfile

We are attempting to put an instance of RabbitMQ into our Kubernetes environment. To do so, we have to implement it in our build and release process, which includes creating a Docker container from a Dockerfile.
During our original testing, we created the docker container manually with the following commands, and it worked correctly:
docker pull rabbitmq
docker run -p 5672:5672 -d --hostname my-rabbit --name some-rabbit rabbitmq:3
docker start some-rabbit
To create our docker file, we have tried various iterations, with the latest being:
FROM rabbitmq:3 AS rabbitmq
RUN rabbitmq-server -p 5672:5672 -d --hostname my-rabbit --name some-rabbit
EXPOSE 5672
We have also tried it with just RUN rabbitmq-server, without the additional parameters.
This does create a rabbit mq instance that we are able to ssh into and verify it is running, but when we try to connect to it, we receive an error: "ExtendedSocketException: An attempt was made to access a socket in a way forbidden by its access permission" (we are using rabbit's default of 5672).
I'm not sure what the differences could be between what we've done in the command line and what has been done in the Dockerfile.
Looks like you need to expose quite a few other ports.
I was able to generate the Dockerfile commands for rabbitmq:latest (rabbitmq:3 looks the same) using this:
ENV PATH=/usr/lib/rabbitmq/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV GOSU_VERSION=1.10
ENV RABBITMQ_LOGS=-
ENV RABBITMQ_SASL_LOGS=-
ENV RABBITMQ_GPG_KEY=0A9AF2115F4687BD29803A206B73A36E6026DFCA
ENV RABBITMQ_VERSION=3.7.8
ENV RABBITMQ_GITHUB_TAG=v3.7.8
ENV RABBITMQ_DEBIAN_VERSION=3.7.8-1
ENV LANG=C.UTF-8
ENV HOME=/var/lib/rabbitmq
EXPOSE 25672/tcp
EXPOSE 4369/tcp
EXPOSE 5671/tcp
EXPOSE 5672/tcp
VOLUME /var/lib/rabbitmq
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["rabbitmq-server"]
A Dockerfile is used to build your own image, not to run a container. The question is: why do you need to build your own rabbitmq image at all? If you don't, then just use the official rabbitmq image (as you originally did).
I'm sure it already has all the necessary EXPOSE directives built in.
Also note that the command-line arguments "-p 5672:5672 -d --hostname my-rabbit --name some-rabbit rabbitmq:3" are passed to the docker daemon, not to the rabbitmq process.
If you want to make sure you're forwarding all the necessary ports - just run it with -P.
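For example, with the official image from the original commands, -P publishes every EXPOSEd port to a random host port, and docker port shows the resulting mappings:

```shell
docker run -d -P --hostname my-rabbit --name some-rabbit rabbitmq:3
docker port some-rabbit
```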

Spring Boot in Docker

I am learning how to use Docker with a Spring Boot app. I have run into a small snag and I hope someone can see the issue. My application relies heavily on @Value properties that are set in environment-specific properties files. In my /src/main/resources I have three properties files:
application.properties
application-local.properties
application-prod.properties
I normally start my app with:
java -jar -Dspring.profiles.active=local build/libs/finance-0.0.1-SNAPSHOT.jar
and that reads the "application-local.properties" and runs properly. However, I am using this src/main/docker/DockerFile:
FROM frolvlad/alpine-oraclejdk8:slim
VOLUME /tmp
ADD finance-0.0.1-SNAPSHOT.jar finance.jar
RUN sh -c 'touch /finance.jar'
EXPOSE 8081
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /finance.jar" ]
And then I start it as:
docker run -p 8081:80 username/reponame/finance -Dspring.profiles.active=local
I get errors that my @Value properties are not found:
Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'spring.datasource.driverClassName' in value "${spring.datasource.driverClassName}"
However, that value does exist in both the application-local.properties and application-prod.properties files.
spring.datasource.driverClassName=org.postgresql.Driver
Do I need to do anything special for that to be picked up?
UPDATE:
Based upon feedback from M. Deinum I changed my startup to:
docker run -p 8081:80 username/reponame/finance -e SPRING_PROFILES_ACTIVE=local
but that didn't work UNTIL I realized order matters, so now running:
docker run -e "SPRING_PROFILES_ACTIVE=test" -p 8081:80 username/reponame/finance
works just fine.
You can use docker run with Spring profiles. Running your freshly minted Docker image with a Spring profile is as easy as passing an environment variable to the docker run command:
$ docker run -e "SPRING_PROFILES_ACTIVE=prod" -p 8080:8080 -t springio/gs-spring-boot-docker
You can also debug the application in a Docker container. To debug the application, JPDA Transport can be used, so we'll treat the container like a remote server. To enable this feature, pass the java agent settings in the JAVA_OPTS variable and map the agent's port to localhost when running the container.
$ docker run -e "JAVA_OPTS=-agentlib:jdwp=transport=dt_socket,address=5005,server=y,suspend=n" -p 8080:8080 -p 5005:5005 -t springio/gs-spring-boot-docker
Resource Link:
Spring Boot with Docker
Using a Spring profile with Docker for nightly and dev builds:
Simply set the environment variable SPRING_PROFILES_ACTIVE when starting the container. This switches the active profile of the Spring application.
The following two lines will start the latest Planets dev build on port 8081 and the nightly build on port 8080.
docker run -d -p 8080:8080 -e "SPRING_PROFILES_ACTIVE=nightly" --name nightly-planets-server planets/server:nightly
docker run -d -p 8081:8080 -e "SPRING_PROFILES_ACTIVE=dev" --name dev-planets-server planets/server:latest
This can be done automatically from a CI system. The dev server contains the latest build and nightly will be deployed once a day...
There are 3 different ways to do this, as explained here:
Passing the Spring profile in the Dockerfile
Passing the Spring profile in the docker run command
Passing the Spring profile in Docker Compose
Below is an example Dockerfile for a Spring Boot project:
FROM java:8
ADD target/my-api.jar rest-api.jar
RUN bash -c 'touch /rest-api.jar'
ENTRYPOINT ["java", "-Djava.security.egd=file:/dev/./urandom","-Dspring.profiles.active=dev","-jar","/rest-api.jar"]
You can use the docker run command
docker run -d -p 8080:8080 -e "SPRING_PROFILES_ACTIVE=dev" --name rest-api dockerImage:latest
If you intend to use Docker Compose, you can use something like this:
version: "3"
services:
  rest-api:
    image: rest-api:0.0.1
    ports:
      - "8080:8080"
    environment:
      - "SPRING_PROFILES_ACTIVE=dev"
More description and examples can be found here
