I want to deploy a MySQL database to a Docker container, but I don't know how to do that. I deployed my Spring Boot project and it works fine. I created a Dockerfile in this project:
FROM openjdk:8-jre-alpine
ADD ./target/Family-0.0.1-SNAPSHOT.jar Family-0.0.1-SNAPSHOT.jar
EXPOSE 8081
ENTRYPOINT ["java", "-jar", "Family-0.0.1-SNAPSHOT.jar"]
But I don't know how to do the same with MySQL. I used MySQL Workbench when I created my database.
Ideally, Docker containers should run one service at a time, which means you would have:
app-docker-container: Container with your Java application
db-docker-container: Container with your database
And then you link the db-docker-container to the app-docker-container. But this involves quite a few manual commands that might confuse you further, so as a starting point I would suggest using docker-compose (link to an example).
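As a rough sketch, a docker-compose.yml for the two containers could look like this (the service names, credentials, database name and the Spring datasource environment variables are assumptions, not taken from your project):

version: '3'
services:
  app:
    build: .                      # builds the image from your Dockerfile above
    ports:
      - "8081:8081"
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://db:3306/familydb
      SPRING_DATASOURCE_USERNAME: family
      SPRING_DATASOURCE_PASSWORD: secret
    depends_on:
      - db
  db:
    image: mysql:5.7              # official MySQL image from Docker Hub
    environment:
      MYSQL_DATABASE: familydb
      MYSQL_USER: family
      MYSQL_PASSWORD: secret
      MYSQL_ROOT_PASSWORD: rootsecret
    volumes:
      - db-data:/var/lib/mysql    # keep the data outside the container filesystem
volumes:
  db-data:

docker-compose up then starts both containers on a shared network, and the application reaches MySQL under the hostname db (the service name).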
You simply need to add the official MySQL image from Docker Hub:
https://hub.docker.com/_/mysql/ --> check it for the various command-line options you can run.
Add a docker-compose.yml to link the microservice and the database.
https://github.com/thoopalliamar/Spring-Boot-And-Database-Docker --> check this for a docker-compose reference.
The network mode and hostname settings create a network and link the two containers.
You need to stop MySQL on the local machine, and to get into the MySQL container run:
sudo docker exec -it <mysql-container-name> mysql -u root -p
to check the databases.
I have a microservices-based application and the services work fine if I deploy them on a host machine. But now I'd like to learn Docker, so I started to use containers on a Linux-based machine. Here is a sample Dockerfile, it is really simple:
FROM openjdk:11-jdk-slim
MAINTAINER BeszterceKK
COPY ./tao-elszamolas-config.jar /usr/src/taoelszamolas/tao-elszamolas-config.jar
WORKDIR /usr/src/taoelszamolas
ENV SPRING_PROFILES_ACTIVE prod
EXPOSE 9001
ENTRYPOINT ["java", "-jar", "tao-elszamolas-config.jar", "-Dlog4j.configurationFile=file:/tao-elszamolas/services/tao-config/log4j2- prod.xml", "-DlogFileLocation=/tao-elszamolas/logs"]
My problem is that I'm trying to write my Spring Boot application log to the host machine. This is why I use data volumes. This is the command I use to run the container:
docker run -d --name=tao-elszamolas-config-server --publish=9001:9001 -v /tao-elszamolas/logs:/tao-elszamolas/logs -v /tao-elszamolas/services/tao-config/log4j2-prod.xml:/tao-elszamolas/services/tao-config/log4j2-prod.xml tao-elszamolas-config:latest
But in the longer term all of the services will go under docker-compose. This is just a test, something like a proof of concept.
My first question is: why is it not writing the log to the right place (into one of the defined volumes)? That is what I set in the Log4j2 config XML. If I use the config XML locally without Docker, everything works fine. When I log into the container, I can see the mounted volumes and I can "cd" into them. And I can also do this:
touch something.txt
So the file is created and can be seen both from the container and from the host machine. What am I doing wrong? I think the application can pick up the log config, because when I set an internal folder as the location of the log file, it logs the stuff inside the container.
I also temporarily set the permissions of the whole volume (and its children) to 777 to test whether permissions were the problem. They weren't. Any help would be very much appreciated!
My second question: is there any good web-based tool on Linux where I can manage my containers, start them, stop them, etc.? I googled it and found some, but I'm not sure which one is the best and free for basic needs, and which one is secure enough.
UPDATE:
I managed to resolve this problem after spending a couple of nights on it.
I had multiple problems. First of all, the order of the system properties in the Dockerfile ENTRYPOINT section wasn't quite right. The
-Dsomething=something
must come before "-jar", otherwise it does not work in Docker. I haven't found any official documentation stating that, but this is how it works for me. (In fact, java treats everything after the jar file name as arguments passed to the application rather than as JVM options, so this is standard Java behavior and not Docker-specific.) So the right ENTRYPOINT definition looks like this:
ENTRYPOINT ["java", "-DlogFileLocation=/tao-elszamolas/logs", "-jar", "tao-elszamolas-config.jar"]
Secondly, when I mounted some folders into the container with the docker run command, like this:
-v /tao-elszamolas/logs:/tao-elszamolas/logs
then the log file wasn't written if the folder did not already exist inside the Docker container. But if I create that folder at some point before the ENTRYPOINT in the Dockerfile, then the logging is fine and the system writes its logs to the host machine. I didn't find any documentation stating this either, but this is my experience.
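Putting both fixes together, my Dockerfile now looks roughly like this (the mkdir line is the one that creates the mount target up front):

FROM openjdk:11-jdk-slim
COPY ./tao-elszamolas-config.jar /usr/src/taoelszamolas/tao-elszamolas-config.jar
WORKDIR /usr/src/taoelszamolas
# create the folder inside the image so the bind-mounted host folder has an existing target
RUN mkdir -p /tao-elszamolas/logs
ENV SPRING_PROFILES_ACTIVE prod
EXPOSE 9001
ENTRYPOINT ["java", "-DlogFileLocation=/tao-elszamolas/logs", "-jar", "tao-elszamolas-config.jar"]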
Just to provide some steps for verification:
Both Log4j and Spring Boot in general should not be aware of any Docker-related things, like volumes, mapped folders and so forth.
Instead, configure the logging of the application as if it ran without Docker at all: if you want a local file, make sure that the application indeed produces the logging file in a folder of your choice.
The next step would be mapping the folder with volumes in docker / docker-compose.
But first please validate the first step:
docker ps                        # to see the container id
docker exec -it <CONTAINER_ID> bash
# now check the logging file from within the docker container itself, even without volumes
If the logging file does not exist inside the container, it's a Java issue and you should configure logging properly. If it does exist there, it's a Docker issue.
You have a space in your entrypoint after log4j2- and before prod.xml:
ENTRYPOINT ["java", "-jar", "tao-elszamolas-config.jar", "-Dlog4j.configurationFile=file:/tao-elszamolas/services/tao-config/log4j2- prod.xml", "-DlogFileLocation=/tao-elszamolas/logs"]
It might be a problem.
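With the space removed, the line would read:

ENTRYPOINT ["java", "-jar", "tao-elszamolas-config.jar", "-Dlog4j.configurationFile=file:/tao-elszamolas/services/tao-config/log4j2-prod.xml", "-DlogFileLocation=/tao-elszamolas/logs"]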
We're trying to build a Docker stack that includes our complete application: a Postgres database and at least one web application.
When the stack is started, we expect the application to be immediately working - there should not be any delay due to database setup or data import. So the database schema (DDL) and the initial data have to be imported when the image is created.
This could be achieved by RUN commands in the Dockerfile, for example:
RUN psql.exe -f initalize.sql -h myhost -d mydatabase -U myuser
RUN data-import.exe myhost mydatabase myuser
However, AFAIU this would execute data-import.exe inside the Postgres container, which can only work if the Postgres container is a Windows container. Our production uses a Linux Postgres distribution, so this is not a good idea. We need the image to be a Linux Postgres container.
So the natural solution is to execute data-import.exe on the host, like this:
When we run docker build, a Linux Postgres container is started.
RUN psql.exe ... runs some SQL commands inside the Postgres container.
Now, our data-import.exe is executed on the host. Its Postgres client connects to the database in the container and imports the data.
When the data import is done, the data is committed to the image, and docker builds an image which contains the Postgres database together with the imported data.
Is there a Docker command or build step that supports this? If not, how can we implement this scenario in Docker?
Use the correct tool; a Dockerfile is not a hammer for everything.
Obviously you come from a state where you had Postgres up before running some import tool. You can mimic that strategy by firing up a Postgres container (without a Dockerfile, just docker/kubernetes), then running the import tool against it, stopping the Postgres container, and making a snapshot of the result using "docker commit". The committed image will be used for the next stages of your deployment.
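A rough sketch of that flow (the container name, image tag, registry name and password are placeholders; the database and user names mirror the question):

# 1. Start a throwaway Postgres container and publish its port to the host.
#    PGDATA is pointed at a non-volume path because "docker commit" does not
#    capture data stored in declared volumes such as /var/lib/postgresql/data.
docker run -d --name pg-seed -p 5432:5432 \
  -e POSTGRES_USER=myuser -e POSTGRES_PASSWORD=secret -e POSTGRES_DB=mydatabase \
  -e PGDATA=/pgdata postgres:10.5

# 2. Run the schema script and the import tool on the host against the published port.
psql.exe -f initalize.sql -h localhost -d mydatabase -U myuser
data-import.exe localhost mydatabase myuser

# 3. Freeze the result into a new image.
docker stop pg-seed
docker commit pg-seed my-registry/postgres-with-data:latest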
In Docker generally the application data is stored separately from images and containers; for instance you'd frequently use a docker run -v option to store data in a host directory or Docker volume so that it would outlive its container. You wouldn't generally try to bake data into an image, both for scale reasons and because any changes will be lost when a container exits.
As you've described the higher-level problem, I might distribute something like a "test kit" that included a docker-compose.yml and a base data directory. Your Docker Compose file would use a stock PostgreSQL container with data:
postgres:
  image: postgres:10.5
  volumes:
    - './postgres:/var/lib/postgresql/data'
To answer the specific question you asked, docker build steps only run individual commands within Docker container space; they can't run arbitrary host commands, read filesystem content outside of the tree containing the Dockerfile, or write any sort of host filesystem content outside the container.
I'm pretty new to Docker and Docker Compose. I have managed to build my image and push it to Docker Hub. The app I built is simple and consists of 2 images, the official php7-apache and mysql images, all declared in docker-compose.yml.
I told my team to pull the image I built from my Docker Hub repository using docker pull ... and start it using docker run -d .... But when we run docker ps on the production server, only 1 process is running and there is no MySQL.
Usually, when I run locally using docker-compose up I get this in the terminal:
Creating network "myntrelease_default" with the default driver
Creating myntrelease_mysql_1
Creating myntrelease_laravel_1
I can then access MySQL using docker-compose exec mysql bash and tweak some tables there. So far so good.
The question is: how can I use docker-compose.yml on the production server when it's not available there, since it isn't part of the image itself?
Short answer: Yes, you need the docker-compose.yml in the production environment.
Explanation: Every image is independent. Since your image is independent of the MySQL image (at least that's what I understand from your question), and docker-compose.yml defines the relationship between the two (e.g. how MySQL is accessible from the php7-apache image), you definitely need the docker-compose.yml in production. Even if you only have a single image, it's usually good to use a docker-compose.yml so that settings and configuration like volume mounts, ports etc. can be clearly defined.
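A minimal sketch of such a compose file (the image name, credentials and database name are placeholders for whatever your own file declares):

version: '2'
services:
  web:
    image: yourhubuser/yourapp:latest   # the php7-apache based image you pushed
    ports:
      - "80:80"
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: appdb
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  mysql-data:

Distribute this file to the production server alongside (not inside) the image; running docker-compose pull && docker-compose up -d there then starts both containers, and the application reaches the database under the hostname mysql (the service name).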
I want to containerize a web application (a WAR file) along with Postgres as the database and Tomcat as the server.
What will be the procedure to do that?
I am using the following Dockerfile:
FROM tomcat:8-jre8
MAINTAINER lpradel
RUN echo "export JAVA_OPTS=\"-Dapp.env=staging\"" > /usr/local/tomcat/bin/setenv.sh
COPY ./application.war /usr/local/tomcat/webapps/staging.war
CMD ["catalina.sh", "run"]
Write a Dockerfile for each application.
E.g.: base a Dockerfile on a Tomcat server Docker image, copy over your WAR file and start Tomcat in the CMD part of the Dockerfile.
For Postgres you should be able to use an existing image.
Updated answer:
The Dockerfile you are using is correct. It should produce a Tomcat image which contains the web application you want. However, you may want to connect that Tomcat container with a Postgres container.
The easier way to connect multiple containers is to use a docker-compose file. To use docker-compose, refer to docker-compose#build.
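For example, a docker-compose.yml next to the Dockerfile above might look roughly like this (the service names, credentials and database name are assumptions):

version: '3'
services:
  web:
    build: .          # builds the tomcat:8-jre8 based image from the Dockerfile above
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:9.6
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:

docker-compose up --build then builds the WAR image and starts both containers on a shared network, where the application can reach Postgres at db:5432.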
Original answer
You can mount your WAR file into the Tomcat container at /usr/local/tomcat/webapps/name_of_your_file.war. This will let the WAR file be deployed automatically by Tomcat.
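As a quick illustration (the host path is an assumption; the image tag and webapps path are the ones from your Dockerfile):

docker run -d -p 8080:8080 \
  -v $(pwd)/application.war:/usr/local/tomcat/webapps/application.war \
  tomcat:8-jre8

Tomcat then auto-deploys the mounted WAR just as it would one copied in at build time.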
By the way, I am going through a similar process using a MySQL database. Taking a look at my deployment file might be helpful to you.
I've just created my first dockerized app and I am using docker-compose to start it on my client's server:
web:
  image: user/repo:latest
  ports:
    - "8080:8080"
  links:
    - db
db:
  image: postgres:9.4.4
It exposes a REST API (Node.js) on port 8080. The REST API uses a Postgres database. It works fine. My idea is that I will give this file (docker-compose.yml) to my client and he will just run docker-compose pull && docker-compose up -d each time he wants to pull fresh app code from the repo (assuming he has rights to access the user/repo repository).
However I have to handle two tasks: database backups and log backups.
How can I expose the database to the host (Docker host) system so that I can, for example, define a cron job that makes a database dump and stores it on S3?
I've read some articles about Docker container storage and Docker volumes. As I understand it, in my setup all database files are stored in the container's writable layer and will be lost if the container is removed from the host. So I should use a Docker volume to hold the database data on the "host side", right? How can I do this with the postgres image?
In my app I log all info to stdout and, in case of errors, to stderr. It would be cool (I think) if those logs were "streamed" directly to some file(s) on the host system so they could be backed up to S3, for example (again by a cron job?). How can I do this? Or maybe there is a better approach?
Sorry for so many questions, but I am new to the Docker world and it's really hard for me to understand how it actually works or how it's supposed to work.
You could execute a command in the running container to create a backup, like docker exec db <command> > sqlDump. I do not know much about Postgres, but in that case <command> would write the dump to stdout and > sqlDump would redirect it to the file sqlDump, which would be created in the host's current working directory. Then you can include that dump file in your backup. That could be done perfectly well with a cron job defined on the host. But a much better solution is linked in the next paragraph.
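For Postgres specifically, the dump tool inside the stock postgres image is pg_dump, so the call might look like this (the user and database names are assumptions):

docker exec db pg_dump -U postgres mydb > sqlDump.sql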
If you run your containers as you described above, your volumes are deleted when you delete your container. You could go for a second approach and use volume containers as described here. In that case you could remove and re-create e.g. the db container without losing your data. A backup can then be created very easily via a temporary container instance following these instructions. Assuming /dbdata is the place where your volume is mounted and contains the database data to be backed up:
docker run --volumes-from dbdata -v $(pwd):/backup <imageId> tar cvf /backup/backup.tar /dbdata
Since version 1.6 you can define a log driver for your container instance. With that you could interact e.g. with your syslog to have the log entries appear in the host's /var/log/syslog file. I do not know S3, but maybe that gives you some ideas:
docker run --log-driver=syslog ...