Wordpress image with mysql data - docker

Is there any image available that contains WordPress along with MySQL data?
When I commit and take a backup of the image, the MySQL data is not included. I would prefer a single image for both.
I tried to create such image using this Dockerfile:
FROM tutum/lamp:latest
RUN rm -fr /app && git clone https://github.com/WordPress/WordPress.git /app
EXPOSE 80
CMD ["/run.sh"]
I can initiate a fresh installation using a command like this...
docker run -p 88:80 shantanuo/wp
But the container cannot be moved to another server "as is". I need to take a data backup using the mysqldump command, and that is something I am trying to avoid. Is it possible?
If I do not use volumes in the container, then I am able to copy the WordPress image along with its data.
https://hub.docker.com/r/shantanuo/lamp/~/dockerfile/
But it does not work on the new server. Adding wordpress tag.

Is there any image available that contains WordPress along with MySQL data?
Short answer: not recommended.
An image usually deals with one service (so two images would be involved here: WordPress and MySQL).
And the persistent data would not be "in" the image, but on the host, in a volume / bind mount.
For instance, the tutumcloud/lamp image does declare volumes:
# Add volumes for MySQL
VOLUME ["/etc/mysql", "/var/lib/mysql" ]
The docker run command initializes the newly created volume with any data that exists at the specified location within the base image.
Making your own image without those lines might work as you expect (i.e., commit a container with its data).
But if the server reboots at any time, or you have to docker run your original container again, it will start anew, without the data.
A typical Docker WordPress image would use a MySQL one:
version: '3.1'
services:
  wordpress:
    image: wordpress
    restart: always
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_PASSWORD: example
  mysql:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
And in turn, that MySQL container would use a host-mounted volume in order to persist the database:
docker run --name some-mysql -v /my/own/datadir:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
See for instance "Quickstart: Compose and WordPress"
So not only should you commit your WordPress image, but your MySQL one as well, and your volume.
However, committing a volume is not supported: see "Commit content of mounted volumes as well" in order to backup that volume with your WordPress database in it.
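One way to take that volume backup is the pattern from Docker's documentation on backing up and restoring volumes: mount the MySQL container's volumes into a throwaway container and tar them up. A sketch, with some-mysql standing in for your MySQL container's name:
docker run --rm --volumes-from some-mysql -v $(pwd):/backup ubuntu \
    tar czf /backup/mysql-data.tar.gz /var/lib/mysql
# and on the other server, once a new some-mysql container exists:
docker run --rm --volumes-from some-mysql -v $(pwd):/backup ubuntu \
    bash -c "cd / && tar xzf /backup/mysql-data.tar.gz"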
With those three backups, you then migrate them to your other server.
However, this seems overly complex, and a fresh WordPress/MySQL docker project on the second server is easier to start.
You would then need, yes, your database dump file.
And some other WordPress folders (like themes).
See "Easy WordPress Migration with Docker".
That would be the recommended way over trying to commit existing containers from one server and "transplant" them onto another server.
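For the database dump itself, the official mysql image documentation shows how to run mysqldump through docker exec; a sketch, again with some-mysql as the container name:
docker exec some-mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > all-databases.sql
# and to restore on the new server:
docker exec -i some-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < all-databases.sql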

If you want to export your working dataset to another server, docker has the commit command. This command creates a new image from a running container.
$ docker commit c3f279d17e0a svendowideit/testimage:version3
Documentation.

Related

Docker commit is not saving my changes to image

I'm new to the docker world: I'm at a point where I can deploy docker containers and do some work.
Trying to get to the next level of saving my changes and moving my containers/images to another pc/server.
Currently, I'm using docker on Windows 10, but I do have access to an Ubuntu 16.04 server to test my work.
This is where I'm stuck: I have Wordpress and MariaDB images deployed on Docker.
My WP is running perfectly OK. I have installed a few themes and created a few pages with images.
At this point, I'd like to save my work and send it to my friend, who will deploy my image and do further work on this same Wordpress.
What I have read online is: I should run the docker commit command to save and create my docker image in .tar format and then send this image file (.tar) to my friend. He will run docker load -i on my file to load it as an image into his docker and then create a container from it, which should give him all of my work on Wordpress.
Just to clarify, I'm committing both the Wordpress and MariaDB containers.
I don't have any external volumes mounted, so all the work is being saved in the containers.
I do remember putting a check mark on drives C and D in docker settings, but I don't know if that has anything to do with volumes.
I don't get any errors in my commit or in moving the .tar files. Once my friend creates his containers from my committed images, he gets a clean Wordpress (like a new installation of Wordpress, starting from the wp setup pages).
Another thing I noticed is that the image I create has the same file size as the original image I pulled. When I run docker images, I see that my image is 420MB, and so is the Wordpress image.
I think my image should be a little bit bigger, since I have installed themes and plugins and uploaded images to Wordpress. It should add at least 3 to 5 MB over the original image. Please help. Thank you.
Running docker system df gives me this.
TYPE            TOTAL   ACTIVE   SIZE      RECLAIMABLE
Images          5       3        1.259GB   785.9MB (62%)
Containers      3       3        58.96kB   0B (0%)
Local Volumes   2       2        311.4MB   0B (0%)
Build Cache     0       0        0B        0B
Make sure, as shown here, to commit a running container (to avoid any data cleanup)
docker commit CONTAINER_ID yourImage
After the docker commit command, you can use docker save to save your image in a tar, and docker load to import it back, as shown here.
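A sketch of that round trip (CONTAINER_ID and the image name are placeholders):
docker commit CONTAINER_ID myblog:snapshot     # create an image from the running container
docker save -o myblog.tar myblog:snapshot      # export the image to a tar file
# copy myblog.tar to the other machine, then:
docker load -i myblog.tar                      # import the image
docker run -d myblog:snapshot                  # start a container from it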
You should never run docker commit.
To answer your immediate question, containers that run databases generally store their data in volumes; they are set up so that the data is stored in an anonymous volume even if there was no docker run -v option given to explicitly store data in a named volume or host directory. That means that docker commit never persists the data in a database, and you need some other mechanism to copy the actual data around.
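You can see those anonymous-volume declarations in the image metadata. For example (the exact output depends on the image version, so treat this as illustrative):
docker inspect -f '{{ .Config.Volumes }}' mariadb
# map[/var/lib/mysql:{}]
docker inspect -f '{{ .Config.Volumes }}' wordpress
# map[/var/www/html:{}]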
At a more practical level, your colleague can ask questions like "where did this 400 MB tarball come from, why should I trust it, and how can I recreate it if it gets damaged in transit?" There are also good questions like "the underlying database has a security fix I need, so how do I get the changes I made on top of a newer base image?" If you're diligent you can write down everything you do in a text file. If you then have a text file that says "I started from mysql:5.6, then I ran ..." that's very close to being a Dockerfile. The syntax is straightforward, and Docker has a good tutorial on building and running custom images.
When you need a custom image, you should always describe what goes into it using a Dockerfile, which can be checked into source control, and can rebuild an image using docker build.
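As a sketch of what that looks like (the config file and tag name are made up for illustration; /etc/mysql/conf.d is where the official mysql image reads extra configuration from):
# Dockerfile -- "I started from mysql:5.6, then I added my tuning config"
FROM mysql:5.6
COPY my-tuning.cnf /etc/mysql/conf.d/
Then build it with:
docker build -t my-mysql:custom .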
For your use case it doesn't sound like you actually need a custom image. I would probably suggest setting up a Docker Compose YAML file that describes your setup and actually stores the data in local directories. The database half of it might look like:
version: '3'
services:
  db:
    image: 'mysql:8.0'
    volumes:
      - './mysql:/var/lib/mysql'
    ports:
      - '3306:3306'
The data will be stored on the host, in a mysql subdirectory. Now you can tar up this directory tree and send that tar file to your colleague, who can then untar it and recreate the same environment with its associated data.
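A sketch of the hand-off, assuming the docker-compose.yml and the mysql/ data directory sit side by side:
docker-compose stop db                      # stop the database so the files on disk are consistent
tar czf project-backup.tar.gz docker-compose.yml mysql/
# send project-backup.tar.gz to your colleague, who runs:
tar xzf project-backup.tar.gz
docker-compose up -d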
Use docker build (changes to the images should be stored in the Dockerfile).
Now if you have multiple services, just use docker's brother, docker-compose. One extra step you have to do is create a docker-compose.yml (don't be afraid yet, my friend, it's nothing complicated). All you're doing in this file is listing out your images (along with defining where the Dockerfile for each image is; it could be in a subfolder per image). You can also define some other properties there if you'd like, as in the sketch below.
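A minimal sketch of that layout (service names and folder paths are made up):
version: '3'
services:
  web:
    build: ./web        # Dockerfile for the web app lives in ./web
    ports:
      - "8080:80"
  db:
    build: ./db         # Dockerfile for the database image lives in ./db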
Notice that certain directories are considered volume directories by docker, meaning that they are container-specific and therefore never saved in the image. The /data directory is such an example. When docker commit my_container my_image:my_tag is executed, all of the container's filesystem is saved, except for /data. To work around it, you could do, inside the container:
mkdir /data0
cp -r /data/* /data0/
Then, outside the container:
docker commit my_container my_image:my_tag
Then you would perhaps want to copy the data on /data0 back to /data, in which case you could make a new image:
In the Dockerfile:
FROM my_image:my_tag
CMD cp -r /data0/* /data/ && my_other_CMD
Notice that trying to copy content to /data in a RUN command will not work, since a new container is created for every layer and, in each of them, the contents of /data are discarded. After the container has been instantiated, you could also do:
docker exec -d my_container /bin/bash -c "cp -r /data0/* /data"
You have to use volumes to store your data.
Here you can find the documentation: https://docs.docker.com/storage/volumes/
For example, you can do something like this in your docker-compose.yml:
version: '3.1'
services:
  wordpress:
    image: wordpress:php7.2-apache
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: databasename
      WORDPRESS_DB_USER: username
      WORDPRESS_DB_PASSWORD: password
      WORDPRESS_DB_NAME: namedatabase
    volumes:
      - name_volume:/var/www/html
volumes:
  name_volume:
or, with a bind mount instead of a named volume:
    volumes:
      - ./yourpath:/var/www/html

How to implement changes made to docker-compose.yml to detached running containers

The project is currently running in the background from this command:
docker-compose up -d
I need to make two changes to their docker-compose.yml:
Add a new container
Update a previous container to have a link to the new container
After changes are made:
NOTE the "<--" arrows for my changes
web:
  build: .
  restart: always
  command: ['tini', '--', 'rails', 's']
  environment:
    RAILS_ENV: production
    HOST: example.com
    EMAIL: admin#example.com
  links:
    - db:mongo
    - exim4:exim4.docker # <-- Add link
  ports:
    - 3000:3000
  volumes:
    - .:/usr/src/app
db:
  image: mongo
  restart: always
exim4: # <-------------------------------- Add new container
  image: exim4
  restart: always
  ports:
    - 25:25
  environment:
    EMAIL_USER: user#example.com
    EMAIL_PASSWORD: abcdabcdabcdabcd
After making the changes, how do I apply them? (without destroying anything)
I tried docker-compose down && docker-compose up -d but this destroyed the Mongo DB container... I cannot do that... again... :sob:
docker-compose restart says it won't recognize any changes made to docker-compose.yml
(Source: https://docs.docker.com/compose/reference/restart/)
docker-compose stop && docker-compose start sounds like it'll just start up the old containers without my changes?
Test server:
Docker version: 1.11.2, build b9f10c9/1.11.2
docker-compose version: 1.8.0, build f3628c7
Production server is likely using older versions, unsure if that will be an issue?
If you just run docker-compose up -d again, it will notice the new container and the changed configuration and apply them.
But:
(without destroying anything)
There are a number of settings that can only be set at container startup time. If you change these, Docker Compose will delete and recreate the affected container. For example, links are a startup-only option, so re-running docker-compose up -d will delete and recreate the web container.
this destroyed the Mongo DB container... I cannot do that... again...
db:
  image: mongo
  restart: always
Add a volumes: option to this so that data is stored outside the container. You can keep it in a named volume, possibly managed by Docker Compose, which has some advantages, but a host-system directory is probably harder to accidentally destroy. You will have to delete and restart the container to change this option. But note that you will also have to delete and restart the container if, for example, there is a security update in MongoDB and you need a new image.
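A sketch of that change with a host directory (./mongo-data is an example path, not something from your current setup; the official mongo image keeps its databases under /data/db):
db:
  image: mongo
  restart: always
  volumes:
    - ./mongo-data:/data/db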
Your ideal state here is:
Actual databases (like your MongoDB container) store data in named volumes or host directories
Applications (like your Rails container) store nothing locally, and can be freely destroyed and recreated
All code is in Docker images, which can always be rebuilt from source control
Use volumes as necessary to inject config files and extract logs
If you lose your entire /var/lib/docker directory (which happens!) you shouldn't actually lose any state, though you will probably wind up with some application downtime.
Just docker-compose up -d will do the job.
The output should look like this:
> docker-compose up -d
Starting container1 ... done
> docker-compose up -d
container1 is up-to-date
Creating container2 ... done
As a side note, docker-compose is not really for production. You may want to consider docker swarm.
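If you do look at swarm mode, a version-3 compose file can be deployed largely as-is; a sketch (mystack is a placeholder stack name):
docker swarm init                                    # turn this engine into a single-node swarm
docker stack deploy -c docker-compose.yml mystack    # create/update the services in the stack
docker stack services mystack                        # check service status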
The key here is that up is idempotent.
If you update the configuration in docker-compose.yaml, run:
docker compose up -d
If Compose builds images before running them and you want to rebuild them:
docker compose up -d --build

How to link multiple Docker containers and encapsulate the result?

I have a Node.js web-application that connects to a Neo4j database. I would like to encapsulate these in a single Docker image (using also a Neo4j Docker container), but I'm a docker novice and can't seem to figure this out. What's the recommended way to do it in the latest Docker versions?
My intuition would be to run the Neo4j container nested inside the app container. But from what I've read, I think the supported / recommended approach is to link the containers together. What I need is pretty well illustrated in this image. But the article where the image comes from isn't clear to me. Anyway, it's using the soon-to-be-deprecated legacy container linking, while networking is recommended these days. A tutorial or explanation would be much appreciated.
Also, how does docker-compose fit into all this?
Running a container within another container would imply running a Docker engine within a Docker container. This is referred to as dind, for Docker-in-Docker, and I would strongly advise against it. You can search for 'dind' online and discover why in most cases it is a bad idea, but as it is not the main object of your question I won't extend this subject any further.
Running both a node.js process and a neo4j process in the same container
While most people will tell you to refrain from running more than one process within a Docker container, nothing prevents you from doing so. If you want to follow this path, take a look at Using Supervisor with Docker on the Docker documentation website, or at the Phusion baseimage Docker image.
Just be aware that this way of doing things will make your Docker image more and more difficult to maintain over time.
Linking containers
As you found out, keeping Docker images as simple as you can (i.e. running one and only one app within a Docker container) will make your life easier in the long term.
Linking containers together is trivial when both containers run on the same Docker engine. It is just a matter of:
having your neo4j container expose the port its service listens on
running your node.js container with the --link <neo4j container name>:<alias> option
within the node.js application configuration, set the neo4j host to the <alias> hostname, docker will take care of forwarding that connection to the IP it assigned to the neo4j container
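Putting those three steps together with the legacy --link flag, a sketch (my-nodejs-app is a placeholder image, and it is assumed the app reads its database location from an environment variable such as NEO4J_URL, as the example project later in this answer does):
docker run -d --name myneo4j neo4j
docker run -d --name mynodejs --link myneo4j:db \
    -e NEO4J_URL=http://db:7474 my-nodejs-app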
When you want to run those two containers on different hosts, things get more difficult.
With Docker Compose, you have to use the links: key to define your links.
The new Docker network feature
You also discovered that linking containers won't be supported in the future and that the new way of making multiple Docker containers communicate is to create a virtual network and attach those 2 containers to that network.
Here's how to proceed:
docker network create mynet
docker run --detach --name myneo4j --net mynet neo4j
docker run --detach --name mynodejs --net mynet <your nodejs image>
Your node application configuration should then use myneo4j as the host to connect to.
To tell Docker Compose to use the new network feature, you would have to use the --x-networking option. Also you would not use the links: key.
Using the new networking feature also means that you won't be able to define any alias for the db. As a result you have to use the container name. Beware that unless you use the container_name: key in your docker-compose.yml file, Compose will create container names based on the directory which contains your docker-compose.yml file, the service name as found in the yml file and a number.
For instance, the following docker-compose.yml file, if within a directory named "foo" would create two containers named foo_web_1 and foo_db_1:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
when started with docker-compose --x-networking up, the web app configuration should then use foo_db_1 as the db hostname.
While if you use container_name:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
  container_name: mydb
when started with docker-compose --x-networking up, the web app configuration should then use mydb as the db hostname.
Example of using Docker Compose to run a web app using nodeJS and neo4j
In this example, I will show how to dockerize the example app from the GitHub project aseemk/node-neo4j-template, which uses nodejs and neo4j.
I assume you already have Docker 1.9.0+ and Docker Compose 1.5+ installed.
This project will use 2 docker containers, one to run the neo4j database and one to run the nodeJS web app.
Dockerizing the web app
We need to build a Docker image from which Docker compose will run a container. For that, we will write a Dockerfile.
Create a file named Dockerfile (mind the capital D) with the following content:
FROM node
RUN git clone https://github.com/aseemk/node-neo4j-template.git
WORKDIR /node-neo4j-template
RUN npm install
# ugly 20s sleep to wait for neo4j to initialize
CMD sleep 20s && node app.js
This Dockerfile describes the steps the Docker engine will have to follow to build a docker image for our web app. This docker image will:
be based on the official node docker image
clone the nodeJS example project from Github
change the working directory to the directory containing the git clone
run the npm install command to download and install the nodeJS app dependencies
instruct docker which command to use when running a container of that image
A quick review of the nodeJS code reveals that the author allows us to configure the URL to use to connect to the neo4j database using the NEO4J_URL environment variable.
Dockerizing the neo4j database
Well people took care of that for us already. We will use the official Docker image for neo4j which can be found on the Docker Hub.
A quick review of the readme tells us to use the NEO4J_AUTH environment variable to change the neo4j password. Setting this variable to none will disable authentication altogether.
Setting up Docker Compose
In the same directory as the one containing our Dockerfile, create a docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
This Compose configuration file describes 2 services: db and web.
The db service will produce a container named my-neo4j-db from the official neo4j docker image and will start that container setting up the NEO4J_AUTH environment variable to none.
The web service will produce a container named at Docker Compose's discretion, using a docker image built from the Dockerfile found in the current directory (build: .). It will start that container, setting the environment variable NEO4J_URL to http://my-neo4j-db:7474 (note how we use the name of the neo4j container, my-neo4j-db, here). Furthermore, docker compose will instruct the Docker engine to expose the web container's port 3000 on the docker host's port 80.
Firing it up
Make sure you are in the directory that contains the docker-compose.yml file and type: docker-compose --x-networking up.
Docker compose will read the docker-compose.yml file, figure out it has to first build a docker image for the web service, then create and start both containers and finally will provide you with the logs from both containers.
Once the log shows web_1 | Express server listening at: http://localhost:3000/, everything is cooked and you can point your browser to http://<ip of the docker host>/.
To stop the application, hit Ctrl+C.
If you want to start the app in the background, use docker-compose --x-networking up -d instead. Then in order to display the logs, run docker-compose logs.
To stop the service: docker-compose stop
To delete the containers: docker-compose rm
Making neo4j storage persistent
The official neo4j docker image readme says the container persists its data on a volume at /data. We then need to instruct Docker Compose to mount that volume to a directory on the docker host.
Change the docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
With that config file, when you run docker-compose --x-networking up, docker compose will create a neo4j-data directory and mount it into the container at location /data.
Starting a 2nd instance of the application
Create a new directory and copy over the Dockerfile and docker-compose.yml files.
We then need to edit the docker-compose.yml file to avoid name conflict for the neo4j container and the port conflict on the docker host.
Change its content to:
db:
  container_name: my-neo4j-db2
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db2:7474
  ports:
    - 81:3000
Now it is ready for the docker-compose --x-networking up command. Note that you must be in the directory with that new docker-compose.yml file to start the 2nd instance up.

docker postgres with initial data is not persisted over commits

I created a rails app in a docker environment and it links to a postgres instance. I edited the postgres container to add initial data (by running rake db:setup from the rails app). Now I committed the postgres database, but it doesn't seem to remember my data when I create a new container (of the committed postgres image).
Isn't it possible to save data in a commit and then reuse it afterwards?
I used the postgres image: https://registry.hub.docker.com/_/postgres/
The problem is that the postgres Dockerfile declares "/var/lib/postgresql/data" as a volume. This is a just a normal directory that lives outside of the Union File System used by images. Volumes live until no containers link to them and they are explicitly deleted.
You have a few choices:
Use the --volumes-from flag to share data with new containers. This will only work if there is only one running postgres container at a time, but it is the best solution.
Write your own Dockerfile which creates the data before declaring the volume. This data will then be copied into the volume when the container is created.
Write an entrypoint or cmd script which populates the database at run time.
All of these suggestions require you to use Volumes to manage the data once the container is running. Alternatively, you could write your own Dockerfile and simply not declare a volume. You could then use docker commit to create a new image after adding data. This will probably work in the short term, but is definitely not how you should work with containers - it isn't repeatable and you will eventually run out of layers in the Union File System.
Have a look at the official Docker docs on managing data in containers for more info.
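A sketch of the third option, using the official postgres image's /docker-entrypoint-initdb.d hook, which runs any SQL or shell scripts placed there on the first start of a fresh data directory (init.sql is a hypothetical seed dump, e.g. produced by pg_dump):
FROM postgres
COPY init.sql /docker-entrypoint-initdb.d/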
Create a new Dockerfile and change PGDATA:
FROM postgres:9.2.10
RUN mkdir -p /var/lib/postgresql-static/data
ENV PGDATA /var/lib/postgresql-static/data
You should be all set with the following command. The most important part is the PGDATA location, which should be anything but the default.
docker run -e PGDATA=/var/lib/postgresql/pgdata -e POSTGRES_PASSWORD=YourPa$$W0rd -d postgres
It is not possible to save data during a commit, since the data resides on a mount that is specific to that container and will be removed once you run docker rm <container ID>. But you can use data volumes to share and reuse data between containers, and the changes made go directly to the volume.
You can use docker run -v /host/path:/container/path to mount the volume into the new container.
Please refer to: https://docs.docker.com/userguide/dockervolumes/
For keeping permanent data such as databases, you should define these data volumes as external, so they will not be removed or created automatically every time you run the docker-compose up or down commands, or redeploy your stack to the swarm.
...
volumes:
  db-data:
    external: true
...
then you should create this volume:
docker volume create db-data
and use it as the data volume for your database:
...
db:
  image: postgres:latest
  volumes:
    - db-data:/var/lib/postgresql/data
  ports:
    - 5432:5432
...
In production, there are many factors to consider when using docker to keep permanent data safe, especially in swarm mode or in a Kubernetes cluster.

How to combine two or more Docker images

I'm a newbie to docker.
I want to create an image with my web application. I need some application server, e.g. wlp, then I need some database, e.g. postgres.
There is a Docker image for wlp and there is a Docker image for postgres.
So I created the following simple Dockerfile:
FROM websphere-liberty:javaee7
FROM postgres:latest
Now, maybe it's lame, but when I build this image
docker build -t wlp-db .
run the container
docker run -it --name wlp-db-test wlp-db
and check it
docker exec -it wlp-db-test /bin/bash
only postgres is running and wlp is not even there. Directory /opt is empty.
What am I missing?
You need to use a docker-compose file. This lets you bind together two different containers that run two different images: one holding your server and the other the database service.
Here is an example of a nodejs server container working with a mongodb container.
First of all, I write the Dockerfile to configure the main container:
FROM node:latest
RUN mkdir /src
RUN npm install nodemon -g
WORKDIR /src
ADD app/package.json package.json
RUN npm install
EXPOSE 3000
CMD npm start
Then I create the docker-compose file to configure both containers and link them:
version: '3' # docker-compose version
services: # services are your different containers
  node_server: # first container, containing the nodejs server
    build: . # saying that all of my source files are at the root path
    volumes: # volumes are used for hot reload, for example
      - "./app:/src/app"
    ports: # binding the host port to the container port
      - "3030:3000"
    links: # linking the first service with the named mongo service (see below)
      - "mongo:mongo"
  mongo: # declaration of the mongodb container
    image: mongo # using the mongo image
    ports: # port binding for mongodb is required
      - "27017:27017"
I hope this helped.
Each service should have its own image/dockerfile. You start multiple containers and connect them over a network to be able to communicate.
If you wish to compose multiple containers in one file, check out docker-compose, which is made for just that!
You can't FROM multiple times in one file and expect both processes to run.
Each FROM starts a new build stage, and only the last stage ends up in the final image; that is why only Postgres is there (it comes second) and the Liberty files are missing.
This pattern is typically only done when you have some "setup" docker image, then a "runtime" image on top of it.
https://docs.docker.com/engine/userguide/eng-image/multistage-build/#use-multi-stage-builds
Also, what you're trying to do doesn't really follow the "microservices" approach. Run the database separately from your application. Docker Compose can assist you with that, and almost all the examples on Docker's website use Postgres with some web app.
Plus, you're starting an empty database and server. You need to copy at least a WAR file, for example, to run your server code.
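For the application server half, a sketch of what a runnable image might look like (app.war is a placeholder; the websphere-liberty image documentation describes dropping applications into /config/dropins):
FROM websphere-liberty:javaee7
COPY app.war /config/dropins/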
