We are currently rolling out Pimcore 5 in our Docker/Kubernetes environment, but we haven't found a satisfying answer to the following question yet:
Which folders need to be persistent?
The documentation points out that the folders /var and /web/var are used to store logs and assets (from the admin interface). Are there any other folders that need to be persistent to keep the environment stable even after a container restart / rebuild?
Are there any problems with updates or downsides if we run a setup like this:
Git Repository for our Code Base
PHP-fpm Docker image that holds the code base (plus nginx and redis container)
Consistent Database
We would also like to share our results once we've managed to come up with a good solution.
Thank you very much!
I know this question is kind of specific :)
Yes, /var and /web/var need to be on a persistent and shared filesystem.
Further hints regarding this setup are in the documentation:
https://pimcore.com/docs/master/Development_Documentation/Installation_and_Upgrade/System_Setup_and_Hosting/Cluster_Setup.html
https://pimcore.com/docs/4.6.x/Development_Documentation/Installation_and_Upgrade/System_Setup_and_Hosting/Amazon_AWS_Setup/index.html
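For a Kubernetes setup, that typically means mounting both paths from ReadWriteMany-capable PersistentVolumeClaims (e.g. backed by NFS/EFS) so all replicas share them. A minimal sketch, assuming the Pimcore project root is /var/www/html (image and claim names are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pimcore
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pimcore
  template:
    metadata:
      labels:
        app: pimcore
    spec:
      containers:
        - name: php-fpm
          image: my-registry/pimcore-php-fpm:latest   # placeholder image
          volumeMounts:
            - name: pimcore-var
              mountPath: /var/www/html/var
            - name: pimcore-web-var
              mountPath: /var/www/html/web/var
      volumes:
        - name: pimcore-var
          persistentVolumeClaim:
            claimName: pimcore-var-pvc        # ReadWriteMany PVC
        - name: pimcore-web-var
          persistentVolumeClaim:
            claimName: pimcore-web-var-pvc    # ReadWriteMany PVC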
Related
I'm new to Docker and have some doubts.
In a dev environment (not a server), is it better to use just one container with Apache, PHP, and MySQL, for example, built from a single Dockerfile, or is it better to use one container for each service and wire them together with docker-compose?
I have set this up with docker-compose, but I don't know if it is the best way; it seems like unnecessary complexity to me, but I'm a newbie.
I have the following situation: I work with Magento, and it is a common need to have a clean installation to isolate and test modules. So I want to create my Magento 2 Docker environment, which contains just a clean Magento install and offers some easy way of putting my module files inside for testing, and on shutdown the environment goes back to a clean Magento 2 installation without my files. What is the best way to get this environment?
Thanks in advance.
I'd certainly recommend using a docker stack (defined in a docker-compose file), rather than trying to spin up a whole application stack inside a single container. You should generally have one service per container.
I believe what you are looking for in the second part of your question is a deployment orchestration tool. Docker does not replace deployment orchestration, but you can run shell scripts that do application setup in the Dockerfiles that build the containers you use in your stack.
As for access to files inside your containers, I'd look into docker volumes.
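To make the volumes part concrete, here is a rough sketch (image names and host paths are placeholders): a named volume keeps data around between container runs, while a bind mount lets you drop your module files into an otherwise clean container and get the clean state back as soon as you recreate it:
# Named volume: survives container removal, good for the database
docker volume create magento-db-data
docker run -d --name db \
    -e MYSQL_ROOT_PASSWORD=dev \
    -v magento-db-data:/var/lib/mysql \
    mysql:5.7

# Bind mount: your module files appear inside the container, but the image itself
# stays a clean Magento 2 install; recreating the container resets everything else
docker run -d --name magento \
    -v /home/me/my-module:/var/www/html/app/code/MyVendor/MyModule \
    my-magento2-image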
At the moment I am trying to figure out a good setup for my application on Amazon ECS.
My application needs a config file. Now I want to have a container that holds my config file, so that when I want to change something I don't need to redeploy my application.
I can't find any best-practice method for this. What I found out is that ECS tasks just do a docker run, and you can't do a docker create.
Does anyone have an idea how I can manage my config files for my applications?
Most likely using Docker for this is overkill. How complex is the data? If it's simple key-value pairs I would use DynamoDB and get rid of the file completely. Another option would be using EFS for the file, or attaching/detaching an EBS volume.
You should not do that; it makes the setup fragile, and you're not guaranteed to be able to access the file from all containers across a cluster (or you end up having it on all instances, which wastes resources). Why not package the config with the container as-is, or package as much as possible and provide environment variables to fill in the gaps? If you really want to go this route, I highly suggest something like S3.
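If you go the S3 route, the pattern can be as simple as an entrypoint script that pulls the file at startup. A rough sketch (bucket, key, and paths are placeholders; it assumes the AWS CLI is in the image and the task has an IAM role that can read the bucket):
#!/bin/sh
set -e
# Fetch the latest config from S3 before starting the app (placeholder bucket/key)
aws s3 cp s3://my-config-bucket/myapp/config.yml /app/config.yml
# Hand off to the real application process (placeholder command)
exec /app/run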
I've been browsing Docker Hub and I'm trying to determine the quality of builds.
I've got 2 questions:
Question 1
I came across this image: https://hub.docker.com/r/perfectweb/production/~/dockerfile/
It uses a lot of configuration rewriting inside the image; wouldn't it be better to just copy external configuration files to the container? Like described here: Separate specific configuration in Dockerfile.
Question 2
One of the most-starred images for lemp is this one: https://hub.docker.com/r/stenote/docker-lemp/
It has a warning not to use it for production (because of the empty root password for MySQL), but I'm wondering: are there other reasons why this image is not production-safe?
wouldn't it be better to just copy external configuration files to the container?
If you copied an already-modified php.ini from disk, that file might overwrite changes that a newer version of PHP introduces in its default php.ini.
So the current process (rewrites) allows php.ini to evolve (when a new version of PHP is installed), while keeping the rewrites visible in the Dockerfile.
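To make that concrete, the rewrite approach typically looks something like this in a Dockerfile (an illustrative sketch, not the actual file from that image; the php.ini path and values depend on the distribution and PHP version):
# Tweak whichever php.ini the installed PHP version ships with,
# instead of replacing the whole file (path and values are examples)
RUN sed -i \
    -e 's/^;\?memory_limit.*/memory_limit = 256M/' \
    -e 's/^;\?upload_max_filesize.*/upload_max_filesize = 64M/' \
    /etc/php5/apache2/php.ini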
are there other reasons why this image is not production-safe?
Another reason might be that, by default, those services are accessible over HTTP, not HTTPS.
I have two hosts and docker is installed in each.
As we know, each Docker host stores its images in the local /var/lib/docker directory.
So if I want to use some image, such as ubuntu, I must execute docker pull to download it from the internet on each host.
I think that's slow.
Can I store the images on a shared disk array instead, have one host pull the image once, and then let every host with access to the shared disk use the image directly?
Is this possible, and is it good practice? Why is Docker not designed like this?
It may require hacking Docker's source code to implement this.
Have you looked at this article?
Dockerizing an Apt-Cacher-ng Service
http://docs.docker.com/examples/apt-cacher-ng/
extract
This container makes the second download of any package almost instant.
At least one node will be very fast, and I think it should be possible to tell the second node to use the cache of the first node.
Edit: you can run your own registry, with a command similar to
sudo docker run -p 5000:5000 registry
see
https://github.com/docker/docker-registry
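Once the registry is running, the other hosts pull through it instead of the public hub. Roughly like this (the host name is a placeholder, and for a plain-HTTP registry each Docker daemon may need to be started with --insecure-registry registry-host:5000):
# On the host that already has the image:
docker pull ubuntu
docker tag ubuntu registry-host:5000/ubuntu
docker push registry-host:5000/ubuntu

# On any other host that can reach the registry:
docker pull registry-host:5000/ubuntu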
What you are trying to do is not supposed to work, as explained by cpuguy83 in this github/docker issue.
Indeed:
The underlying storage driver would need to synchronize access.
Sharing /var/lib/docker alone is far from enough and won't work!
According to docs.docker.com/registry:
You should use the Registry if you want to:
tightly control where your images are being stored
fully own your images distribution pipeline
integrate image storage and distribution tightly into your in-house development workflow
So I guess this is the (/your) best option to work this out (you probably already had that info -- I just add it here to complete the details).
Good luck!
Update (2016-01-25): the Docker mirror feature is deprecated.
Therefore this answer is no longer applicable; it is left here for reference.
Old info
What you need is the mirror mode for docker registry, see https://docs.docker.com/v1.6/articles/registry_mirror/
It is supported directly by docker-registry.
Of course, you can also use a public mirror service locally.
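For reference only (since the feature is deprecated as noted above), the setup from that article boiled down to pointing each Docker daemon at a local mirror, roughly like this (the host name is a placeholder):
# Start the daemon with a registry mirror configured (deprecated v1 mechanism);
# the exact daemon invocation (docker daemon / dockerd / docker -d) depends on your version
docker daemon --registry-mirror=http://my-mirror-host:5000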
I have a hunch that docker could greatly improve my webdev workflow - but I haven't quite managed to wrap my head around how to approach a project adding docker to the stack.
The basic software stack would look like this:
Software
Docker image(s) providing custom LAMP stack
Apache with several modules
MYSQL
PHP
Some CMS, e.g. Silverstripe
GIT
Workflow
I could imagine the workflow to look somewhat like the following:
Development
Write a Dockerfile that defines a LAMP-container meeting the requirements stated above
REQ: The machine should start apache/mysql right after booting
Build the docker image
Copy the files required to run the CMS into e.g. ~/dev/cmsdir
Put ~/dev/cmsdir/ under version control
Run the docker container, and somehow mount ~/dev/cmsdir to /var/www/ on the container
Populate the database
Do work in /dev/cmsdir/
Commit & shut down docker container
Deployment
Set up remote host (e.g. with ansible)
Push container image to remote host
Fetch cmsdir-project via git
Run the docker container, pull in the database and mount cmsdir into /var/www
Now, this looks all quite nice on paper, BUT I am not quite sure whether this would be the right approach at all.
Questions:
While developing locally, how would I get the database to persist between reboots of the container instance? Or would I need to run a SQL dump every time before spinning down the container?
Should I have separate container instances for the db and the apache server? Or would it be sufficient to have a single container for above use case?
If using separate containers for database and server, how could I automate spinning them up and down at the same time?
How would I actually mount /dev/cmsdir/ into the container's /var/www/ directory? Should I utilize data volumes for this?
Did I miss any pitfalls? Anything that could be simplified?
If you need database persistence independent of your CMS container, you can use one container for MySQL and one container for your CMS. In that case, you can keep your MySQL container running and redeploy your CMS as often as you want, independently.
For development, another option is to map MySQL's data directories from your host/development machine using data volumes. This way you can manage the data files for MySQL (in Docker) using Git (on the host) and "reload" the initial state any time you want (before starting the MySQL container).
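A rough development-only sketch of that (paths, password, and version are placeholders):
# Keep MySQL's data directory on the host so it can be versioned and reset at will
docker run -d --name dev-mysql \
    -e MYSQL_ROOT_PASSWORD=devpassword \
    -v /home/user/dev/mysql-data:/var/lib/mysql \
    mysql:5.6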
Yes, I think you should have a separate container for db.
I am just using a basic script:
#!/bin/bash
# Start the MySQL and Apache containers detached so each command returns the container ID
JOB1=$(docker run -d ... /usr/sbin/mysqld)
JOB2=$(docker run -d ... /usr/sbin/apache2)
echo "MySQL=$JOB1, Apache=$JOB2"
Yes, you can use data volumes (the -v switch). I would use this for development. You can mount the directory read-only, so no changes will be made to it if you want (your app should store its data somewhere else anyway).
docker run -v=/home/user/dev/cmsdir:/var/www/cmsdir:ro image /usr/sbin/apache2
Anyway, for the final deployment, I would build an image using a Dockerfile with ADD /home/user/dev/cmsdir /var/www/cmsdir (a minimal sketch follows at the end of this answer).
I don't know :-)
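For point 4, a minimal sketch of such a Dockerfile (the base image is illustrative, and ADD paths are relative to the build context, so this assumes docker build is run from /home/user/dev):
FROM php:5.6-apache
# Bake the CMS directory into the image for deployment
ADD cmsdir/ /var/www/cmsdir/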
You want to use docker-compose. Follow the tutorial here. Very simple. Seems to tick all your boxes.
https://docs.docker.com/compose/
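As a rough sketch of what a docker-compose.yml for the stack above could look like (image names, credentials, and paths are placeholders, not a tested setup):
version: "2"
services:
  web:
    image: my-lamp-image          # placeholder: Apache + PHP with the required modules
    ports:
      - "80:80"
    volumes:
      - ./cmsdir:/var/www         # your version-controlled CMS directory
    depends_on:
      - db
  db:
    image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: dev    # development-only credentials
    volumes:
      - db-data:/var/lib/mysql    # persists across container restarts and rebuilds
volumes:
  db-data:
docker-compose up -d then starts both containers together, and docker-compose stop shuts them both down, which covers the spinning-up-and-down part of your questions.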
I understand this post is over a year old at this time, but I have recently asked myself very similar questions and have several great answers to your questions.
You can set up a MySQL Docker instance and have the data persist in a stateless data container, i.e. the data container does not need to be actively running (see the sketch at the end of this answer)
Yes, I would recommend having separate containers for your web server and database. This is the power of Docker.
Check out this repo I have been building. Basically it is as simple as make build & make run and you can have a web server and database container running locally.
You use the -v argument when running the container for the first time; this will link a specific folder in the container to a folder on the host running the container.
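For completeness, the stateless data-container pattern from point 1 looks roughly like this (image names and credentials are placeholders; on current Docker versions a named volume does the same job more simply):
# Data-only container that just owns the /var/lib/mysql volume; it never needs to run
docker create -v /var/lib/mysql --name mysql-data busybox true

# The actual MySQL container reuses that volume
docker run -d --name mysql --volumes-from mysql-data \
    -e MYSQL_ROOT_PASSWORD=dev mysql:5.6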
I think your ideas are great and it is currently possible to achieve all that you are asking.
Here is a turnkey solution achieving all of the needs you have listed.
I've put together an easy to use docker compose setup that should match your development workflow requirements.
https://github.com/ehyland/docker-silverstripe-dev
Main Features
Persistent DB
Your choice of HHVM + NGINX or Apache2 + PHP5
Debug and set breakpoints with Xdebug
The README.md should be clear enough to get you started.