Editing Docker content

I am looking at Docker to share and contain applications. After reading several articles on the subject, I can't figure out what the steps would be to use a Docker container for actual development. Is that even an acceptable use?
My thought process goes like this:
Create Dockerfile
Share Dockerfile
User A and B download the Dockerfile
User A and B build the image and start a local container from it
User A and B are able to make changes in their local containers
User A and B submit their changes
From the articles I have been reading, Docker is only for sharing applications, not for continuous development the way I am imagining. The closest thing to what I describe above that I can think of is to make changes outside the containers and commit them to a repository outside the containers; the containers would then update their local copy of the repository and re-run the application internally, but you would never develop on the container itself.

Using Docker for the development process is not only possible but, in my opinion, handy and convenient.
What you might have missed while studying the Docker ecosystem is the concept of volumes.
With volumes you can bind mount a directory of your host (the developer's computer) into the container.
You may want to use volumes to share the application folder, making it possible for the devs to work on their local copies normally but have the application served by a Docker container.
A link to get started: https://docs.docker.com/engine/admin/volumes/volumes/
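As a minimal sketch (the paths, port and the httpd image here are just assumptions for illustration), a developer could bind mount a local working copy into a container like this:
# Bind mount the local working copy into the container's document root;
# edits made on the host are visible inside the container immediately.
docker run -d --name devapp \
  -p 8080:80 \
  -v /home/dev/myapp:/usr/local/apache2/htdocs \
  httpd:2.4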

How to create a single project out of multiple docker images

I have been working on a project where I have had several docker containers:
Three OSRM routing servers
Nominatim server
Container where the webpage code is with all the needed dependencies
So, now I want to prepare a version that a user could download and run. What is the best practice to do such a thing?
Firstly, I thought about joining everything into one container, but I have read that it is not recommended to run several processes in one container. Secondly, I thought about wrapping everything up in a VM, but that is not really a "program" a user can launch. And my third idea was to write a script that would download each container from Docker Hub separately and launch the webpage. But I am not sure whether that is best practice, or maybe there are better ideas.
When you need to deploy a full project composed of several containers, you may use a specialized tool.
A well-known one for single-server usage is docker-compose:
Compose is a tool for defining and running multi-container Docker applications
https://docs.docker.com/compose/
You could provide your users with:
a docker-compose file
your application's Docker images (e.g. through Docker Hub)
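As a rough sketch (the service names, images and ports below are assumptions, not taken from the question), the compose file you ship could look like this, and users would only need to run docker-compose up:
# Write a minimal docker-compose.yml (image names and ports are placeholders)
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: myuser/webpage      # container with the webpage code and dependencies
    ports:
      - "80:80"
    depends_on:
      - nominatim
      - osrm
  nominatim:
    image: myuser/nominatim
  osrm:
    image: myuser/osrm
EOF

# Pull the images from Docker Hub and start everything
docker-compose up -d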
For clusters/cloud we talk more about orchestrators like Docker Swarm, Kubernetes, or Nomad.
Kubernetes's documentation is here:
https://kubernetes.io/

Docker PGAdmin Container persistent config

I am new to Docker. What I want is a pgAdmin container that I can pull and always have my configs and connections up to date. I am not really sure how to do that: can I have a volume which is always shared, for example between my Windows PC at home and the one at work? I couldn't find a good tutorial for that and don't know if it makes sense. Let's say my computer gets stolen: I just want to install Docker and my images and be back up and running.
What about a shared directory using Dropbox? As far as I know, the local Dropbox directories are always synced with the actual Dropbox account, which means you can have the config up to date on all of your devices.
Alternatively, you can save the configuration (as long as it does not contain sensitive data) in a git repository which you can clone and then start using. In both cases the directory can be used as a volume in Docker.
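For example, a minimal sketch using the dpage/pgadmin4 image (the Dropbox path and credentials are placeholders); that image keeps its settings and server definitions under /var/lib/pgadmin, so pointing a volume at a synced folder keeps them portable:
docker run -d --name pgadmin \
  -p 5050:80 \
  -e PGADMIN_DEFAULT_EMAIL=me@example.com \
  -e PGADMIN_DEFAULT_PASSWORD=changeme \
  -v /home/me/Dropbox/pgadmin:/var/lib/pgadmin \
  dpage/pgadmin4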
That's not something you can do with Docker itself. You can only push images to Docker Hub, and they do not contain information that you added to a container during execution.
What you could do is use a backup routine (to S3, for example) and sync your config and connections between the Docker container running on your home PC and the one at work.
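A rough sketch of such a routine (the container name, paths and bucket are placeholders, and the aws CLI is assumed to be installed):
# Archive the pgAdmin data volume from the running container
docker run --rm --volumes-from pgadmin -v "$(pwd)":/backup busybox \
  tar czf /backup/pgadmin-config.tar.gz /var/lib/pgadmin

# Push the archive to S3; run the reverse on the other machine to restore
aws s3 cp pgadmin-config.tar.gz s3://my-bucket/pgadmin-config.tar.gz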

Can I migrate processes between fat containers?

I recently began getting into Docker and containers. Up to now, I understand that the philosophy behind containers is to run one process per container, so we end up with applications which can be run easily and consistently regardless of the environment. Also, containers are intrinsically connected to their image, so if you want to save the changes to a container you need to commit and create a new image.
But let's say I want to run multiple processes inside a single container, AKA a fat container. I know it can be done and things like "Supervisord" and "Baseimage-docker" can help manage processes within fat containers.
Now we get to my question: Is there a way to have a fat container running, save the run state of a single process and migrate said process to another container?
I've looked online but I haven't really found anyone that has said that this is possible. So I'm turning to you guys in case one of you have thought about this problem or maybe I've missed something along the way.
I am not sure whether this question might be opinion based, but here is what I think you might be able to do. Let's say you have a web application, such as a Django application, that uses Redis within the same container, which would be considered a fat container, and you need to migrate Redis to be a standalone service in its own container. Then you have to do the following:
1- Prepare a Docker image that has Redis installed; you might go with your own image or use the official Redis Docker image.
2- Copy the configuration that Redis is using from the fat container so you can mount it later inside the new Redis container.
3- Change the Django application settings to point to the new Redis container.
4- Remove the Redis service and its configuration from the fat container, or maybe build a new image.
And that's it: now start the Redis container and restart the Django application container for the change to take effect, or start a new one if you modified the fat image.
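A rough sketch of those steps with the Docker CLI (the container names and config paths are placeholders):
# (2) Copy the Redis config out of the existing fat container
docker cp fat-app:/etc/redis/redis.conf ./redis.conf

# (1) Run a standalone Redis container from the official image,
#     mounting the copied config
docker run -d --name redis \
  -v "$(pwd)/redis.conf:/usr/local/etc/redis/redis.conf" \
  redis redis-server /usr/local/etc/redis/redis.conf

# (3)/(4) Update the Django settings to use the new Redis container,
#         remove the bundled Redis from the fat image, then restart the app
docker restart fat-app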
There's the famous Quake demo and the ability to migrate the state of an entire container with CRIU. That's probably the closest I've seen to what you're talking about. More here: https://criu.org/Docker
As far as a "single" process inside a container, maybe just migrate the entire container and kill the processes you want moved?
I would say the more common pattern in the Docker community is single process containers that are freely killed/updated/etc.
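For the whole-container route, Docker exposes CRIU through its experimental checkpoint commands; a rough sketch (this assumes CRIU is installed, the daemon runs in experimental mode, and the container/checkpoint names are placeholders):
# Freeze the running container's process state to a named checkpoint
docker checkpoint create fat-app cp1

# Later (or after copying the checkpoint data to another host),
# start the container again from that saved state
docker start --checkpoint cp1 fat-app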

How to share images between multiple docker hosts?

I have two hosts and Docker is installed on each.
As we know, each Docker host stores its images in the local /var/lib/docker directory.
So if I want to use some image, such as ubuntu, I must execute docker pull to download it from the internet on each host.
I think that's slow.
Can I store the images on a shared disk array? Then one host would pull the image once, allowing every host with access to the shared disk to use the image directly.
Is that possible, and is it good practice? Why is Docker not designed like this?
It might require hacking Docker's source code to implement this.
Have you looked at this article?
Dockerizing an Apt-Cacher-ng Service
http://docs.docker.com/examples/apt-cacher-ng/
Extract:
This container makes the second download of any package almost instant.
At least one node will be very fast, and I think it should be possible to tell the second node to use the cache of the first node.
Edit: you can run your own registry with a command similar to
sudo docker run -p 5000:5000 registry
see
https://github.com/docker/docker-registry
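A rough sketch of how the other hosts would then use such a registry (registry-host is a placeholder; a plain-HTTP registry must also be whitelisted in each daemon's insecure-registries setting or given TLS):
# On the host that already has the image: tag it and push it to the private registry
docker tag ubuntu registry-host:5000/ubuntu
docker push registry-host:5000/ubuntu

# On any other host that can reach registry-host over the LAN
docker pull registry-host:5000/ubuntu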
What you are trying to do is not supposed to work, as explained by cpuguy83 in this github/docker issue.
Indeed:
The underlying storage driver would need to synchronize access.
Sharing /var/lib/docker is far from enough and won't work!
According to docs.docker.com/registry:
You should use the Registry if you want to:
tightly control where your images are being stored
fully own your images distribution pipeline
integrate image storage and distribution tightly into your in-house development workflow
So I guess that this is the (/your) best option to work this out (but I guess you already had that info; I just add it here to fill in the details).
Good luck!
Update (2016-01-25): the Docker mirror feature is deprecated.
Therefore this answer is no longer applicable; it is left here for reference.
Old info:
What you need is the mirror mode of the Docker registry, see https://docs.docker.com/v1.6/articles/registry_mirror/
It is supported directly by docker-registry.
Of course, you can run such a mirror of the public registry locally.

What would be a good docker webdev workflow?

I have a hunch that Docker could greatly improve my webdev workflow, but I haven't quite managed to wrap my head around how to approach a project when adding Docker to the stack.
The basic software stack would look like this:
Software
Docker image(s) providing a custom LAMP stack
Apache with several modules
MySQL
PHP
Some CMS, e.g. SilverStripe
Git
Workflow
I could imagine the workflow to look somewhat like the following:
Development
Write a Dockerfile that defines a LAMP-container meeting the requirements stated above
REQ: The container should start Apache/MySQL right after booting
Build the docker image
Copy the files required to run the CMS into e.g. ~/dev/cmsdir
Put ~/dev/cmsdir/ under version control
Run the docker container, and somehow mount ~/dev/cmsdir to /var/www/ on the container
Populate the database
Do work in /dev/cmsdir/
Commit & shut down docker container
Deployment
Set up remote host (e.g. with ansible)
Push container image to remote host
Fetch cmsdir-project via git
Run the docker container, pull in the database and mount cmsdir into /var/www
Now, this looks all quite nice on paper, BUT I am not quite sure whether this would be the right approach at all.
Questions:
While developing locally, how would I get the database to persist between reboots of the container instance? Or would I need to run an SQL dump every time before spinning down the container?
Should I have separate container instances for the db and the apache server? Or would it be sufficient to have a single container for above use case?
If using separate containers for database and server, how could I automate spinning them up and down at the same time?
How would I actually mount /dev/cmsdir/ into the container's /var/www/ directory? Should I utilize data volumes for this?
Did I miss any pitfalls? Anything that could be simplified?
If you need database persistence independent of your CMS container, you can use one container for MySQL and one container for your CMS. In that case, you can keep your MySQL container running and redeploy your CMS as often as you want, independently.
For development, another option is to map the MySQL data directories from your host/development machine using data volumes. This way you can manage the data files for MySQL (in Docker) using git (on the host) and "reload" the initial state any time you want (before starting the MySQL container).
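A minimal sketch of that setup (the paths, container name and password are placeholders):
# Keep the MySQL data files on the host so they survive container restarts
docker run -d --name cms-db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v /home/user/dev/mysql-data:/var/lib/mysql \
  mysql:8.0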
Yes, I think you should have a separate container for db.
I am using just a basic script:
#!/bin/bash
# start both containers detached and capture their IDs (options elided)
JOB1=$(docker run -d ... /usr/sbin/mysqld)
JOB2=$(docker run -d ... /usr/sbin/apache2)
echo "MySql=$JOB1, Apache=$JOB2"
Yes, you can use data volumes (the -v switch). I would use this for development. You can use read-only mounting, so no changes will be made to this directory if you don't want them (your app should store its data somewhere else anyway).
docker run -v=/home/user/dev/cmsdir:/var/www/cmsdir:ro image /usr/sbin/apache2
Anyway, for the final deployment I would build an image using a Dockerfile with ADD cmsdir /var/www/cmsdir (the ADD source has to live inside the build context, so build from /home/user/dev).
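For example, a sketch of that deployment build (the base image is an assumption; the CMS files are expected to sit in cmsdir inside the build context):
# Build a deployment image with the CMS files baked in
cat > Dockerfile <<'EOF'
FROM php:apache
ADD cmsdir /var/www/cmsdir
EOF
docker build -t mycms .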
I don't know :-)
You want to use docker-compose. Follow the tutorial here; it is very simple and seems to tick all your boxes.
https://docs.docker.com/compose/
I understand this post is over a year old at this point, but I have recently asked myself very similar questions and have several good answers for you.
You can set up a MySQL Docker instance and have the data persist in a stateless data container, i.e. the data container does not need to be actively running.
Yes, I would recommend having a separate instance for your web server and database. This is the power of Docker.
Check out this repo I have been building. Basically, it is as simple as make build && make run, and you can have a web server and database container running locally.
Use the -v argument when running the container for the first time; this will link a specific folder on the host running the container into the container.
I think your ideas are great and it is currently possible to achieve all that you are asking.
Here is a turnkey solution achieving all of the needs you have listed.
I've put together an easy-to-use Docker Compose setup that should match your development workflow requirements.
https://github.com/ehyland/docker-silverstripe-dev
Main Features
Persistent DB
Your choice of HHVM + NGINX or Apache2 + PHP5
Debug and set breakpoints with xDebug
The README.md should be clear enough to get you started.
