Why weren't containers popular before Docker?

I'm currently getting into Docker and I'm asking myself why containers in general weren't hyped before Docker. It's not like containers were something new; the technology has been around for quite some time. Yet Docker gained its success practically overnight.
Is there something I didn't keep in mind?

It's a very broad question, but I will try to answer it.
Docker was at first built on LXC; it switched to libcontainer later.
LXC is actually pretty hard to use compared to Docker: you don't have any of the Docker-related tooling like Dockerfiles, Compose and so on.
So I would say containers weren't really a thing before because of the difficulty of LXC.

As Wassim said, I would say the main reason was that it required motivated sysadmins and specific kernels (with OpenVZ and AUFS), ...
Creating the equivalent of a Docker image was a complicated process.
Today it is a straightforward process: create a Dockerfile and just run
docker build -t mytag .
and you have created an image.
In 2004, you could not do that so easily.
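Today, for example, a minimal Dockerfile might look like this (the base image, package and script name are only illustrative):
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends curl
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
CMD ["entrypoint.sh"]
One docker build later you have a shareable image; with OpenVZ or LXC you had to script and document the equivalent setup yourself.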

Related

What is the difference between `docker compose` and `docker-compose`?

It seems that docker compose * mirrors many docker-compose * commands.
What is the difference between these two?
Edit: docker compose is not considered a tech preview anymore. The rest of the answer still stands as-is, including the not-yet-implemented command.
docker compose is currently a tech preview, but is meant to be a drop-in replacement for docker-compose. It is being built into the docker binary to allow for new features. There is still one command it hasn't implemented, and a few have been deprecated. These are quite rarely used, though, in my experience.
The goal is that docker compose will eventually replace docker-compose, but there is no timeline for that yet, and until that day you still need docker-compose for production.
Why did they do that?
docker-compose is written in Python while most Docker development is in Go, so they decided to recreate the project in Go, with the same features and more, for better integration.
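In day-to-day use the two are invoked almost identically; only the space versus the hyphen differs:
docker-compose up -d   # standalone Python tool
docker compose up -d   # Go rewrite, invoked through the docker CLI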

Is there something like save and restore snapshots in docker

I love Docker, and especially for complex CI environments it is just amazing.
The one thing that I really miss when working with Docker compared to a virtual machine is the ability to save and restore snapshots of a container, and I was wondering if Docker offers anything similar?
docker checkpoint may answer your needs.
Note that checkpoints are an experimental feature, so you may need to restart your Docker engine with experimental mode enabled.
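A rough sketch of the flow, assuming experimental mode is enabled in /etc/docker/daemon.json ({"experimental": true}) and CRIU is installed on the host; the container and checkpoint names are just placeholders:
docker checkpoint create mycontainer checkpoint1    # freeze the running container and save its state
docker start --checkpoint checkpoint1 mycontainer   # resume the container from that snapshot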

Portable docker daemon for deterministic CI builds

We are looking to make use of Docker to run integration tests within CI builds (with Bazel).
We need to support Debian as well as MacOS.
In order to guarantee build correctness, and ensure determinism and portability, we cannot rely on the host having a running docker daemon. The build needs to come with its own docker daemon.
What is the best way to achieve this? Is there a standard “portable” docker binary?
If not, what do you think would be the right approach to implement this?
On Linux systems, I imagine this would be relatively simple, as we would just need to download the binaries and run them (see the sketch below).
On macOS, I guess we would need to bundle it with HyperKit.
Would love to hear your thoughts on this.
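For the Linux case, the "download and run" idea would look roughly like this (a sketch only, using the static binaries from download.docker.com; the version, paths and flags would need tuning for a real CI setup):
curl -fsSLO https://download.docker.com/linux/static/stable/x86_64/docker-<version>.tgz
tar xzf docker-<version>.tgz        # unpacks dockerd, docker, containerd, runc, ...
sudo ./docker/dockerd --data-root "$PWD/docker-data" --host "unix://$PWD/docker.sock" &
./docker/docker --host "unix://$PWD/docker.sock" run --rm hello-world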
In terms of building Docker images, you should look at bazelbuild/rules_docker (disclaimer: I wrote/own them). They implement the only properly deterministic Docker builds of which I'm aware (at least to Bazel's standard).
They do this by avoiding Dockerfile and the Docker daemon (which most other approaches use), as it is unclear these can produce deterministic artifacts. This avoids the root requirement too, which is nice.
However, you specifically asked about testing, which tl;dr we have not solved.
@ittaiz is also interested in this and started this GitHub issue for discussing it. Would you mind moving the discussion there?

Is it possible/sane to develop within a container Docker

I'm new to Docker and was wondering if it was possible (and a good idea) to develop within a docker container.
I mean create a container, execute bash, install and configure everything I need, and start developing inside the container.
The container then becomes my main machine (for CLI-related work).
When I'm on the go (or when I buy a new machine), I can just push the container and pull it on my laptop.
This would solve the problem of having to keep and synchronize my dotfiles.
I haven't started using Docker yet, so is this realistic, or something to avoid (disk space problems and/or push/pull timing issues)?
Yes. It is a good idea, with the correct set-up. You'll be running code as if it was a virtual machine.
The Dockerfile configuration for creating a build system is not polished and will not expand shell variables, so pre-installing applications may be a bit tedious. On the other hand, once you have built your own image with its users and working environment, you won't need to build it again, and you can mount your own file system with the -v parameter of the run command, so the files you need are available both on the host and in the container. It's versatile.
> sudo docker run -t -i -v /home/user_name/Workspace/project:/home/user_name/Workspace/myproject <container-ID>
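A slightly fuller variant (the --name and -w values are only illustrative) names the container and sets the working directory, so you can stop it and re-attach to the same environment later:
> sudo docker run -t -i --name devbox -w /home/user_name/Workspace/myproject -v /home/user_name/Workspace/project:/home/user_name/Workspace/myproject <container-ID> bash
> sudo docker start -a -i devbox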
I'll play the contrarian and say it's a bad idea. I've done work where I've tried to keep a container "long running" and have modified it, but then accidentally lost it or deleted it.
In my opinion containers aren't meant to be long running VMs. They are just meant to be instances of an image. Start it, stop it, kill it, start it again.
As Alex mentioned, it's certainly possible, but in my opinion goes against the "Docker" way.
I'd rather use VirtualBox and Vagrant to create VMs to develop in.
A Docker container for development can be very handy. Depending on your stack and preferred IDE, you might want to keep the editing part outside, on the host, and instead mount the directory with the sources from the host into the container, as per Alex's suggestion. If you do so, beware of potential performance issues on Mac OS X with boot2docker.
I would not expect much from a workflow of pushing images around to sync between dev environments. IMHO, keeping Dockerfiles together with the code and syncing via SCM is a more straightforward direction to start with. I also keep supporting Makefiles in the same place to build the image(s) and run the container(s).
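As a sketch of that workflow (the image name and paths are only illustrative), the Dockerfile lives in the repository and the day-to-day commands are simply:
docker build -t myproject-dev .                                  # rebuild the dev image from the repo's Dockerfile
docker run -it --rm -v "$PWD":/src -w /src myproject-dev bash    # mount the sources and work inside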

How to bootstrap a docker container?

I created a docker image with pre-installed packages in it (apache, mysql, memcached, solr, etc). Now I want to run a command in a container made from this image, and this command relies on all my packages. I want to have all of them started when I start a new container.
I tried to use /sbin/init, but it doesn't work in docker.
The general opinion is to use a process manager to do this. I won't go into the details here, since I wrote a blog post on that: http://blog.trifork.com/2014/03/11/using-supervisor-with-docker-to-manage-processes-supporting-image-inheritance/
Note that another rather general opinion is to split up your containers: MySQL generally runs in a separate container. But you can try to get that working later on as well, of course :)
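As a rough illustration of the supervisor approach (the program names and paths are assumptions, not taken from the blog post), the image installs supervisor, copies a config that lists the services, and runs supervisord in the foreground as the container's command:
# supervisord.conf -- one [program:...] block per service
[supervisord]
nodaemon=true
[program:apache2]
command=/usr/sbin/apache2ctl -D FOREGROUND
[program:mysqld]
command=/usr/sbin/mysqld
# Dockerfile additions
RUN apt-get update && apt-get install -y supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord", "-n"]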
I see that this is an old topic; however, for someone who just came across it: docker-compose can be used to connect multiple containers, so most of the processes can be split into different containers. Furthermore, as mentioned earlier, different process managers can be used to run processes simultaneously, and the one I would like to mention is Chaperone. I find it really easy to use and slightly better than supervisor!
docker compose and docker-sync -> you cannot go wrong applying this concept.
-Glynn
