Install an entire database (including binaries) inside a volume in Docker

I need to containerize a JanusGraph database inside Docker, but I don't know which files/directories need to reside in a volume to be persistent/writable. To keep things simple and fast, can I install the entire database in a volume? Not only the data, but the entire app, all the binaries, etc. I think this would be a fast way to containerize some of my apps.
The JanusGraph subdirectories for binaries, data, and logs all reside inside a "janusgraph-hadoop" directory.
For example: I would create a volume called /janusgraph-hadoop and run the install command so all the software ends up inside it (it would be a volume).
Would this be considered bad practice, or is there no problem in doing that?
I know there are already containerized JanusGraph images, but they are not official, and my question is more general: can I containerize apps in a more direct way, without having to research which directories need to be in a volume and which do not?
I will not redistribute any of this; it's just for my own use.

At a technical level, nothing would stop you from launching a plain container with an attached volume and installing software there.
docker run -v my_opt:/opt -it --rm ubuntu sh
I wouldn't consider this an especially effective use of Docker. If your colleague wants to use your database installation, you have no way of giving it to them; if you leave the project for six months and come back to it, you'll have no record of how you built this setup. If you were set on this approach, you might find the networking and snapshot setups for more typical virtual machines to be better matched to it.
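For contrast, the more conventional approach is to bake the installation into an image with a Dockerfile and attach a volume only to the data directory. A rough sketch, where the base image, paths and start script are assumptions for illustration rather than JanusGraph's actual layout:

# Hypothetical Dockerfile: binaries are baked into the image, only data lives in a volume
FROM eclipse-temurin:8-jre
# copy the unpacked distribution (downloaded beforehand) into the image
COPY janusgraph-hadoop/ /opt/janusgraph-hadoop/
WORKDIR /opt/janusgraph-hadoop
# declare only the data directory as persistent storage
VOLUME /opt/janusgraph-hadoop/data
# hypothetical start script name
CMD ["bin/janusgraph-server.sh"]

This keeps the build reproducible (the Dockerfile is the record of how the image was made) while the volume still preserves your data across container restarts.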

Related

What's the purpose of the node-modules container in wolkenkit?

That container is built when deploying the application.
Its purpose looks like sharing dependencies across modules.
It seems to be started as a container, but nothing is apparently running in it, a bit like an init container.
The console says it starts/stops that component when running the respective wolkenkit start and wolkenkit stop commands, on startup and on shutdown.
When you run docker ps, though, that container cannot be found.
Can someone explain these components?
When starting a wolkenkit application, the application is boxed in a number of Docker containers, and these containers are then started along with a few other containers that provide the infrastructure, such as databases, a message queue, ...
The reason the application is split into several Docker containers is that wolkenkit builds upon the CQRS pattern, which suggests separating the read side of an application from its write side; hence there is one container for the read side and one for the write side (actually there are a few more, but you get the picture).
Now, since you may develop on an operating system other than Linux, the wolkenkit application may run under a different operating system than the one you develop on, as within Docker it's always Linux. This means that the start command cannot simply copy the node_modules folder into the containers, as it may contain binary modules which would then not be compatible (imagine installing on Windows on the host, but running on Linux within Docker).
To avoid issues here, wolkenkit runs an npm install when starting the application inside of the containers. The problem now is that if wolkenkit did this in every single container, the start would be super slow (it's not the fastest thing on earth anyway, due to all the Docker building and starting that's happening under the hood). So wolkenkit tries to optimize this as much as possible.
One concept here is to run npm install only once, inside of a container of its own. This is the node-modules container you encountered. This container is then linked as a volume to all the containers that contain the application's code. This way you only have to run npm install once, but multiple containers can use the outcome of this command.
Since this container now contains data, but no code, it only has to be there, it doesn't actually do anything. This is why it gets created, but is not run.
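The underlying Docker pattern looks roughly like this; the image and path names below are made up for illustration and are not wolkenkit's actual setup:

# run npm install exactly once, in a dedicated container that owns a node_modules volume
docker run --name node-modules -v /app/node_modules -w /app my-app-image npm install
# the application containers reuse that volume instead of installing again
docker run -d --volumes-from node-modules my-app-readside
docker run -d --volumes-from node-modules my-app-writeside

After npm install finishes, the node-modules container stops but keeps existing, which is why you see it created but not running.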
I hope this makes it a little bit clearer, and I was able to answer your question :-)
PS: Please note that I am one of the core developers of wolkenkit, so take my answer with a grain of salt.

Intro to Docker for a FreeBSD jail user - should I start the container with systemd, and how?

We're currently migrating our room server to the cloud for reliability, but our provider doesn't offer FreeBSD. Although I'm prepared to pay and upload a custom system image for deployment, I nonetheless want to learn how to start an application system instance using Docker.
In a FreeBSD jail, what I did was extract an entire base.txz directory hierarchy as system content into /usr/jail/app, then run pkg -r /usr/jail/app install apache24 php perl; finally I configured /etc/jail.conf to start the /etc/rc script in the jail.
I followed the official FreeBSD Handbook, and this is generally what I've worked out so far.
But Docker is another world entirely.
To build a Docker image, there are two options: a) import from a tarball, or b) use a Dockerfile. The latter lets you specify a "CMD", which is the default command to run, but:
Q1. Why isn't that available with a)?
Q2. Where is information like "CMD" and "ENV" stored? In the image? In the container?
Q3. How do I start a GNU/Linux system in a container? Do I just run systemd and let it figure out the rest from its configuration? Do I need to pass it special arguments or environment variables?
You should think of a Docker container as packaging around a single running daemon. The ideal Docker container runs one process and one process only. Systemd in particular is so heavyweight and invasive that it's actively difficult to run inside a Docker container; if you need multiple processes in a container, a lighter-weight init system like supervisord can work for you, but that's more of an exception than standard packaging.
Docker has an official tutorial on building and running custom images which is worth a read through; this is a pretty typical use case for Docker. In particular, best practice is to write a Dockerfile that describes how to build an image and check it into source control. Containers should avoid having persistent data if they can (storing everything in an external database is ideal); if you change an image, you need to delete and recreate any containers based on it. If local data is unavoidable then either Docker volumes or bind mounts will let you keep data "outside" the container.
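As an example of the Dockerfile route, something like the following would package the Apache/PHP stack mentioned in the question (the base image and paths here are an illustrative sketch, not a tested recipe):

# Hypothetical Dockerfile for an Apache + PHP application
FROM php:8.2-apache
# copy the application code into Apache's document root
COPY ./src/ /var/www/html/
# the base image already starts Apache in the foreground as pid 1;
# stating the command explicitly for clarity
CMD ["apache2-foreground"]

You would then build and run it with docker build -t my-web-app . followed by docker run -d -p 8080:80 my-web-app.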
While Docker has several other ways to create containers and images, none of them are as reproducible. You should avoid the import, export, and commit commands; and you should only use save and load if you can't use or set up a Docker registry and are forced to move images between systems via a tar file.
On your specific questions:
Q1. I suspect the reason the non-docker-build paths for creating images don't easily let you specify things like CMD is just an implementation detail: if you look at the docker history of an image, you'll see the CMD winds up as its own layer. Don't worry about it and use a Dockerfile.
Q2. The default CMD, any set ENV variables, and other related metadata are stored in the image alongside the filesystem tree. (Once you launch a container, it has a normal Unix process tree, with the initial process being pid 1.)
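You can confirm this by inspecting the image metadata directly; for example (the image name here is hypothetical):

# show the stored default command and environment of an image
docker inspect --format '{{.Config.Cmd}} {{.Config.Env}}' my-web-app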
Q3. You don't "start a system" in a container. Generally, you run one process or service per container and manage their lifecycles independently.

Avoiding a constant "stop, build, up" loop when developing with Docker locally

I've recently delved into the wonder that is Docker and have set up my containers using docker-compose (an Express app and a MySQL DB).
It's been great for sharing the project with the team and pushing to a VPS, but one thing that's fast becoming tedious is the need to stop the running app, run docker-compose build and then docker-compose up any time there are changes to the code (which I believe is also creating numerous unnecessary images?).
I've scoured around but haven't found a clear-cut way around this, short of ditching docker-compose locally and using docker run to run the Express app pointed at a local DB (which would do away with a lot of the easy-setup perks that come with Docker, such as building the DB from scratch).
Is there a Nodemon-style way of working with Docker (where images/containers get updated automatically when code changes)? Is there something obvious I'm missing? Or is my approach the necessary "evil" that comes with working on a Dockerised app?
You can bind-mount your source directory into the container for local development. Any changes on the host will be reflected in the container. https://docs.docker.com/storage/volumes/
You might consider separate services for deployment/development. I usually have a separate service which mounts the source directory and installs test dependencies inside the container.
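A rough sketch of that second idea, assuming an Express app with nodemon installed as a dev dependency (the file, service and path names are made up):

# docker-compose.override.yml - development-only settings
version: "3"
services:
  app:
    build: .
    # bind-mount the source so edits on the host appear in the container immediately
    volumes:
      - ./src:/usr/src/app/src
    # restart the Node process on file changes instead of rebuilding the image
    command: npx nodemon src/index.js

With this in place, docker-compose up picks up code changes without a stop/build/up cycle; you only rebuild when dependencies change.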

Docker separation of concerns / services

I have a Laravel project which I am using with Docker. Currently I am using a single container to host all the services (Apache, MySQL, etc.) as well as the dependencies I need for my project (project files, git, composer, etc.).
From what I am reading, the current best practice is to put each service into a separate container. So far this seems simple enough, since these services are designed to run at length (the Apache server, the MySQL server). When I spin up these 'service' containers using -d, they remain running (visible in docker ps) since their main process runs continuously.
However, when I remove all the services from my project container, there is no main process left to run continuously. This means my container exits immediately once spun up.
I have read about the 'hacks' of running other processes like tail -f /dev/null or sleep infinity, using interactive mode, installing supervisord (which I assume would end up watching no processes in such containers?), and even leaving the container running in the foreground (taking up a terminal console...).
How do I network such a container to keep it running like the abstracted services, but detached, without these hacks? I cannot seem to find much information on this in the official Docker docs, nor can I find any examples in other projects (please link any).
EDIT: I am not talking about volumes / storage containers to store the data my project processes, but rather how I can use a container to store the project itself and its dependencies that aren't services (project files, git, composer)
When you run the container, try running with the flags:
docker run -dt ..... etc
You might even try:
docker run -dti ..... etc
Let me know if this brings any joy. It has certainly worked for me on occasion.
I know you wanted to avoid hacks, but if the above fails then also add:
CMD cat
to the end of your Dockerfile - it is a hack, but it is the cleanest hack :)
So after reading this a few times, along with Joachim Isaksson's comment, I finally get it. Tools don't need their containers to run continuously in order to be used. Proper separation of the project files, services (MySQL, Apache) and tools (git, composer) is done differently.
The project files are persisted within a data volume container. The services are networked, since they expose ports. The tools live in their own containers, which share the project files' data volume - they are not networked. Logs, databases and other output can be persisted in different volumes.
When you wish to run one of these tools, you spin up the tool container, passing the relevant command using docker run. The tool then manipulates the data within the directory persisted in the shared volume. The container only persists for as long as the command manipulating the data in the shared volume takes to run, and then it stops.
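For instance, running composer under this pattern might look something like this (the volume name is hypothetical; "composer" here refers to the official Composer image):

# one-off tool container: works on the shared project volume, then exits
docker run --rm -v project_files:/app -w /app composer install

The --rm flag removes the container once composer install finishes, so nothing keeps running afterwards.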
I don't know why this took me so long to grasp, but this is the aha moment for me.

Is it possible/sane to develop within a Docker container?

I'm new to Docker and was wondering if it is possible (and a good idea) to develop within a Docker container.
I mean create a container, execute bash, install and configure everything I need, and start developing inside the container.
The container then becomes my main machine (for CLI-related work).
When I'm on the go (or when I buy a new machine), I can just push the container, and pull it on my laptop.
This solves the problem of having to keep and synchronize your dotfiles.
I haven't started using Docker yet, so is this realistic, or something to avoid (disk space problems and/or push/pull timing issues)?
Yes. It is a good idea, with the correct set-up. You'll be running code as if it was a virtual machine.
The Dockerfile configuration for creating a build system is not polished and will not expand shell variables, so pre-installing applications may be a bit tedious. On the other hand, after building your own image to create new users and a working environment, it won't be necessary to build it again; plus you can mount your own file system with the -v parameter of the run command, so you can have the files you are going to need both on your host and in the container. It's versatile.
sudo docker run -t -i -v /home/user_name/Workspace/project:/home/user_name/Workspace/myproject <image-id>
I'll play the contrarian and say it's a bad idea. I've done work where I've tried to keep a container "long running" and have modified it, but then accidentally lost it or deleted it.
In my opinion containers aren't meant to be long running VMs. They are just meant to be instances of an image. Start it, stop it, kill it, start it again.
As Alex mentioned, it's certainly possible, but in my opinion goes against the "Docker" way.
I'd rather use VirtualBox and Vagrant to create VMs to develop in.
A Docker container for development can be very handy. Depending on your stack and preferred IDE, you might want to keep the editing part outside, on the host, and mount the directory with the sources from the host into the container instead, as per Alex's suggestion. If you do so, beware of potential performance issues on macOS with boot2docker.
I would not expect much from a workflow of pushing images to sync between dev environments. IMHO, keeping Dockerfiles together with the code and syncing via SCM is a more straightforward direction to start with. I also keep supporting Makefiles in the same place to build the image(s) and run the container(s).
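As a small illustration of that last point, the commands such a supporting Makefile would wrap might look like this (the image name and mount path are assumptions):

# build the development image from the Dockerfile kept next to the code
docker build -t my-dev-image .
# start a throwaway shell in it, with the source tree bind-mounted
docker run -it --rm -v "$(pwd)":/workspace -w /workspace my-dev-image bash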
