Docker for custom build process

I have an executable that performs a number of tasks such as:
Copy .NET source code to a directory
Run another executable that modifies the source code
Run MSBuild to build the code
Publish the code
Run add-migration to create database
Run another executable that populates the database from files
etc.
I've set this up on my laptop, and everything works correctly, and now I want to publish this to the cloud.
Is it possible to create a docker image that does all these kinds of things, and run it on Azure Container Instances? Or do I need to run this kind of system on a VM?
I'm new to Docker, so I don't know what it's capable of, but if I can run this on ACI as needed, that would be great: I wouldn't be paying for a VM 24/7 when the process only happens a few times a day.

Docker is an open-source, centralized platform designed to create, deploy, and run applications. It uses OS-level virtualization: Docker runs the application in containers on the host. A container is similar to a virtual machine, but with the advantage that there is no preallocation of RAM as there is with a VM.
Is it possible to create a docker image that does all these kinds of things, and run it on Azure Container Instances? Or do I need to run this kind of system on a VM?
Yes, it is possible to create a Docker image for this using a Dockerfile (optionally with a docker-compose YAML file to tie services together). Write all the tasks you want to perform into it, build an image from that file, push the image to a container registry, and create a container instance from it.
You can refer to these threads for how to copy source code into a Docker image using a Dockerfile, and how to create a database in a Docker container using a YAML file.
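For illustration, here is a minimal Dockerfile sketch of how such a pipeline could be packaged into an image. The base image tags, project name, and the modify-source step are assumptions rather than details from the question, and the database migration/seeding steps are usually better run as a startup or one-off command rather than at image build time:

# Sketch only: image tags, paths, and tool names below are placeholders.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
# Copy the .NET source code into the image
COPY . .
# Run the custom executable that modifies the source code (hypothetical name)
RUN ./tools/modify-source.sh
# Build and publish the code
RUN dotnet publish MyApp.csproj -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app/publish .
# Run migrations and data population from an entrypoint script at container
# start (or as a separate one-off container), not during the image build.
ENTRYPOINT ["dotnet", "MyApp.dll"]

An image built this way can be pushed to a container registry (e.g. Azure Container Registry) and started on demand as an Azure Container Instance, so nothing is billed while the process is not running.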

Related

Separate shell scripts from application in docker container?

I have an ftp.sh script that downloads files from an external FTP server to my host, and another (Java) application that imports the downloaded content into a database. Currently, both run on the host, triggered by a cron job, as follows:
importer.sh:
#!/bin/bash
source ftp.sh
java -jar app.jar
Now I'd like to move my project to Docker. From a design point of view: should the .sh script and the application each reside in a separate container, or should both be bundled into one container?
I can think of the following approaches:
Run the ftp script on host, but java app in docker container.
Run the ftp script in its own docker container, and the java app in another docker container.
Bundle both the script and the Java app into one Docker container, then call a wrapper script with: ENTRYPOINT ["wrapper.sh"]
So the underlying question is: should each Docker container serve only one purpose (either download files, or import them)?
Sharing files between containers is tricky and I would try to avoid doing that. It sounds like you are trying to set up a one-off container that does the download, then does the import, then exits, so "orchestrating" this with a shell script in a single container will be much easier than trying to set up multiple containers with shared storage. If the ftp.sh script sets some environment variables then it will be basically impossible to export them to a second container.
From your workflow, it doesn't sound like building an image containing the file and the import tool is the right approach. If it were, I could envision a workflow where you ran ftp.sh on the host, or as the first part of a multi-stage build, and then COPY the file into the image. For a workflow that's "download a file, then import it into a database, where the file changes routinely", I don't think that's what you're after.
The setup you have now should work fine if you just package it into a container. There is some generic advice I'd give in a code-review context, but your last option of putting everything into one image and running the wrapper script as the main container process makes sense.
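As a rough sketch of that last option (the base image, package name, and file layout are assumptions, not from the question):

# Sketch only: base image and paths are assumptions.
FROM eclipse-temurin:17-jre
WORKDIR /opt/importer
# Install whatever FTP client ftp.sh relies on (package name is a guess)
RUN apt-get update && apt-get install -y --no-install-recommends ftp && rm -rf /var/lib/apt/lists/*
COPY ftp.sh wrapper.sh app.jar ./
RUN chmod +x ftp.sh wrapper.sh
# wrapper.sh is essentially the existing importer.sh: source ftp.sh, then java -jar app.jar
ENTRYPOINT ["./wrapper.sh"]

You could then run the whole download-and-import job on a schedule with a single docker run from cron, replacing the current host cron job.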

Persisting changes to Windows Registry between restarts of a Windows Container

Given a Windows application running in a Docker Windows container that makes changes to the Windows registry while it runs, is there a Docker switch/command that allows those registry changes to be persisted, so that when the container is restarted the changed values are retained?
As a comparison, file changes can be persisted between container restarts by exposing mount points e.g.
docker volume create externalstore
docker run -v externalstore:\data microsoft/windowsservercore
What is the equivalent feature for Windows Registry?
I think you're after dynamic changes (each run of the container produces different user keys that you want saved for the next run), like a roaming profile, rather than a static set of registry settings, but I'm writing for the static case as it's the easier and more likely answer.
It's worth noting the distinction between a container and an image.
Images are static templates.
Containers are started from images, and while they can be stopped and restarted, in most enterprise designs (such as Kubernetes) you usually throw them away entirely after each execution.
If you wish to run a docker container like a VM (not generally recommended), stopping and starting it, your registry settings should persist between runs.
It's possible to convert a container to an image by using the docker commit command. In this method, you would start the container, make the needed changes, then commit the container to an image. New containers would be started from the new image. While this is possible, it's not really recommended for the same reason that cloning a machine or upgrading an OS is not. You will get extra artifacts (files, settings, logs) that you don't really want in the image. If this is done repeatedly, it'll end up like a bad photocopy.
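For completeness, the commit workflow looks roughly like this (the container and image names are placeholders), though as noted it is not the recommended path:

docker run -it --name regwork mcr.microsoft.com/windows/servercore:ltsc2022 cmd
# ...make the registry changes inside the container, exit, then:
docker commit regwork servercore-with-settings:1.0
docker run -it servercore-with-settings:1.0 cmd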
A better way to make a static change is to build a new image using a dockerfile. You'll need to read up on that (beyond the scope of this answer) but essentially you're writing a docker script that will make a change to an existing docker image and save it to a new image (done with docker build). The advantage of this is that it's cleaner, more repeatable, and each step of the build process is layered. Layers are advantageous for space savings. An image made with a windowsservercore base and application layer, then copied to another machine which already had a copy of the windowsservercore base, would only take up the additional space of the application layer.
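As a sketch of that Dockerfile approach for a static registry change (the .reg file name and base image tag are assumptions):

# escape=`
# Sketch only: settings.reg and the base image tag are placeholders.
FROM mcr.microsoft.com/windows/servercore:ltsc2022
COPY settings.reg C:\setup\settings.reg
# Import the registry settings into the image at build time
RUN reg import C:\setup\settings.reg
CMD ["cmd"]

Every container started from the resulting image begins with those registry values already in place.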
If you want to repeatedly create containers and apply consistent settings to them but without building a new image, you could do a couple things:
Mount a volume with a script and set the execution point of the container/image to run that script. The script could import the registry settings and then kick off whatever application you were originally using as the execution point; note that the script would need to run in a continuous loop. The MS SQL Developer image is a good example, https://github.com/Microsoft/mssql-docker/tree/master/windows/mssql-server-windows-developer. The script could also export the settings you want. I'm not sure if there's an easy way to detect "shutdown" and have it run at that point, but you could easily set it to run in a loop, writing continuously to the mounted volume.
Leverage a control system such as Docker Compose or Kubernetes to handle the setting for you (not sure offhand how practical this is for registry settings)
Have the application set the registry settings
Open ports to the container which allow remote management of the container (not recommended for security reasons)
Mount a volume where the registry files are located in the container (I'm not certain where these are or if this will work correctly)
TL;DR: You should make a new image using a dockerfile for static changes. For dynamic changes, you will probably need to use some clever scripting.

Intro to Docker for FreeBSD Jail User - How and should I start the container with systemd?

We're currently migrating our room server to the cloud for reliability, but our provider doesn't offer FreeBSD. Although I'm prepared to pay and upload a custom system image for deployment, I nonetheless want to learn how to start an application system instance using Docker.
In a FreeBSD jail, what I did was extract an entire base.txz directory hierarchy as the system content into /usr/jail/app, run pkg -r /usr/jail/app install apache24 php perl, and then configure /etc/jail.conf to start the /etc/rc script in the jail.
I followed the official FreeBSD Handbook, and this is generally what I've worked out so far.
But Docker is another world entirely.
To build a Docker image, there are two options: a) import from a tarball, or b) use a Dockerfile. The latter lets you specify a CMD, which is the default command to run, but:
Q1. Why isn't it available from a)?
Q2. Where is information like CMD and ENV stored? In the image? In the container?
Q3. How do I start a GNU/Linux system in a container? Do I just run systemd and let it figure out the rest from its configuration? Do I need to pass it some special arguments or environment variables?
You should think of a Docker container as a packaging around a single running daemon. The ideal Docker container runs one process and one process only. Systemd in particular is so heavyweight and invasive that it's actively difficult to run inside a Docker container; if you need multiple processes in a container then a lighter-weight init system like supervisord can work for you, but that's usually an exception more than a standard packaging.
Docker has an official tutorial on building and running custom images which is worth a read through; this is a pretty typical use case for Docker. In particular, best practice is to write a Dockerfile that describes how to build an image and check it into source control. Containers should avoid having persistent data if they can (storing everything in an external database is ideal); if you change an image, you need to delete and recreate any containers based on it. If local data is unavoidable then either Docker volumes or bind mounts will let you keep data "outside" the container.
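As a rough Dockerfile sketch of how the Apache/PHP part of that jail might translate (the image tag and content path are assumptions; Perl and any other daemons would be handled separately, one per container):

# Sketch only: image tag and content directory are placeholders.
FROM php:8.2-apache
# Copy the site content into Apache's document root
COPY ./htdocs/ /var/www/html/
EXPOSE 80
# The base image already runs Apache in the foreground as PID 1,
# so no systemd or rc-style init is needed inside the container.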
While Docker has several other ways to create containers and images, none of them are as reproducible. You should avoid the import, export, and commit commands; and you should only use save and load if you can't use or set up a Docker registry and are forced to move images between systems via a tar file.
On your specific questions:
Q1. I suspect the reason the non-docker-build paths for creating images don't easily let you specify things like CMD is just an implementation detail: if you look at the docker history of an image, you'll see the CMD winds up being its own layer. Don't worry about it and use a Dockerfile.
Q2. The default CMD, any set ENV variables, and other related metadata are stored in the image alongside the filesystem tree. (Once you launch a container, it has a normal Unix process tree, with the initial process being pid 1.)
Q3. You don't "start a system" in a container. Generally, you run one process or service per container and manage their lifecycles independently.
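A quick way to see where the metadata from Q2 lives (a sketch; php:8.2-apache is just an example image):

# CMD, ENV, and similar metadata are stored in the image configuration
docker inspect --format '{{.Config.Cmd}} {{.Config.Env}}' php:8.2-apache
# Each Dockerfile instruction, including CMD, shows up as a history entry
docker history php:8.2-apache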

Docker separation of concerns / services

I have a Laravel project which I am using with Docker. Currently I am using a single container to host all the services (Apache, MySQL, etc.) as well as the dependencies (project files, git, composer, etc.) I need for my project.
From what I am reading, the current best practice is to put each service into a separate container. So far this seems simple enough, since these services are designed to run continuously (Apache server, MySQL server). When I spin up these 'service' containers using -d, they remain running (visible in docker ps) since their main process runs continuously.
However, when I remove all the services from my project container, then there is no main process left to continuously run. This means my container immediately exits once spun up.
I have read about the 'hacks' of running other processes like tail -f /dev/null, sleep infinity, using interactive mode, installing supervisord (which I assume would end up watching no processes in such containers?), and even leaving the container to run in the foreground (taking up a terminal console...).
How do I network such a container to keep it running like the abstracted services, but detached, without these hacks? I cannot seem to find much information on this in the official Docker docs, nor can I find any examples from other projects (please link any).
EDIT: I am not talking about volumes / storage containers to store the data my project processes, but rather how I can use a container to hold the project itself and its dependencies that aren't services (project files, git, composer).
When you run the container, try running with the flags ...
docker run -dt ..... etc
You might even try .....
docker run -dti ..... etc
Let me know if this brings any joy. It has certainly worked for me on occasion.
I know you wanted to avoid hacks, but if the above fails then also add ...
CMD cat
to the end of your Dockerfile - it is a hack, but it is the cleanest hack :)
So after reading this a few times, along with Joachim Isaksson's comment, I finally get it. Tools don't need their containers to run continuously in order to be used. Proper separation of the project files, services (MySQL, Apache), and tools (git, composer) is done differently.
The project files are persisted within a data volume container. The services are networked since they expose ports. The tools live in their own containers which share the project files data volume - they are not networked. Logs, databases and other output can be persisted in different volumes.
When you wish to run one of these tools, you spin up the tool container, passing the relevant command to docker run. The tool then manipulates the data within the directory persisted in the shared volume. The container only runs for as long as that command takes to complete, and then it stops.
I don't know why this took me so long to grasp, but this is the aha moment for me.
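A rough sketch of that pattern with the official composer image (the volume name, paths, and image tags are placeholders):

# Create a named volume that holds the project files
docker volume create laravel_project
# One-off tool container: shares the volume, does its work, then exits
docker run --rm -v laravel_project:/app -w /app composer:2 install
# Long-running service container mounts the same volume
docker run -d --name app -v laravel_project:/var/www/html php:8.2-apache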

Docker-Compose: Initialize vs Run

I'm migrating an existing Rails application to Docker and docker-compose. There are a few scripts that need to run only when the containers are created, for instance a script that copies the prod db into a volume and indexes it in Elasticsearch.
From then on, when I start the containers locally for development, I only want to run the rails development server and not all the db init scripts. I could make two docker-compose files (say init and run) that are the same except for the command: option on the webapp container.
Is there a better way?
The base Docker system doesn't have an "on run" concept for custom scripts.
What you can do is one of these approaches:
Add a check to your script for whether it has already run; then it doesn't matter if you re-run it again and again (see the sketch after this list).
Integrate the db into the image and ship it with the data already loaded.
Make a two-part Docker setup: the first would be the image you know now, with a possible ONBUILD instruction so the second one would run the script. The second image inherits from the original one and runs the script, with or without the ONBUILD above. In docker-compose you would have a local build which would trigger the import while creating the local image.
Just an idea
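As a sketch of the first option, an init script that is safe to re-run because it checks a marker file first (file and script names are placeholders):

#!/bin/bash
# Sketch: marker file and script names are placeholders.
set -e
MARKER=/data/.initialized
if [ ! -f "$MARKER" ]; then
  ./scripts/import_prod_db.sh
  ./scripts/index_elasticsearch.sh
  touch "$MARKER"
fi
exec rails server -b 0.0.0.0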
You can use extends in your compose *.yml files.
See the extends documentation and examples.
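Depending on your Compose version, a rough sketch of extends for this case could look like the following (file, service, and script names are placeholders):

# docker-compose.yml
services:
  webapp:
    build: .
    command: rails server -b 0.0.0.0

# docker-compose.init.yml
services:
  webapp-init:
    extends:
      file: docker-compose.yml
      service: webapp
    command: ./scripts/init_db.sh

# Run once: docker compose -f docker-compose.init.yml run webapp-init
# Day to day: docker compose up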
