We have to deploy a dockerized web app into a closed external system (our client's server).
(Our image is made of gunicorn, nginx, and a Django/Python web app.)
There are a few options I have already considered:
option-1) using a docker registry: push the image into the registry, pull it from the client's system, and run docker-compose up with the pulled image.
option-2) docker save/load .tar files: docker save the image in our local dev environment, move the .tar file to the client's system, and run docker load on it there.
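For reference, option-2 would look roughly like this (image name, tag, and file names are just placeholders):

docker save -o webapp.tar ourcompany/webapp:1.2     # on the dev machine
# move webapp.tar to the client's server (USB drive, SCP over their internal network, etc.)
docker load -i webapp.tar                           # on the client's server
docker-compose up -d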
Our current approach:
we want to ship the source code inside a docker image (if possible).
we can't make our private docker registry public --yet-- (so option-1 is gone).
the client's servers are only accessible from their internal local network (they have no connection to any other external network).
we don't want to copy all the files every time we make an update to our app; what we want is to somehow detect the diff/changes in the docker image and copy/move/update only the changed parts of the app onto the client's server (so option-2 is gone too).
Is there a better way to deploy to the client's server given the approach and constraints explained above?
PS: I'm currently checking "docker commit": what we could do is docker load our base image onto the client's server, start a container with that image, and when we have an update we could just copy our changed files into that container's file system, then docker commit (in order to keep the changed version of the container). But the thing I don't like about that option is that we would need to keep track of the changes ourselves and then move the changed files (like updated .py or .html files) to the client's server.
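Roughly, that idea would be something like this (container and image names are placeholders):

# on the client's server, with a container named webapp already running from the base image
docker cp ./views.py webapp:/app/views.py
docker cp ./templates/index.html webapp:/app/templates/index.html
docker commit webapp ourcompany/webapp:updated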
Thanks in advance
I do not want login authentication on my Infinispan server started in a docker container.
We have done the following things to create the Infinispan server:
Take the official Infinispan base image (infinispan/server:10.1.8.Final) to create the Infinispan server.
During Infinispan server creation we need to copy the following two files into the container:
cache.xml to /data/sk/server/infinispan-server-10.1.8.Final/server/data
infinispan.xml to /data/sk/server/infinispan-server-10.1.8.Final/server/conf
cache.xml is copied successfully and its content is reflected correctly in the Infinispan server UI.
infinispan.xml does not persist.
During container creation, infinispan.xml (our file) is overridden by the same file that is present in the base image.
You need to copy your configuration to a different directory and pass it as an argument when starting the server. Details are in the Infinispan Images repository.
PS: not sure if it works in such an old image.
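Something along these lines; this is only a sketch, assuming the image forwards extra arguments to the server launcher and that your version supports the -c option, so please verify it against the Infinispan Images documentation:

docker run -v /path/on/host:/user-config \
    infinispan/server:10.1.8.Final -c /user-config/infinispan.xml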
Given a Windows application running in a Docker Windows container, where the running application makes changes to the Windows registry, is there a docker switch/command that allows those registry changes to be persisted, so that when the container is restarted the changed values are retained?
As a comparison, file changes can be persisted between container restarts by exposing mount points, e.g.:
docker volume create externalstore
docker run -v externalstore:\data microsoft/windowsservercore
What is the equivalent feature for Windows Registry?
I think you're after dynamic changes (each start and stop of the container contains different user keys you want to save for the next run), like a roaming profile, rather than a static set of registry settings, but I'm writing for the static case as it's an easier and more likely answer.
It's worth noting the distinction between a container and an image.
Images are static templates.
Containers are started from images and, while they can be stopped and restarted, you usually throw them away entirely after each execution in most enterprise designs, such as with Kubernetes.
If you wish to run a docker container like a VM (not generally recommended), stopping and starting it, your registry settings should persist between runs.
It's possible to convert a container to an image by using the docker commit command. In this method, you would start the container, make the needed changes, then commit the container to an image. New containers would be started from the new image. While this is possible, it's not really recommended for the same reason that cloning a machine or upgrading an OS is not. You will get extra artifacts (files, settings, logs) that you don't really want in the image. If this is done repeatedly, it'll end up like a bad photocopy.
A better way to make a static change is to build a new image using a dockerfile. You'll need to read up on that (beyond the scope of this answer) but essentially you're writing a docker script that will make a change to an existing docker image and save it to a new image (done with docker build). The advantage of this is that it's cleaner, more repeatable, and each step of the build process is layered. Layers are advantageous for space savings. An image made with a windowsservercore base and application layer, then copied to another machine which already had a copy of the windowsservercore base, would only take up the additional space of the application layer.
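For example, a minimal sketch of such a dockerfile (the registry key and value are placeholders; reg add is the standard Windows tool, run at build time so the change is baked into an image layer):

FROM microsoft/windowsservercore
# bake a static registry setting into the image
RUN reg add "HKLM\Software\MyApp" /v SomeSetting /t REG_SZ /d SomeValue /f
# COPY/install your application and set its ENTRYPOINT here as usual

It would then be built with docker build -t myapp:with-settings . and new containers started from that image.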
If you want to repeatedly create containers and apply consistent settings to them, but without building a new image, you could do a couple of things:
Mount a volume with a script and set the execution point of the container/image to run that script. The script could import the registry settings and then kick off whatever application you were originally using as the execution point; note that the script would need to be a continuous loop. The MS SQL Developer image is a good example: https://github.com/Microsoft/mssql-docker/tree/master/windows/mssql-server-windows-developer. The script could also export the settings you want. I'm not sure if there's an easy way to detect "shutdown" and have it run at that point, but you could easily set it to run in a loop, writing continuously to the mounted volume. (A rough sketch of such a script follows this list.)
Leverage a control system such as Docker Compose or Kubernetes to handle the setting for you (not sure offhand how practical this is for registry settings)
Have the application set the registry settings
Open ports to the container which allow remote management of the container (not recommended for security reasons)
Mount a volume where the registry files are located in the container (I'm not certain where these are or if this will work correctly)
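As referenced in the first option above, a rough PowerShell sketch of such an entrypoint script (the registry key, file path, and application executable are placeholder assumptions):

# entrypoint.ps1 - restore registry settings from the mounted volume, start the app, keep exporting
$regFile = 'C:\data\settings.reg'            # lives on the mounted volume
if (Test-Path $regFile) {
    reg import $regFile                      # restore settings saved by a previous run
}
Start-Process 'C:\app\MyApp.exe'             # kick off the real application
while ($true) {                              # keep the container alive and snapshot settings periodically
    Start-Sleep -Seconds 60
    reg export 'HKLM\Software\MyApp' $regFile /y
}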
TL;DR: You should make a new image using a dockerfile for static changes. For dynamic changes, you will probably need to use some clever scripting.
I am new to docker. What I want is a pgAdmin container which I can pull and always have my configs and connections up to date. I was not really sure how to do that, but can I have a volume which is always shared, for example between my Windows PC at home and the one at work? I couldn't find a good tutorial for that and don't know if it makes sense. Let's say my computer gets stolen: I just want to install Docker, pull my images, and have everything back.
What about a shared directory using Dropbox? As far as I know, local Dropbox directories are always synced with the actual Dropbox account, which means you can have the config up to date on all of your devices.
Alternatively, you can save the configuration - as long as it does not contain sensitive data - in a git repository, which you can clone and then start using. Both cases can be used as volumes in docker.
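For example, with the official dpage/pgadmin4 image (the Dropbox path is a placeholder to adjust for your machine; /var/lib/pgadmin is where pgAdmin keeps its config and connection data, and the email/password variables are required by that image):

docker run -d -p 8080:80 \
    -e PGADMIN_DEFAULT_EMAIL=me@example.com \
    -e PGADMIN_DEFAULT_PASSWORD=changeme \
    -v ~/Dropbox/pgadmin-data:/var/lib/pgadmin \
    dpage/pgadmin4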
That's not something you can do with Docker itself. You can only push images to Docker Hub, and images do not contain the information that you added to a container during an execution.
What you could do is use a backup routine to S3, for example, and sync your 'config and connections' between the docker container running on your home PC and the one at work.
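A rough sketch of such a routine with the AWS CLI (bucket name and local path are placeholders), assuming the pgAdmin data directory is bind-mounted on each machine:

# on the machine where you made changes
aws s3 sync ~/pgadmin-data s3://my-backup-bucket/pgadmin-data
# on the other machine, before starting the container
aws s3 sync s3://my-backup-bucket/pgadmin-data ~/pgadmin-data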
I am planning a setup where the docker containers use a remote volume - a volume that is ssh-ed to another machine and is being read all the time.
Let's say we have 5 containers using that remote volume. In my understanding, docker is ssh-ed to the remote machine and constantly reads a certain directory (with about 100 files, not more than a few MB).
Presumably that constant reading will put some load on the remote machine. Will that load be significant, or can it be negligible? There are php-fpm and Apache2 on the remote machine; will the constant reading slow down that web server? Also, how often does the volume refresh the files?
Sincerely.
OK, after some testing:
I created a remote volume with the vieux/sshfs driver (roughly the commands shown after this list).
Created an ubuntu container with the volume mounted under a certain folder.
Then tailed a txt file from within the container.
Wrote to that txt file from the remote machine (the one that contains the physical folder).
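For reference, a minimal sketch of that setup (host, user, path, and password are placeholders):

docker plugin install vieux/sshfs
docker volume create -d vieux/sshfs \
    -o sshcmd=user@remote-host:/shared/folder \
    -o password=secret \
    sshvolume
docker run -it -v sshvolume:/mnt/shared ubuntu bash
# inside the container:
tail -f /mnt/shared/thefile.txt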
I found out that if we write to the file continuously (like echo "whatever" >> thefile.txt), the changes appear all at once after a few seconds, not one by one as they are introduced. Also, if I print or list the files in the mounted directory, the response is instant. This makes me think that Docker makes a local copy of the folder ssh-ed into the volume and refreshes it every 5 seconds or so. Basically negligible load after the folder is copied once.
Also, when writing from the container to the mounted folder, the changes to the file are reflected almost instantly (allowing for some latency), which makes me think that the daemon propagates write changes immediately.
In conclusion: reading a remote folder puts negligible load on the remote machine. The plan is to use such a setup in a production environment, so we don't have to pull changes in two different places (the prod server and the machine which shares the (local) volume between the containers).
If there is anyone who can confirm my findings, that would be great.
Sincerely
Question about Docker best practices/intended use:
I have docker working, and it's cool. I run a PaaS company, and my intent is possibly to use docker to run individual instances of our service for a given user.
So now I have an image that I've created that contains all the stuff for our service... and I can run it. But once I want to set it up for a specific user, there's a set of config files that I will need to modify for each user's instance.
So... the question is: should that be part of my image filesystem, meaning I then create a new image (based on my current image, but with their specific config files inside it) for each user?
Or should I put those on the host filesystem in a set of directories, and map the host filesystem config files into the correct running container for each user (hence, having only one image shared among all users)?
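In other words, for the second option I would be doing something like this for each user (paths and image name are placeholders):

docker run -d --name service-acme \
    -v /srv/configs/acme:/app/config:ro \
    mycompany/service:latest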
Modern PaaS systems favour building an image for each customer, creating versioned copies of both the software and the configuration. This follows the "Build, release, run" recommendation of the 12-factor app website:
http://12factor.net/
A Docker-based example is Deis. It uses Heroku buildpacks to customize the software application environment, and the environment settings are also baked into a docker image. At run-time these images are run by Chef on each application server.
This approach works well with Docker, because images are easy to build. The challenge, I think, is managing the docker images, something the docker registry is designed to support.
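For illustration, a minimal sketch of baking a per-customer config into its own versioned image (names, paths, and tags are placeholders):

# Dockerfile.acme
FROM mycompany/service:1.4
COPY customers/acme/config.yml /app/config.yml

Each customer image would then be built and tagged separately, e.g. docker build -f Dockerfile.acme -t mycompany/service-acme:1.4 .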