I'm new to Docker and am currently working on dockerizing a simple ELK Stack application at work. I've seen several tutorials on how to do this, but my biggest issue is that I can't use just any existing Docker image, as this is corporate code. So, from my understanding, I'll need to dockerize/create three separate images of ELK from artifacts we currently have available internally. My approach so far has been to get the RPMs (we're on RHEL7) and write a Dockerfile to install them, expose the ports, etc.
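Roughly what I mean, for the Elasticsearch piece (a sketch; the base image and RPM filename are placeholders for our internal artifacts):

    # base on an internally approved RHEL7 image
    FROM internal-registry/rhel7
    # install Elasticsearch from the RPM we already have in-house
    COPY elasticsearch-6.8.0.rpm /tmp/
    RUN yum localinstall -y /tmp/elasticsearch-6.8.0.rpm && yum clean all
    # the RPM creates an elasticsearch user; Elasticsearch refuses to run as root
    USER elasticsearch
    # default Elasticsearch HTTP and transport ports
    EXPOSE 9200 9300
    CMD ["/usr/share/elasticsearch/bin/elasticsearch"]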
Reason for my approach: I am working behind a corporate firewall and proxy and don't know whether downloading an official Docker image is possible, or whether it would be compliant.
So far I've been unsuccessful; does anyone have experience doing this?
Thanks in advance!
It seems your environment can't reach the public Docker registry to download images, right? If you just need to get the ELK-related images into your environment, see How to copy Docker images from one host to another without using a repository for how to copy the images over.
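For example (the tar filename and image tags here are placeholders), on a machine that can reach the registry:

    docker save -o elk-images.tar elasticsearch:6.8.0 logstash:6.8.0 kibana:6.8.0

Then copy elk-images.tar behind the firewall and, on each target host:

    docker load -i elk-images.tar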
Related
I'm new to docker and have been dabbling with it for the past few days. I've managed to successfully use docker-compose for a multi-container deployment involving an app server (flask + gunicorn) and web server (nginx).
Now, I'd like to recreate the deployment on an offline machine. After doing some research, it seems most people mention using docker save and docker load to transfer over the base images. However, I'm wondering whether it's possible to recreate the deployment from the image created by docker-compose build? The reason is that I would like to skip the entire process of wheeling my Python package dependencies for offline use, which I would have to do if I started from the base images.
I've tried saving that particular image (the output of docker-compose build) and loading it on the offline machine, then tried docker run and docker-compose up, but neither seems to work. I would like to check with the community whether this method is even possible, and if so, what's the right way to go about it?
Thanks!
To solve my issue, I ended up making an image of each individual container after pip install, then using docker-compose.yml simply to spin them up. As David mentioned, it doesn't seem possible to spin up the containers from a single image output by docker-compose build.
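For anyone hitting the same thing, the flow looked roughly like this (a sketch; project and image names are placeholders). On the online machine, after docker-compose build:

    # compose names images <project>_<service> by default
    docker save -o myproject.tar myproject_app myproject_nginx

Copy the tar over and run docker load -i myproject.tar on the offline machine, whose docker-compose.yml then references the loaded images instead of build contexts:

    version: "3"
    services:
      app:
        image: myproject_app
      nginx:
        image: myproject_nginx
        ports:
          - "80:80"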
I'm trying to set up the deployment of Docker images to a Linux server (Debian 10).
I've looked around the internet for an easy way to deploy images from a Docker repository onto a server automatically.
I know that Docker Hub has webhooks.
Also, there is an option to use Kubernetes, but it seems to be a bit too much for a simple application running on one server.
What I am looking for is a way for the server to detect that a Docker image has been updated, so that it downloads and runs the newest version.
Currently, I have set up automatic builds of Docker images on Azure DevOps that are pushed to a private repository on Docker Hub (I will most likely move to a privately hosted Nexus repository).
I am looking for suggestions on how to do it with relatively low complexity (e.g. should I use docker-compose for it, or some sort of bash script on the server?).
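The simplest thing I can imagine is a cron-driven script along these lines (a sketch; the compose project path is a placeholder):

    #!/bin/sh
    # naive poll-and-redeploy: pull newer images and recreate only the
    # containers whose image actually changed (a no-op when nothing is new)
    cd /opt/myapp || exit 1
    docker-compose pull
    docker-compose up -d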
The closest thing to what I am looking for is this solution: How to auto deploy Docker Image on own server with GitLab?
I would like to know if this is the recommended way to do it, or whether there are other, possibly easier ways to approach it.
I found this project that looks good as a solution for my case.
https://containrrr.github.io/watchtower/
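A minimal way to run it, based on the Watchtower docs (the 300-second poll interval is an arbitrary choice; a private repository additionally needs your Docker credentials mounted into the container):

    docker run -d \
      --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      containrrr/watchtower --interval 300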
I have made a fairly simple Golang server and I need to deploy it to a DigitalOcean droplet.
I know that there can be issues with cross-building Go apps if they use cgo, so to avoid thinking about it in the future I decided to use Docker, so that my app will always be built and run in the same environment.
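For reference, this is roughly the Dockerfile shape I have in mind (a sketch; the Go version, port, and binary name are placeholders):

    # build stage: a static binary sidesteps cgo/libc mismatches
    FROM golang:1.21 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /server .

    # run stage: ship only the binary
    FROM alpine:3.19
    COPY --from=build /server /server
    EXPOSE 8080
    CMD ["/server"]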
The first thing I don't get is about developing the app. When I create a Dockerfile, I use commands to add files from my project directory into the newly created image, then run a container from that image. But what if I edit my code? As I understand it, I must stop the container, remove the image, and then build it again. That seems tricky for such a common situation, or am I doing things wrong?
Second, I have created a Docker droplet on DigitalOcean. What's the way to deploy my app?
Do I have to push my image to some Docker repository and pull it onto the droplet?
Or can I upload it directly?
Or do I have to scp my source code to the droplet and run the same process as on my local machine, building the image and then running a container?
But what if I edit my code? As I understand it, I must stop the container, remove the image, and then build it again. That seems tricky for such a common situation, or am I doing things wrong?
Don't delete the image, just rebuild it. Thanks to layer caching, it will be much faster than the initial build. Also, why is it tricky? It's just one or two commands, and you can create a bash or .bat script if it gets annoying.
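Something along these lines (a sketch; image and container names are placeholders):

    # rebuild (cached layers make this fast) and swap the container
    docker build -t myapp .
    docker stop myapp && docker rm myapp
    docker run -d --name myapp -p 8080:8080 myapp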
I have created a Docker droplet on DigitalOcean. What's the way to deploy my app?
All three options are possibilities. For the second one, you would have to set up your droplet as a Docker registry, which might be more than you need. Using Docker Hub isn't bad. You could also just build the image on your server. I recommend using Docker Hub for its ease, combined with Watchtower set up on your server to restart your web app on new image pushes.
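The Docker Hub flow would look roughly like this (a sketch; the username and image name are placeholders, and docker login is assumed on both ends for a private repo):

    # on your machine
    docker build -t yourname/myapp:latest .
    docker push yourname/myapp:latest
    # on the droplet
    docker pull yourname/myapp:latest
    docker run -d --name myapp -p 80:8080 yourname/myapp:latest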
Edit: the above advice was for a VM, not a Docker droplet. I'm not familiar with DO, but this article should help:
https://blog.machinebox.io/deploy-machine-box-in-digital-ocean-385265fbeafd
I am very new to Docker and am currently trying to get my head around whether there is any best-practice guide for updating software that runs inside a Docker container in a very large distributed environment. I've already found a couple of posts about updating a MySQL database in Docker, etc. That gives a good hint for any software that stores data, but what if you want to update other parts, or your own software packages or services, that are distributed and used by several other Docker images through docker-compose?
Is there someone with real-life experience doing this in such an environment who can help me and other newbies understand the best practices in Docker, if there are any?
Thanks for your help!
You never update software in a running container; you pull down a new version from the hub. If we assume you're using the latest tag of your image (which is a bad idea, always pin your versions) and it's one of the official library images, or a publicly available image that uses automated builds, you'll get the latest version of the image when you pull.
This assumes you've also separated the data out of your container, either as a host volume or using the data-container pattern.
The container should be considered immutable; if you change its state, it's no longer a true version of the image.
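Concretely, an update then looks something like this (a sketch; the image, tag, and volume path are placeholders):

    # pull the new, pinned version
    docker pull mysql:8.0.36
    # replace the container; the data survives in the host volume
    docker stop db && docker rm db
    docker run -d --name db \
      -v /srv/mysql-data:/var/lib/mysql \
      mysql:8.0.36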
I have two hosts, and Docker is installed on each.
As we know, each Docker daemon stores its images in the local /var/lib/docker directory.
So if I want to use some image, such as ubuntu, I must run docker pull on every host to download it from the internet.
I think it's slow.
Can I store the images in a shared disk array? Then one host pulls the image once, and every host with access to the shared disk can use the image directly.
Is that possible, and is it good practice? Why isn't Docker designed like this?
It might require hacking Docker's source code to implement this.
Have you looked at this article?
Dockerizing an Apt-Cacher-ng Service
http://docs.docker.com/examples/apt-cacher-ng/
An extract:
This container makes the second download of any package almost instant.
At least one node will be very fast, and I think it should be possible to tell the second node to use the cache of the first node.
Edit : you can run your own registry, with a command similar to
sudo docker run -p 5000:5000 registry
see
https://github.com/docker/docker-registry
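Other hosts can then reuse images through that registry, roughly like this (a sketch; registry-host and the image tag are placeholders, and a plain-HTTP registry must be whitelisted via the daemon's insecure-registries setting):

    # on the host that already has the image
    docker tag ubuntu:20.04 registry-host:5000/ubuntu:20.04
    docker push registry-host:5000/ubuntu:20.04
    # on any other host that can reach registry-host
    docker pull registry-host:5000/ubuntu:20.04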
What you are trying to do is not supposed to work, as explained by cpuguy83 in this github/docker issue.
Indeed:
The underlying storage driver would need to synchronize access.
Sharing /var/lib/docker is far from enough and won't work!
According to docs.docker.com/registry:
You should use the Registry if you want to:
tightly control where your images are being stored
fully own your images distribution pipeline
integrate image storage and distribution tightly into your in-house development workflow
So I guess this is your best option to work this out (you probably already had that info; I just add it here to fill in the details).
Good luck!
Update 2016-01-25: the Docker mirror feature is deprecated, so this answer no longer applies; it is left here for reference.
Old info
What you need is the mirror mode for the Docker registry; see https://docs.docker.com/v1.6/articles/registry_mirror/
It is supported directly by docker-registry.
Of course, you can also run the public mirror service locally.
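At the time, the setup from that article looked roughly like this (a sketch; registry-host is a placeholder, and the environment variables are those documented for the old v1 docker-registry):

    # run the registry in mirror (pull-through cache) mode
    docker run -d -p 5000:5000 \
      -e STANDALONE=false \
      -e MIRROR_SOURCE=https://registry-1.docker.io \
      -e MIRROR_SOURCE_INDEX=https://index.docker.io \
      registry
    # point each host's daemon at the mirror
    docker daemon --registry-mirror=http://registry-host:5000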