I am new to Docker. What I want is a pgAdmin container that I can pull and that always has my configs and connections up to date. I am not really sure how to do that: can I have a volume that is always shared, for example between my Windows PC at home and the one at work? I couldn't find a good tutorial for that and don't know if it even makes sense. Let's say my computer gets stolen: I just want to install Docker, pull my images, and be up and running again.
What about a shared directory using Dropbox? As far as I know, the local Dropbox directories are always synced with the Dropbox account, which means the config stays up to date on all of your devices.
Alternatively you can keep the configuration - as long as it does not contain sensitive data - in a Git repository, which you can then clone and start using. In both cases the directory can be mounted as a volume in Docker.
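A minimal sketch of that idea, assuming the dpage/pgadmin4 image and that it keeps its data under /var/lib/pgadmin (check the image's docs for the exact path); the Dropbox path, port and credentials are placeholders:
# bind-mount a synced Dropbox folder as pgAdmin's data directory
docker run -d -p 8080:80 -e PGADMIN_DEFAULT_EMAIL=you@example.com -e PGADMIN_DEFAULT_PASSWORD=changeme -v "C:\Users\you\Dropbox\pgadmin:/var/lib/pgadmin" dpage/pgadmin4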
That's not something you can do with Docker itself. You can only push images to Docker Hub, and images do not contain data that you added to a container while it was running.
What you could do is use a backup routine to S3, for example, to sync your config and connections between the Docker container running on your home PC and the one at work.
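A rough sketch of such a routine, assuming a named volume called pgadmin_data and an S3 bucket of yours (names and paths are placeholders; run from a Unix-style shell and adjust paths on Windows):
# dump the volume contents to a tarball using a throwaway container
docker run --rm -v pgadmin_data:/data -v "$(pwd):/backup" alpine tar czf /backup/pgadmin-backup.tar.gz -C /data .
# push the tarball to S3 (requires a configured AWS CLI)
aws s3 cp pgadmin-backup.tar.gz s3://my-backup-bucket/pgadmin-backup.tar.gz
# on the other machine: download it and restore into the local volume
aws s3 cp s3://my-backup-bucket/pgadmin-backup.tar.gz .
docker run --rm -v pgadmin_data:/data -v "$(pwd):/backup" alpine tar xzf /backup/pgadmin-backup.tar.gz -C /data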
Related
I have taken some time to create a useful Docker volume for use at work. It has a restored backup of one of our software databases (SQL Server) on it, and I use it for testing/debugging by just attaching it to whatever Linux SQL Server container I feel like running at the time.
When I make useful Docker images at work, I share them with our team using either the Azure Container Registry or the AWS Elastic Container Registry. If there's a Dockerfile I've made as part of a solution, I can store it in our Git repo for others to access.
But what about volumes? Is there a way to share these with colleagues so they don't need to go through the process I went through to build the volume in the first place? So if I've got this 'databasevolume', is there a way to source control it? Or share it as a file with other Docker users on my team? I'm just looking to save them the time of creating the volume, downloading the .bak file from its storage location, restoring it, and so on.
The short answer is that there is no built-in Docker functionality to export the contents of a Docker volume, and docker export explicitly does not include the volumes associated with the container. What you can do is back up, restore, or migrate data volumes yourself.
Note: if you're backing up a database, I'd suggest using the appropriate tools for that database.
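For the general case, here is a sketch of the commonly used backup/restore pattern (tar via a temporary container), with databasevolume standing in for your volume name:
# archive the volume into the current directory via a temporary container
docker run --rm -v databasevolume:/volume -v "$(pwd):/backup" alpine tar czf /backup/databasevolume.tar.gz -C /volume .
# on a colleague's machine: create the volume and unpack the archive into it
docker volume create databasevolume
docker run --rm -v databasevolume:/volume -v "$(pwd):/backup" alpine tar xzf /backup/databasevolume.tar.gz -C /volume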
We have to deploy a dockerized web app into a closed external system (our client's server).
(Our image is made up of gunicorn, nginx, and a Django/Python web app.)
There are a few options I have already considered:
option-1) using a Docker registry: push the image to the registry, pull it from the client's system, and run docker-compose up with the pulled image
option-2) docker save/load with .tar files: docker save the image in the local dev environment, move the .tar file onto the client's system, and docker load it there (commands sketched below)
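For reference, option-2 in commands would look roughly like this (image name and tag are placeholders):
# on the dev machine: write the image to a tarball
docker save -o webapp.tar mycompany/webapp:1.0
# transfer webapp.tar to the client's server by whatever channel is allowed, then:
docker load -i webapp.tar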
Our current approach:
we want to ship the source code inside the Docker image (if possible)
we can't make our private Docker registry public --yet-- (so option-1 is gone)
the client's servers are only accessible from their internal local network (no connection to any other external network)
we don't want to copy all the files every time we update our app; what we want is to somehow detect the diff/changes in the Docker image and copy/move/update only the changed parts of the app on the client's server (so option-2 is gone too)
Is there any better way to deploy to the client's server given the constraints explained above?
PS: I'm currently looking at docker commit. What we could do is docker load our base image onto the client's server, start a container from that image, and when we have an update just copy our changed files into that container's filesystem, then docker commit (in order to keep the changed version of the container); roughly the flow sketched below. The thing I don't like about that option is that we would need to keep track of the changes ourselves and move the changed files (like updated .py or .html files) to the client's server by hand.
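(Container, image and path names below are placeholders.)
# copy the changed application files into the running container
docker cp ./app/. client_app:/srv/app/
# snapshot the container's current filesystem as a new image version
docker commit client_app mycompany/webapp:1.1
# recreate the container from the committed image so the change survives a restart
docker stop client_app && docker rm client_app
docker run -d --name client_app mycompany/webapp:1.1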
Thanks in advance
I am looking at Docker to share and contain applications. After reading several articles on the subject, I can't figure out what the steps would be to use a Docker container for actual development. Is that even an accepted practice?
My thought process goes like this:
Create Dockerfile
Share Dockerfile
Users A and B download the Dockerfile
Users A and B build/install the images
Users A and B are able to make changes to their local containers
Users A and B submit changes
From the articles I have been reading, Docker seems to be only for sharing applications, not for continuous development the way I am describing. The closest thing I can picture is making changes outside the containers and committing to a repo outside the containers; the containers would then update their local copy of the repo and re-run the application internally, but you would never develop on the container itself.
Using Docker for the development process is not only possible, but handy and convenient in my opinion.
What you might have missed during your study of the Docker ecosystem is the concept of volumes.
With volumes you can bind mount a directory of your host (the developer's computer) into the container.
You may want to use volumes to share the application folder, making it possible for the devs to work on their local copies normally, but have their application served by a Docker container.
A link to get started: https://docs.docker.com/engine/admin/volumes/volumes/
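A minimal sketch of that setup, assuming a Python web app whose code lives in ./src on the host (image, paths, port and entry point are placeholders):
# mount the host source directory into the container so edits are picked up immediately
docker run --rm -it -p 8000:8000 -v "$(pwd)/src:/app" -w /app python:3.11 python app.py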
I have an AWS EC2 account, where I am running a couple of web apps on nginx. I don't know much about Docker, except that it is a container technology that takes a snapshot of a filesystem. Now, for some reason, I am forced to switch accounts, so I have opened a new AWS EC2 account. Can I use Docker to set up a container on my old virtual system, then take an image and deploy it on my new system? This way I could avoid the headache of having to install many components and configure nginx and all the applications on the new system again. Can I do that? If so, how?
According to Docker best practices and its CaaS model, images are not supposed to "virtualize" a whole lot of services; quite the contrary. Docker does not aim at taking a snapshot of the system (it uses filesystem overlays to create images, but these are not snapshots).
So basically, if your (still unclear) question is "Can I virtualize my whole system into one image?", the answer is no.
What you can do is use one image per service (you'll find everything you need on the Docker Hub) to keep your new system clean.
Another solution would be to list all the Linux packages installed on your old system, install them on the new one, and copy over all the configuration files.
I have two hosts, and Docker is installed on each.
As we know, each Docker daemon stores its images in the local /var/lib/docker directory.
So if I want to use some image, such as ubuntu, I must execute docker pull to download it from the internet on each host.
I think that's slow.
Can I store the images on a shared disk array? Then one host could pull the image once, allowing every host with access to the shared disk to use the image directly.
Is that possible, or good practice? Why is Docker not designed like this?
It might require hacking Docker's source code to implement this.
Have you looked at this article?
Dockerizing an Apt-Cacher-ng Service
http://docs.docker.com/examples/apt-cacher-ng/
Extract:
This container makes the second download of any package almost instant.
At least one node will be very fast, and I think it should be possible to tell the second node to use the cache of the first node.
Edit: you can run your own registry, with a command similar to
sudo docker run -p 5000:5000 registry
see
https://github.com/docker/docker-registry
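Once the registry is running, the other host can pull from it instead of Docker Hub; roughly (the registry hostname is a placeholder, and a plain-HTTP registry may need to be whitelisted as an insecure registry in the daemon settings):
# on the host that already has the image: tag it for the local registry and push it
docker tag ubuntu registryhost:5000/ubuntu
docker push registryhost:5000/ubuntu
# on the second host: pull from the shared registry
docker pull registryhost:5000/ubuntu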
What you are trying to do is not supposed to work, as explained by cpuguy83 in this github/docker issue.
Indeed:
The underlying storage driver would need to synchronize access.
Sharing /var/lib/docker is far from enough and won't work!
According to the doc.docker.com/registry:
You should use the Registry if you want to:
tightly control where your images are being stored
fully own your images distribution pipeline
integrate image storage and distribution tightly into your in-house development workflow
So I guess that this is the (/your) best option to work this out (you probably had that info already -- I just add it here to round out the details; a minimal example follows).
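If you go that route, the registry itself can keep its data on your shared disk array; a sketch assuming the array is mounted at /mnt/shared (the path is a placeholder):
# run a v2 registry whose storage lives on the shared mount
docker run -d -p 5000:5000 --name registry -v /mnt/shared/registry:/var/lib/registry registry:2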
Good luck!
Update (2016-01-25): the docker registry mirror feature is deprecated,
so this answer is no longer applicable; it is left here for reference.
Old info
What you need is the mirror mode of the docker registry, see https://docs.docker.com/v1.6/articles/registry_mirror/
It is supported directly by docker-registry.
Of course, you can also use a public mirror service locally.
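For what it's worth, on current Docker versions the rough equivalent is to point the daemon at a mirror via its configuration; a sketch assuming a mirror reachable at http://registry-mirror:5000 (the address is a placeholder):
# write the mirror setting to /etc/docker/daemon.json (merge by hand if the file already has other settings), then restart the daemon on a systemd-based host
echo '{ "registry-mirrors": ["http://registry-mirror:5000"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker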