Is it possible to pull an image from another docker machine without having to set up a Docker registry?
I have 2 docker machines for development, and I would like to deploy an image on the second docker machine that I have built with the first one.
Is this possible?
If you have created your docker servers using docker-machine, then you can stream the image between the two Docker daemons using remote access to each server. Note that docker save/load preserves the image itself; docker export/import would only give you a flattened container filesystem:
docker $(docker-machine config server1) save exampleimage:1.0 | docker $(docker-machine config server2) load
But it would often be a lot simpler to just rebuild the image on the second server, using the same Dockerfile.
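For example, assuming the Dockerfile for exampleimage is in your current directory (an assumption for illustration), the rebuild against the second machine could look like:
docker $(docker-machine config server2) build -t exampleimage:1.0 .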
We have Ansible configured to deploy our various applications in an IIS environment. I am trying to create a Docker image of the deployed applications so that I can just start up containers as needed for testing and otherwise.
I am planning to build on the Windows IIS image, start the container on Azure, run our Ansible to install everything on the server, then save the container as an image.
I cannot find any documentation on how I can docker commit the container image into our private Azure container registry.
Is it possible?
If you have an existing Docker registry in Azure, you should be able to use the az acr login --name myregistry command to authenticate to it: https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli. Make sure you have a registry created for the container image you want to push up.
Next, you can run the container in azure and do all the installation you want. SSH or RDP into the instance in Azure that is running this container. Now run docker ps and find the container id for the correct container. Next, use docker commit <container id> myregistry.azurecr.io/samples/nginx.
Then, just docker push myregistry.azurecr.io/samples/nginx
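Putting the steps together, the whole sequence might look like this (the registry name and repository path are just the placeholders used above, so substitute your own):
az acr login --name myregistry
docker ps                                                  # find the id of the container you configured with Ansible
docker commit <container id> myregistry.azurecr.io/samples/nginx
docker push myregistry.azurecr.io/samples/nginx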
Also, not sure what your use case is, but starting a container in order to modify and commit it that way seems like an atypical use case for Docker, since the build isn't reproducible via a Dockerfile. It looks like there are ways to replace Dockerfiles with Ansible playbooks using something like ansible-container (https://docs.ansible.com/ansible-container/), so you might want to take a look at that (I've never used this tool).
I have postgres and odoo containers in Docker; Docker is installed on an Ubuntu 18.04 machine.
I need to run the odoo and postgres containers on another machine. The problem is how to back up and restore the images and containers (postgres version 9.6 and odoo version 11) on the other laptop?
In order to export the images your containers use, you should use the docker save command (it works on image names, not container names):
docker save odoo:11 | gzip > odoo.gz
docker save postgres:9.6 | gzip > db.gz
where odoo:11 and postgres:9.6 are the names of the images that you want to export (check docker images for the exact names and tags on your machine).
Then copy the odoo.gz and db.gz files to the other laptop and import them using the docker load command:
docker load < db.gz
docker load < odoo.gz
docker load will recreate the odoo:11 and postgres:9.6 images, from which you should then create containers - use the same commands as you used to run those containers on your initial laptop.
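For example, if you originally started them along the lines of the official odoo image instructions (the port, link, and environment values below are assumptions based on that page, not your exact setup), the commands on the new laptop might look like:
docker run -d -v volume-pg:/var/lib/postgresql/ -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres --name db postgres:9.6
docker run -d -p 8069:8069 --name odoo --link db:db odoo:11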
Please mind that docker save exports only the image, not the mounted volumes, and (as you mentioned in the comment) you are using a volume:
-v volume-pg:/var/lib/postgresql/
There is no single built-in command to export data from a Docker volume that I am aware of; you can find the suggested approach in the official Docker documentation for volume management.
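A common workaround (a sketch only; the archive name and paths are examples) is to tar up the volume contents with a throwaway container, copy the archive, and unpack it into a freshly created volume on the new laptop:
# on the old laptop: archive the contents of volume-pg into the current directory
docker run --rm -v volume-pg:/volume -v $(pwd):/backup ubuntu tar czf /backup/volume-pg.tar.gz -C /volume .
# copy volume-pg.tar.gz to the new laptop, then restore it there:
docker volume create volume-pg
docker run --rm -v volume-pg:/volume -v $(pwd):/backup ubuntu tar xzf /backup/volume-pg.tar.gz -C /volume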
You can find more details on docker save and docker load in the official Docker documentation:
docker save
docker load
PS. It looks like you are running independent Docker containers instead of using docker-compose, which would manage the whole setup for you and is a much better approach; you can find a sample docker-compose.yml file on the Docker Hub page of odoo. You can read about docker-compose here. Please be aware that docker-compose will not solve your volume migration issue; you still have to migrate the Docker volumes manually as I suggested above.
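For reference, a minimal compose file for this stack could look like the sketch below (the versions, port, credentials, and volume mount are assumptions based on the odoo Docker Hub example and your comment, so adjust them to your setup):
version: '3.1'
services:
  web:
    image: odoo:11
    depends_on:
      - db
    ports:
      - "8069:8069"
  db:
    image: postgres:9.6
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=odoo
      - POSTGRES_PASSWORD=odoo
    volumes:
      - volume-pg:/var/lib/postgresql/
volumes:
  volume-pg: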
I have followed this guide from Google documentation in order to be able to push a custom Docker image to Google Container Registry and then be able to start a new GCE instance with this image. At first I wanted to try using an anaconda3 public image from docker hub without any modification (in order to test).
So here are the steps I have done so far after installing gcloud and docker:
gcloud auth configure-docker -> configured docker with my gcloud credentials
docker pull continuumio/anaconda3 -> pulled the public image
docker tag continuumio/anaconda3 eu.gcr.io/my-project-id/anaconda3 -> tagged the local image with the registry name as specified in the doc
docker push eu.gcr.io/my-project-id/anaconda3 -> pushed the image to GCR
Good! I am now able to see my image through the GCR interface, and I am also able to deploy it with GCE. I chose to deploy it on an f1-micro instance, Container-Optimized OS 67-10575.62.0 stable, 10 GB boot disk, Allow HTTP traffic, etc.
But when I connect with SSH to the freshly created VM instance, I can't find the anaconda3 libraries (which are supposed to be in /opt/conda). Instead, I can see an /opt/google directory, which makes me think that the image has not been deployed correctly and GCE is using a default image...
So I tried to check if the image was pushed correctly in GCR, so I decided to delete my local image and pull it once again from GCR:
docker rmi -f eu.gcr.io/my-project-id/anaconda3
docker pull eu.gcr.io/my-project-id/anaconda3:latest
I run the image:
docker run -t -i eu.gcr.io/my-project-id/anaconda3
and I can see that everything is fine: I have anaconda3 installed correctly inside /opt/conda with all the tools needed (Pandas, Numpy, Jupyter notebook, etc.).
I tried to find people with the same problem as me without any success... Maybe I have done something wrong in my process?
Thanks!
TL;DR My problem is that I have pushed an anaconda3 image to Google Container Registry, but when I launch a VM instance with this image, I do not have anaconda on it.
It's normal that you can't find the anaconda libraries installed directly on the GCE instance.
Actually, when you choose to deploy a container image on a GCE VM instance, a Docker container is started from the image you provide (in your example, eu.gcr.io/my-project-id/anaconda3). The libraries are not installed on the host, but rather inside that container (run docker ps to see it; normally it has the same name as your VM instance). If you run something like:
docker exec -it <docker_container_name> ls /opt/conda
Then you should see the anaconda libraries, only existing inside the container.
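Concretely, from an SSH session on the VM you could check it like this (the container name is whatever docker ps reports, typically the same as your instance name):
ls /opt/conda                                            # fails on the host: No such file or directory
docker ps --format '{{.Names}}\t{{.Image}}'              # locate the container started from your image
docker exec -it <docker_container_name> ls /opt/conda    # succeeds: the libraries live inside the container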
When you run docker run -t -i eu.gcr.io/my-project-id/anaconda3, you're actually starting the container and running an interactive bash session inside that container (see the CMD here). That's why you can find the anaconda libraries: you are inside the container!
Containerization software (Docker here) provides isolation between your host and your containers. I suggest you read the documentation about containerization, Docker, and how to run containers on Container-Optimized OS.
I have two servers:
Server A: Build server with Jenkins and Docker installed.
Server B: Production server with Docker installed.
I want to build a Docker image in Server A, and then run the corresponding container in Server B. The question is then:
What's the recommended way of running a container in Server B from Server A, once Jenkins is done with the docker build? Do I have to push the image to Docker hub to pull it in Server B, or can I somehow transfer the image directly?
I'm really not looking for specific Jenkins plugins or stuff, but rather, from a security and architecture standpoint, what's the best approach to accomplish this?
I've read a ton of posts and SO answers about this and have come to realize that there are plenty of ways to do it, but I'm still unsure what's the ultimate, most common way to do this. I've seen these alternatives:
Using docker-machine
Using Docker Restful Remote API
Using plain ssh root@server.b "docker run ..."
Using Docker Swarm (I'm super noob so I'm still unsure if this is even an option for my use case)
Edit:
I run Servers A and B in Digital Ocean.
A Docker image can be saved to a regular tar archive:
docker image save -o <FILE> <IMAGE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_save/
Then scp this tar archive to another host, and run docker load to load the image:
docker image load -i <FILE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_load/
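Putting the two together, a minimal end-to-end sketch (the hostname, user, and image name below are placeholders, not something from your setup) would be:
docker image save -o myapp.tar myapp:latest
scp myapp.tar deploy@server-b:/tmp/myapp.tar
ssh deploy@server-b 'docker image load -i /tmp/myapp.tar && docker run -d --name myapp myapp:latest'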
This save-scp-load method is rarely used, though. The common approach is to set up a private Docker registry behind your firewall, and push images to or pull them from that private registry. This doc describes how to deploy a container registry. Or you can choose a registry service provided by a third party, such as GitLab's container registry.
When using Docker registries, you only push/pull the layers which have changed.
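If you go the self-hosted route, a bare-bones sketch looks like this (registry.example.com and myapp are placeholders, and a real setup should add TLS and authentication in front of the registry):
# on the registry host:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
# on Server A (build side):
docker tag myapp:latest registry.example.com:5000/myapp:latest
docker push registry.example.com:5000/myapp:latest
# on Server B (production side):
docker pull registry.example.com:5000/myapp:latest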
You can use the Docker REST API. The Jenkins HTTP Request plugin can be used to make HTTP requests. You can also run Docker commands directly against a remote Docker host by setting the DOCKER_HOST environment variable. To export the environment variable to the current shell:
export DOCKER_HOST="tcp://your-remote-server.org:2375"
Please be aware of the security concerns when allowing TCP traffic. More info.
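If both machines run Docker 18.09 or newer (an assumption about your setup), you can also point DOCKER_HOST at SSH instead of exposing plain TCP, which avoids opening port 2375 at all:
export DOCKER_HOST="ssh://deploy@your-remote-server.org"    # deploy is a placeholder user
docker ps                                                   # talks to the remote daemon over SSH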
Another method is to use SSH Agent Plugin in Jenkins.
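With that plugin (or plain ssh), the deployment step run from Server A could be as simple as the following sketch (the user, host, image reference, and ports are placeholders):
ssh deploy@server-b 'docker pull myregistry.example.com/myapp:latest && (docker rm -f myapp || true) && docker run -d --name myapp -p 80:8080 myregistry.example.com/myapp:latest'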
We are evaluating Docker for our application, so we would really like to know the following:
What are the best practices for moving Docker images and containers between different machines?
Also, how do we manage containers and images in a production environment across different regions?
First of all, Docker has a push/pull mechanism built around a registry (which may be private, or public like Docker Hub).
1) Answer to your first question - moving Docker images and containers between machines:
You can create a tar file of an image or a container and then move the tar file between your machines.
Check with docker ps -a, then based on your requirement use one of the following:
$ docker ps -a
CONTAINER ID    IMAGE          COMMAND        CREATED          STATUS
68d9619a7a91    ubuntu:14.04   "/bin/bash"    10 seconds ago   Exited
For a container move - use docker export and docker import (the argument after - is the name to give the imported image):
$ docker export 68d9619a7a91 > ubuntu-container.tar
$ docker import - ubuntu-imported < ubuntu-container.tar
For an image move - use docker save and docker load (note that docker save needs the image name):
$ docker images
$ docker save -o image.tar ubuntu:14.04
$ docker load < image.tar
2) Second question - managing containers in a production environment:
a) It is better to have your own private registry managing all the images that you need for your containers. Suppose you have a dedicated node acting as the Docker registry where all your Docker images will stay. You can then push your changes or updates of the images to that registry, and pull these images from the registry onto the machines that will run containers from them.
b) Another great way of managing images/containers across a cluster and different cloud providers is to use Kubernetes (open sourced by Google). We have not implemented Kubernetes ourselves and have just started looking into its documentation, but it looks very promising if you are using Docker containers and the cloud.