How to move a Docker container from a local system to AWS. I have configured Docker on my local system. I need to move a Docker container from my local system to an AWS EC2 instance.
In a one-time scenario you have these options:
A: To transfer your image:
Save your image on your local machine:
docker save my_image > my_image.tar
Upload tar to your remote server:
scp my_image.tar user@aws-machine:.
Load image on your remote machine:
ssh user@aws-machine
docker load < my_image.tar
Run a new container
docker run my_image
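If both machines can reach each other over SSH, you can also stream the image straight across and skip the intermediate tar file. A minimal sketch, assuming docker is on the remote PATH and user@aws-machine is your SSH login:
docker save my_image | ssh user@aws-machine docker load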
B: To transfer your container:
Export your container on your local machine:
docker export my_container_id > my_container.tar
Upload tar to your remote server:
scp my_container.tar user@aws-machine:.
Load tar as image on your remote machine:
ssh user@aws-machine
cat my_container.tar | docker import - my-container-exported:latest
Run a new container
docker run my-container-exported:latest
To be prepared for later deployment improvements (like using CI/CD) you should consider option A. All data necessary for execution should be in the image, and important data should be stored externally (volume mount, database, ...).
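For example, here is a minimal sketch of keeping mutable data outside the image with a named volume (app_data and the /var/lib/app path are hypothetical names for illustration):
# create a named volume that survives container removal
docker volume create app_data
# mount it into the container; the image stays free of runtime data
docker run -v app_data:/var/lib/app my_image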
Related
Hey guys, this is a quick question to see if it's possible to move the images stored on your local PC into minikube, without having to keep building them within minikube.
My Problem:
My problem is that every time I restart my computer I have to rebuild the images, using the eval $(minikube docker-env) command to connect my shell session to the minikube Docker daemon. However, the images that you build using your local daemon are persistent across a restart or shutdown. Is there a way to move these images into minikube so that the minikube Docker daemon can pick them up, or else a way to pull those images straight from my local PC?
If you want to save your existing container back to an image (I recommend using a new image name), run:
docker commit -p CONTAINER_ID NEW_IMAGE_NAME
You can then export the image into a tarball via:
docker save -o savedIMG.tar NEW_IMAGE_NAME
Determine the IP address of the Minikube container via: "minikube ip". It will likely be 192.168.49.2. Your host machine will be 192.168.49.1.
You then connect to the Minikube container via "minikube ssh" and scp the tarball from your host server. Example:
scp bob@192.168.49.1:/home/bob/savedIMG.tar .
(Note the space+period at the end)
You then import the image from the tarball via:
docker load --input savedIMG.tar
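On recent minikube versions there is also a shortcut that skips the commit/scp/load round-trip entirely. A sketch, assuming a minikube new enough to have the image subcommand (roughly v1.18+):
# push an image from the host Docker daemon into minikube's daemon
minikube image load NEW_IMAGE_NAME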
How do you create a local-registry container that mounts a volume from the host machine and locally persists all the images that get pulled?
Local Docker registry with persisted images
It should be possible to have an ephemeral registry container (and its Docker volume), allowing images not to be downloaded more than once, even after the registry (or the whole Docker VM) is thrown away and recreated.
This would allow pulling the images just once, having them available even when internet connectivity isn't good (or available at all); it would also allow mounting a Docker volume with pre-downloaded images.
It would be more convenient than having to manually docker push/docker pull onto the local registry, or to docker save/docker load each image that needs to be available there.
Notes:
destination of the mount should probably be /var/lib/registry/docker/registry.
it is possible to configure a local Docker registry as a pull-through cache (a config sketch follows these notes).
my specific setup runs docker via minikube, on macOS; but the answer doesn't have to be specific to it.
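For the pull-through-cache note above, here is a minimal configuration sketch for registry:2 written as a config file rather than environment variables. The file name, host paths, and the Docker Hub remote URL are assumptions; adjust them to the registry you want to proxy:
# write a registry config that proxies and caches an upstream registry
cat > registry-config.yml <<'EOF'
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  remoteurl: https://registry-1.docker.io
EOF
# run the registry with the config and a host directory for the cached layers
docker run -d -p 5000:5000 \
  -v "$PWD/registry-config.yml:/etc/docker/registry/config.yml" \
  -v "$PWD/registry-data:/var/lib/registry" \
  --name pull-through-cache \
  registry:2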
I managed it; here are the step-by-step instructions. Hopefully this will make life easier for somebody else!
Configuration
First define your environment variables with the desired values. See the env vars in the code below (PROXIED_REGISTRY, REGISTRY_USER, REGISTRY_PASSWORD, PATH_WHERE_TO_PERSIST_IMAGES, etc.)
On the host machine
Minikube
If using minikube, first bind your shell to the Docker daemon on its VM:
eval $(minikube docker-env)
or run the commands directly from inside the VM, via minikube ssh.
Create local registry
(note: some envs might be unnecessary; check Docker docs to see what you need)
The -v option mounts onto the local registry the path where you want to persist the registry data (repositories folders and image layers).
When you use Minikube, the latter will automatically mount the home folder from the host (/Users/, on macOS) onto the virtual machine where Docker runs.
docker run -d -p 5000:5000 \
-e STANDALONE=false \
-e "REGISTRY_LOG_LEVEL=debug" \
-e "REGISTRY_REDIRECT_DISABLE=true" \
-e MIRROR_SOURCE="https://${PROXIED_REGISTRY}" \
-e REGISTRY_PROXY_REMOTEURL="https://${PROXIED_REGISTRY}" \
-e REGISTRY_PROXY_USERNAME="${REGISTRY_USER}" \
-e REGISTRY_PROXY_PASSWORD="${REGISTRY_PASSWORD}" \
-v /Users/${MACOS_USERNAME}/${PATH_WHERE_TO_PERSIST_IMAGES}/docker/registry:/var/lib/registry \
--restart=always \
--name local-registry \
registry:2
Login to your local registry
echo -n "${REGISTRY_PASSWORD}" | docker login -u "${REGISTRY_USER}" --password-stdin "localhost:5000"
(optional) Verify that the persisted directories are present
docker exec local-registry ls -la /var/lib/registry/docker/registry
ls -la /Users/${MACOS_USERNAME}/${PATH_WHERE_TO_PERSIST_IMAGES}/docker/registry/docker/registry
Try to pull one image from your private registry
(to see it proxied through the registry at localhost:5000)
docker pull localhost:5000/${REPOSITORY}/${IMAGE}:${IMAGE_TAG}
(optional) Verify the image data has been synced on local host, where desired
docker exec local-registry ls -la /var/lib/registry/docker/registry
ls -la /Users/${MACOS_USERNAME}/${PATH_WHERE_TO_PERSIST_IMAGES}/docker/registry/docker/registry
If using Kubernetes
change the deployment spec container image to:
localhost:5000/${REPOSITORY}/${IMAGE}:${IMAGE_TAG}
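If the deployment already exists, the same change can be applied in place with kubectl; a sketch, where my-deployment and my-container are hypothetical names for your deployment and its container:
kubectl set image deployment/my-deployment \
  my-container=localhost:5000/${REPOSITORY}/${IMAGE}:${IMAGE_TAG}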
Et voila!
You can now keep the images downloaded from your repository stored on your host machine!
If internet is available, the local registry will make sure you have the most recent version of your pulled images, requesting it from the proxied registry (private, or the Docker Hub).
And you will have a last-resort backup to run your containers even when your internet connection is too slow to re-download everything you need, or is unavailable altogether!
(really useful with Minikube, when you need to destroy your docker virtual machine)
References:
https://docs.docker.com/registry/recipes/mirror/#run-a-registry-as-a-pull-through-cache
https://minikube.sigs.k8s.io/docs/handbook/mount/#driver-mounts
I'm trying to load a Docker image into an environment without internet connection (nginx:stable-alpine).
Once I've downloaded the image with pull on a computer with internet connection, I use the save command:
docker image save --output docker-image-nginx.tar nginx:stable-alpine
Then I copy it to the environment without internet connection, and load it:
docker image load --input docker-image-nginx.tar
The image gets loaded and can be seen with docker image ls:
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx stable-alpine 8c1bfa967ebf 4 weeks ago 21.5MB
But when I create a container with the run command:
docker run --name nginx -p 8080:80 nginx:stable-alpine
I get this error:
/docker-entrypoint.sh: No files found in /docker-entrypoint.d/, skipping configuration
The container can be created with the same command on the computer with internet connection.
What's wrong in the process of saving and loading the image?
What about trying to save a Docker container instead of an image?
For example:
In your host with internet
docker run --name mynginx -p 8080:80 nginx:stable-alpine
docker export mynginx > mynginx.tar
In your host without internet
docker import mynginx.tar mynginx:latest
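One caveat: docker export strips image metadata such as ENTRYPOINT and CMD, so the imported image may not start the same way as the original. You can restore them at import time with --change; a sketch, where the values shown are what the stock nginx image uses (verify with docker inspect on the connected host first):
docker import \
  --change 'ENTRYPOINT ["/docker-entrypoint.sh"]' \
  --change 'CMD ["nginx", "-g", "daemon off;"]' \
  mynginx.tar mynginx:latest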
I'm new to Docker and I want to copy files between my local machine and a Docker container that's on a remote machine, without having to scp files from my local machine to the remote one and then use docker cp to copy those files into the container. My container does not have an SSH server installed on it, nor do I want to rebuild my image to include one.
I tried following the solution given by the second answer here: How to SSH into Docker?. I ran the following command on my remote machine that hosts Docker:
docker run -d -p 2222:22 -v /var/run/docker.sock:/var/run/docker.sock -e CONTAINER=kind_tu -e AUTH_MECHANISM=noAuth jeroenpeeters/docker-ssh
Where kind_tu is the name of my running container.
On my local machine I then used: ssh -L 2222:localhost:2222 remote_account_name@remote_ip and then scp -P 2222 test_file remote_account_name@remote_ip:/destination/path (I'm also not familiar with port forwarding so I'm not sure if my notation is correct). When doing this, I get the following:
ssh: connect to host remote_ip port 2222: Connection refused
lost connection
Could this be an issue with the firewall since the remote machine is on my school's campus?
In all, I'm not sure if what I'm doing is even remotely correct.
According to your comment in reply to David's, here is how to bind-mount the directory for your visualization files into your container:
On the host system create a directory, e.g. mkdir /home/sarah/viz/. Then, mount it to your docker container, using e.g.
docker run -v /home/sarah/viz:/data/viz … kind_tu …
Your viz software inside the kind_tu container should place the files in the directory /data/viz – which then lands in /home/sarah/viz/ on the host system, where you can download them to your local computer with scp or rsync or however you can connect to the remote machine.
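As an alternative that avoids the two-hop copy entirely: Docker clients since 18.09 can talk to a remote daemon over SSH, so docker cp can move files straight between your local machine and the remote container. A sketch, assuming you can already SSH to the remote host:
# point the local client at the remote daemon over SSH
export DOCKER_HOST=ssh://remote_account_name@remote_ip
# copy a local file into the remote container, and a (hypothetical) result back
docker cp test_file kind_tu:/data/viz/
docker cp kind_tu:/data/viz/result.png .
unset DOCKER_HOST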
You can also use docker-compose to have a more persistent environment. Write a file docker-compose.yml with the bind-mount and all the other configuration of the kind_tu container:
version: '3'
services:
  kind_tu:
    image: your_viz_software_image:latest
    volumes:
      - /home/sarah/viz:/data/viz:rw
    …
Then, instead of docker run … you can just do docker-compose up -d and everything acts according to the config in the compose-file.
In my working environment I can't connect to the network, but I can connect to the download machine, which can connect to the network.
But I don't know how to find a Docker image and download it.
The Docker Hub website just shows commands such as "docker pull nginx"; since I can't connect to the network, that is useless for me.
My question:
Now I have installed Docker by downloading docker-engine.deb.
Where can I get a Docker image offline?
You'll need access to a registry where Docker images are stored. But if you don't have images and no registry with images yet, then you have to pull the image from the internet.
A recommended way could be:
Install docker on a machine (maybe your local machine) with internet access and pull an image:
$ docker pull busybox
Use docker save to make a .tar of your image
$ docker save busybox > busybox.tar
or you can use the following syntax
$ docker save --output busybox.tar busybox
You can use a tool like scp to send the .tar to your Docker server where you don't have internet access.
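For example, a sketch where user and docker-server are placeholders for your own login and host:
scp busybox.tar user@docker-server:~/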
Now you can use docker load to extract the .tar and get your image:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
$ docker load < busybox.tar
Loaded image: busybox:latest
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
busybox latest 769b9341d937 7 weeks ago 2.489 MB