Running local container when connected to remote docker machine? - docker

I'd like to deploy a container that I've built locally on a DigitalOcean droplet. I've been following these instructions. The problem is that by running:
eval $(docker-machine env DROPLET_NAME)
Docker sets the environment variables to point to the remote machine, effectively making my Docker commands run against the Docker daemon on the remote machine. This is expected. However, say I have a local image I've built named rb612/docker-img:latest that I haven't pushed to a remote registry. I want to run this in the remote machine context.
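For reference, docker-machine env prints shell export statements along these lines (the IP address and certificate path here are placeholders, not my real values):
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://203.0.113.10:2376"
export DOCKER_CERT_PATH="/Users/me/.docker/machine/machines/DROPLET_NAME"
export DOCKER_MACHINE_NAME="DROPLET_NAME"
# Run this command to configure your shell:
# eval $(docker-machine env DROPLET_NAME)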
If I run:
docker run -d -p 80:8000 rb612/docker-img:latest
Then I get Unable to find image 'rb612/docker-img:latest' locally. If my understanding is correct, this is because it's no longer running in the context of my machine. Opening a new shell and running the same command works fine without the remote environment variables set.
So I'm wondering if there's a way I can run this local image on my remote machine. I tried using the -w flag to pass in the local path but I got the same error. Deploying instead with a remote docker image works fine.

So I'm wondering if there's a way I can run this local image on my remote machine.
Sure.
You have a couple of options.
Using docker image save/load
You can use docker image save to save the image to a file. Do this either before you run your eval statement, or do it in a different terminal window that doesn't have the remote Docker environment configured:
docker image save rb612/docker-img:latest > docker-img.tar
After running your eval $(...) command, use docker image load to send the image to your remote Docker:
docker image load < docker-img.tar
Now the image is available on your remote Docker host and you can docker run it normally.
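If you'd rather skip the intermediate tar file, the two steps can be combined into a single pipe by pointing the receiving side at the remote daemon with docker-machine config (a sketch; run it from a shell that does not have the remote environment applied, so the save side talks to your local Docker):
docker image save rb612/docker-img:latest | docker $(docker-machine config DROPLET_NAME) image load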
Set up a remote registry
You can set up your own remote registry, in which case you can simply docker push to that registry from your local machine and docker pull from the remote machine. This is generally the best long-term solution, but the initial set up (especially securing things properly with SSL) is a little bit more involved. Details are in the documentation.
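As a rough sketch of the self-hosted route (an unsecured test setup on the default port 5000, not the properly TLS-secured version described in the docs; my-registry-host is a placeholder):
docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker tag rb612/docker-img:latest my-registry-host:5000/docker-img:latest
docker push my-registry-host:5000/docker-img:latest
# on the remote machine (or with the remote env applied):
docker pull my-registry-host:5000/docker-img:latest
Note that a plain-HTTP registry like this also has to be whitelisted under insecure-registries in the daemon configuration on both ends.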

Related

Pulling from local registry gives UNAUTHORIZED

I am running a Docker registry in a container which I run as-is from the 'docker-registry' image, as published on Docker Hub. This container is running on a machine on my local network. From my laptop I am able to push an image to that registry without any problems. I subsequently try to pull that same image to a different machine on my network, but there I get an error response:
{"code":"UNAUTHORIZED","message":"authentication required", ...}
This raises the questions: Is this image configured to require authentication? Why does it not require authentication when I push/pull from my laptop?
One of the reasons could be that the target machine where you are trying to run your Docker image does not have root/sudo access. This is a general requirement with Docker: it needs root privileges. Try to ensure the required permissions are in place when you run your Docker commands (try using sudo with the commands).
I can't be very sure of the reason; I'd need more info about the machine where you are running Docker.
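If it does turn out to be a permissions problem, a common alternative to prefixing everything with sudo (assuming a standard Linux install where the docker group exists) is to add your user to that group and log in again:
sudo usermod -aG docker $USER
# log out and back in, then verify:
docker info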

Gitlab-CI error upon deploying Docker Image on swarm mode

Hi, I have a problem with updating/changing the image of my service on a server running Docker swarm mode.
Here is the process of manually updating the service:
push the project to GitLab from the local machine.
pull the project from GitLab on the server.
build a Docker image as my-project:latest
tag my-project:latest as registry.gitlab.com/my-group/my-project:staging
push the image using docker push registry.gitlab.com/my-group/my-project:staging
run docker stack deploy -c ~/docker-stack.yml api --with-registry-auth
and it works fine.
However, if I move the commands above into a gitlab-ci.yml, the job ends successfully but I get an error when it tries to update the service:
Updating service api_backend (id: r4gqmil66kehzf0oehzqk57on)
image registry.gitlab.com/my-group/my-project:staging could not be accessed on a registry to record
its digest. Each node will access registry.gitlab.com/my-group/my-project:staging independently,
possibly leading to different nodes running different
versions of the image.
Also, the GitLab runner is executing commands in shell mode.
I have tried different solutions; as you can see, I'm even using the --with-registry-auth flag.
To summarize: everything works fine if I enter the commands manually, but I get an error when I use gitlab-ci.yml.
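Written out as a single sequence, the manual commands from the list above look roughly like this (a sketch; the build context path is a placeholder):
docker build -t my-project:latest ~/my-project
docker tag my-project:latest registry.gitlab.com/my-group/my-project:staging
docker push registry.gitlab.com/my-group/my-project:staging
docker stack deploy -c ~/docker-stack.yml api --with-registry-auth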

How to run docker-compose with docker image?

I've moved my docker-compose container from the development machine to a server using docker save image-name > image-name.tar and cat image-name.tar | docker load. I can see that my image is loaded by running docker images. But when I want to start my server with docker-compose up, it says that there isn't any docker-compose.yml. And there really isn't any .yml file there. So what should I do?
UPDATE
When I copied all my project files to the server (including docker-compose.yml), everything started to work. But is that the normal approach, and why did I need to save/load the image first?
What you achieve with docker save image-name > image-name.tar and cat image-name.tar | docker load is that you put a Docker image into an archive and extract the image on another machine after that. You could check whether this worked correctly with docker run --rm image-name.
An image is just like a blueprint you can use for running containers. This has nothing to do with your docker-compose.yml, which is just a configuration file that has to live somewhere on your machine. You would have to copy this file manually to the remote machine you wish to run your image on, e.g. using scp docker-compose.yml remote_machine:/home/your_user/docker-compose.yml. You could then run docker-compose up from /home/your_user.
EDIT: Additional info concerning the updated question:
UPDATE When I copied all my project files to the server (including docker-compose.yml), everything started to work. But is that the normal approach, and why did I need to save/load the image first?
Personally, I have never used this approach of transferring a Docker image (but it's cool, I didn't know about it). What you would typically do is push your image to a Docker registry (either the official Docker Hub, or a self-hosted registry) and then pull it from there.
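As a rough sketch of that workflow (the Docker Hub account name below is a placeholder):
docker tag image-name your-dockerhub-user/image-name:latest
docker push your-dockerhub-user/image-name:latest
# on the server, with docker-compose.yml pointing at the pushed tag:
docker pull your-dockerhub-user/image-name:latest
docker-compose up -d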

GCE doesn't deploy GCR image correctly

I have followed this guide from the Google documentation in order to be able to push a custom Docker image to Google Container Registry and then start a new GCE instance with this image. At first I wanted to try using the public anaconda3 image from Docker Hub without any modification (in order to test).
So here are the steps I have done so far after installing gcloud and docker:
gcloud auth configure-docker -> configured Docker with my gcloud credentials
docker pull continuumio/anaconda3 -> pulled the public image
docker tag continuumio/anaconda3 eu.gcr.io/my-project-id/anaconda3 -> tagged the local image with the registry name as specified in the doc
docker push eu.gcr.io/my-project-id/anaconda3 -> pushed the image to GCR
Good! I am now able to see my image through the GCR interface, and also able to deploy it with GCE. I chose to deploy it on an f1-micro instance, Container-Optimized OS 67-10575.62.0 stable, 10 GB boot disk, Allow HTTP traffic, etc.
But when I connect with ssh to the freshly created VM instance, I can't find the anaconda3 libraries (which are supposed to be in /opt/conda). Instead, I can see an /opt/google directory, which makes me think that the image has not been deployed correctly and GCE is using a default image...
So I tried to check whether the image was pushed correctly to GCR, and decided to delete my local image and pull it once again from GCR:
docker rmi -f eu.gcr.io/my-project-id/anaconda3
docker pull eu.gcr.io/my-project-id/anaconda3:latest
I run the image
docker run -t -i eu.gcr.io/my-project-id/anaconda3
and I can see that everything is fine: I have anaconda3 installed correctly inside /opt/conda with all the tools needed (Pandas, NumPy, Jupyter Notebook, etc.)
I tried to find people with the same problem as me, without any success... Maybe I have done something wrong in my process?
Thanks !
TL;DR My problem is that I have pushed an anaconda3 image on Google GCR, but when I launch a virtual instance with this image, I do not have anaconda on it
It's normal that you can't find anaconda libraries installed directly on the GCE instance.
Actually, when you choose to deploy a container image on a GCE VM instance, a Docker container is started from the image you provide (in your example, eu.gcr.io/my-project-id/anaconda3). The libraries are not installed on the host, but rather inside that container (run docker ps to see it; it normally has the same name as your VM instance). If you run something like:
docker exec -it <docker_container_name> ls /opt/conda
Then you should see the anaconda libraries, only existing inside the container.
When you run docker run -t -i eu.gcr.io/my-project-id/anaconda3, you're actually starting the container and running an interactive bash session inside that container (see the CMD here). That's why you can find the anaconda libraries: you are inside the container!
Containerization software (Docker here) provides isolation between your host and your containers. I'd suggest you read the documentation about containerization, Docker, and how to run containers on Container-Optimized OS.
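For what it's worth, the same kind of deployment can also be done from the command line instead of the Cloud Console (a sketch; the instance name and zone are placeholders):
gcloud compute instances create-with-container anaconda3-vm \
  --zone europe-west1-b \
  --machine-type f1-micro \
  --container-image eu.gcr.io/my-project-id/anaconda3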

Pull image from another Docker Machine

Is it possible to pull an image from another docker machine without having to install a Docker registry?
I have 2 Docker machines for development, and I would like to deploy an image on the second machine that I have built with the first one.
Is this possible?
If you have created your Docker servers using docker-machine, then you could do a save/load using remote access to the Docker daemon on each server:
docker $(docker-machine config server1) save exampleimage:1.0 | docker $(docker-machine config server2) load
But... it would be a lot simpler to just rebuild the image on the second server, using the same Dockerfile.
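For example (a sketch, assuming the Dockerfile is in the current directory):
docker $(docker-machine config server2) build -t exampleimage:1.0 .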
