Boot2Docker to Google Compute Engine VM: saving Docker container

I am running Boot2Docker v1.0.1 on Windows, and wish to fire up a Docker container I have created on a Google Compute Engine VM.
In order to do so, I need to save the container and upload it to Google Cloud Storage.
I issue the following command:
docker save --output=mycontainer.tar mycontainer:latest
The command completes without error. However, I cannot find the mycontainer.tar file anywhere on my hard drive.
Does anyone have any experience with this? If not, is there a better way to run containers on GCE VMs?

You can run google/docker-registry locally to push your container images to GCS.
docker run -ti --name gcloud-config google/cloud-sdk \
gcloud auth login
docker run -ti --volumes-from gcloud-config google/cloud-sdk \
gcloud config set project <project>
docker run -d -e GCS_BUCKET=bucketname -p 5000:5000 \
--volumes-from gcloud-config google/docker-registry
docker tag imagename localhost:5000/imagename
docker push localhost:5000/imagename
And then run it on GCE to pull your containers from GCS.
docker run -d -e GCS_BUCKET=bucketname -p 5000:5000 google/docker-registry
docker run localhost:5000/imagename

I understand that you are using boot2docker on Windows.
On a similar setup, using OS X and boot2docker 1.1.0, the following works:
docker save --output mycontainer.tar mycontainer:latest
As does redirecting standard output:
docker save mycontainer:latest > mycontainer.tar
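If the tar still does not show up on the Windows side, one plausible explanation (an assumption about this setup, since Boot2Docker runs the Docker daemon inside a VirtualBox VM and commands are typically issued from a shell inside that VM): the file was written to the VM's filesystem rather than the Windows drive. The SSH port and username below are Boot2Docker defaults, so treat them as assumptions:
boot2docker ssh                      # log in to the VM where the docker client ran
ls -lh ~/mycontainer.tar             # the tar may have landed in the VM's home directory
exit
scp -P 2022 docker@localhost:mycontainer.tar .   # copy it out over the VM's forwarded SSH port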

GCE now allows you to store Docker images for your projects using the gcloud command.
You can now run: $ gcloud preview docker push gcr.io/YOUR-PROJECT/IMAGE-NAME
Source: https://cloud.google.com/tools/container-registry/#pushing_to_the_registry
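For example, pushing a locally built image looks something like this (the project and image names below are placeholders):
docker tag mycontainer gcr.io/my-project/mycontainer
gcloud preview docker push gcr.io/my-project/mycontainer
On the GCE VM, gcloud preview docker pull gcr.io/my-project/mycontainer then retrieves it (assuming the pull subcommand mirrors push, per the same docs).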

Related

What is the location of logs when Wso2 EI is run as a docker container

I pulled the latest WSO2 EI Docker image from the official Docker Hub.
Then I ran the following command to start the container:
docker run -it -p 8280:8280 -p 8243:8243 -p 9443:9443 --name integrator wso2/wso2ei-integrator
Where can I find the wso2carbon.log?
Found the answer on the Docker Hub page itself:
https://hub.docker.com/r/wso2/wso2ei-integrator
You can view the logs with
docker logs integrator
and access the container's bash shell with
docker exec -it integrator bash
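If you need the file's exact path, searching inside the container avoids guessing the EI home directory, which varies by product version (so treat any hard-coded path as an assumption):
docker exec -it integrator find / -name wso2carbon.log 2>/dev/null
docker exec -it integrator tail -f <path-printed-above>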

Why do I keep seeing the nginx index.html on localhost when I run my docker image

I installed and ran nginx on my Linux machine to understand the configuration, etc. After a while I decided to remove it safely, by following this thread, in order to use it in Docker.
Following this documentation, I ran this command:
sudo docker run --name nginx -d -p 8080:80 pillalexakis/myrestapi:01
And I saw nginx's homepage at localhost.
Then I deleted all nginx images, stopped all containers, and also ran this command:
sudo docker system prune -a
But then I restarted my service with this command:
sudo docker run -p 192.168.2.9:7777:8085 phillalexakis/myfirstapi:01
and I keep seeing the nginx index.html at localhost.
How can I totally remove it?
Note: I'm new to Docker and might have missed a lot of things. Let me know what extra docker commands I should run in order to provide better information.
Assuming your host has been prepared as below:
your files (index.html, js, etc.) under the folder /myhost/nginx/html
your nginx configuration at /myhost/nginx/nginx.conf
Solution
Option 1: map your files (as a volume) on the fly from outside the Docker image via the docker CLI.
This is the command
docker run -it --rm -d -p 8080:80 --name web \
-v /myhost/nginx/html:/usr/share/nginx/html \
-v /myhost/nginx/nginx.conf:/etc/nginx/nginx.conf \
nginx
Option 2: copy your files into the Docker image by building your own image via a Dockerfile.
This is your Dockerfile under /myhost/nginx
FROM nginx:latest
COPY ./html/index.html /usr/share/nginx/html/index.html
This is the command to build your docker image
cd /myhost/nginx
docker build -t pillalexakis/nginx .
This is the command to run your docker image
docker run -it --rm -d -p 8080:80 --name web \
pillalexakis/nginx
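If the old page still shows up after all of that, it helps to confirm what is actually publishing the port before blaming leftover images (the commands below are plain docker introspection, nothing specific to this setup):
sudo docker ps --format '{{.Names}}\t{{.Ports}}'     # running containers and their port bindings
sudo docker ps -a                                     # include stopped containers too
Also note that the second run command binds 192.168.2.9:7777, not localhost, so whatever answers on localhost is likely a leftover container, another local web server, or a cached page in the browser.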

Use the docker command in a Jenkins container

My CentOS and Docker versions (Docker installed by yum) are shown below.
Running the docker command inside the container produces an error.
My docker run command:
docker run -it -d -u root --name jenkins3 -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker docker.io/jenkins/jenkins
but it errors when I exec docker info in the Jenkins container:
/usr/bin/docker: 2: .: Can't open /etc/sysconfig/docker
Exposing the host's docker socket to your Jenkins container will work with
-v /var/run/docker.sock:/var/run/docker.sock
but you will need to have the docker executable installed in your Jenkins image via a Dockerfile.
It is likely the example you are looking at is already using a docker image. A quick Google search brings up https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ whose example uses a docker image (which already has the executable installed):
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-ti docker
Also note from that same post your exact issue with mounting the binary:
Former versions of this post advised to bind-mount the docker binary from the host to the container. This is not reliable anymore, because the Docker Engine is no longer distributed as (almost) static libraries.
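Given that caveat, the more reliable route is to bake the Docker CLI into the Jenkins image itself. A minimal sketch (the docker.io package name assumes the Debian-based official jenkins/jenkins image, so adjust for other bases):
FROM jenkins/jenkins:lts
USER root
# Install the Docker CLI; the daemon stays on the host, reached via the mounted socket
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
USER jenkins
Built with docker build -t jenkins-docker ., it can then be started with the same socket mount as in the question, minus the binary mount:
docker run -it -d -u root --name jenkins3 -v /var/run/docker.sock:/var/run/docker.sock jenkins-docker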

Docker couchbase cbbackup/cbtransfer/cbrestore tools

I've used Docker to install Couchbase on my Ubuntu machine using (https://hub.docker.com/r/couchbase/server/). The docker run command is as follows:
docker run -d --name db -p 8091-8094:8091-8094 -p 11210:11210 -v /home/dockercontent/couchbase:/opt/couchbase/var couchbase
Everything works perfectly fine: my application connects, and I'm able to insert/update and query Couchbase. Now I'm looking to debug a situation wherein Couchbase is on my co-developer's machine, who has the same installation, i.e., Couchbase on Docker set up from the above link. To do this, I wanted to run cbbackup against his installation, so I ran the following command, a variation of the one from the above link:
bash -c "clear && docker exec -it couch-db sh"
Can anyone please help me with the location of /opt/couchbase/bin in this setup? I believe this is where I can get access to "cbbackup", "cbrestore" and "cbtransfer" which I can then use to backup and restore data from my colleague's machine.
Thanks,
Abhi.
When you run the command
docker run -d --name db -p 8091-8094:8091-8094 -p 11210:11210 -v /home/dockercontent/couchbase:/opt/couchbase/var couchbase
you're pulling a docker image and spawning a docker container.
Please read more about Docker and containerization.
In order to run cbbackup you need to log into your docker container.
Follow these steps:
Retrieve the container-id:
$ docker ps -a
Look for the CONTAINER ID for IMAGE NAME=couchbase
Login to the container using the command:
$ docker exec -it <container-id> bash
Go to the directory /opt/couchbase/bin using:
$ cd /opt/couchbase/bin
You'll find cbbackup binary in this directory.
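From there, a backup run looks roughly like this (credentials are placeholders; writing under /opt/couchbase/var keeps the backup on the host thanks to the -v mount in the run command above):
$ ./cbbackup http://localhost:8091 /opt/couchbase/var/backup -u Administrator -p password
cbrestore follows the same pattern in reverse:
$ ./cbrestore /opt/couchbase/var/backup http://localhost:8091 -u Administrator -p password -b bucketname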

Docker: google/docker-registry container usage

Does the google/docker-registry container exist solely to push/pull images from Google Cloud Storage? I am currently following their instructions on GitHub and have the docker-registry container running, but I can't seem to pull from my bucket.
I started it with:
sudo docker run -d -e GCS_BUCKET=mybucket -p 5000:5000 google/docker-registry
I have a .tar Docker image stored in Cloud Storage, at mybucket/imagename.tar. However, when I execute:
sudo docker pull localhost:5000/imagename.tar
It results in:
2014/07/10 19:15:50 HTTP code: 404
Am I doing this wrong?
You need to docker push to the registry instead of copying your image tar manually.
From where your image is:
docker run -ti --name gcloud-config google/cloud-sdk \
gcloud auth login
docker run -ti --volumes-from gcloud-config google/cloud-sdk \
gcloud config set project <project>
docker run -d -e GCS_BUCKET=bucketname -p 5000:5000 \
--volumes-from gcloud-config google/docker-registry
docker tag imagename localhost:5000/imagename
docker push localhost:5000/imagename
Then, from the place you want to run the image (e.g., GCE):
docker run -d -e GCS_BUCKET=bucketname -p 5000:5000 google/docker-registry
docker run localhost:5000/imagename
The google/docker-registry image is preconfigured to use Google Cloud Storage buckets.
It should work with any storage backend (if the configuration is overridden), but its purpose is to be used with the Google infrastructure.
The tar file of an exported image is meant for manually moving images between Docker hosts when there is no registry; you should not upload tar files to the bucket.
To upload images, push to the docker-registry container; it will then save the image in the bucket.
The Google Compute Engine instance running the docker-registry container must be configured with read/write access to the bucket.
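One way to satisfy that last requirement, assuming the instance has not been created yet (on GCE of that era, scopes could only be set at creation time), is to grant the storage read/write scope up front:
gcloud compute instances create registry-host --scopes storage-rw
The instance name here is a placeholder; alternatively, the instance's service account can be given access through the bucket's ACL.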
