Where are the docker images? - docker

I created an image on local laptop:
% docker build -t kubia .
But where is it?
ls: /var/lib/docker/: No such file or directory
it's not here
% ls ~/Library/Containers/com.docker.docker/Data/vms/0
00000002.00001003 connect data hyperkit.json log
00000003.000005f5 console-ring guest.000005f5 hyperkit.pid
not here either
When I run
docker images
where does the output come from?
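For context: on Docker Desktop for Mac the daemon runs inside a small Linux VM, so the images never appear in the Mac's own filesystem. A quick way to check where the daemon itself keeps them (assuming Docker Desktop is running) is:
% docker info --format '{{ .DockerRootDir }}'
That path (typically /var/lib/docker) exists inside the VM, which is why it is not visible to ls on the host.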

Related

Docker run -v : Unable to mount a bind volume : "invalid volume specification"

I'm quite new to Docker. I'm running on Windows 10 Enterprise and am trying to containerize an existing app that runs on windows (so it's a Windows container). I don't know if this matters but the container is rather large (8 GB).
I need to share a config file (that lives on the host) with the container that the app will use when starting. I was thinking that a bind volume was simplest.
Problem: On running the image I get docker: Error response from daemon: invalid volume specification: '<source path>:<target path>'
Container was built with this command:
docker build -t my_image .
Here is the Dockerfile:
FROM mcr.microsoft.com/dotnet/framework/runtime:4.8
WORKDIR /app
COPY . .
ENTRYPOINT .\application.exe ..\Resources
Here is what I've tried
docker run -it -v c:/Users/my_user:/app my_image
I've tried every combination of C:/, C:\, C:\\, /c/, //c/, \c\, \\c\, etc.
I've tried multiple combinations of /app, //app, \app, \app, C:\app, etc.
I've also tried with and without :rw appended to the end
I've tried the --mount syntax which consistently outputs: docker: Error response from daemon: invalid mount config for type "bind": invalid mount path: '/app'. (tried a bunch of variations of /app here too)
I've tried every possible combination (except the right one). Please help!
Since you are using a Windows container, your file path will change. Try the command below, from the Persistent Storage in Windows Containers docs:
docker run -it -v c:\Users\my_user:c:\app my_image
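The --mount syntax from the question should also work once both sides use Windows-style paths; a sketch under that assumption (same source and target as above, not verified on every Docker-for-Windows version):
docker run -it --mount type=bind,source=c:\Users\my_user,target=c:\app my_image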
If you are using PowerShell (for example the VS Code terminal) to run docker run, you can try this approach. It worked for me in Windows PowerShell:
docker run -v ${pwd}\src:/app/src -d -p 3000:3000 --name react-app-c2 react-app-image
Here react-app-c2 is the container name and react-app-image is the image name.
-v is for the volume and ${pwd} expands to the current working directory.
/app/src is the directory inside the container.
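To double-check that the bind mount landed where expected (an optional check, assuming the container started above is running), its mounts can be printed:
docker inspect -f '{{ json .Mounts }}' react-app-c2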

Docker searching for repository when building base image locally

I have a Dockerfile and base image on my machine. I am attempting to build the image using the Dockerfile commands, but after Docker loads the base image it errors out and says it can't reach a machine. I'm unsure where this is defined.
I issued command,
docker build -t container-name -f Dockerfile .
It pulls in the build context ("Sending build context to Docker daemon") up to the full file size of the base image. But then it says:
Step 1/10 : FROM base-image
Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on : read udp -> : i/o timeout
I'm unsure why it's trying to access a registry when I have the definition and the base file in that same directory.
I realized I had not loaded the base image using the docker load command...
It was not listed in my "docker images" repository list, therefore Docker was looking for it elsewhere.
So, if using your own local image, use:
docker load -i <base_image.tar.gz>
Then it will be in your "docker images" list, and now you can perform the command:
docker build -t container-name -f Dockerfile .
where the Dockerfile contains first line:
FROM base_image
...
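As a quick sanity check before re-running the build (assuming the image was loaded under the same name used in the FROM line), confirm it now shows up locally:
docker images base_image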

Where can I get a docker image offline?

In my working environment I can't connect to the network, but I can connect to a download machine, which can connect to the network.
But I do not know how to find a docker image and download it.
The Docker Hub website just shows commands such as "docker pull nginx", but since I can't connect to the network, that is useless for me.
My question:
I have already installed docker by downloading docker-engine.deb.
Where can I get a docker image offline?
You'll need access to a registry where docker images are stored. But if you don't have any images and no registry with images yet, then you have to pull the image from the internet.
A recommended way could be:
Install docker on a machine (maybe your local machine) with internet access and pull an image:
$ docker pull busybox
Use docker save to make a .tar of your image
$ docker save busybox > busybox.tar
or you can use the following syntax
$ docker save --output busybox.tar busybox
Reference: the docker save documentation.
You can use a tool like scp to send the .tar to your Docker server where you don't have internet access.
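For example (the hostname and destination path here are placeholders, assuming SSH access to the offline machine):
$ scp busybox.tar user@docker-server:/tmp/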
Now you can use docker load to extract the .tar and get your image:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
$ docker load < busybox.tar
Loaded image: busybox:latest
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
busybox latest 769b9341d937 7 weeks ago 2.489 MB
Reference: the docker load documentation.

How to move Docker containers to AWS

How do I move a Docker container from my local system to AWS? I have configured docker on my local system. I need to move a docker container from my local system to an AWS EC2 instance.
In a one-time scenario you have these options:
A: To transfer your image:
Save your image on your local machine:
docker save my_image > my_image.tar
Upload tar to your remote server:
scp my_image.tar user@aws-machine:.
Load image on your remote machine:
ssh user@aws-machine
docker load < my_image.tar
Run a new container
docker run my_image
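As a side note, the save/copy/load steps can also be collapsed into a single pipeline (shown only as a sketch, assuming SSH access to the instance is already set up):
docker save my_image | ssh user@aws-machine docker load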
B: To transfer your container:
Export your container on your local machine:
docker export my_container_id > my_container.tar
Upload tar to your remote server:
scp my_container.tar user@aws-machine:.
Load tar as image on your remote machine:
ssh user@aws-machine
cat my_container.tar | docker import - my-container-exported:latest
Run a new container
docker run my-container-exported:latest
To be prepared for later deployment improvements (like using CI/CD) you should consider option A. All necessary data for execution should be in the image, and important data should be stored externally (volume mount, database, ...).
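For that CI/CD direction, the usual next step is a registry-based flow instead of copying tar files around. A rough sketch with a hypothetical Docker Hub repository name (an ECR repository would work the same way):
docker tag my_image myuser/my_image:latest
docker push myuser/my_image:latest
# then, on the EC2 instance
docker pull myuser/my_image:latest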

Docker container not running

I have created a docker image, a python script based on a centos image. This image works on the host system. Then I converted that image to tar.gz format. After that, when I imported the tar.gz file into a docker host (on an Ubuntu system), the import completed and the docker images list shows the image there. Then I tried to run the container in interactive mode using the following command:
$ docker run -it image_name /bin/bash
It throws the following error:
docker: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"exec: \\\"/bin/bash\\\": stat /bin/bash: no such file or directory\"\n".
The docker run -it image_name /bin/bash command works for all other images on my system. I have tried almost everything, but get nothing other than this error.
docker run -it image_name /bin/sh works for me! (Some Docker images, like Alpine, do not have /bin/bash.)
I've just run into the same issue after updating Docker For Windows. It seems that it corrupted some image layers.
I cleared all the cached containers and images by running:
docker ps -qa|xargs docker rm -f
docker images -q|xargs docker rmi
The last command returned a few errors (some returned images didn't exist anymore).
Then I restarted the service and everything was running again.
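On recent Docker versions a one-command alternative achieves roughly the same cleanup (note: this is a general suggestion, not the exact method from the answer above, and it also removes unused networks and build cache):
docker system prune -a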
I had the same issue, and it got resolved after following the steps described in this post:
https://www.jamescoyle.net/how-to/1512-export-and-import-a-docker-image-between-nodes
Instead of saving the docker image (I) as a .tar and importing it, we need to commit the exited container, based on the image (I), as a new image (N).
Then save the newly committed image (N) as a .tar file for importing into the new environment.
Hope this helps...
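A rough sketch of those steps (the container ID and image names here are placeholders, not taken from the post):
docker commit <exited_container_id> my_image_n
docker save my_image_n > my_image_n.tar
# on the new environment
docker load < my_image_n.tar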
