As in the title: is it possible to create an image from its manifest.json file and the blobs that come from a Docker registry?
Docker has a 'save' command which can create a .tar file of an image, but when I look inside that file it contains more files and folders than just a standalone manifest file and its blobs.
I would like to fetch the manifest from the registry, then its blobs, then pack them into a .tar file and be able to load that image using Docker's 'load' command. Can I do that with only the image manifest and its blobs?
It's a bit of a hack, but you can set up a local registry and then follow the steps for pushing an image to a registry using blobs and manifests, as specified in the Docker Registry API docs.
After that, you only need to pull the image from that registry using docker pull.
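The push flow described above boils down to a handful of Registry HTTP API v2 endpoints. A minimal sketch of how those URLs are built (the registry host and repository name are assumptions for illustration; the actual HTTP calls are noted in comments):

```python
# Sketch of the Docker Registry HTTP API v2 endpoints used when pushing
# an image from raw blobs plus a manifest. "localhost:5000" and "myimage"
# are assumed values for illustration.

REGISTRY = "localhost:5000"   # assumed local registry
REPO = "myimage"              # assumed repository name

def blob_upload_start_url(registry: str, repo: str) -> str:
    # POST here starts a blob upload; the response's Location header
    # gives the upload URL to PUT the blob bytes to (with ?digest=...).
    return f"http://{registry}/v2/{repo}/blobs/uploads/"

def blob_exists_url(registry: str, repo: str, digest: str) -> str:
    # HEAD here checks whether a blob is already present, so you can
    # skip re-uploading layers the registry already has.
    return f"http://{registry}/v2/{repo}/blobs/{digest}"

def manifest_put_url(registry: str, repo: str, reference: str) -> str:
    # PUT the manifest JSON here, with Content-Type
    # application/vnd.docker.distribution.manifest.v2+json,
    # once all referenced blobs have been uploaded.
    return f"http://{registry}/v2/{repo}/manifests/{reference}"

print(blob_upload_start_url(REGISTRY, REPO))
print(manifest_put_url(REGISTRY, REPO, "latest"))
```

Once the manifest PUT succeeds, `docker pull localhost:5000/myimage:latest` should retrieve the assembled image.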
Related
For security reasons I cannot directly pull the amq63-openshift base image from the Red Hat registry (https://registry.access.redhat.com/v2/jboss-amq-6/amq63-openshift/tags/list).
Is there a way to create the same amq63-openshift base image using a Dockerfile?
I'm trying to play a bit with Sigstore's cosign tool.
The tool is signing the image and then uploading the signature to the same repository.
So if I have a tag - tool:latest - and I sign it, the tool creates a new tag named <digest_of_latest>.sig.
I want to somehow download that tag's data, but if I try docker pull I get an error that Docker cannot load the image (which is right - it is not a real image).
Is there a way to download the raw data file from the Docker Hub repository?
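For context, the signature tag cosign creates is derived from the image digest: the "sha256:" prefix becomes "sha256-" and ".sig" is appended. A minimal sketch of that mapping (the digest value below is hypothetical):

```python
# Sketch of cosign's signature-tag naming convention: the image digest
# "sha256:<hex>" maps to the tag "sha256-<hex>.sig".
def signature_tag(image_digest: str) -> str:
    return image_digest.replace(":", "-") + ".sig"

# Hypothetical digest, for illustration only.
print(signature_tag("sha256:0a1b2c3d"))  # sha256-0a1b2c3d.sig
```

To fetch the payload itself without docker pull, registry-level tooling (for example `cosign download signature <image>`, or a generic manifest/blob fetch against the registry API) is the usual route, since the .sig tag points at an OCI artifact rather than a runnable image.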
TL;DR: Is there a convenient way to scan Docker images for unused/unnecessary packages?
Question: Given an enormous list of Docker images and files, is it possible to scan them and check whether or not a package is actively being used? For security purposes, it would be best to remove all unnecessary packages and reduce the attack surface. In particularly large applications, it's not uncommon for a developer to accidentally leave behind a previously useful package.
Potential dirty approach: remove packages one by one; if the application fails to build, put that package back and consider it necessary. However, if the Dockerfile builds successfully, it could trigger a notification indicating that the package is potentially unused.
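The trial-removal loop described above can be sketched as follows. The build step is injected as a callable so the sketch stays independent of Docker; in practice it would rewrite the Dockerfile without the package and run `docker build`. Note that a successful build only shows the package isn't needed at build time, not that it is unused at runtime.

```python
# Sketch of the "remove one package at a time" approach. builds_ok is a
# stand-in for rebuilding the image with the given package list; here a
# toy predicate is used instead of invoking docker build.
from typing import Callable, List

def potentially_unused(packages: List[str],
                       builds_ok: Callable[[List[str]], bool]) -> List[str]:
    flagged = []
    kept = list(packages)
    for pkg in packages:
        trial = [p for p in kept if p != pkg]
        if builds_ok(trial):
            flagged.append(pkg)  # build succeeded without it
            kept = trial         # leave it removed for later trials
    return flagged

# Toy example: pretend only "curl" and "git" are actually required.
required = {"curl", "git"}
result = potentially_unused(["curl", "vim", "git", "tree"],
                            lambda pkgs: required.issubset(pkgs))
print(result)  # ['vim', 'tree']
```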
Concerning unused images, you can use the command docker image prune.
Here's a link to the documentation that might help you.
nabil@LAPTOP:~$ docker image --help

Usage:  docker image COMMAND

Manage images

Commands:
  build       Build an image from a Dockerfile
  history     Show the history of an image
  import      Import the contents from a tarball to create a filesystem image
  inspect     Display detailed information on one or more images
  load        Load an image from a tar archive or STDIN
  ls          List images
  prune       Remove unused images
  pull        Pull an image or a repository from a registry
  push        Push an image or a repository to a registry
  rm          Remove one or more images
  save        Save one or more images to a tar archive (streamed to STDOUT by default)
  tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

Run 'docker image COMMAND --help' for more information on a command.
How does docker handle digests?
I can see the digest of an image in plain text when I run docker image inspect. And there's also the fact that local images don't have a digest until I push them to a registry (and AFAIK, if I push an image to various registries, it will have various digests, but I've never tried that).
I fear that Docker might actually be using that stored info instead of calculating the hash every time I use or pull an image.
Is there a way to actually tell docker: "Hey, I want you to recheck right now the hash of the image contents. Are they the exact same as when I first created the image? Or has someone manipulated it ever?"
And: does docker really calculate that hash every time an image is run (by digest), or at least every time an image is pulled (by digest)?
The digest is calculated on push and pull to a registry. It's a sha256 checksum of the image manifest, which in current versions of Docker is independent of the registry (the older schema v1 format included the repository/tag in the manifest, so the digest changed depending on the image name). The layer digests are included in that manifest, and those digests are computed over the compressed tar files as stored on the registry. Once the files have been extracted on the local Docker engine, they aren't re-verified, and I'm not aware of a command yet that would verify that the files under /var/lib/docker have not been changed since the image was pulled.
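A minimal sketch of what the digest actually is: the sha256 checksum of the raw manifest JSON bytes, prefixed with "sha256:". The manifest below is a trimmed, hypothetical fragment for illustration; since the hash is taken over the exact bytes, even a whitespace change produces a different digest.

```python
# Sketch: an image digest is sha256 over the manifest's raw bytes.
# The manifest content below is a hypothetical, trimmed fragment.
import hashlib

manifest_bytes = (
    b'{"schemaVersion":2,'
    b'"mediaType":"application/vnd.docker.distribution.manifest.v2+json"}'
)

digest = "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()
print(digest)
```

Because no repository name or tag appears in the hashed bytes, the same manifest yields the same digest on any registry, which is why the schema v2 digest is registry-independent.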
I have started using Docker recently. Initially the Docker image was 2-3 GB in size. I am saving the work done in the container into images, so the image size has grown significantly (~6 GB). I want to delete images while preserving the work done. When I export the container to a gzipped file, the size of that file is ~1 GB. Will it work fine if I delete the current ~6 GB image and create a new one from the gzipped file with docker import? The description of the import command says it will create a filesystem image - is that a Docker image or something else, i.e. will I be able to create containers from that image?
You can save the image (see more details here), for example:
docker save busybox > busybox.tar
Another alternative is to write a Dockerfile which contains all the instructions necessary to build your image. The main advantage is that this is a text file which can be version controlled, hence you can keep track of all the changes you made to your image. Another advantage is that you can reproduce that image elsewhere without having to copy images across systems. For example, instead of copying a 1 GB or 6 GB image, you just need to copy the Dockerfile and build the image on the new host. More details about the Dockerfile can be found here.
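As a rough illustration of that alternative, a Dockerfile recording the same work might look like the sketch below (the base image, paths, and command are hypothetical placeholders, not taken from the question):

```dockerfile
# Hypothetical sketch: capture the container's setup as build steps
# so the image can be rebuilt anywhere instead of being copied around.
FROM busybox:latest
COPY ./app /app
CMD ["/app/run.sh"]
```

Building it with `docker build -t myimage .` then produces a real Docker image that containers can be started from, unlike a bare filesystem tarball.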