Get image id from image created via remote API - docker

I'm using the Docker Remote API (API v1.6, Docker 0.6.5). I'm building an image via the POST /build endpoint. FWIW, my client is written in Go. Is there a way to get the image ID of the image I just created without having to parse it out of the streamed response text?

You could give it a name during the build (the t parameter of POST /build, e.g. repo:tag) and then inspect the tagged image using GET /images/(name)/json.
The JSON response includes the id, as well as the "config" showing how the image was created in the first place.
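A minimal sketch of that flow in Go, assuming the daemon listens on the default unix socket (adjust the dialer for a TCP endpoint); the image name is a placeholder:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Talk to the daemon over the default unix socket; swap the dialer for a
	// plain TCP client if your daemon listens on a TCP port instead.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return net.Dial("unix", "/var/run/docker.sock")
			},
		},
	}

	// After POST /build?t=myrepo/myimage:latest has finished, fetch the image
	// record by the name we gave it.
	resp, err := client.Get("http://docker/images/myrepo/myimage:latest/json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Older API versions (including v1.6) use a lowercase "id" key; Go's JSON
	// decoding matches struct tags case-insensitively, so both spellings work.
	var info struct {
		ID string `json:"Id"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		panic(err)
	}
	fmt.Println("image id:", info.ID)
}
```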

Related

How to know which Dockerfile was used to generate hub image

I am facing an issue when I try to create a custom image containing TensorFlow, but I do not see that problem when I use the official images. So I am trying to find out which Dockerfile from https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/dockerfiles/dockerfiles was used to generate the Docker Hub image. Could you help me, please?
You can always use docker image inspect to get further information about an image's layers and how it was built locally.
On Docker Hub, if you click on the Tags tab, you can click on any image and it will show you that information as a nicely annotated view of the image's layers, next to the Dockerfile.
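If you prefer to script it, GET /images/(name)/history on the Remote API returns one entry per layer, and each entry's CreatedBy field is the instruction that produced that layer, which you can compare against the candidate Dockerfiles. A small Go sketch; the unix socket and image name are assumptions:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return net.Dial("unix", "/var/run/docker.sock")
			},
		},
	}

	// One history entry per layer, newest first.
	resp, err := client.Get("http://docker/images/tensorflow/tensorflow:latest/history")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var history []struct {
		CreatedBy string `json:"CreatedBy"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&history); err != nil {
		panic(err)
	}
	// Print the instruction that produced each layer.
	for _, h := range history {
		fmt.Println(h.CreatedBy)
	}
}
```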

How to get image digest (sha) via OPA policy

I want to create a policy that checks that an image digest (sha) is part of a whitelist and approves it.
My problem is that the deployment is done via image name and image tag, and I am trying to find a way to convert the tag to a sha on the fly.
Can OPA get the sha of an image?
A Docker image's digest is the SHA256 of its registry manifest (the image ID you see locally is the SHA256 of its config JSON). Without that (i.e. from only the repository and tag) you can't compute the digest, with OPA or any other tool. If you do have the manifest at hand you can compute the digest using it as input to the crypto.sha256(string) built-in, but you are probably better off retrieving the digest directly from either the Docker API or your registry, depending on your use case.
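For the "retrieve it from the registry" route, a HEAD request for the tag's manifest returns the digest in the Docker-Content-Digest header. A hedged Go sketch; the registry host, repository, and tag are placeholders:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest("HEAD",
		"https://registry.example.com/v2/some/image/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	// Ask for a v2 manifest so the returned digest matches what
	// image@sha256:... references use.
	req.Header.Set("Accept", "application/vnd.docker.distribution.manifest.v2+json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	fmt.Println("digest:", resp.Header.Get("Docker-Content-Digest"))
}
```

Docker Hub and most private registries require a bearer token on this endpoint; the digest header is the same once authenticated.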

Compute Engine, Reset or Stop+Start

I have a VM in Compute Engine running a Docker image uploaded to the Container Registry.
If I push a new image with the same name, is "Reset" enough to load that image, or should I keep doing Stop+Start?
When you upload a Docker image to the Container Registry, apart from the name, it is wise to tag it. If you upload another image with the same name and the same tag, Reset is enough for the VM to use the new image: upon resetting, it will pull the image with the same name and tag as the one first used to deploy the VM. If you do not use tags, new images you push with the same name and no tag are automatically tagged "latest", and that one will be used after doing "Reset" on your VM.
When using tags, upon uploading a new image with the same name and tag, the previous image loses the tag and the newly uploaded one gets it.

Get the internal URI storage location (gs://) after uploading data [duplicate]

When I attempt to load data into BigQuery from Google Cloud Storage it asks for the Google Cloud Storage URI (gs://). I have reviewed all of the online support as well as Stack Overflow and cannot find a way to identify the URL for my uploaded data via the browser-based Google Developers Console. The only way I see to find the URL is via gsutil, and I have not been able to get gsutil to work on my machine.
Is there a way to determine the URL via the browser-based Google Developers Console?
The path should be gs://<bucket_name>/<file_path_inside_bucket>.
To answer this question, more information is needed: did you already load your data into GCS?
If not, the easiest way is to go to the project console, click on your project, and navigate to Storage -> Cloud Storage -> Storage browser.
You can create buckets there and upload files to a bucket.
The files will then be found at gs://<bucket_name>/<file_path_inside_bucket>, as #nmore says.
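If you would rather script it than click through the console, here is a small Go sketch using the cloud.google.com/go/storage client. It assumes application default credentials and a placeholder bucket name, and simply prints the gs:// URI of every object:

```go
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	// Uses application default credentials; "my-bucket" is a placeholder.
	client, err := storage.NewClient(ctx)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	it := client.Bucket("my-bucket").Objects(ctx, nil)
	for {
		attrs, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			panic(err)
		}
		// The gs:// URI is just the bucket name plus the object name.
		fmt.Printf("gs://%s/%s\n", attrs.Bucket, attrs.Name)
	}
}
```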
Couldn't find a direct way to get the URL, but found an indirect way; the steps are below:
Go to GCS
Go into the folder in which the file has been uploaded
Click on the three dots at the right end of your file's row
Click Rename
Click on the gsutil equivalent link
Copy the URL alone
Follow these steps:
1. Go to GCS
2. Go into the folder in which the file has been uploaded
3. At the top you can see the Overview option
4. There you will see the Link URL and the gsutil equivalent link
Retrieving the Google Cloud Storage URI
To create an external table using a Google Cloud Storage data source, you must provide the Cloud Storage URI.
The Cloud Storage URI comprises your bucket name and your object (filename). For example, if the Cloud Storage bucket is named mybucket and the data file is named myfile.csv, the bucket URI would be gs://mybucket/myfile.csv. If your data is separated into multiple files you can use a wildcard in the URI. For more information, see Cloud Storage Request URIs.
BigQuery does not support source URIs that include multiple consecutive slashes after the initial double slash. Cloud Storage object names can contain multiple consecutive slash ("/") characters. However, BigQuery converts multiple consecutive slashes into a single slash. For example, the following source URI, though valid in Cloud Storage, does not work in BigQuery: gs://[BUCKET]/my//object//name.
To retrieve the Cloud Storage URI:
Open the Cloud Storage web UI.
Browse to the location of the object (file) that contains the source data.
At the top of the Cloud Storage web UI, note the path to the object. To compose the URI, replace gs://[BUCKET]/[FILE] with the appropriate path, for example, gs://mybucket/myfile.json. [BUCKET] is the Cloud Storage bucket name and [FILE] is the name of the object (file) containing the data.
If you need help with subdirectories, see https://cloud.google.com/storage/docs/gsutil/addlhelp/HowSubdirectoriesWork, and https://cloud.google.com/storage/images/gsutil-subdirectories-thumb.png if you want to see how gsutil provides a hierarchical view of objects in a bucket.
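Once you have composed the URI, a BigQuery load job can consume it directly. A minimal Go sketch using the cloud.google.com/go/bigquery client; the project, dataset, table, and URI below are placeholders:

```go
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "my-project")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// The gs:// URI composed from bucket and object name, as described above.
	gcsRef := bigquery.NewGCSReference("gs://mybucket/myfile.csv")
	gcsRef.SourceFormat = bigquery.CSV
	gcsRef.AutoDetect = true // let BigQuery infer the schema

	loader := client.Dataset("mydataset").Table("mytable").LoaderFrom(gcsRef)
	job, err := loader.Run(ctx)
	if err != nil {
		panic(err)
	}
	status, err := job.Wait(ctx)
	if err != nil {
		panic(err)
	}
	if status.Err() != nil {
		panic(status.Err())
	}
	fmt.Println("load complete")
}
```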

What is the relationship between a docker image ID and the IDs in the manifests?

I am trying to understand the connection between the image ID as reported by docker images (or docker inspect) and the actual layers or images in the registry or manifest (using v2).
I run docker images, I get (abbreviated and changed to protect the not-so-innocent):
REPOSITORY TAG IMAGE ID
my.local.registry/some/image latest abcdefg
If I pull the manifest for the above image using the API, I get one that contains fsLayers, none of which matches the (full) ID of the image. I get that, since the image is the sum of its layers.
However, if I pull that image elsewhere, I get the same ID. If I update the image, push and pull it, the new version has a new ID.
I thought it might be the hash of the manifest. However, (a) pulling the manifest via the API does not return the hash of the manifest in the JSON, and (b) looking in the registry directory itself, the sha256 of the given manifest in /var/lib/registry/v2/repositories/some/image/_manifests/tags/latest/current/link (or those in index/sha256/) give the correct link for the manifest that was downloaded, but does not match the image ID.
I had assumed the image ID matches the blob, but perhaps I am in error?
When we push an image to a registry, we create a manifest that defines the image and the layers inside it, and we push the manifest and the layers independently.
The layers are compressed before they are pushed. So on our host, the hashes we have are hashes of the uncompressed layer content, called the content hashes. What we send to the registry, though, are the compressed layers, whose bytes differ, so their hashes differ as well. Before the compressed layers are sent, their hashes are calculated; these are called the distribution hashes, and they are what goes into the manifest file.
The difference between these content and distribution hashes is why you see different IDs.
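Assuming the registry can serve a schema 2 manifest, you can also recover the local image ID directly: it is the digest of the image's config blob, which the manifest references in its config.digest field. A sketch in Go; the registry, repository, and tag are placeholders:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET",
		"https://my.local.registry/v2/some/image/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	// Without this Accept header many registries fall back to the older
	// schema 1 manifest (the fsLayers form), which has no config reference.
	req.Header.Set("Accept", "application/vnd.docker.distribution.manifest.v2+json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var manifest struct {
		Config struct {
			Digest string `json:"digest"`
		} `json:"config"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&manifest); err != nil {
		panic(err)
	}
	// This sha256 digest is the image ID that docker images shows locally.
	fmt.Println("image id:", manifest.Config.Digest)
}
```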
