How to get image upload date/time from Docker Registry

I need to get the image upload date from a Docker Registry.
I currently use the following: https://registry/v2/repository/manifests/tag. That gives me a creation date, which is stale in most cases; I want to know when the image was uploaded.
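For context, the request has roughly this shape against the Registry v2 API (the registry host is a placeholder, <repository> and <tag> are whatever I'm inspecting, and authentication is omitted):
curl -s -H 'Accept: application/vnd.docker.distribution.manifest.v1+prettyjws' \
  "https://registry.example.com/v2/<repository>/manifests/<tag>"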
If I can't get this, is there a way to run docker build and specify a date/time that could be used in place of the creation date?
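(One partial workaround for that second idea: docker build accepts --label, so a timestamp under my control can be baked into the image at build time. It only adds a label, though, and does not replace the manifest's created field; the label name below just follows the OCI annotation convention.)
docker build \
  --label "org.opencontainers.image.created=$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  -t users/jesaremi/baseimage .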
Here is an example of the manifest I'm getting; the only dates available are the v1Compatibility created fields:
{
  "schemaVersion": 1,
  "name": "users/jesaremi/baseimage",
  "tag": "6a69f60507f029f76ff102aa1b89b562d2d784dfdbfef38cb0ed5c0b61a188ff",
  "architecture": "amd64",
  "fsLayers": [
    {
      "blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
    },
    {
      "blobSum": "sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b"
    }
  ],
  "history": [
    {
      "v1Compatibility": "{\"architecture\":\"amd64\",\"config\":{\"Hostname\":\"\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Cmd\":[\"sh\"],\"ArgsEscaped\":true,\"Image\":\"sha256:758a17a836a4c09586a291c928d1f0561320e252d07c4749e14338daefe84b18\",\"Volumes\":null,\"WorkingDir\":\"\",\"Entrypoint\":null,\"OnBuild\":null,\"Labels\":null},\"container\":\"e30cd53834b3dfdb989c63cc73f4f31f404c7a6a0c0e9d6b9e3e8451edd72596\",\"container_config\":{\"Hostname\":\"e30cd53834b3\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Cmd\":[\"/bin/sh\",\"-c\",\"#(nop) \",\"CMD [\\\"sh\\\"]\"],\"ArgsEscaped\":true,\"Image\":\"sha256:758a17a836a4c09586a291c928d1f0561320e252d07c4749e14338daefe84b18\",\"Volumes\":null,\"WorkingDir\":\"\",\"Entrypoint\":null,\"OnBuild\":null,\"Labels\":{}},\"created\":\"2019-09-04T19:20:16.230463098Z\",\"docker_version\":\"18.06.1-ce\",\"id\":\"a91ec18e2f45c300f1df0a23ac04c1396d791c6c387dd5e16e44dc96a4fc309d\",\"os\":\"linux\",\"parent\":\"bd5fbbc2870744fe1d37bbd120eebe4c441f8401c54b04b8ae0f9f625936f4c3\",\"throwaway\":true}"
    },
    {
      "v1Compatibility": "{\"id\":\"bd5fbbc2870744fe1d37bbd120eebe4c441f8401c54b04b8ae0f9f625936f4c3\",\"created\":\"2019-09-04T19:20:16.080265634Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) ADD file:9151f4d22f19f41b7a289e87aa9cfba3956ffd27746cb3b171b9bd2cb7e6c313 in / \"]}}"
    }
  ],
  "signatures": [
    {
      "header": {
        "jwk": {
          "crv": "P-256",
          "kid": "3NQM:K5YD:M3XF:EKJD:4S64:3772:BJOT:JIMR:NX4R:2XYS:IDNA:NOKL",
          "kty": "EC",
          "x": "k6pZfyr-dKYLri5KJCL70UmNLCQnfUh2lAC_nDK9PVw",
          "y": "MhrKOUbx1sgsbF0kG9d5bfvkVaxaFWiKlWTwgFyHkbQ"
        },
        "alg": "ES256"
      },
      "signature": "klE8-cWOS1GZenBB7CPXYUK8VWmqiVQaFfWGgBQPn_L8iayojGEUc9D_06WCUdAqL7upvNIxcCPXJvZMORLn_Q",
      "protected": "eyJmb3JtYXRMZW5ndGgiOjIxOTksImZvcm1hdFRhaWwiOiJDbjAiLCJ0aW1lIjoiMjAxOS0xMC0yM1QyMTo0MTozM1oifQ"
    }
  ]
}
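Worth noting: the signatures[].protected field above is base64url-encoded JSON that carries a time stamp from when the manifest was signed, which may be closer to an upload time than v1Compatibility.created. It can be decoded directly (JWS strips the base64 padding, so it has to be re-appended for base64 -d):
echo 'eyJmb3JtYXRMZW5ndGgiOjIxOTksImZvcm1hdFRhaWwiOiJDbjAiLCJ0aW1lIjoiMjAxOS0xMC0yM1QyMTo0MTozM1oifQ==' | base64 -d
# {"formatLength":2199,"formatTail":"Cn0","time":"2019-10-23T21:41:33Z"}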

Related

How to create and validate Docker signed manifests

How exactly is a signed manifest for a Docker container using schema v1 supposed to be created and validated?
The documentation shows an example of a signed manifest file like the one below:
{
  "name": "hello-world",
  "tag": "latest",
  "architecture": "amd64",
  "fsLayers": [
    {
      "blobSum": "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"
    }
  ],
  "history": [
    {
      "v1Compatibility": ".."
    }
  ],
  "schemaVersion": 1,
  "signatures": [
    {
      "header": {
        "jwk": {
          "crv": "P-256",
          "kid": "OD6I:6DRK:JXEJ:KBM4:255X:NSAA:MUSF:E4VM:ZI6W:CUN2:L4Z6:LSF4",
          "kty": "EC",
          "x": "3gAwX48IQ5oaYQAYSxor6rYYc_6yjuLCjtQ9LUakg4A",
          "y": "t72ge6kIA1XOjqjVoEOiPPAURltJFBMGDSQvEGVB010"
        },
        "alg": "ES256"
      },
      "signature": "XREm0L8WNn27Ga_iE_vRnTxVMhhYY0Zst_FfkKopg6gWSoTOZTuW4rK0fg_IqnKkEKlbD83tD46LKEGi5aIVFg",
      "protected": "eyJmb3JtYXRMZW5ndGgiOjY2MjgsImZvcm1hdFRhaWwiOiJDbjAiLCJ0aW1lIjoiMjAxNS0wNC0wOFQxODo1Mjo1OVoifQ"
    }
  ]
}
It's mentioned that the signature can be generated using libtrust; however, it's not clear how. Is there any code example, or even better a CLI tool, that can be used to sign and validate a manifest like the one above?
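As a starting point, the protected field of each signature is plain base64url-encoded JSON and can be inspected without any special tooling (padding re-appended for base64 -d, since JWS strips it):
echo 'eyJmb3JtYXRMZW5ndGgiOjY2MjgsImZvcm1hdFRhaWwiOiJDbjAiLCJ0aW1lIjoiMjAxNS0wNC0wOFQxODo1Mjo1OVoifQ==' | base64 -d
# {"formatLength":6628,"formatTail":"Cn0","time":"2015-04-08T18:52:59Z"}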

How can JAVA_OPTIONS be added to a DeploymentConfig in an OpenShift container

I am trying to add the JAVA_OPTIONS below to a DeploymentConfig for an OpenShift container, but it throws a syntax error. Could anyone help me add these parameters?
JAVA_OPTIONS:
-Djavax.net.ssl.trustStore={KEYSTORE_PATH}/cacerts.ts
-Djavax.net.ssl.trustStorePassword=changeit
-Djavax.net.ssl.keyStore=${KEYSTORE_PATH}/keystore.pkcs12
-Djavax.net.ssl.keyStorePassword=${KEYSTORE_PASS}
-Djava.awt.headless=true
The DeploymentConfig as JSON:
{
  "apiVersion": "apps.openshift.io/v1",
  "kind": "DeploymentConfig",
  "metadata": {
    "labels": {
      "app": "${APP_NAME}"
    },
    "name": "${APP_NAME}"
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "app": "${APP_NAME}",
      "deploymentconfig": "${APP_NAME}"
    },
    "strategy": null,
    "template": {
      "metadata": {
        "labels": {
          "app": "${APP_NAME}",
          "deploymentconfig": "${APP_NAME}"
        }
      },
      "spec": {
        "containers": [
          {
            "env": [
              {
                "name": "SPRING_PROFILE",
                "value": "migration"
              },
              {
                "name": "JAVA_MAIN_CLASS",
                "value": "com.agcs.Application"
              },
              {
                "name": "JAVA_OPTIONS",
                "value":"-Djavax.net.ssl.trustStore={KEYSTORE_PATH}/cacerts.ts",
                "-Djavax.net.ssl.trustStorePassword=changeit",
                -Djavax.net.ssl.keyStore=${KEYSTORE_PATH}/keystore.pkcs12
                -Djavax.net.ssl.keyStorePassword=${KEYSTORE_PASS}
                -Djava.awt.headless=true,
              },
              {
                "name": "MONGO_AUTH_DB",
                "valueFrom": {
                  "secretKeyRef": {
                    "key": "spring.data.mongodb.authentication-database",
                    "name": "mongodb-secret"
                  }
                }
              },
            ],
            "image": "${IMAGE_NAME}",
            "imagePullPolicy": "Always",
            "name": "${APP_NAME}",
            "ports": [
              {
                "containerPort": 8103,
                "protocol": "TCP"
              }
            ],
            "resources": {
              "limits": {
                "cpu": "500m",
                "memory": "1Gi"
              },
              "requests": {
                "cpu": "500m",
                "memory": "500Mi"
              }
            },
            "volumeMounts": [
              {
                "name": "secret-volume",
                "mountPath": "/mnt/secrets",
                "readOnly": true
              }
            ]
          }
        ],
        "volumes": [
          {
            "name": "secret-volume",
            "secret": {
              "secretName": "keystore-new"
            }
          }
        ]
      }
    }
  }
}
{
  "name": "JAVA_OPTIONS",
  "value":"-Djavax.net.ssl.trustStore={KEYSTORE_PATH}/cacerts.ts",
  "-Djavax.net.ssl.trustStorePassword=changeit",
  -Djavax.net.ssl.keyStore=${KEYSTORE_PATH}/keystore.pkcs12
  -Djavax.net.ssl.keyStorePassword=${KEYSTORE_PASS}
  -Djava.awt.headless=true,
},
This is invalid JSON: the value key can only hold a single string, while you have provided multiple comma-separated strings.
JAVA_OPTIONS isn't a standard environment variable, so we don't know how it's processed, but maybe this will work:
{
  "name": "JAVA_OPTIONS",
  "value": "-Djavax.net.ssl.trustStore={KEYSTORE_PATH}/cacerts.ts -Djavax.net.ssl.trustStorePassword=changeit -Djavax.net.ssl.keyStore=${KEYSTORE_PATH}/keystore.pkcs12 -Djavax.net.ssl.keyStorePassword=${KEYSTORE_PASS} -Djava.awt.headless=true"
},
But there's still probably an issue, because it seems like {KEYSTORE_PATH} is supposed to be a variable. That's not defined or expanded in this file. For a first attempt, probably just hardcode the values of all these variables.
For secrets (such as passwords) you can hardcode some value for initial testing, but please use OpenShift Secrets for formal testing and the actual deployment.
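For example, the keystore password could come from a Secret the same way MONGO_AUTH_DB does above. A sketch of that env entry, assuming the keystore-new Secret contains a key named keystore-password (that key name is hypothetical):
{
  "name": "KEYSTORE_PASS",
  "valueFrom": {
    "secretKeyRef": {
      "name": "keystore-new",
      "key": "keystore-password"
    }
  }
}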

How to update a Secret in Azure Key Vault only if changed in ARM templates or check if it exists

I have a production key vault that keeps a reference set of secrets for projects to use; because they are only deployed via ARM templates, these secrets are never handled by people copy-pasting them.
When a new project starts, its deployment script creates the project's own key vault.
I want to be able to run the templates/scripts as part of CI/CD. Today this results in the same secret getting a new version on every run, even though the value did not change.
How do I make it update the key vault value only when the master vault has been updated?
In my deployment.sh script I use the following technique:
SendGridUriWithVersion=$( (az group deployment create ... assume that the secret exists ... || az group deployment create ... assume that the secret does not exist ...) | jq -r '.properties.outputs.secretUriWithVersion.value')
It works because the template has a parameter that, if set, retrieves the existing secret, compares it with the new value, and only inserts it if they differ. The original problem is that such a deployment fails if the secret is not already set (as happens on the first deployment, for example).
But then, thanks to the Unix ||, the same script runs the deployment again without the parameter set; the template then uses a condition to skip fetching the old value, and the run succeeds.
Here is the example in depth:
SecretName="Sendgrid"
SourceSecretName="Sendgrid"
SourceVaultName="io-board"
SourceResourceGroup="io-board"
SendGridUriWithVersion=$( (az group deployment create \
    -n ${SecretName}-secret \
    -g $(prefix)-$(projectName)-$(projectEnv) \
    --template-uri https://management.dotnetdevops.org/providers/DotNetDevOps.AzureTemplates/templates/KeyVaults/${keyVaultName}/secrets/${SecretName}?sourced=true \
    --parameters sourceVault=${SourceVaultName} sourceResourceGroup=${SourceResourceGroup} sourceSecretName=${SourceSecretName} update=true \
  || az group deployment create \
    -n ${SecretName}-secret \
    -g $(prefix)-$(projectName)-$(projectEnv) \
    --template-uri https://management.dotnetdevops.org/providers/DotNetDevOps.AzureTemplates/templates/KeyVaults/${keyVaultName}/secrets/${SecretName}?sourced=true \
    --parameters sourceVault=${SourceVaultName} sourceResourceGroup=${SourceResourceGroup} sourceSecretName=${SourceSecretName}) \
  | jq -r '.properties.outputs.secretUriWithVersion.value')
The URL https://management.dotnetdevops.org/providers/DotNetDevOps.AzureTemplates/templates/KeyVaults/{keyvaultName}/secrets/{secretName}?sourced=true returns a template:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "keyVaultName": {
      "type": "string",
      "defaultValue": "io-board-data-ingest-dev"
    },
    "secretName": {
      "type": "string",
      "metadata": {
        "description": "Name of the secret to store in the vault"
      },
      "defaultValue": "DataStorage"
    },
    "sourceVaultSubscription": {
      "type": "string",
      "defaultValue": "[subscription().subscriptionId]"
    },
    "sourceVault": {
      "type": "string",
      "defaultValue": "[subscription().subscriptionId]"
    },
    "sourceResourceGroup": {
      "type": "string",
      "defaultValue": "[resourceGroup().name]"
    },
    "sourceSecretName": {
      "type": "string"
    },
    "update": {
      "type": "bool",
      "defaultValue": false
    }
  },
  "variables": {
    "empty": {
      "value": ""
    },
    "test": {
      "reference": {
        "keyVault": {
          "id": "[resourceId(subscription().subscriptionId, resourceGroup().name, 'Microsoft.KeyVault/vaults', parameters('keyVaultName'))]"
        },
        "secretName": "[parameters('secretName')]"
      }
    }
  },
  "resources": [
    {
      "apiVersion": "2018-05-01",
      "name": "AddLinkedSecret",
      "type": "Microsoft.Resources/deployments",
      "properties": {
        "mode": "Incremental",
        "templateLink": {
          "uri": "[concat('https://management.dotnetdevops.org/providers/DotNetDevOps.AzureTemplates/templates/KeyVaults/',parameters('keyVaultName'),'/secrets/',parameters('secretName'))]",
          "contentVersion": "1.0.0.0"
        },
        "parameters": {
          "existingValue": "[if(parameters('update'),variables('test'),variables('empty'))]",
          "secretValue": {
            "reference": {
              "keyVault": {
                "id": "[resourceId(parameters('sourceVaultSubscription'), parameters('sourceResourceGroup'), 'Microsoft.KeyVault/vaults', parameters('sourceVault'))]"
              },
              "secretName": "[parameters('sourceSecretName')]"
            }
          }
        }
      }
    }
  ],
  "outputs": {
    "secretUriWithVersion": {
      "type": "string",
      "value": "[reference('AddLinkedSecret').outputs.secretUriWithVersion.value]"
    }
  }
}
and that template makes a nested call to https://management.dotnetdevops.org/providers/DotNetDevOps.AzureTemplates/templates/KeyVaults/{keyvaultName}/secrets/{secretName}, which yields the template with the condition:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "keyVaultName": {
      "type": "string",
      "defaultValue": "io-board-data-ingest-dev",
      "metadata": {
        "description": "Name of the existing vault"
      }
    },
    "secretName": {
      "type": "string",
      "metadata": {
        "description": "Name of the secret to store in the vault"
      },
      "defaultValue": "DataStorage"
    },
    "secretValue": {
      "type": "securestring",
      "metadata": {
        "description": "Value of the secret to store in the vault"
      }
    },
    "existingValue": {
      "type": "securestring",
      "defaultValue": ""
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "Microsoft.KeyVault/vaults/secrets",
      "condition": "[not(equals(parameters('existingValue'),parameters('secretValue')))]",
      "apiVersion": "2015-06-01",
      "name": "[concat(parameters('keyVaultName'), '/', parameters('secretName'))]",
      "properties": {
        "value": "[parameters('secretValue')]"
      }
    }
  ],
  "outputs": {
    "secretUriWithVersion": {
      "type": "string",
      "value": "[reference(resourceId(resourceGroup().name, 'Microsoft.KeyVault/vaults/secrets', parameters('keyVaultName'), parameters('secretName')), '2015-06-01').secretUriWithVersion]"
    }
  }
}
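As an aside, if the goal is just to branch on whether the secret already exists, that check can also live in the deployment script rather than in the template. A minimal sketch, assuming the az CLI and the variable names from the example above:
# Probe the target vault for the secret; pass update=true only when it already exists
if az keyvault secret show --vault-name "${keyVaultName}" --name "${SecretName}" >/dev/null 2>&1; then
  updateParam="update=true"
else
  updateParam="update=false"
fi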

How to directly mount an external NFS share/volume in Kubernetes (1.10.3)

I am using Kubernetes v1.10.3. I have one external NFS server which I am able to mount anywhere (on any physical machine). I want to mount this NFS share directly into a pod/container. I tried, but every time I get the error below. I don't want to use privileged mode; kindly help me fix this.
ERROR: MountVolume.SetUp failed for volume "nfs" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/d65eb963-68be-11e8-8181-00163eeb9788/volumes/kubernetes.io~nfs/nfs --scope -- mount -t nfs 10.225.241.137:/stagingfs/alt/ /var/lib/kubelet/pods/d65eb963-68be-11e8-8181-00163eeb9788/volumes/kubernetes.io~nfs/nfs
Output: Running scope as unit run-43393.scope.
mount: wrong fs type, bad option, bad superblock on 10.225.241.137:/stagingfs/alt/, missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try dmesg | tail or so.
NFS server: mount -t nfs 10.X.X.137:/stagingfs/alt /alt
I added the following two things for the volume but get the error every time.
First:
"volumeMounts": [
{
"name": "nfs",
"mountPath": "/alt"
}
],
Second:
"volumes": [
  {
    "name": "nfs",
    "nfs": {
      "server": "10.X.X.137",
      "path": "/stagingfs/alt/"
    }
  }
],
--------------------- complete manifest ---------------------
{
  "kind": "Deployment",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "jboss",
    "namespace": "staging",
    "selfLink": "/apis/extensions/v1beta1/namespaces/staging/deployments/jboss",
    "uid": "6a85e235-68b4-11e8-8181-00163eeb9788",
    "resourceVersion": "609891",
    "generation": 2,
    "creationTimestamp": "2018-06-05T11:34:32Z",
    "labels": {
      "k8s-app": "jboss"
    },
    "annotations": {
      "deployment.kubernetes.io/revision": "2"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "k8s-app": "jboss"
      }
    },
    "template": {
      "metadata": {
        "name": "jboss",
        "creationTimestamp": null,
        "labels": {
          "k8s-app": "jboss"
        }
      },
      "spec": {
        "volumes": [
          {
            "name": "nfs",
            "nfs": {
              "server": "10.X.X.137",
              "path": "/stagingfs/alt/"
            }
          }
        ],
        "containers": [
          {
            "name": "jboss",
            "image": "my.abc.com/alt:7.1_1.1",
            "resources": {},
            "volumeMounts": [
              {
                "name": "nfs",
                "mountPath": "/alt"
              }
            ],
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "IfNotPresent",
            "securityContext": {
              "privileged": true
            }
          }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "securityContext": {},
        "schedulerName": "default-scheduler"
      }
    },
    "strategy": {
      "type": "RollingUpdate",
      "rollingUpdate": {
        "maxUnavailable": "25%",
        "maxSurge": "25%"
      }
    },
    "revisionHistoryLimit": 10,
    "progressDeadlineSeconds": 600
  },
  "status": {
    "observedGeneration": 2,
    "replicas": 1,
    "updatedReplicas": 1,
    "readyReplicas": 1,
    "availableReplicas": 1,
    "conditions": [
      {
        "type": "Available",
        "status": "True",
        "lastUpdateTime": "2018-06-05T11:35:45Z",
        "lastTransitionTime": "2018-06-05T11:35:45Z",
        "reason": "MinimumReplicasAvailable",
        "message": "Deployment has minimum availability."
      },
      {
        "type": "Progressing",
        "status": "True",
        "lastUpdateTime": "2018-06-05T11:35:46Z",
        "lastTransitionTime": "2018-06-05T11:34:32Z",
        "reason": "NewReplicaSetAvailable",
        "message": "ReplicaSet \"jboss-8674444985\" has successfully progressed."
      }
    ]
  }
}
Regards
Anupam Narayan
As stated in the error log:
for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program
According to this question, you might be missing the nfs-common package, which you can install using sudo apt install nfs-common.
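Note that the package has to be present on every node the pod can be scheduled on, since it is the kubelet on the node that performs the mount (hence the /var/lib/kubelet paths in the error). For example, on Debian/Ubuntu nodes:
# Run on each Kubernetes node, not inside the container
sudo apt update && sudo apt install -y nfs-common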

How to download Google Compute Engine disk content?

I have linked a persistent volume to my Kubernetes Neo4j ReplicationController to store the DB data. Now I would like to download that data locally to run the production DB on my system. I can't find a way to download the disk content. Can someone point me in the right direction?
Updates (Persistent Volume Creation with Kubernetes):
persistent-volume-db.json
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "pv-db"
  },
  "spec": {
    "capacity": {
      "storage": "500Gi"
    },
    "accessModes": [
      "ReadWriteMany"
    ],
    "gcePersistentDisk": {
      "pdName": "tuwa-db-data-disk",
      "fsType": "ext4"
    }
  }
}
persistent-volume-claim-db.json
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "pvc"
  },
  "spec": {
    "accessModes": [
      "ReadWriteMany"
    ],
    "resources": {
      "requests": {
        "storage": "500Gi"
      }
    }
  }
}
And then the usage:
neo4j-controller.json
{
  "kind": "ReplicationController",
  "apiVersion": "v1",
  "metadata": {
    "name": "neo4j-controller",
    "labels": {
      "name": "neo4j"
    }
  },
  "spec": {
    "replicas": 1,
    "template": {
      "metadata": {
        "labels": {
          "name": "neo4j"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "neo4j",
            "image": "neo4j/neo4j",
            "ports": [
              {
                "name": "neo4j-server",
                "containerPort": 7474
              }
            ],
            "volumeMounts": [
              {
                "mountPath": "/data/databases",
                "name": "pv-db"
              }
            ]
          }
        ],
        "volumes": [
          {
            "name": "pv-db",
            "persistentVolumeClaim": {
              "claimName": "pvc-db"
            }
          }
        ]
      }
    }
  }
}
GCE's admin panel doesn't have a "download" button for persistent disks, but gcloud makes it easy to copy files from an instance to your local machine:
gcloud compute copy-files example-instance:~/REMOTE-DIR ~/LOCAL-DIR --zone us-central1-a
This will copy ~/REMOTE-DIR from a remote instance into ~/LOCAL-DIR on your machine. Just replace the directory names and example-instance with your instance's name, and adjust the zone if necessary. More info here in the docs.
Note: gcloud compute copy-files has been deprecated.
Please use gcloud compute scp instead. Note that gcloud compute scp does not have recursive copy on by default. To turn on recursion, use the --recurse flag.
gcloud compute scp --recurse example-instance:~/instance-1_path ~/local_path --zone=us-central1-a
This will copy ~/instance-1_path from a remote compute instance into ~/local_path on your personal machine. Just replace the directory names and example-instance with your instance's name, and adjust the zone if necessary.
Complete documentation of this SDK can be found here.
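Note that both commands copy from an instance's filesystem, so the persistent disk has to be attached and mounted on some instance first. A hedged sketch, assuming a helper instance named copy-helper in the same zone, the disk name from the question, and that the disk is not attached read-write elsewhere (the /dev/sdb device name is also an assumption; check with lsblk):
gcloud compute instances attach-disk copy-helper --disk tuwa-db-data-disk --mode ro --zone us-central1-a
# Mount read-only inside the instance before copying
gcloud compute ssh copy-helper --zone us-central1-a -- 'sudo mkdir -p /mnt/db && sudo mount -o ro /dev/sdb /mnt/db'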
