REST API for Docker Hub Registry

The REST API for registry.hub.docker.com does not seem to match the documented API.
For example, curl -k https://registry.hub.docker.com/v1/repositories/busybox/tags returns:
[{"layer": "4986bf8c", "name": "latest"}, {"layer": "2aed48a4", "name": "buildroot-2013.08.1"}, ... ]
But https://docs.docker.com/reference/api/registry_api/#tags says it should return a map of tag -> id. That's what I see when I make a similar request to a registry I'm running locally.
Is the REST API for the Docker Hub Registry supposed to be different for some reason?
Is this a bug?

It looks like instead of returning a map of the form
{_tag_: _id_}
it returns a list of the form
[{"layer": _id_, "name": _tag_}]
But you've got the same information at the end of the day.
Check out the docs, because the Registry API seems to behave slightly differently from the Hub.
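If you need the documented map shape, you can reshape the Hub's response yourself. A minimal sketch with jq (assuming jq is installed; busybox is just the example repository from the question):
curl -s https://registry.hub.docker.com/v1/repositories/busybox/tags | jq 'map({(.name): .layer}) | add'
# => {"latest": "4986bf8c", "buildroot-2013.08.1": "2aed48a4", ...}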

Related

DSM docker parse-server can't access cloud code or config file

I used yongjhih/parse-server as my container image for a while, but that image hasn't been updated in over 4 years, so I moved to the official parseplatform/parse-server image and connected it to the official parse-dashboard image.
Both are working fine, and I've successfully saved objects to the DB using the dashboard console.
Problems:
My parse-server ignores the config.json file in the mounted folder, so for now I use only environment variables.
When trying to access cloud code via the REST console I get 404 (Not Found) in the inspector, with the response Cannot GET /parse/endpointName, but I don't get any errors or warnings in the logs.
I've also disabled GraphQL and the playground for now because they make the whole container crash with the error GraphQLError [Object]: Syntax Error: Unexpected Name "const", which probably means I'm missing some imports, but it also means the server can actually access my mounted folders.
The dashboard can't access parse-server via localhost; I solved that by using the IP address for serverURL in the dashboard config.json.
I'm using the docker (GUI) app on my Synology NAS with DSM 7.
My volumes: (screenshot omitted)
Environment variables: (screenshot omitted)
My dashboard config.json:
{
  "apps": [{
    "serverURL": "http://192.168.1.5:1337/parse",
    "appId": "appId",
    "masterKey": "masterKey",
    "appName": "appName"
  }],
  "users": [
    {
      "user": "user",
      "pass": "pass"
    }
  ]
}
--------------------------Edit--------------------------
So moving from such an old server to the new image meant a lot of changes:
Cloud code is accessible; I just needed to fix the syntax a bit.
GraphQL is now Relay, so I'll have to fix my schema.js too.
I still can't find a way to use a config file instead of environment variables.
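For reference, the official parseplatform/parse-server image can take the path to a configuration file as its command argument (per the image's README), which may be a way around the ignored config.json. A sketch, where the host path /volume1/docker/parse/config is an assumption to adapt to your DSM shared folder:
# Pass the config file path explicitly instead of relying on auto-detection
docker run --name parse-server -p 1337:1337 \
  -v /volume1/docker/parse/config:/parse-server/config \
  parseplatform/parse-server /parse-server/config/config.json
In the DSM Docker GUI, the equivalent would be overriding the container's execution command with that config path.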

Docker pull fails when using the HTTP exposed API

What I am trying to do
I need to be able to pull a private image stored in Docker Hub when using the daemon exposed over HTTP:
http://127.0.0.1:2375/v1.35/images/create?fromImage=akkovachev/test-repository
akkovachev/test-repository is a private repo in Docker Hub; when I run the above POST request I get:
{
  "message": "pull access denied for akkovachev/test-repository, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"
}
So there's probably something I am missing, but I don't understand what. What I tried to do is change the auth entry to the base64-encoded uname:password in the file
~/.docker/config.json
I have also tried the base64-encoded header in the format described here: https://docs.docker.com/engine/api/v1.35/#section/Authentication, but again the same issue. I am posting here as a last resort, as I was unable to find a good explanation of why this happens. Many people say they face the same problem, but mostly in the CLI. I am using Docker version v20.10.5 for Windows.
It's important to note that the issue occurs only when I try to do this via the HTTP-exposed daemon. It works fine when I do it in the CLI:
docker pull akkovachev/test-repository
Everything works as expected and the image is pulled correctly.
I need to be able to pull through the API as I have my own REST API built around that, and I need to be able to pull through the exposed Docker daemon.
Headers I am using in Postman: (screenshot omitted)
I have managed to figure it out. The problem was in the base64-encoded string that we pass as the X-Registry-Auth header. I was missing the "serveraddress" property in the JSON:
{
  "username": "string",
  "password": "string",
  "email": "string",
  "serveraddress": "string"
}
I added that, pointed it to https://index.docker.io/v1/, and then it worked just fine.
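For anyone hitting the same wall, here is a minimal sketch of the full flow from the command line (the credentials are placeholders; -w0 assumes GNU base64, and the API docs specify URL-safe base64, though standard base64 typically works for simple payloads):
# Build the X-Registry-Auth value: base64 of the auth JSON, including serveraddress
AUTH=$(echo -n '{"username":"myuser","password":"mypassword","email":"me@example.com","serveraddress":"https://index.docker.io/v1/"}' | base64 -w0)
# Pull the private image through the exposed daemon
curl -X POST \
  -H "X-Registry-Auth: ${AUTH}" \
  "http://127.0.0.1:2375/v1.35/images/create?fromImage=akkovachev/test-repository&tag=latest"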

How to use/connect to the Sonatype Nexus Docker registry v2 API in a web application?

I have a private Sonatype Nexus Repository Manager OSS 3.25.1-04 container running on a VM (with nginx routing from docker.io to the repository manager URL) that contains a few repositories, one of them a Docker registry.
I want to use the docker registry v2 api from a react app to get a listing for the docker images in the repository and maybe some more metrics about the repo and its contents.
I tried calling the API directly: https://nexus3:8083/v2/_catalog, but got 401 Unauthorized in the response when checking the devtools network tab.
Then, to log in to the API, I tried using https://auth.docker.io/token?service=registry.docker.io&scope=repository:samalba/my-app:pull,push, substituting samalba/my-app with my own registry and an example Docker image. I know this link only gets a token for that one image; I couldn't find one for the entire API (and it didn't work anyway).
I could use some help on how to connect to the API / get a JWT token and use it, or how to use the API over plain HTTP instead.
A few things may be going on. First, try just using basic authentication and seeing if that works. Additionally, you may need to set some additional headers to connect to nexus / sonatype. Here is an example with curl:
curl -H "Accept: application/vnd.docker.distribution.manifest.v2+json" -H "Content-Type: application/json" -H "User-Agent: docker/20.10.14" -u username:password -i https://nexus3:8083/v2/_catalog
Note the User-Agent field -- I've run into issues where the authentication layer filters on the Docker user agent.
If that still doesn't work, the next thing to check is whether the registry responds with the header Www-Authenticate. That means you will need to first authenticate with that service to retrieve a Bearer token, and then pass it back to the registry in the Authorization header instead of basic auth.
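In practice, the token dance with curl can look like this (a sketch only: the realm, service, and scope must be taken from the Www-Authenticate header your registry actually returns, and jq is assumed to be installed):
# 1. Inspect the challenge the registry sends back
curl -si https://nexus3:8083/v2/_catalog | grep -i www-authenticate
# 2. Fetch a Bearer token from the advertised realm (example values, not real endpoints)
TOKEN=$(curl -s -u username:password "https://nexus3:8083/v2/token?service=nexus3&scope=registry:catalog:*" | jq -r .token)
# 3. Replay the request with the token
curl -H "Authorization: Bearer ${TOKEN}" https://nexus3:8083/v2/_catalog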
Hope that helps.

Http Trigger Azure Function in Docker with non-anonymous authLevel

I am playing around with an HTTP-triggered Azure Function in a Docker container. Up to now, all tutorials and guides I have found on setting this up configure the Azure Function with the authLevel set to "anonymous".
After reading this blog carefully, it seems possible (although tricky) to also configure other authentication levels. Unfortunately, the promised follow-up blog post has not (yet) been written.
Can anyone help clarify how I would go about setting this up?
To control the master key the Function host uses on startup - instead of letting it generate random keys - prepare your own host_secrets.json file like:
{
  "masterKey": {
    "name": "master",
    "value": "asGmO6TCW/t42krL9CljNod3uG9aji4mJsQ7==",
    "encrypted": false
  },
  "functionKeys": [{
    "name": "default",
    "value": "asGmO6TCW/t42krL9CljNod3uG9aji4mJsQ7==",
    "encrypted": false
  }]
}
and then feed this file into the designated secrets folder of the Function host (Dockerfile):
for V1 Functions (assuming your runtime root is C:\WebHost):
...
ADD host_secrets.json C:\\WebHost\\SiteExtensions\\Functions\\App_Data\\Secrets\\host.json
...
for V2 Functions (assuming your runtime root is C:\runtime):
...
ADD host_secrets.json C:\\runtime\\Secrets\\host.json
USER ContainerAdministrator
RUN icacls "c:\runtime\secrets" /t /grant Users:M
USER ContainerUser
ENV AzureWebJobsSecretStorageType=files
...
The function keys can be used to call protected functions like .../api/myfunction?code=asGmO6TCW/t42krL9CljNod3uG9aji4mJsQ7==.
The master key can be used to call Functions Admin API and Key management API.
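For example, with the file-based secrets above in place, the Admin API can be exercised like this (a sketch; localhost and the key are just the example values from this answer, and the host must be reachable on that address):
# Query the Functions host status using the master key from host_secrets.json
curl -H "x-functions-key: asGmO6TCW/t42krL9CljNod3uG9aji4mJsQ7==" http://localhost/admin/host/status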
In my blog I describe the whole journey of bringing V1 and later V2 Functions runtime into Docker containers and host those in Service Fabric.
for V3 Functions on Windows:
ENV FUNCTIONS_SECRETS_PATH=C:\Secrets
ENV AzureWebJobsSecretStorageType=Files
ADD host_secrets.json C:\\Secrets\\host.json
for V3 Functions on Linux:
RUN mkdir /etc/secrets/
ENV FUNCTIONS_SECRETS_PATH=/etc/secrets
ENV AzureWebJobsSecretStorageType=Files
ADD host_secrets.json /etc/secrets/host.json
I found a solution for myself, even though this post is out of date. My goal was to run an HTTP-triggered Azure Function in a Docker container with the function authLevel. For this I used the following Docker image: Azure Functions Python from Docker Hub.
I pushed my container to an Azure Container Registry once my repository was ready there. I wanted to run my container serverless via Azure Functions, so I followed the following post and created a new Azure Function in my Azure Portal.
Thus, the container content corresponds to an Azure Function image, and the operation of the container itself is handled by Azure through an Azure Function. This way may not always be popular, but it offers advantages for hosting a container there. The container can easily be selected from the Azure Container Registry via the Deployment Center.
To make the container image accessible via the function authLevel, the Azure Function (~3) cannot create a host key, as this is managed within the container. So I proceeded as follows:
Customizing my function.json
"authLevel": "function",
"type": "httpTrigger",
Providing a storage account so that the Azure Function can obtain configurations there. Create a new container there:
azure-webjobs-secrets
Create a directory inside the container with the name of your Azure Function.
my-function-name
A host.json can now be stored in the directory. This contains the master key.
{"masterKey": {
"name": "master",
"value": "myprivatekey",
"encrypted": false }, "functionKeys": [ ] }
Now the Azure Function has to be configured to get access to the storage account. The following values must be added to the configuration.
AzureWebJobsStorage = Storage Account Connection String
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = Storage Account Connection String
WEBSITE_CONTENTSHARE = my-function-name
From now on, the stored Azure Function master key is available. The container API is thus configured via authLevel function and only accessible with the corresponding key.
URL: https://my-function-name.azurewebsites.net/api/helloworld
HEADER: x-functions-key = myprivatekey
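Put together as a single curl call, using the example values above:
curl -H "x-functions-key: myprivatekey" https://my-function-name.azurewebsites.net/api/helloworld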

Is there any way I can automate configuring Nexus 3?

I am currently setting up a Nexus 3 OSS server. I've created an Ansible role that will set up the official nexus3 Docker container behind an nginx reverse proxy. I have my storage set up separately so my artifacts will persist if the instance gets killed (say, for a base image update). I'd like to set up the Ansible role so I don't have to go into the Nexus GUI to set up LDAP and repositories every time I recreate the server. Is there a way to inject this kind of configuration into Nexus?
Nexus Repository Manager 3 includes a scripting API that you can use for this sort of work. Have a look at the documentation and the demo videos.
If you find anything we should expand the API for, or need some help, contact us on the mailing list or via live chat.
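For example, a provisioning step can upload a Groovy script through the scripting API and then execute it by name (a sketch; the admin credentials and the script body are placeholder assumptions):
# Upload a script named "provision" to the scripting API
curl -u admin:admin123 -X POST "http://localhost:8081/service/rest/v1/script" \
  -H "Content-Type: application/json" \
  -d '{"name": "provision", "type": "groovy", "content": "log.info(\"provisioning hook\")"}'
# Run it by name
curl -u admin:admin123 -X POST "http://localhost:8081/service/rest/v1/script/provision/run" \
  -H "Content-Type: text/plain"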
There is a pretty easy way to automate Nexus. I usually do it by following the Nexus API, which is documented with a tool called Swagger. You can go to http://localhost:8081/#admin/system/api, or navigate to:
System administration and configuration > System > API
There you can check the complete API documentation of Nexus, and for provisioning you can create a script containing multiple curl calls to any API you want.
A generated API request will look like this:
# Add jenkins user
curl -X 'POST' \
"http://${NEXUS_URL}/service/rest/v1/security/users" \
-u "admin:admin123" \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"userId" : "jenkins",
"firstName" : "jenkins",
"lastName" : "jenkins",
"emailAddress" : "jenkins#domain.com",
"password" : "jenkins",
"status" : "active",
"roles" : [ "nx-admin" ]
}'
This, for example, will create a new jenkins user.
You can find more in the Nexus API documentation.
I've just resorted to creating proxy repositories in Nexus 2; I'll turn these into hosted repos later. The storage here is much more straightforward and accessible, and I've hosted it on a discrete persistent EBS volume. I'll use this for now and upgrade to 3.1 when that's released. Thanks anyway!
