I am currently setting up a Nexus 3 OSS server. I've created an Ansible role that will set up the official Nexus 3 Docker container behind an nginx reverse proxy. I have my storage set up separately so my artifacts will persist if the instance gets killed (for, say, a base image update). I'd like to set up the Ansible role so I don't have to go into the Nexus GUI to set up LDAP and repositories every time I recreate the server. Is there a way to inject this kind of configuration into Nexus?
Nexus Repository Manager 3 includes a scripting API that you can use for this sort of work. Have a look at the documentation and the demo videos.
If you find anything the API should be expanded to cover, or if you need some help, contact us on the mailing list or via live chat.
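For a concrete starting point, scripts can be uploaded and run over REST. A minimal sketch, assuming default credentials and port; the /service/rest/v1/script endpoint is from recent 3.x releases (older ones exposed it under /service/siesta/rest/v1/script):
# upload a trivial Groovy script, then run it
curl -u admin:admin123 -X POST 'http://localhost:8081/service/rest/v1/script' \
-H 'Content-Type: application/json' \
-d '{"name":"hello","type":"groovy","content":"log.info(\"hello from a script\")"}'
curl -u admin:admin123 -X POST 'http://localhost:8081/service/rest/v1/script/hello/run' \
-H 'Content-Type: text/plain'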
There is a pretty easy way to automate Nexus: follow the Nexus REST API, which is documented with a tool called Swagger. To find it, go to http://localhost:8081/#admin/system/api or navigate to:
System administration and configuration > System > API
There you can browse the complete Nexus API documentation, and for provisioning you can create a script containing multiple curl calls to any API you want.
The generated API request will look like this:
# Add jenkins user
curl -X 'POST' \
"http://${NEXUS_URL}/service/rest/v1/security/users" \
-u "admin:admin123" \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"userId" : "jenkins",
"firstName" : "jenkins",
"lastName" : "jenkins",
"emailAddress" : "jenkins#domain.com",
"password" : "jenkins",
"status" : "active",
"roles" : [ "nx-admin" ]
}'
This, for example, will create a new jenkins user.
You can find more in the Nexus API documentation.
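Since the question also asks about provisioning repositories, here is a hedged sketch in the same style, creating a hosted Maven repository via the v1 API (the repository name and blob store are placeholders; check the Swagger UI on your version for the exact payload):
# Add a hosted Maven repository
curl -X 'POST' \
"http://${NEXUS_URL}/service/rest/v1/repositories/maven/hosted" \
-u "admin:admin123" \
-H 'Content-Type: application/json' \
-d '{
"name" : "maven-internal",
"online" : true,
"storage" : { "blobStoreName" : "default", "strictContentTypeValidation" : true, "writePolicy" : "ALLOW_ONCE" },
"maven" : { "versionPolicy" : "RELEASE", "layoutPolicy" : "STRICT" }
}'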
I've just resorted to creating proxy repositories in Nexus 2; I'll turn these into hosted repos later. The storage here is much more straightforward and accessible, and I've hosted it on a discrete persistent EBS volume. I'll use this for now and upgrade to 3.1 when that's released. Thanks anyway!
I need to create an HTTP access token for a repository which allows me to pull modules from it while building a nodeJS application in another repository.
This was done in the past by using a personal access token from one of the employees and I want to change that.
I referred to this article (https://confluence.atlassian.com/bitbucketserver/personal-access-tokens-939515499.html), in which the steps are stated as follows:
Create HTTP access tokens for projects or repositories
HTTP access tokens can be created for teams to grant permissions at the project or repository level rather than for specific users.
To create an HTTP access token for a project or repository (requires project or repository admin permissions):
From either the Project or Repository settings, select HTTP access tokens.
Select Create token.
Set the token name, permissions, and expiry.
The problem is in my repository settings, I can't find "HTTP access tokens".
I'm using Bitbucket Cloud whereas the article refers to Bitbucket Server; could that be the problem? If so, is this option simply not available in Bitbucket Cloud?
Atlassian has vast documentation, but I have a problem with it and still don't understand how to get an access token to be able to simply download archives from private repositories.
So here is my step by step tutorial
Insert your workspace name instead of {workspace_name} and go to the following link in order to create an OAuth consumer:
https://bitbucket.org/{workspace_name}/workspace/settings/api
set the callback URL to http://localhost:8976 (there doesn't need to be a real server there)
select permissions: repository -> read
use the consumer's Key as {client_id} and open the following URL in the browser:
https://bitbucket.org/site/oauth2/authorize?client_id={client_id}&response_type=code
after you press "Grant access" in the browser it will redirect you to
http://localhost:8976?code=<CODE>
Note: you can spin up a local server to automate this step; see the sketch below
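A minimal sketch, assuming BSD netcat is available (GNU netcat wants nc -l -p 8976 instead): it listens once on the callback port, answers the browser so it stops waiting, and prints the request line that carries the code:
printf 'HTTP/1.1 200 OK\r\n\r\nYou can close this tab.' | nc -l 8976
# prints the browser's request, e.g.: GET /?code=<CODE> HTTP/1.1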
use the code from the previous step, the consumer's Key as {client_id}, and the consumer's Secret as {client_secret}:
curl -X POST -u "{client_id}:{client_secret}" \
https://bitbucket.org/site/oauth2/access_token \
-d grant_type=authorization_code \
-d code={code}
you should receive similar JSON back:
{
"access_token": <access_token>,
"scopes": "repository",
"token_type": "bearer",
"expires_in": 7200,
"state": "authorization_code",
"refresh_token": <refresh_token>
}
use the access token in the following manner:
curl https://api.bitbucket.org/2.0/repositories/{workspace_name} \
--header "Authorization: Bearer {access_token}"
Whilst your question is about Bitbucket Cloud, the article you linked is for Atlassian's self-hosted source control tool Bitbucket Server. They have different functionality for different use cases, which is why they don't look the same.
Depending on your use case you can use App passwords or OAuth instead.
Full disclosure: I work for Atlassian
The easiest way to do it is:
Create an OAuth consumer in your Bitbucket settings (also provide a dummy redirect like http://localhost:3000), then copy the KEY and SECRET.
Use curl -X POST -u "KEY:SECRET" https://bitbucket.org/site/oauth2/access_token -d grant_type=client_credentials to get JSON data with the access token.
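As a hedged follow-up sketch, assuming jq is installed: extract the token from the JSON and use it as the x-token-auth user to clone over HTTPS ({workspace} and {repo} are placeholders):
TOKEN=$(curl -s -X POST -u "KEY:SECRET" https://bitbucket.org/site/oauth2/access_token \
-d grant_type=client_credentials | jq -r .access_token)
git clone "https://x-token-auth:${TOKEN}@bitbucket.org/{workspace}/{repo}.git"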
I have a private sonatype nexus repository manager OSS 3.25.1-04 container running on a vm (with nginx routing from docker.io to repo manager url) that contains a few repositories, one of them is a docker registry.
I want to use the docker registry v2 api from a react app to get a listing for the docker images in the repository and maybe some more metrics about the repo and its contents.
I tried calling the API directly: https://nexus3:8083/v2/_catalog, but got 401 Unauthorized in the response when checking the devtools network tab.
Then, to log in to the API, I tried using https://auth.docker.io/token?service=registry.docker.io&scope=repository:samalba/my-app:pull,push, substituting samalba/my-app with my own registry and an example Docker image. I know this link only gets a token for that one image; I couldn't find one for the entire API (it didn't work anyway).
Could use some help on how to connect to the API / get a JWT token and use it, or how to use the API over HTTP instead.
A few things may be going on. First, try just using basic authentication and see if that works. Additionally, you may need to set some additional headers to connect to Nexus / Sonatype. Here is an example with curl:
curl -H "Accept: application/vnd.docker.distribution.manifest.v2+json" -H "Content-Type: application/json" -H "User-Agent: docker/20.10.14" -u username:password -i https://nexus3:8083/v2/_catalog
Note the User-Agent field -- I've run into issues where the authentication layer filters on the Docker user agent.
If that still doesn't work, then the next thing to check is whether the registry responds with the header www-authenticate. This means you will need to first authenticate with that service to retrieve a Bearer token, and then pass it back to the registry in the Authorization header instead of basic auth.
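A hedged sketch of that flow: the realm and service values below are placeholders and must be read from the WWW-Authenticate header your registry actually returns; registry:catalog:* is the scope the _catalog endpoint asks for:
# read realm/service from the WWW-Authenticate header; placeholders here
REALM="https://nexus3:8083/v2/token"
SERVICE="nexus3"
TOKEN=$(curl -s -u username:password "${REALM}?service=${SERVICE}&scope=registry:catalog:*" \
| sed -E 's/.*"token" *: *"([^"]+)".*/\1/')
curl -H "Authorization: Bearer ${TOKEN}" https://nexus3:8083/v2/_catalog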
Hope that helps.
I have a cloud run app deployed that is for internal use only.
Therefore only users of our cluster should have access to it.
I added the permission for allAuthenticated members giving them the role Cloud Run Invoker.
The problem is that those users (including me) now have to add an Authorization: Bearer header every time they want to access that app.
This is what Cloud Run suggests doing (somewhat useless when you simply want to visit a frontend app):
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
https://importer-controlroom-frontend-xl23p3zuiq-ew.a.run.app
I wonder why GCP can't simply recognize me as an authorized member. I can access the cluster, but to access the Cloud Run app as an authorized member I have to add the authorization header. I find this very inconvenient.
Is there any way to make it way more fun to access the deployed cloud run app?
PS: I do not want to place the app in our cluster - so only fully managed is an option here.
You currently can't do that without the Authorization header on Cloud Run.
allAuthenticated subject means any Google user (or service account), so you need to add the identity-token to prove you're one.
If you want to make your application public, read this doc.
But this is a timely request! I am currently running an experiment that lets you make requests to http://hello and automatically get routed to the full domain + automatically get the Authorization header injected! (This is for communication between Cloud Run applications.)
GCP now offers a proxy tool for making this easier, although it's in beta as of writing this.
It's part of the gcloud suite; you can run:
gcloud beta run services proxy $servicename --project $project --region $region
It will launch a web server on localhost:8080 that forwards all requests to the targeted service, injecting the user's GCP token into each request.
Of course this can only be used locally.
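A hedged usage sketch with placeholder service, project, and region names: run the proxy in one terminal, then hit localhost in another without any Authorization header:
gcloud beta run services proxy importer-controlroom-frontend --project my-project --region europe-west1
# in another shell (or just open http://localhost:8080 in the browser):
curl http://localhost:8080/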
I'm creating an Azure App Service based on a Docker image. The image is in the Docker public registry, so I want the service to 'know' when there's a new version of the image (same tag).
I thought the WebHook under Continuous Deployment was to achieve that, but when I call it with curl I get the message from the subject.
I couldn't find the right doc... is that WebHook URL for what I think (hope) it is? is there a specific HTTP verb to use?
EDIT: I mean the WebHook URL found under Continuous Deployment in my Container Settings in Azure
I was stuck on this one for some time as well, until I realized that it requires a POST HTTP request to that URL.
Here is an example of the curl request that I have in my GitLab CI script:
curl -X POST "https://\$$AZURE_DEPLOY_USER:$AZURE_DEPLOY_PASSWORD#$AZURE_KUDU_URL/docker/hook" -d -H
It requires the following variables to be set in the environment, or you can replace them directly with your values:
$AZURE_DEPLOY_USER
$AZURE_DEPLOY_PASSWORD
$AZURE_KUDU_URL
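For reference outside CI, a hedged sketch with placeholder values: the Kudu URL is the app's *.scm.azurewebsites.net host, and since the Azure deployment username itself starts with "$", single-quote the URL in bash:
curl -X POST 'https://$myapp:deployPassword@myapp.scm.azurewebsites.net/docker/hook'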
The REST API for registry.hub.docker.com does not seem to match the documented API.
For example, curl -k https://registry.hub.docker.com/v1/repositories/busybox/tags returns:
[{"layer": "4986bf8c", "name": "latest"}, {"layer": "2aed48a4", "name": "buildroot-2013.08.1"}, ... ]
But https://docs.docker.com/reference/api/registry_api/#tags says it should return a map of tag -> id. That's what I see when I make a similar request to a registry I'm running locally.
Is the REST API for the Docker Hub Registry supposed to be different for some reason?
Is this a bug?
It looks like instead of returning
{_tag_: _id_}
it returns
[{"layer": _id_, "name": _tag_}]
But you've got the same information at the end of the day.
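If you need the documented map shape, a hedged one-liner assuming jq is installed can reshape the Hub's response:
curl -s https://registry.hub.docker.com/v1/repositories/busybox/tags \
| jq 'map({(.name): .layer}) | add'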
Check out these docs, because the Registry API seems to behave slightly differently than the Hub.