Unable to push Helm charts to JFrog Artifactory

I am trying to add the Helm repository to my JFrog account, but it gives me an error. I have created a remote repository.
How can I push using the Helm client?
helm repo add <key> https://ip/artifactory/<key> --username username --password password
Error: looks like "https://ip/artifactory/" is not a valid chart repository or cannot be reached: failed to fetch https://ip/artifactory//index.yaml : 401 Unauthorized

You cannot push to a remote repository. A remote repository is meant only to proxy and cache repositories hosted elsewhere, not to accept artifacts pushed through Artifactory.
If you look at the documentation, you'll see you need virtual, local, and remote Helm repositories.
For example:
helm-local -> your local repo
helm-remote -> your remote repo
helm -> your virtual repo
For the virtual repository, you set the default deployment repository (where the charts you upload are pushed to); this should be the local repository you created.
Once they are set up, you log in against the virtual repository, https://ip/artifactory/helm. The helm repo add command uses this URL.
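Put together, the client-side setup might look like the sketch below. The host, repository names, and credentials are placeholders, and the `run` wrapper only prints each command so you can review it before executing:

```shell
# Hypothetical host and repo names -- substitute your own.
ARTIFACTORY="https://ip/artifactory"
VIRTUAL_REPO="helm"   # virtual repo aggregating helm-local and helm-remote

run() { echo "+ $*"; }   # dry-run printer; remove to actually execute

# Register the virtual repository with the Helm client:
run helm repo add "$VIRTUAL_REPO" "$ARTIFACTORY/$VIRTUAL_REPO" \
    --username myuser --password mypassword
run helm repo update
```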

The documentation says you cannot push to Artifactory with the Helm client alone; you need the JFrog CLI (https://jfrog.com/blog/master-helm-chart-repositories-artifactory/).
Create a local Helm repository with the id helm-local.
Install the JFrog CLI.
Run:
jfrog rt u your-helm-chart-package.tgz helm-local
When asked for the initial setup, confirm and configure your Artifactory:
Configure now? (y/n) [n]? y
Choose a server ID: artifactory
JFrog Platform URL: https://your-artifactory.fqdn.example.com/
JFrog username: admin-user-name
JFrog password or API key:
When done, you get a success message:
These files were uploaded:
📦 helm-local
└── 📄 your-helm-chart-package.tgz
{
  "status": "success",
  "totals": {
    "success": 1,
    "failure": 0
  }
}
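As an alternative to the JFrog CLI, Artifactory also accepts a plain HTTP PUT into the local repository. A sketch, with the same placeholder host and file names as above (the `run` wrapper just prints the command):

```shell
# Placeholder values matching the example above.
ARTIFACTORY="https://your-artifactory.fqdn.example.com/artifactory"
LOCAL_REPO="helm-local"
CHART="your-helm-chart-package.tgz"

run() { echo "+ $*"; }   # dry-run printer; remove to actually execute

# Upload the packaged chart with curl; Artifactory recalculates the
# repository's index.yaml after the upload.
run curl -u admin-user-name -T "$CHART" "$ARTIFACTORY/$LOCAL_REPO/$CHART"
```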

Related

Jenkins freestyle project using a GitLab API token never clones the repository, no error, build always succeeds

I've followed this GitLab tutorial link to connect my Jenkins server to GitLab.
Everything went fine, and I have:
created a personal access token in my GitLab profile
created a GitLab API token credential in the Jenkins system configuration using my GitLab access token, as stated in the tutorial
created a freestyle Jenkins job and chosen my GitLab connection from the dropdown
checked the Build when a change is pushed to GitLab checkbox
checked the Accepted Merge Request Events and Closed Merge Request Events checkboxes
generated a secret token from the above freestyle project
used the freestyle Jenkins project's secret token to create a webhook in the GitLab project repository's integration settings
Up to that point, everything went fine.
Then I pushed code including a Jenkinsfile to my GitLab repository and went to the Jenkins web UI to view the build status. The pipeline showed green, reporting a successful build, yet nothing had happened: no code was retrieved from GitLab (as shown in the attached console output screenshot), so no Jenkinsfile was executed and no error message was shown.
I tried to run the build manually from the web UI, but got the same result; there is no way to trigger my pipeline on push events from GitLab.
I thought maybe I should select Git in the Source Code Management section (I left it at None, as the tutorial doesn't mention it), but if I choose Git as SCM I cannot select my GitLab API token credentials. It seems we cannot use the GitLab plugin (API token) and the Git plugin for the same build project.
So how should I proceed to build my Jenkins project from GitLab with a Jenkinsfile, using a GitLab API token?
Does the GitLab tutorial miss some useful steps?
OK, I think I understand the issue now.
There are two sets of credentials: the GitLab API token gives access to GitLab's webhook and status API, and a separate credential is used for cloning the git repository during builds.
So you can't use the GitLab API token for cloning the repository; for that you have to use either an SSH key or a username/password combination. Furthermore, that dropdown is part of the Git plugin, not the GitLab plugin.
So the GitLab plugin can't tell the Git plugin which credentials should be available in the dropdown.
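One way to see the split is to verify the clone credentials outside Jenkins. The repository URL and token below are hypothetical, and the `run` wrapper only prints the commands:

```shell
# The API token drives webhooks and commit statuses; cloning needs ordinary
# git credentials. Hypothetical repo path and token:
REPO_PATH="gitlab.com/mygroup/myproject.git"

run() { echo "+ $*"; }   # dry-run printer; remove to actually execute

# HTTPS with a personal access token ("oauth2" is the literal username):
run git clone "https://oauth2:MY_TOKEN@$REPO_PATH"
# ...or SSH with a key registered in your GitLab profile:
run git clone "git@gitlab.com:mygroup/myproject.git"
```

If either clone works from the shell, the same credential (stored as a Jenkins username/password or SSH key credential) should work in the Git plugin's dropdown.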

Problem with pulling docker images from gcr in kubernetes plugin / jenkins

I have a GKE cluster with a running Jenkins master. I am trying to start a build using a pipeline whose slave is configured by the Kubernetes plugin (pod templates). I have a custom image for my Jenkins slave published in GCR (private access), and I have added credentials (a Google service account) for my GCR to Jenkins. Nevertheless, Jenkins/Kubernetes fails to start the slave because the image can't be pulled from GCR. When I use public images (jnlp), there is no issue.
But when I try to use the image from gcr, kubernetes says:
Failed to pull image "eu.gcr.io/<project-id>/<image name>:<tag>": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
This happens although the pod is running in the same project as the GCR registry.
I would expect Jenkins to start the slave even if I use an image from GCR.
Even if the pod is running in a cluster in the same project, it is not authenticated to GCR by default.
You stated that you've already set up the service account, and I assume there's a key for it available on the Jenkins server.
If you're using the Google OAuth Credentials Plugin, you can also use the Google Container Registry Auth Plugin to authenticate to a private GCR repository and pull the image.
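If you'd rather solve it on the Kubernetes side, one option is an image pull secret built from the service-account key. The secret name, email, and key path below are hypothetical; the `run` wrapper only prints the command:

```shell
SECRET_NAME="gcr-pull-secret"
REGISTRY="https://eu.gcr.io"

run() { echo "+ $*"; }   # dry-run printer; remove to actually execute

# _json_key is the literal username GCR expects for key-file auth; the
# password is the content of the downloaded service-account key file.
run kubectl create secret docker-registry "$SECRET_NAME" \
    --docker-server="$REGISTRY" \
    --docker-username=_json_key \
    --docker-password='$(cat sa-key.json)' \
    --docker-email=any@example.com
# Then reference the secret via imagePullSecrets in the pod template.
```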

Apache Nifi-registry BitBucket repository?

I am looking for a version control repository in BitBucket, like GitHub. I have found the NiFi Git repository support, but at my organization we have a private account in BitBucket. My question is: can I create a version control repository with Apache NiFi Registry in BitBucket? There is a class (org.apache.nifi.registry.provider.flow.git.GitFlowPersistenceProvider) associated with Git in the providers.xml file in nifi-registry. Which class should I use for BitBucket? Any help or guidelines will be greatly appreciated. Thanks!
It shouldn't matter where the remote git repo is located, so it should work with BitBucket just as it does with GitHub. You would clone the repo from BitBucket to the server where NiFi Registry is running, configure providers.xml to use the locally cloned repo, and enter credentials to enable pushing to the remote.
It works the same way you would interact with a git repo from the command line: you add or modify files in the local repo, commit them, then push to the remote. In this case, NiFi Registry performs all of these steps for you.
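The host-side setup might be sketched like this; the BitBucket org/repo and clone path are hypothetical, and the `run` wrapper only prints the command:

```shell
FLOW_REPO="git@bitbucket.org:myorg/nifi-flow-storage.git"
CLONE_DIR="/opt/nifi-registry/flow_storage"

run() { echo "+ $*"; }   # dry-run printer; remove to actually execute

# Clone the BitBucket repo onto the NiFi Registry host; the same
# GitFlowPersistenceProvider is then pointed at this directory.
run git clone "$FLOW_REPO" "$CLONE_DIR"
# In providers.xml, set "Flow Storage Directory" to the clone directory
# and "Remote To Push" to origin so Registry pushes commits for you.
```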

How to add a private repo that I don't own on Docker Hub for auto-build

I want to use Docker Hub to run an automated build on every commit pushed to GitHub. It works fine for git repos owned by my account. But for a private git repo where I only have write access, the repo doesn't appear on Docker Hub when I search git repos under my account. Is there a way to add that git repo on Docker Hub for auto-build?
You would need to fork that repo (it would still be private) and set the fork as the source repo on Docker Hub (warning: you only get one private repo for free; after that you would need to buy a plan).
Then you can put in place:
a GitHub webhook on the original repo
a webhook listener that receives any push event sent by the webhook
For each push event, you would pull into your local clone of the fork, then push to your remote fork, which is monitored by Docker Hub.
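The sync step the webhook listener would run can be sketched as follows; the remote name and upstream URL are hypothetical, and the `run` wrapper only prints the commands:

```shell
UPSTREAM="git@github.com:original-owner/project.git"

run() { echo "+ $*"; }   # dry-run printer; remove to actually execute

# Inside a local clone of your fork:
run git remote add upstream "$UPSTREAM"
run git fetch upstream
run git merge upstream/master   # bring in the new commits
run git push origin master      # this push is what Docker Hub reacts to
```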

Dockerhub automated build: BitBucket repository with private submodules

I have a private BitBucket repository that stores my Dockerfile. This repository has two other private BitBucket repositories as git submodules. I set up an automated build on Docker Hub and added the public SSH key to my three private repositories on BitBucket. However, when the build runs, it successfully connects to the main private repository on BitBucket but fails when trying to fetch the submodules. I see the following error in the log file:
fatal: could not read Username for 'https://bitbucket.org': No such device or address
It seems the build agent is trying to access the submodules via HTTPS and, obviously, fails, as no HTTPS access is set up.
Am I missing something or is it a limitation that I'll have to live with for the moment?
I figured it out: my .gitmodules had an HTTPS URL for that particular repository. I edited the .gitmodules file and changed the URL to SSH, and it seems to be building now. :-)
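The same fix, expressed as commands; the submodule name and SSH URL are hypothetical, and the `run` wrapper only prints each command:

```shell
SUBMODULE="libs/common"
SSH_URL="git@bitbucket.org:myteam/common.git"

run() { echo "+ $*"; }   # dry-run printer; remove to actually execute

# Point the submodule at its SSH URL and propagate the change:
run git config --file .gitmodules "submodule.$SUBMODULE.url" "$SSH_URL"
run git submodule sync
run git add .gitmodules
run git commit -m "Use SSH URLs for submodules"
```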
