Jenkins setup using Google AppEngine source code - jenkins

I have created an application in Google App Engine and pushed my code into the Git repository provided by Google. Now I want to set up Continuous Integration with CloudBees Jenkins.
When I create a job in Jenkins and set the repository URL to the source repository's URL, I get the following error:
Failed to connect to repository : Command "git ls-remote -h https://source.developers.google.com/p/my-application-name/r/default HEAD" returned status code 128:
stdout:
stderr: fatal: remote error: Invalid username/password.
You may need to use your OAuth token password; Note that generated google.com passwords are not compatible with private repositories
The repository URL I am using is:
https://source.developers.google.com/p/my-application-name/r/default
How do I create an OAuth token?

OAuth is a protocol that lets external apps request authorization to private details in a user’s GitHub account without getting their password. This is preferred over Basic Authentication because tokens can be limited to specific types of data, and can be revoked by users at any time.
All developers need to register their application before getting started. A registered OAuth application is assigned a unique Client ID and Client Secret. The Client Secret should not be shared.
I'll advise you to give the following article a read:
https://developer.github.com/v3/oauth/
Also have a look at the plugin for Git authentication in Jenkins:
https://wiki.jenkins-ci.org/display/JENKINS/Github+OAuth+Plugin
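If the repository in question is the Google-hosted one (source.developers.google.com) rather than GitHub, a minimal sketch of obtaining working Git credentials with the Cloud SDK is below; it assumes gcloud is installed and uses the project name from the question:
# Authenticate the Cloud SDK (opens a browser sign-in flow)
gcloud auth login
gcloud config set project my-application-name
# Let git obtain credentials from gcloud for source.developers.google.com
git config --global credential.helper gcloud.sh
# Quick connectivity check, similar to the command Jenkins runs:
git ls-remote -h https://source.developers.google.com/p/my-application-name/r/default HEAD
For the Jenkins job itself you would still need to supply equivalent credentials (for example, a stored username/token pair) in the job's source code management configuration, since a hosted Jenkins node cannot reuse your interactive gcloud session.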

Related

Berglas not finding my Google Cloud credentials

I am trying to read my Google Cloud default credentials with Berglas, and it says:
failed to create berglas client: failed to create kms client: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I am passing the right path, and I have tried many paths, but none of them work.
$HOME/.config/gcloud:/root/.config/gcloud
I'm unfamiliar with Berglas (please include references) but the error is clear. Google's client libraries attempt to find credentials automatically. The documentation describes the process by which credentials are sought.
Since the credentials aren't being found, you're evidently not running on a Google Cloud compute service (where credentials are found automatically). Have you set the GOOGLE_APPLICATION_CREDENTIALS environment variable, and does it point to a valid service account key file?
The Berglas README suggests using the following command to authorize your user credentials as Application Default Credentials. You may not have completed this step:
gcloud auth application-default login
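As a minimal sketch, the two usual ways to make Application Default Credentials visible are shown below; the key file path and the Docker invocation are illustrative assumptions, not Berglas-specific requirements:
# Option A: store your user credentials as Application Default Credentials
# (writes ~/.config/gcloud/application_default_credentials.json)
gcloud auth application-default login
# Option B: point the client libraries at a service account key file explicitly
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/berglas-sa.json"   # hypothetical key path
# If Berglas runs inside Docker, the volume mount from the question only helps
# if the credentials file actually exists on the host side of the mount:
docker run --rm \
  -v "$HOME/.config/gcloud:/root/.config/gcloud" \
  your-berglas-image ...   # image name and arguments are placeholders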

How do I obtain an HTTP access token from a Bitbucket repository on Bitbucket Cloud

I need to create an HTTP access token for a repository that allows me to pull modules from it while building a Node.js application in another repository.
This was done in the past by using a personal access token from one of the employees and I want to change that.
I referred to this article (https://confluence.atlassian.com/bitbucketserver/personal-access-tokens-939515499.html), in which the steps are stated as follows:
Create HTTP access tokens for projects or repositories
HTTP access tokens can be created for teams to grant permissions at the project or repository level rather than for specific users.
To create an HTTP access token for a project or repository (requires project or repository admin permissions):
From either the Project or Repository settings, select HTTP access tokens.
Select Create token.
Set the token name, permissions, and expiry.
The problem is that in my repository settings, I can't find "HTTP access tokens".
I'm using Bitbucket Cloud whereas the article refers to Bitbucket Server; is that the problem? If so, is this option simply not available in Bitbucket Cloud?
Atlassian has vast documentation, but I had trouble with it and still didn't understand how to get an access token simply to download archives from private repositories.
So here is my step-by-step tutorial.
Insert your workspace name instead of {workspace_name} and go to the following link in order to create an OAuth consumer
https://bitbucket.org/{workspace_name}/workspace/settings/api
set callback URL to http://localhost:8976 (doesn't need to be a real server there)
select permissions: repository -> read
use consumer's Key as a {client_id} and open the following URL in the browser
https://bitbucket.org/site/oauth2/authorize?client_id={client_id}&response_type=code
after you press "Grant access" in the browser it will redirect you to
http://localhost:8976?code=<CODE>
Note: you can spin your local server to automate this step
use the code from the previous step and consumer's Key as a {client_id}, and consumer's Secret as {client_secret}:
curl -X POST -u "{client_id}:{client_secret}" \
  https://bitbucket.org/site/oauth2/access_token \
  -d grant_type=authorization_code \
  -d code={code}
you should receive similar JSON back:
{
"access_token": <access_token>,
"scopes": "repository",
"token_type": "bearer",
"expires_in": 7200,
"state": "authorization_code",
"refresh_token": <refresh_token>
}
use the access token in the following manner
curl https://api.bitbucket.org/2.0/repositories/{workspace_name} \
  --header "Authorization: Bearer {access_token}"
Whilst your question is about Bitbucket Cloud, the article you linked is for Atlassian's self-hosted source control tool Bitbucket Server. They have different functionality for different use cases, which is why they don't look the same.
Depending on your use case you can use App passwords or OAuth instead.
Full disclosure: I work for Atlassian
Easiest way to do it is:
Create an OAuth consumer in your Bitbucket settings (also provide a dummy redirect URL like localhost:3000), then copy the KEY and SECRET.
Use curl -X POST -u "KEY:SECRET" https://bitbucket.org/site/oauth2/access_token -d grant_type=client_credentials to get JSON data with the access token.
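Either way, once you have an access token you can use it for Git over HTTPS against the private repository; a hedged sketch (the repository slug is a placeholder):
# Clone or fetch a private repository with the OAuth access token
git clone "https://x-token-auth:{access_token}@bitbucket.org/{workspace_name}/{repo_slug}.git"
# The same URL form can be used as a Git dependency when pulling private
# Node.js modules during a build.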

Access problem with service account in gcloud from GitHub Actions

I'm quite new to GitHub Actions and gcloud. I have trouble getting my GitHub CI/CD pipeline running because I can't push any Docker image to the Google Container Registry due to access restrictions.
What have I done so far:
I have a Quarkus app hosted on GitHub
I used GitHub Actions to build the Maven project and the Docker image
I created a project in Google Cloud and added a service account, which I use for the GitHub Action. The login seems to work:
Run google-github-actions/setup-gcloud@master
/usr/bin/tar xz --warning=no-unknown-keyword -C /home/runner/work/_temp/ac85f67a-89fa-4eb4-8d30-3f6379124ec2 -f /home/runner/work/_temp/de491940-a4b1-4a15-bf0a-95d563e68362
/opt/hostedtoolcache/gcloud/342.0.0/x64/bin/gcloud --quiet config set project ***
Updated property [core/project].
Successfully set default project
/opt/hostedtoolcache/gcloud/342.0.0/x64/bin/gcloud --quiet auth activate-service-account github-actions@***.iam.gserviceaccount.com --key-file -
Activated service account credentials for: [github-actions@***.iam.gserviceaccount.com]
If I now try to push the Docker image, I get the following (expected) error message:
Run docker push "$GCR_HOSTNAME/$PROJECT_ID/$IMAGE:$IMAGE_TAG"
The push refers to repository [eu.gcr.io/***/***]
715ac1ae8693: Preparing
435cfe5f5775: Preparing
313d03d71d4d: Preparing
c5c8d86ccee1: Preparing
1b0f2238925b: Preparing
144a43b910e8: Preparing
4a2bc86056a8: Preparing
144a43b910e8: Waiting
4a2bc86056a8: Waiting
denied: Token exchange failed for project '***'. Caller does not have permission 'storage.buckets.get'. To configure permissions, follow instructions at: https://cloud.google.com/container-registry/docs/access-control
Error: Process completed with exit code 1.
Next, I opened the Google Cloud Console and created a custom role (IAM & Admin -> Roles -> Create Role) which has the necessary permissions.
Then I had trouble assigning my new custom role to the service account (IAM & Admin -> Service Accounts -> Manage Access -> Add member). I used the email address of the service account as "New members", but I could not choose the custom role I had just created. What am I missing here?
I read somewhere that I can also add service accounts as members (IAM & Admin -> IAM -> Add). Again I used the email address of the service account as "New members". This time I could choose my custom role. What's the difference from the first approach?
Anyway, if I try to run the GitHub Action again, I now get the following error:
Run docker push "$GCR_HOSTNAME/$PROJECT_ID/$IMAGE:$IMAGE_TAG"
The push refers to repository [eu.gcr.io/***/***]
c4f14c9d3b6e: Preparing
fe78d438e8e2: Preparing
843fcae4a8f4: Preparing
dcf8cc80cedb: Preparing
45e8815b101d: Preparing
144a43b910e8: Preparing
4a2bc86056a8: Preparing
144a43b910e8: Waiting
4a2bc86056a8: Waiting
denied: Access denied.
Error: Process completed with exit code 1.
The error message is different, so I guess the permission change for the service account somehow worked, but I still can't succeed. Which step did I miss?
Any help is highly appreciated. Thanks a lot!
One way to debug this is to create a key for the service account on your local host, configure your script (or gcloud) to use the service account as its credentials, and then try the push manually.
One immediate problem may be that you're not authenticating against Google Container Registry (GCR). GCR implements Docker's registry API and you'll need to use one of the mechanisms to authenticate before you can interact with the registry.
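A minimal sketch of that local check, assuming a downloaded key file (the path and the project/image names are placeholders); gcloud auth configure-docker registers gcloud as a Docker credential helper for the gcr.io hosts:
# Use the service account locally, just as the workflow does
gcloud auth activate-service-account --key-file="$HOME/keys/github-actions.json"
gcloud config set project YOUR_PROJECT_ID
# Register gcloud as a Docker credential helper for gcr.io / eu.gcr.io
gcloud auth configure-docker
# Then retry the push manually
docker push "eu.gcr.io/YOUR_PROJECT_ID/IMAGE:TAG"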
Notes:
I don't think you need to create a custom role; the predefined roles should be enough. Preferably, create a service account specifically for the CI/CD job and grant it the minimum set of roles it needs, including one that carries storage.buckets.get. You can start with roles/storage.admin and perhaps refine later.
You can grant roles, e.g. roles/storage.admin, either on the project, in which case the permission applies to all Cloud Storage resources, or on a specific bucket, in which case the permission applies only to that bucket and its objects (see the sketch after these notes).
Service accounts play a dual role in GCP: as an identity and as a resource (that can be used by other identities). This can be confusing.
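If you grant the role yourself, a hedged sketch of the two scopes mentioned above follows (the service account name is taken from the logs; the bucket name is the conventional one backing eu.gcr.io and is an assumption):
# Project-level grant
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member="serviceAccount:github-actions@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.admin"
# Bucket-level grant (eu.gcr.io images are stored in this bucket by convention)
gsutil iam ch \
  "serviceAccount:github-actions@YOUR_PROJECT_ID.iam.gserviceaccount.com:roles/storage.admin" \
  gs://eu.artifacts.YOUR_PROJECT_ID.appspot.com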

Regenerate expired GitHub PAT on Actions and Packages

I have been using GitHub Actions & Packages since the beta. Yesterday, the PAT expired, which is why my GitHub Actions failed. There is a Regenerate button under Profile > Developer Settings > Personal Access Tokens. I clicked it and created a new PAT.
At this step, I am able to login docker.pkg.github.com and push the image to GitHub Registry.
But I am getting an error message when I pull that image.
This is the error message:
Error response from daemon: unauthorized: Your request could not be authenticated
by the GitHub Packages service. Please ensure your access token is valid and has
the appropriate scopes configured.
How can I solve this expired PAT issue?
This was a bug and was reported on the GitHub Community: https://github.community/t/bug-report-personal-access-tokens/147968/2
Docker's credential store keeps your old token and doesn't update it; that's why you have to log out once first.
The solution:
Regenerate or Create a new Personal Access Token
Update your repo's Secret
in a shell, docker logout https://docker.pkg.github.com
in a shell, docker login https://docker.pkg.github.com -u GITHUBUSERNAME
use the new token as the password
Then you will be able to pull an image from the GitHub Registry as always.
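In a CI workflow the same logout/login can be done non-interactively; a sketch assuming the regenerated PAT is available as a secret (all names are placeholders):
# Re-login with the regenerated PAT, e.g. injected from a repository secret
docker logout docker.pkg.github.com
echo "${NEW_GITHUB_PAT}" | docker login docker.pkg.github.com -u GITHUBUSERNAME --password-stdin
# The pull should now authenticate with the new token
docker pull docker.pkg.github.com/OWNER/REPO/IMAGE:TAG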
I got the answer from zsoobhan-tc's post.

Authenticating azcopy or az storage cli to upload to Azurite docker emulator

I started an Azurite Docker container on a local VM and then tried to copy data to it with azcopy and the az CLI, like below:
export AZURE_STORAGE_ACCOUNT="devstoreaccount1"
export AZURE_STORAGE_ACCESS_KEY="Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
azcopy copy /local/data/ http://localvm:10000/devstoreaccount1/data/test --from-to LocalBlob
INFO: Scanning...
failed to perform copy command due to error: Login Credentials missing. No SAS token or OAuth token is present and the resource is not public
I want to authenticate with the Account key and Account name and preferably be able to copy using azcopy.
I scoured GitHub and Stack Overflow and found only one issue (https://github.com/Azure/azure-storage-azcopy/issues/867), and there is nothing there regarding auth. It looks like I am missing something obvious. Your help will be much appreciated.
The versions used were:
azure-cli 2.11.1
azcopy version 10.7.0
I was able to get away with using the az CLI instead of azcopy.
export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azuritedockerhost:10000/devstoreaccount1;"
az storage blob upload -f local-file -c container-name -n dir/blob-name
Hope this helps someone. It would also be really nice to be able to use azcopy, so if anybody finds out how, it will be greatly appreciated.
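To double-check that the connection string is picked up and the upload landed, a quick listing with the same az CLI works as a sanity check (the container name is the placeholder from above):
# List blobs in the target container on the Azurite endpoint
az storage blob list \
  --container-name container-name \
  --connection-string "$AZURE_STORAGE_CONNECTION_STRING" \
  --output table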
The Microsoft documentation 'Get started with AzCopy' indicates the following under the 'Run AzCopy' heading:
As an owner of your Azure Storage account, you aren't automatically
assigned permissions to access data. Before you can do anything
meaningful with AzCopy, you need to decide how you'll provide
authorization credentials to the storage service.
Under the next heading 'Authorize AzCopy', the documentation states:
You can provide authorization credentials by using Azure Active Directory
(AD), or by using a Shared Access Signature (SAS) token.
Even though you're accessing a local storage emulator (Azurite) on your local machine, the AzCopy app wants an OAuth token or SAS token. See this link to generate SAS tokens for local storage or online storage.
A SAS token must be appended to the destination parameter in the azcopy copy command. I use the Azure AD (OAuth token) authorization option so that I can run multiple azcopy commands without appending a SAS token to every command.
To resolve the AzCopy error you're getting "Failed to perform copy command due to error: Login Credentials missing. No SAS token or OAuth token is present and the resource is not public", enter the following into a command prompt or Windows PowerShell:
azcopy login --tenant-id=<your-tenant-directory-id-from-azure-portal>
and then follow the steps this command returns. Here's a reference to azcopy login. From the heading 'Authorize without a secret store' in this reference:
"The azcopy login command retrieves an OAuth token and then places that token into a secret store on your system."
From the 'Authorize a user identity' heading:
After you've successfully signed in, you can close the browser window
and begin using AzCopy.
Use azcopy logout from a command prompt when you no longer want AzCopy commands to use the stored credentials.
Here are the steps for the login process, as well as where to find the tenant ID, to get the AzCopy login process going.
Get tenant ID from the Azure portal.
In a command prompt enter the azcopy login command along with the --tenant-id parameter.
Follow the steps indicated in the command prompt: "...use a web browser to open the page https://microsoft.com/devicelogin and enter the code...".
"A sign-in window will appear. In that window, sign into your Azure account by using your Azure account credentials."
"After you've successfully signed in, you can close the browser window and begin using AzCopy."
You can run your original azcopy copy /local/data/ http://localvm:10000/devstoreaccount1/data/test --from-to LocalBlob without the need for the export entries in your question.
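Putting the pieces together, the flow sketched from the steps above looks roughly like this (the tenant ID is a placeholder):
azcopy login --tenant-id=<your-tenant-directory-id-from-azure-portal>
# complete the device-code sign-in in the browser, then:
azcopy copy /local/data/ http://localvm:10000/devstoreaccount1/data/test --from-to LocalBlob
# when finished:
azcopy logout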
AzCopy deliberately avoids support for account key authentication, because an account key has full admin privileges: https://github.com/Azure/azure-storage-azcopy/issues/186
The only workaround I have found so far is to generate a SAS (for the container) in Azure Storage Explorer, and then use the SAS URL with AzCopy.
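For reference, a hedged sketch of what the SAS-based AzCopy call can look like once a container SAS has been generated in Azure Storage Explorer (the SAS query string is a placeholder):
# Append the container SAS to the destination URL; quote it so the shell
# doesn't interpret the & characters in the SAS query string
azcopy copy "/local/data/" \
  "http://localvm:10000/devstoreaccount1/data?<sas-token>" \
  --from-to LocalBlob --recursive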
