Azure DevOps Pipeline for iOS - Fastlane Match Clone Problem

I'm trying to set up an iOS pipeline on Azure DevOps using Fastlane. I already have Fastlane in my project and can successfully deploy beta and pilot builds. My problem is that when I run the script below on the Azure pipeline, it can't get past the match clone step, and therefore can't fetch certificates, provisioning profiles, etc.
P.S.: The iOS_Certificates repo is a different repo than the project repo.
I'm getting a timeout error after 1 hour; I think it is an authentication problem with the certificates repo. The pipeline:
pool:
  vmImage: 'macos-latest'
steps:
- script: |
    fastlane match development --clone_branch_directly --verbose
    fastlane beta
  displayName: 'Build iOS'
Related code in the Matchfile:
git_url("git@ssh.dev.azure.com:v3/myteam/myproject/certificates_repo")
storage_mode("git")
type("development")
EDIT: I'm trying to fetch a repo inside the same Azure DevOps project (not GitHub or anywhere else). I'm getting a timeout error, so there is no specific error message even when I run match with --verbose.

From your description, you are using an SSH key as the authentication method.
Since you are using macos-latest (a Microsoft-hosted agent) as the build agent, the private half of the SSH key does not exist on the build machine.
So match can't authenticate and gets stuck; as you said, it runs for 60 minutes and is then cancelled. I could also reproduce this issue.
You could try to create a self-hosted agent and run the build on it.
In that case, you need to ensure that the private key exists on that machine, so that you can authenticate through the SSH key.
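As a rough sanity check once the private key is in place on the self-hosted agent, you can confirm that Azure DevOps accepts it before running match (the key path below is just a placeholder):
eval "$(ssh-agent -s)"        # start an ssh-agent for this shell
ssh-add ~/.ssh/id_rsa         # load the private key referenced by the Matchfile's git_url
ssh -T git@ssh.dev.azure.com  # should respond (and refuse a shell) if the key is accepted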
On the other hand, you can authenticate with a username and password.
For example (Matchfile):
git_url "https://organizationname#dev.azure.com/organizationname/projectname/_git/reponame"
type "development"
app_identifier 'xxx'
username "member#companyname.com" #This will be the git username
ENV["FASTLANE_PASSWORD"] = "abcdefgh" #Password to access git repo.
ENV["MATCH_PASSWORD"] = "password" #Password for the .p12 files saved in git repo.

Related

Problems transferring build artifacts from Jenkins running in a docker container

I'm a bit of a newb with this CI/CD container stuff, so please correct me anywhere I'm wrong.
I can't seem to find out how to send my npm build files, created on my Jenkins instance (workspace), to a remote server. I have a pipeline that successfully pulls in my GitHub repo and does all my npm stuff (npm install, test, build). I see my build dir in my Jenkins instance's /workspace.
My environment is as follows. We have a server where docker (with Portainer) is installed. Jenkins is running in a container with a volume mounted (my react build dir goes here). No issues with the pipeline or building etc. I just can't figure out how to push my artifacts from my jenkins workspace directory to my 'remote' dev server.
I can successfully open a console in my Jenkins container (Portainer, as the jenkins user) and scp files from the workspace directory using my remote server creds (but a password is necessary).
I installed and used "Publish Over SSH" Jenkins plugin and get a successful "Test Configuration" from my setup.
I created my RSA keys on the REMOTE machine (that I'm trying to push my build files to).
I then pasted the private key (created without a password) into the plugin at the 'Use password authentication, or use a different key' section. Again, I get a successful test connection.
In my pipeline, the last step is the deploy, and I use this command:
sh 'scp -r build myusername@xx.xx.xx.xx:/var/files/react-tester'
I get a 'Permission denied (publickey,password).' error. I have no password associated with the RSA key. I tried both ways: creating the RSA key on the remote machine as my remote user, and on the Jenkins machine as the jenkins user. I've read examples of people creating the keys both ways, but I'm not sure on which user/machine combo to create the keys, and into which section of the 'Publish Over SSH' plugin to paste them.
I'm out of ideas.
First, go to "Manage Jenkins" > "Credentials", add a new SSH credential of type "SSH Username with private key" and fill the "Username" and your private key (generate one if you haven't done it yet) fields (you can also upload one). Don't forget that you have to copy the generated public key to the ${SSH_USERNAME}/.ssh/authorized_keys file on the remote server.
I'm assuming you're using a scripted or declarative pipeline here. In your code, after you've built your application, you can push it to your server by adding this stage to your pipeline:
pipeline {
    stages {
        stage("Pushing changes to remote server") {
            steps {
                script {
                    def remote_server = "1.2.3.4"
                    withCredentials([sshUserPrivateKey(credentialsId: 'my-key', keyFileVariable: 'SSH_KEY', passphraseVariable: '', usernameVariable: 'SSH_USERNAME')]) {
                        // -r is needed because build/ is a directory; SSH_KEY and SSH_USERNAME are expanded by the shell
                        sh "scp -r -i \${SSH_KEY} build/ \${SSH_USERNAME}@${remote_server}:/var/files/react-tester/"
                    }
                }
            }
        }
    }
}
Best regards.

Why is a Jenkins script job failing to use proper AWS credentials?

I have a simple jenkins job that just runs aws ssm send-command and it fails with:
"An error occurred (AccessDeniedException) when calling the SendCommand operation: User: arn:aws:sts::1234567890:assumed-role/jenkins-live/i-1234567890abc is not authorized to perform: ssm:SendCommand on resource: arn:aws:ssm:us-east-1:1234567890:document/my-document-name"
However, the IAM permissions are correct. To prove it, I directly SSH onto that instance and run the exact same ssm command, and it works. I verify it's using the instance role by running aws sts get-caller-identity and it returns arn:aws:sts::1234567890:assumed-role/jenkins-live/i-1234567890abc which is the same user mentioned in the error message.
So indeed, this assumed role can run the command.
I even modified the jenkins job to run aws sts get-caller-identity first, and it outputs the same user json.
Does jenkins do some caching that I am unaware of? Why would I get that AccessDeniedException if that jenkins-live user can run the command otherwise?
First, install the AWS Credentials and AWS Steps plugins and register your AWS access key ID and secret access key in the Jenkins credential store. Then, the next steps depend on whether you're using a freestyle job or a declarative/scripted pipeline.
If you're using a freestyle job: under "Build Environment", check "Use secret text(s) or file(s)" and follow the next steps. After that, your credentials will be available as variables in your job;
If you're using a declarative/scripted pipeline: enclose your aws calls in a withAWS block, something like this:
withAWS(region: 'us-east-1', credentials: 'my-pretty-credentials') {
// let's explode something
}
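Inside that block, the job's original command should then run with the registered key instead of the instance role. A rough sketch of what the sh step could execute (the document and instance names are taken from the question, so treat them as placeholders):
aws sts get-caller-identity   # should now report the key-based identity instead of the assumed instance role
aws ssm send-command --document-name my-document-name --targets "Key=instanceids,Values=i-1234567890abc"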
Best regards.

Can't find SSH keys settings under travis project settings

My CI project depends on another private repo, so I followed the documentation and uploaded the private key using:
➜ travis sshkey --upload ~/.ssh/id_travis_rsa --pro
Updating ssh key for Jeff-Tian/uni-sso with key from /Users/tianjef/.ssh/id_travis_rsa
Current SSH key: key for clone k8s-config
Finger print: 65:25:66:26:4d:5d:9f:ac:25:ba:ea:be:c4:d5:e3:5f
From the above output I double-checked the fingerprint and compared it to my GitHub SSH keys:
They match.
However, the Travis build (https://travis-ci.com/github/Jeff-Tian/uni-sso/builds/161350192) still fails with:
$ git clone git@github.com:Jeff-Tian/k8s-config.git ${HOME}/k8s-config
Cloning into '/home/travis/k8s-config'...
Warning: Permanently added the RSA host key for IP address '140.82.114.4' to the list of known hosts.
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
The command "git clone git#github.com:Jeff-Tian/k8s-config.git ${HOME}/k8s-config" failed and exited with 128 during .
And then when I check the Travis project settings, I can't find the SSH keys settings pane.
Where did it go wrong? Is this a Travis CI bug?
It seems the SSH keys settings pane is only available for private repos.
The issue here is that the main repo is public, but deploying it requires downloading a private repo. This scenario is not covered by the official documentation.
The workaround is to clone the private repo via HTTPS instead of SSH, so there is no need to upload SSH keys.
Set a GH_TOKEN in the repository settings, write that token to the ~/.netrc file, and then clone the private repo over HTTPS:
.travis.yml:
- echo -e "machine github.com\n login $GH_TOKEN" > ~/.netrc
- git clone https://github.com/Jeff-Tian/k8s-config.git ${HOME}/k8s-config
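Roughly the same thing can be verified locally before pushing to Travis, assuming GH_TOKEN is a GitHub personal access token that has access to the private repo:
printf 'machine github.com\n login %s\n' "$GH_TOKEN" > ~/.netrc
chmod 600 ~/.netrc              # optional: keeps the token out of a world-readable file
git clone https://github.com/Jeff-Tian/k8s-config.git "$HOME/k8s-config"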

Deploy application using Jenkinsfile and AWS Code deploy

I am migrating from Jenkins 1.x to Jenkins 2. I want to build and deploy the application using a Jenkinsfile.
I am able to build the Gradle application, but I am confused about how to deploy it via AWS CodeDeploy using a Jenkinsfile.
Here is my Jenkinsfile:
node {
    // Mark the code checkout 'stage'....
    stage 'Checkout'
    // Get some code from a GitHub repository
    git branch: 'master',
        credentialsId: 'xxxxxxxx-xxxxx-xxxxx-xxxxx-xxxxxxxx',
        url: 'https://github.com/somerepo/someapplication.git'
    // Mark the code build 'stage'....
    stage 'Build'
    // Run the gradle build
    sh '/usr/share/gradle/bin/gradle build -x test -q buildZip -Pmule_env=aws-dev -Pmule_config=server'
    stage 'Deploy via Codedeploy'
    // Run using codedeploy agent
}
I have searched many tutorials, but they use the AWS CodeDeploy plugin instead.
Could you help me deploy the application via AWS CodeDeploy using a Jenkinsfile?
Thank you.
Alternatively, you can use AWS CLI commands to do the deployment. This involves two steps.
Step 1 - Push the deployment bundle to the S3 bucket. See the following command (a filled-in example is sketched after the parameter list):
aws --profile {profile_name} deploy push --application-name {code_deploy_application_name} --s3-location s3://<s3_file_path>.zip
Where:
profile_name = name of AWS profile (if using multiple accounts)
code_deploy_application_name = name of AWS code deployment application.
s3_file_path = S3 file path for deployment bundle zip file.
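For example, run from the directory that contains your appspec.yml, with the placeholders filled in with made-up values (profile "default", application "someapplication", bucket "my-deploy-bucket"):
aws --profile default deploy push --application-name someapplication --s3-location s3://my-deploy-bucket/someapplication.zip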
Step 2 - Initiate code deployment
The second command is used to trigger the deployment. See the following command (again, a filled-in example follows the parameter list):
aws --profile {profile} deploy create-deployment --application-name {code_deploy_application_name} --deployment-group-name {code_deploy_group_name} --s3-location bucket={s3_bucket_name},bundleType=zip,key={s3_bucket_zip_file_path}
Where:
profile = name of your AWS profile (if using multiple accounts)
code_deploy_application_name = same as step 1.
code_deploy_group_name = name of code deployment group. This is associated with your code deploy application.
s3_bucket_name = name of S3 bucket which will store your deployment artefacts. (Make sure that your role that performs code deploy has permissions to s3 bucket.)
s3_bucket_zip_file_path = same as step 1.
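Continuing the same made-up values (deployment group "dev-group"); both commands can simply go in sh steps under the 'Deploy via Codedeploy' stage of the Jenkinsfile above:
aws --profile default deploy create-deployment --application-name someapplication --deployment-group-name dev-group --s3-location bucket=my-deploy-bucket,bundleType=zip,key=someapplication.zip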

Mercurial Plugin, Jenkins and Cloudbees

I'm trying to run Jenkins on my project, which is hosted on Bitbucket using Mercurial. I have the following Mercurial settings:
repository url: http://bitbucket.org/myuser/myproject
credentials: username with password (I have my bitbucket username / password)
Revision Type: branch
Revision: default
When I run the build, I'm getting the following:
Started by user Me / Me
Building remotely on bec9ae7e (lxc-fedora17 m1.xlarge hi-speed xlarge) in workspace /scratch/jenkins/workspace/myproject
$ hg --config ******** clone --rev default --noupdate http://bitbucket.org/myuser/myproject /scratch/jenkins/workspace/myproject
abort: http authorization required
ERROR: Failed to clone http://bitbucket.org/myuser/myproject
ERROR: Failed to clone http://bitbucket.org/myuser/myproject
Finished: FAILURE
I can't see where my credentials are being sent. Plus, I'm not sure what all of these --config options are doing.
--config ******** is a masked version of the command-line option defining authentication. The form you quote seems to be for SSH private-key authentication, which contradicts your claim that the specified credentials were a username/password pair (that would produce several --config options, some of them not masked). So I would double-check the credentials selected in the job.
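For comparison, passing username/password authentication to hg on the command line looks roughly like this, with several --config options (the auth section name "bb" and the values are placeholders):
hg clone --config auth.bb.prefix=bitbucket.org \
         --config auth.bb.username=myuser \
         --config auth.bb.password=secret \
         http://bitbucket.org/myuser/myproject /scratch/jenkins/workspace/myproject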
