How can I get the path of a newly built APK in GitLab after code is checked in, which I need in a curl command to upload it to ADF?

I wrote a Python script that uses boto3 to schedule Android app UI tests on ADF. My next step is to make GitLab CI work. My test suite is a Java Appium project with TestNG, not integrated with the Android project.
My problem now is how to get the path of the files (the APK and test.zip) in the GitLab repo, which the curl command needs in order to upload the newly built APK (built after new code is checked in) and my test suite.
Actually, first of all: am I on the right track?
Can I use curl in GitLab like that?
If so, what path should I use? (Could you briefly explain the storage structure (or namespace?), or point me to a reference?) Is the project home directory simply treated as '/'?
For the test suite it's actually easier: once I figure out the path, I can just put it in the home directory.
For the newly built APK, I don't actually know where it is. We use the pipeline, and I think the APK file ends up somewhere on the server. Below is the YAML snippet:
archive_project:
  stage: archive
  script:
    - ./gradlew assembleRelease
  only:
    - master
    - search
  artifacts:
    paths:
      - main/build/outputs/
  tags:
    - android
    - gradle
If not, how can I do it? (This question also applies if I am not on the right track.)

So, after you check in code, the runner looks at .gitlab-ci.yml and runs whatever it instructs. This process happens on a server machine (either yours or GitLab's), so everything is basically the same as on your own computer (you will of course need the right environment; just specify the appropriate image or Docker container).
So yes, you can use curl there. As for the directory structure: if you have the privilege to log in to your server (through SSH, for instance), you can inspect it easily. Or you can explore it the same way you would locally (pwd, ls, cd). What I did was write a script containing some pwd/ls/cd commands and call it from the YAML, then I looked at the output it printed to figure out the directory structure. Once I had the path I wanted, the problem was solved.
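For illustration, a minimal version of that exploration, assuming the archive job from the question (the extra commands exist only to discover the APK's relative path and can be removed afterwards):

archive_project:
  stage: archive
  script:
    - ./gradlew assembleRelease
    # Print the directory the runner checked the project out into;
    # artifact paths in the YAML are relative to this directory.
    - pwd
    # List the build outputs recursively to find the APK's exact path.
    - ls -R main/build/outputs/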
Although I didn't use dependencies, you might want to read about it to learn more about passing artifacts between jobs, as in the sketch below.
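As a hedged sketch of where this ends up, a follow-up job could declare archive_project as a dependency and upload the APK with curl. The job name upload_to_adf, the deploy stage, the exact APK filename, and the $UPLOAD_URL variable (a pre-signed Device Farm upload URL created separately, for example by the boto3 script) are all assumptions for illustration:

upload_to_adf:
  stage: deploy
  dependencies:
    # Makes the artifacts from archive_project (main/build/outputs/)
    # available in this job's working directory.
    - archive_project
  script:
    # Paths are relative to the project root, same as at build time.
    # The filename below is a placeholder; check the ls output above.
    - curl -T main/build/outputs/apk/release/app-release.apk "$UPLOAD_URL"
  only:
    - master
    - search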
If you ran into this problem too, I hope this helps.

Related

How to use a GitLab link for applying jenkins.yml file for the concept of Jenkins Configuration as Code

I have a local instance of Jenkins. I have previously tried storing the jenkins.yml in my system and giving its path on http://localhost:8080/configuration-as-code. This worked, but I want to use a GitLab repository to store the jenkins.yml file.
I have already tried giving the GitLab link of my jenkins.yml in the path or URL textbox. Some weird things happened:
1. Jenkins broke, or showed a huge error console
2. It reapplied the previous configuration (from the system path)
jenkins:
  systemMessage: "Hello, world"
Your problem as described: you want the job configuration to be saved in Git and, when a build is triggered, the job should get the current state of its configuration from there and then run the build.
Maybe there is a kind of plugin that does this for you, but I am not aware of any. Maybe anyone else is?
My suggestion is to define a pipeline job and use a declarative pipeline. It is a file, normally named Jenkinsfile, that can be stored in Git. In the job, you define the Git address, and when you trigger a build, the file is fetched from Git and executed.
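For illustration, a minimal declarative Jenkinsfile might look like this (the stage name and the echo step are placeholders, not from the original question):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Replace with your real build steps
                echo 'Hello, world'
            }
        }
    }
}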
There are several flaws in this: the pipeline learning curve is not small, you are confronted with Groovy (not XML!), and your current XML file is barely useful here.
Maybe someone will show up and tell us about a (new to me) plugin that solves your problem using the configuration XML file. On the other hand, pipelines are such a beautiful feature that I encourage you to give them a try.

Update Jenkins Plugins via Artifactory

I want to update Jenkins plugins via Artifactory.
Create a remote repo named Jenkins-update.
Create a local repo named jenkins-update-center.
Get update-center.json from the Jenkins-update repo, modify the URL in it from 'http://updates.jenkins-ci.org/' to my own URL 'https://artifacts.xxx.com/artifactory/Jenkins-update/', then put update-center.json into the local repo:
#!/bin/sh
# Fetch the cached update-center.json through the remote (proxy) repo
curl -L -o /tmp/update-center.json http://localhost:8081/artifactory/Jenkins-update-cache/update-center.json
# Point the update-site URLs at our own Artifactory instance
sed -i 's#http://updates.jenkins-ci.org/#https://artifacts.xxx.com/artifactory/Jenkins-update/#g' /tmp/update-center.json
# Publish the modified file into the local repo
curl -L -uuser:pass -T /tmp/update-center.json "http://localhost:8081/artifactory/jenkins-update-center/update-center.json"
Change the default update site in Jenkins from 'http://updates.jenkins-ci.org/' to 'https://artifacts.xxx.com/artifactory/jenkins-update-center/update-center.json'.
When I click the 'Check now' button, I get the error 'SHA-512 digest mismatch: expected=49a22dc23f739a76623d10128b6803f79e0489de3ded0f1d01f3dfba4557136c7f318baaf4749a7713ec4b3f56633f2ac3afc4703e87d423ede029d68f84c74d in 'update site 'default'''.
What should I do to make Jenkins update plugins from Artifactory?
Thanks
As soon as the content of update-center.json changes, you need to re-generate the "signature" section of that file.
For that you need to generate your own key pair (see more details in How to create a local mirror of public Jenkins update site?).
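As a rough sketch of that key-pair step (filenames, key size, and validity period are placeholders; the linked question has the full signing procedure):

# Generate an RSA private key used to sign the update center metadata
openssl genrsa -out update-center.key 2048
# Create a self-signed certificate for it, valid for one year
openssl req -new -x509 -key update-center.key -out update-center.crt -days 365 -subj "/CN=jenkins-update-center"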
Alternatively, you may use the following proposed approach:
There is probably a better way: have a sandbox Jenkins on a system that has access to the internet. You update that server through the UI and can then test the updated Jenkins thoroughly. When done, you just copy the war and hpi files over to your 'production' Jenkins. That way you even have a nice process and QA in place.
Another way is to set up a transparent HTTPS proxy between your Jenkins and the Artifactory server; in that case update-center.json does not change and signature verification should work fine.
With best regards,
Dmytro Gorbunov
As of 2023-01-10 there is a problem with making a mirror of the Jenkins plugins on Artifactory.
The Artifactory documentation describes only how to create a mirror: https://jfrog.com/knowledge-base/how-to-configure-artifactory-as-a-mirror-for-jenkins-plugins/
But this is not a complete solution, because it leads to a situation where every plugin has to be updated manually. With plugins that have a bunch of dependencies, that is a huge effort.
You still need to generate the file update-center.json.
There is an internal Jenkins tool to do this: https://github.com/jenkins-infra/update-center2, but the documentation is poor and contains vague statements like:
With a few modifications it could easily be used to generate your corporate update center as well.
without a clear description of what has to be done.
I tried to follow the steps and completely failed. The tool requires some special environment variables, which are also not documented, and so on.
So, in my experience, mirroring Jenkins plugins on Artifactory is practically not possible. And honestly speaking, I would be glad to be wrong here.

Jenkins - how to copy test logs back to artifacts directory for build

New to Jenkins, so apologies in advance, as I'm sure this answer is out there somewhere; I'm just not sure exactly how to search for what I'm after. I'm struggling a bit with the copy-back process in Jenkins.
When I build, I run some unit tests that create log files, which I want stored as part of the Jenkins build. I'm running on Windows 10 and everything runs on my laptop (I'm purely trying to learn Jenkins, so this is fine for me).
My test results will always appear in C:\TestLogs\*.log. I want the results copied to my build directory, which is URL http://localhost:8080/job/loadrunner_test/1/, absolute path C:\Program Files (x86)\Jenkins\jobs\loadrunner_test\builds\1.
I'm a bit confused about which plugin I should use in my post-build step. The Copy Artifact plugin looks as if it's meant to pass data between builds. For each build, I just want to copy C:\TestLogs\*.* to the current build directory so I can see the logs when I click the link for #1 in the Build History.
Many thanks!
Tim
[Screenshots: WindowsDir, Jenkins Build]
You can copy it with an additional build step.
Select Execute Windows batch command for that step and add this line (the destination is quoted because the path contains spaces):
xcopy C:\TestLogs "C:\Program Files (x86)\Jenkins\jobs\loadrunner_test\builds\%BUILD_NUMBER%" /s /e
You can also check your test configuration to see whether you can set the log output location directly.

travis-ci: how to move or rename a file

Before deployment (or after, but that is harder, as we are deploying to S3), we need to rename staging.robots.txt to robots.txt (overwriting the default robots.txt) for the staging deployment only, so that we can block crawling on our staging server (but allow it on production).
Any idea if this is possible?
On the Travis documentation site there is no info on the before_deploy stage, and we can't see any feature to rename files. With Jenkins I would simply put cp xxx yyy or similar in the build script, as I know my Jenkins runs on Ubuntu, but we don't know the equivalent Travis command for the .travis.yml file.
== UPDATE ==
Having done more research, it might be possible to do this through a script, e.g. commit move.sh into your repo, then call it. As you can choose which OS the build is done on (e.g. Linux), you can write the script for that platform. However, it's not clear at what point you can call this script in the .yml file.
You can simply write a script to invoke in your .travis.yml file for deployment. See the documentation.
Here's the example copied from these docs:
deploy:
  provider: script
  script: scripts/deploy.sh
  on:
    tags: true
    branch: master
With the above config, the deploy runs on tagging the master branch, and the script (scripts/deploy.sh) is invoked.
Other than that, you can simply put the command under the before_install section, like this:
before_install:
  - mv abc.txt xyz.txt
You used the cp command, but you are talking about renaming, not copying, so I've used the mv command to rename the file.
If you want to do something at the end, you can add an after_success section as well.
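For the original robots.txt use case, the rename could equally live in a before_deploy section, which Travis runs just before the deploy step (a minimal sketch using the filenames from the question):

before_deploy:
  # Overwrite the default robots.txt with the staging one before deploying
  - mv staging.robots.txt robots.txt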
Hope that helps!

Continuous Deployment with Codeship doesn't recognize environment variables

Recently I started to use Codeship as the CI/CD tool for a small website that I maintain. I set up my Codeship project to deploy via SFTP as described in their guide.
The part where it fails is the production script. I created a deploy folder with a production.sh script which contains the line:
put -rp "${HOME}/clone/build/*" /path/to/remote/dir
However when running the build I get the following error:
sftp> put -rp "${HOME}/clone/build/*" /path/to/remote/dir
stat ${HOME}/clone/build/*: No such file or directory
Echoing $HOME in a test script directly in Codeship gives me my home directory, so the environment variable works. However, at the moment the batch script is run, the environment variable is not expanded.
How can I fix this? I'd rather not hardcode the path in my deployment script. It also doesn't seem likely that this happens because I suffixed production.sh with .sh, whereas in the docs they only have a production script.
With no answer coming from the people at Codeship, I resorted to writing out the absolute path in place of ${HOME}. (The batch file is read by sftp itself, not by a shell, so environment variables in it are never expanded.) I've been doing this for a while now on a few different projects, and it all seems to work.
Replace ${HOME}/clone with ~/clone; this worked for me.
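For reference, with that change the line in deploy/production.sh becomes the following (the remote path is the placeholder from the question; the sftp client expands ~ and the * glob for local paths itself):

put -rp ~/clone/build/* /path/to/remote/dir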
