How to deploy a GitHub page to another repo - travis-ci

According to the documentation for GitHub Pages deployment, it seems possible to deploy an application from repo A to the GitHub Pages of another repo B. In my use case, I would like to deploy to an organization's GitHub Pages site, organization.github.io. Since organization GitHub Pages only serve files from the master branch, I would like to develop the app in a separate repo.
So in my development repo (let's call it organization/app), I have the following .travis.yml:
language: node_js
node_js:
  - "node"
install:
  - npm install
script:
  - npm run lint
  - npm run build
deploy:
  provider: pages
  local-dir: dist
  github-token: $GITHUB_TOKEN
  skip-cleanup: true
  keep-history: true
  repo: organization/organization.github.io
  target-branch: master
  on:
    branch: master
Even though repo and target-branch have been specified, Travis CI still deploys all build files to organization/app:gh-pages, not organization/organization.github.io:master.
For a real-world app, see this development repo and the CI deployment log.

I had a similar problem. Check whether the build ran against the right commit hash.
When you restart a build, it reuses the old commit (and therefore the old .travis.yml), so the new deploy settings never take effect.
Either push a new commit so a fresh build is triggered, or trigger a custom build yourself.

Related

Jenkins is building automatically twice, likely because of Git SCM polling

Problem Statement: Jenkins starts a second automatic build when only a single push is made.
Project Setup: What I am trying to achieve is a Jenkins-GitHub CI/CD setup. A simple Flask application is built, and the code is pushed to the GitHub test branch first. The repository has a webhook set up so that whenever a push event is detected, a POST request is sent to the Jenkins URL - this part is working fine. In Jenkins I have a simple pipeline with the GitHub project configured, and for the build trigger I chose "GitHub hook trigger for GITScm polling". The pipeline setup is shown below. The pipeline (scripted) is also very simple - it:
builds a Docker image of the Flask application
performs a few unit tests on the application
pushes the Docker image to Docker Hub
pushes the code to the master branch
sends a notification.
What I observed: once the GitHub push happens, a Jenkins build starts running, which is expected; but as soon as the 4th stage (the push to master) is reached, another Jenkins build is started automatically, which I think is because of the webhook setup.
Now, I might be absolutely wrong, as this is my first try at Jenkins (a POC of sorts), and to be fair I am not sure whether I am doing the right thing or whether this is the correct way to do such things. So I would highly appreciate being corrected here, or at least being pointed to a correct approach. Please let me know if more information (scripts, code, etc.) is required - thanks in advance.
Code used in the scripted pipeline for pushing to the master branch from the test branch:
sh """git remote set-url origin git@github.com:xxxxxxxxxxxxxxx/FlaskApp.git
git checkout test
git pull
git checkout master
git pull origin master
git merge test
git status
git push origin master
"""
Edit:
Screen-grab of the GitHub webhook configuration:
Environment: Jenkins 2.249.1

Error in a basic CI implementation in GitLab: fatal: couldn't find remote ref

I set up a fresh GitLab Docker instance, then set up a runner container with the Docker executor based on microsoft/dotnet:latest.
Then I added a simple project to GitLab, just a .NET Core hello world.
Then I created a CI file as below:
image: microsoft/dotnet:latest
stages:
  - build
variables:
  project: "ConsoleApp"
before_script:
  - "dotnet restore"
build:
  stage: build
  variables:
    build_path: "$ConsoleApp"
  script:
    - "cd $build_path"
    - "dotnet build"
Then in the pipeline I get this output:
Preparing environment
Running on runner-vtysysr-project-2-concurrent-0 via e189cc9d1c60...
Getting source from Git repository
00:07
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/root/gitlabcitest/.git/
fatal: couldn't find remote ref refs/pipelines/18
Uploading artifacts for failed job
00:06
ERROR: Job failed: exit code 1
I searched for the error, but all the answers are about projects which have multiple branches; I don't have any extra branches, just a simple hello world project.
The OP ali-kamrani adds in the comments:
my issue was with the SSH config in the runner's Docker container: after adding the SSH key to the container, the issue was solved.
Other avenues (for other users), if this is similar to gitlab-org/gitlab issue 36123:
We were using git push --mirror to mirror some projects from a different repository regularly.
As it turns out, it also deletes unknown branches, i.e. pipelines/XXXX and merge/XXXX.
We are now pushing & deleting every branch explicitly and ignoring all pipelines/XXXX and merge/XXXX ones.
Afterwards the error didn't occur again.
I understand you don't have many branches, but the issue here is not with your local branches.
It is with a push operation, initiated locally, that prunes remote branches which do not exist locally.
Basically, each pipeline depends on a pipeline-specific ref, refs/pipelines/*, which has to exist while the pipeline is running.
So if git push --mirror deletes these refs, you can run into this job failure.
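For illustration, a minimal sketch of that explicit approach, assuming the destination remote is simply named mirror (the remote name and the exact refspecs are assumptions, not taken from the issue):
# Mirror only real branches and tags; --prune deletes stale ones on the
# destination, but GitLab's internal refs/pipelines/* and refs/merge-requests/*
# refs are never touched because they fall outside these refspecs.
git push --prune mirror '+refs/heads/*:refs/heads/*' '+refs/tags/*:refs/tags/*'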
The same issue illustrates a similar scenario:
In our setup, we are using a system hook to mirror our GitLab repositories into another location closer to where our GitLab Runner instances are.
This has worked fine for a long time.
However, now that GitLab is dependent on the refs/pipelines/<pipeline ID> ref existing, all of our runners fail.
The problem is that the refs/pipelines/<pipeline ID> ref gets created behind the scenes and there is no system hook that gets invoked (so we don't know about the new ref that needs to be mirrored).
The built-in Repository Mirroring feature isn't very suitable for us because it must be configured for each repository; with System Hooks, we can automatically mirror all of our repositories.

CI for multi-repository project

My current project consists of three repositories. There is a Java (Spring Boot) application and two Angular web clients.
At the moment I am running a deploy.sh script which clones each repository and then deploys the whole thing.
# Clone all projects
git clone ..
git clone ..
git clone ..
# Build (there is a pom.xml which depends on the cloned projects)
mvn clean package
# Deploy
heroku deploy:jar server/target/server-*.jar --app $HEROKU_APP -v
Not very nice, I know.
So, I'd like to switch to a CI-pipeline and I think travis-ci or gitlab-ci might be some good choices.
My problem is: at this point I don't know how (or if) I can build the whole thing when there is an update on any of the master branches.
Maybe it is possible to configure the pipeline in such a way that it simply tracks each repository or maybe it's possible to accomplish this using git submodules.
How can I approach this?
If you need all of the projects to be built and deployed together, you have a big old monolith. In this case, I advise you to use a single repository for all projects and have a single pipeline. This way you wouldn't need to clone anything.
However, if the java app and the angular clients are microservices that can be built and deployed independently, place them in separate repositories and create a pipeline for each one of them. Try not to couple the release process (pipelines) of the different services because you will regret it later.
Each service should be built, tested and deployed separately.
If you decide to have a multi-repo monolith (please don't), you can look into
Gitlab CI Multi-project Pipelines
Example workflow:
Repo 1 (Java), Repo 2 (Angular 1), Repo 3 (Angular 2)
Repo 1:
On push to master, clones Repo 2 and Repo 3, builds, tests, deploys.
Repo 2:
On push to master, triggers the Repo 1 pipeline.
Repo 3:
On push to master, triggers the Repo 1 pipeline.
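For example, here is a minimal sketch of the Repo 2 / Repo 3 side using GitLab's pipeline triggers API (an alternative to the newer trigger: keyword); the project ID 1234 and the TRIGGER_TOKEN variable are placeholders you would create under Repo 1's Settings > CI/CD > Pipeline triggers:
# Run from Repo 2's or Repo 3's .gitlab-ci.yml after a successful build of master;
# it starts a new pipeline on Repo 1's master branch.
curl --request POST \
  --form "token=$TRIGGER_TOKEN" \
  --form "ref=master" \
  "https://gitlab.com/api/v4/projects/1234/trigger/pipeline"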

How to run a Jenkins pipeline automatically when "git push" happens for a specific folder in Bitbucket

I started using Jenkins recently and there is one scenario where I am stuck. I need to run a Jenkins pipeline automatically when a git push happens for a specific folder in the master branch. The pipeline should run only if something is added to that specific folder.
I have already tried SCM with a sparse checkout path pointing at my folder, but that's not working.
I am using a GUI freestyle project; I don't know Groovy.
I had the same issue and resolved it by configuring Git polling.
I used "Poll SCM" to trigger builds, together with the additional behaviour of the Jenkins Git plugin named "Polling ignores commits in certain paths" > "Included Regions": my_specific_folder/.*
By the way, a sparse checkout path only makes Jenkins check out the folder you mentioned; it does not control when a build is triggered.

I'm a bit confused about the npm publishing workflow. Do I have this right?

I'm working on an npm module and on the workflow for publishing it (to a local registry).
As I'm going through this, I'm running into a few places of confusion.
My workflow so far:
Create pull request from feature/* branch into develop.
Jenkins CI job runs, building and testing feature/* branch.
Successful build allows pull request merge into develop.
Jenkins release job manually triggered, passing the release type as a parameter ('patch', 'minor', 'major') - roughly the shell steps sketched after this list:
Checkout master branch.
Merge develop into master.
Install npm dependencies.
Run tests.
npm version "${RELEASE_TYPE}"
[possibly a release notes step here or something]
npm publish
git push origin master && git push --tags
Email summarizing release sent to team.
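In shell terms, the release job's sub-steps boil down to roughly the following (a sketch only; RELEASE_TYPE is the job parameter, and the local registry is assumed to be configured in .npmrc):
git checkout master
git merge develop
npm install
npm test
npm version "${RELEASE_TYPE}"
# (release notes step would go here)
npm publish
git push origin master && git push --tags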
What I don't understand is how the version stays in sync between the develop and master branches. I'm committing the version bump to master only. Would it make sense to set up an auto-merge from master back into develop after each release? That seems a bit backwards.
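For concreteness, the auto-merge I have in mind would just be a final step in the release job, something like this (a sketch, branch names as above):
# Bring the version bump on master back into develop so package.json stays in sync.
git checkout develop
git pull origin develop
git merge master
git push origin develop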
BONUS QUESTIONS:
What normally takes place during Step 2? Currently I'm just running unit tests and linters, but is there more that I should be doing at that step?
Are there other things I should be doing along the way with my workflow?
Thanks!
