Daily automatic Bitbucket deploy with a manual step when pushing to a branch

I'm trying to set up a pipeline in Bitbucket with a daily schedule for two branches.
develop: a scheduled deployment runs daily, and the pipeline also runs whenever I push to this branch.
master: this is the tricky one. I want a daily deployment because the page needs to be rebuilt daily, but I'd also like a safeguard: if anyone pushes to this branch by mistake, or the code is bad, the deployment should only run after a manual trigger.
So my question is: is it possible to set up a rule that detects whether there was a push and, in that case, lets an admin start the pipeline manually?
pipelines:
  branches:
    develop:
      - step:
          name: Deploy staging
          deployment: staging
          caches:
            - node
          script:
            - npm run staging:auto
            - npm install firebase
            - npm install firebase-functions
            - npm install -g firebase-tools
            - firebase deploy --token=$FIREBASE_TOKEN --project $FIREBASE_PROJECT_STAGING --only functions,hosting
          artifacts:
            - build/**
    master:
      - step:
          name: Deploy to production
          deployment: production
          caches:
            - node
          script:
            - npm run deploy:auto
            - npm install firebase
            - npm install firebase-functions
            - npm install -g firebase-tools
            - firebase deploy --token=$FIREBASE_TOKEN_STAGING --project $FIREBASE_PROJECT_PRODUCTION --only functions,hosting
          artifacts:
            - build/**

I'd suggest scheduling a separate custom pipeline rather than the one that runs on pushes to the production branch. The same step definition can be reused with a YAML anchor, and you can override the trigger in one of them.
E.g.:
definitions:
  # write whatever is meaningful to you,
  # just avoid "caches" or "services" or
  # anything bitbucket-pipelines could expect
  yaml-anchors:
    - &deploy-pro-step
      name: Deploy production
      trigger: manual
      deployment: production
      script:
        - do your thing

pipelines:
  custom:
    deploy-pro-scheduled:
      - step:
          <<: *deploy-pro-step
          trigger: automatic
  branches:
    release/production:
      - step: *deploy-pro-step
Sorry if I've made some YAML mistakes, but this should be the general idea. The branch the scheduled custom pipeline runs on is configured in the web interface when the schedule is created.
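If you'd rather create the schedule from a script than click through the UI, Bitbucket Cloud also exposes pipeline schedules through its REST API. A rough sketch only: the workspace, repo slug, and credentials are placeholders, and I'm writing the payload shape from memory, so verify it against the current API docs before relying on it:

# Create a daily schedule for the custom deploy-pro-scheduled pipeline
# on the release/production branch (cron is Bitbucket's Quartz-like syntax).
curl -X POST -u "$BB_USER:$BB_APP_PASSWORD" \
  -H "Content-Type: application/json" \
  "https://api.bitbucket.org/2.0/repositories/<workspace>/<repo-slug>/pipelines_config/schedules" \
  -d '{
        "enabled": true,
        "cron_pattern": "0 0 12 * * ? *",
        "target": {
          "type": "pipeline_ref_target",
          "ref_type": "branch",
          "ref_name": "release/production",
          "selector": { "type": "custom", "pattern": "deploy-pro-scheduled" }
        }
      }'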


Bitbucket pipeline reuse source code from previous step

Basically, I don't want every pipeline step to clone the code again; only the first step should clone the source, once. The other reason is that if a later step clones the source (instead of reusing the output of the previous step), the built code is lost.
I know Bitbucket Pipelines has an artifacts feature, but it seemed to keep only some parts of the source tree.
The flow is:
Step 1: Clone source code.
Step 2: Run two steps in parallel: one installs the node modules in the root folder, the other installs the node modules and builds the JS and CSS in the app folder.
Step 3: Deploy the code built in step 2.
Here is my bitbucket-pipelines.yml
image: node:11.15.0

pipelines:
  default:
    - step:
        name: Build and Test
        script:
          - echo "Cloning..."
        artifacts:
          - ./**
    - parallel:
        - step:
            name: Install build
            clone:
              enabled: false
            caches:
              - build
            script:
              - npm install
        - step:
            name: Install app
            clone:
              enabled: false
            caches:
              - app
            script:
              - cd app
              - npm install
              - npm run lint
              - npm run build
    - step:
        name: Deploy
        clone:
          enabled: false
        caches:
          - build
        script:
          - node ./bin/deploy

definitions:
  caches:
    app: ./app/node_modules
    build: ./node_modules
After researching hundreds of pages without finding anything, I had to try things one by one myself. I finally found the pattern that captures all files as artifacts:
artifacts:
  - '**'

Travis CI - Build and deploy based on PRs and tags

I'm trying to create two different actions in Travis CI. The first action is to execute a script on every push on every branch. This currently works as desired. The second is to trigger a different script only when tags are pushed (git push origin --tags). In short:
Execute script1 always (currently working)
Execute script2 when tags are pushed
Here is what I'm trying:
language: python
python:
  - 3.7
matrix:
  include:
    - python: 3.7
      sudo: true
install:
  - pip install -r requirements.txt
script: # Always want this to happen
  - invoke package
branches:
  only:
    - master
    - /^x\/.*/
deploy: # Want this to occur on git push origin --tags
  provider: script
  script: invoke release
  on:
    tags: true
The deploy section is not being triggered, and I can find no evidence of the invoke release script being invoked.
Update:
It may be due to the way I'm pushing tags? I'm now seeing this log in Travis:
Skipping a deployment with the script provider because this is not a tagged commit
Solved it via a GitHub issue. Changed the deploy section to this:
deploy:
  provider: script
  script: invoke release
  on:
    tags: true
    all_branches: true
but I had to remove the branches section. The deployment script was invoked nonetheless. (When Travis builds a tag, the branch name is set to the tag's name, so without all_branches: true the branch whitelist filters tag builds out.)
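For reference, a minimal version of the whole file with that change applied might look like this (same invoke tasks as above, with the matrix section trimmed for brevity):

language: python
python:
  - 3.7
install:
  - pip install -r requirements.txt
script:
  - invoke package        # runs on every build
deploy:
  provider: script
  script: invoke release  # runs only for tag builds
  on:
    tags: true
    all_branches: true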

Publishing to NPMJS with Travis CI

I've set up Travis CI to run a few scripts that should:
Deploy some static pages to Github pages
Deploy an NPM package to npmjs
Item 1 works, Item 2 doesn't.
Here's what my travis.yml file looks like:
language: node_js
node_js:
  - '10'
script:
  - gulp build
  - gulp npmDist
deploy:
  - provider: pages
    local_dir: dist-site/
    skip_cleanup: true
    github_token: "$GITHUB_TOKEN"
    on:
      branch: master
  - provider: npm
    email: myemail@mydomain.com
    api_key:
      secure: THE-API-KEY-I-GOT-BY-CREATING-A-TOKEN-ON-NPMJS-AND-ENCRYPTING-IT-USING-TRAVIS-ENCRYPT-COMMAND-IN-TERMINAL
    on:
      tags: true
      repo: githubaccount/reponame
      all_branches: true
I trigger the script in two ways:
- When I merge to master, it deploys to GitHub Pages.
- When I create a tag and push it to master, it should deploy the package to npmjs.
As stated, the first part of the file works, as it actually deploys to GitHub Pages.
Here's the error I get from npmjs:
npm ERR! publish Failed PUT 401
npm ERR! code E401
npm ERR! You must be logged in to publish packages. : package-name
(Oh, and a strange thing: Travis comes back with "Build Passed" and the successful (green) status, even though something is obviously wrong.)
Hope this makes sense? Thanx in advance for any help.
Fixed it. Instead of having this in the travis.yml file:
api_key:
  secure: THE-API-KEY-I-GOT-BY-CREATING-A-TOKEN-ON-NPMJS-AND-ENCRYPTING-IT-USING-TRAVIS-ENCRYPT-COMMAND-IN-TERMINAL
I changed it to:
api_key: "$NPM_TOKEN"
..and added the NPM Token as an environment variable inside the Travis CI dashboard.
(Still curious as to why it didn't work, but I can't be bothered to do anything about it, as I've already wasted way too much time on this issue today.)
I had the same problem. I just removed all the previous keys, generated them again, and now my config looks like this:
deploy:
  provider: npm
  email: $NPM_USER
  api_key: $NPM_TOKEN
To create your NPM_TOKEN you must:
Go to your npm profile
Tokens
Create Token
Select "Read and Publish" and create it.
Then you can specify it in your env variables for the corresponding project.
The key does not have to be encrypted, and the user is your email address. That should be it.
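If you prefer the terminal to the dashboard, the travis CLI (the same gem used for travis encrypt) can set the variable too. A small sketch, assuming you're logged in with the CLI and inside the repo directory:

# Set the token as a repository environment variable;
# values set this way are treated as secret by default
# (use --public only for non-secret values).
travis env set NPM_TOKEN "paste-your-npm-token-here"
# List the variables to confirm it was stored:
travis env list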
You will receive a notification like:
Installing deploy dependencies
dpl.2
Preparing deploy
dpl.3
Deploying application
+ your-artifact@x.x.x

Exporting environment variables from one stage to the next in GitLab CI

Is there a way to export environment variables from one stage to the next in GitLab CI? I'm looking for something similar to the job artifacts feature, only for environment variables instead of files.
Let's say I'm configuring the build in a configure stage and want to store the results as (secret, protected) environment variables for the next stages to use. I could save the configuration in files and store them as job artifacts, but I'm concerned about secrets being made available in files that can be downloaded by everyone.
Since GitLab 13 you can inherit environment variables like this:
build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    reports:
      dotenv: build.env

deploy:
  stage: deploy
  script:
    - echo $BUILD_VERSION # => hello
  dependencies:
    - build
Note: for GitLab < 13.1 you need to enable this first in the GitLab Rails console:
Feature.enable(:ci_dependency_variables)
Although it's not exactly what you asked for, since it also relies on artifacts:reports:dotenv artifacts, GitLab recommends doing the following in its guide 'Pass an environment variable to another job':
build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    reports:
      dotenv: build.env

deploy:
  stage: deploy
  script:
    - echo "$BUILD_VERSION" # Output is: 'hello'
  needs:
    - job: build
      artifacts: true
I believe using the needs keyword is preferable over the dependencies keyword (as used in hd-deman's answer above) since:
When a job uses needs, it no longer downloads all artifacts from previous stages by default, because jobs with needs can start before earlier stages complete. With needs you can only download artifacts from the jobs listed in the needs: configuration.
Furthermore, you could minimise the risk by setting the build job's artifacts:expire_in time to be very short.
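For instance, a minimal sketch of that (the ten-minute value is just an illustration):

build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    expire_in: 10 minutes   # artifact is deleted shortly after the pipeline
    reports:
      dotenv: build.env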
No, this feature isn't available yet, but there is already an open issue for this topic.
My suggestion would be to save the variables in a file and cache it, as caches are not downloadable and are removed when the job finishes.
If you want to be 100% sure, you can delete it manually. See the clean_up stage.
e.g.
cache:
  paths:
    - save_file

stages:
  - job_name_1
  - job_name_2
  - clean_up

job_name_1:
  stage: job_name_1
  script:
    - (your_task) >> save_file

job_name_2:
  stage: job_name_2
  script:
    - cat save_file | do_something_with_content

clean_up:
  stage: clean_up
  script:
    - rm save_file
  when: always
You want to use artifacts for this.

stages:
  - job_name_1
  - job_name_2

job_name_1:
  stage: job_name_1
  script:
    - (your_task) >> save_file
  artifacts:
    paths:
      - save_file
    # Hint: You can set an expiration for them too.

job_name_2:
  stage: job_name_2
  needs:
    - job: job_name_1
      artifacts: true
  script:
    - cat save_file | do_something_with_content

GitLab-CI for grails project

I've moved to GitLab and use all the tools that come with it.
I installed GitLab v8.0.4 on my CentOS 7 server with Tomcat. I created a project and pushed a Grails example to it.
Now I'd like, every time I push to the project, to fire off a deploy. In Jenkins I was able to pull the project, compile it with the grails command-line tool, and deploy the WAR to Tomcat.
I'm trying to do the same here, but I feel quite lost. Has anyone tried this, and can you show me how?
If the deployment script is in the same repository as the project itself, you can have a build stage and a deploy stage. If the build stage succeeds, it will start the deploy stage. The .gitlab-ci.yml could look like this:
stages:
  - build
  - deploy

build_grails:
  stage: build
  script:
    - build-script_of_grails_cmd

deploy_to_tomcat:
  stage: deploy
  script:
    - deploy_script_with_capistrano_or_whatever
If your deployment code is in another project, you can trigger that project to start the deployment when the build stage has finished. The deployment repo should have a trigger set up; this can be done in the continuous integration menu of the deployment project. After setting up a trigger, GitLab generates a triggering curl snippet you can paste into the yml file. The Grails app's gitlab-ci.yml will look like this:
stages:
  - build
  - deploy

build_grails:
  stage: build
  script:
    - build-script_of_grails_cmd

trigger:
  type: deploy
  script:
    - curl -X POST -F token=4579a6f10c51f0a4b7bdbd384f6e53 https://gitlab-somewhere.com/ci/api/v1/projects/5/refs/master/trigger
The gitlab-ci.yml in the deployment project will look like this:
stages:
  - deploy

deploy_to_tomcat:
  stage: deploy
  script:
    - deploy_script_with_capistrano_or_whatever
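If the triggered deployment job needs context from the build (for example, which ref to deploy), the trigger API also accepts variables as form parameters. A hedged sketch against the same v1-style endpoint as above (DEPLOY_REF is a made-up variable name for illustration):

# Pass a variable along with the trigger; it becomes an
# environment variable in the triggered pipeline's jobs.
curl -X POST \
  -F token=4579a6f10c51f0a4b7bdbd384f6e53 \
  -F "variables[DEPLOY_REF]=master" \
  https://gitlab-somewhere.com/ci/api/v1/projects/5/refs/master/trigger

The triggered job can then read $DEPLOY_REF like any other CI variable.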
