In my Travis file I have several PHP versions and a script entry like this:
php:
  - 5.6
  - 5.5
  - 5.4
  - 5.3
script:
  - export CFLAGS="-Wno-deprecated-declarations -Wdeclaration-after-statement -Werror"
  - phpize # and lots of other stuff here.
  - make
I want to run the export CFLAGS line only when the PHP version matches 5.6.
I could theoretically do that with a nasty hack to detect the PHP version from the command line, but how can I do this through the Travis configuration script?
You can either use shell conditionals to do this:
php:
  - 5.6
  - 5.5
  - 5.4
  - 5.3
script:
  - if [[ ${TRAVIS_PHP_VERSION:0:3} == "5.6" ]]; then export CFLAGS="-Wno-deprecated-declarations -Wdeclaration-after-statement -Werror"; fi
  - phpize # and lots of other stuff here.
  - make
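As an aside, ${TRAVIS_PHP_VERSION:0:3} is plain Bash substring expansion (the first three characters of the version string), so you can sanity-check the match locally; 5.6.31 below is just an assumed example value:
TRAVIS_PHP_VERSION=5.6.31   # assumed example value
echo "${TRAVIS_PHP_VERSION:0:3}"   # prints: 5.6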
Or use the build matrix with explicit inclusions:
matrix:
  include:
    - php: 5.6
      env: CFLAGS="-Wno-deprecated-declarations -Wdeclaration-after-statement -Werror"
    - php: 5.5
      env: CFLAGS=""
    - php: 5.4
      env: CFLAGS=""
    - php: 5.3
      env: CFLAGS=""
script:
  - phpize # and lots of other stuff here.
  - make
The latter is probably what you're looking for; the former is a little less verbose.
Has anyone noticed that new Rails apps seem to fail on CircleCI with this error:
Sprockets::Rails::Helper::AssetNotFound in Home#index
The asset "application.js" is not present in the asset pipeline.
Steps to reproduce:
• rails new --javascript=esbuild so it uses jsbundling-rails with esbuild
• Install RSpec
• Configure CircleCI
• Add the app to your CircleCI pipeline
• Add a basic hello world controller that simply prints "Hello world"
• Add the root route to the hello world controller
• Add a spec:
require 'rails_helper'

describe "can load the homepage" do
  it "should load the homepage" do
    visit "/"
    expect(page).to have_content("Hello world")
  end
end
There's a file in app/assets/config/manifest.js which tells Sprockets where to load assets from. I tried adding this:
//= link_directory ../../javascript .js
(notice that the default location of the javascript folder is in app/ not in app/assets)
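(For context, in a jsbundling-rails app the generated manifest typically links the builds directory, where esbuild writes its output. This is an assumed typical example and may vary by Rails version:)
//= link_tree ../images
//= link_directory ../stylesheets .css
//= link_tree ../builds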
The app works fine locally and deploys to Heroku just fine; I only see this error on CircleCI. Has anyone seen this before?
My CircleCI config looks like this:
version: 2.1 # Use 2.1 to enable using orbs and other features.
# Declare the orbs that we'll use in our config.
# read more about orbs: https://circleci.com/docs/2.0/using-orbs/
orbs:
  ruby: circleci/ruby@1.0
  node: circleci/node@2
  browser-tools: circleci/browser-tools@1.2.3
jobs:
  build: # our first job, named "build"
    docker:
      - image: cimg/ruby:3.1.2-browsers # use a tailored CircleCI docker image.
        auth:
          username: mydockerhub-user
          password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
      - image: redis:6.2.6
    steps:
      - checkout # pull down our git code.
      - ruby/install-deps # use the ruby orb to install dependencies
      # use the node orb to install our packages
      # specifying that we use `yarn` and to cache dependencies with `yarn.lock`
      # learn more: https://circleci.com/docs/2.0/caching/
      - node/install-packages:
          pkg-manager: yarn
          cache-key: "yarn.lock"
      - run:
          name: Build assets
          command: bundle exec rails assets:precompile
  test: # our next job, called "test"
    parallelism: 1
    # here we set TWO docker images.
    docker:
      - image: cimg/ruby:3.1.2-browsers # this is our primary docker image, where step commands run.
        auth:
          username: mydockerhub-user
          password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
      - image: redis:6.2.6
      - image: circleci/postgres:9.5-alpine
        auth:
          username: mydockerhub-user
          password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
        environment: # add POSTGRES environment variables.
          POSTGRES_USER: circleci-demo-ruby
          POSTGRES_DB: VDQApp_test
          POSTGRES_PASSWORD: ""
    # environment variables specific to Ruby/Rails, applied to the primary container.
    environment:
      BUNDLE_JOBS: "3"
      BUNDLE_RETRY: "3"
      PGHOST: 127.0.0.1
      PGUSER: circleci-demo-ruby
      PGPASSWORD: ""
      RAILS_ENV: test
    # A series of steps to run, some are similar to those in "build".
    steps:
      - browser-tools/install-chrome
      - browser-tools/install-chromedriver
      - checkout
      - ruby/install-deps
      - node/install-packages:
          pkg-manager: yarn
          cache-key: "yarn.lock"
      # Here we make sure that the secondary container boots
      # up before we run operations on the database.
      - run:
          name: Wait for DB
          command: dockerize -wait tcp://localhost:5432 -timeout 1m
      - run:
          name: Load schema
          command: bin/rails db:schema:load RAILS_ENV=test
      # Run rspec in parallel
      - ruby/rspec-test
# We use workflows to orchestrate the jobs that we declared above.
workflows:
  version: 2
  build_and_test: # The name of our workflow is "build_and_test"
    jobs: # The list of jobs we run as part of this workflow.
      - build # Run build first.
      - test: # Then run test,
          requires: # which requires that build passes for it to run.
            - build
OK, I figured this out.
The CircleCI config must be told to execute esbuild, which is done like so:
esbuild app/javascript/*.* --bundle --sourcemap --outdir=app/assets/builds --public-path=assets
That command lives in your package.json scripts, so you can run it as yarn build.
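For reference, in a jsbundling-rails app that script is typically wired up in package.json like this (a sketch; your file may differ):
{
  "scripts": {
    "build": "esbuild app/javascript/*.* --bundle --sourcemap --outdir=app/assets/builds --public-path=assets"
  }
}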
In the .circleci/config.yml file, look for the jobs > test > steps stanza.
You will want to add this step for yarn build after the node/install-packages step:
- run:
    name: Yarn build
    command: yarn build
The test job's steps then look like this:
steps:
  - browser-tools/install-chrome
  - browser-tools/install-chromedriver
  - checkout
  - ruby/install-deps
  - node/install-packages:
      pkg-manager: yarn
      cache-key: "yarn.lock"
  - run:
      name: Yarn build
      command: yarn build
  # Here we make sure that the secondary container boots
  # up before we run operations on the database.
  - run:
      name: Wait for DB
      command: dockerize -wait tcp://localhost:5432 -timeout 1m
  - run:
      name: Load schema
      command: bin/rails db:schema:load RAILS_ENV=test
  # Run rspec in parallel
  - ruby/rspec-test
I need to clone some repo, say git clone abc.git, then switch context to abc, and then use the Dockerfile inside abc to build with Skaffold, passing custom ARGs.
Is it possible to achieve this via Skaffold?
For example:
- name: test-build
  build:
    tagPolicy:
      envTemplate:
        template: 1.0.0
    artifacts:
      - image: dockerhub.io/test-build
        requires:
          command: ["git", "clone", "abc.git"]
        context: ./abc
This is one way that worked for me, using Skaffold's custom build:
- name: test-build
  build:
    tagPolicy:
      envTemplate:
        template: 1.0.0
    artifacts:
      - image: dockerhub.io/test-build
        custom:
          buildCommand: docker build github.com/abc.git#v1
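Note that Skaffold's custom builder exposes the expected image name to the build command as the IMAGE environment variable, along with PUSH_IMAGE to indicate whether the result should be pushed. A buildCommand that honors that contract might look like this sketch, where build.sh is a hypothetical wrapper script referenced as buildCommand: ./build.sh:
#!/bin/bash
# build.sh: build the remote Dockerfile and tag the result with the
# image name Skaffold supplies via $IMAGE.
set -e
docker build -t "$IMAGE" github.com/abc.git#v1
# Push only when Skaffold asks for it.
if [ "$PUSH_IMAGE" = "true" ]; then
  docker push "$IMAGE"
fi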
I've been struggling for the last few days to migrate from CircleCI 1.0 to 2.0, and while the build process is done, deployment is still a big issue. The CircleCI documentation is not really much help.
Here is a config.yml similar to what I have:
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install required stuff
          command: [...]
      - run:
          name: Build
          command: docker build -t project .
  deploy:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - run:
          name: Deploy
          command: |
            bash scripts/deploy/deploy.sh
            docker tag project [...]
            docker push [...]
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: develop
The issue is in the deploy job. I have to specify the docker: image there, but I want to reuse the environment from the build job, where all the required stuff is already installed. Surely I could just install it all again in the deploy job, but having multiple deploy jobs would lead to code duplication, which is something I do not want.
You probably want to persist files to a workspace in build and attach that workspace in your deploy job.
You won't need to use - checkout after that.
https://circleci.com/docs/2.0/configuration-reference/#persist_to_workspace
jobs:
  build:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install required stuff
          command: [...]
      - run:
          name: Build
          command: docker build -t project .
      - persist_to_workspace:
          root: ./
          paths:
            - ./
  deploy:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - attach_workspace:
          at: ./
      - run:
          name: Deploy
          command: |
            bash scripts/deploy/deploy.sh
            docker tag project [...]
            docker push [...]
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: develop
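One caveat: a workspace persists files, not Docker images, so an image built with setup_remote_docker in build is not automatically visible in deploy. If you need the image itself to cross jobs, a common pattern is to docker save it into the workspace and docker load it later. A sketch, assuming the image tag project from the question:
# in the build job, after docker build:
- run:
    name: Save image to workspace
    command: docker save -o project.tar project
- persist_to_workspace:
    root: ./
    paths:
      - project.tar
# in the deploy job, after attach_workspace:
- run:
    name: Load image
    command: docker load -i project.tar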
If you label the image built by the build stage, you can then reference it in the deploy stage: https://docs.docker.com/compose/compose-file/#labels
I'm a tad new to this. I've been writing a pipeline that's just for training purposes, and I've encountered probably 15 different errors, but the one I'm at currently is really ruining all my fun since I can't get around it.
This is my code:
stages:
  - lint-css
  - lint-js
  - unit-test

image: git.chaosgroup.com:4567/philipa.hristova/test33004__half/dind_node:latest

lint css:
  stage: lint-css
  before_script:
  cache:
    untracked: true
  tags:
    - docker
  only:
    - web
  script:
    - ./node_modules/gulp/bin/gulp.js lint-css

lint js:
  stage: lint-js
  cache:
    untracked: true
    policy: pull
  tags:
    - docker
  only:
    - web
  script:
    - ./node_modules/gulp/bin/gulp.js lint-js

run unit test:
  stage: unit-test
  cache:
    untracked: true
  tags:
    - docker
  only:
    - web
  script:
    - ./node_modules/gulp/bin/gulp.js test
And the Docker image I am using is one I made on top of a docker:dind image, adding Node.js, npm, and gulp. The error I get is this:
Is it possible to share steps between branches and still run branch-specific steps? For example, the develop and release branches have the same build process but upload to separate S3 buckets.
pipelines:
  default:
    - step:
        script:
          - cd source
          - npm install
          - npm build
  branches:
    develop:
      - step:
          script:
            - s3cmd put --config s3cmd.cfg ./build s3://develop
    staging:
      - step:
          script:
            - s3cmd put --config s3cmd.cfg ./build s3://staging
I saw this post (Bitbucket Pipelines - multiple branches with same steps) but it's for the same steps.
Use YAML anchors:
definitions:
  steps:
    - step: &Test-step
        name: Run tests
        script:
          - npm install
          - npm run test
    - step: &Deploy-step
        name: Deploy to staging
        deployment: staging
        script:
          - npm install
          - npm run build
          - fab deploy
pipelines:
  default:
    - step: *Test-step
    - step: *Deploy-step
  branches:
    master:
      - step: *Test-step
      - step:
          <<: *Deploy-step
          name: Deploy to production
          deployment: production
          trigger: manual
Docs: https://confluence.atlassian.com/bitbucket/yaml-anchors-960154027.html
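As a quick illustration of the merge-key semantics used above (generic YAML, not a full pipeline config): the alias copies every key from the anchored mapping, and keys listed after <<: override individual fields.
base: &base
  name: original
  script:
    - echo hello
derived:
  <<: *base
  name: overridden   # only this key changes; script is inherited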
Although it's not officially supported yet, you can pre-define steps now: you can use YAML anchors.
I got this tip from Bitbucket staff when I had an issue running the same steps across a subset of branches.
definitions:
  step: &Build
    name: Build
    script:
      - npm install
      - npm build
pipelines:
  default:
    - step: *Build
  branches:
    master:
      - step: *Build
      - step:
          name: deploy
          # do some deploy from master only
I think Bitbucket can't do it. You can use one pipeline and check the branch name:
pipelines:
  default:
    - step:
        script:
          - cd source
          - npm install
          - npm build
          - if [[ $BITBUCKET_BRANCH = develop ]]; then s3cmd put --config s3cmd.cfg ./build s3://develop; fi
          - if [[ $BITBUCKET_BRANCH = staging ]]; then s3cmd put --config s3cmd.cfg ./build s3://staging; fi
The last two lines will be executed only on the specified branches.
You can define and re-use steps with YAML anchors:
• an anchor (&) defines a chunk of configuration,
• an alias (*) refers to that chunk elsewhere.
The source branch is saved in a default variable called BITBUCKET_BRANCH.
You'd also need to pass the build results (in this case the build/ folder) from one step to the next, which is done with artifacts.
Combining all three gives you the following config:
definitions:
  steps:
    - step: &build
        name: Build
        script:
          - cd source
          - npm install
          - npm build
        artifacts: # defining the artifacts to be passed to each future step.
          - ./build
    - step: &s3-transfer
        name: Transfer to S3
        script:
          - s3cmd put --config s3cmd.cfg ./build s3://${BITBUCKET_BRANCH}
pipelines:
  default:
    - step: *build
  branches:
    develop:
      - step: *build
      - step: *s3-transfer
    staging:
      - step: *build
      - step: *s3-transfer
You can now also use glob patterns, as mentioned in the referenced post, to define the steps for both the develop and staging branches in one go:
branches:
  "{develop,staging}":
    - step: *build
    - step: *s3-transfer
Apparently it's in the works. Hopefully it will be available soon.
https://bitbucket.org/site/master/issues/12750/allow-multiple-steps