Which build in a Travis matrix build is used for the gh_pages provider? - travis-ci

When I have a Travis matrix build with a spec like this:
matrix:
  include:
    - os: osx
      rust: stable
    - os: linux
      rust: stable
  allow_failures:
    - os: osx
and I use the deploy provider to upload an HTML book generated during the build:
deploy:
  provider: pages
  skip-cleanup: true
  github-token: $GITHUB_PAGES_TOKEN
  local-dir: target/html
  keep-history: false
  on:
    branch: master
Which build output is that gh_pages provider running on?

From experimentation, it seems that the deploy: gh_pages provider is run on the last build in my matrix; in the example above, linux.
I am unsure whether allow_failures on the other builds (possibly combined with fast_finish) could change that, though.
I.e. if the last one is allowed to fail (and it does), then I think the deployer will not run, even if the overall build has "succeeded" and is green.
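Rather than relying on matrix ordering, one way to make this deterministic is Travis's documented condition key under on: (this sketch is mine, not from the question), which pins the deploy to a single job:

deploy:
  provider: pages
  skip-cleanup: true
  github-token: $GITHUB_PAGES_TOKEN
  local-dir: target/html
  keep-history: false
  on:
    branch: master
    # only run the deploy step from the linux job, regardless of matrix order
    condition: $TRAVIS_OS_NAME = linux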

Related

CircleCI Serverless framework build fails

CircleCI Install Serverless CLI build suddenly fails; I have not changed anything in the config file.
version: 2.1
orbs:
  aws-cli: circleci/aws-cli@3.1
  build-tools: circleci/build-tools@3.0.0
  node: circleci/node@5.0.3
  python: circleci/python@2.1.1
  docker: circleci/docker@2.2.0
  serverless-framework: circleci/serverless-framework@2.0.0
      - serverless-framework/setup
      - node/install-packages
      - setup_remote_docker
I was able to fix it by updating the serverless-framework version to serverless-framework: circleci/serverless-framework@2.0.1
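For reference, the fix is just the version bump in the orbs stanza (the other orbs stay as they were):

orbs:
  serverless-framework: circleci/serverless-framework@2.0.1  # bumped from 2.0.0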

Bitbucket pipelines: Why does the pipeline not seem to be using my custom docker image?

In my pipelines yml file, I specify a custom image to use from my AWS ECR repository. When the pipeline runs, the "Build setup" logs suggests that the image was pulled in and used without issue:
Images used:
    build : 123456789.dkr.ecr.ca-central-1.amazonaws.com/my-image@sha256:346c49ea675d8a0469ae1ddb0b21155ce35538855e07a4541a0de0d286fe4e80
I had worked through some issues locally relating to having my Cypress E2E test suite run properly in the container. Having fixed those issues, I expected everything to run the same in the pipeline. However, looking at the pipeline logs it seems that it was being run with an image other than the one I specified (I suspect it's using the Atlassian default image). Here is the source of my suspicion:
STDERR: /opt/atlassian/pipelines/agent/build/packages/server/node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.0.14/mongod: /usr/lib/x86_64-linux-gnu/libcurl.so.4: version `CURL_OPENSSL_3' not found (required by /opt/atlassian/pipelines/agent/build/packages/server/node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.0.14/mongod)
I know the working directory of the default Atlassian image is "/opt/atlassian/pipelines/agent/build/". Is there a reason that this image would be used and not the one I specified? Here is my pipelines config:
image:
  name: 123456789.dkr.ecr.ca-central-1.amazonaws.com/my-image:1.4
  aws:
    access-key: $AWS_ACCESS_KEY_ID
    secret-key: $AWS_SECRET_ACCESS_KEY

cypress-e2e: &cypress-e2e
  name: "Cypress E2E tests"
  caches:
    - cypress
    - nodecustom
    - yarn
  script:
    - yarn pull-dev-secrets
    - yarn install
    - $(npm bin)/cypress verify || $(npm bin)/cypress install && $(npm bin)/cypress verify
    - yarn build:e2e
    - MONGOMS_DEBUG=1 yarn start:e2e && yarn workspace e2e e2e:run
  artifacts:
    - packages/e2e/cypress/screenshots/**
    - packages/e2e/cypress/videos/**

pipelines:
  custom:
    cypress-e2e:
      - step:
          <<: *cypress-e2e
For anyone who happens to stumble across this, I suspect that the repository is mounted into the pipeline container at "/opt/atlassian/pipelines/agent/build" rather than the working directory specified in the image. I ran a "pwd" which gave "/opt/atlassian/pipelines/agent/build", though I also ran a "cat /etc/os-release" which led me to the conclusion that it was in fact running the image I specified. I'm still not entirely sure why, even testing everything locally in the exact same container, I was getting that error.
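If you want to reproduce that check, a throwaway diagnostic step like this (the step name is made up) shows both the mount point and the image actually in use:

- step:
    name: "Diagnose runtime image"   # hypothetical step name
    script:
      - pwd                  # prints the repo mount point, e.g. /opt/atlassian/pipelines/agent/build
      - cat /etc/os-release  # identifies the OS of the image actually running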
For posterity: I was using an in-memory mongo database from this project "https://github.com/nodkz/mongodb-memory-server". It generally works by automatically downloading a mongod executable into your node_modules and using it to spin up a mongo instance. I was running into a similar error locally, which I fixed by upgrading my base image from a Debian 9 to a Debian 10 based image. Again, still not sure why it didn't run the same in the pipeline, I suppose there might be some peculiarities with how containers are run in pipelines that I'm unaware of. Ultimately my solution was installing mongod into the image itself, and forcing mongodb-memory-server to use that executable rather than the one in node_modules.
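For that last part, mongodb-memory-server supports a MONGOMS_SYSTEM_BINARY environment variable that forces it to use a system mongod instead of the one it downloads into node_modules. A minimal sketch, assuming mongod was baked into the image at /usr/bin/mongod (the path is an assumption; adjust to wherever your image installs it):

script:
  # Point mongodb-memory-server at the mongod baked into the image
  # (/usr/bin/mongod is an assumed install path)
  - export MONGOMS_SYSTEM_BINARY=/usr/bin/mongod
  - MONGOMS_DEBUG=1 yarn start:e2e && yarn workspace e2e e2e:run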

Building a Rust project with Docker is extremely slow on Google Cloud

I'm relatively new to Rust, but I've been working on a project within a Docker container. Below is my Dockerfile, and it works great. My build uses an intermediate builder stage to compile all the Cargo dependencies before the main project. Unless I update a dependency, the project builds very quickly locally. Even with the dependencies getting rebuilt, it doesn't take more than 10 minutes max on my old MacBook Pro.
FROM ekidd/rust-musl-builder as builder
WORKDIR /home/rust/
# Avoid having to install/build all dependencies by copying
# the Cargo files and making a dummy src/main.rs
COPY Cargo.toml .
COPY Cargo.lock .
RUN echo "fn main() {}" > src/main.rs
RUN cargo test
RUN cargo build --release
# We need to touch our real main.rs file or else docker will use
# the cached one.
COPY . .
RUN sudo touch src/main.rs
RUN cargo test
RUN cargo build --release
# Size optimization
RUN strip target/x86_64-unknown-linux-musl/release/project-name
# Start building the final image
FROM scratch
WORKDIR /home/rust/
COPY --from=builder /home/rust/target/x86_64-unknown-linux-musl/release/project-name .
ENTRYPOINT ["./project-name"]
However, when I set up my project to automatically build from the GitHub repo via Google Cloud Build, I was shocked to see builds taking almost 45 minutes! I figured that if I got caching set up properly for the intermediate container, at least that would shave some time off. Even though the builder successfully pulls the cached image, it doesn't seem to use it and always builds the intermediate container from scratch. Here is my cloudbuild.yaml:
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - "-c"
      - >-
        docker pull $_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:latest
        || exit 0
    id: Pull
    entrypoint: bash
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - "-t"
      - "$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:latest"
      - "--cache-from"
      - "$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:latest"
      - .
      - "-f"
      - Dockerfile
    id: Build
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - "$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:latest"
    id: Push
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    args:
      - run
      - services
      - update
      - $_SERVICE_NAME
      - "--platform=managed"
      - "--image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:latest"
      - >-
        --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
      - "--region=$_DEPLOY_REGION"
      - "--quiet"
    id: Deploy
    entrypoint: gcloud
timeout: 3600s
images:
  - "$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:latest"
options:
  substitutionOption: ALLOW_LOOSE
I'm looking for any info about what I'm doing wrong in my cloudbuild.yaml, and for tips on how to speed up my cloud builds, considering how fast it is locally. Ideally I'd like to stick with Google Cloud, but if there is another CI service that handles Rust/Docker builds like this better, I'd be open to switching.
This is what I did to improve build times for Rust projects on Google Cloud Build. Not a perfect solution, but better than nothing:
- Made similar changes to yours in the Dockerfile, to create separate cache layers for dependencies and my own sources.
- Used kaniko to leverage caching (this seems to be your particular issue):
steps:
  - name: 'gcr.io/kaniko-project/executor:latest'
    args:
      - --destination=eu.gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA
      - --cache=true
      - --cache-ttl=96h
timeout: 2400s
Docs: https://cloud.google.com/build/docs/kaniko-cache
- Changed the machine type to a higher tier; in my case:
options:
  machineType: 'E2_HIGHCPU_8'
Be careful though: changing machine types will affect your budget, so you should consider whether this is worth it for your particular project.
If you push your changes frequently, this works much better, but it's still not good enough, to be honest.
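For completeness: one reason the original --cache-from setup rebuilds everything is that with a multi-stage Dockerfile, only the final (scratch) image gets pushed, so its layers can't seed the cache for the builder stage. If you prefer to stay on the plain Docker builder instead of kaniko, a common workaround (a sketch, not from either answer; $_IMAGE stands in for the full $_GCR_HOSTNAME/$PROJECT_ID/... path) is to push the builder stage under its own tag and feed both tags to --cache-from:

steps:
  - name: gcr.io/cloud-builders/docker
    entrypoint: bash
    args:
      - "-c"
      - >-
        docker pull $_IMAGE:builder || true;
        docker pull $_IMAGE:latest || true;
        docker build --target builder --cache-from $_IMAGE:builder -t $_IMAGE:builder .;
        docker build --cache-from $_IMAGE:builder --cache-from $_IMAGE:latest -t $_IMAGE:latest .;
        docker push $_IMAGE:builder
    id: Build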
There are two things to consider in terms of speed:
On your (even old) MacBook Pro:
- You have a multi-core, hyperthreaded CPU
- The CPU can go up to 3.5 GHz in turbo mode
On Cloud Build:
- You have only one vCPU per build (by default)
- The vCPUs are "server-designed" CPUs: no high-end performance, but stable and consistent performance, around 2.1 GHz (slightly more in turbo mode)
So the difference in performance is obvious. To speed up your build, I recommend using the machine type option:
...
...
options:
  substitutionOption: ALLOW_LOOSE
  machineType: 'E2_HIGHCPU_8'
It should be better!

Travis' matrix.include not working on multiple OSes

I use matrix.include to trigger multiple tests among different OSes; the config is as follows:
matrix:
  include:
    - name: "build on linux"
      os: linux
      dist: trusty
      sudo: required
      services: docker
    - name: "build on mac"
      os: osx
      osx_image: xcode10
      env: CPPFLAGS=-I/usr/local/opt/openssl/include LDFLAGS=-L/usr/local/opt/openssl/lib
    - name: "build on windows"
      os: windows
I expected all the OSes to run the build, but only the first was triggered; the others were ignored for some reason.
The link to the Travis CI config is here.
I've found that Travis's jobs key tends to override matrix.include.
Have you tried removing the jobs section to see if the matrix works?
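Since jobs and matrix are aliases in Travis, a config that defines both risks one silently overriding the other. A minimal sketch of consolidating everything under one key (using the include list from the question, trimmed for brevity):

jobs:
  include:
    - name: "build on linux"
      os: linux
      dist: trusty
    - name: "build on mac"
      os: osx
      osx_image: xcode10
    - name: "build on windows"
      os: windows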

Request for a 32-bit Travis build machine

It seems that Travis provisions a 64-bit build machine by default when an integration process starts.
Is there any option I can use in .travis.yml to request a 32-bit Ubuntu build machine?
I really need a 32-bit OS, because 64-bit Ubuntu refuses to install the 32-bit support libraries (ia32-libs):
The following packages have unmet dependencies:
ia32-libs : Depends: ia32-libs-multiarch
E: Unable to correct problems, you have held broken packages.
No, there is no such option (at least for now). See this issue: #986 travis_ci: Add 32-bit environments
@roidrage commented on 23 Jul 2015:
Closing this issue for now, as we have no immediate plans to add this feature.
Should we add it to the roadmap eventually, we'll make sure to update this ticket.
Travis supports a number of architectures using the arch: key. Unfortunately, none of these are 32-bit.
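For reference, the documented values for the arch key are all 64-bit; a sketch:

arch:
  - amd64    # the default, x86_64
  - arm64
  - ppc64le
  - s390x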
'shoogle' at https://github.com/travis-ci/travis-ci/issues/5770 suggests that it is possible to run a 32-bit image within a 64-bit container by adding the following to .travis.yml:
services:
  - docker
script:
  - "docker run -i -v \"${PWD}:/MyProgram\" toopher/centos-i386:centos6 /bin/bash -c \"linux32 --32bit i386 /MyProgram/build.sh\""
And dlang suggests:
env:
  - ARCH="x86_64"
matrix:
  include:
    - {os: linux, d: dmd-2.071.0, env: ARCH="x86", addons: {apt: {packages: [[gcc-multilib]]}}}
    - {os: linux, d: ldc-1.0.0, env: ARCH="x86", addons: {apt: {packages: [[gcc-multilib]]}}}
script:
  - dub test --arch "$ARCH"
