How can I make a custom command run always with `when: always` - circleci

I have a CircleCI config which includes the following custom command:
remove-circle-ip:
  description: "remove current Circle CI box IP from inbound security group rules for DB"
  steps:
    - aws-white-list-circleci-ip/remove:
        tag-key: circleci
        tag-value: whitelistmeplease
        port: 5432
which I use in my job as follows:
jobs:
  test:
    docker:
      - image: nikolaik/python-nodejs:python3.8-nodejs12
        environment:
          AWS_DEFAULT_REGION: us-east-2
    steps:
      - setup
      - install-python-deps
      - add-circle-ip
      - run:
          name: run tests
          command: |
            poetry run coverage run --source='.' manage.py test
      - run:
          name: remove circle IP
          command: remove-circle-ip
          when: always
I'd like the remove circle IP step to run even if the tests before it fail, but I can't seem to figure out the syntax for this. Previously, I had just used - remove-circle-ip to run the command rather than putting it in a run block, i.e.:
jobs:
  test:
    docker:
      ...
    steps:
      - setup
      - ...
      - add-circle-ip
      - ...
      - remove-circle-ip
but couldn't figure out how to specify when: always if I did it that way.
But now, after switching to calling my command as part of a run block, it fails with "remove-circle-ip: command not found".
So how can I make this command always run, even if the steps before it fail?

I'm fairly new to CircleCI, so there may be a better way to do this, or maybe this shouldn't be done at all; however, something similar was done (before I joined) on a project I'm working on. It was achieved by making every step report success, whether it actually succeeded or failed, which allows the command at the end to always run. The commands are all terminal commands, so they just have || true at the end (see the sketch below). I'm not sure how you would achieve that with a more complex command or a built-in command.
In our case the steps that can fail are optional and we don't care whether they actually fail or not. However, if you want to report the failure, I think you should be able to store the failure from a previous step somewhere and add a final step that reports it.
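For concreteness, here is roughly what that workaround would look like applied to the job from the question (a sketch only; the step names come from the question, and the trade-off is that the job can no longer fail on test errors):

steps:
  - setup
  - install-python-deps
  - add-circle-ip
  - run:
      name: run tests (always reports success)
      # '|| true' masks the test exit code, so the steps that follow always run
      command: |
        poetry run coverage run --source='.' manage.py test || true
  - remove-circle-ip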

Related

Bitbucket pipelines: Why does the pipeline not seem to be using my custom docker image?

In my pipelines yml file, I specify a custom image to use from my AWS ECR repository. When the pipeline runs, the "Build setup" logs suggests that the image was pulled in and used without issue:
Images used:
build : 123456789.dkr.ecr.ca-central-1.amazonaws.com/my-image@sha256:346c49ea675d8a0469ae1ddb0b21155ce35538855e07a4541a0de0d286fe4e80
I had worked through some issues locally relating to having my Cypress E2E test suite run properly in the container. Having fixed those issues, I expected everything to run the same in the pipeline. However, looking at the pipeline logs it seems that it was being run with an image other than the one I specified (I suspect it's using the Atlassian default image). Here is the source of my suspicion:
STDERR: /opt/atlassian/pipelines/agent/build/packages/server/node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.0.14/mongod: /usr/lib/x86_64-linux-gnu/libcurl.so.4: version `CURL_OPENSSL_3' not found (required by /opt/atlassian/pipelines/agent/build/packages/server/node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.0.14/mongod)
I know the working directory of the default Atlassian image is "/opt/atlassian/pipelines/agent/build/". Is there a reason that this image would be used and not the one I specified? Here is my pipelines config:
image:
  name: 123456789.dkr.ecr.ca-central-1.amazonaws.com/my-image:1.4
  aws:
    access-key: $AWS_ACCESS_KEY_ID
    secret-key: $AWS_SECRET_ACCESS_KEY
cypress-e2e: &cypress-e2e
  name: "Cypress E2E tests"
  caches:
    - cypress
    - nodecustom
    - yarn
  script:
    - yarn pull-dev-secrets
    - yarn install
    - $(npm bin)/cypress verify || $(npm bin)/cypress install && $(npm bin)/cypress verify
    - yarn build:e2e
    - MONGOMS_DEBUG=1 yarn start:e2e && yarn workspace e2e e2e:run
  artifacts:
    - packages/e2e/cypress/screenshots/**
    - packages/e2e/cypress/videos/**
pipelines:
  custom:
    cypress-e2e:
      - step:
          <<: *cypress-e2e
For anyone who happens to stumble across this: I suspect that the repository is mounted into the pipeline container at "/opt/atlassian/pipelines/agent/build" rather than the working directory specified in the image. I ran a "pwd", which gave "/opt/atlassian/pipelines/agent/build", but I also ran a "cat /etc/os-release", which led me to the conclusion that it was in fact running the image I specified. I'm still not entirely sure why, even after testing everything locally in the exact same container, I was getting that error.
For posterity: I was using an in-memory Mongo database from this project: https://github.com/nodkz/mongodb-memory-server. It generally works by automatically downloading a mongod executable into your node_modules and using it to spin up a Mongo instance. I was running into a similar error locally, which I fixed by upgrading my base image from a Debian 9 to a Debian 10 based image. Again, I'm still not sure why it didn't run the same in the pipeline; I suppose there might be some peculiarities with how containers are run in pipelines that I'm unaware of. Ultimately my solution was installing mongod into the image itself and forcing mongodb-memory-server to use that executable rather than the one in node_modules.
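One way to wire up that last step (a sketch, not from the original post: MONGOMS_SYSTEM_BINARY is the mongodb-memory-server setting for pointing at an existing mongod, and /usr/bin/mongod is an assumed install path, so check where your image actually puts the binary) is to export the variable in the step's script before the tests start:

script:
  # point mongodb-memory-server at the mongod baked into the image
  # instead of the binary it would otherwise download into node_modules
  - export MONGOMS_SYSTEM_BINARY=/usr/bin/mongod
  - MONGOMS_DEBUG=1 yarn start:e2e && yarn workspace e2e e2e:run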

Building a rust project with docker is extremely slow on google cloud

I'm relatively new to Rust, but I've been working on a project within a Docker container. Below is my Dockerfile, and it works great. My build uses an intermediate builder stage to build all the cargo dependencies before the main project. Unless I update a dependency, the project builds very quickly locally. Even with the dependencies getting rebuilt, it doesn't take more than 10 minutes max on my old MacBook Pro.
FROM ekidd/rust-musl-builder as builder
WORKDIR /home/rust/
# Avoid having to install/build all dependencies by copying
# the Cargo files and making a dummy src/main.rs
COPY Cargo.toml .
COPY Cargo.lock .
RUN echo "fn main() {}" > src/main.rs
RUN cargo test
RUN cargo build --release
# We need to touch our real main.rs file or else docker will use
# the cached one.
COPY . .
RUN sudo touch src/main.rs
RUN cargo test
RUN cargo build --release
# Size optimization
RUN strip target/x86_64-unknown-linux-musl/release/project-name
# Start building the final image
FROM scratch
WORKDIR /home/rust/
COPY --from=builder /home/rust/target/x86_64-unknown-linux-musl/release/project-name .
ENTRYPOINT ["./project-name"]
However, when I set up my project to build automatically from the GitHub repo via Google Cloud Build, I was shocked to see builds taking almost 45 minutes! I figured that if I got the caching set up properly for the builder stage, at least that would shave some time off. Even though the build successfully pulls the cached image, it doesn't seem to use it and always builds the builder stage from scratch. Here is my cloudbuild.yaml:
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - "-c"
      - >-
        docker pull $_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:latest
        || exit 0
    id: Pull
    entrypoint: bash
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - "-t"
      - "$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:latest"
      - "--cache-from"
      - "$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:latest"
      - .
      - "-f"
      - Dockerfile
    id: Build
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - "$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:latest"
    id: Push
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    args:
      - run
      - services
      - update
      - $_SERVICE_NAME
      - "--platform=managed"
      - "--image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:latest"
      - >-
        --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
      - "--region=$_DEPLOY_REGION"
      - "--quiet"
    id: Deploy
    entrypoint: gcloud
timeout: 3600s
images:
  - "$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:latest"
options:
  substitutionOption: ALLOW_LOOSE
I'm looking for any info about what I'm doing wrong in my cloudbuild.yaml and tips on how to speed up my cloud builds, considering it's so fast locally. Ideally I'd like to stick with Google Cloud, but if there is another CI service that handles Rust/Docker builds like this better, I'd be open to switching.
This is what I did to improve build times for Rust projects on Google Cloud Build. It's not a perfect solution, but better than nothing:
- Made changes similar to yours in the Dockerfile to create different cache layers for dependencies and my own sources.
- Used kaniko to leverage caching (this seems to be your particular issue):
steps:
  - name: 'gcr.io/kaniko-project/executor:latest'
    args:
      - --destination=eu.gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA
      - --cache=true
      - --cache-ttl=96h
timeout: 2400s
Docs: https://cloud.google.com/build/docs/kaniko-cache
- Changed the machine type to a higher option; in my case:
options:
  machineType: 'E2_HIGHCPU_8'
Be careful though: changing machine types will affect your budget, so you should consider whether this is worth it for your particular project.
If you push your changes frequently, this works much better, though to be honest it's still not good enough.
There are two things to consider in terms of speed:
On your (even old) MacBook Pro:
- You have a multi-core, hyperthreaded CPU
- The CPU can go up to 3.5 GHz in turbo mode
On Cloud Build:
- You have only one vCPU per build (by default)
- The vCPUs are "server-designed" CPUs: no high-end performance, but stable and consistent performance, around 2.1 GHz (slightly more in turbo mode)
So the difference in performance is obvious. To speed up your build, I recommend using the machine type option:
...
...
options:
  substitutionOption: ALLOW_LOOSE
  machineType: 'E2_HIGHCPU_8'
It should be better!

Amplify save environment variables to backend

Following the docs, I set my environment variable in the console ($CLIENT_ID).
In the console I added an echo command to try to insert the variable into a .env file.
The error I keep getting is "There was an issue connecting to your repo provider." When I remove the echo line, the build passes. I've tried single/double quotes and putting the line above/below the other lines under the build commands phase.
Here's the backend section for the build process.
backend:
  phases:
    build:
      commands:
        - echo 'CLIENT_ID=$CLIENT_ID' >> backend/.env
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
I wrote a comment, but to make it easier, I'll quote from the answers to this question:
build:
  commands:
    - npm run build
    - VARIABLE_NAME_1=$VARIABLE_NAME_1 # it works like this
    - VARIABLE_NAME_2=${VARIABLE_NAME_2} # it also works this way
Please upvote the original answers, and flag this question as a duplicate.
It seems this is a feature request:
https://github.com/aws-amplify/amplify-cli/issues/4347
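As a side note on the snippet from the question: with single quotes the shell writes the literal string $CLIENT_ID into backend/.env rather than its value, so the write only does what's intended with double quotes. A minimal sketch of that variant (an illustration only, not a confirmed fix for the repo-provider error, which appears to be the issue tracked above):

backend:
  phases:
    build:
      commands:
        # double quotes so the shell expands $CLIENT_ID at build time
        - echo "CLIENT_ID=$CLIENT_ID" >> backend/.env
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple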

How to use GitLab CI in combination with a JUnit Rule that launches Docker containers?

I have a set of integration tests that rely on a postgres database being available. In order for the tests to be independent, I am using this project to start a postgres docker container before the tests:
@Rule
public DockerRule postgresDockerRule = DockerRule
        .builder()
        .imageName("postgres:9")
        .expose(databaseConfig.port().toString(), "5432")
        .env("POSTGRES_PASSWORD", databaseConfig.password())
        .waitForMessage("PostgreSQL init process complete; ready for start up.", 60)
        .keepContainer(false)
        .build();
This works fine locally. The rule starts up the container, the tests are run and after the tests, the container is deleted.
However, I am having trouble getting these tests to run on gitlab.com.
The tests always fail with the following exception (this is the end of a longer stack trace):
Caused by: java.io.IOException: No such file or directory
at jnr.unixsocket.UnixSocketChannel.doConnect(UnixSocketChannel.java:94)
at jnr.unixsocket.UnixSocketChannel.connect(UnixSocketChannel.java:102)
at com.spotify.docker.client.ApacheUnixSocket.connect(ApacheUnixSocket.java:73)
at com.spotify.docker.client.UnixConnectionSocketFactory.connectSocket(UnixConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:71)
at org.glassfish.jersey.apache.connector.ApacheConnector.apply(ApacheConnector.java:435)
... 21 more
The project providing the DockerRule uses the spotify docker client to connect to the remote API of the docker daemon. (That is why it throws an IOException stating "No such file or directory" - it cannot find the socket.)
My .gitlab-ci.yml file looks like this:
stages:
  - build
  - deploy
build_rest-api:
  image: openjdk:8
  stage: build
  script:
    - ./gradlew clean build -Dorg.gradle.parallel=true
  artifacts:
    when: always
    paths:
      - 'rest-api/build/distributions/*.zip'
      - '*/build/reports/*'
deploy_on_development:
  image: governmentpaas/cf-cli
  stage: deploy
  before_script:
    - cf api ...
    - cf auth ...
    - cf target -o XXX -s development
  script:
    - cf push ....
  only:
    - master
What I would like to achieve is:
- Integration tests are run locally and during the CI process
- Integration tests connect to a real database
- No difference between local and CI test configuration
I thought about providing the Postgres database as a service during the CI process using the services section of .gitlab-ci.yml (sketched below). But that would mean that I have to manually start up a Postgres database before I can run my integration tests locally. What I liked about the JUnit rule approach was that I could simply run my integration tests like any other tests, just by having Docker running in the background.
It would be nice if someone could come up with a solution that allows me to connect to a Docker instance during the CI process, but I am also happy to hear ideas on how to change my overall integration testing setup to make this work.
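For reference, the services-based alternative mentioned above would look roughly like this in .gitlab-ci.yml (a sketch, not from the original post; the job name, Gradle task and password are placeholders):

integration_tests:
  image: openjdk:8
  stage: build
  services:
    - postgres:9
  variables:
    # the postgres service reads this on startup; the tests must use the same password
    POSTGRES_PASSWORD: integration-test
  script:
    # inside the job the database is reachable at host "postgres", port 5432
    - ./gradlew integrationTest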

Getting an error while trying to use a command under the lifecycle tag on kubernetes

I'm successfully running Kubernetes, gcloud and Postgres, but I want to make some modifications after pod startup. I'm trying to move some files, so I tried these 3 options:
Option 1:
image: paunin/postgresql-cluster-pgsql
lifecycle:
  postStart:
    exec:
      command: [/bin/cp /var/lib/postgres/data /tmpdatavolume/]
Option 2:
image: paunin/postgresql-cluster-pgsql
lifecycle:
  postStart:
    exec:
      command:
        - "cp"
        - "/var/lib/postgres/data"
        - "/tmpdatavolume/"
Option 3:
image: paunin/postgresql-cluster-pgsql
lifecycle:
  postStart:
    exec:
      command: ["/bin/cp "]
      args: ["/var/lib/postgres/data","/tmpdatavolume/"]
With options 1 and 2, I'm getting the same error (from kubectl get events):
Killing container with docker id f436e40f5df2: PostStart handler: Error executing in Docker Container: -1
And with option 3 it won't even let me upload the YAML file, giving me this error:
error validating "postgres-master.yaml": error validating data: found invalid field args for v1.ExecAction; if you choose to ignore these errors, turn validation off with --validate=false
Any help would be appreciated! Thanks.
PS: I only pasted part of my YAML file, since I wasn't getting any errors before I added those new lines.
Here's the documentation about lifecycle hooks that you might find useful.
Your option 1 won't work and should give you the error you saw; it should be ["/bin/cp","/var/lib/postgres/data","/tmpdatavolume/"] instead. Option 2 is also a right way to specify it. Can you kubectl exec into your pod and run those commands to see what error message they generate? Do something like: kubectl exec <pod-name> -i -t -- bash -il
The error message shown in option 3 means that you're not passing a valid configuration to the API server. To learn the API definition, see v1.Lifecycle and after a few clicks into its child fields you'll find args isn't valid under lifecycle.postStart.exec.
Alternatively, you can find those API definition using kubectl explain, e.g. kubectl explain pods.spec.containers.lifecycle.postStart.exec in this case.
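Putting that together, the hook from the question written the way this answer suggests would look like this (a sketch reusing the image and paths from the question):

image: paunin/postgresql-cluster-pgsql
lifecycle:
  postStart:
    exec:
      # a single array: the binary followed by one element per argument,
      # with no separate 'args' field under exec
      command: ["/bin/cp", "/var/lib/postgres/data", "/tmpdatavolume/"]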
