Backend server in gitlab-ci docker cypress e2e - docker

I am new to GitLab CI. I want to run integrated tests of my app using Cypress. The E2E tests require both the frontend (Ionic, Cypress) and the backend (Django, PostGIS). On my local machine everything works fine: I first launch the backend server, then run the tests. Now I want to use GitLab CI, but I don't know how to launch the backend server before the tests run in the Docker container (I've been googling around but found nothing matching my case, and I'm also new to Docker). I'm using a gitlab-ci.yaml I got from the cypress-gitlab network:
stages:
  - build
  - test

# to cache both npm modules and Cypress binary we use environment variables
# to point at the folders we can list as paths in "cache" job settings
variables:
  npm_config_cache: "$CI_PROJECT_DIR/.npm"
  CYPRESS_CACHE_FOLDER: "$CI_PROJECT_DIR/cache/Cypress"

# cache using branch name
# https://gitlab.com/help/ci/caching/index.md
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm
    - cache/Cypress
    - node_modules

# this job installs NPM dependencies and Cypress
install:
  image: cypress/base:10
  stage: build
  script:
    - npm ci
    # show where the Cypress test runner binaries are cached
    - $(npm bin)/cypress cache path
    # show all installed versions of Cypress binary
    - $(npm bin)/cypress cache list
    - $(npm bin)/cypress verify

# two jobs that run after "install" job finishes
# NPM dependencies and Cypress binary should be already installed
cypress-e2e:
  image: cypress/base:10
  stage: test
  script:
    - $(npm bin)/cypress run
  artifacts:
    expire_in: 1 week
    when: always
    paths:
      - cypress/screenshots
      - cypress/videos
    reports:
      junit:
        - results/TEST-*.xml

cypress-e2e-chrome:
  image: cypress/browsers:chrome67
  stage: test
  script:
    - $(npm bin)/cypress run --browser chrome
  artifacts:
    expire_in: 1 week
    when: always
    paths:
      - cypress/screenshots
      - cypress/videos
    reports:
      junit:
        - results/TEST-*.xml
The install job runs fine. The error occurs in the cypress-e2e job, and it seems self-explanatory and logical to me: no server is running. My problem is that I don't know how to launch the service from the script (see the sketch after the pipeline output below for what I imagine it might look like).
The output of the pipeline:
Running with gitlab-runner 13.8.0-rc1 (28e2e34a)
on docker-auto-scale 0277ea0f
Preparing the "docker+machine" executor
00:37
Using Docker executor with image cypress/base:10 ...
Pulling docker image cypress/base:10 ...
Using docker image sha256:071155d6ed07a321ae5c7a453c1fd1f04ff65bd5eeae97281c1a2088c26acf0a for cypress/base:10 with digest cypress/base#sha256:7fb73651d4a48762d5f370a497a155891eba90605ea395a4d86cafdefb153f8c ...
Preparing environment
00:03
Running on runner-0277ea0f-project-23892759-concurrent-0 via runner-0277ea0f-srm-1611586309-ee0dc093...
Getting source from Git repository
00:03
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/ctavar01/advisor_app/.git/
Created fresh repository.
Checking out 9e8a00a2 as master...
Skipping Git submodules setup
Restoring cache
00:26
Checking cache for master...
Downloading cache.zip from https://storage.googleapis.com/gitlab-com-runners-cache/project/23892759/master
Successfully extracted cache
Executing "step_script" stage of the job script
00:23
$ $(npm bin)/cypress run
Cypress could not verify that this server is running:
> http://localhost:8100
We are verifying this server because it has been configured as your `baseUrl`.
Cypress automatically waits until your server is accessible before running tests.
We will try connecting to it 3 more times...
We will try connecting to it 2 more times...
We will try connecting to it 1 more time...
Cypress failed to verify that your server is running.
Please start this server and then run Cypress again.
Uploading artifacts for failed job
00:05
Uploading artifacts...
cypress/screenshots: found 14 matching files and directories
cypress/videos: found 8 matching files and directories
Uploading artifacts as "archive" to coordinator... ok id=984717879 responseStatus=201 Created token=FCQbsg-q
Uploading artifacts...
WARNING: results/TEST-*.xml: no matching files
ERROR: No files to upload
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1
Thanks for any help
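For what it's worth, here is roughly what I imagine the job would have to look like: the backend packaged as a Docker image and attached to the job as a GitLab CI service, and the Ionic dev server started in the background before cypress run. The registry image name, the backend alias, and the wait-on dependency below are assumptions on my part, not things that exist in my project yet:

cypress-e2e:
  image: cypress/base:10
  stage: test
  services:
    # assumed: a pre-built Django/PostGIS backend image pushed to this project's registry
    - name: registry.gitlab.com/ctavar01/advisor_app/backend:latest
      alias: backend
  script:
    # start the Ionic dev server in the background so baseUrl (http://localhost:8100) answers
    - npm start &
    # wait-on is assumed to be in devDependencies; it blocks until the URL responds
    - $(npm bin)/wait-on http://localhost:8100
    - $(npm bin)/cypress run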

Related

How to build Nx monorepo apps in Gitlab CI Runner

I am trying to have a gitlab CI that performs the following actions:
Install yarn dependencies and cache them so I don't have to run yarn install in every job
Test all of my modified apps with the nx affected command
Build all of my modified apps with the nx affected command
Build my docker images with my modified apps
I have tried many ways to do this in my CI and none of them worked. I'm quite stuck.
This is my current CI:
default:
  image: registry.gitlab.com/xxxx/xxxx/xxxx

stages:
  - setup
  - test
  - build
  - forge

.distributed:
  interruptible: true
  only:
    - main
    - develop
  cache:
    key:
      files:
        - yarn.lock
    paths:
      - node_modules
      - .yarn
  before_script:
    - yarn install --cache-folder .yarn-cache --immutable --immutable-cache --check-cache
    - NX_HEAD=$CI_COMMIT_SHA
    - NX_BASE=${CI_MERGE_REQUEST_DIFF_BASE_SHA:-$CI_COMMIT_BEFORE_SHA}
  artifacts:
    paths:
      - node_modules

test:
  stage: test
  extends: .distributed
  script:
    - yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=test --parallel=3 --ci --code-coverage

build:
  stage: build
  extends: .distributed
  script:
    - yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=build --parallel=3

forge-docker-landing-staging:
  stage: forge
  services:
    - docker:20.10.16-dind
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"
      allow_failure: true
    - exists:
        - "dist/apps/landing/*"
      allow_failure: true
  script:
    - docker build -f Dockerfile.landing -t landing:staging .
Currently here is what works and what doesn't :
❌ Caching doesn't work; yarn install runs in every job that has extends: .distributed
✅ Nx affected commands work as expected (test and build)
❌ Building the apps with Docker is not working; I have some trouble with Docker-in-Docker.
Problem #1: You don't cache your .yarn-cache directory, even though you explicitly set it in your yarn install in the before_script section. The solution is simple: add .yarn-cache to your cache.paths section.
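For illustration, the adjusted cache block of .distributed would then look like this (only the .yarn-cache entry is new, everything else is from your config):

.distributed:
  cache:
    key:
      files:
        - yarn.lock
    paths:
      - node_modules
      - .yarn
      - .yarn-cache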
Regarding
it's doing yarn install in every jobs that got extends: .distributed
This is intended behavior in your pipeline, since extends basically merges sections of your gitlab-ci config, so the test stage effectively runs the following shell script in the runner image:
yarn install --cache-folder .yarn-cache --immutable --immutable-cache --check-cache
NX_HEAD=$CI_COMMIT_SHA
NX_BASE=${CI_MERGE_REQUEST_DIFF_BASE_SHA:-$CI_COMMIT_BEFORE_SHA}
yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=test --parallel=3 --ci --code-coverage
and the build stage differs only in the last line.
Once you cache your .yarn-cache folder, the install phase will be much faster.
Also in this case
artifacts:
  paths:
    - node_modules
is not needed, since node_modules will come from the cache. Removing it from artifacts will also ease the load on your GitLab instance; node_modules is usually huge and doesn't really make sense as an artifact.
Problem #2: What is your artifact?
You haven't provided your Dockerfile or any clue about what exactly your build steps produce, so I assume your build stage produces something in the dist directory. If you want to use that in your Docker build stage, you should specify it in the artifacts section of your build job:
build:
  stage: build
  extends: .distributed
  script:
    - yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=build --parallel=3
  artifacts:
    paths:
      - dist
After that, your forge-docker-landing-staging job will have an access to your build artifacts.
Problem #3: Docker is not working!
Without any logs from your CI system it's impossible to help you here, and it also runs into the SO "one question per question" policy. If your other stages are running fine, consider using kaniko instead of Docker-in-Docker, since DinD is a security nightmare (you are basically giving root rights on your builder machine to anyone who can edit the .gitlab-ci.yml file). See https://docs.gitlab.com/ee/ci/docker/using_kaniko.html; in your case something like the job below (not tested) should work:
forge-docker-landing-staging:
  stage: forge
  image:
    name: gcr.io/kaniko-project/executor:v1.9.0-debug
    entrypoint: [""]
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"
      allow_failure: true
    - exists:
        - "dist/apps/landing/*"
      allow_failure: true
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile.landing"
      --destination "${CI_REGISTRY_IMAGE}/landing:staging"

Utilizing custom docker-image causes build pipeline failure

I am trying to create a build pipeline for a small project I work on in my free time. For this, I use Spring Boot and Angular. Locally I build it with ./gradlew clean build. This works perfectly fine on my local machine, but on GitLab I run into issues I can't pinpoint. The build runs on GitLab, using its own shared runners.
My .gitlab-ci.yml looks like this:
default:
  image: oasisat777/openjdk-and-node:latest
  # If I comment out above line and comment in the line below, everything works fine & dandy
  # image: openjdk:17-jdk-bullseye

stages:
  - build

build-job:
  stage: build
  script:
    - whoami
    - java -version
    - npm -v
    - ./gradlew clean compileTestJava compileJava --stacktrace
In the above example I use a docker image based on openjdk:17-jdk-bullseye but extended to have npm available. The corresponding Dockerfile:
# goal: build microservices with spring-boot backend and angular frontend in gitlab
# req'd images: latest stable openjdk and latest node
# unfortunately there's not openjdk-with-node:latest available, so i have to build it by hand
# this ought to do the trick, using bullseye as base and then install node additionally
FROM openjdk:17-jdk-bullseye
# note: for production use node LTS (even numbers)
# https://github.com/nodesource/distributions/blob/master/README.md#deb
RUN curl -fsSL https://deb.nodesource.com/setup_17.x | bash - \
&& apt-get install -y nodejs
USER root
CMD ["bash"]
I tried to build my project with the resulting Docker container by mounting my code as a volume and then running ./gradlew build, which worked on my machine. My assumption is that this basically simulates what the gitlab-runner does when starting the build.
docker run -it -v "$PWD":/project oasisat777/openjdk-and-node:latest bash
cd /project
./gradlew clean build
Downloading https://services.gradle.org/distributions/gradle-7.4.1-bin.zip
...........10%...........20%...........30%...........40%...........50%...........60%...........70%...........80%...........90%...........100%
Welcome to Gradle 7.4.1!
Here are the highlights of this release:
- Aggregated test and JaCoCo reports
- Marking additional test source directories as tests in IntelliJ
- Support for Adoptium JDKs in Java toolchains
For more details see https://docs.gradle.org/7.4.1/release-notes.html
Starting a Gradle Daemon (subsequent builds will be faster)
[...]
BUILD SUCCESSFUL
This is the output of the build-pipeline:
$ whoami
root
$ java -version
openjdk version "17.0.2" 2022-01-18
OpenJDK Runtime Environment (build 17.0.2+8-86)
OpenJDK 64-Bit Server VM (build 17.0.2+8-86, mixed mode, sharing)
$ npm -v
8.5.5
$ ./gradlew clean compileTestJava compileJava --stacktrace
Downloading https://services.gradle.org/distributions/gradle-7.4.1-bin.zip
...........10%...........20%...........30%...........40%...........50%...........60%...........70%...........80%...........90%...........100%
Could not set executable permissions for: /root/.gradle/wrapper/dists/gradle-7.4.1-bin/58kw26xllvsiedyf3nujyarhn/gradle-7.4.1/bin/gradle
Welcome to Gradle 7.4.1!
Here are the highlights of this release:
- Aggregated test and JaCoCo reports
- Marking additional test source directories as tests in IntelliJ
- Support for Adoptium JDKs in Java toolchains
For more details see https://docs.gradle.org/7.4.1/release-notes.html
Starting a Gradle Daemon (subsequent builds will be faster)
FAILURE: Build failed with an exception.
* What went wrong:
A problem occurred starting process 'Gradle build daemon'
* Try:
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
* Exception is:
org.gradle.process.internal.ExecException: A problem occurred starting process 'Gradle build daemon'
at org.gradle.process.internal.DefaultExecHandle.execExceptionFor(DefaultExecHandle.java:241)
at org.gradle.process.internal.DefaultExecHandle.setEndStateInfo(DefaultExecHandle.java:218)
at org.gradle.process.internal.DefaultExecHandle.failed(DefaultExecHandle.java:369)
at org.gradle.process.internal.ExecHandleRunner.run(ExecHandleRunner.java:87)
at org.gradle.internal.operations.CurrentBuildOperationPreservingRunnable.run(CurrentBuildOperationPreservingRunnable.java:38)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: net.rubygrapefruit.platform.NativeException: Could not start '/usr/local/openjdk-17/bin/java'
at net.rubygrapefruit.platform.internal.DefaultProcessLauncher.start(DefaultProcessLauncher.java:27)
at net.rubygrapefruit.platform.internal.WrapperProcessLauncher.start(WrapperProcessLauncher.java:36)
at org.gradle.process.internal.ExecHandleRunner.startProcess(ExecHandleRunner.java:98)
at org.gradle.process.internal.ExecHandleRunner.run(ExecHandleRunner.java:71)
... 6 more
Caused by: java.io.IOException: Cannot run program "/usr/local/openjdk-17/bin/java" (in directory "/root/.gradle/daemon/7.4.1"): error=0, Failed to exec spawn helper: pid: 106, exit value: 1
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1143)
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1073)
at net.rubygrapefruit.platform.internal.DefaultProcessLauncher.start(DefaultProcessLauncher.java:25)
... 9 more
Caused by: java.io.IOException: error=0, Failed to exec spawn helper: pid: 106, exit value: 1
at java.base/java.lang.ProcessImpl.forkAndExec(Native Method)
at java.base/java.lang.ProcessImpl.<init>(ProcessImpl.java:314)
at java.base/java.lang.ProcessImpl.start(ProcessImpl.java:244)
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1110)
... 11 more
* Get more help at https://help.gradle.org
Cleaning up project directory and file based variables
Now, I made the following observations
When using the openjdk:17-jdk-bullseye image my build works as intended.
Whenever I use the openjdk:17-jdk-bullseye, I don't see this line in the output:
Could not set executable permissions for: /root/.gradle/wrapper/dists/gradle-7.4.1-bin/58kw26xllvsiedyf3nujyarhn/gradle-7.4.1/bin/gradle
I know that I am root, so I should be able to set +x on .../bin/gradle
When running ll on my project, this is what I see on gradlew: -rwxr-xr-x 1 alex staff [ ... ] gradlew
Unfortunately I have run out of ideas and would be thankful for any follow-up questions or observations that I may have missed. The most common answer to this problem seems to be "Make sure that gradlew is executable!" - well, it is.
While typing this question, I started wondering whether this could be an x86/x64/arm64-related issue. I just noticed the OS/ARCH field of my image is set to linux/arm64/v8 on Docker Hub.
It worked as sytech suggested: I built the Docker image using GitLab CI and pushed it into the project's container registry. I then used it in my application build, and it works as expected.
The .gitlab-ci.yml in the Dockerfile repository looks like this:
variables:
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  # Tell 'docker:dind' to enable TLS (recommended)
  # and generate certificates in the specified directory.
  DOCKER_TLS_CERTDIR: "/certs"

build-push-docker-image-job:
  # Specify a Docker image to run the job in.
  image: docker:latest
  # Specify an additional image 'docker:dind' ("Docker-in-Docker") that
  # will start up the Docker daemon when it is brought up by a runner.
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  only:
    - master
(=> source: https://www.shellhacks.com/gitlab-ci-cd-build-docker-image-push-to-registry)
This then publishes the image into my repository's container registry.
In the other build I simply reference the image:
default:
  image: registry.gitlab.com/<GROUP>/<PROJECT>/<SOME_NAME>:master
Starting a new build, the build finally works:
BUILD SUCCESSFUL in 3m 7s
11 actionable tasks: 8 executed, 3 up-to-date
Cleaning up project directory and file based variables
00:00
Job succeeded
I suspect the architecture to be the culprit.
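If the architecture really is the culprit, an alternative I did not test would be to build and push the image explicitly for the runners' architecture with Docker buildx instead of building it on the ARM machine:

# build the image for linux/amd64 and push it in one go (untested alternative)
docker buildx build --platform linux/amd64 -t oasisat777/openjdk-and-node:latest --push .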

Build Singularity container using GitLab CI

I want to build a singularity image in GitLab CI. Unfortunately, the official containers fail with:
Running with gitlab-runner 13.5.0 (ece86343) on gitlab-ci d6913e69
Preparing the "docker" executor
Using Docker executor with image quay.io/singularity/singularity:v3.7.0 ...
Pulling docker image quay.io/singularity/singularity:v3.7.0 ...
Using docker image sha256:46d3827bfb2f5088e2960dd7103986adf90f2e5b4cbea9eeb0b0eacfe10e3420 for quay.io/singularity/singularity:v3.7.0 with digest quay.io/singularity/singularity#sha256:def886335e36f47854c121be0ce0c70b2ff06d9381fe8b3d1894fee689615624 ...
Preparing environment
Running on runner-d6913e69-project-2906-concurrent-0 via <gitlab.url>...
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in <repo-path>
Checking out 708cc829 as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script
Error: unknown command "sh" for "singularity"
immediately at the beginning, when using a job like this:
build-singularity:
  image: quay.io/singularity/singularity:v3.7.0
  stage: singularity
  script:
    - build reproduction/pipeline/semrepro-singularity/semrepro-singularity.sif reproduction/pipeline/semrepro-singularity/semrepro-singularity.def
  only:
    changes:
      - reproduction/pipeline/semrepro-singularity/semrepro-singularity.def
      - reproduction/pipeline/semrepro-singularity/assets/mirrorlist
      - .gitlab/ci/build-semrepo-singularity.yml
  artifacts:
    paths:
      - reproduction/pipeline/semrepro-singularity/semrepro-singularity.sif
    expire_in: 1 hour
  interruptible: true
For me, it seems like GitLab is trying to use a shell that doesn't exist. How is this supposed to work? In the official example they use a special version of the Docker image tagged -gitlab, but that unfortunately isn't available anymore. Any ideas? I can't imagine it isn't possible to build Singularity containers within CI. Thanks a lot in advance!
EDIT: According to #tsnowlan's answer, overriding the entrypoint fixes the above issue. However, now the build fails with:
singularity build semrepro-singularity.sif semrepro-singularity.def
INFO: Starting build...
INFO: Downloading library image
84.1MiB / 84.1MiB [========================================] 100 % 28.7 MiB/s 0s
ERROR: unpackSIF failed: root filesystem extraction failed: extract command failed: ERROR : Failed to create user namespace: not allowed to create user namespace: exit status 1
FATAL: While performing build: packer failed to pack: root filesystem extraction failed: extract command failed: ERROR : Failed to create user namespace: not allowed to create user namespace: exit status 1
Cleaning up file based variables
ERROR: Job failed: exit code 1
Any ideas?
You need to finagle it a bit to make it play nice with GitLab CI. The easiest way I found was to clobber the Docker entrypoint and make the script step the full singularity build command. We're using this to build our Singularity images with v3.6.4, but it should work with v3.7.0 as well.
e.g.,
build-singularity:
  image:
    name: quay.io/singularity/singularity:v3.7.0
    entrypoint: [""]
  stage: singularity
  script:
    - singularity build reproduction/pipeline/semrepro-singularity/semrepro-singularity.sif reproduction/pipeline/semrepro-singularity/semrepro-singularity.def
  ...
edit: the gitlab-runner used must also have the privileged option enabled. This is the default on the gitlab.com shared runners, but if you use your own runners you'll need to make sure it is set in their config.
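For reference, a minimal sketch of the relevant part of a self-hosted runner's config.toml; the runner name and default image below are placeholders:

[[runners]]
  name = "my-docker-runner"      # placeholder
  executor = "docker"
  [runners.docker]
    image = "docker:stable"      # placeholder default job image
    privileged = true            # needed so singularity build can create user namespaces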

Gitlab pipeline running very slow

I am using GitLab as my DevOps platform and running the pipeline in a Docker container, so I am using the Docker executor and my runner itself runs as a Docker container.
Below is my gitlab-ci.yml file, which does nothing but npm install cypress:
stages:
  - release

release:
  image: node:12.19.0
  stage: release
  only:
    refs:
      - master
      - alpha
      - /^(([0-9]+)\.)?([0-9]+)\.x/
      - /^([0-9]+)\.([0-9]+)\.([0-9]+)(?:-([0-9A-Za-z-]+(?:\.[0-9A-Za-z-]+)*))?(?:\+[0-9A-Za-z-]+)?$/
  before_script:
    - export http_proxy=http://17.14.45.41:8080/
    - export https_proxy=http://17.14.45.41:8080/
    - echo 'strict-ssl=false'>>.npmrc
  script:
    # - npm ci
    - npm install cypress
When I run this job, it takes almost 12 minutes, which is a lot of time. My GitLab is self-hosted and I use a proxy to talk to the outside world, but I don't think the proxy is the issue, because docker pull works fine and runs quickly.
I don't know if there is anything I could do or am missing in the GitLab configuration, but if anyone has any ideas, please let me know. That would be a great help.
I don't know your project, or whether you have too many dependencies to download and install.
To improve the performance you need to use GitLab's cache feature: https://docs.gitlab.com/ee/ci/caching/
But before doing that, you need to configure the Cypress cache folder using the environment variable CYPRESS_CACHE_FOLDER (https://docs.cypress.io/guides/getting-started/installing-cypress.html#Environment-variables); look at my example below:
CYPRESS_CACHE_FOLDER: '$CI_PROJECT_DIR/cache/Cypress'
This tells Cypress to download all the dependencies and binaries to this specific folder, and after that I configured GitLab to cache this folder:
stage: ci
cache:
  paths:
    - cache/Cypress
In your case your .gitlab-ci.yml file will be
stages:
  - release

release:
  image: node:12.19.0
  variables:
    CYPRESS_CACHE_FOLDER: '$CI_PROJECT_DIR/cache/Cypress'
  stage: release
  cache:
    paths:
      - cache/Cypress
  only:
    refs:
      - master
      - alpha
      - /^(([0-9]+)\.)?([0-9]+)\.x/
      - /^([0-9]+)\.([0-9]+)\.([0-9]+)(?:-([0-9A-Za-z-]+(?:\.[0-9A-Za-z-]+)*))?(?:\+[0-9A-Za-z-]+)?$/
  before_script:
    - export http_proxy=http://17.14.45.41:8080/
    - export https_proxy=http://17.14.45.41:8080/
    - echo 'strict-ssl=false'>>.npmrc
  script:
    # - npm ci
    - npm install cypress
But don't forget that you need to configure caching according to the executor you are using; the details are in the GitLab docs.
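As an illustration only: with the Docker executor spread over several runner machines, the cache usually needs a shared store. A sketch of the runner's config.toml for an S3-backed cache, where the bucket name and region are placeholders:

[[runners]]
  executor = "docker"
  [runners.cache]
    Type   = "s3"
    Shared = true
    [runners.cache.s3]
      ServerAddress  = "s3.amazonaws.com"
      BucketName     = "my-ci-cache"   # placeholder
      BucketLocation = "eu-west-1"     # placeholder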

Bitbucket Pipelines - How to use the same Docker container for multiple steps?

I have set up Continuous Deployment for my web application using the configuration below (bitbucket-pipelines.yml).
pipelines:
  branches:
    master:
      - step:
          name: Deploy to production
          trigger: manual
          deployment: production
          caches:
            - node
          script:
            # Install dependencies
            - yarn install
            - yarn global add gulp-cli
            # Run tests
            - yarn test:unit
            - yarn test:integration
            # Build app
            - yarn run build
            # Deploy to production
            - yarn run deploy
Although this works, I would like to increase the build speed by running the unit and integration test steps in parallel.
What I've tried
pipelines:
  branches:
    master:
      - step:
          name: Install dependencies
          script:
            - yarn install
            - yarn global add gulp-cli
      - parallel:
          - step:
              name: Run unit tests
              script:
                - yarn test:unit
          - step:
              name: Run integration tests
              script:
                - yarn test:integration
      - step:
          name: Build app
          script:
            - yarn run build
      - step:
          name: Deploy to production
          trigger: manual
          deployment: production
          script:
            - yarn run deploy
This also has the advantage of seeing the different steps in Bitbucket including the execution time per step.
The problem
This does not work because for each step a clean Docker container is created and the dependencies are no longer installed on the testing steps.
I know that I can share files between steps using artifacts, but that would still require multiple containers to be created which increases the total execution time.
So my question is...
How can I share the same Docker container between multiple steps?
I had the same issue a while ago, found a way to do it, and I'm using it successfully right now.
You can do this using Docker's save and load commands along with Bitbucket's artifacts. You just need to make sure that your image isn't too large, because Bitbucket's artifact limit is 1 GB; you can easily stay under it using multi-stage builds and other tricks.
- step:
    name: Build app
    script:
      - yarn run build
      - docker save --output <backup-file-name>.tar <images-you-want-to-export>
    artifacts:
      - <backup-file-name>.tar
- step:
    name: Deploy to production
    trigger: manual
    deployment: production
    script:
      - docker load --input <backup-file-name>.tar
      - yarn run deploy
You might also like to use BitBucket's caches which can make building Docker images much faster. For example, you can make it so that NPM packages are only installed when package.json and yarn.lock files change.
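As a rough sketch of that idea (syntax written from memory, so please verify it against the caching docs linked below): define a custom cache keyed on the lockfiles, then list it under a step's caches section instead of the built-in node cache.

definitions:
  caches:
    yarn-modules:
      key:
        files:
          - package.json
          - yarn.lock
      path: node_modules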
Further Reading
docker save (Docker 17): https://devdocs.io/docker~17/engine/reference/commandline/save/index
docker load (Docker 17): https://devdocs.io/docker~17/engine/reference/commandline/load/index
BitBucket Artifacts: https://confluence.atlassian.com/bitbucket/using-artifacts-in-steps-935389074.html
BitBucket Pipelines Caches: https://confluence.atlassian.com/bitbucket/caching-dependencies-895552876.html
Each step runs in its own Docker container, and its own volume. So you cannot have two steps running on the same build container.
Diving deeper into your problem
Are you trying to optimize for build minute consumption, or how long it takes for your build to complete?
If you're optimizing for build minutes, stick with what you have now, as the overhead of using multiple steps and artifacts will add some build minutes, but you'll lose out on the flexibility these features provide. Additionally, try to ensure you're using a small Docker image for your build environment, as that will be pulled faster.
If you're optimizing for pipeline completion time, I'd recommend you go with your idea of using artifacts and parallel steps. While the total execution time is expected to be higher, you will be waiting for less time to see the result of your pipeline.
A possible solution which I recommend:
- step:
    name: Install dependencies
    script:
      - yarn install
      - yarn global add gulp-cli
Your first step above should be baked into a pre-built Docker container, which you host on Docker Hub and reference via image: username/deployment-docker:latest.
Then, both steps can use this container for their tests.
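A minimal sketch of what that might look like, reusing the placeholder image name from above (username/deployment-docker:latest is assumed to already contain yarn and gulp-cli):

image: username/deployment-docker:latest

pipelines:
  branches:
    master:
      - parallel:
          - step:
              name: Run unit tests
              script:
                - yarn test:unit
          - step:
              name: Run integration tests
              script:
                - yarn test:integration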
