This is my first time using testing in a project. I use GitLab CI and GitLab Runner to run the tests, but something weird happened: when PHPUnit executed, the output showed failures, yet the test result in GitLab was passed. GitLab should show a failed result.
I use Lumen 5.1, and GitLab Runner uses the Docker executor.
This is my .gitlab-ci.yml file
image: dragoncapital/comic:1.0.0
stages:
  - test
cache:
  paths:
    - vendor/
before_script:
  - bash .gitlab-ci.sh > /dev/null
test:7.0:
  script:
    - phpunit
This is my .gitlab-ci.sh file
#!/bin/bash
# We need to install dependencies only for Docker
[[ ! -e /.dockerenv ]] && exit 0
set -xe
composer install
cp .env.testing .env
The log and result:
As you can see, the PHPUnit tests fail, but the status in GitLab CI is passed.
Update:
The log output is quite different on my local computer, but the results are errors/failures.
At least I figured out what was wrong with this test: there are two phpunit binaries on this system, and I called the wrong one.
First, I had installed phpunit with apt-get, so phpunit is installed as an Ubuntu package.
Second, Laravel/Lumen provides phpunit in vendor/bin.
When I just type phpunit in the terminal, it calls the phpunit provided by Ubuntu, which gives unexpected results. Everything is OK when I call vendor/bin/phpunit instead of just phpunit.
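For reference, a minimal sketch of the corrected job, assuming Composer's project-local PHPUnit in vendor/bin (installed by the before_script) is the one that should run:
test:7.0:
  script:
    - vendor/bin/phpunit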
Related
I'm working on building a Gitlab-CI pipeline that deploys a Spring Boot application to a Kubernetes cluster hosted on DigitalOcean.
Fortunately, I'm right at the beginning of doing this so there's very little bloat, and I figured I'd just test that I had everything wired correctly before I went ahead and built some crazy stuff.
Essentially I've got a Gitlab-CI job that pulls this image: digitalocean/doctl:1.87.0 and I then attempt to run a number of doctl commands in the script section of the job. The results of this very simple "deploy" script:
deploy-to-kubernetes:
  stage: deploy
  image: digitalocean/doctl:1.87.0
  script:
    - doctl --help
looked like this:
Error: unknown command "sh" for "doctl"
Run 'doctl --help' for usage.
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: exit code 255
After doing a bit of digging and googling and searching and head-scratching, I hit upon this post, and figured it may apply to the doctl image too, so I then updated my Gitlab-CI job to this:
deploy-to-kubernetes:
  stage: deploy
  image:
    name: digitalocean/doctl:1.87.0
    entrypoint: [""]
  script:
    - doctl --help
and the result was this:
$ doctl --help
/bin/bash: line 128: doctl: command not found
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: exit code 1
I'm pretty sure I'm doing something absolutely idiotic, but I can't figure out what that is, so if anybody could help out that would be really appreciated, and if you need more information, let me know.
FYI: This is my first question ever posted on StackOverflow, so any feedback on what I need to change, improve etc is greatly appreciated!
Thanks!
$PATH contains only default values in this image, but doctl is available at /app/doctl, so you can call it that way: /app/doctl %command% (e.g., /app/doctl version).
I ran into the same problem and opted to just use a regular base image (Debian here) and install the doctl tool myself. It's a workaround, though.
deploy-to-kubernetes:
  stage: deploy
  image: debian:11.6-slim
  before_script:
    - apt update && apt -y upgrade && apt-get install -y curl
    - curl -Lo doctl.tar.gz https://github.com/digitalocean/doctl/releases/download/v1.91.0/doctl-1.91.0-linux-amd64.tar.gz && tar xf doctl.tar.gz
    - chmod u+x doctl
    - mv doctl /usr/local/bin/doctl
  script:
    - doctl --help
I had the same issue; here is a job that works for me:
"Deploy to DigitalOcean":
image:
name: digitalocean/doctl:latest
entrypoint: [""]
stage: deploy
needs:
- job: "Test Docker image"
script:
- /app/doctl auth init
- /app/doctl apps create-deployment $DIGITALOCEAN_STUDENT_BACKEND_ID
only:
- master
It also requires a DIGITALOCEAN_ACCESS_TOKEN CI/CD variable containing a DigitalOcean API token.
I need a wasm runtime to unit test my code on GitLab, so I have the following in my .gitlab-ci.yml:
default:
  image: emscripten/emsdk
  before_script:
    - curl https://wasmtime.dev/install.sh -sSf | bash
    - source /root/.bashrc
The wasmtime.dev script installs the binaries and updates PATH in ~/.bashrc. Running my tests (defined in the job below) fails with the message wasmtime: command not found:
unit-test:
  stage: test
  script:
    - bash test.sh
What do I need to do to make sure the changes made by the wasmtime install script apply? Thanks!
Edit
Adding export PATH="$PATH:$HOME/.wasmtime/bin" before bash test.sh in the unit-test job successfully got the wasmtime binary on the path, but I'm not quite sure I'm happy with this solution - what if the path of wasmtime changes later on? Shouldn't sourcing .bashrc do this? Thanks!
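For reference, a minimal sketch of that workaround in the job itself, assuming wasmtime's default install location of $HOME/.wasmtime/bin as described above:
unit-test:
  stage: test
  script:
    # make the installed binary visible in this job's shell
    - export PATH="$PATH:$HOME/.wasmtime/bin"
    - bash test.sh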
I am using CI pipelines on Gitlab to build docker images for deployment to Raspbian. Since my builds need to access some private NPM packages, I include the following line in the Dockerfile, which creates a token file using the value stored in the environment variable $NPM_TOKEN:
RUN echo //registry.npmjs.org/:_authToken=$NPM_TOKEN > ~/.npmrc
This works fine when building from my usual image (resin/raspberrypi3-node). However, one of my containers is built from armhf/ubuntu. When the above line is executed, the build fails with the following error:
standard_init_linux.go:207: exec user process caused "no such file or directory"
The command '/bin/sh -c echo //registry.npmjs.org/:_authToken=$NPM_TOKEN >> ~/.npmrc' returned a non-zero code: 1
The build runs fine from docker build on my development machine (Windows 10) but not within the gitlab pipeline.
I have tried stripping down my docker and pipeline files to the bare minimum, and removed the environment variable and the tilde from the path, and this still fails for the ubuntu (but not the resin) image.
Dockerfile.test.ubuntu:
FROM armhf/ubuntu
RUN echo hello > world.txt
Dockerfile.test.resin:
FROM resin/raspberrypi3-node
RUN echo hello > world.txt
gitlab-ci.yml:
build_image:
  image: docker:git
  services:
    - docker:dind
  script:
    - docker build -f Dockerfile.test.resin . # Succeeds
    - docker build -f Dockerfile.test.ubuntu . # Fails
  only:
    - master
I have searched for similar issues and have seen this error reported when running a .sh file which contained CRLF combinations. Although I am developing on Windows, my IDE (VS Code) is set up to use LF, not CRLF and I have checked all the above files for compliance.
As in here, try using double quotes for your echo argument:
RUN echo "//registry.npmjs.org/:_authToken=$NPM_TOKEN" > ~/.npmrc
And first, in your Dockerfile, do a RUN ls -alrth ~/ to check the accessibility/presence of the target folder.
That error was also reported in this thread (without any answer), with an example where the final version of the Dockerfile, as seen here, uses this .gitlab-ci.yml.
The OP bighairdave confirms in the comments:
I copied the following from the example @VonC gave, and it worked:
variables:
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_DRIVER: overlay2
before_script:
  - docker run --rm --privileged hypriot/qemu-register
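Putting that together with the original job, a rough sketch could look like the following; the hypriot/qemu-register container registers QEMU binfmt handlers on the runner so that ARM binaries in the armhf/ubuntu image can execute on an x86 runner:
build_image:
  image: docker:git
  services:
    - docker:dind
  variables:
    DOCKER_HOST: "tcp://docker:2375"
    DOCKER_DRIVER: overlay2
  before_script:
    # register QEMU handlers so ARM images can run during the build
    - docker run --rm --privileged hypriot/qemu-register
  script:
    - docker build -f Dockerfile.test.ubuntu .
  only:
    - master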
I am testing a GitLab CI pipeline with gitlab-runner exec. During a script, Boost ran into an error, and it created a log file. I want to view this log file, but I do not know how to.
.gitlab-ci.yml in project directory:
image: alpine
variables:
  GIT_SUBMODULE_STRATEGY: recursive
build:
  script:
    - apk add cmake
    - cd include/boost
    - sh bootstrap.sh
I test this on my machine with:
sudo gitlab-runner exec docker build --timeout 3600
The last several lines of the output:
Building Boost.Build engine with toolset ...
Failed to build Boost.Build build engine
Consult 'bootstrap.log' for more details
ERROR: Job failed: exit code 1
FATAL: exit code 1
bootstrap.log is what I would like to view.
Appending - cat bootstrap.log to .gitlab-ci.yml does not output the file contents because the runner exits before this line. I tried looking through past containers with sudo docker ps -a, but this does not show the one that GitLab Runner used. How can I open bootstrap.log?
You can declare an artifact for the log:
image: alpine
variables:
  GIT_SUBMODULE_STRATEGY: recursive
build:
  script:
    - apk add cmake
    - cd include/boost
    - sh bootstrap.sh
  artifacts:
    when: on_failure
    paths:
      - include/boost/bootstrap.log
Afterwards, you will be able to download the log file via the web interface.
Note that using when: on_failure will ensure that bootstrap.log will only be collected if the build fails, saving disk space on successful builds.
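If the job is being run locally with gitlab-runner exec, where there is no web interface to download artifacts from, a possible alternative sketch is an after_script section, which runs even when the script fails:
build:
  script:
    - apk add cmake
    - cd include/boost
    - sh bootstrap.sh
  after_script:
    # after_script starts from the project root, so use the full relative path
    - cat include/boost/bootstrap.log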
I have been trying to wrap my head around how to utilise BitBucket's Pipelines to auto-deploy my (Laravel) application onto a Vultr Server instance.
I have the following steps I do manually, which I am trying to replicate autonomously:
I commit my changes and push to BitBucket repo
I log into my server using Terminal: ssh root@ipaddress
I cd to the correct directory: cd /var/www/html/app/
I then pull from my BitBucket repo: git pull origin master
I then run some commands: composer install, php artisan migrate etc..
I then log out: exit
My understanding is that you can use Pipelines to automate this; is that true?
So far, I have set up a SSH key pair for pipelines and my server, so my server's authorized_keys file contains the public key from BitBucket Pipelines.
My pipelines file bitbucket-pipelines.yml is as follows:
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        deployment: staging
        caches:
          - composer
        script:
          - ssh root@ipaddress
          - cd /var/www/html/app/
          - git pull origin master
          - php artisan down
          - composer install --no-dev --prefer-dist
          - php artisan cache:clear
          - php artisan config:cache
          - php artisan route:cache
          - php artisan migrate
          - php artisan up
          - echo 'Deploy finished.'
When the pipeline executes, I get the error: bash: cd: /var/www/html/app/: No such file or directory.
I read that each script step is run in its own container.
Each step in your pipeline will start a separate Docker container to
run the commands configured in the script
The error I get makes sense if it's not executing cd /var/www/html/app within the VPS after logging into it using SSH.
Could someone point me in the right direction?
Thanks
The commands you are defining under script are going to be run in a Docker container, not on your VPS.
Instead, put all your commands in a bash file on your server.
1 - Create a bash file pull.sh on your VPS, to do all your deployment tasks
# located in /var/www/html
php artisan down
git pull origin master
composer install --no-dev --prefer-dist
php artisan cache:clear
php artisan config:cache
php artisan route:cache
php artisan migrate
php artisan up
echo 'Deploy finished.'
2 - Create a script deploy.sh in your repository, like so
echo "Deploy script started"
cd /var/www/html
sh pull.sh
echo "Deploy script finished execution"
3 - Finally update your bitbucket-pipelines.yml file
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        deployment: staging
        script:
          - cat ./deploy.sh | ssh <user>@<host>
          - echo "Deploy step finished"
I would recommend already having your repo cloned on your VPS in /var/www/html, and testing your pull.sh file manually first.
The problem with the answer marked as the solution is that the sh process won't exit if any of the commands inside it fails.
The command php artisan route:cache, for instance, can fail easily, not to mention the pull!
Even worse, the sh script will keep executing the rest of the commands without stopping when one fails.
I can't use any docker command, because after each one the CI process stops, and I can't figure out how to keep those commands from ending the CI process. I'm using sh for now, but I'll start adding some conditionals based on the exit code of the previous command, so we know if anything went wrong during the deploy.
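One way to address that, as a sketch building on the pull.sh above: make the shell abort on the first failing command with set -e, so the non-zero exit code propagates back through ssh and fails the pipeline step:
#!/bin/bash
# /var/www/html/pull.sh
set -e                      # abort on the first failing command
cd /var/www/html
php artisan down
git pull origin master
composer install --no-dev --prefer-dist
php artisan cache:clear
php artisan config:cache
php artisan route:cache
php artisan migrate
php artisan up
echo 'Deploy finished.'
Note that if a command fails after php artisan down, the site stays in maintenance mode, so some extra error handling (for example a trap that runs php artisan up) may still be worth adding.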
I know this may be an old thread, but Bitbucket does provide a pipe to do all that is mentioned above in a much cleaner way.
Please have a look at https://bitbucket.org/product/features/pipelines/integrations?p=atlassian/ssh-run
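For illustration, a rough sketch of what using that pipe could look like; the variable names follow the pipe's documentation, the version tag should be checked against the linked page, and the user, host, and command values are placeholders:
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        deployment: staging
        script:
          - pipe: atlassian/ssh-run:0.2.2   # check the linked page for the current version
            variables:
              SSH_USER: 'deploy-user'       # placeholder
              SERVER: 'your.server.ip'      # placeholder
              MODE: 'command'
              COMMAND: 'sh /var/www/html/pull.sh'
          - echo "Deploy step finished"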
Hope this helps.