Install wasmtime on GitLab CI Docker image

I need a wasm runtime to unit test my code on GitLab, so I have the following in my .gitlab-ci.yml:
default:
  image: emscripten/emsdk
  before_script:
    - curl https://wasmtime.dev/install.sh -sSf | bash
    - source /root/.bashrc
The wasmtime.dev script installs the binaries and updates PATH in ~/.bashrc. Running my tests fails with the message wasmtime: command not found (the unit-test job is defined as below):
unit-test:
  stage: test
  script:
    - bash test.sh
What do I need to do to make sure the changes of the wasmtime install script apply? Thanks!
Edit
Adding export PATH="$PATH:$HOME/.wasmtime/bin" before bash test.sh in the unit-test job successfully got the wasmtime binary on the path, but I'm not quite sure I'm happy with this solution - what if the path of wasmtime changes later on? Shouldn't sourcing .bashrc do this? Thanks!
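For now, a minimal sketch of what I ended up with, moved into the default section so every job gets the binary on its PATH (the install location $HOME/.wasmtime/bin is an assumption about the current installer layout; GitLab jobs run in a non-interactive shell, which is probably why sourcing .bashrc has no visible effect):
default:
  image: emscripten/emsdk
  before_script:
    - curl https://wasmtime.dev/install.sh -sSf | bash
    # export explicitly; non-interactive shells often return early from .bashrc
    - export PATH="$PATH:$HOME/.wasmtime/bin"
unit-test:
  stage: test
  script:
    - bash test.sh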

Related

doctl not available in gitlab-ci

I'm working on building a GitLab CI pipeline that deploys a Spring Boot application to a Kubernetes cluster hosted on DigitalOcean.
Fortunately, I'm right at the beginning of doing this so there's very little bloat, and I figured I'd just test that I had everything wired correctly before I went ahead and built some crazy stuff.
Essentially I've got a Gitlab-CI job that pulls this image: digitalocean/doctl:1.87.0 and I then attempt to run a number of doctl commands in the script section of the job. The results of this very simple "deploy" script:
deploy-to-kubernetes:
  stage: deploy
  image: digitalocean/doctl:1.87.0
  script:
    - doctl --help
looked like this:
Error: unknown command "sh" for "doctl"
Run 'doctl --help' for usage.
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: exit code 255
After doing a bit of digging and googling and searching and head-scratching, I hit upon this post, and figured it may apply to the doctl image too, so I then updated my Gitlab-CI job to this:
deploy-to-kubernetes:
  stage: deploy
  image:
    name: digitalocean/doctl:1.87.0
    entrypoint: [""]
  script:
    - doctl --help
and the result was this:
$ doctl --help
/bin/bash: line 128: doctl: command not found
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: exit code 1
I'm pretty sure I'm doing something absolutely idiotic, but I can't figure out what that is, so if anybody could help out that would be really appreciated, and if you need more information, let me know.
FYI: This is my first question ever posted on StackOverflow, so any feedback on what I need to change, improve etc is greatly appreciated!
Thanks!
$PATH only contains the default values in this image, but doctl is available at /app/doctl, so you can use it this way: /app/doctl %command% (e.g. /app/doctl version).
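For example, combined with the entrypoint override from the question, the job could look like this (untested sketch):
deploy-to-kubernetes:
  stage: deploy
  image:
    name: digitalocean/doctl:1.87.0
    entrypoint: [""]
  script:
    # call the binary by absolute path since /app is not on $PATH
    - /app/doctl version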
I ran into the same problem and opted to just use a regular alpine container and install the doctl tool myself. It's a workaround though.
deploy-to-kubernetes:
  stage: deploy
  image: debian:11.6-slim
  before_script:
    - apt update && apt -y upgrade && apt-get install -y curl
    - curl -Lo doctl.tar.gz https://github.com/digitalocean/doctl/releases/download/v1.91.0/doctl-1.91.0-linux-amd64.tar.gz && tar xf doctl.tar.gz
    - chmod u+x doctl
    - mv doctl /usr/local/bin/doctl
  script:
    - doctl --help
I had the same issue; here is a job that works for me:
"Deploy to DigitalOcean":
image:
name: digitalocean/doctl:latest
entrypoint: [""]
stage: deploy
needs:
- job: "Test Docker image"
script:
- /app/doctl auth init
- /app/doctl apps create-deployment $DIGITALOCEAN_STUDENT_BACKEND_ID
only:
- master
It also requires a DIGITALOCEAN_ACCESS_TOKEN variable containing a DigitalOcean token.

How do you view a log created during gitlab-runner exec?

I am testing a GitLab CI pipeline with gitlab-runner exec. During a script, Boost ran into an error, and it created a log file. I want to view this log file, but I do not know how to.
.gitlab-ci.yml in project directory:
image: alpine
variables:
  GIT_SUBMODULE_STRATEGY: recursive
build:
  script:
    - apk add cmake
    - cd include/boost
    - sh bootstrap.sh
I test this on my machine with:
sudo gitlab-runner exec docker build --timeout 3600
The last several lines of the output:
Building Boost.Build engine with toolset ...
Failed to build Boost.Build build engine
Consult 'bootstrap.log' for more details
ERROR: Job failed: exit code 1
FATAL: exit code 1
bootstrap.log is what I would like to view.
Appending - cat bootstrap.log to .gitlab-ci.yml does not output the file contents because the runner exits before this line. I tried looking through past containers with sudo docker ps -a, but this does not show the one that GitLab Runner used. How can I open bootstrap.log?
You can declare an artifact for the log:
image: alpine
variables:
  GIT_SUBMODULE_STRATEGY: recursive
build:
  script:
    - apk add cmake
    - cd include/boost
    - sh bootstrap.sh
  artifacts:
    when: on_failure
    paths:
      - include/boost/bootstrap.log
Afterwards, you will be able to download the log file via the web interface.
Note that using when: on_failure will ensure that bootstrap.log will only be collected if the build fails, saving disk space on successful builds.

Using BitBucket Pipelines to Deploy onto VPS via SSH Access

I have been trying to wrap my head around how to utilise BitBucket's Pipelines to auto-deploy my (Laravel) application onto a Vultr Server instance.
I have the following steps I do manually, which I am trying to replicate autonomously:
I commit my changes and push to BitBucket repo
I log into my server using Terminal: ssh root@ipaddress
I cd to the correct directory: cd /var/www/html/app/
I then pull from my BitBucket repo: git pull origin master
I then run some commands: composer install, php artisan migrate etc..
I then log out: exit
My understanding is that you can use Pipelines to automate this, is this true?
So far, I have set up a SSH key pair for pipelines and my server, so my server's authorized_keys file contains the public key from BitBucket Pipelines.
My pipelines file bitbucket-pipelines.yml is as follows:
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        deployment: staging
        caches:
          - composer
        script:
          - ssh root@ipaddress
          - cd /var/www/html/app/
          - git pull origin master
          - php artisan down
          - composer install --no-dev --prefer-dist
          - php artisan cache:clear
          - php artisan config:cache
          - php artisan route:cache
          - php artisan migrate
          - php artisan up
          - echo 'Deploy finished.'
When the pipeline executes, I get the error: bash: cd: /var/www/html/app/: No such file or directory.
I read that each script step is run in its own container.
Each step in your pipeline will start a separate Docker container to
run the commands configured in the script
The error I get makes sense if it's not executing cd /var/www/html/app within the VPS after logging into it using SSH.
Could someone guide me into the correct direction?
Thanks
The commands you are defining under script are going to be run in a Docker container, not on your VPS.
Instead, put all your commands in a bash file on your server.
1 - Create a bash file pull.sh on your VPS, to do all your deployment tasks
#/var/www/html
php artisan down
git pull origin master
composer install --no-dev --prefer-dist
php artisan cache:clear
php artisan config:cache
php artisan route:cache
php artisan migrate
php artisan up
echo 'Deploy finished.'
2 - Create a script deploy.sh in your repository, like so
echo "Deploy script started"
cd /var/www/html
sh pull.sh
echo "Deploy script finished execution"
3 - Finally update your bitbucket-pipelines.yml file
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        deployment: staging
        script:
          - cat ./deploy.sh | ssh <user>@<host>
          - echo "Deploy step finished"
I would recommend already having your repo cloned on your VPS in /var/www/html, and testing your pull.sh file manually first.
The problem with the answer marked as the solution is that the sh script won't stop if any of the commands inside it fails.
This command, php artisan route:cache for instance, can easily fail! Not to mention the pull!
And even worse, the script will keep executing the rest of the commands even when one of them fails.
I can't use any Docker commands, because after each one the CI process stops and I can't figure out how to keep those commands from exiting the CI process. I'm using the sh approach, but I'll start adding some conditionals based on the exit code of the previous command, so we know if anything went wrong during the deploy.
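One way to mitigate this (not part of the accepted answer) is to make the remote scripts fail fast with set -e, so the first failing command aborts the rest and the non-zero exit status propagates back through ssh to the pipeline. A minimal sketch for pull.sh (deploy.sh would need the same treatment):
#!/bin/bash
# pull.sh - abort on the first failing command instead of silently continuing
set -e
php artisan down
git pull origin master
composer install --no-dev --prefer-dist
php artisan cache:clear
php artisan config:cache
php artisan route:cache
php artisan migrate
php artisan up
echo 'Deploy finished.'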
I know this may be an old thread, but Bitbucket does provide a pipe that handles all of the above in a much cleaner way.
Please have a look at https://bitbucket.org/product/features/pipelines/integrations?p=atlassian/ssh-run
Hope this helps.
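For reference, a rough sketch of what using that pipe could look like (the pipe version tag and variable names here are illustrative and should be checked against the ssh-run pipe's documentation; it also assumes an SSH key is configured for Pipelines and authorized on the host):
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        deployment: staging
        script:
          - pipe: atlassian/ssh-run:0.4.1
            variables:
              SSH_USER: '<user>'      # assumed deploy user on the VPS
              SERVER: '<host>'
              MODE: 'script'
              COMMAND: 'deploy.sh'    # script from the repo, executed on the remote host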

How to properly deploy to host from gitlab-ci (+docker)?

Situation
I have one server: 192.168.1.2. This server has GitLab installed on it, along with Docker linked to a gitlab-runner. Keep in mind that we are talking about the same server.
I have a script at /etc/cfupdate.py, which is, as you can tell, a Python script. I would like to have this file in my repository with auto-deployment.
Note: The file is owned by deploymgr, a user created just for this purpose. It has rw access.
Attempt #1
.gitlab-ci.yml:
image: python:latest
before_script:
  - echo "Starting script exec."
after_script:
  - echo "CI Script Complete."
test-run:
  stage: build
  script:
    - echo "Setting up..."
    - pip3 install requests
    - python3 "cfupdate.py"
deploy:
  stage: deploy
  script:
    - docker cp $HOSTNAME:$PWD/cfupdate.py /etc/
  only:
    - master
After a bit of research: Docker is made for process and resource isolation, which is why it's impossible to reach the host this way.
PS: docker is a host-only command anyway.
Attempt #2
Running a webhook when the build finishes. This is a possibly working solution, but I would like a better one that can be contained in .gitlab-ci.yml.
Attempt #3
Given the following .gitlab-ci.yml (only deploy part):
deploy:
  stage: deploy
  script:
    - scp 'cfupdate.py' deploymgr@192.168.1.2:/etc/
  only:
    - master
I tried to SSH from the job to the host and copy the file with scp, but with no luck, as the user has a password. I don't really want to use sshpass -p to pass the password, although it could be stored in the Secret Variables section of GitLab. I also tried ssh-keygen and ssh-copy-id, but it still needs a password, and as we know, the container's SSH keys (and indeed all other files) are not saved; they are destroyed as soon as the container shuts down.
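A variant of this attempt that avoids the persistence problem is to keep a passwordless private key for deploymgr in a GitLab secret variable and load it in before_script on every run; a rough sketch (SSH_PRIVATE_KEY is an assumed variable name, not something from this setup):
deploy:
  stage: deploy
  before_script:
    - command -v ssh-agent >/dev/null || (apt-get update -y && apt-get install -y openssh-client)
    - eval $(ssh-agent -s)
    # SSH_PRIVATE_KEY: CI/CD secret variable holding a key authorized for deploymgr on the host
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh && chmod 700 ~/.ssh
    - ssh-keyscan 192.168.1.2 >> ~/.ssh/known_hosts
  script:
    - scp cfupdate.py deploymgr@192.168.1.2:/etc/
  only:
    - master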
Attempt #4
deploy:
  stage: deploy
  script:
    - curl --form "fileupload=@cfupdate.py" 192.168.1.2:[port]/upload.php
  only:
    - master
This way (I haven't really tried it) could also work, but I'm still looking for a better one. As you can see, this is a really make-do approach, and if we were talking about lots of files, this method wouldn't serve well.
Any ideas? Or any suggestions about GitLab? Maybe it has a built-in function for deployment that I don't know about?
I've run through numerous docs involving docker, gitlab-ci, etc., but they didn't help me. Still, I've successfully devised a working solution:
deploy:
  stage: deploy
  before_script:
    - apt-get update
    - apt-get -y install rsync sshpass
  script:
    - echo "Deploying to staging server..."
    - "sshpass -e rsync -vvvurz --progress -e 'ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' . deploymgr@192.168.1.2:/etc/cfupdate/"
  only:
    - master
Maybe you can consider using the shell executor instead of the docker executor for this particular repo, so you can write a plain sh script like this:
deploy:
  stage: deploy
  script:
    - cp cfupdate.py /etc/cfupdate.py
  only:
    - master

Lumen: PHPUnit reports a failure but the test passes in GitLab CI Runner

This is my first time using testing in my project. I use GitLab CI and GitLab Runner to perform the tests. But something weird happened: when phpunit executes, the output is a failure, but the test result in GitLab is passed. GitLab should show a failed result.
I use Lumen 5.1, and the GitLab Runner uses Docker.
This is my .gitlab-ci.yml file
image: dragoncapital/comic:1.0.0
stages:
  - test
cache:
  paths:
    - vendor/
before_script:
  - bash .gitlab-ci.sh > /dev/null
test:7.0:
  script:
    - phpunit
This is my .gitlab-ci.sh file
#!/bin/bash
# We need to install dependencies only for Docker
[[ ! -e /.dockerenv ]] && exit 0
set -xe
composer install
cp .env.testing .env
The log and result:
As you can see, the phpunit test fails, but the status in GitLab CI is passed.
Update:
The log output is quite different on my local computer, but the results are errors/failures.
At least I figured out what was wrong with this test: there are two phpunit binaries on this system, and I called the wrong one.
First, I installed phpunit using apt-get, so phpunit is installed as an Ubuntu package.
And secondly, Laravel/Lumen provides phpunit in vendor/bin.
When I just type phpunit in the terminal, it calls the phpunit provided by Ubuntu, and this gives me unexpected results. But everything is OK when I call vendor/bin/phpunit instead of just phpunit.
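In .gitlab-ci.yml terms, the fix amounts to pointing the job at the project-local binary, roughly:
test:7.0:
  script:
    # use the phpunit installed by composer, not the system-wide Ubuntu package
    - vendor/bin/phpunit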
