doctl not available in gitlab-ci - docker

I'm working on building a Gitlab-CI pipeline that deploys a Spring Boot application to a Kubernetes cluster hosted on DigitalOcean.
Fortunately, I'm right at the beginning of doing this so there's very little bloat, and I figured I'd just test that I had everything wired correctly before I went ahead and built some crazy stuff.
Essentially I've got a Gitlab-CI job that pulls this image: digitalocean/doctl:1.87.0 and I then attempt to run a number of doctl commands in the script section of the job. The results of this very simple "deploy" script:
deploy-to-kubernetes:
  stage: deploy
  image: digitalocean/doctl:1.87.0
  script:
    - doctl --help
looked like this:
Error: unknown command "sh" for "doctl"
Run 'doctl --help' for usage.
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: exit code 255
After doing a bit of digging and googling and searching and head-scratching, I hit upon this post, and figured it may apply to the doctl image too, so I then updated my Gitlab-CI job to this:
deploy-to-kubernetes:
  stage: deploy
  image:
    name: digitalocean/doctl:1.87.0
    entrypoint: [""]
  script:
    - doctl --help
and the result was this:
$ doctl --help
/bin/bash: line 128: doctl: command not found
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: exit code 1
I'm pretty sure I'm doing something absolutely idiotic, but I can't figure out what that is, so if anybody could help out that would be really appreciated, and if you need more information, let me know.
FYI: This is my first question ever posted on StackOverflow, so any feedback on what I need to change, improve etc is greatly appreciated!
Thanks!

$PATH contains only the default values in this image, but doctl is available at /app/doctl. So you can call it with the full path: /app/doctl %command% (e.g. /app/doctl version).
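For example, combining the cleared entrypoint from the question with the full path gives a job roughly like this (a minimal sketch, untested; the image tag is just the one from the question):

deploy-to-kubernetes:
  stage: deploy
  image:
    name: digitalocean/doctl:1.87.0
    entrypoint: [""]
  script:
    - /app/doctl version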

I ran into the same problem and opted to just use a plain Debian container and install the doctl tool myself. It's a workaround, though.
deploy-to-kubernetes:
  stage: deploy
  image: debian:11.6-slim
  before_script:
    - apt update && apt -y upgrade && apt-get install -y curl
    - curl -Lo doctl.tar.gz https://github.com/digitalocean/doctl/releases/download/v1.91.0/doctl-1.91.0-linux-amd64.tar.gz && tar xf doctl.tar.gz
    - chmod u+x doctl
    - mv doctl /usr/local/bin/doctl
  script:
    - doctl --help

I had the same issue; here is a job that works for me:
"Deploy to DigitalOcean":
image:
name: digitalocean/doctl:latest
entrypoint: [""]
stage: deploy
needs:
- job: "Test Docker image"
script:
- /app/doctl auth init
- /app/doctl apps create-deployment $DIGITALOCEAN_STUDENT_BACKEND_ID
only:
- master
It also requires a DIGITALOCEAN_ACCESS_TOKEN CI/CD variable containing a DigitalOcean API token.
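doctl picks up the DIGITALOCEAN_ACCESS_TOKEN environment variable automatically, which is why auth init runs non-interactively here. If you prefer to be explicit, the token can also be passed on the command line (a sketch, assuming your doctl version supports the --access-token global flag):

  script:
    - /app/doctl auth init --access-token "$DIGITALOCEAN_ACCESS_TOKEN"
    - /app/doctl apps create-deployment $DIGITALOCEAN_STUDENT_BACKEND_ID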

Related

Install wasmtime on gitlab CI docker image

I need a wasm runtime to unit test my code on GitLab, so I have the following in my .gitlab-ci.yml:
default:
  image: emscripten/emsdk
  before_script:
    - curl https://wasmtime.dev/install.sh -sSf | bash
    - source /root/.bashrc
The wasmtime.dev script installs the binaries and updates PATH in ~/.bashrc. Running my tests (job specified below) fails with the message wasmtime: command not found:
unit-test:
  stage: test
  script:
    - bash test.sh
What do I need to do to make sure the changes of the wasmtime install script apply? Thanks!
Edit
Adding export PATH="$PATH:$HOME/.wasmtime/bin" before bash test.sh in the unit-test job successfully got the wasmtime binary on the path, but I'm not quite sure I'm happy with this solution - what if the path of wasmtime changes later on? Shouldn't sourcing .bashrc do this? Thanks!
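For reference, the workaround from the edit looks roughly like this in the job definition (a sketch based on the edit above; the path assumes wasmtime's default install location under $HOME/.wasmtime):

unit-test:
  stage: test
  script:
    - export PATH="$PATH:$HOME/.wasmtime/bin"
    - bash test.sh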

How to run bitbucket pipeline to deploy php based app on nanobox

I am trying to set up a Bitbucket pipeline for a PHP-based (Laravel-Lumen) app intended to be deployed on nanobox.io. I want this pipeline to deploy my app as soon as code changes are committed.
My bitbucket-pipelines.yml looks like this
image: php:7.1.29
pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          caches:
            - composer
          script:
            - apt-get update && apt-get install -y unzip
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer install
            # - vendor/bin/phpunit
            - bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
            - nanobox deploy
This gives the following error:
+ nanobox deploy
Failed to validate provider - missing docker - exec: "docker": executable file not found in $PATH
Using nanobox with native requires tools that appear to not be available on your system.
docker
View these requirements at docs.nanobox.io/install
I then followed this page and changed the second-to-last line to look like this:
sudo bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
Having done that, I am getting the following error:
+ sudo bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
bash: sudo: command not found
I've run out of tricks here, and I don't have much experience in this area. Any help is very much appreciated.
First, you can't use sudo in pipelines, but that's probably not relevant here. The issue is that the nanobox CLI wants to execute docker, which isn't installed. You should enable the docker service for your step.
image: php:7.1.29
pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          # Enable docker service
          services:
            - docker
          caches:
            - composer
          script:
            - docker version
You might want to have a look at the Pipelines docs as well: Run Docker commands in Bitbucket Pipelines
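Putting it together with the script from the question, the step could look roughly like this (a sketch, untested; it simply adds the docker service to the original step):

image: php:7.1.29
pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          services:
            - docker
          caches:
            - composer
          script:
            - apt-get update && apt-get install -y unzip
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer install
            - bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
            - nanobox deploy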

How do you view a log created during gitlab-runner exec?

I am testing a GitLab CI pipeline with gitlab-runner exec. During a script, Boost ran into an error, and it created a log file. I want to view this log file, but I do not know how to.
.gitlab-ci.yml in project directory:
image: alpine
variables:
GIT_SUBMODULE_STRATEGY: recursive
build:
script:
- apk add cmake
- cd include/boost
- sh bootstrap.sh
I test this on my machine with:
sudo gitlab-runner exec docker build --timeout 3600
The last several lines of the output:
Building Boost.Build engine with toolset ...
Failed to build Boost.Build build engine
Consult 'bootstrap.log' for more details
ERROR: Job failed: exit code 1
FATAL: exit code 1
bootstrap.log is what I would like to view.
Appending - cat bootstrap.log to .gitlab-ci.yml does not output the file contents because the runner exits before this line. I tried looking through past containers with sudo docker ps -a, but this does not show the one that GitLab Runner used. How can I open bootstrap.log?
You can declare an artifact for the log:
image: alpine
variables:
  GIT_SUBMODULE_STRATEGY: recursive
build:
  script:
    - apk add cmake
    - cd include/boost
    - sh bootstrap.sh
  artifacts:
    when: on_failure
    paths:
      - include/boost/bootstrap.log
Afterwards, you will be able to download the log file via the web interface.
Note that using when: on_failure will ensure that bootstrap.log will only be collected if the build fails, saving disk space on successful builds.

Docker: permission denied while trying to connect to Docker Daemon with local CircleCI build

I have a very simple config.yml:
version: 2
jobs:
  build:
    working_directory: ~/app
    docker:
      - image: circleci/node:8.4.0
    steps:
      - checkout
      - run: node -e "console.log('Hello from NodeJS ' + process.version + '\!')"
      - run: yarn
      - setup_remote_docker
      - run: docker build .
All it does: boot a node image, test if node is running, do a yarn install and a docker build.
My dockerfile is nothing special; it has a COPY and ENTRYPOINT.
When I run circleci build on my MacBook Air using Docker Native, I get the following error:
Got permission denied while trying to connect to the Docker daemon socket at unix://[...]
If I change the docker build . command to: sudo docker build ., everything works as planned, locally, with circleci build.
However, pushing this change to CircleCI will result in an error: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
So, to summarize: using sudo works, locally, but not on CircleCI itself. Not using sudo works on CircleCI, but not locally.
Is this something the CircleCI staff has to fix, or is there something I can do?
For reference, I have posted this question on the CircleCI forums as well.
I've created a workaround for myself.
In the very first step of the config.yml, I run this command:
if [[ $CIRCLE_SHELL_ENV == *"localbuild"* ]]; then
  echo "This is a local build. Enabling sudo for docker"
  echo sudo > ~/sudo
else
  echo "This is not a local build. Disabling sudo for docker"
  touch ~/sudo
fi
Afterwards, you can do this:
eval `cat ~/sudo` docker build .
Explanation:
The first snippet checks if the CircleCI-provided environment variable CIRCLE_SHELL_ENV contains localbuild. This is only true when running circleci build on your local machine.
If true, it creates a file called sudo with contents sudo in the home directory.
If false, it creates a file called sudo with NO contents in the home directory.
The second snippet reads the ~/sudo file and executes its contents together with the arguments that follow. If the ~/sudo file contains "sudo", the command in this example becomes sudo docker build .; if the file is empty, it becomes docker build . (with a leading space, which is ignored).
This way, both the local (circleci build) builds and remote builds will work.
To iterate on the answer of Jeff Huijsmans,
an alternative version is to use a Bash variable for docker:
- run:
    name: Set up docker
    command: |
      if [[ $CIRCLE_SHELL_ENV == *"localbuild"* ]]; then
        echo "export docker='sudo docker'" >> $BASH_ENV
      else
        echo "export docker='docker'" >> $BASH_ENV
      fi
Then you can use it in your config
- run:
    name: Verify docker
    command: $docker --version
You can see this in action in my test for my Dotfiles repository
Documentation about environment variables in CircleCI
You might also solve your issue by running the docker image as root. Specify user: root under the image parameter:
...
jobs:
  build:
    working_directory: ~/app
    docker:
      - image: circleci/node:8.4.0
        user: root
    steps:
      - checkout
      ...
...

How to properly deploy to host from gitlab-ci (+docker)?

Situation
I have one server: 192.168.1.2. This server has GitLab installed on it, along with Docker linked to a gitlab-runner. Keep in mind that we are talking about the same server.
I have a script at /etc/cfupdate.py which, as you can tell, is a Python script. I would like to have this file in my repository with auto-deployment.
Note: The file is owned by deploymgr, a user created just for this purpose. It has rw access.
Attempt #1
.gitlab-ci.yml:
image: python:latest
before_script:
  - echo "Starting script exec."
after_script:
  - echo "CI Script Complete."
test-run:
  stage: build
  script:
    - echo "Setting up..."
    - pip3 install requests
    - python3 "cfupdate.py"
deploy:
  stage: deploy
  script:
    - docker cp $HOSTNAME:$PWD/cfupdate.py /etc/
  only:
    - master
After a bit of research: Docker is designed for process and resource isolation, which is why it's impossible to access the host this way.
PS: docker is also a host-only command.
Attempt #2
Running a webhook when the build finishes. This could work, but I would like a better solution, one that can be contained in .gitlab-ci.yml.
Attempt #3
Given the following .gitlab-ci.yml (only deploy part):
deploy:
  stage: deploy
  script:
    - scp 'cfupdate.py' deploymgr@192.168.1.2:/etc/
  only:
    - master
I tried to SSH into the host and copy the file with scp, but with no luck, as the user has a password. I don't really want to use sshpass -p to pass the password, although it could be saved in the Secret Variables section of GitLab. I also tried ssh-keygen and ssh-copy-id, but it still needs a password, and as we know, the container's SSH keys (and indeed all other files) are not saved; they are destroyed as soon as the container shuts down.
Attempt #4
deploy:
  stage: deploy
  script:
    - curl --form "fileupload=@cfupdate.py" 192.168.1.2:[port]/upload.php
  only:
    - master
This way (I haven't really tried it) could also work, but I'm still looking for a better approach. As you can see, this is a really make-do way, and if we were talking about lots of files, this method wouldn't serve well.
Any ideas? Or any suggestions about GitLab? Maybe it has a built-in function for deployment that I don't know about?
I've run through numerous docs involving docker, gitlab-ci, etc., but they didn't help me. Still, I've successfully devised a working solution:
deploy:
  stage: deploy
  before_script:
    - apt-get update
    - apt-get -y install rsync sshpass
  script:
    - echo "Deploying to staging server..."
    - "sshpass -e rsync -vvvurz --progress -e 'ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' . deploymgr@192.168.1.2:/etc/cfupdate/"
  only:
    - master
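Note that sshpass -e takes the password from the SSHPASS environment variable, so this assumes SSHPASS is defined as a (masked) CI/CD variable for the project. Equivalently, you could pass it explicitly with -p (a sketch; $DEPLOY_PASSWORD is just a placeholder for whichever variable you define):

  script:
    - sshpass -p "$DEPLOY_PASSWORD" rsync -urz --progress -e 'ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' . deploymgr@192.168.1.2:/etc/cfupdate/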
Maybe you can consider using the shell executor instead of the docker executor for this particular repo, so you can write a plain sh script like this:
deploy:
  stage: deploy
  script:
    - cp cfupdate.py /etc/cfupdate.py
  only:
    - master
