Dokku and Bitbucket CI/CD

Is there a simple recipe for integrating Bitbucket Pipelines with dokku?
I want to continuously deploy to the production server after each commit to master.

The necessary steps can be boiled down to:
1. Enable Pipelines.
2. Generate an SSH key for the Pipelines script and add it to dokku (sketched below).
3. Add the dokku host as a known host in Pipelines.
4. If you're using private dependencies, also add bitbucket.org as a known host.
5. Define the environment variable DOKKU_REMOTE_URL.
6. Use a bitbucket-pipelines.yml file (see the example below).
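A minimal sketch of steps 2 and 5 from the command line (names like deploy_key and myapp are placeholders; it assumes you store the private key base64-encoded in a repository variable called SSH_KEY, which is what the pipeline script below decodes):
# Generate a key pair for Pipelines (no passphrase)
ssh-keygen -t rsa -b 4096 -f deploy_key -N ""
# Register the public key with dokku on the server
dokku ssh-keys:add bitbucket-pipelines deploy_key.pub
# Base64-encode the private key and paste the output into the SSH_KEY repository variable
base64 -w 0 deploy_key
# DOKKU_REMOTE_URL points at the app's git remote, e.g. dokku@your-dokku-host:myapp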
The easiest way is to manage everything directly from your app's root folder.
Create a bitbucket-pipelines.yml file there with something like the following:
image: node:8.9.4
pipelines:
  default:
    - step:
        caches:
          - node
        script:
          # Add SSH keys for private dependencies
          - mkdir -p ~/.ssh
          - echo $SSH_KEY | base64 -d > ~/.ssh/id_rsa
          - chmod 600 ~/.ssh/id_rsa
          # Install and run checks
          - curl -o- -L https://yarnpkg.com/install.sh | bash -s -- --version 1.3.2
          - export PATH=$HOME/.yarn/bin:$PATH
          - yarn install # Build is triggered from the postinstall hook
  branches:
    master:
      - step:
          script:
            # Add SSH keys for deployment
            - mkdir -p ~/.ssh
            - echo $SSH_KEY | base64 -d > ~/.ssh/id_rsa
            - chmod 600 ~/.ssh/id_rsa
            # Deploy to hosting
            - git remote add dokku $DOKKU_REMOTE_URL
            - git push dokku master
Remember that dokku takes care of npm install, so all we have to do is set up the Docker container (running in Bitbucket) that deploys to dokku.
However, pay attention to image: node:8.9.4; it is generally a good idea to pin an image with the exact version of node (or whichever language) that you use in your application.
Steps 2-4 are just fiddling around with the settings in Bitbucket's Repository Settings --> Pipelines --> SSH keys, where you generate an SSH key and add it to your dokku installation.
For the known host, enter the IP address (or domain name) of the server hosting your dokku installation, press Fetch, and then Add host.
See this example application: https://github.com/amannn/dokku-node-hello-world#continuous-deployment-from-bitbucket.

Related

ssh key in Dockerfile returning Permission denied (publickey)

I'm trying to build a Docker image using DOCKER_BUILDKIT which involves cloning a private remote repository from GitLab, with the following lines of my Dockerfile being used for the git clone:
# Download public key for gitlab.com
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone git@gitlab.com:*name_of_repo* *download_location*
However, when I run the docker build command using:
DOCKER_BUILDKIT=1 docker build --ssh default --tag test:local .
I get the following error when it is trying to do the git clone:
git@gitlab.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I've set up the SSH access successfully on the machine I'm trying to build this image on, and both ssh -T git@gitlab.com and cloning the repository outside of the Docker build work just fine.
I've had a look around but can't find any info on what might be causing this specific issue - any pointers much appreciated.
Make sure you have an SSH agent running and that you added your private key to it.
Depending on your platform, the commands may vary, but since the question is tagged gitlab I will assume Linux.
Verify that you have an SSH agent running with echo $SSH_AUTH_SOCK (or echo $SSH_AGENT_SOCK). If both echo an empty string, you most likely do not have an agent running.
To start an agent you can usually type:
eval `ssh-agent`
Next, you can verify which keys are added (if any) with:
ssh-add -l
If the key you need is not listed, you can add it with:
ssh-add /path/to/your/private-key
Then you should be good to go.
More info here: https://www.ssh.com/academy/ssh/agent
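Putting the pieces together on the build machine, as a sketch (it assumes the GitLab key lives at ~/.ssh/id_rsa):
# Start an agent for this shell and load the key
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
# Sanity check against GitLab
ssh -T git@gitlab.com
# Forward the agent into the BuildKit build
DOCKER_BUILDKIT=1 docker build --ssh default --tag test:local .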
Cheers
For testing, use a non-encrypted private SSH key (so you don't have to manage an ssh-agent, which is only needed for caching the passphrase of an encrypted private key).
And use ssh -Tv git@gitlab.com to check where SSH is looking for your key.
Then, in your Dockerfile, add before the line with git clone:
ENV GIT_SSH_COMMAND='ssh -Tv'
You will then see where Docker/SSH is looking for keys when executing git clone with an SSH URL.
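In context, the relevant Dockerfile lines could look like this (the repository path and clone target are placeholders):
# Download public key for gitlab.com
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
# Verbose SSH output so the build log shows where keys are looked up
ENV GIT_SSH_COMMAND='ssh -Tv'
RUN --mount=type=ssh git clone git@gitlab.com:group/your-repo.git /src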
I suggested as much here, and in that case some mounted folders were missing.

How to run bitbucket pipeline to deploy php based app on nanobox

I am trying to set up a Bitbucket pipeline for a PHP-based (Laravel Lumen) app intended to be deployed on nanobox.io. I want this pipeline to deploy my app as soon as code changes are committed.
My bitbucket-pipelines.yml looks like this
image: php:7.1.29
pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          caches:
            - composer
          script:
            - apt-get update && apt-get install -y unzip
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer install
            # - vendor/bin/phpunit
            - bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
            - nanobox deploy
This gives the following error:
+ nanobox deploy
Failed to validate provider - missing docker - exec: "docker": executable file not found in $PATH
Using nanobox with native requires tools that appear to not be available on your system.
docker
View these requirements at docs.nanobox.io/install
I then followed this page and changed the second-to-last line to look like this:
sudo bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
When I do that, I get the following error:
+ sudo bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
bash: sudo: command not found
I've run out of tricks here, and I don't have experience in this area. Any help is very much appreciated.
First, you can't use sudo in Pipelines, but that's probably not relevant here. The issue is that the nanobox CLI wants to execute docker, which isn't installed. You should enable the docker service for your step.
image: php:7.1.29
pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          # Enable docker service
          services:
            - docker
          caches:
            - composer
          script:
            - docker version
You might want to have a look at the Pipelines docs as well: Run Docker commands in Bitbucket Pipelines
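Assuming the nanobox bootstrap succeeds once the Docker service is available, the full step from the question could then look like this (untested sketch):
image: php:7.1.29
pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          # Make the docker CLI available to the nanobox bootstrap
          services:
            - docker
          caches:
            - composer
          script:
            - apt-get update && apt-get install -y unzip
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer install
            - bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
            - nanobox deploy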

Using BitBucket Pipelines to Deploy onto VPS via SSH Access

I have been trying to wrap my head around how to use Bitbucket Pipelines to auto-deploy my (Laravel) application onto a Vultr server instance.
I have the following steps I do manually, which I am trying to replicate autonomously:
I commit my changes and push to BitBucket repo
I log into my server using Terminal: ssh root@ipaddress
I cd to the correct directory: cd /var/www/html/app/
I then pull from my BitBucket repo: git pull origin master
I then run some commands: composer install, php artisan migrate etc..
I then log out: exit
My understanding is that you can use Pipelines to automate this; is this true?
So far, I have set up a SSH key pair for pipelines and my server, so my server's authorized_keys file contains the public key from BitBucket Pipelines.
My pipelines file bitbucket-pipelines.yml is as follows:
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        deployment: staging
        caches:
          - composer
        script:
          - ssh root@ipaddress
          - cd /var/www/html/app/
          - git pull origin master
          - php artisan down
          - composer install --no-dev --prefer-dist
          - php artisan cache:clear
          - php artisan config:cache
          - php artisan route:cache
          - php artisan migrate
          - php artisan up
          - echo 'Deploy finished.'
When the pipeline executes, I get the error: bash: cd: /var/www/html/app/: No such file or directory.
I read that each script step is run in its own container.
Each step in your pipeline will start a separate Docker container to
run the commands configured in the script
The error I get makes sense if cd /var/www/html/app isn't actually being executed on the VPS after logging into it over SSH.
Could someone guide me into the correct direction?
Thanks
The commands you define under script are run in a Docker container, not on your VPS.
Instead, put all your commands in a bash file on your server.
1 - Create a bash file pull.sh on your VPS, to do all your deployment tasks
# /var/www/html/pull.sh
php artisan down
git pull origin master
composer install --no-dev --prefer-dist
php artisan cache:clear
php artisan config:cache
php artisan route:cache
php artisan migrate
php artisan up
echo 'Deploy finished.'
2 - Create a script deploy.sh in your repository, like so
echo "Deploy script started"
cd /var/www/html
sh pull.sh
echo "Deploy script finished execution"
3 - Finally update your bitbucket-pipelines.yml file
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        deployment: staging
        script:
          - cat ./deploy.sh | ssh <user>@<host>
          - echo "Deploy step finished"
I would recommend already having your repo cloned on your VPS in /var/www/html, and testing your pull.sh file manually first.
The problem with the answer marked as the solution is that the sh process won't exit if any of the commands inside fails.
The command php artisan route:cache, for instance, can fail easily, not to mention the pull!
And even worse, the sh script will keep executing the remaining commands after any failure.
I can't use any docker command, because after each one the CI process stops, and I can't figure out how to keep those commands from ending the CI process. I'm using sh, but I'll start adding some conditionals based on the exit code of the previous command, so we know if anything went wrong during the deploy.
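A minimal way to address that, as a sketch: have pull.sh abort at the first failing command by enabling set -e at the top.
#!/bin/sh
# /var/www/html/pull.sh
set -e  # stop at the first failing command
cd /var/www/html
php artisan down
git pull origin master
composer install --no-dev --prefer-dist
php artisan cache:clear
php artisan config:cache
php artisan route:cache
php artisan migrate
php artisan up
echo 'Deploy finished.'
Adding set -e to deploy.sh as well ensures the wrapper (and therefore the ssh command in the pipeline) exits non-zero when pull.sh fails, so the pipeline step is marked as failed.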
I know this may be an old thread, but Bitbucket does provide a pipe that does everything mentioned above in a much cleaner way.
Please have a look at https://bitbucket.org/product/features/pipelines/integrations?p=atlassian/ssh-run
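A sketch of what that could look like in bitbucket-pipelines.yml (variable names per the ssh-run documentation linked above; check it for the current pipe version, and replace the user, server and command with your own):
image: atlassian/default-image:latest
pipelines:
  branches:
    master:
      - step:
          name: Deploy over SSH
          script:
            - pipe: atlassian/ssh-run:0.2.2
              variables:
                SSH_USER: 'deploy'
                SERVER: 'your.server.ip'
                COMMAND: 'bash /var/www/html/pull.sh'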
Hope this helps.

How to deploy code in an automated way from a branch in BitBucket to a Google Cloud Bucket?

How can I move code in an automated way from a branch in BitBucket to a Google Cloud Bucket?
I see a good deal of writing about how to move code into App Engine from BitBucket.
https://blog.bitbucket.org/2014/09/18/google-cloud-push-to-deploy-comes-to-bitbucket/
I am using the static website feature of a Google Cloud Storage bucket to expose the bucket under a predefined subdomain. I want to do a simple copy of files into the bucket and set public access rights on those files when code is merged (committed) to a branch in BitBucket.
Problem solved. It turns out the issue was my improper attempt to authenticate to Google Cloud Storage using gcloud auth. Here is the bitbucket-pipelines.yml that is working for me right now (provide your own values for the environment variables):
pipelines:
  default:
    - step:
        script:
          - echo "Everything is awesome in general"
  branches:
    staging:
      - step:
          script:
            # Downloading the Google Cloud SDK
            - curl -o /tmp/google-cloud-sdk.tar.gz https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-155.0.0-linux-x86_64.tar.gz
            - tar -xvf /tmp/google-cloud-sdk.tar.gz -C /tmp/
            - /tmp/google-cloud-sdk/install.sh -q
            - source /tmp/google-cloud-sdk/path.bash.inc
            - gcloud -v
            # package up the application for deployment
            - echo $GOOGLE_CLIENT_SECRET > client-secret.json
            - gcloud auth activate-service-account $GOOGLE_ACCOUNT --key-file client-secret.json
            - gsutil -m cp -r *.html gs://$STAGING_DOMAIN
            - gsutil -m acl set -R -a public-read gs://$STAGING_DOMAIN
            - gsutil -m setmeta -h "Cache-Control:private" gs://$STAGING_DOMAIN/*.html

How to properly deploy to host from gitlab-ci (+docker)?

Situation
I have one server: 192.168.1.2. This server has GitLab installed on it, along with Docker linked to a gitlab-runner. Keep in mind that we are talking about the same server.
I have a script at /etc/cfupdate.py which is, as you can tell, a Python script. I would like to have this file in my repository with auto-deployment.
Note: The file is owned by deploymgr, a user created just for this purpose. It has rw access.
Attempt #1
.gitlab-ci.yml:
image: python:latest
before_script:
  - echo "Starting script exec."
after_script:
  - echo "CI Script Complete."
test-run:
  stage: build
  script:
    - echo "Setting up..."
    - pip3 install requests
    - python3 "cfupdate.py"
deploy:
  stage: deploy
  script:
    - docker cp $HOSTNAME:$PWD/cfupdate.py /etc/
  only:
    - master
After a quick bit of research: Docker is built for process and resource isolation, which is why it's impossible to access the host this way.
PS: docker is also a host-only command.
Attempt #2
Running a Webhook at build finish. This is a possibly working solution, but I would like to have a better one, which can be contained in .gitlab-ci.yml.
Attempt #3
Given the following .gitlab-ci.yml (only deploy part):
deploy:
  stage: deploy
  script:
    - scp 'cfupdate.py' deploymgr@192.168.1.2:/etc/
  only:
    - master
I tried to SSH into the host and copy the file with scp, but with no luck, since the user has a password. I don't really want to use sshpass -p to pass the password, although it could be saved in GitLab's Secret Variables section. I also tried ssh-keygen and ssh-copy-id, but that still needs a password, and as we know, the container's SSH keys (and indeed all its other files) are not persisted; they are destroyed as soon as the container shuts down.
Attempt #4
deploy:
  stage: deploy
  script:
    - curl --form "fileupload=@cfupdate.py" 192.168.1.2:[port]/upload.php
  only:
    - master
This way (I haven't really tried it) could also work, but I'm still looking for a better approach. As you can see, this is a really make-do method, and if we were talking about lots of files, it wouldn't serve well.
Any ideas? Or any suggestions about GitLab? Maybe it has a built-in function for deployment that I don't know about?
I've run through numerous docs involving docker, gitlab-ci, etc., but they didn't help me. However, I've devised a working solution:
deploy:
  stage: deploy
  before_script:
    - apt-get update
    - apt-get -y install rsync sshpass
  script:
    - echo "Deploying to staging server..."
    - "sshpass -e rsync -vvvurz --progress -e 'ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' . deploymgr@192.168.1.2:/etc/cfupdate/"
  only:
    - master
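Note that sshpass -e reads the password from the SSHPASS environment variable, so it must be defined for the job. A minimal sketch, assuming the password is stored as a masked CI/CD variable named DEPLOY_PASSWORD in the project settings:
deploy:
  stage: deploy
  variables:
    SSHPASS: $DEPLOY_PASSWORD  # sshpass -e reads the password from SSHPASS
  before_script:
    - apt-get update
    - apt-get -y install rsync sshpass
  script:
    - "sshpass -e rsync -vvvurz --progress -e 'ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' . deploymgr@192.168.1.2:/etc/cfupdate/"
  only:
    - master
Naming the secret variable SSHPASS directly also works and skips the extra mapping.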
Maybe you can consider using the shell executor instead of the docker executor for this particular repo, so you can write a plain shell script like this:
deploy:
  stage: deploy
  script:
    - cp cfupdate.py /etc/cfupdate.py
  only:
    - master
