jailshell: composer: command not found - jail-shell

When I SSH into my server and run composer install, it works without any issues. However, when the command is issued from my CI/CD pipeline, which SSHes into my server to pull the changes, I get this error:
jailshell: composer: command not found
How do I fix this?
My CI/CD:
deploy:
  stage: deploy
  script:
    - ssh -tt user@domain.com
      "cd /path/to/public_html/ &&
      git checkout master &&
      git pull &&
      composer install &&
      exit"
Edit
To answer @NicoHaase's question: there was a guide about adding an alias in my .bashrc, but that still didn't work, so I ended up downloading composer.phar, adding it to my repository, and running php composer.phar install instead of composer install. It works now, but that isn't a real solution to this question, so I'm not writing it up as an answer.
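For reference, the workaround in CI form looks like this (a sketch: the paths are the ones from the question, and composer.phar is wherever you committed it). The underlying cause is usually that jailshell gives non-interactive SSH sessions a restricted PATH, so invoking composer by its absolute path, if you can find out where it is installed on the host, may also work:
deploy:
  stage: deploy
  script:
    - ssh -tt user@domain.com
      "cd /path/to/public_html/ &&
      git checkout master &&
      git pull &&
      php composer.phar install &&
      exit"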

Related

How to run bitbucket pipeline to deploy php based app on nanobox

I am trying to set up a Bitbucket pipeline for a PHP (Laravel-Lumen) app intended to be deployed on nanobox.io. I want this pipeline to deploy my app as soon as code changes are committed.
My bitbucket-pipelines.yml looks like this
image: php:7.1.29
pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          caches:
            - composer
          script:
            - apt-get update && apt-get install -y unzip
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer install
            # - vendor/bin/phpunit
            - bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
            - nanobox deploy
This gives the following error:
+ nanobox deploy
Failed to validate provider - missing docker - exec: "docker": executable file not found in $PATH
Using nanobox with native requires tools that appear to not be available on your system.
docker
View these requirements at docs.nanobox.io/install
I then followed this page and changed the second-to-last line to look like this:
sudo bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
After doing that, I get the following error:
+ sudo bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
bash: sudo: command not found
I ran out of tricks here, also I don't have experience in this area. Any help is very much appreciated.
First, you can't use sudo in Pipelines, but that's probably not relevant here. The issue is that the nanobox CLI wants to execute docker, which isn't available. You should enable the Docker service for your step.
image: php:7.1.29
pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          # Enable docker service
          services:
            - docker
          caches:
            - composer
          script:
            - docker version
You might want to have a look at the Pipelines docs as well: Run Docker commands in Bitbucket Pipelines
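For completeness, a sketch of the question's pipeline with the Docker service enabled (same commands as in the original bitbucket-pipelines.yml, so nanobox deploy can find docker):
image: php:7.1.29
pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          # Enable docker service so the nanobox CLI can find it
          services:
            - docker
          caches:
            - composer
          script:
            - apt-get update && apt-get install -y unzip
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer install
            - bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
            - nanobox deploy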

Batch file is not getting completely executed.

We are trying to make a Jenkins job that deploys our Angular app to Tomcat's webapps folder.
Our Jenkins runs on CentOS and the dev server is on Windows.
We execute the following shell script in the Jenkins build step.
sshpass -p "password" ssh -o StrictHostKeyChecking=no username@host "D:\Ratikanta\temp.bat"
Our batch file content is following.
d: && cd Ratikanta && svn checkout http://192.168.1.5/svn/KSP/trunk/kspweb/ && cd kspweb && npm i && npm run build && move ksp D:\Ratikanta\Tomcat7\webapps
This works fine when running the shell script (sshpass -p ...) from the CentOS terminal but, when run from the Jenkins job, it only executes up to npm i. The remaining commands, npm run build && move ksp D:\Ratikanta\Tomcat7\webapps, are not executed.
Please help. Did we miss something? Please also suggest any other way to achieve our goal.
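One thing worth checking (an assumption, not verified against your setup): on Windows, npm is itself a batch script (npm.cmd), and invoking a batch script from another batch file without call transfers control to it permanently, so everything after npm i on the line is skipped. Prefixing the npm invocations with call keeps execution inside temp.bat:
d: && cd Ratikanta && svn checkout http://192.168.1.5/svn/KSP/trunk/kspweb/ && cd kspweb && call npm i && call npm run build && move ksp D:\Ratikanta\Tomcat7\webapps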

Using BitBucket Pipelines to Deploy onto VPS via SSH Access

I have been trying to wrap my head around how to utilise BitBucket's Pipelines to auto-deploy my (Laravel) application onto a Vultr Server instance.
I have the following steps I do manually, which I am trying to replicate automatically:
I commit my changes and push to BitBucket repo
I log into my server using Terminal: ssh root@ipaddress
I cd to the correct directory: cd /var/www/html/app/
I then pull from my BitBucket repo: git pull origin master
I then run some commands: composer install, php artisan migrate etc..
I then log out: exit
My understanding is that you can use Pipelines to automate this; is this true?
So far, I have set up a SSH key pair for pipelines and my server, so my server's authorized_keys file contains the public key from BitBucket Pipelines.
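For reference, registering that public key on the server looks roughly like this (assuming the key from the repository's Pipelines SSH settings was copied to pipelines_key.pub on the VPS):
cat pipelines_key.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys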
My pipelines file bitbucket-pipelines.yml is as follows:
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        deployment: staging
        caches:
          - composer
        script:
          - ssh root@ipaddress
          - cd /var/www/html/app/
          - git pull origin master
          - php artisan down
          - composer install --no-dev --prefer-dist
          - php artisan cache:clear
          - php artisan config:cache
          - php artisan route:cache
          - php artisan migrate
          - php artisan up
          - echo 'Deploy finished.'
When the pipeline executes, I get the error: bash: cd: /var/www/html/app/: No such file or directory.
I read that each script step is run in its own container.
Each step in your pipeline will start a separate Docker container to run the commands configured in the script
The error I get makes sense if it's not executing cd /var/www/html/app within the VPS after logging into it using SSH.
Could someone point me in the right direction?
Thanks
The commands you are defining under script are run in a Docker container, not on your VPS.
Instead, put all your commands in a bash file on your server.
1 - Create a bash file pull.sh on your VPS, to do all your deployment tasks:
# /var/www/html
php artisan down
git pull origin master
composer install --no-dev --prefer-dist
php artisan cache:clear
php artisan config:cache
php artisan route:cache
php artisan migrate
php artisan up
echo 'Deploy finished.'
2 - Create a script deploy.sh in your repository, like so
echo "Deploy script started"
cd /var/www/html
sh pull.sh
echo "Deploy script finished execution"
3 - Finally update your bitbucket-pipelines.yml file
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        deployment: staging
        script:
          - cat ./deploy.sh | ssh <user>@<host>
          - echo "Deploy step finished"
I would recommend already having your repo cloned on your VPS in /var/www/html, and testing your pull.sh file manually first.
The problem with the answer marked as the solution is that the sh process won't exit if any of the commands inside fails.
The php artisan route:cache command, for instance, can fail easily, not to mention the pull!
Even worse, the sh script will keep executing the rest of the commands after a failure.
I can't use any docker command because after each one the CI process stops, and I can't figure out how to keep those commands from exiting the CI process. I'm using sh, but I'll start adding some conditionals based on the exit code of the previous command, so we know if anything went wrong during the deploy.
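A minimal fail-fast variant of pull.sh (a sketch, reusing the same commands as above): set -e makes the shell exit on the first failing command, so a broken pull or php artisan route:cache stops the deploy instead of being skipped over. Note that if a later command fails, php artisan up will not run and the app stays in maintenance mode:
# /var/www/html/pull.sh
set -e
php artisan down
git pull origin master
composer install --no-dev --prefer-dist
php artisan cache:clear
php artisan config:cache
php artisan route:cache
php artisan migrate
php artisan up
echo 'Deploy finished.'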
I know this may be an old thread, but Bitbucket does provide a pipe to do everything mentioned above in a much cleaner way.
Please have a look at https://bitbucket.org/product/features/pipelines/integrations?p=atlassian/ssh-run
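A minimal sketch of using that pipe (the pipe version and the variable values are placeholders; check the pipe's page for the current release and the required repository variables):
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        deployment: staging
        script:
          - pipe: atlassian/ssh-run:0.4.0
            variables:
              SSH_USER: 'user'
              SERVER: 'ipaddress'
              COMMAND: 'sh /var/www/html/pull.sh'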
Hope this helps.

wercker with docker switching user results in error, how to install nvm then?

Problem
My wercker build exits with Failed step: setup environment - Command exited with exit code: 1 when I switch user in my Docker image. I'm running wercker dev from the command line. The Dockerfile builds fine with Docker itself on the command line, as well as on Docker Hub, and I can run it fine. It's just when I use it for wercker that the error occurs.
For example in my Dockerfile is the following code:
# Adding user
RUN adduser --disabled-password --gecos '' dockworker && adduser dockworker sudo && echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN mkdir -p /home/dockworker && chown -R dockworker:dockworker /home/dockworker
# The build seems to break on the following line:
USER dockworker
When I comment this line out, it seems to pass. The problem with that, for me, is the following: I'd like to switch to another user, since I'm trying to install nvm (for gulp and bower). I generally prefer not to install this as root, so I add a user for it.
Workaround?
However, when I do install nvm as root in my Dockerfile (so just removing the user-related lines in the code block above completely):
ENV NODE_VERSION 0.12.7
ENV NVM_DIR /usr/local/nvm
# NVM
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.25.4/install.sh | NVM_DIR=/usr/local/nvm bash
#install the specified node version and set it as the default one, install the global npm packages
RUN . /usr/local/nvm/nvm.sh && nvm install $NODE_VERSION && nvm alias default $NODE_VERSION && npm install -g bower && npm install -g gulp
Then it does get past the setup environment stage, but during the steps it errors out saying that nvm and npm are not found. The step in the wercker.yml:
box:
  id: francobolli/docker-ubuntu-14.04-php-5.6
  tag: latest
  env:
    NVM_DIR: /usr/local/nvm
dev:
  steps:
    - script:
        name: gulp styles and javascript
        code: |
          npm install
          bower install --allow-root
          gulp --env=production
I don't really understand this. When I run both docker images from the commandline (so with wercker removed from the context completely) I can execute nvm and npm just fine, but when I'm running it through wercker, it seems the .bashrc file is not being executed. When I cat ~/.bashrc during the steps, I can see:
export NVM_DIR="/usr/local/nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm
Workaround!
When I enter this in a step, it will be executed and I can npm install without a problem, so it seems this is never executed through the .bashrc:
...
- script:
    name: gulp styles and javascript
    code: |
      [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # It works when I put it here, but it's also in ~/.bashrc, which doesn't seem to get executed
      npm install
...
Note: If I source ~/.bashrc in the wercker step instead, it does not work.
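A likely explanation (an assumption on my part, based on Ubuntu's stock .bashrc rather than anything in the build output): the default ~/.bashrc starts with an interactivity guard, so when it is sourced from a non-interactive step it returns before the nvm lines further down are ever reached:
# top of Ubuntu's default ~/.bashrc (for illustration):
# in a non-interactive shell $- contains no "i", so the
# file returns before the NVM_DIR lines are sourced
case $- in
    *i*) ;;
      *) return ;;
esac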
Question
So my question is: what am I doing wrong that I can't switch user in the wercker build? And even if I could, would I have the same problem as when running nvm as root? nvm and npm CAN be found when the Docker container is instantiated from the command line, but CAN'T be found when running it with wercker. What's the best solution?
I'd rather not add commands to the wercker.yml if this can be resolved through proper user or nvm configuration. Sorry if I'm missing something very obvious.
This has nothing to do with Docker configuration, but with how Wercker handles Docker boxes. From the documentation:
Using Sudo
The sudo command is no longer supported in wercker v2 and effectively does nothing when used.
And for deployment:
Please note that if you update a project to make use of Docker (Ewok version) and this project has autodeployment, this deploy will most likely fail. We will update our documentation in the future on how to deploy these containers.
However, I did get it to build (and deploy) with the solution (temporary workaround?) as displayed in the original question.

How to reproduce a travis-ci build environment for debugging

I am seeing a build failure on travis-ci, which I cannot reproduce on my local machine. Are there instructions somewhere for setting up a VM that is identical to the travis-ci linux build environment? I'm glad to have travis-ci already reveal a new bug, but less excited to debug it by sending in commits that add debug code.
For container-based builds, there are now instructions on how to set up a docker image locally.
Unfortunately, quite a few steps are still manual. Here are the commands you need to get it up and running:
# change the image according to the language chosen in .travis.yml
docker run -it -u travis quay.io/travisci/travis-jvm /bin/bash
# now that you are in the docker image, switch to the travis user
sudo su - travis
# Install a recent ruby (default is 1.9.3)
rvm install 2.3.0
rvm use 2.3.0
# Install travis-build to generate a .sh out of .travis.yml
cd builds
git clone https://github.com/travis-ci/travis-build.git
cd travis-build
gem install travis
travis # to create ~/.travis
ln -s `pwd` ~/.travis/travis-build
bundle install
# Create project dir, assuming your project is `me/project` on GitHub
cd ~/builds
mkdir me
cd me
git clone https://github.com/me/project.git
cd project
# change to the branch or commit you want to investigate
travis compile > ci.sh
# You most likely will need to edit ci.sh as it ignores matrix and env
bash ci.sh
You can use Travis Build, which is a library (meaning you have to place it in ~/.travis/), to generate a shell-based build script (travis compile) that can then be uploaded to the VMs using SSH and executed.
The steps below are just guidance to get you on the right track (if anything is missing, let me know).
Docker
Example command to run a container (images can be found on Docker Hub):
docker run -it travisci/ubuntu-ruby:18.04 /bin/bash
Run your container, clone your repository, then test it manually.
See: Running a Container Based Docker Image Locally
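For example (the image tag is the one above; the repository URL is a placeholder):
docker run -it travisci/ubuntu-ruby:18.04 /bin/bash
# inside the container:
git clone https://github.com/me/project.git
cd project
# ...then run the commands from the script section of your .travis.yml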
SSH access
Check out this answer. Basically you need to set up a bounce host, then configure your build to run an SSH tunnel.
Here is the example .travis.yml:
sudo: required
dist: trusty
language: python
python: "2.7"
script:
  - echo travis:$sshpassword | sudo chpasswd
  - sudo sed -i 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
  - sudo service ssh restart
  - sudo apt-get install sshpass
  - sshpass -p $sshpassword ssh -R 9999:localhost:22 -o StrictHostKeyChecking=no travisci@$bouncehostip
Local setup
Here are the steps to test it on your local environment:
cd ~
git clone https://github.com/travis-ci/travis-build.git
ln -s ~/travis-build/ ~/.travis/travis-build
sudo gem install bundler
bundle install --gemfile ~/.travis/travis-build/Gemfile
cd repo-dir/
travis login -g <github_token>
vim .travis.yml
travis lint # to validate script
travis compile # to transform into shell script
Vagrant/VM
After you have done travis compile, which produces a bash script from your .travis.yml, you can use Vagrant to run this script in a virtualized environment using the provided Vagrantfile and the following steps:
vagrant up
vagrant ssh
cd /vagrant
bundle exec rspec spec
You probably need to install more tools in order to test it.
Here is a git hint that avoids generating unnecessary commits when doing trial-and-error commits for Travis CI testing:
Fork the repo (or use a separate branch).
After the initial commit, keep adding --amend to replace your previous commit:
git commit --amend -m 'Same message.' -a
Push the amended commit by force (e.g. into already opened PR):
git push fork -f
Now Travis CI would re-check the same commit over and over again.
See also: How to run travis-ci locally.
I'm facing the same issue right now. I used CircleCI before, where you could just log in to the VM via ssh, but this doesn't work with Travis CI VMs.
I was able to debug it (to a certain point) by setting up a Travis CI VM clone via Travis-Cookbooks. You need to install VirtualBox and Vagrant on your computer first, before cloning that repository.
Once you have Travis-Cookbooks cloned, open the folder, launch a command prompt/terminal and type vagrant up. Once Vagrant finishes setting up the VM on your machine (which may take a long time), you can connect to it via ssh by running vagrant ssh.
From there, you would need to clone your own repository (or just copy the code to VM) and apply the steps from your .travis.yml file.
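For reference, the sequence looks roughly like this (repository URL assumed from the project name):
git clone https://github.com/travis-ci/travis-cookbooks.git
cd travis-cookbooks
vagrant up    # may take a long time
vagrant ssh
# inside the VM: clone your repo and apply the steps from its .travis.yml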
Eregon's answer failed for me at travis compile; the error looks like:
/home/travis/.rvm/rubies/ruby-2.3.0/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- travis/support (LoadError)
I got it working with the following adjustments (adjustments marked with # CHANGED; I'm using the node environment):
# change the image according to the language chosen in .travis.yml
# Find images at https://quay.io/organization/travisci
docker run -it quay.io/travisci/travis-node-js /bin/bash
# now that you are in the docker image, switch to the travis user
su travis
# Install a recent ruby (default is 1.9.3) to make bundle install work
rvm install 2.3.0
rvm use 2.3.0
# Install travis-build to generate a .sh out of .travis.yml
sudo mkdir builds # CHANGED
cd builds
sudo git clone https://github.com/travis-ci/travis-build.git
cd travis-build
gem install travis
travis # to create ~/.travis
ln -s `pwd` ~/.travis/travis-build
bundle install
bundler add travis # CHANGED
sudo mkdir bin # CHANGED
sudo chmod a+w bin/ # CHANGED
bundler binstubs travis # CHANGED
# Create project dir, assuming your project is `me/project` on GitHub
cd ~/builds
mkdir me
cd me
git clone https://github.com/me/project.git
cd project
# change to the branch or commit you want to investigate
~/.travis/travis-build/bin/travis compile > ci.sh # CHANGED
# You most likely will need to edit ci.sh as it ignores matrix and env
# In particular I needed to edit --branch='' to the branch name
bash ci.sh
