I am trying to set up GitLab for my team on an Ubuntu 14.04 system. My network is behind a proxy, so I am facing a lot of issues during the setup.
I was able to complete most of the setup successfully, but I am stuck at this particular command:
$ sudo -u git -H bundle install --deployment --without development test mysql aws kerberos
When I run the above command, I get the following error:
Fetching source index from https://rubygems.org/
Could not fetch specs from https://rubygems.org/
From my understanding, the above error is due to the proxy.
If I execute the command below, it works perfectly fine (my current user is user1):
$ sudo bundle install --deployment --without development test mysql aws kerberos
Also, I have set all the proxy configurations by exporting the http_proxy and https_proxy variables.
One more thing I would like to add: if I execute the command as user1 via sudo, it again gives the same error as above:
$ sudo -u user1 -H bundle install --deployment --without development test mysql aws kerberos
I cannot identify where the exact problem lies.
I was following the GitLab configuration guide from this path.
As written in the documentation:
Since an installation from source is a lot of work and error prone we
strongly recommend the fast and reliable Omnibus package installation
(deb/rpm)
If you have other issues using GitLab, create an issue in their issue tracker.
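As for the proxy symptom itself: a likely cause (an assumption, not confirmed in this thread) is that sudo resets the environment by default (env_reset), so the http_proxy and https_proxy variables exported by user1 never reach the target user's session. Passing them through explicitly is a quick way to test this:

# Quick test: hand the proxy settings through sudo explicitly
$ sudo -u git -H env http_proxy="$http_proxy" https_proxy="$https_proxy" \
    bundle install --deployment --without development test mysql aws kerberos

# Persistent alternative: let sudo keep the variables (add via visudo)
Defaults env_keep += "http_proxy https_proxy"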
Related
I have a Docker image that builds successfully, with the following properties:
It has Bundler and Ruby gems installed for a Rails project, and can successfully run that project
It runs OpenSSH, and has a user set up with a private key so that I can SSH into it. That user also has the ability to run sudo commands
When that container is up and running, if I run this command:
docker exec -it CONTAINER_ID /bin/sh, then I am dropped into the console where I can successfully execute commands like bundle exec rails console in the project root directory.
However, if I SSH in, even if I become the root user (i.e. sudo su - root), then running that command gives me this error:
Traceback (most recent call last):
2: from /usr/local/bin/bundle:23:in `<main>'
1: from /usr/local/lib/ruby/2.6.0/rubygems.rb:302:in `activate_bin_path'
/usr/local/lib/ruby/2.6.0/rubygems.rb:283:in `find_spec_for_exe': Could not find 'bundler' (2.2.7) required by your /app/Gemfile.lock. (Gem::GemNotFoundException)
To update to the latest version installed on your system, run `bundle update --bundler`.
To install the missing version, run `gem install bundler:2.2.7`
It's as if it can't find any of the installed libraries, even though I can validate that they are present. I can also validate that when I use the docker exec... command, I gain access as the root user.
I'm going in circles trying to figure out why I'm observing such a difference in behavior. Any thoughts would be greatly appreciated.
The error you are receiving is generated because the exec cannot find the bundler binary, or it finds a binary that is an older version. In either case, since you know you have the correct bundler installed, the problem will be in the PATH that is set. It's hard to tell what exactly is changing the PATH, but one thing to check is the initialization files for bash and sh.
To fix the problem, check the PATH environment variable (the easiest check is echo $PATH) in both sessions and, if required, point it to the location of bundler as follows:
In the session where it works, execute
~# which bundler
/usr/local/some_folder/bin/bundler
and in the session where you need it
export PATH=$PATH:/usr/local/some_folder/bin
You can add this to the .bashrc or .profile initialization files for both the SSH user and root.
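For example, using the placeholder path from the which output above:

$ echo 'export PATH=$PATH:/usr/local/some_folder/bin' >> ~/.bashrc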
The other comment put me on the correct path. I ended up having to manually set these ENV vars to get things to work correctly.
export PATH=$PATH:/usr/local/bundle/bin
export BUNDLE_APP_CONFIG=/usr/local/bundle
export GEM_HOME=/usr/local/bundle
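For what it's worth, a likely reason for the difference (an assumption based on how sshd works, not confirmed above): docker exec inherits the ENV baked into the image, while SSH login shells build their environment from the shell initialization files instead. One way to persist all three variables for every login shell, assuming the /usr/local/bundle layout of the official Ruby images, is a profile script (run as root):

cat <<'EOF' >> /etc/profile.d/bundler.sh
export PATH=$PATH:/usr/local/bundle/bin
export BUNDLE_APP_CONFIG=/usr/local/bundle
export GEM_HOME=/usr/local/bundle
EOF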
I have a Windows 2012 server that is on an internal network. I used RailsInstaller to put the basic framework on the system. rails new doesn't work when it reaches the bundler step, since I can't reach the net.
I have used "gem install rails -i repo --no-rdoc --no-ri" on a net-accessible system, then placed the gems on my server and ran "gem install --force --local *.gem".
Then I ran "rails new D:\DTS_WEB --edge" and it now fails with "unable to connect to github.com". Trying to start the Rails server fails, telling me that nothing has been checked out.
I modified my Gemfile with
"gem 'rails', path: '....\Ruby2.2.0\lib\ruby\gems\'" but it still tries GitHub.
I installed Git with RailsInstaller along with Rails. How can I get past this last obstacle and force everything to use local resources?
Is it possible to build everything on the net-accessible node and just copy it into place on the server to use? My first attempt at that failed.
On a machine that has a network connection, you can install your app's gems to a directory within the project using --path:
$ bundle install --path=vendor/bundle
Then, you can copy the project folder (along with all the gems in vendor/bundle) to your internal machine.
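Note that the --edge flag is what makes rails new reach for github.com in the first place (it pulls Rails straight from its Git repository), so dropping it avoids that connection entirely. As a variation on the copy approach (a sketch, assuming standard Bundler commands), you can also carry over the .gem files themselves rather than the installed gems:

$ bundle package           # on the connected machine: copies .gem files into vendor/cache
$ bundle install --local   # on the internal machine: installs from vendor/cache only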
I use gitlab.com and CI with the shared Docker runner that runs tests for my Ruby on Rails project on every commit to master. I noticed that about 90% of the build time is spent on 'bundle install'. Is it possible to somehow cache the installed gems between commits to speed up 'bundle install'?
UPDATE:
To be more specific, below is the content of my .gitlab-ci.yml. The first three lines of the 'test' script take about 90% of the time, making the build run for 4-5 minutes.
image: ruby:2.2.4
services:
  - postgres
test:
  script:
    - apt-get update -qy
    - apt-get install -y nodejs
    - bundle install --path /cache
    - bundle exec rake db:drop db:create db:schema:load RAILS_ENV=test
    - bundle exec rspec
I don't know if you have special requirements that force you to run apt-get on every build; if not, create your own Dockerfile with those commands in it, so that your base image already has those updates and the nodejs package. If you want to update later on, you can always rebuild from your Dockerfile.
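A minimal sketch of such a Dockerfile (whatever name you publish the image under is your choice):

FROM ruby:2.2.4
RUN apt-get update -qy && apt-get install -y nodejs

Build and push it to a registry, then point the image: line of your .gitlab-ci.yml at it instead of ruby:2.2.4.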
For your gems, if you want the build quicker, you can cache them between builds too. Normally this is done per job and per branch. See the example here: http://doc.gitlab.com/ee/ci/yaml/README.html#cache
cache:
  paths:
    - /cache
I prefer to add key: "$CI_BUILD_REF_NAME" so that the files are cached per branch. See the documentation for the other keys you can use.
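Combined with the snippet above, that would look something like this (a sketch):

cache:
  key: "$CI_BUILD_REF_NAME"
  paths:
    - /cache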
You can set the BUNDLE_PATH environment variable and point it to a folder where you want your gems to be installed. The first time you run bundle install it will install all gems, and subsequent runs will only check whether there are any new gems and install only those.
Note: that's supposed to be the default behavior. Check your BUNDLE_PATH env var value. Is it being changed to a temporary per-commit folder or something? Or is 90% of the build time being spent on downloading gem metadata from rubygems.org, i.e. on this step:
Fetching source index for https://rubygems.org/
In that case you might want to consider using the --local flag (though I'm not sure this is a good idea on a CI server).
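In .gitlab-ci.yml, setting that variable could look like this (a sketch; /cache matches the path used elsewhere in this thread):

variables:
  BUNDLE_PATH: /cache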
After looking at your .gitlab-ci.yml, I noticed that your --path option is missing an =. I think it is supposed to be:
- bundle install --path=/cache
So I am trying to deploy a Rails app on my web hosting service. I have developed an app locally, but this is the first time I have tried to get it to work on another server. My service provider is Blue Host and I am on their most basic shared hosting plan. Just as a test, I created a fresh application on the server, and everything ran fine. However, whenever I add any gem to the Gemfile and run 'bundle install', I get this error:
sudo: unable to stat /etc/sudoers: No such file or directory
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
Gem::Exception: Cannot load gem at [/usr/lib64/ruby/gems/1.9.3/cache/rake-10.4.2.gem] in /home/user/application
An error occurred while installing rake (10.4.2), and Bundler cannot continue.
Make sure that `gem install rake -v '10.4.2'` succeeds before bundling.
Whenever I run gem install rake -v '10.4.2', the gem installs fine.
I get similar errors that mention 'sudo' when I try to run other commands as well.
I am not quite sure what this error means. Do I not have the required permissions on my server?
Always use continuous deployment/integration.
Capistrano does part of the job. It is very simple: you develop your application offline, push to a remote repository like Bitbucket or GitHub, and then Capistrano takes care of cloning the remote repository to your server (you can also have many), restarting services, etc.
If you want to go a step further, you can use continuous integration, so that when you push to the remote, tests run automatically, and if they pass your application is deployed.
This is a basic introduction to how deployment works; there are plenty of resources online about how to deploy Rails.
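For a taste of the Capistrano side, a minimal config/deploy.rb might look like this (a sketch; the application name, repository URL, and server are hypothetical):

# config/deploy.rb (Capistrano 3)
set :application, "myapp"
set :repo_url, "git@bitbucket.org:me/myapp.git"  # the remote repository Capistrano clones
set :deploy_to, "/var/www/myapp"                 # where releases live on the server

server "example.com", user: "deploy", roles: %w[app web db]

Running cap production deploy then clones the repository onto the server and publishes a new release.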
Switch to the root user:
su root
root$ /etc/
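Presumably the intent here is to inspect /etc/sudoers as root, since the error says it cannot be stat'd (an assumption; on a basic shared hosting plan you may not have root access at all):

su root
ls -l /etc/sudoers   # does the file exist and is it readable?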
I'm building a Rails app using GitLab CI and a few issues came up.
The first was that it couldn't find rake to run the tests.
I installed rake on my DigitalOcean server manually to solve this.
Then it complained that gitlab_ci_runner is not in the sudoers list.
I added gitlab_ci_runner to the sudoers list, which solved that problem.
Now, when running bundle install, it complains with Bundler::GemspecError: Could not read gem for every single gem unless I install them myself.
Am I missing something with the way I set up GitLab CI?
Try running bundle as the 'gitlab_ci_runner' user without switching user:
sudo -u gitlab_ci_runner -H bundle install --deployment
If you're a root user that has a password (meaning you're NOT using an SSH key), use this to switch back to the root user:
su