Does Vagrant need the source code locally? - ruby-on-rails

Let's say I have a Rails app locally on my machine and I use Vagrant with that app.
I have worked in that Vagrant environment and made a box from it.
Now I give the box to some other people.
Do they need to have the source code of the Rails app locally on their machines, or can they just use the Vagrant box I gave them without having the source code locally?

We use Vagrant for a VDE (virtual development environment) with the following scheme (maybe it will be useful for you too):
we keep our sources under Git (could be SVN/CVS/etc.);
we keep the Vagrantfile in the root folder of the Git repository;
in the Vagrantfile we add:
config.vm.box_url = "http://<url for our box>"
nfs = !Kernel.is_windows?
config.vm.share_folder "v-root", "/tmp/vde", ".", :nfs => nfs
we store our box on S3 (that is easy, but the simplest option can be Dropbox).
So to share your sources you just need to share the repository. In the Readme.md you can describe the few steps needed to launch the VDE.
With share_folder, all your sources will be available inside the VDE (the guest instance) under /tmp/vde.
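For a new teammate, getting started might then look roughly like this (the repository URL is a placeholder):
# rough sketch of a teammate's workflow, assuming the repo contains the Vagrantfile above
git clone git@example.com:ourteam/our-rails-app.git
cd our-rails-app
vagrant up        # downloads the box from box_url on the first run
vagrant ssh
ls /tmp/vde       # the cloned sources, shared in from the host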

Generally the source code of your Rails app is shared from your own filesystem into the virtual machine you're running with Vagrant; it is not stored on the virtual machine's drive, and it is never permanently stored in the box you package. Thus, sending the box to someone else will not allow them to run the app, because the app doesn't exist on the VM.
For more info, see "Accessing the Project Files" in the Vagrant SSH documentation:
Accessing the Project Files
Vagrant bridges your application with the virtual environment by using a VirtualBox shared folder. The shared folder location on the virtual machine defaults to /vagrant, but can be changed. This can be verified by listing the files within that folder in the SSH session:
vagrant@vagrantbase:~$ ls /vagrant
index.html Vagrantfile
The VM has both read and write access to the shared folder.
Remember: Any changes are mirrored across both systems.
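As a quick illustration of that mirroring (the file name is just an example):
# on the host, inside the project directory
echo "hello" > notes.txt
# the file is immediately visible inside the VM
vagrant ssh -c 'cat /vagrant/notes.txt'    # prints "hello"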

Related

How to migrate Nextcloud Docker to a new machine

I have a Nextcloud installation on a server that was installed using docker-compose. This installation utilizes a Nextcloud docker image and a separate MySQL (8.0) docker image for database access. The data and configuration files are placed in external volumes specified in the docker-compose.yml file.
I have recently put together a new machine that has more memory, a faster CPU, and (most importantly) much more disk space. I would like to migrate my current installation to the new machine.
The actual installation is simple enough: I can simply copy my docker-compose.yml file to the new machine and run it. The problem is with the data and the (somewhat unique) configuration that I have. I would like to get those onto the new machine.
Migrating a dockerized Nextcloud installation raises different issues from migrating a bare-metal or VM installation. For one thing, there is no clear way to place the installation into maintenance mode; you are working with two containers (effectively, this is like coordinating two different machines); and many of the steps described for migrating a bare-metal installation will not work reliably for a containerized installation (yes, one can go into the containers to run some of the commands required, but my attempts to do this resulted in screwed-up migrations).
Doing Google searches, I am seeing plenty of articles and instructions on how to migrate bare-metal Nextcloud installations from one machine to another, and how to migrate bare-metal (and virtual machine) installations to Docker. The procedures are pretty complex and involve placing the installation into maintenance mode and performing various backups and restores. Unfortunately, while I have seen a few people asking about how to migrate dockerized Nextcloud installations, there are no clear instructions on how to do this (at least, none that actually work!). Even the Nextcloud site does not discuss this!
Has anyone successfully migrated a dockerized Nextcloud installation from one machine to another? If so, how exactly was this done?
Was just able to do this myself, although I'm migrating my nextcloud install off my primary home server to a slower NAS-ish box I salvaged together after a move.
The main issues I ran into were file/dir ownership when moving from one machine to another. A secondary issue was ensuring the trusted domains were set correctly in config.php.
I'm sure it'd be better to use rsync to copy/move the files from machine to machine so that ownership stays intact, but I used scp and changed ownership manually. Your nextcloud_data container needs the www-data user to own the directory you have mapped to /var/www/html, and the nextcloud_db container (I use mariadb here, YMMV) needs the systemd-coredump user to own the directory you have mapped to /var/lib/mysql (or whatever your db backend's equivalent is).
Then just make sure you switch over your trusted_domains and trusted_proxies, either using docker-compose env vars, or by editing /var/www/html/config/config.php directly.
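For example, the trusted domains can also be adjusted from the host with occ inside the app container (the container name and domain are placeholders):
# add/replace a trusted domain via occ in the Nextcloud app container
docker exec -u www-data nextcloud_app php occ config:system:set trusted_domains 1 --value=cloud.example.com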
Based on Raphael PICCOLO's comments, I created a tarball of everything in the volumes I was using for my original installation, created a new installation on my target machine, then extracted the tarball on the new machine. There is, however, one other step that must be taken if you do it this way: you must change the ownership of all the files in the tarball so that they are owned by the userID used by the new Nextcloud installation. Otherwise, the new Nextcloud application will be unable to access any of the resources, and even attempts to log in will get 500 errors in the browser.
There is also a unique ID utilized by the MySQL container, so all the database-related data files must also undergo an ownership change.
Getting the correct userIDs is simple enough: when you first install the new Nextcloud and MySQL database, use the same volumes you had set up in the original docker-compose.yml file. Then, before untarring the data, look at the userIDs of the files in the database folder and the Nextcloud folders. Then, when you put the contents of your tarball on the new installation, use chown -R to make the ownership changes.
Note that I was transferring my installation from a CentOS 7 machine running Docker with the traditional root user to a CentOS 8 machine running Docker in "non-root user" mode. I do not know how permissions would be affected on other machines or modes.
Still, once the permissions were properly set up, everything works.
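A rough sketch of that procedure (the volume paths, host name, and numeric userIDs are assumptions; adjust them to what your own docker-compose.yml and fresh install actually use):
# on the old machine: stop the stack and archive the volume directories
docker-compose down
tar czf nextcloud-volumes.tar.gz /srv/nextcloud/html /srv/nextcloud/db
scp nextcloud-volumes.tar.gz newhost:/srv/
# on the new machine: bring the stack up once so the volumes exist, then stop it
docker-compose up -d && docker-compose down
ls -ln /srv/nextcloud/html /srv/nextcloud/db   # note the userIDs the fresh install used
tar xzf /srv/nextcloud-volumes.tar.gz -C /
# re-own the restored files for the new installation's users
chown -R 33:33 /srv/nextcloud/html    # e.g. www-data inside the app container
chown -R 999:999 /srv/nextcloud/db    # e.g. the mysql user inside the db container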

Initial setup for ssh on docker-compose

I am using Docker for macOS / Windows.
I connect to external servers via ssh from a shell in a Docker container.
For now, I generate an ssh key in the Docker shell and manually send the key to the servers.
However, with this method, every time I rebuild the container the ssh key is deleted.
So I want to set up an initial ssh key when I build the images.
I have 2 ideas:
Mount the .ssh folder from my macOS host into the container so it persists.
(Permission control might be difficult and complex....)
Write a script that generates the ssh key and sends it to the servers, in docker-compose.yml or the Dockerfile.
(Every time I build, a new key is sent...??)
Which is the best practice? Or do you have any other idea for setting up the ssh key automatically?
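For reference, idea 1 could be as simple as a read-only bind mount (the image name and paths are placeholders):
# idea 1 as a rough sketch: reuse the host's key material instead of baking a key into the image
docker run --rm -it -v ~/.ssh:/root/.ssh:ro my-image ssh user@external-server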
Best practice is typically to not make outbound ssh connections from containers. If what you’re trying to add to your container is a binary or application code, manage your source control setup outside Docker and COPY the data into an image. If it’s data your application needs to run, again fetch it externally and use docker run -v to inject it into the container.
As you say, managing this key material securely, and obeying ssh’s Unix permission requirements, is incredibly tricky. If I really didn’t have a choice but to do this I’d write an ENTRYPOINT script that copied the private key from a bind-mounted volume to my container user’s .ssh directory. But my first choice would be to redesign my application flow to not need this at all.
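A minimal sketch of that entrypoint approach, assuming the key is bind-mounted read-only at /run/ssh-key and the container runs as a user named app (all of these names are illustrative):
#!/bin/sh
# entrypoint.sh -- copy a bind-mounted key into the container user's ~/.ssh
# so ssh sees the strict permissions it requires
set -e
mkdir -p /home/app/.ssh
cp /run/ssh-key/id_rsa /home/app/.ssh/id_rsa
chown -R app:app /home/app/.ssh
chmod 700 /home/app/.ssh
chmod 600 /home/app/.ssh/id_rsa
exec "$@"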
After reading the "I'm a windows user .." comment I'm thinking you are solving the wrong problem. You are looking for easy (sane) shell access to your servers. There are two simpler solutions.
1. Windows Subsystem for Linux -- https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux (not my choice)
2. Cygwin -- http://www.cygwin.com -- for that comfy Linux feel to your cmd :-)
How I install it.
Download and install it (be careful to pick only the features beyond the base that you need; there is a LOT and most of it you will not need -- like the compilers and X). Make sure that SSH is selected. Don't worry, you can rerun the setup as many times as you want (I do that occasionally to update what I use).
Start the bash shell (there will be a link after the installation)
a. run 'cygpath -wp $PATH'
b. look at the results -- there will be a couple of folders at the beginning of the path that look like "C:\cygwin\bin;C:\cygwin\usr\local\bin;..." -- simply all the paths that start with "C:\cygwin", assuming you installed Cygwin into the "C:\cygwin" directory
c. Add these paths to your system path
d. Start a new instance of CMD and run 'ls'; it should now work directly under the Windows shell.
Extra credit.
a. move all the ".xxx" files that were created during the first launch of the shell in your C:\cygwin\home\<username> directory to your Windows home directory (C:\Users\<username>).
b. exit any bash shells you have running
c. delete c:\cygwin\home directory
d. use the Windows mklink utility (from an Administrator shell) to create a link named home under Cygwin pointing to C:\Users: 'mklink /J C:\Cygwin\home C:\Users'
This will make your windows home directory the same as your cygwin home.
After that you follow the normal setup for ssh under Cygwin bash and you will be able to generate the keys and distribute them normally to servers.
NOTE: you will have to sever the propagation of inherited Windows permissions to your <home>/.ssh folder (in the folder's security settings), leaving just your user id. Then set permissions on the folder and the various key files underneath appropriately for SSH using 'chmod'.
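That "normal setup" under the Cygwin bash shell might look roughly like this (the server name is a placeholder):
# generate a key pair and lock down permissions the way ssh expects
ssh-keygen -t rsa -b 4096
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa
# copy the public key to each server you want to reach
ssh-copy-id user@your-server.example.com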
Enjoy -- some days I have to squint to remember I'm on a windows box ...

Version Control Vagrant and Ansible virtual box config with Rails app?

Just setting up a new Rails app and I have my Vagrant files along with a folder full of dev machine provisioning files for Ansible. These allow me to spin up a dev virtual machine, provision it and have everything up and running really quickly.
My question is, should all that be in my project's version control repository? I will be working on this project across several machines, so having it accessible and synced would be useful, but on the other hand I don't want those items to be deployed when I finally deploy the app to production. Also, having those files committed would keep a history of them, which would also be nice.
What would you recommend?
This is very much a thing of your personal preference.
Some people keep everything in a single self-contained repo. Other people keep application code in a separate repo from their configuration/provisioning/deployment code.
Either way has its own benefits and drawbacks, and there's no wrong way of doing it as long as you do keep it in some version control system.
When I set up new projects I create a directory structure along the lines of:
/<application_name>
./src
./deployment
./docs
Actual source code goes in src, any deployment-specific scripts (e.g. Ansible playbook dirs, Vagrant files) go in deployment and of course any documentation goes in docs.
Then I commit all this to source control. The deployment scripts are then written to be executed from their directory but change into the src directory to perform their actions.
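A hypothetical helper following that convention might look like this (the script name, paths, and commands are assumptions, not part of the layout above):
#!/usr/bin/env bash
# deployment/provision_and_test.sh -- run from deployment/, act on ../src
set -e
DEPLOY_DIR="$(cd "$(dirname "$0")" && pwd)"
# bring up the dev VM defined by the Vagrantfile kept in deployment/
(cd "$DEPLOY_DIR" && vagrant up)
# then change into src to perform the actual work, e.g. run the test suite
cd "$DEPLOY_DIR/../src"
bundle exec rake test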

Communication between booted Vagrant Virtual Machine and Jenkins

I am trying to create a VM to run a few tests and destroy it once done. I am using Jenkins' 'Boot up Vagrant VM' option to boot up a VM, and Chef to install the required packages and run the tests in it. When testing is completed in this VM, is there any way it (the VM) can communicate the results back to the job in Jenkins which triggered it?
I am stuck with this part.
I have implemented booting up the VM based on a custom Vagrant box which has all the essential packages and software required to run the tests.
First of all, thanks to Markus; if he had left an answer, I'd surely accept it.
I edited the Vagrantfile to add a synced folder:
config.vm.synced_folder "host/", "/guest"
This creates a /guest folder in the VM, and the host/ folder we created on the host system is mirrored into it.
All I did then, as Markus suggested, was set up polling from Jenkins (using the Files Found Trigger plugin) on a folder, looking for a specific file that the VM is expected to write.
In the VM, whenever the testing is done, I simply put the results in the /guest folder; they automatically show up on my local machine in the host/ folder that Jenkins is polling, the polling job picks them up and builds the project, and ta dahhh ....!
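Inside the VM, the final step of the test run can then be as simple as dropping the results into the synced folder (the file names are illustrative):
# inside the VM, at the end of the test run
cp test-report.xml /guest/
touch /guest/tests-finished   # the file the Files Found Trigger job is polling for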

How do you create a shared folder to the host using vSphere?

VMware Player and Workstation have the ability to easily create a folder shared directly with the host:
http://www.vmware.com/support/ws5/doc/ws_running_shared_folders.html
This feature seems to be missing or to have been moved in vSphere. How do you set it up in vSphere?
Thanks.
Actually, we can't have shared folders with ESXi. But we can work around it by creating a folder in the host datastore and copying files to/from it over the scp protocol. Of course, you need administrative privileges on the host for that.
This link explains how to set up SSH Server and Shell Access on ESXi:
http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vcli.migration.doc_50%2Fcos_upgrade_technote.1.4.html
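With SSH enabled on the host, the copy itself might look something like this (the host name and datastore path are placeholders):
# copy a file into a folder on the ESXi host's datastore
scp ./myfile.iso root@esxi-host.example.com:/vmfs/volumes/datastore1/shared/
# and fetch one back
scp root@esxi-host.example.com:/vmfs/volumes/datastore1/shared/results.log .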
This feature doesn't make sense with vSphere, which is why you can't find it.
Workstation, Player, and Server all run on top of a "host OS", while ESX (vSphere-managed) runs on bare metal. You're not supposed to have access to the native file system on the host - so there is no option to do so.
