Communication between booted Vagrant Virtual Machine and Jenkins

I am trying to create a VM to run a few tests and destroy it once done. I am using the Jenkins 'Boot up Vagrant VM' option to boot a VM, and Chef to install the required packages and run the tests in it. When testing completes in this VM, is there any way it (the VM) can communicate the results back to the Jenkins job that triggered it?
I am stuck with this part.
I have implemented booting the VM from a custom Vagrant box which has all the essential packages and software required to run the tests.

First of all, thanks to Markus; if he had left an answer, I'd surely have accepted it.
I edited the Vagrantfile to add a synced folder:
config.vm.synced_folder "host/", "/guest"
This creates a /guest folder in the VM, and the host folder we created on the host system is also reflected in the VM.
All I did then, as Markus suggested, was have Jenkins poll that folder (using the Files Found Trigger plugin) for a specific file that the VM is expected to write.
In the VM, whenever testing is done, I simply put the result file in the synced folder; it automatically shows up on my local machine in the folder Jenkins is polling, and Jenkins builds whichever project is watching that folder. Ta-dah!
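To sketch the resulting flow (the folder and file names below are just placeholders matching the example above):

# Vagrantfile on the host shares ./host with the VM as /guest:
#   config.vm.synced_folder "host/", "/guest"

# inside the VM, once the tests finish, drop the results into the shared folder:
cp build/test-results.xml /guest/results.xml

# Jenkins, via the Files Found Trigger plugin, polls the host-side "host/"
# folder for results.xml and kicks off the downstream job when it appears.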


Use VSCode remote development on docker image without local files

Motivation
As of now, we are using five docker containers (MySQL, PHP, static...) managed by docker-compose. We only need to access one of them. We currently keep a local copy of all the data and sync it from Windows to the container, but that is very slow, and VSCode on Windows sometimes randomly locks files, causing git rebase origin/master to end in very unpleasant ways.
Desired behaviour
Use VSCode Remote Development extension to:
Edit files inside the container without any mirrored files on Windows
Run git commands (checkout, rebase, merge...)
Run build commands (make, ng, npm)
Still keep Windows, as for many developers it is the preferred platform.
Question
Is it possible to develop inside a docker container using VSCode?
I have tried to follow the official guide, but it seems to require us to have mirrored files. We also use WSL.
As @FSCKur points out, this is the exact scenario VSCode dev containers are supposed to address, but on Windows I've found the performance to be unusable.
I've settled on running VSCode and docker inside a Linux VM on Windows, and have seen roughly a 96% time saving in things like running up a server and watching code for changes, which makes this setup my preferred way of working now.
The standardisation of devcontainer.json and being able to use GitHub Codespaces if you're away from your normal dev machine make this whole setup a pleasure to use.
See https://stackoverflow.com/a/72787362/183005 for a detailed timing comparison and setup details.
This sounds like exactly what I do. My team uses Windows on the desktop, and we develop a containerised Linux app.
We use VSCode dev containers. They are an excellent solution for the scenario.
You may also be able to SSH to your docker host and code on it, but in my view this is less good, because you want to keep all customisation "contained" - I have installed a few quality-of-life packages in my dev container which I'd prefer to keep out of my colleagues' environments and off the docker host.
We have access to the docker host, so we clone our source on the docker host and mount it through. We also bind-mount folders from the docker host for SQL and Redis data - but that could be achieved with docker volumes instead. IIUC, the workspace folder itself does have to be a bind-mount - in fact, no alternative is allowed in the devcontainer.json file. But since you need permission anyway on the docker daemon, this is probably achievable.
All source code operations happen in the dev container, i.e. in Linux. We commit and push from there, we edit our code there. If we need to work on the repo on our laptops, we pull it locally. No rcopy, no SCP - github is our "sync" mechanism. We previously used vagrant and mounted the source from Windows - the symlinks were an absolute pain for us, but probably anyone who's tried mounting source code from Windows into Linux will have experienced pain over some element or other.
VSCode in a dev container is very similar to the local experience. You will get bash in the terminal. To be real, you probably can't work like this without touching bash. However, you can install PSv7 in the container, and/or a 'better' shell (opinion mine) such as zsh.
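As a rough illustration of that layout (not the poster's actual configuration; every name and path below is a placeholder), a devcontainer.json created alongside the host-side source clone could bind-mount the source and a data folder like this:

# hypothetical sketch, run on the docker host: write a devcontainer.json that
# bind-mounts a host-side source clone and a data folder (paths are placeholders)
mkdir -p .devcontainer
cat > .devcontainer/devcontainer.json <<'EOF'
{
  "name": "myapp-dev",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "workspaceMount": "source=/srv/src/myapp,target=/workspace,type=bind",
  "workspaceFolder": "/workspace",
  "mounts": [
    "source=/srv/data/mysql,target=/var/lib/mysql,type=bind"
  ],
  "remoteUser": "vscode"
}
EOF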

Initial setup for ssh on docker-compose

I am using Docker for macOS / Windows.
I connect to external servers via ssh from a shell in a docker container.
For now, I generate an ssh key in the docker shell and manually send the public key to the servers.
However, with this method, every time I rebuild the container the ssh key is deleted.
So I want to set up an initial ssh key when I build the image.
I have 2 ideas:
Mount the .ssh folder from my macOS host into the container and persist it.
(Permission control might be difficult and complex....)
Write a script that generates the ssh key and sends it to the servers, in docker-compose.yml or the Dockerfile.
(Every time I build, a new key is sent...??)
Which is the best practice? Or do you have any other idea for setting up the ssh key automatically?
Best practice is typically to not make outbound ssh connections from containers. If what you’re trying to add to your container is a binary or application code, manage your source control setup outside Docker and COPY the data into an image. If it’s data your application needs to run, again fetch it externally and use docker run -v to inject it into the container.
As you say, managing this key material securely, and obeying ssh’s Unix permission requirements, is incredibly tricky. If I really didn’t have a choice but to do this I’d write an ENTRYPOINT script that copied the private key from a bind-mounted volume to my container user’s .ssh directory. But my first choice would be to redesign my application flow to not need this at all.
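A minimal sketch of that approach, assuming the private key is bind-mounted read-only at /run/keys/id_rsa (the path and the entrypoint file name are placeholders, not part of any official image):

#!/bin/sh
# entrypoint.sh - copy a bind-mounted key into the container user's ~/.ssh
# run e.g. with: docker run -v "$HOME/.ssh/id_rsa:/run/keys/id_rsa:ro" myimage
set -e
mkdir -p "$HOME/.ssh"
cp /run/keys/id_rsa "$HOME/.ssh/id_rsa"
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/id_rsa"
exec "$@"   # hand control to the container's main command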
After reading the "I'm a windows user .." comment, I think you are solving the wrong problem. You are looking for easy (sane) shell access to your servers. There are two simpler solutions.
1. Windows Subsystem for Linux -- https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux (not my choice)
2. Cygwin -- http://www.cygwin.com -- for that comfy Linux feel to your cmd :-)
How I install it.
Download and install it (be careful to pick only the features beyond the base that you need; there is a LOT and most of it you will not need, like the compilers and X). Make sure that SSH is selected. Don't worry, you can rerun the setup as many times as you want (I do that occasionally to update what I use).
Start the bash shell (there will be a link after the installation)
a. run 'cygpath -wp $PATH'
b. look at the results -- there will be a couple of folders at the beginning of the path that look like "C:\cygwin\bin;C:\cygwin\usr\local\bin;..."; in short, all the paths that start with "C:\cygwin", provided you installed Cygwin into the "C:\cygwin" directory
c. Add these paths to your system PATH
d. Start a new instance of CMD and run 'ls'; it should now work directly in the Windows shell
Extra credit.
a. move all the ".xxx" files that were created during the first launch of the shell in your C:\cygwin\home\<username> directory to your Windows home directory (C:\Users\<username>)
b. exit any bash shells you have running
c. delete c:\cygwin\home directory
d. use the Windows mklink utility (from an Administrator shell) to create a link named home under Cygwin pointing to C:\Users: 'mklink /J C:\Cygwin\home C:\Users'
This will make your windows home directory the same as your cygwin home.
After that you follow the normal setup for ssh under Cygwin bash and you will be able to generate the keys and distribute them normally to servers.
NOTE: you will have to sever the inheritance of Windows permissions on your <home>/.ssh folder (in the folder's security settings), leaving just your own user id; then set permissions on the folder and the various key files underneath appropriately for SSH using 'chmod'.
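For reference, the key generation and permission tightening on the Cygwin bash side usually look something like this (the server name is a placeholder):

# run in Cygwin bash after fixing the folder's Windows security settings
ssh-keygen -t rsa -b 4096
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub
# push the public key to each server you want password-less access to
ssh-copy-id user@server.example.com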
Enjoy -- some days I have to squint to remember I'm on a windows box ...

Jenkins Project Not Running Correctly When Slave is Connected via Windows Service?

I have a Jenkins project that runs automated tests on a slave machine. However, when I set the connection to the slave node up as a Windows Service, and run the project on that connection, the build itself will "succeed" (sometimes) but my tests will not run correctly. When the build does succeed, the console output looks like everything went fine; I know it isn't how it should be though, because the Selenium web browser never runs on the slave machine during the execution when it's done through the Service connection. At one point I thought it might be because installing the slave-agent as a Service puts all of the associated files in the same directory that the slave node is based in by default, but when I changed the path to the executable for the Service and moved all of the files, it would still connect, and the project still wouldn't run as it should.
As soon as I delete the Service, and launch a connection manually from my slave machine, everything goes through as expected.
Does anyone know why this might be happening? Or, if not, do you know of an alternative to connecting at startup? Thanks in advance for your advice/ideas.
Just moving my comment to an answer so you can accept it, as you indicated this resolved the problem; it should also make it easier for others to follow.
Have you set permissions properly? The slave task runs with the local account which may not have access to the paths or tools you are trying to use. As a service in the background, you may also need to allow the service to interact with the desktop.
The service will not show up on the computer running the tests unless you enable the check box to allow the service to interact with the desktop.
For anyone else who may be having this problem, I wanted to post the solution I ended up using. (I'm not accepting it as the answer to this particular question because it's a work-around; however, @StevenScott has the answer to making this work as a Service in the comment he posted above.)
I nixed the Service I created and made a Scheduled Task that uses a batch script to launch the JNLP agent instead. There is a command-line option shown on the slave node's page in Jenkins, but it did not work for me in the batch file; instead, I simply wrote the script to cd into a directory that already contained a copy of slave-agent.jnlp and ran it from there.
For this to work, you will need to disable the pop-up that appears when you run the slave-agent (the one that asks if you want to run the program).
The settings for the task should include the following:
General Settings: "Run only when user is logged on"
Triggers: "At log on" (specify your user account)
Actions: "Start a program" (specify the location of your batch script)

Grails watch files doesn't work inside Docker container running inside a Vagrant virtual machine

I have a fairly nested structure:
MacOSX workstation running a...
Vagrant VirtualBox virtual machine with ubuntu/trusty64 running a...
Docker container running...
my application written in Grails
Every layer is configured in such a way as to share a portion of the file system from the layer above. This way:
Vagrant, with the config.vm.synced_folder directive in the Vagrantfile
Docker, with the -v command-line switch and the VOLUME directive in the Dockerfile
This way I can do development on my workstation and the Grails application at the bottom should (ideally) detect changes and recompile/reload on the fly. This is a feature that used to work when I was running the same application directly on macOS, but now Grails seems totally unaware of file changes. Of course, if I open the files with an editor (inside the Docker container) they are indeed changed, and in fact if I stop/restart the Grails app the new code is used.
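Schematically, the chain of shared folders is something like this (the paths and image name here are placeholders, not my actual ones):

# Vagrantfile on the Mac shares the project into the VM:
#   config.vm.synced_folder "./app", "/home/vagrant/app"

# inside the VM, the same tree is handed on to the container:
docker run -v /home/vagrant/app:/app my-grails-image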
I don't know how grails implements the watch strategy, but if it depends on some operating system level feature I suspect that file change notifications get lost somewhere in the chain.
Anyone has an idea of what could be the cause and/or how I could go about debugging this?
There are two ways to detect file changes (that I'm aware of):
Polling, which means checking timestamps of all files in a folder at a certain interval. Getting to "near instant" change detection requires very short intervals. This is CPU and disk intensive.
OS Events (inotify on Linux, FSEvents on OS X), where changes are detectable because file operations pass through the OS subsystems. This is easy on the CPU and disk.
Network file systems (NFS) and the like don't generate events. Since file changes do not pass through the guest OS's subsystems, the guest is not aware of the changes; only the OS making the changes (OS X) knows about them.
Grails and many other File Watcher tools depend on FSEvents or inotify (or similar) events.
So what to do? It's not practical to 'broadcast' NFS changes from host to all guests under normal circumstances, considering the traffic that would potentially generate. However, I am of the opinion that VirtualBox shares should count as a special exception...
A mechanism to bridge this gap could involve a process that watches the host for changes and triggers a synchronization on the guest.
Check these articles for some interesting ideas and solutions, involving some type of rsync operation:
http://drunomics.com/en/blog/syncd-sync-changes-vagrant-box (Linux)
https://github.com/ggreer/fsevents-tools (OS X)
Rsync-ing to a non-NFS folder on your guest (Docker) instance has the additional advantage that I/O performance increases dramatically. VirtualBox shares are just painfully slow.
Update!
Here's what I did. First install lsyncd (OS X example, more info at http://kesar.es/tag/lsyncd/):
brew install lsyncd
Inside my Vagrant folder on my Mac, I created the file lsyncd.lua:
settings {
    logfile = "./lsyncd.log",
    statusFile = "./lsyncd.status",
    nodaemon = true,
    pidfile = "./lsyncd.pid",
    inotifyMode = "CloseWrite or Modify",
}
sync {
    default.rsync,
    delay = 2,
    source = "./demo",
    target = "vagrant@localhost:~/demo",
    rsync = {
        binary = "/usr/bin/rsync",
        protect_args = false,
        archive = true,
        compress = false,
        whole_file = false,
        rsh = "/usr/bin/ssh -p 2222 -o StrictHostKeyChecking=no"
    },
}
What this does, is sync the folder demo inside my Vagrant folder to the guest OS in /home/vagrant/demo. Note that you need to set up login with SSH keys to make this process frictionless.
Then, with the Vagrant VM running, I kicked off the lsyncd process. The -log Exec option is optional; it logs lsyncd's activity to stdout:
sudo lsyncd lsyncd.lua -log Exec
On the vagrant VM I started Grails (2.4.4) in my synced folder:
cd /home/vagrant/demo
grails -reloading run-app
Back on my Mac in IntelliJ I edited a Controller class. It triggered lsyncd almost immediately (2-second delay), and soon after that I confirmed Grails had recompiled the class!
To summarize:
Edit your project files on your Mac, execute on your VM
Use lsyncd to rsync your changes to a folder inside your VM
Grails notices the changes and triggers a reload
Much faster disk performance by not using VirtualBox share
Issues: TextMate triggers a type of FSEvent that lsyncd does not (yet) recognize, so changes are not detected. Vim and IntelliJ were fine, though.
Hope this helps someone! Took me a day to figure this stuff out.
The best way I have found to have filesystem notifications visible within the container was as follows:
Create two folders, one to map the project and a "mirror"
Map the project in the first folder
Keep a background script running in the container, rsyncing the project folder to "mirror" (sketched below)
Run the project from "mirror"
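A rough sketch of that background script, assuming the bind-mounted project lives at /project and the mirror at /mirror (both paths are placeholders):

#!/bin/sh
# keep a container-local mirror of the mounted project so that inotify events
# are generated on the filesystem the application actually watches
while true; do
    rsync -a --delete /project/ /mirror/
    sleep 2
done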
It may not be the most efficient or most elegant way, but it was transparent to users of the container: no additional script needs to be run by hand.
I have not tested this in a larger project, but in my case I did not notice any performance issues.
https://github.com/altieres/docker-jekyll-s3
Vagrant already includes some options for rsync, so it's not necessary to install a special program on the host machine.
In the Vagrantfile, I configured rsync:
config.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: [ "./build", ".git/" ]
Then on the command line (on the host), I run:
vagrant rsync
Which performs a single synchronization from host to guest.
or
vagrant rsync-auto
Which runs automatically when changes on host are detected.
See more in the Vagrant rsync and rsync-auto documentation.

ArtifactDeployer plugin - remote access denied (Linux to Windows)

I am trying to use the ArtifactDeployer plugin to copy the artifacts from the WORKSPACE/jobs/ directory into a remote directory on a Windows 7 machine. The Jenkins machine's OS is Linux.
However, Jenkins never manages to succeed, throwing errors like:
[ArtifactDeployer] - Starting deployment from the post-action ...
[ArtifactDeployer] - [ERROR] - Failed to deploy. Can't create the directory ...
Build step [ArtifactDeployer] - Deploy artifacts from workspace to remote directories' changed build result to FAILURE
I am not sure how to use the Remote Directory parameter.
Please see below for how I am trying to specify the remote directory:
remote Directory - \ip address of that machine\users\public
Is it possible to copy the artifacts which is on linux machine to windows 7 machine?
Please let me know how to specify the remote directory.
Reading the plugin page doesn't seem to be very helpful when it comes to configuring it. The text seems to hint that you need to have local access (from the node where the job is running) to the (remote) folder you want to deploy to. For a first test, use a local directory (on your Linux box) to see if you get it to work. Second, the correct way to address a Windows share is \\servername\sharename\subdirs. Remember that you might need to log in to the share.
You might need to install samba or cifs to connect to the Windows share from your Linux system. There is also a setting in Windows that determines whether your Windows box will accept connections via aliases. If that is not the case, you need to use the hostname in order to access the share, so the IP and any alias for the server will not work.
e.g.:
hostname: RTS3524
alias: JENKINSREPO
ip: 192.168.15.33
share: temp
For the example above, only \\RTS3524\temp will work; \\192.168.15.33\temp and \\JENKINSREPO\temp will not.
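If you go the samba/cifs route, mounting the share on the Linux node would look roughly like this (the user name, domain and mount point are placeholders; mount.cifs comes from the cifs-utils package on most distributions):

sudo mkdir -p /mnt/jenkins-drop
sudo mount -t cifs //RTS3524/temp /mnt/jenkins-drop -o username=jenkinsuser,domain=RTSDOMAIN
# ArtifactDeployer can then be pointed at /mnt/jenkins-drop as if it were a local directory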
