Deploy features.xml in ServiceMix during Jenkins build

I have my features.xml file in the src/main/resources/features folder. When I build my project through Jenkins, the bundle is pushed to the Nexus repository. My requirement is that after the bundle reaches Nexus, features.xml should automatically be deployed to ServiceMix as part of the build, so that I do not have to open the ServiceMix console to install the feature. Please help.

You may think about using a KAR (KAraf aRchive).
More information can be found here: http://karaf.apache.org/manual/latest-3.0.x/users-guide/kar.html
You can build a KAR containing your feature (through Jenkins), and then use hot deployment.
Apache Karaf also provides a KAR deployer, which means you can drop a KAR file directly into the deploy folder. Apache Karaf will automatically install KAR files from the deploy folder. You can change the behaviour of the KAR deployer in etc/org.apache.karaf.kar.cfg.
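As a rough sketch of the build side (the plugin version and the featuresFile location below are assumptions; check the karaf-maven-plugin documentation for your Karaf version), the KAR can be assembled from your feature descriptor during the same Maven build that Jenkins already runs:

<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-maven-plugin</artifactId>
  <version>3.0.3</version>
  <extensions>true</extensions>
  <executions>
    <execution>
      <id>create-kar</id>
      <goals>
        <goal>kar</goal>
      </goals>
      <configuration>
        <!-- assumed location; point this at the features file in src/main/resources/features -->
        <featuresFile>${project.basedir}/src/main/resources/features/features.xml</featuresFile>
      </configuration>
    </execution>
  </executions>
</plugin>

The resulting .kar is attached to the build and goes to Nexus like any other artifact; a post-build step in the Jenkins job (for example a simple scp) can then copy it into Karaf's deploy folder, where the KAR deployer installs the feature without anyone opening the console.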

I have also been working on this, and my solution was to turn to automated scripting. I wrote an SSH- and FTP-based program which would stop the SMX instance, delete the ${karaf.home}/data/cache/ directory, replace the feature file with the one retrieved via FTP, and then restart the Karaf container.
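Roughly, the script did the equivalent of the following (a hedged sketch with placeholder host, user and paths; the real tool also handled the FTP retrieval and error checking):

#!/bin/sh
# Stop the container, clear the bundle cache, swap in the new features file, restart.
SMX_HOST=smx-uat.example.com
KARAF_HOME=/opt/servicemix

ssh karaf@$SMX_HOST "$KARAF_HOME/bin/stop"
ssh karaf@$SMX_HOST "rm -rf $KARAF_HOME/data/cache"
scp features.xml karaf@$SMX_HOST:$KARAF_HOME/deploy/features.xml
ssh karaf@$SMX_HOST "$KARAF_HOME/bin/start"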
If you are open to looking into other possibilities:
You can look into Fuse Fabric, which can link many SMX containers together and implement version upgrades and rollbacks. Currently I believe this would also need scripting to accomplish it automatically.
The third option is relatively new and comes in the form of building Docker images and deploying them via OpenShift V3, which was just unveiled at the Red Hat Summit 2015. It's worth noting it's fairly new, but it does pack a very impressive feature set.

Related

Docker, Vagrant sandboxes in PHP development

I am a bit unsure and want to find the best approach for a development sandbox environment for a PHP project.
I have a GitHub repo that will host the code (.php, .js using webpack with Babel, and .scss files) and different machines for development (Windows, Mac, Ubuntu).
I want to be able to pull the code from GitHub, run a command (like vagrant up) and start a VM / container / sandbox with Apache2, PHP and Node.js that compiles the .scss and .js files into single bundles, so the server can be started on every platform without having to run gulp locally on each developer's PC or set up XAMPP with the correct path for the Apache2 web root.
What would be the best approach? A Vagrant VM with a config file in the repo, or Docker containers?
I just want to simplify the development experience and have automated tooling that starts a server and compiles Sass and Babel JS.
Thank you
While I am a big Docker fan, it isn't always the right tool for the job. Docker has principles like immutability and single-service-per-container that probably won't work well for what you're looking to do without a learning curve.
There is a great open-source Vagrant tool called PuPHPet that makes configuring a development environment straightforward.
https://puphpet.com/
From the PuPHPet web site you can configure an image with Apache2, PHP, and NodeJS via their wizard, and it will generate a Vagrantfile that you can run locally on your workstation. This way you can have all the software you need without having to deal with installing or maintaining it yourself. It also supports installing databases, queues, and mail applications, should you need them.
For your scenario, I would clone the code from GitHub onto your workstation (not the VM) and mount it into the VM using Vagrant synced folders, so it remains directly accessible to your IDE.
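A minimal Vagrantfile sketch of that synced-folder setup (the box name, guest path and port are assumptions; PuPHPet generates a far more complete file for you):

# -*- mode: ruby -*-
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  # Reach the Apache vhost inside the VM from the host browser
  config.vm.network "forwarded_port", guest: 80, host: 8080
  # Keep the Git checkout on the host, mounted into the VM's web root
  config.vm.synced_folder ".", "/var/www/myproject"
end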

TFS Release Task to Deploy a Web Deploy Package to specific directory

I've been digging for hours and I haven't been able to find what I would think is a pretty common scenario.
I am attempting to deploy a Web Deploy package to my existing Web Site\Web App via a TFS Release. The location of my existing web apps is mapped to a different drive; my source code on my web server is not in C:\inetpub, let's say it's in D:\MyFiles.
I'm open to using any TFS task to do this. It seems like my two options are:
Run Batch Script - point to myApp.deploy.cmd
WinRm IIS Web App Deployment
I've seen lots of examples of overriding the computer name via the SetParameters file, but I have not seen a single example of how to set the target path for the package.
Again, I want to deploy a web package via a TFS release to D:\MyFiles. I've created the package and it deploys locally to C:\inetpub. I would assume that if I can get it to deploy to a specified target location locally, then when I run that same .cmd file from the TFS release it will use that location on the target server.
UPDATE:
So... this just started working. I'm not sure what the issue was, but the WinRM task didn't do the deploy on Friday and did the deploy on Monday. I'm thinking it may have been related to an FQDN for the server path. Honestly I'm not sure what fixed it or what to do with this post; the answer below by @Andy may help someone, so I won't delete it. That link is a good one and it showed me how to perform IIS configuration with Web Deploy.
Thanks in advance,
Greg
Seems you are trying to change the physical path of an IIS site/app using MSDeploy.
Just try adding an additional command (appcmd) to the MSDeploy package manifest to change the physical path of the IIS site during the deployment:
<runCommand path="%windir%\system32\inetsrv\appcmd set app /app.name:&quot;Default Web Site/app12&quot; /[path='/'].physicalPath:C:\temp\app12" waitInterval="5000"/>
Refer to this article for details:
WebDeploy/MSDeploy Quick TIP: Change IIS Site/APP Physical Path with MSDeploy
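For reference, the other lever mentioned in the question is the generated SetParameters.xml: the "IIS Web Application Name" parameter decides which IIS site/app the package is deployed into, and the content then lands in that app's physical path (which the appcmd call above repoints). A hedged example with placeholder names:

<parameters>
  <setParameter name="IIS Web Application Name" value="Default Web Site/MyApp" />
</parameters>

The generated MyApp.deploy.cmd can then be run against the remote server, for example MyApp.deploy.cmd /Y /M:https://webserver:8172/msdeploy.axd /U:deployUser /P:***** -allowUntrusted (the exact flags depend on your Web Deploy and authentication setup).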

Public testing environment for RoR + Angular

Currently I need a little hint for test-running a RoR + Angular application. By "test running" I mean that the "project owner" can see the current version of the project. For large projects with a lot of developers we had a build server, which pulled the current version from the git repository and deployed it as a "nightly build".
For projects where I'm the only developer I use Dropbox to synchronize my working directory to a server that is accessible to the project owner.
But now I'm working on a small project with two other developers. The build server is too much, and the Dropbox solution is not going to work because every one of us has a different state; working on the same Dropbox directory is a no-go.
So what's the best solution?
Consider running it on Heroku, which has a free tier. Once set up properly, deployments can be done by simply pushing to your Heroku git remote, or you can set it up to watch an existing remote repository branch (such as a particular branch on GitHub).
Or you could run it on your own machine. If you are behind a firewall you could set up a tunnel so your team can see it, with something like ngrok.
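A quick sketch of both options (the app name and port are placeholders, and it assumes the Heroku CLI and ngrok are installed):

# One-off Heroku setup from the project's git checkout, then deploy by pushing
heroku create my-staging-app
git push heroku master

# Or expose a dev server already running locally on port 3000
ngrok http 3000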

Apache Service Mix Deployment Approaches

Folks,
We have an enterprise application which uses Apache ServiceMix for deployment. The application consists of various services, each created as a separate Maven project (bundle). During development we build each service separately and, to deploy it, put it in the deploy folder. We also have to uninstall the bundle from the container (Karaf) and install it again from the console to bring the new changes into effect. This is fine during the development phase.
Now we want to deploy the code to a UAT environment (Amazon EC2) for the client to do the testing, and we are unsure how to deploy the bundles to the remote environment. Is there a standard approach for CI using Jenkins (or some other tool) to automate the build and deploy process, so that someone who has no knowledge of the bundles (SMX) can deploy the code? We are using GitHub for source code management.
We have searched a lot in this regard and couldn't find any resources that provide leads or hints on this.
Any help or tips are highly appreciated. If you need more info, I can give more details.
~Ragesh
We have an almost identical setup: we use Jenkins to build, let the sysadmin copy the bundles to one server, and then rsync them to the rest of the servers.
Remember, always deploy the dependent bundles first and then the remaining ones.
Because of that ordering dependency, we haven't automated this part of the process.
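A rough sketch of what that manual sequence looks like (host names, user, paths and bundle names are placeholders; in practice it runs as a Jenkins post-build shell step or is done by the sysadmin by hand):

#!/bin/sh
set -e
STAGING_HOST=smx-uat-1.example.com
DEPLOY_DIR=/opt/servicemix/deploy

# Copy the bundles that others depend on first, then the rest
scp target/common-model-1.0.0.jar  deploy@$STAGING_HOST:$DEPLOY_DIR/
scp target/order-service-1.0.0.jar deploy@$STAGING_HOST:$DEPLOY_DIR/

# Fan the deploy folder out to the remaining servers from the staging host
ssh deploy@$STAGING_HOST "rsync -av $DEPLOY_DIR/ smx-uat-2.example.com:$DEPLOY_DIR/"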

Is it possible to run a private Hex (Erlang) dependency manager (and if so how)?

I'm working in an Erlang environment. I'm looking to establish a dependency manager so that our build server can publish binaries for reuse instead of using source-code dependencies. The hexpm GitHub project implies that it is possible to run it outside of the hex.pm website, but I don't see any instructions for doing so. Specifically, I would like my build server to be able to publish packages either directly (via the filesystem) or via rebar3, and for subsequent rebar3 builds to be able to use those published packages.
Is it possible to run Hex on my own server?
If so, where would I find some documentation on how to set it up (or provide the instructions directly)?
If you look at https://github.com/hexpm/hex_web there are instructions in the README.md for both installing and running it. It's a Phoenix application, so it should all be relatively familiar ground if you've looked at the Phoenix framework before.
As for getting rebar3 to work with your installation, there is documentation here on the config values to use for pointing rebar3 at your own Hex packages: http://www.rebar3.org/docs/hex-package-management.
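A minimal sketch of the rebar.config side (the repo name and URL are placeholders, and the exact keys depend on your rebar3 version, so verify against the linked docs):

%% rebar.config (or the global ~/.config/rebar3/rebar.config)
{hex, [
  {repos, [
    #{name     => <<"my_private_hex">>,
      repo_url => <<"https://hex.internal.example.com">>}
  ]}
]}.

With that in place, rebar3 will consult your own instance when resolving and fetching packages.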
HTH.
