I have a Java EE application with a lot of static content: JavaScript, images, CSS and such. Right now we are using the WebLogic plugin for Eclipse to deploy our applications for testing purposes, but it's getting pretty slow and it's only going to get slower. Since we have a lot of JavaScript, we often have to make small changes and test them in quick succession, which is becoming a big headache.
We also want to move away from the WebLogic plugin in Eclipse. We want a way of deploying to a test environment that only copies content changed since the last deploy. We thought about using an Ant script, but every solution I found on the internet involves building an EAR and copying it to the autodeploy folder of the test domain on the server, which would not solve the problem, since generating the EAR would add even more overhead.
Is there any way to make this work?
If you deploy to a local WebLogic environment on the development machine, you can use an exploded deployment (i.e. an unzipped WAR or EAR). All changes to static content and JSPs are then available immediately for testing.
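As a rough sketch of how that can look (the paths below are made up for illustration), a small script can push only the static files that changed since the last sync into the exploded deployment directory:

# Sketch: copy only changed static content into a local exploded deployment.
# SRC and DEPLOY_DIR are placeholder paths; adjust to your workspace and domain.
SRC=~/workspace/myapp/WebContent
DEPLOY_DIR=/opt/weblogic/user_projects/domains/test_domain/autodeploy/myapp
# rsync transfers only files whose size or timestamp differ on the target side
rsync -av --exclude 'WEB-INF/classes' "$SRC/" "$DEPLOY_DIR/"

Because nothing is re-archived, JavaScript or CSS tweaks are visible after a browser refresh (or at most a redeploy of the exploded application).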
I want to set up an incremental build in TFS, as we want to deploy only modified files to the physical path, not the entire codebase.
We want the build to deploy only the files that have changed since the previous deployment. This would reduce build and deployment time, and the developers wouldn't have to wait as long to see their changes deployed.
What you're describing is not an "incremental build". You are describing a much more complex situation than an incremental build.
What you're describing has never been an out-of-the-box option, and is in fact incredibly difficult to do properly, and ultimately would probably not impact things as much as you're hoping, anyway.
First of all, it's actually very difficult to determine the subset of files that have changed between deployments. If you're building and deploying properly, you're making a single build and promoting it along a pipeline of environments. This means that "what's different" at any given time is potentially different for every environment in your pipeline. Ex: DEV has version 5, QA has version 4, and PROD has version 3. So you have to compute your delta against the oldest version still deployed. Build systems have no innate knowledge of "releases", so you'd have to build something into your build and release pipelines to track which source version constitutes the latest code in production.
Let's say you've solved that problem. You now have the ability to retrieve just the delta between what's deployed to production and the commit being built.
If you're working with compiled code, then you still need all of the source code, because you're going to have to rebuild the whole thing. Every assembly is going to get regenerated, and different metadata at compilation time is going to mean those assemblies are different even if the code that constitutes those assemblies is the same. And since assemblies can reference other assemblies, you have no straightforward way of determining at build time which assemblies have actually changed and need to be deployed. So you pretty much have no choice but to deploy all compiled assets every time. Note that this still applies to TypeScript or anything else that goes through a compiler/transpiler process; you need all of the code available, and it has to go through the entire build process.
So at this point, you still have to build your entire application to get the deployable output. Build time hasn't gone down at all. You've managed to bring down just a subset of static content (i.e. HTML pages, images, etc) to be deployed, though. That may have sped your deployments up a bit!
However, if the thing that's making your build and deployment process slow is that you have a ton of non-code-related static content, then you've gone through a very long and convoluted process to arrive at a much simpler solution: Move static content to a CDN and get it out of source control, or have a separate process that manages static content so that it can be deployed independently of unrelated application code.
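For the static-content route in particular, the separate process can be very small. As an illustration only (the bucket name and folder are placeholders, and this assumes the assets are served from a CDN-backed S3 bucket):

# Sketch: publish static assets independently of the compiled application.
# Bucket name and local path are hypothetical.
aws s3 sync ./static s3://my-app-static-assets --delete

The application build then no longer carries the static content at all, which usually helps more than trying to diff deployments.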
You haven't really provided any information that can be used to provide a recommendation on how to proceed, but hopefully this answer is helpful in understanding why what you want to do is not going to solve your problem, unless you are dealing entirely with static content or scripts that don't require building.
I'm developing a private web app in JSF which is available over the internet, and I've now reached a stage where I want to introduce CI (which I'm fairly new to) into the whole process. My current project setup looks like this:
myApp-persistence: maven project that handles DB access (DAOs and hibernate stuff)
myApp-core: maven project, that includes all the Java code (Beans and Utils). It has a dependency on myApp-persistence.jar
myApp-a: maven project just with frontend code (xhtml, css, JS). Has a dependency on myApp-core.jar
myApp-b: maven project just with frontend code (xhtml, css, JS). Has a dependency on myApp-core.jar
myApp-a and myApp-b are independent of each other; they are just different instances of the core for two different platforms and only display certain components differently or call different bean methods.
Currently I'm deploying manually, i.e. I use Eclipse's built-in export-as-WAR function and then upload the WAR by hand to the deployments directory of my WildFly server on prod. I'm using Bitbucket for version control and just recently discovered Bitbucket Pipelines, so I implemented one for each repository (every project is a separate repo). Now myApp-persistence builds perfectly fine because all its dependencies are available from the public Maven repository, but myApp-core (and hence myApp-a and myApp-b, too) of course fails because myApp-persistence isn't published on the central Maven repository.
Is it possible to somehow tell Bitbucket to use the myApp-persistence.jar from the corresponding repo on Bitbucket?
If yes, how? And can I also tell Bitbucket to deploy directly to prod if the build, including tests, runs fine?
If no, what would be a best practice for doing this? I was thinking of using a second dev server (already available, so no big deal) as a CI server, but then I would still need some advice or recommendations on which tools (Jenkins, Artifactory, etc.) to use.
One important note, maybe: I'm the only person working on this project, so this might seem like overkill, but for me the process of setting this up is valuable experience. That said, I'm not necessarily looking for the quickest solution but for the most professional and convenient one.
From my point of view, you can find the solution in this post: https://christiangalsterer.wordpress.com/2015/04/23/continuous-integration-for-pull-requests-with-jenkins-and-stash/. It guides you step by step through setting everything up. The post is from 2015, but the process and idea are still the same. Hope it helps.
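If you would rather keep everything inside Bitbucket Pipelines for now, one workable approach (a sketch, assuming both repos are reachable from the build and the Maven coordinates match your POMs) is to build and install myApp-persistence into the pipeline's local Maven repository before building the dependent project:

# Sketch: make myApp-persistence resolvable for myApp-core within one CI run.
# Repository URL and directory names are assumptions based on the question.
git clone git@bitbucket.org:youruser/myApp-persistence.git
(cd myApp-persistence && mvn -B clean install)   # installs the jar into ~/.m2/repository
mvn -B clean package                             # myApp-core now resolves the dependency locally

A cleaner long-term setup is a private Maven repository (e.g. Artifactory or Nexus) that myApp-persistence is deployed to and the other projects resolve from.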
I'm building an iOS app that communicates with a server. We have a test / staging server, a production server and each dev has a local instance of the server for development.
I've added some simple logic which configures the address of the server depending on whether we're running a TestFlight build, an App Store build or a debug build (for development). For the development build, the app tries to hit localhost, which is all well and good if we're running on the Simulator, but not so great if we're running on device.
I'm aware of ngrok, which is a possible solution, but since the exposed URL is partially randomly generated (for the free version at least), it's not a great fit. I was thinking that a workable approach for development could be to check the name of the development machine at compile time and insert this value. But I'm not sure how to achieve this, or if it's possible at all. I remember doing compile-time variable filtering using Ant/Maven and environment property files back in my Java days, but I'm wondering if there's a fairly straightforward way to achieve this in Xcode.
Can anybody shed any light on this?
So I carried on digging, and went with the following solution. Elements of this have been touched upon in numerous other posts here.
I added a new header file called HostNameMacroHeader.h to my project.
I added a 'Run Script' phase to my build, before the 'Compile Sources' phase. The script contains the following:
echo "//***AUTOGENERATED FILE***" > ${SRCROOT}/MyAppName/HostNameMacroHeader.h
echo "#define BUILD_HOST_NAME #\"`hostname`\"" >> ${SRCROOT}/MyAppName/HostNameMacroHeader.h
Then in my implementation, where I want to use the server address, I use the generated BUILD_HOST_NAME macro.
It's a somewhat hacky solution, but it does the job for now. Suggestions and cleaner versions are welcome.
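One possible cleanup (an untested sketch using the same file name as above) is to rewrite the header only when the hostname actually changes, so a normal build doesn't recompile everything that includes it:

# Sketch: regenerate HostNameMacroHeader.h only when the hostname changes,
# so files that include it are not recompiled on every build.
HEADER="${SRCROOT}/MyAppName/HostNameMacroHeader.h"
NEW_CONTENT="//***AUTOGENERATED FILE***
#define BUILD_HOST_NAME @\"$(hostname)\""
if [ ! -f "$HEADER" ] || [ "$(cat "$HEADER")" != "$NEW_CONTENT" ]; then
  printf '%s\n' "$NEW_CONTENT" > "$HEADER"
fi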
I am working on a Grails app and regularly need to deploy hotfixes to a remote server. I am using Jenkins with the Grails plugin for automation.
My situation is the following:
Most of the time I fix a few classes, with no big changes to the app (such as a new database schema or new plugins). However, each time I create a patch I have to upload a 75 MB WAR file over SSH, which takes 15 to 20 minutes. Most of that data is not needed (i.e. all the packaged JARs). It would be sufficient to upload only the freshly compiled classes from WEB-INF/classes/ and reload the servlet container (in my case Jetty).
Is anybody experienced with this, preferably with Jenkins?
Check the nojars argument for the war task: http://www.grails.org/doc/1.3.7/ref/Command%20Line/war.html
This way you can place all your .jars (which are usually the biggest files inside a .war) in some other directory on the server and just reference that directory in your Jetty classpath.
Or you could write a shell script to explode the .war file (after all it's just a regular .zip file), add the compiled classes and then re-package it.
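A rough sketch of that repackaging approach (file and directory names are illustrative only):

# Sketch: patch an existing WAR with freshly compiled classes instead of rebuilding it.
# myapp.war and target/classes are placeholder names.
mkdir exploded && cd exploded
jar xf ../myapp.war                          # a WAR is just a ZIP archive
cp -r ../target/classes/. WEB-INF/classes/   # drop in the recompiled classes
jar cf ../myapp-patched.war .                # repackage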
You could try using CloudBees to do continuous delivery releases. They also use deltas to upload your changes, and deployments don't affect the user experience at all.
An easy-to-use plugin is available to make the process seamless from within your Grails app and in a Jenkins build. I've written a blog post about how to get it all working easily.
I remember seeing this subject on the mailing list...
http://grails.1312388.n4.nabble.com/Incremental-Deployment-td3066617.html
...they recommend using rsync or xdelta3 to only transfer updated files. Haven't tried it, but it might help you?
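If you go the rsync route, the idea (a sketch with a made-up host and made-up paths) is to keep an exploded copy of the webapp on the server and push only the files that differ, then bounce the container:

# Sketch: transfer only changed files to the remote exploded webapp, then restart Jetty.
# Host name, user and directories are placeholders.
rsync -avz --delete target/myapp-exploded/ deploy@myserver:/opt/jetty/webapps/myapp/
ssh deploy@myserver 'service jetty restart'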
Maybe the Cloud Foundry Micro Cloud is an option; a deployment transfers just the deltas and not the whole WAR file.
I would like to hear best practices or learn how people perform the following task in TFS 2008.
I intend to use TFS to build and store web application projects. Sometimes these projects can contain hundreds of files (*.cs, *.ascx, etc.).
During the lifetime of the website, a small bug will get raised, resulting in, say, a stylesheet change and a change to default.aspx.cs.
On checking these changes in to TFS, an automated build would be triggered (great!). However, for deploying the changes to the target production machine, I only need to deploy, for example:
style.css
default.aspx
MyWebApplications.dll
So my question is: can MSBuild be customized to generate a "code pack" of only the files which require deploying to the production server, based on the changeset which caused the rebuild?
You are probably going to have a hard time getting MSBuild itself to do this; the ideal tool for your situation is the Web Deployment Tool, a.k.a. MSDeploy. You can point it at the target website and it will determine which files have changed and deploy only those. You can also customize the deployment and do a whole lot more. It's a really great tool.
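As a rough illustration (site paths, server name, and the local build output folder are placeholders), a sync of only the changed files can look like this:

# Sketch: let MSDeploy sync only changed files from the build output to the production site.
# Paths and server name are placeholders; -whatif previews the changes without applying them.
msdeploy -verb:sync -source:contentPath="C:\build\MyWebApplication" -dest:contentPath="C:\inetpub\wwwroot\MyWebApplication",computerName=ProdServer -whatif

Drop -whatif once the preview shows only the files you expect (style.css, default.aspx, MyWebApplications.dll in your example).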