Grails run server

I successfully managed to complete my new Grails website and now I want to run it on my machine so I can access it from everywhere. I would like to know the best approach to do that; I'm a bit of a noob when it comes to servers and connections. Any help would be appreciated.
Jm

You need a Java servlet container. The quickest and easiest approach is probably to use Jetty on a Linux system -- most distributions let you install it directly from their package manager. This allows you to run a deployed system from a "war" file. Basically, you would do something like:
grails war
to build the war file, and then copy it (it'll be created in the target directory, with a name based on your application name and version) to the webapps directory for Jetty. On my (Ubuntu) system, that's /var/lib/jetty/webapps/. If you name it root.war, Jetty will serve URLs from the root of the server configuration (typically on port 8080), so http://myhost.mydomain:8080/ will be passed to your Grails application. You'll then need to restart Jetty to pick up the new root.war file.
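Putting those steps together (the war file name is only an example; yours will depend on your application's name and version):
grails war
sudo cp target/myapp-0.1.war /var/lib/jetty/webapps/root.war
sudo /etc/init.d/jetty restart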
Jetty is not the most flexible or powerful servlet container, but it is probably the quickest way to get something up and running, and you can explore more powerful alternatives (Tomcat being an obvious one) as you have time.

Unless you need persistent storage in your app (e.g. end users uploading pictures), you can use Cloud Foundry. It's free for now and makes deployment very easy.
http://blog.springsource.com/2011/04/12/one-step-deployment-with-grails-and-cloud-foundry/
I've been using it via STS and it works really well. There's just one really annoying thing: STS ends up doing a full war build with every change.
There's a cloud foundry plugin that gives you command line deployment.
http://www.grails.org/plugin/cloud-foundry
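Once the plugin is installed and configured with your Cloud Foundry credentials, deployment is a single command (cf-push per the plugin's documentation; see the plugin page above for the full command list):
grails cf-push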
Or there's a Cloud Foundry UI plugin that "provides a simple monitoring UI for your applications deployed to Cloud Foundry. It uses the same API as the Cloud Foundry plugin but is more convenient since it runs in a web browser."
http://www.grails.org/plugin/cloud-foundry-ui

Related

How to use TestCafe-Cucumber Node.js project in DevOps deployments

I have a test framework running locally (and in git) that is based on the TestCafe-Cucumber (Node.js) example https://github.com/rquellh/testcafe-cucumber, and it works really well.
Now, I am trying to use this framework in the deployment (post-deployment) cycle by hosting it as a service or creating a docker container.
The framework executes through the CLI command (npm test) with a few parameters.
I know the easiest way is to call the git repo directly as and when required by adding a Jenkins step; however, that is not the solution I am looking for.
So far, I have successfully built the docker image, and the container now runs on my localhost port 8085 as http://0.0.0.0:8085 (although I just get a DNS/server error there, as it's not a web app - please correct me if I am wrong here).
The concern here is: how can I make it work like a hosted app, so that once the deployment completes, Jenkins/Octopus could call it as a service through the URL (http://0.0.0.0:8085), along with the few parameters the framework uses to execute the test cases?
I would be grateful for any solution the experts here can suggest.
I guess there is no production-ready application or service to solve this task.
However, you can use a REST framework to handle network requests and subprocesses to start test sessions. If you like Node.js, you can start with the Express framework and the execa module.
This way you can build a basic service that can start your tests. If you need a more flexible solution, you can take a look at gherkin-testcafe, which provides access to TestCafe's API. You can use it instead of starting TestCafe as a subprocess; this way you will have more options to manage your test sessions.
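As a rough, untested sketch of that idea (the route name, port, and parameter format here are arbitrary choices, not anything TestCafe or Jenkins prescribes):

// test-service.js - a minimal endpoint that launches a test run on demand
const express = require('express');
const execa = require('execa'); // execa v5-style CommonJS import

const app = express();
app.use(express.json());

app.post('/run-tests', async (req, res) => {
  try {
    // forward any caller-supplied arguments to the npm test script
    const args = ['test', '--', ...(req.body.args || [])];
    const { stdout } = await execa('npm', args);
    res.status(200).send(stdout);
  } catch (err) {
    // execa rejects on a non-zero exit code, i.e. failing tests land here
    res.status(500).send(err.stdout || err.message);
  }
});

app.listen(8085, () => console.log('test runner listening on 8085'));

Jenkins or Octopus could then trigger a run with a plain HTTP call such as curl -X POST http://<host>:8085/run-tests -H "Content-Type: application/json" -d '{"args":["<your-test-parameters>"]}'.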

Is it wise to delete the default webapps from a Tomcat-based docker image?

I am containerizing an older Java web application with Docker. My Dockerfile pulls an official Tomcat image from Docker Hub (specifically, tomcat:8.5.49-jdk8-openjdk), copies my .WAR file into the webapps/ directory, and copies in some idiosyncratic configuration files and dependencies. It works.
Now I know that Tomcat comes out-of-the-box with a few directories under webapps/, including the "manager" app, and some others: ROOT, docs, examples, host-manager. I'm thinking I ought to delete these, lest one of my users access them, which might be a security risk and is unprofessional at the least.
Is it a best practice to delete those installed-by-default web apps from an official Tomcat image? Is there any downside to doing so? It seems logical to me, but a web search didn't turn up any expert opinion either way.
Every folder under webapps represents a discrete web application contained within the Tomcat servlet container after server startup and deployment.
None of those web applications has any implicit or explicit coupling to Catalina, Jasper, or any other internal component of Tomcat.
You should be quite OK to remove all those folders (apps), unless you need the Manager application to manage your deployments and the server. Even that can be installed again later on.
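In Dockerfile terms that is one extra line before copying your own application in (the tag matches the question; the war name is a placeholder):
FROM tomcat:8.5.49-jdk8-openjdk
# drop ROOT, docs, examples, manager and host-manager in one go
RUN rm -rf /usr/local/tomcat/webapps/*
# deploy your app at the root context
COPY your-app.war /usr/local/tomcat/webapps/ROOT.war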

What is the benefit of dockerizing an SPA web app?

I dockerized my SPA web app by using nginx as the base image and then copying in my nginx.conf and build files. As Dockerize Vue.js App mentions, I think many SPA dockerizing solutions are similar.
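Concretely, the Dockerfile is just something like this (dist/ is wherever the build output lands; the config could equally go to /etc/nginx/conf.d/default.conf if it's only a server block):
FROM nginx:1.17
# replace the default config and serve the built files
COPY nginx.conf /etc/nginx/nginx.conf
COPY dist/ /usr/share/nginx/html/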
If I don't use Docker, I first build the SPA code and then copy the build files to the nginx root directory (after installing and setting up nginx, I barely change it at all).
So what's the benefit of dockerizing an SPA?
----- update -----
One answer said, "If the app is dockerized, then each time you release a new version of your app, the Nginx server gets all the new updates available for it." I don't agree with that at all. I don't need the latest version of nginx; after all, I only use the basic features of nginx. Some of my team members just use the nginx version bundled with Linux when doing development. If my Docker image uses the latest nginx, it actually creates an environment different from the development environment.
I realize my question will probably be closed because it will be seen as opinion-based. But I have googled this and can't find a satisfying answer.
If I don't use Docker, I first build the SPA code and then copy the build files to the nginx root directory (after installing and setting up nginx, I barely change it at all)
This is a security concern... "fire and forget" seems to be what is being done here regarding the server.
If the app is dockerized, then each time you release a new version of your app, the Nginx server gets all the new updates available for it.
Bear in mind that if your app does not release new versions on a weekly basis, then you need to consider rebuilding the docker images at least weekly anyway, in order to get the updates and keep everything up to date with the latest security patches.
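A scheduled weekly rebuild can be as simple as re-running the build with the base image refreshed (both flags are standard docker build options; the image tag is a placeholder):
# --pull re-downloads the base image so the rebuild picks up its latest patches
docker build --pull --no-cache -t my-spa:latest .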
So what's the benefit of dockerizing an SPA?
The same environment across development, staging, and production. This is called 100% parity across all the stages where you run your app, and it holds true no matter what type of application you deploy.
If something doesn't work in production, you can pull the docker image by its digest and run it locally to debug and try to understand where the problem is. If you need to ssh into a production server, it means your automation pipeline has failed, or maybe you are not even using one...
Tools like Webpack compile Javascript applications to static files that can then be served with your choice of HTTP server. Once you’ve built your SPA, the built files are indistinguishable from pages like index.html and other assets like image files: they’re just static files that get served by some HTTP server.
A Docker container encapsulates a single running process. It doesn’t really do a good job at containing these static files per se.
You’ll frequently see “SPA Docker containers” that run a developer-oriented HTTP server. There’s no particular benefit to doing this, though. You can get an equally good developer experience just by developing your application locally, running npm run build or whatever to create a dist directory, and then publishing that the same way you’d publish other assets. An automation pipeline is helpful here, but this isn’t a task Docker makes wildly simpler.
(Also remember when you do this that the built application runs on the user’s browser. That means it can’t see any of the Docker-internal networking machinery: it can’t use the Docker-internal IP addresses and it can’t use the built-in Docker DNS service. Everything it reaches has to be on docker run -p published ports and it has to use a DNS name that reaches the host. The browser literally has no idea Docker is involved in this at all.)
There are a few benefits.
Firstly, building a Docker image means you are explicitly stating what your application's canonical run-time is - this version of nginx, with that SSL configuration, whatever. Changes to the run-time are in source control, so you can upgrade predictably and reversibly. You say you don't want "the latest version" - but what if that latest version patches a critical security vulnerability? Being able to upgrade predictably, on "disposable" containers means you upgrade when you want to.
Secondly, if the entire development team uses the same Docker image, you avoid the challenges with different configurations giving the "it works on my machine" response to bugs - in SPAs, different configurations of nginx can lead to different behaviour. New developers who join the team don't have to install or configure anything, and can use any device they want - they can be certain that what runs in Docker is the same as it is for all the other developers.
Thirdly, by having all your environments containerized (not just development, but test and production), you make it easy to move versions through the pipeline and only change the environment-specific values.
Now, for an SPA, these benefits are real, but they may not outweigh the cost and effort of creating and maintaining Docker images - inevitably, the Docker image becomes a bottleneck and the first thing people blame. I'd only invest in it if you see lots of environment-specific pain (suggesting a consistent run-time environment is necessary), or lots of "it works on my machine" bugs.

Dockerizing a meteor app

So, the idea is to dockerize an existing meteor app from 2015. The app is divided into two (backend and frontend). I already made a huge bash script to handle all the older dependencies (software dependencies, etc.); I just need to run the script and the app is up. But the idea now is to create a docker image for that project. How should I achieve this? Should I start from an empty docker image and run my script there? Thanks. I'm new to docker.
A bit more info about the stack, the script, and the dependencies would be helpful.
Assuming that this app is not in development, you can simply use e.g. an nginx image and give it the frontend files to serve.
For the backend there is a huge variety of base images (php, node, etc.).
The Dockerfile of your backend image should contain the installation and setup of dependencies (except for other services like databases; there are separate images for those).
To keep things simple, you should try out docker-compose to configure your containers to act as a whole service (and save yourself some configuration).
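A minimal docker-compose.yml for a frontend/backend split like yours might look something like this (the images, paths, and ports are placeholders for your actual stack):
version: "3"
services:
  frontend:
    image: nginx:1.17
    volumes:
      - ./frontend/dist:/usr/share/nginx/html:ro
    ports:
      - "80:80"
  backend:
    build: ./backend   # Dockerfile that installs and sets up your dependencies
    ports:
      - "3000:3000"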
Later, to scale things up, you could look at orchestration tools such as kubernetes. But I assume this service is not there yet (based on your question). :)

Restart a process inside a Docker container whenever the config file changes

I have a Dockerfile that starts 2 processes in a single docker container, each from a jar file with a config file as an argument:
java -jar process1.jar process1.cfg &
java -jar process2.jar process2.cfg
process1.cfg and process2.cfg reside in mounted directories. Now, whenever there is a change in either of the cfg files, I need to restart the corresponding process for the change to take effect. All of this is to be done programmatically, using Java, in a REST microservice that updates the config file and restarts the process. Any idea how to go about it?
The problem can be solved generically by having your Java app start a config-change monitoring service/thread, which manages the actual business service/thread(s): starting it at the beginning and restarting it on any change (if the change actually needs a restart). File change monitoring is standard Java functionality. The solution does not need any REST, is not bound to a microservice architecture (although it makes more sense within one), and is not limited to (or by) docker containers.
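For illustration, a bare-bones version of that monitoring thread using the standard java.nio.file.WatchService could look like this (the restart hook is a placeholder for however you stop and relaunch the managed process):

import java.io.IOException;
import java.nio.file.*;
import static java.nio.file.StandardWatchEventKinds.ENTRY_MODIFY;

public class ConfigWatcher implements Runnable {
    private final Path configDir;       // the mounted directory holding the .cfg files
    private final Runnable restartHook; // placeholder: stop and relaunch the process

    public ConfigWatcher(Path configDir, Runnable restartHook) {
        this.configDir = configDir;
        this.restartHook = restartHook;
    }

    @Override
    public void run() {
        try (WatchService watcher = FileSystems.getDefault().newWatchService()) {
            configDir.register(watcher, ENTRY_MODIFY);
            while (true) {
                WatchKey key = watcher.take(); // blocks until something changes
                for (WatchEvent<?> event : key.pollEvents()) {
                    Path changed = (Path) event.context();
                    if (changed.toString().endsWith(".cfg")) {
                        restartHook.run(); // restart only when a config actually changed
                    }
                }
                if (!key.reset()) {
                    break; // the watched directory is no longer accessible
                }
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

One caveat: file-change notifications on some mounted or network filesystems can be unreliable, in which case polling the files' last-modified timestamps is the usual fallback.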
If you do not want any file-based configs, do the same, but make the monitoring bit e.g. a vert.x-based web server listening for external REST requests that supply configs, on start or for any update. The rest remains the same.
In my current workplace we actually have a module that works exactly this way: it is deployed in a docker container and uses both file system monitoring and a vert.x web server for config changes.
You can even go further and make the monitoring bit start multiple instances internally if multiple configs need to be supported.
