Gulp-rev on multiple servers - ruby-on-rails

We have a number of production servers and recently introduced gulp for building the frontend of our app, moving away from the Rails asset pipeline.
So far so good, except that the Rails asset pipeline used to keep the cache-busting filenames in sync between the servers behind the load balancer, and that worked nicely. That's not the case with gulp-rev, which runs the build on each individual server and can end up assigning different hashes to the files on each one, causing problems with load balancing (sometimes files fail to load because the browser requests a hashed filename that exists on server 1 but not on the server the request lands on).
What is the best way to solve this?
One suggested way is to compile or build via gulp once and then push the output to each server (perhaps Jenkins could help), but we deploy in a pretty old-school way with Chef.
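For illustration, here is a rough sketch of that build-once-and-distribute approach, assuming a single CI/build box with SSH access to the app servers (hostnames, paths, and the gulp task name are placeholders):

    # Build the revisioned assets exactly once on the build box
    npx gulp build                      # assumes your gulpfile's build task runs gulp-rev

    # Push the identical output (including rev-manifest.json) to every server
    for host in web1.example.com web2.example.com; do
      rsync -az --delete dist/ deploy@"$host":/var/www/app/current/public/assets/
    done

Because every server receives the same files and the same rev-manifest.json, the hashed filenames handed out by any server also exist on whichever server the load balancer picks next.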

Related

Deploying Data with containers across multiple environments

I wondered if I could pick your brains regarding deploying data across environments with containers... I have a hard time understanding the "standard" deployment process.
Let me set the scene:
You're running a CMS on your local development environment and you have two containers - a database container that runs MySQL and links to a volume (that stores the data), and the CMS container that links to it. All fine.
Now, I can deploy through GitLab/GitHub CI/CD by pushing any code changes for the CMS container to the repo, which rebuilds the image and pushes it to a 'Staging' environment (via a container registry). That's fine too.
Here's the issue - what's the standard accepted way of deploying the DATA in the database to the 'Staging' environment? Particularly through CI/CD?
I understand that using a managed cloud database is best practice for staging/production, which again is totally fine for me to do, but short of writing some Bash scripts to sqldump and rsync the data, I can't see how people do it with standard CI/CD pipelines.
Am I missing something here?
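For what it's worth, the Bash approach described in the question is roughly what many pipelines end up doing, just wrapped in a CI job. A rough sketch, assuming a MySQL source database and hypothetical host/credential variables supplied by the CI environment:

    # Dump the database from the source environment
    mysqldump -h "$SRC_DB_HOST" -u "$SRC_DB_USER" -p"$SRC_DB_PASS" "$DB_NAME" > dump.sql

    # Load it into the staging database (container or cloud DB)
    mysql -h "$STAGING_DB_HOST" -u "$STAGING_DB_USER" -p"$STAGING_DB_PASS" "$DB_NAME" < dump.sql

Whether this runs as a pipeline stage or a scheduled job is mostly a question of how fresh the staging data needs to be.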

What is the benefit of dockerizing an SPA web app

I dockerize my SPA web app by using nginx as the base image and then copying in my nginx.conf and build files. As the Dockerize Vue.js App guide describes, I think many solutions for dockerizing SPAs are similar.
If I don't use Docker, I first build the SPA code and then copy the build files to the nginx root directory (after installing and setting up nginx, I barely change it at all).
So what's the benefit of dockerizing an SPA?
----- update -----
One answer said "If the app is dockerized, each time you release a new version of your app the nginx server gets all the new updates available for it." I don't agree with that at all. I don't need the latest version of nginx; after all, I only use its basic features. Some of my team members just use the nginx version bundled with Linux when developing. If my Docker image uses the latest nginx, it actually creates a different environment from the development environment.
I realize my question will probably be closed because it will be seen as opinion-based, but I have googled it and can't find a satisfying answer.
If I don't use Docker, I first build the SPA code and then copy the build files to the nginx root directory (after installing and setting up nginx, I barely change it at all).
This is a security concern... it sounds like the server is being treated as fire-and-forget.
If the app is dockerized, each time you release a new version of your app the nginx server gets all the new updates available for it.
Bear in mind that if your app does not release new versions on a weekly basis, you should consider rebuilding the Docker images at least weekly anyway, in order to pick up updates and keep everything current with the latest security patches.
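A minimal sketch of such a scheduled rebuild, assuming a CI cron job and a placeholder image name:

    # --pull forces a fresh copy of the base image and --no-cache rebuilds every layer,
    # so the latest base-image security patches end up in the new image
    docker build --pull --no-cache -t registry.example.com/my-spa:latest .
    docker push registry.example.com/my-spa:latest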
So what's the benefit of dockerizing an SPA?
The same environment across development, staging and production. This is called 100% parity across all the stages where you run your app, and it holds no matter what type of application you deploy.
If something doesn't work in production, you can pull the Docker image by its digest and run it locally to debug and try to understand where the problem is. If you need to SSH into a production server, it means your automation pipeline has failed, or maybe you aren't even using one...
Tools like Webpack compile Javascript applications to static files that can then be served with your choice of HTTP server. Once you’ve built your SPA, the built files are indistinguishable from pages like index.html and other assets like image files: they’re just static files that get served by some HTTP server.
A Docker container encapsulates a single running process. It doesn’t really do a good job at containing these static files per se.
You’ll frequently see “SPA Docker containers” that run a developer-oriented HTTP server. There’s no particular benefit to doing this, though. You can get an equally good developer experience just by developing your application locally, running npm run build or whatever to create a dist directory, and then publishing that the same way you’d publish other assets. An automation pipeline is helpful here, but this isn’t a task Docker makes wildly simpler.
(Also remember when you do this that the built application runs on the user’s browser. That means it can’t see any of the Docker-internal networking machinery: it can’t use the Docker-internal IP addresses and it can’t use the built-in Docker DNS service. Everything it reaches has to be on docker run -p published ports and it has to use a DNS name that reaches the host. The browser literally has no idea Docker is involved in this at all.)
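To make the comparison concrete, here is a sketch of the two publishing paths side by side, with placeholder hostnames, paths, and image names:

    # Without Docker: build the SPA, then publish the static files to the nginx root
    npm run build
    rsync -az dist/ deploy@web.example.com:/usr/share/nginx/html/

    # With Docker: bake the same static files into an nginx image and run it
    docker build -t registry.example.com/my-spa:1.2.3 .
    docker run -d -p 8080:80 registry.example.com/my-spa:1.2.3

Either way the browser only ever sees static files served over HTTP; the difference is purely in how those files get onto the server.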
There are a few benefits.
Firstly, building a Docker image means you are explicitly stating what your application's canonical run-time is - this version of nginx, with that SSL configuration, whatever. Changes to the run-time are in source control, so you can upgrade predictably and reversibly. You say you don't want "the latest version" - but what if that latest version patches a critical security vulnerability? Being able to upgrade predictably, on "disposable" containers means you upgrade when you want to.
Secondly, if the entire development team uses the same Docker image, you avoid the challenges with different configurations giving the "it works on my machine" response to bugs - in SPAs, different configurations of nginx can lead to different behaviour. New developers who join the team don't have to install or configure anything, and can use any device they want - they can be certain that what runs in Docker is the same as it is for all the other developers.
Thirdly, by having all your environments containerized (not just development, but test and production), you make it easy to move versions through the pipeline and only change the environment-specific values.
Now, for an SPA, these benefits are real, but may not outweigh the cost and effort of creating and maintaining Docker images - inevitably, the Docker image becomes a bottleneck and the first thing people blame. I'd only invest in it if you see lots of environment-specific pain (suggesting having a consistent run-time environment is necessary), or if you see lots of "it works on my machine" type of bug.

How to get an XML file via HTTP when Jenkins runs Codeception

Our developers develop in Virtualbox VMs which have a web server configured to point at the codebase they are working on. When they run Codeception tests, the code calls the local web server requesting a data file from within the codebase and proceeds with the test. Works well.
The problem I'm running into is running the tests on our Jenkins server (using a Declarative Pipeline, if that matters). Jenkins checks the code out into a temporary directory (/var/lib/jenkins/workspace/). Obviously I can't point the web server on the Jenkins box at that temp directory (especially when I'm building different branches in different temporary directories), so the tests fail when trying to fetch the data file.
The reason we fetch the data file via HTTP is that the code we are testing does precisely that in production: it fetches files from another web server not under our control, and we want to mimic that behaviour in our tests.
How should this be handled? Copy the data files to a common directory outside of Jenkins that the web server points to? If so, how do I handle different developers using the same data file with different values? Or do I just tell them "Don't do that?"
Any suggestions?
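One way this is sometimes handled is to serve the checked-out workspace itself over HTTP for the duration of the test stage, so every branch build uses its own copy of the data files. A rough sketch of the shell step such a pipeline stage might run (the port and the URL variable are placeholders, and it assumes the data-file base URL is configurable in your Codeception suite):

    # Serve the workspace on a local port only while the tests run
    cd "$WORKSPACE"
    python3 -m http.server 8000 &
    SERVER_PID=$!

    # Point the tests at the temporary server instead of the developer VM's web server
    TEST_DATA_URL=http://localhost:8000 vendor/bin/codecept run

    kill "$SERVER_PID"

Because the server is started per build, different branches (and different developers' versions of the data files) never collide.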

Deploy two applications on the same domain on Heroku

I have a back-end API deployed on Heroku at
mydomain.com
The front end is an AngularJS application; I want to host it at the same URL so that I avoid the CORS restriction.
Is that possible?
The easiest ways to solve this:
1. Using Multiple Buildpacks on Heroku together with buildpack-nginx, you can run an nginx instance in your dynos that serves your static files and also proxies requests to your backend (unicorn) processes. The frontend code has to live in the same repo as the backend code, or (as an alternative) be pulled in from a different repo during the build; see the sketch after this list.
2. Similar to the first solution, but without nginx: possible if you get Ruby/unicorn to serve your static JS files too.
3. Use Heroku's Docker Support to build your own app image and deploy it.
4. All of the above combined :)
Any of these will most likely also involve adding the nodejs buildpack to set up a proper build pipeline.
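A hedged sketch of the first option's setup, assuming the community nginx buildpack (buildpack names and the Procfile command may differ for your stack):

    # Stack the buildpacks: nginx in front, nodejs to build the AngularJS frontend, ruby for the API
    heroku buildpacks:add --index 1 heroku-community/nginx
    heroku buildpacks:add heroku/nodejs
    heroku buildpacks:add heroku/ruby

    # Procfile: start nginx alongside the app server, e.g.
    # web: bin/start-nginx bundle exec unicorn -c config/unicorn.rb

nginx then serves the built frontend as static files and proxies /api (or whatever prefix you choose) to unicorn, so everything lives under mydomain.com and no CORS headers are needed.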

Efficient rails deployment to a small EC2 instance

I've got a few Rails apps running under different vhosts on a single small EC2 instance. My automated deployment process for each involves running some rake tasks (migration, asset compilation, etc.), staging everything into a versioned directory, and symlinking the web root to it. I'm serving the apps with Apache + Passenger. During this process (and the subsequent restart of Passenger), I have Ruby processes eating up 100% of the CPU. I understand why this is happening, but I need a way to throttle these processes so that the other apps on the instance aren't impacted as significantly as they currently are.
Don't know if you've already come across this, but Rubber exists to make EC2 deployment more convenient: https://github.com/wr0ngway/rubber
There is also a Railscast on it: http://railscasts.com/episodes/347-rubber-and-amazon-ec2
Hopefully these two resources will help you.
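Separately, if the immediate pain is the deploy-time spike itself, a low-tech option is to run the heavy rake tasks at reduced CPU and IO priority so Passenger and the other apps keep getting scheduled. A sketch, assuming a typical versioned-directory deploy step:

    # Run migrations and asset compilation at the lowest CPU (nice 19) and idle IO priority
    nice -n 19 ionice -c 3 bundle exec rake db:migrate
    nice -n 19 ionice -c 3 bundle exec rake assets:precompile

This doesn't make the build any faster, but it stops it from starving the apps that are already serving traffic.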
