I have a Kubernetes cluster on DigitalOcean where www.example.com points to my Next.js application. This works as expected; however, it serves all the assets from the same pod where my Next.js application is running:
https://www.example.com/_next/static/chunks/webpack-16e102404d6e7c36f3ae.js
https://www.example.com/_next/static/chunks/framework-852c1b21255b4351ab3d.js
and so on.
Now I would like to serve these static files from a CDN instead, so I started researching how to do this. One thing I found is that DigitalOcean offers CDN functionality through their Spaces; another is the Next.js documentation here: https://nextjs.org/docs/api-reference/next.config.js/cdn-support-with-asset-prefix
So I set up a DigitalOcean Space, which is now available through https://cdn.example.com, and I also followed the instructions from Next.js and modified my next.config.js file like this:
const isProd = process.env.NODE_ENV === 'production';

module.exports = {
  // Use the CDN in production and localhost for development.
  assetPrefix: isProd ? 'https://cdn.example.com' : '',
  future: {
    webpack5: true,
  },
};
And deployed it. But of course, this doesn't work - the files that are generated during the build stage never get uploaded to my CDN. So now when I open my site it doesn't load any of the static files, because these URLs don't exist:
https://cdn.example.com/_next/static/chunks/webpack-16e102404d6e7c36f3ae.js
https://cdn.example.com/_next/static/chunks/framework-852c1b21255b4351ab3d.js
So now my question is: how do I set this up? As I understand it, there are two possible ways to do this.
1: Configure my DigitalOcean Space to point at the _next folder of my pod; the first request would then still be served from my pod, but every request after that would be served from the CDN.
2: During the build phase of my deployment, upload the generated files to the CDN's _next folder.
And here is where I am stuck - I have no idea how to do either. For option 1, I tried finding such a setting inside DigitalOcean but couldn't find anything.
For option 2, this is my current workflow:
I make changes to the code
I commit the changes to GitHub
GitHub Actions is configured so that it will automatically build a new Docker image
GitHub Actions then pushes this new Docker image to my registry
GitHub Actions then updates my Kubernetes cluster, telling it to use this new image for my Next.js application
If I have to change this workflow to upload files to the CDN, where would I do it? My Dockerfile is a multi-stage file (3 stages), and only the 2nd stage runs the build command.
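For illustration, this is roughly the kind of extra GitHub Actions step I imagine for option 2, running after the image is built; the image name, Space name, and endpoint are placeholders, and it assumes the compiled assets end up at /app/.next/static inside the image:

- name: Upload static assets to the Space
  run: |
    # create a throwaway container so the compiled assets can be copied out of the image
    docker create --name assets registry.example.com/my-next-app:latest
    docker cp assets:/app/.next/static ./static
    docker rm assets
    # Spaces speak the S3 protocol, so the AWS CLI works against a custom endpoint;
    # the files must land under _next/static to match the assetPrefix URLs
    aws s3 sync ./static s3://my-space/_next/static --endpoint-url https://ams3.digitaloceanspaces.com --acl public-read
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.SPACES_KEY }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.SPACES_SECRET }}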
Related
I have a directory under the /public folder where, using CarrierWave, I store all my PDF files. The problem is that every time I deploy a new version of my Rails app this directory gets cleaned up and all the files go missing. The directory is set in my uploader.
I also have a directory named "private" which I created manually in order not to serve sensitive files publicly on the web. Those files are also gone after each new deployment.
How can I prevent these files from being deleted during the deploy process?
Thanks.
I assume you are using some automation for deployment, because if you were updating the code on your server instance manually you could preserve previously uploaded files; but updating code manually is not good practice.
With automated deployment we generally follow this kind of flow:
Every deploy creates a new release version and sets it as the current version.
Put simply, that means a new directory is created and your Rails project is placed into it. The files you stored inside the project directory still exist in the previous release, so they are not gone if you are on a Linux instance (but only if you have set things up to preserve the last few releases, so you can restore them in case a new deploy blows up).
Clear so far?
Now suppose you are not keeping any previous releases: your files are gone forever.
So it's not a good idea to store any uploaded files inside the project repository.
Best practice is to use a bucket system like an AWS S3 bucket or a Google Cloud Storage bucket to store all uploaded files. If a bucket is not in the budget, you can choose a directory on the Linux server instance outside of the project directory, but then you have to set up all the upload paths and directory structure yourself so it can be used like a bucket.
The problem I was facing happens because of Capistrano. Every time I run the cap production deploy command, the Capistrano deployment tool syncs every file with the git repo. The files added by end users are of course not stored in my git repo, so Capistrano was overwriting the server's public folder with the empty one from my repo. Adding the paths to the :linked_dirs variable in deploy.rb solved my problem, as sketched below.
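For example, assuming the CarrierWave uploads live in public/uploads and the sensitive files in a top-level private directory (adjust the names to wherever your files actually live), the deploy.rb entries could look like this:

# config/deploy.rb -- directory names are illustrative
append :linked_dirs, 'public/uploads', 'private'

Capistrano keeps each of these directories under shared/ and symlinks them into every new release, so their contents survive deploys.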
Another approach is to store all your files in a directory somewhere outside your project root path (such as /home/files). By doing this you separate your files from the project and also avoid Capistrano's overwriting problem.
Hope this information will be useful for someone or future me :) ..
When you deploy with Capistrano, a new release (folder) is created from the repository.
Any files not in the repository are not carried over.
If you want to persist files in public, you need to create a directory on your server first and then have Capistrano create a symlink inside public to that folder.
Then have your CarrierWave uploads saved to that location.
During each deployment cap will symlink to that directory and your files will be there.
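A minimal sketch of the CarrierWave side, assuming public/uploads is the folder you symlink; the uploader name is illustrative, and this store_dir is just CarrierWave's default, resolved relative to public/ when using file storage:

class PdfUploader < CarrierWave::Uploader::Base
  storage :file

  # resolved relative to public/, so files land inside the symlinked uploads dir
  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end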
I have the following (abstract) project structure:
src/brands
src/admin
src/home
Brands and admin are pure Vue projects; home is a Nuxt project. I'm trying to get the brands and admin projects to run on their own subdomains (brands.website.com and admin.website.com, respectively), and home on the main domain. The deploy to production/staging happens via Docker (with an nginx image), and I was thinking of just copying an nginx config file from my project into the Docker image to point each subdomain at the correct html file in the dist folder (I'm not sure how yet; I need to research that first).
For development I used vue.config.js (since I'm using v3 of the vue cli), and I have set up the following:
index: {
  entry: 'src/index/main.js',
  filename: 'index.html',
},
brands: {
  entry: 'src/brands/main.js',
  filename: 'brands/index.html',
},
admin: {
  entry: 'src/admin/main.js',
  filename: 'admin/index.html',
},
I can reach the brands module via localhost:8080/brands, the admin module via localhost:8080/admin, and the homepage via localhost:8080. The problem is that my index page will also have a /brands route, which would probably clash with the brands module's route (or vice versa). So my question is: is there a better way of doing this (for example enabling subdomains in Vue / on localhost)? And if not, is copying the nginx config into my Docker image, roughly as sketched below, a good practice or not?
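The kind of nginx config I have in mind would map each subdomain to its own dist output, something like this (server names and paths are placeholders I haven't verified yet):

server {
  server_name brands.website.com;
  root /usr/share/nginx/html/brands;
  # single-page-app fallback to the module's entry html
  location / { try_files $uri $uri/ /index.html; }
}

server {
  server_name admin.website.com;
  root /usr/share/nginx/html/admin;
  location / { try_files $uri $uri/ /index.html; }
}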
Thanks in advance!
I have a similar project architecture. I have a single repo with multiple vue/nuxt projects. Each of my projects is its own npm/webpack project and is accessed by subdomain when developing locally.
Based on your example, this is how I would setup the projects.
Modify your hosts file:
127.0.0.1 website.localhost brands.website.localhost admin.website.localhost
Using localhost as the TLD was my personal decision; feel free to name the domains any way you like.
Configure the webpack dev server to serve each project at the corresponding subdomain + port (a config sketch follows the list):
src/brands: https://brands.website.localhost:8080
src/admin: https://admin.website.localhost:8081
src/home: https://website.localhost:8082
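A minimal sketch of the dev server part of vue.config.js for one of the projects; vue-cli 3 passes these devServer options through to webpack-dev-server, and the values shown are just the ones from the list above:

// vue.config.js in the brands project -- repeat per project with its own subdomain/port
module.exports = {
  devServer: {
    host: 'brands.website.localhost',
    port: 8080,
    // accept requests addressed to the custom dev domain
    allowedHosts: ['brands.website.localhost'],
    // https: true, // if you want the https:// URLs shown above
  },
};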
What's nice about this configuration is that your dev URLs match your production URLs: https://brands.website.localhost:8080 -> https://brands.website.com
Each project has complete control over its domain's subpaths and won't clobber other projects' routes, which you alluded to with the /brands route.
I have a website that I am developing locally and pushing to RH OpenShift with PHP-5.4 and MySQL cartridges.
Most URLs work, but I am having an issue with some page URLs being recognised locally but returning a 404 on OpenShift.
Example: in development the following URL works: local.development.local/public/reset.php, but visiting the OpenShift URL example.rhcloud.com/public/reset.php returns a 404. However, example.rhcloud.com/reset.php works, even though reset.php is inside the public folder.
The folder structure is exactly the same in development as it is within the OpenShift repo folder.
Is there a specific setting I need to set in my OpenShift environment to get it to recognize the URL?
The problem was due to the directory structure I had given my website. Since I had placed the majority of my code within a folder named public, OpenShift was using that as the DocumentRoot. This wasn't the behaviour I wanted, so I renamed the folder to app and updated my URLs, which resolved the issue. A blog post describing the OpenShift DocumentRoot logic can be found here: https://blog.openshift.com/openshift-online-march-2014-release-blog/
I've always deployed my apps through SSH by manually logging in and running git pull origin master, running migrations and pre-compiling assets.
Now I've started to get more interested in Capistrano, so I gave it a try: I set up a recipe with the repository pointing to GitHub and deploy_to set to /home/myusername/apps/greatapp.
The current app on the server is already hooked up with Git too, so I didn't see why I had to specify the GitHub URL in the recipe again, but I ran cap deploy, which was successful.
The changes didn't apply, so out of curiosity I browsed to the app folder on the server and found that Capistrano had created the folders shared, releases, and current. The latter contained the app, so now I have two copies: one in /home/myusername/apps/greatapp and another in /home/myusername/apps/greatapp/current.
Is this how it should be? Do I have to migrate user uploads to current and destroy the old app?
Does Capistrano pull the repo on my localhost and then upload it through SSH, or does it run the pull on the server? In other words, can someone outline how the deployment works?
Does Capistrano run assets:precompile?
/releases/ is for previous versions, in case you want to do cap deploy:rollback.
/current/ as you rightly pointed out is for the current version of your app.
/shared/ is for files and folders that you want to persist between deployments, they typically get symlinked to your /current/ folder as part of your recipe.
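So after a couple of deploys, the app folder typically looks something like this (the timestamped release names are illustrative):

/home/myusername/apps/greatapp/
├── current -> releases/20130101120000/   (symlink to the active release)
├── releases/
│   ├── 20121231090000/
│   └── 20130101120000/
└── shared/
    ├── log/
    └── system/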
Capistrano connects to your server in a shell and then executes the git commands on the server.
Capistrano should automatically put anything in public/system (the rails convention for stored user-uploaded files) into the shared directory, and set up the necessary symlinks.
If you put in the github url, it actually fetches from your github repo. Read https://help.github.com/articles/deploying-with-capistrano for more info.
It does, by default.
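For context: with Capistrano 2 the asset tasks come from the bundled deploy/assets recipe, so if precompilation ever doesn't run for you, check that your Capfile loads it:

load 'deploy/assets'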
Using Rails 3.0.7 and git, deploying with capistrano. I'm using different machines as web and app servers. I cannot deploy the application code to the web server, only the static assets--basically the public/ folder.
This would seem to be a common setup, but I've had no luck searching for a best practice.
Is anything built around Capistrano to handle this case? Otherwise I'm thinking the solution would be adding tasks to create the structure, but scp'ing the public directory from the app server.
So I assume there's a business reason you can't deploy the app to the other server? If there isn't, then just deploy the whole code and configure your web server to serve just the public folder. (In Apache/Passenger the configs would be exactly the same; you just wouldn't enable Passenger on the static server.) That is the only simple way to do it; otherwise you're going to cause yourself a load of headaches.
Nevertheless, I'm going to make up a way to solve this.
If you do need to deploy just the static code, then I suggest you create two repositories: the app (e.g. git@myserver:app.git) and the static files (e.g. git@myserver:static.git). Now in your app, include git@myserver:static.git as a submodule mounted at public/.
Having done this, you should search standard capistrano recipes for deploying with git submodules (in particular, I guess you'll want to store a local cache of the submodules, update it, then somehow git submodule init with that).
You can then have two capistrano recipes. I suggest you check out capistrano multi-stage, defining app and static as two stages. You can then specify git@myserver:app.git as the repository for "app" and git@myserver:static.git as the repository for "static", and a simple cap app deploy:migrations && cap static deploy should do it. But remember these will not be simultaneous.
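A sketch of the submodule setup, using the hypothetical repository URLs from above:

# inside a checkout of the app repository: mount the static repo at public/
git submodule add git@myserver:static.git public
git commit -m "Mount static assets as a submodule"

# on a fresh clone (or in a deploy recipe), populate the submodule
git submodule update --init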
I too wish there were more established practices published. We've done ours based on the Django book, which recommends making your public app directory a networked directory.
This is much better, as scp only works if your public directory is static. Many apps write things to the public directory, e.g. on-the-fly image generation, and those files also need to reach the web server immediately.
I recommend using an NFS or Samba share or similar, so that your public directory is actually just a networked folder: when you write to it, you're writing to the remote folder.
To integrate it into capistrano we do the following (a sketch follows the list):
create this networked folder in shared/public
after deploy:update_code:
move the content from current/public to shared/public (overriding files as needed)
remove or rename current/public, then symlink current/public to shared/public
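A rough Capistrano 2-style sketch of those steps (the task name is made up; shared/public is assumed to be the networked mount):

after 'deploy:update_code', 'deploy:link_public'

namespace :deploy do
  task :link_public do
    # copy the freshly built public files into the networked folder,
    # then replace the new release's public/ with a symlink to it
    run "cp -r #{release_path}/public/* #{shared_path}/public/"
    run "rm -rf #{release_path}/public"
    run "ln -s #{shared_path}/public #{release_path}/public"
  end
end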
Downsides:
* doesn't remove old files (like someone earlier said)
* no real rollback option (apart from redeploying an older version)
The best approach I've come up with is in fact to scp the files over to the web server.