404 on OpenShift URL, URL fine on dev

I have a website that I am developing locally and pushing to RH OpenShift with a PHP 5.4 and MySQL cartridge.
Most URLs work, but some page URLs are recognised locally yet return a 404 on OpenShift.
Example: in development the following URL works: local.development.local/public/reset.php, but visiting the OpenShift URL example.rhcloud.com/public/reset.php returns a 404. However, example.rhcloud.com/reset.php works, even though reset.php is inside the public folder.
The folder structure is exactly the same in development as it is within the OpenShift repo folder.
Is there a specific setting I need to change in my OpenShift environment to get it to recognise the URL?

The problem was due to the directory structure I had given my website. Since I had placed the majority of my code within a folder named public, OpenShift was using that folder as the DocumentRoot. This wasn't the behaviour I wanted, so I renamed the folder to app and updated my URLs, which resolved the issue. A blog post explaining OpenShift's DocumentRoot logic can be found here: https://blog.openshift.com/openshift-online-march-2014-release-blog/
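Roughly, the before/after looks like this (paths illustrative, based on the DocumentRoot logic described in that post):
Before: repo/public/reset.php was served as example.rhcloud.com/reset.php, because public/ itself became the DocumentRoot (so /public/reset.php 404'd).
After: repo/app/reset.php is served as example.rhcloud.com/app/reset.php, because with no public/ folder the repo root becomes the DocumentRoot.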

Related

Serve files from public folder in Ruby on Rails app

I have been handed a Ruby project that creates a document and serves it to the user. When I try to access the file in a local environment it is delivered correctly (this is the code that does so):
filepath = Rails.root.join("public", records.document.url)
send_file(filepath)
So I know the file is constructed correctly and sending it to the user using send_file works at least in a local environment.
But when it's deployed on the production server (running Amazon EC2, Ubuntu, deployed with Dokku) I get a 500 Internal Server Error:
ActionController::MissingFile (Cannot read file *path of the file*)
A few things I'm noticing: running find / -iname "*filename*" tells me the file is stored in /var/lib/docker/overlay2/*container_name*/merged/app/public/filename and /var/lib/docker/overlay2/*container_name*/diff/app/public/filename, but the result of joining Rails.root with the filename is app/public/filename. Do I need to pass send_file the whole filepath?
I googled for a couple of hours, and it seems nginx has no access to the public folder because it's running on the host machine while the app is inside a container? How would I know if that is the case, and if so, how should I serve the file?
The person who originally wrote the code told me to use OpenURI.open_uri(), but googling it doesn't seem to turn up anything applicable to this situation.
Nothing you're doing here actually makes sense; it sounds like you're following a bunch of misinformation down a bunch of rabbit holes.
The way this is supposed to work is that the files in /public (not /app/public) are served directly by the HTTP server (nginx or Apache) in production, and by your Rails application in development (so you don't have to configure a local HTTP server). The /app directory is for your application code and uncompiled assets. Do not serve files from there, ever.
The /public directory is used for your compiled assets and things like robots.txt, the default error pages and various icons. Serving the files directly from your HTTP server is far more efficient than serving them through your Rails application. A quick litmus test of whether static assets are being served correctly is to run curl -v YOUR_URL/robots.txt.
If this isn't working in production, you need to check your nginx configuration. There is no shortage of guides on how to serve static files with nginx and Docker.
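As a rough sketch only (the root path and upstream name below are assumptions, not taken from your setup), the relevant part of an nginx server block for a Rails app usually looks something like:
root /path/to/your/app/public;     # the Rails public/ directory as nginx sees it
location / {
  try_files $uri @rails;           # serve static files straight from disk when they exist
}
location @rails {
  proxy_pass http://rails_app;     # assumed upstream pointing at the Rails container
}
If nginx runs on the host while the files only exist inside the container, that root path won't resolve and nothing in public/ can be served directly, which is the kind of mismatch the question describes.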
Serving files with a Rails controller and send_data / send_file should only be done when it's actually needed (a rough sketch follows this list):
The file is not a static file or something that can be compiled at deploy time.
You need to provide access control to the files with your application.
You're proxying files from another source.
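For the access-control case, a minimal sketch (the controller, association and attachment names are assumptions, not from the question; it assumes document.url returns a path under public/ such as /uploads/...):
class DocumentsController < ApplicationController
  before_action :authenticate_user!   # assumes Devise-style authentication

  def show
    # Scope the lookup to the current user so only authorised records are served.
    record = current_user.records.find(params[:id])
    # document.url usually returns an absolute URL path like "/uploads/...";
    # strip the leading slash, otherwise join() would discard the "public" segment.
    path = Rails.root.join("public", record.document.url.sub(%r{\A/}, ""))
    send_file path, disposition: "attachment"
  end
end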

Getting 500: Internal Server Error after deploying Next.js project in Vercel. Might be related to Environment Variables

I am getting a 500 Internal Server Error after deploying a Next.js app to Vercel. The project works on my local machine but isn't working at the URL it is deployed to.
I have used environment variables in Vercel, which might be related to the issue.
I added these 4 env variables - NEXTAUTH_URL, NEXTAUTH_SECRET, TWITTER_CLIENT_ID, TWITTER_CLIENT_SECRET.
In my project, I've created a separate file '.env.local' which contains all of my project-related keys and URLs.
At first, this env variable 'NEXTAUTH_URL' was pointing to 'http://localhost:3000/'
NEXTAUTH_URL = http://localhost:3000/
And then, after deploying my app in Vercel, I updated that variable with the deployed URL in my project as well as in Vercel.
NEXTAUTH_URL = https://twitter-clone-seven-coral.vercel.app/
I have also added the above URL in Twitter's Developer Portal, under OAuth 2.0, in the 'Callback URI / Redirect URL' section.
Before deploying my app in Vercel, the CALLBACK URI/REDIRECT URL was pointing to https://localhost:3000/api/auth/callback/twitter
and WEBSITE URL was pointing to https://test.com
which I then updated after deploying the app initially.
This is the first time I'm working with Environment variables, so I do not have much idea on how to proceed with this error.
Yes, you have to set the environment variables on the host. I tried it with Vercel and it didn't work (or maybe I missed something), but it works fine with Netlify: just deploy with the environment variables. Take the keys and values from your project's .env.local file, and also provide NEXTAUTH_URL properly; then it should run.
I had faced the same problem and found the solution after a lot of research.
The trick is that you have to set the environment variables on Vercel or whatever host platform you use; a minimal CLI sketch follows the links below:
how to set environment variables in Vercel
how to set environment variables in Heroku
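For Vercel specifically, a rough sketch (the variable names are the ones from the question; the CLI prompts for each value, and you need to redeploy afterwards for the change to take effect): either add them in the dashboard under Project Settings -> Environment Variables, or from the terminal:
vercel env add NEXTAUTH_URL production
vercel env add NEXTAUTH_SECRET production
vercel env add TWITTER_CLIENT_ID production
vercel env add TWITTER_CLIENT_SECRET production
vercel --prod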

Using DigitalOcean Spaces as CDN for Next.js _next directory

I have a Kubernetes cluster on DigitalOcean where www.example.com points to my Next.js application. This works as expected; however, it serves all the assets from the same pod where my Next.js app is running:
https://www.example.com/_next/static/chunks/webpack-16e102404d6e7c36f3ae.js
https://www.example.com/_next/static/chunks/framework-852c1b21255b4351ab3d.js
and so on.
Now I would like to serve these static files from a CDN instead, so I started researching how to do this. One thing I found is that DigitalOcean offers CDN functionality through their Spaces, and another is the documentation from Next.js here: https://nextjs.org/docs/api-reference/next.config.js/cdn-support-with-asset-prefix
So I set up a DigitalOcean Space, which is now available through https://cdn.example.com, and I also followed the instructions from Next.js and modified my next.config.js file like this:
const isProd = process.env.NODE_ENV === 'production';

module.exports = {
  // Use the CDN in production and localhost for development.
  assetPrefix: isProd ? 'https://cdn.example.com' : '',
  future: {
    webpack5: true,
  },
};
And deployed it. But of course this doesn't work: the files that are generated during the build stage never get uploaded to my CDN. So now, when I open my site, it doesn't load any of the static files, because these URLs don't exist:
https://cdn.example.com/_next/static/chunks/webpack-16e102404d6e7c36f3ae.js
https://cdn.example.com/_next/static/chunks/framework-852c1b21255b4351ab3d.js
So now my question is: how do I set this up? As I understand it, there are two possible ways to do this:
1: configure my DigitalOcean Space to point at the _next folder of my pod; the first request would then still be served from my pod, but every request after that would be served from the CDN.
2: during the build phase of the deployment, upload my files to the CDN's _next folder.
And here is where I am stuck: I have no idea how to do either. For option 1, I tried to find such a setting inside DigitalOcean but couldn't find anything.
For option 2, this is my current workflow:
I make changes to the code
I commit the changes to GitHub
GitHub Actions is configured so that it will automatically build a new Docker image
GitHub Actions then pushes this new Docker image to my registry
GitHub Actions then updates my Kubernetes cluster, telling it to use this new image for my Next.js application
If I have to make changes to this workflow to upload things to the CDN, where would I do it? My Dockerfile is a multi-stage file (3 stages) and I only run the build command in the 2nd stage.
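For option 2, one rough approach (the Space name, region and local path below are assumptions): since Spaces speaks the S3 API, you can add a step to the GitHub Actions workflow that syncs the generated assets to the _next/static prefix the assetPrefix URLs expect, for example with the AWS CLI:
aws s3 sync .next/static s3://my-space/_next/static --endpoint-url https://ams3.digitaloceanspaces.com --acl public-read
A production build emits its static assets into .next/static, and with assetPrefix set the browser requests them from https://cdn.example.com/_next/static/..., so those two paths have to line up. Because your build runs inside the 2nd Docker stage, the build output would also need to be available to the runner, e.g. by running next build in the workflow as well or copying .next/static out of the built image before the sync step.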

Internal Server Error with Craft CMS 3

I'm installing Craft CMS 3 on a staging environment at http://staging.overlookpro.com and am having issues getting the CMS to show. I've installed Craft CMS 3 using Composer and selecting staging.overlookpro.com as my web root folder on my server. The folders that are installed are in this format: staging.overlookpro.com/craft/*.
On my local copy I am using MAMP on macOS and the CMS works completely fine. But for some reason the staging and production sites keep showing an Internal Server Error. I've made sure I had PHP 7 installed, but the control panel will not show.
If it shows up completely fine in your local environment, then make sure your URL points to the web directory.
For the admin login: http://<Hostname>/index.php?p=admin/install
You have to use it in this format, then it will work:
https://wedot.ch/index.php?p=admin/install
Or you can use it like this:
https://wedot.ch/admin/install
Your URL must point to your web directory. Use this URL: http://<Hostname>/index.php?p=admin/install
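Assuming a standard Craft 3 project layout (the front controller index.php lives in the project's web/ folder) and that the server is Apache, that means the vhost's document root should look something like this (path illustrative):
DocumentRoot /path/to/staging.overlookpro.com/craft/web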

Rails app - sudden 403 after pull - how do I start to debug?

I've been working on a Rails 3.1 app with one other dev.
I've just pulled some of his recent changes using git, and am now getting a 403 on any page I try to visit.
You don't have permission to access / on this server.
I'm running the site locally through Passenger.
Oddly, when I start the app using Rails' internal server, I can visit the site at http://0.0.0.0:3000.
Looking at the changes in this recent pull, the only files that have changed are some JavaScripts, some HTML, application.rb, routes.rb and a rake file.
How do I debug this? I'm a bit lost on where to start.
EDIT:
If I roll back to an earlier version, the site works through Passenger, which leads me to believe the problem is within the Rails app rather than an Apache error. Or it could be a permissions thing; can git change file permissions in this way?
IMHO this is a configuration error in Apache or a wrong directory layout. Make sure that the passenger_base_uri still points to the public folder inside your Rails project and that there are no hidden .htaccess files blocking access. Also verify that your symlinks are correct (if there are any), and check your Apache error log.
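On the permissions question from the edit: git does track the executable bit, and a pull can change it. Assuming the pull set ORIG_HEAD (merge and pull do), something like this lists any mode changes it introduced:
git diff ORIG_HEAD HEAD --summary | grep "mode change"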
Start by launching your console to see if Rails and your app can be loaded. In your application root directory, type:
rails console
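If the console loads cleanly, the app itself is probably fine and the 403 is coming from Apache/Passenger instead; tailing the error log while you reload the page (path assumed, it varies by distro) usually shows the reason:
tail -f /var/log/apache2/error.log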
