How to correctly configure server for Symfony (on shared hosting)? - symfony1

I've decided to learn Symfony and right now I am reading through the very start of the "Practical Symfony" book. After reading the "Web Server Configuration" part I have a question.
The manual describes how to configure the server correctly: the browser should only have access to the web/ and sf/.../ directories. The manual has great instructions for this, and being a Linux user I had no problem following them and making everything work on my local machine. However, that involves editing VirtualHost entries, which is normally not possible on common shared hosting servers. So I wonder: what is the common technique Symfony developers use to get the same results in a shared hosting environment? I think I can do it by adding "deny from all" in the root and then overriding that rule in the allowed directories. However, I am not sure whether that is the easiest way, or the way that is normally used.
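For reference, the "deny from all" idea described here can be sketched with two .htaccess files like the following (directory names follow the book's standard project layout; this uses Apache 2.2-style access directives, which is an assumption about the host):

```
# --- .htaccess in the project root: block everything by default ---
# (Apache 2.2 syntax; on Apache 2.4 use "Require all denied" instead)
Order Deny,Allow
Deny from all

# --- .htaccess in web/ (and likewise in the public sf/ directory): ---
# re-allow public access (Apache 2.4: "Require all granted")
Order Allow,Deny
Allow from all
```

Note that this only works if the host's AllowOverride setting permits access-control directives in .htaccess files.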

If you can add files outside the public_html directory, put all of the project's directories there and copy the contents of your web directory into public_html (include your sf directory if your app needs it). That way only the web files are publicly accessible. If you can only access public_html and cannot create directories outside it, put the whole project in a folder inside public_html and secure that folder (a .htaccess file can do the trick). The web files still go directly in public_html, but you must change the require_once(dirname(__FILE__).'/../config/ProjectConfiguration.class.php'); line in your index.php to point to the new location of the ProjectConfiguration file.
But since this is a shared hosting environment, it is still possible that others may have access to your files; that depends mostly on how the hosting provider has set up their servers.
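To illustrate the second option (the whole project in a folder inside public_html), the adjusted front controller might look like this; the symfony_project folder name is a hypothetical example, and the rest is the standard symfony 1.x index.php:

```php
<?php
// public_html/index.php - copied from the project's web/ directory.
// The project itself now lives in public_html/symfony_project/
// (hypothetical folder name), so the relative path changes:
require_once(dirname(__FILE__).'/symfony_project/config/ProjectConfiguration.class.php');

$configuration = ProjectConfiguration::getApplicationConfiguration('frontend', 'prod', false);
sfContext::createInstance($configuration)->dispatch();
```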

How do I modify nginx.conf when utilizing Docker?

I have inherited a Microsoft Visual Studio MVC project for modification, and I now need to allow users to upload files to the web server. I use Windows 11 and IIS Express from within MS Visual Studio for development, and there is no issue with IIS. However, the app runs in production on an Ubuntu-based server running NGINX as the web server.
PROBLEM: When I attempt to upload a file larger than 1 MB from my browser to the production server, I receive the error message "413 Request Entity Too Large." After scouring the web I have discovered: (a) NGINX enforces a 1 MB limit by default; and (b) it is necessary to modify the nginx.conf file by adding the NGINX client_max_body_size directive.
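For reference, the directive in question looks like this in nginx.conf (the 20m value is only an example limit, not a recommendation; the directive can also be set per-server or per-location):

```nginx
http {
    # Allow request bodies (e.g. file uploads) up to 20 MB
    # instead of the 1 MB default.
    client_max_body_size 20m;
}
```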
I have located the nginx.conf file on the production server and have browsed it with Nano. However, I have stopped short of attempting to modify and save the file due to the presence of Docker. Admittedly, I know virtually nothing of Docker and, unfortunately, the principals who set up this server have long since departed the Company. Furthermore, it is unclear to me whether simply modifying the nginx.conf file and restarting NGINX on the production server will do the trick given what I presume to be the necessity to involve Docker.
As an aside, my customer uses Azure DevOps to facilitate collaboration. I regularly stage changes to my project using Git from within MS Visual Studio and subsequently use an Ubuntu update_app.sh script to push the changes into production. I had previously attempted to modify the nginx.conf file included with the local copy of my MVC project. The file was pushed via Azure DevOps to the production server, but the modified nginx.conf never took effect in production, presumably due to the presence of Docker.
I would appreciate someone providing me with an explanation of the interaction between Docker and NGINX. I would further appreciate any tip on how to get the modified nginx.conf file pushed into production.
Thanks in advance for your assistance.
Thankfully this is pretty straightforward, and NGINX has good documentation on this topic (see the "Copying Content and Configuration Files from the Docker Host" section of Deploying NGINX and NGINX Plus on Docker).
Find the Dockerfile for the deployed solution (typically in the project or solution root directory).
Copy the contents of the current NGINX configuration file (either by browsing the container image or by finding the configuration files within source).
Create a directory named conf within your source.
Copy the current configuration into a file within conf and apply your changes to that configuration file.
Update the Dockerfile to copy the files within conf into the location where NGINX loads configuration files on startup. Add the following line to the Dockerfile after the FROM nginx line (the exact line may vary, but it should be obvious it is using nginx as the base image): COPY conf {INSERT RELATIVE PATH TO CONF PARENT FOLDER}
This will copy your local configuration file into the container when the image is built, and NGINX will load it on startup.
Note: There are other solutions that support configuration changes without rebuilding the container.
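Putting the steps above together, the Dockerfile change might look like this (the /etc/nginx/conf.d target is the default include directory for the official nginx image; adjust it if your image loads configuration from elsewhere):

```dockerfile
FROM nginx

# Copy the customized configuration (e.g. conf/default.conf containing
# the client_max_body_size change) over the image's default config directory.
COPY conf /etc/nginx/conf.d
```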

Editing application settings of a containerized application after deployment

There might be something I fundamentally misunderstand about Docker and containers, but... my scenario is as follows:
I have created an asp.net core application and a docker image for it.
The application requires some settings to be added or removed at runtime
Some DLL plugins can also be added and loaded by the application
These settings would normally be stored in appsettings.json and a few other settings files located in predefined relative paths (e.g. ./PluginsConfig)
I don't know how many plugins there will be or how they will be configured
I didn't want to create any kind of UI in the web application for managing settings and uploading plugins - this was to be done on the backend (I need the solution to be simple and cheap)
I intend to deploy this application on a single server, and the admin user would be responsible for configuring settings, uploading plugins, etc. It's an internal productivity tool - there might be many instances of this application, but they would not be related at all.
The reason I want it in docker is to have it as self-contained as possible, with all the dependencies being there.
But how would I then allow accessing, adding and editing of the plugins and config files?
I'm sure there's a pattern that would allow this scenario.
What you are looking for are volumes and bind mounts. You can bind files or directories from a host machine to a container. Thus, host and container can share files.
Sample command (a bind mount; there are other approaches as well):
docker container run -v /path/on/host:/path/in/container image
Detailed information for volumes and bind mounts
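Applied to the scenario above, a sketch might look like this (the host paths and image name are assumptions for illustration, not part of the original setup):

```shell
# Bind-mount the settings file and the plugins directory from the host,
# so an admin can edit settings and drop in plugin DLLs
# without rebuilding the image. A container restart picks up the changes.
docker container run \
  -v /srv/myapp/appsettings.json:/app/appsettings.json \
  -v /srv/myapp/PluginsConfig:/app/PluginsConfig \
  myapp-image
```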

Is it wise to delete the default webapps from a Tomcat-based docker image?

I am containerizing an older Java web application with Docker. My Dockerfile pulls an official Tomcat image from Docker Hub (specifically, tomcat:8.5.49-jdk8-openjdk), copies my .WAR file into the webapps/ directory, and copies in some idiosyncratic configuration files and dependencies. It works.
Now I know that Tomcat comes out-of-the-box with a few directories under webapps/, including the "manager" app, and some others: ROOT, docs, examples, host-manager. I'm thinking I ought to delete these, lest one of my users access them, which might be a security risk and is unprofessional at the least.
Is it a best practice to delete those installed-by-default web apps from an official Tomcat image? Is there any downside to doing so? It seems logical to me, but a web search didn't turn up any expert opinion either way.
Every folder under webapps represents a discrete web application deployed in the Tomcat servlet container after server startup.
None of those web applications has any implicit or explicit dependency relationship with Catalina, Jasper, or any other system component of Tomcat.
You should be quite OK to remove all of those folders (apps), unless you need the Manager application to manage your deployments and server; even that can be installed again later on.
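A minimal Dockerfile sketch of removing the default apps (the image tag is taken from the question; the webapps path is the standard one for the official image, and myapp.war is a hypothetical file name):

```dockerfile
FROM tomcat:8.5.49-jdk8-openjdk

# Remove the default applications: ROOT, docs, examples, manager, host-manager
RUN rm -rf /usr/local/tomcat/webapps/*

# Deploy the application as the root context
COPY myapp.war /usr/local/tomcat/webapps/ROOT.war
```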

Which file permissions should be set for NGINX hosted files?

When running an NGINX server that hosts static content, which file permissions should be set? The main consideration is to have the safest configuration.
I currently have two dockerized NGINX servers behind a reverse-proxy, one of them containing files with 1000:1000 (copied directly from the host machine), the other with root:root (copied from a multi-stage build). The current configuration works, but I would like to know the best practice.
Directories need read and execute permissions (execute is required to traverse into them). Static files just need read. This assumes your server is not executing scripts. The files should not be owned by root, but by a user/group that the nginx worker processes can use.
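A sketch of applying such permissions (the /tmp/site path and the www-data user are illustrative assumptions; the chown line is commented out because it requires root and an existing nginx user):

```shell
# Create a sample content root for illustration
mkdir -p /tmp/site/assets
echo "<h1>hello</h1>" > /tmp/site/index.html
echo "body{}" > /tmp/site/assets/style.css

# Directories: read + execute for everyone (execute is needed to traverse)
find /tmp/site -type d -exec chmod 755 {} +
# Static files: readable by group/other, no execute bit
find /tmp/site -type f -exec chmod 644 {} +

# Ownership would typically go to the user nginx runs as, e.g.:
# chown -R www-data:www-data /tmp/site

stat -c '%a %n' /tmp/site /tmp/site/index.html
```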

What is virtual directory? What's the use of it?

Can someone please explain to me, with an example, what a virtual directory is and why we need it?
A virtual directory is a friendly name, or alias, for a physical directory on your server's hard drive that does not reside in the home directory.
Because an alias is usually shorter than the path of the physical directory, it is more convenient for users to type.
Taken from here
In essence, in IIS, it's like a shortcut to another directory on your computer while seeming like it is a subdirectory of the current directory.
E.g. given www.example.com/bob/phil:
bob may be a subdirectory of the root, while phil is a directory elsewhere on the computer, not necessarily inside bob.
A virtual directory is a directory created in IIS to host a local application; it points to a particular physical (or virtual) folder on the server.
For example, if the development team wants to serve your application, you create a virtual directory and specify the physical path it maps to.
You can create a virtual directory by right-clicking the site in IIS and selecting the option from the context menu.
A web application is accessed using a virtual directory name instead of a physical folder name. For example, if you have a web application called "Shopcart" on your machine, you will have a virtual directory for this web application, and you will access the application through that virtual directory's URL. If your virtual directory name is "Test", then your web application URL will be "http://localhost/Test".
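On IIS 7 and later, the same virtual directory can also be created from the command line with appcmd (the site name and physical path here are examples, not values from the question):

```
%windir%\system32\inetsrv\appcmd add vdir /app.name:"Default Web Site/" /path:/Test /physicalPath:"C:\inetpub\Shopcart"
```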
