Is there a way to make BaseX serve an HTML document? - same-origin-policy

Is there a way to make BaseX's HTTP server serve an HTML document stored either in the db as a raw resource or in the file system, with a text/html content type, so it can be displayed in a browser?
The document is a web page that makes XHR requests to BaseX. Currently, I load it in the browser via the file protocol. This requires making Jetty respond with CORS headers; otherwise, the same-origin policy blocks the XHR requests.
However, this is a maintenance burden: every update to BaseX requires manually obtaining a new version of the servlet filter that adds the CORS headers.
I'd like to have BaseX itself serve the HTML document (and thus become the origin), eliminating the cross-origin requests.
Is it possible?

The default web.xml (located in BaseXWeb/WEB-INF) already includes configuration to serve static files from the ./static directory under the /static/ URI:
<!-- Mapping for static resources (may be restricted to a sub path) -->
<servlet>
  <servlet-name>default</servlet-name>
  <init-param>
    <param-name>useFileMappedBuffer</param-name>
    <param-value>false</param-value>
  </init-param>
</servlet>
<servlet-mapping>
  <servlet-name>default</servlet-name>
  <url-pattern>/static/*</url-pattern>
</servlet-mapping>
You can also have a look at the BaseX DBA, which serves as an example implementation of a web application hosted by BaseX and uses the ./static folder for some of its JavaScript files.
Of course, you could also change the default web.xml if you need the files hosted from another directory. An alternative is to store the documents in a database as raw resources and serve them with an adequate content type yourself. Note, however, that hosting files through the ./static folder bypasses RESTXQ execution and lets Jetty offer the files directly, so it may perform better than reading files from BaseX databases. A third option is to place a reverse proxy in front of BaseX to serve the static files (which is usually done in production anyway), but this adds some administrative overhead during development.
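As a sketch of the raw-resource alternative: a RESTXQ function can read a raw resource and return it as HTML. The database name, resource path, and module namespace below are assumptions, and the function names reflect BaseX 7/8:

```xquery
module namespace page = 'http://example.org/page';

declare
  %rest:GET
  %rest:path('/index.html')
  %output:media-type('text/html')
function page:index() {
  (: read the raw resource from database "app" and return it as text;
     the browser now loads the page from the same origin as the XHR endpoints :)
  convert:binary-to-string(db:retrieve('app', 'index.html'))
};
```

Dropped into the webapp directory, this makes the page available under the same host and port as the rest of the BaseX HTTP services, so no CORS headers are needed.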

Related

Prevent direct access to files on IIS server

I have two servers, one for my mvc application and the other one as a storage for large files like images etc, both running on Windows Server 2012 R2.
How can I prevent direct access to the files on the storage server?
Say the MVC app is on IP1/ and the storage is on IP2/.
A link to a file looks like: IP2/MediaFiles/2015/12/image0001.jpg.
I need only GET requests from IP1 to have access to the link above. How?
UPDATE
server1 on IP1 needs to be free of file serving: since the media server is on IP2, we don't want to load files into server1's RAM per request (server1 would crash soon!); therefore, no HttpHandler can be used on it!
In this question I'm looking for a way to prevent unauthorized users from accessing files on server2 (on IP2) by entering the direct address.
Alright, I found the solution!
Working on such problems requires tricks gathered from different sources, tailored to your needs. I was looking for a way to prevent unauthorized users from accessing files on a file server that is separate from the main server (the main server is the one authorizing users).
First of all, I blocked ALL incoming requests matching the URL pattern of my sensitive files, using IIS rules. Then I wrote a few lines of code for the file server that handle HTTP requests via the IHttpHandler interface in order to 1) check the authorization rules and 2) send the exact files to clients without converting them to byte arrays. And lastly, I used This Link to prettify the links to the file server! That's all, folks ;)
Now:
physical link [blocked] : IP2/MediaFiles/2015/12/image0001.jpg
virtual link : IP2/Please/Find/A/File/By/DB/Id/1 ---> image0001.jpg
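The first step (blocking the physical URL pattern) can be sketched as an IIS URL Rewrite rule in the file server's Web.config; this assumes the URL Rewrite module is installed, and the rule name is illustrative, with the folder taken from the example links above:

```xml
<system.webServer>
  <rewrite>
    <rules>
      <!-- reject any request that targets the physical media path directly -->
      <rule name="BlockDirectMediaAccess" stopProcessing="true">
        <match url="^MediaFiles/.*" />
        <action type="CustomResponse" statusCode="403"
                statusReason="Forbidden"
                statusDescription="Direct access is not allowed" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

Requests for the virtual links are then left to the custom IHttpHandler, which decides per request whether to stream the file.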
Everything you need is in the Web.config file. You should place it in the root directory of your file storage server if you are using IIS there.
Inside the <system.webServer> node, place this code:
<security>
  <ipSecurity allowUnlisted="false"> <!-- this line blocks everybody, except those listed below -->
    <clear/> <!-- removes all upstream restrictions -->
    <add ipAddress="127.0.0.1" allowed="true"/> <!-- allow requests from the local machine -->
    <add ipAddress="IP1" allowed="true"/> <!-- allow the specific IP of IP1 -->
  </ipSecurity>
</security>
This rule applies to all subfolders of the root folder. If you need to block requests only for a specific folder, place a Web.config there.

Static content delivery vs dynamic in WildFly

My application EAR is bundled with static resources like JS, CSS, images, etc., and serves JS files at the URI app/scripts. These requests pass through filters in the application. Now I have configured WildFly to serve static content like images, JS, and CSS, also at the path app/scripts for JS. Since both have the same URI, which one will be served now? It looks like the static content takes precedence, because I noticed that requests no longer pass through the filters. Which method is the better option to improve performance?
Make your static content a separate deployment. Create a folder named "MyContents.war" in the deployments folder of your WildFly installation and keep all your scripts, CSS, and other assets inside that folder, then add the following settings to your standalone.xml file inside the <server> tag:
<deployments>
  <deployment name="MyContents.war" runtime-name="MyContents.war">
    <fs-archive path="deployments/MyContents.war" relative-to="jboss.server.base.dir"/>
  </deployment>
</deployments>
Now, to access any resource, such as a script file named scripts.js:
http://<yourhost>:<port>/MyContents/scripts/scripts.js
Hope this helps.
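As an alternative to a separate deployment, WildFly (via its Undertow subsystem) can also serve a directory straight from the file system with a file handler in standalone.xml. This is a sketch: the subsystem fragment is abbreviated, and the location name and path are illustrative:

```xml
<subsystem xmlns="urn:jboss:domain:undertow:3.1">
  <server name="default-server">
    <host name="default-host" alias="localhost">
      <!-- requests to /static are served directly from the file system,
           bypassing any deployed application and its filters -->
      <location name="/static" handler="static-content"/>
    </host>
  </server>
  <handlers>
    <file name="static-content" path="/var/www/static" directory-listing="false"/>
  </handlers>
</subsystem>
```

Because such requests never enter a deployment, no servlet filters run for them, which matches the behavior observed in the question.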

html5 offline cache is caching all files instead of cached files that are listed

I am trying to use HTML5 offline caching, but the problem is that it caches all HTML files, not just the ones I listed in the cache manifest file. For example, I have four HTML files: index.html, test.html, sample.html, and fallback.html, and a sample.appcache manifest file that contains:
CACHE MANIFEST
index.html
sample.html
NETWORK:
test.html
FALLBACK:
/ /fallback.html
I don't have the manifest="sample.appcache" attribute set in any HTML file. I am using JBoss 5 AS, and in web.xml I added the MIME mapping as follows:
<mime-mapping>
  <extension>appcache</extension>
  <mime-type>text/cache-manifest</mime-type>
</mime-mapping>
So only index.html and sample.html need to be cached offline, but test.html is also getting cached if it was hit earlier while the server was up, and fallback.html is not being served in place of test.html when the server is stopped. What is wrong with this setup? The second thing I don't understand is that on IE (v9) and Firefox (v19) I don't have to set the MIME mapping, but for the application to work on Chrome (v26) and Safari (v5.1.7) this setting is mandatory.
I have not worked with JBoss, but the reason Chrome and Safari want the MIME type for the appcache is that otherwise they have no way of identifying it as a cache manifest. The simple solution is to just include it :-P
As for the caching, where are you seeing it cached? Is it for sure being stored in the offline section? Keep in mind that files will still be cached like normal regardless of the cache manifest.
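Note also that the application cache only applies to pages that actually declare the manifest. A minimal sketch, using the file names from the question, would be:

```html
<!DOCTYPE html>
<!-- index.html: the manifest attribute opts this page into the application cache -->
<html manifest="sample.appcache">
<head><title>Index</title></head>
<body>Content to be available offline.</body>
</html>
```

Without that attribute on any page, the manifest is never processed at all, and anything you see cached is ordinary HTTP caching.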

Should I find and copy Struts styles and JavaScript files to the /struts/ folder to enable client-side validation?

Client-side validation doesn't work for me. First I thought it was a MyEclipse fault for not copying the files to the root folder, but then I discovered that the JS and CSS files reside in the Struts core JAR. Now I wonder what I should do. Should I find and copy all the JS and CSS files from their appropriate folders to the webRoot, or is there a smarter workaround, like changing the configuration? Should Struts copy them by itself?
I use Tiles with Struts. Could that be the problem?
My JSP files are in the WEB-INF folder! Could that have caused a problem?
Could using something like the struts2-jquery plugin solve my problem?
I use Struts 2!
My struts2 filter configuration is
<filter>
  <filter-name>struts2</filter-name>
  <filter-class>
    org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter
  </filter-class>
</filter>
<filter-mapping>
  <filter-name>struts2</filter-name>
  <url-pattern>*.action</url-pattern>
</filter-mapping>
<listener>
  <listener-class>org.apache.struts2.tiles.StrutsTilesListener</listener-class>
</listener>
The specific problem here is the filter configuration:
<filter-mapping>
  <filter-name>struts2</filter-name>
  <url-pattern>*.action</url-pattern>
</filter-mapping>
The recommended configuration (unless you specifically know what you're doing) is to map to /*:
<filter-mapping>
  <filter-name>struts2</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
If you map only to *.action, non-action requests (like CSS and JavaScript) won't be processed by the filter. S2 examines the request for files it (S2) knows about, like its own CSS and JavaScript files, and serves those requests itself even though they're not action requests.
This is documented in the S2 guides' Static Content section.
It's perfectly valid to map to *.action, but then you do need to extract the static files and put them at their required location, at least if you intend to use the default S2 themes/JavaScript. That's also optional: the framework is designed to get you started relatively quickly, but if you have specific needs that the framework doesn't handle, using that part of S2 may or may not be the best choice.
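For reference, the Static Content section documents the constants that control this behavior: with the catch-all filter mapping, the framework serves its own static files under /struts/* when struts.serve.static is true (the default). If you extract the files and host them yourself, you can turn it off; a struts.xml sketch might be:

```xml
<struts>
  <!-- let the container, not the Struts 2 filter, serve the /struts/* static files -->
  <constant name="struts.serve.static" value="false"/>
  <!-- when serving via the filter, this controls the browser-caching headers -->
  <constant name="struts.serve.static.browserCache" value="true"/>
</struts>
```

Whether disabling it is worthwhile depends on your setup; the default works out of the box for the standard themes.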
In a Java project (among others), Eclipse will copy all your files from the source folder to the output folder.
This means that if you put MyClass.java and foobar.properties in your /src folder, you will have MyClass.class and foobar.properties under your /bin folder (or whatever name is chosen for the output folder) when you build your project.
For more automation, like replacing tokens in configuration files (say, environment-specific config) or dynamically retrieving the needed libraries, the two main tools generally adopted are Apache Ant (with Apache Ivy as dependency manager) and Apache Maven.
Usually,
properties files are put in the root of the src folder;
Struts2 XML Validation files are put under the same package of the Action to validate;
Struts2 XML Visitor Validation files are put under the same package of the POJO to validate;
JSP files, JS files, and CSS files are put (if they need to be inside the WAR) adjacent to the /WEB-INF folder, and they will not be moved by anything.
A typical structure could be:
src
 |-- java
 |-- web
      |-- css
      |-- js
      |-- jsp
      |-- WEB-INF
           |-- lib
Take a look at this SO answers too:
Best location to put your CSS and JS files in a Mavenized Java Web app?
Where do CSS and JavaScript files go in a Maven web app project?
That said, you need to be more explicit when you refer to client validation.
If you mean JavaScript validation, then refactor the project as above and describe your problem, taking a look at the JS console;
but with Struts2 it would be better to validate the input with XML validation, because client-side validation is not reliable:
say I bypass the JavaScript controls by injecting HTML directly with FireBug, or I create a form at runtime with the desired parameters and send it to the server, or I'm using a browser with JavaScript disabled... the client is not under your control, so it is good practice to validate server side.
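As a sketch of the XML validation alternative (the action and field names here are hypothetical), a file named MyAction-validation.xml placed in the same package as the MyAction class could look like:

```xml
<!DOCTYPE validators PUBLIC
    "-//Apache Struts//XWork Validator 1.0.3//EN"
    "http://struts.apache.org/dtds/xwork-validator-1.0.3.dtd">
<validators>
  <!-- field validator: runs server side, so it cannot be bypassed by the client -->
  <field name="username">
    <field-validator type="requiredstring">
      <message>Username is required.</message>
    </field-validator>
  </field>
</validators>
```

The framework picks the file up by naming convention and runs the validators before the action executes, regardless of what the client sent.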

JSF 2 Access on Facelet Files

I am starting to explore JSF 2 Facelets and would like to test it in a simple project.
I just have a query regarding the file structure in JSF 2. When I was using Spring, I used to put all my pages under WEB-INF so that they wouldn't be accessible to the browser.
I notice that in JSF 2 you should put your *.xhtml files outside of WEB-INF and allow access to them through the Faces Servlet.
Question: does this mean that all enterprise applications that use JSF always put a security constraint in their web.xml?
<security-constraint>
  <web-resource-collection>
    <web-resource-name>XHTML files</web-resource-name>
    <url-pattern>*.xhtml</url-pattern>
  </web-resource-collection>
  <auth-constraint />
</security-constraint>
Or are they using some sort of filter that traps all incoming requests and rejects those for *.xhtml?
Is my understanding correct, and if so, which approach is more apt?
Thanks
A third alternative in JSF 2.x is to map the FacesServlet straight onto *.xhtml instead of *.jsf or whatever. This way you don't need to fiddle with security constraints or filters to prevent end users from directly accessing *.xhtml files. Its only disadvantage is that you cannot serve "plain vanilla" XHTML files without invoking the FacesServlet, but that would in turn not make much sense, because such files should technically have the *.html extension.
Please note that this doesn't work in old JSF 1.x. The FacesServlet would run in an infinite loop invoking itself again and again.
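A minimal web.xml sketch of that third alternative (the servlet name is arbitrary):

```xml
<servlet>
  <servlet-name>facesServlet</servlet-name>
  <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>facesServlet</servlet-name>
  <!-- map directly on *.xhtml so views can never be requested as raw source -->
  <url-pattern>*.xhtml</url-pattern>
</servlet-mapping>
```

With this mapping, every request for an .xhtml resource goes through the FacesServlet, so there is no URL left on which the raw Facelet source could leak.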
