How does uwsgi cache non-text/html content? - uwsgi

Can somebody explain how the following snippet from the uwsgi docs works?
cache2 = name=mycache,items=100
; load the mime types engine
mime-file = /etc/mime.types
; at each request starting with /img check it in the cache (use mime types engine for the content type)
route = ^/img/(.+) cache:key=/img/$1,name=mycache,mime=1
; at each request ending with .css check it in the cache
route = \.css$ cache:key=${REQUEST_URI},name=mycache,content_type=text/css
; fallback to text/html all of the others request
route = .* cache:key=${REQUEST_URI},name=mycache
; store each successful request (200 http status code) in the 'mycache' cache using the REQUEST_URI as key
route = .* cachestore:key=${REQUEST_URI},name=mycache
Since ${REQUEST_URI} is used as the key for storing everything in the cache, but only a part of ${REQUEST_URI} is used to look up images, how is this supposed to work? I did output ${REQUEST_URI} with the log: routing target, and it is always the complete request path starting from the first /.
Something similar in my setup does not work (I am using /usr/local/etc/nginx/mime.types as the mime types file).
Thanks,
t.

Look carefully at this line:
route = ^/img/(.+) cache:key=/img/$1,name=mycache,mime=1
It will catch all requests for files inside the /img/ directory, saving the path relative to /img/ in the variable $1. It then asks the cache for the key /img/$1, i.e. it prepends /img/ to the saved path again.
For the file /img/my_logo.png it captures "my_logo.png" in $1, prepends "/img/", and ends up querying the cache for /img/my_logo.png.
Basically, it re-creates REQUEST_URI, which is exactly the key that cachestore used. So if it doesn't work for you, make sure you are using the same base directory name in the regex and in the cache key.
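For example, if your images live under /static/ rather than /img/, both the regex and the cache key have to use /static/, so that the reconstructed key matches what cachestore saved under ${REQUEST_URI}. A minimal sketch, assuming a hypothetical /static/ prefix and the nginx mime file from your setup:
cache2 = name=mycache,items=100
mime-file = /usr/local/etc/nginx/mime.types
; the capture is relative to /static/, so the key puts /static/ back in front
route = ^/static/(.+) cache:key=/static/$1,name=mycache,mime=1
route = .* cache:key=${REQUEST_URI},name=mycache
route = .* cachestore:key=${REQUEST_URI},name=mycache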

Related

Resumable upload with new file with special characters in name

I'm following the documentation to create a new upload session for a resumable file upload.
My request looks like:
/v1.0/me/drive/items/:folderId/children/:fileName/createUploadSession
This works when :fileName is something like test.txt or even test 2.txt, but putting special characters in there, like test".txt or test%22.txt, causes the request to fail.
There are no examples in the documentation on how to deal with special characters in this case, so is this supported?
Files stored in OneDrive have naming conventions/restrictions similar to files stored locally. If you consider that OneDrive can sync to your local file system, it makes sense why this is the case.
In general, you should assume you cannot use any of these characters in your file names:
~ " # % & * : < > ? / \ { | }.
You can find the complete list at Invalid file names and file types in OneDrive, OneDrive for Business, and SharePoint.
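If you want to catch such names before requesting an upload session, a rough client-side check could look like the sketch below; the character class comes from the list above, and the function name is just an illustration, not part of the Graph API:
// reject file names containing characters OneDrive documents as invalid
var onedriveInvalidChars = /[~"#%&*:<>?\/\\{|}]/;
function isValidOneDriveName(fileName) {
    return !onedriveInvalidChars.test(fileName);
}
isValidOneDriveName('test 2.txt');  // true
isValidOneDriveName('test".txt');   // false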

Access Denied when creating file in Visual F#

The following code runs without a hitch:
On the other hand, I get an access-denied error with this:
The destination is in my personal folder and I have full control. The directory is not read-only. Anyway, in either of those cases, the first code sample should not run either! I appreciate the help ...
In the second sample, you have two problems:
There are backslashes instead of forward slashes, so some of them may get interpreted as escape sequences in the string literal.
You completely ignore the first parameter of write and specify what I assume is a folder as the destination. You can't open a file stream on a folder, so no wonder you get access denied.
This should work:
open System.IO

// build the destination path, then write the MemoryStream to a new file
let write filename (ms : MemoryStream) =
    let path = Path.Combine("C:/Users/<whatever>/signal_processor", filename)
    use fs = new FileStream(path, FileMode.Create)
    ms.WriteTo(fs)
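If you prefer backslashes in the path, a verbatim string sidesteps the escape-sequence problem (a one-line sketch using the same hypothetical folder):
let path = Path.Combine(@"C:\Users\<whatever>\signal_processor", filename)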

downloading and storing files from given url to given path in lua

I'm new to Lua, but I'm working on an application that operates on specific files at a given path. Now I want to work with files that I download. Are there any Lua libraries or snippets I can use to download a file and store it on my computer?
You can use the LuaSocket library and its http.request function to download from a URL over HTTP.
The function has two flavors:
Simple call: http.request('http://stackoverflow.com')
Advanced call: http.request { url = 'http://stackoverflow.com', ... }
The simple call returns 4 values: the entire content of the URL in a string, the HTTP response code, the headers and the status line. You can then save the content to a file using the io library.
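A minimal sketch of the simple form, assuming the same example URL and an arbitrary output file name:
local http = require("socket.http")

-- body is the full page content, code is the HTTP status code
local body, code, headers, status = http.request("http://stackoverflow.com")
if code == 200 then
    local f = assert(io.open("stackoverflow.html", "w"))
    f:write(body)
    f:close()
end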
The advanced call allows you to set several parameters like the HTTP method and headers. An important parameter is sink. It represents an LTN12-style sink. For storing to a file, you can use sink.file:
local http = require("socket.http")
local ltn12 = require("ltn12")

-- stream the response body straight into a file instead of a string
local file = ltn12.sink.file(io.open('stackoverflow', 'w'))
http.request {
    url = 'http://stackoverflow.com',
    sink = file,
}

Load or Stress Testing Tool with URL Import Functionality

Can someone recommend a load testing tool which allows you to either:
a. replay IIS (7) logs to simulate a real live site's daily run;
b. import a CSV or equivalent list of URLs so we can achieve a similar thing as above, but at a URL level;
c. a .NET API so I can easily create simple tests from my list of URLs.
I do not really want to record my tests.
I think I can do (b) with WAPT, but I would need to create an XML file manually. Not too much grief, but I'm wondering if any tools cover these scenarios out of the box.
Visual Studio Test Edition would require some code to parse the file into a suitable test run.
It is a great load testing solution.
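For illustration, a coded web test could read the URL list and turn each line into a request. A rough sketch, assuming a plain-text urls.csv with one URL per line (the file path and class name are made up):
using System.Collections.Generic;
using System.IO;
using Microsoft.VisualStudio.TestTools.WebTesting;

public class UrlListWebTest : WebTest
{
    // issue one GET request per line of the URL list
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        foreach (string url in File.ReadAllLines(@"C:\tests\urls.csv"))
        {
            yield return new WebTestRequest(url);
        }
    }
}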
Our load testing service lets you write a very simple script using JavaScript to pull data out of a CSV file and then fetch those URLs. For example, the following code would pluck 10 random URLs from the CSV file and fetch them as part of a single session:
var c = browserMob.openHttpClient();
var csv = browserMob.getCSV("urls.csv");

browserMob.beginTransaction();
for (var i = 0; i < 10; i++) {
    browserMob.beginStep("Step 1");
    var url = csv.random().get("url");
    c.get(url);
    browserMob.endStep();
}
browserMob.endTransaction();
The CSV file itself needs to be a normal CSV file with the first row containing a header named "url". This script would be run repeatedly for each virtual user participating in a load test.
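For example, urls.csv could contain something like this (the rows are made up):
url
http://example.com/
http://example.com/products
http://example.com/contact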
Our open-source tool, Yandex.Tank, supports a so-called 'uri-format': you simply put all your URIs in a file, one URI per line, then specify the headers in your load.ini like this:
[phantom]
address=example.org
rps_schedule=line(1, 1600, 2m)
headers = [Host: mts-maps.yandex.ru]
[Connection: close] [Bloody: yes]
ammo_file = ammo.uri
ammo.uri:
/
/index.html
/1/example.html
/2/example.html

request.serverVariables() "URL" vs "Script_Name"

I am maintaining a classic asp application and while going over the code I came across two similar lines of code:
Request.ServerVariables("URL")
' Output: "/path/to/file.asp"
Request.ServerVariables("SCRIPT_NAME")
' Output: "/path/to/file.asp"
I don't get it... what is the difference? Both of them ignore the URL rewriting that I have set up, which makes the /path folder the root document (the above URL is rewritten to "/to/file.asp").
More info:
The site is deployed on IIS 7
URL: Gives the base portion of the URL, without any query string or extra path information. For the raw URL, use HTTP_URL or UNENCODED_URL.
SCRIPT_NAME: A virtual path to the script being executed. Can be used for self-referencing URLs.
See http://www.requestservervariables.com/url and /script_name for the definitions.
This could be a bug under IIS 7.
I could not get Request.ServerVariables("URL") and Request.ServerVariables("SCRIPT_NAME") to return different values. I've tried the cases where they were called from an included file (<!--#include file="file.asp"-->) or after a Server.Transfer.
Is this difference maybe there in the case of a Server.Transfer?
In the case where you do a Server.Transfer, I think you would get different results,
i.e. SCRIPT_NAME would be e.g. /path/to/transferredfile.asp, whereas URL would remain /path/to/file.asp.
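A minimal sketch to check this yourself (the file names a.asp and b.asp are made up; as noted above, the output may turn out identical on IIS 7):
<%
' a.asp: hand the request over to another page
Server.Transfer "b.asp"
%>

<%
' b.asp: print both server variables after the transfer
Response.Write Request.ServerVariables("SCRIPT_NAME") & "<br>"
Response.Write Request.ServerVariables("URL")
%>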
