WebDAV SEARCH returns 405 - iOS

I am building a client application for WebDAV. I have implemented WebDAV methods like MKCOL, DELETE, PROPFIND, MOVE, and COPY, and they are working fine. When I tried to implement the SEARCH method, the server returns:
405 Method Not Allowed
I am using an Apache2 server; do I need any configuration change on the server? I learned from the question "How to get the list of folders and files deployed on Linux WebDav?" that some servers do not support the SEARCH method, and the suggestion given there is to use the WebDAV PROPFIND method instead. So I want to know whether PROPFIND with Depth: infinity is feasible for a file system with large collections.

You can craft the PROPFIND request to limit the fields that are returned. If you were to limit this request to the searchable parameters, it could work for you.
[is] depth infinity feasible for file system with large collections
It depends, of course, on how large the collections are. You will be receiving several hundred bytes of data for each item in the collection. A collection with millions of objects could result in a pretty big response!
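As a rough sketch of a limited PROPFIND (Python with the requests library; the server URL and credentials are hypothetical), asking only for two properties and only one level deep:

import requests

# Request only the display name and size of each item, one level deep,
# rather than every property of every descendant.
propfind_body = """<?xml version="1.0" encoding="utf-8"?>
<D:propfind xmlns:D="DAV:">
  <D:prop>
    <D:displayname/>
    <D:getcontentlength/>
  </D:prop>
</D:propfind>"""

resp = requests.request(
    "PROPFIND",
    "https://server.example.com/webdav/folder/",   # hypothetical server and path
    data=propfind_body,
    headers={"Depth": "1", "Content-Type": "application/xml"},
    auth=("user", "password"),                     # hypothetical credentials
)
print(resp.status_code)   # 207 Multi-Status on success
print(resp.text)          # XML multistatus body listing the requested properties

Note that many servers (including Apache's mod_dav unless the DavDepthInfinity directive is enabled) refuse Depth: infinity PROPFIND outright, so walking the tree one Depth: 1 request at a time is usually the more practical approach for large collections.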

Related

What is a PROPFIND request?

In Postman, there are multiple request types and I see PROPFIND, but I do not actually know what its aspects and usage are. Just curious if it is useful for future use; can anyone explain it in simple terms?
PROPFIND — used to retrieve properties, stored as XML, from a web resource. It is also overloaded to allow one to retrieve the collection structure (a.k.a. directory hierarchy) of a remote system.
PROPFIND is a WebDAV request used with HTTP servers that enable WebDAV. HTTP was developed with requests such as GET, PUT, POST, PATCH, and DELETE. WebDAV, which stands for Web Distributed Authoring and Versioning, extends these with MKCOL, LOCK, UNLOCK, COPY, MOVE, PROPFIND, and PROPPATCH. PROPFIND (Property Find) is a complex WebDAV request that allows a client to ask a WebDAV-enabled server for an XML response listing all the items in the collection specified in the request that match the properties specified in the request. Thus, to oversimplify, a client (you on your computer) may want to retrieve all HTML files on a server site that match a regular expression, have version numbers after a given version number, were authored by user JimBrokaw, and were approved by Director AliceFaye. See here for a more thorough description.

Azure DevOps REST API - pagination and rate limit

I am trying to pull Azure DevOps entity data (teams, projects, repositories, members, etc.) and process that data locally.
I cannot find any documentation regarding rate limiting and pagination. Does anyone have any experience with that?
There is some documentation for pagination on the Members API:
https://learn.microsoft.com/en-us/rest/api/azure/devops/memberentitlementmanagement/members/get?view=azure-devops-rest-6.0
But that is the only one; I couldn't find any documentation for any of the Git entities, e.g. repositories:
https://learn.microsoft.com/en-us/rest/api/azure/devops/git/repositories/list?view=azure-devops-rest-6.0
If someone could point me to the right documentation, or shed some light on these subjects, it would be great.
Thanks.
I cannot find any documentation regarding rate limiting and pagination. Does anyone have any experience with that?
There is a document about Service limits and rate limits, which describes the service limits and rate limits that all projects and organizations are subject to.
For rate limiting:
Azure DevOps Services, like many Software-as-a-Service solutions, uses multi-tenancy to reduce costs and to enhance scalability and performance. This leaves users vulnerable to performance issues and even outages when other users of their shared resources have spikes in their consumption. To combat these problems, Azure DevOps Services limits the resources individuals can consume and the number of requests they can make to certain commands. When these limits are exceeded, subsequent requests may be either delayed or blocked.
You could refer to the Rate limits documentation for details.
For pagination: generally, REST APIs return paginated responses, and the Azure DevOps REST APIs normally limit each response to 100 or 200 items per page (depending on the API). The way to retrieve the next page is to read the x-ms-continuationtoken response header and pass its value as the continuationToken parameter of the next request.
But Microsoft does not document this very well - this should have been mentioned in every API call that supports continuation tokens:
Builds - List:
GET https://dev.azure.com/{organization}/{project}/_apis/build/builds?definitions={definitions}&continuationToken={continuationToken}&maxBuildsPerDefinition={maxBuildsPerDefinition}&deletedFilter={deletedFilter}&queryOrder={queryOrder}&branchName={branchName}&buildIds={buildIds}&repositoryId={repositoryId}&repositoryType={repositoryType}&api-version=5.1
If I use the above REST API with $top=50, as expected I get 50 builds back and a header called "x-ms-continuationtoken"; we can then loop over the results using the continuation token:
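A rough sketch of that loop in Python (the organization, project, and personal access token are placeholders; this assumes the Builds - List endpoint shown above, which returns its continuation token in the x-ms-continuationtoken header):

import requests

organization, project = "myorg", "myproject"      # placeholders
url = f"https://dev.azure.com/{organization}/{project}/_apis/build/builds"
auth = ("", "my-personal-access-token")            # placeholder PAT

params = {"api-version": "5.1", "$top": 50}
builds = []
while True:
    resp = requests.get(url, params=params, auth=auth)
    resp.raise_for_status()
    builds.extend(resp.json()["value"])
    token = resp.headers.get("x-ms-continuationtoken")
    if not token:
        break                                      # last page reached
    params["continuationToken"] = token            # ask for the next page

print(len(builds))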
You could check this similar thread for some more details.
I think most of the APIs have $top/$skip query parameters. You can use these parameters to do pagination. Let's say the default run gives 200 documents in the response. For the next run, skip those 200 by providing $skip=200 in the query parameters of the request to get the next 200 items. You can keep iterating until the count attribute of the response becomes 0.
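A rough sketch of that $top/$skip loop in Python (the URL and PAT are placeholders; not every list API honours $top/$skip, so check the documentation of the specific API first):

import requests

# Placeholder URL: substitute a list API that actually supports $top/$skip.
url = "https://dev.azure.com/myorg/myproject/_apis/SOME-LIST-API"
auth = ("", "my-personal-access-token")   # placeholder PAT
page_size = 200

items, skip = [], 0
while True:
    params = {"api-version": "6.0", "$top": page_size, "$skip": skip}
    data = requests.get(url, params=params, auth=auth).json()
    if data["count"] == 0:        # nothing left: stop iterating
        break
    items.extend(data["value"])
    skip += page_size             # skip what has already been fetched

print(len(items))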
For those APIs where you don't have these parameters, you can use the continuation token as mentioned by Leo Liu-MSFT.
It looks like you can pass $top and continuationToken to list Azure Git Refs.
The documentation is here:
https://learn.microsoft.com/en-us/rest/api/azure/devops/git/refs/list?view=azure-devops-rest-6.0

Controlling IIS BITS uploads

I'm running an IIS web site (built using ASP.NET/MVC) that among other things collects files from multiple agents that anonymously upload the files via BITS.
I need to make sure that only files uploaded from known sources and matching a certain predefined file name pattern will be accepted by IIS. All other BITS upload attempts must be cancelled.
As I understand it, BITS uses an ad hoc protocol over HTTP 1.1 using the "BITS_POST" verb. So, ideally, I'd like to hook into IIS, analyze a BITS_POST request's info and, if it does not satisfy my preconditions, drop the request.
I've tried to create and register a filter implementing IActionFilter.OnActionExecuting, but it seems that my filter does not receive BITS_POST requests.
I'd be glad to hear if somebody have implemented similar BITS related solutions and how this was done. Anyway, other ideas are welcome too.
Regards,
Natan
I have never worked with BITS; frankly, I don't know what it is.
What I usually do in such situations is implement an HTTP module. In its begin-request event, you can examine the incoming HTTP request data and decide to stop processing the request if the data does not comply with your requirements. You have full access to the HttpContext.Current.Request object from HTTP module code.
With HTTP modules, you can execute .NET code even before entering the ASP.NET pipeline.

Client Server API pattern in REST (unreliable network use case)

Let's assume we have a client/server interaction happening over an unreliable network (packet drops). A client is calling the server's RESTful API (HTTP over TCP):
issuing a POST to http://server.com/products
server is creating an object of "product" resource (persists it to a database, etc)
server is returning 201 Created with a Location header of "http://server.com/products/12345"
! The TCP packet containing the HTTP response gets dropped, and this eventually leads to a TCP connection reset
I see the following problem: the client will never get an ID of a newly created resource yet the server will have a resource created.
Questions: Is this application level behavior or should framework take care of that? How should a web framework (and Rails in particular) handle a situation like that? Are there any articles/whitepapers on REST for this topic?
The client will receive an error when the server does not respond to the POST. The client would then normally re-issue the request, assuming it has failed. Off the top of my head I can think of two approaches to this problem.
One is that the client can generate some kind of request identifier, such as a GUID, which it includes in the request. If the server receives a POST request with a duplicate GUID, then it can refuse it.
The other approach is to PUT instead of POST to create. If you cannot get the client to generate the URI then you can ask the server to provide a new URI with a GET and then do a PUT to that URI.
If you search for something like "make POST idempotent" you will probably find a bunch of other suggestions on how to do this.
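As a rough sketch of the first approach from the client side, in Python (the X-Request-Id header name is an assumption; the server would have to be built to deduplicate on whatever key you choose, and http://server.com/products is the endpoint from the question):

import uuid
import requests

def create_product(payload, retries=3):
    # One key per logical request, reused verbatim on every retry,
    # so the server can recognise and refuse duplicate attempts.
    request_id = str(uuid.uuid4())
    headers = {"X-Request-Id": request_id}   # assumed header name
    for _ in range(retries):
        try:
            resp = requests.post("http://server.com/products",
                                 json=payload, headers=headers, timeout=5)
            return resp.headers.get("Location")   # URI of the created resource
        except requests.exceptions.RequestException:
            continue   # response lost or connection reset: retry with the same key
    raise RuntimeError("could not confirm product creation")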
If it isn't reasonable for duplicate resources to be created (e.g. products with identical titles, descriptions, etc.), then unique identifiers can be generated on the server which can be tracked against created resources to prevent duplicate requests from being processed. Unlike Darrel's suggestion of generating unique IDs on the client, this would also prevent separate users from creating duplicate resources (which you may or may not find desirable). Clients will be able to distinguish between "created" responses and "duplicate" responses by their response codes (201 and 303 respectively, in my example below).
A sketch in Python for generating such an identifier — in this case, a hash of a canonical representation of the request. Here create_product_from stands in for the real persistence logic, and the handler returns the HTTP status code alongside the resource rather than building a full response:
import hashlib

products = {}   # maps identifier -> created product resource

def product_post(request):
    # request is assumed to be a dict with name/maker/tags/desc fields.
    # The canonical representation need not contain every field in
    # the request, just those which contribute to its "identity".
    tags = ",".join(sorted(request["tags"]))
    canonical = "|".join([request["name"], request["maker"], tags, request["desc"]])
    product_id = hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    if product_id in products:
        return 303, products[product_id]                   # duplicate: point at the existing resource
    products[product_id] = create_product_from(request)    # placeholder for real persistence
    return 201, products[product_id]                       # newly created
This ID may or may not be part of the created resources' URIs. Personally, I'd be inclined to track them separately — at the cost of an extra lookup table — if the URIs were going to be exposed to users, as hashes tend to be ugly and difficult for humans to remember.
In many cases, it also makes sense to "expire" these unique hashes after some time. For example, if you were to make a money transfer API, a user transferring the same amount of money to the same person a few minutes apart probably indicates that the client never received the "success" response. If a user transfers the same amount of money to the same person once a month, on the other hand, they're probably paying their rent. ;-)
The problem as you describe it boils down to avoiding what are called double-adds. As mentioned by others, you need to make your POSTs idempotent.
This can be easily implemented at the framework level. The framework can keep a cache of completed responses. Each request has to carry a unique request identifier so that any retries are treated as retries, and not as new requests.
If the successful response gets lost on its way to the client, the client will retry with the same request identifier, and the server will then respond with its cached response.
You are left with the durability of the cache, how long to keep responses, and so on. One approach is to remove responses from the server cache after a given period of time; this will depend on your application domain and traffic, and can be left as a configurable setting in the framework piece. Another approach is to force the client to send acknowledgements. The acks can be sent either as separate requests (note that these could be lost too) or as extra data piggybacked on real requests.
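A minimal sketch of such a cache in Python (the handle() wrapper, the request_id argument, and the TTL value are assumptions for illustration, not part of any particular framework):

import time

CACHE_TTL = 600      # seconds to keep completed responses (assumed value)
_responses = {}      # request identifier -> (timestamp, cached response)

def handle(request_id, request, process):
    now = time.time()
    # Expire old entries so the cache does not grow without bound.
    for rid, (ts, _) in list(_responses.items()):
        if now - ts > CACHE_TTL:
            del _responses[rid]
    if request_id in _responses:
        return _responses[request_id][1]    # retry detected: replay the cached response
    response = process(request)             # first attempt: do the real work
    _responses[request_id] = (now, response)
    return response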
Although what I suggest is similar to what others suggest, I strongly encourage you to keep this layer of network resiliency doing only that: dealing with dropped requests/responses. Don't let it also deal with duplicate resources from separate requests, which is an application-level task. Merging both pieces will mush all the functionality together and will not leave you with a clear separation of responsibilities.
Not an easy problem, but if you keep it clean you can make your app much more resilient to bad networks without introducing too much complexity.
And for some related experiences by others go here.
Good luck.
As the other responders have pointed out, the basic problem here is that the standard HTTP POST method is not idempotent like the other methods. There is an effort underway to establish a standard for an idempotent POST method known as Post-Once-Exactly, or POE.
Now I'm not saying that this is a perfect solution for everybody in the situation you describe, but if it is the case that you are writing both the server and the client, you may be able to leverage some of the ideas from POE. The draft is here: https://datatracker.ietf.org/doc/html/draft-nottingham-http-poe-00
It isn't a perfect solution, which is probably why it hasn't really taken off in the six years since the draft was submitted. Some of the problems, and some clever alternate options are discussed here:
http://tech.groups.yahoo.com/group/rest-discuss/message/7646
HTTP is a stateless protocol, meaning the server can't open an HTTP connection; all connections are initiated by the client. So you can't solve such an error on the server side.
The only solution I can think of: if you know which client created the product, you can supply it with the products it created, provided it pulls that information. If the client never contacts you again, you won't be able to transmit information about the new product.

Best way to redirect image requests to a different webserver?

I am trying to reduce the load on my webservers by adding an "Image server" (a dedicated server for handling image requests), and redirecting all requests for .gif, .jpg, .png, etc. to it.
My question is, what is the best way to handle the redirection?
At the firewall level? (can I do this using iptables?)
At the load balancer level? (can ldirectord handle this?)
At the apache level - using rewrite rules?
Thanks for any suggestions on the best way to do this.
--Update--
One thing I would add is that these are domains that are hosted for 3rd parties, so I can't expect all the developers to modify their code and point their images to another server.
The further up the chain you can do it, the better.
Ideally, do it at the DNS level by using a different domain for your images (eg imgs.example.com)
If you can afford it, get someone else to do it by using a CDN (Content delivery network).
-Update-
There are also 2 features of Apache's mod_rewrite that you might want to look at. They are both described well at http://httpd.apache.org/docs/1.3/misc/rewriteguide.html.
The first is under the heading "Dynamic Mirror" in the above document, and uses the mod_rewrite proxy flag [P]. This lets your server silently fetch files from another domain and return them.
The second is to just redirect the request to the new domain. This second option puts less strain on your server, but requests still need to come in, and it slows down the final rendering of the page, as each image still triggers an essentially redundant request to your server first.
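As a rough illustration of the two options (Apache mod_rewrite configuration; images.example.com is a placeholder hostname, and the proxy variant additionally requires mod_proxy to be enabled):

RewriteEngine On
# Option 1: "dynamic mirror" - silently proxy image requests to the image server
RewriteRule ^/images/(.+\.(gif|jpe?g|png))$ http://images.example.com/images/$1 [P,L]
# Option 2 (instead of option 1): redirect the client to fetch the image directly
# RewriteRule ^/images/(.+\.(gif|jpe?g|png))$ http://images.example.com/images/$1 [R=302,L]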
I agree with rikh. If you want images to be served from a different web server, then serve them from a different web server. For example:
<IMG src="images/Brett.jpg">
becomes
<IMG src="http://brettnesbitt.akamia-technologies.com/images/Brett.jpg">
Any kind of load balancer will still feed the image from the web-server's pipe, which is what you're trying to avoid.
I, of course, know what you really want. What you really want is for any request like:
GET images/Brett.jpg HTTP/1.1
to automatically get converted into:
HTTP/1.1 307 Temporary Redirect
Location: http://brettnesbitt.akamia-technologies.com/images/Brett.jpg
This way you don't have to do any work, except copy the images to the other web server.
That, I really don't know how to do.
By using the phrase "NAT", you imply that the firewall/router receives HTTP requests, and you want to forward the request to a different internal server if the HTTP request was for image files.
This then raises the question of what you're actually trying to save. No matter which internal web server services the HTTP request, the data still has to flow through the firewall/router's pipe.
The reason I bring it up is that the common scenario for serving images from a different server is wanting to split high-bandwidth, mostly static, low-CPU-cost content off from the actual application logic.
Only using NAT to rewrite the packet and send it to a different server will not help with that common goal.
The other reason might be because images are not static content on your system, and a request to
GET images/Brett.jpg HTTP/1.1
actually builds an image on the fly, at a high CPU cost, or uses data (e.g. a SQL Server database) available only to ServerB.
If this is the case, then I would still use a different server name for the image requests:
GET http://www.brettsoft.com/default.aspx HTTP/1.1
GET http://imageserver.brettsoft.com/images/Brett.jpg HTTP/1.1
I understand what you're hoping for, with network packet inspection overriding the NAT rule and sending the request to another server; I've never seen anything that can do that.
It sounds more "proxy-ish", where a web proxy does this. (For instance, pfSense and m0n0wall can't do it.)
Which then leads to a kind of solution we used once: a custom web server that analyzes the request, makes the appropriate request to some internal server, and writes the binary response back to the client.
That pain-in-the-ass solution was insisted upon by a "security consultant", who apparently believes in security through obscurity.
I know IIS cannot do such things for you itself; I don't know about other web server products.
I just asked around, and apparently if you wanted to write a custom kernel module for your Linux-based router, you could have it inspect packets and take appropriate action. Such a module might already exist; there are, apparently, plenty of other open-source modules to use as a starting point.
But I'd rather shoot myself in the head.

Resources