I have an RDF file stored on my server. The file, or at least its content, should be uploaded to a remote GraphDB via its API.
According to the documentation there are two ways to do this. The first one is uploading it to the server files and then loading it into GraphDB. The problem here is that I am not the owner of the server GraphDB is running on, so I can't upload anything to the server files. Or is there maybe another API for that?
The other way is providing a public API on my server and then triggering GraphDB to download the file from my server. But my API must be protected with credentials or a JWT, and I don't know how to set the credentials in the API call.
Isn't there a way to upload a simple graph to a repository?
There is a browser-based user interface in GraphDB that allows you to import from local files. If this is allowed on the server you are connecting to, and you only need to do this once then I think this would be the quickest route to go.
If you want to upload a local file to GraphDB using dotNetRDF, then I would advise you to use the SPARQL 1.1 graph store protocol API via the VDS.RDF.Storage.SparqlHttpProtocolConnector as described here. The base URL you need to use will depend on the configuration of the server and possibly also on the version of GraphDB that it is running, but for the latest version (9.4) the pattern is: <RDF4J_URL>/repositories/<repo_id>/rdf-graphs/service
The connector supports HTTP Basic authentication (which is one of the options offered by GraphDB), so if you have a user name and password you could use the SetCredentials method on the connector to specify those credentials and, if necessary, force the use of HTTP Basic authentication by setting the global options property VDS.RDF.Options.ForceHttpBasicAuth to true.
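The protocol itself is just HTTP, so independent of dotNetRDF you can sanity-check the endpoint and credentials with any HTTP client. A minimal Python sketch (the server URL, repository id and credentials below are placeholders) that builds, but does not send, the Graph Store Protocol request with a pre-emptive Basic auth header:

```python
import base64
import urllib.request

def build_graph_store_request(rdf4j_url, repo_id, rdf_data, user, password,
                              content_type="text/turtle"):
    """Build (but do not send) a POST request against the SPARQL 1.1
    Graph Store Protocol endpoint of a GraphDB repository.
    '?default' targets the default graph of the repository."""
    url = f"{rdf4j_url}/repositories/{repo_id}/rdf-graphs/service?default"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url,
        data=rdf_data.encode("utf-8"),
        method="POST",
        headers={
            "Content-Type": content_type,
            # Pre-emptive Basic auth, mirroring what ForceHttpBasicAuth does
            "Authorization": f"Basic {token}",
        },
    )

req = build_graph_store_request("http://example.org/rdf4j-server", "myrepo",
                                "<urn:s> <urn:p> <urn:o> .", "alice", "secret")
# urllib.request.urlopen(req) would perform the actual upload
```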
I am trying to stream a file from a remote storage service (not s3 :-)) to the client using Ruby on Rails 4.2.
My server needs to stay in the middle to authenticate the client request, but also to build up the request to the remote storage service, since all requests to that service need to be authenticated using a custom header parameter. This makes it impossible to do a simple redirect_to and let the client download the file directly (but do let me know if this IS in fact possible using Rails!). I also want to keep the URL of the file cloaked from the client.
Up until now I have been using a gem called ZipLine, but this does not work either, as it still buffers the remote file before sending it to the client. As I am using unicorn/nginx, this might also be due to a setting in either of those two that prevents proper streaming.
As per the Rails docs' instructions I have tried adding
listen 3000, tcp_nopush: false
to config/unicorn.rb but to no avail.
A solution might be to cache the remote file locally for a certain period and just serve that file. This would make some things easier but also creating new headaches like keeping the remote and cached files in sync, setting the right triggers for cache expiration, etc.
So to sum up:
1) How do I accomplish the scenario above?
2) If this is not an intelligent/efficient way of doing things, should I just cache a remote copy?
3) What are your experiences/recommendations in given scenario?
I have come across various solutions scattered around the interweb, but none offers a complete solution.
Thanks!
I am assuming the third-party storage service has HTTP access. Since you considered using redirect_to, I assume the service also provides a means of per-download authorization, like a unique, expiring key in a header that does not expose your secret API keys, or an HMAC-signed URL with an expiration time as a parameter.
Anyhow, most cloud storage services provide this kind of file access. I would highly recommend letting the service stream the file. Your app should simply authorize the user and redirect to the service. Rails allows you to add custom headers while redirecting; it is discussed in the Rails Guides:
10.2.1 Setting Custom Headers

If you want to set custom headers for a response then response.headers is the place to do it. The headers attribute is a hash which maps header names to their values, and Rails will set some of them automatically. If you want to add or change a header, just assign it to response.headers.
So your action code would end up being something like this:
def download
  # do_auth_check
  response.headers["Your-API-Auth-Key"] = "SOME-RANDOM-STRING"
  redirect_to url
end
Don't use up unnecessary server resources by streaming all those downloads through your app. We are paying cloud services to do that, after all :)
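For completeness, the expiring HMAC-signed URL mentioned above can be sketched in a few lines. This is only illustrative: a real storage service dictates its own signing scheme, and the parameter names here are made up.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

def sign_url(base_url, secret_key, expires_in=300, now=None):
    """Append an expiry timestamp and an HMAC-SHA256 signature to a URL.
    The storage service recomputes the HMAC over "url|expires" and
    rejects the request if the signature differs or the time has passed."""
    now = int(time.time()) if now is None else now
    expires = now + expires_in
    payload = f"{base_url}|{expires}".encode()
    sig = hmac.new(secret_key.encode(), payload, hashlib.sha256).hexdigest()
    return f"{base_url}?{urlencode({'expires': expires, 'sig': sig})}"
```

The shared secret stays on the two servers; the client only ever sees a URL that stops working after `expires_in` seconds.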
I have to consume a web service with Delphi XE3 to retrieve information from a remote web server, based on a unique number that I must send with my XML request.
I have the link address of the WSDL file, which I can import into my project, but I do not have a URL to send my request to.
Instead, according to the administrator of the remote web service, I have to address the SOAP interface on localhost, and in the WSDL file the defurl is defined as:
http://localhost:8080/.....
So my question is: how to do that ?
All the examples of consuming a web service with Delphi that I found use an external URL to send the request to, but I found none that retrieves remote information by listening on localhost.
Do I have to install an additional program, or where do I find a tutorial to manage this?
Thank you for any help
You know, this is not really a Delphi question, is it? It is more a question about the protocol, the IDE or the test environment.
Anyway:
If you want to test your application on your local host, you have to have an instance of the server software that provides the service you wish to use. If you don't have it and still want to test locally (and you are totally aware of the answers the server should send), you can fake it by setting up an RPC or (at least) an HTTP server on your computer. I would not recommend it, though, since it will only test your application against your expectations instead of a real-life scenario.
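As a concrete illustration of the "fake it" option: a throwaway HTTP stub on localhost that answers every POST with a canned SOAP envelope. The envelope body here is invented; you would paste in a response captured from, or documented for, the real service.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_REPLY = b"""<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><GetInfoResponse>stub</GetInfoResponse></soap:Body>
</soap:Envelope>"""

class StubSoapHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Consume the request body, then return the canned envelope
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.end_headers()
        self.wfile.write(CANNED_REPLY)

    def log_message(self, *args):  # keep the console quiet
        pass

def start_stub(port=0):
    """Start the stub on a background thread; port=0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), StubSoapHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server  # server.server_port tells you the bound port
```

Point the Delphi client at the stub's port (e.g. start it with `port=8080` to match the WSDL), and it will receive the canned reply for every request.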
I'm using Apache Wink to access a service, and trying to debug a problem where the server apparently does not receive my request in the intended format (details below, but they are probably immaterial). Is there a way I can make the Wink client log the HTTP requests it makes to the server, so that I can see what is being sent down the wire?
Details: I'm using Eclipse Lyo to create a ChangeRequest in RTC (Rational Team Concert) using their OSLC v2 REST APIs (Eclipse Lyo internally uses Apache Wink). In doing so, even though I've set a "Filed Against" property in the ChangeRequest being submitted, RTC does not recognize it and complains that it is missing.
I think it's better to use a proxy to monitor the traffic. If your client runs on Windows, Fiddler is a very nice tool.
What's the correct way of transferring media (photos or movies) using Worklight Adapters?
I sent a photo via the adapter and got the error: form too large, exceeds the maximum size...
I read that I need to change the form size through Jetty,
but the server I'll deploy the app to won't be Jetty, so what shall I do?
Thanks!
Please see topic Uploading large (and binary) files to Worklight adapter.
Basically, Worklight does not have the equivalent to an HTTP POST mechanism that allows you to transfer arbitrarily large chunked data. For large files of unknown sizes (photos, video, audio) you'll need to upload the file to the server outside the Worklight adapter framework. For example you could simply post it to a web server you have configured. In my case (in the above referenced answer) I needed to create an entire client-server mechanism to negotiate a port and key, start listening on that port, then accept requests and ensure the posting client passes the key as authorization to transfer the secure data.
Hopefully IBM will provide a formal service for this in a future release.
Adapters do not work with html forms, they work with data.
You will need to convert your image to base64 and submit it as an adapter invocation parameter.
Having more information regarding what exactly you're trying to achieve might be helpful.
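Framework aside (a Worklight client would do this in JavaScript with its own base64 utility), the round trip itself is trivial; a Python sketch of both ends:

```python
import base64

def encode_image(data: bytes) -> str:
    """Client side: base64-encode raw image bytes so they can travel
    as an ordinary string parameter in an adapter invocation."""
    return base64.b64encode(data).decode("ascii")

def decode_image(b64_string: str) -> bytes:
    """Server side: recover the original bytes from the parameter."""
    return base64.b64decode(b64_string)
```

Keep in mind that base64 inflates the payload by roughly a third, so any size limit on the invocation applies to the encoded string, not the original file.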
When uploading files to Amazon S3 using the browser http upload feature, I know I can specify a success_action_redirect field/value that will tell my browser where to go when the upload is done.
I'm wondering: is it possible to ask Amazon to make a web hook style POST request to my web server whenever a file gets uploaded?
Basically, I want a way of being notified whenever a client uploads a new file, so that my server can process the upload. I'd like to do this without relying on the client to make the request to my server to tell me the file has been uploaded (never trust the client, right?).
They just recently announced AWS Lambda which lets you run code in response to events, with S3 uploads being one of the supported events.
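A Lambda handler for those S3 events only needs to unpack the documented event structure; a minimal Python sketch (the actual processing is left as a stub):

```python
def handler(event, context):
    """AWS Lambda entry point for S3 object-created events.
    Extracts (bucket, key) pairs from the event and returns them;
    a real handler would kick off your processing here."""
    uploads = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        uploads.append((s3["bucket"]["name"], s3["object"]["key"]))
    return uploads
```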
Amazon can publish a notification to SNS or SQS when an object has been created in your specified S3 bucket.
http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
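The notification configuration attached to the bucket is a small document; for example, a JSON configuration (the topic ARN is a placeholder) that publishes every object-created event to an SNS topic:

```json
{
  "TopicConfigurations": [
    {
      "TopicArn": "arn:aws:sns:us-east-1:123456789012:uploads-topic",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
```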
There is no support from Amazon for this as yet, but we can get around it with other tools like s3cmd, which allow us to write cron jobs that notify us of any change in the keys on S3. So if a new key is created (detected via its timestamp) we could have the job send a GET request to our server endpoint listening for updates from S3, with the associated metadata.
We could use GET or POST here, as the data would be very minimal, I think. Probably form data with POST should do.
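The core of such a cron job is just a diff of two listings; a Python sketch (the listings would come from s3cmd ls or the S3 API, keyed by last-modified timestamp):

```python
def detect_new_keys(previous, current):
    """Given two snapshots of the bucket listing (key -> last-modified
    timestamp), return keys that are new or have changed since the
    previous poll, ready to be POSTed to the notification endpoint."""
    return sorted(key for key, ts in current.items()
                  if previous.get(key) != ts)
```

Persist the previous snapshot between runs (a flat file is enough), and the job only fires requests for genuinely new or modified objects.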