My application generates a complex calendar at runtime: each user has tasks at specific dates/times, and every task has a description plus some properties.
I have been asked to "publish" this calendar as webcal. I know nothing about webcal, so I wonder whether anyone here has already tried it and can share comments or suggestions.
One issue is how to identify a user: since the calendar is multi-user, how do I publish an individual calendar for each user?
I am thinking of a kind of Delphi service application that runs continuously, publishing the calendar.
It depends on whether your users need write access to their calendar.
I once wrote a simple (command-line) utility that exports a single (.ics) calendar file. If changed, it uploads the exported .ics file to a web server, where it can be picked up by calendar clients (e.g. Google Calendar, iCalendar, Sunbird, Outlook). Publishing for different users can easily be done by uploading the .ics file to a different folder for each user.
Next, I scheduled this utility to run regularly. Of course, you could have your Delphi service do this regularly.
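For illustration, here is a minimal sketch of that export-and-upload step, written in Python rather than Delphi, and with made-up task fields, URL, and credentials:

```python
# Sketch only: build a per-user .ics file and upload it to a per-user folder
# on a web server (e.g. via HTTP PUT to a WebDAV-enabled location).
# Task fields, URL and credentials below are placeholders.
import requests  # pip install requests

def build_ics(tasks):
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//MyApp//Calendar//EN"]
    for t in tasks:
        lines += [
            "BEGIN:VEVENT",
            f"UID:{t['id']}@example.com",
            f"DTSTAMP:{t['stamp']}",       # e.g. 20240101T120000Z
            f"DTSTART:{t['start']}",
            f"SUMMARY:{t['description']}",
            "END:VEVENT",
        ]
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines) + "\r\n"     # iCalendar requires CRLF line endings

def publish(user, tasks):
    ics = build_ics(tasks)
    # One folder (or file) per user keeps the individual calendars separate.
    url = f"https://calendars.example.com/{user}/calendar.ics"
    requests.put(url, data=ics.encode("utf-8"),
                 auth=("publisher", "secret"),             # basic auth, over SSL
                 headers={"Content-Type": "text/calendar"})
```

The same idea translates directly to a Delphi service: build the .ics text, then PUT it to a per-user folder over HTTPS whenever the data has changed.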
Many calendar clients understand http:// URLs as well as webdav:// URLs. Authentication can be arranged by using one of the regular HTTP authentication schemes. Of course, you’ll want to use SSL to secure things.
The only limitation of this approach is that the resulting calendar is, in effect, read-only.
If you want to provide write access, you'll need a real webcal server. A real webcal implementation would mean supporting the WebDAV protocol (itself an extension of HTTP) on the server, and picking up the changes from your Delphi service. Either that, or writing a WebDAV/CalDAV server in Delphi (e.g. with Indy, by extending the TIdHTTPServer component, since Indy doesn't sport a TIdWebDAVServer component).
You’d have to process all the webdav-specific commands yourself (using the OnCommandOther event), according to the WebDAV specs. This question about writing a WebDAV server may provide some pointers...
Alternatively, you could use a 3rd-party webdav server, and pick up any changes from your Delphi service.
I have found some, but not all, pieces of the puzzle.
Using Graph APIs, when a user selects a document in my own web application, I can (roughly sketched after this list):-
Create a new temporary folder in their OneDrive account
Upload my.docx file to this location
Get the url for my.docx
Open the URL in a new tab, loading Office 365's MSWord editor (or viewer and editor after one more click)
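Roughly, those steps map onto plain Graph REST calls like this (a sketch only: token acquisition is omitted, and the folder and file names are examples):

```python
# Sketch: create a temp folder, upload my.docx, get the web URL for editing.
# The access token, folder name and file path are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}   # delegated token, acquisition omitted

# 1. Create a new temporary folder in the user's OneDrive
folder = requests.post(f"{GRAPH}/me/drive/root/children", headers=headers, json={
    "name": "my-temp-edit-session",
    "folder": {},
    "@microsoft.graph.conflictBehavior": "rename",
}).json()

# 2. Upload my.docx into that folder (simple upload; fine for small files)
with open("my.docx", "rb") as f:
    item = requests.put(
        f"{GRAPH}/me/drive/items/{folder['id']}:/my.docx:/content",
        headers=headers, data=f.read()).json()

# 3. The driveItem's webUrl opens the document in Office 365's Word editor
print(item["webUrl"])   # open this in a new browser tab
```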
This is where it gets a bit trickier. How can I get the edited content back into the location where my web application historically stored these documents?
Theorising, I can:-
Create a webhook subscription to the new folder I create
Implement a webhook listener (and validation) service
When the listener receives an 'update' notice for the document:-
Call the content download API or, from the driveItem metadata, download it from @microsoft.graph.downloadUrl
Persist it to my desired location within my web application (see the sketch after this list)
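A minimal sketch of that listener-plus-download flow (Flask is just an example host; creating the subscription via POST /subscriptions and token handling are omitted, and the paths and ids are placeholders):

```python
# Sketch: Graph change-notification listener that re-downloads the edited file.
import requests
from flask import Flask, request, Response

app = Flask(__name__)
GRAPH = "https://graph.microsoft.com/v1.0"
KNOWN_ITEM_ID = "<driveItem id remembered from the upload step>"

@app.route("/graph-listener", methods=["POST"])
def graph_listener():
    # Subscription validation handshake: echo the token back as plain text.
    token = request.args.get("validationToken")
    if token:
        return Response(token, mimetype="text/plain")

    # Drive notifications may only say "something changed"; since we already
    # know which item we care about, we simply re-fetch it by id.
    for _ in request.json.get("value", []):
        headers = {"Authorization": "Bearer <access-token>"}
        meta = requests.get(f"{GRAPH}/me/drive/items/{KNOWN_ITEM_ID}",
                            headers=headers).json()
        # Pre-authenticated download URL exposed in the driveItem metadata
        content = requests.get(meta["@microsoft.graph.downloadUrl"]).content
        with open("/my/app/storage/my.docx", "wb") as f:
            f.write(content)

    # Acknowledge quickly so Graph does not retry or drop the subscription.
    return "", 202
```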
To me this sounds like it'll suffer from big delays. The webhook subscriptions typically send batches of changes and the frequency looks uncertain. It certainly wouldn't be great for versioning every individual save operation during the editing session.
Have I missed some more obvious path to Word as a Service? i.e. another API or a mixture of APIs?
Alternatives I've considered but haven't yet scoped: implement WOPI or WebDav within my own web application.
It sounds like you're only using OneDrive to take advantage of its built-in support for the MS-WOPI protocol. WOPI is basically an enhanced WebDAV interface that Office uses to work with remote documents (i.e. files stored on OneDrive, Box, Dropbox, etc.).
Your solution is generally fine and it is certainly easy enough to orchestrate. You can absolutely use webhooks to subscribe to changes to the file. You'll likely want some mechanism in your app to notify your system when they're "done" so you can clean up the file afterwards.
If you want a more robust solution, you'll need to look at WOPI. Implementing WOPI would allow you to keep these files on your system permanently. Office Online would use the WOPI interface to speak with your storage system and open/save/edit files in-place.
Keep in mind that implementing WOPI (or any protocol for that matter) is often a non-trivial endeavor. You will also need to get your final solution validated and whitelisted by Office before it can be used. Details on this process and how to request access can be found at the Microsoft Cloud Storage Provider Program website.
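To give a feel for what implementing WOPI entails, a host boils down to a handful of REST endpoints that Office Online calls against your storage. A very reduced sketch only; the real protocol also requires locking, access-token validation, many more CheckFileInfo properties, and the validation/whitelisting mentioned above:

```python
# Extremely reduced WOPI-host sketch (Flask as an example web framework).
import os
from flask import Flask, request, jsonify, send_file

app = Flask(__name__)
STORAGE = "/my/app/storage"   # wherever your application already keeps documents

@app.route("/wopi/files/<file_id>", methods=["GET"])           # CheckFileInfo
def check_file_info(file_id):
    path = os.path.join(STORAGE, file_id)
    return jsonify({
        "BaseFileName": file_id,
        "Size": os.path.getsize(path),
        "OwnerId": "my-app",
        "UserId": "demo-user",
        "Version": str(int(os.path.getmtime(path))),
        "UserCanWrite": True,
    })

@app.route("/wopi/files/<file_id>/contents", methods=["GET"])  # GetFile
def get_file(file_id):
    return send_file(os.path.join(STORAGE, file_id))

@app.route("/wopi/files/<file_id>/contents", methods=["POST"])  # PutFile (save)
def put_file(file_id):
    with open(os.path.join(STORAGE, file_id), "wb") as f:
        f.write(request.get_data())
    return "", 200
```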
Today, OneNote and Excel are the only Office "document clients" with publicly available REST APIs in the Microsoft Graph.
The only other "publicly available options" I'm aware of are:
The WOPI APIs, which act somewhat like a REST API but are much older
The Office add-in model (hosted in a client) with the JavaScript API
The Word object library (old, relies on DCOM, and needs Office installed and licensed on the machine)
I am working on developing an integration with Workday. In my initial analysis, I found that Workday provides multiple WSDLs for different modules such as "Human Resources" and "Inventory". I can see the complete list at https://community.workday.com/sites/default/files/file-hosting/productionapi/operations/index.html
I am trying to understand how I can get this list programmatically in my integration, so that my user can select one of the WSDLs rather than typing in the full WSDL name. Please share your thoughts on this.
You can programmatically retrieve a list of all web service operations by creating a Custom Report based on the "Public Web Services" data source. The report can then be exposed as a RESTful WS for easy retrieval.
Some fields you can include in the report are: web service name, supported operations, API version, endpoint URL, WSDL URL, and so on.
This is highly customisable, in the sense that you can query the RESTful WS report for specific versions, specific operations, etc., via prompts / URL params.
The report-as-a-service also supports a variety of output formats, as well as its own WSDL.
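Consuming that report-as-a-service from your integration is then just an authenticated HTTP GET. A sketch, where the host, tenant, report owner, report name, and field aliases are all placeholders for whatever you actually build in Workday:

```python
import requests

# Placeholder RaaS URL: https://<host>/ccx/service/customreport2/<tenant>/<owner>/<report-name>
RAAS_URL = ("https://wd2-impl-services1.workday.com/ccx/service/customreport2/"
            "mytenant/integration_user/Public_Web_Service_Operations")

resp = requests.get(RAAS_URL,
                    params={"format": "json"},          # RaaS also supports XML, CSV, RSS...
                    auth=("integration_user@mytenant", "password"))
resp.raise_for_status()

# Field aliases depend on how the custom report was defined in Workday.
for entry in resp.json().get("Report_Entry", []):
    print(entry.get("Web_Service"), entry.get("WSDL_URL"))
```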
The purpose of a SOAP WSDL is to generate a client stub, i.e. a model that lets your client interact with objects exposed or consumed by the service provider. You don't interact with a WSDL at runtime - you interact with the stub. If you want to make multiple services available, you have to include each WSDL in your client application at compile time and generate their stubs. The services in a given API version do not change, so there isn't a reason to do this dynamically.
To add to the original question: what we are trying to understand is whether there is an API call/request we could hit to get the list of available web services, so we can populate it in the UI for the user to select from.
For example, at https://community.workday.com/sites/default/files/file-hosting/productionapi/index.html we have Absence_Management, Academic_Advising, Academic_Foundation, and so on. We want to display this list to the end user so that they can select the web service to use, and we can then download the corresponding WSDL to work with.
With CloudKit, you can focus on your client-side app development and let iCloud eliminate the need to write server-side application logic. CloudKit provides you with Authentication, private and public database, structured and asset storage services — all for free with very high limits.
You cannot upload any code to run on Apple's servers?
I've heard it being compared to Google App Engine and other cloud computing platforms, but without the ability to run your own code, isn't the whole thing pretty limited and not really comparable?
For example, if I want to build a news app which periodically pushes stories on topics that the user is interested, then this can't be done just using CloudKit because I would need scheduled jobs and data processing on the server.
Any thoughts?
Server-side
As you said CloudKit doesn't allow server-side code.
But there are possibilities.
Crons
You don't want to connect to the iCloud Dashboard every day in order to perform the push by adding a record. One solution here is to code an app on a Mac server (I guess the Mac mini as a server will become more popular with CloudKit) that adds a new Daily CKRecord every day.
Subscriptions
The idea behind subscriptions is that the client registers for specific updates. You can create a record type called Daily, for instance, and make users register to it. You should check the Apple documentation and the WWDC14 videos (even if subscriptions are not covered in detail there, it's a good starting point).
The good thing is that push notifications are tied to the subscription concept. So basically you say: send me a notification for each new CKRecord of type Daily that is added.
BaaS party
What is the point of using CloudKit (vs. Parse and others)?
Price: CloudKit has a really nice pricing
Ready to go: 2 clicks inside XCode and you are ready to go
User consistency: you get free user login across all of a user's devices through their iCloud account, with a very good privacy system, and relationships are handled with a smart system.
But:
You are stuck on the Apple platform. We don't even know whether we could export the data...
Only data-centered for now (no server-side code)
The CloudKit dashboard is too limited
The future
CloudKit is still pretty new. At WWDC, some of the people behind it gave me to understand that they are still working heavily on it. My bet is that they are working on two important points:
Server side code execution through remote scheduled tasks
CloudKit for Analytics (Visualization side)
Edit: the Apple folks are fully aware of, and concerned about, the lack of web access to the data. That means that one day it may be accessible from other platforms. I read in a comment that Apple probably would have bought Parse if CloudKit weren't better; AFAIK they did try to buy Parse (an acqui-hire, it's said, but we don't really know).
Update WWDC15
CloudKit is now available in JS, and some dashboards are now available. Wait and see.
Update February 2016
CloudKit Now Supports Server-to-Server Web Service Requests
Web Services Reference
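This makes the "cron" scenario above possible without a Mac in the loop: a signed server-to-server request can insert the daily record from any backend. A rough sketch based on the Web Services Reference (container, environment, key id, and record fields are placeholders, and the signing details should be double-checked against the reference):

```python
import base64, datetime, hashlib, json, requests
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

CONTAINER = "iCloud.com.example.news"        # placeholder container id
KEY_ID = "<server-to-server key id>"          # from the CloudKit Dashboard
PATH = f"/database/1/{CONTAINER}/development/public/records/modify"

body = json.dumps({
    "operations": [{
        "operationType": "create",
        "record": {
            "recordType": "Daily",
            "fields": {"headline": {"value": "Today's stories"}},
        },
    }]
})

# Sign: [ISO8601 date]:[base64(SHA-256 of body)]:[request subpath]
date = datetime.datetime.utcnow().replace(microsecond=0).isoformat() + "Z"
body_hash = base64.b64encode(hashlib.sha256(body.encode()).digest()).decode()
message = f"{date}:{body_hash}:{PATH}".encode()

with open("eckey.pem", "rb") as f:
    key = serialization.load_pem_private_key(f.read(), password=None)
signature = base64.b64encode(key.sign(message, ec.ECDSA(hashes.SHA256()))).decode()

resp = requests.post(
    "https://api.apple-cloudkit.com" + PATH,
    data=body,
    headers={
        "X-Apple-CloudKit-Request-KeyID": KEY_ID,
        "X-Apple-CloudKit-Request-ISO8601Date": date,
        "X-Apple-CloudKit-Request-SignatureV1": signature,
    })
print(resp.json())
```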
In some cases we do not need server-side logic, and just storing static data can cover all the usage scenarios.
In such cases, it is very helpful to have free, accessible storage where you can put that data. CloudKit provides exactly that, rather than a full service platform.
Yes, it is limited, but it can still be useful for some people. For example, your case can actually be supported by CloudKit. Though CloudKit is essentially just storage, it supports subscriptions, which monitor a set of conditions and push event notifications to the client. It's fortunate that the only background-job-like feature CloudKit supports is exactly what you need.
Anyway, if you need more, then you might need to consider a full-fledged server. Even simple web services that offer basic server-side code execution are usually also limited.
You cannot upload any code to run on Apple's servers?
You can and you can't. You can't upload code / SOAP-based web services to the server; instead, you can upload/store observers on the server, called subscriptions.
whole thing pretty limited and not really comparable?
I would say that in CloudKit, as in any MBaaS, the client communicates with the server through a narrower but more robust interface: you cannot upload an exotic web service that does XML parsing and database manipulation and triggers push notifications based on it, but the RESTful architecture lets you perform the four basic operations on the data store, and with subscriptions the client can be notified about INSERT / UPDATE / DELETE operations performed on tables.
I think MBaaS is just the next step in the evolution of client-server architecture. At first it seems limiting, but you can do everything you could in the SOAP-based web services world. Development is extremely fast, scalable, and comfortable, and things like permissions, setup, server maintenance, and security need almost no effort.
Believe it or not, you can actually get REALLY far with this approach.
I've not used CloudKit, but I can describe for you my application stack:
AngularJS (or your favorite client side HTML rendering framework): A single page will host a series of templates/controllers selected by the router and driven by users changing the anchor to select which page they're on.
Firebase.io (or your favorite cloud storage): Any dynamic data goes into the cloud document store. The controller needs to load the data and render the template on the client, and when the data changes, send the data back. This also provides the authentication and authorization as well, since you can limit access to the data.
Now you need a place to serve the HTML/CSS/JS/images... which requires no 'server side code execution', just a web server where you can put the assets.
Using this technique you could store all the user's topics in the database for that user, and when the page loads, go and aggregate all the sources for those topics (also stored in the database) completely client side. There's nothing in your example application which actually requires server side execution that I can see, so long as you have cloud storage which will provide you with authentication and authorization services, and a 'dumb' web server for serving up static assets.
CloudKit isn't a full-fledged web hosting service. Instead, it's an SDK for iCloud. You shouldn't be putting a web site up there, just storing user data that you may want to use in multiple applications or platforms.
iCloud APIs enable your apps to store app data in iCloud, keeping your apps up to date automatically. Use iCloud to give your users a consistent and seamless experience across iCloud-enabled devices.
This is more of an architectural question rather than anything code-related. I'm working on a project to auto-create invoices in QuickBooks using data from our ERP, in order to speed up the process of adding that data. It seems that it is a requirement that in order to access data within the QuickBooks company file directly, you need to use the SDK (which actually opens the file using a QuickBooks application instance). Since my application will need to access two company files depending on where the request came from, I can't keep it open, and this opening/closing adds 15ish seconds to each request.
Why is this a requirement, does anyone know? Is there another way to directly access the data and bypass the massive overhead of the QuickBooks application?
Thanks!
Why is this a requirement, does anyone know?
My understanding (based on a really old forum post by an Intuit developer) is that the SDK works by making Windows COM calls that rely on a GUI message pump to push data into QuickBooks.
No GUI present = no message pump = no connection to QuickBooks. Thus, you need the GUI components of QB in place.
Is there another way to directly access the data and bypass the massive overhead of the QuickBooks application?
No.
If it's a problem, consider batching your requests, queueing them up, and sending more than one at a time in a batch. This is what the Web Connector (and most other QuickBooks integrated apps) do to avoid this issue. It's very, very rare that an application truly needs real-time connections to QuickBooks (and be careful if you think you do - QuickBooks is not a great candidate for real-time access to data - there's a lot of things that can lock you out of QuickBooks so you have to be careful if you're building an app that assumes you'll always be able to connect to it).
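For what it's worth, batching just means putting several requests inside one qbXML message, so you pay the open/close overhead once per batch rather than once per invoice. A sketch using the SDK's QBXMLRP2 COM object from Python via pywin32 (the company file path and invoice fields are placeholders):

```python
# Sketch: one QuickBooks session, several InvoiceAdd requests in one qbXML batch.
import win32com.client  # pip install pywin32

pending_invoices = [    # queued up from the ERP; fields here are placeholders
    {"customer": "Acme Corp", "item": "Consulting", "qty": 3},
    {"customer": "Globex", "item": "Widgets", "qty": 12},
]

requests_xml = "".join(
    f"""<InvoiceAddRq>
          <InvoiceAdd>
            <CustomerRef><FullName>{inv['customer']}</FullName></CustomerRef>
            <InvoiceLineAdd>
              <ItemRef><FullName>{inv['item']}</FullName></ItemRef>
              <Quantity>{inv['qty']}</Quantity>
            </InvoiceLineAdd>
          </InvoiceAdd>
        </InvoiceAddRq>"""
    for inv in pending_invoices)

qbxml = f"""<?xml version="1.0"?>
<?qbxml version="13.0"?>
<QBXML>
  <QBXMLMsgsRq onError="continueOnError">
    {requests_xml}
  </QBXMLMsgsRq>
</QBXML>"""

rp = win32com.client.Dispatch("QBXMLRP2.RequestProcessor")
rp.OpenConnection2("", "My ERP Integration", 1)                  # 1 = local QuickBooks
ticket = rp.BeginSession(r"C:\Company Files\MyCompany.qbw", 2)   # 2 = "don't care" open mode
try:
    response = rp.ProcessRequest(ticket, qbxml)                  # one round trip per batch
finally:
    rp.EndSession(ticket)
    rp.CloseConnection()
```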
I have some delphi code which, given a list of items, calculates the total price taking into account any special deals that might apply.
This code is non-trivial to rewrite in another language.
How should I set it up to communicate with a website running on the same server? The website will need to ask it for a price every time the user updates their shopping cart. It's possible that there will be multiple concurrent requests.
The delphi code needs to maintain an in-memory list of special deals, periodically refreshed from a database. So it cannot simply be executed every time or anything as simple as that.
I don't know what the website is written in, or even which http server it runs under, so I'm just looking for ideas or standard methods.
It sounds like the win32 app is already running as a Windows Service on the box. So, if you can't modify that service, you are going to have to deal with whatever way it wants to accept and respond to requests. This could be through sockets or some higher level communication protocol like web services.
You could do a couple of things. Write an assembly that knows how to communicate with the service and have your web site use that assembly. Or you could build a shim service that knows how to communicate with the legacy service, but exposes communication over higher level protocols such as web services. Either way will have the benefit of hiding the concurrency, threading and communications issue behind an easy to call interface, but the latter will make communicating with the service easier for everyone going forward.
If you can modify the delphi app to take an XML request and respond with an XML answer over a TCP socket (ideally using the HTTP protocol), you will be able to make it interoperate with most web server frameworks relatively easily. But the exact details of how to make that integration happen will depend on the language/framework it was written in.
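To illustrate the shape of that contract, here is roughly what the website's side of the conversation might look like; the XML vocabulary (<cart>, <item>, <price>) is invented for illustration:

```python
# Sketch of the website's side of the request/response; element names are made up.
import requests
import xml.etree.ElementTree as ET

cart_xml = """<?xml version="1.0"?>
<cart>
  <item sku="ABC-123" quantity="2"/>
  <item sku="XYZ-999" quantity="1"/>
</cart>"""

# The Delphi service listens on a local port and applies the special-deal rules.
resp = requests.post("http://127.0.0.1:8080/price", data=cart_xml,
                     headers={"Content-Type": "application/xml"})
total = ET.fromstring(resp.text).findtext("total")   # e.g. <price><total>42.50</total></price>
print(total)
```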
If the web server is on windows you can compile your delphi app as a DLL that can return XML or HTML, taking parameters as part of the URL or a POST operation. Some details on making a Delphi DLL for web servers are here.
It doesn't matter what web server or OS the existing system is running under. What matters is what you want YOUR code to run under. If it is Windows, then the easiest solution would be to use WebBroker and write a custom ISAPI application, or use SOAP to expose web services. The first method could be used if you wanted to write a REST-like API, for instance; the second if your web application has the ability to consume web services.
Another option, if you are running both on the same box under IIS, is to create a COM/Automation object which you then invoke via server side scripting (ASP). If the application is an ASP.NET application, then I would use PRISM to port your code into an assembly.
I have done this with a quite complicated workers' compensation calculator. I created a Windows service using the RemObjects SDK. The calculations are exposed as a SOAP method, so they can be accessed by nearly anything.
It's not necessary to use RemObjects in the service, but it makes it much easier, as it handles a lot of the underlying plumbing. The clients don't need RemObjects; they just need to be able to call SOAP methods. Nearly any programming language can do that.
You could also create an ISAPI DLL for IIS that exposes a SOAP interface. This would be useful if other websites on different servers needed access to the methods. However, in my case I have handled this by opening a port in the firewall to access my Windows service.
There are a lot of examples on the web. A couple of places to start reading are About.com and Dr. Bob.
Turn this app into a Windows service, then write a web service that communicates with your Windows service. You should spend some time designing that web service, because it is going to be your consistent interface, shielding the old Delphi app. In the future, whenever you want to write a web app, mobile app, or whatever else you can imagine, you will have one consistent interface: the XML web service.
A popular way to integrate a web application with background services is a message broker.
The message flow (see the sketch further below) would be:
the web application sends a "calculation request" message to a message destination on the message broker, which contains all needed parameters and also a correlation id to match the calculation request with the response from the Delphi service
one (or, in a high-availability / load-balanced environment, more than one) Delphi service handles the messages: it pulls the next incoming message, processes it by feeding the parameters to the calculation engine, and sends a "calculation result" message back to the web server
the web server can either synchronously wait for the response (discarding responses which have no matching correlation id) and build the result HTML document, or continue with other tasks and receive the calculation result asynchronously in a separate thread, for example in an Ajax-based web application
For an introduction, see this slideshow about the Dopplr image service:
http://de.slideshare.net/carsonified/dopplr-its-made-of-messages-matt-biddulph-presentation
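A compressed sketch of the web application's side of that flow, using RabbitMQ and the Python pika client as an example broker (queue names and the message format are placeholders; the Delphi service would consume calc_requests and reply to the reply_to queue with the same correlation id):

```python
# Sketch of the synchronous variant: send a calculation request, wait for the
# matching response; anything without our correlation id is simply skipped.
import json, uuid
import pika  # pip install pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="calc_requests")
reply_queue = ch.queue_declare(queue="", exclusive=True).method.queue  # private reply queue

corr_id = str(uuid.uuid4())
ch.basic_publish(
    exchange="",
    routing_key="calc_requests",
    properties=pika.BasicProperties(reply_to=reply_queue, correlation_id=corr_id),
    body=json.dumps({"items": [{"sku": "ABC-123", "qty": 2}]}),
)

# Block until the Delphi service sends back a message carrying our correlation id.
for method, props, body in ch.consume(reply_queue, auto_ack=True, inactivity_timeout=30):
    if props and props.correlation_id == corr_id:
        print("total price:", json.loads(body)["total"])
        break
conn.close()
```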
If you can make it a service (but not a library), you have to do inter-process communication somehow - there are a few ways to do this on Windows:
Sockets directly, which is hardest since you have to do marshalling/auth yourself
Shared Memory (yuck!)
RPC which works great but isn't trivial
DCOM which is easier but a pain to configure
WCF - but can you call it from your Windows Service written in Delphi?