I'm trying to build a web link to a busy social networking website using IntraWeb.
IntraWeb creates a temporary folder for each session to store temporary files, and these folders auto-delete when the session expires.
If hosted on Win32, the limit is 65,536 folders - which means only 65K concurrent sessions are possible.
Is there a way to turn off temp file creation, or to allow more concurrent sessions in IntraWeb?
IntraWeb is simply not designed to handle that many sessions. IntraWeb is designed for web applications, not for web sites. Even though a plain IntraWeb session takes only a few kilobytes, IntraWeb's session handling follows a "fat" model: it is perfectly suited to complex, stateful applications that handle a few hundred concurrent sessions.
For web sites with thousands of users per day - where many users just open one page and leave again - you could certainly use WebBroker, but that basically means you have to build everything from scratch.
If you are a Delphi guy, then I would recommend looking into Delphi Prism plus ASP.NET. There are tons of ASP.NET controls that let you build your web site rapidly. ASP.NET controls from DevExpress.com, Telerik.com and others work perfectly fine in Delphi Prism.
I am pretty sure you'll run out of system resources before you get close to 65,000 users on one box. To handle that load you'll need a load-balanced cluster, and then the 65K limit won't be an issue. I would not focus on this limitation.
Are there major disadvantages to using embedded Firebird 3 in a multi-user application server (Delphi WebBroker) instead of the full-blown server install?
The application usually has very short transactions with low data volume.
As far as I am informed, accessing one database file with multiple threads through the embedded server is not problematic, but user security is not available. As the application server handles access rights itself, I do not need Firebird security.
But will I lose performance, or features like garbage collection?
Firebird Embedded provides all the features (except network access and authentication) that a normal Firebird server provides. However, because it is in-process, any problem that causes your application to crash will take Firebird with it, and vice versa.
Other possible downsides:
Garbage collection will - as far as I know - always use the 'cooperative' model (the connection that stumbles on old record versions is the one that cleans them up),
You can't use other tools to access your database remotely which may make administration harder,
You can't put your database on a separate server from your web application (think of security requirements).
Personally, I would only choose Firebird Embedded if the situation calls for it. In all other situations, I will use Firebird Server.
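Not part of the original answer, but to make the embedded/server distinction concrete: with Firebird's ADO.NET provider the choice is essentially one connection-string setting (Delphi drivers expose the same switch). A minimal sketch, assuming the FirebirdSql.Data.FirebirdClient package:

```csharp
using FirebirdSql.Data.FirebirdClient;

class FirebirdConnectionDemo
{
    static void Main()
    {
        // In-process engine: no network hop, but also no authentication
        // and no remote administration, as discussed above.
        var embedded = new FbConnectionStringBuilder
        {
            Database = @"C:\data\app.fdb",
            ServerType = FbServerType.Embedded,
            UserID = "SYSDBA" // accepted but not verified by the embedded engine
        };

        // Full server: TCP connection, real authentication, remote tooling.
        var server = new FbConnectionStringBuilder
        {
            DataSource = "localhost",
            Database = @"C:\data\app.fdb",
            ServerType = FbServerType.Default,
            UserID = "SYSDBA",
            Password = "masterkey"
        };

        using (var conn = new FbConnection(embedded.ToString()))
        {
            conn.Open(); // a crash here takes the whole process down with it
        }
    }
}
```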
We are following the embedded architecture for our S/4HANA 1610 system.
Please let me know what the impact on the server will be if we implement 200+ standard Fiori apps in our system.
Regards,
Sayed
When you say “server”, are you referring to the ABAP backend, consisting of one or more SAP application servers and usually one database server?
In this case, you can get a first impression using transaction ST03.
Here, you get a detailed analysis of resource consumption on the SAP application server.
You also get information about database access times, as seen from the application server.
This can give you a good hint about resource consumption on the database server.
Usually, the ABAP backend is accessed from Fiori via OData calls.
Not every user interaction causes an OData call; some interactions are handled locally on the frontend.
In general, implemented apps only take up some space on the hard disk as long as nobody is using them.
So the important questions for defining the expected workload are:
How many users are working with these apps, and at what frequency (average think time)?
How many OData calls are sent from these apps to the backend, and how many dialog steps are handled by the frontend itself?
How expensive are these OData calls (see ST03)?
Every app reflects one or more typical business processes, which need to be defined.
Your specific Customizing also plays an important role, because it controls different internal functionality.
It's also mandatory to optimize database access, because in productive use tables grow over time, which can slow down database access.
Usually, this kind of sizing is done by SAP Hardware and Technology partners.
I did my research on how to implement Comet-like chat on ASP.NET / MVC.
What I found was that it can be done with long polling.
About long polling: because it keeps threads open, many concurrent connections pile up, making performance poor (or flat zero), since IIS isn't meant for that many concurrent connections.
Now the tools for the job: PokeIn, SignalR, Socket.IO, Now.js (skipping paid tools - free is pretty :)).
As far as I know, all of these use long polling, so what do they actually do to improve performance in IIS? (All of these can be used with ASP.NET.)
I also found that Facebook uses Erlang (don't know how to use it) to make it happen, and of course $100 million worth of hardware (balancing 70 million users) - and FB uses long polling, not some Comet server (as far as my research goes).
I want to implement scalable long polling on ASP.NET MVC 3.
The two finalists I found are Here and here.
All I want to know is which is better and why,
and also which tool is best among the given ones.
My opinion would be that SignalR is the better choice, not least because if you use SignalR.WebSockets, it will automatically upgrade the connection to WebSockets if the user's browser supports it. This way, over time, as users upgrade their browsers away from the long-polling scheme, the scalability of your chat application will actually get better.
Moreover, there is an awesome code example called JabbR, created by the very people who created SignalR (who also happen to be developers on the ASP.NET team).
http://jabbr.net/ - an example of SignalR in action.
https://github.com/davidfowl/JabbR - JabbR source.
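To give a feel for why SignalR fits here, a minimal hub sketch, assuming the early (pre-Microsoft.AspNet.SignalR) API that was current in the MVC 3 era - the namespaces moved in later versions:

```csharp
using SignalR.Hubs;

// Minimal chat hub: SignalR picks the best transport available
// (WebSockets, server-sent events, forever frame, or long polling),
// so the same code gets faster as browsers improve.
public class ChatHub : Hub
{
    // Invoked from the browser, e.g. chat.send(name, message).
    public void Send(string name, string message)
    {
        // Clients is dynamic in early SignalR: "addMessage" is dispatched
        // to a client-side callback of the same name on every connection.
        Clients.addMessage(name, message);
    }
}
```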
Though you've marked an answer, I am tempted to give this one, as I've been through this, and for a long time as well.
I have used the solutions from two big Comet players: one is WebSync and the other is PokeIn. WebSync was good but expensive. I had lots of problems getting PokeIn to work successfully. I actually did not use these for a chat server, but for live push updates, where an external program pushes updates to the subscribed clients.
I suggest you try IHttpAsyncHandler-based logic. This is again a long-polling sort of technique, but the handler returns as soon as the request arrives and the server sends the response asynchronously, as sketched below.
Sorry for the self-publicity: I have a sample implementation of this in a project named flycomet on CodePlex. It simply has a handler which receives requests and, depending on the type of request, responds with any pending replies.
Currently the implementation is not presented as a chat server but as a Windows console push-client application, and the subscribers can be ASP.NET, MVC or Silverlight. The advantage is that you can tune the code to scale for yourself.
If you want to turn this into a chat application, it is quite easy to push the data through jQuery.
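For reference, a bare-bones sketch of the IHttpAsyncHandler approach; the class names and the MessageHub message source are hypothetical, not taken from flycomet:

```csharp
using System;
using System.Threading;
using System.Web;

// Long-polling handler: the request is parked until a message arrives,
// so no worker thread is blocked while waiting.
public class LongPollHandler : IHttpAsyncHandler
{
    public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object state)
    {
        var result = new PollAsyncResult(context, cb, state);
        MessageHub.Subscribe(result); // completes the result when data arrives
        return result;
    }

    public void EndProcessRequest(IAsyncResult ar)
    {
        var poll = (PollAsyncResult)ar;
        poll.Context.Response.ContentType = "application/json";
        poll.Context.Response.Write(poll.Payload ?? "[]");
    }

    public void ProcessRequest(HttpContext context)
    {
        throw new NotSupportedException("Async only.");
    }

    public bool IsReusable { get { return true; } }
}

// Minimal IAsyncResult that a message producer completes later.
public class PollAsyncResult : IAsyncResult
{
    private readonly AsyncCallback _callback;
    private readonly ManualResetEvent _done = new ManualResetEvent(false);

    public PollAsyncResult(HttpContext context, AsyncCallback callback, object state)
    {
        Context = context;
        _callback = callback;
        AsyncState = state;
    }

    public HttpContext Context { get; private set; }
    public string Payload { get; private set; }

    // Called by the message source when data is ready for this subscriber.
    public void Complete(string payload)
    {
        Payload = payload;
        IsCompleted = true;
        _done.Set();
        if (_callback != null) _callback(this); // ASP.NET then calls EndProcessRequest
    }

    public object AsyncState { get; private set; }
    public WaitHandle AsyncWaitHandle { get { return _done; } }
    public bool CompletedSynchronously { get { return false; } }
    public bool IsCompleted { get; private set; }
}

// Hypothetical in-memory hub, included only so the sketch is self-contained.
public static class MessageHub
{
    private static readonly System.Collections.Concurrent.ConcurrentQueue<PollAsyncResult> _waiters =
        new System.Collections.Concurrent.ConcurrentQueue<PollAsyncResult>();

    public static void Subscribe(PollAsyncResult waiter) { _waiters.Enqueue(waiter); }

    // A producer calls this whenever there is something to push;
    // it completes every parked request with the payload.
    public static void Publish(string payload)
    {
        PollAsyncResult waiter;
        while (_waiters.TryDequeue(out waiter)) waiter.Complete(payload);
    }
}
```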
I am looking for some starting points for integrating a Win32 Delphi application's data with a remote database for a web application.
Problem(s) this project intends to solve:
1) The desktop app does not perform well over VPNs. Users in remote offices could use the web app instead.
2) Some companies prefer a web app to the desktop app.
3) Mobile devices could hit the web app as a front end.
Issues I've identified:
The web application will run on a Unix-based system, probably Linux, while the desktop application uses NexusDB; the web application will likely use Postgres. Dissimilar platforms and databases.
Since we're using Delphi, it appears the Microsoft Sync Framework is not available for this project.
My first thought was to give the web app a standard REST API and have the desktop app hit that API as a client every n minutes from the local database server. I already see tons of issues with this!
Richard, I have been down this path before, and all I can say is DON'T DO IT! I used to work for a company that had a large Delphi desktop application (over 250 forms) running on DBISAM (very similar to what you have). Clients wanted a "web" interface so people could work remotely and have the web app and desktop app synch changes. Well, a few years later, the application was horrible - data issues and user workflow were terrible, because managing the same data in two different places is a nightmare.
I would recommend moving your database to something like MySQL (which both the Delphi and web clients would hit) and using one database for the two interfaces. The reason the Delphi client is not working well over the VPN is that desktop databases like NexusDB and DBISAM copy way too much data over the pipe when they run queries (they pull back all the data and then filter/order it) - they are not truly client/server like SQL Server or MySQL, where all the heavy lifting is done on the server and only the results come back. Of course, moving the Delphi app to a DB like MySQL could alleviate the speed issues altogether - but that doesn't solve #2 and #3.
Another option is to move the entire application to the web, so you have only one application to support. Of course, a good UI developer in a tool like Delphi can always make a user interface superior to a web app - especially in data-entry-heavy applications - so that may not be an option for you.
I would be very wary of "synching data".
My 2 cents worth.
Mike
If you use a RESTful ORM, you could have, for instance, both AJAX and Delphi client applications calling the same Delphi server, using JSON as the transmission format, HTTP/1.1 as the remote connection layer, and Delphi and JavaScript objects to access the data.
For instance, if you type http://localhost:8080/root/SampleRecord in your browser, you'll receive something like:
[{"ID":1},{"ID":2},{"ID":3},{"ID":4}]
And if you ask for http://localhost:8080/root/SampleRecord/1 you'll get:
{"ID":1,"Time":"2010-02-08T11:07:09","Name":"AB","Question":"To be or not to be"}
This can be consumed by any AJAX application, if you know a bit about JavaScript.
And the same HTTP/1.1 RESTful requests (GET/POST/PUT/DELETE/LOCK/UNLOCK...) are already available to any HTTP/1.1 client application. The framework implements the server over the very fast kernel-mode http.sys (faster than any other HTTP server on Windows) and uses the fast HTTP API for the client. You can even use HTTPS for a secure connection.
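The same resources can be consumed from any HTTP client, not just AJAX; a minimal .NET sketch for illustration, assuming the sample URLs above:

```csharp
using System;
using System.Net;

class RestClientDemo
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // GET the collection: [{"ID":1},{"ID":2},...]
            string list = client.DownloadString("http://localhost:8080/root/SampleRecord");

            // GET one record: {"ID":1,"Time":"2010-02-08T11:07:09",...}
            string record = client.DownloadString("http://localhost:8080/root/SampleRecord/1");

            Console.WriteLine(list);
            Console.WriteLine(record);
        }
    }
}
```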
IMHO, using such an ORM is better than using only a database connection, because:
It follows the n-tier principle more strictly: the business rules are written ONCE, in the Delphi server, and you consume only services and RESTful operations on business objects;
It uses HTTP/1.1 for the connection, which is faster and more standard across the Internet than any direct database connection, and can be strongly secured via HTTPS;
JSON and REST over HTTP are the de facto standard for AJAX applications (even Microsoft uses them for WCF);
The data is transmitted as JSON, which is a very nice format for multiple front ends;
The stateless approach makes it very robust, even in disconnected mode;
Using a small local replica of the database (we encourage SQLite for this) allows client access in disconnected mode (for Delphi clients, or for HTML5 clients).
I recommend you have one database and two front ends (a web UI that calls SOAP methods for its back-end work, and a rich Delphi client based on SOAP method calls), plus a SOAP server tier that implements the SOAP-accessible methods and contains your business logic.
From what you're describing, you think replication will merely speed you up; what it will do instead is slow you down and cause replication, coherence, and relational-integrity problems that must be sorted out by hand (by you).
Take a look at this:
CopyCat is a database replication engine, written as a component set for Embarcadero Delphi. CopyCat has been in production use since 2004, and is very stable. It is relied upon daily by a number of small to large businesses for applications ranging from inter-site synchronization, itinerant work, database backup and more. We are confident that it can fulfill your needs as well. Read on...
I'm thinking about writing a RESTful service which is able to upload and stream large video files (GBs); in the future it might not only be videos but could also be large documents.
From my research so far, what really makes sense to me would be:
WCF Data Services, implementing IDataServiceStreamProvider, and on the back end storing the large files in SQL Server 2008 using the new FILESTREAM SQL type. It also looks like I'd have to use some Win32 API to access the file system: SafeFileHandle handle = SqlNativeClient.OpenSqlFilestream
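From what I've read, .NET also has a managed wrapper around that Win32 call: System.Data.SqlTypes.SqlFileStream. A rough sketch of what I have in mind (the Videos table and its columns are made up):

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.IO;

static class FilestreamDemo
{
    // Opens the FILESTREAM data of one row as an ordinary Stream.
    // Must run inside an open transaction; the stream is only valid
    // until that transaction ends.
    public static Stream OpenVideo(SqlConnection conn, SqlTransaction tx, int id)
    {
        string path;
        byte[] txContext;

        using (var cmd = new SqlCommand(
            "SELECT Content.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() " +
            "FROM Videos WHERE Id = @id", conn, tx))
        {
            cmd.Parameters.AddWithValue("@id", id);
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                reader.Read();
                path = reader.GetString(0);               // NTFS path of the BLOB
                txContext = reader.GetSqlBytes(1).Value;  // transaction token
            }
        }

        // SqlFileStream wraps OpenSqlFilestream internally.
        return new SqlFileStream(path, txContext, FileAccess.Read);
    }
}
```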
Since WCF Data Services likes to play with Entity Framework or LINQ to SQL: which of them can provide the streaming implementation, and is there support for the SQL Server FILESTREAM type?
This is the plan, but I don't know how to assemble it all... I thought about chunking the large files so uploads can be resumed and cancelled.
For the upload: I am not sure whether to use the Silverlight upload control or some other nifty AJAX tool.
Can anyone point me in the right direction here - or do you think this is the way to go? Thoughts, links? Would be great...
I did something where I was sending huge data files. I used these two examples to help write my code:
http://msdn.microsoft.com/en-us/library/ms751463.aspx
http://www.codeproject.com/KB/WCF/WCFDownloadUploadService.aspx
This is a very important number to know: 2147483647.
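That number is Int32.MaxValue, the ceiling for WCF's size limits. As a hedged sketch, these are the binding settings usually involved when streaming large files - the code equivalent of the web.config entries shown in the linked articles:

```csharp
using System;
using System.ServiceModel;

static class StreamingBindingDemo
{
    static BasicHttpBinding Create()
    {
        return new BasicHttpBinding
        {
            TransferMode = TransferMode.Streamed,   // don't buffer whole files in memory
            MaxReceivedMessageSize = 2147483647,    // raise the 64 KB default to ~2 GB
            SendTimeout = TimeSpan.FromMinutes(30)  // large transfers take a while
        };
    }
}
```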
silverfighter:
On IIS 6, I could not configure WCF Data Services to send more than a 30 MB stream over the network. I believe it is not built for large streaming transactions. Just try to upload a 27 MB file and monitor the relevant w3wp process; you will be surprised by the amount of memory consumed.
The solution was to create a WCF service application hosted in its own w3wp process and responsible only for download/upload over WCF. I recommend the following project: http://www.codeproject.com/Articles/166763/WCF-Streaming-Upload-Download-Files-Over-HTTP
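In that kind of setup, the service contract typically boils down to operations whose body is a single Stream; a rough sketch (names are illustrative, not taken from the linked project):

```csharp
using System.IO;
using System.ServiceModel;

// A streamed WCF message must carry the Stream as its only body member,
// which is why upload/download contracts look this bare. Metadata such as
// file names usually travels in headers via a MessageContract instead.
[ServiceContract]
public interface IFileTransfer
{
    [OperationContract]
    void Upload(Stream fileStream);   // requires TransferMode=Streamed on the binding

    [OperationContract]
    Stream Download(string fileName); // buffered request, streamed response
}
```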
Hope the above helps.
Not related to the question, but related to @Houssam Hamdan's answer:
The 30 MB limit is not because of WCF Data Services; it is an IIS limitation that can be changed through the config file and IIS settings, and by catching some exceptions thrown by IIS.