Creating new objects again and again is bad practice. Why? - php-5.3

I have heard numerous times that creating objects for the same database classes again and again is bad practice. I really don't understand why that is. Could somebody please explain?

It's a bad idea in general, not just for database classes.
The more objects you have, the more memory is used to keep your application running.
For instance, take a look at: PHP Object Creation and Memory Usage
Hope I helped.

http://particletree.com/notebook/object-oriented-php-memory-concerns/
Have a look at that link to see how much memory is required to create an object that holds only one variable. The heavier the class, the bigger the object. A few memory-hungry scripts and a moderate user base are enough to use up all available memory in no time.
With regard to database classes, it also depends on whether the class opens a connection on initialisation, since both the web server and the database server have limits on the maximum number of connections. The more such objects you create, the more connections are opened, which is not good practice; keep it to one connection per database if you can. Even if your database class reuses a connection, or uses lazy initialisation to create the connection only when it is required, you will still face the memory problem described above.
To put it simply, reuse your objects (and your database connection).
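To illustrate the idea, here is a minimal sketch of lazily creating a single shared connection and reusing it everywhere, written in C# to match the other code on this page rather than PHP; the SharedDb class and the connection string are placeholders, not part of the original question.

using System;
using System.Data.SqlClient;

// Hypothetical sketch: one lazily created, shared connection instead of a new one per call.
public static class SharedDb
{
    // Placeholder connection string; the real one would come from configuration.
    private static readonly Lazy<SqlConnection> _connection = new Lazy<SqlConnection>(() =>
    {
        var conn = new SqlConnection("Server=.;Database=AppDb;Integrated Security=true");
        conn.Open();          // opened once, on first use
        return conn;
    });

    // Every caller gets the same already-open connection.
    public static SqlConnection Connection => _connection.Value;
}

Callers then use SharedDb.Connection instead of constructing and opening a new connection object in every method.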

Related

Syncing of memory and database objects upon changes in objects in memory

I am currently implementing a web application in .NET Core (C#) using Entity Framework. While working on the project I have encountered quite a few challenges, but I will start with the ones I think are most important. My questions are as follows:
Instead of frequently loading data from the database, I keep a set of static objects that mirror the data in the database. However, it is tedious and error-prone to ensure that any changes, i.e., adding, deleting, or modifying objects, are saved to the database in real time. Is there any good example or advice I can refer to in order to improve my approach?
Another thing is that the values of some objects' properties change on the fly according to the values of other objects' properties, something like a spreadsheet, where a cell's value changes automatically when the cell its formula refers to changes. I do not have a solution for this yet and would appreciate any example I can refer to. This will also add another layer of complexity to syncing the in-memory objects with the database.
At the moment I am unsure whether there is a better approach. I would appreciate any help. Thanks!
Basically, you're facing a problem called eventual consistency: something changes, and two or more systems need to reflect that change. The catch is that both changes need to be applied for the operation to be considered successful, and if either one fails, you need to know.
In your case, I would use the Azure Service Bus. You can create queues and put messages on a queue, and an Azure Function would handle those queue messages. You would create two queues: one for database updates and one for the in-memory update (changing the latter to a cache service may be something to think about). The advantage of these queues is that you can easily drop messages onto them from anywhere. Because you mentioned the object is going to evolve, you may need to update these objects either in the database or in memory (cache).
Once you've done that, I'd create a topic with two subscriptions, one forwarding messages to Queue 1 and the other to Queue 2. This solves your primary problem: whenever an object changes, just send it to the topic, and both changes (database and memory) will be executed automagically.
The only problem left is that you said you wanted to update the database in real time; with this scenario, you're going to have to give that up.
Also, make sure you have proper alerts in place for the queues, so that if a message is missed, or a function fails to handle one, you receive an alert and can check and correct the errors.
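For illustration, here is a rough sketch of publishing a change message to such a topic using the Azure.Messaging.ServiceBus client; the connection string, the topic name "object-changes", and the ObjectChanged type are assumptions for the example, not names from the question.

using System.Text.Json;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

// Hypothetical change notification; the real payload shape depends on your domain objects.
public record ObjectChanged(string EntityId, string Payload);

public class ChangePublisher
{
    private readonly ServiceBusSender _sender;

    public ChangePublisher(string connectionString)
    {
        var client = new ServiceBusClient(connectionString);
        // One topic with two subscriptions: one feeds the database queue, one feeds the cache queue.
        _sender = client.CreateSender("object-changes");
    }

    public async Task PublishAsync(ObjectChanged change)
    {
        var body = JsonSerializer.Serialize(change);
        await _sender.SendMessageAsync(new ServiceBusMessage(body));
    }
}

An Azure Function bound to each subscription's queue would then apply the change to the database or to the in-memory copy.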
I totally agree with #nineedm's answer, but there are also other solutions.
If you introduce a cache, you will always face the cache invalidation problem: you have to mark cached entries as invalid when the data changes. Sometimes this is easy, depending on the nature of the cached data and how often it changes.
If you have just a single application instance, MemoryCache with properly specified expiration options can be enough.
If there is a cluster, you have to look at distributed cache solutions, for example Redis. There is a Microsoft article about that: Distributed caching in ASP.NET Core.
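As a minimal sketch of the single-instance case, using Microsoft.Extensions.Caching.Memory; the cache key, cached value, and expiration times below are arbitrary placeholders.

using System;
using Microsoft.Extensions.Caching.Memory;

// Minimal in-memory cache with expiration; values are reloaded from the database on a miss.
var cache = new MemoryCache(new MemoryCacheOptions());

cache.Set("customer:42", "cached value", new MemoryCacheEntryOptions
{
    AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5),   // hard upper bound
    SlidingExpiration = TimeSpan.FromMinutes(1)                  // refreshed on each access
});

if (cache.TryGetValue("customer:42", out string customer))
{
    // Cache hit: use the in-memory copy instead of querying the database.
}
else
{
    // Cache miss: reload from the database and call Set() again.
}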

Advantage of using shared instances in database connection class

In a lot of code I have seen classes (that import FMDatabase) containing methods for creating a database connection and for adding or removing values from the database. I have found a shared instance method that creates a shared instance of such a class. What are its advantages? I know one: each time you create a new connection there is a time delay, and we can avoid that by using a shared instance. But are there any more reasons for using a shared instance?
Keeping only one connection to the database gives you a single point through which you read from and write to the database (all disk operations are queued there), avoiding a lot of problems with handling reads and writes in different places of your application.
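As a rough sketch of that single point of access, here is the shared-instance pattern written in C# to match the other code on this page (the question itself concerns FMDatabase on iOS); the Database class, its lock, and the in-memory dictionary are illustrative stand-ins for a real connection.

using System.Collections.Generic;

// Illustrative shared instance: one object owns the connection and serialises all
// reads and writes through a single lock, so there is exactly one access point.
public sealed class Database
{
    public static Database Shared { get; } = new Database();   // the shared instance

    private readonly object _gate = new object();
    private readonly Dictionary<string, string> _storage = new Dictionary<string, string>();

    private Database() { /* open the one and only connection here */ }

    public void Write(string key, string value)
    {
        lock (_gate) { _storage[key] = value; }    // all writes funnel through here
    }

    public string Read(string key)
    {
        lock (_gate) { return _storage.TryGetValue(key, out var v) ? v : null; }
    }
}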

db4o Client See Changes from Another Client

I'm running a db4o server with multiple clients accessing it. I just ran into the issue of one client not seeing the changes from another client. From my research on the web, it looks like there are basically two ways to solve it.
1: Call Refresh() on the object (from http://www.gamlor.info/wordpress/2009/11/db4o-client-server-and-concurrency/):
// refresh objFromClient2 down to the given activation depth
const int activationDepth = 4;
client2.Ext().Refresh(objFromClient2, activationDepth);
2: Instead of caching the IObjectContainer, open a new IObjectContainer for every DB request.
Is that right?
Yes, #1 is more efficient, but is it really realistic to specify which objects to refresh? I mean, when a DB is involved, every time a client accesses it, it should get the latest information. That's why I'm leaning towards #2. Plus, I don't have major efficiency concerns.
So, am I right that those are the two approaches? Or is there another?
And, wait a sec... what happens when your object goes out of scope? On a timer, I call a method that gets an object from the DB server. That method instantiates the object. Since the object went out of scope, it's not there to refresh. And when I call the DB, I don't see the changes from the client. In this case, it seems like the only option is to open a new IObjectContainer. No?
** Edit **
I thought I'd post some code using the solution I finally decided to use. Since there were some serious complexities with using a new IObjectContainer for every call, I'm simply going to do a Refresh() in every method that accesses the DB (see Refresh() line below). Since I've encapsulated my DB access into logic classes, I can make sure to do the Refresh() there, every time. I just tested this and it seems to be working.
Note: The Database variable below is the db4o IObjectContainer.
public static ApplicationServer GetByName(string serverName)
{
    // Query the container (Database is the db4o IObjectContainer) for a server by name.
    ApplicationServer appServer = (from ApplicationServer server in Database
                                   where server.Name.ToUpperInvariant() == serverName.ToUpperInvariant()
                                   select server).FirstOrDefault();

    // Refresh the object graph (depth 10) so changes committed by other clients are visible.
    if (appServer != null)
    {
        Database.Ext().Refresh(appServer, 10);
    }

    return appServer;
}
1) As you said, the major problem with this is that you usually don't really know which objects to refresh.
You can use the committed event to refresh objects as soon as any client has committed; db4o will distribute that event to the other clients (see the sketch after this answer). Note that this also costs some network traffic and time to deliver the events, and there will still be a window in which your objects have stale state.
2) This is actually the cleanest method, but not for every DB request. Use an object container for every logical unit of work: any operation that forms one 'atomic' unit of work in your business operations.
Anyway, in general db4o was never built with the client-server scenario as its first priority, and it shows in concurrent scenarios. You cannot avoid working with stale (and even inconsistent) object state, and there are no concurrency-control options (except the low-level semaphores).
My recommendation: use a client container per unit of work. Be aware that even then you might get stale data, which might lead to an inconsistent view and update. If there are rarely any contentions and races in your application scenario and you can tolerate a mistake once in a while, this is fine. However, if you really need to ensure correctness, then I recommend using a database with better concurrency support =(
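For reference, a rough sketch of the committed-event approach mentioned in 1), based on the Db4objects.Db4o.Events API as I remember it; the exact event-argument members and the refresh depth of 2 are assumptions, so verify them against your db4o version.

using Db4objects.Db4o;
using Db4objects.Db4o.Events;
using Db4objects.Db4o.Ext;

// Hedged sketch: whenever any client commits, refresh the locally known copies
// of the objects that were updated by that commit. 'client' is your open IObjectContainer.
IEventRegistry events = EventRegistryFactory.ForObjectContainer(client);
events.Committed += (sender, args) =>
{
    foreach (IObjectInfo info in args.Updated)      // objects changed by the commit
    {
        client.Ext().Refresh(info.GetObject(), 2);  // depth 2 is an arbitrary choice
    }
};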

WCF Windows Service not releasing resources/memory after every call

I have created a WCF application which runs as a Windows service; it was installed using Windows Installer. I followed the procedure described in the following article:
http://msdn.microsoft.com/en-us/library/bb332338.aspx#msdnwcfhc_topic4
Most WCF settings are left at their defaults: the net.tcp protocol, per-call instancing, and so on.
The service's memory consumption keeps increasing after every call and never decreases; eventually it throws an OutOfMemoryException.
The application returns very large string-based data. With a memory profiler I found that memory is still allocated to string objects and grows during each call.
As I understand it, strings are managed objects and their memory should be released once they go out of scope.
Let me know if any other configuration/coding information is needed specifically.
There must be something keeping references to those strings in the code. Can you use your profiler to trace the references that are keeping the string objects alive?
After many unsuccessful attempts to deal with the Large Object Heap (http://msdn.microsoft.com/en-us/magazine/cc534993.aspx), which in my context meant very large strings, I created a custom class to handle it.
Instead of storing the large string in a single object, I store it as a collection of small strings inside a custom object. .NET then collects the memory properly without causing the problem mentioned above.
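A minimal sketch of that chunking idea, assuming the usual 85,000-byte Large Object Heap threshold (so chunks well below roughly 42,000 characters stay off the LOH); the class name and chunk size are illustrative, not the poster's actual code.

using System;
using System.Collections.Generic;
using System.Text;

// Illustrative chunked-string holder: each piece stays small enough to avoid
// the Large Object Heap (objects of 85,000 bytes or more land on the LOH).
public class ChunkedString
{
    private const int ChunkSize = 16 * 1024;              // 16K chars per chunk (placeholder)
    private readonly List<string> _chunks = new List<string>();

    public void Append(string text)
    {
        for (int i = 0; i < text.Length; i += ChunkSize)
        {
            int length = Math.Min(ChunkSize, text.Length - i);
            _chunks.Add(text.Substring(i, length));
        }
    }

    // Only materialise the full string when unavoidable; doing so re-creates a large object.
    public string ToSingleString()
    {
        var sb = new StringBuilder();
        foreach (var chunk in _chunks) sb.Append(chunk);
        return sb.ToString();
    }
}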
Another possible solution is to store the large data in a file and access it through a text reader. This works well and keeps the application's footprint small. Unfortunately it did not work for me, as the application was not allowed to access the file location.
It would be very difficult to answer this question without some code to look at. You can always call GC.Collect(GC.MaxGeneration) to force garbage collection and see whether that reduces your memory consumption; ideally this would only be temporary code used to track down what is going on in the application. If forcing garbage collection does not reduce memory consumption, then references to the strings must be being retained, via static member variables or something similar. Without any idea of what the code looks like, any theory would be a shot in the dark.
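As a temporary diagnostic only (not something to ship), a forced, blocking collection can tell you whether the strings are still reachable; the sequence below is one common pattern and the printed numbers are purely informational.

using System;

// Temporary diagnostic: if memory stays high after a full, blocking collection,
// something is still holding references to the strings.
long before = GC.GetTotalMemory(forceFullCollection: false);

GC.Collect(GC.MaxGeneration);
GC.WaitForPendingFinalizers();
GC.Collect(GC.MaxGeneration);

long after = GC.GetTotalMemory(forceFullCollection: true);
Console.WriteLine($"Before: {before:N0} bytes, after: {after:N0} bytes");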

Session Management in TWebModule

I am using a TWebModule with Apache. If I understand correctly, Apache will spawn another instance of my TWebModule object if all previously created instances are busy processing requests. Is this correct?
I have created my own SessionObject and have created a TStringList to store them. The StringList is created in the initialization section at the bottom of my source code file holding the TWebModule object. I am finding initialization can be called multiple times (presumably when Apache has to spawn another process).
Is there a way I could have a global "Sessions" TStringList to hold all of my session objects? Or is the "safe", proper method to store session information in a database and retrieve it based on a cookie on each request?
The reason I want this is to cut down on database access and instead hold session information in memory.
Thanks.
As Stijn suggested, using separate storage to hold the session data really is the best way to go. Even better is to write your application so that the state is carried by the browser as an inherent part of the design. This will greatly increase your ability to scale the application to thousands or tens of thousands of concurrent users with much less hardware.
Intraweb is a great option, but suffers from the scale issue in the sense that more concurrent users, even IDLE users, require more hardware to support. It is far better to design from the onset a method of your server running as internally stateless as possible. Of course if you have a fixed number of users and don't expect any growth, then this is less of an issue.
That's odd. If initialization sections get called more than once, it might be because the DLL is loaded into separate process spaces. One option I can think of is to check whether the "Sessions" object already exists before you create it in initialization. If the DLL really is loaded in separate processes, this will not help, and then I suggest writing a central session-storage process and using inter-process communication from within your TWebModule (there are a few mechanisms: messages, named pipes, COM...).
Intraweb in application mode really handles session management and database access very smoothly, and scales well. I've commented on it previously. While this doesn't directly answer the question you asked, when I faced the same issues Intraweb solved them for me.
