I am new to WebLogic. Currently I am seeing the active connection high count keep increasing in my application, but the active connection current count is normal and there are no leaked connections showing in the console.
Can anyone explain why my active connection high count is increasing even though the active connection current count is normal? Please also explain the concept of the active connection high count. Will this high number affect the performance of my application?
Highest number of active database connections in this instance of the data source since the data source was instantiated
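If you want to watch the two counters side by side, here is a rough sketch of reading them over JMX from the server's runtime MBean server. The host, port and credentials are placeholders, the ObjectName pattern may need adjusting for your domain, and WebLogic's JMX client jar has to be on the classpath; the attribute names ActiveConnectionsCurrentCount and ActiveConnectionsHighCount are the ones exposed by the JDBCDataSourceRuntimeMBean.

import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

public class DataSourceCounts {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details: adjust host, port, user and password.
        JMXServiceURL url = new JMXServiceURL("t3", "localhost", 7001,
                "/jndi/weblogic.management.mbeanservers.runtime");
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");
        env.put(Context.SECURITY_CREDENTIALS, "password");
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                "weblogic.management.remote");

        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Print both counters for every JDBC data source runtime MBean.
            for (ObjectName ds : conn.queryNames(
                    new ObjectName("com.bea:Type=JDBCDataSourceRuntime,*"), null)) {
                System.out.printf("%s current=%s high=%s%n",
                        ds.getKeyProperty("Name"),
                        conn.getAttribute(ds, "ActiveConnectionsCurrentCount"),
                        conn.getAttribute(ds, "ActiveConnectionsHighCount"));
            }
        } finally {
            connector.close();
        }
    }
}

The high count only ever moves up (it records the peak since the data source was created), so seeing it rise while the current count stays low usually just means you had a short burst of concurrent connections at some point; the statistic itself does not degrade performance.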
We experience intermittent, seemingly random brownouts of a Firebase Realtime Database. We are beginning to shard our data into multiple databases; however, we are not sure this will solve our problem. It appears to us that Firebase cannot scale to meet our needs in terms of doing frequent writes to a specific data set.
We sync data from a third-party data source in cycles (every 4-10 minutes, 1000 active jobs). Each update has the potential to change a few thousand nodes in Firebase, most of which lie pretty low in the tree, although most of the time the number of low-level nodes that actually change is much smaller. We do differential updates on the synced data in order to allow very small writes to the lower-level nodes; this helps prevent our users from downloading a ton of additional data. We also batch all of our updates per cycle into only a handful of writes, between 10 and 20 (we are not sure of the performance impact of a batched write to multiple nodes vs. a write to a single node).
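For reference, a minimal sketch of the kind of batched fan-out write described above, assuming the Firebase Admin Java SDK; the paths and values are made up. The point is that all changed leaves for a cycle go out as one multi-location update rather than one write per node.

import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import java.util.HashMap;
import java.util.Map;

public class SyncCycleWriter {
    // Assumes FirebaseApp.initializeApp(...) has already been called elsewhere.
    public static void writeDiff() {
        DatabaseReference root = FirebaseDatabase.getInstance().getReference();

        // Only the leaves that actually changed this cycle, keyed by full path.
        Map<String, Object> fanOut = new HashMap<>();
        fanOut.put("jobs/job-1042/status", "running");            // hypothetical paths
        fanOut.put("jobs/job-1042/progress", 0.37);
        fanOut.put("stats/lastSync", System.currentTimeMillis());

        // One multi-location ("fan-out") update instead of thousands of writes.
        root.updateChildrenAsync(fanOut);
    }
}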
Here is an image of the database load graph, which includes some sharding:
[Image: Database Load]
The blue line is our main database. The orange line is a database containing only the data that requires many writes, as described above. Currently, the main (blue) database is supporting normal operations, including reads and writes; the shard (orange) database is only handling writes. The way the two lines mirror each other is pretty indicative of a write-load issue, given that a large percentage of writes occurs in the morning.
At times, the database load reaches 100% and remains in this state for 30+ minutes.
Please let me know if I can expand on anything or explain anything in more detail. Would appreciate any suggestions on debugging strategies or explanations as to why this may be occurring.
We are actively refactoring a lot of code to mitigate this issue, however, it is not obvious what the main driver is.
I'm graphing my power meter with an old laptop in my barn.
It sends the data via MQTT to MRTG (Cacti).
Lately this laptop has begun to lockup when playing spotify.
This is a separate issue.
However, when I reboot, all the power used in the meantime is shown as being used in a single time period, giving a huge spike, so the rest of the data is hardly visible.
When the data finally arrives, is it possible to interpolate it across all the missing data points?
The laptop sending the data was down between approximately Sat 18:00 and Sun 11:00, but of course the real power meter kept running.
I'd rather have a straight line between the two data points; it is still a loss of data, but it is closer to the truth than a spike.
Edit: There is a complication. Because Cacti reads the data asynchronously from MQTT, it keeps getting the latest count even if the data is stale.
I guess I need to get my mqtt->cacti interface to send NaN or U if the timestamp of the data has not changed.
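Something like this is what I have in mind for that interface (just a sketch, the names are made up): keep the timestamp of the last message, and when it has not moved, hand Cacti a U instead of repeating the stale counter.

public class StaleGuard {
    private long lastTimestamp = -1;

    // Returns the string Cacti should read for this poll.
    public String valueForCacti(long messageTimestamp, long meterCount) {
        if (messageTimestamp == lastTimestamp) {
            return "U";                       // unchanged data -> report "unknown"
        }
        lastTimestamp = messageTimestamp;
        return Long.toString(meterCount);     // fresh data -> report the real count
    }
}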
You have two options:
1. Add a timestamp to the message; that way you can rebuild the data as the queued messages are delivered when the laptop reconnects to the broker (see the sketch after this list).
2. Use a QoS 0 subscription and ensure that clean session is set to true; this means the missing readings are dropped. Zero data is probably easier to interpret from the graph than a large spike.
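For option 1, a rough sketch of the publisher side with the Eclipse Paho Java client; the broker address, topic and payload format are assumptions. The reading carries its own epoch timestamp, so whenever it finally arrives the receiver can place it at the right time or interpolate across the gap.

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class MeterPublisher {
    public static void main(String[] args) throws MqttException {
        MqttClient client = new MqttClient("tcp://broker.local:1883",
                "barn-meter", new MemoryPersistence());
        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(false);   // keep the session between reconnects
        client.connect(options);

        long wattHours = readMeter();     // stand-in for the real meter read
        // Embed the reading's own timestamp so the receiver can tell a fresh
        // value from a stale or delayed one.
        String payload = (System.currentTimeMillis() / 1000) + " " + wattHours;

        MqttMessage message = new MqttMessage(payload.getBytes());
        message.setQos(1);                // at-least-once delivery
        client.publish("barn/power", message);
        client.disconnect();
    }

    private static long readMeter() {
        return 123456L;
    }
}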
This is my very first question, and I hope it's well explained so that I can find an answer.
I work on the website project for a delivery company that has all its data on an Oracle9i server.
Most of the web users just want to know when they're going to get their package, but I'm sure there are also robots that query that info several times a day to update their systems.
I'm working on code to stop those robots (asking for a captcha after the 3rd query in 15 minutes, for example), because we have some web services they can use to query all the data in bulk.
Now, my problem is that at peak hours (12:00-14:00) the database starts to respond very slowly.
Here is some data I've parsed from the web application. I don't have logs at this level for the web services, but there were also a lot of queries there.
It shows the timestamp when I request a connection from the datasource, the Integer.toHexString(connection.hashCode()), the name of the datasource, the timestamp when I close the connection and the difference between both timestamps.
Most of the time the queries finish in less than a second, but yesterday I had this strange delay of more than 2 minutes.
Is there some kind of maximum number of connections allowed on the database, such that when it surpasses that limit the database queues my query for some time before trying again?
Thanks in advance.
Is there some kind of maximum number of connections allowed on the database?
Yes.
SESSIONS is one of the basic initialization parameters and specifies the maximum number of sessions that can be created in the system. Because every login requires a session, this parameter effectively determines the maximum number of concurrent users in the system.
The default value is derived from the PROCESSES parameter (1.5 times PROCESSES plus 22); therefore, if you didn't change the PROCESSES parameter (default 100), the maximum number of sessions on your database will be 172.
You can determine the value by querying V$PARAMETER:
SQL> select value
2 from v$parameter
3 where name = 'sessions';
VALUE
--------------------------------
480
so when it surpasses that limit the database queues my query for some time before trying again?
No.
When you attempt to exceed the value of the SESSIONS parameter, the exception ORA-00018: maximum number of sessions exceeded will be raised.
Something may well be queuing your query but it will be within your own code and not specified by Oracle.
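If you want to confirm from the application side whether you are actually hitting that ceiling, something along these lines separates the Oracle error from queuing in your own code; the connection details are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class SessionLimitCheck {
    public static void main(String[] args) {
        String url = "jdbc:oracle:thin:@dbhost:1521:ORCL";   // placeholder details
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Got a session: " + conn);
        } catch (SQLException e) {
            if (e.getErrorCode() == 18) {
                // ORA-00018: the database refused the session immediately,
                // it did not queue the request.
                System.out.println("Maximum number of sessions exceeded");
            } else {
                e.printStackTrace();
            }
        }
    }
}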
It sounds as though you need to find out more information. If you're not at the maximum number of sessions, then you need to capture the query that's taking a long time and profile it; this would, I think, be the more likely scenario. If you are at the maximum number of sessions, then you need to look at your company's code to determine what's happening.
You haven't really explained anything about your application, but it sounds as though you're opening a session (or more) per user. You might want to reconsider whether this is the correct approach.
Thanks for the edit, vape.
I've also found the real problem.
The method that asks the datasource for a connection was synchronized, and it caused lock contention while requesting connections at peak hours. I've removed the synchronization and everything is working fine.
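For anyone hitting the same thing, the pattern looked roughly like this (simplified, not the actual production code): the synchronized wrapper serializes every request for a connection, while the datasource itself is already thread-safe.

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ConnectionHelper {
    private final DataSource dataSource;

    public ConnectionHelper(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // The problem: every thread queues behind this one lock just to ask the
    // pool for a connection, so requests serialize here at peak hours.
    public synchronized Connection getConnectionOld() throws SQLException {
        return dataSource.getConnection();
    }

    // The fix: drop the synchronization and let the (thread-safe) datasource
    // handle concurrent callers itself.
    public Connection getConnection() throws SQLException {
        return dataSource.getConnection();
    }
}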
I'm trying to track down a performance issue by looking at the "Active Queries" tab in the Advantage Management Utility.
The documentation for this tab says:
Active: True if the query is being actively processed by the server. A query must be active to be cancelled.
Is a query Active until it completes? Or can it become inactive for another reason, like waiting on a resource (disk IO or a lock)?
I ask because I only see 1-2 queries in an "active" state at a given time, but I also have 20+ worker threads running, which makes little sense to me.
Active means the server is actively looking for rows to populate the cursor for the request. It will remain active until enough rows have been produced to satisfy the request. If the query needs to wait on a lock or disk I/O, it will remain active. One caveat to this is live cursors: live cursors are treated as tables by the client rather than as SQL statements. Are the SQL statements that are open, but not active, live cursors?
You might try calling the stored procedure sp_mgGetWorkerThreadActivity to see what commands the other threads are doing.
I have a site with several pages for each company, and I want to show how their page is performing in terms of the number of people coming to the profile.
We have already made sure that bots are excluded.
Currently, we are recording each hit in a DB with either an insert (for the first request of the day to a profile) or an update (for the following requests that day to the same profile). But, given that requests have gone from a few thousand per day to tens of thousands per day, these inserts/updates are causing major performance issues.
Assuming no JavaScript-based solution, what would be the best way to handle this?
I am using Ruby on Rails, MySQL, Memcache, Apache, and HAProxy to run the overall show.
Any help will be much appreciated.
Thx
http://www.scribd.com/doc/49575/Scaling-Rails-Presentation-From-Scribd-Launch
You should start reading from slide 17.
I think performance isn't a problem if it's possible to build a solution like this for a website as big as Scribd.
Here are 4 ways to address this, from easy estimates to complex and accurate:
1. Track only a percentage (10% or 1%) of users, then multiply to get an estimate of the count.
2. After the first 50 counts for a given page, start updating the count only 1/13th of the time, by an increment of 13. This helps if a few pages are doing most of the counting while keeping small counts accurate. (13 is used because it's hard to notice that the increment isn't 1.)
3. Save exact counts in a cache layer like memcache or local server memory, and write them all to disk when they reach 10 counts or have been in the cache for a certain amount of time (see the sketch after this list).
4. Build a separate counting layer that 1) always has the current count available in memory, 2) persists the count to its own tables/database, and 3) has calls that adjust both places.
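A minimal sketch of option 3, shown in Java only for illustration (the idea is the same in Rails): count hits in memory and flush the accumulated deltas to the database in batches instead of touching it on every request. The persist call is a placeholder for your own insert/update.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class HitCounter {
    private final Map<String, AtomicLong> counts = new ConcurrentHashMap<>();
    private final ScheduledExecutorService flusher =
            Executors.newSingleThreadScheduledExecutor();

    public HitCounter() {
        // Flush once a minute instead of writing to the DB on every hit.
        flusher.scheduleAtFixedRate(this::flush, 60, 60, TimeUnit.SECONDS);
    }

    // Record one hit for a profile page; cheap, in-memory only.
    public void hit(String profileId) {
        counts.computeIfAbsent(profileId, id -> new AtomicLong()).incrementAndGet();
    }

    // Push the accumulated deltas to the database in one batch.
    private void flush() {
        for (Map.Entry<String, AtomicLong> entry : counts.entrySet()) {
            long delta = entry.getValue().getAndSet(0);
            if (delta > 0) {
                persist(entry.getKey(), delta);
            }
        }
    }

    private void persist(String profileId, long delta) {
        // Placeholder, e.g. "UPDATE profile_hits SET hits = hits + ? WHERE id = ?"
        System.out.println(profileId + " += " + delta);
    }
}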