Pass vijava ServiceInstance via RabbitMQ or another task queue

I'm trying to create a system where a master creates a connection to vCenter and passes the ServiceInstance object to a bunch of performance collectors that can then do their work and exit. My question is: what would be the best method to share the SI object? I was thinking of using a message queue for the purpose, but I'm not really keen on serializing objects. Is there a more efficient way?

That SI is only going to work against the vCenter that created it. If that's not going to be a problem for you, then simply place the session id on the bus for your workers to pick up; they should then be able to create a new SI using the session id.
The first time you connect:
ServiceInstance serviceInstance = new ServiceInstance(new URL("https://vcenter/sdk"), user, passwd, true);
String sessionId = serviceInstance.getServerConnection().getSessionStr();
Next, place that sessionId on the bus. Have your worker pick it up and do:
ServiceInstance si2 = new ServiceInstance(new URL("https://vcenter/sdk"), sessionId, true);
The default timeout for that session is 30 minutes, IIRC.
Also, a little self-plugging: I would suggest a move from vijava to yavijava. It's a fork I maintain which has added lots of nifty features, and I'm currently adding 6.0 support. https://github.com/yavijava/yavijava

Related

Correct way to use DbConnection, DbTransaction with connection pooling, TransactionScope and dependency injection?

I have an Oracle database and I'm using Oracle.ManagedDataAccess.
In some cases I need to perform several actions in a single transaction, but often not.
I'm not sure of the best way to handle DbConnection objects within a single TransactionScope.
I could inject a DbConnection into the repositories and even use LifetimePerScope to ensure they all get the same DbConnection instance. But is that a smart move? Is it OK to .Open() the connection once?
using (var scope = _lifetimeScope.BeginLifetimeScope())
{
    var connection = scope.Resolve<IDbConnection>();
    var personRepo = scope.Resolve<IPersonRepository>();
    var workRepo = scope.Resolve<IWorkRepository>();

    connection.Open();
    var transaction = connection.BeginTransaction();

    personRepo.DeleteById(someId);
    workRepo.DeleteByPersonId(someId);

    transaction.Commit();
}
This would force me to always use a LifetimeScope, even when not using a transaction, and to open the connection outside the repository methods.
Are TransactionScopes dependent on a single connection, or can I open multiple connections within the same transaction (and how does the connection pool handle that while a transaction is open)?
I'm a total outsider to DbConnections and all that so I might be totally misunderstanding the best way to use TransactionScope and DbConnections.
Possible duplicate of: Why always close Database connection?
Since this has a bounty, I can't flag it as a duplicate :(
Anyway, connection pooling is largely done for you. You should close connections as soon as you can, to return them to the pool.
Transactions are tied to a specific open connection and must be completed before the connection is closed.
A transaction created with BeginTransaction() is specific to a single connection. A TransactionScope, by contrast, creates an ambient transaction that connections enlist in automatically when they are opened inside the scope.
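For the single-database case, here is a minimal sketch of that ambient enlistment; the connection string, table name, and parameter are placeholders I made up, not anything from the question:
using System;
using System.Transactions;
using Oracle.ManagedDataAccess.Client;

class AmbientTransactionSketch
{
    static void Main()
    {
        // Placeholder connection string - substitute your own.
        const string connString = "User Id=app;Password=secret;Data Source=mydb";

        using (var scope = new TransactionScope())
        using (var connection = new OracleConnection(connString))
        {
            // Opened inside the scope, the connection enlists in the
            // ambient transaction automatically (Enlist=true is the default).
            connection.Open();

            using (var cmd = connection.CreateCommand())
            {
                cmd.CommandText = "DELETE FROM persons WHERE id = :id";
                cmd.Parameters.Add(new OracleParameter("id", 42));
                cmd.ExecuteNonQuery();
            }

            // Without Complete() the work is rolled back when the
            // scope is disposed.
            scope.Complete();
        }
    }
}
Repositories resolved from the same lifetime scope can share that one open connection, and the commit/rollback decision stays in one place at the scope boundary.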
If you want to maintain a transaction across multiple connections (multiple DBs/resources), then you need a DTC-aware distributed transaction. Here is a similar SO post. You need to use Oracle.ManagedDataAccessDTC.dll to facilitate that.
You might want to go through these links:
1. All about TransactionScope
2. How To Configure DTC to Support Oracle Transactions
Hope this helps.

Zend\Session\Container annoyingly locks while in use, what's your workaround?

I have a controller with two actions. One performs a very long computation and, at several steps, stores its status in a session container:
public function longAction()
{
    $session = new Container('SessionContainer');
    $session->finished = 0;
    $session->status = "A";
    // do something long
    $session->status = "B";
    // do more long jobs
    $session->status = "C";
    // ...
}
The second action:
public function shortAction()
{
    $session = new Container('SessionContainer');
    return new JsonModel(
        array(
            'status' => $session->status
        )
    );
}
These are both called via AJAX, but I can demonstrate the same behavior just by using browser tabs. I first call /module/long, which does its thing. While it completes its tasks, calling /module/short (which I thought would just echo JSON) stalls until /module/long is done!
When I brought this up, some ZFers felt this was a valid protection against race conditions; but I can't be the only one with a use case that really doesn't care about that.
Any cheap tricks that avoid heading towards queues, databases, or memory caches? Trying to keep it lightweight.
This is the expected behavior, and here is why:
Sessions are identified using a cookie that stores the session id; this allows your browser to pick up the same session on the next request.
As your long process is using sessions, PHP will not call session_write_close() until the whole process execution is complete, meaning the session stays open (and locked) while the long process is running.
When you connect from another browser tab, the browser tries to pick up the same session (using the same cookie), which is still open and running the long process.
If you open the link in a different browser, you will see the page load fine without waiting for session_write_close() to be called, because it opens a separate session (however, you will not see the status text you want, as it's a separate session).
You could manually write and close the session with session_write_close() before starting the long work, but that's probably not the best way to go about things.
It's definitely worth looking at something like Gearman for this, there's not that much extra work, and it's designed especially for this kind of async job processing. Even writing status to the database would be better, but that's still not ideal.

Why does all data go away when restarting Neo4j?

I guess I don't understand this paradigm?
For a small single server or development environment... I hate having to load hundreds of thousands of records just to analyze them in a graph... am I missing the big picture here?
UPDATE (3/21/2012 10:38a):
My current setup:
Default Install
Default Configs
Server Setup
Creating nodes via REST API
How do you instantiate your database, embedded or server? Are you running ImpermanentGraphDatabase? That's the in-memory test database. If you use the normal EmbeddedGraphDatabase, your graph is persisted transactionally along the way as you insert your data.
Please give a little more information.
If you are using embedded Java, transactions must be committed and closed when saving objects, or the writes may be lost. In earlier versions this was done by calling finally { tx.finish(); }; in later versions (2.1+) the close happens automatically when the Transaction is opened in a try-with-resources statement. (This makes it possible to run into problems if the Transaction tx is instantiated outside the try clause.)
GraphDatabaseService graphDb = new GraphDatabaseFactory().newEmbeddedDatabase(DB_PATH);
try (Transaction tx = graphDb.beginTx()) {
    // create some nodes here
    tx.success(); // mark the transaction successful so it commits on close
}

Selectively prevent Session from being created

In my app, I have an external monitor that pings the app every few minutes and measures its uptime / response time. Every time the monitor connects, a new server session is created, so when I look at the number of sessions, it's always a minimum of 15, even during times when there are no actual users.
I tried to address this by putting the session creation code into a filter, but that doesn't seem to do it - I guess the session automatically gets created when the user opens the first page?
all() {
    before = {
        if (actionName == 'signin') {
            def session = request.session // creates session if not exists
        }
    }
}
I can configure the monitor to pass in a parameter if I need to (e.g. http://servername.com/?nosession), but I'm not sure how to make sure the session isn't created.
Right now there is nothing you can do to prevent the session creation. See: http://jira.codehaus.org/browse/GRAILS-1238
Fortunately, until you are hitting high numbers of requests per second, this isn't a huge problem. One thing we did to get around the false data in our "currently active users" report was to log the sessions to the database. We create a session record only when the user logs in. Then, on specifically mapped URLs, we "touch" that session record to update the last-accessed time. The session record keeps track of user agent, IP, etc. and is useful for many reasons. Doing something like this would get around the bogus session count.

ASP MVC - Comet/Reverse Ajax/PUSH - Is this code thread safe?

I'm trying to implement comet-style features by polling the server for changes in data and holding the connection open until there is something to respond with.
First, I have a static variable on my controller which stores the time the data was last updated:
public static DateTime lastUpdateTime = DateTime.MinValue;
Whenever the data I'm polling changes, this variable is updated.
I then have an Action, which takes the last time that the data was retrieved as a parameter:
public ActionResult Push(DateTime lastViewTime)
{
    while (lastUpdateTime <= lastViewTime)
    {
        System.Threading.Thread.Sleep(10000);
    }
    return Content("testing 1 2 3...");
}
If lastUpdateTime is less than or equal to lastViewTime, we know there is no new data, so we simply hold the request in a loop, keeping the connection open until there is new information to send back to the client. The client handles the response and then makes a new request, so a connection is essentially always open.
This seems to work fine, but I'm concerned about thread safety. Is this OK? Does lastUpdateTime need to be marked as volatile? Is there a better way?
Thanks
Edit: perhaps I should use a lock object when I update the time value:
private static object lastUpdateTimeLock = new object();
// ...
lock (lastUpdateTimeLock)
{
    lastUpdateTime = DateTime.Now;
}
Regarding your original question, you do have to be careful with DateTime: it's a 64-bit struct, so its reads and writes aren't guaranteed to be atomic on a 32-bit runtime, and the volatile keyword can't be applied to it. Only a few data types (e.g. ints, bools, object references) can be read and written natively without locking (assuming you're not using Interlocked). If you want to avoid any issues with DateTimes, you can store the ticks as a long and use the Interlocked class to manage them.
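A minimal sketch of that ticks-plus-Interlocked approach (the LastUpdateClock wrapper is an illustrative name, not from the original post):
using System;
using System.Threading;

static class LastUpdateClock
{
    // Store the timestamp as ticks so it can be read and written
    // atomically with Interlocked, even on 32-bit runtimes.
    private static long lastUpdateTicks = DateTime.MinValue.Ticks;

    public static DateTime LastUpdateTime
    {
        get { return new DateTime(Interlocked.Read(ref lastUpdateTicks)); }
        set { Interlocked.Exchange(ref lastUpdateTicks, value.Ticks); }
    }
}
The writer sets LastUpdateClock.LastUpdateTime = DateTime.Now when the data changes, and the polling loop compares LastUpdateClock.LastUpdateTime <= lastViewTime; no lock is needed on either side.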
That said, if you're looking for comet capabilities in a .NET application, you're unfortunately going to have to go a lot further than what you've got here. IIS/ASP.NET won't scale with the approach you've got in place right now; you'll hit limits before you even get to 100 users. Among other things, you will have to switch to using async handlers, and implement a custom bounded thread pool for the incoming requests.
If you really want a tested solution for ASP.NET/IIS, check out WebSync, it's a full comet server designed specifically for that purpose.
Honestly, my concern would be the number of connections kept open and the polling loop. On the connections you're probably fine, but I'd definitely want to do some load testing to be sure.
The while (lastUpdateTime <= lastViewTime) loop does need a Thread.Sleep in it, as you have; without one it would consume a lot of CPU cycles needlessly.
The lock does not seem necessary to me around lastUpdateTime = DateTime.Now since the previous value does not matter. If it were lastUpdateTime = lastUpdateTime + 1 or something, then maybe it would be.
