Hazelcast embedded, local or in-process example

I would like to use Hazelcast as an embedded, local or in-process cache; I think these three concepts are the same.
Is the following a correct example?
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
ICacheManager cacheManager = hz.getCacheManager();
ICache<String, CaLpgDataCollectionDto<CaBigNumber>> icache = cacheManager.getCache("Test");
icache.put("Test1", lpgDatasource);
CaLpgDataCollectionDto<CaBigNumber> testDatasource = icache.get("Test1");
If not, I would like to see one.
Kind regards.

Yours is the correct embedded usage of the Hazelcast cache.
You can find many more samples in the Hazelcast code samples repository, especially the JCache samples if you are interested in the caching API.
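For reference, here is a minimal, self-contained sketch of the same embedded (in-process) usage via the standard JCache API. It assumes only that the Hazelcast jar (3.7+) is on the classpath so that its JCache provider is discovered; the cache name, the String value type and the class name are placeholders:
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.spi.CachingProvider;

public class EmbeddedCacheExample {
    public static void main(String[] args) {
        // Picks up Hazelcast's JCache (JSR-107) provider from the classpath and
        // starts an embedded (in-process) Hazelcast member.
        CachingProvider provider = Caching.getCachingProvider();
        CacheManager cacheManager = provider.getCacheManager();

        // Create a cache with a default configuration; substitute your own key/value types.
        Cache<String, String> cache =
                cacheManager.createCache("Test", new MutableConfiguration<String, String>());

        cache.put("Test1", "some value");
        System.out.println(cache.get("Test1"));

        provider.close(); // shuts down the embedded member
    }
}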


What do the prefixes (ep and vb) in Couchbase monitoring statistics mean?

While looking at the statistics for Couchbase monitoring, like ep_queue_size, vb_num_eject_replicas, ep_warmup_value_count, and vb_active_curr_items, I see that most of them are prefixed with ep_ or vb_. I have not been able to find any explanation of what these prefixes mean.
I have tried searching the docs at https://docs.couchbase.com/server/current/manage/monitor/monitoring-cli.html but have found nothing. I think it might be that ep_ stands for ephemeral bucket and vb_ stands for vBucket, but that's just a wild guess.
ep refers to Eventually Persistent, i.e. the eventually persistent data layer of Couchbase. Reference to it is here.
vb refers to vBucket. Reference to it is here.

Installation & Configuration of Gremlin-Neo4j on Windows

Hi, I am new to Gremlin and Neo4j. Can anyone please tell me how to install and configure this database?
I used this link http://tinkerpop.apache.org/docs/3.1.0-incubating/ for reference, but I can't configure it.
That's a really old version of TinkerPop you are referencing in that link. The latest version is 3.3.3; please consider using that instead.
The simplest way to get started is to just create a Graph instance, which will start Neo4j in embedded mode:
Graph graph = Neo4jGraph.open("data/neo4j");
GraphTraversalSource g = graph.traversal();
List<Vertex> vertices = g.V().toList();
To have greater control over the Neo4j-specific configuration rather than relying on all the defaults, you will want to create a properties file or a Configuration object and pass that to open() rather than just the directory where your data is:
Configuration conf = new BaseConfiguration();
conf.setProperty("gremlin.neo4j.directory","/tmp/neo4j");
conf.setProperty("gremlin.neo4j.multiProperties",false);
conf.setProperty("gremlin.neo4j.conf.dbms.transaction.timeout","60000s");
Graph graph = Neo4jGraph.open(conf);
GraphTraversalSource g = graph.traversal();
List<Vertex> vertices = g.V().toList();
I'd suggest sticking to embedded mode initially, but connecting in high availability mode is also possible using the "configuration" approach above with specifics defined here in the documentation.
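To tie installation and code together, here is a self-contained sketch of the embedded setup above with the imports spelled out. It assumes TinkerPop 3.3.x with the org.apache.tinkerpop:neo4j-gremlin module and a compatible org.neo4j:neo4j-tinkerpop-api-impl artifact on the classpath; the data directory and the class name are placeholders:
import java.util.List;

import org.apache.commons.configuration.BaseConfiguration;
import org.apache.commons.configuration.Configuration;
import org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.Vertex;

public class EmbeddedNeo4jExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new BaseConfiguration();
        conf.setProperty("gremlin.neo4j.directory", "/tmp/neo4j"); // where Neo4j stores its data
        conf.setProperty("gremlin.neo4j.multiProperties", false);

        Graph graph = Neo4jGraph.open(conf);     // starts Neo4j embedded, in-process
        GraphTraversalSource g = graph.traversal();

        List<Vertex> vertices = g.V().toList();  // read all vertices
        System.out.println("vertex count: " + vertices.size());

        graph.close();                           // releases the embedded database
    }
}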

--workerCacheMB setting missing in Apache Beam 0.6?

In Google Cloud Dataflow 1.x, I presumably had access to this critical pipeline option called:
workerCacheMb
I tried to set it in my Beam 0.6 pipeline, but couldn't do so (it said that no such option existed). I then scoured the options source code to see if any option had a similar name, but I still couldn't find it.
I need to set it because I think my workflow's incredible slowness is due to a side input that is 3 GB yet appears to be taking well over 20 minutes to read. (I have a View.asList() and then I'm trying to do a for-loop over the list; it's taking more than 20 minutes and still going; even at 3 GB, that's way too slow.) So I was hoping that setting workerCacheMb would help. (The only other theory I have is to switch from SerializableCoder to AvroCoder....)
Are you using the right class of options?
The following code works for me in Beam:
DataflowWorkerHarnessOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().create()
        .cloneAs(DataflowWorkerHarnessOptions.class);
options.setWorkerCacheMb(3000);
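For completeness, here is a sketch of how those options are then attached when the pipeline is constructed, so they travel to the workers. It assumes the Dataflow runner artifact is on the classpath; the class name is a placeholder and the package of DataflowWorkerHarnessOptions may vary between Beam versions:
import org.apache.beam.runners.dataflow.options.DataflowWorkerHarnessOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class WorkerCacheExample {
    public static void main(String[] args) {
        // Parse the usual pipeline arguments, then view them as worker-harness options.
        DataflowWorkerHarnessOptions options = PipelineOptionsFactory.fromArgs(args)
                .withValidation()
                .create()
                .cloneAs(DataflowWorkerHarnessOptions.class);
        options.setWorkerCacheMb(3000); // larger worker-side cache for big side inputs

        Pipeline p = Pipeline.create(options); // the options are attached here
        // ... apply your transforms ...
        p.run();
    }
}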

Symfony2 free -m out of memory

I have a Symfony2 app out there, but we have RAM problems... It works like a charm when there are 50 active people (Google Analytics).
I usually select data from the DB like this:
$qb=$this->createQueryBuilder('s')
->addSelect('u')
->where('s.user = :user')
->andWhere('s.admin_status = false')
->andWhere('s.typ_statusu != :group')
->setParameter('user', $user)
->setParameter('group', 'group')
->innerJoin('s.user', 'u')
->orderBy('s.time', 'DESC')
->setMaxResults(15);
return $qb->getQuery()->getResult();
The queries are fast; I don't have a problem with them.
Please let me know exactly what you need and I will paste it here. I really need to fix this.
BUT HERE COMES THE PROBLEM: when there are 470 people at the same time (Google Analytics), about 7 GB of memory is gone... then it falls back to 5 GB after the peak. But WHY SO MUCH? My scripts use 10-17 MB of memory in app_dev.
I also use APC. How can I solve this situation? Why is so much memory consumed? Thanks for any advice!
What's your average memory?
BTW: if I do not solve this, I will be in big trouble.
One problem could be Doctrine, if you are hydrating too many objects in every single request.
Set max execution time of a script to only 30 seconds:
max_execution_time = 30
Set APC shm_size to something reasonable compared to your memory:
apc.shm_size = 256M
Then optimize your query. And if you use PHP/Symfony from the CLI, you had better limit the resource usage for PHP on the CLI too.
Make sure you understand memory consumption on Linux correctly: http://blog.scoutapp.com/articles/2010/10/06/determining-free-memory-on-linux
To speed up APC you can remove the modified-file check with apc.stat = 0, but then you will need to clear the APC cache every time you modify existing files: http://www.php.net/manual/en/apc.configuration.php#ini.apc.stat
To reduce memory consumption, reduce hydration by adding ->select('x') and fetching only the essentials.
To optimize memory consumption, enable the MySQL query cache with something like this in /etc/mysql/my.cnf:
query_cache_size=128M
query_cache_limit=1M
Do not forget to enable and check your slow-query-log to avoid bottlenecks.
I suspect that your page has more than one query. How many queries happen on the page? The worst thing about Doctrine is the ability to issue queries through a getter (e.g. getComments()). If you are using a many-to-many relation, this leads to huge problems. You can see all queries via the profiler in the dev environment.
It is also possible that the problem lies in the settings of Apache or PHP. Incorrect PHP-FPM settings lead to problems too. The best solution is to stress test your server with tools like siege and watch what goes on through htop or top. 300 people can be a heavy load for a "naked" Apache.
Have you ever tried to retrieve scalar results instead of a collection of objects?
// [...]
return $qb->getQuery()->getScalarResult();
http://docs.doctrine-project.org/en/latest/reference/query-builder.html#executing-a-query
http://doctrine-orm.readthedocs.org/en/2.0.x/reference/dql-doctrine-query-language.html#query-result-formats
At the Symfony configuration level, have you double-checked your configuration to ensure caching has been enabled properly?
http://symfony.com/doc/current/reference/configuration/doctrine.html#caching-drivers
Detaching entities from your entity manager could prove useful depending on your overall process:
http://docs.doctrine-project.org/en/2.0.x/reference/working-with-objects.html#detaching-entities

omniORB: Read current ORB setting

It is possible to use CORBA::ORB_init to set the native code set for the ORB.
But if, in an application, an ORB is retrieved with different configurations, the ORB is initialized only once.
"-ORBconfigFile config1.cfg"
CORBA::ORB_var orb1 = CORBA::ORB_init(orbInitParams.argc(), orbInitParams.argv());
"-ORBconfigFile config2.cfg"
CORBA::ORB_var orb2 = CORBA::ORB_init(orbInitParams.argc(), orbInitParams.argv());
But the thing is that the first one wins. So in a big application, where the caller of the second ORB_init does not know about the first caller, it will get the ORB configured as in case 1.
This matters if configuration 1 uses
nativeCharCodeSet = ISO-8859-1
while configuration 2 uses
nativeCharCodeSet = UTF-8
Is there a way to read the ORB settings to check whether they were applied successfully?
Why this comes up: I am using omniORB in a DLL (that's where I initialize it). Now the application has a second component using omniORB which comes first, so I lost my UTF-8 configuration.
With omniORB it seems it is not possible to have two ORBs in one process; or is it possible to read the configuration?
