I'm planning to subscribe to the Aura cloud managed service on the plan with 4 GB memory, 0.8 CPU, and 8 GB storage.
But the storage is not enough. Is it possible to increase the storage within this plan?
How many CPU cores does this plan include if it is listed as 0.8 CPU?
The Aura pricing structure is very simple. You can increase storage (or memory or CPU) by paying for a higher-priced tier. Of course, you can contact Neo4j directly to ask if they have any other options.
0.8 CPU means that you get the equivalent of 80% of a single core.
You can get more details from the Aura knowledge base and developer guide.
I have a pretty big model I'm trying to run (30 GB of RAM minimum), but every time I start a new instance, I can adjust the CPU RAM but not the GPU. Is there a way on Google's AI notebook service to increase the RAM for a GPU?
Thanks for the help.
In short: you can't. You might consider switching to Colab Pro, which offers better GPUs, for example:
With Colab Pro you get priority access to our fastest GPUs. For example, you may get access to T4 and P100 GPUs at times when non-subscribers get K80s. You also get priority access to TPUs. There are still usage limits in Colab Pro, though, and the types of GPUs and TPUs available in Colab Pro may vary over time.
In the free version of Colab there is very limited access to faster GPUs, and usage limits are much lower than they are in Colab Pro.
That being said, don't count on getting a best-in-class GPU all to yourself for ~10 USD/month. If you need a high-memory dedicated GPU, you will likely have to resort to a dedicated service. You should easily find services offering 24 GB cards for less than 1 USD/hour.
Yes, you can create a customized AI Notebook and also edit its hardware after creating it. If you still aren't able to change these settings, check whether you are hitting the GPU quota limit.
What is the optimal minimum or recommended hardware (mostly cores and RAM) for an Orleans silo, for applications with both CPU-bound and IO-bound tasks?
And by which criteria does Orleans decide to scale, adding more nodes in the cloud?
We recommend at least 4-core machines; 8 cores is even better. In terms of memory, it mostly depends on your application usage; Orleans itself is pretty modest with its internal memory usage. The general guideline is to prefer fewer, larger machines over more, smaller machines.
Orleans does not automatically add new nodes. This should be done outside Orleans, via the mechanisms provided by the cloud provider. Once new nodes are added, Orleans will automatically join them to the Orleans cluster and will start utilizing them.
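As a rough sketch of how that joining works (assuming Orleans 3.x and the Microsoft.Orleans.Clustering.AzureStorage package; the cluster/service ids and the STORAGE_CONNECTION variable are placeholders), every new VM simply starts a silo pointed at the same membership table:

```csharp
using System;
using System.Threading.Tasks;
using Orleans.Configuration;
using Orleans.Hosting;

class Program
{
    static async Task Main()
    {
        // Every VM the cloud autoscaler brings up runs a silo configured like this.
        // Because all silos point at the same membership table, a newly started
        // silo announces itself there and joins the existing cluster automatically.
        var silo = new SiloHostBuilder()
            .Configure<ClusterOptions>(options =>
            {
                options.ClusterId = "my-cluster";   // placeholder ids
                options.ServiceId = "my-service";
            })
            // Shared membership table in Azure Table storage
            // (package: Microsoft.Orleans.Clustering.AzureStorage).
            .UseAzureStorageClustering(options =>
                options.ConnectionString =
                    Environment.GetEnvironmentVariable("STORAGE_CONNECTION"))
            .Build();

        await silo.StartAsync();
        Console.WriteLine("Silo started; press Enter to shut down.");
        Console.ReadLine();
        await silo.StopAsync();
    }
}
```

The cloud provider's autoscaler decides when to add the VM; the silo process then registers itself in the shared table and the rest of the cluster starts routing work to it.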
Can anyone here help me compare the monthly price of these two Elasticsearch hosting services?
Specifically, what is the equivalent of the Bonsai10 plan that costs $50/month when compared against Amazon Elasticsearch pricing?
I just want to know which of the two services saves me money on a monthly basis for my Rails app.
Thanks!
Bonsai10 is 8 cores, 1GB memory, 10GB disk, limited to 20 shards and 1 million documents.
Amazon's AES doesn't have comparable sizing/pricing; all options will be more expensive.
If you want 10GB of storage, you could run a single m3.large.elasticsearch (2 cores, 7.5GB memory, 32GB disk) at US$140/month.
If you want 8 cores, a single m3.2xlarge.elasticsearch (8 cores, 30GB memory, 160GB disk) runs US$560/month.
Elastic's cloud is more comparable. 1GB memory 16GB disk will run US$45/month. They don't publish the CPU count.
Of the other better hosted Elasticsearch providers (better because they list the actual resources you receive; full list below), Qbox offers the lowest-cost comparable plan at US$40/month for 1GB memory and 20GB disk. No CPU count is published: https://qbox.io/pricing
Objectrocket
Compose.io (an IBM company)
Qbox
Elastic
We're planning to evaluate and eventually potentially purchase perfino. I went quickly through the docs and cannot find the system requirements for the installation. I also cannot find its compatibility with JBoss 7.1. Can you provide details, please?
There are no hard system requirements for disk space; it depends on the number of business transactions that you're recording. All data will be consolidated, so the database reaches a maximum size after a while, but it's not possible to say what that size will be. Consolidation times can be configured in the general settings.
There are also no hard system requirements for CPU and physical memory. A low-end machine will have no problems monitoring 100 JVMs, but the exact details again depend on the number of monitored business transactions.
JBoss 7.1 is supported. "Supported" means that web service and EJB calls can be tracked between JVMs, otherwise all application servers work with perfino.
I haven't found any official system requirements, but this is what we figured out experimentally.
We collect about 10,000 transactions a minute from 8 JVMs. We have a lot of distinct and long SQL queries. We use an AWS machine with 2 vCPUs and 8 GB RAM.
When the Perfino GUI is not being used, the CPU load is low. However, for the GUI to work properly, we had to modify perfino_service.vmoptions to set -Xmx6000m. Before that we had experienced multiple OutOfMemoryErrors in Perfino when filtering in the transactions view. After changing the memory settings, the GUI runs fine.
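For reference, the change is just an extra line in perfino_service.vmoptions; the 6000 MB figure is what worked for our volume, not an official recommendation:

```
-Xmx6000m
```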
This means that you need a machine with about 8GB RAM. I guess this depends on the number of distinct transactions you collect. Our limit is high, at 30,000.
After 6 weeks of usage, there's 7GB of files in the perfino directory. Perfino can clear old recordings after a configurable time.
I'm working on a desktop application that will produce several in-memory datasets as an intermediary before being committed to a database.
Obviously I'm going to try to keep the size of these to a minimum, but are there any guidelines on thresholds I shouldn't cross for good functionality on an 'average' machine?
Thanks for any help.
There is no "average" machine. There is a wide range of still-in-use computers, including those that run DOS/Win3.1/Win9x and have less than 64MB of installed RAM.
If you don't set any minimum hardware requirements for your application, at least consider the oldest OS you're planning to support, and use the official minimum hardware requirements of that OS to gain a lower-bound assessment.
Generally, if your application is going to consume a considerable amount of RAM, you may want to let the user configure the upper bounds of the application's memory management mechanism.
That said, if you decide to dynamically manage the upper bounds based on realtime data, there are quite a few things you can do.
If you're developing a Windows application, you can use WMI to get the system's total memory amount, and base your limitations on that value (say, use up to 5% of the total memory).
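A minimal sketch of that approach, assuming a .NET desktop app with a reference to System.Management (the 5% budget is just the illustrative figure from above):

```csharp
using System;
using System.Management; // add a reference to System.Management

class MemoryBudget
{
    static void Main()
    {
        // Ask WMI (Win32_ComputerSystem) for the total installed physical memory.
        using (var searcher = new ManagementObjectSearcher(
            "SELECT TotalPhysicalMemory FROM Win32_ComputerSystem"))
        {
            foreach (ManagementObject system in searcher.Get())
            {
                ulong totalBytes = (ulong)system["TotalPhysicalMemory"];

                // Illustrative policy from above: budget ~5% of installed RAM.
                ulong budgetBytes = totalBytes / 20;

                Console.WriteLine(
                    "Installed RAM: {0} MB, in-memory dataset budget: {1} MB",
                    totalBytes / (1024 * 1024), budgetBytes / (1024 * 1024));
            }
        }
    }
}
```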
In .NET, if your data structures are complex and you find it hard to assess the amount of memory you consume, you can query the Garbage Collector for the amount of allocated memory using GC.GetTotalMemory(false), or use a System.Diagnostics.Process object.
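And a small sketch of those two measurements, using only GC.GetTotalMemory and System.Diagnostics.Process as mentioned above:

```csharp
using System;
using System.Diagnostics;

class MemoryUsage
{
    static void Main()
    {
        // Managed heap size as tracked by the GC; 'false' means don't force
        // a full collection before measuring.
        long managedBytes = GC.GetTotalMemory(false);

        // Whole-process view (includes unmanaged allocations as well).
        using (Process self = Process.GetCurrentProcess())
        {
            Console.WriteLine("Managed heap:  {0} MB", managedBytes / (1024 * 1024));
            Console.WriteLine("Private bytes: {0} MB", self.PrivateMemorySize64 / (1024 * 1024));
            Console.WriteLine("Working set:   {0} MB", self.WorkingSet64 / (1024 * 1024));
        }
    }
}
```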