The user interface for Autonomous Database (Shared) appears to impose a limit of 128 OCPUs and 128 TB of storage. Is there a way to provision a database with more OCPUs or storage than this?
Autonomous Database on Shared Infrastructure does have a 128 OCPU and 128 TB storage limit in the user interface. However, you can provision larger databases by contacting Oracle via a service request or via your sales representative.
For more than 128 OCPUs, you can also enable auto-scaling, which gives you immediate access to up to 3 × 128 OCPUs as your workload requires it.
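If you manage the database through the OCI command-line interface, enabling auto-scaling is a single update call. A minimal sketch (the OCID below is a placeholder for your own database's identifier):

    # Enable OCPU auto-scaling on an existing Autonomous Database
    oci db autonomous-database update \
        --autonomous-database-id ocid1.autonomousdatabase.oc1..example \
        --is-auto-scaling-enabled true

The same setting is also exposed as an auto-scaling checkbox in the console's scale dialog.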
Ref - I am a product manager for Oracle Autonomous Database.
Related
I am planning to subscribe to the Aura managed cloud service on the 4 GB memory, 0.8 CPU, 8 GB storage plan.
But the storage is not enough. Is it possible to increase the storage on this plan?
How many CPU cores does this plan include if it is listed as 0.8 CPU?
The Aura pricing structure is very simple. You can increase storage (or memory or CPU) by paying for a higher-priced tier. Of course, you can contact Neo4j directly to ask if they have any other options.
0.8 CPU means that you get the equivalent of 80% of a single core.
You can get more details from the Aura knowledge base and developer guide.
For example, my company requires 4 VMs, 2 TB and 256k memory. Which cloud service provider is better for me among AWS, Azure and Google, and why?
Do you care about data location, price, security, network, or what?
If you give no information except some random server specs that even say "256k memory", maybe you should just get a cheap server from Hetzner or another bulk VPS provider.
We have enabled clustering on our two Ejabberd servers. But we are still getting a CPU overload alert once about 78 sessions (around 156 users) are connected to Ejabberd, and the server hangs.
Since the alert appears once around 150+ users are connected, which resources could we increase at the hardware level (memory, processor, etc.) to resolve this issue?
Ejabberd Version: 17.01
CPU Count: 4 (each server)
Memory: 8GB (each server)
You get CPU overload with just 78 clients connected on each node? Clearly something is wrong there!
Are the clients just connected, or are they sending many messages?
Do the accounts have a small roster, or do they have thousands of contacts?
What happens if only one node is used, not in a cluster: does it handle many more accounts, or does the CPU overload just as it does in the cluster?
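To put numbers behind those questions, a couple of quick checks from the shell can help (both commands come from ejabberd's standard ejabberdctl tool; connected_users_number assumes the mod_admin_extra module is enabled):

    # Basic node status: confirms the node is up and reachable
    ejabberdctl status

    # Count of currently connected users (requires mod_admin_extra)
    ejabberdctl connected_users_number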
Say my data store is going to grow in size: as the data increases, how would the storage manager manage it? Does the storage manager split the data across different machines (surely that is not the case)?
How exactly would the process work? What is the recommendation in this area for a key-value store?
If you have a storage manager that is soon to run out of disk space, you can start up a new storage manager with a larger disk subsystem, or one that points to extensible cloud storage such as Amazon S3. Once the new storage manager is up to date, the old one can be taken offline. This entire operation can be done while the database is running. Generally, we also recommend that you always run with at least two storage managers for redundancy.
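As a rough sketch of what that looks like from the nuodbmgr console (the command syntax below follows NuoDB 2.x-era documentation and may differ in your release; the broker host, database name, and archive path are placeholders):

    # Start an additional Storage Manager on a host with a larger disk;
    # "initialize true" creates a fresh archive that NuoDB then brings
    # up to date from the running database.
    nuodbmgr --broker broker-host --password <password> \
        --command "start process sm host bigdisk-host database mydb archive /bigdisk/archives/mydb initialize true"

Once the new archive has caught up, the original Storage Manager can be shut down from the same console without interrupting the database.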
If you have more questions, feel free to direct them to the NuoDB forum:
http://www.nuodb.com/community
NuoDB supports multiple back-end storage devices, including the Hadoop Distributed File System (HDFS). If you start a broker configured for HDFS, you can use HDFS tools to expand distributed storage on the fly, with no need for any NuoDB admin operations. As Duke described, you can transition from a file-based Storage Manager to an HDFS one without interrupting services.
NuoDB interfaces with the storage layer using filesystem semantics for discrete storage units called "atoms". These map easily into the HDFS directory structure, simplifying administration on that end.
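For example, growing HDFS capacity is an ordinary Hadoop operation; a sketch using the Hadoop daemon scripts of that era (exact script names vary between Hadoop 1.x and 2.x):

    # On a newly provisioned machine, start a DataNode so its disks
    # join the cluster's storage pool
    hadoop-daemon.sh start datanode

    # Verify the added capacity is visible cluster-wide
    hdfs dfsadmin -report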
When using a DBaaS (database as a service) such as Xeround with a Rails app hosted on EC2 instances, how is it possible to limit the number of concurrent connections to the database (to stay within the service plan's limits)? Is it necessary to do so at all?
I know that the ActiveRecord connection pool is per process and is thread-safe, but what if there are several processes (possibly across several different machines)?
Unfortunately, there is no way to correctly limit the number of connections across multiple clients (applications). The only way, which is fairly static and empirical, is to divide the maximum allowed number of connections by the number of apps and set the result as the connection limit per application.
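As a concrete illustration of that division, here is a sketch in Ruby (the plan limit and process count are made-up numbers, and connection_config is the pre-Rails-6.1 accessor, which fits a Rails app of this era):

    # config/initializers/connection_pool.rb -- hypothetical example
    # Suppose the DBaaS plan allows at most 40 concurrent connections
    # and we run 4 application processes across our EC2 instances.
    PLAN_MAX_CONNECTIONS = 40
    APP_PROCESSES        = 4

    # Each process may then open at most this many connections:
    pool_size = PLAN_MAX_CONNECTIONS / APP_PROCESSES  # => 10

    # Re-establish the connection with the computed pool size; all other
    # settings are inherited from config/database.yml.
    ActiveRecord::Base.establish_connection(
      ActiveRecord::Base.connection_config.merge(pool: pool_size)
    )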
Use ActiveRecord::ConnectionAdapters::ConnectionPool, the connection pool base class for managing Active Record database connections, and size it accordingly.