Cloud with shared memory

Do you know if any of the well-known clouds, e.g. Amazon, Azure, Google App Engine, have a shared-memory feature? E.g. you can access data fast (from memory) and it is automatically synchronized with other nodes (machines... whatever).

Not quite shared memory, but Windows Azure has a Cache you can use. It's configurable from 128MB to 4GB, and exists outside of a specific deployment, letting you share cache content across instances, deployments, even on-premises applications.
More info on Cache is here.
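For illustration, here is a minimal sketch of the shared-cache pattern described above. It uses a Redis-style Python client purely as a stand-in (the Azure Cache of that era exposed a .NET API); the endpoint, credentials, and key names are hypothetical.

```python
# Sketch of a shared cache: every instance connects to the same cache
# endpoint, so data written by one node is visible to all others.
# The endpoint and credentials below are placeholders.
import redis

cache = redis.StrictRedis(
    host="mycache.example.cache.windows.net",  # hypothetical cache endpoint
    port=6380,
    password="<access-key>",
    ssl=True,
)

def load_profile_from_db(user_id):
    # Stand-in for a slow database or backend call.
    return f"profile-for-{user_id}"

def get_user_profile(user_id):
    """Read-through cache: try the shared memory tier first, then fall back."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached.decode("utf-8")        # fast path, served from memory
    profile = load_profile_from_db(user_id)
    cache.setex(key, 300, profile)           # visible to all other nodes for 5 minutes
    return profile
```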

Is there a way to have a shared (temp) folder between apps or multiple instances of apps on Bluemix?

I am running a Rails app on Bluemix and want to use carrierwave for file uploads. So far no problem, as I am using external storage to persist the files (ftp, s3, webdav etc.). However, in order to keep performance good I need to enable caching with carrierwave_backgrounder, and here it starts to get tricky. The thing is that I need to specify a temp folder for backgrounding the upload process (the temp folder where the file remains before it is persisted to the actual storage), which is shared between all possible workers and app instances. Can this be achieved at all, and if so, how?
Check out Object Storage - you can store files and then delete them when you no longer need them. Redis is another option, as are any of the NoSQL databases available on Bluemix.
Typically, in any cloud you never store on the file system of your VM or PaaS environment. The reason is that when you scale out you have multiple VMs, and a file written on one VM will not be available when hundreds of VMs come up. The recommended practice is to look for storage services that the cloud platform provides. In Bluemix you have storage options such as Cloud Object Storage, File Storage and Block Storage.
As suggested before, you can take a look at Cloud Object Storage and utilize the service. Here is the documentation for Cloud Object Storage: https://ibm-public-cos.github.io/crs-docs/. It contains a quick start guide and covers storing, retrieving and API usage. Hope this helps.
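If you go the Cloud Object Storage route, a rough sketch of the flow (assuming the S3-compatible API; the endpoint, bucket name, and credentials are placeholders) might look like this:

```python
# Store an uploaded file in Cloud Object Storage instead of the local
# filesystem, so every app instance and worker sees the same object.
# Endpoint, bucket, and credentials are placeholders; see the linked docs.
import boto3

cos = boto3.client(
    "s3",
    endpoint_url="https://s3-api.us-geo.objectstorage.softlayer.net",  # placeholder endpoint
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
)

# Upload the temp file produced by the upload worker.
cos.upload_file("/tmp/uploads/avatar.png", "my-bucket", "uploads/avatar.png")

# Any other instance can retrieve it later, then delete it when done.
cos.download_file("my-bucket", "uploads/avatar.png", "/tmp/avatar-copy.png")
cos.delete_object(Bucket="my-bucket", Key="uploads/avatar.png")
```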

How to properly manage storage in Jelastic

Okay, another question.
In AWS I have EBS, which allows me to create volumes, define iops/size for them, mount to desired EC2 machines and take snapshots.
How can I achieve the same features in Jelastic? I have the option to create a "Storage Container", but it belongs to only one environment. How can I back up this volume?
Also, what's the best practice for managing storage for things like databases? Use a separate storage container?
I have the option to create a "Storage Container", but it belongs to only one environment.
Yes, the Storage Container belongs to one environment (either as part of one of your other environments, or as its own), but you can mount it in one or more other containers (i.e. inside containers of other environments).
You can basically consider a storage container to be similar to AWS EBS: it can be mounted anywhere you like (multiple times even) in containers within environments in the same region.
How can I back up this volume?
Check your hosting provider's backup policy. In our case we perform backups of all containers for our customers for free. Customers do not need to take additional backups themselves. No need for those extra costs and steps... It might be different at some other Jelastic providers so please check this with your chosen provider(s).
If you wish to make your own backups, you can define a script to do it and run it from cron, for example. That script can transfer archives to S3 or anywhere else you wish.
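As a rough sketch of such a script (paths, bucket name, and schedule are placeholders; credentials are assumed to come from the environment):

```python
# Cron-driven backup sketch: archive the mounted storage directory and
# upload the archive to S3. All names below are placeholders.
import datetime
import tarfile

import boto3

STORAGE_DIR = "/data"          # placeholder: path where the storage container is mounted
BUCKET = "my-backups"          # placeholder S3 bucket

def backup():
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")
    archive_path = f"/tmp/storage-{stamp}.tar.gz"

    # Create a compressed archive of the storage directory.
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(STORAGE_DIR, arcname="storage")

    # Transfer the archive to S3 (credentials read from the environment).
    boto3.client("s3").upload_file(archive_path, BUCKET, f"jelastic/{stamp}.tar.gz")

if __name__ == "__main__":
    backup()   # run from cron, e.g. once per day
```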
What's the best practice for managing storage for things like databases?
Just like with AWS, you may experience performance issues if you use remote storage for database access. Jelastic should generally give you lower latency than EBS, but even so I recommend keeping your database storage local (not via storage containers).
Unlike AWS EC2, you do not have the general risk of local storage disappearing (i.e. your Jelastic containers' local storage is not ephemeral; you can safely write data there and expect it to be persistent).
If you need multiple database nodes, it is recommended to use clustering features at the database software level (master-master or master-slave replication, for example) instead of sharing the filesystem.
Remember that any shared filesystem is a shared (single) point of failure. What you gain in application / software convenience you may also lose in reliability / high availability. It's often worth taking the extra steps in your application to handle this issue another way, or perhaps consider using lsyncd (there are Jelastic marketplace addons for this) to replicate parts of your filesystem instead of mounting a shared storage container.

Can we connect a storage server to an application server as an external hard disk?

I am new to the storage domain. Can someone please help me understand the things below?
Can a storage server be connected to an application server?
1. How are storage servers different from application servers?
2. Can multiple application servers connect to storage servers over the network?
3. What kind of files will be served by NAS and SAN servers?
Firstly, this question belongs on the Server Fault Stack Exchange, but it is still a good conceptual question...
So the answers are:
Yes, storage servers can connect to application servers (app servers are in fact software frameworks, or a specific portion of a server program's implementation). Application servers communicate with storage servers to store, retrieve, and process data.
Apart from high disk space, what else is different about storage servers, you may ask? In many cases, they come with a host of specialized services. This can include storage management software, extra hardware for higher resilience, a range of RAID (redundant array of independent disks) configurations, and extra network connections to enable more users or desktops to be connected to them.
An application server, on the other hand, is a software program that handles all application operations between users and an organization's backend business applications or databases. An application server is typically used for complex transaction-based applications. To support high-end needs, an application server has to have built-in redundancy, monitoring for high availability, high-performance distributed application services and support for complex database access. For mobile computing, a mobile app server is mobile middleware that makes back-end systems accessible to mobile applications to support mobile application development. Frankly speaking, application servers sit in the territory between database servers and the end user, and they often connect the two.
Multiple application servers CAN, and in reality DO, connect to storage servers over the network or even directly. But for concurrent access to data there must be guaranteed reliability of data between transactions, something like the ACID properties.
Coming to the third one: NAS, it turns out, is NOT really storage networking. Actual network-attached storage would be storage attached to a storage-area network (SAN). NAS, on the other hand, is just a specialized server attached to a local-area network. All it does is make its files available to users and applications connected to that NAS box, much the same as a storage server. To further conceptualize the difference between a NAS and a SAN: NAS appears to the client OS (operating system) as a file server (the client can map network drives to shares on that server), whereas a disk available through a SAN still appears to the client OS as a disk, visible in disk and volume management utilities (along with the client's local disks), and available to be formatted with a file system and mounted.
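To make the distinction concrete, here is a tiny, hypothetical illustration: a NAS share is consumed through file paths on a mount, while a SAN LUN shows up as a raw block device that still needs a filesystem before any files exist on it (the device name and mount point below are made up).

```python
# NAS: the share is already a file server; you just read and write paths
# under the mount point (hypothetical path).
with open("/mnt/nas_share/report.txt", "w") as f:
    f.write("stored as a file on the NAS\n")

# SAN: the LUN appears to the client OS as a local disk (hypothetical device).
# Reading it yields raw blocks, not files, until it is partitioned,
# formatted with a filesystem, and mounted.
with open("/dev/sdb", "rb") as disk:
    first_sector = disk.read(512)   # raw bytes, no filesystem semantics
```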

how nuodb manages the storage size increase

Say my data store is going to increase in size. If the data grows, how will the storage manager handle it? Does the storage manager split the data across different domain machines (definitely that is not the case)?
How exactly would the process work? What is the recommendation in this area, a key-value store?
If you have a storage manager that is soon to run out of disk space, you can start up a new storage manager with a larger disk subsystem, or one that points to extensible cloud storage such as Amazon S3. Once the new storage manager is up to date, the old one can be taken offline. This entire operation can be done while the database is running. Generally, we also recommend that you always run with at least 2 storage managers for redundancy.
If you have more questions, feel free to direct them to the NuoDB forum:
http://www.nuodb.com/community
NuoDB supports multiple back-end storage devices, including the Hadoop Distributed File System (HDFS). If you start a broker configured for HDFS, you can use HDFS tools to expand distributed storage on-the-fly and there's no need for any NuoDB admin operations. As Duke described, you can transition from a file-based Storage Manager to an HDFS one without interrupting services.
NuoDB interfaces with the storage layer using filesystem semantics for discrete storage units called "atoms". These map easily into the HDFS directory structure, simplifying administration on that end.

Default DB Size of Heroku App

I am pretty new to web dev and thought I needed a 20GB shared DB in order to test out apps that store more than 5MB.
My friend let me know this was not true because I am using a single app. He told me shared DBs were used for sharing data between multiple applications.
If so, what is Heroku's default, unshared DB size? I had difficulty finding this information on Heroku's website and through Google searches.
Could anyone chime in?
A shared database in this case means the server itself is shared -- so the server's CPU will be used to serve other databases in addition to your own.
A dedicated database server's CPU's are yours and yours alone.
If you need to exceed the 5MB threshold, you need to add the 20GB add-on. More information: http://www.heroku.com/pricing
