Is there a way I can use 100% of my network bandwidth with only one connection? [closed] - informix

I have a program that reads about a million rows and groups them; the client computer is not stressed at all, with no more than 5% CPU usage, and the network card is used at about 10% or less.
If I run four copies of the program on the same client machine, usage grows at the same rate: with the four programs running I get about 20% CPU usage and about 40% network usage. That makes me think I could improve performance by using threads to read the information from the database, but I don't want to introduce that complexity if a configuration change could achieve the same result.
Client: Windows 7, CSDK 3.50.TC7
Server: AIX 5.3, IBM Informix Dynamic Server Version 11.50.FC3

There are a few tweaks you can try, most notably setting the fetch buffer size. The environment variable FET_BUF_SIZE can be set to a value such as 32767. This may help you get closer to saturating the client and the network.
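For instance, on the client you could set it in the process environment before the connection is opened. A minimal Python sketch, assuming an Informix ODBC driver on the client and a placeholder DSN (the table name is hypothetical too):

```python
import os
import pyodbc  # assumes an Informix ODBC driver is installed on the client

# FET_BUF_SIZE must be in the process environment before the Informix
# client library opens the connection, or the default fetch buffer is used.
os.environ["FET_BUF_SIZE"] = "32767"

# "informix_dsn" is a placeholder DSN pointing at the IDS 11.50 server.
conn = pyodbc.connect("DSN=informix_dsn")
cur = conn.cursor()
cur.execute("SELECT * FROM big_table")  # hypothetical table
for row in cur:
    pass  # process/group the row here
```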
Multiple threads sharing a single connection will not help. Multiple threads using multiple connections might help - they'd each be running a separate query, of course.
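A rough sketch of that pattern, one connection per thread, each pulling its own key range (the DSN, table, and ranges are placeholders; the driver does the network I/O outside the Python interpreter lock, so the fetches can overlap):

```python
import threading
import pyodbc

def fetch_range(lo, hi):
    # Each thread opens its own connection; connections are not shared.
    conn = pyodbc.connect("DSN=informix_dsn")
    cur = conn.cursor()
    # Hypothetical query: split the table by a key range so the threads
    # don't all pull the same rows.
    cur.execute("SELECT * FROM big_table WHERE id BETWEEN ? AND ?", (lo, hi))
    rows = cur.fetchall()
    conn.close()
    return rows

threads = [
    threading.Thread(target=fetch_range, args=(i * 250_000, (i + 1) * 250_000 - 1))
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```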
If the client program is grouping the rows, we have to ask "why?". It is generally best to leave the server (DBMS) to do that. That said, if the server is compute bound and the client PC is wallowing in idle cycles, it may make sense to do the grunt work on the client instead of the server. Just make sure you minimize the data to be relayed over the network.

Related

Hosting <10gb of read-only data with reasonably fast but cheap access [closed]

tl;dr
I've currently got a PostgreSQL database with about 10 GB of data. This data is "archived" data, so it won't ever change, but I do need it to be queryable/searchable/available for reading as cheaply as possible for my Rails app.
Details:
I'm running a Digital Ocean server, but this is a non-profit project, so keeping costs low is essential. I'm currently using a low-end droplet: 4 GB Memory / 40 GB Disk / SFO2 - Ubuntu 16.04.1 x64.
Querying this data and loading the pages it's used on can occasionally take a significant amount of time. Some pages time out because they take over a minute to load. (Granted, those are very large pages, but still.)
I've been looking at moving the database over to Amazon Redshift, but the base prices seem high, as it's aimed at MUCH larger projects than mine.
Is my best bet to keep working on making the queries smaller and only rendering small bits at a time? Even basic pages have long query times because the server is so bogged down. Or is there a service similar to Redshift that will let me query the data quickly while also storing it externally for a reasonable price?
You can try Amazon S3 and Amazon Athena. S3 is super simple storage where you can dump your data in text files, and Athena is a service that provides a SQL interface to data stored on S3. S3 is very cheap, and Athena charges per query based on the data it scans. Since you said your data isn't going to change and will be queried rarely, it's a good fit. Check this out: 9 Things to Consider When Choosing Amazon Athena
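A minimal sketch with boto3, assuming the data has already been dumped to S3 and registered as an Athena table (the bucket, database, table, and region names are placeholders):

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-west-2")

# Hypothetical names: "archive_db" and "events" would be defined in the Athena/Glue catalog.
query = athena.start_query_execution(
    QueryString="SELECT user_id, COUNT(*) FROM events GROUP BY user_id LIMIT 100",
    QueryExecutionContext={"Database": "archive_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
qid = query["QueryExecutionId"]

# Poll until the query finishes, then read the first page of results.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
```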

Aurora vs MemSQL for low throughput but low latency purpose [closed]

We are changing our current user-side MySQL database. We have a website + mobile app for which users around the US query our database. The relevant data is contained in three tables, and a join query against the three tables is needed to send the relevant results to the users.
The results sent back to the users are small (<6 KB). If our objective is low latency and throughput is a low priority, which of the following two databases would perform better:
MemSQL or AWS Aurora?
They both have the same starting cost for hardware (~$0.28/hr). We are only considering these two databases at this stage so we can keep building on our in-house MySQL knowledge.
I like that I can outsource the DB headache to Aurora, but surely MemSQL's ability to read/write in memory makes it the lower-latency solution?
Nothing beats in-memory for speed and this is what MemSQL is built for. It stores tables (in rowstore mode) in memory and uses a custom query engine to cache queries into an intermediate language so it can execute as fast as possible. Aurora is more like a classic disk-based MySQL instance but with lots of infrastructure changes and optimizations to make the most of Amazon's services.
Before deciding, though, you need to figure out what "low latency" means: seconds or milliseconds?
MemSQL will be faster, most likely in the milliseconds depending on your query. Aurora will be slower but can probably deliver sub-second results, again depending on your query, the resources allocated, and how the data is structured.
Without more details, the answer is to decide what your performance tolerance is and then experiment.
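Since both MemSQL and Aurora speak the MySQL wire protocol, one way to run that experiment is to time the same join against each endpoint with identical client code. A rough sketch (the hosts, credentials, and query are placeholders):

```python
import time
import pymysql

def median_latency_ms(host, query, runs=50):
    conn = pymysql.connect(host=host, user="app", password="secret", database="appdb")
    timings = []
    with conn.cursor() as cur:
        for _ in range(runs):
            start = time.perf_counter()
            cur.execute(query)
            cur.fetchall()
            timings.append((time.perf_counter() - start) * 1000.0)
    conn.close()
    timings.sort()
    return timings[len(timings) // 2]

# Hypothetical three-table join mirroring the workload described above.
QUERY = """SELECT u.name, o.total, p.title
           FROM users u
           JOIN orders o ON o.user_id = u.id
           JOIN products p ON p.id = o.product_id
           WHERE u.id = 42"""

for host in ("memsql.example.internal", "aurora.example.internal"):
    print(host, median_latency_ms(host, QUERY), "ms")
```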

Memory breakdown based on its speed [closed]

In one technical discussion, the person asked me what things you look at when you buy a laptop.
Then he asked me to sort the different types of memory (e.g. RAM) by speed. In simple words, he wanted the memory hierarchy.
Technically speaking, a processor's registers are the fastest memory a computer has. They are very small, and people generally don't include them when talking about a CPU.
The quickest memory in a computer that would be advertised is the memory that is directly attached to the CPU. It's called cache, and in modern processors you have 3 levels - L1, L2, and L3 - where the first level is the fastest but also the smallest (it's expensive to produce and power). Cache typically ranges from several kilobytes to a few megabytes and is typically made from SRAM.
After that there is RAM. Today's computers use DDR3 for main memory. It's much larger and cheaper than cache, and you'll find sticks upwards of 1 gigabyte in size. The most common type of RAM today is DRAM.
Lastly storage space, such as a hard drive or flash drive, is a form of memory but in general conversation it's grouped separately from the previous types of memory. E.g. you would ask how much "memory" a computer has - meaning RAM - and how much "storage" it has - meaning hard drive space.
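You can see the hierarchy from software: the same random reads are much faster when the working set fits in cache than when it has to come from main memory. A rough NumPy illustration (the sizes are arbitrary and the absolute numbers depend on the machine):

```python
import time
import numpy as np

small = np.arange(64_000, dtype=np.int64)      # ~0.5 MB, fits in L2/L3 cache
large = np.arange(64_000_000, dtype=np.int64)  # ~0.5 GB, lives in main memory
idx_small = np.random.randint(0, small.size, 10_000_000)
idx_large = np.random.randint(0, large.size, 10_000_000)

def time_gather(arr, idx):
    # Random reads: cache-resident array vs. RAM-resident array.
    start = time.perf_counter()
    arr[idx].sum()
    return time.perf_counter() - start

print("cache-sized array:", time_gather(small, idx_small))
print("RAM-sized array:  ", time_gather(large, idx_large))
```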

Number of Erlang nodes possible/practical? [closed]

1) What's the largest theoretical number of nodes that can exist in an Erlang network ('theoretical' perhaps meaning 'whatever is allowed or disallowed by the language')?
2) What's the practical number of nodes that can exist in an Erlang network? I know this could probably vary hugely depending on a variety of factors. If you want to throw me some numbers, you can assume each node is a separate machine accessible through the internet, not through a LAN (I assume this is possible?), and each machine is simply a 'generic desktop PC' of average performance. In fact, you can assume 'average' for anything you need an assumption for (average latency, average bandwidth i.e. cable modem, etc).
3) What's the largest number of nodes in an Erlang network that is known to have existed?
Related to above questions... doesn't each node keep a tcp connection to all other nodes? So if you were to have thousands of nodes... ?
If it makes any difference, I'm not asking these questions for trivia purposes. They are exploratory questions for a possible project.
Thanks.
1) Unlimited; Erlang the language does not itself specify any limit on this. It will depend on the runtime implementation.
2) Normally I would not use Erlang's built-in distribution for doing things over the Internet. Firewalls tend to screw things up a lot, and the current implementation is not really aimed at that use case; rather, it is meant to be used on a LAN where you have more control over the environment.
If you do want to connect nodes over the Internet, you should do so using another protocol built on top of the TCP stack.
3) I've heard of people getting it a bit over 100, but after that things start to degenerate because all nodes are connected in a full mesh.
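To see why the full mesh becomes the limiting factor: with N nodes every pair holds a TCP connection, so the cluster maintains N*(N-1)/2 links. A quick back-of-the-envelope check:

```python
def mesh_links(n):
    # Every node connects to every other node exactly once.
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, "nodes ->", mesh_links(n), "TCP connections")
# 10 nodes -> 45, 100 nodes -> 4950, 1000 nodes -> 499500
```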
For a larger discussion have a look at this: http://learnyousomeerlang.com/distribunomicon#fallacies-of-distributed-computing

Lowest Spec VPS to run Ruby on Rails [closed]

I want to get started with my first Ruby on Rails application.
It will pull an image and some text about the image, display both, and have a small box for writing some text about the image, which will then be written to a database once submitted.
It's unlikely to have any more than 5 concurrent users, as it's a personal project.
What's the lowest VPS spec needed to run Ruby on Rails? Would it be possible on 64 MB (128 MB burst) of RAM, or could I go even lower?
The lowest I'd advocate is a 512MB system. The Ruby on Rails stack can be 50-100MB alone unless you're very careful about pruning off extras. This is an inconsequential amount of memory on a modern system, though, where 4096MB is common even in the VPS world.
Linode offers a $19.95 plan for the basic 512MB system which, while not the cheapest around, is very affordable even for personal projects. There are less expensive providers, but their quality of service may vary considerably.
If you're using Passenger then even a 512MB machine can run several lightly loaded sites.
Instead of running your own VPS you might want to use Heroku, which doesn't allocate memory to customers directly but instead sells shares of CPU time called "dynos", which are somewhat more abstract than a VPS.
Younger Joseph: you should have learned to use git and started using Heroku. Heroku doesn't publicize it clearly on its website, but they offer a free plan.
