Is there any way or code to make the HTTP request faster? [closed] - ios

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form.
Closed 10 years ago.
I want to make an app that shows pictures from a server, but I've found that the pictures load very slowly. Is there any way, or any code, to make the HTTP requests faster?

Approach the problem from a different angle: think of it as an entropy/economy problem.
There are two distant points in the problem you are describing, and data has to be transferred between them. Let's say this costs 100 units to accomplish under ideal conditions, and let's assume it is not possible to lower this cost any further; it is the point where the energy required for the transfer is at its minimum.
Assuming the transfer rates themselves are not under our control, here are some theoretical, "seemingly" improvements, which are really just different sets of trade-offs.
Caching / forward caching: Preload or download images whenever possible so they are already there when the user requests them. Ship everything static with the initial installation.
Trade-off: You spend most of your 100 points on disk space and up-front processing; the app may always run a little slower, but performance is excellent once the images are on disk. Effectiveness decreases the more frequently the images change.
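As a concrete illustration of the caching idea, here is a minimal sketch of an in-memory cache in front of the HTTP request; the ImageLoader name and its single in-memory level are my own, and a real app would also want disk persistence and error handling.

```swift
import UIKit

// Minimal sketch: serve images from an in-memory cache and only hit the
// network on a miss. Names here (ImageLoader, load) are illustrative.
final class ImageLoader {
    static let shared = ImageLoader()
    private let cache = NSCache<NSURL, UIImage>()

    func load(_ url: URL, completion: @escaping (UIImage?) -> Void) {
        // Cache hit: no network cost at all.
        if let cached = cache.object(forKey: url as NSURL) {
            completion(cached)
            return
        }
        // Cache miss: fetch once, then keep the decoded image around.
        URLSession.shared.dataTask(with: url) { data, _, _ in
            let image = data.flatMap(UIImage.init(data:))
            if let image = image {
                self.cache.setObject(image, forKey: url as NSURL)
            }
            DispatchQueue.main.async { completion(image) }
        }.resume()
    }
}
```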
Compression / Mapping: If the bottleneck is the transfer rate, compress or transform the images as much as you can so they are cheap to move over the wire; in exchange, the app spends more processing power once they arrive.
Trade-off: Much more CPU is used than before, but the data moves quickly in transit. The side that compresses uses a lot more memory, and the side that decompresses uses more memory and CPU. This only pays off when your images move slowly because they are huge; if you want to get a feel for the trade-off, install 7-Zip, check the advanced settings, and try very large dictionary sizes.
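If you control both ends, one low-effort version of this (assuming, purely for illustration, that the server zlib-compresses each payload) is to decompress on arrival; on iOS 13+ NSData offers this directly. Note that photos already stored as JPEG or PNG gain little from generic compression, so re-encoding at a lower quality on the server usually matters more.

```swift
import Foundation

// Sketch: the server is assumed to send zlib-compressed image bytes;
// the app trades CPU on arrival for fewer bytes on the wire.
func decompressImageData(_ compressed: Data) throws -> Data {
    // NSData.decompressed(using:) is available on iOS 13+ / macOS 10.15+.
    return try (compressed as NSData).decompressed(using: .zlib) as Data
}
```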
Drawing algorithms: If your images are drawings rather than photographs, send them as vector graphics; convert your pictures (raster images, to be technical) to vector images. This greatly reduces the number of bytes needed to carry an image to the app, but rendering needs more processing power on the device.
Trade-off: Not every picture can be converted to vector graphics, and you will use more CPU and memory, but there are excellent, highly optimized libraries already built in. You can also think of this as "mathematical compression": a truly infinite line cannot be stored on any computer in the universe, yet a one-line expression such as x = y + 1 describes it exactly.
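As a tiny illustration of the idea (not tied to any particular vector format), the shape below is described by a single number yet renders at any resolution on the device:

```swift
import UIKit

// Sketch: a "picture" described by a handful of numbers instead of pixels.
// Rendering the description into a full-resolution image happens on the
// device, trading CPU for transfer size.
func renderCircleImage(diameter: CGFloat) -> UIImage {
    let size = CGSize(width: diameter, height: diameter)
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        UIColor.systemBlue.setFill()
        UIBezierPath(ovalIn: CGRect(origin: .zero, size: size)).fill()
    }
}
```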
Write your own server: If you are sure the bottleneck is the time spent talking to the service provider (in this case an HTTP server, which is most probably already very efficient), write your own server that answers quickly and starts sending the file immediately. The likelihood of this improving anything is too low to even discuss a trade-off.
Never send repeating information: Design and tag your information and pictures so that you only ever send non-repeating chunks, and have the receiving side store every chunk it receives to build up its cache.
Trade-off: This is a combination of 1, 2 and 3; it is just another way of distributing the 100 points of cost. You get the point.
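One standard, low-cost form of this is an HTTP conditional request: remember the ETag the server returned and send it back in If-None-Match, so an unchanged picture costs only a tiny 304 response. The sketch below assumes the image server emits ETags, which the question does not say.

```swift
import Foundation

// Sketch: conditional GET so unchanged images are never re-transferred.
// Assumes the server sets an ETag header; cachedETag would come from
// whatever local store the app keeps alongside the cached image bytes.
func fetchIfChanged(url: URL, cachedETag: String?,
                    completion: @escaping (Data?) -> Void) {
    var request = URLRequest(url: url)
    if let etag = cachedETag {
        request.setValue(etag, forHTTPHeaderField: "If-None-Match")
    }
    URLSession.shared.dataTask(with: request) { data, response, _ in
        if let http = response as? HTTPURLResponse, http.statusCode == 304 {
            completion(nil)   // Not modified: reuse the locally cached copy.
        } else {
            completion(data)  // New or changed: caller should update its cache.
        }
    }.resume()
}
```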
BitTorrent ideology: If the bottleneck is your server's bandwidth, there is a simple calculation to see whether using your users' bandwidth is sensible and has a positive effect. Again, it is probably only worthwhile if your data set is very large.
Trade-off: This is an interesting option to discuss. You use your users' bandwidth and proximity to compensate for your server's lack of bandwidth. It requires a little more CPU than the conventional way of acquiring data (maintaining more TCP connections).
As a closing note: there is no function call that can reduce the cost of transferring the information from 100 points to 95 points. At the current level of technology we are already remarkably close to transferring data effectively; compression, mapping and the other techniques are quite mature, as are the network transfer methods, though there is always some room for improvement. For example, we currently treat the speed of light as the absolute limit for sending data as electrical signals, while quantum-entanglement observation schemes are sometimes claimed (questionably) to sidestep that limit. And if you can develop a better, more efficient compression scheme, that will be awesome.
Anyway, since the question does not give us much information to work with, I would strongly recommend thinking like an engineer: build a test environment, identify the main cause, and attack it with everything you have. If you can define the problem with a more precise measurement, or point out the bottleneck, we can answer with something better than generic information theory.
One last note: I am not saying information transfer will never get more efficient; it might improve 1000% tomorrow. I am only saying the field is mature enough that big improvements take years of work on the mathematics and theory, like any other field of research.

Related

Is there any way to calculate DRAM access latency (cycles) from data size?

I need to calculate the DRAM access latency from a given data size to be transferred between DRAM and SRAM.
The data is separated into a "load size" and a "store size", and the "number of iterations of loads and stores" is given.
I think there are many factors to consider, such as the latency of the first DRAM access, the latency to transfer one word, the address-load latency, etc.
Is there a standard equation for computing this from the given information?
Thank you in advance.
Your question has many parts; I think I could help better if I knew the ultimate goal. If it's simply to measure access latency:
If you are using an x86 processor, the Intel Memory Latency Checker may help:
Intel® Memory Latency Checker (Intel® MLC) is a tool used to measure memory latencies and b/w, and how they change with increasing load on the system. It also provides several options for more fine-grained investigation where b/w and latencies from a specific set of cores to caches or memory can be measured as well.
If not x86, I think the Gem5 Simulator has what you are looking for; here is the main page, but more specifically, for your needs, I think this config for Gem5 will be the most helpful.
As for a popular equation, the best I could find is this Carnegie Mellon paper, which goes over my head: https://users.ece.cmu.edu/~omutlu/pub/chargecache_low-latency-dram_hpca16.pdf However, it looks like your main "features", as you put it, revolve around cores and memory channels. The equation from the paper:
Storage_bits = C * MC * Entries * (EntrySize_bits + LRU_bits)
is used to size a cache that will ultimately (this being the goal of ChargeCache) reduce access latency in DRAM. I'm sure this isn't the exact equation you are looking for, just a piece of the puzzle. The LRU_bits relate to the cache this mechanism creates (in the memory controller; no DRAM modification is necessary).
EntrySize_bits is determined by the equation EntrySize_bits = log2(R) + log2(B) + log2(Ro) + 1, where
R, B, and Ro are the number of ranks, banks, and rows in DRAM, respectively.
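As a rough worked example of how the two equations fit together (all numbers below are invented, not taken from the paper):

```swift
import Foundation

// Worked example with made-up DRAM parameters (not from the paper):
// 4 ranks, 8 banks, 32768 rows, plus hypothetical core/controller counts.
let ranks = 4.0, banks = 8.0, rows = 32768.0
let entrySizeBits = log2(ranks) + log2(banks) + log2(rows) + 1   // 2 + 3 + 15 + 1 = 21

let cores = 8.0, memoryControllers = 2.0, entries = 128.0, lruBits = 7.0
let storageBits = cores * memoryControllers * entries * (entrySizeBits + lruBits)
print(storageBits)   // 8 * 2 * 128 * 28 = 57344 bits, about 7 KiB of cache metadata
```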
I was surprised to learn that highly charged rows (those accessed recently) have a significantly lower access latency.
If that paper goes over your head as well, maybe Ulrich Drepper's 2007 paper, What Every Programmer Should Know About Memory, will help you find the elements you need for your equation. I'm still working through it myself; some references are dated, but how much that matters depends on the CPU you're working with. Hope this helps; I look forward to being corrected on any of this, as I'm new to the topic.

Is it practical to include an adaptive or optimizing memory strategy into a library?

I have a library that does I/O. There are a couple of external knobs for tuning the sizes of the memory buffers used internally. When I ran some tests I found that the sizes of the buffers can affect performance significantly.
But the optimum size seems to depend on a bunch of things: the available memory on the PC, the size of the files being processed (which varies from very small to huge), the number of files, the speed of the output stream relative to the input stream, and I'm not sure what else.
Does it make sense to build an adaptive memory strategy into the library? or is it better to just punt on that, and let the users of the library figure out what to use?
Has anyone done something like this - and how hard is it? Did it work?
Given different buffer sizes, I suppose the library could track the time it takes for various operations, and then it could make some decisions about which size was optimal. I could imagine having the library rotate through various buffer sizes in the initial I/O rounds... and then it eventually would do the calculations and adjust the buffer size in future rounds depending on the outcomes. But then, how often to re-check? How often to adjust?
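Roughly, I imagine something like the sketch below; the class name, the candidate sizes, and the re-probe policy are all just illustrative.

```swift
import Foundation

// Sketch of an adaptive buffer-size chooser: try each candidate size for a
// while, record observed throughput, then stick with the best one and
// re-probe occasionally. All names and the re-probe policy are illustrative.
final class BufferSizeTuner {
    private let candidates: [Int]
    private var throughput: [Int: Double] = [:]   // bytes/sec per size
    private var opCount = 0
    private let reprobeEvery = 10_000

    init(candidates: [Int] = [16 * 1024, 64 * 1024, 256 * 1024, 1024 * 1024]) {
        self.candidates = candidates
    }

    /// Size to use for the next operation.
    func currentSize() -> Int {
        opCount += 1
        // During the probe phase (or periodic re-probes), cycle the candidates.
        if throughput.count < candidates.count || opCount % reprobeEvery == 0 {
            return candidates[opCount % candidates.count]
        }
        // Otherwise use the size with the best observed throughput so far.
        return throughput.max(by: { $0.value < $1.value })!.key
    }

    /// Report the outcome of an operation so future choices improve.
    func record(size: Int, bytes: Int, seconds: Double) {
        guard seconds > 0 else { return }
        let observed = Double(bytes) / seconds
        // Exponential moving average smooths out one-off spikes.
        let previous = throughput[size] ?? observed
        throughput[size] = 0.8 * previous + 0.2 * observed
    }
}
```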
The adaptive approach is sometimes referred to as "autonomic", by analogy with the human autonomic nervous system: you don't consciously control your heart rate and respiration; your autonomic nervous system does that.
You can read about some of this here, and here (apologies for the plugs, but I wanted to show that the concept is being taken seriously, and is manifesting in real products.)
My experience of using products that try to do this is that they do actually work, but they can make me unhappy, because there is a tendency for them to take a "father knows best" approach. You make some (you believe) small change to your app or the environment and something unexpected happens. You don't know why, and you don't know whether it's good. So my rule for autonomy is:
Tell me what you are doing and why
Now, sometimes the underlying math is quite complex. Consider that some autonomic systems track trends and make predictive changes ("requests of this type are growing, let's provision more of resource X"), so the mathematical models are non-trivial, and simple explanations are not always available. However, some level of feedback to the watching humans can be reassuring.

Is it possible to quantify scalability as a requirement? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
G'day,
I was reading the item Quantify in the book "97 Things Every Software Architect Should Know" (sanitised Amazon link) and it got me wondering how to quantify scalability.
I have designed two systems for a major British broadcasting corporation that are used to:
detect the country of origin for incoming HTTP requests, or
determine the suitable video formats for a mobile phone's screen geometry and current connection type.
Both of the designs were required to provide scalability.
My designs for both systems scale horizontally behind caching, load-balancing layers that accept incoming requests for these services and distribute them across several servers which actually provide the service itself. Initial increases in capacity are made by adding more servers behind the load-balancing layer, hence the term horizontal scalability.
There is a limit to the scalability of this architecture, however, if the load-balancing layer itself starts having difficulty coping with the incoming request traffic.
So, is it possible to quantify scalability? Would it be an estimate of how many additional servers you could add to horizontally scale the solution?
I think this comes down to what scalability means in a given context and therefore the answer would be it depends.
I've seen scalability in requirements for things that simply didn't exist yet. For example, a new loan application tool that specifically called out needing to work on the iPhone and other mobile devices in the future.
I've also seen scalability used to describe potential expansion of more data centers and web servers in different areas of the world to improve performance.
Both examples above can be quantifiable if there is a known target for the future. But scalability may not be quantifiable if there really is no known target or plan which makes it a moving target.
I think it is possible in some contexts - for example scalability of a web application could be quantified in terms of numbers of users, numbers of concurrent requests, mean and standard deviation of response time, etc. You can also get into general numbers for bandwidth and storage, transactions per second, and recovery times (for backup and DR).
You can also often give numbers within the application domain. Let's say the system supports commenting: you can quantify the order of magnitude of the number of comments it needs to be able to store.
It is however worth bearing in mind that not everything that matters can be measured, and not everything that can be measured matters. :-)
The proper measure of scalability (not the simplest one;-) is a set of curves defining resource demanded (CPUs, memory, storage, local bandwidth, ...), and performance (e.g. latency) delivered, as the load grows (e.g. in terms of queries per second, but other measures such as total data throughput demanded may also be appropriate for some applications). Decision makers will typically demand that such accurate but complex measures be boiled down to a few key numbers (specific spots on some of the several curves), but I always try to negotiate for more-accurate as against simpler-to-understand measurements of such key metrics!-)
When I think of scalability I think of:
performance - how responsive the app needs to be for a given load
how large a load the app can grow into, and at what unit cost (if it's per server, include software, support, etc.)
how fast you can scale the app up and how much buffer you want over peak period usage (we can add 50% more bandwidth in 2-3 hours and require a 30% buffer over planned peak usage)
Redundancy is something else, but should also be included and considered.
"The system shall scale as to maintain a linear relationship of X for cost/user".
Here's one way:
"assume that a single processor can process 100 units of work per second..."
From http://www.information-management.com/issues/19971101/972-1.html
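Picking up that "100 units of work per second per processor" framing, the requirement becomes plain arithmetic; the workload figures below are invented for illustration.

```swift
// Sketch: turning "scalability" into arithmetic, following the
// "one processor handles 100 units of work per second" framing.
// The workload figures are invented for illustration.
let unitsPerSecondPerProcessor = 100.0
let unitsPerRequest = 5.0
let peakRequestsPerSecond = 4_000.0

let processorsNeeded = peakRequestsPerSecond * unitsPerRequest / unitsPerSecondPerProcessor
print(processorsNeeded)   // 200 processors to sustain the stated peak load
```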

Reasons for NOT scaling-up vs. -out?

As a programmer I make revolutionary findings every few years. I'm either ahead of the curve, or behind it by about π in the phase. One hard lesson I learned was that scaling OUT is not always better, quite often the biggest performance gains are when we regrouped and scaled up.
What reasons do you have for scaling out vs. up? Price, performance, vision, projected usage? If so, how did this work for you?
We once scaled out to several hundred nodes that would serialize and cache the necessary data on each node and run math processes on the records. Many, many billions of records needed to be (cross-)analyzed. It was the perfect business and technical case for scale-out. We kept optimizing until we processed about 24 hours of data in 26 hours of wall-clock time. To make a really long story short, we leased a gigantic (for the time) IBM pSeries, put Oracle Enterprise on it, indexed our data, and ended up processing the same 24 hours of data in about 6 hours. A revolution for me.
So many enterprise systems are OLTP and the data are not sharded, yet many people's desire is to cluster or scale out. Is this a reaction to new techniques or to perceived performance?
Do applications in general today, or our programming mantras, lend themselves better to scale-out? Should we always take this trend into account in the future?
Because scaling up
Is limited ultimately by the size of box you can actually buy
Can become extremely cost-ineffective, e.g. a machine with 128 cores and 128 GB of RAM is vastly more expensive than 16 machines with 8 cores and 8 GB of RAM each.
Some things don't scale up well - such as IO read operations.
By scaling out, if your architecture is right, you can also achieve high availability. A 128-core, 128 GB RAM machine is very expensive, but adding a second redundant one is extortionate.
And also to some extent, because that's what Google do.
Scaling out is best for embarrassingly parallel problems. It takes some work, but a number of web services fit that category (hence the current popularity). Otherwise you run into Amdahl's law, which means that to gain speed you have to scale up, not out. I suspect you ran into that problem. I/O-bound operations also tend to do well with scaling out, largely because waiting for I/O increases the fraction of the work that is parallelizable.
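For reference, Amdahl's law can be written as a one-liner, where p is the parallelizable fraction of the work and n is the number of workers:

```swift
// Amdahl's law: the best possible speedup when a fraction p of the work
// can be parallelized across n workers and the rest stays serial.
func amdahlSpeedup(parallelFraction p: Double, workers n: Double) -> Double {
    return 1.0 / ((1.0 - p) + p / n)
}

// Even with 1000 workers, a job that is 90% parallel tops out near 10x.
print(amdahlSpeedup(parallelFraction: 0.9, workers: 1000))   // ≈ 9.91
```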
The blog post Scaling Up vs. Scaling Out: Hidden Costs by Jeff Atwood has some interesting points to consider, such as software licensing and power costs.
Not surprisingly, it all depends on your problem. If you can easily partition it into subproblems that don't communicate much, scaling out gives trivial speedups. For instance, searching for a word in 1B web pages can be done by one machine searching 1B pages, or by 1M machines doing 1,000 pages each without a significant loss in efficiency (so with a 1,000,000x speedup). This is called "embarrassingly parallel".
Other algorithms, however, do require much more intensive communication between the subparts. Your example requiring cross-analysis is the perfect example of where communication can often drown out the performance gains of adding more boxes. In these cases, you'll want to keep communication inside a (bigger) box, going over high-speed interconnects, rather than something as 'common' as (10-)Gig-E.
Of course, this is a fairly theoretical point of view. Other factors, such as I/O, reliability, and ease of programming (one big shared-memory machine usually gives a lot fewer headaches than a cluster) can also have a big influence.
Finally, due to the (often extreme) cost benefits of scaling out with cheap commodity hardware, the cluster/grid approach has recently attracted much more (algorithmic) research. New ways of parallelizing have been developed that minimize communication and thus do much better on a cluster, whereas common knowledge used to dictate that these types of algorithms could only run effectively on big-iron machines...

Quality vs. ROI - When is Good Enough, good enough? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
UPDATED: I'm asking this from a development perspective. However, to illustrate, a canonical non-development example that comes to mind is that if it costs, say, $10,000 to maintain an uptime rate of 99%, then it can theoretically cost $100,000 to maintain 99.9%, and possibly $1,000,000 to maintain 99.99%.
Somewhat like a limit approaching zero in calculus, as we get close to 100% the cost can increase exponentially. So, as a developer or PM, where do you decide that the deliverable is "good enough" given the time and monetary constraints? For example, are you getting a good ROI at 99%, 99.9%, or 99.99%?
I'm using a non-development example because I'm not sure of a solid metric for development. Maybe in the above example "uptime" could be replaced with "function point to defect ratio", or some such reasonable measure rate of bugs vs. the complexity of code. I would also welcome input regarding all stages of a software development lifecycle.
Keep the classic Project Triangle constraints in mind (quality vs. speed vs. cost). And let's assume that the customer wants the best quality you can deliver given the original budget.
There's no way to answer this without knowing what happens when your application goes down.
If someone dies when your application goes down, uptime is worth spending millions or even billions of dollars on (aerospace, medical devices).
If someone may be injured if your software goes down, uptime is worth hundreds of thousands or millions of dollars (industrial control systems, auto safety devices)
If someone loses millions of dollars when your software goes down, uptime is worth spending millions on (financial services, large e-commerce apps).
If someone loses thousands of dollars when your software goes down, uptime is worth spending thousands on (retail, small e-commerce apps).
If someone will swear at the computer and lose productivity while it reboots when your software goes down, then uptime is worth spending thousands on (most internal software).
etc.
Basically take (cost of going down) x (number of times the software will go down) and you know how much to spend on uptime.
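A quick sketch of that rule, together with the downtime each availability target actually allows per year (the cost and outage figures are placeholders, not numbers from the question):

```swift
import Foundation

// Rough sketch of the "spend = cost of an outage x expected outages" rule,
// plus the downtime each availability target allows per year.
func allowedDowntimeHoursPerYear(availability: Double) -> Double {
    return (1.0 - availability) * 365.25 * 24.0
}

for a in [0.99, 0.999, 0.9999] {
    print(String(format: "%.4f -> %.1f hours of downtime per year",
                 a, allowedDowntimeHoursPerYear(availability: a)))
}
// 0.99 -> ~87.7 hours, 0.999 -> ~8.8 hours, 0.9999 -> ~0.9 hours

let costPerOutage = 25_000.0      // hypothetical business loss per incident
let expectedOutagesPerYear = 4.0  // hypothetical failure rate
let uptimeBudget = costPerOutage * expectedOutagesPerYear
print(uptimeBudget)               // 100000: a cap on what uptime work is worth
```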
The Quality vs. Good Enough discussions I've seen put the practical ROI at fixing about 95% of defects. Obviously show-stoppers / critical defects get fixed, and there are always exceptions, such as airplane autopilots, that need far fewer defects than that.
I can't seem to find the reference for the 95% figure; it is either in Rapid Development or in Applied Software Measurement by Capers Jones.
Here is a link to a useful strategy for attacking code quality:
http://www.gamedev.net/reference/articles/article1050.asp
The client, of course, would likely balk at that number and might say no more than 1 hour of downtime per year is acceptable. That's 12 times more stable. Do you tell the customer, sorry, we can't do that for $100,000, or do you make your best attempt, hoping your analysis was conservative?
Flat out tell the customer that what they want isn't reasonable. To achieve that kind of uptime, a massive amount of money would be needed, and realistically the chance of consistently reaching that percentage of uptime just isn't there.
I personally would go back to the customer and tell them that you'll provide the best setup you can for $100k, and set up an outage-report guideline: for every outage, we will complete an investigation into why it happened and what we will do to make the chances of it happening again almost nonexistent.
I think offering SLAs is just a mistake.
I think the answer to this question depends entirely on the individual application.
Software that has an impact on human safety has much different requirements than, say, an RSS feed reader.
The project triangle is a gross simplification. In lots of cases you can actually save time by improving quality, for example by reducing repairs and avoiding maintenance costs. This is not only true in software development; Toyota's lean production proved that it works in manufacturing too.
The whole process of software development is far too complex to make generalizations on cost vs quality. Quality is a fuzzy concept that consists of multiple factors. Is testable code of higher quality than performant code? Is maintainable code of higher quality than testable code? Do you need testable code for an RSS reader or performant code? And for a fly-by-wire F16?
It's more productive to make informed decisions on a case-by-case basis. And don't be afraid to over-invest in quality; it's usually much cheaper and safer than under-investing.
To answer in an equally simplistic way..
..When you stop hearing from the customers (and not because they stopped using your product).. except for enhancement requests and bouquets :)
And it's not a triangle; it has four corners: cost, time, quality and scope.
To expand on what "17 of 26" said, the answer depends on value to the customer. In the case of critical software, like aircraft controller applications, the value to the customer of a high quality rating, by whatever measure they use, is quite high. To the user of an RSS feed reader, the value of high quality is considerably lower.
It's all about the customer (notice I didn't say user - sometimes they're the same, and sometimes they're not).
Chasing the word "Quality" is like chasing the horizon. I have never seen anything (in the IT world or outside) that is 100% quality. There's always room for improvement.
Secondly, "quality" is an overly broad term. It means something different to everyone and subjective in it's degree of implementation.
That being said, every effort boils down to what "engineering" means--making the right choices to balance cost, time and key characteristics (ie. speed, size, shape, weight, etc.) These are constraints.
