Is Traefik on Docker significantly slower with HTTPS (vs HTTP)? - docker

I've deployed a local instance of https://librespeed.org/ in order to test my LAN speeds. After replacing some old cables, the speeds were good (~800 Mbps symmetric).
I wanted to leave the service running and give it a URL, so I created a docker-compose.yml and gave it some labels to expose it through Traefik, like my other services (a sketch of the labels is shown below).
To my surprise, after this change the speed dropped dramatically (~450 Mbps, almost a 50% decrease).
At first I blamed Traefik, but then I simply disabled HTTPS and the speeds went back to ~800 Mbps.
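For reference, the labels look roughly like this (a sketch only, assuming Traefik v2-style routers; the image, router name, domain and entrypoint are placeholders rather than my exact config):

services:
  librespeed:
    image: linuxserver/librespeed   # placeholder image
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.speedtest.rule=Host(`speed.example.lan`)"
      - "traefik.http.routers.speedtest.entrypoints=websecure"
      - "traefik.http.routers.speedtest.tls=true"
      - "traefik.http.services.speedtest.loadbalancer.server.port=80"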
What I've checked:
All other settings and stack are exactly the same.
TLS handshake seems to be happening only once, so this does not explain the difference.
The cipher in use is TLS_AES_128_GCM_SHA256 (128-bit keys, TLS 1.3). I didn't change any of Traefik's cipher settings, so this is presumably Traefik's default.
The browser used to test was Firefox 84.0.2 (64-bit).
What I'd like to know:
Is this a common performance downgrade?
Is Traefik really slow encrypting traffic?
Does dockerization impact AES encryption in some way (perhaps blocking some hardware access)?
Thanks in advance
Edit: the noble people of Reddit made me realize that my old CPU does not have hardware AES acceleration, so that answers most of my concerns. I think this question is still relevant anyway, at least to alert other people that this can happen.

The noble people of Reddit made me realize that my old CPU does not have hardware AES acceleration, so that explains the performance drop. I still don't know whether this would happen anyway because of Docker, but I hope it does not.
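For what it's worth, Docker itself shouldn't change this: containers execute directly on the host CPU, so AES-NI (when present) is just as available inside a container as outside. A quick way to check on a Linux host (standard commands; exact output and numbers vary by CPU):

grep -m1 -o aes /proc/cpuinfo                   # prints "aes" only if the CPU exposes AES-NI
openssl speed -evp aes-128-gcm                  # rough AES-128-GCM throughput benchmark
docker run --rm alpine grep -o aes /proc/cpuinfo | head -1   # the same flag is visible from inside a container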

Related

Are add-ons like Redis To Go secure for Heroku/EC2?

I'm looking into using Heroku as our platform instead of managing our own systems. We have a Ruby/Rails stack and use Resque as our background job processor. I'm evaluating add-ons such as RedisToGo and RedisGreen, but it looks like none of these services offers a secure transport layer. However, according to RedisGreen's FAQ, it doesn't matter:
Do you offer an encrypted connection to your servers?
No. Most organizations working in EC2 or Heroku treat Amazon’s internal network as a “trusted” one, so transport-level security doesn’t make much sense. We recommend against transferring data that should be secure over the open Internet.
As an Ops guy, it makes me feel a bit queasy to have unencrypted data transfers. On the other hand, they make a good point: if Amazon's internal network is considered a trusted one, then we wouldn't have to worry about third parties trying to sniff us.
So my question: is it safe to use these add-ons if I'm on the Heroku/EC2 ecosystem?
I've used AWS for years without any problems, and most AWS users don't seem to be malicious. Also, Amazon has comprehensive monitoring of their infrastructure; for example, they would likely detect another customer trying to hack into your server within minutes, if not seconds. I believe AWS also doesn't allow promiscuous mode on their virtual/physical networking infrastructure.
However, you have to also see how secure you want to be about your data. If you want 100% security that no other user is going to sniff your data then encrypt your connections/data transfers. Although unlikely, other AWS users could potentially sniff the data if they are sharing the same ethernet segment.
The current recommendation is to use a secure proxy in front of Redis if you want SSL encryption of your traffic (see the debate at https://code.google.com/p/redis/issues/detail?id=71 for example). AFAIK, only Redis Cloud offers that functionality among Heroku's existing Redis providers.
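For what it's worth, the usual way people bolt TLS onto a Redis connection in this situation is a tunnel such as stunnel; a minimal client-side sketch is below. This is an illustration only: Redis To Go doesn't provide a TLS endpoint, so the remote address is a placeholder for whatever TLS-terminating proxy you run in front of Redis.

; app talks plaintext to 127.0.0.1:6380, stunnel wraps it in TLS
[redis-client]
client  = yes
accept  = 127.0.0.1:6380
connect = redis-proxy.example.com:6390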
As for whether security is a requirement for Heroku apps and their add-ons over AWS, that really depends on the nature of your data and the risk of it being read by a potentially malicious party. Just remember that even a very low risk is still a risk and no security mechanism is unbreakable, so it's basically a matter of how much you're willing to invest to make it harder for someone to mess with your stuff.
(Due diligence - I work at Garantia Data, the company operating Redis Cloud and Memcached Cloud.)
I consider this highly dangerous. Heroku, for example, suggests running apps locally for development and copying the config to do so:
https://devcenter.heroku.com/articles/heroku-local
What that implies is that your laptop will then make an unencrypted connection - potentially over public Wi-Fi and definitely over the public internet - to your Redis To Go instance. As such, whether AWS allows sniffing on their network or not is completely irrelevant.

Why is membase server so slow in response time?

I have a problem: Membase is being very slow in my environment.
I am running several production servers (Passenger) on Rails 2.3.10 / Ruby 1.8.7.
Those servers communicate with two Membase machines in a cluster.
The Membase machines each have 64 GB of memory, a 100 GB EBS volume attached, and 1 GB of swap.
My problem is that Membase is being VERY slow in response time and is actually the slowest part of the whole application lifecycle right now.
My question is: why?
The Rails gem I am using is memcache-northscale.
The Membase server is 1.7.1 (latest).
The cluster is doing between 2K and 7K ops per second.
The response time from Membase (based on New Relic) averages 250 ms, which is HUGE and unreasonable.
Does anybody know why this is happening?
What can I do in order to improve this time?
It's hard to immediately say with the data at hand, but I think I have a few things you may wish to dig into to narrow down where the issue may be.
First of all, do your stats with membase show a significant number of background fetches? This is in the Web UI statistics for "disk reads per second". If so, that's the likely culprit for the higher latencies.
You can read more about the statistics and sizing in the manual, particularly the sections on statistics and cluster design considerations.
Second, you're reporting 250 ms on average. Is this a sliding average, or overall? Do you have 90th or 99th percentile latencies? A few outlying disk fetches can inflate the average even when most requests (for example, those served from RAM that don't need disk fetches) are actually quite speedy.
Are your systems spread throughout availability zones? What kind of instances are you using? Are the clients and servers in the same Amazon AWS region? I suspect the answer may be "yes" to the first, which means about 1.5ms overhead when using xlarge instances from recent measurements. This can matter if you're doing a lot of fetches synchronously and in serial in a given method.
I expect it's all in one region, but it's worth double checking since those latencies sound like WAN latencies.
Finally, there is an updated Ruby gem, backwards compatible with Fauna; Couchbase, Inc. has been working to contribute these changes back upstream to Fauna. If possible, you may want to try the gem referenced here:
http://www.couchbase.org/code/couchbase/ruby/2.0.0
You will also want to look at running Moxi on the client side. When accessing Membase, you go through a proxy (called Moxi). By default it's installed on the server side, which means you might send a request to a server that doesn't actually hold the key; Moxi will go fetch it, but then you're doubling the network traffic.
Installing Moxi on the client-side will eliminate this extra network traffic: http://www.couchbase.org/wiki/display/membase/Moxi
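As a rough illustration of the client side (assuming a memcache-client-compatible API, which is what memcache-northscale exposes as far as I know; treat the exact class name and port as assumptions): once a local moxi is listening on the default memcached port, the Rails app just points at localhost and moxi handles routing to the right cluster node.

require 'memcache'
# hypothetical: connect to the client-side moxi instead of the Membase nodes directly
CACHE = MemCache.new('127.0.0.1:11211')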
Perry

Detecting end-user connection speed problems in Apache for Windows

Our company provides web-based management software (servicedesk, helpdesk, timesheet, etc) for our clients.
One of them has been causing a great headache for some months, complaining about the connection speed to our servers.
In our individual tests, the connection and response speeds are always great.
Some information about this specific client :
They have about 300 PCs on their local network, all using the same bandwidth/server for internet access.
They don't allow us to ping their server, so we can't run a traceroute.
They claim every other site (Google, blogs, news, etc.) always responds fast. We know for a fact that they have no intention to mislead us and that this is true.
They might have up to 100 PCs simultaneously logged into our software at any given time. They need to increase that to 300, so this is a major issue.
They are helpful and collaborative on this issue, which we have been trying to resolve for a long time.
Some information about our server and software :
We have been able to serve more than 400 users at a time without major speed losses for other clients.
We have gone to great lengths to make good use of data caching and opcode caching in the software itself, and we did notice the improvement (from fast to faster).
There are no database, CPU or memory bottlenecks or leaks. Other clients are able to access the server just fine.
We have little to no knowledge on how to do some analyzing on specific end-user problems (Apache running under Windows server), and this is where I could use a lot of help.
Anything that might be related to Apache configuration would also be helpful.
While all signs point to this being an internal problem in that specific client's network, we are putting in the effort to solve that too, if that turns out to be the case, but we don't have professionals trained to deal with network problems (they do, although their main argument remains that 'all other sites are fast, only yours is slow').
You might want to have a look at the tools from Google's "Page Speed" family: http://code.google.com/speed/page-speed/docs/overview.html
Your customer could run the Page Speed extension for you; maybe then you can find out what the problem is: http://code.google.com/speed/page-speed/docs/extension.html

Localhost is taking abnormally long time to load any page

The logs don't show anything different, and the computer is four times faster than the last one. Anyone know any common reasons why making a request to localhost would take a very long time?
I am using Mongrel.
Hard to give a solution based on the little information you give, so try to narrow it down. I would say that these three causes seem the most likely:
The database is slow. Check whether your queries take a long time (look at the logs). Perhaps you are using a slow connector (e.g. the default Ruby MySQL library), or your indexes haven't made it to your new machine.
Mongrel is slow. Check by starting the app with WEBrick instead and see if that's any better.
Your computer is slow. Perhaps it's running something else that's taking up CPU or memory. Check your performance monitor (which application to use for this differs per OS).
Could be a conflict between IPv4 and IPv6. If you're running Apache you have to take special steps to make it work nicely with IPv6 (my information here might be out of date.) I've found that an IPv6-enabled client would try to talk IPv6 to the server, and Apache would not receive the request. After it timed out the client would retry on IPv4.
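A quick, hypothetical way to check this on a Linux box (port 3000 is Mongrel's default; adjust as needed): compare how "localhost" resolves and how long requests take against the name versus the IPv4 address.

getent hosts localhost        # does localhost resolve to ::1 ahead of 127.0.0.1?
curl -s -o /dev/null -w '%{time_total}\n' http://localhost:3000/
curl -s -o /dev/null -w '%{time_total}\n' http://127.0.0.1:3000/

If the 127.0.0.1 request is fast but the localhost one stalls for seconds, an IPv6-first attempt followed by a timeout and IPv4 retry is the likely culprit.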

Proxy choices: mod_proxy_balancer, nginx + proxy balancer, haproxy?

We're running a Rails site at http://hansard.millbanksystems.com, on a dedicated Accelerator. We currently have Apache set up with mod_proxy_balancer, proxying to four Mongrels running the application.
Some requests are rather slow and in order to prevent the situation where other requests get queued up behind them, we're considering options for proxying that will direct requests to an idle mongrel if there is one.
Options appear to include:
recompiling mod_proxy_balancer for Apache as described at http://labs.reevoo.com/
compiling nginx with the fair proxy balancer for Solaris
compiling haproxy for Open Solaris (although this may not work well with SMF)
Are these reasonable options? Have we missed anything obvious? We'd be very grateful for your advice.
Apache is a bit of a strange beast to use for your balancing. It's certainly capable but it's like using a tank to do the shopping.
Haproxy/Nginx are more specifically tailored for the job. You should get higher throughput and use fewer resources at the same time.
HAProxy offers a much richer set of features for load-balancing than mod_proxy_balancer, nginx, and pretty much any other software out there.
In particular for your situation, the log output is highly customisable so it should be much easier to identify when, where and why slow requests occur.
Also, there are a few different load distribution algorithms available, with nice automatic failover capabilities too.
37Signals have a post on Rails and HAProxy here (originally seen here).
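As a rough illustration of the "send requests to an idle mongrel" idea (a minimal sketch in modern haproxy syntax; names, ports and timeouts are placeholders, not your actual config):

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend mongrels

backend mongrels
    balance leastconn
    # maxconn 1: each mongrel serves one request at a time, so a slow
    # request can't queue other requests behind it on the same process
    server mongrel1 127.0.0.1:8000 maxconn 1 check
    server mongrel2 127.0.0.1:8001 maxconn 1 check
    server mongrel3 127.0.0.1:8002 maxconn 1 check
    server mongrel4 127.0.0.1:8003 maxconn 1 check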
If you want to avoid Apache, it is possible to deploy a Mongrel cluster with an alternative web server, such as nginx or lighttpd, and a load balancer of some variety, such as Pound or a hardware-based solution.
Pound (http://www.apsis.ch/pound/) worked well for me!
The only issue with haproxy and SMF is that you can't use its soft-restart feature to implement the 'refresh' action, unless you write a wrapper script. I wrote about that in a bit more detail here.
However, IME haproxy has been absolutely bomb-proof on Solaris, and I would recommend it highly. We ship anything from a few hundred GB to a couple of TB a day through a single haproxy instance on Solaris 10, and so far (touch wood) in 2+ years of operation we've not had any problems with it.
Pound is an HTTP load balancer that I've used successfully in the past. It includes a dynamic scaling feature that may help with your specific problem:
DynScale (0|1): Enable or disable the dynamic rescaling code (default: 0). If enabled, Pound will periodically try to modify the back-end priorities in order to equalise the response times from the various back-ends. This value can be overridden for specific services.
Pound is small, well documented, and easy to configure.
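For illustration, a minimal (hypothetical) Pound configuration with dynamic rescaling enabled for a service could look like this; addresses and ports are placeholders:

ListenHTTP
    Address 0.0.0.0
    Port    80
    Service
        DynScale 1          # equalise response times across back-ends
        BackEnd
            Address 127.0.0.1
            Port    8000
        End
        BackEnd
            Address 127.0.0.1
            Port    8001
        End
    End
End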
I've used mod_proxy_balancer + mongrel_cluster successfully (small traffic website).
