File uploads failing through IBM Load Balancer in an IBM WebSphere cluster

I have a WebSphere Application Server cluster (v9, Java 8) on SUSE 15:
2 load balancers, 4 IHS nodes, and 4 WAS nodes. It is designed for file uploading.
File uploads that go through the load balancers are very slow and always fail.
As I investigated, the problem is the load balancers: if I skip them and connect directly to IHS or WAS, uploading a file works as expected (a sketch of this test is below).
I couldn't find the reason or the relevant load balancer configuration.
Does anyone know the cause and the fix?
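To make the comparison concrete, this is roughly the test described above; the hostnames, the /upload path, and the ports are hypothetical placeholders:

    # upload the same file along each hop to isolate the failure
    curl -T bigfile.bin http://lb-host/upload        # through the load balancer: slow, fails
    curl -T bigfile.bin http://ihs-host/upload       # direct to IHS: works
    curl -T bigfile.bin http://was-host:9080/upload  # direct to WAS: works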

Related

AWS ELB/ECS Http response headers changed

Some context here:
An old Symfony app runs on multiple EC2 instances and handles millions of requests each day without issues.
For dev purposes, the app was put in a container that developers use locally without having to install all the requirements. The dockerized app uses the same nginx/supervisor/php-fpm configs as the production EC2 instances.
To make some dev processes easier, it was decided to create multiple dev environments using AWS Fargate instead of EC2 instances.
The image is pushed to ECR and deployed to clusters using the FARGATE launch strategy.
The approach is perhaps overkill, since we have 1 cluster running 1 service with only 1 task. That service uses an ELB -> target group.
The application works fine, but after some time (hours or days), some requests come back with different headers. The response is JSON, but the content type is returned as HTML, and other headers are dropped from the response, like access-control-allow-headers, access-control-allow-credentials, and access-control-allow-methods, triggering a CORS error in the client's browser.
The weird part is that if a page makes 10 requests to this service, 9 will work correctly, but 1 will return 200 with different headers. That endpoint will consistently behave the same way for any user until the task is restarted.
The response headers are set by the Symfony app. I also tried forcing those headers by including them in the nginx config by default for every response, and the result is the same.
The docker image exposes port 80 to the service.
The load balancer has the rule to forward HTTPS (443) traffic to port 80, so traffic can reach the container.
The load balancer has enabled the use of HTTP/2
The only notable difference between the EC2 and Fargate implementations is the load balancer. The production load balancer is an old Classic Load Balancer with only HTTP/1.1 enabled, and the new ones are Application Load Balancers using HTTP/2.
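One way to check whether HTTP/2 on the ALB is the variable (a hypothetical diagnostic, not something already tried above) is to request the affected endpoint with each protocol version and diff the headers:

    # compare response headers over HTTP/1.1 vs HTTP/2 against the ALB
    # (the URL is a placeholder for the affected endpoint)
    curl -sI --http1.1 https://example.com/api/endpoint > h1.txt
    curl -sI --http2   https://example.com/api/endpoint > h2.txt
    diff h1.txt h2.txt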
This is driving me crazy. Has anyone experienced something like this?
[Screenshots: incorrect vs. correct response headers]

Grails Hosting on EC2 Amazon Linux Instance

I have successfully uploaded and deployed my Grails application on Amazon Elastic Beanstalk with Tomcat 8 and Java 8 on a Linux EC2 instance, and the web app is up and running. It works well for REST API calls to and from the RDS database. I have an API to upload a file to the server from the mobile app and from the web app frontend. When running this Grails app on localhost, the API works great and uploads files to the user.home/{myapplicationDirectory}/somefile path on my Windows OS. But after running the app on Elastic Beanstalk, trying to upload an image from mobile fails with a FileNotFoundException:
FileNotFoundException occurred when processing request: [POST] /api/images/add
/usr/share/tomcat8/sdpl/images/260519011919.zip (No such file or directory)
Stacktrace follows:
java.io.FileNotFoundException: /usr/share/tomcat8/sdpl/images/260519011919.zip (No such file or directory)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
I have a service that gets the application's data storage directory with this method:

    String getApplicationPath() {
        // on the server, user.home resolves to the Tomcat user's home
        // (e.g. /usr/share/tomcat8), a directory the app may not be able to write to
        return System.getProperty("user.home") + File.separator + "images" + File.separator
    }
Hi, as I don't see your full application I don't want to be too presumptuous, but as you're using AWS Elastic Beanstalk you should consider local file storage to always be temporary storage. Your server could be terminated and restarted by Beanstalk if it stops responding or fails any health checks.
You have other options available; again, I don't know if you considered them and have a good reason for using the local file system, so forgive me if that's the case. If not, you could use S3 for storing the images (a sketch is below); then you don't have to worry about disk space, and the images could automatically be served via AWS's CDN, CloudFront, thus also reducing load on your app.
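A minimal sketch of such an upload from a Grails service, assuming the AWS SDK for Java v1 is on the classpath; the bucket name is hypothetical:

    import com.amazonaws.services.s3.AmazonS3
    import com.amazonaws.services.s3.AmazonS3ClientBuilder
    import com.amazonaws.services.s3.model.PutObjectRequest

    class ImageStorageService {
        // hypothetical bucket; credentials come from the instance profile
        static final String BUCKET = 'my-app-images'
        AmazonS3 s3 = AmazonS3ClientBuilder.standard().build()

        String store(File file, String key) {
            // uploads the file and returns the object's S3 location
            s3.putObject(new PutObjectRequest(BUCKET, key, file))
            return "s3://${BUCKET}/${key}"
        }
    }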
Alternatively, if you really want to store these images on a filesystem, you can look at using EFS, the Elastic File System. Your Elastic Beanstalk instance could mount the filesystem on startup so it will always be available whenever your instance(s) start.
I didn't suggest using a standard EBS volume, as a volume can only ever be attached to a single instance. With EFS you don't have to worry about space, and it can be mounted on multiple instances, so it is a little more flexible.
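For reference, mounting EFS can be a one-line fstab entry, sketched here with a hypothetical file-system ID and mount point (requires the amazon-efs-utils package):

    # /etc/fstab
    fs-12345678:/ /mnt/efs efs _netdev,tls 0 0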

GCP Load Balancer: 502 Server Error, "failed_to_connect_to_backend"

I have a dockerized Go application running on two GCP instances. Everything works fine when using them via their individual external IPs, but when they're put behind the load balancer, they're either slow to answer or it answers with a 502 server error. The health checks seem to be OK, so I really don't understand.
In the logs, the error thrown is
failed_to_connect_to_backend
I've already seen other answers to this question, but none of them seems to cover my case. I cannot modify the way the application is served, so it doesn't seem to be a timeout issue.
To troubleshoot a 502 response from the load balancer due to "failed_to_connect_to_backend", I would check the following (a sketch of these checks is below the list):
1) Usually, the "failed_to_connect_to_backend" error message indicates that the load balancer is failing to connect to backends, so investigating the URL map rules is a good place to start. I would suggest reviewing your load balancer's URL map to make sure that host rules, path matchers, and path rules are correctly defined and comply with the descriptions in this article.
2) Also check whether the backend instances are exhausting their resources. If a backend server is overwhelmed, it will refuse incoming requests, potentially causing the load balancer to give up on it and return the 502 error you're experiencing. For Apache you could use this link, and for nginx this link. Also check how many established connections are present at any one time, using netstat and the watch command.
3) I would also recommend sending an HTTP(S) request directly to the instance, requesting the same URL that reports the 502. You might do this test from another VM instance in your VPC network.
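A hypothetical sketch of those three checks (the URL map name, backend address, and path are placeholders):

    gcloud compute url-maps describe MY-URL-MAP            # 1) inspect host/path rules
    watch -n1 'netstat -ant | grep ESTABLISHED | wc -l'    # 2) count established connections
    curl -v http://10.128.0.5:8080/failing/path            # 3) hit the backend directly from a VM in the VPC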
Also check whether your backends block Google's Cloud CDN IP addresses; those ranges can be found here: https://cloud.google.com/compute/docs/faq#find_ip_range
This happened to me more than once. I was using Apache on my servers, and the issue was not CPU but configuration.
I am using Apache mpm_event in combination with php-fpm, and there are many settings that limit the maximum number of requests that Apache and FPM will allow.
In my case I increased MaxRequestWorkers in the Apache MPM config from the default 150 to 600, and pm.max_children in the PHP-FPM config to 80 (I don't remember what the default was here); both changes are sketched below.
This worked as expected; hope this helps you extrapolate to your own stack.
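A minimal sketch of the two changes (the file paths vary by distribution and are assumptions here):

    # Apache, e.g. /etc/apache2/mods-available/mpm_event.conf
    <IfModule mpm_event_module>
        ServerLimit          24
        ThreadsPerChild      25
        MaxRequestWorkers    600   # raised from the default 150; must not exceed ServerLimit * ThreadsPerChild
    </IfModule>

    ; PHP-FPM pool, e.g. /etc/php-fpm.d/www.conf
    pm = dynamic
    pm.max_children = 80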
I just encountered 502 errors myself on access to a Prometheus pod running on my GKE Standard cluster (exposed through IAP).
The issue was that the configured external HTTP(S) load balancer's health check was coming back unhealthy, despite the Prometheus pod running as expected. After digging into the issue I found that the GCP auto-generated health check was faulty: it was checking the URL / instead of /-/ready. When I deleted the Prometheus Kubernetes Ingress resource (which auto-generates GCP's LB and health check) and recreated it, the issue was resolved (after a few minutes of resource propagation).
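For what it's worth, on GKE you can also pin the health-check path explicitly with a BackendConfig instead of relying on what gets auto-generated; a minimal sketch, with the resource name and port as assumptions:

    # BackendConfig pinning the LB health check to Prometheus' readiness endpoint
    apiVersion: cloud.google.com/v1
    kind: BackendConfig
    metadata:
      name: prometheus-backendconfig
    spec:
      healthCheck:
        requestPath: /-/ready
        port: 9090

    # attach it to the Service behind the Ingress with the annotation:
    #   cloud.google.com/backend-config: '{"default": "prometheus-backendconfig"}'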

What is the best way to use HTTP 2 with AWS Elastic beanstalk

I have a Ruby on Rails app hosted on AWS using Elastic Beanstalk that currently works with HTTP/1. Now I want to use HTTP/2. Can someone suggest the best approach?
If I remember correctly, when you add a new load balancer to your Elastic Beanstalk environment, it defaults to a Classic Load Balancer, which doesn't support HTTP/2. The solution would be to use an Application Load Balancer, which does support it; you can find this info here. You can also specify it while creating your environment, as you can see here (sketched below). Note that this only enables HTTP/2 between the client and the ALB; the ALB converts those HTTP/2 requests into HTTP/1.1 to communicate with your instance.
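A minimal sketch of selecting the ALB via .ebextensions (the file name is arbitrary, and LoadBalancerType can only be set when the environment is created):

    # .ebextensions/loadbalancer.config
    option_settings:
      aws:elasticbeanstalk:environment:
        LoadBalancerType: application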
As seen here: "If end-to-end HTTP/2 is a requirement for your application you can use a Layer 4 ELB ( Classic Load Balancer with TCP listener or Network Load Balancer). If you are interested also in SSL offloading the only option for now is Classic Load Balancer with an SSL listener."

delayed_paperclip and load balancer

It seems paperclip and/or delayed_paperclip saves a temporary file on the local web server's filesystem before uploading it to S3 or Rackspace Cloud Files (I'm using fog.io).
If you have a delayed_job worker on a separate server, that worker won't be able to see the tmp file on the web server.
Or you could have one worker on each web server, but then worker1 on webserver1 can't see files on webserver2. You would need queue/server affinity, right?
So the question is: how should I use delayed_paperclip with load balancer?
Thank you!
