I have deployed code in 2 Docker containers on an Ubuntu 16.04 instance, and the application runs on a Flask server with Python 2.7.12.
Previously I thought the error came from the boto3 library, because I was uploading files to S3, so I removed that dependency and switched to s3cmd OS-level commands for the uploads. I still get the same error after roughly 100 to 500 hits. I am closing every file, and I am even closing the HTTP connection by setting "Connection": "close" in the headers.
Is there a limit on the number of requests a Docker container can handle? I am also using threaded=True in Flask; could that be the cause?
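
For reference, a minimal sketch of the pattern described (every file handle closed via a context manager, s3cmd invoked at the OS level, and "Connection: close" set on the response); the /upload endpoint and the bucket name are placeholders, not the real ones:

import os
import subprocess
import tempfile

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    uploaded = request.files["file"]
    # Write the upload to a temporary file; the with-block guarantees the
    # handle is closed even if something fails, so descriptors don't pile up.
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "wb") as tmp:
            uploaded.save(tmp)
        # OS-level upload with s3cmd instead of boto3 ("my-bucket" is a placeholder).
        subprocess.check_call(["s3cmd", "put", path, "s3://my-bucket/"])
    finally:
        os.remove(path)  # always clean up the temp file
    resp = jsonify({"status": "ok"})
    resp.headers["Connection"] = "close"
    return resp

if __name__ == "__main__":
    app.run(threaded=True)
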
Some context here:
An old Symfony app runs on multiple EC2 instances and handles millions of requests each day without issues.
For dev purposes, the app was added to a container, and that container is used locally by the developers without having to install all the requirements. The dockerized app uses the same nginx/supervisor/php-fpm configs as the production EC2 instances.
To simplify some dev processes, we decided to create multiple dev environments using AWS Fargate instead of EC2 instances.
The image is pushed to ECR and deployed to clusters using the FARGATE launch type.
The approach is perhaps overkill, since we have 1 cluster running only 1 service with 1 task. That service sits behind an ELB with a target group.
The application works fine, but after some time (hours or days) some requests come back with different headers. The response body is JSON, but the Content-Type is returned as HTML, and other headers such as access-control-allow-headers, access-control-allow-credentials and access-control-allow-methods are dropped from the response, triggering a CORS error in the client's browser.
The weird part is that if 1 page creates 10 requests to this service, 9 will work correctly, but 1 will return 200 with different headers. That endpoint will consistently behave the same way for every user until the task is restarted.
The response headers are set by the Symfony app. I also tried forcing those headers by adding them in the nginx config for every response by default, and the result is the same.
The docker image exposes port 80 to the service.
The load balancer has a rule to forward HTTPS (443) traffic to port 80, so traffic can reach the container.
The load balancer has HTTP/2 enabled.
The only notable difference between the EC2 and Fargate setups is the load balancer: the production load balancer is an old Classic Load Balancer with only HTTP/1.1 enabled, while the new ones are Application Load Balancers with HTTP/2 enabled.
This is driving me crazy. Has anyone experienced something like this?
Incorrect headers
Correct headers
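
In case it helps to reproduce this, a rough diagnostic sketch (the URL is a placeholder and the requests library is assumed) that hits the endpoint repeatedly and flags responses whose Content-Type or CORS headers differ from the first one:

import requests

URL = "https://dev.example.com/api/endpoint"  # placeholder, not the real endpoint

baseline = None
for i in range(50):
    resp = requests.get(URL)
    headers = {
        "content-type": resp.headers.get("Content-Type"),
        "allow-origin": resp.headers.get("Access-Control-Allow-Origin"),
        "allow-headers": resp.headers.get("Access-Control-Allow-Headers"),
    }
    if baseline is None:
        baseline = headers
    elif headers != baseline:
        print("request %d returned different headers: %s" % (i, headers))

Note that requests only speaks HTTP/1.1, so if the flipping only shows up over HTTP/2, comparing curl --http1.1 against curl --http2 responses from the ALB would be a reasonable next step.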
I have successfully uploaded and deployed my Grails application on Amazon Elastic Beanstalk with Tomcat 8 and Java 8 on a Linux EC2 instance, and the web app is up and running. It works well for REST API calls to and from the RDS database. I have an API to upload a file to the server from the mobile app and from the web app front end. When I run this Grails app on localhost, this API works great and uploads files successfully to the user.home/{myapplicationDirectory}/somefile path on my Windows OS. But after deploying the app to Elastic Beanstalk, trying to upload an image from the mobile app fails with a FileNotFoundException:
FileNotFoundException occurred when processing request: [POST] /api/images/add
/usr/share/tomcat8/sdpl/images/260519011919.zip (No such file or directory)
Stacktrace follows:
java.io.FileNotFoundException: /usr/share/tomcat8/sdpl/images/260519011919.zip (No such file or directory)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
I have a service that returns the application's data storage directory with this method:
def String getApplicationPath() {
    return System.getProperty("user.home") + File.separator + "images" + File.separator;
}
Hi, as I don't see your full application I don't want to be too presumptuous, but since you're using AWS Elastic Beanstalk you should treat local file storage as temporary storage only. Your server could be terminated and restarted by Beanstalk if it stops responding or fails any health checks.
You have other options available. Again, I don't know if you have considered them and have a good reason for using the local file system, so forgive me if that's the case, but if not, you could use S3 for storing the images. Then you don't have to worry about disk space, and the images can automatically be served via AWS's CDN, CloudFront, which also reduces the load on your app.
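
Just to illustrate that flow (a sketch only: the bucket name and paths are placeholders, and in a Grails app you would use the AWS SDK for Java rather than Python/boto3):

import boto3

s3 = boto3.client("s3")

def store_image(local_path, key):
    # Upload the file to S3 instead of the instance's local disk.
    s3.upload_file(local_path, "my-image-bucket", key)
    # The object can then be served via CloudFront instead of your app server.
    return key

store_image("/tmp/260519011919.zip", "images/260519011919.zip")
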
Alternatively, if you really want to store these images on a filesystem, you can look at EFS, the Elastic File System. Your Beanstalk instance could mount the filesystem on startup, so it will always be available whenever your instance(s) start.
I didn't suggest using a standard EBS volume because a volume can only be attached to a single instance. With EFS you don't have to worry about space, and it can be mounted to multiple instances, so it is a little more flexible.
We are using a Python-based solution which loads and stores files from S3. For development and local testing we use a Vagrant environment with Docker and docker-compose. We have two docker-compose definitions: one for the assisting backend services (mongo, restheart, redis and s3) and one containing the Python-based REST API solution that uses those backend services.
When our "front-end" docker-compose group interacts with restheart, this works fine (using the name of the restheart container as the server host in HTTP calls). When we do the same with the scality/s3 server, it does not work.
The interesting part is that we have created a test suite that uses the scality/s3 server from Python code running on the host (Windows 10), going through the ports forwarded by Vagrant to the scality/s3 server container within the docker-compose group. There we used localhost as the endpoint_url and it works perfectly.
In the error case (when the frontend web service wants to write to S3), the "frontend" service always responds with:
botocore.exceptions.ClientError: An error occurred (InvalidURI) when calling the CreateBucket operation: Could not parse the specified URI. Check your restEndpoints configuration.
And the s3server always responds with http 400 and the message:
s3server | {"name":"S3","clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","time":1521306054614,"req_id":"e385aae3c04d99fc824d","level":"info","message":"received request","hostname":"cdc8a2f93d2f","pid":83}
s3server | {"name":"S3","bytesSent":233,"clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","httpCode":400,"time":1521306054639,"req_id":"e385aae3c04d99fc824d","elapsed_ms":25.907569,"level":"info","message":"responded with error XML","hostname":"cdc8a2f93d2f","pid":83}
We call the scality server with this boto3 code:
s3 = boto3.resource('s3',
                    aws_access_key_id='accessKey1',
                    aws_secret_access_key='verySecretKey1',
                    endpoint_url='http://s3server:8000')
s3_client = boto3.client('s3',
                         aws_access_key_id='accessKey1',
                         aws_secret_access_key='verySecretKey1',
                         endpoint_url='http://s3server:8000')

s3.create_bucket(Bucket='raw-data')  # here the exception is raised
bucket = s3.Bucket('raw-data')
This issue is quite common. In your config.json file, which I assume you mount in your Docker container, there is a restEndpoints section where you must associate a domain name with a default region. What that means is that the hostname your frontend uses should be listed there, mapped to a default region.
Do note that the default region does not prevent you from using other regions: it's just where your buckets will be created if you don't specify otherwise.
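
For example (a sketch only, assuming the stock config.json layout and the default us-east-1 location), adding the hostname your frontend uses, s3server, to the restEndpoints section would look something like this:

"restEndpoints": {
    "localhost": "us-east-1",
    "127.0.0.1": "us-east-1",
    "s3server": "us-east-1"
}
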
In the future, I'd recommend you open an issue directly on the Zenko Forum, as that is where most of the community and core developers are.
Cheers,
Laure
I am running Rails applications with nginx + Passenger.
After nginx starts serving, I can access the app, but after some time, maybe an hour or half a day, it shows the following message:
Internal server error
An error occurred while starting the web application. It sent an unknown response type "".
Then I need to reboot the server to make nginx serve normally again.
My server is running on AliYun and has only 512 MB of memory. Is that too small to run Passenger?
Or is something wrong with the configuration?
It's only a workaround and you should find the actual problem (by monitoring memory usage, processor usage, open file handles, etc.), but until then you can use the passenger_max_requests directive.
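
For example (the value 1000 is just an illustration; tune it to your app), in the http, server or location block of your nginx config:

# recycle each application process after it has served 1000 requests
passenger_max_requests 1000;

This restarts worker processes periodically, which keeps a slow memory leak from eventually taking the whole server down.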
We have a big problem with downloads when the file size is over 1 GB.
We are using Rails 2.3.5 and Passenger 2.2.9 on an Amazon EC2 instance with 2 GB of RAM, running Fedora 10.
Files are stored in /mnt/files; the project is in /mnt/www/project.
We tried sending files with nginx and X-Accel-Redirect, and also with Apache and X-Sendfile.
We always download exactly 1.09 GB instead of 1.54 GB!!
We can download files without problems when the size is less than 1 GB.
If we link the same file (which is not corrupted) in the Rails public dir, we can download it without any problem.
X-Accel-Redirect and X-Sendfile are configured correctly, tested and checked many times.
So:
nginx with X-Accel-Redirect [fail]
Apache with X-Sendfile [fail]
Sending the file directly, without X-Accel-Redirect or X-Sendfile, on nginx or Apache [fail]
Linking the file in the public dir and downloading it directly [works]
Any suggestion?
Thanks!!!
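
One way to narrow down where the truncation happens (a rough sketch; the URL is a placeholder and the requests library is assumed) is to stream the download and compare the advertised Content-Length with the number of bytes that actually arrive:

import requests

url = "https://example.com/downloads/bigfile.zip"  # placeholder, not the real URL

with requests.get(url, stream=True) as resp:
    advertised = int(resp.headers.get("Content-Length", 0))
    received = 0
    for chunk in resp.iter_content(chunk_size=1024 * 1024):
        received += len(chunk)

print("Content-Length header: %d" % advertised)
print("Bytes actually received: %d" % received)

If the header already reports about 1.09 GB, the size is being mangled before a single byte is sent; if it reports 1.54 GB, the transfer is being cut off mid-stream.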
If you're looking to restrict access to these downloads, have you tried the Access Key module?