I have a web server whose sole task is image processing.
The image-processing code on the web server loads the same XML settings file (up to 2 MB) for every request.
To avoid reloading the XML file on each request, I use the following architecture:
The web server is Apache (2.2.16) with the prefork MPM and FastCGI (mod_fastcgi), running a C++ image-processing application.
The image-processing application loads the XML file once, spawns N threads, and each thread runs:
thread_func()
{
    FCGX_Request request;
    FCGX_InitRequest(&request, 0, 0);
    while (true)
    {
        FCGX_Accept_r(&request);
        // process the image and write the response
        // with FCGX_FPrintF / FCGX_PutStr
        FCGX_Finish_r(&request);
    }
}
One weakness of this model is that, at any moment, at most one thread in each Apache process is doing useful work between its FCGX_Accept_r and FCGX_Finish_r calls. Would it be worth having only one thread per process (single-threaded) and increasing MaxClients instead?
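For reference, the prefork knobs in question live in httpd.conf; a sketch with purely illustrative values (tune them to your memory budget):

```apache
<IfModule mpm_prefork_module>
    StartServers          8
    MinSpareServers       5
    MaxSpareServers      20
    MaxClients          256
    MaxRequestsPerChild   0
</IfModule>
```

With prefork, MaxClients caps the number of simultaneous connections Apache will serve.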
Thank you in advance :)
You can use the FastCGI module; it allows more than one thread per process.
I have a Rails 6 application using Unicorn.
One of my endpoints handles image uploads to S3.
For test purposes I have made a stack with a single web server running a single Unicorn worker.
I have noticed that even when multiple large image uploads go through this endpoint, each POST taking ~2 minutes per image, the Unicorn worker is still able to pick up other requests at the same time and process them.
My question: is it possible for the Unicorn master to release a Unicorn worker during a request (e.g. while it is waiting for an upload to complete) and allow that worker to process other requests?
Thank you!
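My mental model of a single worker is strictly serial, something like this stdlib-only sketch (illustrative only, not Unicorn's actual code):

```ruby
# A single "worker" pulls requests off a queue and handles them one at a time.
requests = Queue.new
3.times { |i| requests << i }

completed = []
worker = Thread.new do
  until requests.empty?
    req = requests.pop
    sleep 0.01          # stand-in for a slow upload
    completed << req    # requests finish strictly in order
  end
end
worker.join
puts completed.inspect  # => [0, 1, 2]
```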
I want to use Apache Tika for a huge number of documents at enterprise scale. Which should I use: Tika Server, Tika App, or direct Java calls? Can you suggest a system architecture (e.g. 3-4 load-balanced, physically separate Tika servers)?
Making PUT calls to a REST endpoint to send thousands of 0.5 GB documents over HTTP, one at a time, is not an appropriate scenario for the Tika Server. It will not be memory-efficient, and the server will likely crash because of memory leaks or parser bugs.
That said, as of v1.19 there is a -spawnChild option that periodically restarts the process after it has handled -maxFiles files. From v2.x this is the default.
For your needs, you should simply use the tika-app in batch mode, which:
Runs locally, using an input and output directory that you specify
Sets up parent/child processes to robustly handle hangs/OOMEs
Runs multiple parser threads in parallel
Can restart child every x minutes or after y files to avoid memory leaks
Logs failures
java -jar tika-app.jar -i <input_directory> -o <output_dir>
We have 20 different websites on two AWS servers. All of them use a web application called DesignMaker (an MVC application that uses ImageMagick for image composition and alteration) to do heavy image processing on users' images. Users can upload images to the application and start designing with them. You can assume that all the image processing is already optimized in the code.
My concern is to take the load of heavy image processing off the CPUs of the main servers and put it on another server. The first thing that comes to mind is to split that functionality out into a web service running on other machines, so the image-processing load lands on other hardware. Please tell me if I have missed something.
Is calling an API to do some image processing a good approach?
What are other alternatives?
You're right to move image processing off your web thread; keeping it there is just bad practice.
If this were me (and I have done this in a few projects I've worked on), I would upload the image from the MVC app to an AWS S3 bucket, then fire off a message using SQS or some other queuing platform. Then have an Elastic Beanstalk instance listening for messages on the queue; when it gets a message, it picks the image up from S3 and processes it however you want.
(I'm an Azure guy, so forgive me if I've picked the wrong AWS services, but the pattern is the same.)
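The pattern itself is easy to sketch in a few lines of stdlib-only Ruby, with an in-process Queue standing in for SQS and a Hash standing in for S3 (the names and the "processing" step are purely illustrative, not a real AWS client):

```ruby
storage  = {}        # stand-in for S3
messages = Queue.new # stand-in for SQS

# Worker (the "Elastic Beanstalk instance"): pull a message,
# fetch the object, process it, store the result.
worker = Thread.new do
  while (msg = messages.pop)                        # nil message = shut down
    image = storage[msg[:key]]
    storage["#{msg[:key]}.processed"] = image.reverse # dummy "processing"
  end
end

# Web app: store the upload, enqueue a message, return immediately.
storage["uploads/cat.png"] = "rawbytes"
messages << { key: "uploads/cat.png" }

messages << nil # end the demo
worker.join
puts storage["uploads/cat.png.processed"] # => "setybwar"
```

The web request finishes as soon as the message is queued; the heavy work happens on whatever machine runs the worker.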
I have read that Passenger is a multi-process server, which means it can handle multiple requests at a time.
I am running Passenger in standalone mode on my local machine and have written code to check whether Passenger can serve multiple requests simultaneously. My code is:
class Test < ApplicationController
def index
sleep 10
end
end
I hit it with two HTTP requests simultaneously, expecting both to return after 10 seconds, but one returns after 10 seconds and the other after 20. So it appears to handle one request at a time rather than simultaneously.
Does this mean Passenger is a single-process server and not a multi-process one? Or am I missing something?
Passenger (along with most other application servers) runs no more than one request per thread. Typically there is also only one thread per process. From the Phusion Passenger docs:
Phusion Passenger supports two concurrency models:
process: single-threaded, multi-processed I/O concurrency. Each application process only has a single thread and can only handle 1 request at a time. This is the concurrency model that Ruby applications traditionally used. It has excellent compatibility (can work with applications that are not designed to be thread-safe) but is unsuitable for workloads in which the application has to wait for a lot of external I/O (e.g. HTTP API calls), and uses more memory because each process has a large memory overhead.
thread: multi-threaded, multi-processed I/O concurrency. Each application process has multiple threads (customizable via PassengerThreadCount). This model provides much better I/O concurrency and uses less memory because threads share memory with each other within the same process. However, using this model may cause compatibility problems if the application is not designed to be thread-safe.
(Emphasis my own)
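The practical difference between the two models is easy to see with plain Ruby threads (a standalone stdlib illustration of overlapping I/O waits, not Passenger itself):

```ruby
# Four "requests" that each block on I/O for 0.2 s.
start = Time.now
threads = 4.times.map do
  Thread.new { sleep 0.2 } # stand-in for a slow HTTP/API call
end
threads.each(&:join)
elapsed = Time.now - start

# With one thread per request the waits overlap:
# total is ~0.2 s, not the 4 * 0.2 = 0.8 s a single thread would take.
puts format("%.2f s", elapsed)
```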
Try something like:
def index
n = params[:n].to_i
sleep n
render :text => "I should have taken #{n} seconds!"
end
I am trying to understand exactly how requests to a rails application get processed with Phusion Passenger. I have read through the Passenger docs (found here: http://www.modrails.com/documentation/Architectural%20overview.html#_phusion_passenger_architecture) and I understand how they maintain copies of the rails framework and your application code in memory so that every request to the application doesn't get bogged down by spinning up another instance of your application. What I don't understand is how these separate application instances share the native ruby process on my linux machine. I have been doing some research and here is what I think is happening:
One request hits the web server which dispatches Passenger to fulfill the request on one of Passenger's idle worker processes. Another request comes in almost simultaneously and is handled by yet another idle Passenger worker process.
At this point there are two requests being executed which are being managed by two different Passenger worker processes. Passenger creates a green thread on Linux's native Ruby thread for each worker process. Each green thread is executed using context-switching so that blocking operations on one Passenger worker process do not prevent that other worker process from being executed.
Am I on the right track?
Thanks for your help!
The application instances don't "share the native Ruby process". An application instance is a Ruby process (or a Node.js process, or a Python process, depending on the language your app is written in), and is the same thing as a "Passenger worker process". It also has nothing to do with threading.
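A rough stdlib-only sketch of that process model (forking the way a preforking app server does on Linux; illustrative only, not Passenger's actual code):

```ruby
# Each "application instance" is a whole OS process created by fork.
workers = 2.times.map do |i|
  fork do
    # This child process IS the application instance: its own memory,
    # its own PID. Here it would loop, handling one request at a time.
    exit!(i) # exit immediately with a distinct status for the demo
  end
end

statuses = workers.map { |pid| Process.wait2(pid).last.exitstatus }
puts statuses.sort.inspect # => [0, 1]
```

There is no shared interpreter: each forked child runs its own copy of the Ruby VM (sharing memory only copy-on-write), which is why one instance blocking does not block the others.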