Can't deploy my large image classification model on Heroku

I trained an image classification model, served it with FastAPI, and now I want to deploy it on Heroku.
My model is large (859 MB), so I added it to my repo via GitHub LFS. However, Heroku does not support GitHub LFS by default, and even if it did, my model alone would nearly saturate the slug size, which is limited to 500 MB.
The workaround I came up with is to download the model at app startup, as shown below, and then use it to classify images:
import urllib.request

# Download the released model file next to the app at startup
url = 'https://github.com/nainiayoub/paintings-artist-classifier/releases/download/v1.0.0/artists_classifier.h5'
model_file = url.split('/')[-1]
urllib.request.urlretrieve(url, model_file)
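The downloaded file is then loaded into a Keras model when the app starts up; a simplified sketch of that part (the FastAPI startup hook is illustrative, not my exact code):

from fastapi import FastAPI
import tensorflow as tf

app = FastAPI()
model = None

@app.on_event("startup")
def load_model():
    global model
    # Load the .h5 file that was downloaded above
    model = tf.keras.models.load_model(model_file)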
My API was successfully deployed, however it returns:
503 Undocumented Error: Service Unavailable.
which likely means that my model was not loaded.
At this point I am stuck and not sure how to proceed. Do you have any idea, or an alternative way to deploy my large model?

The solution I found was to reduce the model size by converting it to TensorFlow Lite, as shown below, and to use the converted model to classify the input images:
import tensorflow as tf

# Load the original Keras model, then convert it to TensorFlow Lite
model = tf.keras.models.load_model('artists_classifier.h5')
tflite_model_name = 'model_reduced'
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_model = converter.convert()
with open(tflite_model_name + '.tflite', 'wb') as f:
    f.write(tflite_model)
The model size was reduced to 72 MB, which, together with the dependencies, fits within Heroku's slug size limit, so the API was deployed successfully.
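For inference, the reduced model is driven through the TFLite interpreter; a minimal sketch (the 224x224 input shape and the [0, 1] scaling are assumptions that depend on how the original model was trained):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model_reduced.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Assumed preprocessing: one 224x224 RGB image, scaled to [0, 1]
image = np.random.rand(1, 224, 224, 3).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_details[0]['index'])
predicted_class = int(np.argmax(predictions))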

Related

Neo compilation job failed on YOLOv5/v7 model

I was trying to use AWS SageMaker Neo compilation to convert a YOLO model (trained on our custom data) to Core ML format, but got an error on the input config:
ClientError: InputConfiguration: Unable to determine the type of the model, i.e. the source framework. Please provide the value of argument "source", from one of ["tensorflow", "pytorch", "mil"]. Note that model conversion requires the source package that generates the model. Please make sure you have the appropriate version of source package installed.
It seems Neo cannot recognize the YOLO model. Are there any special requirements for models in AWS SageMaker Neo?
I've tried both the latest YOLOv7 model and a YOLOv5 model, with both .pt and .pth file extensions, but I still get the same error. I also tried downgrading the PyTorch version to 1.8, which didn't help either.
However, when I use the YOLOv4 model from this tutorial post, it works fine: https://aws.amazon.com/de/blogs/machine-learning/speed-up-yolov4-inference-to-twice-as-fast-on-amazon-sagemaker/
Any idea whether Neo compilation can work with YOLOv5/v7 models?
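One thing worth checking: Neo's PyTorch path expects a TorchScript artifact saved with torch.jit.save rather than a raw training checkpoint. A rough sketch of the export, assuming a standard YOLOv5 checkpoint layout (a dict with a 'model' key) and that the network can be traced with a 640x640 input; the file names are placeholders:

import torch

# Hypothetical paths; a YOLOv5 checkpoint is a dict whose 'model' key holds the network
checkpoint = torch.load('yolov5s.pt', map_location='cpu')
model = checkpoint['model'].float().eval()
example = torch.rand(1, 3, 640, 640)
traced = torch.jit.trace(model, example)  # Neo wants a traced TorchScript model...
torch.jit.save(traced, 'model.pth')       # ...saved with torch.jit.save, then packaged as model.tar.gz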

NiftyNet 'evaluation' action output is incorrect

I'm trying to use the new 'evaluation' action after inference to generate some metrics for my output. However, the .csv files just show scores of 0 for average_distance and 1 for Jaccard and Dice for each of my data volumes. I can't find any documentation for the evaluation action, so I'm not sure what I'm doing wrong. Also, the --dataset_to_infer=Validation option doesn't seem to work: both inference and evaluation are applied to all data rather than just the validation set.
Thanks!
For the evaluation issue, we're working on the documentation. The dataset_to_infer option is only tested for the applications in NiftyNet/niftynet/application; applications from the model zoo have not been upgraded to support it yet (please file an issue with more details at https://github.com/NifTK/NiftyNet/issues if you believe it's a bug).
For the time being, pointing directly to the inference result in the config.ini worked for me, e.g.:
[inferred]
csv_file = model_dir/save_seg_dir/inferred.csv
I believe this file is currently not found, so evaluation falls back to comparing the labels to themselves. See the issue on GitHub.

Loading a .trig file with inference into Fuseki using the 'tdbloader' bulk loader?

I am currently writing some Java code that extracts data and writes it as Linked Data, using the TriG syntax. I am now using Jena and Fuseki to create a SPARQL endpoint to query and visualize this data.
The data is written so that each source dataset gives me one .trig file containing one named graph. I want to load those files into Fuseki, except that it doesn't seem to understand the TriG syntax...
If I remove the named graphs and rename the files to .ttl, everything loads perfectly into the default graph. But if I try to import TriG files:
using Fuseki's webapp uploader, it either crashes ("Can't make new graphs") or adds nothing except the prefixes, as if graphs other than the default one could not be added (the logs say nothing helpful beyond the error code and description);
using Java code, the process is too slow. I used the technique from "Loading a .trig file into TDB?", but my TriG files are pretty big, so this solution does not work well for me.
So I tried to use the bulk loader, the console command 'tdbloader'. This time everything seems fine, but in the webapp, there is still no data.
You can see the process going fine here: quads are added just fine.
But the result still contains only the default graph and its original data: nothing is added.
So I don't know what to do. The Jena and Fuseki developers suggested not to invoke the bulk loader from Java code (but rather to use the command-line tool), so that's one option I'd like to avoid.
Did I miss something obvious about how to load TriG files into Fuseki? Thanks.
UPDATE:
As it seemed to be a problem with my configuration (see the comments of this post for a link to my config file; I cannot post more than 2 links), I tried to explicitly declare the named graphs I would like to see added to the dataset in Fuseki.
I added code to link (with ja:namedGraph) external graphs that I added via tdbloader. This seems to work. Great!
Now another problem: there is no inference, even though my config file specifies an inference model... I set queries to be applied with the named graphs merged as the default graph, but this does not seem to carry over the OWL inference rules... So simple queries work, but I have 1) to specify the graph I query (with FROM) and 2) no inference on my data.
The two methods are to use the TDB bulk loader offline, or to POST data into the dataset directly (i.e. HTTP POST operations to http://localhost:3030/ds).
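As a sketch of the second option, a TriG file can be POSTed straight to the dataset endpoint with the matching content type ('ds' and 'data.trig' are placeholders for your dataset and file):

import requests

# POST a quad syntax (TriG) file directly to the Fuseki dataset endpoint
with open('data.trig', 'rb') as f:
    response = requests.post(
        'http://localhost:3030/ds',
        data=f,
        headers={'Content-Type': 'application/trig'},
    )
response.raise_for_status()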
You can check whether your graphs are there with a query like:
SELECT (count(*) AS ?C) { GRAPH ?g { ?s ?p ?o } }
The named graphs will show up when the Fuseki server is started unless your configuration of the SPARQL services only exports one graph.

TYPO3: adjust the processed path for processed images programmatically

I'm looking for a way to generate the processed files for one database object in a folder explicitly defined via uid, e.g.:
fileadmin/_processed/<uid>/allProcessedFilesHere
At the moment the files are generated via the following code, and I am not able to figure out how to adjust the config array to pass a different storage.
$settings['additionalParameters'] = '-quality 80';
$settings['width'] = $imageSettings["width"];
$settings['height'] = $imageSettings["height"];
$processedImage = $file->process(\TYPO3\CMS\Core\Resource\ProcessedFile::CONTEXT_IMAGECROPSCALEMASK, $settings);
So I am looking for something like the following, where $uid is just the id of the entry whose images shall be processed:
$storageRepository = \TYPO3\CMS\Core\Utility\GeneralUtility::makeInstance('TYPO3\\CMS\\Core\\Resource\\StorageRepository');
$uidForStorageForDBEntry = getStorageUidForDBObject($uid);
$identifiedStorage = $storageRepository->findByUid($uidForStorageForDBEntry);
$settings['storage'] = $identifiedStorage->getUid();
Creating one storage per uid does not seem to be the right way to do it, but I can't figure out another approach at the moment. As there are hundreds of objects with images in many different formats, I don't want to use a single _processed folder with 100k image entries inside.
The functionality to bind the processed folder to a storage element is being integrated into the TYPO3 Core. It should work in version 7 LTS.

omniORB: read current ORB settings

It is possible to use CORBA::ORB_init to set the native codeset for the ORB.
But if an application retrieves the ORB with different configurations, the ORB is initialized only once.
"-ORBconfigFile config1.cfg"
CORBA::ORB_var orb1 = CORBA::ORB_init(orbInitParams.argc(), orbInitParams.argv());
"-ORBconfigFile config2.cfg"
CORBA::ORB_var orb2 = CORBA::ORB_init(orbInitParams.argc(), orbInitParams.argv());
But the first call wins. So in a big application where the caller of the second ORB_init does not know about the first caller, it will get the ORB configured as in the first call.
This matters if the first caller uses
nativeCharCodeSet = ISO-8859-1
while the second uses
nativeCharCodeSet = UTF-8
Is there a way to read the ORB settings to check whether they were applied successfully?
Why this comes up: I am using omniORB in a DLL (that's where I initialize it). Now the application has a second component using omniORB which comes first, so I lost my UTF-8 configuration.
It seems it is not possible to have two ORBs in one process with omniORB; alternatively, is it possible to read the configuration?
