I applied Principal Component Analysis (PCA) to my data set to achieve better model accuracy, reducing the original 13 feature dimensions to 10. Everything is fine up to this point.
After implementing the model as a web app, it builds and seems fine in the studio.
In the testing phase of model prediction, instead of asking for the 10 features as input, the UI shows the original 13 features, and the output shows the 10 newly generated features without any feature names. Prediction is also not working at all after executing it.
Screenshots are attached; please refer to them.
Could you also share a diagram of your experiment? This kind of issue happens when the input of the model is not set up correctly for your requirements. Please double-check how you defined your experiment.
One thing I want to highlight: on the portal's quick-test page, all the input fields will match your original imported data, according to the documentation:
https://learn.microsoft.com/en-us/azure/machine-learning/classic/tutorial-part3-credit-risk-deploy#deploy-as-a-new-web-service
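In other words, the dimensionality reduction should live inside the trained model rather than being a separate step the caller performs. As a rough analogy (not Studio code) in scikit-learn, with dummy data shapes:

    # Sketch: keep PCA inside the model pipeline so the service input
    # stays the original 13 raw features, matching what the quick-test
    # page asks for. The data here is random, just to make it runnable.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    X = np.random.rand(100, 13)        # 13 original features (dummy)
    y = np.random.randint(0, 2, 100)   # dummy binary labels

    model = make_pipeline(PCA(n_components=10), LogisticRegression())
    model.fit(X, y)

    # Callers always send 13 features; the 13 -> 10 reduction happens inside.
    print(model.predict(np.random.rand(1, 13)))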
We are currently using the Adobe Analytics APIs, and we need to validate dimension and metric combinations to know whether a given combination is possible. Earlier we used the v1.4 Report.Validate API, which gave us the desired result, but we have now moved to the v2 API. Does anybody have an idea how we can achieve this with the v2 API?
#AdobeSwagger #AdobeAnalytics
Thank You!
In the 2.0 API you generally shouldn't have issues with bad metric or dimension combinations. When you get the list of dimensions for a report suite, you can see whether each dimension is valid in a report or a segment, but I believe that by default it only returns dimensions and metrics that are valid for reporting.
The validation option could also be used to check whether your request is well formed, but for that use case you will get an error quickly anyway if you try to run the report.
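If you want to check up front, you can pull the list of reportable dimensions and test a candidate against it. A minimal sketch against the 2.0 REST endpoint (company ID, report suite ID, and credentials are all placeholders):

    # Sketch: fetch the dimensions the 2.0 API reports as available for
    # a report suite and check a candidate ID against them.
    import requests

    COMPANY_ID = "mycompany"       # placeholder global company ID
    RSID = "myreportsuite"         # placeholder report suite ID
    TOKEN = "..."                  # bearer access token
    API_KEY = "..."                # client ID / API key

    resp = requests.get(
        f"https://analytics.adobe.io/api/{COMPANY_ID}/dimensions",
        params={"rsid": RSID},
        headers={"Authorization": f"Bearer {TOKEN}", "x-api-key": API_KEY},
    )
    resp.raise_for_status()

    valid_dimensions = {d["id"] for d in resp.json()}
    print("variables/page" in valid_dimensions)  # is this one reportable?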
First, you should know that I'm a beginner in this subject. I'm an embedded systems developer by background, and I have never worked with image recognition.
Let me expose my main goal:
I would like to create my own database of logos and be able to recognize them in a larger image. A typical application would be, for example, to build a database of Pepsi and Coca-Cola logos so that when I take a photo of a bottle of soda, it tells me whether the logo is one of them or another.
So, here is my problem:
I first wanted to use Google's AutoML Kit. I gave it my databases so it could train itself on them. My first attempt was to take photos of entire bottles and then compare; it was OK but not very efficient. I then tried to give it only the logos, but after training, it couldn't recognize anything in a whole image of a bottle.
I think I didn't provide enough images in the first case, but I'd prefer the second approach (providing only the logo) so that the machine searches for something similar anywhere in the image.
Finally, my questions:
If you've worked with Google's ML Kit, were you able to train a model by giving it images that should be recognized within a larger image? If yes, do you have any hints for me?
Do you know of reliable software that could help me perform tests of this kind? I thought about Azure Machine Learning Studio from Microsoft (since I develop in Visual Studio).
At first, I'd like to write as little code as possible, just for testing. Maybe later I could try to build my own machine learning system, but I think that's a big challenge.
I also thought that I would need to split my image into smaller images and send each of them to the machine (something like the sketch below), but that would be time-consuming, and I need a fast reaction (under 2 seconds).
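For illustration, the naive version of that splitting idea would look something like this (OpenCV; classify_patch is a stand-in for whatever model or API does the actual recognition):

    # Naive sliding window: crop patches and send each to a classifier.
    # classify_patch() is a placeholder for the real logo recognizer.
    import cv2

    def classify_patch(patch):
        return None  # placeholder: call the trained model / cloud API here

    image = cv2.imread("bottle.jpg")
    h, w = image.shape[:2]
    win, step = 128, 64  # window size and stride (arbitrary values)

    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            label = classify_patch(image[y:y + win, x:x + win])
            if label is not None:
                print(f"found {label} at ({x}, {y})")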
Thanks in advance for your answers. I don't need a complete answer with a full tutorial (Stack Overflow is not intended for that anyway ^^); some advice would already be good.
Have a good day!
Azure’s Custom Vision is great for this: https://www.customvision.ai
Let’s say you want to detect a Pepsi logo. Upload 70 images of products with the logo on them, and use Custom Vision to draw a box around the logo in each photo. Click “train”, and you get a TensorFlow model with code.
Look up any tutorial for it, it’s pretty incredible and really easy to use.
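Once an iteration is trained and published, calling it is a single HTTP request. A minimal sketch (endpoint, project ID, iteration name, and key are placeholders for your own project's values):

    # Sketch: send a local image to a published Custom Vision
    # object-detection iteration. All identifiers are placeholders.
    import requests

    ENDPOINT = "https://YOURREGION.api.cognitive.microsoft.com"
    PROJECT_ID = "your-project-id"
    ITERATION = "Iteration1"
    PREDICTION_KEY = "your-prediction-key"

    url = (f"{ENDPOINT}/customvision/v3.0/Prediction/{PROJECT_ID}"
           f"/detect/iterations/{ITERATION}/image")

    with open("bottle.jpg", "rb") as f:
        resp = requests.post(
            url,
            headers={"Prediction-Key": PREDICTION_KEY,
                     "Content-Type": "application/octet-stream"},
            data=f.read(),
        )
    resp.raise_for_status()

    # Each prediction carries a tag, a probability, and a bounding box.
    for pred in resp.json()["predictions"]:
        print(pred["tagName"], round(pred["probability"], 3))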
I'm using Google's Vision API to analyze screenshots of error messages from one of our products. The OCR part is easy with these managed services, but are there any best-practice tools for working with the actual text?
More specifically, an error screenshot will contain things like the product name, the product version, the version of the underlying operating system, whether the OS is 32- or 64-bit, and the actual error message (a C# stack trace).
So all the text is there from the OCR scan, but since the screenshots are taken by users, one cannot assume the different pieces of information above appear in specific areas of the screenshot.
How should I go about analyzing this data? Are we talking simple string manipulation and custom domain knowledge (I tried this, and it got me pretty far), or is this a job for some sort of machine-learning text analysis offered by Google/Microsoft (or is that overkill)?
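For context, the string-manipulation approach I tried looks roughly like this (the patterns are simplified examples for one hypothetical dialog layout):

    # Pull known fields out of the raw OCR text with regexes. The
    # patterns are simplified examples for a hypothetical error dialog.
    import re

    ocr_text = (
        "MyProduct 4.2.1\n"
        "Windows 10 64-bit\n"
        "System.NullReferenceException: Object reference not set ..."
    )

    fields = {
        "version": re.search(r"\b(\d+\.\d+\.\d+)\b", ocr_text),
        "os_bits": re.search(r"\b(32|64)-bit\b", ocr_text),
        "exception": re.search(r"\b(\w+(?:\.\w+)*Exception)\b", ocr_text),
    }
    print({k: m.group(1) for k, m in fields.items() if m})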
Since you cannot rely on the information appearing in fixed areas of the screenshot:
1. Using simple template matching, find the error message window you are looking for in the screenshot.
2. Run Google's Vision API on specific areas relative to the position found in step 1 to obtain the specific pieces of information.
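A sketch of step 1 with OpenCV (template.png would be a reference crop of the dialog chrome; the threshold is something to tune):

    # Locate the error window via template matching, then crop the
    # region that would be sent on to the Vision API.
    import cv2

    screenshot = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

    result = cv2.matchTemplate(screenshot, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)

    if score > 0.8:  # match-confidence threshold; tune for your dialogs
        x, y = top_left
        th, tw = template.shape
        crop = screenshot[y:y + th, x:x + tw]
        cv2.imwrite("error_window.png", crop)  # region to OCR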
I would like to use style transfer (example) in CoreML. Since CoreML supports converting Keras models, my first thought was to convert one of their samples, like this one or this one, but it seems there are a few issues with this approach, based on this thread.
How can I use style transfer in CoreML? Any examples would help.
Edit:
Thanks for the link @twerdster, I was able to test it and it's working for me.
Additionally, I found this repo (torch2coreml) by Prisma.
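For anyone else landing here, the Keras route boils down to one converter call with coremltools. A sketch (the .h5 file and input/output names are placeholders, not from a specific sample):

    # Sketch: convert a trained Keras style-transfer network to CoreML
    # with coremltools' Keras converter (pre-4.0 era API). File and
    # layer names are placeholders.
    import coremltools

    mlmodel = coremltools.converters.keras.convert(
        "style_transfer.h5",        # trained Keras model (placeholder)
        input_names="image",
        image_input_names="image",  # expose the input as an image type
        output_names="stylized_image",
    )
    mlmodel.save("StyleTransfer.mlmodel")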
I have a drawing application that renders images (using Quartz2D). I want to be able to run regression tests to determine whether a file renders consistently the same, so I can tell if anything broke after code changes. Are there any APIs that allow me to compare screenshots (or image files) and get a similarity score?
I am not sure if this will suit your needs (it doesn't return a score), but this software lets you compare images and decide which parts to ignore:
Visual CI
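If you do need a numeric score, a simple pixel-level one is easy to compute yourself. A minimal sketch with Pillow and NumPy (RMSE, where 0 means identical):

    # Root-mean-square error between two rendered images: 0 means
    # identical; any pixel drift raises the score. A perceptual metric
    # would be needed to tolerate e.g. antialiasing differences.
    import numpy as np
    from PIL import Image

    def rmse(path_a, path_b):
        a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float64)
        b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float64)
        if a.shape != b.shape:
            raise ValueError("images must have the same dimensions")
        return float(np.sqrt(np.mean((a - b) ** 2)))

    print(rmse("expected.png", "actual.png"))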