I have a machine learning notebook that I want to upload to Datalab so I can train my model faster. My training data is not large and is sitting in a Google Cloud Storage bucket that is accessible to Datalab. My model is costly to train, though, so I want to use cloud compute resources. It seems that if I could get my notebook into Datalab, I could read the data from the GCS bucket location instead of from local storage, and I'd be all set for fast model training!
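For example, once the notebook is in Datalab, I expect reading the data to be roughly this simple (a sketch using the google-cloud-storage client; the bucket and file names are made up):

    import io
    import pandas as pd
    from google.cloud import storage

    # Download the training data from GCS into memory.
    client = storage.Client()
    blob = client.bucket('my-training-bucket').blob('train.csv')  # placeholder names
    df = pd.read_csv(io.BytesIO(blob.download_as_string()))
    print(df.shape)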
On the Datalab interface there is an 'Upload' button, which does nothing when I click it.
Many thanks for your guidance!
I've noticed the click target for the upload button is a little misleading. If you click the icon, it doesn't work; you need to click on the word itself. Look at the click target highlighted in the screenshot below. Does this work for you? If not, which browser are you using?
First, you should know that I'm a beginner in this subject. I'm an embedded systems developer by background, but I have never worked with image recognition.
Let me explain my main goal:
I would like to create my own database of logos and be able to
recognize them in a larger image. A typical application would be, for
example, to build a database of Pepsi logos and Coca-Cola logos, and
when I take a photo of a bottle of soda, it tells me whether it is one
of them or another.
So, here is my problem:
I first wanted to use Google's AutoML / ML Kit. I gave it my
databases so it could train itself on them. My first attempt was to
take photos of entire bottles and then compare. It was OK, but not
very efficient. I then tried to give it only the logos, but after
training, it couldn't recognize anything in the whole image of a
bottle.
I think I didn't give it enough images in the first case. But I'd prefer the second approach (giving only the logo), so that the machine would search for something similar in the image.
Finally, my questions:
If you've worked with Google's ML Kit, were you able to train a
model by giving it images that should be recognized within a larger image?
If yes, do you have any hints for me?
Do you know of reliable software that could help me perform tests of this kind? I thought about Azure Machine Learning Studio from
Microsoft (since I develop in Visual Studio).
To start with, I'd like to write as little code as possible, just for testing. Maybe later I could try to code my own machine learning system, but I think that's a big challenge.
I also thought that I would need to split my image into smaller images and then send each of them to the machine, but that would be time-consuming, and I need a fast response (under 2 seconds).
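To make that concrete, the naive tiling I have in mind would look something like this sketch (the window size and stride are arbitrary guesses, and classify() stands in for whatever model gets called per tile):

    import cv2

    def tiles(image, win=128, stride=64):
        # Yield (x, y, crop) windows sliding across the image.
        h, w = image.shape[:2]
        for y in range(0, h - win + 1, stride):
            for x in range(0, w - win + 1, stride):
                yield x, y, image[y:y + win, x:x + win]

    image = cv2.imread('bottle.jpg')  # placeholder file name
    for x, y, tile in tiles(image):
        # score = classify(tile)  # hypothetical per-tile model call
        pass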
Thanks in advance for your answers. I don't need a complete answer with a full tutorial (Stack Overflow isn't intended for that anyway ^^); some advice would already be great.
Have a good day!
Azure’s Custom Vision is great for this: https://www.customvision.ai
Let’s say you want to detect a Pepsi logo. Upload 70 images of products with the logo on them. Use Custom Vision to draw a box around the logo in each photo. Click “Train”, and you get a TensorFlow model with sample code.
Look up any tutorial for it, it’s pretty incredible and really easy to use.
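If you want to sanity-check the exported model from Python, loading a frozen TF 1.x graph looks roughly like this (the tensor names below are placeholders; the real ones are listed in the sample code bundled with the export):

    import numpy as np
    import tensorflow as tf

    # Load the exported frozen graph.
    with tf.gfile.GFile('model.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')

    with tf.Session(graph=graph) as sess:
        image = np.zeros((1, 224, 224, 3), dtype=np.float32)  # stand-in input
        # 'input:0' and 'output:0' are placeholder tensor names.
        result = sess.run('output:0', feed_dict={'input:0': image})
        print(result)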
I'm totally new to iOS development and was playing around with some ideas to learn the ropes. One thing I'm trying to do requires me to traverse the device's file system in order to, e.g., show information on file-type occurrence (basic storage analytics, so to speak), or to access local text files (like emails) in order to analyse them.
After doing some research it seems to me that the system is pretty restricted. Is it possible to access files directly or ask the user for permission to do so?
Any direct help, hint or link would be much appreciated! :)
I have trained a TensorFlow model in Google Cloud using the instructions from this link and have generated a binary (application/octet-stream) file with a .pb extension. However, instead of deploying the model in the cloud, I want to use the model locally on my Android device. How can I do that?
You can do that, and the easiest way of doing it right now is to follow this codelab: TensorFlow for Poets 2: TFLite.
In the codelab you'll embed the model as an asset, but an evolution you can make is to download the model from Cloud Storage whenever there's a new version of it.
If your model uses operations that are not yet supported by TFLite, you can use TensorFlow Mobile. It probably won't be as fast, but it still works fine (there's also a codelab to understand it better).
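For the conversion step itself, here is a rough sketch using the TF 1.x Python converter (the input/output array names are placeholders, so use your model's actual tensor names; on older TensorFlow 1.x versions the converter lives under tf.contrib.lite instead):

    import tensorflow as tf

    # Convert the frozen .pb produced by training into a .tflite file.
    converter = tf.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file='model.pb',
        input_arrays=['input'],    # placeholder: your input tensor name
        output_arrays=['output'],  # placeholder: your output tensor name
    )
    tflite_model = converter.convert()

    with open('model.tflite', 'wb') as f:
        f.write(tflite_model)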
I'd like to know how to create a target for architectural large-scale AR on a real site. In other words, I need Google Tango to superimpose my 3D model on a specific place.
I have tried the Google Tango Area Learning tutorial (https://developers.google.com/tango/apis/unity/unity-codelab-area-learning), but after showing the message WALK AROUND TO RELOCALIZE the tablet does nothing, although I walk around to detect the real space; then, after a few minutes, the message 'Unity project has stopped' appears on the Google Tango tablet screen.
Could an ADF file be used instead of relocalizing the environment?
I've scanned some interior scenes with the Tango Explorer and saved them, but I'm not able to use them for environment-recognition purposes.
I work with Unity and a Google Tango tablet.
Thank you in advance for your response.
For anyone else facing this problem - the likely cause is not having a recent ADF file already on the device.
You need to first create an Area Description File (ADF) by scanning, and then you can separately localise to that ADF - so you cannot "use an ADF instead of relocalising."
The tutorial you link above requires you to have separately created an ADF for your location - it simply chooses the most recent one you have.
You can use the Area Learning example to create your ADFs and try localising to them. It also shows how to superimpose 3D models.
Also, look at the augmented reality example to see how to have objects load already in a specific place.
I have a sequence of IplImage objects coming from a webcam, to which I apply some processing, and I would like the resulting video to be shown on a web page. What is the best way to do this?
@rossb As far as I know, OpenCV has no support for streaming video. There have been attempts to stream video over TCP using sockets, but that would not be the best way to implement this for a web app.
I was able to do this with the following "hack"
1) Set up an Amazon AWS account to use their S3 service.
2) Create an S3 "bucket" and continuously upload the latest frame to it as an image file (using the same name each time). Make sure you set the metadata attribute for no-cache and the permissions so that everyone can view it (see the sketch after this list).
3) Create a simple web page whose JavaScript reloads the image every second (or whatever interval you prefer).
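In Python, the upload loop could look roughly like this (a sketch using boto3, which is my choice of library, with placeholder bucket and file names):

    import time
    import boto3

    s3 = boto3.client('s3')

    while True:
        # 'my-stream-bucket' and 'frame.jpg' are placeholder names.
        s3.upload_file(
            'frame.jpg', 'my-stream-bucket', 'frame.jpg',
            ExtraArgs={
                'CacheControl': 'no-cache',  # keep browsers from caching the frame
                'ACL': 'public-read',        # everyone can view it
                'ContentType': 'image/jpeg',
            },
        )
        time.sleep(1.0)  # roughly match the page's refresh interval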
This is pretty bandwidth-heavy and I haven't tested it at any scale. Since it's Amazon, I'm not worried that things will fall apart when I scale traffic; however, users won't be happy with their bandwidth consumption. But it is free for up to 2,000 PUTs and 20,000 GETs per month.
Next I want to figure out how to stream properly with codecs, etc., and am pulling my hair out trying to find a solution.
I'm happy to provide my source (iOS client and JavaScript), but I'm on a train now. If you are truly interested, ping me so I remember when I'm at my desk...
You could make your own web server. Implementing just the basic GET command should be very simple. If you're using a .NET language, things should be very easy for you.
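For instance, here is a minimal sketch using Python, Flask, and OpenCV to serve the frames as an MJPEG stream (my choice of stack; the endpoint name and port are arbitrary):

    import cv2
    from flask import Flask, Response

    app = Flask(__name__)

    def frames():
        cap = cv2.VideoCapture(0)  # webcam
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # ... apply your processing to `frame` here ...
            ok, jpeg = cv2.imencode('.jpg', frame)
            if not ok:
                continue
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n' + jpeg.tobytes() + b'\r\n')

    @app.route('/video')
    def video():
        # multipart/x-mixed-replace makes the browser keep replacing the image.
        return Response(frames(),
                        mimetype='multipart/x-mixed-replace; boundary=frame')

    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=8080)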