I have the following problem. I am working for a company that orders a lot of trading cards from different shops.
We order a trading card after a customer asks us to find a particular one based on a picture. The process looks like this:
The customer sends us a request for a trading card he wants; usually, he sends us a picture
We find and order the trading card
When the trading card is received, a staff member has to manually find the corresponding order from a list of pictures of all pending orders. I'll spare you the details, but the picture is the only relevant info that allows us to find the order.
What I wish to build is a setup where the admin passes the trading card under a camera. The camera takes a picture, hashes the image, and compares it to the hash of the product image we already have in the database from the user's order request.
Our web app runs on Laravel. I found this package that could do the job:
https://github.com/jenssegers/imagehash
So I would hash the picture of the item received, compare it to the image hashes already stored in the DB from customer requests, and retrieve similar results.
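The comparison step boils down to Hamming distance between two perceptual hashes: the fewer bits that differ, the more similar the images. Here is a minimal sketch of that matching step (in Ruby rather than PHP, purely for illustration); the hex hash format, the order structure, and the threshold of 10 are all assumptions, not something jenssegers/imagehash prescribes.

```ruby
# Perceptual hashes compare by Hamming distance: the number of
# differing bits between the two hashes.
def hamming_distance(hash_a, hash_b)
  (hash_a.to_i(16) ^ hash_b.to_i(16)).to_s(2).count("1")
end

# Find pending orders whose stored hash is close to the freshly
# captured one. The threshold is a hypothetical tuning parameter.
def find_similar(captured_hash, pending_orders, threshold: 10)
  pending_orders
    .map { |order| [order, hamming_distance(captured_hash, order[:image_hash])] }
    .select { |_, distance| distance <= threshold }
    .sort_by { |_, distance| distance }
end

orders = [
  { id: 1, image_hash: "ffd7918181c9ffff" },
  { id: 2, image_hash: "0f0f0f0f0f0f0f0f" },
]
matches = find_similar("ffd7918181c9fffe", orders)
# Only order 1 is within the threshold (distance 1).
```

The same XOR-and-popcount logic is what the package's `distance()` method does in PHP, so the threshold you pick there transfers directly.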
I am stuck on the hardware part; I'm not sure what kind of camera I should use.
Any suggestions are welcome.
I have an app I'm working on that is a credits system for a store. A customer brings in items and receives a credit, and can then turn around and use that credit towards certain goods in the store. I've set it up so that every time a credit holder or credit is created, updated, or destroyed, the event is logged. I'm wondering if there is an easy way to use the event data from the logs to create a dashboard displaying things such as X credits created and Y credits used today. This may not be the right way to go about this at all, and if so, feel free to point me in another direction. Thanks in advance!
You should save the information into a database (in addition to the log) and operate on it from there.
So, for example, maybe you have a User: it should be a model with a credits attribute, which should be an integer. You can modify this value every time a transaction happens.
You can also create an associated Transaction model that belongs_to the user. To find out which transactions happened on a certain day, you can pull up all of that user's transactions in a given time range.
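The dashboard counts then reduce to filtering transactions by day and action. A minimal plain-Ruby sketch of that aggregation, assuming each transaction records an action (`:created` or `:used`) and a timestamp; in a real Rails app these would be rows queried with something like `Transaction.where(created_at: Date.today.all_day)`, and all names here are hypothetical:

```ruby
require "date"

# Count credits created and used on a given day from a list of
# transaction records (hashes standing in for ActiveRecord rows).
def daily_stats(transactions, day = Date.today)
  todays = transactions.select { |t| t[:created_at].to_date == day }
  {
    credits_created: todays.count { |t| t[:action] == :created },
    credits_used:    todays.count { |t| t[:action] == :used },
  }
end

transactions = [
  { action: :created, created_at: DateTime.new(2024, 1, 2, 10, 0) },
  { action: :created, created_at: DateTime.new(2024, 1, 2, 14, 30) },
  { action: :used,    created_at: DateTime.new(2024, 1, 2, 18, 0) },
  { action: :used,    created_at: DateTime.new(2023, 12, 31, 9, 0) }, # earlier day
]
stats = daily_stats(transactions, Date.new(2024, 1, 2))
# => { credits_created: 2, credits_used: 1 }
```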
If your credits work similarly to money and your transactions are like orders, you may want to look into the Spree gem: https://github.com/spree/spree
You definitely do not want to be reading from the logs for routine actions like the ones you're describing.
This question is very similar to this one, however there are no answers on that one. I posted this one with more clarity in hopes of receiving an answer.
According to this presentation, Twitter incorporates a fanout method to push Tweets to each individual user's timeline in Redis. Obviously, this fanout only takes place when a user you're following Tweets something.
Suppose a new user, who has never followed anyone before (and conversely has no Tweets in their timeline), decides to follow someone. Using just the above method, they would have to wait until the user they followed Tweeted something for anything to show up on their timeline. After some observation, this is not the case. Twitter pulls in the latest Tweets from the user.
Now suppose that a new user follows 5 users, how does Twitter organize and push those Tweets into the user's timeline in Redis?
Suppose a user already follows 5 users and has a fair amount of Tweets from them in their timeline. When they follow another 5 users, how are those users' individual Tweets pushed into the initial user's timeline in Redis in the correct order? More importantly, how does Twitter calculate how many Tweets to bring in from each user (given that timelines are capped at 800 Tweets)?
Here is how I would try to implement this, if I understand your question correctly.
Store each tweet in a hash. The key of the hash could be something like: tweet:<tweetID>.
Store the IDs of a given user's tweets in a sorted set named user:<userID>:tweets, using each tweet's Unix timestamp as its score so they appear in the correct order. You can then get the 800 most recent tweet IDs for the user with ZREVRANGEBYSCORE:
ZREVRANGEBYSCORE user:<userID>:tweets +inf -inf LIMIT 0 800
When a user follows a new person, you copy the list of IDs returned by this instruction into the follower's timeline (either in the application code, or using a Lua script). This timeline is once again represented by a sorted set, with Unix timestamps as scores. If you do the copy in application code, which is perfectly acceptable with Redis, don't forget to use pipelining to perform your multiple writes to the sorted set in a single network operation. It will greatly improve performance.
To get the timeline content, use pipelining too. Request the tweet IDs using ZREVRANGEBYSCORE with a LIMIT option and/or a timestamp as the lower bound if you don't want tweets posted before a certain date.
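The merge-on-follow step above can be simulated in plain Ruby. Each user's tweets are kept as [tweet_id, timestamp] pairs standing in for the user:<userID>:tweets sorted set; in production these would be Redis ZADD / ZREVRANGEBYSCORE / ZREMRANGEBYRANK calls, and the names here are purely illustrative.

```ruby
TIMELINE_CAP = 800

# Union the follower's timeline with the followed user's recent tweets,
# keep newest-first timestamp order, and trim to the cap -- the in-memory
# equivalent of copying IDs into the timeline zset and trimming it.
def follow(timeline, followed_tweets)
  (timeline + followed_tweets)
    .uniq { |id, _ts| id }
    .sort_by { |_id, ts| -ts }
    .first(TIMELINE_CAP)
end

alice_timeline = [["t1", 100], ["t3", 300]]
bob_tweets     = [["t2", 200], ["t4", 400]]
timeline = follow(alice_timeline, bob_tweets)
# IDs newest first: t4, t3, t2, t1
```

Because every entry carries its timestamp as the score, following 5 users at once is just 5 of these unions; the sorted set keeps the interleaved order correct, and trimming to 800 answers the "how many from each user" question implicitly, since only the newest 800 overall survive.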
I am looking for, basically, a repository of information regarding stores. Say I have an app: how would I find all the local stores that sell cellphones? I have been using Google search to solve this, but no luck. I know this is possible because, through the use of Google Maps or i-maps, you are able to find stores and public locations near you. I want to be able to find the items a store sells, so I could, for example, search in-app: "What stores will be selling the PS4?" This would then display the locations of all stores that will be selling the PS4. I am not looking for code; I'm looking for where this data would be stored, like Data.gov, etc.
Edit:
That's what I believed (@Naomi Owens) regarding the item stock of a store. Since the store's current stock information isn't available, a workaround would be to find all the stores that would plausibly sell that particular item and then, based on assumptions and factual information such as the item's release date, notify the user that the queried item will be sold at those retailers. I guess you could then use a percentage system based on 'logical assumption' for the likelihood of a store selling that item. For example, Walmart would have a higher likelihood (percentage) of selling a PS4 than FYE (or some other smaller electronics store).
Locating nearby stores selling a particular product should be simple enough using a Google API. The stock in these shops is constantly changing, however, and even if such a database existed, it might not be very accurate. Each shop's stock, orders, customers, etc. are kept in their own private databases, which you won't have access to for obvious reasons. Many apps that do this type of thing do so through HTML scraping or parsing RSS feeds. Neither of those would work in this case, given that most shops do not offer RSS feeds and the large amount of HTML scraping that would otherwise be required.
Edit - I doubt that a database exists containing a list of a store's assumed stock, since most people are only interested in live information. It sounds like a database you would have to create yourself or pay someone else to create for you.
I have a strange requirement from a client: he needs to display a random selection (100-200 items from mixed categories) of products sold and shipped by Amazon, ordered by price. The idea is to let people find gift ideas based on a user-input price point.
I have been looking through the API docs but cannot see an obvious way to search by price. I am thinking of writing a script to "copy" large parts of the Amazon product catalogue into a local database and have it update every few weeks, then use this for user searches, but this does not feel right; there must be a better way.
Has anyone any experience with this type of problem? Thanks!
You would want to use the Amazon Product Advertising API. With this API you would perform an ItemSearch query against a SearchIndex. The possible parameters to ItemSearch are listed in the API docs.
You can see in the docs that you cannot query by MinimumPrice and MaximumPrice with SearchIndex: All. However, if you search specific indexes, you can do a price-related search.
I would guess that you can agree with your client on which categories the items should come from. Then you can just query them one by one.
Amazon's database changes very often. Hence, caching data for a week without updating may not be desirable.
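The per-category approach above can be sketched as: run one price-bounded ItemSearch per agreed category, then merge and sort the results locally. In this Ruby sketch the API call is stubbed with a local catalog; the category names, item fields, and prices (in cents) are all made-up placeholders, not real API responses.

```ruby
# Stand-in for a real ItemSearch call with MinimumPrice/MaximumPrice.
# A real implementation would hit the Product Advertising API here.
def item_search(search_index, min_price:, max_price:)
  CATALOG.fetch(search_index, [])
         .select { |item| item[:price].between?(min_price, max_price) }
end

# Query each category, merge, sort by price, and cap the result count.
def gift_ideas(categories, min_price:, max_price:, limit: 200)
  categories
    .flat_map { |c| item_search(c, min_price: min_price, max_price: max_price) }
    .sort_by { |item| item[:price] }
    .first(limit)
end

CATALOG = {
  "Toys"        => [{ title: "Puzzle", price: 1299 }, { title: "Kite", price: 4500 }],
  "Electronics" => [{ title: "Earbuds", price: 1999 }],
}

ideas = gift_ideas(["Toys", "Electronics"], min_price: 1000, max_price: 2500)
# => Puzzle (1299), then Earbuds (1999); the Kite is over budget.
```

Since each category query is independent, the results can be cached briefly per price point rather than mirroring the whole catalogue.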
I've been looking at the APIs for UPS, FedEx, USPS, etc., and I can't seem to find any method of pulling all tracking info for a given user; I only see methods that pull info via a tracking number. Has anyone been able to find a way to get at this data? It seems silly to me that these huge carriers wouldn't supply this info in an easy way.
I'm trying to accomplish this in Rails.
We were able to integrate with UPS Quantum View and even with FedEx Insight. These services will give you a list of all inbound and outbound shipments that are billed to your UPS/FedEx account. You can get info on every piece of each shipment: tracking numbers, weight, shipper and recipient info (Company name, city, state, country).
To pull information from UPS Quantum View using their API, you will need to obtain a so-called Access Key, and you'll need to create subscriptions: one for inbound shipments, one for outbound ones. This can be done on ups.com if you already have a UPS shipping account. You don't have to wait; it's provided instantly. We have a video showing how to get the key and set up the subscriptions; it's on easytag.net in the Help section, titled "Obtaining a UPS Access Key".
When creating API requests to UPS, you'll need to include a key and a subscription name.
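For a rough idea of what that request looks like, here is a sketch of the XML envelope: an access block with your credentials followed by the QuantumViewRequest naming your subscription. The element names and overall structure here are from memory and may not match the current UPS spec exactly; treat this as a hedged illustration and verify against the official Quantum View XML developer guide.

```ruby
# Build an (approximate) Quantum View request body. All element names
# are assumptions based on the general shape of UPS's XML APIs.
def quantum_view_request(license:, user:, password:, subscription:)
  <<~XML
    <?xml version="1.0"?>
    <AccessRequest>
      <AccessLicenseNumber>#{license}</AccessLicenseNumber>
      <UserId>#{user}</UserId>
      <Password>#{password}</Password>
    </AccessRequest>
    <?xml version="1.0"?>
    <QuantumViewRequest>
      <Request>
        <RequestAction>QVEvents</RequestAction>
      </Request>
      <SubscriptionRequest>
        <Name>#{subscription}</Name>
      </SubscriptionRequest>
    </QuantumViewRequest>
  XML
end

xml = quantum_view_request(license: "YOUR_ACCESS_KEY", user: "shipper",
                           password: "secret", subscription: "inbound")
```

You would POST this body to the Quantum View endpoint over HTTPS, once per subscription (inbound and outbound).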
UPS has a Quantum View API. Quantum View is their service that allows tracking, etc., by account, not just by individual tracking number. I assume that will get you what you need. I don't have an account, so I can't see their detailed API documentation; I'm just guessing.