I have 100+ GB of photos going back 25 years. They are arranged in a directory tree by category, with nested sub-directories.
I want to search for all photos taken in a given month, say April, in any of those directories.
I don't think that a Windows search will work, as that would probably use the file creation date, which could be a month or two later, when I finally move the files from the SD card to the PC.
Perhaps searching the EXIF data? Is there a free VCL component which can help me to do that?
If your EXIF data is good, Windows Search (at least in Vista/7; I'm not as sure about Windows Search 4 on XP) should index it and allow you to query by it once you learn the correct syntax. In Windows 7's search, something like "datetaken:2011-04-01..2011-04-30" would probably work.
That said, for a more SO-specific answer to your question, CCR Exif is a Delphi Class library for read/edit/delete of EXIF/IPTC/XMP metadata in pictures. It's made available under the MPL 1.1.
You'll still have to write all the code to walk your directory tree and do your searching, but this can handle all the metadata work.
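The directory walk itself is simple; here is a minimal sketch of the logic in Python (the same structure maps onto a Delphi FindFirst/FindNext loop). `taken_in_month`, `find_photos`, and the `read_date` callback are names made up for this sketch; `read_date` stands in for whatever EXIF reader you plug in (e.g. CCR Exif returning the DateTimeOriginal tag as a string):

```python
import os
from datetime import datetime

EXIF_FMT = "%Y:%m:%d %H:%M:%S"  # the date format used by EXIF DateTimeOriginal

def taken_in_month(exif_date, year, month):
    """True if an EXIF-style date string falls in the given year/month."""
    try:
        dt = datetime.strptime(exif_date, EXIF_FMT)
    except (TypeError, ValueError):
        return False  # missing or malformed EXIF date: not a match
    return dt.year == year and dt.month == month

def find_photos(root, year, month, read_date):
    """Walk the tree and collect JPEGs whose Date Taken matches.
    read_date(path) must return the EXIF date string, or None if absent."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.lower().endswith((".jpg", ".jpeg")):
                path = os.path.join(dirpath, name)
                if taken_in_month(read_date(path), year, month):
                    hits.append(path)
    return hits
```

With 100+ GB of photos, caching the extracted dates (path plus date in a small index file) would save re-reading every file on each search.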
I have this timeline from a newspaper produced by my Native American tribe. I was trying to use AWS Textract to produce some kind of table from this, but AWS Textract does not recognize any tables in it. So I don't think that will work (perhaps more can happen there if I pay, but it doesn't say so).
Ultimately, I am trying to sift through all the archived newspapers and download all the timelines for all of our election cycles (both "general" and "special advisory") to find the number of days between each item in the timeline.
Since this is all in the public domain, I see no reason I can't paste a picture of the table here. I will include the download URL for the document as well.
Download URL: Download
I started off by using Foxit Reader on individual documents to find the timelines on Windows.
Then I used the 'ocrmypdf' tool on Ubuntu to ensure all these documents are searchable (ocrmypdf --skip-text Notice_of_Special_Election_2023.pdf.pdf ./output/Notice_of_Special_Election_2023.pdf).
Then I just so happened to see an ad for AWS Textract this morning in my Google news feed and saw how powerful it is. But when I tried it, it didn't actually find these human-readable timelines.
I'm wondering if any ML tools or even other solutions exist for this type of problem.
I am mainly trying to keep my tech knack up to par. I was sick the last two years, and this is a fun problem to tackle that I think is pretty fringe.
I want to display pictures stored in an MS Access database in a currently running program, where the person running the program will be able to see all of the pictures at the same time (maybe scroll up and down) and choose one of their choice.
I don't know how to write the code.
Please help, I'm still a high school student.
Unless all of the pictures are BMPs, Delphi won't help you much here. Although it has a TDBImage component, it only supports BMPs in your version(s) of Delphi, and it can only show one picture at a time anyway.
To do what you are asking, you will have to load the pictures manually. Do your query, such as with TADOQuery, and then loop through the results, using TDataSet.CreateBlobStream() and TGraphic.LoadFromStream() to load each picture. You would have to look at a picture's raw data header to decide which TGraphic class to use (TBitmap, TJPEGImage, TGifImage, etc), load it from the database blob, and then display it as needed, such as in a TImage, an owner-drawn TListView, etc. Repeat for each picture.
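The "raw data header" check boils down to comparing the blob's first bytes against the known magic numbers of each format. Here is the idea sketched in Python (the function name is made up for illustration; the same byte comparisons port directly to a Delphi case over the blob stream's first bytes):

```python
def image_format(data):
    """Guess an image format from the first bytes of a blob (magic numbers)."""
    if data.startswith(b"\xff\xd8\xff"):
        return "jpeg"   # load with TJPEGImage
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"    # load with TPngImage
    if data.startswith((b"GIF87a", b"GIF89a")):
        return "gif"    # load with TGifImage
    if data.startswith(b"BM"):
        return "bmp"    # load with TBitmap
    return None         # unknown format; skip or report
```

In the Delphi loop you would read the first 8 bytes of the blob stream, pick the matching TGraphic class, seek back to the start, and call LoadFromStream().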
There are tons of examples and tutorials about this, if you look around. This is off-topic for StackOverflow.
A little background on my problem...
I play an online text RPG; I'll keep the name to myself as I don't want to pull people away from this community to another... but that is off topic.
In our game, programmers come and go quite frequently, and they leave behind a legacy of programs that serve the community in the simplest of ways.
My question is: how do I open an XPI file and fix coding issues that are no longer relevant? How do I open the file and read it so that it is not in "Wingdings"?
What programs can I use to debug the issues that I will share below?
A sample of what I see when I open the file in Notepad++, which is what every Firefox add-on site has told me to use:
PK ¡>†¡mŠ install.rdf”[s¢0ÇŸÛ™~ƾíE[u¬]Ðj-Òz©®ö-#€ ...
±Å: chrome/PK
”9¬< chrome/content/PK
º“>¯ /}Ÿ W8 chrome/content/browser.xul½[yoÛ8ÿ»Ìwàxi
... (more binary data)
I know this may mean nothing, and if it does, please respect my question and don't rate it negatively because of my lack of knowledge about this topic...
This add-on is a raid bar that organizes accounts into "bookmarks" so raids are accessible and ready to join, while retaining the link to the raid in the browser to reduce the steps in the process of raiding.
The owner of the add-on is not me... and it is not practical to track down the person who made it, because it was crafted over 10 years ago and the means of getting in touch with them are no longer reliable.
I don't want to take his add-on and claim it as my own... I want to update it and carry on what he started, out of respect for the players and programmers that have come and gone from the Outwar community.
Thank you for your time
Rename it from .xpi to .zip, then you can open it and extract the contents. Do look into the WebExtensions API though, as old XUL add-ons are going away by the end of 2017, and you seem to have an overlay there. You can read more here - https://blog.mozilla.org/addons/2016/11/23/add-ons-in-2017/
Webextensions here - https://developer.mozilla.org/en-US/Add-ons/WebExtensions/
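Since an XPI is just a ZIP archive with a different extension, any ZIP library can unpack it without the rename, too. For example, a small Python sketch (the function name is made up for illustration):

```python
import zipfile

def extract_xpi(xpi_path, dest_dir):
    """Unpack an XPI (really a ZIP) so install.rdf, chrome/content/, etc.
    become plain files you can open and edit in a text editor."""
    with zipfile.ZipFile(xpi_path) as z:
        z.extractall(dest_dir)
        return z.namelist()  # list of files inside the add-on
```

After editing, zipping the directory back up and renaming the archive to .xpi restores an installable add-on.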
I want to record the screen (by capturing 15 screenshots per second). This part I know how to do. But I don't know how to write this to some popular video format. The best option I have found is to write the frames to separate PNG files and use the command-line Mencoder, which can convert them to many output formats. But maybe someone has another idea?
Requirements:
Must be a multi-platform solution (I'm using Free Pascal / Lazarus): Windows, Linux, macOS
Do any libraries exist for that?
It could be a complex command-line application which records the screen for me, but I must have the possibility to edit frames before converting the whole raw data to a popular video format
All materials which could give me some idea are appreciated: APIs, libraries, anything, even in languages other than FPC (I would try to rewrite it or find some equivalent)
I also considered writing frames to a raw video format and then using Mencoder (it can handle that) or another solution, but I can't find any API/docs for raw video data
Regards
Argalatyr mentioned ffmpeg already.
There are two ways that you can get that to work:
By spawning a new process. All you have to do is prepare the right input (which could be a series of JPEG images, for example) and the right command-line parameters. After that you just call ffmpeg.exe and wait for it to finish.
ffmpeg makes use of some DLLs that do the actual work. You can use those DLLs directly from within your Delphi application. It's a bit more work, because it's more low-level, but in the end it will give you finer control over what happens and over what you show the user while you're processing.
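For the spawning route, the only real work is building the command line. A minimal sketch in Python (the same argument list applies whichever language spawns the process), assuming ffmpeg is on the PATH and the captured frames are numbered frame0001.png, frame0002.png, ...; the function name and pattern are illustrative only:

```python
def ffmpeg_cmd(frame_dir, out_path, fps=15):
    """Build an ffmpeg command line that turns a numbered PNG frame
    sequence into a video at the given frame rate."""
    return [
        "ffmpeg", "-y",                       # -y: overwrite output if present
        "-framerate", str(fps),               # input rate: 15 screenshots/sec
        "-i", f"{frame_dir}/frame%04d.png",   # numbered frame pattern
        "-pix_fmt", "yuv420p",                # widely compatible pixel format
        out_path,
    ]

# usage: subprocess.run(ffmpeg_cmd("frames", "capture.mp4"), check=True)
```

Since you want to edit frames before encoding, this fits your workflow: capture PNGs, edit them, then spawn the encoder as the last step.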
Here are some solutions to check out:
FFVCL - commercial. It actually looks quite good, but I was too cheap to spend money on this.
Open Source Delphi headers for FFMpeg. I've tried it, but I never managed to get it to work.
I ended up pulling the DLL wrappers from an open-source karaoke program (UltraStar Deluxe). I had to remove some dependencies, but in the end it worked like a charm. The relevant (Pascal) code can be found here:
http://ultrastardx.svn.sourceforge.net/viewvc/ultrastardx/trunk/src/lib/ffmpeg-0.10/
There was some earlier discussion with a Delphi component here. It's a very simple component that sometimes generates some weird movies. Maybe a start.
I am a university student and it's time to buy textbooks again. This quarter there are over 20 books I need for classes. Normally this wouldn't be such a big deal, as I would just copy and paste the ISBNs into Amazon. The ISBNs, however, are converted into an image on my school's book site. All I want to do is get the ISBNs into a string so I don't have to type each one by hand. I have used GOCR to convert the images into text, but I want to use it with a Ruby script so I can automate the process and do the same for my classmates.
I can navigate to the site. How can I save the image to a file on my computer (running Ubuntu), convert the image with GOCR, and finally save the result to a file so I can then access them again with my Ruby script?
GOCR seems to be a good choice at first, but from what I can tell from my own "research", the quality isn't quite sufficient for daily use. That could become a problem, depending on the image input. If it doesn't work out for you, try the "new" feature of Google Docs, which allows you to upload images for OCR. You can then retrieve the results using some Google API (there are tons out there; I'm using gdata-ruby-util, which requires some hacking, though).
You could also use tesseract-ocr for the OCR part; it's also open source and in active development.
For the retrieval part, I would stick with Hpricot as well; it's super-powerful and flexible.
Sounds like a cool project, and shouldn't be too hard if the ISBN images are stored in individual files.
This all can be run in the background:
download web page (net/http)
save metadata + image file for each book (paperclip)
run GOCR on all the images
All you need is a list of URLs or a crawler (mechanize), and then you probably need to spend a few minutes writing a parser (see joe's post) for the university HTML pages.
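One safeguard worth adding to that pipeline: OCR will occasionally misread a digit, and ISBNs carry a check digit, so each recognized string can be validated before it goes into an Amazon search. A sketch of the check in Python (the function name is illustrative; the same arithmetic is a few lines in Ruby):

```python
def valid_isbn(isbn):
    """Validate the check digit of an ISBN-10 or ISBN-13 string,
    catching most single-character OCR misreads."""
    s = isbn.replace("-", "").replace(" ", "").upper()
    if len(s) == 10:
        # ISBN-10: weighted sum (weights 10..1) must be divisible by 11;
        # the final character may be 'X', which stands for 10.
        if not (s[:9].isdigit() and (s[9].isdigit() or s[9] == "X")):
            return False
        digits = [int(c) for c in s[:9]] + [10 if s[9] == "X" else int(s[9])]
        return sum(w * d for w, d in zip(range(10, 0, -1), digits)) % 11 == 0
    if len(s) == 13 and s.isdigit():
        # ISBN-13: alternating weights 1 and 3, sum divisible by 10.
        return sum((1 if i % 2 == 0 else 3) * int(c)
                   for i, c in enumerate(s)) % 10 == 0
    return False
```

Anything that fails the check can be flagged for a quick manual look instead of silently ordering the wrong book.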