CNN-based copy-move forgery localization

I am trying to build a forgery detection model using deep learning that detects a type of forgery called copy-move forgery. So far I've built a binary CNN model that detects whether an image is forged or not, but now I am stuck: I am trying to find out the region that is forged. Can someone please help? I am very new to this.

Try reading the file as plain text and looking for 'file identifiers' like Title, Subject, Tags, Comments, Program name, Copyright, etc.
Also check out https://en.wikipedia.org/wiki/List_of_file_signatures
This is a list of file signatures, data used to identify or verify the content of a file. Such signatures are also known as magic numbers or magic bytes.
A Google search for 'file identifiers wiki' suggests alternate keyword searches such as 'file format', 'list of file signatures', 'file format examples', 'how to find file signature', 'file signature database', and 'file magic number list'.
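To make the magic-bytes idea concrete, here is a minimal Java sketch that reads the first few bytes of a file and compares them against a handful of well-known signatures from that list (the file name is just a placeholder):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

public class MagicBytes {
    public static void main(String[] args) throws IOException {
        // "sample.jpg" is a placeholder; point this at the file you want to identify.
        byte[] header = new byte[8];
        try (InputStream in = Files.newInputStream(Paths.get("sample.jpg"))) {
            in.read(header);
        }
        if ((header[0] & 0xFF) == 0x89 && header[1] == 'P' && header[2] == 'N' && header[3] == 'G') {
            System.out.println("PNG (signature 89 50 4E 47)");
        } else if ((header[0] & 0xFF) == 0xFF && (header[1] & 0xFF) == 0xD8 && (header[2] & 0xFF) == 0xFF) {
            System.out.println("JPEG (signature FF D8 FF)");
        } else if (header[0] == '%' && header[1] == 'P' && header[2] == 'D' && header[3] == 'F') {
            System.out.println("PDF (signature 25 50 44 46, i.e. \"%PDF\")");
        } else {
            System.out.println("Unknown signature; check the Wikipedia list above");
        }
    }
}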

Related

List of System.IO.Packaging ContentType strings?

I'm updating some older code that uses System.IO.Packaging to programmatically create Excel files. It ultimately calls CreatePackage on various bits of internal state to build out the documents. CreatePackage takes a ContentType parameter, and the existing code contains a list of constants like:
Const cxl07WorksheetContentType = "application/vnd...."
I'm trying to add support for PivotTables, which require a PivotCache. I cannot find the appropriate ContentType. I thought I might be able to discern it from within the file, looking in _Rels for instance, but these are always in URI form and bear no obvious relationship to these constants.
So...
Is this even required? I passed Nothing and "" but neither worked.
Does anyone know where these might be defined? I looked in the MIME database and on the MS website, but nothing came up on either for "pivot".
Where are these even used? They do not appear in the resulting Package as far as I can see.
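For what it's worth, here is a rough sketch (in Java, just for illustration) of the kind of inspection I had in mind: an .xlsx is an OPC zip package, and its [Content_Types].xml part declares the content type of every part in the package, including the pivot cache parts, so dumping it from a workbook that already contains a PivotTable should show the strings in question (the file name is just an example):

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class DumpContentTypes {
    public static void main(String[] args) throws IOException {
        // "pivot-sample.xlsx" is a placeholder; use a workbook saved by Excel that contains a PivotTable.
        try (ZipFile zip = new ZipFile("pivot-sample.xlsx")) {
            ZipEntry entry = zip.getEntry("[Content_Types].xml");
            try (InputStream in = zip.getInputStream(entry)) {
                // Every part in the package is listed here together with the
                // ContentType string declared for it.
                System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
            }
        }
    }
}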

MWeb failed publishing to Evernote with "Error Domain=com.evernote.sdk Code=11"

I use MWeb to write markdown documents. Recently I ran into a problem when publishing a markdown document to Evernote; this is the error:
Error Domain=com.evernote.sdk Code=11
"Content of submitted note was malformed"
UserInfo={NSLocalizedDescription=Content of submitted note was malformed, parameter=Element type "row" must be declared.}
Root cause:
I used "Raw" in the doc. I think the word "Raw" may be a keyword in the Evernote API, so if I add "Raw" to the doc, a raw definition must be declared.
My solution
Surround the word "Raw" with backticks (``). For example:
change "Java API, users need to use Dataset to represent a DataFrame." to "Java API, users need to use `Dataset` to represent a DataFrame."

Using AdWords, is it possible to create a URL specific to each keyword for thousands of keywords?

Every URL ends with the same pattern, "Part-123456789", where "Part" is a constant and "123456789" is a part number. I want to run an AdWords campaign targeting every part number and directing to the unique URL for that part. Is there a simple way to do this?
Note: AdWords Editor gives me ambiguous rowtype errors whenever I try to upload the keywords and URLs together in the same line of a .csv file.
The available ValueTrack parameters for AdWords tracking templates offer both the keyword ID and the matched keyword:
https://support.google.com/adwords/answer/6305348?co=ADWORDS.IsAWNCustomer%3Dfalse&hl=en
You could use these to determine the part number in the URL, but it may require some mapping in the content management system.
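If the bulk CSV route is still preferred, another workaround is to generate each keyword and its final URL together, one row per part number, and import that file. A minimal Java sketch (the column headers, campaign and ad group names, and URL pattern are illustrative placeholders; match them to whatever your AdWords Editor import expects):

import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;

public class PartKeywordCsv {
    public static void main(String[] args) throws IOException {
        // Placeholder part numbers; in practice these would come from the parts database.
        List<String> partNumbers = List.of("123456789", "223456789", "323456789");
        try (PrintWriter out = new PrintWriter("part_keywords.csv")) {
            // Illustrative header row; adjust the columns to the Editor's import format.
            out.println("Campaign,Ad group,Keyword,Final URL");
            for (String part : partNumbers) {
                String keyword = "Part-" + part;
                String url = "https://www.example.com/catalog/Part-" + part;  // example URL pattern
                out.println(String.join(",", "Parts Campaign", "Parts Ad Group", keyword, url));
            }
        }
    }
}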

How to confirm partial line of text in Ruby

I'm writing a test to confirm that a csv file has hit my downloads folder. As the title of the csv file is set to include the date and time of the download, it's impractical to keep changing the name of the file in my feature. Example filename: fleet_123456_20140707_103015.csv
Can I include in my ruby code, something that will just confirm that the "fleet_123456" is present as it's the only generic part of the name that will appear on every download?
At the moment I have:
Then /^I should get a download with the filename "(.*?)"$/ do |file_name|
page.response_headers['Content-Disposition'].should include("filename=\"#{file_name}\"")
end
I'm thinking that the "#{file_name}\"") part needs tweaking, just not sure where.
Any help would be great, thank you
You asked:
Can I include in my ruby code, something that will just confirm that the "fleet_123456" is present as it's the only generic part of the name that will appear on every download?
Yes, you can. One way would be to replace the include matcher with a regex-based one. For example, instead of
page.response_headers['Content-Disposition'].should include("filename=\"#{file_name}\"")
you could write
page.response_headers['Content-Disposition'].should match(/filename="fleet_[\d_]+\.csv"/)
which would match "fleet_" followed by any combination of digits and underscores, followed by ".csv". Another possibility, if you want to be a little more specific, is
page.response_headers['Content-Disposition'].should match(/filename="fleet_123456_\d+_\d+\.csv"/)
which matches the specific arrangement of groups of numbers separated by underscores. You can read more about regular expressions in Ruby in the Ruby documentation and experiment with them in an online regex tester.

When using Jena to read RDF (N-Triples), it throws "com.hp.hpl.jena.shared.InvalidPropertyURIException"

I downloaded an N-Triples file from DBpedia, but when I tried to read it into a Jena model, an exception was thrown. Below is part of the file:
<http://dbpedia.org/resource/Jacky_Cheung> <http://dbpedia.org/resource/Template:%E8%97%9D%E4%BA%BA> "\u9AD4\u91CD"#zh .
<http://dbpedia.org/resource/Jacky_Cheung> <http://dbpedia.org/resource/Template:%E8%97%9D%E4%BA%BA> "\u8EAB\u9AD8"#zh .
<http://dbpedia.org/resource/Jacky_Cheung> <http://dbpedia.org/resource/Template:%E8%97%9D%E4%BA%BA> "\u8840\u578B"#zh .
<http://dbpedia.org/resource/Jacky_Cheung> <http://dbpedia.org/resource/Template:%E8%97%9D%E4%BA%BA> "\u8A9E\u8A00"#zh .
The exception thrown is:
Exception in thread "main" com.hp.hpl.jena.shared.InvalidPropertyURIException: http://dbpedia.org/resource/Template:%E8%97%9D%E4%BA%BA
at com.hp.hpl.jena.xmloutput.impl.BaseXMLWriter.splitTag(BaseXMLWriter.java:393)
at com.hp.hpl.jena.xmloutput.impl.BaseXMLWriter.startElementTag(BaseXMLWriter.java:368)
at com.hp.hpl.jena.xmloutput.impl.Unparser$3.wTypeStart(Unparser.java:671)
at com.hp.hpl.jena.xmloutput.impl.Unparser.wPropertyEltValueString(Unparser.java:488)
at com.hp.hpl.jena.xmloutput.impl.Unparser.wPropertyEltValue(Unparser.java:473)
at com.hp.hpl.jena.xmloutput.impl.Unparser.wPropertyElt(Unparser.java:339)
at com.hp.hpl.jena.xmloutput.impl.Unparser.wPropertyEltStar(Unparser.java:811)
at com.hp.hpl.jena.xmloutput.impl.Unparser.wTypedNodeOrDescriptionLong(Unparser.java:797)
at com.hp.hpl.jena.xmloutput.impl.Unparser.wTypedNodeOrDescription(Unparser.java:727)
at com.hp.hpl.jena.xmloutput.impl.Unparser.wDescription(Unparser.java:686)
at com.hp.hpl.jena.xmloutput.impl.Unparser.wObj(Unparser.java:642)
at com.hp.hpl.jena.xmloutput.impl.Unparser.wObjStar(Unparser.java:317)
at com.hp.hpl.jena.xmloutput.impl.Unparser.wRDF(Unparser.java:298)
at com.hp.hpl.jena.xmloutput.impl.Unparser.write(Unparser.java:200)
at com.hp.hpl.jena.xmloutput.impl.Abbreviated.writeBody(Abbreviated.java:143)
at com.hp.hpl.jena.xmloutput.impl.BaseXMLWriter.writeXMLBody(BaseXMLWriter.java:500)
at com.hp.hpl.jena.xmloutput.impl.BaseXMLWriter.write(BaseXMLWriter.java:472)
at com.hp.hpl.jena.xmloutput.impl.Abbreviated.write(Abbreviated.java:128)
at com.hp.hpl.jena.xmloutput.impl.BaseXMLWriter.write(BaseXMLWriter.java:458)
at com.hp.hpl.jena.rdf.model.impl.ModelCom.write(ModelCom.java:277)
at jena.ReadRDF.main(ReadRDF.java:45)
Java Result: 1
The problem is caused by "%E8%97%9D%E4%BA%BA": when URIref.decode() is used to decode a URI containing this string, "%E8%97%9D%E4%BA%BA" decodes to two Chinese characters.
But when I use Sesame to read this N-Triples file, it works without any problem.
My questions are: is there any way to solve this problem in Jena, and why does DBpedia choose N-Triples as its default RDF syntax? It works badly with non-ASCII languages.
Also, if I want to publish my RDF data as Linked Data, and the URIs of the resources contain some Chinese and Japanese, should I decode the URIs first?
Well, your question isn't completely clear because you asked about "reading in a Jena model" but the stacktrace you quoted actually starts with a call to the writer.
Jena, in general, tries very hard to conform to the relevant RDF recommendations from the W3C and IETF. In particular, it tries not to generate any URIs which do not conform to the rules for valid URIs. This is compounded in the case of writing XML, because most RDF identifiers are not legal XML element IDs, meaning that you have to split the URI somewhere and use XML namespaces to form the full identifier. Not all RDF toolkits are as particular as Jena is about conforming to some of the rules in the standards.
Things you can try:
Do you need to call Model.write() as part of your loading process? You should be able to load and process a model without the check for legal URIs being invoked.
Try writing the output using Turtle format rather than XML. Turtle doesn't have the same restrictions as XML, and it's a heck of a lot easier for humans to read as well (see the sketch at the end of this answer).
If there are particular ill-formed URIs in the data you are loading, look to see if there is a newer version of the data. Illegal URIs in DBpedia have been an issue in the past. If the illegal URIs are still there in the latest version, notify the DBpedia team about them.
Try pre-processing your data to remove triples containing illegal URIs before they enter your processing chain.
As for URIs containing Chinese and Japanese characters, Jena conforms to the IRI spec, so as long as your URIs conform to that you should be OK.
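To illustrate the Turtle suggestion above, here is a minimal sketch using the Jena 2 API that appears in the stack trace (the input file name is just an example):

import java.io.FileInputStream;
import java.io.InputStream;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class NTriplesToTurtle {
    public static void main(String[] args) throws Exception {
        Model model = ModelFactory.createDefaultModel();
        // "data.nt" is a placeholder for the N-Triples file downloaded from DBpedia.
        try (InputStream in = new FileInputStream("data.nt")) {
            // Reading N-Triples does not involve any XML element-name rules.
            model.read(in, null, "N-TRIPLE");
        }
        // Writing Turtle avoids the RDF/XML element-splitting check that raised
        // InvalidPropertyURIException when the model was written as XML.
        model.write(System.out, "TURTLE");
    }
}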
