Print NSLocalizedString key instead of value - iOS

I need to print the keys of Localizable.strings in my App, instead of their values (for a debugging purpose). Is there a fast way to override the NSLocalizedString() method or redefine the macro, something like:
#define NSLocalizedString(key, comment) NSLocalizedString(key, key)

One option would be to export your app for localizations via Product Menu > Export Localizations within Xcode, then save the xcloc file to your Desktop.
You could then use a Python script to parse the inner xliff (XML), find the file elements whose original attributes contain Localizable.strings, and print the text of each trans-unit's source element within their bodies. Here's an example script, localizationKeys.py, which should do it:
import os.path
from xml.etree import ElementTree as et
import argparse as ap
import re

if __name__ == '__main__':
    parser = ap.ArgumentParser()
    # filename argument ex: de.xliff
    parser.add_argument('filename', help="filename of the xliff to find keys ex: de.xliff")
    # verbose flag
    parser.add_argument('-v', '--verbose', action='store_true', default=False, help='Show all the output')
    args = parser.parse_args()
    if os.path.isfile(args.filename):
        tree = et.parse(args.filename)
        root = tree.getroot()
        # extract the default XML namespace from the root tag,
        # e.g. '{urn:oasis:names:tc:xliff:document:1.2}'
        match = re.match(r'\{.*\}', root.tag)
        ns = match.group(0) if match else ''
        files = root.findall(ns + 'file')
        for file in files:
            # only process file elements which reference Localizable.strings
            originalAttr = file.attrib.get('original')
            if originalAttr is not None and 'Localizable.strings' in originalAttr:
                if args.verbose:
                    print("----- Localizations for file: " + originalAttr + " -----")
                # grab the body element
                bodyElement = file.find(ns + 'body')
                # get all the trans-units
                transUnits = bodyElement.findall(ns + 'trans-unit')
                for transUnit in transUnits:
                    # print all the source values (keys)
                    print(transUnit.find(ns + 'source').text)
    else:
        print("No file found with the specified name: " + args.filename)
Which you could then use as follows:
python3 localizationKeys.py en.xcloc/Localized\ Contents/en.xliff
Or, if you'd prefer to print to a file instead:
python3 localizationKeys.py en.xcloc/Localized\ Contents/en.xliff > output.txt
This could almost definitely be more concise using xpath instead, but this is just what I came up with quickly.
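For reference, here's roughly what that could look like with ElementTree's limited XPath support (an untested sketch making the same assumptions about the xliff layout; the name print_localization_keys is my own):

from xml.etree import ElementTree as et

def print_localization_keys(filename):
    # print the key of every Localizable.strings entry in an xliff file
    root = et.parse(filename).getroot()
    # xliff files use a default namespace, e.g. urn:oasis:names:tc:xliff:document:1.2
    ns = {'x': root.tag[1:root.tag.index('}')]}
    for file_el in root.findall('x:file', ns):
        if 'Localizable.strings' in file_el.get('original', ''):
            for source in file_el.findall('x:body/x:trans-unit/x:source', ns):
                print(source.text)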

OK, this is how I obtained what I needed:
// Overriding NSLocalizedString to print keys instead of values
#ifdef NSLocalizedString
#undef NSLocalizedString
#endif
#define NSLocalizedString(key, comment) key
This way the app uses the keys instead of the values.
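If you only want this behaviour in debug builds, you could additionally guard the override (a sketch, assuming the standard DEBUG flag that Xcode defines for debug configurations):

#ifdef DEBUG
    #ifdef NSLocalizedString
    #undef NSLocalizedString
    #endif
    // return the key itself so you can see which keys the UI actually uses
    #define NSLocalizedString(key, comment) (key)
#endif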

Monitor Changing file in python

How do I create a program that monitors a file (for example, a text file)? When something new is written to the file, the program should report that something was added, and when part of the text is deleted from it, it should report that something was deleted.
It should also print to the console exactly which words were added or deleted.
Explanation
I use watchdog to follow the file.
On instantiation of the handler, I read the file's size.
When the file is modified, watchdog calls the on_modified function.
When this method is called, I compare the file's current size to its previous size to determine if the change was additive or subtractive.
You have a few other options when it comes to tracking the file. For example, you could also compare:
- the number of lines
- the number of words
- the number of characters
- the exact contents of the file (see the word-diff sketch after the code below)
import os
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class EventHandler(FileSystemEventHandler):
    def __init__(self, file_path_to_watch):
        self.file_path_to_watch = file_path_to_watch
        self._file_size = self._read_file_size()

    def _read_file_size(self):
        return os.path.getsize(self.file_path_to_watch)

    def _print_change(self, new_file_size):
        if new_file_size > self._file_size:
            print('File modified with additions')
        elif new_file_size < self._file_size:
            print('File modified with deletions')

    def on_modified(self, event):
        if event.src_path != self.file_path_to_watch:
            return
        new_file_size = self._read_file_size()
        self._print_change(new_file_size)
        self._file_size = new_file_size

if __name__ == "__main__":
    file_to_watch = '/path/to/watch.txt'
    event_handler = EventHandler(file_to_watch)
    observer = Observer()
    observer.schedule(event_handler, path=file_to_watch, recursive=False)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
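If you also need the exact words that were added or removed, as the question asks, one option is to keep the previous contents in memory and diff snapshots with difflib; a minimal sketch (the word_diff helper is my own, and it treats the file as whitespace-separated words):

import difflib

def word_diff(old_text, new_text):
    # compare two snapshots of the file and collect the added/removed words
    old_words = old_text.split()
    new_words = new_text.split()
    added, removed = [], []
    matcher = difflib.SequenceMatcher(a=old_words, b=new_words)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op in ('insert', 'replace'):
            added.extend(new_words[j1:j2])
        if op in ('delete', 'replace'):
            removed.extend(old_words[i1:i2])
    return added, removed

You would then store the file's text instead of (or alongside) its size in the handler and call word_diff from on_modified.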

Saving SEC 10-K annual report text to files (trouble with decoding)

I am trying to bulk-download the text visible to the "end-user" from 10-K SEC EDGAR reports (I don't care about tables) and save it to a text file. I found the code below on YouTube, however I am facing two challenges:
1. I am not sure if I am capturing all the text, and when I print the text from the URL below, I get very strange output (special characters, e.g., at the very end of the printout).
2. I can't seem to save the text to .txt files; I'm not sure if this is due to encoding (I am entirely new to programming).
import re
import requests
import unicodedata
from bs4 import BeautifulSoup

def restore_windows_1252_characters(restore_string):
    def to_windows_1252(match):
        try:
            return bytes([ord(match.group(0))]).decode('windows-1252')
        except UnicodeDecodeError:
            # No character at the corresponding code point: remove it.
            return ''
    return re.sub(r'[\u0080-\u0099]', to_windows_1252, restore_string)

# define the url to the specific html_text file
new_html_text = r"https://www.sec.gov/Archives/edgar/data/796343/0000796343-14-000004.txt"
# grab the response
response = requests.get(new_html_text)
page_soup = BeautifulSoup(response.content, 'html5lib')
page_text = page_soup.html.body.get_text(' ', strip=True)
# normalize the text and restore missing windows-1252 characters
page_text_norm = restore_windows_1252_characters(unicodedata.normalize('NFKD', page_text))
# print: this works, however it gives me weird special characters (e.g., at the very end)
print(page_text_norm)
# save to file: this only gives me an empty text file
with open('testfile.txt', 'w') as file:
    file.write(page_text_norm)
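As an aside, an empty output file like this is often the result of a UnicodeEncodeError raised mid-write under the platform's default codec; if that's the cause here, passing an explicit encoding should help (a guess, not verified against this exact document):

# assumption: the write failed under the platform's default encoding
with open('testfile.txt', 'w', encoding='utf-8') as file:
    file.write(page_text_norm)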
Try this. If you include an example of the data you expect, it will be easier for people to understand your needs.
from simplified_scrapy import SimplifiedDoc, req, utils

url = 'https://www.sec.gov/Archives/edgar/data/796343/0000796343-14-000004.txt'
html = req.get(url)
doc = SimplifiedDoc(html)
# text = doc.body.text
text = doc.body.unescape()  # convert HTML entities
utils.saveFile("testfile.txt", text)

How to check if a file is a text file?

Does Perl6 have something like the Perl5 -T file test to tell if a file is a text file?
There's nothing built in, however there is a module Data::TextOrBinary that does that.
use Data::TextOrBinary;
say is-text('/bin/bash'.IO); # False
say is-text('/usr/share/dict/words'.IO); # True
That heuristic has not been translated to Perl 6. You can simply try to read the file as UTF-8 (or ASCII) to the same effect; slurp throws if the contents don't decode, so the block below only runs for valid UTF-8:
given slurp("read-utf8.p6", enc => 'utf8') -> $f {
    say "UTF8";
}
(substitute read-utf8.p6 with the name of the file you want to check)
We can make use of the File::Type module with the following code.
use strict;
use warnings;
use File::Type;

my $file = '/path/to/file.ext';
my $ft = File::Type->new();
my $file_type = $ft->mime_type($file);

if ( $file_type eq 'application/octet-stream' ) {
    # possibly a text file
}
elsif ( $file_type eq 'application/zip' ) {
    # file is a zip archive
}
Source: https://metacpan.org/pod/File::Type

How to use LaTeX section numbers in Pandoc cross-reference

The Pandoc documentation says that cross references can be made to section headers in a number of ways. For example, you can create your own ID and reference that ID. For example:
# This is my header {#header}
This will create an ID with the value '#header' that can be referenced in the text, as such:
[Link to header](#header)
Which will display the text 'Link to header' with a link to the header.
I couldn't find anywhere how to make the text of the link be the section number when compiled as a LaTeX document.
For example, if my header is compiled to '1.2.3 Section Header', I want my cross-reference to text to display as '1.2.3'.
This can be achieved by defining the ID as done previously. eg:
# This is my header {#header}
Then in the text, the cross reference can be written as:
\ref{header}
When this compiles to LaTeX, the cross-reference text will be the section number of the referenced heading.
You can use the pandoc-secnos filter, which is part of the pandoc-xnos filter suite.
The header
# This is my header {#sec:header}
is referenced using @sec:header. Alternatively, you can reference
# This is my header
using @sec:this-is-my-header.
Markdown documents coded in this way can be processed by adding --filter pandoc-secnos to the pandoc call. The --number-sections option should be used as well. The output uses LaTeX's native commands (i.e., \label and \ref or \cref).
The benefit to this approach is that output in other formats (html, epub, docx, ...) is also possible.
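For example, a typical invocation might look like this (the file names are placeholders):
pandoc --filter pandoc-secnos --number-sections input.md -o output.pdf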
A general solution which works with all supported output formats can be built by leveraging pandoc Lua filters: the function pandoc.utils.hierarchicalize can be used to get the document hierarchy. We can use this to associate section IDs with section numbers, which can later be used to add these numbers to links with no link description (e.g., [](#myheader)).
local hierarchicalize = (require 'pandoc.utils').hierarchicalize

local section_numbers = {}

function populate_section_numbers (doc)
  function populate (elements)
    for _, el in pairs(elements) do
      if el.t == 'Sec' then
        section_numbers['#' .. el.attr.identifier] = table.concat(el.numbering, '.')
        populate(el.contents)
      end
    end
  end

  populate(hierarchicalize(doc.blocks))
end

function resolve_section_ref (link)
  if #link.content > 0 or link.target:sub(1, 1) ~= '#' then
    return nil
  end
  local section_number = pandoc.Str(section_numbers[link.target])
  return pandoc.Link({section_number}, link.target, link.title, link.attr)
end

return {
  {Pandoc = populate_section_numbers},
  {Link = resolve_section_ref}
}
The above should be saved to a file and then passed to pandoc via the --lua-filter option.
Example
Using the example from the question
# This is my header {#header}
## Some subsection
See section [](#header), especially [](#some-subsection)
Using the above filter, the last line will render as "See section 1, especially 1.1".
Don't forget to call pandoc with option --number-sections, or headers will not be numbered.
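For example, if the filter was saved as section-refs.lua (the name is arbitrary):
pandoc --lua-filter section-refs.lua --number-sections input.md -o output.pdf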
Since pandoc version 2.8, the function pandoc.utils.hierarchicalize has been replaced with make_sections. Here is an updated version of @tarleb's answer which works with newer pandoc versions.
local make_sections = (require 'pandoc.utils').make_sections

local section_numbers = {}

function populate_section_numbers (doc)
  function populate (elements)
    for _, el in pairs(elements) do
      if el.t == 'Div' and el.attributes.number then
        section_numbers['#' .. el.attr.identifier] = el.attributes.number
        populate(el.content)
      end
    end
  end

  populate(make_sections(true, nil, doc.blocks))
end

function resolve_section_ref (link)
  if #link.content > 0 or link.target:sub(1, 1) ~= '#' then
    return nil
  end
  local section_number = pandoc.Str(section_numbers[link.target])
  return pandoc.Link({section_number}, link.target, link.title, link.attr)
end

return {
  {Pandoc = populate_section_numbers},
  {Link = resolve_section_ref}
}

Merge tab delimited text files into a single file

What is the easiest method for joining/merging all files in a folder (tab delimited) into a single file? They all share a unique column (primary key). Actually, I only need to combine a certain column and link on this primary key, so the output file would contain a new column for each file. Ex:
KEY#  Ratio1  Ratio2  Ratio3
1     5.1     4.4     3.3
2     1.2     2.3     3.2
etc....
There are many other columns in each file that I don't need to combine in the output file, I just need these "ratio" columns linked by the unique key column.
I am running OS X Snow Leopard but have access to a few Linux machines.
Use the join(1) utility.
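For example, assuming two tab-delimited files keyed on their first column (the file names are placeholders; join(1) requires its inputs to be sorted on the join field):
sort -k1,1 file1.tsv > file1.sorted
sort -k1,1 file2.tsv > file2.sorted
join -t "$(printf '\t')" file1.sorted file2.sorted > combined.tsv
Extracting just the ratio columns can then be done with cut(1), or by passing an -o field list to join.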
I actually spent some time learning Perl and solved the issue on my own. I figured I'd share the source code in case anyone has a similar problem to solve.
#!/usr/bin/perl -w
# File: combine_all.pl
# Description: This program will combine the rates from all "gff" files in the current directory.
use Cwd;    # provides current working directory related functions

my(@handles);

print "Process starting... Please wait, this may take a few minutes...\n";
unlink "_combined.out";    # this will remove the file if it exists

for (<./*.gff>) {
    @file = split("_", $_);
    push(@files, substr($file[0], 2));
    open($handles[@handles], $_);
}

open(OUTFILE, ">_combined.out");
foreach (@files) {
    print OUTFILE "$_" . "\t";
}
#print OUTFILE "\n";

my $continue = 1;
while ($continue) {
    $continue = 0;
    for my $op (@handles) {
        if ($_ = readline($op)) {
            my @col = split;
            if ($col[8]) {
                $gibberish = 0;
                $col[3] += 0;
                $key = $col[3];
                $col[5] += 0;    # otherwise you print nothing
                $col[5] = sprintf("%.2f", $col[5]);
                print OUTFILE "$col[5]\t";
                $continue = 1;
            } else {
                $key = "\t";
                $continue = 1;
                $gibberish = 1;
            }
        } else {
            # do nothing
        }
    }
    if ($continue != 0 && $gibberish != 1) {
        print OUTFILE "$key\n";
    } else {
        print OUTFILE "\n";
    }
}

undef @handles;    # closes all files
close(OUTFILE);
print "Process Complete! The output file is located in the current directory with the filename: _combined.out\n";
