How does bb translation work?
When I used bb b -l 1 it worked fine, but I still need to rewrite all the strings for the other languages.
bb t -a adds a new language, e.g. "cs-CZ", and creates a JSON file named after the language code.
The question is: how can I export/import all the strings into a JSON file for translation?
bb t -e - is fileName a JSON or JS file in dist? Export doesn't work in my case; no strings are exported.
bb t -e filename.txt -l cs-CZ is the correct way to export untranslated strings to a text file with a very simple structure. After it comes back from the translation agency, you can just import it with bb t -i filename.txt -l cs-CZ.
Before exporting, always update the translation files with bb b -l 1 -u 1, as you already found out. The actual JSON files in the translations directory contain an array of arrays of 3 or 4 items: [original, hint, 0/1 - with/without parameters, translation]. So you can translate them directly if you create some editor for these...
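For illustration, an entry in one of those JSON files might look like this (hypothetical strings, based only on the structure described above; presumably an untranslated entry has 3 items and gains the 4th, the translation, once it is filled in):
[
    ["Save", "button label", 0, "Uložit"],
    ["{count} new messages", "inbox badge", 1]
]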
Also, please update bobril-build to 0.56.1; I just fixed a wrong error message in export even though everything was OK. Maybe that confused you into asking; sorry for that.
Related
I'm looking for a script (or if there isn't one, I guess I'll have to write my own).
I wanted to ask if anyone here knows a script that can take a txt file with n links (let's say 200). I need to extract only the links that have particular characters in them; let's say I only need links that contain "/r/learnprogramming". I need the script to get those links and write them to another txt file.
Edit: Here is what helped me: grep -i "/r/learnprogramming" 1.txt >2.txt
You can use AJAX to read a .txt file using jQuery:
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script>
jQuery(function($) {
    console.log("start");
    $.get("https://ayulayol.imfast.io/ajaxads/ajaxads.txt", function(wholeTextFile) {
        var lines = wholeTextFile.split(/\n/),
            randomIndex = Math.floor(Math.random() * lines.length),
            randomLine = lines[randomIndex];
        console.log(randomIndex, randomLine);
        $("#ajax").html(randomLine.replace(/#/g, "<br>"));
    });
});
</script>
<div id="ajax"></div>
If you are using Linux or macOS, you could use cat and grep to output the links.
cat in.txt | grep "/r/learnprogramming" > out.txt
Solution provided by OP:
grep -i "/r/learnprogramming" 1.txt >2.txt
Since you did not provide the exact format of the document, I assume the links are separated by newline characters. In this case, the code is pretty straightforward in Python/awk, since you can iterate over file.readlines() and print only the lines that match your pattern (either by checking pattern in line, or with a regex if the pattern is more complex). To store the links in a new file, simply redirect stdout to a new file like this:
python script.py > links.txt
The solution above works even if the links are separated by an arbitrary symbol s: first read the file into a single string, then split it on s. I hope this helps.
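A minimal sketch of that Python approach (the file names and the pattern follow the thread; adjust as needed):
# script.py - keep only the lines that contain the wanted substring
pattern = "/r/learnprogramming"

with open("1.txt") as f:
    for line in f:
        line = line.strip()
        if pattern in line:  # use re.search() instead for more complex patterns
            print(line)
Run it with python script.py > links.txt as shown above.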
In my Docusaurus project, my internal links work in my local environment, but when I push to GitLab they no longer work. Instead of replacing the original doc title in the URL with the new one, it adds it at the end ('https://username.io/test-site/docs/overview/add-a-category.html'). I looked over my config file, but I do not understand why this is happening.
I tried updating the id in the front matter for the page, and making sure it matches the id in the sidebars.json file. I have also added customDocsPath and set it to 'docs/' in the config file, though that is supposed to be the default.
---
id: "process-designer-overview"
title: "Process Designer Overview"
sidebar_label: "Overview"
---
# Process Designer
The Process Designer is a collaborative business process modeling and
design workspace for the business processes, scenarios, roles and tasks
that make up governed data processes.
Use the Process Designer to:
- [Add a Category](add-a-category.html)
- [Add a Process or Scenario](Add%20a%20Process%20or%20Scenario.html)
- [Edit a Process or Scenario](Edit%20a%20Process%20or%20Scenario.html)
I updated the 'Add a Category' link in parentheses to an .md extension, but that broke the link on my local environment, and it still didn't work on GitLab. I would expect that when a user clicks the link, it would replace the doc title in the URL with the new doc title ('https://username.gitlab.io/docs/add-a-category.html'), but instead it just tacks it on to the end ('https://username.gitlab.io/docs/process-designer-overview/add-a-category.html'), so the link is broken, as that is not where the doc is located.
There were several issues with my links. First, I converted these files from html to markdown using Pandoc and did not add front matter - relying instead on the file name to connect my files to the sidebars. This was fine, except almost all of the file names had spaces in them, which you can see in my code example above. This was causing real issues, so I found a Bash script to replace all of the spaces in my file names with underscores, but now all of my links were broken. I updated all of the links in my files with a search and replace in my code editor, replacing "%20" with "_". I also needed to replace the ".html" extension with ".md" or my project would no longer work locally. Again, I did this with a search and replace in my code editor.
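For anyone who wants to do those replacements outside an editor, a rough equivalent (a sketch assuming GNU sed and that the docs live under docs/; on macOS use sed -i ''):
# Rewrite "%20" to "_" and the ".html" link extension to ".md" in all markdown files
find docs/ -name "*.md" -type f -exec sed -i 's/%20/_/g; s/\.html)/.md)/g' {} +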
Finally, I ended up adding the front matter because otherwise my sidebar titles were all covered in underscores. Since I was working with 90 files, I didn't want to do this manually. I looked for a while and found a great gist by thebearJew, and adjusted it so that it takes the file name and adds it as the id, and takes the first heading and adds it as the title and sidebar_label, since, as it happens, that works for our project. Here is the Bash script I found online to convert the spaces in my file names to underscores, if interested:
find $1 -name "* *.md" -type f -print0 | \
while read -d $'\0' f; do mv -v "$f" "${f// /_}"; done
Here is the script I ended up with if anyone else has a similar setup and doesn't want to update a huge amount of files with front matter:
#!/bin/bash
# Given a file path as an argument
# 1. get the file name
# 2. prepend template string to the top of the source file
# 3. resave original source file
# command: find . -name "*.md" -print0 | xargs -0 -I file ./prepend.sh file
filepath="$1"
file_name=$(basename "$filepath")
# Getting the file name (title)
md='.md'
title=${file_name%$md}
heading=$(grep -r "^# \b" ~/Documents/docs/$title.md)
heading1=${heading#*\#}
# Prepend front-matter to files
TEMPLATE="---
id: $title
title: $heading1
sidebar_label: $heading1
---
"
echo "$TEMPLATE" | cat - "$filepath" > temp && mv temp "$filepath"
I wrote this grep command:
grep -- "^[0-9a-zA-Z\.-]\+$" file.txt
to get all lines containing only numbers, letters, dots, and dashes (legal domain names).
This is the result of diff on both files
1,3c1,3
< test.xcom
< hi-th6ere.co.k
< 54
---
> test.xcom
> hi-th6ere.co.k
> 54
I wrote a file with some domains to test and it works great!
But when I download a file (with the same content!) from the web and then run this command, grep doesn't return anything.
I've tried to set full permissions on this file, but it still doesn't work.
Any ideas?
Thanks,
What makes you think the file content is the same as the one you've tested?
You can run 'diff filename1 filename2' to see if there are any differences between the two files.
It could be that the file you're downloading is in a Unicode format, so in a web browser it looks to have the same content as the file you've tested, but the binary content of the file itself is different.
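One quick way to check that (a sketch using GNU coreutils; downloaded.txt stands for the file you fetched):
file downloaded.txt        # reports the encoding and line terminators
cat -A downloaded.txt      # makes hidden characters visible, e.g. ^M for carriage returns

# If the file turns out to have Windows (CRLF) line endings,
# stripping the carriage returns usually makes the grep work again:
tr -d '\r' < downloaded.txt > cleaned.txt
grep -- "^[0-9a-zA-Z\.-]\+$" cleaned.txt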
I am using Neo4j in my browser on Ubuntu. I have over 1 million nodes and I want to export them as a CSV file.
When the returned data is small, like "match n return n limit 3", there is a big fat "download csv" button I can use. But when it comes to a big result set, over 1000 rows, the shell just says "Resultset too large (over 1000 rows)" and the button doesn't show up.
How can I export CSV files for large result sets?
You can also use my shell extensions to export cypher results to CSV.
See here: https://github.com/jexp/neo4j-shell-tools#cypher-import
Just provide an -o output.csv file to the import-cypher command.
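For example (an illustrative label and property; substitute your own Cypher):
import-cypher -o output.csv MATCH (n:User) RETURN n.name AS name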
Well, I just used the Linux shell to do the whole job.
neo4j-shell -file query.cql | sed 's/|/;/g' > myfile.csv
In my case, I also had to convert from UTF-8 to ISO-8859-1, so I typed:
neo4j-shell -file query.cql | sed 's/|/;/g' | iconv -f UTF-8 -t ISO-8859-1 -o myfile.csv
PS: sed performs the replacement: 's/|/;/g' means substitute (s) every "|" with ";", even when there is more than one per line (g).
Hope this can help.
Regards
We followed the approach below, using the import-cypher command from the Neo4j shell, as described at https://github.com/jexp/neo4j-shell-tools#cypher-import. It works very well for us, and the data is formatted properly in CSV format.
neo4j-sh (?)$ import-cypher -o test.csv MATCH (m:TDSP) return m.name
I know this is an old post, but maybe this will help someone else. For anyone using the Symfony framework, you can make a pretty simple controller to export Neo4j Cypher queries as CSV. This uses the GraphAware Neo4j PHP OGM (https://github.com/graphaware/neo4j-php-ogm) to run raw Cypher queries. I guess this can also easily be implemented without Symfony, using only plain PHP.
Just make a form (with Twig if you want):
<form action="{{ path('admin_exportquery') }}" method="get">
Cypher:<br>
<textarea name="query"></textarea><br>
<input type="submit" value="Submit">
</form>
Then set up the route of "admin_exportquery" to point to the controller. And add a controller to handle the export:
// Assumes the usual imports at the top of the controller file:
// use Symfony\Component\HttpFoundation\Request;
// use Symfony\Component\HttpFoundation\StreamedResponse;
public function exportQueryAction(Request $request)
{
$query = $request->query->get('query');
$em = $this->get('neo4j.graph_manager')->getClient();
$response = new StreamedResponse(function() use($em,$query) {
$result = $em->getDatabaseDriver()->run($query);
$handle = fopen('php://output', 'w');
fputs($handle, "\xEF\xBB\xBF"); // UTF-8 BOM so spreadsheet apps detect the encoding
$header = $result->getRecords()[0]->keys();
fputcsv($handle, $header);
foreach($result->getRecords() as $r){
fputcsv($handle, $r->values());
}
fclose($handle);
});
$response->headers->set('Content-Type', 'application/force-download');
$response->headers->set('Cache-Control', 'no-store, no-cache');
$response->headers->set('Content-Disposition','attachment; filename="export.csv"');
return $response;
}
This lets you download a CSV with utf-8 characters directly from your browser and gives you all the freedom of Cypher.
IMPORTANT: This has no query validation whatsoever, so it is a very good idea to set up appropriate security or query checking before using it :)
The cypher-shell (https://neo4j.com/docs/operations-manual/current/tools/cypher-shell/) that is included in the latest versions of Neo4j does this easily:
cat query.cql | cypher-shell > answer.csv
The limit 1000 is due to the browser MaxRows setting. You can change it to e.g. 10000 and thus be able to download those 10000 in one go via the download/export csv button described in the original post. On my laptop, the limit for the download button is somewhere between 10000 and 20000.
By setting the MaxRows limit to 50000 or 300000, I have been able to get the data on screen after waiting a while. Manual select, copy, and paste works. However, doing anything other than closing the browser after each query has not been possible, as the browser becomes very slow.
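In recent versions of Neo4j Browser the setting can also be changed from the query bar, not just the settings sidebar (a hedged example; the exact syntax may vary between versions):
:config maxRows: 10000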
There is another option to export the data as CSV, using the cURL command with the http/https Neo4j transaction/commit endpoint.
Here is an example of how to do that.
curl -H accept:application/json -H content-type:application/json \
-d '{"statements":[{"statement":"MATCH (p1:PROFILES)-[:RELATION]-(p2) RETURN ... LIMIT 4"}]}' \
http://localhost:7474/db/data/transaction/commit \
| jq -r '(.results[0]) | .columns,.data[].row | @csv'
This command uses jq, which converts the results to CSV format; make sure jq is installed on your machine.
Note: You may need to pass an Authorization header for authentication.
Just add -H 'Authorization: Basic XXXXX=' to the curl command to avoid a 401.
Here is the blog post with a detailed explanation:
https://neo4j.com/blog/export-csv-from-neo4j-curl-cypher-jq/
I am using TCPDF 6.0.20, PHP 5.3.8.
Since my PDF will contain some Chinese Character (Simplified and Traditional), I decide to use font name: cid0cs.
Now, since a lot of users replied that they don't have that font installed, I need to embed this font to the PDF.
However, http://www.tcpdf.org/fonts.php explains how to embed a TTF file using addTTFfont(), but gives NO way to create that font.
The only file that I found in tcpdf/fonts/ is cid0cs.php, which is NOT a TTF file.
I tried to run the script tcpdf/tools/tcpdf_addfont.php with the command line below:
c:\tcpdf>php tools/tcpdf_addfont.php -t CID0CS -o . -i fonts/cid0cs.php
and it returns:
--- ERROR: can't add fonts/cid0cs.php
How can I create the cid0cs.ttf file so that I can embed it using addTTFfont()?
I found the solution and recorded the process in my blog post below:
http://alucard-blog.blogspot.hk/2013/06/tcpdf-how-to-display-chinese-character.html
The key point: I found a font that can display both Traditional and Simplified Chinese.
Open cmd and run PHP.
Run this command: php \ABSOLUTE PATH TO\tcpdf_addfont.php -i \ABSOLUTE PATH TO\myfont.ttf
If it works, you will find 3 files for myfont in /TCPDF/fonts.
You might find some help here: http://queirozf.com/entries/adding-a-custom-font-to-tcpdf
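For completeness, once the three font files exist in TCPDF/fonts, embedding comes down to registering the TTF once and selecting the returned font name. A minimal sketch, assuming TCPDF 6.x and a hypothetical path to myfont.ttf:
<?php
require_once('tcpdf/tcpdf.php');

// Convert the TTF once; addTTFfont() returns the internal font name (e.g. "myfont")
$fontname = TCPDF_FONTS::addTTFfont('/path/to/myfont.ttf', 'TrueTypeUnicode', '', 32);

$pdf = new TCPDF();
$pdf->AddPage();
$pdf->SetFont($fontname, '', 14);
$pdf->Write(0, '繁體中文 / 简体中文'); // Traditional / Simplified Chinese sample text
$pdf->Output('example.pdf', 'I');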