Get word count in PO file - translation

How can I calculate the number of words in a PO file?
Poedit shows only the line count.
Any help will be appreciated.

Try pocount (part of the Translate Toolkit):
pocount fi.po
fi.po
type             strings      words (source)    words (translation)
translated:    47 (100%)          137 (100%)                    123
fuzzy:          0 (  0%)            0 (  0%)                    n/a
untranslated:   0 (  0%)            0 (  0%)                    n/a
Total:         47                  137                          123
unreviewed:    47 (100%)           137 (100%)                   123
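If you prefer to script the count, a rough word count can also be computed with the polib library. A minimal sketch, assuming pip install polib and the fi.po file from above; whitespace splitting is an approximation of pocount's word rules and plural forms are ignored, so the totals may differ slightly:

#!/usr/bin/env python3
# Rough word counter for a PO file using polib (pip install polib).
import polib

po = polib.pofile('fi.po')
source_words = 0
translation_words = 0
for entry in po:
    if entry.obsolete:
        continue  # skip entries commented out as obsolete
    source_words += len(entry.msgid.split())
    translation_words += len(entry.msgstr.split())
print(f'words (source): {source_words}')
print(f'words (translation): {translation_words}')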

Related

Combining string arguments from input file and command string with GNU parallel

I am trying to pass both a command string and separate arguments from an input file to GNU parallel. My script looks like this:
parallel="parallel --delay 0.2 -j 100 --joblog remaining_runs_$1.log --resume "
$srun $parallel {python3 scaling.py {1} {2} {3}} < missing_runs_$1.txt
The python script takes 3 separate integers as arguments, each listed in missing_runs_$1.txt like so:
1 1 153
1 1 154
1 1 155
1 1 156
1 1 157
1 1 158
...
I have tried using --colsep, but then only the file arguments are passed to parallel and the python3 scaling.py part goes missing. Without --colsep, each file line is interpreted as a single string, which is not what I want either (e.g., python3 scaling.py '1 1 153'). Any ideas?
Based on your input sample, I created a reproducible example to test this issue:
A simple Python script:
#!/usr/bin/env python3
import sys

for i in range(1, len(sys.argv)):
    print(f'The argument number {i} is {sys.argv[i]}.')
And a simplified command line:
parallel --dry-run -j 100 --colsep ' ' ./python.py {1} {2} {3} :::: missing_runs_1.txt
./python.py 1 1 153
./python.py 1 1 154
./python.py 1 1 155
./python.py 1 1 156
./python.py 1 1 157
./python.py 1 1 158
And without --dry-run:
The argument number 1 is 1.
The argument number 2 is 1.
The argument number 3 is 153.
The argument number 1 is 1.
The argument number 2 is 1.
The argument number 3 is 154.
The argument number 1 is 1.
The argument number 2 is 1.
The argument number 3 is 155.
The argument number 1 is 1.
The argument number 2 is 1.
The argument number 3 is 156.
The argument number 1 is 1.
The argument number 2 is 1.
The argument number 3 is 157.
The argument number 1 is 1.
The argument number 2 is 1.
The argument number 3 is 158.
Using all the arguments from your parallel command, I got the following in the file remaining_runs_1.log:
Seq Host Starttime JobRuntime Send Receive Exitval Signal Command
1 : 1630591288.009 0.021 0 86 0 0 ./python.py 1 1 153
2 : 1630591288.220 0.040 0 86 0 0 ./python.py 1 1 154
3 : 1630591288.422 0.035 0 86 0 0 ./python.py 1 1 155
4 : 1630591288.649 0.041 0 86 0 0 ./python.py 1 1 156
5 : 1630591288.859 0.042 0 86 0 0 ./python.py 1 1 157
6 : 1630591289.081 0.040 0 86 0 0 ./python.py 1 1 158
I think this solves the problem, or at least gives new ideas for the definitive solution.
If
parallel --delay 0.2 -j 100 --joblog curtailment_scaling_remaining_$1.log --resume python3 scaling.py {1} {2} {3} :::: < missing_runs_$1.txt
gives you:
python3 scaling.py '1 1 163'
and you want:
python3 scaling.py 1 1 163
you can do (version > 20190722):
parallel --delay 0.2 -j 100 --joblog curtailment_scaling_remaining_$1.log --resume python3 scaling.py {=uq=} < missing_runs_$1.txt
(uq runs uq(), which causes the replacement string not to be quoted.)
or:
parallel --delay 0.2 -j 100 --joblog curtailment_scaling_remaining_$1.log --resume eval python3 scaling.py {} < missing_runs_$1.txt
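As a side note on why the quoting matters, here is a small illustration (standard library only, not from the original answer) of how the shell splits the quoted and unquoted forms:

import shlex

# The quoted form arrives as a single argument; the unquoted form as three.
print(shlex.split("python3 scaling.py '1 1 163'"))
# ['python3', 'scaling.py', '1 1 163']
print(shlex.split("python3 scaling.py 1 1 163"))
# ['python3', 'scaling.py', '1', '1', '163']

parallel's default quoting produces the first form; uq (or eval) restores the second.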

Finding the memory consumption of each redis DB

The problem
One of my Python Redis clients fails with the following exception:
redis.exceptions.ResponseError: MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.
I have checked the redis machine, and it seems to be out of memory:
free
             total       used       free     shared    buffers     cached
Mem:          3952       3656        295          0          1          9
-/+ buffers/cache:       3645        306
Swap:            0          0          0
top
top - 15:35:03 up 14:09, 1 user, load average: 0.06, 0.17, 0.16
Tasks: 114 total, 2 running, 112 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.2 us, 0.3 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.2 st
KiB Mem: 4046852 total, 3746772 used, 300080 free, 1668 buffers
KiB Swap: 0 total, 0 used, 0 free. 11364 cached Mem
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1102 root      20   0 3678836 3.485g    736 S   1.3 90.3  10:12.53 redis-server
 1332 ubuntu    20   0   41196   3096    972 S   0.0  0.1   0:00.12 zsh
  676 root      20   0   10216   2292      0 S   0.0  0.1   0:00.03 dhclient
  850 syslog    20   0  255836   2288    124 S   0.0  0.1   0:00.39 rsyslogd
I am using a few dozen Redis DBs in a single Redis instance. Each DB is denoted by a numeric id given to redis-cli, e.g.:
$ redis-cli -n 80
127.0.0.1:6379[80]>
How do I know how much memory does each DB consume, and what are the largest keys in each DB?
You CANNOT get the used memory for each DB. With the INFO command, you can only get the total memory used by the Redis instance. Redis records the newly allocated memory size each time it dynamically allocates memory, but it does not keep such a record per DB. It also does not keep any record of the largest keys.
Normally, you should configure your Redis instance with maxmemory and maxmemory-policy (i.e., the eviction policy applied when maxmemory is reached).
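For example, with the redis-py client (a sketch only; the 1gb cap and volatile-lru policy are illustrative values, and CONFIG SET does not persist across restarts unless you also edit redis.conf):

import redis

r = redis.Redis()
# Cap the instance at 1 GiB and evict least-recently-used keys
# among those that have an expiry set.
r.config_set('maxmemory', '1gb')
r.config_set('maxmemory-policy', 'volatile-lru')
print(r.config_get('maxmemory*'))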
You can write a small shell script like this (it shows the element count in each DB):
#!/bin/bash
max_db=501
i=0
while [ $i -lt $max_db ]
do
    echo "db_number: $i"
    redis-cli -n $i dbsize
    i=$((i+1))
done
Example output:
db_number: 0
(integer) 71
db_number: 1
(integer) 0
db_number: 2
(integer) 1
db_number: 3
(integer) 1
db_number: 4
(integer) 0
db_number: 5
(integer) 1
db_number: 6
(integer) 28
db_number: 7
(integer) 1
I know a single database can still hold one very large key, but in some cases this script can help anyway.
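If your server runs Redis 4.0 or newer, you can also approximate the memory used by each DB yourself by summing MEMORY USAGE over all keys. A redis-py sketch (the DB range and connection details are assumptions, and MEMORY USAGE samples nested values, so the totals are estimates):

#!/usr/bin/env python3
# Approximate per-DB memory by summing MEMORY USAGE over every key,
# and report the largest key per DB. Requires Redis >= 4.0 and redis-py.
import redis

for db in range(16):  # adjust to the number of DBs you actually use
    r = redis.Redis(host='localhost', port=6379, db=db)
    if r.dbsize() == 0:
        continue
    total = 0
    largest_key, largest_size = None, 0
    for key in r.scan_iter(count=1000):
        size = r.memory_usage(key) or 0
        total += size
        if size > largest_size:
            largest_key, largest_size = key, size
    print(f'db {db}: ~{total} bytes in {r.dbsize()} keys, '
          f'largest key {largest_key!r} ({largest_size} bytes)')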

Too many files when starting a new JHipster project

I followed the tutorial from Matt at:
http://jhipster.github.io/video-tutorial/
When I run cloc . I see far more files than I would expect:
$ cloc .
66717 text files.
20401 unique files.
24466 files ignored.
http://cloc.sourceforge.net v 1.60 T=128.46 s (115.7 files/s, 15523.0 lines/s)
--------------------------------------------------------------------------------
Language                     files      blank     comment        code
--------------------------------------------------------------------------------
Javascript                   13322     222956      357190     1266221
HTML                           676       6984        1047       44885
CSS                             76       1883         932       22029
Java                           262       3548        1854       15641
XML                             53       3383        1395       11307
LESS                            79       1388        1546        7269
C/C++ Header                    18       1032         300        5109
YAML                           190        221         346        3466
CoffeeScript                    47        783         699        2467
make                            58        417         523        1271
Bourne Shell                    31        234         202        1097
Maven                            1         12          34         824
Perl                             2         87         170         584
DTD                              1        179         177         514
SASS                             5         42          25         273
C++                              4         43          26         260
IDL                              6         38           0         167
Bourne Again Shell               3         28          36         140
D                                6          0           0         118
Scala                            1         16           7         118
JavaServer Faces                 3          3           0         109
Smarty                           6         17          30          91
DOS Batch                        1         24           2          64
Python                           1          7           7          36
XSLT                             1          5           0          32
C#                               2          3           1          27
ASP.Net                          2          5           0          23
C                                1          7           4          23
OCaml                            1          5          15           6
Lisp                             1          0           0           6
PowerShell                       1          2           2           4
Lua                              1          0           0           2
--------------------------------------------------------------------------------
SUM:                         14862     243352      366570     1384183
--------------------------------------------------------------------------------
Why is that?
In total it is 610 MB!
It seems there are a lot of node modules:
$ du -h -d1
584M ./node_modules
24K ./gulp
26M ./src
64K ./.mvn
610M .
Is this correct?
And what do I need to add to source control?
Thanks
This is normal. Most of those files are NPM dependencies, as you mentioned.
The generated .gitignore should already be configured properly and will ignore node_modules, so you do not need to commit them; running npm install restores the dependencies.

Travis "Segmentation fault" but works fine locally

Hi there, I ran into a 'Segmentation fault' error when using Travis CI for my project: IPython-Dashboard.
There is no error message and it works fine locally, so I am a little confused. Can anyone give me an idea on how to fix this? Thanks.
Here is the Travis build log in the cloud:
travis-log
$ nosetests --with-coverage --cover-package=dashboard
../home/travis/build.sh: line 45: 3187 Segmentation fault (core dumped)
nosetests --with-coverage --cover-package=dashboard
The command "nosetests --with-coverage --cover-package=dashboard" exited with 139.
Here is the build log locally [OS X]:
taotao@mac007:~/Desktop/github/IPython-Dashboard$ sudo nosetests --with-coverage --cover-package=dashboard
.../Users/chenshan/Desktop/github/IPython-Dashboard/dashboard/tests/testCreateData.py:78: Warning: Can't create database 'IPD_data'; database exists
conn.cursor().execute('CREATE DATABASE IF NOT EXISTS {};'.format(config.sql_db))
/Library/Python/2.7/site-packages/pandas/io/sql.py:599: FutureWarning: The 'mysql' flavor with DBAPI connection is deprecated and will be removed in future versions. MySQL will be further supported with SQLAlchemy engines.
warnings.warn(_MYSQL_WARNING, FutureWarning)
...
Name Stmts Miss Cover Missing
---------------------------------------------------------------------
dashboard.py 13 0 100%
dashboard/client.py 1 0 100%
dashboard/client/sender.py 11 3 73% 26-27, 33
dashboard/conf.py 0 0 100%
dashboard/conf/config.py 29 0 100%
dashboard/server.py 0 0 100%
dashboard/server/resources.py 0 0 100%
dashboard/server/resources/dash.py 35 10 71% 36, 55-56, 67-69, 86-89
dashboard/server/resources/home.py 40 12 70% 25, 28-30, 83-91
dashboard/server/resources/sql.py 27 11 59% 30, 52-75
dashboard/server/resources/status.py 8 1 88% 19
dashboard/server/resources/storage.py 13 5 62% 26-28, 43-47
dashboard/server/utils.py 79 18 77% 20-24, 78-80, 82-83, 86, 96, 99-100, 126-127, 140-142
dashboard/server/views.py 21 1 95% 16
---------------------------------------------------------------------
TOTAL 277 61 78%
----------------------------------------------------------------------
Ran 6 tests in 4.600s
OK
taotao@mac007:~/Desktop/github/IPython-Dashboard$
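One generic way to localize a crash like this is Python's built-in faulthandler module (Python 3.3+; on Python 2 the faulthandler backport from PyPI provides the same API). A debugging sketch, not something from the original post:

# Enable at the very top of the test entry point, or set PYTHONFAULTHANDLER=1
# in the Travis environment. On a segfault the interpreter dumps the Python
# traceback of every thread, which usually points at the C extension that crashed.
import faulthandler
faulthandler.enable()

With that traceback you can usually spot which compiled dependency behaves differently on Travis than on the local OS X machine.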

GFS2 flags 0x00000005 blocked,join

I have a RHEL 6 cluster with
cman, corosync, and pacemaker.
After adding new members I got an error in GFS mounting: GFS never mounts on the servers.
group_tool
fence domain
member count 4
victim count 0
victim now 0
master nodeid 1
wait state none
members 1 2 3 4
dlm lockspaces
name clvmd
id 0x4104eefa
flags 0x00000000
change member 4 joined 1 remove 0 failed 0 seq 1,1
members 1 2 3 4
gfs mountgroups
name lv_gfs_01
id 0xd5eacc83
flags 0x00000005 blocked,join
change member 3 joined 1 remove 0 failed 0 seq 1,1
members 1 2 3
In the process list:
root 2695 2690 0 08:03 pts/1 00:00:00 /bin/bash /etc/init.d/gfs2 start
root 2702 2695 0 08:03 pts/1 00:00:00 /bin/bash /etc/init.d/gfs2 start
root 2704 2703 0 08:03 pts/1 00:00:00 /sbin/mount.gfs2 /dev/mapper/vg_shared-lv_gfs_01 /mnt/share -o rw,_netdev,noatime,nodiratime
fsck.gfs2 -yf /dev/vg_shared/lv_gfs_01
Initializing fsck
jid=1: Replayed 0 of 0 journaled data blocks
jid=1: Replayed 20 of 21 metadata blocks
Recovering journals (this may take a while)
Journal recovery complete.
Validating Resource Group index.
Level 1 rgrp check: Checking if all rgrp and rindex values are good.
(level 1 passed)
RGs: Consistent: 183 Cleaned: 1 Inconsistent: 0 Fixed: 0 Total: 184
2 blocks may need to be freed in pass 5 due to the cleaned resource groups.
Starting pass1
Pass1 complete
Starting pass1b
Pass1b complete
Starting pass1c
Pass1c complete
Starting pass2
Pass2 complete
Starting pass3
Pass3 complete
Starting pass4
Pass4 complete
Starting pass5
Block 11337799 (0xad0047) bitmap says 1 (Data) but FSCK saw 0 (Free)
Fixed.
Block 11337801 (0xad0049) bitmap says 1 (Data) but FSCK saw 0 (Free)
Fixed.
RG #11337739 (0xad000b) free count inconsistent: is 65500 should be 65502
RG #11337739 (0xad000b) Inode count inconsistent: is 15 should be 13
Resource group counts updated
Pass5 complete
The statfs file is wrong:
Current statfs values:
blocks: 12057320 (0xb7fae8)
free: 9999428 (0x989444)
dinodes: 15670 (0x3d36)
Calculated statfs values:
blocks: 12057320 (0xb7fae8)
free: 9999432 (0x989448)
dinodes: 15668 (0x3d34)
The statfs file was fixed.
Writing changes to disk
gfs2_fsck complete
gfs2_edit -p 0xad0047 field di_size /dev/vg_shared/lv_gfs_01
10 (Block 11337799 is type 10: Ext. attrib which is not implemented)
How do I drop the blocked,join flag from GFS?
I solved it by rebooting all servers that use GFS;
this is one of the unpleasant behaviors of GFS.
GFS locking is kernel-based, and in a few cases it can only be resolved with a reboot.
There is a very useful manual here: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/Global_File_System_2/index.html
