How can I access ink levels of printers programmatically?

Okay, this is a Windows-specific question.
I need to be able to access the ink levels of a printer connected to a computer, either directly or over the network.
I recognize that it will likely be different for each printer (or at least each printer manufacturer), but where can I find information on how they expose ink levels to a PC? Also, what is the best language to read this information in?

Okay, this is an OS-agnostic answer... :-)
If the printer isn't a very cheap model, it will have built-in support for SNMP (Simple Network Management Protocol). SNMP queries can return current values that network devices store in their MIBs (Management Information Bases).
For printers there is a standard called the Printer MIB. The Printer MIB defines standard names and tree locations (OIDs, Object Identifiers in ASN.1 notation) for prtMarkerSuppliesLevel, which for ink-marking printers maps to ink levels.
Be aware that SNMP also allows private extensions to the standard MIBs. Most printer vendors tuck many additional pieces of information away in their "private MIBs", but the standard info should always be available by querying the Printer MIB OIDs.
Practically every programming language has libraries which can help you make SNMP queries from your own application.
One such implementation is the open-source Net-SNMP, which also comes with a few powerful command-line tools to run SNMP queries.
I think the OID to query the levels for all inks is .1.3.6.1.2.1.43.11.1.1.9 (this webpage confirms my belief), but I cannot verify that right now because I don't have a printer in my LAN at the moment. So Net-SNMP's snmpget command to query ink levels should be something like:
snmpget -v1 \
    -c public \
    192.168.222.111 \
    ".1.3.6.1.2.1.43.11.1.1.9"
where public is the default community string and 192.168.222.111 is your printer's IP address.
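If you would rather issue the query from a program than from a shell, a minimal Python sketch along these lines should work; it simply shells out to Net-SNMP's snmpwalk (assumed to be installed and on the PATH), and the printer address is just the placeholder from above:
import re
import subprocess

PRINTER = "192.168.222.111"              # placeholder: your printer's IP
OID_LEVELS = ".1.3.6.1.2.1.43.11.1.1.9"  # prtMarkerSuppliesLevel

# Walk the supplies-level column and print one value per cartridge.
out = subprocess.run(
    ["snmpwalk", "-v1", "-c", "public", PRINTER, OID_LEVELS],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    m = re.search(r"\.9\.(\d+\.\d+) = INTEGER: (\d+)", line)
    if m:
        print(f"cartridge {m.group(1)}: level {m.group(2)}")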

I have an SNMP-capable HP 8600 Pro N911a around to do some digging, so the following commands may help you a bit. Beware that this particular model has some firmware quirks: you can't query "magenta" with snmpget, but you do see a value with snmpwalk (which does a kind of recursive drill-down).
OLD: You can query the names and sequence of values, but I couldn't find the "max value" to calculate a clean percentage so far ;(. I'm guessing so far that the values are relative to 255, so dividing by 2.55 yields a percentage.
Update: Marcelo's hint was great! From registers .8.* you can read the max level per cartridge, and I was totally wrong in assuming the max value can only be an 8-bit value. I have updated the sample script to read the max values and calculate clean percentages.
There is also some discussion over at the Cacti forums.
One answer confirms that the ink levels are measured as percent (value 15 is "percent" in an enumeration):
# snmpwalk -v1 -c public 192.168.100.173 1.3.6.1.2.1.43.11.1.1.7
SNMPv2-SMI::mib-2.43.11.1.1.7.0.1 = INTEGER: 15
SNMPv2-SMI::mib-2.43.11.1.1.7.0.2 = INTEGER: 15
SNMPv2-SMI::mib-2.43.11.1.1.7.0.3 = INTEGER: 15
SNMPv2-SMI::mib-2.43.11.1.1.7.0.4 = INTEGER: 15
You need to install the net-snmp package. If you're not on Linux you might need to do some digging to find SNMP command-line tools for your preferred OS.
# snmpwalk -v1 -c public 192.168.100.173 1.3.6.1.2.1.43.11.1.1.6.0
SNMPv2-SMI::mib-2.43.11.1.1.6.0.1 = STRING: "black ink"
SNMPv2-SMI::mib-2.43.11.1.1.6.0.2 = STRING: "yellow ink"
SNMPv2-SMI::mib-2.43.11.1.1.6.0.3 = STRING: "cyan ink"
SNMPv2-SMI::mib-2.43.11.1.1.6.0.4 = STRING: "magenta ink"
# snmpwalk -v1 -c public 192.168.100.173 1.3.6.1.2.1.43.11.1.1.9.0
SNMPv2-SMI::mib-2.43.11.1.1.9.0.1 = INTEGER: 231
SNMPv2-SMI::mib-2.43.11.1.1.9.0.2 = INTEGER: 94
SNMPv2-SMI::mib-2.43.11.1.1.9.0.3 = INTEGER: 210
SNMPv2-SMI::mib-2.43.11.1.1.9.0.4 = INTEGER: 174
# snmpwalk -v1 -c praxis 192.168.100.173 1.3.6.1.2.1.43.11.1.1.8.0
SNMPv2-SMI::mib-2.43.11.1.1.8.0.1 = INTEGER: 674
SNMPv2-SMI::mib-2.43.11.1.1.8.0.2 = INTEGER: 240
SNMPv2-SMI::mib-2.43.11.1.1.8.0.3 = INTEGER: 226
SNMPv2-SMI::mib-2.43.11.1.1.8.0.4 = INTEGER: 241
On my Linux box I use the following script to do some pretty-printing:
#!/bin/sh
PATH=/opt/bin${PATH:+:$PATH}

# get the ink names per cartridge (c[1]..c[4])
eval $(snmpwalk -v1 -c praxis 192.168.100.173 1.3.6.1.2.1.43.11.1.1.6.0 |
  perl -ne 'print "c[$1]=$2\n" if(m!SNMPv2-SMI::mib-2.43.11.1.1.6.0.(\d) = STRING:\s+"(\w+) ink"!i);')

# get the max ink level per cartridge (max[1]..max[4])
eval $(snmpwalk -v1 -c praxis 192.168.100.173 1.3.6.1.2.1.43.11.1.1.8.0 |
  perl -ne 'print "max[$1]=$2\n" if(m!SNMPv2-SMI::mib-2.43.11.1.1.8.0.(\d) = INTEGER:\s+(\d+)!i);')

# read the current levels and print each one as a percentage of its max
snmpwalk -v1 -c praxis 192.168.100.173 1.3.6.1.2.1.43.11.1.1.9.0 |
  perl -ne '
    my @c=("","'${c[1]}'","'${c[2]}'","'${c[3]}'","'${c[4]}'");
    my @max=("","'${max[1]}'","'${max[2]}'","'${max[3]}'","'${max[4]}'");
    printf "# $c[$1]=$2 (%.0f)\n", $2/$max[$1]*100
      if(m!SNMPv2-SMI::mib-2.43.11.1.1.9.0.(\d) = INTEGER:\s+(\d+)!i);'

An alternative approach could be using IPP (the Internet Printing Protocol). While most of the printers I tried support both, I found one which only worked with IPP and one that only worked for me with SNMP.
A simple approach with ipptool:
Create a file colors.ipp:
{
VERSION 2.0
OPERATION Get-Printer-Attributes
GROUP operation-attributes-tag
ATTR charset "attributes-charset" "utf-8"
ATTR naturalLanguage "attributes-natural-language" "en"
ATTR uri "printer-uri" $uri
ATTR name "requesting-user-name" "John Doe"
ATTR keyword "requested-attributes" "marker-colors","marker-high-levels","marker-levels","marker-low-levels","marker-names","marker-types"
}
Run:
ipptool -v -t ipp://192.168.2.126/ipp/print colors.ipp
The response:
"colors.ipp":
Get-Printer-Attributes:
attributes-charset (charset) = utf-8
attributes-natural-language (naturalLanguage) = en
printer-uri (uri) = ipp://192.168.2.126/ipp/print
requesting-user-name (nameWithoutLanguage) = John Doe
requested-attributes (1setOf keyword) = marker-colors,marker-high-levels,marker-levels,marker-low-levels,marker-names,marker-types
colors [PASS]
RECEIVED: 507 bytes in response
status-code = successful-ok (successful-ok)
attributes-charset (charset) = utf-8
attributes-natural-language (naturalLanguage) = en-us
marker-colors (1setOf nameWithoutLanguage) = #00FFFF,#FF00FF,#FFFF00,#000000,none
marker-high-levels (1setOf integer) = 100,100,100,100,100
marker-levels (1setOf integer) = 6,6,6,6,100
marker-low-levels (1setOf integer) = 5,5,5,5,5
marker-names (1setOf nameWithoutLanguage) = Cyan Toner,Magenta Toner,Yellow Toner,Black Toner,Waste Toner Box
marker-types (1setOf keyword) = toner,toner,toner,toner,waste-toner
marker-levels holds the current toner/ink levels, marker-high-levels are the maximums (so far I've only seen 100s here), and marker-names describes the meaning of each field (tip: for colors you may want to strip everything after the first space, since many printers include cartridge types in this field).
Note: the above is with CUPS 2.3.1. With 2.2.1 I had to specify the keywords as one string instead ("marker-colors,marker-h....). Or the attribute can be left out altogether, in which case all keywords are returned.
More on available attributes (may differ between printers): https://www.cups.org/doc/spec-ipp.html
More on executing ipp calls (including python examples): https://www.pwg.org/ipp/ippguide.html
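If you want to consume this from a program rather than read ipptool's report by eye, a minimal Python sketch along these lines should do (my own illustration, not part of the original answer): it assumes ipptool is on the PATH, reuses the colors.ipp file from above, and the printer URI is a placeholder.
import re
import subprocess

URI = "ipp://192.168.2.126/ipp/print"  # placeholder: your printer's IPP URI

# Run the colors.ipp test file from above and capture ipptool's verbose report.
out = subprocess.run(
    ["ipptool", "-v", "-t", URI, "colors.ipp"],
    capture_output=True, text=True, check=True,
).stdout

# Pull the interesting attributes out of the textual report.
attrs = {}
for key in ("marker-names", "marker-levels", "marker-high-levels"):
    m = re.search(rf"{key} \(1setOf \w+\) = (.+)", out)
    if m:
        attrs[key] = m.group(1).split(",")

for name, level, high in zip(attrs.get("marker-names", []),
                             attrs.get("marker-levels", []),
                             attrs.get("marker-high-levels", [])):
    print(f"{name.strip()}: {level} of {high}")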

I really liked tseeling's approach!
Complementarily, I found out that the max value for the OID ... .9 is not 255 as he guessed, but actually varies per individual cartridge. The values can be obtained from OID .1.3.6.1.2.1.43.11.1.1.8 (the results obtained by dividing by these values match the ones reported by the hp-inklevels command from hplip).
I wrote my own script that outputs CSV lines like the ones below (suppose the printer IP address is 192.168.1.20):
# ./hpink 192.168.1.20
black,73,366,19.9454
yellow,107,115,93.0435
cyan,100,108,92.5926
magenta,106,114,92.9825
The values are in this order: <color_name>,<level>,<maxlevel>,<percentage>
The script source (you will notice I usually prefer awk over perl when the puzzle is simple enough):
#!/bin/sh
snmpwalk -v1 -c public $1 1.3.6.1.2.1.43.11.1.1 | awk '
  /.*\.6\.0\./ {
    sub(/.*\./,"");
    split($0,TT,/[ "]*/);
    color[TT[1]]=TT[4];
  }
  /.*\.8\.0\./ {
    sub(/.*\./,"");
    split($0,TT,/[ "]*/);
    maxlevel[TT[1]]=TT[4];
  }
  /.*\.9\.0\./ {
    sub(/.*\./,"");
    split($0,TT,/[ "]*/);
    print color[TT[1]] "," TT[4] "," maxlevel[TT[1]] "," TT[4] / maxlevel[TT[1]] * 100;
  }'

Matching TOTP implementation with Google Authenticator

(Solution) TL;DR: Google assumes the key string is base32 encoded (after replacing any 1 with I and any 0 with O), so it must be base32-decoded prior to hashing.
Original Question
I'm having difficulty getting my code to match up with GA. I even went chasing down counters +/- ~100,000 from the current time step and found nothing. I was very excited to see my function pass the SHA-1 tests in the RFC 6238 appendix; however, when applied to "real life" it seems to fail.
I went so far as to look at the open source code for Google Authenticator on GitHub (here). I used the key "qwertyuiopasdfgh" for testing. According to the GitHub code:
/*
 * Return key entered by user, replacing visually similar characters 1 and 0.
 */
private String getEnteredKey() {
    String enteredKey = keyEntryField.getText().toString();
    return enteredKey.replace('1', 'I').replace('0', 'O');
}
I believe my key would not be modified. Tracing through the files, it seems the key remains unchanged through the calls AuthenticatorActivity.saveSecret() -> AccountDb.add() -> AccountDb.newContentValuesWith().
I compared my time between three sources:
(erlang shell): now()
(bash): date "+%s"
(Google/bash): pattern="\s*date\:\s*"; curl -I https://www.google.com 2>/dev/null | grep -iE $pattern | sed -e "s/$pattern//g" | xargs -0 date "+%s" -d
They are all the same. Despite that, it appears my phone is a bit off from my computer: it changes steps out of sync with it. However, trying to chase down the proper time step by +/- thousands didn't turn up anything. According to the NetworkTimeProvider class, that is the time source for the app.
This code worked with all the SHA-1 tests in the RFC:
totp(Secret, Time) ->
    % {M, S, _} = os:timestamp(),
    Msg = binary:encode_unsigned(Time), %(M*1000000+S) div 30,
    %% Create 0-left-padded 64-bit binary from Time
    Bin = <<0:((8-size(Msg))*8), Msg/binary>>,
    %% Create SHA-1 hash
    Hash = crypto:hmac(sha, Secret, Bin),
    %% Determine dynamic offset
    Offset = 16#0f band binary:at(Hash, 19),
    %% Ignore that many bytes and store 4 bytes into THash
    <<_:Offset/binary, THash:4/binary, _/binary>> = Hash,
    %% Remove sign bit and create 6-digit code
    Code = (binary:decode_unsigned(THash) band 16#7fffffff) rem 1000000,
    %% Convert to text-string and 0-lead-pad if necessary
    lists:flatten(string:pad(integer_to_list(Code), 6, leading, $0)).
For the code above to truly match the RFC it would need to be modified to produce 8-digit codes. I modified it to try to chase down the proper step; the goal was to figure out how my time was wrong. It didn't work out:
totp(_, _, _, 0) ->
    {ok, not_found};
totp(Secret, Goal, Ctr, Stop) ->
    Msg = binary:encode_unsigned(Ctr),
    Bin = <<0:((8-size(Msg))*8), Msg/binary>>,
    Hash = crypto:hmac(sha, Secret, Bin),
    Offset = 16#0f band binary:at(Hash, 19),
    <<_:Offset/binary, THash:4/binary, _/binary>> = Hash,
    Code = (binary:decode_unsigned(THash) band 16#7fffffff) rem 1000000,
    if Code =:= Goal ->
           {ok, {offset, 2880 - Stop}};
       true ->
           totp(Secret, Goal, Ctr+1, Stop-1) %% Did another run with Ctr-1
    end.
Anything obvious stick out?
I was tempted to write my own Android application to implement TOTP for my project, so I kept looking at the Java code. With the downloaded git repository and grep -R to find function calls, I discovered my problem: to get the same PIN codes as Google Authenticator, the key is assumed to be base32 encoded and must be decoded before being passed to the hash algorithm.
There was a hint of this in getEnteredKey(): it replaces the 0 and 1 characters because they are not present in the base32 alphabet.
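To make that concrete, here is a minimal Python sketch of the standard RFC 6238 procedure with the base32 decode included (my own illustration, separate from the Erlang code above; "qwertyuiopasdfgh" is just the test key from the question):
import base64
import hashlib
import hmac
import struct
import time

def totp(key_b32: str, step: int = 30, digits: int = 6) -> str:
    # The crucial part: the key entered into Google Authenticator is base32.
    secret = base64.b32decode(key_b32.upper())
    counter = int(time.time()) // step
    msg = struct.pack(">Q", counter)          # 64-bit big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[19] & 0x0F                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("qwertyuiopasdfgh"))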

How to get a csv with all pcap packet details?

I want to create a CSV to import into Excel, containing all the packet details shown in Wireshark.
Each row should correspond to a packet and the columns to the field details.
Using the following tshark command:
tshark -r mycapturefile.cap -E -V
I can show the information I need like:
Frame 1077: 42 bytes on wire (336 bits), 42 bytes captured (336 bits)
Encapsulation type: Ethernet (1)
Arrival Time: Aug 15, 2017 14:02:27.095521000 EDT
[Time shift for this packet: 0.000000000 seconds]
Epoch Time: 1502820147.095521000 seconds
and other packet details...
What I want is the information provided with -V, so the -T fields option of tshark is ruled out. Wireshark's export options also don't provide the data I need, only the PDML format, which I think is more tedious to parse.
I have searched for a tool, a script or a parser with no results. Since each packet is different, writing my own parser may be difficult/tedious, and considering that people clearly can extract this information but don't document how they do it, there must be a method or tool that can do it.
Do you know any tool, script or method that already does this?
Thanks in advance.
There is a ton of information coming down. You gotta use that -Y display filter to whittle it down. The resulting text can then be parsed.
Try -Y "frame.number == 1077" -V and then parse the text that is returned.
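If you do go down the parsing route, the PDML output mentioned in the question is actually the easiest thing to parse mechanically. A rough Python sketch (my own illustration, not an existing tool; the file names are placeholders) that flattens tshark -T pdml output into one CSV row per packet:
import csv
import subprocess
import xml.etree.ElementTree as ET

# Dump the full packet dissection as PDML (XML), then flatten it.
pdml = subprocess.run(
    ["tshark", "-r", "mycapturefile.cap", "-T", "pdml"],
    capture_output=True, text=True, check=True,
).stdout

rows = []
for packet in ET.fromstring(pdml).iter("packet"):
    row = {}
    # Every dissected value is a <field> with "name" and "show" attributes.
    for field in packet.iter("field"):
        name = field.get("name")
        if name:                      # skip unnamed padding/expansion nodes
            row.setdefault(name, field.get("show", ""))
    rows.append(row)

# Use the union of all field names as the CSV header.
header = sorted({name for row in rows for name in row})
with open("packets.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=header)
    writer.writeheader()
    writer.writerows(rows)
Expect a very wide CSV: the header ends up being the union of every field tshark dissected across all packets.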
In my case I wanted certificate information.
Function GetCertsFromWireSharkPackets2 ($CERTTEXT){
foreach($Cert in($CERTTEXT|?{$_ -match "Source:.*\d{1,3}\.\d{1,3}\.\d{1,3}\.|Destination:.*\d{1,3}\.\d{1,3}\.\d{1,3}\.|Certificate:"} | %{$_.trim() -replace 'Source:','|Source:' -replace ":",'=' }) -join "`n"| %{$_.split('|')}|?{$_}) {
$Cert|%{$Props = [regex]::matches($_,"(?sim)(?<=^).*?(?=\=)").value ; $Dups = [regex]::matches($Props,"(?sim)\b(\w+)\s+\1\b").value.split(' ') ; $values = [regex]::matches($_,"(?sim)(?<=\=).*?(?=$)").value.trim()}
$PropsNoDups = ($Props -join "`n").replace(($Dups|select -first 1),'').split(10)|?{$_} ;
if(($PropsNoDups.count + $Dups.count) -ne $Props.count){$dups+=($dups|select -First 1)}
for($X=1;$X -lt $Dups.count;$X++){$dups[$X] +=$X}
$ValidProps = $PropsNoDups+$Dups ; $StitchCount = $Values.Count
$ValidP_V = For($x=0;$x -lt $StitchCount;$x++){ '"'+$ValidProps[$x] + '"="' + $Values[$x] +'"'} ;$ValidP_V =($ValidP_V -join "`n")|?{$_} ; $ExpText = "New-Object psobject -Property @{`n"+$ValidP_V+"`n}"
Invoke-Expression($ExpText)|select Source, Destination, Certificate, Certificate1, Certificate2, Certificate3
} }
#Click refresh on a few browser tabs to generate traffic.
$CERTTEXT = .\tshark.exe -i 'Wi-Fi' -Y "ssl.handshake.certificate" -V -a duration:30
GetCertsFromWireSharkPackets2 $CERTTEXT
Source : cybersandwich.com (107.170.193.139)
Destination : KirtCarson.com (222.168.3.118)
Certificate : 3082057e30820466a0030201020212030e2782075e8f90f5... (id-at-commonName=multi.zeall.us)
Certificate1 : 308204923082037aa00302010202100a0141420000015385... (id-at-commonName=Let's Encrypt Authority
X3,id-at-organizationName=Let's Encrypt,id-at-countryName=US)
Certificate2 :
Certificate3 :

How to build the complete certificate name, including the serial number, using openssl's index.txt

Problem Description:
I need to build a regular expression / pattern to find a value that can be either decimal or hex.
Background Information:
I am trying to build a Lua function that will look up a cert in index.txt and return the serial number. Ultimately, I need to be able to take the full cert name and run the following command:
openssl x509 -noout \
    -in /etc/ssl/cert/myusername.6A756C65654063616E2E77746274732E6E6574.8F.crt \
    -dates
I have the logic to build the file name, all the way up to the serial number... which in the above example, is 8F.
Here's what the index.txt file looks like:
R 140320154649Z 150325040807Z 8E unknown /CN=test@gmail.com/emailAddress=test@gmail.com
V 160324050821Z 8F unknown /CN=test@yahoo.com/emailAddress=test@yahoo.com
V 160324051723Z 90 unknown /CN=test2@yahoo.com/emailAddress=test2@yahoo.com
The serial number is field 4 in the first record, and field 3 in the rest of the records.
According to the documentation https://www.openssl.org/docs/apps/x509.html, serial number can either be hex or decimal.
I'm not quite sure yet how / who determines whether it's hex or decimal (I'm modifying someone else's code that uses openssl)... but I'm wondering if there's a way to check for both. I'll only be checking the value for records that are not revoked, i.e. ones that do not have "R" in the first column.
Thanks.
Lua patterns unfortunately do not support optional groups, so you cannot simply make the pattern for the second timestamp optional. What you can do is check for the two-timestamp pattern first and, if no match was found (which means that match returns nil), repeat with the one-timestamp pattern:
sn = string.match(line, "^%a%s+%d+Z%s+%d+Z%s+(%x+)")
if not sn then
  sn = string.match(line, "^%a%s+%d+Z%s+(%x+)")
end
Note that you could do this all in one line if you're eager:
sn = string.match(line, "^%a%s+%d+Z%s+%d+Z%s+(%x+)") or string.match(line, "^%a%s+%d+Z%s+(%x+)")
Each set of parentheses captures what is matched inside and adds a return value. For more information on patterns in Lua, see the reference manual.
local cert = {
  'R 140320154649Z 150325040807Z 8E unknown /CN=test@gmail.com/emailAddress=test@gmail.com',
  'V 160324050821Z 8F unknown /CN=test@yahoo.com/emailAddress=test@yahoo.com',
  'V 160324051723Z 90 unknown /CN=test2@yahoo.com/emailAddress=test2@yahoo.com'
}

-- for Lua 5.1
for _, crt in ipairs(cert) do
  local n3, n4 = crt:match'^%S+%s+%S+%s+(%S+)%s+(%S+)'
  local serial = n3:match'^%x+$' or n4:match'^%x+$'
  print(serial)
end

-- for Lua 5.2
for _, crt in ipairs(cert) do
  local serial = crt:match'^%S+%s+%S+.-%f[%S](%x+)%f[%s]'
  print(serial)
end

How to Choose a Port Number?

I'm writing a program which uses ZeroMQ to communicate with other running programs on the same machine. I want to choose a port number at run time to avoid the possibility of collisions. Here is an example of a piece of code I wrote to accomplish this.
#!/usr/bin/perl -Tw
use strict;
use warnings;

my %in_use;
{
    local $ENV{PATH} = '/bin:/usr/bin';
    %in_use = map { $_ => 1 } split /\n/, qx(
        netstat -aunt |\
        awk '{print \$4}' |\
        grep : |\
        awk -F: '{print \$NF}'
    );
}
my ($port) = grep { not $in_use{$_} } 50_000 .. 59_999;
print "$port is available\n";
The procedure is:
invoke netstat -aunt
parse the result
choose the first port in a fixed range which doesn't appear in the netstat output.
Is there a system utility better suited to accomplishing this?
import zmq

context = zmq.Context()
socket = context.socket(zmq.ROUTER)
port_selected = socket.bind_to_random_port('tcp://*', min_port=6001, max_port=6004, max_tries=100)
First of all, from your code it looks like you are trying to choose a port between 70000 and 79999. You do know that port numbers only go up to 65535, right? :-)
You can certainly do it this way, even though there are a couple of problems with the approach. The first problem is that netstat output differs between different operating systems so it's hard to do it portably. The second problem is that you still need to wrap the code in a loop which tries again to find a new port number in case it was not possible to bind to the chosen port number, because there's a race condition between ascertaining that the port is free and actually binding to it.
If the library you are using allows you to specify the port number as 0 and allows you to call getsockname() on the socket after it is bound, then you should just do that. Using 0 makes the system choose any free port number, and with getsockname() you can find out which port it chose.
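For illustration, the port-0 trick looks roughly like this with a plain Python socket (a minimal sketch, independent of ZeroMQ):
import socket

# Bind to port 0 so the OS picks any free port, then read the chosen
# port back with getsockname().
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
port = s.getsockname()[1]
print(f"OS assigned port {port}")
s.close()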
Failing that, it would probably actually be more efficient to not bother calling netstat and just try to bind to different port numbers in a loop. If you succeed, break out of the loop. If you fail, increment the port number by 1, go back, and try again.
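A rough sketch of that retry loop, again with plain Python sockets and the port range from the question:
import socket

def find_free_port(start=50_000, stop=59_999):
    # Try to bind to each candidate port; the first successful bind wins.
    for port in range(start, stop + 1):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind(("127.0.0.1", port))
        except OSError:
            s.close()
            continue
        return s, port  # keep the socket open so another process can't grab the port
    raise RuntimeError("no free port in range")

sock, port = find_free_port()
print(f"{port} is available")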

Bad Result And Evaluation From Giza++

I have tried to work with GIZA++ on Windows (using the Cygwin compiler).
I used this code:
//Suppose source language is French and target language is English
plain2snt.out FrenchCorpus.f EnglishCorpus.e
mkcls -c30 -n20 -pFrenchCorpus.f -VFrenchCorpus.f.vcb.classes opt
mkcls -c30 -n20 -pEnglishCorpus.e -VEnglishCorpus.e.vcb.classes opt
snt2cooc.out FrenchCorpus.f.vcb EnglishCorpus.e.vcb FrenchCorpus.f_EnglishCorpus.e.snt >courpuscooc.cooc
GIZA++ -S FrenchCorpus.f.vcb -T EnglishCorpus.e.vcb -C FrenchCorpus.f_EnglishCorpus.e.snt -m1 100 -m2 30 -mh 30 -m3 30 -m4 30 -m5 30 -p1 o.95 -CoocurrenceFile courpuscooc.cooc -o dictionary
But after getting the output files from GIZA++ and evaluating the output, I observed that the results were very bad.
My evaluation result was:
RECALL = 0.0889
PRECISION = 0.0990
F_MEASURE = 0.0937
AER = 0.9035
Does anybody know the reason? Could the reason be that I have forgotten some parameters, or should I change some of them?
In other words:
First I wanted to train GIZA++ on a huge amount of data and then test it on a small corpus and compare its result to a desired alignment (gold standard), but I couldn't find any document or useful page on the web.
Can you point me to useful documentation?
Therefore I ran it on a small corpus (447 sentences) and compared the result to the desired alignment. Do you think this is the right way?
I also changed my code as follows and got a better result, but it's still not good:
GIZA++ -S testlowsf.f.vcb -T testlowde.e.vcb -C testlowsf.f_testlowde.e.snt -m1 5 -m2 0 -mh 5 -m3 5 -m4 0 -CoocurrenceFile inputcooc.cooc -o dictionary -model1dumpfrequency 1 -model4smoothfactor 0.4 -nodumps 0 -nsmooth 4 -onlyaldumps 1 -p0 0.999 -diagonal yes -final yes
Result of the evaluation:
// Suppose A is the result of GIZA++ and G is the gold standard. As and Gs are the S (sure) links in the A and G files; Ap and Gp are the P (possible) links in the A and G files.
RECALL = As intersect Gs/Gs = 0.6295
PRECISION = Ap intersect Gp/A = 0.1090
FMEASURE = (2*PRECISION*RECALL)/(RECALL + PRECISION) = 0.1859
AER = 1 - ((As intersect Gs + Ap intersect Gp)/(A + S)) = 0.7425
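For reference, here is a small Python sketch of these metrics exactly as written above, treating each alignment as a set of (i, j) link pairs and taking S to be the sure gold links Gs (the sets below are placeholders, not real data):
# As/Ap: sure/possible links produced by GIZA++; Gs/Gp: sure/possible gold links.
# Each link is an (i, j) pair of word positions; the sets here are placeholders.
As = {(1, 1), (2, 2), (3, 4)}
Ap = {(1, 1), (2, 2), (3, 4), (4, 4)}
Gs = {(1, 1), (2, 2), (3, 3)}
Gp = {(1, 1), (2, 2), (3, 3), (4, 4)}

A = As | Ap                      # all links proposed by the aligner
recall    = len(As & Gs) / len(Gs)
precision = len(Ap & Gp) / len(A)
f_measure = 2 * precision * recall / (precision + recall)
aer       = 1 - (len(As & Gs) + len(Ap & Gp)) / (len(A) + len(Gs))

print(f"RECALL={recall:.4f} PRECISION={precision:.4f} "
      f"F={f_measure:.4f} AER={aer:.4f}")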
Do you know the reason?
Where did you get those parameters? 100 iterations of Model 1?! Well, if you actually managed to run this, I strongly suspect that you have a very small parallel corpus. If so, you should consider adding more parallel data for training. And how exactly do you calculate the recall and precision measures?
EDIT:
With less than 500 sentences you're unlikely to get any reasonable performance. The usual way to do it is to find a larger (unaligned) parallel corpus, run GIZA++ on both together, and then evaluate the small part for which you have the manual alignments. Check Europarl or MultiUN; these are freely available corpora, and both contain a relatively large amount of English-French parallel data. The instructions on preparing the data can be found on the websites.
