How to print via EPSON Send Data Tools (senddat.exe)

I want to send binary data (ESC/POS commands) via EPSON Send Data Tools (senddat.exe) from a command prompt, following their website/manual.
If the printer is set as a USB printer class:
senddat.exe scriptfile USBPRN
(The command I ran: C:\senddat.exe sample.txt ESDPRT001)
File: sample.txt
' Sample script of senddat
' Version 0.01
' A comment line starts with the ' character
!A display line starts with the ! character
.A pause line starts with the . character
'Decimal data
48 49 50 51 CR LF
'Hexadecimal data
30h 31h 32h CR LF
0x33 0x34 0x35 CR LF
$36 $37 $38 CR LF
'String data 1
string1 CR LF
'String data 2
"string2" CR LF
'Special characters
"\"" CR LF
"\'" CR LF
"\\" CR LF
"\0" CR LF
This should print:
0123
012
345
678
string1
string2
"
'
But it does not print anything; it only creates an output file (the file name is the same as the port name, in the same directory). For example, my command above creates the file C:\ESDPRT001.
Can anybody help me with this?

To output the data to a USB printer class printer, you need to specify the port like below:
senddat.exe sample.txt USBPRN0
"USBPRN0" is just an example; you need to set the correct number for your test PC environment.

Related

Print double quotes in Forth

The word ." prints a string. More precisely, it compiles (.") and the string up to the next " into the currently compiled word.
But how can I print
That's the "question".
with Forth?
In a Forth-2012 System (e.g. Gforth) you can use string literals with escaping via the word s\" as:
: foo ( -- ) s\" That's the \"question\"." type ;
In a Forth-94 system (the majority of standard systems) you can use arbitrary parsing and the word sliteral as:
: foo ( -- ) [ char | parse That's the "question".| ] sliteral type ;
A string can also be extracted up to the end of the line (without a printable delimiter); a multi-line string can be extracted too.
Specific helpers for particular cases can be easily defined.
For example, see the word s$ for string literals that are delimited by any arbitrary printable character, e.g.:
s$ `"test" 'passed'` type
Old school:
34 emit
Output:
"
Using gforth:
: d 34 emit ;
cr ." That's the " d ." question" d ." ." cr
Output:
That's the "question".

PowerShell 7: byte-encoding an image file

I'm using PowerShell to upload files to a web site through an API.
In PS5.1, this would get the image in the correct B64 encoding to be processed by the API at the other end:
$b64 = [convert]::ToBase64String((get-content $image_path -encoding byte))
In PS7, this breaks with the error:
Get-Content: Cannot process argument transformation on parameter 'Encoding'. 'byte' is not a supported encoding name. For information on defining a custom encoding, see the documentation for the Encoding.RegisterProvider method. (Parameter 'name')
I've tried reading the content in other encodings and then using [System.Text.Encoding]::GetBytes() to convert, but the byte array is always different. E.g.:
PS 5.1> $bytes = get-content -Path $image -Encoding byte ; Write-Host "bytes:" $bytes.count ; Write-Host "First 11:"; $bytes[0..10]
bytes: 31229
First 11:
137
80
78
71
13
10
26
10
0
0
0
But on PowerShell 7:
PS7> $enc = [system.Text.Encoding]::ASCII
PS7> $bytes = $enc.GetBytes( (get-content -Path $image -Encoding ascii | Out-String)) ; Write-Host "bytes:" $bytes.count ; Write-Host "First 11:"; $bytes[0..10]
bytes: 31416 << larger
First 11:
63 << diff
80 << same
78 <<
71
13
10
26
13 << new
10
0
0
I've tried other combinations of encodings without any improvement.
Can anyone suggest where I'm going wrong?
As of PowerShell 6, Byte is no longer a valid argument for the -Encoding parameter. You should try the -AsByteStream parameter in combination with the -Raw parameter, like so:
$b64 = [convert]::ToBase64String((get-content $image_path -AsByteStream -Raw))
There is even an example in the help for Get-Content that explains how to use these new parameters.
The problem turned out to be with Get-Content. I bypassed it using:
$bytes = [System.IO.File]::ReadAllBytes($image_path)
NOTE: the $image_path needs to be absolute, not relative.
So my Base64 line became:
$b64 = [convert]::ToBase64String([System.IO.File]::ReadAllBytes($image_path))
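As to why the ASCII round trip in PowerShell 7 changed the data: byte 137 (0x89) is not a valid ASCII character, so it is replaced with '?' (63), and reading the file as text lines and re-joining them with Windows line endings inserts a CR before the bare LF at offset 7, which is largely why the total grew from 31229 to 31416 bytes. A rough Python analogue of the effect (an illustration, not the actual Get-Content implementation):
# Rough analogue of reading binary data as ASCII text and re-joining the lines.
png_header = bytes([137, 80, 78, 71, 13, 10, 26, 10])  # first 8 bytes of a PNG file

text = png_header.decode("ascii", errors="replace")    # 0x89 is not valid ASCII
text = "\r\n".join(text.splitlines()) + "\r\n"         # line split/join uses CRLF
round_tripped = text.encode("ascii", errors="replace")

print(list(round_tripped))  # [63, 80, 78, 71, 13, 10, 26, 13, 10]: 137 -> 63, extra 13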

Lua string.format and use of newline or control characters

I'm trying to use string.format for raw output to the UART using NodeMCU.
I'm trying the function
uart.write(0,string.format("loop %03d local: %02d | gmt %02d:%02d:%02d local %02d/%02d/%04d\n",loops,timezonetime,gmthours,gmtmins,gmtsecs,Nmonth,Nday,Nyear))
but the \n is ignored, and text is concatenated.
print(string.format("loop %03d local: %02d | gmt %02d:%02d:%02d local %02d/%02d/%04d",loops,timezonetime,gmthours,gmtmins,gmtsecs,Nmonth,Nday,Nyear))
works as expected, but I can't control the newline always added by print()
How can I use uart.write and string.format to control the output including the placement and use of newline and other control characters?
The issue is a result of newline handling in LuaLoader, which I was using to access the NodeMCU board. When used with PuTTY, the output is as expected.
Here are the results of more detailed testing. At first it appeared that \r did not work in the string parameter passed to uart.write().
-- uart.write Test
print("______first test____________") -- prime the output with a line and newline
uart.write(0,"asdfasdfasdfasdfasdf") -- no newline
print("______should be at end of same line as asdf...______")
uart.write(0,"asdfasdfasdfasdfasdf(newline)\n") -- with newline
print("______should be on line following asdf...____________")
uart.write(0,"asdfasdfasdfasdfasdf(CR)\r") -- with return only
uart.write(0,"OVERWRITE\n") -- overwrite the first part of asdf line, then newline
print("______should be on newline below OVERWRITE line ____________")
Output results:
dofile("uwtest.lua")
______first test____________
asdfasdfasdfasdfasdf______should be at end of same line as asdf...______
asdfasdfasdfasdfasdf(newline)
______should be on line following asdf...____________
asdfasdfasdfasdfasdf(CR)
OVERWRITE
______should be on newline below OVERWRITE line ____________
>
The expected result is that the string "asdfasdfasdfasdfasdf(CR)\r" ends with a CR but no LF, so the terminal cursor returns to the start of the line and the next write overwrites it.
This appears to be an issue with the terminal emulation in LuaLoader.
When I connect to the NodeMCU with PuTTY, I get this output:
> dofile("uwtest.lua")
______first test____________
asdfasdfasdfasdfasdf______should be at end of same line as asdf...______
asdfasdfasdfasdfasdf(newline)
______should be on line following asdf...____________
OVERWRITEsdfasdfasdf(CR)
______should be on newline below OVERWRITE line ____________
>
The PuTTY output is as expected.

How to grep umlauts and other accented text characters via AppleScript

I have a problem trying to execute shell scripts from AppleScript. I do a grep, but as soon as the pattern contains special characters it doesn't work as intended.
(The script reads a list of subfolders in a directory and checks whether any of the subfolders appear in a file.)
Here is my script:
set searchFile to "/tmp/output.txt"
set theCommand to "/usr/local/bin/pdftotext -enc UTF-8 some.pdf" & space & searchFile
do shell script theCommand
tell application "Finder"
set companies to get name of folders of folder ("/path/" as POSIX file)
end tell
repeat with company in companies
set theCommand to "grep -c " & quoted form of company & space & quoted form of searchFile
try
do shell script theCommand
set CompanyName to company as string
return CompanyName
on error
end try
end repeat
return false
The problem occurs, e.g., with strings containing umlauts. "theCommand" is somehow encoded differently than when I type the command on the CLI directly.
$ grep -c 'Württemberg' '/tmp/output.txt' --> typed on command line
3
$ grep -c 'Württemberg' '/tmp/output.txt' --> copy & pasted from AppleScript
0
$ grep -c 'rttemberg' '/tmp/output.txt' --> no umlauts, no problems
3
The "ü" from the first and the second line are different; a echo 'Württemberg' | openssl base64 shows this.
I tried several encoding tricks at different places, basically everything I could find or think of.
Does anyone have any idea? How can I check which encoding a string has?
Thanks in advance!
Sebastian
Overview
This can be made to work by escaping each accented character in each company name before it is used in the grep command.
So, you'll need to escape each one of those characters (i.e. those which have an accent) with double backslashes (i.e. \\). For example:
The ü in Württemberg will need to become \\ü
The ö in Königsberg will need to become \\ö
The ß in Einbahnstraße will need to become \\ß
Why is this necessary:
These accented characters, such as a u with diaeresis, are certainly getting encoded differently. Which type of encoding they receive is difficult to ascertain. My assumption is that the encoding pattern used begins with a backslash, which is why escaping those characters with backslashes fixes the issue. For example, in the C/C++ language the ü can be written as the escape \u00FC.
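As an aside on the question of how to check which encoding a string has: two strings that look identical on screen can still consist of different bytes, which is why the two ü's in the question produce different base64 output. A small Python sketch, purely illustrative and separate from the AppleScript solution, that makes such a difference visible (here using precomposed vs. decomposed forms of the same character as one example):
# Illustrative only: two visually identical strings with different byte encodings.
import base64
import unicodedata

precomposed = "W\u00fcrttemberg"                        # "ü" as one code point
decomposed = unicodedata.normalize("NFD", precomposed)  # "u" + combining diaeresis

print(precomposed == decomposed)                        # False, despite looking identical
print(base64.b64encode(precomposed.encode("utf-8")))    # b'V8O8cnR0ZW1iZXJn'
print(base64.b64encode(decomposed.encode("utf-8")))     # b'V3XMiHJ0dGVtYmVyZw=='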
Solution
In the complete script below you'll notice the following:
set accentedChars to {"ü", "ö", "ß", "á", "ė"} has been added to hold a list of all characters that will need to be escaped. You'll need to explicitly state each one as there doesn't seem to be a way to infer whether the character has an accent.
Before assigning the grep command to the theCommand variable, we first escape the necessary characters via the line reading:
set company to escapeChars(company, accentedChars)
As you can see, we are passing two arguments to the escapeChars sub-routine (i.e. the non-escaped company variable and the list of accented characters).
In the escapeChars sub-routine we iterate over each char in the accentedChars list and invoke the findAndReplace sub-routine. This escapes, with backslashes, any instances of those characters found in the company variable.
Complete script:
set searchFile to "/tmp/output.txt"
set accentedChars to {"ü", "ö", "ß", "á", "ė"}
set theCommand to "/usr/local/bin/pdftotext -enc UTF-8 some.pdf" & ¬
space & searchFile
do shell script theCommand
tell application "Finder"
set companies to get name of folders of folder ("/path/" as POSIX file)
end tell
repeat with company in companies
set company to escapeChars(company, accentedChars)
set theCommand to "grep -c " & quoted form of company & ¬
space & quoted form of searchFile
try
do shell script theCommand
set CompanyName to company as string
return CompanyName
on error
end try
end repeat
return false
(**
* Checks each character of a given word. If any characters of the word
* match a character in the given list of characters, they will be escaped.
*
* #param {text} searchWord - The word to check the characters of.
* #param {text} charactersList - List of characters to be escaped.
* #returns {text} The new text with the item(s) replaced.
*)
on escapeChars(searchWord, charactersList)
repeat with char in charactersList
set searchWord to findAndReplace(char, ("\\" & char), searchWord)
end repeat
return searchWord
end escapeChars
(**
* Replaces all occurrences of findString with replaceString
*
* #param {text} findString - The text string to find.
* #param {text} replaceString - The replacement text string.
* #param {text} searchInString - Text string to search.
* #returns {text} The new text with the item(s) replaced.
*)
on findAndReplace(findString, replaceString, searchInString)
set oldTIDs to text item delimiters of AppleScript
set text item delimiters of AppleScript to findString
set searchInString to text items of searchInString
set text item delimiters of AppleScript to replaceString
set searchInString to "" & searchInString
set text item delimiters of AppleScript to oldTIDs
return searchInString
end findAndReplace
Note about current counts:
Currently your grep command only reports the number of lines the word was found on, not how many instances of the word were found.
If you want the actual number of instances of the word, use the -o option with grep to output each occurrence, then pipe that to wc with the -l option to count the lines. For example:
grep -o 'Württemberg' /tmp/output.txt | wc -l
and in your AppleScript that would be:
set theCommand to "grep -o " & quoted form of company & space & ¬
quoted form of searchFile & "| wc -l"
Tip: If you want to remove the leading spaces in the count that gets logged, pipe it to sed to strip the spaces. For example, via your script:
set theCommand to "grep -o " & quoted form of company & space & ¬
quoted form of searchFile & "| wc -l | sed -e 's/ //g'"
and the equivalent via the command line:
grep -o 'Württemberg' /tmp/output.txt | wc -l | sed -e 's/ //g'

How to use the load command with MySQL in Ruby/Rails?

In MySQL, I can run:
LOAD DATA LOCAL INFILE "/home/pt/test/bal.csv" INTO TABLE bal FIELDS
TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '"' IGNORE 1
LINES;
However, in my Ruby program:
str="LOAD DATA LOCAL INFILE "/home/pt/test/bal.csv" INTO TABLE bal
FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '"'
IGNORE 1 LINES;"
puts str
dbh.query(str)
The output is:
LOAD DATA LOCAL INFILE "/home/pt/test/bal.csv" INTO TABLE bal FIELDS
TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '"' IGNORE 1
LINES;
/home/pt/test/ptb.rb:34:in `query': Field separator argument is not what
is expected; check the manual (Mysql::Error)
from /home/pt/test/ptb.rb:34:in `<main>'
What's wrong with this code?
Remove the space in ENCLOSED BY ' \"'
str="LOAD DATA LOCAL INFILE \"/home/pt/test/bal.csv\" INTO TABLE bal FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\n' IGNORE 1 LINES;"
You may also find the %Q[] Ruby syntax useful. It's an analog of "...", but you don't need to escape " inside the string:
str=%Q[LOAD DATA LOCAL INFILE "/home/pt/test/bal.csv" INTO TABLE bal FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n' IGNORE 1 LINES;]
