Printing on a dot matrix printer with only Tx, Rx and Ground

My hardware communicates with a PC serial port using the Tx, Rx and ground lines to send and receive data. I wrote a program that sends text to the PC over this serial line; now I want to alter it so that I can print that data on a dot matrix printer that has a serial port.
I am using an Epson LQ 1150 dot matrix printer, which has a DB25 serial port. I tried connecting it in the following manner and sent data over the serial line; the printer prints some garbage characters and hangs.
Tx = Rx
Rx = Tx
Ground = Ground
I tried searching, and other posts explain the following wiring (DB9-M to DB25-F):
Receive Data (RxD) 2 = 2 Transmit Data (TxD)
Transmit Data (TxD) 3 = 3 Receive Data (RxD)
Data Terminal Ready (DTR) 4 = 6 Data Set Ready (DSR)
System Ground (SG) 5 = 7 System Ground (SG)
Data Set Ready (DSR) 6 = 20 Data Terminal Ready (DTR)
Clear To Send (CTS) 8 = 4 Request To Send (RTS)
But I don't have DTR, DSR, and CTS. Is there any way I can make this work using only the 3 lines, i.e. Tx, Rx and ground?
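With only three wires there is no hardware handshaking, so the usual approach is to disable RTS/CTS and DTR/DSR flow control on both ends and, if the printer supports it, enable XON/XOFF software flow control so the printer can pause the PC when its buffer fills; garbage characters usually point to a baud rate, parity or data-bit mismatch rather than wiring. A minimal pyserial sketch of the PC side under those assumptions (the port name and line settings are placeholders and must match the printer's serial settings, typically set via DIP switches on Epson dot matrix printers):

import serial

# Placeholder settings: adjust to match the printer's DIP-switch configuration.
ser = serial.Serial(
    port='COM1',                      # or '/dev/ttyS0' on Linux
    baudrate=9600,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    xonxoff=True,                     # software flow control over Tx/Rx only
    rtscts=False,                     # no hardware handshake lines available
    dsrdtr=False,
)
ser.write(b'Hello printer\r\n')
ser.close()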

Related

Raspberry Pi Pico saves data in multiple txt files when connected to an external power source

I have a temperature sensor connected to a Raspberry Pi Pico and a main.py which records the temperature data into a .txt file. When I connect the Pico to the PC via the USB port, it starts recording and saves the data in one .txt file. However, when I connect it to an external power supply (an AC-DC adapter with 5 V output), the data is saved in multiple .txt files of different sizes (named logfile_0001.txt, logfile_0002.txt, ... as expected).
main.py starts running whenever the Pico is connected to the external power supply, and I can confirm that by the LED, which I programmed to blink when it starts collecting data and after writing the first three lines.
The weird thing is, I never observe the LED blinking beyond that initial blink right after it is connected to external power, even though it should blink whenever it writes the three header lines. Yet each separate .txt file does contain those three lines as headers. I am very confused how a file can have the three lines without the LED blinking, too.
Here is the code, after setting up the I2C buses using the machine module in MicroPython:
# naming the file
num = 0
for file in os.ilistdir():
    file_name = file[0]
    if not file[0].startswith('logfile'):
        continue
    temp = int(file_name[8:13])
    if temp > num:
        num = temp
num += 1
file_name = f'logfile_{num:05d}.txt'

with open(file_name, 'w') as f:
    f.write('Humidity and temperature data taken using T9602 Humidity & Temperature Sensor, on Raspberry Pi Pico\n')
    f.write(str(time.localtime(start_time)) + '\n')
    f.write('unixTime Humidity1(%) Temperature1(C) Humidity2(%) Temperature2(C)\n')

# LED blinks when connected
if True:
    led_onboard.value(1)
    for i in range(3):
        led_onboard.value(0)
        utime.sleep(0.3)
        led_onboard.value(1)
        utime.sleep(0.3)
    led_onboard.value(0)

while True:
    # turn on LED when recording data
    led_onboard.value(1)
    # communicate with the two i2c devices (sensors)
    i2c.writeto(address[0], b'1')
    i2c2.writeto(address2[0], b'1')
    # calculating humidity/temperature based on the data read
    RH1, TH1 = data_calc(i2c.readfrom(address[0], 4))
    RH2, TH2 = data_calc(i2c2.readfrom(address2[0], 4))  # second sensor read, implied by the write/print lines
    with open(file_name, 'a') as f:
        f.write(f'{time.time()} {RH1} {TH1} {RH2} {TH2}\n')
    print(f'{time.localtime(time.time())[:-2]} {RH1} {TH1} {RH2} {TH2}')
    time.sleep(1)
If someone has any insights, I would greatly appreciate it! Thank you so much.
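Each new logfile_XXXXX.txt can only come from the file-naming block at the top, which runs once per start of main.py, so multiple files mean the script is restarting from the top. A hypothetical diagnostic (not from the original post) is to record the reset cause in each file's header and check whether every file corresponds to a fresh boot; machine.reset_cause() is assumed to be available on this MicroPython port:

import machine
import time

# Hypothetical check: append the reset cause and boot time to the header.
# If every new file starts with its own power-on entry, the board is
# rebooting (e.g. an unstable 5 V supply), which re-runs main.py and
# creates a new file each time.
with open(file_name, 'a') as f:
    f.write(f'boot: reset_cause={machine.reset_cause()} t={time.time()}\n')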

pyserial issues with high baudrate FTDI

I have the following setup:
An FPGA sending out data on UART at a baud rate of 3 Mbps. The data transmitted is a chunk of 1024 bytes sent at a variable periodicity ranging from 20 ms to 200 ms (so even in the worst case, the data rate is far below 3 Mbps).
An FTDI 232RG.
A piece of Python running on my computer (Windows), basically doing this: open a COM port with pyserial at 3 Mbps, poll in_waiting until it reaches the size of a packet (1024 bytes), then format the received packet and print it on screen.
The script works well at a low repetition frequency, but I face issues at higher repetition rates (typically 20 ms). When the periodicity is 20 ms, I eventually end up getting a buffer overflow somewhere upstream of in_waiting. I checked the timing of my Python loop and it takes about 4 ms, so it looks like something upstream (in the FTDI or Windows) feeds the pyserial buffer with more than one packet within the 4 ms following a packet.
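For reference, a sketch of such a receive loop, using a blocking read instead of polling in_waiting (a design choice that avoids busy-waiting); the port name is a placeholder, and on Windows pyserial also exposes set_buffer_size() to enlarge the driver-side receive buffer, which may help when packets arrive faster than the loop drains them:

import serial

PACKET_SIZE = 1024  # bytes per FPGA packet, from the description above

ser = serial.Serial('COM5', baudrate=3000000, timeout=1)  # 'COM5' is a placeholder
ser.set_buffer_size(rx_size=65536)  # Windows-only pyserial call; enlarges the RX buffer

while True:
    # A blocking read returns as soon as PACKET_SIZE bytes are available
    # (or the timeout expires), instead of busy-polling in_waiting.
    packet = ser.read(PACKET_SIZE)
    if len(packet) == PACKET_SIZE:
        print(packet.hex())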
I tried lowering the FTDI latency timer in the driver (from the 16 ms default down to a few ms), but it does not seem to help.
I am currently clueless about what is happening. Would you have any advice on how to understand the problem better?
Thanks for your help!
You could create a "loop" between TX and RX and run the following code (tested with an FT2232H, so most likely you need to change the identifier string):
import time
import serial
import serial.tools.list_ports

print([(x[0], x[2]) for x in serial.tools.list_ports.comports()])
port = [x[0] for x in serial.tools.list_ports.comports() if "FT4Q1LJFB" in x[2]][0]
ser = serial.Serial(port, 12000000)

while True:
    t0 = time.time()
    counter = 0
    for i in range(1000):
        ser.write([1] * 3000)
        recv = ser.read(ser.inWaiting())
        delta_t = time.time() - t0
        counter += len(recv)
    print(counter / delta_t)
For me, the following output is shown:
[('COM7', 'USB VID:PID=0403:6010 SER=FT4Q1LJFA'), ('COM8', 'USB VID:PID=0403:6010 SER=FT4Q1LJFB')]
0.0
0.0
0.0
0.0
96787.81184093593
1201991.0268273412
1201197.0857713912
1201166.9350959768
1201445.4072856384
You will notice that it is 0.0 in the beginning. This is because I connected RX and TX only after starting the program, resulting in a ramp-up of the received bytes. The port runs in the "default" framing, meaning 8 data bits + 1 start bit + 1 stop bit = 10 bits per word, which explains why "only" 1.2 Mbytes per second are transmitted at 12 Mbaud.

Reassembling packets in a Lua Wireshark Dissector

I'm trying to write a dissector for the Safari Remote Debug protocol, which is based on bplists, and have been reasonably successful (current code is here: https://github.com/andydavies/bplist-dissector).
I'm running into difficulty with reassembling packets, though.
Normally the protocol sends a packet with 4 bytes containing the length of the next packet, then the packet with the bplist in it.
Unfortunately, some packets from the iOS simulator don't follow this convention: the four bytes are either tacked onto the front of the bplist packet, or onto the end of the previous bplist packet, or the data is multiple bplists.
I've tried reassembling them using desegment_len and desegment_offset as follows:
function p_bplist.dissector(buf, pkt, root)
    -- length of data packet
    local dataPacketLength = tonumber(buf(0, 4):uint())
    local desiredPacketLength = dataPacketLength + 4
    -- if not enough data, indicate how much more we need
    if desiredPacketLength > buf:len() then
        pkt.desegment_len = dataPacketLength
        pkt.desegment_offset = 0
        return
    end
    -- have more than needed, so set offset for next dissection
    if buf:len() > desiredPacketLength then
        pkt.desegment_len = DESEGMENT_ONE_MORE_SEGMENT
        pkt.desegment_offset = desiredPacketLength
    end
    -- copy data needed
    buffer = buf:range(4, dataPacketLength)
    ...
What I'm attempting to do here is force the size bytes to always be the first four bytes of a packet to be dissected, but it doesn't work: I still see a 4-byte packet, followed by an x-byte packet.
I can think of other ways of managing the extra four bytes on the front, but the protocol contains a lookup table that's 32 bytes from the end of the packet, so I need a way of accurately splicing the packet into bplists.
Here's an example cap: http://www.cloudshark.org/captures/2a826ee6045b (#338 is an example of a packet where the bplist size is at the start of the data and there are multiple plists in the data).
Am I doing this right (looking at other questions on SO and examples around the web, I seem to be), or is there a better way?
The TCP dissector packet-tcp.c has tcp_dissect_pdus(), which:
Loop for dissecting PDUs within a TCP stream; assumes that a PDU
consists of a fixed-length chunk of data that contains enough information
to determine the length of the PDU, followed by rest of the PDU.
There is no such function in the Lua API, but it is a good example of how to do it.
One more example. I used this a year ago for tests:
local slicer = Proto("slicer", "Slicer")

function slicer.dissector(tvb, pinfo, tree)
    local offset = pinfo.desegment_offset or 0
    local len = get_len() -- for tests I used a constant, but it can be taken from the tvb
    while true do
        local nxtpdu = offset + len
        if nxtpdu > tvb:len() then
            pinfo.desegment_len = nxtpdu - tvb:len()
            pinfo.desegment_offset = offset
            return
        end
        tree:add(slicer, tvb(offset, len))
        offset = nxtpdu
        if nxtpdu == tvb:len() then
            return
        end
    end
end

local tcp_table = DissectorTable.get("tcp.port")
tcp_table:add(2506, slicer)

Scapy - retrieving RSSI from WiFi packets

I'm trying to get RSSI or signal strength from WiFi packets.
I also want the RSSI of 'WiFi probe requests' (when somebody is searching for WiFi hotspots).
I managed to see it in the Kismet logs, but that was only to make sure it is possible; I don't want to use Kismet all the time.
For 'full-time scanning' I'm using scapy. Does anybody know where I can find the RSSI or signal strength (in dBm) in packets sniffed with scapy? I don't know how the whole packet is built, and there are a lot of 'hex' values which I don't know how to parse/interpret.
I'm sniffing on both interfaces: wlan0 (detecting when somebody connects to my hotspot) and mon.wlan0 (detecting when somebody is searching for hotspots).
The hardware (WiFi card) I use is based on a Prism chipset (ISL3886). However, the test with Kismet was run on Atheros (AR2413) and Intel iwl4965.
Edit1:
It looks like I need to somehow access the information stored in the PrismHeader class:
http://trac.secdev.org/scapy/browser/scapy/layers/dot11.py
line 92?
Does anybody know how to access this information?
packet.show() and packet.show2() don't show anything from this class/layer.
Edit2:
After more digging, it appears that the interface just isn't set up correctly, and that's why it doesn't collect all the necessary headers.
If I run Kismet and then sniff packets from that interface with scapy, there is more info in the packet:
###[ RadioTap dummy ]###
version= 0
pad= 0
len= 26
present= TSFT+Flags+Rate+Channel+dBm_AntSignal+Antenna+b14
notdecoded= '8`/\x08\x00\x00\x00\x00\x10\x02\x94\t\xa0\x00\xdb\x01\x00\x00'
...
Now I only need to set up the interface correctly without using Kismet.
Here is a valuable scapy extension that improves scapy.layers.dot11's parsing of present-but-not-decoded fields:
https://github.com/ivanlei/airodump-iv/blob/master/airoiv/scapy_ex.py
Just use:
import scapy_ex
And:
packet.show()
It'll look like this:
###[ 802.11 RadioTap ]###
version = 0
pad = 0
RadioTap_len= 18
present = Flags+Rate+Channel+dBm_AntSignal+Antenna+b14
Flags = 0
Rate = 2
Channel = 1
Channel_flags= 160
dBm_AntSignal= -87
Antenna = 1
RX_Flags = 0
To summarize:
signal strength was not visible because something was wrong with the way 'monitor mode' was set up (not all headers were passed/parsed by the sniffers); that monitor interface was created by hostapd.
now I'm setting monitor mode on the interface with airmon-ng, and tcpdump and scapy show these extra headers.
Edit: use scapy 2.4.1+ (or the GitHub dev version). Recent versions now correctly decode the "notdecoded" part.
For some reason the packet structure has changed: now dBm_AntSignal is the first element in notdecoded.
I am not 100% sure of this solution, but I used sig_str = -(256 - ord(packet.notdecoded[-2:-1])) to reach the first element, and I get values that seem to be dBm_AntSignal.
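In line with the scapy 2.4.1+ note above, recent scapy decodes the RadioTap header properly, so the signal strength can be read as a named field instead of slicing notdecoded. A minimal sketch (the monitor-mode interface name is a placeholder):

from scapy.all import sniff, RadioTap, Dot11ProbeReq

def show_rssi(pkt):
    # RadioTap is decoded by scapy >= 2.4.1, so dBm_AntSignal is a named field
    if pkt.haslayer(RadioTap) and pkt.haslayer(Dot11ProbeReq):
        print(pkt.addr2, pkt[RadioTap].dBm_AntSignal)

sniff(iface='mon0', prn=show_rssi)  # 'mon0' is a placeholder interface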
I am using OpenWRT on a TP-Link MR3020 with extroot and the Edward Keeble Passive Wifi Monitoring project with some modifications.
I use scapy_ex.py and I got this information:
802.11 RadioTap
version = 0
pad = 0
RadioTap_len= 36
present = dBm_AntSignal+Lock_Quality+b22+b24+b25+b26+b27+b29
dBm_AntSignal= 32
Lock_Quality= 8
If someone still has the same issue, I think I have found the solution:
I believe this is the right slice for the RSSI value:
sig_str = -(256-ord(packet.notdecoded[-3:-2]))
and this one is for the noise level:
noise_str = -(256-ord(packet.notdecoded[-2:-1]))
The fact that it says "RadioTap" suggests that the device may supply Radiotap headers, not Prism headers, even though it has a Prism chipset. The p54 driver appears to be a "SoftMAC" driver, in which case it will probably supply Radiotap headers; are you using the p54 driver or the older prism54 driver?
I have a similar problem: I set up monitor mode with airmon-ng and I can see the dBm level in tcpdump, but whenever I try sig_str = -(256-ord(packet.notdecoded[-4:-3])) I get -256, because the value returned from notdecoded is 0. The packet structure looks like this:
version = 0
pad = 0
len = 36
present = TSFT+Flags+Rate+Channel+dBm_AntSignal+b14+b29+Ext
notdecoded= ' \x08\x00\x00\x00\x00\x00\x00\x1f\x02\xed\x07\x05
.......

How to manually set a minimal value for the dynamic buffer in Netty 3.2.6? For example, 2048 bytes

I need to receive a full packet from another IP (a navigation device) over TCP/IP.
The device sends 966 bytes periodically (about once a minute), for example.
In my case, the first received buffer has a length of 256 bytes (the first piece of the packet), the second is 710 bytes (the last piece of the packet), and the third is a full packet (966 bytes).
How do I manually set a minimal value for the first received buffer's length?
This is a piece of my code:
Executor bossExecutors = Executors.newCachedThreadPool();
Executor workerExecutors = Executors.newCachedThreadPool();

NioServerSocketChannelFactory channelsFactory =
        new NioServerSocketChannelFactory(bossExecutors, workerExecutors);
ServerBootstrap bootstrap = new ServerBootstrap(channelsFactory);

ChannelPipelineFactory pipelineFactory = new NettyServerPipelineFactory(this.HWController);
bootstrap.setPipelineFactory(pipelineFactory);
bootstrap.setOption("child.tcpNoDelay", true);
bootstrap.setOption("child.keepAlive", true);
bootstrap.setOption("child.receiveBufferSizePredictorFactory",
        new FixedReceiveBufferSizePredictorFactory(2048));
bootstrap.bind(new InetSocketAddress(this.port));
No matter what receiveBufferSizePredictorFactory you specify, you can still see a message split into multiple MessageEvents. This is because TCP/IP is not a message-oriented protocol but a stream-oriented one. Please read the user guide, which explains how to write a proper decoder that deals with this common issue (for a fixed 966-byte message, Netty's FixedLengthFrameDecoder is one such decoder).
