I am using a SIM800L module with a Texas Instruments LaunchPad (MSP430G2553 microcontroller), without an external library for the SIM800L.
Problem Statement:
A simple text message (SMS in text mode) is sent, but it arrives as a blank message on the cellphone.
SIM details:
1. SIM 1: Location: India. Operator: AirTel, 4G-compatible SIM card.
2. SIM 2: Location: India. Operator: Tata Docomo, 3G-compatible SIM card.
What I know already:
UART drivers in the firmware are tested and working; they are interrupt-driven, not polling.
No blocking time delays are used as a substitute for reading AT command responses. I read the response and proceed only if a positive acknowledgement is received (<CR><LF>OK<CR><LF> for most commands).
I have confirmed the data bits transmitted and received on the Tx/Rx pins with an oscilloscope. Everything looks as expected, including voltage levels.
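The acknowledgement check amounts to roughly this (the firmware is C; this Python sketch is only illustrative, and the buffer handling is mine, only the <CR><LF>OK<CR><LF> framing comes from the module):

```python
# Sketch of the non-blocking "proceed only on positive acknowledgement" check.
OK_TOKEN = b"\r\nOK\r\n"
ERROR_TOKEN = b"\r\nERROR\r\n"

def classify_response(buffer: bytes):
    """Return 'ok', 'error', or None (keep accumulating) for a response buffer."""
    if OK_TOKEN in buffer:
        return "ok"
    if ERROR_TOKEN in buffer:
        return "error"
    return None  # incomplete: wait for more UART bytes
```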
What I have read:
Some speculation from unofficial sources (forums, of course) that the SIM800L is 2G-only.
(A shallow read of Wikipedia:) I have gone through GSM 03.38 and GSM 03.40, in particular the Data Coding Scheme section, to understand how text encoding is handled by the relevant AT command (AT+CSMP).
Various forums, including the Arduino ones, where SIM800L modules are very popular.
Related posts on Stackoverflow:
Recieving Blank SMS SIM800 using AT Commands and Python on Raspberry Pi 2
How to send SMS with GSM module SIM800 and Arduino Uno?
Sending GSM Character Set in SMS with SIM800L Module
The answer to the first one seemed to have worked for its author; it didn't work for me.
What have I tried:
I have used the same module with an instance of the Docklight serial terminal. SMS sent from Docklight are received on my cellphone and appear as expected, not blank.
On day 0, before integrating the module with the LaunchPad hardware, I tested the overall firmware state machine against exact copies of the expected SIM800L responses.
The results for both SIM cards are the same, except for some of the initial configuration, but I load a typical set of configuration values into both before initiating any SMS-related task.
Typical values that I use are:
Echo Off
CSMP: 17, 167, 0, 0 (no luck with this either). The default from SIM 1 is 17, 11, 0, 246, and from SIM 2 it is 17, 255, 0, 0.
CSCS: "IRA"
Failed combinations on serial port: (SIM 1 and SIM 2)
CSMP: 17, 11, 0, 246 | CSCS: "IRA" - Sends a blank SMS
CSMP: 17, 11, 0, 246 | CSCS: "GSM" - Sends a blank SMS
CSMP: 17, 11, 0, 246 | CSCS: "HEX" - Sends a blank SMS
Successful combinations on serial port: (SIM 1 and SIM 2)
CSMP: 17, 167, 0, 0 | CSCS: "IRA"
CSMP: 17, 167, 0, 8 | CSCS: "IRA"
CSMP: 17, 11, 0, 0 | CSCS: "GSM"
CSMP: 17, 167, 0, 0 | CSCS: "GSM"
CSMP: 17, 167, 0, 8 | CSCS: "GSM"
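For reference, AT+CSMP takes <fo>,<vp>,<pid>,<dcs>, and the last field is a GSM 03.38 Data Coding Scheme octet. A rough decoder of the dcs values above (my own reading of GSM 03.38, in illustrative Python since the firmware is C):

```python
def describe_dcs(dcs: int) -> str:
    """Rough decode of a GSM 03.38 SMS Data Coding Scheme octet (sketch)."""
    if (dcs & 0xF0) == 0xF0:
        # "Data coding / message class" group: bit 2 = alphabet, bits 1..0 = class
        alphabet = "8-bit data" if dcs & 0x04 else "GSM 7-bit"
        return f"{alphabet}, message class {dcs & 0x03}"
    if (dcs & 0xC0) == 0x00:
        # "General data coding" group: bits 3..2 select the alphabet
        return {0: "GSM 7-bit", 1: "8-bit data", 2: "UCS2"}.get((dcs >> 2) & 0x03, "reserved")
    return "other coding group"
```

Under this reading, dcs=0 and dcs=8 select the GSM 7-bit and UCS2 alphabets respectively, while dcs=246 (0xF6) decodes as 8-bit data, message class 2; class-2 messages are stored to the (U)SIM rather than displayed, which is consistent with those combinations appearing blank on the phone.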
To be honest, I played hunches with these combinations before studying which field reflects which change (they are poorly documented in the SIM800L user guide).
Any idea what I might be missing here? I am open to the thought that this is more of an RTFM (Read The Fat Manual) issue.
OK, managed to resolve the issue.
It was not about the SIM800L at all.
The whole payload was followed by a '\0', which the module does not expect (I know, very poor on my side). The serial terminal has no issue with it whatsoever.
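For anyone hitting the same thing: in text mode the message body is terminated with Ctrl-Z (0x1A), and a stray C-string NUL inside the body corrupts it. A minimal sketch of the guard (illustrative Python; the real fix lives in the C firmware):

```python
def sms_body(raw: bytes) -> bytes:
    """Strip a stray C-string '\\0' terminator before sending the SMS body.

    In text mode the body is terminated by sending Ctrl-Z (0x1A) separately,
    so no NUL byte should ever be part of the payload."""
    return raw.rstrip(b"\x00")
```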
Debugging was fun!
Related
I have two Roland MIDI devices that behave the same way when I try to send a bank and program change: it always selects the first patch of the bank and won't change to the patch I choose within the bank. Pro Logic can, however, switch to different banks.
The following example causes the devices to change to the bank, but the program (patch) on the device defaults to the first in that bank and not number 9.
var event = AKMIDIEvent(controllerChange: 0, value: 89, channel: 0)  // CC0: bank select MSB
midiOut.sendEvent(event)
event = AKMIDIEvent(controllerChange: 32, value: 64, channel: 0)     // CC32: bank select LSB
midiOut.sendEvent(event)
event = AKMIDIEvent(programChange: 9, channel: 0)                    // program change
midiOut.sendEvent(event)
Anyone have experience with sending these MIDI messages?
I was going through the same issue and was about to go crazy. It turns out the Program Change values in various vendors' MIDI data specifications are 1-based, not 0-based. Or perhaps it is the AudioKit implementation that is off by one?
So, instead of a programChange value of 9 you should use a value of 8. Here is my code for changing the current instrument on channel 0 to the Bösendorfer grand piano on a Yamaha Clavinova keyboard, where the programChange value in the MIDI data specification is designated as 1.
midiOut.sendControllerMessage(0, value: 108) // MSB sound bank selection
midiOut.sendControllerMessage(32, value: 0) // LSB sound bank selection
midiOut.sendEvent(AKMIDIEvent(programChange: 0, channel: 0)) // Initiate program change based on MSB and LSB selections
While reading various documentation about how MIDI works, I also saw some forum posts describing keyboards that expect the LSB bank selection before the MSB bank selection. That is not my understanding of how MIDI should work, but it is worth a try if you still cannot make it work with your Roland keyboards.
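For clarity, the bank select plus program change sequence boils down to three raw MIDI messages. A sketch in Python rather than Swift (the status-byte math is standard MIDI; the -1 is the 1-based to 0-based conversion discussed earlier in this answer):

```python
def bank_program_change(channel: int, msb: int, lsb: int, program_1based: int):
    """Build the raw MIDI bytes for a bank select followed by a program change.

    program_1based is the number printed in the vendor's voice list (1-based);
    the value on the wire is 0-based, hence the -1."""
    cc_status = 0xB0 | channel  # control change status byte for this channel
    pc_status = 0xC0 | channel  # program change status byte for this channel
    return [
        (cc_status, 0, msb),              # CC0: bank select MSB
        (cc_status, 32, lsb),             # CC32: bank select LSB
        (pc_status, program_1based - 1),  # program change, 0-based on the wire
    ]
```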
I was trying to connect peripherals over the SPI bus and it didn't work, so I checked the outputs with an oscilloscope and discovered that the chip doesn't respond to the spi library commands.
The only thing I get is noise on TX and RX; the voltages on the other pins do not change at all. I tested this on two NodeMCUs (an unofficial LoLin and an Amica) with both the master and dev firmware. Here are the spi commands:
spi.setup(1, spi.MASTER, spi.CPOL_LOW, spi.CPHA_LOW, 20, 8)
spi.send(1, 170, 170, 170, 170) -- 170 == 0b10101010
What could be the problem?
Edit
The TX/RX noise turned out to be the UART signal from the serial communication with the computer.
The SPI bus works. It's just too fast for my crappy oscilloscope.
I also discovered that the databits argument can be in the range [1, 32] and clock_div in [0, ~1200].
I'm developing a forwarding protocol using Contiki OS, running on top of IEEE 802.15.4 TSCH mode. The protocol needs to enqueue a certain number of packets within a short period of time, and very often I get the following error:
[RLL]:Send to Parent 0 base timeslot: 40, currentTimeslot: 1, send timeslot: 45 at: asn-0.46c41d
TSCH: send packet to 255 with seqno 0, queue 0 1, len 8 120
[RLL]:Send to CS base timeslot: 40, currentTimeslot: 2, send timeslot: 50 at: asn-0.46c41e
TSCH-queue:! add packet failed: 0 #0x20003004 8 #0x0 #0x0
TSCH:! can't send packet to 255 with seqno 0, queue 1 1
While the first packet is added, the second one can't be. The queue is not full; I checked that.
The error simply says it's not possible to allocate memory for another packet, while there should be more than enough space.
It's probably just a simple setting I overlooked, but I can't find it.
If anyone has a suggestion, please let me know.
Conrad
I'm puzzled by some behaviour of Ruby and how it manages memory.
I understand the behaviour of the Ruby GC (major or minor): if any object count goes above its threshold value or limit (heap_available_slots, old_objects_limit, remembered_shady_object_limit, malloc_limit), Ruby runs/triggers a GC (major or minor).
And if, after GC, it can't find enough memory, Ruby allocates (basically via malloc, I assume) more memory for the running program.
Also, it's a known fact that Ruby does not release memory back to the OS immediately.
Now...
What I fail to understand is how Ruby releases memory (back to the OS) without triggering any GC.
Example
require 'rbtrace'
index = 1
array = []
while (index < 20000000) do
  array << index
  index += 1
end
sleep 10
print "-"
array=nil
sleep
Here is my example, running the above code on Ruby 2.2.2p95.
htop displays the RSS count of the process (test.rb, PID 11843) reaching 161MB.
GC.stat (captured via the rbtrace gem) looks like this (pay close attention to the GC count):
rbtrace -p 11843 -e '[Time.now,Process.pid,GC.stat]'
[Time.now,Process.pid,GC.stat]
=> [2016-07-27 13:50:28 +0530, 11843,
{
"count": 7,
"heap_allocated_pages": 74,
"heap_sorted_length": 75,
"heap_allocatable_pages": 0,
"heap_available_slots": 30162,
"heap_live_slots": 11479,
"heap_free_slots": 18594,
"heap_final_slots": 89,
"heap_marked_slots": 120,
"heap_swept_slots": 18847,
"heap_eden_pages": 74,
"heap_tomb_pages": 0,
"total_allocated_pages": 74,
"total_freed_pages": 0,
"total_allocated_objects": 66182,
"total_freed_objects": 54614,
"malloc_increase_bytes": 8368,
"malloc_increase_bytes_limit": 33554432,
"minor_gc_count": 4,
"major_gc_count": 3,
"remembered_wb_unprotected_objects": 0,
"remembered_wb_unprotected_objects_limit": 278,
"old_objects": 14,
"old_objects_limit": 10766,
"oldmalloc_increase_bytes": 198674592,
"oldmalloc_increase_bytes_limit": 20132659
}]
*** detached from process 11843
GC count => 7
Approximately 25 minutes later, memory has dropped to 6MB, but the GC count is still 7.
[Time.now,Process.pid,GC.stat]
=> [2016-07-27 14:16:02 +0530, 11843,
{
"count": 7,
"heap_allocated_pages": 74,
"heap_sorted_length": 75,
"heap_allocatable_pages": 0,
"heap_available_slots": 30162,
"heap_live_slots": 11581,
"heap_free_slots": 18581,
"heap_final_slots": 0,
"heap_marked_slots": 120,
"heap_swept_slots": 18936,
"heap_eden_pages": 74,
"heap_tomb_pages": 0,
"total_allocated_pages": 74,
"total_freed_pages": 0,
"total_allocated_objects": 66284,
"total_freed_objects": 54703,
"malloc_increase_bytes": 3248,
"malloc_increase_bytes_limit": 33554432,
"minor_gc_count": 4,
"major_gc_count": 3,
"remembered_wb_unprotected_objects": 0,
"remembered_wb_unprotected_objects_limit": 278,
"old_objects": 14,
"old_objects_limit": 10766,
"oldmalloc_increase_bytes": 198663520,
"oldmalloc_increase_bytes_limit": 20132659
}]
Question: I was under the impression that Ruby releases memory whenever GC is triggered, but clearly that's not the case here.
Can anybody provide detail on how the memory is released back to the OS (as in, who triggers the release, since it's clearly not the GC)?
OS: OS X version 10.11.12
You are correct: it's not the GC that changed the physical memory requirements, it's the OS kernel.
You need to look at the VIRT column, not the RES column. As you can see VIRT stays exactly the same.
RES is physical (resident) memory; VIRT is virtual (allocated, but not necessarily resident) memory.
When the process sleeps it's not using its memory or doing anything, so the OS memory manager decides to swap out part of the physical memory and move it into virtual space.
Why keep an idle process hogging physical memory for no reason? So the OS is smart and swaps out as much unused physical memory as possible; that's why you see a reduction in RES.
I suspect you would see the same effect even without array = nil, just by sleeping long enough. Once you stop sleeping and access something in the array, RES will jump back up again.
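If you want to watch the same two numbers htop reports without htop, on Linux you can read them straight from /proc (a quick illustrative sketch; macOS, where the question was run, needs different tools such as vmmap instead):

```python
def linux_memory_kb():
    """Return (VmRSS, VmSize) for this process in kB, i.e. htop's RES and VIRT.

    Linux-only: /proc does not exist on macOS."""
    fields = {}
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(("VmRSS:", "VmSize:")):
                key, value = line.split(":", 1)
                fields[key] = int(value.split()[0])  # value looks like "1234 kB"
    return fields["VmRSS"], fields["VmSize"]
```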
You can read some more discussion through these:
What is RSS and VSZ in Linux memory management
http://www.darkcoding.net/software/resident-and-virtual-memory-on-linux-a-short-example/
What's the difference between "virtual memory" and "swap space"?
http://www.tldp.org/LDP/tlk/mm/memory.html
I was told to ask this here:
10:53:04.042608 IP 172.17.2.12.42654 > 172.17.2.6.6000: Flags [FPU], seq 3891587770, win 1024, urg 0, length 0
10:53:04.045939 IP 172.17.2.6.6000 > 172.17.2.12.42654: Flags [R.], seq 0, ack 3891587770, win 0, length 0
This states that the flags set are FPU and R. What do these flags stand for, and what kind of exchange is this?
The flags are:
F - FIN, used to terminate an active TCP connection from one end.
P - PUSH, asks that any data the receiving end is buffering be sent to the receiving process.
U - URGENT, indicating that there is data referenced by the urgent pointer.
R - RESET, indicating that a packet was received that was NOT part of an existing connection.
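These flags are single bits in the TCP flags octet (RFC 793), so tcpdump's notation can be decoded mechanically. A quick illustrative sketch, printing ACK as '.' the way tcpdump does:

```python
# Standard TCP flag bit values from RFC 793, paired with tcpdump's letters.
TCP_FLAGS = [(0x01, "F"),   # FIN
             (0x02, "S"),   # SYN
             (0x04, "R"),   # RST
             (0x08, "P"),   # PSH
             (0x10, "."),   # ACK (tcpdump prints it as a dot)
             (0x20, "U")]   # URG

def decode_flags(octet: int) -> str:
    """Decode a TCP flags octet into tcpdump's letter notation."""
    return "".join(letter for bit, letter in TCP_FLAGS if octet & bit)
```

For the capture above, 0x29 (FIN|PSH|URG) decodes to FPU and 0x14 (RST|ACK) to R.; a FIN/PSH/URG probe with no payload is also the classic "Xmas"-style port scan pattern.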
It looks like the first packet was manufactured, or possibly delayed. The argument for it being manufactured is the urgent flag being set with no urgent data. If it was delayed, it indicates the normal end of a connection between .12 and .6 on port 6000, along with a request that the last of any pending data sent across the wire be flushed to the service on .6.
.6 has clearly forgotten about this connection, if it even existed. .6 is indicating that while it got the FIN packet, it believes that the connection that FIN packet refers to did not exist.
If .6 had a current matching connection, it would have replied with a FIN-ACK instead of RST, acknowledging the termination of the connection.