Yes, that’s what I meant. Thanks
May I ask why you are using the peak value for the RSSI, rather than the usual sequence on the IQ samples: complex to magnitude squared -> averaging -> 10log10(.)
to calculate the dBm?
Just to save computation. We’ve already done an FFT, so we just take the power values from that data.
I see. However, as far as I know, the peak is not the RSSI for a signal with a large BW.
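(For reference, here is a minimal NumPy sketch of the two approaches being discussed: the textbook time-domain RSSI pipeline from the question versus reading the peak power bin off an already-computed FFT. This is only an illustration, not the actual KrakenSDR code, and the outputs are relative dB rather than calibrated dBm:)

```python
import numpy as np

def rssi_time_domain(iq: np.ndarray) -> float:
    """Textbook pipeline: |x|^2 -> averaging -> 10*log10(.).
    Captures the total power across the whole occupied bandwidth."""
    return 10 * np.log10(np.mean(np.abs(iq) ** 2))

def rssi_fft_peak(iq: np.ndarray) -> float:
    """Peak FFT-bin power, reusing an FFT that was already computed.
    For a wideband signal this is the power of a single bin only,
    so it underestimates the total received power."""
    spectrum = np.fft.fft(iq) / len(iq)
    return 10 * np.log10(np.max(np.abs(spectrum) ** 2))

# Example: a tone plus noise (values are relative dB, not calibrated dBm)
rng = np.random.default_rng(0)
n = 4096
iq = np.exp(2j * np.pi * 0.1 * np.arange(n))
iq += 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
print(rssi_time_domain(iq), rssi_fft_peak(iq))
```

For a narrowband tone the two methods roughly agree, but as the question points out, a signal spread over many bins will read low on the peak-bin method.)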
I’m currently logging the DoA output to a CSV file and have noticed an interesting pattern. In the first column, which represents Unix Epoch time in milliseconds (13 digits), there’s a consistent pattern in the time differences between adjacent lines. For instance, the difference between the first and second lines is 0 ms, while the difference between the second and third lines is approximately 1311 ms. This pattern repeats throughout the entire file: the difference between the third and fourth lines is 0 ms, and between the fourth and fifth lines it’s around 1311 ms.
I’m curious about the reason behind this phenomenon. Since I’m aiming for the highest possible accuracy in determining the arrival time of the signal, I’d like to know whether these timestamps can be relied upon.
Is this the logging via the web GUI? Or via the Android App?
1311 ms between logs seems right. It should log every 1 s, but because the update rate (on a Pi 4) is ~430 ms, it will log every 430 * 3 = 1290 ms.
But I’m unsure as to why you’re sometimes seeing the same timestamp logged.
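(To make the arithmetic concrete, here is a small sketch under the assumption that the logger can only write when a processing block completes, so the effective spacing is the write interval rounded up to a whole number of blocks; the values are illustrative:

```python
import math

block_ms = 430            # approximate update/block time on a Pi 4
write_interval_ms = 1000  # default write interval

# The logger can only fire when a block finishes, so the effective
# spacing is the write interval rounded up to a whole number of blocks.
effective_ms = math.ceil(write_interval_ms / block_ms) * block_ms
print(effective_ms)  # 1290, close to the ~1311 ms seen in the CSV
```

The small remainder over 1290 ms would presumably come from processing overhead.)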
The logging is via the web GUI. So what happens if one or more signals arrive within the 1 sec logging interval?
The KrakenSDR samples by default in 436 ms blocks. If you have multiple different bursty signals arriving within 436ms on the same frequency you’ll have a problem.
If they are the same signal then it’s no problem.
The log will simply take the last reading every 1s. So if you are monitoring very short bursty signals that rarely transmit you might want to reduce the write interval to less than 436 ms.
Thank you for your help. It is very important.
If I modify the interval from 436 ms to something smaller or greater, what are the advantages and disadvantages? What exactly does this interval represent? Is it the time window within which the DoA algorithm integrates the signal and makes decisions? In that case, if I’m not using the GUI and only printing the value in the code, will each decision be printed no more often than once every 436 ms? Lastly, can the logging time be adjusted (via the GUI or the code) to log more or less frequently?
KrakenSDR processes data in blocks. So a block is recorded, then the data in that block is processed into a bearing.
436ms is the default time taken to record one block.
If you make the logging time longer, you stop the log file from growing huge too fast. This is better if you are logging over a long period of time.
If you make the logging time less than the block time, then you won’t miss any processed bearings. This is better if you are expecting signals to come in short bursts.
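(As a rough illustration of the record-block/process/log cycle described above, here is a hypothetical Python sketch. record_block, compute_doa, and the loop structure are all stand-ins invented for this example, not the project’s actual code; it just shows why a write interval longer than the block time drops intermediate bearings:

```python
import random
import time

BLOCK_S = 0.436          # default block (recording) time
WRITE_INTERVAL_S = 1.0   # log write interval

def record_block(duration_s: float) -> list[float]:
    """Stand-in for the DAQ: pretend to record one block of samples."""
    time.sleep(duration_s)
    return [random.random() for _ in range(8)]

def compute_doa(block: list[float]) -> float:
    """Stand-in for the DSP: pretend to turn a block into a bearing."""
    return 360 * random.random()

last_write = time.monotonic()
latest_bearing = None

for _ in range(10):  # a few cycles for demonstration
    block = record_block(BLOCK_S)
    latest_bearing = compute_doa(block)  # every block yields a bearing

    now = time.monotonic()
    if now - last_write >= WRITE_INTERVAL_S:
        # Only the most recent bearing is written; any bearings
        # processed since the previous write were overwritten above.
        print(f"{now:.3f}, {latest_bearing:.1f}")
        last_write = now
```

With the write interval at or below the block time, the condition fires on every block and no processed bearing is skipped.)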
Therefore, if I get a log every 1 sec I may lose some signals. How can I change the logging time? Is there a parameter in the UI?
Yes in the “Local Data Recording” box just change the “write interval” setting to less than the 436ms interval to capture every interval.
What is the minimum (safe) time for the block size? In my case, I need to capture signals with a time on air of ~50 msec. Also, multiple messages (2 or 3) are transmitted during the 436 ms, so I want to decrease this window in order to detect as many of them as possible.
Edit: Let me extend the question a little. Given that a system with 5 antennas can detect a maximum of 4 distinct signals (on the same frequency) using the MUSIC algorithm, consider a scenario where I set the block size to 200 msec and the logging time to 200 msec. What happens if:
- 3 signals (S1, S2, S3), each with a duration of 30 msec, arrive in series within the 200 msec window. Will all three signals be logged after the 200 msec logging time (i.e., 3 new rows in the CSV file with the same or different timestamps)? Or will just S3 be logged?
- 3 signals (S1, S2, S3), each with a duration of 30 msec, arrive simultaneously within the 200 msec window. Will all three signals be logged after the 200 msec logging time (i.e., 3 new rows in the CSV file with almost the same timestamp)? Or will just S3 be logged?
In a second scenario, I set the block size to 200 msec and the logging time to 600 msec. What happens if:
- 3 signals (S1, S2, S3), each with a duration of 30 msec, arrive in series within the first 200 msec window, and then 3 more signals (S4, S5, S6) arrive in the second 200 msec window. Will all six signals be logged after the 600 msec logging time (i.e., 6 new rows in the CSV file with the same or different timestamps)? Or will just S6 be logged?
Thank you
50 ms should be fine, but you might need a more powerful computing system than a Pi 4.
436 ms was chosen as this balances the compute on the DAQ and DSP sides with the limited processing power on the Pi 4. With a more powerful system that should not be an issue.
For point 1), all three signals within the 200 ms will be fed into the DOA DSP algorithm, and the results will be combined into a single average result (weighted by signal strength).
For point 2), it’s the same. If they arrive simultaneously you’ll just end up with the average.
For point 3), only the last 200ms window will be collected if you set the logging time to 600ms. The first two 200ms windows will be discarded.
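(To make the “average result with weighting on signal strength” idea concrete, here is an illustrative sketch of a power-weighted circular mean of bearings. This is only one plausible interpretation of the blending described above, not the project’s actual combining code:

```python
import numpy as np

def blend_bearings(bearings_deg, powers_db):
    """Power-weighted circular mean of bearings.
    Stronger signals dominate the single blended result."""
    w = 10 ** (np.asarray(powers_db) / 10)   # dB -> linear power weights
    ang = np.deg2rad(np.asarray(bearings_deg))
    # Average unit vectors so e.g. 359 deg and 1 deg blend to ~0 deg
    vec = np.sum(w * np.exp(1j * ang)) / np.sum(w)
    return np.rad2deg(np.angle(vec)) % 360

# Three bursts in one window: the strongest (at 90 deg) dominates
print(blend_bearings([30, 90, 150], [-80, -50, -75]))
```

Here the burst at 90 deg is 25-30 dB stronger than the others, so the blended bearing lands very close to 90 deg, matching the behavior where the strongest signal dominates the single record.)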
I see. Thank you. So I do get the new lines in the CSV. Is the “average result (with weighting on signal strength)” taken into account in the RSSI or in the confidence level?
Only the signal strength.
So the RSSI. Then will the 3 signals that fall within the same 200 ms window have the same timestamp (the end of the processing time) when logged to the CSV, or three different timestamps (their arrival times)?
Same timestamp. If the signals all arrive within the same processing window, then the signals will be sort of mashed together, you won’t be able to differentiate between them.
Think of it this way:
By default we collect any signals during a 436ms window and blend them all together. The blended result is then measured for DOA bearings. It’s not possible to differentiate between the different signals collected within 436ms.
There has to be some chunk of time to collect signals, otherwise the algorithm has nothing to work with; it can’t work on an individual-sample basis.
You can reduce the window, but you lose some processing gain (SNR) from that (irrelevant if the signal is shorter than the window, though).
Also, you need to take into account the processing requirements. If there is limited CPU, like on a Pi 4, we want to make sure the window is balanced. Otherwise the DAQ side could struggle to pump out many small windows, or the DSP side could struggle with too large a window. The GUI might also struggle if the update rate is too fast, since the graphs then need to redraw faster.
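(As a rough way to quantify the processing-gain tradeoff mentioned above: if integration gain scales as 10*log10(N) over N samples, halving the window costs about 3 dB. A small sketch under that assumption, where the sample rate is an illustrative value, not necessarily what the hardware uses:

```python
import numpy as np

fs = 2.4e6  # example sample rate in Hz (illustrative value)

for window_ms in (436, 200, 100, 50):
    n = int(fs * window_ms / 1e3)
    # Integration (processing) gain scales as 10*log10(N) samples, so
    # shrinking the window costs dB relative to the 436 ms default.
    gain_db = 10 * np.log10(n)
    print(f"{window_ms:>4} ms window: {gain_db:5.1f} dB integration gain")
```

As noted above, this penalty stops mattering once the signal itself is shorter than the window, since the extra window time only integrates noise.)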
I’m feeling a bit perplexed. Suppose I have three distinct signals arriving within a 436-millisecond time window. The first signal originates from angle A degrees, the second from angle B degrees, and the third from angle C degrees. Could you clarify what will be recorded in the CSV file concerning the timestamp, RSSI, and arrival angle for these signals?
One reading will be recorded: a blending of signals A, B, and C. The signal with the strongest RSSI will dominate the resulting single record.