Signal level/baseband tuning questions

Having used “rotating antenna” types of DF systems for many years (I completely rewrote the code of the VE2EMM system using simple DSP techniques), I’m familiar with the challenges and techniques used in direction finding. Being also very familiar with the hardware and capabilities of the R820T tuner and RTL2832U, I found the idea of leveraging this hardware (in the form of the KrakenSDR) for RDF intriguing - which is why I have been testing it recently.

The five-element array is similar to that proven with the TDOA style and sits on an aluminum plate atop my vehicle: eight 19" radials (at the corners of the plate) extend the virtual ground plane, permitting the array to sit above the rooftop, attached to the roof rack. This permits the array to be used on any convenient vehicle - whether or not it has a metal roof - and it has a fairly low tendency to skew the pattern. (I can provide photos of the arrangement if desired.)

Although it is only sparsely mentioned in the setup documentation, I’ve been able to properly calibrate the DOA bearing so that it agrees to within a few degrees relative to my vehicle, which helps minimize uncertainty once many bearings are accumulated. Using transmitters at known locations, I found it equal to or better than my older VE2EMM-type TDOA system under typical conditions in terms of indicating a useful bearing. Of course, the inclusion of the heat map - and the map in general - reduces the load on the “grey matter DSP” and simplifies the entire operation.

As the simultaneous dynamic range of the RTL-SDR receiver hardware is on the order of “about” 65 dB (given a 2 MHz sample rate and a 15 kHz detection bandwidth), I need to do further testing to see how best to tolerate an environment with signals of widely disparate strength. For example, here in the Salt Lake City area it’s not uncommon to have multiple signals of around -40 dBm or stronger at the terminals of an omnidirectional antenna (from 100+ watt mountain-top repeaters). Due to the limits of 8-bit sampling, these would submerge microvolt-level signals into the noise once the input front-end gain has been reduced to prevent ADC overload - a common situation when chasing a repeater jammer, where the miscreant’s signal is weak but the repeater is strong. I suspect that judicious and careful adjustment of sample rate and center frequency could mitigate this - which is the basis of one of the questions below.
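
For what it’s worth, the “about 65 dB” figure comes from back-of-the-envelope arithmetic like the sketch below (ideal 8-bit quantization SNR plus the processing gain of narrowing 2 MHz down to a 15 kHz detection bandwidth); real hardware gives up several dB to tuner noise and imperfect quantization, so the numbers here are only illustrative:

```python
# Rough estimate of the usable dynamic range of an 8-bit SDR front end:
# ideal quantization SNR plus the processing gain of the detection bandwidth.
import math

adc_bits = 8
sample_rate_hz = 2.0e6      # RTL2832U sample rate used in the example above
detection_bw_hz = 15e3      # NBFM detection bandwidth

ideal_adc_snr_db = 6.02 * adc_bits + 1.76                                 # ~49.9 dB for 8 bits
processing_gain_db = 10 * math.log10(sample_rate_hz / detection_bw_hz)    # ~21.2 dB

print(f"Ideal ADC SNR:       {ideal_adc_snr_db:.1f} dB")
print(f"Processing gain:     {processing_gain_db:.1f} dB")
print(f"Theoretical best DR: {ideal_adc_snr_db + processing_gain_db:.1f} dB")
```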

This brings up some questions:

  • Is it possible to arbitrarily adjust the LO frequency of the receiver to offset the desired signal toward an edge of the baseband? As noted, this might be useful for strategically moving a strong signal (e.g. a repeater output) outside the baseband to reduce the energy impinging on the ADC - even if done at the expense of aliasing. (For that matter, does the Kraken software adjust the R820T’s “bandpass” filter based on the setting of sample rate and/or decimation?)

  • I note that the signal level numbers in the .csv do not seem to correlate at all with known dBm readings at the antenna terminals. I presume this value also has something to do with the pitch of the tone that is meant to indicate signal strength. In an environment where the signal changes by 40 dB along a route, I’m seeing a fraction of this amount in the .csv log, and the tone pitch does not change much. I’m not sure what I’m seeing, but it looks as though the signal reading may not incorporate the input front-end gain setting (e.g. reading = processed signal level plus input gain). In the past, logging signal levels (e.g. an addition to the Agrelo data format on my older system, the signal level coming from a logarithmic detector in the receiver I.F.) has proven very useful in analysis - both for finding miscreants and for analyzing the pattern of a repeater’s antenna - and I would like to understand what I might be doing wrong in terms of why the values I see aren’t useful.

  • A minor quibble - and one that seems to have caught others, based on my perusal of this forum: it would be nice if the current frequency(ies) appeared somewhere on the app’s UI - even if only on the pop-up where the selector button appears. It’s easy to forget the frequency(ies) to which the system is tuned, and this might save confusion and trips back into the control UI.

So far I’m impressed - thanks!

CT

Yes, the center frequency is the LO frequency, and you can try moving it to block out strong signals. But the filters on the R860 are not that sharp, and the rolloff at the edges of the bandwidth is quite wide, so doing so might not help a lot.
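
For what it’s worth, a minimal single-dongle sketch of that idea (using pyrtlsdr on a plain RTL-SDR, not the KrakenSDR DAQ itself, with hypothetical example frequencies) would look something like this:

```python
# Place the LO so a strong repeater output falls just beyond +fs/2 while the
# weak signal of interest stays inside the captured baseband.
from rtlsdr import RtlSdr

WEAK_FREQ   = 146.520e6   # weak signal we actually want to DF (hypothetical)
STRONG_FREQ = 146.940e6   # strong nearby repeater output (hypothetical)

sdr = RtlSdr()
sdr.sample_rate = fs = 2.4e6
center = STRONG_FREQ - fs / 2 - 25e3        # strong signal ~25 kHz past the band edge
assert abs(WEAK_FREQ - center) < fs / 2     # weak signal must remain inside the baseband
sdr.center_freq = center
sdr.gain = 'auto'

samples = sdr.read_samples(256 * 1024)      # weak signal now at (WEAK_FREQ - center) Hz offset
sdr.close()
```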

The power readings are only relative, they are not absolute, because the KrakenSDR is not calibrated in terms of power readings.

You can see the frequencies by pressing the ‘select frequency’ button (just above the nav button). It will open a popup showing the current VFOs that have been logged.

Thanks for replying.

Regarding a portion of the original question related to the dBm readings, I suspect that there may be something amiss with the way the signal level is represented - but without uploading a multi-megabyte log file somewhere, I’ll try to explain it here.

For this test, I set up a transmitter at my house (2 meters) and drove several km away to where the signal was barely audible. At this point, the received signal was likely around -117dBm or so.

Now, let’s consider some testing I just did on an RTL-SDR dongle (V3) using a calibrated signal generator: I suspect this range to be generally representative of the Kraken hardware if it uses a chipset similar to the R820T and RTL2832U.

Under this condition, the RF gain of the R820T tuners should be at maximum - presuming that there are no other signals within the passband causing AGC action to occur. For the RTL-SDR, maximum gain can be as high as 49.6 dB. At this gain setting the R820T is pretty noisy, and I would expect that the lower LSBs of the RTL2832U would be lit up - particularly in the presence of a vehicle and the devices within it (e.g. an Android tablet).

Meanwhile, when parked in my driveway - only a few meters from where the signal is emanating - the signal level at my antenna port is around -40dBm. To prevent A/D clipping of the RTL2832U, I would expect that the gain of the R820T would likely be around 20dB - about 30dB less than what would be a “no signal” situation.

In reviewing the log, I see that the RSSI level from the unit typically varies between -38 and -20, even though the signal impinging on the antenna was known to vary by more than 70 dB - going from around -117 dBm at a distance to -40 dBm while sitting in my driveway.

What appears to be happening is this:

As you note, the Kraken is not calibrated in terms of absolute dBm, but this is not important, as we could use “dBFS” (dB below full scale) as the energy reading and take full scale as the “0 dB” point of our admittedly arbitrary signal level meter. Let’s call this level “0 dBK” (0 dB Kraken) to set our own scale relative to the hardware itself. As the definition for column 4 in the CSV (log) file refers to “0” as maximum, this would seem appropriate.

With the R820T gain set to zero, A/D overload occurs at around -16 dBm.
With the R820T gain set to maximum (49.6 dB), A/D overload occurs at around -69 dBm.

BOTH of these settings yield the same dBFS reading on the RTL2832U - just below 0 dBFS - but they are clearly about 50 dB apart (53 dB in this case - likely within the specs of the R820T’s VGA).

The reported level should therefore take the R820T’s gain setting into account, so it should really be represented as:

For the strong signal of -16 dBm: 0 dBFS - 0 dB gain setting = 0 dBK. This is the STRONGEST signal that can be represented by the hardware without overloading the A/D converter, with the R820T’s gain set to zero.
For the signal level of -69 dBm: 0 dBFS - 49.6 dB gain setting = -49.6 dBK. This is the strongest signal that can be represented with the R820T’s gain set to maximum.
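
In code terms, a minimal sketch of that gain-compensated reading (assuming the level is first computed in dBFS from the normalized IQ samples; the function names here are just illustrative):

```python
import numpy as np

def level_dbfs(iq: np.ndarray) -> float:
    """Mean power of complex samples normalized to +/-1.0 full scale, in dBFS."""
    return 10 * np.log10(np.mean(np.abs(iq) ** 2) + 1e-30)

def level_dbk(iq: np.ndarray, tuner_gain_db: float) -> float:
    """Gain-compensated level: the dBFS reading referred back through the R820T gain."""
    return level_dbfs(iq) - tuner_gain_db

# The same near-full-scale block read at two different gain settings gives the
# same dBFS value but very different dBK values, as argued above.
block = 0.9 * np.exp(2j * np.pi * 0.1 * np.arange(4096))
print(level_dbk(block, tuner_gain_db=0.0))    # ~ -0.9 dBK  (strong signal, gain at zero)
print(level_dbk(block, tuner_gain_db=49.6))   # ~ -50.5 dBK (weak signal, gain at maximum)
```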

For reference, I measured - with the R820T’s gain set to 49.6 dB - the sensitivity of the test RTL-SDR dongle as -118 dBm for 12 dB SINAD in a 15 kHz bandwidth, using an NBFM signal modulated at +/-3 kHz by a 1 kHz tone. This signal is about 49 dB below the A/D converter’s full-scale (0 dBFS) level - about what I would expect with this hardware - and based on the calculations above, I would expect the KrakenSDR software to report this signal accordingly, at about our arbitrary signal level of “-98 dBK” (i.e. -98 dB relative to the Kraken reference).

Not seeming to take the R820T gain into account would also explain the observations of the “beep” tone: As noted in the earlier message, this doesn’t seem to change pitch much despite a huge variation in signal level - and if that tone is based on the levels that I’m seeing in the CSV files, I can certainly see why.

It would make sense, therefore, that if the calculations of signal level were to include the gain setting of the R820T (and whatever other attenuation/gain might be in the actual Kraken hardware), the pitch of the tone would be at its absolute lowest at, say, -120 dBK (again, using our arbitrary Kraken units) and highest at 0 dBK.
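
As a purely hypothetical illustration of what such a mapping could look like (the pitch range and linear mapping here are arbitrary choices of mine, not anything taken from the KrakenSDR software):

```python
def dbk_to_tone_hz(level_dbk: float,
                   min_dbk: float = -120.0, max_dbk: float = 0.0,
                   min_hz: float = 300.0, max_hz: float = 3000.0) -> float:
    """Map a gain-compensated level (dBK) linearly onto an audio pitch."""
    level = min(max(level_dbk, min_dbk), max_dbk)        # clamp to the usable span
    frac = (level - min_dbk) / (max_dbk - min_dbk)       # 0.0 at -120 dBK, 1.0 at 0 dBK
    return min_hz + frac * (max_hz - min_hz)

print(dbk_to_tone_hz(-98))   # weak signal near the sensitivity limit -> ~795 Hz
print(dbk_to_tone_hz(-20))   # strong signal -> ~2550 Hz
```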

In looking at the graphing of the log, I see that this, too, seems to indicate that the signal level - represented by the color - reflects the value reported in the log rather than the actual signal level.

Having many years of experience in finding signal sources, I know that a reliable indication of signal level - even if it is relative (uncalibrated) - is extremely useful information in addition to bearing information, as long as it is consistent. This is why I’ve always included a wide-range selective field strength meter (-120 to -30 dBm) in my tool kit: hearing the tone when one is very close to the transmitter is arguably more useful than the bearing itself.

Added to that, it’s possible to include absolute signal strength in weighting the validity of measurements for statistical purposes (e.g. stronger signal may indicate being closer) as well as possibly using it to differentiate multiple signal sources (e.g. multiple transmitters/jammers) - particularly if these differences are depicted graphically.
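
As one hypothetical example of that kind of weighting (not anything the KrakenSDR software currently does, and the weighting choice here is arbitrary):

```python
import numpy as np

def weighted_bearing_histogram(bearings_deg, levels_dbk, noise_floor_dbk=-120.0, bin_deg=5):
    """Accumulate bearing fixes into a histogram, weighting each fix by how many
    dB its (relative) signal level sits above an assumed noise floor, so that
    stronger - presumably closer - fixes count for more."""
    weights = np.clip(np.asarray(levels_dbk, dtype=float) - noise_floor_dbk, 0.0, None)
    bins = np.arange(0, 360 + bin_deg, bin_deg)
    hist, _ = np.histogram(np.asarray(bearings_deg) % 360, bins=bins, weights=weights)
    return bins, hist

bins, hist = weighted_bearing_histogram([41, 45, 44, 190], [-60, -35, -38, -95])
print(bins[np.argmax(hist)])   # dominant bearing bin, biased toward the stronger fixes
```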

Comments?

CT

Follow-up:

I decided to do a few more measurements using an RTL-SDR Blog V3 dongle. As the receivers of the KrakenSDR are presumed to be generally similar in design, this should provide a bit of information as to how the front-end gain of the R820T tuner, its noise floor, and the noise floor of the RTL2832U itself affect sensitivity and signal-handling capability.

Test conditions:

  • Calibrated signal generator modulated with FM (+/-3kHz deviation) and a 1.0 kHz tone
  • Detection using the HDSDR program and a 15 kHz detection bandwidth
  • “Sensitivity” readings are at approx. 12 dB SINAD - an industry-standard measurement technique. This measurement tends to be a bit “fuzzy”, with an uncertainty of a couple of dB.
  • “Overload” readings are those at which spectral artifacts start coming out of the noise (FFT bin width 15.7 Hz) as the A/D converter begins to clip, i.e. attempts to go below 0x00 and/or above 0xFF (a small sketch of this check follows this list)
  • Sample rate of 1024 ksps (i.e. the RTL2832U sample rate)
  • Gain values noted are those reported in the R820T driver’s source code and were chosen for the readings below in approximately 10 dB steps.
  • Signal levels are noted in “dBK” (dB Kraken) - an arbitrary reference where the maximum signal level without overload, with the R820T RF gain set to zero, is 0 dBK. The “dBK” term was introduced in my earlier post and was chosen because it provides a self-derived reference level in lieu of absolute signal level calibration - say, in dBm. The dBm values are those of the RTL-SDR unit that I tested, and your mileage may vary.
  • “No-signal reading”: the level in “dBK” representing the signal level in the detection bandwidth with no other signals present. This reflects quantization noise and noise contributions in the signal path - including the noise of the R820T. The relative contribution of these sources varies with the gain setting of the R820T.
  • Useful range: the difference in dB between the 12 dB SINAD level and “just short of clipping the A/D converter” (i.e. overload).
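
The overload check mentioned above amounts to watching for raw 8-bit samples pinned at the rails; a self-contained sketch (synthetic data here - in practice the raw bytes would come from something like pyrtlsdr’s read_bytes()):

```python
import numpy as np

def clip_fraction(raw_bytes) -> float:
    """Fraction of raw 8-bit RTL2832U I/Q samples pinned at 0x00 or 0xFF."""
    raw = np.asarray(raw_bytes, dtype=np.uint8)
    return np.count_nonzero((raw == 0x00) | (raw == 0xFF)) / raw.size

# Synthetic example: a tone driven ~5% past full scale pins a noticeable
# fraction of the 8-bit samples at the rails.
n = 1 << 16
tone = 1.05 * np.sin(2 * np.pi * 0.01 * np.arange(n))
raw = np.clip(np.round(tone * 127.5 + 127.5), 0, 255).astype(np.uint8)
print(f"clipped fraction: {clip_fraction(raw):.1%}")
```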

Starting at maximum sensitivity:

Gain = 49.6:
Sensitivity: -118 dBm (-102 dBK)
Overload: -69 dBm (-53 dBK)
No-signal reading: -109 dBK
Useful range: 49 dB

Gain = 40.2:
Sensitivity: -115 dBm (-99 dBK)
Overload: -60 dBm (-44 dBK)
No-signal reading: -105 dBK
Useful range: 55 dB

Gain = 29.7:
Sensitivity: -110 dBm (-94 dBK)
Overload: -50 dBm (-34 dBK)
No-signal reading: -89 dBK
Useful range: 60 dB

Gain = 19.7:
Sensitivity: -101 dBm (-85 dBK)
Overload: -40 dBm (-24 dBK)
No-signal reading: -88 dBK
Useful range: 61 dB

Gain = 8.7:
Sensitivity: -93 dBm (-77 dBK)
Overload: -31 dBm (-15 dBK)
No-signal reading: -80 dBK
Useful range: 62 dB

Gain = 3.7:
Sensitivity: -89 dBm (-73 dBK)
Overload: -27 dBm (-11 dBK)
No-signal reading: -76 dBK
Useful range: 62 dB

Gain = 0:
Sensitivity: -78 dBm (-62 dBK)
Overload: -16 dBm (0 dBK)
No-signal reading: -65 dBK
Useful range: 62 dB

(Hopefully I avoided simple math mistakes above)
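
For anyone who wants to double-check the arithmetic, a short script using 0 dBK = -16 dBm (this particular dongle’s zero-gain overload point) as the reference reproduces the dBK and “useful range” figures above:

```python
# Re-derive the dBK and "useful range" figures from the raw dBm measurements,
# using 0 dBK = -16 dBm (this dongle's zero-gain overload point) as the reference.
REF_DBM_FOR_0DBK = -16.0

measurements = [   # (R820T gain dB, sensitivity dBm, overload dBm)
    (49.6, -118, -69),
    (40.2, -115, -60),
    (29.7, -110, -50),
    (19.7, -101, -40),
    ( 8.7,  -93, -31),
    ( 3.7,  -89, -27),
    ( 0.0,  -78, -16),
]

for gain, sens_dbm, ovl_dbm in measurements:
    sens_dbk = sens_dbm - REF_DBM_FOR_0DBK
    ovl_dbk = ovl_dbm - REF_DBM_FOR_0DBK
    print(f"gain {gain:4.1f} dB: sensitivity {sens_dbk:6.1f} dBK, "
          f"overload {ovl_dbk:5.1f} dBK, useful range {ovl_dbm - sens_dbm} dB")
```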

Discussion:

  • There is clearly a discontinuity between the gain value of 3.7 dB and the gain of “0”: some documentation (source code for RTL-SDR based receivers) shows a “negative gain” (attenuation) as the lowest value rather than zero. Since the other gain values line up with each other, we can presume that the setting of “0” is the odd one out here, representing an overload signal level 11 dB greater than the “3.7 dB” setting and implying an attenuation of approximately 7.3 dB relative to the other gain settings. Care should be taken to determine the actual gain values that a specific RTL-SDR-type driver applies when setting the R820T gain.

  • It may be observed that at gain values up through about 30 dB, the noise floor appears to be largely limited by that of the RTL2832U itself (i.e. A/D converter quantization noise and noise in the ’2832 preceding the converter itself).

  • Starting at a gain level of about 30 dB, the noise floor of the R820T itself starts to dominate the A/D converter’s LSBs, reducing the “useful dynamic range”. This is most evident in the “Sensitivity” readings (and the “No-signal” readings), which do not change by the same number of dB as the gain setting and overload point. It is most obvious between the 40 and 50 dB gain settings, where the actual sensitivity improves by only 3 dB while the overload point continues to track the gain setting.

  • In a conventional (analog) receiver it’s considered poor design to have “recklessly excess” gain in the signal path, as this can result in poorer performance - particularly in a multi-signal environment, and especially in stages prior to the band-pass filtering and AGC detection. In the case of a digitizing sampler (such as what we have here), a bit of “excess” gain (while taking care to prevent A/D clipping) can actually be beneficial, as it can help lift weaker signals out of the quantization noise (e.g. being represented by too few A/D bits), with the less-tangible benefit that the added Gaussian noise acts as a form of “dithering” (a small numerical illustration follows this list). This is usually a function of AGC which, itself, should be informed by a thorough understanding of the signal path and the devices being used under disparate conditions.

  • The above information shows that it is possible - with the combination of the R820T and the RTL2832U - to obtain reasonable sensitivity (-118 dBm, about 0.3 microvolts) and a maximum signal-handling capability of -16 dBm, meaning that with “decent quality” signals (e.g. equal to or better than 12 dB SINAD) a range of 102 dB is possible (with caveats), while a “noise floor” to “just short of clipping” range of 109 dB is possible (with caveats).

  • The caveats are that no other signals can be impinging within the A/D converter’s passband for these performance levels. Because the total amount of energy reaching the A/D converter - regardless of frequency - must be kept below the clipping level, the appearance of a strong, nearby signal may require a gain reduction and thus reduce ultimate sensitivity. For example, if a signal level of -40 dBm appeared (e.g. a 100 watt ERP 2 meter transmitter at approx. 5 km) while you were trying to find a 1 microvolt signal (-107 dBm) on a nearby frequency (within the A/D converter passband), the gain would have to be decreased to around 20 dB to prevent overload - and this would put the -107 dBm signal well into the noise. Without the -40 dBm signal from the hypothetical repeater forcing a gain reduction, the -107 dBm signal might be received with good quality, as the gain could be set higher without A/D clipping/overload.

  • The 1024 ksps sample rate was chosen for the above tests, but spot-checking was done with a 2400 ksps rate.
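
Here is the small numerical illustration of the dithering point promised above - a minimal sketch of my own, not anything from the KrakenSDR code: a tone well below one LSB vanishes completely in a bare 8-bit quantizer, but becomes detectable in a narrowband (FFT) sense once roughly one LSB of Gaussian noise is present ahead of the quantizer.

```python
import numpy as np

n = 1 << 16
lsb = 2.0 / 256                                         # LSB size for +/-1.0 full scale, 8 bits
t = np.arange(n)
tone = 0.3 * lsb * np.sin(2 * np.pi * 1000 / n * t)     # tone ~10 dB below one LSB

def quantize8(x):
    """Round to the nearest 8-bit level (clipped to +/- full scale)."""
    return np.round(np.clip(x, -1.0, 1.0) / lsb) * lsb

def tone_bin_rel_db(x):
    """Power in the tone's FFT bin relative to the average bin power."""
    spec = np.abs(np.fft.rfft(x * np.hanning(n))) ** 2
    return 10 * np.log10((spec[1000] + 1e-30) / (spec.mean() + 1e-30))

rng = np.random.default_rng(0)
bare = quantize8(tone)                                  # no dither: quantizes to all zeros
dithered = quantize8(tone + rng.normal(0, lsb, n))      # ~1 LSB of Gaussian noise added first

print("tone bin vs. average, no dither:  ", tone_bin_rel_db(bare))       # ~0 dB (tone lost)
print("tone bin vs. average, with dither:", tone_bin_rel_db(dithered))   # tone clearly visible
```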

Probably more than you wanted to know!

CT
