Hello all,
I’m experimenting with the KrakenSDR DOA-based transmitter localization system and currently exploring a grid-based estimation approach. In my implementation, for each azimuth angle (0°–359°, i.e. the 360 DOA power values logged in the CSV file), I project a ray from the receiver’s location and add that angle’s power value to every grid cell the ray intersects. This is repeated across all measured angles with their corresponding signal strengths, similar to how I understand it is done in your implementation.
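For reference, here is a minimal sketch of the accumulation step I described (my own illustrative code, not the KrakenSDR implementation; the grid shape, the receiver cell indices, the azimuth convention, and the 360-element power vector taken from the CSV are all assumptions on my part):

```python
import numpy as np

def accumulate_doa(grid, rx_row, rx_col, doa_powers, max_range=None):
    """Project one ray per azimuth degree from the receiver cell and add
    that angle's DOA power to every grid cell the ray passes through.
    Azimuth convention assumed here: 0 deg along +row, increasing toward +col."""
    n_rows, n_cols = grid.shape
    if max_range is None:
        max_range = np.hypot(n_rows, n_cols)  # step out to the edge of the grid
    for az_deg, power in enumerate(doa_powers):       # 0..359 degrees, one ray each
        az = np.deg2rad(az_deg)
        visited = set()                               # avoid double-counting a cell on one ray
        # Step along the ray in half-cell increments and mark each visited cell.
        for r in np.arange(0.0, max_range, 0.5):
            row = int(round(rx_row + r * np.cos(az)))
            col = int(round(rx_col + r * np.sin(az)))
            if not (0 <= row < n_rows and 0 <= col < n_cols):
                break                                 # ray has left the grid
            if (row, col) not in visited:
                grid[row, col] += power
                visited.add((row, col))
    return grid
```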
However, I’ve observed a significant bias: cells close to the receiver accumulate disproportionately high power because of the higher density of ray intersections there compared to distant cells. This skews the localization result toward the receiver’s position, regardless of the actual transmitter location.
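To make the bias concrete, here is my rough back-of-the-envelope model (my own approximation, with an assumed cell size of one grid unit, not anything from the KrakenSDR code): a cell of side d at range r from the receiver subtends roughly d / (2πr) of the full circle, so with one ray per degree it is crossed by about 360·d / (2πr) rays.

```python
import numpy as np

# Rough illustration of the near-receiver accumulation bias (my own estimate):
# a cell of side d at range r is hit by roughly 360 * d / (2*pi*r) of the 360 rays.
d = 1.0  # cell size in grid units (assumed)
for r in [1, 5, 20, 100]:
    expected_hits = 360 * d / (2 * np.pi * r)
    print(f"range {r:>4} cells -> ~{expected_hits:5.1f} rays cross the cell")
# Nearby cells pick up tens of ray contributions while distant cells see at most
# one or two, which is what drags the estimate toward the receiver.
```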
I understand KrakenSDR employs a more robust grid-based localization method. I’d appreciate any general insights into how such distance-related accumulation bias can be corrected or normalized in grid-based frameworks.
Thanks in advance for any guidance.