Tagged: AI

Automatic Signal Recognition with AI Machine Learning and RTL-SDR

Thank you to Trevor Unland for submitting his AI machine learning project called "RTL-ML", which automatically recognizes and classifies eight different signal types on low-power ARM processors paired with an RTL-SDR.

Trevor's blog post explains the machine learning architecture in detail, the accuracy he obtained, and how to try it yourself, either by running the pre-trained model or by training your own model if you have sufficient training data.

The code is entirely open source on GitHub, and the training set data has been shared on HuggingFace.

RTL-ML is an open-source Python toolkit for automatic radio signal classification using machine learning. It runs on ARM single-board computers like the Raspberry Pi 5 or Indiedroid Nova paired with an RTL-SDR Blog V4, achieving 87.5% accuracy across 8 real-world signal types including ADS-B aircraft transponders, NOAA weather satellites, ISM sensors, FM broadcast, NOAA weather radio, pagers, and APRS.

The project provides a complete pipeline from signal capture to trained classifier. Unlike academic approaches that rely on synthetic data or expensive GPU hardware, RTL-ML uses real signals captured from actual antennas and runs entirely on edge hardware with no cloud dependency. The Random Forest model is 186KB and processes signals in around 120ms on a Pi 5.
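To give a feel for the general shape of this kind of pipeline, here is a toy sketch of a Random Forest classifier over spectrogram-derived features. The signal generators, feature choices, and parameters below are illustrative placeholders invented for this sketch, not RTL-ML's actual pipeline.

```python
# Toy sketch: classify synthetic baseband bursts with a Random Forest over
# a few spectrogram summary features. Everything here (labels, features,
# parameters) is a placeholder, not RTL-ML's real implementation.
import numpy as np
from scipy import signal
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
FS = 250_000  # sample rate in Hz (placeholder)

def make_iq(kind):
    """Generate a synthetic complex baseband burst standing in for a capture."""
    t = np.arange(4096) / FS
    if kind == "fm":        # FM-like: tone with a wandering frequency
        msg = np.cumsum(rng.normal(size=t.size)) * 0.01
        return np.exp(2j * np.pi * (5_000 * t + msg))
    if kind == "pager":     # FSK-like: tone hopping between two frequencies
        bits = rng.integers(0, 2, 16).repeat(256)
        f = np.where(bits, 20_000, -20_000)
        return np.exp(2j * np.pi * np.cumsum(f) / FS)
    return rng.normal(size=t.size) + 1j * rng.normal(size=t.size)  # noise

def features(iq):
    """Summarize a spectrogram into a small fixed-length feature vector:
    peak power fraction, occupied-bin fraction, and spectral entropy."""
    _, _, S = signal.spectrogram(iq, fs=FS, nperseg=256, return_onesided=False)
    p = S.mean(axis=1)
    p /= p.sum()
    return np.array([p.max(), (p > p.mean()).mean(),
                     -(p * np.log(p + 1e-12)).sum()])

labels = ["fm", "pager", "noise"]
X = [features(make_iq(k)) for k in labels for _ in range(30)]
y = [k for k in labels for _ in range(30)]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict([features(make_iq("fm"))])[0]
print(pred)
```

A trained forest of this size serializes to a very small file, which is consistent with the tiny model footprint and fast CPU-only inference described above.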

The GitHub repository includes the full capture and training scripts, a pre-trained model, 8 validated spectrograms, and documentation for adding new signal types. It works out of the box on both Raspberry Pi 5 and Indiedroid Nova with identical code and accuracy.

RTL-ML Setup: RTL-SDR Blog V4, Dipole Antenna and Indiedroid Nova ARM Computer.

You might also be interested in some similar projects we've posted about in the past, such as this Shazam-style signal classifier, which used audio data from sigidwiki.com, and an Android app doing the same thing (which unfortunately now appears to have been removed from Google Play). There is also this deep learning based signal classifier model.

GhostHunter (Anti-LIF): Using Spiking Neural Networks to Rescue Satellite Signals Drowned in Noise

Thank you to Edwin Temporal for writing in and showing how his proprietary neuromorphic engine, GhostHunter (Anti-LIF), is being used to recover satellite data buried in the noise floor, something typical DSP methods fail to do.

To recover the signals, Edwin uses trained Spiking Neural Networks (SNN). SNNs are artificial neural networks that draw further inspiration from nature by incorporating the 'spiking' on/off behavior of real neurons. Edwin writes:
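Edwin's Anti-LIF engine itself is proprietary, but the leaky integrate-and-fire (LIF) neuron its name references is the textbook building block of spiking neural networks and is simple to sketch. The parameters below are made up for illustration.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the classic SNN building
# block. This is only the textbook model that "Anti-LIF" takes its name
# from, with invented parameters; it is not Edwin's proprietary engine.
import numpy as np

def lif(input_current, dt=1e-3, tau=0.02, v_rest=0.0,
        v_thresh=1.0, v_reset=0.0):
    """Integrate input current; emit a spike (1) whenever the membrane
    potential crosses threshold, then reset the potential."""
    v = v_rest
    spikes = []
    for i in input_current:
        # Leaky integration: potential decays toward rest, driven by input.
        v += dt / tau * (v_rest - v + i)
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant supra-threshold current produces a regular spike train.
out = lif(np.full(1000, 2.5))
print(out.sum())  # number of spikes over 1 s of simulated time
```

The appeal for weak-signal work is that information is carried in spike timing rather than continuous amplitudes, which is the biological behavior Edwin's engine mimics.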

My engine has successfully extracted and decoded structured data from high-complexity targets by mimicking biological signal processing:

Technosat: Successful decoding of GFSK modulations under extreme frequency drift and low SNR conditions.

MIT RF-Challenge: Advanced recovery of QPSK signals where traditional digital signal processing (DSP) often fails to maintain synchronization.

These missions are fully documented at https://temporaledwin58-creator.github.io/ghosthunter-database/, which serves as a public ledger for my signal recovery operations. Furthermore, the underlying Anti-LIF architecture is academically backed by my publication on TechRxiv, proving its efficiency in processing signals buried deep within the noise floor.

Although the engine remains proprietary, I provide comprehensive statistical reports and validation metrics for each mission. I believe your audience would be thrilled to see how Neuromorphic AI (SNN) is solving real-world SIGINT challenges.

In the database, Edwin shows how his Anti-LIF system has recovered CW Morse code telemetry and QPSK data from noisy satellite signals. 

While Edwin's Anti-LIF is proprietary, he is offering proof-of-concept decoding. If you have an IQ/SigMF/WAV recording of 250 MB or less containing a signal buried in the noise floor, you can submit it to him via his website, and he will run Anti-LIF on it for analysis.

Advanced readers interested in AI/neural network techniques for signal recovery can also check out his white paper on TechRxiv, where he demonstrates recovery of signals buried in WiFi noise, as well as results from ECG and healthcare applications.

An Example Signal Recovery with the Anti-LIF Spiking Neural Network

RadioTranscriber: Real-Time Public Safety Radio Transcription with Whisper AI

Over in our new forums, user Nite has shared a new open-source project that he's created called RadioTranscriber, a real-time speech-to-text tool for public safety radio feeds using OpenAI’s Whisper large-v3 model. The idea is to take live scanner audio, such as authenticated streams from Broadcastify, and continuously turn it into readable text with minimal babysitting. The project grew out of earlier experiments with Radio Transcriptor, which we posted about back in June, but quickly evolved into a more robust, long-running setup with better audio conditioning and fewer of Whisper’s common hallucinations.

Under the hood, RadioTranscriber is a Python script that pulls in a live stream, cleans it up with filtering, normalization, and WebRTC VAD, then runs Whisper large-v3 with beam search for transcription. A set of custom “hallucination guards” strips out common junk text and replaces alert tones with simple markers, while daily log rotation and basic memory management let it run unattended for long periods, even on a modest CPU-only machine. Although it’s tuned to the author’s local dispatch style, the config and prompt are easy to adapt, and the full code is available on GitHub for anyone who wants to experiment or build on it.
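Two of those ideas can be sketched in isolation: a simple energy-based voice activity gate and a "hallucination guard" that drops junk text Whisper tends to emit on silence. The junk phrase list and threshold below are placeholders; the real project uses WebRTC VAD and its own tuned phrase list.

```python
# Sketch of a crude VAD gate and a hallucination guard, two ideas from
# RadioTranscriber shown standalone. The junk list and energy threshold
# are hypothetical examples, not the project's actual values.
import numpy as np

JUNK_PHRASES = {"thanks for watching.", "you", "."}  # hypothetical examples

def is_speech(frame, threshold=0.01):
    """Crude VAD: treat a frame as speech if its RMS energy exceeds a threshold."""
    return float(np.sqrt(np.mean(frame ** 2))) > threshold

def guard(text):
    """Drop transcripts that consist only of known hallucination junk."""
    return "" if text.strip().lower() in JUNK_PHRASES else text.strip()

# A silent frame is gated out; a loud tone passes.
silence = np.zeros(1600)
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(1600) / 16000)
print(is_speech(silence), is_speech(tone))        # False True
print(repr(guard("Thanks for watching.")))        # '' (filtered as junk)
print(guard("Engine 5 responding to Main St."))   # kept
```

In the real pipeline, frames that pass the VAD gate are batched and handed to Whisper, and the guard runs on Whisper's output before it is logged.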

How OpenAI's Whisper Works

AI Cloud Detection for GOES Weather Satellite Images on a Raspberry Pi

Over on his Hackster.io blog, Justin Lutz has uploaded an article describing how he uses AI object detection to automatically detect clouds in weather satellite images that he's downloaded from GOES satellites via an RTL-SDR.

Lutz's blog post first describes and shows his RTL-SDR GOES reception setup. Then, it explains how he used Edge Impulse on his Raspberry Pi 4 to create an AI model that automatically detects the clouds in the image.

The process involves importing 100 images into Edge Impulse, manually labelling the clouds in each image, training the model, and testing it. The result was an average detection accuracy of 90%.

Proposing a Software Defined Radio based “AI Battle Buddy”

Over on YouTube, Isaac Botkin of TREX LABS has uploaded a video discussing how he proposes to build an "AI Battle Buddy" with a built-in software-defined radio. The idea is to combine a wide frequency range software-defined radio with AI tools that automatically determine and alert the device owner when something interesting occurs in the radio spectrum.

Isaac gives example use cases for the device, such as alerts when jamming is detected, drone detection alerts, alerts when there is suddenly increased public safety radio traffic or if there are nearby public safety radio transmissions, and information about nearby aircraft and NOAA weather alerts.
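One of those behaviors, a jamming alert, can be sketched as a simple spectrum anomaly detector: learn a baseline band power from quiet captures, then alert when power jumps well above it. The thresholds and synthetic signals below are invented for illustration and are not from Isaac's proposal.

```python
# Hedged sketch of one proposed "Battle Buddy" behavior: flag a
# jamming-like event when broadband power jumps well above a learned
# baseline. Margins and synthetic captures are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
N = 4096

def band_power_db(iq):
    """Average power of the captured band, in dB."""
    spectrum = np.abs(np.fft.fft(iq)) ** 2 / len(iq)
    return 10 * np.log10(spectrum.mean())

# Learn a baseline from quiet (noise-only) captures.
baseline = np.mean([band_power_db(rng.normal(size=N) + 1j * rng.normal(size=N))
                    for _ in range(20)])

def jamming_alert(iq, margin_db=10.0):
    """Alert when total band power exceeds the baseline by margin_db."""
    return band_power_db(iq) - baseline > margin_db

quiet = rng.normal(size=N) + 1j * rng.normal(size=N)
jammed = 30 * (rng.normal(size=N) + 1j * rng.normal(size=N))  # ~29.5 dB hotter
print(jamming_alert(quiet), jamming_alert(jammed))  # False True
```

A real device would of course need far more nuance (per-band baselines, time-of-day variation, classification of the offending signal), but this is the basic trigger-on-anomaly shape Isaac describes.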

The device is proposed to have no screen; instead it would give audio alerts via a Bluetooth earpiece, or text alerts via a smartphone or smartwatch.

Ultimately, such a device has yet to be built for the general consumer market, but Isaac notes that AI-SDR devices like the Anduril Pulsar already exist for the military market.

How to Make an AI Battle Buddy for Electronic Warfare