CRS-LLM: using a GPT-style model to predict which base station and beam a car should use next
This paper introduces CRS-LLM, a new way to predict next-step beam choices in millimeter-wave (mmWave) vehicle-to-everything (V2X) links. At these high frequencies, radios must point narrow beams toward a car or device. When vehicles move, are blocked by obstacles, or hand over between base stations (BSs), keeping the beam aligned becomes hard and costly. CRS-LLM tries to predict the correct base-station–and–beam pair one step ahead, so the network spends less time searching for the right beam.
The authors treat the problem as a single classification over the joint BS–beam space instead of two separate decisions (first pick a BS, then pick a beam). This avoids the cascading errors of the hierarchical approach, where a wrong BS choice makes every subsequent beam choice wrong. They assume several BSs share observations at an edge controller, so the model sees multiple views of the same moving user. To turn wireless channel data into a form a language-style model can use, they build a dual-view channel-state-information (CSI) tokenizer that keeps both the frequency-domain and delay-domain views of the channel. A lightweight convolutional neural network (CNN) front end and a temporal tokenization step then convert the CSI into a token sequence.
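To make these two ideas concrete, here is a minimal sketch of (a) flattening a BS–beam pair into one joint class label and (b) a dual-view tokenization that adds a delay-domain view via an inverse FFT. All function names, shapes, and the patching scheme are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def joint_label(bs_idx, beam_idx, num_beams):
    # Flatten the (BS, beam) pair into a single class index,
    # so one classifier predicts both choices at once.
    return bs_idx * num_beams + beam_idx

def dual_view_tokens(H_freq, patch=8):
    """Sketch of a dual-view CSI tokenizer (shapes are assumptions).

    H_freq: complex CSI of shape (num_antennas, num_subcarriers)."""
    # Delay-domain view: inverse FFT along the subcarrier axis.
    H_delay = np.fft.ifft(H_freq, axis=-1)
    # Stack real/imag parts of both views as input channels for a CNN front end.
    views = np.stack([H_freq.real, H_freq.imag,
                      H_delay.real, H_delay.imag])          # (4, A, S)
    # Tokenization stand-in: split the subcarrier axis into patches and
    # flatten each patch into one token vector.
    c, a, s = views.shape
    n_tok = s // patch
    tokens = views[:, :, :n_tok * patch].reshape(c, a, n_tok, patch)
    tokens = tokens.transpose(2, 0, 1, 3).reshape(n_tok, -1)  # (tokens, dim)
    return tokens

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 64)) + 1j * rng.standard_normal((4, 64))
print(dual_view_tokens(H).shape)  # (8, 128)
```

The delay-domain view exposes multipath arrival times that are spread across all subcarriers in the frequency view, which is why keeping both can help the model.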
For temporal modeling, the team adapts a truncated GPT-style backbone (GPT = Generative Pre-trained Transformer), updating only a small fraction of its weights (parameter-efficient adaptation) so the large pretrained model fits this task. For the final prediction they design a transition-aware, switch-gated head. It combines a stable branch for smooth beam evolution, a residual flip branch that emphasizes abrupt changes, and a low-rank transition prior that captures typical handover patterns. A soft gate blends the branches depending on the current context, so the predictor can handle both steady motion and sudden jumps.
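The gating idea can be sketched in a few lines. In this toy version, assumed weight matrices (`W_stable`, `W_flip`, a low-rank pair `U`, `V`, and a gate vector `w_gate`) stand in for learned parameters; the paper's actual head will differ in detail.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def switch_gated_head(h, prev_onehot, W_stable, W_flip, U, V, w_gate):
    """Toy transition-aware, switch-gated head (parameter names are
    illustrative assumptions, not the paper's).

    h: context feature from the backbone, shape (d,)
    prev_onehot: one-hot of the previous joint BS-beam label, shape (K,)
    U @ V: low-rank (K x r)(r x K) transition prior over labels."""
    stable = W_stable @ h                     # branch for smooth beam evolution
    flip = W_flip @ h                         # residual branch for abrupt switches
    prior = U @ (V @ prev_onehot)             # low-rank transition-prior logits
    g = 1.0 / (1.0 + np.exp(-(w_gate @ h)))   # soft switch gate in [0, 1]
    logits = (1 - g) * stable + g * flip + prior
    return softmax(logits)                    # distribution over K joint labels

d, K, r = 16, 12, 3
rng = np.random.default_rng(1)
probs = switch_gated_head(
    rng.standard_normal(d), np.eye(K)[2],
    rng.standard_normal((K, d)), rng.standard_normal((K, d)),
    rng.standard_normal((K, r)), rng.standard_normal((r, K)),
    rng.standard_normal(d))
print(probs.shape)  # (12,)
```

The low-rank factorization keeps the transition prior cheap: instead of a full K×K matrix of label-to-label transitions, only K×r + r×K parameters are learned.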
Why this matters: predicting the right BS–beam pair can cut the overhead of repeated beam sweeps and make mmWave links more reliable when cars move fast or a link is blocked. In simulation, CRS-LLM improved Top-1 accuracy (whether the top predicted label is correct) and normalized beam gain compared with several baselines, including a CSI-Transformer, a hierarchical BS-then-beam method, and representative CNN and recurrent neural network (RNN) approaches. The authors also report strong few-shot adaptation and promising zero-shot transfer to unseen scenarios in their simulated tests.
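The two reported metrics have standard definitions, sketched below under the common conventions (the paper's exact normalization is an assumption): Top-1 accuracy is the fraction of steps where the predicted label matches the true one, and normalized beam gain compares the gain of the chosen beam to the best achievable gain at each step.

```python
import numpy as np

def top1_accuracy(pred_labels, true_labels):
    # Fraction of time steps where the top predicted BS-beam label is correct.
    return float(np.mean(np.asarray(pred_labels) == np.asarray(true_labels)))

def normalized_beam_gain(pred_labels, gains):
    # gains[t, k]: received gain at step t if joint label k were used.
    # Average ratio of achieved gain to the best achievable gain per step.
    t = np.arange(len(pred_labels))
    return float(np.mean(gains[t, pred_labels] / gains.max(axis=1)))

gains = np.array([[1.0, 4.0, 2.0],
                  [3.0, 1.0, 6.0]])
print(top1_accuracy([1, 2], [1, 0]))                  # 0.5
print(normalized_beam_gain(np.array([1, 0]), gains))  # (4/4 + 3/6)/2 = 0.75
```

The toy numbers show why both metrics matter: the second prediction is wrong by the Top-1 measure, yet its beam still delivers half the optimal gain, so the link is degraded rather than lost.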