Global Cross-Time Attention Fusion for Enhanced Solar Flare Prediction from Multivariate Time Series

Onur Vural, Shah Muhammad Hamdi, Soukaina Filali Boubrahimi
Utah State University

Abstract

Multivariate time series classification is increasingly investigated in space weather research as a means to predict intense solar flare events, which can cause widespread disruptions across modern technological systems. Magnetic field measurements of solar active regions are converted into structured multivariate time series, enabling predictive modeling across segmented observation windows. However, the inherently imbalanced nature of solar flare occurrences, where intense flares are rare compared to minor flare events, presents a significant barrier to effective learning.

To address this challenge, we propose a novel Global Cross-Time Attention Fusion (GCTAF) architecture, a transformer-based model designed to enhance long-range temporal modeling. Unlike traditional self-attention mechanisms that rely solely on local interactions within the time series, GCTAF injects a set of learnable cross-attentive global tokens that summarize salient temporal patterns across the entire sequence. These tokens are refined through cross-attention with the input sequence and fused back into the temporal representation, enabling the model to identify globally significant, non-contiguous time points that are critical for flare prediction. This mechanism functions as a dynamic, attention-driven temporal summarizer that augments the model’s capacity to capture discriminative flare-related dynamics. We evaluate our approach on the benchmark solar flare dataset and show that GCTAF effectively detects intense flares and improves predictive performance, demonstrating that refining transformer-based architectures is a promising direction for solar flare prediction.

Method

Framework

  • The GCTAF framework comprises multiple components that combine global and local temporal information from MVTS data by leveraging transformer-based strategies.
  • GCTAF lets a set of learnable global tokens attend to the input sequence via cross-attention.
  • The fused representation is refined through transformer-based modules, pooled, and classified using an MLP head.
  • Transformer Encoder Module: core temporal modeling unit, designed to capture complex temporal dependencies and interactions among solar magnetic field parameters. By employing self-attention and non-linear transformations within a residual framework, it seeks to obtain representations emphasizing salient temporal regions while suppressing irrelevant noise, enabling effective flare prediction.
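The Transformer Encoder Module described above can be illustrated with a minimal, dependency-light sketch. This is not the paper's implementation: it assumes single-head scaled dot-product self-attention, a post-norm residual layout, and a 2-layer position-wise feed-forward network, all in plain NumPy, purely to show the shape-preserving structure of one encoder block.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    # normalize each time step's feature vector
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def encoder_block(x, W1, W2):
    """One transformer encoder block: self-attention + residual, then MLP + residual."""
    B, T, N = x.shape
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(N)   # [B, T, T] pairwise similarities
    x = layer_norm(x + softmax(scores) @ x)          # attention output with residual
    h = np.maximum(x @ W1, 0) @ W2                   # position-wise feed-forward (ReLU)
    return layer_norm(x + h)                         # second residual; shape preserved

# toy sizes: batch of 2, 60 time steps, 24 magnetic field parameters
B, T, N = 2, 60, 24
x = rng.standard_normal((B, T, N))
W1 = rng.standard_normal((N, 4 * N)) * 0.1
W2 = rng.standard_normal((4 * N, N)) * 0.1
y = encoder_block(x, W1, W2)
print(y.shape)  # (2, 60, 24)
```

A real implementation would use multi-head attention with learned query/key/value projections (e.g. PyTorch's `nn.TransformerEncoderLayer`); the sketch keeps only the residual-attention skeleton the bullet describes.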
Framework Image

The GCTAF model for solar flare prediction takes input of shape [B, τ, N] and learnable global tokens [1, G, N] shared across batches. The global tokens attend to the input via cross-attention, producing [B, G, N]. The input and global tokens are concatenated to [B, τ+G, N] and processed by transformer encoder blocks. The output is split into local [B, τ, N] and global [B, G, N] tokens. Local tokens are pooled to [B, N], global tokens averaged to [B, N], then concatenated into [B, 2N] and passed through an MLP for final logits.
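The data flow above can be traced end to end in a minimal NumPy sketch. This is an illustration of the stated tensor shapes only, not the authors' code: the cross-attention is single-head, the transformer encoder blocks between fusion and splitting are omitted, and the MLP head is a single toy projection to two classes (flare / no-flare), all of which are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, kv):
    """Queries attend to keys/values from a different sequence."""
    scores = q @ kv.transpose(0, 2, 1) / np.sqrt(q.shape[-1])  # [B, G, tau]
    return softmax(scores) @ kv                                # [B, G, N]

B, tau, N, G = 4, 60, 24, 8
x = rng.standard_normal((B, tau, N))     # MVTS input [B, tau, N]
g = rng.standard_normal((1, G, N))       # learnable global tokens [1, G, N]

g_b = np.broadcast_to(g, (B, G, N))      # share the tokens across the batch
g_ref = cross_attention(g_b, x)          # global tokens refined by the input: [B, G, N]
z = np.concatenate([x, g_ref], axis=1)   # fused sequence: [B, tau + G, N]
# ... transformer encoder blocks would process z here (omitted) ...
local, glob = z[:, :tau], z[:, tau:]     # split back into [B, tau, N] and [B, G, N]
pooled = np.concatenate([local.mean(1), glob.mean(1)], axis=1)  # [B, 2N]
W = rng.standard_normal((2 * N, 2)) * 0.1  # toy MLP head -> 2-class logits
logits = pooled @ W
print(logits.shape)  # (4, 2)
```

Using the global tokens as queries (rather than keys) means each token can pool evidence from any subset of time steps, which is how non-contiguous, globally significant time points get summarized.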

Results

Results Graph

Bar chart showing flare prediction scores across four metrics. The results demonstrate that refining transformer-based architectures holds potential for MVTS-driven solar flare prediction, as GCTAF produces highly competitive results against state-of-the-art approaches in the literature.

Solar Flare Graph

Intense X9.3-class solar flare captured on September 6, 2017. Credit: NASA/SDO.

BibTeX

@article{vural2025globalcrosstimeattentionfusion,
  title={Global Cross-Time Attention Fusion for Enhanced Solar Flare Prediction from Multivariate Time Series},
  author={Vural, Onur and Hamdi, Shah Muhammad and Boubrahimi, Soukaina Filali},
  journal={arXiv preprint arXiv:2511.12955},
  year={2025}
}