Feedback-driven recurrent quantum neural network universality

ICLR 2026 Conference Submission
Anonymous Authors
Keywords: quantum machine learning, quantum neural networks, recurrent neural networks, expressivity, universal approximation, state-space systems, quantum reservoir computing
Abstract:

Quantum reservoir computing uses the dynamics of quantum systems to process temporal data, making it particularly well-suited for machine learning with noisy intermediate-scale quantum devices. Recent developments have introduced feedback-based quantum reservoir systems, which process temporal information with fewer components and enable real-time computation while preserving the input history. Motivated by their promising empirical performance, in this work we study the approximation capabilities of feedback-based quantum reservoir computing. More specifically, we are concerned with recurrent quantum neural networks (RQNNs), which are quantum analogues of classical recurrent neural networks. Our results show that regular state-space systems can be approximated by RQNNs without the curse of dimensionality, with the number of qubits growing only logarithmically in the reciprocal of the prescribed approximation accuracy. Notably, our analysis demonstrates that RQNNs are universal with linear readouts, making them both powerful and experimentally accessible. These results pave the way for practical and theoretically grounded quantum reservoir computing with real-time processing capabilities.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (A scholar search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper establishes approximation bounds for recurrent quantum neural networks (RQNNs) with feedback, demonstrating that the number of qubits scales only logarithmically in the reciprocal of the approximation accuracy, without the curse of dimensionality. It resides in the 'Universal Approximation and Complexity Analysis' leaf under 'Theoretical Foundations and Approximation Guarantees', a leaf that contains only two papers in total. This sparse population suggests the work addresses a relatively underexplored theoretical niche within quantum reservoir computing, focusing on rigorous complexity guarantees rather than the empirical performance or hardware constraints that dominate other branches of the taxonomy.

The taxonomy reveals neighboring theoretical work in 'Nonlinear Dynamics and Convergent Systems' examining autoregressive models and convergent substrates, while 'Feedback Mechanisms and Measurement Protocols' explores measurement-driven control strategies that enable the feedback architectures analyzed here. The paper bridges these areas by providing formal approximation guarantees for feedback-based systems, complementing empirical studies in 'Performance Analysis' that evaluate forecasting and memory capacity without theoretical bounds. Its focus on linear readouts distinguishes it from alternative architectures leveraging dissipation, non-Markovian dynamics, or higher-order structures, which pursue computational advantages through different physical mechanisms rather than approximation-theoretic foundations.

Among the seventeen candidates examined across the three contributions, none were identified as clearly refuting the claimed results. The first contribution (approximation bounds without the curse of dimensionality) was checked against two candidates with no refutations; the second (universality with linear readouts) against five candidates with no refutations; the third (feedforward QNN approximation results) against ten candidates with no refutations. The limited search scope, which covers top-K semantic matches and citation expansion rather than exhaustive field coverage, suggests that the specific combination of feedback-based RQNNs, logarithmic qubit scaling, and linear-readout universality has not been extensively addressed in the prior accessible literature, though the analysis cannot rule out relevant work outside the examined candidate set.

Based on the seventeen candidates examined, the work appears to occupy a distinct theoretical position within quantum reservoir computing, addressing approximation complexity for feedback-driven recurrent architectures with formal guarantees. The sparse population of its taxonomy leaf and absence of refuting candidates among examined papers suggest novelty in this specific formulation, though the limited search scope means potentially relevant theoretical work in quantum complexity theory or classical recurrent network approximation may exist beyond the candidate pool. The analysis covers semantic proximity and citation links but does not exhaustively survey adjacent mathematical frameworks.

Taxonomy

Core-task Taxonomy Papers: 27
Claimed Contributions: 3
Contribution Candidate Papers Compared: 17
Refutable Papers: 0

Research Landscape Overview

Core task: approximation capabilities of feedback-based quantum reservoir computing. The field explores how quantum systems with measurement-driven feedback can serve as computational reservoirs for temporal processing tasks. The taxonomy reveals several complementary perspectives: Theoretical Foundations examine universal approximation properties and complexity bounds that establish what these systems can represent in principle; Feedback Mechanisms investigate how measurement protocols and control loops shape reservoir dynamics; Performance Analysis evaluates task-specific benchmarks ranging from time-series forecasting to nonlinear system emulation; Experimental Implementations address hardware constraints across photonic, superconducting, and quantum-dot platforms; Alternative Architectures explore designs without explicit feedback or with novel coupling schemes; and Recurrent Quantum Neural Network Frameworks situate reservoir computing within broader quantum machine learning paradigms.

Representative works such as Feedback Quantum Reservoir[1] and Weak Measurement Reservoir[2] illustrate how measurement strength and feedback timing critically influence memory and nonlinearity, while experimental studies like Time-Multiplexed Quantum-Dot[8] and Transmon Memory Capacity[10] demonstrate practical trade-offs between coherence, readout fidelity, and computational capacity. A central tension emerges between theoretical guarantees and experimental feasibility: many studies pursue optimal forecasting strategies (Optimal Quantum Forecasting[3]) or leverage dissipation as a computational resource (Dissipation as Resource[17]), yet hardware noise and limited coherence times constrain real-world performance (Noisy Temporal Processing[23]).

Feedback Recurrent Quantum[0] contributes to the theoretical foundations by rigorously analyzing universal approximation and complexity within this feedback-driven paradigm, closely aligning with formal studies of recurrent quantum architectures (Quantum Recurrent Networks[11]) and convergent dynamics (Nonlinear Convergent Dynamics[24]). Compared to neighboring work on noisy environments (Noisy Temporal Processing[23]), Feedback Recurrent Quantum[0] emphasizes provable approximation capabilities rather than empirical robustness, offering complementary insights into what feedback-based reservoirs can achieve under idealized conditions and how complexity scales with system size and feedback depth.

Claimed Contributions

Approximation bounds for recurrent quantum neural networks without curse of dimensionality

The authors prove that recurrent quantum neural networks can approximate regular state-space systems with approximation error decaying as 1/√n, where the number of required qubits grows only logarithmically in 1/ε for target accuracy ε, avoiding exponential scaling in dimension.

2 retrieved papers
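
As a quick illustration of the qubit-scaling claim, the minimal sketch below tabulates a hypothetical qubit count growing as c*log2(1/eps) against an exponential baseline of the kind the curse of dimensionality would impose. The constant c, the dimension d, and the baseline formula are illustrative assumptions, not values from the paper.

```python
import math

def qubits_logarithmic(eps: float, c: float = 2.0) -> int:
    """Qubit count under the claimed O(log(1/eps)) scaling.

    The constant c is an illustrative assumption; in the paper it would
    depend on the regularity of the target state-space system.
    """
    return math.ceil(c * math.log2(1.0 / eps))

def qubits_cursed(eps: float, d: int = 4) -> int:
    """Baseline (1/eps)^d resource count that a method suffering the curse
    of dimensionality in d dimensions would exhibit (illustrative only)."""
    return math.ceil((1.0 / eps) ** d)

for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    print(f"eps={eps:g}: log-scaling ~{qubits_logarithmic(eps):>3} qubits "
          f"vs cursed baseline ~{qubits_cursed(eps):.0e}")
```

With these assumed constants the logarithmic rule yields roughly 7, 14, 20, and 27 qubits as eps shrinks by successive factors of ten, while the baseline explodes exponentially.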
Universality of RQNNs with linear readouts for fading memory filters

The authors establish that recurrent quantum neural networks with linear readout layers are universal approximators for any causal, time-invariant filter satisfying the fading memory property, matching the expressivity of classical reservoir computing methods.

5 retrieved papers
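
To make the "only the readout is trained" aspect of this claim concrete, here is a minimal classical surrogate: a fixed contractive recurrent map stands in for the quantum reservoir dynamics (an assumption purely for illustration; no quantum simulation is performed), and a linear readout is fit by least squares to a toy fading-memory target. All sizes and the target filter are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed "reservoir": a contractive recurrent map stands in for the RQNN
# state evolution. All sizes and the target filter are illustrative.
dim, T, washout = 50, 2000, 100
W = rng.normal(size=(dim, dim))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1, fading memory
w_in = rng.normal(size=dim)

u = rng.uniform(-1.0, 1.0, size=T)         # input sequence
x = np.zeros(dim)
states = np.empty((T, dim))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])       # untrained nonlinear dynamics
    states[t] = x

# Toy causal, time-invariant, fading-memory target: y_t = u_t * u_{t-1}.
y = np.concatenate(([0.0], u[1:] * u[:-1]))

# The linear readout is the only trained component.
coef, *_ = np.linalg.lstsq(states[washout:], y[washout:], rcond=None)
mse = np.mean((states[washout:] @ coef - y[washout:]) ** 2)
print(f"linear-readout MSE on toy fading-memory filter: {mse:.2e}")
```

This echo-state-style construction only mirrors the structure of the claim (fixed dynamics, trained linear readout); the paper's result concerns genuinely quantum state evolutions.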
Novel approximation results for feedforward QNNs and their derivatives

The authors develop new approximation error bounds showing that feedforward quantum neural networks can simultaneously approximate target functions and their derivatives, which is essential for analyzing recurrent architectures with feedback loops.

10 retrieved papers
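
The notion of simultaneously approximating a function and its derivative can be illustrated with a truncated Fourier model, a function class often used as a classical surrogate for the outputs of parameterized quantum circuits. The target function and the truncation degrees below are assumptions for illustration, not the paper's construction; the fit uses function values only, and the derivative error is then checked separately.

```python
import numpy as np

# Fit truncated Fourier series of increasing degree to a smooth target,
# then measure both the function error and the derivative error in sup
# norm. Target and degrees are illustrative assumptions only.
xs = np.linspace(-np.pi, np.pi, 400)
f = np.sin(2 * xs) + 0.3 * np.cos(5 * xs)
df = 2 * np.cos(2 * xs) - 1.5 * np.sin(5 * xs)

for deg in (3, 4, 5, 6):
    feats = np.column_stack(
        [np.ones_like(xs)]
        + [g(k * xs) for k in range(1, deg + 1) for g in (np.sin, np.cos)])
    # Derivatives of the same features: d/dx sin(kx) = k cos(kx), etc.
    dfeats = np.column_stack(
        [np.zeros_like(xs)]
        + [h for k in range(1, deg + 1)
           for h in (k * np.cos(k * xs), -k * np.sin(k * xs))])
    coef, *_ = np.linalg.lstsq(feats, f, rcond=None)
    err_f = np.max(np.abs(feats @ coef - f))
    err_df = np.max(np.abs(dfeats @ coef - df))
    print(f"degree {deg}: sup|f-f_hat|={err_f:.3f}, sup|f'-f_hat'|={err_df:.3f}")
```

Note how the derivative error tracks the function error but is amplified at low degrees, and how both vanish once the model class is rich enough; this joint control is the kind of behavior the contribution's bounds formalize.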

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution 1

Approximation bounds for recurrent quantum neural networks without curse of dimensionality

The authors prove that recurrent quantum neural networks can approximate regular state-space systems with approximation error decaying as 1/√n, where the number of required qubits grows only logarithmically in 1/ε for target accuracy ε, avoiding exponential scaling in dimension.
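
The two claimed rates can be displayed together as follows; the constant C and the precise error norm are not specified here, so this is a schematic rendering of the claim rather than the paper's theorem.

```latex
% Schematic rates (constants and norms are placeholders):
\[
  \mathrm{error}(n) \;\le\; \frac{C}{\sqrt{n}}
  \qquad\text{and}\qquad
  n_{\mathrm{qubits}}(\varepsilon) \;=\; O\!\left(\log \frac{1}{\varepsilon}\right),
\]
% i.e., halving the error roughly quadruples n, while each extra decimal
% digit of accuracy costs only a constant number of additional qubits.
```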

Contribution 2

Universality of RQNNs with linear readouts for fading memory filters

The authors establish that recurrent quantum neural networks with linear readout layers are universal approximators for any causal, time-invariant filter satisfying the fading memory property, matching the expressivity of classical reservoir computing methods.
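
Read schematically, the universality statement has the following shape, where U is the target causal, time-invariant fading-memory filter, rho_t(u) is the reservoir state driven by input u, W is the trained linear readout, and K_M is a bounded class of input sequences. All of this notation is our own schematic rendering, not the paper's.

```latex
% Schematic linear-readout universality (notation assumed, not the paper's):
% for every fading-memory filter U and every eps > 0 there exist RQNN
% dynamics rho and a linear readout W such that
\[
  \sup_{u \in K_M} \; \sup_{t \in \mathbb{Z}_{\le 0}} \;
  \bigl|\, U(u)_t - W^{\top} \rho_t(u) \,\bigr| \;\le\; \varepsilon .
\]
```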

Contribution 3

Novel approximation results for feedforward QNNs and their derivatives

The authors develop new approximation error bounds showing that feedforward quantum neural networks can simultaneously approximate target functions and their derivatives, which is essential for analyzing recurrent architectures with feedback loops.
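
In schematic form, simultaneous approximation of a function and its derivative amounts to a bound in a C^1-type norm; the display below is our hedged rendering of that notion, not the paper's exact statement.

```latex
% Schematic C^1 approximation bound (our rendering, not the paper's):
\[
  \| f - \hat f_{\mathrm{QNN}} \|_{\infty}
  \;+\;
  \| f' - \hat f_{\mathrm{QNN}}' \|_{\infty}
  \;\le\; \varepsilon ,
\]
% which is what makes the approximant safe inside a feedback loop:
% controlling f' controls how errors propagate through the recurrence.
```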