shredx.modules.rnn.MOELSTMEncoder#
- class shredx.modules.rnn.MOELSTMEncoder(input_size: int, hidden_size: int, n_experts: int, forecast_length: int, strict_symmetry: bool, num_layers: int, dropout: float, device: str = 'cpu', **kwargs)#
Bases: Module, MOESINDyLayerHelpersMixin

Mixture of Experts LSTM with SINDy layer forecasting.
Combines an LSTM encoder with multiple SINDy expert layers for long-horizon forecasting. Expert outputs are combined via learned weighted averaging.
- Parameters:
- input_size : int
Input feature dimension.
- hidden_size : int
Hidden state dimension for LSTM and experts.
- n_experts : int
Number of SINDy expert layers.
- forecast_length : int
Number of timesteps to forecast.
- strict_symmetry : bool
If True, enforce symmetric SINDy coefficients.
- num_layers : int
Number of LSTM layers.
- dropout : float
Dropout probability for expert weighting.
- device : str, optional
Device on which to place the module. Default is "cpu".
- **kwargs
Additional keyword arguments (ignored).
Methods
forward(x): Forward pass through the MOE-LSTM model.
Notes
Class Methods:
initialize():
Initializes the LSTM, expert combination weights, and SINDy expert layers (called from __init__).
- Returns:
None.
forward(x):
Processes input through the LSTM, then passes the final hidden state through all SINDy experts and combines their outputs.
- Parameters:
x : Float[torch.Tensor, "batch sequence input_size"]
Input tensor.
- Returns:
tuple. Tuple containing the final output tensor of shape (batch_size, forecast_length, 1, hidden_size) and None for no auxiliary losses.
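The data flow described above (LSTM encoder, final hidden state fed to every expert, learned weighted combination) can be sketched with a simplified stand-in. This is not the shredx implementation: the SINDy expert layers are replaced here by plain Linear layers, and the class name ToyMOELSTM and all internal names are illustrative assumptions. Only the input/output shapes follow the documented signature.

```python
import torch
import torch.nn as nn

class ToyMOELSTM(nn.Module):
    """Simplified sketch of the MOE-LSTM forward pattern (not the shredx API)."""

    def __init__(self, input_size, hidden_size, n_experts,
                 forecast_length, num_layers=1):
        super().__init__()
        self.forecast_length = forecast_length
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers,
                            batch_first=True)
        # Stand-ins for the SINDy expert layers: each maps the final
        # hidden state to a full forecast trajectory.
        self.experts = nn.ModuleList(
            nn.Linear(hidden_size, forecast_length * hidden_size)
            for _ in range(n_experts)
        )
        # Learned logits for the weighted average over experts.
        self.expert_logits = nn.Parameter(torch.zeros(n_experts))

    def forward(self, x):
        # x: (batch, sequence, input_size)
        _, (h_n, _) = self.lstm(x)          # h_n: (num_layers, batch, hidden)
        h = h_n[-1]                          # final layer's hidden state
        # Stack expert forecasts: (n_experts, batch, forecast_length, hidden)
        outs = torch.stack([
            e(h).view(h.size(0), self.forecast_length, -1)
            for e in self.experts
        ])
        # Combine experts via softmax-normalized learned weights.
        w = torch.softmax(self.expert_logits, dim=0)
        combined = (w[:, None, None, None] * outs).sum(0)
        # Insert the singleton dim to match the documented output shape,
        # and return None for the auxiliary losses slot.
        return combined.unsqueeze(2), None

model = ToyMOELSTM(input_size=3, hidden_size=8, n_experts=4,
                   forecast_length=5)
y, aux = model(torch.randn(2, 10, 3))
print(tuple(y.shape))  # (2, 5, 1, 8)
```

The weighted average makes the expert mixture differentiable end to end, so the combination weights are trained jointly with the LSTM and the experts.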