shredx.modules.mlp.MOEMLPEncoder

class shredx.modules.mlp.MOEMLPEncoder(input_size: int, hidden_size: int, n_experts: int, forecast_length: int, strict_symmetry: bool, num_layers: int, dropout: float, device: str = 'cpu')

Bases: Module, MOESINDyLayerHelpersMixin

Multi-Layer Perceptron (MLP) with SINDy layer forecasting.

Creates a feedforward neural network whose hidden layers all have the same size, with ReLU activations between layers, followed by multiple SINDy expert layers. Expert outputs are combined via learned weighted averaging.
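The learned weighted averaging over experts can be sketched as follows. This is a minimal, dependency-free illustration of the combination step, not the class's actual implementation; the names `expert_outputs` and `expert_logits` are hypothetical.

```python
import math


def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]


def combine_experts(expert_outputs, expert_logits):
    """Weighted average of per-expert output vectors.

    expert_outputs : list of n_experts output vectors (lists of floats).
    expert_logits  : list of n_experts learned scores (hypothetical name)
                     that are normalized into weights via softmax.
    """
    weights = softmax(expert_logits)
    dim = len(expert_outputs[0])
    return [
        sum(w * out[i] for w, out in zip(weights, expert_outputs))
        for i in range(dim)
    ]
```

For example, with two experts and equal logits, `combine_experts([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])` yields the uniform average `[0.5, 0.5]`.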

Parameters:

input_size : int
  Input feature dimension.

hidden_size : int
  Hidden state dimension for the MLP and experts.

n_experts : int
  Number of SINDy expert layers.

forecast_length : int
  Number of timesteps to forecast.

strict_symmetry : bool
  If True, enforce symmetric SINDy coefficients.

num_layers : int
  Number of MLP layers.

dropout : float
  Dropout probability for expert weighting.

device : str, optional
  Device on which to place the module. Default is "cpu".

Methods

forward(x)

Forward pass through the MOE-MLP model.

Notes

Methods:

forward(x):

  • Processes the input through the MLP, then passes the final hidden state through all SINDy experts and combines their outputs via learned weighted averaging.

  • Parameters:
    • x : Float[torch.Tensor, "batch sequence input_size"]. Input tensor.

  • Returns:
    • tuple. The final output tensor of shape (batch_size, forecast_length, 1, hidden_size), and None (no auxiliary losses).
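The shape flow described above can be traced with a small dependency-free helper. This is a sketch of the documented shapes only, assuming the experts forecast from the final MLP hidden state; it is not part of the class API.

```python
def moe_mlp_forward_shapes(batch, seq, input_size, hidden_size,
                           n_experts, forecast_length):
    """Trace tensor shapes through the documented forward pass.

    The MLP consumes (batch, seq, input_size); each of the n_experts
    SINDy experts forecasts from the final hidden state; the combined
    output has shape (batch, forecast_length, 1, hidden_size), and the
    second return value is None (no auxiliary losses).
    """
    x = (batch, seq, input_size)                         # input tensor
    hidden = (batch, hidden_size)                        # final MLP hidden state
    per_expert = (batch, forecast_length, hidden_size)   # one forecast per expert
    combined = (batch, forecast_length, 1, hidden_size)  # weighted average
    return x, hidden, per_expert, combined, None
```

For instance, with `batch=8`, `seq=16`, `input_size=3`, `hidden_size=32`, `n_experts=4`, and `forecast_length=10`, the combined output shape is `(8, 10, 1, 32)`.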