shredx.modules.transformer

Transformer encoders for sequence modeling.

Implements standard and SINDy-augmented transformer encoders compatible with encoder–decoder architectures.

Classes

MultiHeadAttention(E_q, E_k, E_v, E_total, ...)

Standard multi-head attention mechanism.

MultiHeadSINDyAttention(E_q, E_k, E_v, ...)

Multi-head attention with SINDy-based latent-space rollout.

SINDyAttentionSINDyLossTransformerEncoder(...)

Transformer encoder with SINDy attention and SINDy loss regularization.

SINDyAttentionTransformerEncoder(d_model, ...)

Transformer encoder with SINDy-based attention in the final layer.

SINDyLossTransformerEncoder(d_model, ...[, ...])

Transformer encoder with SINDy loss regularization.

TransformerEncoder(d_model, n_heads, ...[, ...])

Standard transformer encoder for sequence modeling.

TransformerEncoderLayer(d_model, n_heads, ...)

Single transformer encoder layer.

TransformerEncoderModule(encoder_layer, ...)

Stack of transformer encoder layers.
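The SINDy-augmented classes above build on sparse identification of nonlinear dynamics (SINDy), which fits a sparse coefficient matrix mapping a library of candidate functions of the latent state to its time derivative. As a self-contained illustration of that regression step, not of the shredx API itself, the sketch below implements sequentially thresholded least squares (STLSQ), the standard SINDy solver, in NumPy and recovers a simple linear latent dynamic:

```python
import numpy as np

def stlsq(Theta, dXdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: the sparse regression at
    the heart of SINDy. Theta is the (T, p) library matrix evaluated on
    the latent trajectory; dXdt is the (T, d) matrix of derivatives."""
    # Initial dense least-squares fit.
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        # Zero out small coefficients, then refit the surviving terms.
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(
                    Theta[:, big], dXdt[:, k], rcond=None
                )[0]
    return Xi

# Toy example: recover dz/dt = -2 z from samples of z(t) = exp(-2 t).
t = np.linspace(0.0, 2.0, 200)
z = np.exp(-2.0 * t)[:, None]                   # latent trajectory, shape (T, 1)
dz = -2.0 * z                                   # exact derivative
Theta = np.hstack([np.ones_like(z), z, z**2])   # candidate library: [1, z, z^2]
Xi = stlsq(Theta, dz, threshold=0.05)
# Thresholding keeps only the linear term, so Xi is approximately [0, -2, 0].
```

A SINDy-based rollout, as in MultiHeadSINDyAttention, would then advance the latent state by integrating `Theta(z) @ Xi` forward in time; the exact library, latent dimension, and loss weighting used by the shredx classes are defined by their constructors, not by this sketch.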