bijx.nn.nets
Complete neural network architectures for convenience and prototyping.
This module provides some common and ready-to-use neural network architectures.
Classes
ConvNet | Feedforward convolutional neural network.
MLP | Multi-layer perceptron for general function approximation.
ResNet | Residual neural network with skip connections.
- class bijx.nn.nets.ConvNet[source]
Bases: Module
Feedforward convolutional neural network.
Implements a standard convolutional architecture with multiple layers. Uses periodic padding by default to respect boundary conditions (see the shape check after the example below).
- Parameters:
  - in_channels (int) – Number of input feature channels.
  - out_channels (int) – Number of output feature channels.
  - kernel_size (tuple[int, ...]) – Spatial size of convolution kernels.
  - hidden_channels (list[int]) – Number of channels in each hidden layer.
  - activation (Callable) – Activation function for hidden layers.
  - final_activation (Callable) – Activation function for output layer.
  - padding (str) – Padding mode (‘CIRCULAR’ for periodic boundaries).
  - rngs (Rngs) – Random number generator state.
Example
>>> net = ConvNet(
...     in_channels=1, out_channels=2,
...     hidden_channels=[16, 32, 16],
...     padding="CIRCULAR", rngs=rngs
... )
>>> output = net(lattice_data)
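The sketch below expands the example into a self-contained shape check of the periodic padding described above. It assumes a channels-last input layout and the default kernel/stride settings; these assumptions are illustrative and not part of the documented API.

>>> import jax.numpy as jnp
>>> from flax import nnx
>>> from bijx.nn.nets import ConvNet
>>> rngs = nnx.Rngs(0)
>>> net = ConvNet(
...     in_channels=1, out_channels=2,
...     hidden_channels=[16, 32, 16],
...     padding="CIRCULAR", rngs=rngs
... )
>>> # Batch of 4 lattice configurations of shape 8x8 with one channel
>>> # (channels-last layout is an assumption here).
>>> lattice_data = jnp.zeros((4, 8, 8, 1))
>>> output = net(lattice_data)
>>> # Expect output.shape == (4, 8, 8, 2): circular padding should preserve
>>> # the 8x8 spatial extent while mapping 1 input channel to 2 output channels.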
- class bijx.nn.nets.ResNet[source]
Bases: Module
Residual neural network with skip connections.
- Parameters:
  - in_features (int) – Input feature dimensionality.
  - out_features (int) – Output feature dimensionality.
  - width (int) – Hidden layer width (number of neurons).
  - depth (int) – Number of residual blocks.
  - activation (Callable) – Activation function for hidden layers.
  - final_activation (Callable) – Activation function for output layer.
  - dropout (float) – Dropout rate for regularization.
  - final_bias_init (Callable) – Initialization for final layer bias.
  - final_kernel_init (Callable) – Initialization for final layer weights.
  - rngs (Rngs) – Random number generator state.
Note
The residual connections are applied to the intermediate representations of fixed width. The final layer maps to the desired output dimensionality. Dropout is applied before each residual block for regularization; a sketch of one such block follows the example below.
Example
>>> net = ResNet(
...     in_features=64, out_features=32,
...     width=512, depth=10, rngs=rngs
... )
>>> output = net(features)
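To make the note above concrete, here is a minimal, self-contained sketch of one residual block in flax nnx, assuming a simple dropout-then-dense layout; the actual block composition inside ResNet may differ (for example in layer ordering or the number of dense layers per block).

>>> import jax.numpy as jnp
>>> from flax import nnx
>>> class ResidualBlock(nnx.Module):
...     """Illustrative block: dropout, one dense layer, then a skip connection."""
...     def __init__(self, width, dropout, rngs):
...         self.dropout = nnx.Dropout(rate=dropout, rngs=rngs)
...         self.linear = nnx.Linear(width, width, rngs=rngs)
...     def __call__(self, x):
...         h = self.dropout(x)                   # dropout applied before the block
...         return x + nnx.gelu(self.linear(h))   # skip connection at fixed width
>>> block = ResidualBlock(width=512, dropout=0.1, rngs=nnx.Rngs(0))
>>> y = block(jnp.ones((4, 512)))  # output shape matches the input: (4, 512)

Keeping the hidden width fixed across blocks is what lets the identity skip connection be added without an extra projection; only the final layer changes the dimensionality to out_features.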
- class bijx.nn.nets.MLP[source]
Bases: Module
Multi-layer perceptron for general function approximation.
Implements a standard feedforward neural network with customizable architecture and activation functions.
- Parameters:
  - in_features (int) – Input feature dimensionality.
  - out_features (int) – Output feature dimensionality.
  - hidden_features (list[int]) – List of hidden layer widths.
  - activation (Callable) – Activation function for hidden layers.
  - final_activation (Callable) – Activation function for output layer.
  - rngs (Rngs) – Random number generator state.
Example
>>> # MLP for coupling layer transformation
>>> net = MLP(
...     in_features=32, out_features=64,
...     hidden_features=[128, 256, 128],
...     activation=nnx.gelu, rngs=rngs
... )
>>> output = net(input_features)
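As a sketch of the coupling-layer use hinted at in the example, the MLP below maps the conditioning half of an input to a shift and log-scale for an affine transform of the other half. The split-and-affine wiring is purely illustrative and is not bijx's own coupling-layer API.

>>> import jax.numpy as jnp
>>> from flax import nnx
>>> from bijx.nn.nets import MLP
>>> rngs = nnx.Rngs(0)
>>> net = MLP(
...     in_features=32, out_features=64,   # 64 = shift and log-scale per coordinate
...     hidden_features=[128, 128],
...     activation=nnx.gelu, rngs=rngs
... )
>>> x_cond = jnp.zeros((4, 32))   # conditioning (frozen) half of the input
>>> x_act = jnp.ones((4, 32))     # half to be transformed
>>> shift, log_scale = jnp.split(net(x_cond), 2, axis=-1)
>>> y_act = x_act * jnp.exp(log_scale) + shift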