opm-common

Public Member Functions

NNLayerDense (Tensor< float > weights={}, Tensor< float > biases={}, ActivationType activation_type=ActivationType::kLinear)
bool loadLayer (std::ifstream &file) override
bool apply (const Tensor< Evaluation > &in, Tensor< Evaluation > &out) override
    Applies the forward pass of a dense (fully connected) neural-network layer.
apply() [override, virtual]
Applies the forward pass of a dense (fully connected) neural-network layer.
This method performs a matrix–vector multiplication between the layer's weight matrix and the input tensor, adds the bias vector, and then applies the configured activation function.
This implements:
\[
\text{tmp}_j = \sum_i \text{in}_i \cdot W_{i,j} + b_j,
\qquad \text{out} = \text{activation}(\text{tmp})
\]
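The formula above can be sketched in plain C++. This is a minimal illustration only, not the library's implementation: the real method operates on Opm::ML::Tensor< Evaluation >, whereas here plain std::vector<float> stands in for the tensors, and ReLU is used as an example activation (the default is ActivationType::kLinear).

```cpp
#include <vector>
#include <cstddef>
#include <algorithm>

// Hypothetical simplified dense forward pass: out_j = activation(sum_i in_i * W[i,j] + b_j).
// W is stored row-major with shape [in_dim][out_dim], matching the row-major
// access pattern described in the documentation.
std::vector<float> denseForward(const std::vector<float>& in,
                                const std::vector<float>& W,
                                const std::vector<float>& b,
                                std::size_t in_dim, std::size_t out_dim)
{
    std::vector<float> out(out_dim);
    for (std::size_t j = 0; j < out_dim; ++j) {
        float tmp = b[j];                         // start from the bias b_j
        for (std::size_t i = 0; i < in_dim; ++i)
            tmp += in[i] * W[i * out_dim + j];    // accumulate in_i * W_{i,j}
        out[j] = std::max(0.0f, tmp);             // example activation: ReLU
    }
    return out;
}
```

Note that for each output index j the inner loop strides through W with step out_dim, which is the access pattern discussed in the layout remarks below.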
The current implementation assumes row-major access to W, which is efficient for larger batch sizes. For inference with very small batches (especially 1 × input_dim), a column-major layout or a transposed multiply could improve cache locality, because each output neuron would then read contiguous memory. Whether to switch depends on the expected inference batch sizes and on the storage layout of Tensor<Evaluation>, which will in turn be determined by future ML applications of this code.
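The transposed-multiply alternative mentioned above can be illustrated as follows. This is a hypothetical sketch, not code from the library: Wt stores the weights transposed, shape [out_dim][in_dim], so each output neuron's weights are contiguous and the batch-1 inner loop is a sequential scan; a linear activation is used for simplicity.

```cpp
#include <vector>
#include <cstddef>

// Hypothetical sketch of the transposed multiply for batch-1 inference.
// Wt has shape [out_dim][in_dim]; the weights of neuron j occupy one
// contiguous row, so the inner loop reads memory sequentially.
std::vector<float> denseForwardTransposed(const std::vector<float>& in,
                                          const std::vector<float>& Wt,
                                          const std::vector<float>& b,
                                          std::size_t in_dim, std::size_t out_dim)
{
    std::vector<float> out(out_dim);
    for (std::size_t j = 0; j < out_dim; ++j) {
        float tmp = b[j];
        const float* wrow = &Wt[j * in_dim];   // contiguous weight row for neuron j
        for (std::size_t i = 0; i < in_dim; ++i)
            tmp += in[i] * wrow[i];            // unit-stride reads of both operands
        out[j] = tmp;                          // linear activation (kLinear)
    }
    return out;
}
```

The trade-off: this layout favors a single input vector, while the row-major form lets a batched multiply reuse each cache line of W across many inputs.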
Implements Opm::ML::NNLayer< Evaluation >.