- apply_dropout(X, dropout_rate)
- Generate a dropout mask and apply fused dropout (see the sketch after this list).
- backward_cupy(layer_outputs, y, weights, activations, reg_lambda, is_binary, dWs, dbs)
- Perform backward pass using CuPy with fused derivative computations.
- calculate_bce_with_logits_loss(logits, targets)
- Calculate binary cross-entropy loss directly from logits (see the stable-formulation sketch after this list).
- calculate_cross_entropy_loss(logits, targets)
- Calculate cross-entropy loss for multi-class classification.
- calculate_loss_from_outputs_binary(outputs, y, weights, reg_lambda)
- Calculate binary classification loss with L2 regularization.
- calculate_loss_from_outputs_multi(outputs, y, weights, reg_lambda)
- Calculate multi-class classification loss with L2 regularization.
- evaluate_batch(y_hat, y_true, is_binary)
- Evaluate batch accuracy for binary or multi-class classification.
- forward_cupy(X, weights, biases, activations, dropout_rate, training, is_binary)
- Perform forward pass using CuPy with fused and in-place operations.
- fuse(*args, **kwargs)
- Decorator that fuses a function. This decorator can be used to define an
  elementwise or reduction kernel more easily than :class:`~cupy.ElementwiseKernel`
  or :class:`~cupy.ReductionKernel`. Since the fused kernels are cached and
  reused, it is recommended to reuse the same decorated functions instead of,
  for example, decorating local functions that are defined multiple times.

  Args:
      kernel_name (str): Name of the fused kernel function. If omitted, the
          name of the decorated function is used.

  Example:
      >>> @cupy.fuse(kernel_name='squared_diff')
      ... def squared_diff(x, y):
      ...     return (x - y) * (x - y)
      ...
      >>> x = cupy.arange(10)
      >>> y = cupy.arange(10)[::-1]
      >>> squared_diff(x, y)
      array([81, 49, 25, 9, 1, 1, 9, 25, 49, 81])
- logsumexp(a, axis=None, keepdims=False)
- Compute log-sum-exp for numerical stability (see the sketch after this list).
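The `apply_dropout` entry combines mask generation with a fused elementwise rescale. A minimal sketch of that pattern is shown below, assuming inverted dropout; the helper names `_dropout_scale` and `apply_dropout_sketch` are illustrative rather than the module's actual code, and the random mask is drawn outside the fused function because `cupy.fuse` only fuses elementwise/reduction arithmetic.

```python
import cupy as cp

@cp.fuse(kernel_name="dropout_scale")
def _dropout_scale(x, mask, scale):
    # Fused elementwise step: zero the dropped units and rescale the
    # survivors (inverted dropout) in a single kernel launch.
    return x * mask * scale

def apply_dropout_sketch(X, dropout_rate):
    # The mask is generated with ordinary CuPy calls; only the
    # multiply-and-scale arithmetic goes through the fused kernel.
    keep_prob = 1.0 - dropout_rate
    mask = (cp.random.random(X.shape) < keep_prob).astype(X.dtype)
    return _dropout_scale(X, mask, 1.0 / keep_prob)
```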
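`calculate_bce_with_logits_loss` works on raw logits, which is typically done so that the sigmoid and the log are never evaluated separately. The standard stable formulation is sketched here; the function name and the mean reduction are assumptions, not taken from the module.

```python
import cupy as cp

def bce_with_logits_sketch(logits, targets):
    # Stable rewrite of -[t*log(sigmoid(z)) + (1-t)*log(1-sigmoid(z))]:
    # max(z, 0) - z*t + log(1 + exp(-|z|)), which cannot overflow in exp().
    z, t = logits, targets
    return cp.mean(cp.maximum(z, 0) - z * t + cp.log1p(cp.exp(-cp.abs(z))))
```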
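`logsumexp` is the numerical-stability helper behind the multi-class loss; the conventional max-shift formulation is sketched below. The signature mirrors the entry above, but the body is an assumption rather than the module's implementation.

```python
import cupy as cp

def logsumexp_sketch(a, axis=None, keepdims=False):
    # Shift by the maximum before exponentiating so exp() cannot overflow,
    # then add the shift back after the log.
    a_max = cp.max(a, axis=axis, keepdims=True)
    out = cp.log(cp.sum(cp.exp(a - a_max), axis=axis, keepdims=True)) + a_max
    return out if keepdims else cp.squeeze(out, axis=axis)
```

In a multi-class loss this typically appears as `log_probs = logits - logsumexp_sketch(logits, axis=1, keepdims=True)`, i.e. a numerically stable log-softmax.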