torchkit.losses
- torchkit.losses.one_hot(y, K, smooth_eps=0)[source]
One-hot encodes a tensor, with optional label smoothing.
- Parameters
y (Tensor) – A tensor containing the ground-truth labels of shape (N,), i.e. one label for each element in the batch.
K (int) – The number of classes.
smooth_eps (float, optional) – Label smoothing factor in the [0, 1] range. Defaults to 0, which corresponds to no label smoothing.
- Returns
The one-hot encoded tensor.
- Return type
Tensor
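A minimal sketch of what this might compute. Note this is an illustration, not torchkit's actual implementation, and the smoothing convention is an assumption: a fraction smooth_eps of the probability mass is spread uniformly over all K classes.

```python
import torch

def one_hot_sketch(y, K, smooth_eps=0.0):
    # Hypothetical reimplementation for illustration; not torchkit's code.
    onehot = torch.zeros(y.shape[0], K).scatter_(1, y.unsqueeze(1), 1.0)
    # Assumed convention: spread smooth_eps of the mass uniformly over all K
    # classes, so the true class keeps (1 - smooth_eps) + smooth_eps / K.
    return onehot * (1.0 - smooth_eps) + smooth_eps / K

labels = torch.tensor([0, 2])
print(one_hot_sketch(labels, K=3, smooth_eps=0.1))
```

Each row remains a valid probability distribution (it sums to 1), and with smooth_eps=0 the result is a plain one-hot encoding.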
- torchkit.losses.cross_entropy(logits, labels, smooth_eps=0, reduction='mean')[source]
Cross-entropy loss with support for label smoothing.
- Parameters
logits (Tensor) – A FloatTensor containing the raw logits, i.e. no softmax has been applied to the model output. The tensor should be of shape (N, K) where K is the number of classes.
labels (Tensor) – A rank-1 LongTensor containing the ground-truth labels, one per batch element.
smooth_eps (float, optional) – The label smoothing factor in the [0, 1] range. Defaults to 0, which corresponds to no label smoothing.
reduction (str, optional) – The reduction strategy on the final loss tensor. Defaults to “mean”.
- Return type
Tensor
- Returns
If reduction is “none”, a 1D Tensor of per-example losses with shape (N,). If reduction is “sum” or “mean”, a scalar Tensor.
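A minimal sketch of a smoothed cross-entropy, assuming the same smoothing convention as one_hot (smooth_eps of the mass spread uniformly over all K classes). This is an illustration of the technique, not torchkit's actual code.

```python
import torch
import torch.nn.functional as F

def cross_entropy_sketch(logits, labels, smooth_eps=0.0, reduction="mean"):
    # Hypothetical reimplementation for illustration; not torchkit's code.
    K = logits.shape[1]
    # Smoothed targets (assumed convention: uniform over all K classes).
    targets = F.one_hot(labels, K).float() * (1.0 - smooth_eps) + smooth_eps / K
    # Expected negative log-likelihood under the softmax distribution.
    losses = -(targets * F.log_softmax(logits, dim=1)).sum(dim=1)
    if reduction == "mean":
        return losses.mean()
    if reduction == "sum":
        return losses.sum()
    return losses  # reduction == "none": per-example losses of shape (N,)

logits = torch.randn(4, 3)
labels = torch.tensor([0, 1, 2, 1])
print(cross_entropy_sketch(logits, labels))
```

With smooth_eps=0 this reduces to the standard cross-entropy, so it agrees with torch.nn.functional.cross_entropy in that case.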
- torchkit.losses.huber_loss(input, target, delta, reduction='mean')[source]
Huber loss with tunable margin, as defined in [1].
- Parameters
input (Tensor) – A FloatTensor representing the model output.
target (Tensor) – A FloatTensor representing the target values.
delta (float) – The margin on the absolute difference diff between input and target: values of diff at most delta incur a quadratic penalty, while larger values incur a linear penalty.
reduction (str, optional) – The reduction strategy on the final loss tensor. Defaults to “mean”.
- Return type
Tensor
- Returns
If reduction is “none”, an elementwise loss Tensor with the same shape as input. If reduction is “sum” or “mean”, a scalar Tensor.
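The piecewise definition above can be sketched as follows. This is an illustration under the stated quadratic-inside / linear-outside convention, not torchkit's actual code.

```python
import torch

def huber_loss_sketch(input, target, delta, reduction="mean"):
    # Hypothetical reimplementation for illustration; not torchkit's code.
    diff = (input - target).abs()
    # Quadratic inside the margin (diff <= delta), linear outside it.
    # The linear branch is offset so the two pieces meet at diff == delta.
    losses = torch.where(
        diff <= delta,
        0.5 * diff.pow(2),
        delta * (diff - 0.5 * delta),
    )
    if reduction == "mean":
        return losses.mean()
    if reduction == "sum":
        return losses.sum()
    return losses  # reduction == "none": same shape as input

x = torch.tensor([0.0, 0.0])
t = torch.tensor([0.5, 2.0])
# diff = 0.5 falls in the quadratic branch, diff = 2.0 in the linear branch.
print(huber_loss_sketch(x, t, delta=1.0, reduction="none"))
```

For delta=1.0 the two elements give 0.5 * 0.5**2 = 0.125 (quadratic) and 1.0 * (2.0 - 0.5) = 1.5 (linear).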