Tag: machine-learning

Pytorch use nondeterministic algorithm

Detects if APIs with nondeterministic algorithms are used.
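A minimal sketch of the opt-in PyTorch provides, using `torch.use_deterministic_algorithms`; with the flag set, any op that lacks a deterministic implementation raises an error instead of silently varying between runs:

```python
import torch

# Opt in to deterministic implementations; PyTorch then raises an error
# for any operation that has no deterministic variant.
torch.use_deterministic_algorithms(True)

x = torch.randn(4, 4)
y = x @ x  # matmul has a deterministic implementation, so this is fine
```

Note that determinism usually costs performance, so the flag is typically enabled only for debugging or reproducibility runs.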

Tensorflow enable ops determinism

Non-deterministic ops might return different outputs when run with the same inputs.
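A minimal configuration sketch, assuming TensorFlow 2.8 or later where `tf.config.experimental.enable_op_determinism` is available:

```python
import tensorflow as tf

# Seed Python, NumPy and TensorFlow in one call, then force ops to run
# deterministically; together these make runs reproducible, at some
# performance cost.
tf.keras.utils.set_random_seed(42)
tf.config.experimental.enable_op_determinism()
```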

Pytorch assign in place mod

Detects if a torch variable is modified in place inside an assignment.
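An illustrative sketch of the pattern this rule flags: in-place ops such as `add_` mutate the tensor *and* return it, so assigning the result to a new name creates an alias, not a copy:

```python
import torch

x = torch.ones(3)

# Anti-pattern: add_ mutates x in place and returns it, so y is just
# another name for x -- the assignment is misleading.
y = x.add_(1)
assert y is x

# Preferred: use the out-of-place op when a distinct result is wanted.
x = torch.ones(3)
z = x.add(1)
assert z is not x
```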

Pytorch disable gradient calculation

Checks if gradient calculation is disabled during evaluation.
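A minimal sketch of the recommended pattern (the `nn.Linear` model here is only illustrative): wrapping evaluation in `torch.no_grad()` skips building the autograd graph, saving memory and compute:

```python
import torch
from torch import nn

model = nn.Linear(4, 2)  # illustrative model
inputs = torch.randn(8, 4)

# Disable autograd during evaluation: no graph is built.
with torch.no_grad():
    outputs = model(inputs)

assert not outputs.requires_grad
```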

Pytorch miss call to eval

Checks if eval() is called before validating or testing a model.
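An illustrative sketch (the two-layer model is hypothetical): `eval()` switches layers such as Dropout and BatchNorm into inference behavior, which is required for correct validation results:

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))  # illustrative

model.eval()   # put Dropout/BatchNorm layers into inference mode
with torch.no_grad():
    preds = model(torch.randn(2, 4))

model.train()  # switch back before resuming training
```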

Notebook best practice violation

Best practices to improve the maintainability of notebooks.

Pytorch redundant softmax

Detects if Softmax is used with CrossEntropyLoss.
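A minimal sketch of why this is redundant: `nn.CrossEntropyLoss` already applies log-softmax internally, so it must be fed raw logits, not softmax probabilities:

```python
import torch
from torch import nn

logits = torch.randn(5, 3)
targets = torch.tensor([0, 2, 1, 1, 0])
loss_fn = nn.CrossEntropyLoss()

# Anti-pattern: the loss applies log-softmax itself, so passing
# softmax outputs double-applies the normalization.
bad = loss_fn(torch.softmax(logits, dim=1), targets)

# Correct: pass raw logits.
good = loss_fn(logits, targets)
```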

Tensorflow redundant softmax

Detects if Softmax is explicitly computed.
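A minimal Keras sketch of the preferred pattern: keep the model output as logits and set `from_logits=True` on the loss, so the fused, numerically stabler log-softmax is used instead of an explicit `tf.nn.softmax`:

```python
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])
labels = tf.constant([0])

# Preferred: let the loss handle the softmax internally.
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss = loss_fn(labels, logits)
```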

Pytorch control sources of randomness

Not setting seeds for the random number generators in Pytorch can lead to reproducibility issues.
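A minimal sketch; `seed_everything` is a hypothetical helper name, but the calls inside it are the standard ones a PyTorch script needs to seed:

```python
import random

import numpy as np
import torch

def seed_everything(seed: int = 42) -> None:
    """Hypothetical helper: seed every RNG a typical script touches."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)           # CPU generator
    torch.cuda.manual_seed_all(seed)  # safe no-op without CUDA

seed_everything(42)
a = torch.randn(3)
seed_everything(42)
b = torch.randn(3)
assert torch.equal(a, b)  # same seed, same draw
```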

Notebook invalid execution order

A variable is used before it is initialized, given the notebook's cell execution order.

Pytorch sigmoid before bceloss

Computing BCELoss on sigmoid outputs can be replaced by a single BCEWithLogitsLoss, which is numerically more stable.
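A minimal sketch of the rewrite: the fused loss applies the log-sum-exp trick internally, avoiding the overflow/underflow that `sigmoid` followed by `BCELoss` can hit for large-magnitude logits:

```python
import torch
from torch import nn

logits = torch.randn(4)
targets = torch.tensor([1.0, 0.0, 1.0, 0.0])

# Anti-pattern: sigmoid followed by BCELoss can lose precision.
bad = nn.BCELoss()(torch.sigmoid(logits), targets)

# Preferred: numerically stable fused op, same value for moderate logits.
good = nn.BCEWithLogitsLoss()(logits, targets)
assert torch.isclose(bad, good, atol=1e-5)
```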

Pytorch data loader with multiple workers

Using DataLoader with num_workers greater than 0 can cause increased memory consumption over time when iterating over native Python objects such as lists or dicts.
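An illustrative mitigation sketch (`ArrayDataset` is a hypothetical name): storing samples in a single NumPy array instead of a list of Python objects avoids the copy-on-access growth, because refcount updates in worker processes do not touch per-sample Python objects:

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class ArrayDataset(Dataset):
    """Hypothetical dataset backed by one contiguous NumPy array."""

    def __init__(self, n: int = 1000, dim: int = 8):
        # One array, not a list of n Python objects.
        self.data = np.random.rand(n, dim).astype(np.float32)

    def __len__(self) -> int:
        return len(self.data)

    def __getitem__(self, idx: int) -> torch.Tensor:
        return torch.from_numpy(self.data[idx])
```

Such a dataset can then be passed to `DataLoader(..., num_workers=2)` without the per-sample memory growth.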

Pytorch avoid softmax with nllloss

Checks if Softmax is used with NLLLoss function.
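A minimal sketch of the problem: `NLLLoss` expects *log*-probabilities, so pairing it with plain Softmax is wrong twice over: the inputs are not logs, and `softmax` is less stable than `log_softmax`:

```python
import torch
from torch import nn

logits = torch.randn(4, 3)
targets = torch.tensor([0, 1, 2, 0])

# Anti-pattern: NLLLoss fed probabilities instead of log-probabilities.
bad = nn.NLLLoss()(torch.softmax(logits, dim=1), targets)

# Correct: log_softmax + NLLLoss (equivalent to CrossEntropyLoss on logits).
good = nn.NLLLoss()(torch.log_softmax(logits, dim=1), targets)
```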

Pytorch miss call to zero grad

Checks if gradients are zeroed out before each backward pass; otherwise they accumulate across iterations.
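A minimal training-loop sketch (model, optimizer, and data are illustrative) showing `zero_grad` called once per step:

```python
import torch
from torch import nn

model = nn.Linear(2, 1)  # illustrative model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for _ in range(3):
    opt.zero_grad()       # clear gradients from the previous step
    pred = model(torch.randn(4, 2))
    loss = loss_fn(pred, torch.randn(4, 1))
    loss.backward()       # without zero_grad, grads would accumulate
    opt.step()
```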

Notebook variable redefinition

A variable is re-defined in multiple cells with different types.

Avoid using nondeterministic Tensorflow API

Detects if nondeterministic tensorflow APIs are used.

Tensorflow control sources of randomness

Detects if a random seed is set before random number generation.
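A minimal sketch using `tf.random.set_seed` to seed the global generator before sampling:

```python
import tensorflow as tf

tf.random.set_seed(42)   # seed before any random number generation
a = tf.random.uniform([3])

tf.random.set_seed(42)
b = tf.random.uniform([3])
# With the same seed, both draws are identical.
```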

PyTorch create tensors directly on device

Creating PyTorch tensors on the CPU and then moving them to the device is inefficient.
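A minimal sketch of the two patterns; passing `device=` at construction avoids the extra CPU allocation and copy (the copy only matters when the device is a GPU, but the direct form is correct either way):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Inefficient on GPU: allocates on CPU, then copies to the device.
slow = torch.zeros(1000, 1000).to(device)

# Better: allocate directly on the target device, no intermediate copy.
fast = torch.zeros(1000, 1000, device=device)
```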