pystencils-autodiff

This is the documentation of pystencils-autodiff.

This document assumes that you are already familiar with pystencils. If not, the pystencils tutorials are a good place to start.

Installation of this Auto-Diff Extension

Install via pip:

pip install pystencils-autodiff

or, if you have downloaded this repository, from its root directory using:

pip install -e .

You can then access the submodule pystencils.autodiff:

import pystencils.autodiff

Usage

Create a pystencils.AssignmentCollection with pystencils:

import sympy
import pystencils

z, y, x = pystencils.fields("z, y, x: [20,30]")

forward_assignments = pystencils.AssignmentCollection({
    z[0, 0]: x[0, 0] * sympy.log(x[0, 0] * y[0, 0])
})

print(forward_assignments)
Subexpressions:
Main Assignments:
     z[0,0] ← x_C*log(x_C*y_C)
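
Because forward_assignments is an ordinary pystencils.AssignmentCollection, you can also compile and run it as a kernel right away. The following is a minimal CPU sketch; the array shapes match the [20,30] fields declared above:

import numpy as np

# Input arrays for the fields x and y; values in (0, 1) keep x*y positive,
# so the logarithm is well defined.
x_arr = np.random.rand(20, 30)
y_arr = np.random.rand(20, 30)
z_arr = np.zeros((20, 30))

# Compile the forward assignments into a CPU kernel and execute it.
kernel = pystencils.create_kernel(forward_assignments).compile()
kernel(x=x_arr, y=y_arr, z=z_arr)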

You can then obtain the corresponding backward assignments:

from pystencils.autodiff import AutoDiffOp, create_backward_assignments
backward_assignments = create_backward_assignments(forward_assignments)

# Sorting for reproducible outputs
backward_assignments.main_assignments = sorted(backward_assignments.main_assignments, key=lambda a: str(a))

print(backward_assignments)

You can see the derivatives with respect to the two inputs, each multiplied by diffz_C, the gradient of the output z_C.

Subexpressions:

Main Assignments:
    \hat{x}[0,0] ← diffz_C*(log(x_C*y_C) + 1)
    \hat{y}[0,0] ← diffz_C*x_C/y_C
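
These are exactly what a manual application of the product and chain rules to z = x*log(x*y) gives:

    ∂z/∂x = log(x*y) + x*(y/(x*y)) = log(x*y) + 1
    ∂z/∂y = x*(x/(x*y)) = x/y

Each derivative is multiplied by the incoming gradient diffz_C, which is how reverse-mode differentiation propagates the output gradient back to the inputs.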

You can also use the class pystencils_autodiff.AutoDiffOp to obtain both the backward assignments (if you are curious) and auto-differentiable operations for TensorFlow…

op = AutoDiffOp(forward_assignments)
backward_assignments = op.backward_assignments

tensorflow_op = op.create_tensorflow_op(backend='tensorflow_native', use_cuda=False)

… or Torch:

# Despite its name, create_tensorflow_op also creates Torch ops when given a
# torch backend.
torch_op = op.create_tensorflow_op(backend='torch_native', use_cuda=False)
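
The exact call convention of the generated ops is not shown here, so treat the following as a hypothetical usage sketch for the Torch op, and check the returned object's signature in your environment before relying on it:

import torch

# Leaf tensors matching the 20x30 fields; values in (0, 1) keep x*y positive.
x_t = torch.rand(20, 30, requires_grad=True)
y_t = torch.rand(20, 30, requires_grad=True)

z_t = torch_op(x_t, y_t)      # forward pass (assumed positional arguments)
z_t.sum().backward()          # reverse pass through the generated kernel
print(x_t.grad.shape, y_t.grad.shape)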
