PhilipLoewen/TensorGradient

Given a function f that maps a numpy ndarray to a scalar, and a specific input ndarray A, this code uses a fourth-order finite-difference scheme to approximate the ndarray-valued gradient grad f(A).
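
As a rough sketch only (not the repository's actual implementation), an entrywise fourth-order central difference could be coded as follows. The signature grad(f, A, h) and the default step size h are assumptions made here for illustration:

     import numpy as np

     def grad(f, A, h=1e-4):
         # Approximate grad f(A) one entry at a time, using the
         # fourth-order central difference
         #   f'(x) ~ (-f(x+2h) + 8 f(x+h) - 8 f(x-h) + f(x-2h)) / (12 h).
         A = np.asarray(A, dtype=float)
         G = np.zeros_like(A)
         it = np.nditer(A, flags=['multi_index'])
         for _ in it:
             idx = it.multi_index
             E = np.zeros_like(A)
             E[idx] = 1.0                      # perturb a single entry of A
             G[idx] = (-f(A + 2*h*E) + 8*f(A + h*E)
                       - 8*f(A - h*E) + f(A - 2*h*E)) / (12*h)
         return G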

The return value is an ndarray with the same shape as A, organized to support the following kind of linear approximation:
     f(A + dA) is very close to f(A) + grad(f,A)*dA.
On the right-hand side, the operator "*" denotes the appropriate version of the dot product. In numpy, the final term on the right can be coded like this:
     numpy.tensordot( grad(f,A), dA, axes=A.ndim ).item()

Two fine points:

  1. Specifying the optional parameter "axes" in tensordot is essential.
  2. The tensordot function returns a 0-dimensional ndarray containing a single floating-point number. An explicit conversion is needed to turn it into a Python scalar; the item() method does exactly that.
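
For illustration only, the linear approximation above can be checked numerically. The function f and the arrays A and dA below are placeholders invented for this example, and grad refers to the sketch shown earlier:

     import numpy as np

     def f(X):                          # a sample scalar-valued function of an ndarray
         return np.sum(X**2)

     A  = np.random.rand(3, 4)          # base point
     dA = 1e-3 * np.random.rand(3, 4)   # small perturbation

     G = grad(f, A)                     # ndarray with the same shape as A
     linear = f(A) + np.tensordot(G, dA, axes=A.ndim).item()
     print(abs(f(A + dA) - linear))     # should be tiny (second order in dA)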

