An example of the type of problem which requires optimization is the motion of a particle subjected to constraints due to a physical barrier, such as the one shown below, where a mass under the action of gravity moves on the surface of a paraboloid.

I briefly present the method, and then I describe how to use automatic differentiation to numerically solve the equations using the fourth order Runge-Kutta method. As with every post, I share an interactive numerical implementation of the method. On this occasion, the demo is built using three.js and Pyodide.
Interactive Demo:#
Mechanics of a particle with constraints: The Lagrange multipliers method.#
In rectangular coordinates \( r=(x,y,z) \), the motion of a particle of mass \( m \) under the effect of gravity is described by the Lagrangian: $$ L_o = \frac{1}{2} m \dot{r}^2 - m g z \quad(1)$$
If the particle is constrained to move over a surface, such as an inclined plane or another landscape described by a function \( z = f(x,y) \), the Lagrangian \( (1) \) needs to be complemented with a term enforcing the constraint:
$$L = L_o + \lambda(z-f(x,y)) \quad(2)$$
Where \( \lambda \) is the Lagrange multiplier, a quantity which needs to be determined to ensure the constraint is fulfilled. The equations of motion, obtained by applying the Euler-Lagrange equations to \( (2) \), are: $$m\ddot{x} = -\lambda f_x\quad(3)$$ $$m\ddot{y} = -\lambda f_y \quad(4)$$ $$m\ddot{z} = -mg + \lambda\quad(5)$$
Where the indices in \( f \) stand for partial derivatives with respect to those variables. The system \( (3)-(5) \) needs to be complemented with the constraint equation, obtained by differentiating \( \dot{z} = \dot{x}f_x + \dot{y}f_y \) once more with respect to time:
$$\ddot{z} = \dot{x}^2 f_{xx} + \dot{y}^2 f_{yy} + \dot{x}\dot{y}(f_{xy}+f_{yx}) + \ddot{x}f_x + \ddot{y}f_y \quad(6)$$
Solving for \( \lambda \) using \( (3)-(6)\) gives:
$$\lambda = \frac{m}{1+f_x^2+f_y^2}(g + \dot{x}^2f_{xx} + \dot{y}^2f_{yy} + \dot{x}\dot{y}(f_{xy}+f_{yx}))\quad(7) $$
Which leads to the equations of motion:
$$\ddot{x}=-\frac{g + \dot{x}^2f_{xx} + \dot{y}^2f_{yy} + \dot{x}\dot{y}(f_{xy}+f_{yx}) }{1+f_x^2+f_y^2} f_x \quad(8)$$ $$\ddot{y}=-\frac{g+ \dot{x}^2f_{xx} + \dot{y}^2f_{yy} + \dot{x}\dot{y}(f_{xy}+f_{yx})}{1+f_x^2+f_y^2} f_y \quad(9)$$ $$\ddot{z}=-g + \frac{g+\dot{x}^2f_{xx} + \dot{y}^2f_{yy} + \dot{x}\dot{y}(f_{xy}+f_{yx})}{1+f_x^2+f_y^2} \quad(10)$$
There is a lot that can be said about these equations, which is of course beyond the scope of this little blog. It is worth noticing, however, that in general they constitute a nonlinear system which can’t be solved analytically, except for a few simple cases, such as an inclined plane. The system can be numerically solved using a general integration scheme such as the Runge-Kutta method, provided we can efficiently compute the derivatives of the constraint.
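For instance, for an inclined plane \( f(x,y) = ax + by \) all second derivatives vanish and \( (8)-(10) \) reduce to constant accelerations. A quick sanity check (with hypothetical slope values) confirms that the resulting acceleration is tangent to the plane, i.e. \( \ddot{z} = a\ddot{x} + b\ddot{y} \):

```python
import math

# Inclined plane f(x, y) = a*x + b*y: second derivatives vanish,
# so equations (8)-(10) reduce to constant accelerations.
g = 10.0
a, b = 0.5, 0.3          # hypothetical slope parameters
D = 1 + a*a + b*b        # denominator 1 + fx^2 + fy^2

ax_ = -g*a/D             # eq. (8) with fx = a
ay_ = -g*b/D             # eq. (9) with fy = b
az_ = -g + g/D           # eq. (10)

# Twice-differentiated constraint for a plane: z'' = a*x'' + b*y''.
assert math.isclose(az_, a*ax_ + b*ay_)
```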
Runge-Kutta integration.#
The numerical solution of a system of the form \(\dot{ \bf{x} }= \bf{F(x,t)} \) is obtained by:
$${\bf{x}}_{n+1} = {\bf{x}}_n + (h/6)( {\bf{k}}_1 + 2 {\bf{k}}_2 + 2 {\bf{k}}_3 + {\bf{k}}_4 )\quad(11)$$
$$ {\bf{k}}_1 = {\bf{F(x }}_n,t_n ) \quad(12)$$ $$ {\bf{k}}_2 = {\bf{F(x }}_n + (h/2){\bf{k}}_1 ,t_n+h/2)\quad(13) $$ $$ {\bf{k}}_3 = {\bf{F(x }}_n + (h/2){\bf{k}}_2,t_n+h/2)\quad(14) $$ $$ {\bf{k}}_4 = {\bf{F(x }}_n + h{\bf{k}}_3,t_n+h)\quad(15) $$
Where \( h \) is the integration step. This is a fourth order method and it requires evaluating the function \( \bf{F} \) four times per iteration. In this case: $$ {\bf{x}} = (x, y, z, \dot{x}, \dot{y},\dot{z}) \quad(16)$$
And the function \( \bf{F} \): $${\bf{F}} = (\dot{x},\dot{y},\dot{z},\ddot{x},\ddot{y},\ddot{z})\quad(17) $$ where the last three entries are given by \( (8)-(10) \).
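The update \( (11)-(15) \) translates directly into code. As a minimal sketch, independent of the constraint problem, here is a single RK4 step tested on \( \dot{x} = x \), whose exact solution is \( e^t \):

```python
import math

def rk4_step(F, x, t, h):
    """One step of (11)-(15) for dx/dt = F(x, t)."""
    k1 = F(x, t)
    k2 = F(x + 0.5*h*k1, t + 0.5*h)
    k3 = F(x + 0.5*h*k2, t + 0.5*h)
    k4 = F(x + h*k3, t + h)
    return x + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

# Sanity check on dx/dt = x, x(0) = 1: x(1) should be close to e.
x, t, h = 1.0, 0.0, 0.01
for _ in range(100):                 # integrate up to t = 1
    x = rk4_step(lambda u, s: u, x, t, h)
    t += h
assert abs(x - math.e) < 1e-8
```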
With everything defined, it is very simple to write a program to solve the problem, as long as we know the partial derivatives of the constraint. This is where automatic differentiation shines!
Automatic differentiation.#
The numerical solution of optimization problems is required everywhere, as it is a fundamental component of machine learning and A.I. implementations. Accordingly, tools which automate the task of computing gradients have also evolved and are now available in a number of frameworks, such as JAX and PyTorch for the Python language. In what follows I will be using Autograd, as it is well maintained and documented, offers the versatility of NumPy, and can be readily used in WebAssembly projects.
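These frameworks compute derivatives exactly (to machine precision) by propagating derivative information through elementary operations, not by finite differences. As a toy illustration of the idea, and not part of the toolchain used in this post, here is a minimal forward-mode sketch using dual numbers:

```python
import math

class Dual:
    """Minimal dual number a + b*eps with eps^2 = 0: forward-mode AD."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        # Product rule: (u*v)' = u'*v + u*v'
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val*o.val, self.der*o.val + self.val*o.der)
    __rmul__ = __mul__

# d/dx of a*x^2 + b*y^2 at (x, y) = (1.5, 2.0) is 2*a*x = 3.0.
a, b = 1.0, 2.0
x = Dual(1.5, 1.0)   # seed dx/dx = 1
y = Dual(2.0, 0.0)   # y is held constant
f = a*x*x + b*y*y
assert math.isclose(f.der, 2*a*1.5)
```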
Enter Autograd.#
Let us consider a set of functions which depend on \( (x,y) \) and a couple of parameters \( (a,b) \). For example:
import autograd.numpy as np
from autograd import grad

def plane(a, b, x, y):
    return a*x + b*y

def paraboloid(a, b, x, y):
    return a*x*x + b*y*y

def wave(a, b, x, y):
    return 0.2*(x*x + y*y)*np.cos(a*x*x + b*y*y)
Then it is easy to write down the function \( \bf{F} \) in \( (17) \) as:
def kn(r, pars, case):
    a = pars[0]
    b = pars[1]
    g = 10
    # Select the constraint surface z = f(x, y).
    fn = {'plane': plane, 'paraboloid': paraboloid, 'wave': wave}[case]
    x = r[0]; y = r[1]; z = r[2]
    vx = r[3]; vy = r[4]; vz = r[5]
    # First partial derivatives of the constraint.
    fx = grad(fn, 2)(a, b, x, y)
    fy = grad(fn, 3)(a, b, x, y)
    dg = np.array([fx, fy, 1])
    dg2 = np.dot(dg, dg)       # 1 + fx^2 + fy^2, the denominator in (7)
    # Second partial derivatives of the constraint.
    fxx = grad(grad(fn, 2), 2)(a, b, x, y)
    fyy = grad(grad(fn, 3), 3)(a, b, x, y)
    fxy = grad(grad(fn, 2), 3)(a, b, x, y)
    fyx = grad(grad(fn, 3), 2)(a, b, x, y)
    # Numerator of (7).
    ddg = np.array([fxx*vx*vx, fyy*vy*vy, g, vx*vy*(fxy + fyx)])
    A = np.sum(ddg)/dg2        # lambda / m
    # Right-hand side (17), with the accelerations (8)-(10).
    F = np.array([vx, vy, vz, -A*fx, -A*fy, -g + A])
    return F
The function objects depend on four variables: the two parameters \( a,b \) and the coordinates \( x,y \). To obtain the partial derivative with respect to a given variable, grad
takes the function and the positional index of that variable (counting from zero). In the functions above, \( x \) and \( y \) correspond to indices 2 and 3. Then, the required derivatives are simply computed as:
fx = grad(fn,2)(a,b,x,y)
fy = grad(fn,3)(a,b,x,y)
fxx = grad(grad(fn,2),2)(a,b,x,y)
fyy = grad(grad(fn,3),3)(a,b,x,y)
fxy = grad(grad(fn,2),3)(a,b,x,y)
fyx = grad(grad(fn,3),2)(a,b,x,y)
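These values can always be cross-checked against central finite differences. A sketch for the paraboloid, whose analytic partial is \( f_x = 2ax \) (hypothetical parameter values):

```python
def paraboloid(a, b, x, y):
    return a*x*x + b*y*y

def dfdx(f, a, b, x, y, h=1e-5):
    # Central finite difference in x, a cheap check of grad(fn, 2).
    return (f(a, b, x + h, y) - f(a, b, x - h, y))/(2*h)

a, b, x, y = 1.0, 2.0, 0.7, -0.3
fx = dfdx(paraboloid, a, b, x, y)
# For a quadratic the central difference is exact up to rounding.
assert abs(fx - 2*a*x) < 1e-8
```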
And the Runge-Kutta loop as:
def RK(tf, ro, case, h, a, b):
    t = 0.0
    u = np.zeros(6)
    u[0:3] = ro               # initial position; the particle starts at rest
    prm = [a, b]
    while t < tf:
        k1 = kn(u, prm, case)
        k2 = kn(u + 0.5*h*k1, prm, case)
        k3 = kn(u + 0.5*h*k2, prm, case)
        k4 = kn(u + h*k3, prm, case)
        u = u + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return u
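A useful check on the whole scheme is that the integrated trajectory stays on the surface. The sketch below repeats the computation for the paraboloid with the partial derivatives written out by hand (\( f_x = 2ax \), \( f_{xx} = 2a \), cross terms zero) instead of Autograd, so it runs anywhere, and verifies \( z = f(x,y) \) is preserved:

```python
def F(u, a=1.0, b=1.0, g=10.0):
    # Right-hand side (17) for f(x, y) = a*x^2 + b*y^2, with
    # hand-written partials: fx = 2ax, fy = 2by, fxx = 2a, fyy = 2b.
    x, y, z, vx, vy, vz = u
    fx, fy = 2*a*x, 2*b*y
    Num = g + 2*a*vx*vx + 2*b*vy*vy      # numerator of (7)
    A = Num/(1 + fx*fx + fy*fy)          # lambda / m
    return [vx, vy, vz, -A*fx, -A*fy, -g + A]

def axpy(u, c, k):
    return [ui + c*ki for ui, ki in zip(u, k)]

# Start on the surface: z0 = f(x0, y0), zero velocity.
u = [0.5, -0.4, 0.5**2 + (-0.4)**2, 0.0, 0.0, 0.0]
h = 1e-3
for _ in range(2000):                    # integrate up to t = 2
    k1 = F(u); k2 = F(axpy(u, 0.5*h, k1))
    k3 = F(axpy(u, 0.5*h, k2)); k4 = F(axpy(u, h, k3))
    u = [ui + (h/6)*(a1 + 2*a2 + 2*a3 + a4)
         for ui, a1, a2, a3, a4 in zip(u, k1, k2, k3, k4)]

# The particle should still satisfy the constraint z = x^2 + y^2.
assert abs(u[2] - (u[0]**2 + u[1]**2)) < 1e-6
```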
WebAssembly Implementation.#
- Import the Pyodide JavaScript module in the head of your document:
<script src="https://cdn.jsdelivr.net/pyodide/v0.27.5/full/pyodide.js"></script>
- If the numerical methods are all located in a file named, let's say, "integrals.py", then they are made accessible to JavaScript in the initialisation part with something like this:
//Load Pyodide and install the autograd package.
const pyodideRuntime = await loadPyodide();
await pyodideRuntime.loadPackage("micropip");
const micropip = pyodideRuntime.pyimport("micropip");
await micropip.install("autograd")
//Access the methods in "integrals.py".
pyodideRuntime.runPython(await (await fetch("integrals.py")).text());
//Registers the functions used in the three.js script. These can now be used as any JS function.
let pyparaboloid = pyodideRuntime.globals.get('paraboloid');
let pyplane = pyodideRuntime.globals.get('plane');
let pywave = pyodideRuntime.globals.get('wave');
let RK = pyodideRuntime.globals.get('RK');