Pure and Applied

A tidbit on maths from an amateur.



How I wish I had been a better writer and coder back then! I think the two sides form two distinct and very interesting use cases of machine learning techniques in maths (in my uneducated view). The former uses efficient universal function approximators to generate observable data and an associated approximate supervised model; through those patterns and results we can reformulate our hypotheses, forming a positive feedback cycle in understanding the underlying problem. The paper provided examples in knot theory and representation theory, where quite different fields produce interesting datasets from which to form supervised targets.
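To make that loop concrete, here is a minimal sketch of the pattern-finding cycle. Everything in it is an assumption for illustration: the data is synthetic, the "invariant" names are placeholders, and the model and attribution method (gradient boosting plus permutation importance from scikit-learn) are one convenient choice, not what the paper actually used.

```python
# A minimal sketch of the hypothesis-generation loop. All data is synthetic
# and the "invariant" names are placeholders, not the paper's real dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Pretend each row describes a knot by a few easy-to-compute invariants.
n = 2000
X = rng.normal(size=(n, 3))  # placeholder columns: "volume", "injectivity", "cusp"
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=n)  # hidden relation to discover

# Step 1: fit a universal function approximator to the supervised target.
model = GradientBoostingRegressor().fit(X, y)

# Step 2: if it predicts well, a relationship exists; attribution then hints
# at *which* invariants carry it, i.e. which hypothesis to try to prove.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for name, score in zip(["volume", "injectivity", "cusp"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

The point is not the model itself but the direction of information flow: the learned approximator and its attributions suggest where a provable relationship might live, and the mathematician closes the loop.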

The latter, which I think is closer to a traditional applied-maths topic, is the use of new DL layers to better solve certain classes of PDEs. The key interests here are two-fold: better representations of PDEs through a new neural operator, and, more practically, a machine-learning-based solver that is more efficient and accurate. This speaks to me in particular, because I do think the representation of the problem space is likely the most important factor in DL for generalization and speed. In essence, we can view this as a better simulator.
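As a concrete (if heavily simplified) illustration, below is a sketch of a single spectral layer in the style of the Fourier Neural Operator, one well-known neural operator; this is my choice of example, not necessarily the layer the paper proposes. The layer acts on a function's low Fourier modes rather than on grid points, which is what makes the representation independent of sampling resolution. The weights are random stand-ins, and the pointwise path and nonlinearity of a full layer are omitted.

```python
# A minimal sketch of a Fourier-style spectral layer (1-D, untrained weights).
import numpy as np

rng = np.random.default_rng(0)

n_grid = 64   # spatial resolution of the sampled function
n_modes = 8   # number of low Fourier modes the layer acts on
# One learnable complex weight per retained mode (random stand-ins here).
weights = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)

def fourier_layer(u):
    """FFT -> scale the low modes -> inverse FFT back to the grid."""
    u_hat = np.fft.rfft(u)                         # spectral representation
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights  # act only on low modes
    return np.fft.irfft(out_hat, n=len(u))

# Apply it to a sample input function u0(x) on a uniform grid.
x = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)
u0 = np.sin(x) + 0.5 * np.sin(3 * x)
u1 = fourier_layer(u0)
print(u1.shape)  # (64,): same function space, so resolution can vary freely
```

Because the parameters live in frequency space, the same weights can be evaluated on a finer or coarser grid, which is one reason such layers can act as the faster, more accurate simulators described above.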

Broadly speaking, this also represents what I see as two cultures in machine learning in practice: understanding-driven and results-driven. For any given problem, we either want to know the answers quickly and accurately, or we want an explanation for the problem. I do not think they are too far apart, but they do show the diverse ways machine learning techniques can be applied in conjunction with humans in the domain of “knowledge generation”.