Question:

Is there a class of functions acting on a set of projected points that remain invariant under changes in projection parameters?

Caleb: 2 days ago

Suppose I have a set of $k$ points $\{x_1,x_2,\ldots,x_k\}$ in $\mathbb{R}^n$ that I can project into $\mathbb{R}^m$ with the linear operator $\mathcal{P}$, with $\alpha, \beta, \ldots$ parameters of the projection operator. Is there a well known best method for determining the class of functions $f:(\mathbb{R}^m)^k \rightarrow \mathbb{R}$ such that $f(\mathcal{P}x_1,\mathcal{P}x_2,\ldots,\mathcal{P}x_k)$ is invariant under changes in $\alpha, \beta, \ldots$?

As an example, a situation I'm interested in has $\mathcal{P}: \mathbb{R}^3 \rightarrow \mathbb{R}^2, x \mapsto A[\phi, \theta, \eta]x$, where:

$A[\phi, \theta, \eta] = \eta \left( \begin{matrix} \cos\phi\cos\theta & \sin\phi\cos\theta & -\sin\theta \\ -\sin\phi & \cos\phi & 0 \end{matrix}\right)$

i.e. the projection of points in 3-D space onto the plane with unit normal $(\cos\phi\sin\theta,\sin\phi\sin\theta,\cos\theta)$, with an additional scaling parameter of $\eta$.
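A quick numerical sanity check of this example (a sketch in NumPy; the helper name `projection_matrix` is just for illustration). Note that with the third entry of the first row taken as $-\sin\theta$, both rows of $A$ are orthogonal to the stated unit normal, so the normal lies in the kernel of $\mathcal{P}$:

```python
import numpy as np

def projection_matrix(phi, theta, eta):
    """The 2x3 matrix A[phi, theta, eta] from the example: projection of R^3
    onto the plane with unit normal (cos(phi)sin(theta), sin(phi)sin(theta),
    cos(theta)), scaled by eta.  (Hypothetical helper, for illustration.)"""
    return eta * np.array([
        [np.cos(phi) * np.cos(theta), np.sin(phi) * np.cos(theta), -np.sin(theta)],
        [-np.sin(phi),                np.cos(phi),                  0.0],
    ])

phi, theta, eta = 0.7, 1.2, 2.5
A = projection_matrix(phi, theta, eta)
n = np.array([np.cos(phi) * np.sin(theta),
              np.sin(phi) * np.sin(theta),
              np.cos(theta)])

print(np.allclose(A @ n, 0))                          # the normal projects to zero
print(np.allclose(A @ A.T, eta**2 * np.eye(2)))       # rows are orthogonal, length eta
```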

Setting derivatives of $f$ with respect to the parameters equal to zero yields a set of equations with no obvious general solution, and a feeling that I'm missing a certain set of tools.

Answer:
Amir: 2 days ago

I can think of (at least) two ways of interpreting this question.

First: You are given some specific list of $k$ points $x_1$, $x_2$, ..., $x_k$ in $\mathbb{R}^n$, and you want to detect whether $k$ points $y_1$, ..., $y_k$ in $\mathbb{R}^m$ could be a linear projection of the original $k$ points.

Solution: Compute the vector space of all $(c_1,\ldots, c_k)$ such that $\sum c_i x_i=0$. For each element of a basis of this space, check whether $\sum c_i y_i=0$. If so, the $y_i$ are a projection; if not, they are not.
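This test can be sketched numerically (assuming generic random data; the null space is computed via the SVD):

```python
import numpy as np

# Stack the x_i as columns, compute a basis for the null space
# {c : sum_i c_i x_i = 0}, then test the y_i against that basis.
rng = np.random.default_rng(0)
n, m, k = 4, 2, 6
X = rng.standard_normal((n, k))          # k points in R^n, as columns
P = rng.standard_normal((m, n))          # an arbitrary linear map R^n -> R^m
Y = P @ X                                # their images in R^m

# Null space of X: right singular vectors with (numerically) zero
# singular values, i.e. the trailing rows of Vt.
_, s, Vt = np.linalg.svd(X)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]                   # each row c satisfies X @ c ≈ 0

# If the y_i really are a projection of the x_i, every linear relation
# among the x_i also holds among the y_i.
print(np.allclose(Y @ null_basis.T, 0))  # True
```

For generic data the converse holds as well: if every relation among the $x_i$ holds among the $y_i$, the map $x_i \mapsto y_i$ extends to a linear map $\mathbb{R}^n \to \mathbb{R}^m$.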

Second: You want to construct functions $f$ on $(\mathbb{R}^m)^k$ such that, if $(y_1, \ldots, y_k)$ and $(y'_1, \ldots, y'_k)$ are both projections of the same points $(x_1, \ldots, x_k)$, then $f(y_1, \ldots, y_k) = f(y'_1, \ldots, y'_k)$.

Solution: There are no nonconstant examples. I will show that we must have $f(y'_1, \ldots, y'_k) = f(y_1,\ldots,y_k)$ for any $(y_1,\ldots,y_k)$ and $(y'_1,\ldots,y'_k)$ in $(\mathbb{R}^m)^k$.

For notational simplicity, I'll take $n=m+1$. Let $y^j_i$ be the vector in $\mathbb{R}^m$ whose first $j$ coordinates are the first $j$ coordinates of $y_i$ and whose last $m-j$ coordinates are the last $m-j$ coordinates of $y'_i$. So $y^0_i = y'_i$ and $y^m_i = y_i$.

Let $x^j_i \in \mathbb{R}^{m+1}$ be the vector whose first $j+1$ coordinates are the first $j+1$ coordinates of $y_i$ and whose last $m-j$ coordinates are the last $m-j$ coordinates of $y'_i$. Then $(y_1^j, \ldots, y_k^j)$ and $(y_1^{j+1}, \ldots, y_k^{j+1})$ are both projections of $(x^j_1, \ldots, x^j_k)$: deleting the $(j+1)$-st coordinate of $x^j_i$ gives $y^j_i$, and deleting the $(j+2)$-nd gives $y^{j+1}_i$. So $$f(y_1^0,\ldots,y_k^0) = f(y_1^1, \ldots, y_k^1) = \cdots = f(y_1^m, \ldots, y_k^m)$$ and $f(y'_1, \ldots, y'_k) = f(y_1,\ldots,y_k)$.
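The chain of hybrids can be checked mechanically (a sketch for $m=3$, with the lift $x^j$ built from the first $j+1$ coordinates of $y$ and the remaining coordinates of $y'$; coordinate deletion is itself a linear projection):

```python
import numpy as np

rng = np.random.default_rng(1)
m, k = 3, 5
Y = rng.standard_normal((k, m))    # the points y_i, as rows
Yp = rng.standard_normal((k, m))   # the points y'_i, as rows

def hybrid(j):
    """y^j: first j coordinates from Y, last m - j coordinates from Yp."""
    return np.hstack([Y[:, :j], Yp[:, j:]])

for j in range(m):
    # Lift x^j in R^{m+1}: first j+1 coordinates from Y, last m-j from Yp.
    X = np.hstack([Y[:, :j + 1], Yp[:, j:]])                        # shape (k, m+1)
    assert np.allclose(np.delete(X, j, axis=1), hybrid(j))          # drop coord j+1 -> y^j
    assert np.allclose(np.delete(X, j + 1, axis=1), hybrid(j + 1))  # drop coord j+2 -> y^{j+1}

print("chain verified: y^0 = y' connects to y^m = y")
```

Any $f$ invariant under projections must therefore take the same value at every link of the chain, hence at $y'$ and $y$.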

I can also think of other possibilities, like that you have a data bank of a few billion options for $x$, and you want to find the best match, but this seems like enough for now.