Spectral Clustering
Here I will derive the mathematical basics of why spectral clustering works. I will break the derivation into four parts. The first three parts lay the required groundwork for the mathematics behind spectral clustering. The final part pieces everything together and shows why spectral clustering works as intended.
Given a matrix $A$ and a vector $x$, how do we compute the following?
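$$\frac{\partial}{\partial x} \left( x^{T} A x \right)$$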
For example:
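Take a generic $2 \times 2$ matrix and a 2-dimensional vector (symbolic entries, just to keep the algebra visible):

$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \qquad x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$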
Multiply them out and get the following expression:
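$$x^{T} A x = a x_1^2 + b x_1 x_2 + c x_1 x_2 + d x_2^2$$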
Think of taking the derivative of $x^{T} A x$ with respect to a vector as an element-wise operation:
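$$\frac{\partial}{\partial x} \left( x^{T} A x \right) = \begin{bmatrix} \dfrac{\partial}{\partial x_1} \left( a x_1^2 + b x_1 x_2 + c x_1 x_2 + d x_2^2 \right) \\ \dfrac{\partial}{\partial x_2} \left( a x_1^2 + b x_1 x_2 + c x_1 x_2 + d x_2^2 \right) \end{bmatrix}$$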
Which is equivalent to:
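$$\begin{bmatrix} 2a x_1 + (b + c) x_2 \\ (b + c) x_1 + 2d x_2 \end{bmatrix}$$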
Thus, the answer is:
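$$\frac{\partial}{\partial x} \left( x^{T} A x \right) = \left( A + A^{T} \right) x$$

When $A$ is symmetric, this reduces to $2Ax$.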
The Lagrange multiplier is frequently used in classical mechanics to solve function optimization under constraints. Lagrangian mechanics is also often used in non-relativistic quantum mechanics for particle physics, but that would require knowledge of path integrals. In this section, I am sticking to the plain old Lagrange multiplier to solve a simple constrained problem as an example.
Suppose we want to minimize or maximize a function $f(x, y, z)$ while subject to a constraint $g(x, y, z) = k$. The easiest example one can think of is that you want to put a fence around your house, but the constraint is that you have a limited amount of material from Home Depot. You want to maximize the area enclosed by your fence. Then you can use a Lagrange multiplier to perform the calculation and find the optimal solution.
We define the Lagrangian as follows:
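$$\mathcal{L}(x, y, z, \lambda) = f(x, y, z) - \lambda \left( g(x, y, z) - k \right)$$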
The objective is to solve the following equation (I forget the formal name for it):
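$$\nabla \mathcal{L}(x, y, z, \lambda) = 0$$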
We will have four equations and four unknowns.
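$$\frac{\partial \mathcal{L}}{\partial x} = 0, \qquad \frac{\partial \mathcal{L}}{\partial y} = 0, \qquad \frac{\partial \mathcal{L}}{\partial z} = 0, \qquad \frac{\partial \mathcal{L}}{\partial \lambda} = 0$$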
Let's use a simple two-dimensional example with only two variables, $x$ and $y$:
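For concreteness, take the fence scenario from above (a stand-in example; any simple constrained problem works the same way): maximize the enclosed area $f(x, y) = xy$ subject to a fixed amount of fencing $g(x, y) = 2x + 2y = P$, where $P$ is a constant.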
Then the Lagrangian for this example is:
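$$\mathcal{L}(x, y, \lambda) = xy - \lambda \left( 2x + 2y - P \right)$$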
Compute the gradient of our Lagrangian and set it equal to zero:
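$$\frac{\partial \mathcal{L}}{\partial x} = y - 2\lambda = 0, \qquad \frac{\partial \mathcal{L}}{\partial y} = x - 2\lambda = 0, \qquad \frac{\partial \mathcal{L}}{\partial \lambda} = -\left( 2x + 2y - P \right) = 0$$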
Then the solution is clear:
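$$x = y = \frac{P}{4}, \qquad \lambda = \frac{P}{8}$$

The optimal fence is a square, and the maximum enclosed area is $P^2 / 16$.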
The objective here is to combine what we know from Part 1 and Part 2 to achieve the following.
We want to minimize the expression $x^{T} A x$ under the following constraints.
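$$x^{T} x = 1, \qquad A \text{ is positive semidefinite}$$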
Being positive semidefinite is an important quality, because if a matrix is positive definite or positive semidefinite, the vector at which the derivative of the expression is zero has to be the solution to the minimization. Now that we have our constraints, we are ready to use a Lagrange multiplier to minimize this expression.
We will make an important assumption here: that $A$ is a symmetric matrix. You will see that this is indeed the case later on.
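Following Part 2, the Lagrangian for this problem is:

$$\mathcal{L}(x, \lambda) = x^{T} A x - \lambda \left( x^{T} x - 1 \right)$$

Using the result from Part 1 (with $A$ symmetric, so $A + A^{T} = 2A$), setting the derivative with respect to $x$ equal to zero gives:

$$\frac{\partial \mathcal{L}}{\partial x} = 2 A x - 2 \lambda x = 0$$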
Now solve for $x$. I will begin to use the vector notation here in case we forget that $x$ is a vector, and it has always been a vector. I omitted the vector symbol to type less LaTeX code on my end, but I must include it here to illustrate a point.
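$$A \vec{x} = \lambda \vec{x}$$

This is exactly the eigenvalue equation for $A$.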
The constraint equation gave us that:
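$$\vec{x}^{\,T} \vec{x} = 1$$

Combined with $A\vec{x} = \lambda\vec{x}$, the value of the expression being minimized is $\vec{x}^{\,T} A \vec{x} = \lambda\, \vec{x}^{\,T} \vec{x} = \lambda$, i.e. the eigenvalue itself.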
A few relevant properties:

- $A$ is positive definite if all eigenvalues are positive.
- $A$ is positive semidefinite if all eigenvalues are either positive or zero.
- All eigenvectors are normalized to the same size: they have a norm equal to 1, so they satisfy the constraint.
Suppose we are given a set of four vertices or nodes $V = \{v_1, v_2, v_3, v_4\}$, and we want to construct an adjacency matrix $W$ as follows:
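For instance (the values here are placeholders on a 0-to-5 scale):

$$W = \begin{bmatrix} 5 & 3 & 0 & 1 \\ 3 & 5 & 2 & 0 \\ 0 & 2 & 5 & 4 \\ 1 & 0 & 4 & 5 \end{bmatrix}$$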
Each element of $W$ represents the connectivity strength between vertex $i$ and vertex $j$. For example, $W[0][0]$ is the connection strength between vertex 0 and itself (we can assume that 5 is the maximum connectivity strength). A zero value can represent that there is no connection. So far the numbers in the adjacency matrix seem somewhat arbitrary, because they are. In general, we should use a Gaussian kernel to fill in the connectivity strength.
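With a Gaussian (RBF) kernel, the connectivity strength between vertices $i$ and $j$ with coordinates $x_i$ and $x_j$ is:

$$w_{ij} = \exp\left( -\frac{\lVert x_i - x_j \rVert^2}{2 \sigma^2} \right)$$

where $\sigma$ controls how quickly the connectivity decays with distance.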
The degree of a vertex represents its total connectivity.
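$$d_i = \sum_{j=1}^{n} w_{ij}$$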
We can then construct a degree matrix as follows.
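Using the placeholder $W$ above:

$$D = \begin{bmatrix} 9 & 0 & 0 & 0 \\ 0 & 10 & 0 & 0 \\ 0 & 0 & 11 & 0 \\ 0 & 0 & 0 & 10 \end{bmatrix}$$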
Let's use a concrete example; this is a different example from the one above. Suppose we are given 6 vertices, and here are their coordinates.
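The exact coordinates are not essential; any two well-separated groups of points will do, for instance:

$$v_1 = (0, 0), \quad v_2 = (1, 0), \quad v_3 = (0, 1), \quad v_4 = (4, 4), \quad v_5 = (5, 4), \quad v_6 = (4, 5)$$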
Visually speaking, it is very clear to us that these points can be grouped into two clusters. We can say that everything in the bottom-left group belongs to the set $A$.
And everything in the upper-right group belongs to the set of "not $A$", which we will write as $\bar{A}$.
We can designate a feature vector for these vertices, and this feature vector represents whether a vertex is in the set $A$ or not, using 1 to indicate positive and 0 to indicate negative.
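With the stand-in coordinates above, the first three vertices form the bottom-left cluster, so a suitable feature vector is:

$$f = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 & 0 \end{bmatrix}^{T}$$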
Now the natural question to ask is: how does this work if we have more than 2 clusters? Suppose we want 6 clusters; then we simply create 6 feature vectors, each representing one of the groups $A, B, C, D, E, F$.
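$$f^{(A)}_i = \begin{cases} 1 & \text{if } v_i \in A \\ 0 & \text{otherwise} \end{cases}$$

and similarly for $f^{(B)}, f^{(C)}, \dots, f^{(F)}$.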
Now we can say that the point of spectral clustering is to find feature vectors such that the relationship between points in the set and points not in the set is minimized. We can declare the following equation as our relationship.
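$$\frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} w_{ij} \left( f_i - f_j \right)^2$$

(The factor of $\frac{1}{2}$ is there because each pair is counted twice in the double sum.)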
Our objective is to find the correct feature vector(s) such that two vertices that share a strong connection are categorized into the same cluster, which in turn minimizes the relationship expression. There exist many solutions to this minimization, as we shall see soon.
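Expanding the square gives three terms:

$$\frac{1}{2} \sum_{i,j} w_{ij} \left( f_i - f_j \right)^2 = \frac{1}{2} \sum_{i,j} w_{ij} f_i^2 \;-\; \sum_{i,j} w_{ij} f_i f_j \;+\; \frac{1}{2} \sum_{i,j} w_{ij} f_j^2$$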
Since $W$ is symmetric and looping through $i$ is the same as looping through $j$, we can declare that the first term and the third term of the above equation are equivalent.
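$$\frac{1}{2} \sum_{i,j} w_{ij} f_i^2 = \frac{1}{2} \sum_{i} f_i^2 \sum_{j} w_{ij} = \frac{1}{2} \sum_{i} d_i f_i^2 = \frac{1}{2} f^{T} D f$$

The same holds for the third term, so together they contribute $f^{T} D f$.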
Now we turn to the second term and see how to simplify it.
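$$\sum_{i,j} w_{ij} f_i f_j = f^{T} W f$$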
Therefore,
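$$\frac{1}{2} \sum_{i,j} w_{ij} \left( f_i - f_j \right)^2 = f^{T} D f - f^{T} W f = f^{T} \left( D - W \right) f$$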
We usually call the term $D - W$, the degree matrix minus the adjacency matrix, the Laplacian $L$. We can easily minimize the expression $f^{T} L f$ based on what we derived in Part 3. In conclusion, if we want to find the solutions (vectors $f$) that minimize the connectivity and clustering relationship, we find the least dominant eigenvectors of the Laplacian matrix. This was mind-blowing to me when I first learned about it.
So what does this mean? Recall from Part 3 that if you want to minimize the expression $\vec{x}^{\,T} A \vec{x}$ over unit vectors, $\vec{x}$ must be an eigenvector of $A$; here the role of $A$ is played by the Laplacian $L$. Here are a couple of important properties:

- The eigenvector corresponding to the smallest eigenvalue will give you the smallest possible value of $\vec{x}^{\,T} A \vec{x}$.
- Conversely, the eigenvector corresponding to the biggest eigenvalue will give you the maximum of $\vec{x}^{\,T} A \vec{x}$. However, I am not 100% sure of this point; I need to run a couple of tests to verify it.
Finally, recall that we defined $A$ as a subset of $V$, i.e. $A \subset V$, and $\bar{A}$ as the set of vertices that are not in the set $A$. Then we can say that the feature vector indicating membership in $A$ versus $\bar{A}$ is exactly the kind of vector $f$ we recover from the bottom eigenvectors of the Laplacian.
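To double-check the eigenvalue claim above and to see the whole pipeline on the six-point example, here is a minimal sketch in Python (assuming NumPy is available; the coordinates are the stand-in values from earlier, so the exact numbers are illustrative only):

```python
import numpy as np

# Stand-in coordinates for the six vertices: two well-separated groups.
points = np.array([
    [0.0, 0.0], [1.0, 0.0], [0.0, 1.0],   # bottom-left cluster
    [4.0, 4.0], [5.0, 4.0], [4.0, 5.0],   # upper-right cluster
])

# Adjacency matrix from a Gaussian (RBF) kernel.
sigma = 1.0
sq_dists = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
W = np.exp(-sq_dists / (2 * sigma ** 2))

# Degree matrix and Laplacian L = D - W.
D = np.diag(W.sum(axis=1))
L = D - W

# eigh returns eigenvalues in ascending order for a symmetric matrix.
eigvals, eigvecs = np.linalg.eigh(L)

# The smallest eigenvalue of L is 0 with an all-constant eigenvector;
# the next-smallest ("least dominant" non-trivial) eigenvector splits the graph.
f = eigvecs[:, 1]
labels = (f > 0).astype(int)
print("cluster labels:", labels)  # the two groups get different labels, e.g. [0 0 0 1 1 1]

# Check the Part 3 claim: among unit vectors, the eigenvectors with the
# smallest / largest eigenvalues give the minimum / maximum of x^T L x.
x_min, x_max = eigvecs[:, 0], eigvecs[:, -1]
print(x_min @ L @ x_min, "should equal", eigvals[0])
print(x_max @ L @ x_max, "should equal", eigvals[-1])
```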