While a rotation leaves the sum of the squares of a vector's components invariant and describes motion on a circle or an n-dimensional sphere, a Markov transformation leaves the linear sum of the components invariant and describes motion on a straight line, a plane, or generally a hyperplane perpendicular to the vector (1, 1, 1, …), where all vectors, both before and after the transformation, lie in the positive hyperquadrant. Markov
transformations describe diffusion and increasing
disorder such as the dispersion of ink into clear
water or dirt in one’s home. They are the
transformations that describe the irreversibility of
time, increasing entropy (disorder) and the gradual
loss of organized energy into heat (random energy)
and thus the second law of thermodynamics (and
even the loss of information in systems). Because Markov transformations generally do not have an inverse, they were never studied from the point of view of group theory: every element of a mathematical group must have an inverse (along with closure, an identity, and associativity). It can be shown that every Markov transformation is a square matrix of non-negative (positive or zero) numbers in which each column sums to unity (one); in another convention, each row sums to unity. Essentially all studies of Markov transformations treat discrete rather than continuous transformations. It is the continuous Markov
transformation that will be central to our work
related to networks.
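The two defining properties of a (column-convention) Markov matrix can be checked numerically. The following sketch, using a hypothetical 3-state matrix in Python with NumPy, shows that a column-stochastic matrix preserves the linear sum of a non-negative vector's components and keeps the vector in the positive hyperquadrant:

```python
import numpy as np

# A hypothetical 3x3 Markov (column-stochastic) matrix:
# non-negative entries, each column summing to unity.
M = np.array([
    [0.8, 0.1, 0.3],
    [0.1, 0.7, 0.2],
    [0.1, 0.2, 0.5],
])
assert np.allclose(M.sum(axis=0), 1.0)  # each column sums to one

v = np.array([2.0, 5.0, 3.0])  # a vector in the positive hyperquadrant

w = M @ v  # apply the Markov transformation

# The linear sum of the components is invariant ...
print(v.sum(), w.sum())   # both 10.0
# ... and the image stays in the positive hyperquadrant.
print(np.all(w >= 0))     # True
```

The matrix entries here are arbitrary illustrative values; any non-negative matrix with unit column sums exhibits the same two invariants.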
A mathematical group is a set of objects (say A, B, C, …) and a multiplication operation (say *) that (a) is closed, so that A*B is another member of the set; (b) is associative, i.e. the grouping of the operation among three elements does not matter, (A*B)*C = A*(B*C); (c) contains an identity element that leaves every element unchanged; and (d) contains, for every element, an inverse that reverses the action of that element. One simple
example is the set of the identity and the reflection R
in a mirror. Another example is the set of four
rotations of a square that leave it invariant (by 0, 90,
180, and 270 degrees). Then one can consider the group of rotations about an axis, or the translations along a straight line, as examples of continuous (Lie) transformation groups, both of which have an
infinite number of elements. In the 1890s Sophus Lie invented a way to study all of these by means of the associated infinitesimal transformation, showing that exponentiation of the infinitesimal transformation recovers the original transformation.
This means that we can study a single
transformation L rather than the infinite number of
rotations. For rotations in three dimensions, there is
a set of three such transformations: Lx, Ly, and Lz
for rotations about each axis. Thus only three objects are needed to study the entire three-fold infinity of rotations in three dimensions. The resulting set of L matrices is called the Lie algebra of that Lie group, which is generated by exponentiation. This group is called the rotation group R3 or the orthogonal group O(3) (more precisely, its rotation subgroup SO(3)).
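Lie's exponentiation result can be illustrated concretely. This sketch (in Python with NumPy and SciPy; the generator and variable names are ours) exponentiates the infinitesimal generator of rotations about the z axis and checks that the result is the familiar finite rotation matrix:

```python
import numpy as np
from scipy.linalg import expm

# Infinitesimal generator of rotations about the z axis
# (one of the three Lie-algebra elements Lx, Ly, Lz).
Lz = np.array([
    [0.0, -1.0, 0.0],
    [1.0,  0.0, 0.0],
    [0.0,  0.0, 0.0],
])

theta = 0.7  # rotation angle in radians (arbitrary choice)

# Exponentiating the generator gives the finite rotation ...
R = expm(theta * Lz)

# ... which matches the familiar rotation matrix about z.
R_expected = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])
print(np.allclose(R, R_expected))  # True

# Rotations preserve the sum of squares of a vector's components.
v = np.array([1.0, 2.0, 3.0])
print(np.allclose((R @ v) @ (R @ v), v @ v))  # True
```

Replacing Lz with Lx or Ly (permuting the same ±1 pattern to the other axes) generates rotations about those axes in the same way.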
1.4 Decomposition of the Continuous
Linear Transformation Group
The general linear group of all continuous
transformations in n dimensions is represented by an
n x n (invertible) matrix of real numbers. Such
transformations include rotations, translations, and
the Lorentz transformations of the theory of
relativity as well as all the unitary transformations in
quantum theory. Transformations allow us to study
symmetry such as rotational symmetry or other
invariance. Since we wish to generate all continuous linear transformations, we will need all possible infinitesimal generating matrices, which are easily listed as having a ‘1’ in the i,j position and a ‘0’ in all other positions. There are (as might be expected) n² such matrices, since we can put the ‘1’ in any of the n² positions. These matrices with a “1” in the i,j position form the n² basis elements of the Lie algebra of the general linear group. However, it was discovered by the author (Johnson 1985) that the general linear group can be decomposed into two separate Lie groups as follows:
(a) Consider the generator (Lie algebra) element which has a 1 at the ii position and a 0 at every other position. If we exponentiate a times that matrix, the result obviously has e^a at that one diagonal position, 1 at the other diagonal positions, and zeroes everywhere off the diagonal. This transformation multiplies that one axis by e^a and multiplies all the rest by ‘1’, leaving them unchanged, so it simply makes that one axis longer or shorter by that factor. We call these scaling transformations, and the group is called Abelian because every transformation commutes with all the other elements of the algebra. We next
identify the Markov Type Lie Group (MTLG).
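Both pieces of this decomposition can be verified numerically. The sketch below (in Python with SciPy; the dimension and parameter values are our own arbitrary choices) exponentiates a diagonal generator to obtain a pure scaling of one axis, and exponentiates one of the column-sum-zero generators described below to obtain a continuous Markov transformation with non-negative entries and unit column sums:

```python
import numpy as np
from scipy.linalg import expm

n, a, t = 3, 0.5, 1.2  # dimension and parameters (arbitrary choices)

# (a) Diagonal generator: 1 at the (0,0) position, 0 elsewhere.
E00 = np.zeros((n, n))
E00[0, 0] = 1.0
S = expm(a * E00)
# S scales the first axis by e^a and leaves the others unchanged.
print(np.allclose(S, np.diag([np.exp(a), 1.0, 1.0])))  # True

# (b) Markov generator: 1 at an off-diagonal position (1,0) and
# -1 on the diagonal of the same column, so each column sums to zero.
L = np.zeros((n, n))
L[1, 0], L[0, 0] = 1.0, -1.0
M = expm(t * L)
# The exponential is a Markov matrix: non-negative entries
# with each column summing to unity.
print(np.all(M >= 0))                   # True
print(np.allclose(M.sum(axis=0), 1.0))  # True
```

For t ≥ 0 the exponential of any non-negative combination of such generators stays in the Markov family, which is why these one-parameter families form a semigroup rather than a full group.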
Consider the off-diagonal algebra (generators): rather than using just a ‘1’ at each off-diagonal position, let us form an element by also placing a ‘-1’ on the diagonal of that same column. This makes the sum of the elements in each column of the generator equal to zero, with a “1” off the diagonal and a “-1” on the diagonal in the same column; every other value is “0”. Formally this defines the m,n matrix element. There are obviously n² − n such L matrices, corresponding to every position off the
Mathematical Foundations of Networks Supporting Cluster Identification
279