In Buddhist philosophy, all functioning
phenomena are said to exist in three ways, known as the three modes of existential
dependence:
- Causality
- Structure
- Mental Designation or Meaning
(1) Causal dependency.
Functioning objects exist in dependence on the causes and conditions that brought them into existence in the first place and that continue to sustain them (e.g. acorn, soil, rain, air and sunlight for an oak tree). In particular, causal dependencies show a high degree of regularity: oak trees aren't produced from chestnuts, and the planets don't wander around the solar system at random, but are constrained by Newton's laws.
(2) Compositional and structural dependency (sometimes known as 'mereological' dependency).
Functioning phenomena exist dependently
upon their parts, and upon the way that those parts are arranged (structural features such
as aspects, divisions, directions etc).
The parts of a functioning phenomenon are
known as the 'basis of designation', which, when arranged in an appropriate manner, prompt
the observer to designate the entire structure as a single entity. Thus
the correct arrangement of pistons, cylinders, crankshaft, spark plugs etc is designated
'engine', and the correct arrangement of engine, wheels, chassis etc is designated
'car'.
But neither engine nor car can exist as
independent entities, apart from their bases of designation. See Mereological
Dependence in Buddhist Philosophy for a detailed discussion.
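As a loose illustration of mereological dependency in computing terms, the sketch below (Python, with invented class and field names) models 'engine' and 'car' as nothing more than arrangements of parts. No 'car-ness' is stored anywhere; the names are designations over composite datastructures.

```python
from dataclasses import dataclass

@dataclass
class Engine:
    # The basis of designation: an 'engine' is these parts, suitably arranged.
    pistons: int
    cylinders: int
    crankshaft: str
    spark_plugs: int

@dataclass
class Car:
    # Likewise a 'car' is a designation over an arrangement of components;
    # strip the fields away and no independent 'car' remains.
    engine: Engine
    wheels: int
    chassis: str

car = Car(engine=Engine(4, 4, "forged steel", 4), wheels=4, chassis="monocoque")
print(car.engine.pistons)  # access goes through the parts; there is nothing else
```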
(3) Conceptual dependency.
This is the most subtle mode of existential dependency, and concerns the way that things exist in dependence on our minds designating them by concept and name.
For example, what is a box? Is
there some kind of ideal prototype box existing in the Platonic realm of ideal forms, or
does a box exist only by arbitrary convention in the mind of the box-user, or in the collective minds of box-users?
If I say "I'll get a box to put this
stuff in", then most people will understand that I'm going to fetch a container which
performs the conventional function of a box, i.e. holds things. To do this it must have a
bottom and at least three sides (like some chocolate boxes), though usually four. A lid is
optional.
But if we were to cut the sides of a box
down, it would perform the functions of a tray.
The box exists from causes and conditions
(the box-maker, the wood from which it is made, the trees, sunlight, soil, rain,
lumberjacks etc.)
The box exists in dependence upon its parts
(bottom and three or more sides).
The box also exists because I and others decide to call it a box, not because of some inherent 'boxiness' that all boxes have as a defining essence.
If it were a big cardboard box, and I cut a
large L-shaped flap out of one side so it hinged like a door, then I could turn it upside
down and it would
be a child's play-house.
If I cut the sides of a wooden box down a centimetre at a time, then the box would get shallower and shallower. At some point the box would cease to exist and a tray would have begun to exist. So at some arbitrary point, did the essence of 'boxiness' miraculously disappear, and 'trayfulness' jump into the undefined structure?
Where does box end and tray start?
I don't know. Maybe there's an EU directive
forbidding the construction of boxes with insufficiently high sides, or specifying that
all boxes must have lids permanently attached to avoid any possible confusion with trays.
[Image: EU standard box]
Or perhaps there's a Tray Descriptions Act
enforcing a maximum height for trays.
But either way, as well as existing in dependence on its parts, and on its causes and conditions, the box exists in dependence upon our minds (or the collective minds of the EU Box-Standards Inspectorate).
The mind projects 'box' over a certain collection of parts. And those parts can be the common basis of designation of both a box and a tray.
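To make the arbitrariness concrete, here is a toy sketch (Python; the names and the 5 cm cutoff are invented for illustration). The same datastructure of parts gets designated 'box' or 'tray' purely according to a threshold supplied by the classifier, not by the object:

```python
from dataclasses import dataclass

@dataclass
class Container:
    # The common basis of designation: a bottom plus some sides.
    sides: int
    side_height_cm: float

def designate(c: Container, cutoff_cm: float = 5.0) -> str:
    """Designate the same parts as 'box' or 'tray'.

    The cutoff comes from the observer (or the EU Box-Standards
    Inspectorate); nothing in the parts themselves fixes it.
    """
    if c.sides < 3:
        return "undefined structure"
    return "box" if c.side_height_cm >= cutoff_cm else "tray"

c = Container(sides=4, side_height_cm=5.0)
print(designate(c))               # 'box'
print(designate(c, cutoff_cm=6))  # 'tray' -- same parts, different designation
```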
Mental designation goes all the way up, and all the way down
Developments in 20th century physics have shown that the observer is part of the system, both at the very smallest scales of reality (in quantum mechanics, the act of measurement affects the state of the system being measured) and at the very largest (in relativity, measured lengths, durations and simultaneity depend on the observer's frame of reference). These findings confirm what Buddhists have been saying for thousands of years: that the observer is part of the system at all levels of reality, not just in our everyday world of domestic storage containers.
Causal regularities in Buddhist philosophy
Unlike occasionalist theologies (such as the Ash'arite school in Islam), which deny that created things have any causal powers of their own and hold that everything happens moment-to-moment by God's direct will, Buddhism has always viewed regularities in the working of the universe as axiomatic.
As Jay L. Garfield states in 'The Fundamental Wisdom of the Middle Way' (footnote 29, p. 116):
'The Madhyamika position implies that we
should seek to explain regularities by reference to their embeddedness in other
regularities, and so on. To ask why there are regularities at all, on such a view, would
be to ask an incoherent question. The fact of explanatorily useful regularities in
nature is what makes explanation and investigation possible in the first place and is not
something itself that can be explained.'
[Image: The mathematical laws governing the motion of the planets can be simulated by clockwork]
The mathematical and algorithmic nature of regularities
Although asking why there are explanatorily
useful regularities in nature may be ultimately incoherent, to ask why these
take a mathematical form is a valid subject for enquiry.
The standard computer analogy for causality
is to regard the laws of physics as being analogous ('isomorphic') to algorithms, with the
physical objects being analogous to the datastructures the algorithms act
upon.
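A minimal sketch of the analogy (Python, with invented names and deliberately crude one-dimensional physics): the 'law' is a function, and the 'object' is the datastructure the function acts upon.

```python
from dataclasses import dataclass

@dataclass
class Body:
    # The 'physical object' side of the analogy: a datastructure.
    height_m: float
    velocity_ms: float

def fall_step(b: Body, dt: float = 0.1, g: float = 9.81) -> Body:
    # The 'law of physics' side: an algorithm acting on the datastructure
    # (a crude Euler step for free fall under constant gravity).
    return Body(height_m=b.height_m + b.velocity_ms * dt,
                velocity_ms=b.velocity_ms - g * dt)

b = Body(height_m=100.0, velocity_ms=0.0)
for _ in range(10):
    b = fall_step(b)
print(b)  # the state evolves with exactly the regularity the algorithm encodes
```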
From Gregory Chaitin's article 'The Limits of Reason' (Scientific American, March 2006)...
'My story begins in 1686 with Gottfried
W. Leibniz's philosophical essay Discours de métaphysique (Discourse on Metaphysics), in
which he discusses how one can distinguish between facts that can be described by some law
and those that are lawless, irregular facts. Leibniz's very simple and profound idea
appears in section VI of the Discours, in which he essentially states that a theory has to
be simpler than the data it explains, otherwise it does not explain anything. The concept
of a law becomes vacuous if arbitrarily high mathematical complexity is permitted, because
then one can always construct a law no matter how random and patternless the data really
are. Conversely, if the only law that describes some data is an extremely complicated one,
then the data are actually lawless.
Today the notions of complexity and
simplicity are put in precise quantitative terms by a modern branch of mathematics called
algorithmic information theory. Ordinary information theory quantifies information by
asking how many bits are needed to encode the information. For example, it takes one bit
to encode a single yes/no answer. Algorithmic information, in contrast, is defined by
asking what size computer program is necessary to generate the data. The minimum number of
bits---what size string of zeros and ones---needed to store the program is called the
algorithmic information content of the data. Thus, the infinite sequence of numbers 1, 2,
3, ... has very little algorithmic information; a very short computer program can generate
all those numbers. It does not matter how long the program must take to do the computation
or how much memory it must use---just the length of the program in bits counts...
...How do such ideas relate to scientific
laws and facts? The basic insight is a software view of science: a scientific theory is
like a computer program that predicts our observations, the experimental data. Two
fundamental principles inform this viewpoint. First, as William of Occam noted, given two
theories that explain the data, the simpler theory is to be preferred (Occam's razor).
That is, the smallest program that calculates the observations is the best theory. Second
is Leibniz's insight, cast in modern terms---if a theory is the same size in bits as the
data it explains, then it is worthless, because even the most random of data has a theory
of that size. A useful theory is a compression of the data; comprehension is compression.
You compress things into computer programs, into concise algorithmic descriptions. The
simpler the theory, the better you understand something'
In summary: If a computer program or
algorithm is simpler than the system it describes, or the data set that it generates, then
the system or data set is said to be 'algorithmically compressible'.
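A toy demonstration (Python, using Chaitin's own 1, 2, 3, ... example): written out in full, the data set occupies millions of characters, while a program that regenerates it fits in a few dozen, so the sequence is highly compressible.

```python
# The 'data': the first million natural numbers written out explicitly.
data = ",".join(str(n) for n in range(1, 1_000_001))

# A 'theory': the source text of a program that regenerates the same data.
program = '",".join(str(n) for n in range(1, 1_000_001))'

print(len(data))     # ~6.9 million characters of data
print(len(program))  # ~45 characters of theory -- an enormous compression
assert eval(program) == data  # the short description really does yield the data
```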
This concept of algorithmic simplicity/complexity can be extended from the realms of mathematics into physical systems. The complexity of a physical system is the length of the minimal algorithm that can simulate or describe it. Thus the orbits of the planets, which seemed so complex to the ancients, were shown by Newton to be algorithmically compressible into a few short equations.
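In the same spirit, a hedged sketch (an idealised circular orbit standing in for a real ephemeris): a long table of planetary positions is vastly larger than the short 'law' that regenerates it.

```python
import math

# 'Observations': 100,000 tabulated (x, y) positions of a planet on an
# idealised circular orbit with a 365-step period.
table = [(math.cos(2 * math.pi * t / 365), math.sin(2 * math.pi * t / 365))
         for t in range(100_000)]

# The generating law, stored as text, is a few dozen bytes.
law = "x(t) = cos(2*pi*t/365); y(t) = sin(2*pi*t/365)"

print(len(repr(table)))  # several megabytes of tabulated positions
print(len(law))          # 46 bytes of law: the table is compressible
```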
[Image: Visually complex but algorithmically simple]
The computer model of the three levels of dependency
So causal dependency can be modelled as
algorithms, and compositional/structural
dependency can be modelled as datastructures, but where does that leave conceptual
dependency?
According to Buddhist philosophy, the
function of the mind cannot be reduced to physical or quasi-physical processes.
The mind is clear, formless, and knows its object. Its knowing the object
constitutes the conceptual dependency, which is fundamental, axiomatic and
cannot be explained in terms of other phenomena, including algorithms and datastructures.
Buddhism versus Materialism
The question that separates the Materialist from the Buddhist is whether there is anything left to explain about reality once algorithms and datastructures have been factored out.
The Materialist would answer that algorithms and datastructures offer a complete explanation of the universe, without any remainder. The Buddhist would claim that a third factor, mind, is also required.
The Mother of all Algorithms
The mind itself is not algorithmically
compressible, but is responsible for carrying out algorithmic compression.
Algorithms, as executed, do not contain within themselves any meaning. For example, the following two statements reduce to exactly the same algorithm within the memory of a computer:
(i) IF RoomLength * RoomWidth > CarpetArea THEN NeedMoreCarpet = TRUE
(ii) IF Audience * TicketPrice > HireOfVenue THEN AvoidedBankruptcy = TRUE
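A version of this claim can be checked directly in Python (a sketch using CPython bytecode rather than machine code): two functions with the same structure but different variable names compile to byte-for-byte identical code, because the names survive only as external labels, not as part of the algorithm.

```python
def carpet(room_length, room_width, carpet_area):
    # (i) Do we need more carpet?
    return room_length * room_width > carpet_area

def concert(audience, ticket_price, hire_of_venue):
    # (ii) Have we avoided bankruptcy?
    return audience * ticket_price > hire_of_venue

# The raw bytecode contains no trace of carpets or concerts:
print(carpet.__code__.co_code == concert.__code__.co_code)  # True
```

The meanings ('carpet', 'bankruptcy') exist only for the mind reading the names; by the time the code runs they have been stripped out, a point taken up under 'Minds, machines and meaning' below.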
Such considerations have led critics of philosophical
computationalism to claim that algorithms can only contain syntax, not semantics.
Hence computers can never understand their subject matter. All assignments of meaning to
their inputs, internal states and outputs have to be defined from outside the system.
This may explain why the process of writing algorithms does not in itself appear to be
algorithmic. The real test of computationalism would be to produce a general purpose
algorithm-writing algorithm. A convincing example would be an algorithm that could
simulate the mind of a programmer sufficiently to be able to write algorithms to perform
such disparate activities as controlling an automatic train, regulating a distillation
column, and optimising traffic flows through interlinked sets of lights.
According to the computationalist view, this 'Mother of all Algorithms' must exist as an algorithm in the programmer's brain, though why and how such a thing evolved is rather difficult to imagine. It would certainly have conferred no selective advantage on our ancestors until the present generation (and even now, do programmers outreproduce normal people?).
The proof of computationalism would be to program the Mother of all Algorithms on a
computer. At present no one has the slightest clue of how to even start to go about
producing such a thing.
According to Buddhist philosophy this is hardly surprising, as the Mother of all Algorithms is itself NOT an algorithm and never could be programmed. The Mother of all Algorithms is the formless mind projecting meaning onto its objects (i.e. conceptually designating meaning onto the sequential and structural components of the algorithm as it is being written).
The non-algorithmic dimension of mind, the understanding of meaning, is needed to turn the user's (semantically expressed) requirements into the purely syntactic structural and causal relationships of the algorithmic flowchart or code.
Minds, machines and meaning
The nearest computer analogy to conceptual dependency, so far as one is possible, is the 'meaning' of symbolic variables, which gets stripped out of high-level languages during compilation to machine code.
This removal of meaning is inevitable because a machine cannot understand, interpret, use
or manipulate meaning. Only
minds can grasp meaning, hence
the programmer's lament:
I'm sick and tired of this machine
I think I'm going to sell it
It never does do what I mean
But only what I tell it