1. Can we quantify Gödel?
One bonus of DR (Digital Relativity) is to soothe despairing fears and to temper hopes about the possible perfection and closure of a global DU (Digital Universe) or of any more or less partial DU.
The basic idea is suggested by Gödel, with his undecidability and incompleteness assertions, and by a tentative union with the limits of physical experience, such as the limit on speed (the speed of light) and quantum theory with its uncertainty. In both cases, theory asserts and experiment confirms that definite numerical values may be assigned to these limits: light speed and Planck's constant. Hence the hope that somehow a sort of logical limit constant could be calculated, to get more out of the Gödelian expression "any system complex enough to include arithmetic": how many bits does that take?
An effective calculus of the limits due to DR would open the way to some engineering.
If we knew the digital equivalent of e = mc2, we could directly compute the IEO space curvature. If not, we could at least say that, at some times, a threshold is crossed. For instance, the way we can formally find infinite loops in a program.
I never could find an answer. Neither by myself (but that is not surprising, due to my limited capabilities in mathematics and formal systems), nor in any book or dialogue with competent persons. Nobody, however, told me that the question itself is nonsensical.
Still waiting for an answer, I found some possible contributions to such a theory. (DR is also a proprietary concept, launched with my book "L'informatique libère l'humain" in 1999).
2. First approach: time and distances, noise
There is no such thing as perfectly white noise. It is a theoretical limit, like the real line. Noise is something like a dirty chaos, more or less dirty.
Noise acts on transmission, with an error probability tied to distance. We admit that noise is a probability of false bits, given the size (or ops) of a being and/or a number of cycles.
Below some distance, noise is considered negligible. This threshold may be considered as a characteristic of a given DU, or as a feature of beings, which then have to be sufficiently protected if they are to survive.
In a noisy system, higher-level beings take action to compensate for noise, for instance with redundancy, or through an appropriate spatial organization (they locate themselves near their most related partners).
Any growth must be paid for with more noise, probably with a decreasing yield (depending on...).
Borders around beings are useful; and so are intermediate regions to cope with small addressing errors, since the addressing system and the communication system will themselves be affected by noise.
Noise has more or less important effects depending on the regions it hits, active or passive:
- bits of high or low weight
- bits in zones frequently or rarely used, of high or low pressure
- syntactic bits, with possibly cumulative effects, a fortiori in repetition.
In a totally reduced system (all bits independent/orthogonal), a fault on any bit kills the whole system. (To be checked.)
2.2. Noise vs. distance
Consider, for instance, communication between two DU beings separated by a distance d (in bits, whatever method be used). If this universe has a noise rate such that the move from one binary position to the next entails an n% chance of a bit error, then the probability of error over d is approximately (n.d)%. If we use error detection and correction such that l% of erroneous bits is acceptable, then the maximum distance of communication between the two beings is given by n.d < l, or d < l/n.
This limit has consequences on the size of beings themselves: their radius, or the maximal distance between two points, or between any point and a supposed controlling centre, must not exceed l/n. That could explain, for instance, the necessity for life to protect itself against (chemical) noise behind cell membranes. It could also give ground to the utility of a distinction between internal and external modes of communication, with much more protected and heavy communication methods for the external world, contrasting with efficient but per se fragile communication inside the being.
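As an illustration, the bound d < l/n can be computed directly (a toy sketch: the linear form is the small-noise approximation, while the exact form assumes independent bit flips per position; the rates are written as fractions, not percentages):

```python
import math

def max_distance_linear(n, l):
    # Linear accumulation as in the text: error over distance d is n*d,
    # acceptable while n*d < l, hence d < l/n.
    return l / n

def max_distance_exact(n, l):
    # If each of the d positions independently flips the bit with
    # probability n, P(error) = 1 - (1 - n)**d; solve for the largest d.
    return math.log(1.0 - l) / math.log(1.0 - n)

# A per-position error rate of 0.01% with a 1% acceptable error level
# allows beings roughly 100 positions apart.
print(max_distance_linear(0.0001, 0.01))   # about 100
print(max_distance_exact(0.0001, 0.01))    # slightly above 100
```

The linear bound is slightly pessimistic, since independent errors can cancel nothing but also never exceed the exact probability for small n.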
2.3. Noise vs. time
Of course, the fact that digital beings cannot exist without some "incarnation" in the physical world makes them dependent on physical relativity and uncertainty. That did not matter much for the first digital beings: DNA, neurons and electromechanical devices are neither large nor fast enough to suffer from the speed limit or quantum uncertainty. It is no longer so, with the gigahertz clock speeds of today's computers and long-distance communication through satellites. Here, the quantitative values are well known, and taken into account in the design and architecture of digital systems.
In a future to come, quantum computing, and possibly instantaneous communication due to the EPR (Einstein-Podolsky-Rosen, see Wikipedia) effect, could have dramatic consequences on DU. How far and how soon?
Even without reference to physical matter and its relativity laws, a simple time-coherence problem arises if we suppose that the communication speed is finite in terms of digital clock cycles. If the communication delay between two systems is longer than the clock cycle of one or both of them, control of one by the other, or any reciprocal interaction, will demand operations spanning several cycles. To say it crudely, even with the best systems of today, you could not pleasantly make love with a partner living some light-years away!
That also rules out the possibility of operating strictly synchronous systems over long distances, and a fortiori the dream of a globally (if not centrally) clocked digital universe. However, if all the beings that matter have sufficiently rapid basic clock cycles, that does not prevent a time coordination on longer cycles for more global actions.
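The coherence limit lends itself to a back-of-the-envelope computation (a sketch with hypothetical numbers; the signal speed is taken as the speed of light):

```python
def latency_in_cycles(distance_m, clock_hz, signal_speed_m_s=3.0e8):
    # Number of clock cycles that elapse while a signal covers distance_m.
    # Above roughly one cycle, lock-step control of one system by the other
    # is impossible: any reciprocal interaction must span several cycles.
    return distance_m * clock_hz / signal_speed_m_s

# 30 cm on a 1 GHz board: about one cycle, synchrony is already borderline.
print(latency_in_cycles(0.3, 1e9))
# A geostationary satellite hop (about 36,000 km) at 1 GHz: over 10^8 cycles.
print(latency_in_cycles(3.6e7, 1e9))
```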
Law: when we augment the load, we augment the organic and even the functional distance.
Reliability (subsistence, MTBF...) is linked to distance.
A model where DU reproduces itself, with noise, from one instant to the next.
2.4. Noise and code saturation
When we add one bit to the code, we create a new undetermined zone. What about noise in these zones? When the code is saturated, every error is paid for directly: we get a wrong being, or another being. In some way, all beings are then equivalent, unless we put a hierarchy on the bits. But, precisely, setting a hierarchy is a form of factorization, with probable losses if the order (the ordinals) is meaningful, and we no longer have a pure and meaningless code.
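A minimal sketch of the saturated case (assuming, for illustration only, that the "beings" are just the patterns of a 3-bit code): in a saturated code every single-bit error silently yields another valid being, while one added parity bit creates exactly the undetermined zone that makes errors detectable.

```python
from itertools import product

N = 3
saturated = set(product([0, 1], repeat=N))  # every pattern is a valid being

def flip(word, i):
    # flip bit i of the tuple word
    return tuple(b ^ (1 if j == i else 0) for j, b in enumerate(word))

# In the saturated code, any single-bit error lands on another valid being:
assert all(flip(w, i) in saturated for w in saturated for i in range(N))

# Append an even-parity bit: half the patterns become undetermined, and
# every single-bit error now falls outside the code, hence is detectable.
with_parity = {w + (sum(w) % 2,) for w in saturated}
assert all(flip(w, i) not in with_parity
           for w in with_parity for i in range(N + 1))
print("saturated:", len(saturated), "with parity:", len(with_parity))
```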
As soon as one uses gaps to make meaning, noise has very different consequences:
- absolute transformation (as with saturated code)
- big change/error
- small change/unease.
(See also in Genetics)
To obtain maximum effects, we should find regions which are at the same time important (bit mass) and indifferent (zones around ruptures).
We shall have big effects if noise perturbs regions containing conditions (IF) or recursion.
Interesting case: the error spontaneously generates a virus.
If we make a difference between programs and data, think about the difference in effects, and we reach a sort of trilogy:
The bit as protection against noise, an assertion of authority within chaos:
1. totally protected zone
2. input zone, clock included
3. zone with very strong error correction
Attraction: beware. Here, to hold together, bits must have a common resistance to noise.
Gap. The miracle, symmetric of sin. On the first side appears a gap between model and real: some probability, uncertainty. At one extreme, the nearly certain, the venial sin... At the other extreme, the very improbable, the lethal sin. And at the opposite, the miracle.
Quantify, measure the gap. How many bits to a real error? Correcting codes? Distance? Cybernetic offset?
Quantitative phenomenon: beings become complex enough to compute their own life expectancy, which makes death appear shocking, a kind of failure.
The evaluation of L has no meaning if P is infinite (though the series could converge, with infinitely small H after some time).
How many bits does S need to be able to detect an anomaly, a fault? And how many for self-representation (if not consciousness)?
The question must be asked more smartly. A fault may be detected on a single bit, if the criterion is external (there should be this, and there is something else).
But how many bits must a being have so that:
- an external S can detect a fault by internal contradiction
- S itself can detect the abnormality.
3. Second approach: number of sub-cycles
We suppose that, during a "principal" cycle, a processor cannot do more than a given number of operations (temporal bits, analogous to spatial bits).
Besides, the model varies strongly according to the number of cycles. We must say what the processor is able to do in one cycle (a clock cycle, or a larger one): addition, multiplication, ... pattern recognition?
But we cannot dodge the issue; otherwise, anything could happen inside a cycle.
Anyway, a processor of a given volume and power cannot do an infinity of things, nor have an infinity of different states. Hence, if the number of cycles is too high in relation to mass, states must repeat: we get cycles. This number would of course be very large (with 64 bits already, by simple exponential growth). But here also, we have reductions by meaningful differences (Kolmogorov, etc.).
4. Third approach: logic, paradoxes, recursion, viruses
4.1. Language: holes and clashes
Perhaps more important than all that, or in duality with it, a form of relativity or uncertainty ("uncertaintivity") comes directly from the mere basis of assimilation and expression, from the mere notion of analysing and creating beings. At the limit, this form is not even specific to digital systems; or, to be more precise, to the rasterization of representations (images, sounds, measures and actuator commands). It comes from the other aspect of digitization, which is fundamental to language.
A language is a set of basic elements (characters, words) plus a set of assembling rules. As such it deploys a space of possible expressions. But this space is too widely open, and leaves room for a lot of uninteresting expressions and phrases. Grammar then adds rules to generate only "meaningful" expressions. Inside that space remains the traditional question of truth, and of the way of deciding it. Gödel has shown, at least for formal languages, that whatever the proof system, there remain undecidable assertions.
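A toy sketch of this deployment (alphabet, rule and "meaning" are all invented for the example): the assembly rules span a space of strings, and a grammar then filters it down to the "meaningful" ones.

```python
from itertools import product

alphabet = "ab"

def well_formed(s):
    # toy grammar rule: no letter may repeat immediately
    return all(x != y for x, y in zip(s, s[1:]))

space = ["".join(p) for p in product(alphabet, repeat=4)]
meaningful = [s for s in space if well_formed(s)]
print(len(space), len(meaningful))  # 16 raw strings, 2 survive the grammar
```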
In programming languages, we also find a scale of valuations, with at least four cases:
- expressions rejected by the compiler, for syntactic reasons and sometimes for concrete reasons tied to the development environment
- expressions accepted by the compiler but leading, immediately or not, to a crash or an endless loop
- formally correct programs, but not conforming to the intentions of the developer or of the users
- programs correct from all standpoints.
Around these basic levels, some variations may be played. For instance:
- the compiler accepts, but gives the programmer a warning
- the program stops, but not immediately, and why not after having done some useful job; besides, an endless loop is not always a prejudice, and is precisely what is looked for in transactional systems, for instance (with of course some way of closing the process; at the limit, a stop by reset or by switching off the mains may be sufficient); infrequent bugs are tolerable in many cases (if ever totally avoidable)
- conformity to the intentions of the developer and to user needs is rarely binary; in management information systems, anyway, it is well known that user needs change with time, under multiple pressures (from law changes to users knowing better).
In other cases, formally correct expressions may prove contradictory for logical reasons (a white black horse) or for more remotely semantic ones (a modest diva).
To take a metaphor: words and rules are a way to project lines and planes ad infinitum, and these lines may as well cross each other (contradiction) as never delineate interesting spaces (incompleteness).
The fact that a language is used by a human being or a digital machine does not change this fact. We tend to say that, in the bad cases, a human being always finds some solution, possibly with a rule change, and that the machine cannot. But this distinction does not hold, for symmetrical reasons:
- human beings are frequently unable to cope with contradictions, or are practically blocked by undecidable issues
- within limits, machines may be built which cope not too badly with this kind of difficulty.
4.2. The uncanny loops of self-reference and recursion
"I am a strange loop" (Hofstadter)
Self-reference is a specific feature of language. Any programmer knows the power, and the dangers, of recursive functions. And a major part of paradoxes, as well as the Gödelian limitations, draw on "diagonalization", a particular form of self-referent assertion.
For instance, the Richard-Berry paradox: "the least natural number that cannot be described in less than twenty words". If such a number does exist, we have just described it in thirteen words. If such a number does not exist, then all natural numbers can be described in fewer than twenty words.
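The diagonal flavour of the paradox can be made concrete with a toy description language (everything here is invented for illustration: "descriptions" are arithmetic expressions of at most five tokens). The twist is that the short search loop at the end is itself a compact description of the very number the language cannot describe.

```python
from itertools import product

TOKENS = ["0", "1", "2", "3", "+", "*"]
MAX_TOKENS = 5

describable = set()
for n in range(1, MAX_TOKENS + 1):
    for combo in product(TOKENS, repeat=n):
        try:
            # eval is safe here only because the inputs are fully controlled
            describable.add(eval("".join(combo)))
        except SyntaxError:
            continue  # ill-formed expressions describe nothing

# Berry-style: this one-liner "describes" the least number that no
# five-token expression of the toy language can describe.
least = next(k for k in range(10**6) if k not in describable)
print(least)
```

The search itself uses far fewer symbols than many of the expressions it enumerates, which is the paradox in miniature.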
In the physical world, recursion is impossible, but it is sometimes present in some way, for instance with parallel plane mirrors facing each other.
For philosophers, self-reference has long played a considerable role. There are three main points:
- self-reference is impossible to beings: omne quod movetur ab alio movetur; only God is creator of himself, and that is not the least of his paradoxical nature!
- metaphysics is the only science legitimately judge of itself... and for this reason is not a "natural" science
- man is the only being able to be conscious of himself; but what is consciousness...
Note: a universal anti-virus is fundamentally impossible.
Digital self-reference has a crucial importance in one case: self-reproduction. One of the bases of life, which invited itself onto Earth. And a no less self-invited presence in "our" world of artefacts: computer viruses. In both cases, the process has at its core the duplication of a code of some length. And, if it has some remote metaphysical cause, that does not prevent it from happening billions of times every day without asking our permission.
Computer viruses may seem less mysterious than life. We certainly know how they appeared, and we are only too able to make them appear, with not so considerable a competence in programming. But, beyond the fact that we shall never be free of them, they also have tricky features. To begin with, the creation of a universal antivirus program is... mathematically... impossible. And there are even cases where it is impossible to say whether a program is, or is not, a virus. And all that for reasons which refer to the same diagonalization processes as Gödel's laws.
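The diagonalization behind this impossibility can be sketched as follows (a toy: `naive_detector` and the "viral action" are stand-ins invented for the example, not real security code). Whatever the detector predicts about the diagonal program, the program behaves the other way:

```python
def naive_detector(source: str) -> bool:
    # Toy stand-in for a claimed universal detector: flags any source
    # text that mentions the viral action by name.
    return "infect" in source

# The diagonal program asks the detector about its own text, then does
# the opposite of the prediction.
diagonal_source = (
    "if naive_detector(diagonal_source):\n"
    "    pass        # predicted viral: stay harmless\n"
    "else:\n"
    "    infect()    # predicted harmless: act virally\n"
)

verdict = naive_detector(diagonal_source)
print(verdict)
# The detector answers True ("virus"), yet the program, if run, would then
# take the harmless branch; an answer of False would be wrong symmetrically.
```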
It is even probable that some day spontaneous generation of viruses will emerge out of the Googlian soup.
Hypothesis: viruses must "spontaneously" emerge:
- when DU or any sub-universe becomes very large, the probability of any string of finite length grows; that is true also for virus strings
- when DU is large, cracks happen, favourable to virus contamination.
A virus:
- cannot be held in a too-small digital being
- is certainly present if the being is very large (10^700 bits?), which we would prove as follows (we have in mind minimal-length viruses, and their probability of emergence):
- factors reducing this figure: all the equivalent viruses of the same length
- factors augmenting this figure: viruses not generated at random, but indeed anti-virals.
4.4. Recursive functions
Beings that are automata with the Input/State/Output (IEO) model are recursive through their O and E functions. Functions contained in E are themselves IEO beings.
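A minimal sketch of such a being (the class name and the example transition are invented for illustration): an output function O(I, E) and a state-transition function for E, the state being free to contain further IEO beings.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class IEO:
    state: Any            # E: may itself contain other IEO beings
    transition: Callable  # (input, state) -> new state
    output: Callable      # (input, state) -> output

    def step(self, i):
        o = self.output(i, self.state)
        self.state = self.transition(i, self.state)
        return o

# A trivial instance: an accumulator that outputs its current state.
counter = IEO(state=0,
              transition=lambda i, e: e + i,
              output=lambda i, e: e)
outs = [counter.step(1) for _ in range(3)]
print(outs)  # [0, 1, 2]
```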
Diagonalization is at the heart of Gödel:
1. the number of signs in the function
2. a limit value for the parameters (bit length of the variables)
3. finitude of the number of steps. We can go neither down nor up indefinitely in steps. In particular, we cannot go back to the origins.
In practice, one starts with a finite being, at a definite, finite place in DU.
Make a recursive meta-function (a sort of projective geometry) to pass to the infinite.
5. Fourth approach: reciprocal saturation (groupware)
No system can process the totality of the information available in the universe. It has to select a given number of bits in input (a possibly voluntary selection). Similarly, its capacity (in terms of number of states) being bounded, when an input results in a complexity growth, the system must choose what it keeps and what it drops.
That, in the real world, is observable in human being relations (see part 6).
Particular case: auto-saturation. Suppose that a processor processes information at maximal complexity, defined as: the number of bits in output is equal to the number of bits in input plus the number of state bits, after Kolmogorov reduction (and let us check that this is properly attached to the "complete" character of complexity theory). If it receives in input all its output bits, E should grow exponentially, and rapidly. As soon as a threshold in E is reached, the processor has to operate a reduction, at least in the function E(I,E).
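The runaway and the forced reduction can be sketched numerically (the doubling law and the capacity figure are arbitrary choices, standing in for "exponential growth" and "the threshold in E"):

```python
CAPACITY_BITS = 1024   # hypothetical bound on the number of state bits
state_bits = 8
trajectory = []
for cycle in range(12):
    demanded = state_bits * 2                   # full output fed back as input
    state_bits = min(demanded, CAPACITY_BITS)   # reduction forced at the ceiling
    trajectory.append(state_bits)
print(trajectory)
```

The state demand explodes in a handful of cycles and then sticks at the ceiling, where some forgetting step must decide what is kept and what is dropped.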
Reducibility is relative to a language (words, grammar), with of course volume and computing-time constraints:
- if S has only memory/copy, the only reducible beings are those having an identical image in memory (differing only by localization)
- if we add concatenation, all beings are reducible by bit-to-bit copy, but that is not a true reduction; it is more interesting for strings containing several times the same substring
- if we add the positioning of patterns on a white page (any hyperplane), concatenation generalizes to several dimensions
- 2^10 gives 1024: we save only one character
- factorials are also interesting
- the Euclidean division A = bQ + R can reduce any number
- if b can be factorized, giving Q and R is interesting (to be checked), notably for multiples of Q.
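The repeated-substring case above can be sketched as a minimal reduction (a toy, not a real compressor: the "language" has only the operation "pattern repeated k times"):

```python
def reduce_repeats(s: str):
    # Try to express s as k copies of a shorter pattern; the pair
    # (pattern, k) is a genuine reduction only when k > 1.
    n = len(s)
    for plen in range(1, n // 2 + 1):
        if n % plen == 0 and s[:plen] * (n // plen) == s:
            return s[:plen], n // plen
    return s, 1

print(reduce_repeats("abcabcabc"))  # ('abc', 3): nine characters down to a short pair
print(reduce_repeats("abcd"))       # ('abcd', 1): no true reduction
```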
5.2. Noise from sampling and filtering
Mismatches between computing functions may cause interesting cases of noise. Such are the cases of sampling (see for instance Nixon-Aguado) or filtering (see Bellanger). That applies to living as well as artificial devices.
The sampling itself, or rasterization, of physical beings and processes is a major theme for digital system limits. But where exactly are these limits? Since at least the 1970s, we have read predictions that the limits of matter would be reached, for elementary circuit sizes as well as for communication rates. But, regularly, technological progress pushes them farther. Rather surprising news has been quantum computation itself, which would transform the malediction of uncertainty into a boon for computing power!
5.3. Minimal loops
1-bit model.
Generation: if S = 1, display of a stored image; if S = 0, random generation
- if recognized as ..., 1
- if not recognized as such, 0
2-bit model.
Generation: one image among three, stored or recognized;
then this bit is added to new images.
For instance, on a load, the ... 2 unrecognized images are stored, then taken as the new ones.
This principle may be applied to radically new things, to this image, or to this algorithm.
We may have image generation around a prefix
Take a model on one domain/flow, having a good performance on its domain, with a free bit margin. If there comes a domain where it does not work correctly, with very bad performance in recognition (e.g. a new language), launch a new recognition search, with a prefix bit "new". To do more than this, we should analyse what we obtain so as to re-integrate the prefix.
difference between evolution and revolution
The stream comes under recognition. If it is not recognized: alarm, or betterment (value index), or jump to the next recognized message. If it is recognized, a message is sent to the addressee, and an action corresponding to what is looked for is triggered.
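The loop just described can be sketched as follows (the message names and the "known" set are invented for the example):

```python
known = {"ping", "hello"}   # messages the recognizer already knows
new_store = []              # unrecognized messages, tagged for later learning

def handle(message):
    if message in known:
        # recognized: forward to the addressee, trigger the looked-for action
        return ("forward", message)
    # not recognized: alarm, and keep the message as "new" for betterment
    new_store.append(message)
    return ("alarm", message)

results = [handle(m) for m in ["ping", "blurb", "hello"]]
print(results)
print(new_store)  # ['blurb']
```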
6. Consequences of digital relativity
Hence follow some consequences:
- the limiting principle may be taken as a germ for transcendence: beyond the reachable, God. Or at least the poet, the ineffable. The ontological proof... a file so large that it would exist by metaphysical necessity.
- consequences may be positive or negative.
1. Law. DU cannot be totally unified. A unique machine, the unified "myth of the machine" (Mumford), is technically impossible in principle. That is reassuring, but with limits: see Google! This question is dealt with in Lyotard, and Sloterdijk elaborates on it in his series "Spheres".
2. Law. That applies also to time. There can be no total saga, no global history. That goes along with post-modernism. This question is largely dealt with in Lyotard, of course not in computed form; but, with Gödel, Thom, etc., a lot is said. Nevertheless, I would like to reach a computation (something like a curvature radius in Hyperworld), in bits, by reduction, e.g. here on the digital size.
3. There are also rules on minima, on minimal loops with self-reference, on the distance of an S to itself...
4. Distance from neighbours is part of safety. Hence also the justification of the "membrane" in living beings and elaborate artefacts. To be at ease, any being must have some free space around it. Duprat's law (system/population alternance). Protection against elementary errors. In time also, any being, to be at ease, must have enough margin around it.
5. The contradictions that appear may cause stress.
6. Undetermined spaces appear, which are to be filled. That may be a curse as well as an opportunity. At the limit, an interstitial region for Spirit, Eccles-like, but otherwise.
That offers a base for liberty and creativity: there is an indifference margin, which everybody must use to get the best of his own L, optimizing locally. See "Écumes" by Sloterdijk. Then comes the time of conflict: then, we have to build, in some way (a physical wall, netiquette...).
Varia to be edited
And a point in Bailly-Longo.