The actual Digital World History
A major aspect of the real world is its temporality. Digital global evolution is one aspect of the World's evolution. And, we tend to think, its very backbone.
After a presentation of the basic issues, paradoxical as they are, we shall summarize the saga sketched in the preamble, then attempt to list the main laws of digital and general World history.
Here in particular we take a "modern" stance, history being seen as progress across millennia and centuries. We are not post-modern, with the skepticism it brings about "grand narratives". We assume, and attempt to show, that there really is a global growth of the World as digital, even if there can be periods of recession (typically, the fall of the Roman Empire).
History and prediction of the World as digital. We see it, globally, as an exponential growth.
Can the evolution from the big bang to mankind be analyzed along our lines of digitization? The first chapter of the Bible tells it that way: "God divided the light from the darkness... and divided the waters which were under the firmament from the waters which were above the firmament...". God, the first digital artist!
Three aspects of history must be distinguished:
- the historical facts themselves (Platonic, let us say);
- the way we talk about them, paint them;
- the progress in knowledge about the past, both concrete (archaeological excavations) and abstract (science in general), with new tools (palynology...).
1.1. The global theory paradox
If the digital relativity theory is pushed to its end, the physical world may be seen as an emergence of matter in the digital bosom. The Big Bang is then seen as a kind of phase change: a purely digital reality suddenly loads itself with physical values, including their apparent continuity. This continuity is perhaps only a kind of approximation, to cope with an excessive abundance of quantum parameters, which prohibits managing them discretely. Perhaps it would be possible to build a totally digital physics, thus dropping the differential equations in their analog form.
Actually, the upturn from "real" to "digital" is disturbing mostly when you first think of it. After a while, when the representations become large, with populations and beings swarming, the digital hides itself and the observer recovers something like the classical view, continuous and analog. And all the more so as these representations are shared with a large number of other acculturated beings.
Digital relativity is constantly present, both brake and accelerator, inducing dialectical moves. The Hegelian thesis/antithesis/synthesis is directly called for by the digital nature of this world, with the "(n+1)th bit" scheme.
And everything stems from the L growth law, which appears as the most general law in the Universe, and then meets the paradoxical fate of contradicting itself: it pretends to enclose the whole universe, while stating that digital relativity prevents that absolutely.
1.2. History emerges into digital (and vice versa)
This is just a particular case of the emergence of the concrete and of nature from the digital. Like concrete and abstract: one can see the abstract as emerging from the concrete, through thinking processes (even in computers, with data analysis, for instance), or the concrete emerging from the abstract, when more and more detail and precision in abstract assertions zoom in on specific points, which then lose the lightness and matter-independence of abstraction.
There are two dual ways of thinking digital history. The first, traditional, is to write a history of computers, of computing, of computer scientists, and of how digital beings emerge from matter and progressively achieve their ubiquity on Earth and in the skies. The second, which we shall attempt to sketch here, is to see how digital beings, mainly databases then textual beings, after having stored only some numbers and literals (names and postal addresses, for instance) about present beings, integrate more and more the historical dimension, hence making general history take its place in their models, including a representation of themselves in the general landscape. The two ways intermesh.
Henry De Lumley's recent book L'homme premier européen shows at the same time the emergence of man in the World, and in Europe, and the way the archaeologist gives access to the enormous lengths of prehistory.
2. The global saga
In our theory, the problem of origins is a problem of quest. Any system starts hic et nunc, and only after much development does it reach the questions of its origin, be they ontogenic (the primal scene, intra-uterine memory), phylogenic (small scale with genealogy, large scale with history and paleontology), global (physics), or possibly metaphysical (God).
At a given moment, a being has access to some place in GG, maybe its internal E or an I; hence it may deduce an origin. Something like prospective geometry. In the history storage, some functions will vanish in a computable time. Symmetrically, S can extend the functions to make forecasts and predictions.
Our model is resolutely "modern". World history has a direction, a meaning: the growth of the digital, of the number of actually explicated bits, to make it clear. So there really is "progress". This progress is not monotonic. Some periods may be taken as highly regressive (the fall of the Roman Empire, Auschwitz), but globally the movement goes on.
3. Laws and time patterns
3.1. Grow or die
This is the global "categorical imperative", to say it in Kantian words.
It is true for the World in general as well as for each and every being.
3.2. The paradoxical principle of least action
"Lorsqu'il arrive quelque changement dans la Nature, la quantité d'action nécessaire pour ce changement est la plus petite qu'il soit possible" (Maupertuis: "When some change occurs in Nature, the quantity of action necessary for that change is the smallest possible").
But digital growth is a growth of L. Hence its basic contradiction: the digital universe is determined to grow in indetermination. And we must both
- agree with post-modernism in its refusal of any "grand narrative", with for instance its ending in a paradise of infinite duration (P) and the infinite variety of God (H);
- look for the various sides and consequences of this paradoxical growth.
In the absence of an E law, an S reproduces itself identically from t to t+1. More precisely: each being does not change if it is not changed by something else (omne quod movetur ab alio movetur), but it has its own changing rules: simple mechanical energy for physical bodies, the automaton law E = f(E) when I is null.
If there is an E law but no inputs, S has a finite number of different states, and will follow cyclic moves inside that space, since from a given state E it can only go to the f(E) state. If there are inputs, then it can possibly follow any pattern within the space of accessible states.
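This cyclic fate of a closed system can be sketched in a few lines of code. This is only an illustration of our notation (the state space and the law f are toy choices, not taken from any source): a deterministic law E = f(E) over finitely many states must eventually re-enter a state already visited, hence cycle.

```python
# Toy sketch: a closed S with finite state E and deterministic law
# E -> f(E), no input I. Finiteness forces the trajectory into a cycle.

def trajectory_until_cycle(f, e0):
    """Iterate E -> f(E) from e0; return (tail, cycle) as two lists."""
    seen = {}                  # state -> step at which it first appeared
    path = []
    e = e0
    while e not in seen:
        seen[e] = len(path)
        path.append(e)
        e = f(e)
    start = seen[e]            # first state belonging to the cycle
    return path[:start], path[start:]

# An arbitrary toy law on 8 states (3 bits of E):
f = lambda e: (3 * e + 1) % 8

tail, cycle = trajectory_until_cycle(f, 0)
print(tail, cycle)             # [] [0, 1, 4, 5]
```

With inputs I, the same machine could be driven through any pattern of accessible states; without them, the cycle is inescapable.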
The meaningful growth is L growth. The imperative to have L grow is the basic human, and general, "geodesic".
1. If L = E (but for a reserve of ad hoc functions), for a given E we cannot do anything.
2. If E is fuzzy, we can increase L with emerging or conquered bits:
. from an external model
. from inputs
. combining both.
3. Mobilization of bits. It can be observed that some places, some bits, are never used, nor pointed at. Garbage collection. Then progress comes by "mobilization" of bits or, more generally, by raising the temperature and distributing it better.
4. One may consider L as a level of unpredictability of the impacts of O on me. Then L measures the imperfection of the model we have of S or, more exactly, that another being S' has of S.
Or, still, one may consider k(O/I), that is, the complexity of the minimal system required to get O from I. One could reason directly on b(O) or k(O). One could say that k(S) is the smallest program which lets one get O from I (considering S as a black box, of which E is ignored):
- if k(S) > E, we do not have a good model to compute k(S);
- if k(S) < E, E is under-used;
- if k(S) = E, or better k(O/I) = k(E), that means
  - either that we have an optimum of use, with a good external model,
  - or that there are compensating imperfections (we could talk of a global quality index, but it is perhaps uninteresting, since it combines the properties of S and S').
Still in the fourth model.
Case 1. In the simple case, the method consists in mobilizing E as much as possible, introducing into it a combination of maximal complexity K from the standpoint of O/I. Or leaving I aside. One method could be a process of simple copy, with all the rest in random data (acquired outside once and for all). That works well for one cycle, or for a series of cycles if b(O) <= 1/2 b(E). But an external observer would then detect this copy effect rather easily, and so find E again. A better solution must be: design a clever program generating the least predictable O's.
Case 2. Random numbers.
A finite random generator always produces the same finite sequence. The optimum O/I relative to E consists in finding a program which would be the least easy to rewrite from outside. That could be measured in the number of cycles demanded of an external processor in order to reconstitute E.
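The claim that a finite generator always produces the same finite sequence can be made concrete with a small experiment (the generator and its parameters below are toy choices of ours, not a reference implementation): a linear congruential generator with n bits of state has a period of at most 2^n cycles, after which its output repeats exactly.

```python
# Sketch: a pseudo-random generator with finite state E must cycle.
# Toy linear congruential generator x -> (a*x + c) mod m.

def lcg_period(a, c, m, seed):
    """Number of steps before the LCG state repeats, starting from seed."""
    seen = {}
    x, step = seed, 0
    while x not in seen:
        seen[x] = step
        x = (a * x + c) % m
        step += 1
    return step - seen[x]

# 8 bits of state: the period can never exceed 256 (here it reaches it,
# since these parameters satisfy the full-period conditions).
p = lcg_period(a=5, c=3, m=256, seed=1)
print(p)   # 256
```

An external observer S' who watches enough outputs can reconstitute the whole state E; the "cost" of that reconstitution, counted in cycles, is precisely the measure suggested above.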
Case 3. Use the randomness of I.
If S' ignores what comes on I, then k(O) will be something like the sum of k(I) and k(the part of O coming from E), or a combination of both.
A fortiori, if S' controls I, then it may experiment methodically to identify E more rapidly.
It remains to be known over how many cycles k(O/I) is measured.
If we put no limit on the observer system S', this number depends only on S. We could imagine that S keeps some results hidden for a very long time, in order to lure S'. S could, for instance, have a cycle counter, and decide to send new elements very seldom (this limit may be located very far away as soon as E has more than a few tens of bits).
If we put limits on S', then the number of cycles depends on S' as well as on S.
O may be precisely adapted to the comprehension of S by S'. This is the case of human-machine interfaces. (Generally, the problem is to understand a functional part of S rather than S as a whole, except when the interface is made for designers or repair persons.)
For many devices, the production of a well-defined O from the I is precisely the economic justification of the device!
When the autonomy of S grows, it may create more sophisticated models of its own P, and for that optimize A as well as I (verity principle, or commercial feeling).
An important case: an O specifically designed to let one or several S' know S.
Keeping on along this path, we could find a desire of S to exist virtually beyond its normal P, by expressing itself to the maximum, and transmitting the best of itself (the major part of its E) so that E' persists beyond the death of S.
But how can that be explained in L terms, except by considering that S is part of a larger family, and that this transfer of E increases the global L?
Let us insist on the fact that, between some entity types, certain relation types may exist, and others not. But, for a large part, the types themselves are defined only by their relations (clear in OO programming).
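The idea that "types are defined only by their relations" can be illustrated in OO code. The sketch below is our own toy example (the names `Grows`, `DigitalBeing`, `feed` are invented for the illustration): with structural typing, a type is characterized not by its declared name but by the relations (methods) it supports.

```python
# Structural typing sketch: the type is nothing but its relations.
from typing import Protocol

class Grows(Protocol):
    def grow(self, bits: int) -> int: ...

class DigitalBeing:                # never declares "I am a Grows"
    def __init__(self):
        self.L = 0
    def grow(self, bits: int) -> int:
        self.L += bits
        return self.L

def feed(s: Grows) -> int:
    # accepts any entity exhibiting the 'grow' relation
    return s.grow(1)

b = DigitalBeing()
print(feed(b), feed(b))            # 1 2
```

`DigitalBeing` is a valid `Grows` purely by virtue of the relation it offers; no explicit type link exists between the two.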
3.3. History speed and acceleration.
The growth has an exponential speed. We know at least two cases of exponential growth laws in the case of artefacts:
- Moore's law for digital chips;
- Leroi-Gourhan's law for prehistoric flint tools.
It would probably be possible to generalize this law to all artefacts, with a law of substitution when one technology is made obsolete by a better one.
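Such a substitution law can be sketched numerically (all parameters below are assumed, for illustration only): each individual technology follows a saturating S-curve, but when every few time units a new technology with a higher ceiling replaces the old one, the envelope of the family keeps an exponential allure.

```python
# Toy substitution model: successive technologies, each an S-curve,
# each new one with a 4x higher ceiling; the envelope keeps growing.
import math

def tech_capacity(t, t0, ceiling):
    """Logistic S-curve of one technology introduced around t0."""
    return ceiling / (1 + math.exp(-(t - t0)))

def envelope(t):
    # a new technology every 5 time units, ceilings 1, 4, 16, ...
    techs = [(t0, 4 ** (t0 // 5)) for t0 in range(0, 40, 5)]
    return max(tech_capacity(t, t0, c) for t0, c in techs)

growth = [envelope(t) for t in range(0, 40, 5)]
# each technology alone saturates, but the envelope never stops growing:
assert all(b > a for a, b in zip(growth, growth[1:]))
```

The choice of logistic curves and of the 4x ceiling ratio is arbitrary; what matters is the qualitative point that obsolescence-and-substitution turns a family of saturating curves into an exponential one.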
A general cause could result from a sort of thermodynamic law. We could assert, for instance, that:
- the stronger the pressure, the shorter the cycles;
- with time, the lateral pressure augments; bits reinforce themselves; physical spaces diminish;
- pressure pushes toward digitization (as, in physics, pressure pushes toward crystallization).
To give a more precise base to the law of acceleration, we must define history speed better. For example, in human matters:
1. History speed: the number of events per unit of time.
2. If we admit that every human being has the same capacity to generate events per time unit, then history speed is directly proportional to the world human population.
3. Nevertheless, one can assert that only part of the events will be sensed as meaningful by everybody.
Hypothesis: for a given individual, the number of meaningful events per time unit is constant. Then, for a given person, history speed is constant.
One can assume, moreover, that history accelerates because:
- the number of events generated by an individual grows (better health, better "productivity");
- the number of events perceived by an individual (events about which he is informed) grows (media).
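Points 1 and 2 above can be put in a numeric sketch (every figure below is an assumed round number, order of magnitude only): if each human generates events at a constant rate, history speed is simply proportional to world population.

```python
# Toy computation of "history speed" as events per year, under the
# assumption of a constant per-person event rate (value assumed).

EVENTS_PER_PERSON_PER_YEAR = 0.001   # assumed constant rate

def history_speed(population):
    """Events per year for a given world population."""
    return population * EVENTS_PER_PERSON_PER_YEAR

# rough, assumed world populations at three dates
for year, pop in [(1000, 300e6), (1900, 1.6e9), (2000, 6e9)]:
    print(year, history_speed(pop))
```

On this model, the twentyfold population growth between the years 1000 and 2000 alone multiplies history speed by twenty, before counting any growth of the per-person rate itself.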
From the standpoint of a digital being counting time on its own clock, the acceleration of History will appear as a slowing of event rates in the past.
A basic motor: the (n+1)th bit dialectics (see 10.10.2).
See our presentation at the Club de l'Hypermonde, Une loi de croissance historique digitale (LHD) ("A law of digital historical growth", 2009, with some later updates).
We could define a DHI (Digital History Index): basically, the digital mass at a given time, multiplied by some coherence factor. (In some way, all that is real is rational, as Hegel said.)
Hence could be deduced an average temperature, a total mass, a population, a technological level, a gross world production (and hence a history speed), but also typical structures (distribution, urbanization, etc.) for:
- biological species
- human beings and their groups
- star systems
- machine distribution
- power distribution: concentration then division.
If the digital world is infinite, its global mass is constant. And a given being always has a null mass with respect to the infinite.
If the digital world is finite, or at least accessible by a finite process, DR leads us to think that the accessible global mass (the accessible GG) is constant. In this case, the evolution of beings is not independent of this constant.
Then there is a geography of the digital universe, related to the material universe. Any digital entity is located somewhere in the general landscape of its time. We could also try to diversify the model into a series of "circles":
1st circle. The strictly digital: the sum of processors, peripherals, memories. Grows rapidly.
2d circle. Add analog electronics, which can here be correctly approximated by the digital. Paper, all information supports. Also grows, but less rapidly than the 1st circle.
3d circle. Add neurons, life. With human population, biomass.
4th circle. The whole physical world. This one is constant.
An advanced system (relative to its time) has high performance (material), but its practical yield is limited by the environment, the lack of experienced users, etc. It cannot exchange easily with the other, less advanced S's. Probably, also, it is comparatively costly and unreliable.
Efficiency as a combination of maturities, as for liberty.
The stability of a family (type) of S is generally maintained beyond its natural (organic, physical) obsolescence time, because the environment knows how to use it efficiently.
The DHI has to be declined according to localization, down to the individual.
At a given time, the different subspaces have different indices, but this difference itself is conditioned by the general index.
Until the Renaissance, the general index is too low for the globalization of all mankind (see the remark on prehistoric mankind, from the first precarious and rare cultures of hunter-gatherers).
By 1999, the general index is such that isolated tribes, too different, can survive only under protection from the stronger powers.
(local notation for the lines below)
V volume, D volumic density, P population, L average L;
sub-parts (d_i, p_i, l_i) of each part i;
with the total of the p_i = P, the total of the V_i = V, and D = P/V.
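The local notation above can be checked on toy numbers (the three sub-regions and their figures are assumed, for illustration only): the sub-populations sum to P, the sub-volumes to V, and the global density D = P/V is the volume-weighted average of the local densities d_i.

```python
# Numeric sketch of the notation above: sub-parts i with population
# p_i and volume V_i; totals give the global volumic density D = P/V.

parts = [          # (p_i, V_i) for three assumed sub-regions
    (120, 10),
    (30, 20),
    (50, 70),
]

P = sum(p for p, v in parts)               # total population
V = sum(v for p, v in parts)               # total volume
D = P / V                                  # global volumic density
d = [p / v for p, v in parts]              # local densities d_i

# D is the V_i-weighted average of the d_i:
weighted = sum(di * v for di, (p, v) in zip(d, parts)) / V
assert abs(D - weighted) < 1e-9
print(P, V, D)                             # 200 100 2.0
```

The same averaging applies to the local indices l_i against the average L.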
The distribution of the DHI over the different regions is not just any distribution. It is determined by the very level of the DHI.
And the index will grow more rapidly in certain zones (positive feedbacks). Liberty is aristocratic.
But beyond some level, the rapidly developing regions become conscious that they need the others, that they cannot simply leave the "too cold" regions to their own survival; then come compassion, care for the poor, etc.
The DHI can be applied only with some mathematical regression, the choice of a physical and temporal granularity. A technology does not exist in itself, but only through its instantiation into some concrete S's which, at a given time, correspond imperfectly to its nominal yield.
A good use of the law demands that one reason not on one single S, but on a population and its average features.
At a given time, the following factors interact:
- a state of technical knowledge, communicated by documentation and training (by extension: training of young mammals by their parents, genetic transmission, epitaxial transmission);
- a state of the most advanced types and entities;
- more or less large populations, more or less in interaction.
Hence derives a level of aptitude for progress, for yielding some appreciation, explicited in industrial R&D (by extension: conquest of new territories, acquisition of new behaviours by mammals, mutation spaces, milieus favourable to the synthesis of more complex molecules).
Gillier's law (generalized and, in this sense, a particular case of Gupta's law): the more unified the world descriptions, the more numerous the dimensions. And, under an apparent simplicity, horrible Laplacians.
A standard succeeds (and endures):
- because it is terrain-proofed: if everybody does it some way, that proves at least that it is statistically safe;
- because it simplifies action and affords reusability;
- because it supports the community: others may be confident, and communication is more effective through a shared language.
For a syntax to be elaborated at a given time of history, it must have a sufficient yield (Darwinian). Abstraction has an infinite yield potentially, but not practically/historically.
3.5. General Era Parameters
DHI, average temperature, total mass (if we know how to measure it; for instance, inferring it from the observer space).
Being population (there may be some difficulty in the count); the ratio total mass / being population measures a sort of mean complexity.
Technological level of beings. Linked to complexity.
Gross global production (if we know how to get it).
Note (reading the introduction of Bertrand Gille): the further we go, the smaller the yield of theories. In basic chemistry, atoms and molecules are counted by 10^23, 10^40, etc. With plants and animals, several billions only. Inside mankind, the complexity of each being makes it unique.
Distribution structure, concentration levels
Different being types
Distribution in a biological organism (dinosaur progress)
Power concentration level (then power division level)
To any era corresponds a geography. This geography is partly marked by the chances of historical construction (we shall have to see whence came these aleas; see DR).
Any being, at a given time, is situated somewhere in the general landscape of its era. It may be more or less different from the average.
Example. An "advanced" (for its time) system has high nominal performance and organic structures. But its practical performance is limited by environment, learning curves, etc. Functional performance depends on its environment. It is probably costly, since it is still rather experimental, with manufacturing facilities still young. It cannot exchange easily with less advanced systems. Here, we probably need several scales to measure several grades of performance.
The DHI growth means that the capital of knowledge grows, for each being as well as for the global digital universe.
The global growth comes:
- from new beings;
- from more bits about existing beings. Some beings vanish ("forgetting", a problem to be looked at in more detail; spoiling by noise...).
What about the number of copies of the same being? The question is pointless if network efficiency makes copies pointless, other than temporary ones for practical purposes (proxies) or for safety. Any new article about Mona Lisa makes Mona Lisa richer... And, in the paper universe, a book is important if it is sold by the hundreds of thousands.
The star effect applied to representations.
3.6. Reciprocal strengthening of progress modes
Digital mass growth demands dematerialization, in order to spare matter and physical energy. At the most global level, anyway, any digital increase is done at constant matter quantity, since the universe's mass does not change.
Dematerialization calls for complexity, since we must compensate the lack of physical resources with a better mobilisation and higher yields.
Complexity augments autonomy. First, directly, by definition. But also because it affords interacting more smartly with the environment, finding food at lower cost and avoiding threats. We cannot forget that, for the same technology, complexity augments the number of components and hence the failure risk, plus the risks caused by their interactions. In immaterial and digital spaces, complexity lets us navigate through format adaptations, for instance.
Reciprocally, autonomy augments complexity, since it gives way to better transportation, hence to the acquisition of knowledge and habituses.
Dematerialization augments autonomy, due simply to the fact that lightness and low energy dissipation permit longer hauls.
The digital soul sophisticates itself progressively. And that can be measured. Then it encounters limits, which it may overpass with new ways of complexity augmentation, with new types of links between bits, bytes and objects, with new structures. These limits, and the structures they foster, come from four sources:
- physical matter in general;
- specific features of silicon chemistry;
- specific features of carbon chemistry;
- limits due to the digital soul itself.
A major aspect of digital growth is the "softwarization" (and perhaps virtualization) of beings, as well as the replacement of "hard" genetics by cultural transmission. That goes along with a replacement of strong interactions by weak ones: crystal, molecule, large molecules, cells, societies, democracy... (It may be taken as the basis of Sloterdijk's big trilogy, Spheres.)
This dematerialization is a necessity for, as well as a consequence of, the growth of beings. On average, since the earth's matter quantity is rather constant, the number of bits per unit of mass must grow. Physical, and even digitally material, connexions are replaced by "software" ones.
Let us climb back up the distinction sequence which leads to the present galaxy. First, at the entity level. All that we have said statically about structures finds here its development in time.
This difference has progressively emerged with the structural enrichment of machines. The tool signals the emergence of an external reusability, with a sort of operational cycle given by the user (e.g. the sawing cycle, or the axe cycle). By its construction, the tool incorporates some sort of "program", which must be completed by the action of the user, who adds the rest of the cycle (the gesture he has learned to repeat efficiently) along with the "data", i.e. the place, time and beings where the operation is to be applied.
With a mechanical machine, the cycle is integrated in its system of wheels and gears, possibly using its own energy and working at its specific rhythm (time frequency), possibly controlled by a cybernetic loop (Watt's ball regulator). Then one could say that the program is completely integrated in the machine, the data being brought implicitly by the physical beings to be processed and the place where the machine is located. With machine sophistication, the program emerges more and more explicitly:
- it can be "parametered" through some levers or valves;
- it takes a specific physical form with cams or complex gears;
- some parts may be assembled or changed to obtain a given result (the drawer, the writer and the organ player of Jacquet-Droz, for instance).
With the punched supports (cards and tapes) of the barrel organ or the Jacquard weaving loom, the program becomes external and rather easily exchangeable. One can even ask whether, in the Jacquard loom, the disposition of the threads to be weaved is a sort of data and the holes in the cards a program... or vice versa.
In punched-card calculating machines, the cards bear the data, and the program is represented by a manual disposition of the device (the sorter), but possibly also by certain cards (control cards). The progress of these machines let a sort of programming appear through the "wiring" of interchangeable "connection tables".
But the distinction becomes much clearer with the von Neumann machine, because here the physical support is the same (by definition), but the inputs are acquired separately, on different batches of cards, for data and for program. Nevertheless, the border cannot be totally clear: HTML, and all the more XML, reintroduce intermediary information types.
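The shared-support point can be made tangible with a toy stored-program machine (entirely our illustration: the opcodes and memory layout are invented, not taken from any real architecture): program and data live in the same memory, so the border between them is only a matter of how the processor interprets a given cell.

```python
# Toy von Neumann machine: one memory holds both program and data.
# Instructions are (opcode, operand) pairs of integers.

def run(mem):
    """Execute from cell 0; returns the accumulator at HALT."""
    pc, acc = 0, 0
    while True:
        op, arg = mem[pc], mem[pc + 1]
        pc += 2
        if op == 0:            # HALT
            return acc
        elif op == 1:          # LOAD  acc <- mem[arg]
            acc = mem[arg]
        elif op == 2:          # ADD   acc <- acc + mem[arg]
            acc += mem[arg]
        elif op == 3:          # STORE mem[arg] <- acc
            mem[arg] = acc

# cells 0..7 hold the "program", cells 8..9 the "data" — same memory
mem = [1, 8,    # LOAD  mem[8]
       2, 9,    # ADD   mem[9]
       3, 8,    # STORE mem[8]
       0, 0,    # HALT
       20, 22]  # data
print(run(mem))   # 42
```

Nothing but the processor's current point of view distinguishes cell 8 (here "data") from cell 0 (here "program"); a STORE into a low cell would quite literally rewrite the program.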
3.8. Lighter "social" links
Unity extends to machines. Indeed, the unity of all machines has always been a sort of accepted myth (Mumford)... Unification of science.
Atom, mineral molecule, organic molecule, cell, polycellular organism, animal societies, biotopes...
Beings, and more especially digital beings, meet and compete. The "softwarisation" law applies here also.
At the start, purely physical laws apply, and do not let the explicit IEO functions operate. Then the relations become more explicit and move to "adult to adult", stable and contractual (hence digital, since a contract is a text).
Biological growth could not but be oriented towards a greater brain. And the digitization of representations does nothing but continue the first break between signifier and signified.
Necessary to globalization. Sloterdijk, Spheres.
Communication between beings is growing at an even higher rate than processing capabilities. Direct communication between living beings reached a high point with speech and articulated language. Since then, progress relies on artificial communication devices. Recent times have seen a rather fantastic increase of communication, from the "Gutenberg galaxy" to Google and cell phones. An important amount of communication connects the world of machines, but living beings are also implied in these ubiquitous networks. How far will all that merge, integrating the living beings beyond privacy and beyond skin limits?
We could elaborate as follows:
1. elements become stable within the limits of one S;
2. layering of elements, organs and functions inside one S;
3. emergence of semi-independent entities, with a layering of the slave/master type;
4. emergence of coordinating entities, powers or servers;
5. contractualization and peer-to-peer cooperation.
Then, if there were several independent S's, one has eaten the others.
3.10. Coordinated entities. One S uses another one. Thresholds
A dialectical view of DU history: thresholds of non-coherence.
If the used system grows, during some time the used S perfects itself in a rather homothetic way on its functional and organic mass. New components appear, as well as new functionalities. Rather as when one buys new add-ons for a camera or a computer game, while learning new user techniques.
There comes a time when the functional complexity (here, the interface complexity seen by the user S) extends beyond its capacities. Some enhancements or optimizations remain possible. But then it is no longer enough, and the way to keep progressing is to replace open loops by closed ones, and to delegate some command executions, or decisions, to the used S itself. That is a topological jump, because we add loops. And it decouples the two complexities: use complexity lowers, while organic complexity increases again (strongly, maybe).
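The open-loop to closed-loop jump can be sketched on a toy thermostat (all numbers and the bang-bang rule below are assumed, purely illustrative): in open loop the user S must issue a command every cycle; in closed loop the decision is delegated to the used S, so use complexity drops while the used S's organic complexity grows.

```python
# Toy thermostat: temperature gains the command value, loses 1 per cycle.

def open_loop(t, user_commands):
    """Open loop: the user S supplies one heating command per cycle."""
    out = []
    for cmd in user_commands:
        t = t + cmd - 1                # heating minus a constant loss
        out.append(t)
    return out

def closed_loop(t, target, cycles):
    """Closed loop: the used S decides each command itself."""
    out = []
    for _ in range(cycles):
        cmd = 2 if t < target else 0   # decision delegated to the used S
        t = t + cmd - 1
        out.append(t)
    return out

print(open_loop(18, [2, 2, 0]))        # user works every cycle: [19, 20, 19]
print(closed_loop(18, 20, 5))          # user only states the target once
```

The user's interface shrinks from a command per cycle to a single target value; the regulation loop that disappears from the interface reappears, enlarged, inside the used S.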
(Same image as in Complexity)
3.11. Progress of Art
Here we must face a contradiction, or a paradox: there is evidence both for progress and for no progress since the origins.
The case for non-progress is compelling: prehistoric wall paintings are counted among the great achievements of...
Progress seems a necessity, if only to be original. A lot of experts think along the lines of Michaux (Critères esthétiques et jugement de goût): "Jusqu'au moment où... un jeu de langage épuisé s'ouvre alors sur d'autres. Ni meilleurs, ni pires. Ni plus avancés, ni moins avancés. Simplement autres." ("Up to the time when... an exhausted language game then opens onto others. Neither better nor worse. Neither more nor less advanced. Simply others.")
That would imply that, for artists as for seers, an indefinite space is open, where new ground may always be found. Like a sort of indefinite "frontier" open to pioneers, who would be neither better nor worse than their predecessors.
That was perhaps the case when Mankind progressively conquered the entire surface of the earth, with only beasts to push away. But there came a time when the totality of the Earth was covered, and newcomers had to be somehow "better" than the natives if they wanted to find their place, and to kill or dominate the natives in order to develop their new way of life.
In the language games, the space is no longer limited, other than physically. New bits can always be added. There is no need to kill the more "primitive" arts. Just, perhaps, during some decades, to consider a preceding generation or "school" as outmoded. But what cannot be done is to forget. A painter today cannot act as if Masaccio, da Vinci, Ingres, Monet, Picasso and Klein had never existed. He must know them, as his public knows them. And the painters of tomorrow will have to keep those of today in mind. So there is, at the minimum, a progress by accumulation of knowledge in the art community, from creators to visitors of shows and buyers of works, including the powerful chain of valuation by critics, curators, auctioneers and public authorities.
Of course, new sources of creation can be found in non-cultivated types of artists: Art brut, naive, primitive. Even child scribblings sometimes reach summits of beauty without artistic culture. But these sources are limited, and rarely contribute to the cutting edge of Art. And every visitor to local art exhibitions, even when pleasantly impressed by some works, even when he is ready to buy a pretty watercolour or oil landscape to enlighten his own household, cannot avoid an impression of indefinite iteration of the same efforts and the same well-known kind of results.
Let us add that not every civilization and culture puts this pressure of originality on its artists. Egyptian painters worked during at least twenty centuries without progress, and even with a sentiment of inability to reach the heights of their predecessors. So did generations and generations of Chinese or Japanese artists. This originality imperative is perhaps a specificity of the Occident and of modern times...
Then, for each period, how is new ground to be found, assuming that the whole community knows the existing works?
From the Renaissance to the middle of the 19th century, representing the real (or what was supposed so, as in religious and mythological scenes) was the main aim of painters, and it found progress in perspective, new painting media (oil) and greater perfection in drawing and colour combination. Summits were reached with the great masters of the Renaissance, and various summits were explored up to the "grandes machines" of Ingres and Delacroix. This way closed when photography brought an unsurpassable (and cheap) means of representation. Painting then turned to more subjective realities, with impressionism, expressionism, abstract art. It found new thematic sources in the critique of society and political powers. And finally, from Dada onward, art found novelty in self-criticism and derision. It seems definitely impossible to produce really new paintings, if not by exploring local topics, or possibly by spiralling deeper into dirt and shame.
Music is more abstract, and rather explored the sound spaces and their subjective effects. Here also, it reached its summit in the 19th century, with the total art of Wagner. Baudelaire, as an art critic, said as much in his texts on Delacroix as well as on Wagner. Afterwards, music composers had, like painters, to concentrate on more local or derisive niches.
Conclusion: there is necessarily something irreversible in art evolution, since what has been done cannot be done again or, if it is, it leaves art proper to enter craft or amateurship. This evolution is a form of progress in the knowledge shared by artists and their public.
But as long as mankind does not progress, and even more, as long as it is considered as not perfectible, art must always reduce its ambitions accordingly.