One of the main concerns of your work is the SYSTEM approach to information. There is a standard concept of system, defined as an arbitrary set with relations. How is systems thinking to be applied to information? Moreover, what are, in your view, the limitations and strengths of the standard concept of system?
In my opinion, the standard definition suffers from severe deficiencies. The most striking problem, to me, is that it does not account for the emergence of systemic properties – and this despite the fact that system theory sets out to give a scientific understanding of the old saying "the whole is more than the sum of its parts". In doing so, the standard definition forgoes the necessity of defining the essence of a system, which lies in the effects of synergy at the level of the system. It is essential to distinguish two different levels in a system: the level of the parts and the level of the whole, or the micro- and the macrolevel; to point out that the emergent properties are found on the macrolevel but not on the microlevel; and to note that both levels are coupled together by a certain dynamics that lets these properties emerge.
Mario Bunge has made an attempt to extend the standard definition and to include the processes that go on in a specific system. He calls these processes the "mechanism". If we interpret this "mechanism" as the dynamics of self-organisation, we may be heading in the right direction.
I want to stress that it is self-organising systems that are of interest here and that it is they that I want to see applied to the understanding of information. The reason is that information, in my view, has to do with novelty, with something new, and that the new entity or event can be brought about only by a process of emergence. Thus we need systems that are capable of letting the new emerge – and these are the self-organising systems!
The notion of the SELF-ORGANIZING SYSTEM plays a crucial role in your written work. Would you explain the reach of this concept? What conceptual presuppositions regarding identity or selfhood does it imply?
According to the sciences of complexity, self-organising systems abound in our universe. Only under certain, mostly artificial, constraining conditions, which are not so often found in our universe, do self-organising systems degenerate into mechanical systems that do not produce novelty. This means that all the systems we find are, basically, self-organising systems – whether in the physical, chemical, biological or social domain.
Of course, self-organisation itself evolves. So you find systems with different self-organising capacities. They form different evolutionary strata. The most primitive self-organising systems demonstrate only the dissipative type of self-organisation that Prigogine dealt with. Biotic systems are dissipative systems that carry out their dissipation in the autopoietic way that Maturana and Varela made popular. Social systems are autopoietic systems that carry out their autopoiesis in a "re-creative" way, as Erich Jantsch – by the way, an Austrian, and a friend of Peter Fleissner, in whose teams I worked for a long period of my life – liked to put it.
Well, these different self-organising systems populate our world. And they are the agents of change. They are, in a way, subjects; they are, in a way, selves, if to different degrees: from the most rudimentary dissipative system, which might be called a protoself, to the autopoietic system, which might be labelled a quasi-self, to the most sophisticated re-creative system, which is the fully-fledged self as we use the term today (so far we know nothing of extraterrestrial life – but, I am sure, if it is out there, it is another branch of self-organisation). What they all have in common is spontaneity, which means there is more than one option for them to realise.
Suppose such an agent receives some input. The output then is not predetermined, not fixed, but dependent on the agent itself; the output depends on the agent as much as it depends on the input. This is the precondition of human free will, which decides in a reflective manner between possible alternatives.
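The contrast between a mechanical system and a self-organising agent can be sketched in code. This is a purely illustrative toy: all names and values here are hypothetical, invented for the example, and the pseudo-random draw merely stands in for the agent's own self-determined dynamics – it is not meant to equate self-determination with randomness.

```python
import random

def mechanical_system(inp: str) -> str:
    """A mechanical system: exactly one fixed output per input."""
    table = {"stimulus": "response"}
    return table[inp]

class SelfOrganisingAgent:
    """A self-organising agent: more than one possible output per input.

    The pseudo-random choice is only a stand-in for the agent's own
    dynamics; the point is that the same input admits several outputs.
    """

    OPTIONS = {"stimulus": ["response-a", "response-b", "response-c"]}

    def __init__(self, seed=None):
        self._rng = random.Random(seed)

    def respond(self, inp: str) -> str:
        # The output depends on the agent as much as on the input.
        return self._rng.choice(self.OPTIONS[inp])

print(mechanical_system("stimulus"))  # always "response"
agent = SelfOrganisingAgent()
print(agent.respond("stimulus"))      # any one of the three options
```

Given the same input, the mechanical system can realise only one output, while the agent's output is underdetermined by the input – which is all the example is meant to show.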
Often, “information” refers to relations or processes in nature or in artifacts which lack MEANING or indeed semantical content. Thermodynamics, electronics, genetics and many other disciplines offer examples of this use of “information”. Also often, “information” refers to semantical content and involves MEANING or related intentional phenomena. Folk psychology, library science, common sense and many other activities offer examples of this use of “information”. A Unified Theory of Information aims at a unified account of both uses of “information”. How is this possible? How is this program to be carried out?
I think we have to approach this question in an evolutionary perspective. That is, we have to accept that there is a lineage of ramifications in the ontogenesis and evolution of real-world systems that makes all the different manifestations of information genetically linked to one another – linked by history, by being descendants of the same ancestors, so to speak. If information comes with meaning in humans or in other biotic systems, as biosemioticians like to insist, then there must be a prestage of meaning, a predecessor of meaning, in systems that are less complex. The task is to investigate what is common to the different stages of evolution and to investigate what is different on the basis of that commonality.
In my attempt to do justice to that, information in the most general sense is a relation that is produced by a self-organising system. This agent relates some entity or event, in the most general sense, in its umwelt – an input – to some entity or event that it generates itself – the so-called output. This output is not necessitated in its specificity, not necessitated in its entirety, by the input. The agent could produce a variety of outputs that relate to the same input. In this respect, there is novelty in the relationship, there is a leap in quality from the input to the output, there is emergence which is brought about by the agent. In doing so, in producing a unique output, in providing novelty, the agent assigns an agent-specific signature to the input. Using semiotic terms, we can say the agent, which is the sign-maker, makes a sign which stands for that for which the agent makes the sign. So, I hope I have made this clear: meaning does not fall from heaven; it is already there, in the most general sense, in an agent in so far as the agent relates a certain output to an input. We just have to add that this relation undergoes a long evolution, and that ever more complex manifestations of meaning, tied to, in my terminology, ever more complex manifestations of information, are developed.
How are content and meaning explained in nonintentional or unified terms? Or, alternatively, how successful, or how necessary, is a semantical explanation of natural (nonsemantical) facts?
The unified terms are not nonintentional, because every agent, in so far as it organises itself, generates its own informational relation. Information is information for this agent. It is agent-specific, it is system-relative. But to state this, on a metalanguage level, is to state a fact. It is a real-world fact that agents, irrespective of how complex they are, generate their own informations.
UTI does not understand itself as REDUCTIONistic. How is a non-reductive integration of informational processes possible? It seems to me that, in your view, reduction turns out to be false by definition. Is this not an idiosyncratic understanding of REDUCTION?
Well, I hope it is not!
We can talk of reduction in a methodological – that is, an epistemological, theoretical or semantic – sense, but it always has an ontological correlate. If, in reducing some A to some B in thought, we want to say that A is nothing other than B, then we disregard that which makes A different from B. The ontological side of this coin is to state that all real As are real Bs and only real Bs. This makes the statement false, if we believe there is something that turns a real B into a real A. You are right if you want to tell me that merely a relationship between A and B should be pointed out which says that all As are also Bs. That is just to point out the commonality. That is not a reduction but, e.g., a tracing back to origins.
In my opinion it is the most difficult task in science to draw the distinction here and to see the common together with the unique, the universal with the particular, the general with the specific, in a union. Integration is possible only in a non-reductive way. Reduction is not integration. It is subsumption, subjugation, subjection.
And this is the way I want to approach information. There is a common basis to all information manifestations in our universe, and there is an evolution in which one state of complexity gives rise to another, higher, state of complexity and in which ever more sophisticated information manifestations emerge.
The notion of EMERGENCE is very useful in many of your arguments, being explicitly or implicitly at work in your understanding of informational phenomena. Can you explain this notion in plain terms? What, if any, are the differences between supervenience and emergence?
I have to admit that I never understood the arguments by which Kim tried to explain supervenience. To me it was an unsuccessful attempt to attack the notion of emergence.
The phenomenon of emergence is very easy to understand. The relationship between the emergent and that from which it emerged is best treated in terms of a necessary but not sufficient condition. Let A be the emergent and B that from which A emerges. Then B is the precondition for A. B can give rise to A, but need not do so, and it might also be able to give rise to C. Given A, B is a must: there is no A possible without B. All of that means B is a necessary condition for A but not a sufficient one. You cannot derive A from B, but you can derive B from A. There is a leap in quality from B to A, because A contains some novel feature compared with B, yet A still includes B.
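The logical structure just described can be written compactly; this is only a minimal sketch in standard notation, with A and B as in the answer above:

```latex
% B (the basis) is a necessary but not a sufficient condition
% for A (the emergent):
\begin{align*}
  A &\Rightarrow B     && \text{no $A$ without $B$: given $A$, $B$ is a must}\\
  B &\not\Rightarrow A && \text{$B$ may give rise to $A$, to $C$, or to nothing new}
\end{align*}
```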
In evolution we see upward emergence which lets complexity rise.
It is probably hard not to find the word COMPLEX in your answers to the previous questions. How should we understand complexity in general? The relations between the mathematical theory of complexity and empirical theories do not seem obvious. Does UTI presuppose a unified theory of complexity?
Well, complexity is located between total order and total disorder. A rise in complexity takes place if something new, a novel feature, is added to the already existing state of complexity. And, seen from an evolutionary angle, complexification does not preclude but includes simplification. The reason is that the emergent may exert an influence on that from which it emerged to such a degree that it reworks and – Luciano Floridi has coined a good term which I would like to use in this context – reontologises the historical precondition and makes it an actual precondition. Having done so, a new system has emerged which always takes care of the basis on which it depends as long as it exists. This new system is more complex than that from which it emerged but, at the same time, it is simpler, since the old complexity has been reontologised.
So different levels of complexity arise. Each higher level, as it includes the lower levels, is also a simplification. This becomes manifest when looking at, e.g., the languages of different scientific domains. The mind language is more complex than the brain language, but it is simpler too. It is simpler to deal with psychological phenomena in terms of psychology than in terms of neurophysiology. And that means you need not understand all neurophysiological phenomena in order to deal successfully with psychic ones. The higher level is simpler as it integrates the lower one.
As you point out in many texts we might quote here, you see a relevant divergence between MECHANISM and FREEDOM, which is furthermore of import to the understanding of COMPUTATION, in the following sense: you state that “self-organising systems have the freedom to choose between several alternatives, compared with mechanical systems where there is only one possibility”. Since there are mechanical devices functioning on a random basis, the difference between mechanical and free choice must lie in two possible meanings of “choice”. Should we think that self-organizing systems somehow ponder? Are they little minds? Is that the reason why they may be called free?
OK, I believe that if we look closer we will find that the pretended inclusion of randomness in artificial deterministic systems turns out to be deterministic as well – determinism in the sense of allo- or heterodetermination. Self-organisation presupposes a degree of freedom and it is, in whatever way, self-determined. It is up to the agent how it produces the output. And the agent has more than one possibility for doing this. If the system that has to produce the output has only one possibility – namely, can give only one certain output to one certain input – then it works as a mechanical system and not as an agent. Does a primitive agent ponder? I am not fond of saying that the universe is perfused with a soul. What I do think is that if we human agents ponder, there must be some preconditions from which our pondering capability emerged, and we have to be clear about the fact that we share these preconditions with lower-level agents, though there is a clear-cut difference which makes us distinct from them.
If we consider that nowadays our abuse of natural resources, the inequality among people and the difficulty the majority have in collecting relevant information to solve everyday problems are greater than ever, we might then ask what the "Global Sustainable Information Society" you talk about is. Is it a goal, the end towards which social systems seem to evolve? Or is it more like a regulative idea, such as Kant's perpetual peace – a utopia?
It is a concrete utopia in the sense of the critical theorist Ernst Bloch. Though our societies are self-organising systems, they do not develop by themselves in the direction that would be desirable and would meet the long-term interests of the system's self-maintenance. This is a feature inherent in every self-organising system. Look at Ward's recent book on the self-destructiveness of life. Look at the history of evolution, in which more than 99 per cent of all biotic systems that ever existed have perished. Thus our systems of human civilisation may also perish. But since we are parts of these systems that – in contradistinction to self-organising systems on lower levels – have the possibility of influencing the whole by pondering with reflection, rationality and reason, and since we can identify possible futures for humanity, our systems are not yet doomed to perish. Anyway, it is up to the parts of the systems to make the right choices.
We are approaching what I call the "Great Bifurcation". There is an alternative between at least two possible paths. One is the path of a breakdown and the other is the path of a breakthrough to what I call a "Global Sustainable Information Society". The latter provides a framework for the persistence of humanity, that is, for the capability of humanity to fight against anthropogenic, sociogenic, self-made causes of extermination.
In order to enable societies to get onto the sustainable path, ICTs have to be shaped according to their potential to foster the generation of human and humane information. Unfortunately, owing to the informational capitalism we live in, neoliberal policies have furthered the preponderance of meaningless gadgets over means for survival and for the good society. However, this is not the end of the story. And since humans are self-organising systems, they have the freedom to change priorities. Thus hope is still there.