posted Nov 29, 2011, 5:22 AM by José María Díaz Nafría [updated Nov 29, 2011, 5:30 AM]
Interviewer: José María Díaz Nafría Interviewee: Rainer Zimmermann Date: 09/11/2011
On the occasion of Rainer Zimmermann's 60th birthday on November 9th, I met him in Vienna while attending a workshop on system science chaired by Wolfgang Hofkirchner. Near the Burgtheater, we met in the pleasant atmosphere of the Viennese café Landtmann, where he first sketched out to me his forthcoming book on Schelling, which I am now looking forward to holding in my hands. Before bringing up the matter for discussion, let me give a short review of his scientific career:
Rainer Zimmermann, born in Berlin in 1951, studied mathematics and physics in Germany and England, not far from those who have formulated some of the best candidates for a unified understanding of the physical world; he then came to philosophy after having gone deeply into the core knowledge of our natural sciences. Thus his philosophy has been a "philosophia ultima" in the first place, as he advocates it properly should be. His academic itinerary shows an earnest dedication both to up-to-date knowledge about the world and to a philosophical speculation driven to find a more adequate conception of human life in its social praxis. As a professor of philosophy, he has taught and carried out extensive interdisciplinary research in Berlin, Kassel and Munich, and also in Cambridge (UK), Bologna and Salzburg.
Within the territory of philosophy he has dug into the work of Sartre, Bloch, Schelling and Spinoza – among others – finding that there is an underground line of thinking running through them which goes back to both Averroism and Stoicism. His inquiry into the work of these philosophers – as we can see in his recent "New Ethics Proved in Geometrical Order" – has not been a mere archaeology of thinking or an apologetic reflection, but a heuristic approach to current problems of our knowledge and praxis, particularly the understanding of complex evolutionary systems. To this end he has respectfully followed the path pointed out by these authors, though within the horizon of our current knowledge. His approach is best branded as transcendental materialism, as he has called it since 1990. He has authored about 350 publications, including some 24 books and monographs, and scientific articles across a broad spectrum of topics (from mathematics to ethics, from physics to political systems…).
J.M.: In your writings, you often refer to the necessity of reorienting philosophy as it has been conceived in the 20th century in order to properly reflect the world. You mention that it should be "visualized as a science of totality", following the works of Hans Heinz Holz, and you also consider the task of your own philosophy, "transcendental materialism", to be an "ultima philosophia" rather than a "prima philosophia" in the Aristotelian sense (referring to an expression first introduced by Theunissen). As I understand it, both things are closely related. Can you explain in some detail the requirements of this reorientation, as well as its alleged benefits?
R.Z.: The basic idea is that we cannot conceive a theoretical nucleus of what is traditionally called "metaphysics" as something which can be derived from first thoughts, entailing then a picture of the world which prescribes, so to speak, the latter's evolution and structure. Instead, we have to look first for what the sciences (and the arts, for that matter) are offering us in terms of insight. This present state of knowledge is our raw material for then constructing the desired picture of the world, such that philosophy can be visualized as one which follows up the scientific and artistic modeling of fragments of the world rather than laying the grounds for them (this is Theunissen's 1989 aspect of ultima philosophia) and, by doing so, drafts out an overarching "theory of everything" whilst composing a meta-theory telling us what is common to the worldly fragments in structural terms, but also what we actually do or have to do when developing theories about the world in the first place (this being Holz's aspect of philosophy as a science of totality). The important point here is that within this approach, philosophy gains an explicitly empirical character: it is thus possible to speak not only of theoretical and practical philosophy, but also of experimental philosophy, namely by exploring possible worlds whilst exploring possible implications of scientific and artistic results and viewpoints. Thanks to recent developments in computer technology, these somewhat "artificial worlds" can be modeled much more easily nowadays. (I have discussed these aspects in detail in my book on transcendental materialism and within the framework of the INTAS cooperation, led from 2000 through 2005 by Wolfgang Hofkirchner.) Obviously, this type of philosophy achieves nothing other than what philosophy always achieves: an improved orientation within the world, in order eventually to draft adequate principles for an appropriate ethics.
J.M.: This implies – to my understanding – a conception of interdisciplinarity significantly different from the one usually claimed. Could you give some insight into the methodological requirements of a proper interdisciplinarity?
R.Z.: There are essentially two important tasks. First of all, in order to comprehend what the present insight of the sciences and arts actually is, one has to refer to all that is known at a given time. This is what Jean-Paul Sartre once called the "method of totalization". Obviously, this cannot mean that philosophers have to repeat the work of scientists or artists themselves: instead, it means that they have to look for what is structurally stable and meaningful in an evolutionary sense within all these different fields. Second, in order to be able to do so, it is necessary to utilize the terminology and conventions established in the various fields on the one hand, and to attempt, for this purpose, the explicit development of a unified language on the other. The idea here is to speak about the world in terms of a unified approach in the first place, in order eventually to facilitate understanding for as many colleagues as possible. In fact, this is the true meaning of "interdisciplinarity": to look for what is common to all disciplines and thus shared by them within a space which is literally "between" them.
J.M.: What should be the task of a philosophy of information within this
framework?
R.Z.: Since, within the approach of transcendental materialism, the worldly (i.e. physical) ground of the world can be visualized as constituted by two fundamental aspects, namely energy-matter on the one hand and information-structure on the other, the philosophy of information shows up as that part of philosophy which deals with the second of these aspects. But by doing so, it also shows up as part of the philosophy of physics, in so far as we hold that information is physical. Hence, the information part of this theory deals with the first basic differentiation of what there is in the physical world when talking about it in terms of a physical theory of everything.
J.M.: As we can see in your work, your criterion of what philosophy should be is closely related to ethics – as it was for Spinoza, for instance. Can you explain this relationship?
R.Z.: The point is that all of this knowledge which is being achieved in the sense explained above serves exclusively the purpose of eventually being able to derive a reasonable framework for an adequate ethics. This is what we learn from Spinoza. And the first one to show this in detail was actually Deleuze. In other words: ethics itself shows up as a kind of science, and it does not really deal with values at all. The latter are reserved for moral judgement, but this has nothing to do with ethics. At most it is a bad form of approximation to ethics. On the contrary, ethics asks what kind of behaviour is adequate within a given situation. And more than that: if the ethical analysis finds that some behaviour was non-adequate, then the further task is to clarify how the conditions of the given situation should be changed such that non-adequate behaviour is not necessary anymore. In other words: ethics then formulates explicit proposals which have to be discussed in the proper institutions. But in order to find out about adequacy, the relevant criterion is knowledge at a given time about a given situation in the first place. Hence, it is already the choice of the method which is always an ethical choice.
J.M.: In some of your works, for instance in "Die Kreativität der Materie" (The Creativity of Matter, 2007), you show and refer to the line of thinking which links Stoicism, Averroism, Spinoza, Schelling, Bloch and Sartre (just to mention those you deal with most often). Reading your texts, this connection – alongside its alleged "systematic" character from Spinoza on – is clearly argued. However, talking about this connection with many philosophers, I have found that they often consider it bizarre at first. Can you briefly clarify this connection, as well as the frequent astonishment about it?
R.Z.: The point is simply that most colleagues are educated within a specialized field which they hardly leave for the rest of their career, unless they change their field according to job conditions. Especially in Germany, it is the custom to concentrate on historical aspects of philosophy first. But instead of continuing this historical survey of philosophical thought by applying it afterwards, most of them are not able to eventually leave history and come forward to philosophy proper. Once they have acquired detailed knowledge of a philosopher and his works, they argue that sorting these out would consume most of their time. This is the reason why in Germany there are very few active philosophers by now. Exceptions are, e.g., Habermas or Manfred Frank, but most of them keep to the history of one philosopher only. Hence, in the long run, the individuality of philosophers of the past is stressed, not what is common to their thoughts. Yet, generically, one would expect that if the same thought shows up in different philosophers, this fact would add to its relevance and consistency. I myself did not have this sort of problem, because I came from the sciences, where the strategy for acquiring knowledge is quite different. This is especially so because in the sciences the state of knowledge changes very quickly, and one is asked to keep pace by permanently extending systems, frameworks, and methods. So what I did was to apply scientific rather than literary methods to discussing philosophical problems in the first place, rather than discussing philosophers and their work as the prime objective. Hence, after a while, I recognized generic lines of thought, and by trying to reduce their complexity, I tried to find structural morphisms among them in order to derive general results. This is why I can say that I am actually working on a line of thought (ranging from the ancient Stoa via Spinoza and German Idealism, especially in Schelling's version, up to French existentialism in the sense of Sartre, and to Ernst Bloch). Bloch himself used this viewpoint of visualizing lines of thought, as did in fact the disciples of Hegel. It was Bloch who introduced the expression "Aristotelian Left" for the line he chose to be a member of, parallel to what is commonly called the "Hegelian Left". I would like to locate myself very much on this first line made explicit by Bloch.
J.M.: According to your "transcendental materialism" – correct me, please, if I'm mistaken – matter has to be understood in a much broader and more dynamic sense than is commonly thought. It represents not only a potentiality in the Aristotelian sense, but also the foundation for its own dynamics, which allows it to create new properties, new structures, new causalities… i.e. new beings. What are the foundations for this vision?
R.Z.: I started by re-phrasing Spinoza's system in a modernized language (as he did himself once when re-phrasing Aristotle). The fine structure of this approach, comprising substance with its modes and attributes, was then developed further by Schelling. In fact, there is not much choice but to visualize matter as what in the Aristotelian terminology would show up as "hypokeimenon", i.e. subject, different from substance: matter (in the philosophical sense) turns out to be the prime material of modality. But it is observed in the fourfold shape of energy-mass (or conventionally: matter) on the one hand, and of information-structure on the other. Both of these are physical aspects of the underlying prime material (Urstoff). The latter is itself the physical ground of the former, while substance itself is the speculative ground of this physical ground. This leads straightforwardly into the systematic differentiation of being, non-being, and speakable as well as unspeakable nothingness, for which Schelling has done important preparatory work. These concepts are being discussed within current international research.
J.M.: For the concept of information in its broad diversity – from physical systems to cognitive or social ones – this understanding of matter might have significant repercussions. What is the role of information in the – so to say – architecture of reality from the point of view of transcendental materialism?
R.Z.: The side of information (comprising information-structure, where the first part refers to both potential and actualized states of information, and the second to actualized states of information only) serves the purpose of defining the organizational structure of systems: in order to evolve their characteristic shapes, systems have to utilize energy, while information is telling them how to actually utilize it. The idea goes back to Penrose, Smolin and others: the universe has to be visualized as a self-organizing system. If this is so, then information must be present from the beginning on in order to define the possibility for the system to organize itself. This is also why this information is meaningful from the beginning on, because, following a definition of Wittgenstein's here, its meaning lies in the function which it triggers. Hence, the constituents of the universe themselves act as autonomous agents from the beginning on, as Stuart Kauffman would say.
J.M.: Now that you mention meaning, its foundation is probably a key issue in bridging the understanding of information between the natural sciences – particularly physics and chemistry – and the social sciences and humanities. This concern is often referred to as the "symbol grounding problem" – by Floridi, for instance. What is your insight into this fundamental and open problem – as Floridi states it in his "Philosophy of Information"?
R.Z.: I think this problem is clarified best in terms of the physical perspective as explained above. If it is feasible, on a very fundamental level, to implement a task for agents which is essentially defined in terms of a thermodynamic law (Kauffman calls this a 4th law; it tells us that an actualized state of a system evolves exclusively towards those possible states which are just one reaction step away in phase space – he calls this the principle of the adjacent possible), then there is an overall meaning for models of evolutionary processes, i.e. the tendency to internally reduce complexity while externally maximizing it. Obviously, this can be interpreted as a dialectical version of the competition between order and disorder as expressed in terms of the 2nd law of thermodynamics, valid for the entropy balance. Therefore, there is always already meaning from the beginning on. Once a theory of this is constructed by human beings in the course of their research, the symbolic language utilized entails an adequate mapping of this meaning from the beginning on.
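[Editorial illustration] To make the picture of the "adjacent possible" more concrete, here is a minimal, purely illustrative sketch (a toy construction, not Kauffman's own formalism): the actual state is a set of species, a hypothetical reaction table says which pairs combine into a new species, and the adjacent possible at any moment is whatever is reachable in exactly one reaction step; actualizing that frontier then opens up a new adjacent possible.

    # Toy sketch of the "adjacent possible" (schematic only): the actual state is a
    # set of species; one reaction step combines two actual species into a new one;
    # the adjacent possible is everything reachable in exactly one such step.
    from itertools import combinations

    # Hypothetical reaction table: which pairs of species combine, and into what.
    reactions = {
        frozenset({"A", "B"}): "AB",
        frozenset({"AB", "C"}): "ABC",
        frozenset({"A", "C"}): "AC",
    }

    def adjacent_possible(actual):
        """Species reachable in exactly one reaction step from the actual state."""
        reachable = set()
        for pair in combinations(actual, 2):
            product = reactions.get(frozenset(pair))
            if product is not None and product not in actual:
                reachable.add(product)
        return reachable

    actual = {"A", "B", "C"}
    for step in range(3):
        frontier = adjacent_possible(actual)
        print(f"step {step}: actual={sorted(actual)}, adjacent possible={sorted(frontier)}")
        actual |= frontier  # actualizing the frontier opens a new adjacent possible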
J.M.: We started by talking about the need to reorient philosophy so as to become an "ultima philosophia", and we have also dealt with the requirements of a proper interdisciplinarity. I understand both are closely related to the role mathematics should play in our reflection about the world. Nevertheless, I notice many people dismiss the role of mathematical models, regarding them as a means of avoiding open problems of a substantial kind. To what extent does modern mathematics actually change the landscape of reflection in comparison with classical mathematics – as available to Spinoza, for instance, or even to 19th-century scientists – as well as in relation to what we know about the world?
R.Z.: The first significant difference lies in the concept of discontinuity: nowadays, we deal with a mathematics which is capable of modeling discrete rather than continuous processes, because we believe that the majority of processes in nature are of the former kind. On the other hand, mathematics has become qualitative rather than quantitative: in other words, the objective is not simply to compute things (although even computation alone also entails organization and interpretation of data after all), but to actually model systems. Hence, the mathematical language has not only become more and more qualitative (and thus amenable to hermeneutic methods), but it has also gained more and more the structure of a meta-language, such that mathematical expressions (as we find them in category theory or topos theory) do not only map structural patterns of underlying processes, but also the logical choice for this mapping in the first place. As far as I can see, the enormous potential of topos theory for the modeling of actual everyday-life processes is far from being fully recognized yet. For the first time, we have the chance to include the epistemology in the ontology of a problem whilst modeling its underlying system. Or, to re-phrase it in ethical terms: the choice of methods as derived from topos theory suggests itself as an adequate choice for our days.
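[Editorial illustration] One standard fact of topos theory (not specific to the interview) illustrates the claim that such a mathematical language carries its own logic along with the structures it models: every topos has a subobject classifier Omega, so that the predicates (subobjects) of an object B correspond exactly to morphisms from B into Omega; the logic is itself part of the categorical structure.

    % Subobject classifier: every monomorphism m : A -> B is the pullback of
    % true : 1 -> Omega along a unique characteristic morphism chi_m : B -> Omega.
    \[
    \begin{array}{ccc}
    A & \longrightarrow & 1 \\
    {\scriptstyle m}\downarrow & & \downarrow{\scriptstyle \mathrm{true}} \\
    B & \xrightarrow{\ \chi_m\ } & \Omega
    \end{array}
    \qquad\qquad
    \mathrm{Sub}(B) \;\cong\; \mathrm{Hom}(B,\Omega)
    \]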
posted Mar 6, 2010, 4:45 PM by José María Díaz Nafría [updated Mar 24, 2010, 8:47 AM]
Interviewer: Francisco Salto Interviewee: Luciano Floridi Started: 5/3/2010 To be finished: 15/3/2010
For many of us who consider the job of doing philosophy to be that of making arguments, Luciano Floridi offers the highest argumentative standards on the main open problems concerning the logic, ontology and ethics of information. Preliminary, scattered hints of those problems are discussed here. Full records of papers and further materials are available on his website.
What is, in your view, the essence of structural realism concerning informational phenomena? How would you locate your contributions in the general philosophical landscape of realisms and their alternatives?
A1
How can informational content be at once a natural, a formal and a semantical phenomenon?
A2
How would you sketch your main contributions to the ontology of information? What is the upshot of your arguments, Kantian and Floridian, against digital ontology? Is informational identity dependent, in your view, on a given level of abstraction?
A3
In your "global information ethics" you propose a minimal ontology to overcome the misunderstanding at a global level concerning ethical principles. How could such ontology be practically achieved (considering social and political hindrances)? Could it be enough to tackle complex problems arisen within intercultural interaction?
Can you sketch the state of the art concerning the measurement of information in its distinct dimensions?
A5
What, if any, is in your opinion the shape of the problems, tasks and resources characterizing the scientific and academic field of Information Science?
A6
posted Jan 16, 2010, 9:19 AM by José María Díaz Nafría [updated Feb 3, 2010, 3:15 AM]
Interviewer: Francisco Salto Interviewee: Wolfgang Hofkirchner Started: 13/1/2010 To be finished: 5/2/2010
We are very pleased to begin this set of interviews with you as a prominent figure in Information Science and leader of the Unified Theory of Information Research Group (UTI).
One of the main concerns of your work is the SYSTEM approach to information. There is a standard system concept defined as an arbitrary set with relations. How is system thinking to be applied to information? Moreover, what are, in your view, the limitations and strengths of the standard concept of system?

In my opinion, the standard definition suffers from severe deficiencies. The most striking problem to me is that it does not account for the emergence of the systemic properties – and this despite the fact that system theory sets out to give a scientific understanding of the old saying "the whole is more than the sum of its parts". In doing so, the standard definition foregoes the necessity of defining the essence of a system, namely the effects of synergy at the level of the system. It is essential to distinguish two different levels in a system: the level of the parts and the level of the whole, or the micro- and the macrolevel, and to point out that on the macrolevel you find these emergent properties but not on the microlevel, and that both levels are coupled together by a certain dynamics that lets these properties emerge. Mario Bunge has made an attempt to extend the standard definition and include the processes that go on in a specific system. He calls the processes "mechanism". If we interpret this "mechanism" as the dynamics of self-organisation, we may head in the right direction. I want to stress that it is self-organising systems that are of interest here and that it is them I want to see applied to the understanding of information. The reason is that information, in my view, has to do with novelty, with something new, and that the new entity or event can be brought about only by a process of emergence. Thus we need systems that are capable of letting the new emerge – these are the self-organising systems!
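[Editorial illustration] A minimal sketch of the two levels just described (a toy construction, not Hofkirchner's own formalism): take the standard notion of a system as a set of elements with a relation, add a simple local dynamics, and observe a macrolevel property, here global consensus, that no single element possesses on the microlevel.

    # Toy sketch (illustrative only): a "system" in the standard sense is a set of
    # elements plus a relation; adding a dynamics lets a macrolevel property
    # (global consensus) appear that no single element has on the microlevel.
    import random

    random.seed(1)
    elements = list(range(8))
    # relation: each element is coupled to its two ring neighbours
    neighbours = {i: [(i - 1) % 8, (i + 1) % 8] for i in elements}
    state = {i: random.choice([0, 1]) for i in elements}  # micro-level states

    def consensus(s):
        """Macrolevel property: defined only for the whole, not for any single part."""
        return len(set(s.values())) == 1

    for step in range(50):
        if consensus(state):
            print(f"consensus reached at step {step}: {state}")
            break
        i = random.choice(elements)                      # local, micro-level dynamics:
        state[i] = state[random.choice(neighbours[i])]   # copy a neighbour's state
    else:
        print("no consensus within 50 steps:", state)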
The notion of SELFORGANIZING SYSTEM plays a crucial role in your written work. Would you explain the reach of this concept? Which conceptual presuppositions regarding identity or selfhood does it imply?

According to the sciences of complexity, self-organising systems abound in our universe. Only under certain, mostly artificial, constraining conditions which are not so often found in our universe do self-organising systems degenerate into mechanical systems that do not produce novelty. This means that all systems we find are, basically, self-organising systems – whether in physical, chemical, biological or social domains. Of course, self-organisation does evolve itself. So you find systems of different self-organising capacities. They form different evolutionary strata. The most primitive self-organising systems demonstrate only the dissipative type of self-organisation Prigogine dealt with. Biotic systems are dissipative systems that do their dissipation in the autopoietic way Maturana and Varela made popular. Social systems are autopoietic systems that do their autopoiesis in a "re-creative" way, as Erich Jantsch – by the way, an Austrian, and a friend of Peter Fleissner, in whose teams I had been working for a long time in my life – liked to talk of it. Well, these different self-organising systems populate our world. And they are the agents of change. They are, in a way, subjects; they are, in a way, selves, if to different degrees: from the most rudimentary dissipative system, which might be called a protoself, to the autopoietic system, which might be labeled a quasi-self, to the most sophisticated re-creative system, which is the fully-fledged self we are used to speaking of today (so far we know nothing of extraterrestrial life – but, I'm sure, if it is out there, it is another branch of self-organisation). What they all have in common is spontaneity, which means there is more than only one option to be realised by them. Let's have some input for such an agent. The output then is not predetermined, not fixed, but dependent on the agent itself; the output is dependent on the agent as much as it is dependent on the input. This is the precondition of human free will, which decides in a reflective manner between possible alternatives.
Often, "information" refers to relations or processes in nature or in artifacts which lack MEANING or indeed semantical content. Thermodynamics, electronics, genetics and many other disciplines offer examples of this use of "information". Also often, "information" typically refers to semantical content and involves MEANING or related intentional phenomena. Folk psychology, library science, common sense and many other activities offer examples of this use of "information". A Unified Theory of Information is aimed at a unified account of both uses of "information". How is this possible? How is this program to be carried out?

I think we have to approach this question in an evolutionary perspective. That is, we have to accept that there is a lineage of ramifications in the ontogenesis and evolution of real-world systems that makes all the different manifestations of information genetically linked to one another – linked by history, by being descendants having the same ancestors, so to speak. If information comes with meaning in humans or in other biotic systems, as biosemioticians like to insist, then there must be a prestage of meaning, a predecessor of meaning, in systems that are less complex. The task is to investigate what is in common on different stages of evolution and to investigate what is different on the basis of the commonality. In my attempt to do justice to that, information in the most general sense is a relation that is produced by a self-organising system. This agent relates some entity or event, in the most general sense, in its umwelt – an input – to some entity or event that it generates itself – to the so-called output. This output is not necessitated in its specificity, not necessitated in its entirety, by the input. The agent could produce a variety of outputs that relate to the same input. In this respect, there is novelty in the relationship, there is a leap in quality from the input to the output, there is emergence which is brought about by the agent. In doing so, in producing a unique output, in providing novelty, the agent assigns an agent-specific signature to the input. Using semiotic terms, we can say that the agent, which is the sign-maker, makes a sign which stands for that for which the agent makes the sign. So, I hope I could make this clear: meaning does not fall from heaven; it is already there, in the most general sense, in an agent in so far as the agent relates a certain output to an input. We just have to add that this relation undergoes a long way of evolution and that ever more complex manifestations of meaning, tied to, in my terminology, ever more complex manifestations of information, are developed.
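[Editorial illustration] A small sketch of this definition of information as an agent-produced relation (again a toy under assumptions of our own, not a formalism from the interview): the same input admits several outputs, which output is realised depends on the agent, and the resulting (input, output) pair carries the agent's signature.

    # Toy sketch (illustrative only): information as a relation produced by an
    # agent, pairing an input from its umwelt with an output the agent generates
    # itself.  The output is not fixed by the input alone: the same input admits
    # several outputs, and which one is realised depends on the agent.
    import random

    class Agent:
        def __init__(self, name, repertoire):
            self.name = name
            self.repertoire = repertoire  # agent-specific options per input
            self.relation = []            # the informational relation built so far

        def respond(self, stimulus):
            options = self.repertoire.get(stimulus, ["ignore"])
            output = random.choice(options)  # underdetermined by the input
            self.relation.append((stimulus, output, self.name))  # agent's "signature"
            return output

    a = Agent("amoeba", {"glucose gradient": ["move up-gradient", "stay"]})
    b = Agent("reader", {"glucose gradient": ["measure it", "write about it", "stay"]})
    for agent in (a, b):
        agent.respond("glucose gradient")
        print(agent.relation)  # same input, agent-specific informational relation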
How are content and meaning explained in nonintentional or unified terms? Or, alternatively, how successful or how needed is a semantical explanation of natural (nonsemantical) facts?

The unified terms are not nonintentional, because every agent, in so far as it organises itself, generates its own informational relation. Information is information for this agent. It is agent-specific, it is system-relative. But to state this, on a metalanguage level, is to state a fact. It is a real-world fact that agents, irrespective of how complex they are, generate their own informations.
UTI does not understand itself as REDUCTIONistic. How is a non-reductive integration of informational processes possible? It seems to me that in your view reduction turns out to be false by definition. Is this not an idiosyncratic understanding of REDUCTION?

Well, I hope it is not! We can talk of reduction in a methodological, that is, an epistemological or theoretical or semantic sense, but it always has an ontological correlate. If, in reducing something A to something B in thought, we want to say that A is nothing else than B, then we disregard that which makes A different from B. The ontological side of this coin is to state that all the real As are real Bs and only real Bs. This makes the statement false, if we want to believe there is something that turns a real B into a real A. You are right, if you want to tell me that merely a relationship between A and B should be pointed out that says all As are also Bs. This is just to point out the commonality. This is not a reduction but, e.g., a tracing back to origins. In my opinion it's the most difficult task in science to draw the distinction here and to see the common together with the unique, the universal with the particular, the general with the specific, in a union. Integration is only possible in a non-reductive way. Reduction is not integration. It's subsumption, subjugation, subjection. And this is the way I want to approach information. There is a common basis to all information manifestations in our universe, and there is an evolution in which one state of complexity gives rise to another, higher, state of complexity and in which ever more sophisticated information manifestations emerge.
The notion of EMERGENCE is very useful in many of your arguments, being explicitly or implicitly at work in your understanding of informational phenomena. Can you explain this notion in plain terms? Which, if any, are the differences between supervenience and emergence?

I have to admit that I never understood the arguments by which Kim tried to explain supervenience. To me it was an unsuccessful attempt to attack the notion of emergence. The phenomenon of emergence is very easy to understand. The relationship between the emergent and that from which it emerged is best treated in terms of a necessary but not sufficient condition. Let A be the emergent and B that from which A emerges. Then B is the precondition for A. B can give rise to A, but need not do so, and it might also be able to give rise to C. Given A, B is a must. There is no A possible without B. All of that means B is a necessary condition for A but not a sufficient condition. You can't derive A from B, but you can derive B from A. There is a leap in quality from B to A, because A contains some novel feature compared to B, but A still contains B. In evolution we see upward emergence, which lets complexity rise.
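[Editorial illustration] In the terms just used, the necessary-but-not-sufficient relation between the emergent A and its basis B can be written compactly:

    % B is necessary but not sufficient for the emergent A:
    \[
      A \;\Rightarrow\; B \qquad\text{but}\qquad B \;\not\Rightarrow\; A ,
    \]
    % hence B can be inferred from A ("no A without B"),
    % while A cannot be derived from B.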
It is probably hard not to find the word COMPLEX in your answers to the previous questions. How should we understand complexity in general? The relations between the mathematical theory of complexity and empirical theories do not seem obvious. Does UTI presuppose a unified theory of complexity?

Well, complexity is located between total order and total disorder. A rise in complexity takes place if something new, a novel feature, is added to the already existing state of complexity. And seen from an evolutionary angle, complexification does not preclude but include simplification. The reason for that is that the emergent may exert an influence on that from which it emerged to such a degree that it reworks and – Luciano Floridi has coined a good term which I would like to use in this context – reontologises the historical precondition and makes it an actual precondition. Having done so, a new system has emerged which always takes care of the basis on which it is dependent as long as it exists. This new system is more complex than that from which it emerged but, at the same time, it is more simple, since the old complexity has been reontologised. So different levels of complexity arise. Each higher level, as it includes the lower levels, is also a simplification. This becomes manifest when looking at, e.g., the language of different scientific domains. The mind language is more complex than the brain language, but it is more simple too. It is more simple to deal with psychological phenomena in terms of psychology than in terms of neurophysiology. And that means you need not understand all neurophysiological phenomena in order to successfully deal with psychic ones. The higher level is more simple as it integrates the lower one.
As you point out in many texts we may quote here, you see a relevant divergence between MECHANISM and FREEDOM, which is further of import for the understanding of COMPUTATION, in the following sense: you state that "self-organising systems have the freedom to choose between several alternatives, compared with mechanical systems where there is only one possibility". Since there are mechanical devices functioning on a random basis, the difference between mechanical and free choice must lie in two possible meanings of "choice". Should we think that self-organizing systems somehow ponder? Are they little minds? Is that the reason why they may be called free?

OK, I believe that if we look closer we will find that the supposed inclusion of randomness in artificial deterministic systems turns out to be deterministic as well – determinism in the sense of allo- or heterodetermination. Self-organisation presupposes a degree of freedom, and it is, in whatever way, self-determined. It is up to the agent how it produces the output. And the agent has more than one possibility to do this. If the system that has to produce the output has only one possibility, namely, can give only one certain output to one certain input, then it works as a mechanical system and not as an agent. Does a primitive agent ponder? I am not fond of saying that the universe is perfused with a soul. What I do think is that if we human agents ponder, there must be some preconditions from which our pondering capability emerged, and we have to be clear about the fact that we share this precondition with lower-level agents, though there is a clear-cut difference which makes us distinct from them.
If we consider that nowadays our abuse of natural resources, the inequality among people, and the difficulties of the majority in collecting relevant information to solve everyday problems are greater than ever, we might then ask what the "Global Sustainable Information Society" you talk about is. Is it a goal, the end towards which social systems seem to evolve? Or is it more like a regulative idea, such as Kant's perpetual peace, a utopia?
It's a concrete utopia in the sense of the critical theorist Ernst Bloch. Though our societies are self-organising systems, they do not develop by themselves in the direction which would be desirable and would meet the long-term interests of the systems' self-maintenance. This is a feature inherent in every self-organising system. Look at Ward's recent book on the self-destructiveness of life. Look at the history of evolution, in which more than 99 per cent of all biotic systems that have existed have perished. Thus our systems of human civilisation may also perish. But since we are parts of these systems that – in contradistinction to self-organising systems on lower levels – have the possibility to influence the whole by pondering with reflection, rationality, and reason, and since we can identify possible futures for humanity, our systems are not yet doomed to perish. Anyway, it is up to the parts of the systems to make the right choices. We are approaching what I call the "Great Bifurcation". There is an alternative between at least two possible paths. One is the path of a breakdown and one is the path of a breakthrough to what I call a "Global Sustainable Information Society". The latter provides a framework for the persistence of humanity, that is, for the capability of humanity to fight against anthropogenic, sociogenic, self-made causes of extermination. In order to enable societies to get onto the sustainable path, ICTs have to be shaped according to their potential to foster the generation of human and humane information. Unfortunately, due to the informational capitalism we live in, neoliberal policies have furthered the preponderance of meaningless gadgets over means for survival and for the good society. However, this is not the end of the story. And since humans are self-organising systems, they have the freedom to change priorities. Thus hope is still there.