From PhilPapers forum General Philosophy of Science:

2009-05-25
laws of nature
Reply to Abuzaid Samir
Your query raises a host of issues; I won't be able to delve into them at all deeply, but will just offer a hint or two.

It is my impression that a new consensus is emerging regarding the "laws of nature". The old (Humean/positivist) view was that natural laws are generalizations of our experience, and that successful prediction of outcomes serves to justify their truth. Today it seems we have moved away from this in two ways. One is to suggest that, rather than reifying laws (representing them as ontologically real), we take "laws" to be simply properties of matter.

To expand on this a bit: if (because of the exigencies of daily life as well as Enlightenment ideology) we define entities in terms of their persistent properties, which we then proceed rather parochially to categorize as "essential", this reduces things to their intrinsic properties and provides the basis for the conceptual categories we need in order to think and communicate: epistemology here determines ontology. Since change used to be thought of as accidental and as arising from external influences, to discover the essential truth of things we must insulate them from outside perturbations. This is the positivist laboratory model, which defined reality in terms of behaviors arising from intrinsic properties. However, in principle no system is really isolated (to use the thermodynamic term), and closure is now taken to be only a hypothetical limiting case. If in fact all systems are open, then the derivation of laws in the laboratory can only yield a one-sided view, an artifact of our framing of reality in a certain way, not a representation of the nature of things.

One implication of this is that we have come to represent singular-case causation as fundamental rather than trying to explain things by appealing to general laws. There is a considerable literature on singular-case causation, and it has profound implications for how we represent our world. I would add that "cause" and "effect", a sufficient and perhaps necessary relation of events, properties, or states of affairs, is itself an artifact of our framing of the world. This gets into the complicated subject of time, and here I only suggest that the conventional notion of a causal event leading subsequently to an effect event (and even the notion of "event" itself) is today contentious.

Another issue is that we tend to represent reality as consisting of a hierarchy of emergent levels (rather than nested "systems"), in which each level represents the constraint of a structure upon one or more general, basic, or universal levels (the levels from which the less probable level emerged). This raises problems, however, particularly in regard to how we define the relation of these levels. The issue is usually discussed in terms of the mind-body problem, but it does not reduce to that alone. The relation of levels is usually described in terms of supervenience, which says, to put it very simply, that an emergent level depends on its base level. However, this raises the question: how can we define or explain the specific properties of an emergent level (such as mind) in terms of its base level (such as neurons)?

People have justifiably criticized positivist reductionism, but there are perhaps alternative kinds of reductionist explanation. Following hints in Jaegwon Kim and elsewhere, it seems that we can indeed reduce an emergent level to its base if we somehow redefine the base in terms other than simply its observable or empirical properties. However, I don't believe this line of investigation has actually been developed, and I'd be glad to hear of any citations to the contrary.

My own feeling concerning this matter, if you will allow some adventurous speculation, is that we need to redefine levels not as systems having emergent properties but as processes, where a "process" is defined as a structural constraint on real possibilities (inherited from the more universal levels from which the level of concern emerged) that produces a probability distribution of possible actualizations. That is, a level is a conceptual unity of hypostatized intrinsic and extrinsic properties (like a pointless topology), where intrinsic structure constrains possibilities to yield a real (in the physical, not the philosophical, sense) probability distribution of possible outcomes (actualizations or space-time localizations). I assume here that intrinsic and extrinsic properties are an artificial deconstruction of process due to the limitations of our minds. If this ontology were adopted, then a reductionist explanation of emergent levels seems to become possible, because we represent the emergent level as an actualization of possibilities arising from a base-level structural constraint on possibilities that transcend that base level. However, this makes predictions probabilistic rather than unequivocal, which is fine in the "special" sciences but more troublesome in physical science, where the standard deviation offers a work-around.

I have questions about your "we are still within the analytic tradition", about whether the relevance of chaos and complexity theory is really well-founded, and about "rules of randomness".

Haines Brown
