

about this physics nerd fight [More:]

Does anyone know the original post the EB refers to, one in which (he claims) Rothko asserts a deterministic world view?

I had some information-theoretic type points on the matter discussed that I wanted to broach, as well as some points on the nature of induction in math and physics. I also wanted to get my dander good and up about some distinctions (I think drawn by EB?) that seemed to imply that experimental physicists have a primitive p.o.v. on the more esoteric concepts underlying certain principles -- concepts that whoever it was suggested experimentalists treat as just measuring sticks or brute-force implements -- but that thread just kind of broke.

I'm a physicist, more or less, but these science-type threads at mefi depress the hell out of me. It's about 75% of people being glib and thinking they're clever. I don't like the perspective that says you can only have a serious discussion of physics with those who have a technical background in the field; the elitism is very off-putting. But these things seem to veer off into wankerdom pretty often.

Anyway, there are about twenty people in there dicking around and about four people who I'd like to make sure I get complete arguments from. It's obvious there's another thread somewhere around that I didn't see. Does anyone know what it is?

It might have to do with this recent question, but I'm not sure, because I'm too unawake to read it.

posted by
interrobang
04 December | 11:07

The only evidence I have is that Rothko posted the question, and EB participated in the thread. And, it's about physics.

posted by
interrobang
04 December | 11:09

anyway the idea that the information processing limit provides a lower limit on the classicality of systems seems fubar'd. it is possible to force classical measurement still on microscopic scales in (relatively) simple condensed matter systems that surely don't require 2^120 bits to represent them. some of what was described in the linked article about classical v. quantum measurement also seemed a little dodgy, and i hate any argument that proceeds from citing schro's kitty. the whole point of that paradox (much like the twins paradox in relativity) seems to be missed -- that there already is a macroscopic system that has interacted, ie measured the decay -- in the form of said kitty.

my first undergrad quantum text was d. griffiths' popular book with the live cat on the front and you can guess what on the back cover.

hell i'm pretty sure we could actually get the same effect in a BEC system, with far fewer degrees of freedom. i haven't calculated it, but figure ~10^5 particles, 3N degrees of freedom, and what, 10 or 20 recoil energy trap depth...

we could arguably say that the position information isn't important since we can use conjugate variables of energy and time, and since we're basically dealing with a harmonic oscillator, we can just count integer numbers of states...

i think we can get this well, WELL below the level of complexity this guy asserts is necessary for a classical measurement/quantum waveform collapse/whatever.
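
as a rough check on the counting above, here's a back-of-envelope sketch. every number in it (particle count, levels per degree of freedom) is the thread's own ballpark figure or an outright assumption, not a calculation from the cited article:

```python
import math

# ballpark numbers from the comment above (assumptions, not measured values)
n_particles = 10 ** 5          # atoms in a modest BEC
dof = 3 * n_particles          # 3N degrees of freedom
levels_per_dof = 20            # ~10-20 trap levels per degree of freedom

# bits needed to label which level every degree of freedom occupies
bits_for_bec = dof * math.log2(levels_per_dof)

# the information-processing limit quoted in the thread
universe_bits = 2.0 ** 120

print(f"{bits_for_bec:.3g}")                  # ~1.3e+06 bits
print(f"{universe_bits / bits_for_bec:.3g}")  # the BEC sits ~30 orders of
                                              # magnitude below the limit
```

so if the claim really is that classical measurement requires systems near the 2^120-bit scale, a BEC-sized counterexample has enormous headroom.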

on preview, thanks jigga.

yeah we already crossed the rubicon a long time ago. about this:

"

(The only exceptions are those required as a matter of deductive logic.)

"

i'm REALLY glad that EB added this in to his second (substantial; there was one other tiny post w/ the MeTa callout that I as an Atlantic M. subscriber wholeheartedly approve of) post. Otherwise he was drastically oversimplifying when he asserted that non-existence is very, very hard to prove. He didn't say impossible, which is good, but I think that this is an important enough point to make explicit, if only because so many people hear about the difficulties presented by the framework of science and then run around misapplying the soundbites they've picked up all willy-nilly.

"

(The only exceptions are those required as a matter of deductive logic.)

"

i'm REALLY glad that EB added this in to his second (substantial; there was one other tiny post w/ the MeTa callout that I as an Atlantic M. subscriber wholeheartedly approve of) post. Otherwise he was drastically oversimplifying when he asserted that non-existence is very, very hard to prove. He didn't say impossible, which is good, but I think that this is an important enough point to make explicit, if only because so many people hear about the difficulties presented by the framework of science and then run around misapplying the soundbites they've picked up all willy-nilly.

"

(The only exceptions are those required as a matter of deductive logic.)

"

i'm REALLY glad that EB added this in to his second (substantial; there was one other tiny post w/ the MeTa callout that I as an Atlantic M. subscriber wholeheartedly approve of) post. Otherwise he was drastically oversimplifying when he asserted that non-existence is very, very hard to prove. He didn't say impossible, which is good, but I think that this is an important enough point to make explicit, if only because so many people hear about the difficulties presented by the framework of science and then run around misapplying the soundbites they've picked up all willy-nilly.

(in the thread Gyan linked) Rothko:

"

So do you agree the universe is causal, that one event follows as a consequence of another?

"

me: no. but i'm aware that i have a very extreme point of view on this, which holds that (non-mathematical) induction is completely fallacious and bullshit and usually works but doesn't mean anything except so far we've had a long string of coincidences. mathematical induction, on the other hand, i hold to be essentially uninteresting because i do not have a platonist type view of the universe -- i don't think that the inverse square law actually has anything to do with me falling back to earth when i jump.

the point being that so called natural law has nothing to do with what goes on around us.

i submit that this is a very strange perspective and not one that is very convincing. but it is *incredibly* liberating to refute causality.

it's also fun considering everyone in my professional circle grew up on asimov and star trek.

"

So do you agree the universe is causal, that one event follows as a consequence of another?

"

me: no. but i'm aware that i have a very extreme point of view on this, which holds that (non-mathematical) induction is completely fallacious and bullshit and usually works but doesn't mean anything except so far we've had a long string of coincidences. mathematical induction, on the other hand, i hold to be essentially uninteresting because i do not have a platonist type view of the universe -- i don't think that the inverse square law actually has anything to do with me falling back to earth when i jump.

the point being that so called natural law has nothing to do with what goes on around us.

i submit that this is a very strange perspective and not one that is very convincing. but it is

it's also fun considering everyone in my professional circle grew up on asimov and star trek.

oh man i have to say this and i'm sorry for bringing it here but it pissed me off too much. speaking as someone who does quantum information stuff, i take HUGE issue with the crap article monju cites as an argument in favour of qm being an essentially deterministic theory. this is completely crap, because of about a million reasons, but mostly because it confuses physical reality with its mathematical model, WHICH HAS SET LIMITS OF APPLICABILITY. jesus it ignores those limits so much it's just stupid.

and i don't like the assertion that those of us in the physics community who hold such views are naive or sophomoric. it might be that we've actually paid attention to what goes on, instead of reading a bunch of crap intro to a field of mathematical physics and then deciding to run with it using a bunch of concepts from philosophy.

YES, the wave function describing a pure system evolves deterministically. BUT inherent uncertainty in the energy-time conjugate expression of the Heisenberg uncertainty principle limits the applicability of the time evolution of a wave function, in either the Schrodinger or Heisenberg equation, because all of the time what we will actually be dealing with is not the evolution of a pure state but rather of a density matrix operator. And even then we will have EVER only a statistical approximation to the initial condition of that density operator.
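
the pure-state vs density-operator point has a concrete counterpart: unitary evolution is deterministic for both, but it can never improve our statistical knowledge of the initial condition, because the purity Tr(rho^2) is invariant. a numpy sketch (the Hamiltonian and time are arbitrary illustrative values):

```python
import numpy as np

# illustrative two-level system; H and t are arbitrary example values
H = np.array([[1.0, 0.3], [0.3, -1.0]])
t = 0.7

# unitary time-evolution operator U = exp(-i H t), via eigendecomposition
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

def purity(rho):
    """Tr(rho^2): 1 for a pure state, < 1 for a mixed one."""
    return np.real(np.trace(rho @ rho))

psi = np.array([1.0, 0.0])
rho_pure = np.outer(psi, psi.conj())  # pure-state density operator
rho_mixed = 0.5 * np.eye(2)           # maximally mixed density operator

# unitary evolution rho -> U rho U† is deterministic for both...
rho_pure_t = U @ rho_pure @ U.conj().T
rho_mixed_t = U @ rho_mixed @ U.conj().T

# ...but it cannot turn a mixed state pure: purity is invariant under U
print(purity(rho_pure), "->", purity(rho_pure_t))    # ~1.0 -> ~1.0
print(purity(rho_mixed), "->", purity(rho_mixed_t))  # ~0.5 -> ~0.5
```

deterministic propagation of an uncertain initial density operator just propagates the uncertainty.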

goddamnit. whoever wrote that at plato.stanford annoys the hell out of me.

fucktards. how can so many phil grad student/professor types be such utter fucktards.

seriously, the nature of these arguments always seems to be that one guy who thinks he has a better understanding of leibnizian monadology says 'oh my friend you are mistaken' and then someone else goes and gets heraclitus off the shelf and says 'au contraire my friend, it is YOU who are mistaken!' without actually caring about the accuracy of what clubs they try to use to bash each others' heads in.

some of us like those clubs. and don't like seeing them abused like that.

i hereby call bullshit on everything plato.stanford.edu has to say on the basis of this sentence,

"But it is becoming increasingly difficult to find any who, when pressed, will defend this interpretation. It seems clear that quantum mechanics is fundamentally about atoms and electrons, quarks and strings, not those particular macroscopic regularities associated with what we call measurements of the properties of these things,"

which is absurd.

1. quantum mechanics is not about quarks and strings. quarks are part of QFT, which is a quantized field theory. it is not equivalent to quantum mechanics and it is a poor but frequent accident to suggest that is the case. fuck, strings aren't even QFT. i was discussing this last year with a grad student at harvard (i'm at texas, because i am lazy and also dumb, but she is neither) who thought it was bizarre that i said i'd be much more freaked out if qm turned out to be wrong than if qft did -- but she agreed with me when i said that the reason for that is that if qm turns out to be wrong there's no apparent reason that my digital watch works. if qft is wrong, things aren't quite so dire.

2. qm is not about atoms and electrons, either. honestly, it's about a non-commutative algebra that seems useful. that's about it. it sometimes seems to be useful for lots of things, but to say it's about atoms and electrons... this smacks so damn much of some grad student trying to separate the tenor of a metaphor from the vehicle from the ground. it is wrong. stop trying to do the cave parable with qm.
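
the "non-commutative algebra" remark can be made concrete with truncated harmonic-oscillator matrices. this is only an illustrative sketch (the truncation dimension N is arbitrary):

```python
import numpy as np

N = 8  # truncation dimension, chosen arbitrarily for the example
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator, number basis
adag = a.conj().T

# position and momentum in natural units (hbar = m = omega = 1)
x = (a + adag) / np.sqrt(2)
p = 1j * (adag - a) / np.sqrt(2)

# the canonical commutator [x, p] = i*hbar, up to truncation artifacts
comm = x @ p - p @ x
print(np.round(comm.diagonal().imag, 6))
# every diagonal entry is 1 except the last, which is 1 - N: the price of
# squeezing an infinite-dimensional algebra into finite matrices
```

nothing in those matrices mentions atoms or electrons; the algebra comes first, and the physical interpretation gets attached afterward.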

"But it is becoming increasingly difficult to find any who, when pressed, will defend this interpretation. It seems clear that quantum mechanics is fundamentally about atoms and electrons, quarks and strings, not those particular macroscopic regularities associated with what we call measurements of the properties of these things,"

which is absurd.

1. quantum mechanics is not about quarks and strings. quarks are part of QFT which is a quantized theory. it is not equivalent to quantum mechanics and it is a poor but frequent accident to suggest that is the case. fuck, strings aren't even QFT. i was discussing this last year with a grad student at harvard (i'm at texas, because i am lazy and also dumb, but she is neither) who thought it was bizarre that i said i'd be much more freaked out if qm turned out the be wrong than if qft did -- but she agreed with me when i said that the reason for that is that if qm turns out to be wrong there's no apparent reason that my digital watch works. if qft is wrong, things aren't quite so dire.

2. qm is not about atoms and electrons, either. honestly, it's about a non-commutative algebra that seems useful. that's about it. it sometimes seems to be useful for lots of things, but to say it's about atoms and electrons... this smacks so damn much of some grad student trying to separate the tenor of a metaphor from the vehicle from the ground. it is wrong. stop trying to do the cave parable with qm.

sam, you have to understand the nature of science discussions on MeFi. It's a bunch of mostly uninformed folks mouthing off, usually. Every tenth post will be from someone that knows enough to be dangerous, but is not an expert (like me), and every 20th or 50th post is by an actual expert.

*shrug* That's the nature of human discourse. Unless you are at an academic physics symposium, I doubt it would be different anywhere else. Sad and annoying, but life is sad and annoying a lot of the time.

teece -- that's true about so many areas, but for some reason, everyone feels like they know everything about physics. i don't know why, it just seems to attract wankery like nothing else. this might be an artifact of my perspective; biologists might say the same thing. so might experts on jane austen.

q- not since i crashed his firebird. but we used to be tight.

I'm not a professional physicist, but I'm comfortable with some of the mathematical language and the conclusions they imply. I tend to hold to a pretty strict view of Occam's razor. The simplest and accepted models of the universe are causal (so far), the rules we've discovered seem invariant across space (so far), and there's no empirical evidence to suggest it behaves otherwise (so far). Causality = determinism until there's evidence otherwise, IMO. /shrug

posted by
AlexReynolds
04 December | 15:54

That was me, you're right. I was trying to allow that

Sam, about the information processing limit = complexity argument...could you say more?

I'll re-read the argument, but I don't think the author is claiming that classical measurement is impossible with such systems in an absolute sense. I think before we argue this, we need a rigorous definition of "measurement". To me, the argument is that this limit forms the barrier to what systems can be described *completely*. Well, by definition this is true, right? This is only interesting if we can quantify a system's lower computational limit for full description. The writer of that bit in the linked page doesn't elucidate this, but intuitively the idea is appealing as the boundary between the micro and the macro.

For me, the idea is appealing not because it supposedly could set that boundary for the purposes of working out these problems with QM, but just as a generalized rigorous definition of "complexity".

As to Schroedinger's Cat, you write:

...the whole point of that paradox (much like the twins paradox in relativity) seems to be missed -- that there already is a macroscopic system that has interacted, ie measured the decay -- in the form of said kitty.

Well, I think it depends upon who is using the thought experiment for what purposes. :) Isn't it the case that this argument of Schroedinger's is like the EPR *reductio ad absurdum* argument in that it was intended to "prove" the incompleteness of QM because the result is an absurdity? But, like the EPR, there is an answer? EPR's answer is to show that what is actually happening isn't what the argument seems to assume—that is, that an actual transmission of information is possible. The answer is that it's not.

In the case of SC, I think the answer is, like you say, that the uncertainty does reach well into the macrocosm even if that seems absurd. But Schroedinger himself meant it as a reductio.

I wrote this in an email to Sam, but I think I'm going to post this here and also on MeFi, because it makes the debate clear:

My expertise is in the philosophy and history of science, which is, in fact, the context for the argument about "determinism", really. As the physicist ozmatli says on MetaFilter, "Also, I would like to state that determinism means different things to different people," which is obviously true; but the thing is that in the context of the larger philosophical debate in the history of western science "determinism" has a specific meaning. That meaning specifically is that "classical mechanics completely describes the universe and, as such, every event in the universe, large or small, is completely determined by the universe's history back to its origin". This is clearly not true in the context of QM.

kmellis, i may have misread the article, but i think the substance the author in question was getting at is the point at which a measurement becomes classical. and i agree that when we say measurement, we need to be more precise; from a q.i. standpoint, i think it would be probably correct to say 'interaction with a classical, macroscopic system' rather than a measurement. it probably needs more of a restriction on it than that -- non-adiabaticity, etc -- but i think this gets the point across more or less.

so, from the brief look it got this morning, i think the author was asserting that the computational limit on the universe restricts description of events as either quantum or classical, ie as non-waveform-collapsing interactions (what goes on whilst a wave description evolves) and what goes on when that wave collapses. i think his implication is that for systems that require more than a certain level of precision to describe in an accurate way, the interaction will be non-adiabatic/will collapse the waveform/etc.

but i was proposing that i can put together a system that will interact 'classically' with an arranged 'quantum system' that does not break that complexity requirement. i can put together a BEC that does not require more information to describe than the limit set by the whatever-it's-called limit. this BEC is itself usually described as a quantum system, or semi-classically. but we could almost certainly describe a moderately large BEC with equilibrium statistical mechanics, and observe a 'classical' type of interaction with a prepared quantum system.

in fact, though, thinking about it, i have to reconsider this. for example, i could prepare a simple harmonic oscillator system of a particle in a potential well, then adiabatically move the zero of that potential until i bang that particle into my BEC and scatter it. bam, scatter. probably classical. but thinking about this, i decided i might still get a quantization in the distribution of the rayleigh/brillouin peaks of the spectrum of such a set of scattered particles. i can't decide.

the qi side of me likes the kind of idea this guy's going for, but if he is in fact asserting what i believe he is, i feel like it's arbitrary. the amount of matter in the universe is probably not fundamental (at least to my suspicions. this may be COMPLETELY incorrect) but rather secondary. coincidental. just sort of happened, if you will. otherwise, if the amount of matter that exists in the universe were primitive, i imagine we would be able to sit down and deduce some known properties of the universe and from those work out how much matter had to be in it. instead, investigations into that sort of thing tend to go the other way -- people say, look, these clusters move like this, and this over here moves like that, so there's this much matter, which means we're all fucked because the universe will collapse -- or whatever hypothesis those data support.

now, this could be because we're doing things backwards. but i wouldn't have thought that there is anything fundamental relating the amount of mass in the universe to the fundamental constants that operate within that universe. adding more mass doesn't seem to change the rules, just the outcome, so i don't think the rules likely determine the amount of mass, if you will. so it seems to me a rather arbitrary measuring stick -- to say, look, we've only got this much stuff so the universe is going to make a certain set of interactions wave-collapsing while other things will not disturb quantum information.

it seems too arbitrary.

if he said, look, pi starts off with three, so obviously any system with four or more degrees of freedom must be one which will lose quantum information, i'd probably be much more prone to believe him, even though such an assertion would be much more easily demonstrated to be false.

anyway some nasty stuff was going on in that thread on the blue about causality and determinism that's just plain bushwa from a physics p.o.v. -- like the argument that heat death proves determinism in the universe. HORSESHIT. my cup of coffee will be room temperature in ten hours. it had a higher starting temperature. but the path that it goes through in configuration space is completely non-deterministic. if there were a word to describe how non-deterministic it is, that word would be ape-shit.

Keith M. Ellis, please troll somewhere else. Don't ruin Metachat for me as well. Thanks.

posted by
AlexReynolds
04 December | 17:46

Calculating individual particle motion is an intractable problem for human beings (so far), but that doesn't mean we can't know the final state of those particles. Entropy in the system increases as time proceeds. Entropy increase is accepted to be irreversible. This establishes causality, or temporal directionality, which is determinism by most standards.

posted by
AlexReynolds
04 December | 17:52

Which is to say, show me evidence of a cup of coffee that gets hotter without energy input and I'll reconsider my "opinion".

posted by
AlexReynolds
04 December | 17:54

Hell, just show me a cup of coffee that stays hot forever.

posted by
AlexReynolds
04 December | 18:10

Yeah, it seems too arbitrary. My attraction to it is in its quality, not its specifics. I like that it attempts an objective definition of "complexity" derived from some (theoretical) exact description of the nature of the universe.

Rothko has resolutely stated in that thread and elsewhere that the universe is "deterministic".

I think that your coffee example is, strictly speaking, not an example of indeterminism in the strong sense in which we are using the term. (BTW, I'm sure you realize that you're alluding to chaos theory, in which Rothko, being a mathematician, is likely quite competent.) Chaos theory, as it applies to the physical world, and complexity theory don't require an indeterministic universe. They are related tools for describing phenomena that—whether deterministic or indeterministic—far exceed our ability to describe reductively.

And you don't need what we're calling a sufficient level of "complexity" to exceed our ability to describe something reductively: pretty much everything in the universe is so. The classic example is the three-body (or more) problem in the context of Newtonian gravitation. (Not even Einsteinian physics!) It has no general closed-form solution.
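
Since no general closed-form solution exists, in practice one falls back on numerical integration. A minimal sketch with made-up masses and initial conditions (G = 1, units arbitrary):

```python
import numpy as np

# all masses, positions, and velocities here are made-up illustrative values
m = np.array([1.0, 1.0, 1.0])
pos = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])
vel = np.array([[0.0, -0.3], [0.0, 0.3], [0.0, 0.0]])

def accelerations(pos):
    """Pairwise Newtonian gravity with G = 1."""
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += m[j] * r / np.linalg.norm(r) ** 3
    return acc

# velocity-Verlet (leapfrog): no closed-form orbit exists, so we step in time
dt, steps = 1e-3, 2000
acc = accelerations(pos)
for _ in range(steps):
    vel = vel + 0.5 * dt * acc
    pos = pos + dt * vel
    acc = accelerations(pos)
    vel = vel + 0.5 * dt * acc

# even though the trajectory has no analytic form, total momentum is conserved
total_p = (m[:, None] * vel).sum(axis=0)
print(total_p)  # stays numerically close to its initial value, [0, 0]
```

Note the distinction this illustrates: the system is perfectly deterministic step by step, yet we can only ever approximate it numerically.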

In each of these examples, taken on their own terms (that is, setting aside a particle physics description of these systems), one could say that they are "indeterminate" in the way that Rothko is using, and thinks we're using, the term. That is to say, we cannot in this sense determine something absolutely, perfectly. Pretty much everything in the universe exceeds our grasp in this way. That doesn't mean that, for example, given an infinite computational budget we couldn't achieve a deterministic—absolute, perfect—description. This sort of hypothetically completely comprehensible universe is what Rothko is asserting. That we can't describe it in practice doesn't mean that it in principle can't be done.

But QM, as it stands now, says that in principle it can't be done. This is the sense in which the universe is understood to be non-deterministic. And, to be clear, what we're talking about is an in-principle indeterminism, not merely a practical one.

Good God, you're like a three-year-old child. Try and learn to deal with the facts that a) there are things you are ignorant or wrong about and people will correct you; and b) when they do so it doesn't mean that they are behaving in some objectively offensive manner.

"

Hell, just show me a cup of coffee that stays hot forever.

"

McDonalds used to sell one but then they got sued.

posted by
dodgygeezer
04 December | 18:16

This is like a bar fight in an old western where the fighters grapple and crash through a plate glass window on to the street.

more like a dubbed spaghetti western, where the characters seem to be using english, but they're very hard to understand

/barely grasps qm, let alone qft

posted by
Popular Ethics
04 December | 20:43

..and you're not quite sure why the curly haired dude and the guy in the black hat keep fighting.

posted by
Popular Ethics
04 December | 20:45

"

Which is to say, show me evidence of a cup of coffee that gets hotter without energy input and I'll reconsider my "opinion".

"

okay. let us use einstein fluctuation theory and assume that at any time the system is close to average equilibrium; the upshot of this assumption is that, no, you will not see the coffee cup heat up drastically by a million degrees. we're going to work in a taylor-expansion type of way.

to make the problem tractable, we divide the coffee up into a number of cells, each of which is sufficiently small that we can define a temperature which is locally close to the temperature at any point within the cell -- in other words, another condition to satisfy the convergence of the series. this is not necessary physically, but helps to make for an easy mathematical description of the process.

we realize from the second law of thermodynamics that only fluctuations that DECREASE the entropy of the system will be permissible. this seems counter-intuitive at first, perhaps, because we all know that the entropy of the universe tends to increase in time. but, if the entropy of the cell were to increase due to fluctuations, those fluctuations would represent the attainment of a new equilibrium position, which would mean our previous assumption, that the cells were locally in equilibrium, was invalid.

from this point, we are able to reduce the number of free thermodynamic variables to two, specifically the temperature and density, by relying on the well-known Euler relations that relate partial derivatives of the exact differentials of the fundamental equation of thermodynamics.

we obtain a probability proportional to Exp[-(a*deltaT^2 + b*deltaRho^2)]. The expectation value, then, of a fluctuation in temperature is zero, and the expectation value of cross terms depending linearly on temperature and density is zero. likewise, the expectation value of a density fluctuation is zero. the expectation values of higher moments of those fluctuations are NOT zero, however, and we can conclude NOTHING about the sign of the fluctuations of either. the coffee can fluctuate up in temperature and, in a counter-intuitive way, decrease its entropy. it's fucking weird.

so the coffee can get hotter.

now an infinitely long time to cool cup of coffee i can't show you.
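if you want to see that shake out numerically, here's a minimal sketch (the coefficient a below is an illustrative stand-in; the real one involves the cell's heat capacity and boltzmann's constant, and is enormous for anything macroscopic):

```python
import random
import statistics

# p(dT) ~ exp(-a * dT**2) is a zero-mean gaussian with variance 1/(2a).
# a = 50 is purely illustrative; a physical value would be astronomically
# larger, making the fluctuations correspondingly tinier.
a = 50.0
sigma = (1.0 / (2.0 * a)) ** 0.5

random.seed(0)
samples = [random.gauss(0.0, sigma) for _ in range(100_000)]

mean_dT = statistics.fmean(samples)                       # ~0: no net drift
second_moment = statistics.fmean(dt * dt for dt in samples)  # ~1/(2a) > 0
frac_hotter = sum(dt > 0 for dt in samples) / len(samples)   # ~0.5

print(mean_dT, second_moment, frac_hotter)
```

the mean fluctuation vanishes but the second moment doesn't, which is the whole point above: half the time, the cell is (very slightly) hotter.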

i looked over on the other thread on the blue, and several people said that chaotic behaviour does not imply non-determinism. this is true for smooth, differentiable manifolds. unfortunately, for hardly any real system do we have a single map covering the entire configuration space -- to simplify this greatly, since i'm not explaining it well, if we want to describe a pendulum's position using an angle, we end up fucking ourselves over because we can't write a coordinate that doesn't do some weird 2Pi->0, or -Pi->Pi thing. it's not continuous everywhere, or rather, the system is continuous but the manifolds which describe it are not globally differentiable. so, we're dealing with chunky spaces. this is problem #1. it has implications for the geometry of the space which are fuzzy to me at the moment, and i have to think about how it will affect Lie derivatives of dynamical variables before i consider this line much farther.
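a toy numerical illustration of that coordinate-wrap headache (nothing physical here, just the bookkeeping): if you store the angle wrapped into [-Pi, Pi), a naive difference across the seam reports a huge spurious jump, and you have to patch it by hand:

```python
import math

def wrap(theta):
    """Map an angle into [-pi, pi)."""
    return (theta + math.pi) % (2.0 * math.pi) - math.pi

# a rotor sweeping smoothly through the -pi/+pi seam
true_angles = [3.10 + 0.02 * k for k in range(5)]  # crosses pi near k = 2
wrapped = [wrap(t) for t in true_angles]

naive_steps = [wrapped[k + 1] - wrapped[k] for k in range(4)]
# re-wrapping each *difference* recovers the true small step
fixed_steps = [wrap(d) for d in naive_steps]

print(naive_steps)  # one entry is ~ -2*pi + 0.02, a spurious jump
print(fixed_steps)  # every entry is ~ 0.02, the actual motion
```

the wrap-the-difference trick works here because we can cheat with a single well-understood chart; the point in the post is that for bigger systems there's no such easy patch.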

the second reason that the coffee cup is a non-deterministic system (and not just in the computational sense) is because each collisional event is non-deterministic due to quantum effects. i realized sometime after mentioning this example (which is not at the heart of the matter, but really was just an easy way of explaining why the 'proof' that uniform temperature/heat death==deterministic universe was incorrect) that this could easily be mistaken for deterministic chaotic behaviour.

i assure you, this is not the case. IF heat flow were as simple as Fourier's law, which says basically that heat will flow through any surface proportionally to the temperature difference across that boundary, this would (i think, probably) be a deterministic chaotic system.

thermal processes, however, are inelastic processes. each inelastic collision is a quantum scattering problem. since it is non-elastic, we are by definition talking about systems moving from low energy to higher energy states of internal configuration (we could also have that energy going into the internal configuration of, say, a lattice that defines the geometry of a crystal. but such a phonon is still energy going from a strictly translational kinetic form of energy into a potential energy -- the energy in the bonds that are 'oscillating.' it doesn't do too well to push this example with physical intuition much farther, i think, because we're not really working with tinker toys, and there aren't little sticks connecting the lattice sites together). thus, we introduce our delta E; we all know that along with a certain delta t (here, time, not temperature) we have an uncertainty principle at work. this is derivable from the fact that the observables corresponding to E and whatever we're using for t (frequency is probably the more fundamental quantity) do not commute.

If the coffee cup were a bunch of idealized billiard balls, we would have a deterministic chaotic system. as it is, we have quantum chaos. i suppose the weakness of this example is that the effect is lost in the inability to precisely measure the system and then determine the difference between the classical chaotic and the quantum chaotic elements.

I hope that this explanation of why this is in fact a non-deterministic system is satisfactory.

as for kmellis's question regarding the heisenberg relations. humph.

i think i use these things a lot mechanistically. i work with lasers, and there are fundamental limits on the linewidths of the transitions in atoms that produce the photons that are sort of our stock in trade. this is because there's an energy difference between those states, and we can get a low-end limit on linewidths of transitions using the HUP. it's quick, and gives an order of magnitude answer (the actual width will always be broader for several reasons. laser physics is quite interesting and if anyone wants to talk about it sometime, please let me know. because i am a big nerd and i know i got into this death ray business for a reason).
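as a concrete sketch of that order-of-magnitude use (the 16 ns lifetime below is just an illustrative number of roughly the right scale for an alkali D line, not a value to trust): from delta E * delta t >= hbar/2 with delta E = h * delta nu, you get delta nu >= 1/(4 Pi tau), while the standard natural linewidth is 1/(2 Pi tau):

```python
import math

tau = 16e-9  # excited-state lifetime in seconds (illustrative value)

# HUP floor: dE * dt >= hbar/2, with dE = h * dnu  ->  dnu >= 1/(4*pi*tau)
hup_floor_hz = 1.0 / (4.0 * math.pi * tau)

# the usual natural linewidth (FWHM) from the decay rate: dnu = 1/(2*pi*tau)
natural_hz = 1.0 / (2.0 * math.pi * tau)

print(f"HUP lower bound:   {hup_floor_hz / 1e6:.1f} MHz")  # ~5 MHz
print(f"natural linewidth: {natural_hz / 1e6:.1f} MHz")    # ~10 MHz
```

both land in the MHz range, which is exactly the kind of quick sanity number the post is describing; the measured width is always at least this and usually broader (pressure, power, doppler broadening, etc.).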

as for what it really means -- this is actually really important to a lot of optics/atomic physics stuff that's being done right now, producing what are known as 'squeezed states.' this means that we're taking the usual uncertainty from the delta x * delta p relation, and cramming almost all that uncertainty into one of those variables or the other, so that one aspect of the state will be known to a very, very high precision. in my kind of physics, working with lasers, we use the conjugate variables E and t, which obey an identical uncertainty relation. so we have a beam where we know very little about the intensity, but we have great information about the frequency, for example. this looks like a great way to do frequency metrology -- working out precise time measurements, basically. you can probably get better time resolution using a cesium clock, but i think the point is, how do you use that to measure something? count atoms, i guess. but you've got all kinds of shit with detectors going on then. with a light pulse with highly known frequency properties, we can just interfere it with something we want to measure, and blammo. we have great measurement capabilities.
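the bookkeeping for an ideal squeezed state is simple enough to sketch (idealized single mode, hbar = 1, no losses -- real experiments fight decoherence that degrades this):

```python
import math

hbar = 1.0  # natural units

def quadrature_spreads(r):
    """Uncertainties of the two conjugate quadratures of an ideal
    squeezed vacuum with squeeze parameter r (lossless, single mode)."""
    dx = math.sqrt(hbar / 2.0) * math.exp(-r)  # squeezed quadrature
    dp = math.sqrt(hbar / 2.0) * math.exp(+r)  # anti-squeezed quadrature
    return dx, dp

for r in (0.0, 0.5, 1.0):
    dx, dp = quadrature_spreads(r)
    # the product always saturates the bound: dx * dp = hbar / 2
    print(r, dx, dp, dx * dp)
```

cranking r shrinks one spread exponentially while the other grows to compensate, and the product never drops below hbar/2 -- the uncertainty doesn't go away, you just shovel it into the variable you don't care about.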

this is interesting science. it will have a lot of ramifications for other fields -- like probing the internal structure of atoms to figure out exactly what goes on in a transition, for example (i mean how long that sort of thing takes, how long it takes for angular momentum to reconfigure, things like that).

so. i think that as an experimentalist, i use it ALL. THE. TIME, and very mechanistically. but i think that, as experimentalists often lead the way into new areas of science (think about the discovery of superconductivity in liquid mercury, for example), a lot of the shit we come up with relies on thinking about what the profound implications of our everyday tools really are.

i am reminded of this guy ketterle, at mit, who is one of the BEC big dogs. lots of theorists were arguing A LOT about the coherence properties of BEC, about getting independent BEC's to interfere, etc. There were so many intractable arguments on both sides. Ketterle said, fuck it, i'm just going to measure it. and he did.

oh god i just re-read. expert opinion?

i think i'm being made fun of. where's my giant frowney face button.

i keep going back to this (quoting Rothko):

"

Entropy is accepted to be irreversible. This establishes causality, or temporal directionality that is determinism by most standards.

"

i don't understand what you mean. can you explain this? of course there's a directionality implicit in the laws of thermodynamics, and it's one of the biggest problems in physics to assert why laws that are apparently completely reversible on a microscopic level turn out to produce systems with only one possible direction. but this doesn't establish causality, i don't think. it says that broken glasses don't hop off the floor and reassemble if you pump into them the energy of the smashy-smashy noise.

i don't get what you're saying. also, knowing the final state and the initial state is very, very different from knowing every in-between state. unless you're implicitly making a sort of feynman path integral kind of argument, where you're saying, well, we just keep on time-slicing and looking at smaller and smaller intervals and as long as the second law applies, we'll have knowable endpoints. and you assume a limiting process where you do this an infinite number of times, and arrive at some sort of propagator function?

i don't think that works. second law stuff doesn't work for individual events, it's a statistical thing by definition. it's why stat mech works with delta T's instead of dT's, you know?

please explain!

i think the reason that the graininess of the cotangent bundle space implies non-determinism is because we have to use a different map when we go across a boundary that isn't differentiable, and we have to sacrifice previous information and derive new conserved quantities in the second space. for pendulum type examples, this is a non-starter, because the system can still be explained with, for example, the same action-angle type variables. so we can sort of cheat and say we know its initial conditions. i am fairly certain this doesn't work for larger systems, because there is no framework to determine what the conserved action-angle variables' initial values are in the new map. this is maybe not right but i think possibly right.

i think i'm being made fun of. where's my giant frowney face button.

Absolutely not. I was entirely in earnest and asked with respect. I'm thrilled to have an actual particle physicist on hand to answer some of these questions. (I wish ozomatli would say some more.)

It doesn't seem like you answered my question, exactly. Isn't it the case that as a practical matter in doing almost everything—or everything—that you do, as an experimentalist, there is no practical difference between the mechanistic view and the epistemological view? I mean, it seems to me that with the epistemological view this would still be the case: you'd think about it mechanistically (measuring alters the system in a mechanically physical way) for almost all or all purposes. But in the context of the philosophy of the worldview implied by QM, the correct interpretation of the HUP is not that measuring affects what is measured in some inevitable way, but that as a matter of principle one can't say, for example, that a particle actually

AN IMPORTANT DISCLAIMER: i am not a particle physicist. i do atomic/molecular/optical aka AMO physics, which means mostly i do shit with Bose-Einstein Condensation. i fully concede that this is not as badass as high energy/particle physics and i am sorry if i mistakenly gave the impression that such is my field of study -- that was not my intention. but i know some of the high energy guys in my program and they are douchebags.

to answer your question, kmellis,

no, i think that's a perspective both sides of the argument come from -- if for no other reason than because so many of the important texts on the matter have been written by people who assert primacy of that point. i'm thinking very much of sakurai, whose text *Modern Quantum Mechanics* is the gold standard in my opinion. he's a particle guy, and in his pedagogy, the uncertainty principle is actually a derived result, rather than a fundamental limit on knowledge -- sort of, it turns out you can't know this stuff. but not because god forbids it or because the ninth law of thermodynamics says thou shalt not knowest simultaneously momentum and position. or because of gravity. or large cats, or whatever.

rather, we obtain an uncertainty principle by taking the expectation value of trying to simultaneously measure the momentum and position of two identically prepared systems (there are some reasons why it's more accurate to say this than 'the same system,' but they're really just sort of loophole-closers) but in different "orders." what i mean by this is: let's consider an operator P, for momentum, and Q, for position, and we have a state |X> that represents our system in a Hilbert space. Basically, we're going to measure Q|X> and get q, or properly

Q|X> = q|X>

where q is just some number. Q is an operator -- it actually doesn't have any other representation than just Q. it is defined as something that gives you the position, that's all -- in a certain representation, such as a cartesian coordinate basis, we might see something like <Q> = q(x,y,z), where q is a point in that three-space. But we're actually throwing in another step there, where we project our position q into a certain basis. or, alternatively, we project Q into a basis and then |X> into a basis and then we have Q in that basis act on |X> in that basis.

Same sort of thing with momentum, we have an operator P that does this:

P|X> = p|X>

with the same hoopla about projection into particular bases and whatnot.

Now, let's measure the position after we've measured the momentum. This is all happening instantaneously, incidentally; there's no time in between measurements or anything like that. That means we go QP|X>, and we get some number out that should have something to do with a product of q and p, right?

Well, okay, let's do the same thing, but now we measure Q first, so we take PQ|X>.

Now, let's subtract the two, ie let's look at PQ|X> - QP|X>. So we have some number times something like but maybe not actually equal to |X> (this is for reasons to do with whether or not |X> is something called an eigenvector of the operator; it's not the vital aspect, because for purposes of demonstration we could choose it to be such a thing) minus some other number times something else that's the same sort of object, a state somehow connected to (but again, not the same as) |X>.

Shouldn't this be zero?

Well, we don't really observe these things since how do I measure a |X>? i measure a position. So I do an operation that we shall call "taking an expectation value," and it turns out, no, this isn't zero. It can be all sorts of things, but it has to be greater than (depending on your axioms this can change by a factor of two) Planck's constant divided by 4 Pi.

It turns out it's not zero. It never is.

That, to my mind, is more interesting than a lot of other authors' treatments of uncertainty -- some even start with it as an axiom, and work "backwards" to obtain the non-commutative algebra. I think it's much more satisfying to start with axioms describing things like states and operators and then use some rules about differentials to produce generators of translation and time evolution, and from there end up with this derived result that says, shit, we can't know lots of things at once. i think that's damn near mind blowing.
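that "it never is zero" can even be seen with plain matrices. a sketch using truncated harmonic-oscillator ladder operators (hbar = 1; the truncation poisons only the very last diagonal entry, which is a finite-matrix artifact, not physics):

```python
import math

N = 6  # truncated Fock-space dimension

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# lowering operator a: a|n> = sqrt(n)|n-1>, i.e. <n-1|a|n> = sqrt(n)
a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(N)] for i in range(N)]
ad = [[a[j][i] for j in range(N)] for i in range(N)]  # raising operator

# quadratures x = (a + a†)/sqrt(2), p = i(a† - a)/sqrt(2)
s = math.sqrt(2.0)
x = [[(a[i][j] + ad[i][j]) / s for j in range(N)] for i in range(N)]
p = [[1j * (ad[i][j] - a[i][j]) / s for j in range(N)] for i in range(N)]

xp, px = matmul(x, p), matmul(p, x)
comm = [[xp[i][j] - px[i][j] for j in range(N)] for i in range(N)]

diag = [comm[i][i] for i in range(N)]
# the first N-1 diagonal entries come out exactly i (times hbar);
# only the last entry is junk, from cutting the infinite matrix off
print(diag)
```

take the expectation value of the commutator in any of the first N-1 number states and you get exactly i, never zero, which is the punchline of the derivation above.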

i'm not certain how you would choose to see this, ie which category it falls into. i would happily accept either; i don't know why, but earlier i read into yr. statement some of what seems to be common understanding among people with an interest in science but not a technical background -- that the theoretical physicists are the brains and we just go out and prove their hypotheses. mostly, it works the other way -- experimentalists determine new physics and theorists show its consistency or departure from existing cannon. canon? i can't spell that word.

shit, that probably still doesn't answer your question. i say this: no, you cannot know those two properties simultaneously. it is more than 'cannot measure.' but we tend to use 'measure' and 'know' equivalently when speaking abstractly, though with VERY different meanings when speaking concretely about an experiment. there's a really cool exp. i have come up with, but it turns out i can't make it work because the phenomenon i want to observe will always be lost in the noise of a solid state object.

*Runs around the room, noogies everyone, throws up Jim Beam, passes out, does not get into good college, spends rest of life wishing he had not stolen triple beam balance from science lab to trade for an ounce of pissed on Mexican ditch weed, never gets to retire, becomes first in many generations of immigrant strivers to be downwardly mobile, dies at the age of sixty one trying to climb a speaker stack at a Johnny Winter show in Delaware.*

posted by
Divine_Wino
05 December | 10:43

Hush up, Nerd!

posted by
Divine_Wino
06 December | 10:07

sam, I've been reading the threads, but I have no comments to make ('cept maybe this one).

posted by
safetyfork
06 December | 11:07

All posts and comments © their authors