David Elm’s EPR Explanation
EPR Paradox – Bell’s Inequality
Updated July 24, 1997
by David Elm
Referring to Bell’s paper “On the Einstein-Podolsky-Rosen paradox”
PHYSICS 1 (1964) p.195-200 and to all EPR experiments in general, it can be shown that there is an error in applying Bell’s inequality tothe tests which were designed to test it. Therefore all EPR tests which seem to violate the inequality and support non-local effects are faulty and cannot be used to reject all local reality theories of the universe. It can also be shown that the overall logic used in the EPR tests is circular and so the results are non-rigorous.
In the early 1930s, Einstein and Bohr had been discussing reality at the quantum level. Bohr believed that reality at the quantum level does not exist until it is measured. This view came to be known as ‘Copenhagen’ quantum mechanics. The usual view of quantum mechanics says that a wave function determines the probabilities of an actual experimental result, that it is the most complete possible specification of the quantum state, and that there is no other reason for an event to occur.
Einstein, Podolsky, and Rosen (EPR) set forth a thought experiment which attempted to show that quantum mechanics could not be a complete theory if it assumed things happen for no reason. Contrary to what the Copenhagen interpretation asserted, Einstein et al. said that properties of quantum particles must be real even before you measure them if you can know exactly what those properties are. Their paper proposes a case where two particles (electrons?) are known to have the same momentum and equivalent positions from the source of their creation. Momentum and position are one of several pairs of ‘complementary’ properties of matter at the quantum level which Copenhagen QM physicists say can only be known to a certain degree. They argued that since there exist ways to know the properties of one particle, you will know the other particle has the exact same property; so, by inference, those properties of the other particle must be real whether you measure them or not.
Einstein et al. believed the predictions of quantum mechanics to be correct, but only as the result of statistical distributions of other unknown but real properties of the particles.
Bohm (1951) presented a paper in which he described a modified form of the Einstein-Podolsky-Rosen thought experiment which he believed to be conceptually equivalent to that suggested by Einstein et al. (1935), but which was easier to treat mathematically. Bohm suggested using two atoms with a known total spin of zero, separated in a way that the spin of each atom points in a direction exactly opposite to that of the other. (If indeed this can be said, since QM says the spins don’t exist yet!) In this situation, the angular momentum of one particle can be measured indirectly by measuring the corresponding vector of the other particle. (Unfortunately, you cannot actually get an exact measurement on a quantum particle; you can only get a probabilistic one.) John Bell (1964) subsequently put forth ‘Bell’s Inequality’, which seemed to express a physically reasonable condition of locality. This locality imposed restrictions on the maximum correlations in certain measurements, for example on a pair of spin-1/2 particles formed somehow in the singlet state and moving freely in opposite directions. The inequality appears to be testable in a laboratory experiment because the statistical predictions of quantum mechanics are incompatible with any local hidden-variable theory satisfying only the natural assumption of ‘locality’.
Measuring a property of a single quantum particle gives one of two specific readings, 0 or 1, with a probability based on the relative angle between the particle and the measuring device. Einstein would say this choice is not just probability, but is due to other real but unknown properties of quantum particles. Bohr said probability is the only thing which exists on this level and that it is the complete picture. Einstein did not use the term ‘hidden variables’, but he believed a deeper reality exists which would someday be knowable and understood. One or the other man was wrong. The debate went on for 30 years before Bell published his inequality.
Bell’s paper in 1964 presented a method which seemed to provide a way to test between the two views in the laboratory. By measuring spins, as suggested by Bohm, instead of position and momentum, a real quantitative test could be performed, and a situation seemed to exist where QM predicted a correlation in the measurements which should be impossible in a reality as described by Einstein. Many such tests have now been run, and the results do seem to violate Bell’s upper limit, so most physicists now believe the universe has non-local effects, at least at the quantum level.
Bell’s ‘theorem’ seemed to use only a few common assumptions and simple logic to calculate an upper limit for the maximum correlations possible in any test of this nature. The simple logic includes the assumption that in any real-world situation a total cannot be more than the sum of its parts. You can think of Bell’s inequality as the upper limit on the number of items which can disagree between two lists. For example, two students who take a test will have only a certain number of answers which can disagree. Suppose one student answered 95 questions correctly out of 100 and another student scored 98 out of 100 on the same test; then we can calculate, by a linear addition, that there can be no more than 7 answers that disagree between the two lists. Bell’s inequality thus plots out as a straight line with a ‘kink’ at 0 degrees. There are reasons why this ‘two list’ logic cannot be applied to the ‘lists’ of data produced in the EPR tests. For one thing, there is no master list in the EPR test, so we cannot know, even in principle, which events are actually errors.
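The ‘two lists’ counting bound is easy to check with a short script. This is only an illustrative sketch I am adding here; the 5% and 2% error rates are arbitrary choices:

```python
import random

# Two students answer the same 100-question test. Disagreements between
# their two answer lists cannot exceed the sum of their errors against
# the master key, since a disagreement requires at least one error.
random.seed(0)
for _ in range(1000):
    n = 100
    key = [random.randint(0, 1) for _ in range(n)]
    # Student A gets ~5% wrong, student B ~2% wrong (illustrative rates).
    a = [ans if random.random() > 0.05 else 1 - ans for ans in key]
    b = [ans if random.random() > 0.02 else 1 - ans for ans in key]
    errs_a = sum(x != k for x, k in zip(a, key))
    errs_b = sum(x != k for x, k in zip(b, key))
    disagreements = sum(x != y for x, y in zip(a, b))
    # Linear bound: e.g. 95 and 98 correct can disagree on at most 7.
    assert disagreements <= errs_a + errs_b
print("linear bound holds in all trials")
```

Note that this check only works because the master key exists; as the text says, the EPR data have no such master list.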
The experiment done by Aspect et al. in 1982 was considered by many physicists to be the final nail in the coffin of local causality. In many respects this particular test should NOT be considered a good starting point for the beginner, as it was specifically designed to rule out a single possible loophole in all the previous tests: the fact that it was, in principle, possible for the ‘effect’ to be caused by signals traveling from A to B at speeds below the speed of light. Aspect showed that whatever is happening DOES happen faster than the speed of light. Einstein’s special relativity asserts that all cause-and-effect actions for physical objects happen at or below the speed of light, within a sphere which moves outward from the cause at the speed of light. Such effects are called ‘local’. Action at a distance beyond the light sphere would be a non-local effect. Aspect appears to have achieved two sets of measurements outside each other’s light spheres and showed that a correlation still exists. This was accomplished by using acousto-optical couplers to ‘randomly’ switch both ends of the test into differently aligned detectors, and showing that the excess correlations which seem to violate Bell’s upper limit still show up.
To really understand the tests and the mistake made in each one, you should start with some of the earlier tests, which were simpler in their setup, since what I am referring to is a fundamental misconception in the formulation of the test and not just a loophole. (See “Quantum Reality” by Nick Herbert for a good simplified descriptive explanation of the EPR tests.)
The EPR experiments can take any one of several forms, but the underlying principles are the same. A central source generates photons or particles which have related properties, such as polarization or spin. These particles are separated by some distance and then the properties are measured in analyzers which can detect photons or particles which have passed into the detector.
detector --- polarizer --- source --- polarizer --- detector
    A                                                  B
    |                                                  |
    `--------------> coincidence detector <------------'
It makes little difference whether the test is done with neutrons or protons or photons, as long as we take into account the known correlations. In some photon tests the analyzers contain a polarizer, such as polarizing plates or a calcite crystal, to separate photons polarized at 90 degrees from each other. These analyzers will detect photons that are polarized in the same orientation as the analyzer, but photons which are oriented perpendicular to the analyzer will be totally blocked by the polarizing plates or deflected into an additional detector by the calcite. Photons which arrive at the analyzer at angles in between have a probability of detection which varies with their relative angle to the analyzer, forming a cosine-squared type of graph. The ‘result’ referred to by Bell is a timed count which constitutes a measurement of matches at A and B. It is important to understand that this result is the relative output of the coincidence detector, not the separate results at each analyzer. This small point turns out to cause a major error in the logic of the test, because the original reasoning had to do with measurements at A not being affected by actions at B and vice versa. The data measured at the coincidence detector (C) are something quite different, even though Bell deals with these data as if the same logic can be applied.
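The cosine-squared detection probability described above can be written down directly. A minimal sketch, assuming an ideal polarizer obeying Malus’s law (the function name is mine):

```python
import math

def detection_probability(photon_angle_deg, analyzer_angle_deg):
    """Probability that a photon polarized at photon_angle_deg passes
    an analyzer oriented at analyzer_angle_deg (ideal cos^2 law)."""
    theta = math.radians(photon_angle_deg - analyzer_angle_deg)
    return math.cos(theta) ** 2

# Aligned: always passes; perpendicular: always blocked.
assert abs(detection_probability(0, 0) - 1.0) < 1e-9
assert abs(detection_probability(90, 0) - 0.0) < 1e-9
# At 45 degrees relative angle the photon passes half the time.
assert abs(detection_probability(45, 0) - 0.5) < 1e-9
```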
The analyzers in some of these tests are designed so that they can be rotated to various angles; in other tests predetermined settings are fixed. In all cases it can be seen that the angle between the two analyzers is related to the relative output in a shape described as a cos^2 type of graph. The shape of the curve is considered proof by some authors that the inequality is violated, since the inequality is a straight line and any curvature on the plus side is taken to ‘prove’ something spooky is going on.
The simplified experiment (See: Herbert) can be paraphrased like this:
We can demonstrate a ‘spooky’ QM effect by aligning both analyzers, let’s call them A and B, and counting events which pass through BOTH detectors, using this as our reference count. This count is extrapolated and graphed, and it is assumed that it represents 100% correlation between the choices made at A and B. A drop in the measured results refers to mismatches or ‘errors’. The test continues by turning one detector through enough of an angle to produce a given percentage of errors, say 5%. Then this analyzer is turned back to normal, and the other detector is turned in the opposite direction by the same amount; once again 5% errors occur. Now what percentage of errors should be expected when both detectors are turned and the test is done again? 10 percent?
Bell’s theorem is based on the idea that the total error for two analyzers cannot be more than the sum of the changes in each, if the local view of Einstein holds, since changes at A should have no effect on the results measured at B and vice versa. But when the test is run there are MORE than 10% errors at the given angle. Thus the local views of Einstein have apparently been ‘proven’ wrong.
“Now we make the hypothesis, and it seems at least worth considering, that if the two measurements are made at places remote from one another the orientation of one magnet does not influence the result obtained with the other.” (From Bell’s original paper “On the Einstein-Podolsky-Rosen paradox” (1964), section 2 (Formulation), sentence 5.)
Bell puts a footnote in this line referring to a statement made by Einstein: “But on one supposition we should, in my opinion, absolutely hold fast: the real factual situation of system S2 is independent of what is done with the system S1, which is spatially separated from the former.” Albert Einstein: Philosopher-Scientist, edited by P. A. Schilpp, p. 85, Library of Living Philosophers, Evanston, Illinois (1949).
What Einstein calls S1 and S2 I will refer to as A and B. While Bell’s hypothesis certainly seems to follow the premise of Einstein’s comment, there is a mistake in Bell’s use of the concept of ‘independent’ as described by Einstein. Einstein is saying quite clearly that the actual facts and events in system A are not influenced by any change in system B. Bell then applies this same logic to a different situation, where ‘results’ change from actual facts at A and B to the ‘RESULTS’ measured at the output of the coincidence detector. Einstein’s idea of a real factual situation is a detector getting a hit or a miss; it is the information which appears at each end of the experimental setup. But what comes out of the coincidence detector also depends on whether there is a hit or a miss at the other end of the experiment. Information from each end is compared, and then (and only then) can it be said whether there is an error. Although Bell’s hypothesis seems only a subtle and insignificant change from Einstein’s statement, it can be shown to be otherwise. Perhaps it would be easier to see the error if we substitute a more commonly understood situation to see how the logic is being applied:
Suppose two people are playing poker and each cuts and draws five cards from individual decks.
Einstein’s statements (in poker terms) would be:
“No matter what cards player A draws, it will not change the cards in player B’s hand.”
Bell’s statement (in poker terms) would be:
“Therefore, no matter what cards player A draws, it cannot change the ‘results’ in the comparison of the hands (who wins).”
Now it becomes clear that there is a real problem with this kind of logic. Winning the hand at A does depend on what B draws and vice versa since the wins are only measured at C and by definition and physical setup, this will be a ‘global’ effect.
Bell is asking us to believe that on the quantum level two interacting particles behave as though they are still connected. This is like saying I have a magic deck of cards and whatever card player A draws will be the same as player B draws. This is illogical.
Suppose you and I are astronauts and we are now 10 light-years distant from each other when we each draw our cards at preset times. Let’s say you draw a Jack, and one second later I draw a King.
Then we travel to a common point to compare our results and compute our winnings. The change in the value of your Jack occurred instantly (faster than the speed of light) and at a distance when I drew the King. This can be argued to be a non-local ‘effect’ which can be measured.
It can even be said that this is an effect which has real physical consequences. Suppose we bet a dollar on the outcome. You will (later) feel the ‘result’ in your wallet.
The effect is real and it can be measured, and yes, it travels faster than the speed of light, but IT’S NOT A PHYSICAL OBJECT OR SIGNAL; it is simply a change in ‘interpretation’ that you are counting.
In the case of poker it is easy to see that nothing physical changed and so it would be a mistake to call this a non-local effect. Bell’s inequality uses this kind of ‘logic’ but it is just a bit harder to see.
Consider another card game where two people each have a deck of cards, and the rules of this game say that when both players cut the same color card they both get a token from the bank. Now one player cuts a red card and the other cuts a black card. Who made the mistake? The question is meaningless, but the effect of the second card on the value of the first card is instantaneous, faster than the speed of light, yet nothing changes except the ‘interpretation’ of the VALUE of the first card. Let’s call red cards 1 and black cards 0. Now suppose I were trying to cut only red cards and in a run of 6 cards I cut all red: 1,1,1,1,1,1. In the same 6 turns suppose my opponent cut 1,0,1,0,1,0. The resulting tokens from the bank (the results of the coincidence detector) are 1,0,1,0,1,0. Now does it follow that we are really dealing with a classical situation where ‘A does not affect B’? There is a problem with this logic. The same faulty logic is at work in the EPR tests: when the photon at A goes through the ‘up’ channel and then the photon at B goes through the ‘down’ channel, the effect on A is instantaneous, since it suddenly becomes a mismatch instead of a match, but it is important to realize that nothing actually changes at A, only at the coincidence detector.
I believe this subtle mistake is present in all of the tests, and it causes the inequality to be incorrectly applied to the plotted measurements. Thus the ‘simple logic’ used by Bell seems to me to contain a vital flaw. I believe the mistake is manifested in the way this reasoning causes one to scale the data in order to plot it against the graph of the upper limit as put forth by Bell.
It is argued that changes in the measurements at A should have no effect on the measurements made at B and vice versa. I follow Herbert’s lead and call these changes ‘errors’, since when we make a chart for analyzing the results, we always do something equivalent to plotting 0 degrees at the maximum and using this as our reference point; we then plot the points of our decreased measurements against Bell’s upper limit. Each mismatch is then considered an error caused by the misalignment of the two polarizers, and the point plots lower on the chart.
It is assumed that a turning of analyzer A makes ‘errors’ at A and a turning of analyzer B makes ‘errors’ at B. This is wrong. The change which does occur in the output of the coincidence detector is a different kind of effect. It is a ‘global’ effect. It is something like passing photons through two polarizers in series:
source -----> A -----------------> B -----> detector
This is a well-known experiment where the measurement at the detector is 1/2 cos^2 of the angle between the orientations of the two polarizers. This result is known as the Law of Malus and plots out as a sinusoidal type of curve. When A and B are aligned you get the maximum number of photons; in an ideal experiment 50% of the photons will pass through. When A and B are at 90 degrees from each other you will get no photons. Can the simple logic of Bell’s inequality be applied to this? Turn polarizer A through enough of an angle to produce a given percentage of errors, say 5% from the maximum. Then turn this polarizer back to normal and turn the other polarizer in the opposite direction by the same amount; once again 5% errors occur. Now what percentage of errors should be expected when both polarizers are turned and the test is done again? 10 percent? Of course not. Anyone would expect a larger change and simply say Bell’s limit does not apply. In this case it is obvious that the logic of Bell’s inequality does not apply. (Or have I just proven that series polarizers are non-local?!) This is known as a global effect: the result depends on the state of both polarizers. A close look at the tests of Bell’s Inequality will show you that a similar (but different) global effect is all that is being measured.
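The series-polarizer arithmetic can be made concrete. A short sketch, assuming ideal polarizers and measuring ‘errors’ as the fractional drop from the aligned maximum, shows that two 5% turns combine into roughly 19%, not 10%:

```python
import math

# Transmission through two ideal polarizers in series (Malus's law),
# relative to the aligned maximum. (The overall factor of 1/2 for the
# unpolarized source cancels when comparing to the aligned case.)
def relative_transmission(angle_a_deg, angle_b_deg):
    theta = math.radians(angle_a_deg - angle_b_deg)
    return math.cos(theta) ** 2

# Find the angle that produces a 5% 'error' (drop from maximum).
alpha = math.degrees(math.acos(math.sqrt(0.95)))   # about 12.9 degrees

# Turn A alone: 5% errors. Turn B alone (the other way): 5% errors.
err_a_only = 1 - relative_transmission(alpha, 0)
err_b_only = 1 - relative_transmission(0, -alpha)
# Turn both: the relative angle doubles, and the error is ~19%.
err_both = 1 - relative_transmission(alpha, -alpha)

print(round(err_a_only, 3), round(err_b_only, 3), round(err_both, 3))
```

Since cos(2a) = 2 cos^2(a) - 1, a 5% single-sided error (cos^2 = 0.95) gives cos(2a) = 0.9 and a combined error of 1 - 0.81 = 19%.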
source -----> X1 -----------------> A -----> detector
source -----> X2 -----------------> B -----> detector
Imagine two series experiments set up side by side. Polarizers X1 and X2 are set at the same angle. Polarizers A and B are set at the same angle, but different from X1 and X2. Two photons from the two sources which have passed through the first set of polarizers are now known to have the same polarization. When these photons each reach their second polarizer, there will be separate probabilities that each will pass. The local hidden-variable view would treat the two branches of the EPR experiment with the same logic: there will be separate probabilities that each photon at A or B will pass or miss. Of course, if you assume there is a QM correlation then the results will plot out differently.
Suppose we consider just 1/180 of the pairs, those which happen to arrive at the polarizers at an angle of 13 degrees. There is normally one chance in 20 of each photon going into the down channel and 19 chances out of 20 of it passing into the up channel. We do not know which one in twenty, on either path, will choose the down path; we would not expect it to always be the 20th photon or any other specific number.
The chance that both choose the up channel is 19/20 x 19/20, or 361 out of 400. The chance that both choose the down channel is 1/20 x 1/20, or 1 out of 400. So 362 out of 400 pairs will NORMALLY MATCH at A and B; the other 38 photon pairs will NORMALLY MISMATCH, one choosing the up channel and one choosing the down channel. This is normal probability. At the other 179 angles different amounts of matching and mismatching will occur, and if you add up all photon pairs at all angles then only 73% normally match with the polarizers aligned. Since QM assumes a 100% match at 0 degrees, they plot their results differently than an LHV physicist would, and this is what causes the apparent violation of Bell’s Inequality.
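The 13-degree arithmetic can be checked directly. A sketch under the independent-choice assumption stated above (note that cos^2 of 13 degrees is close to 19/20):

```python
import math

# Photon pairs arriving with their shared polarization 13 degrees from
# the analyzers, treated as two independent probabilistic choices
# (the local hidden-variable reading described in the text).
p_up = math.cos(math.radians(13)) ** 2     # ~0.949, about 19/20
p_down = 1 - p_up                          # ~0.051, about 1/20

both_up = p_up * p_up            # ~361/400
both_down = p_down * p_down      # ~1/400
match = both_up + both_down      # ~362/400
mismatch = 1 - match             # ~38/400

print(round(400 * both_up), round(400 * both_down),
      round(400 * match), round(400 * mismatch))
```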
No special ‘entanglement’ is required to derive a cos^2 curve with maxima clearly less than 100% and minima clearly greater than 0%. The computer simulation of the CIRCLES AND SHADOWS game which I ran through 20,000 turns produced a maximum of 73% and a minimum of 37%.
“73% agrees very well with the analytical solution
2-4/pi = 0.72676
that I get for the aligned case (angle = 0).”
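That analytical figure can be verified numerically. The sketch below assumes one reading of the aligned case of the CIRCLES AND SHADOWS game described later: both players share the same shadow length p = |cos| of the announced spinner angle, and each independently scores a hit with probability p.

```python
import math

# Aligned case: the master's token probability on one turn is
# p^2 + (1-p)^2 (both hit or both miss), with p = |cos(phi)|.
# Average over a uniform spinner angle phi.
n = 100_000
total = 0.0
for i in range(n):
    phi = math.pi * i / n          # spinner angle over a half turn
    p = abs(math.cos(phi))         # shadow length as a probability
    total += p * p + (1 - p) * (1 - p)
average = total / n

print(round(average, 5))           # ~0.72676, i.e. 2 - 4/pi
assert abs(average - (2 - 4 / math.pi)) < 1e-3
```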
If you erroneously assume that both photons will make the same choice because they came from the same event, then you get results which seem to support the QM view. Many people have said that we know both make the same choice due to the results of the EPR tests. It should be clear that this is both an assumption you are making and the result you are proving. If you prove what you assume, then you have proven nothing.
The quantum mechanical prediction happens to agree with this scaled up cos^2 curve and so each time the test has been carried out the results seem to follow the curve of Quantum Mechanics and the straight line ascribed to the local reality view of Einstein is apparently exceeded.
Referring to “Experimental Test of Local Hidden-Variable Theories” by Freedman and Clauser (1972), Physical Review Letters, 28, 938-41. One derivation of Bell’s inequality which has convinced many people that the problems of this test can be removed is the idea that all you need is a way to remove the polarizers from each path so that you will have a reference to compare with. This derivation was described by Clauser, Horne, Shimony, and Holt. In 1972 the test performed by Freedman and Clauser used this ‘improved’ inequality to verify that these tests really do produce the non-local effects. They conclude that their results are strong evidence against local hidden-variable theories. I believe the use of this form of the inequality does not remove the problem inherent in the previous math; it only provides an automatic form of scaling. You have to be especially careful when you try to derive a good equation from a bad one. The problem can be seen in the plot on page 417 of W&Z (Quantum Theory and Measurement, edited by Wheeler and Zurek) and in equation 3 on page 415.
The 1/4 term refers to the fact that the scaled-up curve goes from 0 to 100% on a 0-to-0.5 scale, and the range from 22 1/2 to 67 1/2 degrees is 1/2 of the total from 0 to 90 degrees, so on a linear Bell-type limit the change in the curve should be less than or equal to 1/4. This is the same straight line that Bell’s limit plots with the previous versions of the inequality. Of course, the scaled-up results form a cosine curve, as they should, and so the vertical plot covers more than 1/4 in this part of the curve, and they conclude that the inequality is violated. Using a reference and ratios does change the scale of the plot, but it does not change bad logic into good logic. Since I have shown that the inequality cannot be plotted so that the 100% point coincides with the maximum measurement in an LHV view, you still have to account for that.
Bell’s logic appears to be correct from a QM point of view because QM assumes the two photons or particles are somehow entangled with each other and still have a connection after they are separated. They believe that these two photons will both make exactly the same choice if they encounter a polarizer at the same angle.
Nick Herbert gives a clear statement of this assumption on page 215 of “Quantum Reality”:
“In the twin state each beam by itself appears completely unpolarized – an unpredictably random 50-50 mixture of ups and downs at whatever angle you choose to measure. Though separately unpolarized, each photon’s polarization is perfectly correlated with its partner’s. If you measure the ‘P’ of both photons at the same angle (a two photon attribute I call paired polarization), these polarizations always match.”
Herbert clearly states (and all other EPR testers in some way assert) that this is assumed at the outset of these important tests. But if you ask how this is known they say “The results of Aspect’s test show this”. How can the results of the test be used in the assumptions you start with? This circular proof makes all the tests non-rigorous.
Bell’s inequality is graphed as a straight line which represents the upper limit on the summation of the ‘errors’ from both ends of the experiment, but Bell mistakenly believes he can plot the maximum measurement of the EPR tests at this same point. The inequality starts at 100% at the upper left corner of these graphs and descends linearly to 0 at the bottom right. So everyone erroneously plots the maximum ‘extrapolated’ measurement at this same point. This is why the normal curve of the results is interpreted as an excess correlation: the upper limit seems to be violated (because a lot of the data is left out when we assume the maximum measurement represents 100% agreement in the choices of ALL pairs when the analyzers are aligned).
It seems more logical that the detection of each photon arriving at each polarizer is simply a probability based on the relative angles between the polarizer and the spin vector of the photon.
In that case the measurements of these polarizations will NOT always match, even when both polarizers are aligned with each other. It means a substantial number of events will be missing from the data, and Bell’s inequality is only applicable if it includes all the data. Detector efficiencies are so low that all test results must be extrapolated to scale; even Bell explains the need for this. What this means is that the tests can easily ignore those events which normally mismatch. It is no wonder there are excess correlations if you start by assuming correlations that do not exist.
On the other hand, if you do an EPR test and do not assume that both photons will make the same choice, then when you extrapolate the results the maximum correlation will be around 73% and the curve will never exceed the inequality, which starts at 100%. If you take into account the events which normally will not match, it shifts the graph. The inequality is well above all parts of the measured curve and no violation ever occurs.
Since the test only proves QM when you assume QM, the logic is circular and non-rigorous. ALL EPR experiments have used this faulty scaling as the basis of their determination of the validity, or lack thereof, of the local reality views, and so it is clear that the local reality models cannot be rejected using these experiments.
Perhaps Einstein was right all along.
CIRCLES AND SHADOWS
copyright 1993,1996 by David A. Elm
Updated Feb 14, 1996
Here is a game that closely approximates the EPR test and shows that there is nothing special about the use of particles to run this test. It can be acted out with macro-sized devices such as game boards, dice, spinners, and tokens:
Three students get together one weekend to play several games of CIRCLES AND SHADOWS, a new gaming adventure I made up. The CIRCLE MASTER brings the playing boards and tokens. The other two students are the players, and they sit at a table with a barrier between them high enough that they cannot see each other’s game board, or they are spaced as far apart as necessary so they cannot communicate (not really necessary, but we like to confuse them). The CIRCLE MASTER also has a game board, which he uses for the spinner only.
The playing boards are square and have a circular disk attached in the center. The disk has a pivot in the center and can be turned to be oriented in any direction.
The disk has 360 marks around the edge with numbers every 10 degrees and a ‘spinner’ that extends across the disk. The numbers are inscribed in the circle, not on the board. The spinner can also be turned to point in any direction independent of the circle orientation. There is also another set of 100 marks on the disk in a smaller circle which can be used to determine “HITs.”
The game board has two marks on the left and right sides of the circle, which is where each player aligns his circle to his chosen number in each game, and a horizontal scale below the circle to measure the SHADOW of the spinner. The larger the shadow, the more likely you are to make points.
An ingenious device at the bottom of the board (a t-square) slides left and right and can be used to accurately project the position of the tip of the spinner onto the scale. The scale is linear and runs from 0 to 100 with 0 being the minimum shadow when the spinner is vertical and 100 being the maximum shadow when the spinner is horizontal.
The RULES of the game:
Each game consists of 100 turns. (or 1000 or 10000 etc) On each turn the players can turn their disk (or leave it where it is) and then must lock it in position. Then the CIRCLE MASTER spins his spinner and announces a number between 0 and 359. Each player then turns his spinner to the announced number (without moving the circle) and determines the length of the shadow using the t-square and scale. This number between 0 and 100 is the probability (percentage) that they will get a HIT. To see if they get a hit now they move the T-square to the side and spin their spinner. If the number that comes up on the inner circle is in the range of the number on the scale the player gets a hit and receives 1 token. (So if the shadow is 75 units in length you will get a hit 3/4 of the time). Some players prefer to roll dice which are provided just in case. The dice select a specific number between 0 and 99.
The object of the game for the players is to try and guess what number will be spun by the CIRCLE MASTER thereby setting your circle to a number which will give you the greatest probability of receiving a hit.
On each turn, the CIRCLE MASTER gets a token if both players get a hit or if neither player gets one. He does not get a token if one player gets a hit and the other misses.
Both of our players tend to leave their circles on one setting for an entire game, believing this will be a winning system.
In game 1: Player A sets his circle on 0, and player B sets his circle on 0. At the end of 100 turns the CIRCLE MASTER won 73 tokens. (This 73% figure turns out to be the normal maximum of a long game.)

In game 2: Player A sets his circle on 10, and player B sets his circle on 0. At the end of 100 turns the CIRCLE MASTER won 71 tokens, which he considers an 'error rate' of 2.

In game 3: Player A sets his circle on 0, and player B sets his circle on -10 (350). At the end of 100 turns the CIRCLE MASTER won 71 tokens, which he considers an 'error rate' of 2.

In game 4: Player A sets his circle on 10, and player B sets his circle on -10. At the end of 100 turns the CIRCLE MASTER won 67 tokens, which he considers an 'error rate' of 6.
You might think that the ‘errors’ in the match rate caused by player A turning his circle in game 2 are errors at end A of this game. Then in game 3 you could perhaps think that there are ‘errors’ at B when he turns his circle. It may perhaps even seem that when both players turn their circles in opposite directions that there should be no more than 4 ‘errors’ but then something spooky must be going on when there are actually 6 errors!
Actually nothing spooky is going on at all; this kind of thinking simply misinterprets the nature of the game. The errors are not 'at' either end. Every error is at both ends, since by definition an error is a mismatch between the two ends, and it is an illusion to treat the errors as separate facts. Also, the error rate is not linear: it follows the cos(theta)^2 curve, where theta is the total angle between the settings of the two circles, as the following games show.
In the next 19 games of 20,000 turns each:
Player A sets his circle on 0. Player B sets his circle on 0 for the first game, then on each following game he increments his angle by 5 degrees. Here are the percentages of tokens the CIRCLE MASTER gets after many, many turns at each setting:
Angle    Share      Angle    Share
  0      0.73        50      0.51
  5      0.73        55      0.47
 10      0.71        60      0.44
 15      0.70        65      0.42
 20      0.67        70      0.40
 25      0.66        75      0.38
 30      0.62        80      0.37
 35      0.59        85      0.37
 40      0.56        90      0.37
 45      0.54
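The table above can be checked with a short Monte-Carlo sketch of the game. The one modelling assumption is mine, not spelled out in the rules: the shadow of the diameter line at relative angle theta is taken to have length 100*|cos(theta)|, so each player's hit probability is |cos(spin - setting)|. Under that assumption the simulation lands near the 0.73 figure at matched settings and near the 0.37 figure at 90 degrees apart.

```python
import math
import random

def play(turns, angle_a, angle_b, seed=0):
    """Monte-Carlo sketch of the circle game.

    Assumed rule: the hit probability for a player whose circle is set to
    `angle` when the CIRCLE MASTER announces `spin` is |cos(spin - angle)|,
    i.e. the projected length of the diameter line as a fraction of 100.
    Returns the fraction of turns on which the CIRCLE MASTER gets a token
    (both players hit, or both players miss).
    """
    rng = random.Random(seed)
    tokens = 0
    for _ in range(turns):
        spin = rng.uniform(0.0, 360.0)      # CIRCLE MASTER's announced number
        p_a = abs(math.cos(math.radians(spin - angle_a)))
        p_b = abs(math.cos(math.radians(spin - angle_b)))
        hit_a = rng.random() < p_a
        hit_b = rng.random() < p_b
        tokens += (hit_a == hit_b)          # token on match (both hit or both miss)
    return tokens / turns

print(play(200000, 0, 0))    # near 0.73 under this shadow rule
print(play(200000, 0, 90))   # near 0.36-0.37 under this shadow rule
```

With this |cos| shadow rule the whole curve in the table is reproduced from purely local choices at each end, which is the point of the game.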
Now, to make this experiment even more like the real EPR tests, you have to add one more person. This fourth player sits in another room and cannot see or hear anything going on in the game room. In particular, he has no way of knowing how many turns have taken place, but he does have a black box with readouts that tell him exactly how many tokens the CIRCLE MASTER has earned. He also has the power to start and stop the game and to control the settings used by the two players. He runs the game at the 0 setting until he gets a count of 100. Using the same time interval, he tests all the other settings as above and compares his results, and yes, it does appear that something spooky is going on. (But it is just an elaborate illusion, and we are counting on his belief that the two bits of data are 'entangled' and make the same choice at 0 degrees.)
This is only a game simulation, but I have built into its 'rules' as many of the physical constraints of the EPR tests as I am aware of. What it tells me is that Bell's inequality cannot be applied to just a special subset of the data; it must apply to the entire data set. In this kind of test quantum mechanics predicts a cos^2 curve, and so does local reality. Bell's inequality is violated only when not all of the data are included in the math. QM physicists believe there is a 100% correlation between the choices of the two particles at 0 degrees. This assumption leads them to interpret the data incorrectly, which then seems to prove a measured correlation in excess of Bell's inequality. Circular logic is simply not valid.
Einstein was right all along. Things happen for a reason. God does not play dice with reality. (ACTUALLY HE DOES ON THE NEXT LEVEL, BUT THAT'S ANOTHER STORY.)
Ray Tomes writes:
I came across David Elm debating EPR with some physicists in the newsgroups in the mid 1990s. He was having a difficult time and receiving a lot of abuse, and yet it was clear that none of them actually grasped what he was suggesting. The reason was that they lacked the statistical background to follow David's quite clear explanations. I joined in and explained that in statistics there can be samples and subsamples, and that conditional probabilities come into play when moving from one to the other. The physicists had not grasped the difference between the full sample of events and the subsample that was analysed and graphed. The subsample was altered from the original by the omission of events that were detected in only one half of the experiment, and no allowance was made for this in the calculations.
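The sample-versus-subsample point can be illustrated with a toy model (my own construction, purely illustrative, not taken from any EPR paper): two stations share a hidden bit, each adds independent local noise, and each station fails to record its result, by a purely local rule, more often when its own noise fired. The agreement rate in the recorded subsample then comes out far higher than in the full sample, even though nothing non-local ever happened.

```python
import random

def simulate(n, seed=0):
    """Local post-selection toy model.

    Two stations share a hidden bit.  Each station's output is the hidden
    bit XOR independent local noise (flip with probability 0.25).  A station
    fails to record its result with probability 0.8 whenever its own noise
    flipped; this decision uses only local information.  Returns the
    agreement rate over ALL pairs and over the recorded subsample.
    """
    rng = random.Random(seed)
    agree_all = 0
    agree_sub = total_sub = 0
    for _ in range(n):
        hidden = rng.randint(0, 1)
        flip_a = rng.random() < 0.25
        flip_b = rng.random() < 0.25
        out_a = hidden ^ flip_a
        out_b = hidden ^ flip_b
        # Local recording rule: a flipped station records only 20% of the time.
        rec_a = (not flip_a) or rng.random() < 0.2
        rec_b = (not flip_b) or rng.random() < 0.2
        agree_all += (out_a == out_b)
        if rec_a and rec_b:
            total_sub += 1
            agree_sub += (out_a == out_b)
    return agree_all / n, agree_sub / total_sub

full, sub = simulate(200000)
print(full, sub)   # full sample near 0.625; recorded subsample near 0.88
```

The analysis that looks only at mutually recorded pairs sees a much stronger correlation than actually exists in the full ensemble, which is exactly the conditional-probability effect described above.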
A little later I looked at David's web site again and found a message: 'Congratulations to Ray Tomes, the first person to understand what I have been saying for 2 years.' I was amazed: what persistence, in the face of so much abuse! From what follows I gather that David found a second person who understood, and who, moreover, gave references to other people who had said the same thing. I have found Ratko Tomic to be very knowledgeable, and he has also given me references that support me on issues where lots of people disagreed with me.
Subject: Bell’s Inequality Hoax
From: Ratko V. Tomic
To: David A. Elm
Date: 16-Oct-96 09:16
I’ve watched your debate on Bell’s theorem, here and on the internet. You’ve surely taken a bit of verbal abuse.
The good news is that you’re right: the approach you suggest does show why local hidden variables have not been excluded from QM by any empirical fact so far. In all EPR experiments the detection results are rejected based on the _obtained values_, which is the same kind of procedure as selecting a poll sample based on replies to the poll questions; with such post-selection you get a biased sample that can “confirm” whatever the pollster wishes (e.g. the so-called “likely voter” screen).
In order to reach the desired conclusion (the impossibility of local hidden variables, LHV), some experimenters outright postulate that their sample is “unbiased” regardless of the post-selection being based on detection results. Others wrap their assumptions in some euphemistic technical phrase (e.g. the “non-enhancement hypothesis”) which keeps the uninitiated unenlightened as to the nature of the selection.
The problematic rejection (relative to the locality question) in EPR experiments is the rejection of detection events which _don’t fit_ the result template of Bell’s theorem, which requires that exactly one detector triggers for each of particles A and B. This rejection is obviously a problem, since Bell’s theorem _doesn’t follow_ if other types of detection events are allowed (such as when only an A or only a B detector triggers, or when two A detectors trigger, etc.).
The “bad” news is that your solution has been around for over a quarter century; for example, a 1970 paper by Philip Pearle, “Hidden-Variable Example Based upon Data Rejection” (Phys. Rev. D 2, p. 1418, 1970), mentions an even earlier similar solution by Wigner. These types of models have since evolved under the name VDP (variable detection probability) interpretations, and a good overview of their results can be found in the monograph “Quantum Mechanics Versus Local Realism”, edited by F. Selleri (Plenum Press, 1988, ISBN 0-306-42739-7). You can also find a few articles with similar solutions in the QM e-print archives (http://xxx.lanl.gov/e-print/hep-qm). Unfortunately, since these interpretations lack the ‘quantum mystique’ of the Copenhagen, Many-Worlds, Pilot Wave,… interpretations, they haven’t received much popular publicity (and who would dare ruin the whole quantum mystery industry with some common sense).
In the monograph you’ll find, among others, plain classical models (by A. O. Barut) of a particle with a magnetic moment in a Stern-Gerlach magnetic field which reproduce the EPR correlations. The key to these models is that the beam-splitting and detection outcomes of the EPR measurement _depend_ on the values of the classical (or hidden) variables, which is perfectly normal in any classical model (what else could the outcome depend on, anyway?). As long as one rejects at least 17 percent of the single detection events (even allowing for a perfect polarizer), an LHV model is a valid solution. Realistic LHV models (those based on some physical model capable of reproducing all spin/polarization measurements, not only Bell’s inequality) require 36 percent rejection. Note that in the most-quoted Aspect experiment the data rejection was over 99.5 percent (since the cascades he uses have three-body kinematics, 2 photons + atom, where the atom can carry momentum, so the photons don’t fly in exactly opposite directions). No experiment has come even close to disproving LHV as the underlying mechanism of QM.
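The flavor of these rejection-based models can be shown in a small Monte-Carlo sketch. This is my own toy construction in the spirit of Pearle-type data-rejection models, not the model from Pearle's paper; its rejection fraction (about 12-13% of singles) is a property of this sketch, not the 17% figure quoted above. Each pair carries a hidden angle, outcomes are deterministic local functions of it, and a detector simply fails to fire, by a purely local rule, when the hidden angle lies too close to the analyzer's "dead zone". On the full sample the CHSH form of Bell's inequality sits exactly at the local bound of 2; on the post-selected coincidences it climbs well above 2.

```python
import math
import random

def correlations(trials, delta, settings, seed=1):
    """Local hidden-variable model with local data rejection.

    Each pair carries a hidden angle lam, uniform on [0, 2*pi).  The outcome
    at an analyzer set to angle a is sign(cos(lam - a)).  A detector fires
    only if |cos(lam - a)| > sin(delta), so hidden angles within delta of the
    analyzer's orthogonal directions are locally lost (delta = 0: no loss).
    Returns E(a, b) estimated over mutually detected pairs only.
    """
    rng = random.Random(seed)
    cut = math.sin(delta)
    E = {}
    for a, b in settings:
        s = kept = 0
        for _ in range(trials):
            lam = rng.uniform(0.0, 2.0 * math.pi)
            ca, cb = math.cos(lam - a), math.cos(lam - b)
            if abs(ca) > cut and abs(cb) > cut:   # both detectors fired
                s += (1 if ca >= 0 else -1) * (1 if cb >= 0 else -1)
                kept += 1
        E[(a, b)] = s / kept
    return E

deg = math.pi / 180.0
a1, a2, b1, b2 = 0.0, 45 * deg, 22.5 * deg, 67.5 * deg   # standard CHSH angles
settings = [(a1, b1), (a1, b2), (a2, b1), (a2, b2)]

def chsh(E):
    return E[(a1, b1)] - E[(a1, b2)] + E[(a2, b1)] + E[(a2, b2)]

S_full = chsh(correlations(50000, 0.0, settings))         # all events counted
S_cut = chsh(correlations(50000, math.pi / 16, settings)) # coincidences only
print(S_full)   # near 2.0: the local bound, never exceeded on the full sample
print(S_cut)    # well above 2 once rejected events are dropped
```

The model is manifestly local (each side's outcome and detection depend only on its own angle and the shared hidden variable), yet the post-selected statistics "violate" the inequality, which is the point being made about the EPR experiments.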
In all experiments there has been a tradeoff between detection efficiency and beam-splitting objectives. While it is easy to push detection efficiency above the 83% bound by going to energetic enough particles (as in neutron EPR experiments or e+ e- annihilation), you then lose even more on beam resolution, since the magnetic-moment interaction becomes negligible compared to the orbital momentum, the + and - spin beams overlap too much, and the inconclusive data get rejected on that basis (as opposed to failed detections in the atomic experiments). In the annihilation experiments the gamma photons, while easily detectable beyond 83% efficiency, cannot have their beam split using polarizers, since the polarizer’s lattice spacing is too large compared to the wavelength of the photons. So Compton scattering is used to split the beam, which again has poor sensitivity to the polarization; the + and - beams overlap too much, and the net data rejection is again far too large.
The ‘quantum mysterians’ will insist that it is unlikely that better detector efficiency will violate quantum mechanics. This is a straw-man argument, since a violation of QM is not the only possibility. It is perfectly possible that the nature of matter (the particle and interaction constants available in nature) precludes a _net_ efficiency (combined from detectors, splitters, and sources) that would reach the 83% LHV boundary.
Subject: Bell’s Inequality Hoax
From: Ratko V. Tomic
To: David A. Elm
Date: 28-Oct-96 04:37
> In the half silvered mirror test or the two slit experiment, there *IS*
> something going on that we don't fully understand. It's as if you can pull
> a single particle into pieces and yet only get one part of it to react
> with the measuring devices. This IS very spooky because we don't know the
> mechanism for this 'wave collapse'.
The half-silvered mirror experiment is explicable classically (e.g. in semiclassical or stochastic electrodynamics). Namely, experiments with photons are fully consistent with a picture of a wave packet splitting in two at the mirror, with each half propagating and triggering its own detector (with reduced probability compared to the pre-splitting wave) independently of the other packet; no hypothesis of instant wave collapse is necessary to explain the experimental detector counts.
There was a heated controversy in the 1950s when Hanbury Brown and Twiss experimentally discovered what appeared to be photon splitting on a half-silvered mirror (the discovery had a strongly negative effect on the careers of the discoverers, since it looked like heresy to the quantum orthodoxy, even though it was easily explainable via classical electrodynamics). Eventually an explanation was found within orthodox quantum optics and labeled “photon bunching”: the photon doesn’t split, but photon wave packets of finite span must contain photons in bunches (see E. M. Purcell, “The Question of Correlation Between Photons in Coherent Light Rays,” Nature 178 (1956), pp. 1449-50). There is also a “photon anti-bunching” effect, which occurs when the two packets are in a “mixed” (as opposed to “superposed”) quantum state (a “mixed” state occurs for the EPR photon or a photon going through a polarizer; a “superposed” state occurs at a half-silvered mirror or double slit; Clauser demonstrated photon anti-bunching in 1975 using his EPR setup). As noted in Purcell’s paper, such a mixed state will not show the bunching effect. But that, too, is consistent with the classical picture (semi-classical radiation theory), since the mixed state (described via the statistical operator in QM) behaves statistically as a classical ensemble (i.e. it can be understood as only one packet emerging from the polarizer in each ensemble instance, unlike the half-silvered mirror, where in each instance two packets emerge from the mirror).
But regardless of the quantum-optics explanation of the Hanbury Brown-Twiss effect, the fact remains that, purely experimentally, the two wave packets leaving the half-silvered mirror behave classically: each wave packet triggers (or fails to trigger) its own detector independently of what happened at the other detector. When one detector triggers, the chance of the other detector triggering does not change at all (much less drop to zero, as a non-physical “process” of wave collapse would suggest). In fact, there is no experimental fact demonstrating the infamous instant wave collapse of QM (there is, of course, a continuous change of the size and shape of a wave packet, as described by the Schroedinger, Dirac,… equations). The “instant wave collapse” is interpretative folklore without any basis in either the QM (or quantum field theory) formalism or experiment. The only reality in which it occurs is gedanken “experiments.”
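The independence claim has a simple operational form. In a toy model (mine, purely illustrative) where each incoming packet splits into two half-packets that trigger their detectors independently, the coincidence rate is just the product of the singles rates; a collapse-style anticorrelation would instead drive it toward zero.

```python
import random

def mirror_run(n, p_half=0.3, seed=0):
    """Classical wave-packet picture of a half-silvered mirror.

    Each of the two half-packets triggers its own detector independently
    with probability p_half (an assumed per-detector efficiency).
    Returns (singles rate 1, singles rate 2, coincidence rate).
    """
    rng = random.Random(seed)
    n1 = n2 = n12 = 0
    for _ in range(n):
        d1 = rng.random() < p_half   # detector 1 fires on its half-packet
        d2 = rng.random() < p_half   # detector 2 fires, independently
        n1 += d1
        n2 += d2
        n12 += d1 and d2
    return n1 / n, n2 / n, n12 / n

r1, r2, r12 = mirror_run(200000)
print(r1, r2, r12)   # coincidence rate is close to r1 * r2, not zero
```

Comparing the measured coincidence rate against the product of the singles rates is exactly the kind of check that distinguishes this independent-triggering picture from an instant-collapse picture.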
Therefore, the situation with the half-silvered mirror (or double slit) is similar to the EPR situation: when the actual detector counts and their timings are examined (_including_ the “rejected” counts and the “bunched” counts), the full statistics are always explicable via (or at least consistent with) local causes and effects.