I am writing a compiler. But to some extent, a compiler writes itself. It just needs someone to do the monkey business of typing it in, which is me. It really is an abusive relationship. So, to be clear: I write, and meanwhile just end up thinking about circuit minimization.

A circuit consists of terms, a term describes a combination of a number of state spaces -at least, that is how I like to think about it,- and a state space describes an exponential number of states. So, the only interesting things are the terms. What to do?

Again some silly idea: at every point, insert a new term using E = ite(x, E, E) and rewrite part of the term below it. Which means you get N points at which to insert one of the terms below it; do some minimization, and see if the term shrinks. Or just rewrite and take the minimal term. If P != NP, it can't shrink in general, of course, because that would lead to a deterministic algorithm. It has prohibitive complexity, and you might end up just factoring out states one by one, but still, interesting. Yeah, I know, a definite 5+ on the Richter scale of weirdness.

Anyway, if anyone reads this blog and isn't interested in boring stuff, the picture shows a light storage device. It collects light by day, emits it at night. I want one!

Gotta get me one of these...


Can't... Count...

A brief thought on semi-tractable decision tree minimization; for a moment I assume that a minimal circuit equals a minimal decision tree. It just popped into my head, I am not too confident about it, but thought it would be worthwhile to write down anyway.

Circuit minimization deals with the question: what is the minimal circuit for a boolean function f: {0,1}^n -> {0,1}? In general, the question is considered to be intractable since the algorithms probe the size-2^n state space. I looked at some solutions, which looked like kludges to me. But I am bad at understanding things I don't understand; that seems to be true for a lot of people.

How would I go about it? Well, to distinguish between all vectors, you need to find the bit which halves the state space best. You could try that by counting the {0,1} x {0,1} relation between the value of a bit and the result of the function, for each bit. By factoring out bits, you could derive the minimal circuit. And there you see minimal circuitry is related to the ability to count - both intractable.

Counting is prohibitive directly on the state space, but circuits distinguish between symbols of the form v = v0 ∧ ¬v1 ∧ .. ∧ vn, a conjunction of |v| variables where a variable may be negated, and such a symbol encodes 2^(n-|v|) states of the state space. That number can be decomposed and stored directly.

Let's rephrase the question to minimization of small Nand terms Φ(v), which only distinguish between small numbers of symbols. Thus: how to minimize f: Φ(v) -> {0,1}? Well, the same procedure could be tried. Or one could start off at the bottom and build minimized decision trees. Trees which are the combination of two other trees under a logical connective could be minimized by recursively probing the square of all endpoints of the trees. But that leads to an intractable algorithm.

A small example, first an informal description of the algorithm.

For a formula Φ depending on {v0, .., vn}:
   0. If Φ consists of one variable or a constant, return Φ.
   1. Count the correlation between each variable vi and the formula Φ 
      with sign 1.
   2. Count the correlation between each variable vi and the negation of the formula ¬Φ 
      with sign 0.
   3. Select either the formula or its negation, whichever describes the 
      smallest space, Ψ = b ⊕ Φ.
   4. Select the variable vk which best splits that state space in half.
   5. Recursively determine the terms for Ψ[vk=1] and Ψ[vk=0].
   6. Return b ⊕ Ite(vk, Ψ[vk=1], Ψ[vk=0]). 
The algorithm for counting is trivial. It takes a DNF and a sign; DNFs can be obtained by SAT solving the term. For each conjunction v and for each variable occurring in it, the entry for that variable and sign is increased by 2^(n-|v|). For each variable not occurring, both it and its negation are increased by 2^(n-|v|-1), given the sign.
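To make the counting concrete, here is a minimal Python sketch of the procedure, assuming a DNF is represented as a list of conjunctions, each a dict mapping a variable to 0 or 1 (that representation, and the names, are my own choices for illustration):

```python
def count_table(dnf, n, names):
    """For each (variable, value) pair, count how many of the 2^n states
    covered by the DNF set that variable to that value. A conjunction v
    covers 2^(n-|v|) states; a variable absent from v is 0 in half of
    those states and 1 in the other half."""
    counts = {(x, b): 0 for x in names for b in (0, 1)}
    for term in dnf:                        # term: dict var -> 0/1
        weight = 2 ** (n - len(term))
        for x in names:
            if x in term:
                counts[(x, term[x])] += weight
            else:
                counts[(x, 0)] += weight // 2
                counts[(x, 1)] += weight // 2
    return counts

# Phi = ¬a ∨ (a ∧ b): ¬a covers two states, a ∧ b covers one
phi = [{'a': 0}, {'a': 1, 'b': 1}]
print(count_table(phi, 2, ['a', 'b']))
```

Run on ¬a ∨ (a ∧ b), it reproduces the counts of the worked example below: two for ¬a, one for a, one for ¬b, two for b.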

Note that decision trees encode DNFs, and the combination of two DNFs under most boolean connectives has at most squared complexity.

Let's try it on a(ba). This decomposes into ¬a ∨ (a ∧ b). The algorithm starts off with empty tables. The table a_s holds the counts for a variable a and a sign s; the sign states whether the elements of the proposition or of its negation are counted.



After processing ¬a, two occurrences of ¬a and one occurrence each of b and ¬b are counted.



After processing (a ∧ b), one occurrence each of a and b is counted.



The algorithm proceeds with counting the negation of the formula of which the DNF is (a ∧ ¬ b).



The negation of the state space is selected, since the state space with sign 0 totals one element, the other three. The algorithm derives ¬ite(a, ¬b, 0), which trivializes to a(bb).
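The derivation can be machine-checked. A small sketch, reading juxtaposition as Nand as elsewhere in these posts:

```python
from itertools import product

def nand(x, y):
    # the single operator: Nand
    return 1 - (x & y)

for a, b in product((0, 1), repeat=2):
    original  = nand(a, nand(b, a))   # a(ba), i.e. ¬a ∨ (a ∧ b)
    minimized = nand(a, nand(b, b))   # a(bb), i.e. ¬ite(a, ¬b, 0) = ¬a ∨ b
    assert original == minimized == (1 - a) | b
print("a(ba) and a(bb) agree on all four assignments")
```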

Counting is linear in the DNF of Φ. The algorithm also splits the DNF on every recursion, so its complexity seems to be the number of variables times the size of the DNF, which may be exponential, but which for certain decision trees may be tractable. Hence the semi-tractable part.

Guess what remains is determining all the mistakes I made. It looks way too simple not to be known.

012910: It might give small trees, but not minimal circuits. Shared variables among trees are not factored out. Also, it doesn't take into account that the left-hand side of a circuit may be expressed in terms of factors of the right-hand side.

This mind never stops.


Technology at an Unbelievable Price

When I read that Apple introduced a new device, the 'iPad,' at an unbelievable price, I thought it must be irony. They make quality products, but are hardly known for their competitive pricing. Still, at a starting price of five hundred bucks, you can own a tablet -a computer consisting of one touchscreen- built by Apple, which is always good for your coolness attribute.

The specs: 0.5 inches thin. 1.5 pounds. 9.7-inch IPS display, full capacitive multi-touch, 1 GHz Apple A4 chip, 16, 32, or 64 GB flash storage. Extensions and add-ons: 3G, dock, keyboard. Best of all, 10 hours of battery life.

It's smaller and lighter than any netbook, and just looks like an oversized iPhone. What they really did right is running a variant of the iPhone OS on top of it instead of Mac OS. Instead of point-and-click you can now touch, scale, wave, prod, and gesture around applications, folders, photos, puppets, which is exactly the right thing to do on a tablet. Yes, you can type, but that remains a kludge.

And that's the thing: from the start I disliked Windows, because of its, well, windows. It's all about pixel real estate, and I always want my application to hold the maximum number of them, preferably almost the whole screen. My MacBook Pro has a functional multi-touch pad: why am I still bound to a menu and windows, where I repeatedly need to point at a few pixels, when a wave and a click could easily do the same?

Point-and-click? The same future as VHS.


Susskind, 't Hooft and Billiards

Susskind, in his lectures at Stanford, made the bold claim that the universe has a unique future and a unique past. Moreover, his view was that both the future and the past can be determined from any given point in time. A reasonable view for a physicist, and it follows our perception of time as linear. We may not hit rewind, still, and play, but we experience it as a movie.

If I read 't Hooft correctly, who is developing an alternative to QM, then he believes that the future can have multiple pasts, i.e., two states may converge to one new state. There's loss of information. An equally bold claim, but somewhat predicted by mathematics. Multiply or add two numbers and you can't uniquely reconstruct the original values.

It seems the problem boils down to a game of billiards: is it possible to hit the balls such that the new position doesn't tell what the original position was? Often, it would seem so, but there's a way out in the sense that we could consider the state of the whole room, not only the table.

Anyway, in case of doubt, I suggest they ask Ben Davies.

Just kidding here, I don't know anything about physics. Now I am really going back to programming...



Why does SUBSET-SUM, sometimes, trivialize so fast? Say S = {a, b, ... } is a set of integers of which a subset should add up to d. Then I can represent each integer, say a, with its bitwise encoding a_n, .., a_0, and you end up with (a bit confusing, but c is the carry, an expression in terms of the variables of the following column):

    c_n  c_n-1 ... c_1 c_0 
    a_n  a_n-1 ... a_1 a_0 . s_0
    b_n  b_n-1 ... b_1 b_0 . s_1
    d_n  d_n-1 ... d_1 d_0

Now, a, b, .. and d are constants, so if you fill in the zeroes and ones, you'd end up with something like:

    c_n  c_n-1 ... c_1  
    s_0   0   ...  s_0  0
    s_1  s_1  ...   0  s_1
     0    1   ...   1   0

Now you can manipulate an s_i out by replacing it with the xor of the rest of the column, the digit, and the carry - especially when taking into account the carry expression and the fact that you can swap rows. Guess I should read up somewhat. [Abraham D. Flaxman and Bartosz Przydatek, Solving Medium-Density Subset Sum Problems in Expected Polynomial Time.] Right, there's a whole folklore around this problem; the article was on mod M problems with medium density, a probabilistic algorithm with an expected running time.
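The column view is easy to sanity-check. A sketch for two rows -a = 5 selected by s0 and b = 3 by s1- with an n-bit ripple carry; the relation d_j = a_j·s0 ⊕ b_j·s1 ⊕ c_j per column is what allows an unknown to be rewritten as the xor of the rest of its column (the concrete numbers here are mine):

```python
n = 4                                # bit width, enough for 5 + 3
a, b = 5, 3
for s0 in (0, 1):
    for s1 in (0, 1):
        carry, d = 0, 0
        for j in range(n):           # one column at a time
            aj = (a >> j & 1) * s0
            bj = (b >> j & 1) * s1
            dj = aj ^ bj ^ carry     # the sum bit is the xor of the column
            carry = (aj & bj) | (aj & carry) | (bj & carry)
            d |= dj << j
        assert d == a * s0 + b * s1  # the columns add up correctly
print("per-column xor plus carry matches integer addition for all selections")
```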

Tinkering... It's not too exciting...


AKS, Pascal, Primality and Subset-sum

Something in the back of my mind nags about the square of Pascal, primality, and subset-sum. I read some of Scott Aaronson's lectures, which got me thinking again about irreversibility, XOR-SAT, linear algorithms, BPP=P, and BQP.

Why is testing primality on the nth line of Pascal's square seen as exponential? Looks cubic, but prohibitive, to me. Missing stuff. Having a look at Karatsuba and Strassen again.

Stopped programming for a while and am looking at what gives if you simplify SUBSET-SUM by manipulating out unknowns. Guess I need a hobby.

Quantum dump, while programming.


It Doesn't Fit

Meme of today: How much information fits into a digital circuit which is of squared form?

If I can combine two minimal terms to derive a minimal term, then that gives a bottom-up strategy for solving instances of problems. How hard is it to check you're near minimal, given the number of operations performed?

It becomes interesting when you look at the encoding of SUBSET-SUM, which I trivialize by assuming the numbers and the set have the same dimension n. In the digital encoding you end up with a 'square' term, corresponding to the space-time complexity, with width and breadth cn, and depth therefore 2dn. FACTOR looks similar. (Note that I am looking at propositional formulas encoding instances.)

Now we look at what a variable means. Say we have two, a and b; that gives 2**(2**2) functions, thus we should get 16 different functions. For a variable, you can assume it encodes a mapping on the state space. For example, a encodes 0011 and b encodes 0101. Similarly, (aa) -we only have one operator, so I omit it- encodes 1100, and (ab) encodes 1110. My favorite, (aa)a, see the logo/favicon, encodes a tautology. Enumerating the terms eventually gives all functions.
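These encodings are cheap to compute. A small sketch; the helper encode and the dictionary-based terms are made up for the illustration:

```python
from itertools import product

nand = lambda x, y: 1 - (x & y)

def encode(f, names):
    """Truth table of a term as a bit string, one bit per assignment."""
    return ''.join(str(f(dict(zip(names, v))))
                   for v in product((0, 1), repeat=len(names)))

names = ['a', 'b']
print(encode(lambda e: e['a'], names))                              # a     -> 0011
print(encode(lambda e: e['b'], names))                              # b     -> 0101
print(encode(lambda e: nand(e['a'], e['a']), names))                # (aa)  -> 1100
print(encode(lambda e: nand(e['a'], e['b']), names))                # (ab)  -> 1110
print(encode(lambda e: nand(nand(e['a'], e['a']), e['a']), names))  # (aa)a -> 1111
```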

There is only a squared number of operations performed, and the depth is also bounded, so the maximal information in the term, and the minimal term size, is, as far as I can see, the Shannon density? Guess it's standard complexity theory; it's been a while.

012610: Thought it over; I defined a tautology here. If you define the 'Shannon density' as the minimal term for a proposition, then of course everything sticks. For FACTOR, the encoding of a product is quadratic in time-space, but the minimal term size -of a concrete instance- is upper bounded by the number of divisors (up to the square root) after manipulating the term and algebraically removing one unknown.

012610: Perfect, I found the bound again on the number of divisors: the divisor bound asserts that, as n gets large,

    d(n) ≤ C_ε n^ε   for every ε > 0,

or the more precise bound

    d(n) ≤ n^(O(1/log log n)) = exp(O(log n / log log n))

012610: Of course, it's rather easy to describe an exponential number of bit strings with a term recognizing bit vectors. So the answer is: quadratic information, even though a term may describe an exponential number of symbols.

012710: Oh, right, a variable encodes an exponential number of bit vectors. A term encodes an exponential number of paths to a variable. Variables can occur on even- and odd-length paths. Can we hope to compress along those paths? I think I tried that; it's And-Or tree compression, and close to a variable-picking heuristic from SAT solving.

(Great, in a day I went from a logarithmic (which I remembered), to a linear, to an exponential bound on the Shannon density of divisors. Messy. Ah well, it's been five years since I looked at this.)

Oi, Shannon, you home?


An Old Strange Observation

Assume FACTOR and SUBSET-SUM are independent of the digital base you choose for their constituents. Then why would they have different complexity? It's a, meaningless, observation that FACTOR just looks harder than SUBSET-SUM in its digital circuit form.

Let subsum({5,3,4}) = 7 be the question whether the sum of a subset equals 7. Now, that is equal to the question: is there a vector v such that (5,3,4)·v = 7? We could encode that digitally such that <5><3><4><v> = <x><7><y>, where the constants are zero padded and the rest are unknowns. Now, a blackboard multiplication:

            <5><3><4> . v_i
         <5><3><4>    . v_j 
      <5><3><4>       . v_k
      < x  ><7>< y  >

Where the bits v_i..v_k are part of v. There seems to be some structural relation between FACTOR and SUBSET-SUM; if you could divide out unknowns, NP would be easy.

So, more precisely, FACTOR equals <x><y> = <c>, where c is a constant, and SUBSET-SUM -apart from its trivial encoding- also reduces to <c><v> = <x><d><y>, where c and d are constants and v is intermixed with zeroes. Great, now what? It feels like the same problem.
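The <c><v> = <x><d><y> form can be played with directly. A sketch, with a base B standing in for the zero padding, chosen large enough that no carries leak between digits (the concrete base and set are my choices):

```python
from itertools import product

B = 100                              # padding base; digit sums never exceed 12
c = 5 * B**2 + 3 * B + 4             # the set {5,3,4} written as one constant

for v0, v1, v2 in product((0, 1), repeat=3):
    v = v2 * B**2 + v1 * B + v0      # the unknown 0/1 digits, intermixed with zeroes
    middle = (c * v // B**2) % B     # the <d> digit of <x><d><y>
    assert middle == 5 * v0 + 3 * v1 + 4 * v2
    if middle == 7:
        print("subset summing to 7:", (v0, v1, v2))
```

The only hit is (v0, v1, v2) = (0, 1, 1), i.e. 3 + 4 = 7: the middle digit of the product really is the subset sum.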


Silliness... How did XOR-SAT work again? Right, the carry cannot be expressed in XOR form.


On Conway-Kochen

The free will theorem of John H. Conway and Simon B. Kochen states that, if we have a certain amount of "free will", then, subject to certain assumptions, so must some elementary particles. The proof of the theorem relies on three axioms.
  • Fin: There is a maximum speed for propagation of information, not necessarily the speed of light. This assumption rests upon causality.
  • Spin: The squared spin component of certain elementary particles of spin one, taken in three orthogonal directions, will be a permutation of (1,1,0).
  • Twin: It is possible to "entangle" two elementary particles, and separate them by a significant distance, so that they have the same squared spin results if measured in parallel directions. This is a consequence of, but more limited than, quantum entanglement.
The theorem states that, given the axioms, if the two experimenters in question are free to make choices about what measurements to take, then the results of the measurements cannot be determined by anything previous to the experiments. Note that this is a weaker claim than the anthropomorphic claim, this post only discusses free will.

Cool, what are we looking at?

Spin is not interesting: we have a particle with a certain behavior; it consistently shows a measurable property in two dimensions. Twin is entanglement; I'll come back to that. Fin is the assumption that information cannot travel faster than some maximum speed.

All axioms are true, at least according to quantum mechanics. The only silly thing is that 'entanglement' is nothing more than a mathematical property, and 'a state space collapse' is nothing more than a mathematical action. It's entirely similar to: if I have 'x = 3 + y,' then determining a value for 'x,' or 'y,' immediately determines the other side of the equation. Einstein and Bohr had fervent debates about this; Bohr decided that the equations just work, a pragmatic argument. A lot of philosophers unfortunately decided that 'equations' are related to the world, and built 'free will' arguments on that.

Casually, there is a lot wrong with the free will theorem. Free will is normally associated with nondeterminism, and that's where the argument immediately goes wrong. Nondeterminism cannot be observed. If I give you a coin, it is impossible to state anything probabilistic about the outcome of what you give back to me, except for the fact that it'll be heads or tails. This is essentially different behavior from a coin flip, which follows stochastic rules. So say we have a nondeterministic system E which forces a system P into a state. It remains impossible to observe whether system P is a deterministic or a nondeterministic system, since nondeterminism cannot be observed. We will never know if P flips or turns a coin. Thus the implication -free will for the experimenters plus the axioms implies free will for the particle- cannot be true. At least the anthropomorphic claim cannot hold, even given all the axioms.

Quantum mechanics predicts a 50%/50% distribution of spin up or down, and 100% correlation at two sites. This cannot be nondeterminism -since that cannot be observed- it is stochastic behavior. Stated differently, if particles had free will, we should observe traces where particles just decided to be up for no apparent reason. But if we have stochastic behavior, then the underlying model can be expected to be fully deterministic, by Einstein's 'God doesn't throw dice' argument. Assuming a die leads to a world where some guy with a beard does something when we observe something, and also keeps the distribution in order. Moreover, now the anthropomorphic argument leads to the immediate conclusion that we don't have free will, which is irrelevant, since by the above reasoning it was flawed anyway.

So what do we have? A theorem where the implication cannot hold, the axioms are debatable, and the conclusion contradicts experiment. Great.

I didn't know about Conway-Kochen, this is the condensed result of a discussion over at LtU, for which I should thank the other contributors.


Back in Nand Land

Stopped for a while on QM; it's either the work of a genius or a madman, and I don't really feel like thinking about it anymore. There are a lot of fuzzy conclusions even after looking at it for a short while. It works, but its interpretations and derived theorems sometimes are a mess.

A question I've been wondering about deals with minimal Nand terms. It is trivial to enumerate them. The simplest way is to just start off with a fixed number of variables, call that set S0; then S1 is the set of all combinations, up to isomorphism, of S0, and S2 the set of all combinations of S0 and S1, etc.

Now of course that leads to a numbering on terms, that is, if you consider a set to be a vector for a moment. A minimization strategy is a function mapping a number to a number of terms which encode the same state space. Note that calculating a specific number for any given term by computer would be prohibitive, since a term encodes one function out of 2**(2**n), and a term will have a number at least as big. The question is: can you do without the number, and what is the mapping which relates terms by trivial rewrites, and also, what are the in-degree and out-degree of terms? And also, what are the trajectories in Sx - how far, if at all, do you go up or down in x?
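A sketch of the enumeration for n = 2, tracking truth tables as bit masks so that terms equal under trivial rewrites collapse to one function (the leveling scheme is my simplification of the S0, S1, S2 construction above):

```python
n = 2
full = (1 << 2 ** n) - 1             # the all-ones table, a tautology
# variable i is true in exactly the assignments whose i-th bit is set
var = [sum(1 << k for k in range(2 ** n) if k >> i & 1) for i in range(n)]

seen = set(var)                      # S0: the variables a, b
counts = []
for depth in range(3):               # build S1, S2, S3
    seen |= {full ^ (x & y) for x in seen for y in seen}   # all Nands so far
    counts.append(len(seen))
print(counts)                        # Nand is complete: the count climbs to 16
```

For two variables this reaches all 2**(2**2) = 16 functions within three levels.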

P!=NP says there is no trivial strategy, or that trivial strategies are of exponential length, but still, you can study the relationships and 'possible' confluent functions. With a big computer, it gives an empirical study of the question.

The interesting thing is: if PRIME corresponds to a polynomial deterministic minimization, why wouldn't FACTOR?

It's been years since I was interested in this; it seems I forgot a lot. Some terms have polynomial rewrite strategies; I should look it up again.

Armchair programming and loose thinking.


Are Phenomena on Two Axes in Three Dimensions Strange?

There is a problem known as the Kochen-Specker theorem; this theorem is often trivially represented as the SPIN axiom. You can measure a spin-1 particle on three axes, and observe that the spin is parallel to the direction (+1), perpendicular to the direction (0), or anti-parallel to the direction (-1). The SPIN axiom states that the square of the spin is a permutation of (0,1,1): something is happening on two axes.

A particle being measured from three orthogonal directions

I really don't know what is going on here, but let's ask another question. Is it strange to observe something on only two axes in three dimensions? The answer: No!

Imagine you throw a ball against a wall, or bounce a fluid in space. If you look at it from three axes, where two axes are parallel to the wall, it will alternate between a flattened and a stretched circle on two of them, and on the remaining axis it'll look like a shrinking and expanding circle. If you assume that it is easy to observe the flattening/stretching, but not the shrinking/expanding, you end up with the SPIN axiom. If you assume that it can only vibrate aligned to the spin, you don't need to square. (In the picture you'd bounce in the direction of the 0-axis.)

Note that this is the reverse question, it says nothing about the paradox or the spin axiom. It is a model which just satisfies the axiom, nothing more. Though now I am wondering if people are actually measuring spin...

The essence of QM: it's damned hard to get periodic behavior out of linear transformations? Or just a gimbal lock?

A Scientific Test for Free Will

Associated with quantum theory is the observation that if a particle can remain in an unknown state until observed, that would give us free will. Now, it is assumed that from entanglement we could derive free will, since it can be observed, even over several hundred kilometers.

Does entanglement imply free will? No, because it is not a proof of any nondeterministic behavior. The "I don't know what is in a box until I look in it" kind of nondeterminism is invariant to whether you look into two boxes or one, entangled or not.

If anything, correspondence between two boxes shows determinism by Occam's razor, and from that a local relativistic causal world.

Is there a scientific test for free will/nondeterminism? Yes, it's trivial, with some hand waving. Take any process which is the direct translation of ( red | ( red | green ) ), a nondeterministic choice between red and the nondeterministic choice between red and green. By definition, that equals ( red | green ), the nondeterministic choice between red and green.

Say you build a device like that, run it a hundred thousand times, and it gives a different distribution when repeated; then I would accept that life is nondeterministic.

For the sceptic: no, there cannot be a conclusive proof of nondeterminism.

From Quantum Lambda Calculus to M&Ms to Free Will? Nice trip... Back to compilers.


On M&Ms

I have two M&Ms, one red, one green. I shake them, don't look at them, and put them into two matchboxes. I send one matchbox to Alice and one to Bob. Alice opens hers, collapses the state, and observes it's green.

Do the M&Ms have a choice in which color they are while in transit? Is there a 'spooky,' but physical, connection between them? I would say: only if we can exploit that connection and send messages between Alice and Bob; otherwise we are just observing statistical correspondence between undetermined, but absolute, states.

QM looks at what probabilistic relation exists between the shaker and the M&Ms. I don't see anything magical yet.

I prefer M&Ms over marbles since they taste better.


QE by Leonard Susskind

Some excellent videos by Leonard Susskind.

If you wanna know, you wanna know.


Why True Things are often Humbug

Assume x = y. Then x^2 = xy. Thus x^2 - y^2 = xy - y^2. Divide both sides by x - y, arrive at x + y = y. Since x = y, we have 2y = y. Divide by y, arrive at 1 = 2.

We violated a side condition -you cannot divide by zero- to arrive at the wrong result. Pretty nice workable theory though.

011610: Decided I am a complete nitwit when it comes to science and just asked the experts... Waiting: http://www.physicsforums.com/showthread.php?t=369914

This turned into nagging about something I shouldn't meddle in; this is food for physicists. I'll see what gives. Something completely different: C programming.

Why can't we call the Future?

I build a machine which sends a bit one second out of the future to itself. On 'true' the bit says a balloon in the machine is there; on 'zero' it is not. On 'true' the machine immediately sticks a needle into the balloon.

The Dopfer experiment is interesting; the only way out of the above reasoning is to see if God keeps the universe in a consistent state. Otherwise, given a time delay in the order of nanoseconds and an optical switch which switches between two (Heisenberg) observers in opposite states in the Dopfer experiment, you can derive a paradox. I.e., measure a particle below, derive in which of the two Heisenberg detectors the particle above should end up, and use the switch to let it end up in the wrong detector. Then again, I don't understand anything of quantum theory.

More interesting would be, of course, if we built the machine, let it switch between two states -coherence or not- and it either gives coherent or nonsensical behavior.

This thought train should end. Guess the Dopfer experiment is flawed, I don't know ****, or the theory is just incomplete and every physicist and his/her dog knows that. Btw, people like John Cramer are trying to build a phone on the Dopfer experiment; so far there is no result.


Just for completeness. This is the Dopfer experiment. First for starters:

The two-slit experiment is famous and confirmed. Repeatedly shoot a photon at two slits, and an interference pattern will emerge. Which is rather odd, since classically, if you understand that a photon is a particle, it should go through either one, and you'd expect to see two stripes. However, even when shooting one photon at a time, the photons will end up in the traditional collection of bands you'd expect of interfering waves. The only explanation: the photon passed through both, and interfered with itself.

Now, say you'd want to confirm that, and observe through which slit a photon went. The ridiculous but true observation is that when you place an observer at one slit, the photon will just behave classically and only two bands appear. Hence quantum theory: a theory where the photon is in an undetermined superstate, and observers, by observing, collapse it to a state.

The Dopfer experiment above is trivial: a photon is split; in the path above it is either observed or not, therefore collapsed to a particle or not; in the path below you'd expect an interference pattern or two bands. I didn't do any math on it; I assume others did, and QT predicts that result.

To me, the strange thing in the Dopfer experiment is the placement of the topmost observer before the double slit. Why would a photon in the path below know anything about what happened to a photon in the path above? [I am still reading; welcome to the world of quantum.] That's also called non-locality. Non-locality has often been proven in the sense that split photons are correlated, but in the end it is just a technical prediction of the theory. It is not confirmed that there's any form of real entanglement, i.e., some strange 'illusive' immediate non-relativistic connection between the residue of a split photon; that's what the Christian paper is about. One might measure correlation, but that doesn't imply entanglement in the quantum theoretical sense.

The Dopfer experiment is all hearsay. I cannot find the original document, and I don't know if she did exactly this experiment, but it is said to be confirmed. I don't believe it; then again, life is weird.

(A weird extension of the Dopfer experiment is that if you'd make the path above longer, the detector below would know the position of the detector above before it is placed somewhere. I.e., you can detect information from the future, which, by induction -coupling multiple devices- would give a phone line into the future, 'a phone to God.')

Okay, my interpretation. Is a photon a wave or a particle, or both, or one of the two before/after it collapses? I would say neither; it's just something we don't understand, and QT is a simplification which works in most circumstances, like Newton's classical mechanics works in everyday life. Entanglement doesn't exist, and neither does collapse of superpositional states. The above experiment is humbug, and the experiments in the video below, if anything, prove that entanglement doesn't exist. Put in softer terms: QT is right in its prediction, but wrong in its interpretation.

I am no physicist. I was just looking up some background on a paper on Quantum Lambda Calculus.

Interference Computer

I wonder whether they ever tried this: an interference computer where you use the fact that light behaves as waves to draw interference patterns of massively parallel computations. More simply, would it be possible to build logic gates which work on light? Say each variable has a frequency of 1, 2, 4, 8 times some base, and you observe the output of a computation on a boolean circuit by either observing the light emitted at the end, or the arising pattern?
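A toy of the frequency idea, a sketch only: each variable gets its own frequency, the 'output' is the superposition of the waves of the variables that are on, and reading off the result is correlating against each probe frequency (all names and numbers here are made up, and this models only the read-out, not gates):

```python
import math

RATE = N = 1024                                  # samples per 'observation'
freqs = {'a': 32, 'b': 64, 'c': 128, 'd': 256}   # 1x, 2x, 4x, 8x a base frequency

def emit(active):
    """Superpose the waves of the variables that are on."""
    return [sum(math.sin(2 * math.pi * freqs[v] * t / RATE) for v in active)
            for t in range(N)]

def present(signal, f):
    """Correlate with a probe frequency; significant power means 'on'."""
    power = sum(s * math.sin(2 * math.pi * f * t / RATE)
                for t, s in enumerate(signal))
    return power > N / 4

signal = emit({'a', 'c'})
print({v: present(signal, f) for v, f in freqs.items()})
```

Since the frequencies complete whole periods over the window, the probes are orthogonal and only 'a' and 'c' light up.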

Long Gui-Lu. General Quantum Interference Principle and Duality Computer. 2006. Commun. Theor. Phys. 45 825-844. doi: 10.1088/0253-6102/45/5/013

Nice to try on water first. Bubbles!


From Bell's Inequalities to Entangled Qubits: A New Quantum Age?

Also of interest, Anton Zeilinger, Experiment and the Foundations of Quantum Physics, Institute of Experimental Physics, University of Vienna. His holidays in the Canaries are interesting too. What happened to the Dopfer experiment?

Guess non-locality but no non-causality; how improbable.

011410: Oh great, it's all an illusion: http://arxiv.org/PS_cache/arxiv/pdf/0904/0904.4259v2.pdf. Seems Bell didn't get his types/dimensions right. Ouch: if there's no collapse, there's no quantum computer, just probability? Might boil down to 'interference = parallelism.' Sigh, if you live on a line, it's pretty difficult to observe a circle.

011410: I am with Joy Christian on this one. Either there's entanglement, and you can send information, or entanglement is an illusion created by observing 'higher-dimensional' objects. And since physicists are now working around the fact that there's no information being sent, it's Joy's view.

011510: Read a bit more; come to think of it, if the two-slit experiment showed anything, it's that the particle interpretation is an illusion too, and the same goes for the wave-only interpretation, so Christian might be wrong here. So what do we have, a traveling potential/wave/vibration? Then again, I cannot hope to understand this. (Looking at some weird science by Cramer; it seems like the observer is placed at the wrong site. I expected you'd want to catch it after the double slit? Which doesn't make sense either. The same holds for Dopfer; if anything, I would have expected her to have proven quantum theory wrong in its prediction in this particular case.)

011510: Rediscovering string theory without a physics degree and some high-school background is a bit too much for a day, back to compilers.

011510: Great conclusion for the end of the day, put into extremes: Bell is an idiot, I don't see how Christian can explain the two-slit experiment, Dopfer probably cheated -or she measured something which is really wrongly interpreted,- and Cramer shouldn't do science. Then again, there's a good chance I don't understand anything and we're calling Andromeda next year.

Just catching up. Ultimately, a 'Phone to God?' That is, the future? Unlikely, QM is flawed.

Man Gets Life In Order For 36 Minutes

JANUARY 9, 2010, ISSUE 46-01 of The Onion:

Man washing dishes

JACKSONVILLE, FL—Briefly overcoming a near-continuous streak of disorganization, area man Terry Oberlin, 37, got his life together for exactly 36 minutes, sources confirmed Monday.

According to family reports, Oberlin's bills for the month were paid, the living room was vacuumed, the dishes from dinner were all washed and put away, and the father of two was sitting in his favorite chair in the living room without a single thing in his life out of place.

"It was nice to get some chores out of the way," Oberlin told reporters later, acknowledging that for more than half an hour he experienced no regrets, despair, or frustration of any kind. "Felt really good."

The crucial worry-free period reportedly began at 7:50 p.m., when Oberlin took the garbage out to the curb and then returned to the house, where his back, which had been bothering him all day, finally cracked back into alignment. Upon entering his kitchen, he spotted a month-old magazine sitting on the counter where it didn't belong, and dropped it into the garbage.

At that precise moment, sources said, Oberlin achieved a state of total order in his life.

Witnesses indicated that upon entering into his relaxed state, Oberlin—who had no e-mails to respond to and was finally caught up with everything at the office—spent a full 90 seconds staring quietly at nothing in particular, and then approximately 8.5 minutes paging leisurely through the evening newspaper.

During this period, he did not once concern himself with his finances, his in-laws, or his dental coverage. And as his mind began to wander freely, he neither relived painful humiliations from his past, nor felt any anxiety about his personal shortcomings.

"He really seemed to be taking it easy," said wife Kay Oberlin, noting the pronounced absence of a deep furrow from her husband's brow. "He had his feet up on the ottoman."

She added that on Sunday her husband seemed greatly pleased to have found the time to change the oil in his car and mow the lawn, and that during dinner Monday evening he appeared relieved upon hearing the news that the wine stain on his good white shirt had come out in the laundry.

Oberlin spent the remainder of his temporary period of tranquility watching the first 26 minutes of House M.D.

Monday's incident marks the first time the 37-year-old has had his life in order in more than six years. In 2003, he experienced a moment of complete inner peace that spanned a brief minute and 15 seconds, lasting from the time Oberlin finally got around to removing the training wheels from his son Tyler's bicycle to the time the boy rode it off a curb, requiring a trip to the hospital for a few stitches.

"Luckily our insurance covered most of that," said Oberlin, recalling the accident and how it made him realize that, even though he had a decent health plan, his policy wasn't going to cover the cost of his children's orthodontic needs, and that if he wanted to send them to college he really needed to find a way to start contributing to their 529 funds while still keeping up with his mortgage. "And fixing the bike only took the better part of a Saturday."

Household sources reported that Monday's 36 minutes of perfect order came to an end when the phone rang and Oberlin picked up the extension in his living room. It was his mother-in-law, calling to say their family vacation plans would have to be changed, since her best friend Gwyneth had just been hospitalized.

"So then I had to deal with that," Oberlin said. "Which was fine, I'm not complaining. I expect I can catch the rest of that episode of House on a rerun sometime."

Feels Familiar



Just an N0.753 moment where N=Nihilism.

Understanding SAT

This is an old hobby, just skimming the content of this. Not sure I like the direction satsolving is going in. Basically, satsolving boils down to translating boolean terms to CNF and using heuristics on a DPLL solver to solve the problem. CNF is less familiar than DNF, which is more used in industry, but it is well known to complexity theorists, who often reduce from 3-SAT.

What is good about this research is that solvers are very fast, since CNF is very 'computer friendly.' What I find bad about it is that the CNF form of formulas often 'hides' what a solver actually does. I.e., the behavior of a satsolver is often better understood as a 'pebbling game' (not the mathematical term), a 'partitioning' of the original formula, or even clause rewriting, like resolution, on the original term.
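For a reference point, the DPLL core those heuristics sit on is small. A minimal sketch in Python; the clause representation (lists of signed integers, DIMACS-style) and the naive branching rule are my own illustrative choices, not any particular solver's:

```python
# Minimal DPLL sketch. A CNF formula is a list of clauses; a clause is a
# list of nonzero ints, where a negative int is a negated variable.
def simplify(clauses, lit):
    # Drop clauses satisfied by lit, strip the falsified literal -lit.
    return [[l for l in c if l != -lit] for c in clauses if lit not in c]

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    # Unit propagation: a one-literal clause forces that literal.
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        clauses = simplify(clauses, lit)
        assignment[abs(lit)] = lit > 0
    if not clauses:
        return assignment              # all clauses satisfied: a model
    if any(len(c) == 0 for c in clauses):
        return None                    # empty clause: conflict
    lit = clauses[0][0]                # naive branching heuristic
    for choice in (lit, -lit):
        model = dpll(simplify(clauses, choice),
                     {**assignment, abs(choice): choice > 0})
        if model is not None:
            return model
    return None
```

Everything interesting in a real solver lives in the branching line and the propagation loop; the rest is bookkeeping, which is exactly why the heuristics deserve to be studied on the original term rather than on this machinery.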

I have been interested in Nand-encoded terms for a long time, but gave it up after some sociologically induced headache. What is nice about those terms is that all basic propositional boolean terms in most algebras rewrite directly to Nand terms, growing only by a small constant factor. Moreover, a lot of 'normal' boolean axioms translate directly to a smaller set of axioms; there's even a proof that a single axiom suffices for the boolean algebra over Nand. Also, Nand-encoded terms 'maximize' sharing, since terms which look superficially different given Or's and And's may turn out to share a lot of structure.
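The rewrite itself is mechanical. A sketch of the translation, with my own tuple encoding of terms (purely illustrative, not any established representation):

```python
# Rewrite And/Or/Not terms to Nand-only terms. A term is a nested tuple:
# ('var', name), ('not', t), ('and', a, b), ('or', a, b), ('nand', a, b).
def to_nand(t):
    if t[0] == 'var':
        return t
    if t[0] == 'not':                 # not a  =  a nand a
        a = to_nand(t[1])
        return ('nand', a, a)
    if t[0] == 'and':                 # a and b  =  not (a nand b)
        n = ('nand', to_nand(t[1]), to_nand(t[2]))
        return ('nand', n, n)
    if t[0] == 'or':                  # a or b  =  (not a) nand (not b)
        a, b = to_nand(t[1]), to_nand(t[2])
        return ('nand', ('nand', a, a), ('nand', b, b))
    raise ValueError(t[0])

def ev(t, env):
    # Evaluate a term under an assignment of variables to booleans.
    if t[0] == 'var':
        return env[t[1]]
    if t[0] == 'not':
        return not ev(t[1], env)
    a, b = ev(t[1], env), ev(t[2], env)
    return {'and': a and b, 'or': a or b, 'nand': not (a and b)}[t[0]]
```

Each connective costs at most two extra Nand nodes, which is where the small constant factor comes from; with hash-consed subterms, the duplicated arguments above become shared structure rather than copies.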

At the heart of satsolving lies the P=NP question: satsolving is NP-complete, and it is commonly believed that P=NP is not true. I am one of the few people who actually believe it may be true, since there are no real reasons to assume otherwise; moreover, when it comes to basic questions, like minimal circuit size for boolean circuits, there are fundamental questions which are still unanswered and should be solved. Designing algorithms often feels like trying to reinvent Edmonds's blossom algorithm, a matching algorithm for graphs solving a problem which was assumed to be hard for a long time, until Edmonds formulated a polynomial answer. [1]

What is nice about satsolving Nand terms is that it is easier to formulate heuristics directly on terms. Ultimately, the question is whether there exists a heuristic which would solve most, if not all, terms polynomially by exploiting the structure of the term. Since there are polynomial paths which rewrite a Nand term to minimal size, I found it interesting to study those for a while - or study those on particular structures. [2]

When looking at satsolvers based on CNF -guess I skimmed a few, cough, thousand articles in that direction- one finds heuristics which are often just 'computer friendly' array algorithms and reduce to specific strategies on ordinary boolean circuits, and by obfuscating the problem, CNF satsolving sometimes leads to its own folklore. Like believing that there are borders which encode phase transitions between easy and hard, like a clause/variable ratio of 4.2-something, whereas it can easily be proven that both simple and very hard terms can be encoded at that border with exactly that ratio. Even if it hints at some result on randomly generated instances, I don't think it produces any fundamental insight.
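To be fair to the folklore, the claimed threshold is about one specific model: fixed clause-length random 3-SAT. A sketch of that generator (the 4.26 default is the commonly quoted threshold ratio; the rest is my own choice of parameters):

```python
import random

def random_3sat(n_vars, ratio=4.26, seed=0):
    # Draw m = ratio * n clauses, each over 3 distinct variables with
    # random polarities. This is the instance distribution the
    # phase-transition folklore is about.
    rng = random.Random(seed)
    clauses = []
    for _ in range(int(n_vars * ratio)):
        variables = rng.sample(range(1, n_vars + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in variables])
    return clauses
```

Nothing stops you from handcrafting a formula with exactly this clause/variable ratio that is trivially easy, or one that is very hard, which is the point above: the border says something about this distribution, not about hardness as such.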

My belief is that this research, even if there will always be a lot to be done to give better, faster provers of industrial value, is 'stuck' fundamentally - i.e., it will not generate new, possibly fundamentally better, provers which might turn out to be faster, or stronger, in the end, and which may even prove P=NP, or otherwise.

[1] I suspect the belief is actually a rather pompous sociological construction: since scientists are smart, and they didn't find an answer, it can't be true.
[2] For example, if PRIME is P, is there a 'loose informal' relation between the AKS algorithm, a propositional term, and a heuristic on it? Or, does/should it imply a polynomial heuristic?

A small joke should be inserted here.



Something which can only backfire. Criminals will never use a phone if it would incriminate them, and criminals will use phones to incriminate others. Senseless.

Weeding through EFF stuff.

EFF Heroes

Just an EFF xkcd sponsor ad.

Radiohead - Paranoid Android (after Marvin)

"The best conversation I had was over forty million years ago.... And that was with a coffee machine." - Marvin the Paranoid Android.

Oliver Koletzki feat Pyur - These Habits


Ad-Hoc Spying

Just some random thoughts which occurred to me. Trojan horses, back-doors, and exploits are pretty well known ways to break into a system. Let's use the following terminology. An exploit is some bug you can exploit remotely, like a buffer overrun in ftpd, or an insecurity in the domain checks of Javascript whereby cross-scripting over different windows lets you access local ports. A back-door is a remote functionality placed in the system through which you can gain access, for example, a hacked ftpd where the account 'devil666' gives instant access to the machine. A Trojan horse is something altogether different: it's part of a program or OS which actively seeks contact with the outside world, preferably undetected. An example would be your good old-fashioned key-logger built into some kernel module which searches for passwords and sends them out over ICMP.

Now, the above is pretty well-known 'old-skool' hacking. Question is: What can you gain extra by ad-hoc or wireless hacks?

Trojan horses fall into the worst category of hacks and are the most interesting, Trojan horses in the kernel being more severe, and Trojan horses in the hardware the worst category. For spying, owning the root account -what most sysadmins fear- is not even interesting; ad-hoc gossiping OS parts over wireless would be. I.e., it gives the ability to bypass firewalls and tethered lines locally. [1]

What about dynamically planting memory scanners for honey keys -a code you plant into a document such that it can be traced- and gossiping those to a remote server outside the firewall (FBI)? Or an Intel hack so you can shut down all processors in some environment by gossiping a key (US army)? Or gossiping credit-card information from a PC over Bluetooth to a phone and sending that out over the Internet through a browser cookie (black-hat)? Or just using hacked WiFi phones in the environment for building a remote login tunnel (old-skool hippy)?

A great deal of these hacks are technically possible, but would be too difficult at the moment. But in a few years, when wireless access between desktop/laptop, phone, and -what-do-I-care- a dishwasher will be considered normal, the door will be wide open.

[1] Some people are unaware how easy it is to place a backdoor in software. Placing a hardhack in hardware is almost as easy, since most processors are 'programmed' in a high-level language like Verilog, or VHDL, and subsequently post-processed. Bypassing industrial verification is pretty difficult, though, and placing a radio on a die, hoping that it works -a processor is pretty radio-unfriendly- and goes undetected, requires an EE with extraordinary skills. But still, doable.

A good case against gossiping protocols? One other thought: Why in God's name do I need proprietary software from KPN to access my USB dongle? Annoying...


Roof is on Fire!

Security is an interesting thought experiment. Again a post, but now for those whose paranoia level actually went through the roof.

Nothing provides security better than hardware. You can place all your trust in 'capabilities' and 'prime number factorizations,' but any black-hat OS programmer will happily code around those with a one-liner. A separate ring of company machines with a good hardware firewall gives good protection, like the set-up of the previous post. But you can tunnel through X11; I even have the software on my machine to do that. For a startup, of course, nothing beats just keeping the machines with your assets unconnected to the outside world.

Question is: What would break that? You would need to exploit the fact that some machines have wireless access, which could be done given either of the two following Trojan horses. One, your OS cannot be trusted: there are means to exploit the hardware through kernel drivers. Two, your hardware cannot be trusted: there are means to access the hardware through exploits.

One is easy, it would just take some hacker slipping Internet access into some module through slightly modified kernel drivers. I would give it a probability of around 40-70% that some modules have hacks.

For two, you'll need a totally different level of paranoia. Internet connectivity is one thing, but what about Bluetooth? Ad-hoc gossiping networks can be implemented in a few hundred lines of assembly; add code-division multiplexing into the mix, and you end up with a hack almost no-one can detect. [1]

Who would do the latter? You need a plot where say the CIA works together with Intel 'for the common good.' Unlikely, yes, or given 9/11, maybe. Impossible, no.

What can you do? Walking around with a soldering iron is an option. Building a big cage around your servers is another one. Living in a hole a hundred feet down in the dirt with your own electrical supply, feeding off worms, is a bit over the top I guess. But then, the latter is exactly what banks in the Netherlands do, except for the worms, of course.

[1] You could do this hack on the firmware/OS level too. The hack would be quite trivial, given enough access to hardware/firmware implementations. It basically involves gossiping around a new part of the OS through ad-hoc Bluetooth, or one square mm of CIA-owned Intel transistors.
Of course, to some people the idea might seem preposterous, and I don't believe it to be true either. But, when looking at possibilities: yes, in sensor networks they are now aiming at one cubic mm nodes, and a radio receiver on an Intel die would be like fitting a shoe into a baseball stadium.

Gave up on kernel modules like e100, Bluetooth, and Ham Radio.

Extra: Welcome to our brave new 'interconnected' world. In all honesty, it's far-fetched, but the mere fact that a phone works inside a bank should be assessed as a liability in the coming decades. On the flip-side, when it comes to exploits and Trojan horses, I didn't even start yet.


Securing IT

Httpd was running with file sharing turned on and without authentication? [1] Now that's a security breach if ever. Not sure about the rest; 'chkrootkit' and 'rkhunter' are apparently unreliable on Linux systems, so there's no way of determining whether the system is hosed. Apart from that, it's impossible to protect against an exploit in Mozilla, since that would go unnoticed. I.e., a Javascript/plugin exploit is a 'runtime' exploit: no files are changed on the system, and forensics can only be done on a complete memory dump.
Jailing Mozilla in a separate account would be a partial answer.

There is no way to determine if the system is broken, except for the fact that I don't like the number of outbound connections made, and some behavior of the machine doesn't seem okay. I sometimes play around designing my own server park for a small company, so far, it would boil down to:
  • A server with the OS with a HD with a big red button which says: 'READ-ONLY.'
  • A file server.
  • Company work machines, light clients.
  • A firewall, blocking everything except for X11/VNC.
  • A secured server for Mozilla, only to be used for that application.
  • A separate mail/webserver which handles mails through a http client.
  • A firewall which protects most ports, probably even blocks outbound https.
  • Windows machines.
  • A logger which spies on traffic going in and out.
I guess it boils down to rings of trust, where applications which interact with the outside world should run in a separate outer 'hardware' ring. And the change is that this now includes Mozilla.

[1] Webdav, file sharing for the masses. Yeah right, have javascript connect to the local webdav port on a machine?

Doesn't solve the USB stick, or the sysadmin, problem. A substantial number of your average medium-sized-company underpaid sysadmins are black-hats by nature, driven by 'Oh! You think your system is secure? Alright then..' and most of 'em can be found on freenet anyway, or some illegal IRC, exchanging whatever company info they feel like. But, it stops somewhere.



God, I might even be looking at Google/Ajax, or I might be looking at a compromised browser Google/Ajax hack. Guess it is starting to fall into the whatever category. ET phone home?

ET phoned home, system hosed. Httpd running at places it shouldn't. Installs of rootkit checks don't show up in history/logs. Login behaves erratically. Ah well.

For those not sufficiently paranoid: At one company I worked at, 5-10% of the machines were hacked continuously -so much for Windows or Linux- and there was a continuous practice of spying through rich-content mails. The latter is/was rather easy: instead of loading a picture, Javascript is loaded and the content of the mail/forward/reply is pushed back. Mails could be tracked throughout whole companies.[1]

For an exploit in Mozilla, you just need an exploit in HTML/Javascript/css or a plugin, and you can start pulling/pushing stuff to some Google account at will. On a last note, on the Internet, it's hardly a question of 'if,' but 'when.'

Actually, in my experience, for most companies, the biggest problem probably isn't even security but the sysadmin.

[1] A good reason to quit, btw.


Can you Trust your Friend, the Browser?

A post for the sufficiently paranoid, that is, aimed at system administrators.

Usually, sysadmins aim at securing their machines by making sure nobody can gain root access to them, since that would give outside users the privileges to do anything: install software, spambots, illegal data, spy on the machine.

But what about something as simple as a browser? A browser has user-level privileges, but can both connect to the outside world and run local commands. Hacking a browser gives an outside party all information on the surfing behavior of a person, the work he is doing, and the contacts he has. Moreover, since a browser accesses the outside world 'erratically,' it's hard to see it's compromised. One outgoing http connection to some machine, made infrequently, is hard to notice, and a browser has a big cache of temporary data where a lot can be hidden. Moreover, a browser is a large body of software; it consists of hundreds to thousands of packages, where any compromised package may mean a compromised browser. To an outside party, it may be more interesting to hack the browsers of a company, or al Qa'ida, than to gain root access to the machines.

And now for the paranoid plot. The hackers on Mozilla can normally be trusted, we can assume they are either teenagers, or your normal smelly hippy do-good nerd. But when it comes to trust: Do you trust the malignant hacker aimed at making money through spam? Or, for that matter, the Chinese, the Israeli, IBM?

One backdoor in the open-source project Mozilla, which may be as small as ten lines, would give, say, the Chinese, who have a substantial cybercrime division, spying access to each and every user who installed Firefox. And subsequently all data, or, through exploits, root access. That is not to say that IE can be trusted that much more, but still...

I guess if you would derive the probability from all probabilities of all possibly interested parties who subsequently also did it and got away with it, you'll end up with a chance bigger than one.


System is Compromised

I use an old web-browser, Galeon. Turns out it is compromised; at least, it might be? Sometimes I run netstat. When starting up, the browser connects to various machines; it seems someone is tunneling to my machine. Except for leaking information -of a maybe, hopefully, expensive compiler- and connecting to machines I might not like, I don't think a lot has happened.

It's a Unix machine, so people can do 'everything' on my machine remotely, and that's usually pretty bad. It happened before. Ten years ago, when I had a server with a broadband connection, scripted hacking attempts averaged about 3-10 a day, since outside hackers can only see it's a Unix machine and subsequently assume it's a company server, since they can hardly differentiate. But I never had anything worthwhile on my machine, so, then, it was a whatever.

See if there's a rootkit installed. There don't seem to be extra files.

It's either in the Mozilla package or the Javascript engine. It's pretty much confirmed: even when loading a trivial file, after a while, tcpdump shows that http connections are opened to a series of machines and there's a lot of information going over them continuously.
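The netstat-style check above can be approximated straight from /proc. A hypothetical sketch that lists the remote endpoints of established TCP connections by parsing /proc/net/tcp-style text; the field layout (fourth column is the state, '01' meaning ESTABLISHED, addresses as little-endian hex) is assumed from the kernel's procfs format, and this only covers IPv4:

```python
# Hypothetical sketch: list remote peers of established TCP connections
# by parsing /proc/net/tcp-style text (Linux-specific procfs format).
def established_peers(proc_net_tcp_text):
    peers = []
    for line in proc_net_tcp_text.splitlines()[1:]:   # skip header line
        fields = line.split()
        if len(fields) < 4 or fields[3] != '01':      # '01' = ESTABLISHED
            continue
        addr_hex, port_hex = fields[2].split(':')     # remote address
        # The IPv4 address is stored as little-endian hex.
        octets = [str(int(addr_hex[i:i + 2], 16)) for i in (6, 4, 2, 0)]
        peers.append(('.'.join(octets), int(port_hex, 16)))
    return peers
```

Running it on `open('/proc/net/tcp').read()` before and after starting the browser gives a diffable peer list; it won't catch a kernel-level Trojan hiding its sockets, of course, which is rather the point of the posts above.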

It's always pretty bad under Linux; only a complete reinstall fixes it, and I don't feel like it.

Guess I'll double-check. It might just be Mozilla pinging home.