12/29/10
12/23/10
12/22/10
No News
I've been posting about the WikiLeaks soap opera a bit, partly to figure out what I personally think about it. If you strip away all the rhetoric, the whole enchilada boils down to:
"Diplomats, and government institutions, overestimate their own importance."
We all know the game and the players. The general public is not stupid, and shouldn't be shepherded into dumb flocks. The strikingly simple reaction by the Russian government was the correct one: "We don't care, and will just ignore whatever is leaked." They have their own problems, and the dealings of the US government are well known to the players even if never stated publicly until now. What will probably be found in the US cables is the extension of a nationalistic, company-interests-driven policy. At worst, it will cause some embarrassment. Where's the news exactly?
The question is what the US will do in response. More secrecy, or more transparency? As a citizen of this world, I'd hope for more transparency. There just aren't that many interesting secrets a government can, or should, hold.
CIA Launches WTF to Investigate Wikileaks
In a hilarious move, the CIA has launched the WikiLeaks Task Force (WTF) to investigate, and possibly handle, the damage done by the leaked diplomatic cables. Hilarious probably only to Internet-savvy people, since WTF is the common acronym for "What the F*ck?!," which nicely summarizes part of the US government's reaction.
12/21/10
Handshake Solutions
A spin-off of Philips, named Handshake Solutions, has recently been buried. I am rather surprised at this. Handshake Solutions designed clockless, asynchronous integrated circuits with low power consumption. Normal ICs spend a lot of their area budget on distributing the clock signal, and clocked chips dissipate a lot of heat and noise just to keep everything in step; even when doing nothing, that clock just ticks away.
I thought this was difficult but revolutionary technology with a bright future, given the coming "Internet of Things" and multicore machines with hundreds of cores. The spin-off was terminated, but NXP still manufactures the chips.
Looks like a case where being right too early means being wrong.
12/18/10
Wikileaks and Bank of America
The Bank of America has decided that it will follow in the footsteps of PayPal, MasterCard, and Visa, and halt all transactions that it believes are intended for WikiLeaks, including donations in support of the organization: 'This decision is based upon our reasonable belief that WikiLeaks may be engaged in activities that are, among other things, inconsistent with our internal policies for processing payments.'
Please don your tinfoil hat for the following analyses from Slashdot.
- Wikileaks is supposedly in possession of the hard drive of a Bank of America executive.
- Wikileaks may become subject to criminal charges on terrorism. The bank may be avoiding future heavy fines for supporting 'terrorists'.
- Wikileaks/Assange are hindered in gathering funds for lawyers.
To me, Wikileaks is journalism, not terrorism. Is the US giving up on fundamental democratic rights under the flag of ultranationalistic, or even outright fascist, tendencies? By extension, can we expect a more aggressively nationalistic US in the future, with possible China-like monitoring of the Internet so as to avoid thoughtcrimes by its citizens? The overall effect on me, as a European, is roughly: 'If the US feels like it, let them dig their own nationalistic grave.'
(Note that fascism has little to do with Jews and such; that is national socialism. Fascism has everything to do with suppression and with organizing a society in a military manner.)
12/16/10
12/15/10
Wikileaks in Court
A Dutch judge made a strange move today. In deciding whether to send back Afghan refugees, the judge chose to give less weight to the statements by the ministry used during due process. Instead, given information supplied by Wikileaks, the refugees may stay.
The relevant part of the decision:
"(...) De rechtbank is er niet langer van overtuigd dat het feitencomplex waarop de ambtsberichten zijn gebaseerd voor juist moet worden gehouden."
Translated:
"(...) The court is no longer convinced that the set of facts on which the official reports are based can still be held to be accurate."
In short, the judge thinks governments may be lying.
12/13/10
DDoS, Teenagers and Laws
In my country, the police arrested two teenagers for their part in the DDoS of sites by the collective Anonymous, in response to the arrest of Assange, the freezing of accounts, and the refusal to host Wikileaks. It's a bit of a bullshit game, with an ignorant police force showing its muscle against cyber-game-playing teenagers. I find it laughable to pick on teenagers for what can only be considered silly pestering games, and I am a bit appalled by the media coverage of it.
What interests me is whether a DDoS 'attack' is actually a crime, or whether it can be seen as a valid means of cyber protest. Despite the wargame-like rhetoric, such as "Low Orbit Ion Cannon," "Operation Payback," and "Fire in the Hole!", there is no breaking and entering of digital goods, no theft of information. Instead, a machine is hindered in doing its job: the processing of information.
But that is pretty normal in a democratic protest. It's entirely equivalent to lying down in front of a train moving nuclear material, taking part in a picket line in front of a bank, or bothering office employees by making lots of noise in protest.
I am not sure about (Dutch) law here. But it seems ridiculous to condemn teenagers to jail for normal democratic processes.
(One of the kids was on the radio; of course, he claimed a similar defense.)
12/10/10
12/8/10
Free (Condoms for) Assange?
Apparently, Wikileaks founder Assange has been in a British jail for a week, since his condoms broke on two separate occasions of consensual sex and the horny guy didn't know when to quit.
Malicious intent? Seems more like bad luck. Or rather, maybe someone should explain to the guy to clip his damned dirty fingernails before applying the rubber to his bazooka. Whatever, give the guy a reprimand, a sex education and a cigar and the women two bunches of flowers.
11/29/10
A Turing Machine - Overview
A Turing machine is a theoretical concept used as a precise vehicle for describing computation. As such, one would never actually build one since, as the video shows, it is way too slow in practice and mostly interesting only as an abstract machine to prove properties about. But still, to finally see one implemented in practice, in a gorgeous apparatus, is overwhelming. Every CS department should have one as a teaching aid.
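To make the abstract machine concrete, here is a minimal simulator sketch. The rule table, state names, and the unary-incrementer example are my own toy illustration, not the machine from the video:

```javascript
// A minimal Turing machine simulator. The machine is a table mapping
// "state,symbol" to [symbolToWrite, headMove ("L"/"R"), nextState].
// The head walks over the tape, rewriting cells, until it reaches the
// halting state (or a step budget runs out).

function runTM(rules, tape, state = "start", halt = "halt", maxSteps = 1000) {
  let pos = 0;
  for (let step = 0; step < maxSteps && state !== halt; step++) {
    const symbol = tape[pos] ?? "_";                 // blank off the tape end
    const [write, move, next] = rules[`${state},${symbol}`];
    tape[pos] = write;
    pos += move === "R" ? 1 : -1;
    state = next;
  }
  return tape.join("").replace(/_+$/, "");           // trim trailing blanks
}

// Toy example: a unary incrementer. It skips right over a block of 1s,
// writes one more 1 on the first blank it sees, and halts.
const increment = {
  "start,1": ["1", "R", "start"],
  "start,_": ["1", "R", "halt"],
};
```

Even this two-rule machine shows why the physical build is so slow: every computation is a long sequence of single-cell reads and writes.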
10/21/10
10/20/10
10/19/10
10/14/10
10/10/10
10/9/10
Making Stuff Tick
Once in a while I read up on sensor networks, and it is interesting how bafflingly difficult the simplest problems are in that context.
A sensor network node is (usually) defined as a small piece of hardware including a radio, a battery, and a few simple sensors and actuators. The most important aspect to optimize for is battery life; in many applications, nodes will just sleep most of the time. Academically, a sensor network comprises a distributed, loosely connected network of uniform nodes. Non-academically, admittedly, sensor networks will often be small (fewer than a hundred nodes) and have a sink, usually tethered to a network and a power supply; anchor nodes may also be deployed in the network, for instance for positioning/triangulation purposes. Often, just the addition of a few specialized nodes greatly simplifies design, and tree-spanning protocols with a root node just work well.
However, for the robust development of protocols for self-organizing networks, the academic view is often taken. So, a small exercise:
Assume a thousand nodes, make them all blink at 1Hz at roughly the same time while preserving battery life.
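One classic way to attack this exercise is firefly-style, pulse-coupled synchronization in the spirit of Mirollo-Strogatz: each node free-runs a 1 Hz phase clock, blinks when its phase wraps, and nudges its own phase forward whenever it hears a neighbor blink, so the blinks drift toward one common moment and nodes can sleep the rest of the period. The sketch below is purely illustrative; all function names and parameters are my own choices, not from any real sensor-network stack:

```javascript
// Toy pulse-coupled oscillator synchronization. Each node keeps a phase in
// [0, 1); when the phase wraps, the node "blinks", and every node that hears
// a blink jumps its own phase a little toward firing. With enough cycles the
// blinks cluster together, which is exactly the "all blink at roughly the
// same time" behavior the exercise asks for.

function simulateSync(numNodes, steps, dt = 0.01, coupling = 0.05) {
  let phases = Array.from({ length: numNodes }, () => Math.random());
  for (let t = 0; t < steps; t++) {
    phases = phases.map(p => p + dt);          // free-running 1 Hz clocks
    const fired = [];
    phases.forEach((p, i) => { if (p >= 1) fired.push(i); });
    if (fired.length > 0) {
      phases = phases.map((p, i) =>
        fired.includes(i)
          ? p - 1                              // wrap around after blinking
          : Math.min(p + coupling * fired.length, 1)); // heard blinks: jump
    }
  }
  return phases;
}

// Rough synchrony measure: 1 = all phases equal, ~0 = evenly spread out.
function phaseSpread(phases) {
  const x = phases.reduce((s, p) => s + Math.cos(2 * Math.PI * p), 0);
  const y = phases.reduce((s, p) => s + Math.sin(2 * Math.PI * p), 0);
  return Math.hypot(x, y) / phases.length;
}
```

The battery-friendly part is that no node ever listens continuously: once roughly synchronized, a node only needs its radio on in a narrow window around the common blink moment.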
10/6/10
10/2/10
9/27/10
9/25/10
9/17/10
Javascript Fractal Tree
I was wondering how to teach programming to kids these days, and I ended up concluding that teaching them JavaScript and general recursion would probably be a great start. So, I hacked this together for fun. (You should see a tree; otherwise you'll need a browser with JavaScript and HTML5 canvas support.)
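The core of such a fractal tree is one short recursive function. The sketch below is my own reconstruction of the idea, not the original post's code: it collects the branch segments instead of drawing them, so the recursion itself is on display; hooking it to a canvas is one moveTo/lineTo pair per segment. The branch angle and shrink factor are illustrative choices:

```javascript
// Recursive fractal tree: draw a trunk, then recurse twice with a shorter
// branch tilted left and right. Each call contributes one line segment
// [x1, y1, x2, y2]; a depth-d tree yields 2^d - 1 segments.

function tree(x, y, angle, length, depth, segments = []) {
  if (depth === 0 || length < 1) return segments;
  const x2 = x + length * Math.cos(angle);
  const y2 = y - length * Math.sin(angle);   // canvas y grows downward
  segments.push([x, y, x2, y2]);
  tree(x2, y2, angle + Math.PI / 6, length * 0.7, depth - 1, segments);
  tree(x2, y2, angle - Math.PI / 6, length * 0.7, depth - 1, segments);
  return segments;
}

// Example: a depth-5 tree rooted at (200, 400), trunk pointing straight up.
const segments5 = tree(200, 400, Math.PI / 2, 100, 5);
```

For teaching, the nice property is that every concept involved (base case, self-similarity, accumulator argument) is visible in about ten lines.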
8/29/10
Mathematician needs to be Shot
I was reading up on Galois connections, which seem to be all the rage, and all that.
Now you've got lower and upper adjoints, denoted f^{*} and f_{*}.
One can shoot people for wasting one's time, right?
Puhleease?
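For context, the definition hiding behind that notation is small; a sketch of it, where the integer example is my own illustration:

```latex
% A Galois connection between posets (A, \le) and (B, \le) is a pair of
% monotone maps, the lower adjoint f^{*} : A \to B and the upper adjoint
% f_{*} : B \to A, such that for all a \in A and b \in B:
f^{*}(a) \le b \iff a \le f_{*}(b)
% Concrete instance on the integers: f^{*}(x) = 2x and
% f_{*}(y) = \lfloor y/2 \rfloor, since 2x \le y \iff x \le \lfloor y/2 \rfloor.
```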
8/27/10
8/26/10
8/22/10
8/20/10
8/18/10
What about Now?
This is just a funny post, for, erm, well, fun, really.
Time has a past, a present, and a future. And, for some reason, time is fixed. Westerners fix it around the presumed birth date of a guy, the Chinese are not really sure, but time surely started with the reign of an emperor, and Arabs are sure time started when some guy took his holidays very seriously some time ago.
I say we get rid of all the prejudice and start a calendar in which time starts at, well, now. Just imagine how convenient it'll be. Make an appointment two weeks from now: just look ahead two weeks in your calendar, and rip out a page each day. Or the birthday of your favorite pet dog: well, just three months and fifteen days from your own birth date.
The good thing: no computer ever needs to synchronize anything anymore, since everything is relative to now, which is, well, just now, at this exact moment in time.
Sure, historians might not like it, but hey, it's surely fixable.
Let's face it. The fact is that stuff happens now, not yesterday, and surely not tomorrow.
Why I think Deolalikar's Proof Probably is Wrong
First off, I don't hold a PhD in mathematics and I didn't read his proof in full; that is for the experts. However, I know a few things, and this is a high-level comment on his attempt, mostly based on what I got from other people.
What's Up With That P=NP Question Anyway?
It dives into a fundamental question of computer science: it is sometimes very hard to find a solution to a problem, but very easy to verify one.
A good analogy is doing groceries. Say you did your groceries and you want to fit them into the trunk of your car. Will they fit?
Now, it is hard to find out whether a solution to the above problem exists. You might need to try all possible ways of packing your groceries into the trunk before you find out that they do, or maybe don't, fit. But it is easy to verify any given solution: just pack your stuff according to a description, see whether it fits, and you're done.
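The find/verify asymmetry can be made concrete with a toy one-dimensional version of the trunk problem. The function names and numbers below are illustrative, not from any standard library:

```javascript
// Toy 1-D packing: given item sizes and a trunk capacity, is there a subset
// that exactly fills it? (This is the subset-sum problem, a classic NP
// problem.) Finding a solution below tries all 2^n subsets (exponential),
// while verifying a proposed subset is a single pass (linear). That
// asymmetry is the P vs NP question in miniature.

// Search: brute force over all subsets — exponential in the number of items.
function findPacking(sizes, capacity) {
  const n = sizes.length;
  for (let mask = 0; mask < (1 << n); mask++) {
    const subset = sizes.filter((_, i) => mask & (1 << i));
    if (subset.reduce((a, b) => a + b, 0) === capacity) return subset;
  }
  return null;
}

// Verification: check a proposed solution — linear in the subset size.
function verifyPacking(subset, capacity) {
  return subset.reduce((a, b) => a + b, 0) === capacity;
}
```

Nobody knows whether the exponential search above is unavoidable for such problems; that is exactly what a proof of P != NP would have to establish.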
What's Up With That Phase Transition Anyway?
The phase transition is also an easy concept. Given the above 'fitting groceries in my trunk' problem, it basically deals with the question for which lists of groceries you can find a solution fast.
If you bought two matchboxes and try to fit them, you can easily determine a solution; the problem is underconstrained. If you also bought two refrigerators, you can easily see there isn't a solution, since making even one fit is already hard; the problem is overconstrained. But for a lot of grocery lists, which might just fit, or might just not, you're on a phase transition boundary where, suddenly, there is no computer program that can determine in a fast manner whether they'll fit or not. The best computer programs may even spend the rest of the life of this universe determining that.
Does That Matter?
The problem is that we don't know whether better programs can exist. And that's the heart of the matter. Oversimplifying the fitting problem again: if P != NP, which pragmatically holds at the moment, no fast algorithm exists; and if P = NP, somewhere a fast algorithm may exist.
What Did Deolalikar Probably Do?
I didn't read his proposal. But he starts off with a construction, assumes P = NP, and shows that it, according to what we know now, cannot compute fast solutions since we'll hit phase transitions. But that's the crux: phase transitions only exist because we don't have fast algorithms, which is why most of us assume P != NP. He just proved his own assumption.
Pragmatically, he may be right. But, pragmatically, for a lot of applications, 22/7 just equals π.
8/9/10
A Proof That P Is Not Equal To NP?
Interesting. As blogged before, I am one of the few who believe that P might equal NP. I couldn't follow the whole argument from the blog, but it seems he uses results on randomized SAT instances and phase transitions. As far as I know, it's quite possible to generate SAT instances that are either easy or hard exactly on the transition bounds, so it will be interesting to see how that turns out. (That is not to say that, in general, random SAT structures with a certain clause ratio aren't 'harder' than others; it's just not a law.)
I guess I am wrong anyway in my interpretation of his proof. It feels too much like: P doesn't equal NP since SAT solving is hard, which reverses an arrow.
Still, I don't see how he lifts phase transitions: he needs to show that all possible SAT solvers show phase-transition behavior on (specific) SAT instances, and I just don't see how one can generalize that from the behavior of a few current DPLL-based solvers. (Got bored, read some of the arguments against his construction; it starts to make some sense again.)
A distinguished author comments on the proof at the CACM. It didn't pass scrutiny, both on the use of finite model theory, which I don't understand, and, if I read it correctly, because the author has similar doubts on whether characterizing an NP problem as being hard on phase-transition boundaries of some sort actually is a characterization of an NP problem.
The latter interests me. To most people in the field it feels natural, therefore it must be true. I don't know; I guess one of the results of his failed attempt might actually be that people start looking into whether such an assumption is correct. I.e., does the feeling hold up to mathematical rigor?
(Hmm, I wonder whether I got the second part of his argument right. Should look into it. Read a bit further; of course, his constructions are way more elaborate than my weak simplification of his approach. He starts off by assuming P = NP and tries to derive a contradiction; no idea at the moment. I find it strange that he actually can derive a contradiction. He poses specific clauses and derives that determining a solution is impossible, even though he assumes a P = NP algorithm?)
8/1/10
Qualcomm's E-Reading Screen
If they can get it to work cheaply, then this tech is going to end up everywhere.
A Sin and a Shame
From the NYT article:
“They threw out far more workers and hours than they lost output,” said Professor Sum. “Here’s what happened: At the end of the fourth quarter in 2008, you see corporate profits begin to really take off, and they grow by the time you get to the first quarter of 2010 by $572 billion. And over that same time period, wage and salary payments go down by $122 billion.”
Oh my god, Marx was right?
7/31/10
Fly Equilibrium
So, I had a fly swarm in my house. Must have a lot to do with being in the middle of horse country in the Netherlands. I killed the swarm, a body count of about thirty within a few hours, and now keep my windows closed. But I remain stuck in an equilibrium of two or three flies, even if I kill those each day. Is there another fly entrance to my house I don't know of?
Mundanity
7/29/10
Apple Trackpad
The Apple trackpad is here. I've blogged on this before. I think that for a lot of day-to-day use, window applications are on their way out, since gesturing is in, and that just doesn't go well with pixel-based point-and-click.
I don't think it's a mouse replacement, though; or are we at the end? If I look at how I use my MacBook Pro, I now often grab a mouse, but that could be due to the windowed interface I am looking at. It's comfortable if you can use only the mouse to, say, surf the web, and uncomfortable if you find yourself switching between keyboard and mouse a lot; gesturing just goes nicer.
Windows? If I want to play a firstperson shooter, I prefer Quake.
7/28/10
7/26/10
7/18/10
Runnin' (Dying To Live) - 2Pac (feat. Notorious B.I.G.)
Going through some old CDs. Best musical shunt in hip-hop.
Storage Solutions?
I have a few hundred, maybe a thousand, mostly electronic music CDs back at my dad's place, and a few hundred DVDs. What should I do with them? I just want to put them into a few binders: take the paper out and put the discs into plastic sleeves?
7/16/10
7/15/10
7/13/10
6/24/10
Crazy Meds US
Effexor's Pros: There are two last resorts among the modern meds to cure the deepest, blackest depression when your doctor is just switching you from one horsie to another on the med-go-round: Effexor XR (venlafaxine hydrochloride) and Remeron (mirtazapine). Either in combination with an antipsychotic would really get you out of that hole of despair, but first you should throw away every mirror and scale in your house and buy expandable clothing. But for deep, despairing clinical depression that needs to respond to the standard tweaking of the three most popular neurotransmitters, Effexor XR (venlafaxine hydrochloride) often pulls people out of the abyss.
Effexor's Cons: For many people Effexor XR has the absolute worst discontinuation syndrome of any antidepressant. Effexor (venlafaxine hydrochloride) is a medication people utterly loathe to have taken. It is not uncommon for someone to fire doctors during or immediately after they quit taking Effexor XR (venlafaxine hydrochloride).
Effexor's Typical Side Effects: The usual for SSRIs and NRIs: headache, nausea, dry mouth, sweating, sleepiness or insomnia, diarrhea or constipation, weight gain, loss of libido, and a host of other sexual dysfunctions. Most everything but the weight gain and sexual dysfunctions usually goes away within a couple of weeks. Although some women will notice that the sexual side effects will diminish above 200-225mg a day when the norepinephrine kicks in. Maybe.
Effexor's Not So Common Side Effects: Increased or lowered blood pressure, sweating, farting, anorexia, twitching, shock-like sensations. Also alcohol intolerance and/or alcohol abuse. So Effexor XR (venlafaxine hydrochloride) is going to be just the thing to talk about at AA meetings. I used to have these last two listed as rare side effects, but I've received way too many emails and have read far too many similar reports on various other sites after putting up this page about both of them. As is often the case here, the anecdotal evidence will often trump what is in the US PI Sheet. Best guess to date as to why both of these side effects can happen: Paul of Leeds (in the U.K.) posits that Effexor's broad-spectrum use of liver enzymes probably interferes with alcohol clearance and tolerance, thus leading to the type of alcoholism that affects people without the proper enzymes to effectively metabolize alcohol. Between that and the way Effexor XR works your liver, you're probably better off giving up booze entirely if you're taking this med.
These may or may not happen to you, so don't be surprised one way or the other. Although I make no promises about the alcohol abuse.
Effexor's Freaky Rare Side Effects: Someone's reflexes increased and someone else's breasts got bigger, proving that there is no pleasing some people. Someone else's hair changed color and, really, no Revlon was involved. But the most disturbing freaky rare side effect with Effexor XR (venlafaxine hydrochloride) is what Wyeth disingenuously calls "withdrawal syndrome," that once you acclimate to Effexor (venlafaxine hydrochloride) you are basically hooked for life. If not on Effexor XR then at least on some SSRI to take the worst of the edge off. The discontinuation syndrome never goes away if you try to stop. For someone with unipolar depression that's a pain in the ass, but something you might be able to work around barring any really adverse side effects, but for someone who is bipolar you can be royally screwed because Effexor XR (venlafaxine hydrochloride) can really aggravate mania and especially rapid cycling.
You aren't going to get these. I promise.
Interesting Stuff Your Doctor Probably Won't Tell You: Few doctors, if any, will discuss the possibility that Effexor XR (venlafaxine hydrochloride) could become a permanent part of your life, whether you like the results of Effexor XR (venlafaxine hydrochloride) or not. Granted, that is a very rare adverse effect, but it does happen. It's hard enough to get them to discuss SSRI discontinuation syndrome, let alone get them to admit that Effexor's symptoms are the absolute worst and the longest lasting of all serotonergic drugs. The discontinuation from Paxil (paroxetine) is bad enough; it's much, much worse with Effexor (venlafaxine hydrochloride).
And the way Effexor XR (venlafaxine hydrochloride) works on neurotransmitters is very complicated. Your doctor may or may not explain this to you. Here's how it works: First it starts to work on your serotonin. Then somewhere around 200 mg a day it starts to work on norepinephrine. Then around 300 mg a day it starts to work on your dopamine. Mileage will vary for each individual, and there's no guarantee on getting all that much dopamine action.
Effexor's Dosage and How to Take Effexor: Effexor (venlafaxine hydrochloride) comes in immediate and extended release flavors, although hardly anyone takes the immediate release form anymore. Just be sure to check your prescription for that XR to make sure you are getting the extended release form. For the XR flavor, you start at 37.5 to 75mg a day, taken with food, at either breakfast or dinner, depending on whether you're apt to get wired or tired. Once you get the wired/tired issue straightened out, you take the med all at once at the same time every day. If you start at 37.5mg you can move up to 75mg after a week. As with any antidepressant, it takes a month to feel any positive effect, so give it a month. Seriously, don't move up above 75mg a day for at least a month. You'll know if it's going to do anything by then. If you feel nothing, give up and have a much easier discontinuation. After that you can move up in 37.5-75mg increments, allowing at least a week between each increase, until you reach the maximum of 375mg a day for the most severely depressed of patients. The older immediate release version is pretty much the same, except that the dose is divided into two or three doses a day.
Days to Reach a Steady State: Three days.
When you're fully saturated with the medication and less prone to peaks and valleys of effects. You still might have peaks of effect after taking many meds, but with a lot of the meds you'll have fewer valleys after this point. In theory anyway.
How Long Effexor Takes to Work: Up to one month.
Effexor's Half-Life & Average Time to Clear Out of Your System: Effexor (venlafaxine hydrochloride) does the double metabolism trick, so its half-lives are 3-7 hours and 9-13 hours. That means the combined half-life is anywhere from 12-20 hours, so it takes anywhere from two to five days to clear out of your system. This is a huge part of why Effexor's discontinuation syndrome is so harsh. No popular SSRI does the double metabolism, and the half-life of each metabolism is so bloody short. So while one metabolite clears out of your system, you still have another one in it. Your body is completely confused! Wyeth states in the pharmacokinetics section that there's only one active metabolite worth mentioning. Who the hell knows about other metabolites, what part they play in Effexor's discontinuation syndrome, or how long you should take in stepping down your dosage!
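A back-of-the-envelope check on those clearance numbers: after k half-lives a fraction (1/2)^k of the drug remains, so clearing down to a few percent takes about five half-lives. The sketch below is my own arithmetic illustration (the function name and the 3% threshold are arbitrary choices), not medical guidance:

```javascript
// Solve (1/2)^(t / halfLife) = remainingFraction for t, i.e. how many hours
// until only the given fraction of the drug is left in the body.
function hoursToClear(halfLifeHours, remainingFraction = 0.03) {
  return halfLifeHours * Math.log2(1 / remainingFraction);
}

// With a combined half-life of 12-20 hours, clearing to ~3% takes roughly:
const fast = hoursToClear(12) / 24;  // shorter half-life: about 2.5 days
const slow = hoursToClear(20) / 24;  // longer half-life: about 4.2 days
```

Both endpoints land inside the quoted two-to-five-day range, so the numbers in the text are internally consistent.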
How to Stop Taking Effexor: Unless you need to discontinue Effexor XR at a more rapid rate, your doctor should be recommending that you reduce your dosage by 37.5mg a day every week, if not more slowly than that. For more information, please see the page on how to safely stop taking these crazy meds. You shouldn't be doing it any faster than that unless it's an emergency. Yes, that means if you've maxed out at 375mg a day it'll take 10 weeks to get off of Effexor (venlafaxine hydrochloride). Believe me, it's better that way. You can try it faster and hope it works out; the odds are with you, but it's hardly a sure thing. Once you get down to that last 37.5mg a day you have several options:
If the discontinuation symptoms you're experiencing are mild, if you're experiencing any at all, then you may as well stop taking it. You're in the plurality of people who have taken either version of Effexor who could stop taking Effexor (venlafaxine) without too much of a hassle.
If the brain zaps or shivers and other discontinuation symptoms are still bad, you can try taking one 37.5mg capsule every other day, or getting a prescription for generic venlafaxine in the immediate-release form and working your way down. As immediate-release venlafaxine comes in a variety of dosages, you and your doctor have all sorts of ways to work out a discontinuation schedule from there.
If you still can't stop taking it at a low dosage, you and your doctor may want to try a Prozac (fluoxetine) prescription or samples; generic fluoxetine will even do. 10mg a day is all you should need. Even with the proper discontinuation, stopping the last 37.5mg can be hellish. Taking two weeks' worth of Prozac (fluoxetine) will make the discontinuation a lot easier. So when you're off of Effexor and you cannot function, get on the Prozac for a week or two, then stop taking the Prozac. By that time you should find you'll have either no discontinuation syndrome, or it won't be nearly as bad.
If worse comes to worst, there's always liquid Prozac. Then you can work your way down from the equivalent of 10mg, or higher if 10mg was too low, to ever-so-slowly wean yourself off of the serotonergic part of Effexor that had its claws in you.
If you've worked your way up to a particular dosage, it's usually best to spend some time at the next lowest dosage before dropping to the one below that, and so forth. This is the least sucky way to avoid problems when stopping any psychiatric medication, presuming you have the option of slowly tapering off.
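For what it's worth, the 37.5mg-a-week arithmetic above is simple enough to sketch. This is purely an illustration of the numbers in the text, not medical advice, and the function name is mine:

```python
def taper_schedule(start_mg: float, step_mg: float = 37.5) -> list:
    """Weekly doses from start_mg down toward zero, dropping step_mg each week."""
    doses = []
    dose = start_mg
    while dose > 0:
        doses.append(dose)
        dose -= step_mg  # 37.5 is exactly representable in binary, so no float drift
    return doses

schedule = taper_schedule(375)
print(len(schedule))  # 10 -> ten weeks to come off the 375mg maximum
print(schedule[-1])   # 37.5 -> the tricky last dose the options above deal with
```

Ten weekly steps from the 375mg maximum, which matches the "10 weeks" quoted above.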
Effexor's Typical Side Effects: The usual for SSRIs and NRIs: headache, nausea, dry mouth, sweating, sleepiness or insomnia, diarrhea or constipation, weight gain, loss of libido and a host of other sexual dysfunctions. Most everything but the weight gain and sexual dysfunctions usually goes away within a couple of weeks. Although some women will notice that the sexual side effects diminish above 200-225mg a day when the norepinephrine kicks in. Maybe.
Effexor's Not So Common Side Effects: Increased or lowered blood pressure, sweating, farting, anorexia, twitching, shock-like sensations. Also alcohol intolerance and/or alcohol abuse. So Effexor XR (venlafaxine hydrochloride) is going to be just the thing to talk about at AA meetings. I used to have these last two listed as rare side effects, but since putting up this page I've received way too many emails and read far too many similar reports on various other sites about both of them. As is often the case here, the anecdotal evidence trumps what is in the US PI sheet. Best guess to date as to why both of these side effects can happen: Paul of Leeds (in the U.K.) posits that Effexor's broad-spectrum use of liver enzymes probably interferes with alcohol clearance and tolerance, thus leading to the type of alcoholism that affects people without the proper enzymes to effectively metabolize alcohol. Between that and the way Effexor XR works your liver, you're probably better off giving up booze entirely if you're taking this med.
These may or may not happen to you, so don't be surprised one way or the other. Although I make no promises about the alcohol abuse.
Effexor's Freaky Rare Side Effects: Someone's reflexes increased and someone else's breasts got bigger, proving that there is no pleasing some people. Someone else's hair changed color and, really, no Revlon was involved. But the most disturbing freaky rare side effect with Effexor XR (venlafaxine hydrochloride) is what Wyeth disingenuously calls "withdrawal syndrome": once you acclimate to Effexor (venlafaxine hydrochloride) you are basically hooked for life, if not on Effexor XR then at least on some SSRI to take the worst of the edge off. The discontinuation syndrome never goes away if you try to stop. For someone with unipolar depression that's a pain in the ass, but something you might be able to work around barring any really adverse side effects; for someone who is bipolar you can be royally screwed, because Effexor XR (venlafaxine hydrochloride) can really aggravate mania and especially rapid cycling.
You aren't going to get these. I promise.
Interesting Stuff Your Doctor Probably Won't Tell You: Few doctors, if any, will discuss the possibility that Effexor XR (venlafaxine hydrochloride) could become a permanent part of your life, whether you like its results or not. Granted, that is a very rare adverse effect, but it does happen. It's hard enough to get them to discuss SSRI discontinuation syndrome, let alone get them to admit that Effexor's symptoms are the absolute worst and longest lasting of all the serotonergic drugs. The discontinuation from Paxil (paroxetine) is bad enough; it's much, much worse with Effexor (venlafaxine hydrochloride).
And the way Effexor XR (venlafaxine hydrochloride) works on neurotransmitters is very complicated; your doctor may or may not explain this to you. Here's how it works: first it starts to work on your serotonin. Then somewhere around 200mg a day it starts to work on norepinephrine. Then around 300mg a day it starts to work on your dopamine. Mileage will vary for each individual, and there's no guarantee of getting all that much dopamine action.
Comments: This is a multiple reuptake inhibitor, acting sort of as both an SSRI and an NRI, so be sure to read up on all three classes of meds, as those pages will cover a lot of stuff common to all meds similar to Effexor (venlafaxine hydrochloride).
Everybody hates their meds because of the costs and the side effects, but people just loathe Effexor (venlafaxine hydrochloride) because the discontinuation can be so harsh; it's the med everyone wishes they'd never taken. Yes, people will change doctors because some doctor had the nerve to punish them with Effexor (venlafaxine hydrochloride). Yet for many people it is a godsend, because the combination of serotonin, norepinephrine and dopamine reuptake is literally just what the doctor ordered for the darkest of depressions. Of course Effexor (venlafaxine hydrochloride) has to be complicated about it; it can't just work on everything all at once from the beginning. Oh, no. First it starts to work on your serotonin. Then somewhere around 200mg a day it starts to work on norepinephrine. Then around 300mg a day it starts to work on your dopamine. Mileage will vary for each individual, and there's no guarantee of getting all that much dopamine action. Of course, as you up your dosage to get to the next neurotransmitter, you keep pushing the previous one, whether you need more action on it or not. And that's what leads to problems, and why people have to stop taking Effexor (venlafaxine hydrochloride). So they stop from a higher dosage, and they stop quickly, and they learn about things like brain shivers.
For people in the bipolar spectrum, Effexor (venlafaxine hydrochloride) should really be the last of the modern antidepressants to be tried. I feel that the risk/reward balance tips too far toward risk. More than most SSRIs, Effexor (venlafaxine hydrochloride) is likely to trigger not just mania but rapid cycling. Combine that with the very rare, but still real, chance that you could be stuck taking Effexor (venlafaxine hydrochloride) for the rest of your life, even if it doesn't work. That complicates things greatly in Bipolarland.
Try everything else first, and if you just react badly to SSRIs, forget about Effexor (venlafaxine hydrochloride) entirely.
As for unipolar depression, if you're in the blackest pit of despair and your doctor recommends Effexor (venlafaxine hydrochloride), go for it. What? You don't think I care about you people? I do. For people with unipolar depression a lifelong addiction to Effexor (again, a very rare side effect) is just a pain in the ass. Sure, Effexor (venlafaxine hydrochloride) works with popular liver enzymes, so some meds would require dosage adjustments, and you'd have extra side effects from having to take 37.5-75mg of Effexor every day, but it wouldn't be making you manic or triggering rapid cycling. As long as the reason you had to stop taking Effexor (venlafaxine hydrochloride) wasn't too bad, and that reason isn't too harsh at the low dosage, the exceedingly small risk of permanent Effexor (venlafaxine hydrochloride) maintenance is well worth running when weighed against the benefits you'd potentially receive.
Effexor (venlafaxine hydrochloride) is also approved for GAD. Yet it frequently makes the anxiety that is part of bipolar much worse. I can't honestly give a good risk/reward analysis for Effexor (venlafaxine hydrochloride) and anxiety. Given the experiences I've read of everyone who has taken it for bipolar and depression, I'm surprised it was even approved for anxiety.
Why in God's name do people call this medication?
It is stated in the above review that nobody takes Effexor's predecessor Venlafaxine anymore. This is a gross misrepresentation of facts: a few years ago a number of papers appeared in major psychiatric journals describing the adverse effects of Venlafaxine. In layman's terms: it is psychiatric poison, recognized as such, and subsequently reduced to junk status by professionals. Effexor XR is just a rebranding of the same poison by Wyeth for financial purposes, since no one touches Venlafaxine anymore.
6/7/10
Smokescreen: Flash in HTML5 & JavaScript
The ad network company RevShock created Smokescreen, an open-source product that converts Flash to HTML5 & JavaScript. While mainly designed for ads, not fully conformant, and lacking in performance for higher-end Flash applications, it solves a lot of problems.
I doubt they'll ever reach full conformance, but it is interesting, and I guess that for banner ads and other simple applications Flash will just become a great front end to it.
Smokescreen?
Who Needs More Than 640kB?
I am thinking about bit representations in my compiler, and I wonder about the x86-64 bit model. Most modern desktops have about 4-16GB of memory, I think. A byte is 8 bits, therefore encoding one out of 2^{8} values. A kilobyte is 1024^{1} = 2^{10} bytes, a megabyte is 1024^{2} = 2^{20} bytes, a gigabyte is 1024^{3} = 2^{30} bytes. At least according to binary number interpretation; your local vendor may disagree.
Now, in a 32 bit address space, you can index 2^{32} bytes, or 2^{2} x 2^{30} = 4GB of data. Thing is, for desktop models, we are somewhat at the end of what we need for memory. The 64 bit encoding generates pretty fat binaries, and I am really clueless where all the memory goes. At the other end, the computer word to address space ratio is ridiculous at the moment. Why do we address individual bytes on a machine with a 64 bit address space? It is just as ridiculous as being able to address individual bits on a 64kB machine.
In my opinion, they shouldn't have doubled the address space, but they should have quadrupled the word size first.
On a 32 bit machine with 32 bit words, you can address 16GB of byte data, which is just enough by current standards. My best guess is they didn't want to because the C language, in which most operating systems are written, just assumes characters are 8 bits wide, and because high-end systems will want more addressable space. But still, 32 bit words are a good choice, and in the Internet era it makes sense to standardize on 32 bit wide characters and just bit-pack older data.
Then again, who cares about 2 bits when you got 64 of em?
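The arithmetic above, spelled out as a quick sketch (GB here means 2^{30} bytes, per the binary interpretation; the function name is just for illustration):

```python
GB = 2 ** 30  # binary gigabyte, as defined above

def addressable_bytes(address_bits: int, unit_bytes: int = 1) -> int:
    """Total memory reachable with address_bits-wide addresses, where each
    address names a unit of unit_bytes (1 = byte addressing, 4 = 32-bit words)."""
    return (2 ** address_bits) * unit_bytes

print(addressable_bytes(32) // GB)                # 4  -> 32-bit, byte-addressed: 4GB
print(addressable_bytes(32, unit_bytes=4) // GB)  # 16 -> 32-bit, word-addressed: 16GB
```

Same 32 bit addresses, four times the reachable memory, at the cost of giving up byte granularity.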
6/6/10
Silent Killers
From Slashdot.
As someone with bipolar disorder all I can say to you is "fuck you".
Diagnosis and treatment has allowed me to become a fully-functioning member of society rather than a burden on society and everyone around me. Absent medication and psychotherapy, I'm at the mercy of horrible mood swings and psychosis. My parents listened to a quack of a child psychologist who felt that diagnosing and "labelling" a 10-year-old was more damaging than any disorder that might be present. The result of that was a slow decline into madness, and as an adult, I was too sick to seek treatment on my own, and not sick enough for involuntary commitment. I was finally diagnosed at 41 years old as a result of some circumstances that I don't care to share with someone like you. Do you have any idea what it's like to lose half your life to untreated mental illness?
Treatment probably saved my life, and there is no treatment without diagnosis and, as you put it, "labelling". The suicide rates for persons with bipolar disorder are truly staggering, and those who don't take their own lives frequently have abbreviated lives due to irrational choices made as a result of the disorder.
"Trying harder" hardly factors into it when you're at the mercy of a very real and debilitating disorder.
Try a little empathy, fuckwit.
It's back, again...
5/12/10
Of Brain and Bone: The Unusual Case of Dr. A
Frontotemporal dementia (FTD) is a clinical syndrome characterized by progressive decline in social conduct and a focal pattern of frontal and temporal lobe damage. Its biological basis is still poorly understood, but the focality of the brain degeneration provides a powerful model to study the cognitive and anatomical basis of social cognition. Here, we present Dr. A, a patient with a rare hereditary bone disease, HME (hereditary multiple exostoses), and FTD (pathologically characterized as Pick's disease), who presented with a profound behavioral disturbance characterized by acquired sociopathy. We conducted a detailed genetic, pathological, neuroimaging and cognitive study, including a battery of tests designed to investigate Dr. A's abilities to understand emotional cues and to attribute mental states and intentions to others (theory of mind). Dr. A's genetic profile suggests the possibility that a mutation causing hereditary multiple exostoses, Ext2, may play a role in the pattern of neurodegeneration in frontotemporal dementia, since knockout mice deficient in the Ext gene family member Ext1 show severe CNS defects including loss of olfactory bulbs and an abnormally small cerebral cortex. Dr. A showed significant impairment in emotion comprehension, second-order theory of mind, attribution of intentions, and empathy despite preserved general cognitive abilities. Voxel-based morphometry on structural MRI images showed significant atrophy in the medial and right orbital frontal and anterior temporal regions, with sparing of dorsolateral frontal cortex. This case demonstrates that social and emotional dysfunction in FTD can be dissociated from preserved performance on classic executive functioning tasks. The specific pattern of anatomical damage shown by VBM emphasizes the importance of the network including the superior medial frontal gyrus, as well as temporal polar areas, in regulation of social cognition and theory of mind.
This case provides new evidence regarding the neural basis of social cognition and suggests a possible genetic link between bone disease and FTD.
Great. Since taking the Effexor and the subsequent symptoms relate to acquired schizophrenia through neurodegeneration, I may have ended up with something akin to FTD, with only ten years to live? Hmm, it doesn't fit anyway.
Ah well.
4/6/10
Ursula Rucker  Humbled
Going through some old music again. The latest Gil album reminded me of 4hero, mostly because of their cooperation with Ursula Rucker, a semi-well-known US poet who often performs in hip-hop, dance and ambient music. Most of 4hero is great music, but nobody has really convinced me so far that nailing the largest number of jazz progressions to the nanosecond is actual music, and, yeah, trumpets and strings? Anyway, that's her; she has a new album out too.
Love the voice, love the poetry, love the expression, love the face, love it all.
4/5/10
Gil ScottHeron  "Me And The Devil"
From the album "I'm New Here." Mostly poetry-slam blues meets 303/808-based dub-tech. I liked the syncopated new-tech Zydeco, but in the end, one great voice.
Contemporary East River bikeway blues.
3/10/10
2/25/10
Bloom Box Solid Fuel Cell
Some CO2 emitted; no idea about the math on it. It needs to be produced, maintained, and trashed/recycled. It might be more efficient given that there is almost no energy transport cost.
Green, or will it just double the energy consumption?
2/24/10
Lou Reed  Heroin
Except for electro, psytrance and house, yeah, I like the old stuff like Lou Reed, Nico, the Velvet Underground, or Iggy Pop too. Below, one old Lou Reed; not really the song I was looking for, but I liked the old rawness of it.
Can't remember that other song? Fistful of dollars? Something...?? Yeah, answered later: it was the 1966 Velvet Underground and Nico's "Waiting for the Man"; get the whole album if you can.
Old Stuff
2/23/10
Wiretapping the Internet: Giving Weapons of Mass Destruction to Idiots
Various jurisdictions around the world have lawful-intercept implementations and even require ISPs to implement these. In the U.S., lawful intercept capabilities on Internet infrastructure are a legal requirement under the Communications Assistance for Law Enforcement Act (CALEA).
This is done wholly for the common good: to protect law-abiding citizens against online money scams, illegitimate porn and child abuse, and to detect terrorist activity. The problem: a substantial part of our population consists of idiots, and that doesn't stop at the doors of your local law-enforcement agency.
To the IRS we are all tax avoiders, and to the police everyone is a criminal until proven otherwise. You need thugs to beat thugs, and in their hunger to go after every crime, and their frequent inability to do so, your average cop will fabricate evidence, cross-use lawful wiretapping to tap other cases, break and enter when not permitted, resort to lies, libel and slander to incite witch hunts, provoke crimes (which is even legal in some countries), send innocent people to jail, or even degrade, mutilate or kill people, all in a day's work.
It depends on your country; maybe you do feel safer if your police can spy on you. But if you live in a wealthy country like I do, where the biggest crime committed on an average day is a set of ducks causing a traffic jam, you might end up sending your own antisocial family to jail, since mom cheated once, daddy got bored and looked at some boobs, daughter did a research project on online terrorism, son used an illegal credit-card list to pay for his online gambling habit, and your neighbour was smart enough to break the WEP encryption of the wireless access point to satisfy his own extreme needs.
It may come as a surprise to you, but given everything, the biggest criminal organization in your vicinity is probably your law-enforcement agency, and yes, you do have stuff to hide. It's called your freedom.
The US model of shooting everything on sight probably isn't that bad.
2/22/10
2/19/10
Its Wrong!
Great. I remembered a rule wrongly and have been consistently writing it wrong for the last year or so. So, a new sticky for my forehead:
It's is a contraction of "it is" or "it has." Its is the possessive form of "it." Ah well, I won't fix the posts, but I will fix my writing.
Grrr....
School Spies on Student?
According to BoingBoing:
According to the filings in Blake J Robbins v Lower Merion School District (PA) et al, the laptops issued to high-school students in the well-heeled Philly suburb have webcams that can be covertly activated by the schools' administrators, who have used this facility to spy on students and even their families. The issue came to light when the Robbins's child was disciplined for "improper behavior in his home" and the Vice Principal used a photo taken by the webcam as evidence. The suit is a class action, brought on behalf of all students issued with these machines.

Uh, as far as I know, my home is my home, and I can freely walk around half-naked wearing a purple tutu, yellow feathers up my *ss, look at Megan Fox pictures while liberally indulging in a lot of depraved sex acts with green stuffed bunnies imagining one of them is Bill Clinton, and I still wouldn't be doing anything wrong?
The bunnies are okay, but the feathers itch.
Lego Problem Solvers
Just some geeky stuff. A Lego solver for Rubik's Cubes:
And a Lego solver for Sudoku (not shown; the embedded Flash is too large).
Thanks to singularityhub.com.
Puzzling.
2/15/10
Mechanical Computing
2/14/10
Jet, Ink and Math
I decided to go ahead and buy a printer. Since I don't expect to print a lot, but thought it would be nice if I could print the occasional photo, I bought a cheap inkjet.
Still, I was kinda hungry for the cheap laserjet too. At a price of 99 Euro, it would have been a nice deal. The annoying part is the consumables, of course: ink cartridges dry out if I don't use them enough, and colour laserjet toners are very expensive, at about 160 Euro per change. Driving home, I puzzled a bit further, and decided that the absurdly cheapest deal would have been to buy one cheap inkjet and about three laserjets: one to print, the rest for cheap toners.
Uh... Bubbles? ... Anyone? ...
2/12/10
Algorithm Fined for Bad Conduct
NYSE Euronext, the Euro-American operator of several securities exchanges, has fined Credit Suisse's trading division for failing to monitor a computer trading algorithm. The algorithm mishandled hundreds of thousands of stock transactions.
Earlier this month, a malfunctioning algorithm accidentally traded 200,000 futures contracts with itself. And last year, the London Stock Exchange shut down after a rash of computer-generated orders.
With an estimated sixty percent of all trading done by algorithms, a number which can be expected to grow, the algorithms seem to have found a smarter path to world domination than through global warfare.
I can't do that, Dave.
Afrikanerhart
I was looking to see whether an old friend, a singer/songwriter, had placed some videos on YouTube, and ended up with this; this guy has the same family name. No idea what to think of it...
2/11/10
Single Window Gimp. Finally.
The photo editor most commonly used on Linux systems, the GIMP, gets a revamp: a single-window mode.
It's a breakthrough. Not because this wasn't possible before, but because for years people, developers, insisted that multiple-window mode was the preferred way of working and disregarded all opposing views by what was then often called 'the Photoshop league.' Good to see the debate is over.
No sense or nonsense.
2/6/10
333 Posts
The three-hundred-and-thirty-third post. A good moment to go back to the roots of how bits are created. A movie derived from the development of the Twitter source code: Twitter Code Swarm.
Icons are developers, particles are files.
Oh golly, now bits are particles, too.
2/1/10
Back to the Past!
I decided to place myself about thirty years back in the past, since I don't really grasp any computing science literature written after, say, 1985. Gone are the days of mathematical heroes who typed on PDP-11s and dreamt of the day a computer could be programmed with math only. Instead, current computing engineers hack around in languages like Java or C#, and computer scientists are entirely happy writing the biggest number of compressed Greek symbols per page, in the hope someone will understand their voodoo.
I say, no more! I am looking for old typewriter fonts and handwritten Greek to write a good critique, or possibly the documentation of the compiler I am writing.
Retro science.
1/30/10
Bored...
I am writing a compiler. But to some extent, a compiler writes itself. It just needs someone to do the monkey business of typing it in, which is me. It really is an abusive relationship. So, to be clear: I write, and meanwhile just end up thinking about circuit minimization.
A circuit consists of terms; a term describes a combination of a number of state spaces (at least, that is how I like to think about it), and a state space describes an exponential number of states. So, the only interesting things are the terms. What to do?
Again some silly idea: at every point, insert a new term by using that E = ite(x, E, E) and rewrite part of the term below it. Which means you've got N points to insert one of the terms below it, do some minimization, and see if the term shrinks. Or just rewrite and take the minimal term. If P != NP, it can't shrink, of course, because that would lead to a deterministic algorithm. It has prohibitive complexity, and you might end up just factoring out states one by one, but still, interesting. Yeah, I know, a definite 5+ on the Richter scale of weirdness.
Anyway, if anyone reads this blog and isn't interested in boring stuff: the picture shows a light storage device. It collects light by day and emits it at night. I want one!
Gotta get me one of these...
1/28/10
Can't... Count...
A brief thought on semi-tractable decision tree minimization; for the moment I assume that a minimal circuit equals a minimal decision tree. It just popped into my head, and I am not too confident about it, but thought it would be worthwhile to write down anyway.
Circuit minimization deals with the question: what is the minimal circuit for a Boolean function f: {0,1}^n → {0,1}? In general, the question is considered to be intractable, since the algorithms probe the size-2^n state space. I looked at some solutions, which looked like kludges to me. But I am bad at understanding things I don't understand; that seems to be true for a lot of people.
How would I go about it? Well, to distinguish between all vectors, you need to find the bit which halves the state space best; you could try that by counting the {0,1} × {0,1} relation between the value of a bit and the result of the function, for each bit. By factoring out bits, you could derive the minimal circuit. And there you see minimal circuitry is related to the ability to count: both intractable.
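To make that concrete, here is a minimal sketch of the greedy idea, my own illustration rather than anything from the literature: pick the bit whose value correlates most with the function's output, split, and recurse. It yields a small decision tree, though not necessarily a minimal one, which is exactly the intractable part.

```python
from itertools import product

def build_tree(f, free):
    """f: a complete truth table as {bit-tuple: 0/1}; free: bit indices
    not yet split on. Greedily split on the bit whose value agrees (or
    disagrees) most often with the function's output."""
    outs = set(f.values())
    if len(outs) == 1:                     # constant function: emit a leaf
        return outs.pop()
    # correlation = #agreements - #disagreements between bit i and the output
    best = max(free, key=lambda i: abs(sum(1 if v[i] == y else -1
                                           for v, y in f.items())))
    hi = {v: y for v, y in f.items() if v[best] == 1}
    lo = {v: y for v, y in f.items() if v[best] == 0}
    return ('ite', best,
            build_tree(hi, free - {best}),
            build_tree(lo, free - {best}))

def eval_tree(t, v):
    """Follow the ite-nodes down to a 0/1 leaf."""
    while isinstance(t, tuple):
        _, i, hi, lo = t
        t = hi if v[i] == 1 else lo
    return t

# majority-of-three as a sanity check
maj = {v: int(sum(v) >= 2) for v in product((0, 1), repeat=3)}
tree = build_tree(maj, {0, 1, 2})
```

On majority-of-three the tree reproduces the function exactly; the greedy choice of splitting bit is where minimality can be lost.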
Counting is prohibitive directly on the state space, but circuits distinguish between symbols of the form v_0 ∧ ¬v_1 ∧ .. ∧ v_n, a conjunction of v variables, each possibly negated; such a symbol encodes 2^{n-v} states of the state space. That number can be decomposed and stored directly.
Let's rephrase the question to minimization of small Nand terms Φ(v), which only distinguish between small numbers of symbols. Thus: how to minimize f: Φ(v) → {0,1}? Well, the same procedure could be tried. Or one could start off at the bottom and build minimized decision trees. Trees which are the combination of two other trees under a logical connective could be minimized by recursively probing the square of all endpoints of the trees. But that leads to an intractable algorithm.
A small example; first, an informal description of the algorithm.
For a formula Φ depending on {v_0, .., v_n}:

0. If Φ consists of one variable or a constant, return Φ.
1. Count the correlation between each variable v_i and the formula Φ with sign 1.
2. Count the correlation between each variable v_i and the negation of the formula ¬Φ with sign 0.
3. Select either the formula or its negation, whichever describes the smallest space: Ψ = b ⊕ Φ.
4. Select the variable v_k which splits that state space best in halves.
5. Recursively determine the terms for Ψ[v_k=1] and Ψ[v_k=0].
6. Return b ⊕ ite(v_k, Ψ[v_k=1], Ψ[v_k=0]).

The algorithm for counting is trivial. It takes a DNF and a sign; DNFs can be obtained by SAT solving the term. For each conjunction, and for each variable occurring in it, the entry for that variable and sign is increased by 2^{n-v}. For each variable not occurring, both it and its negation are increased by 2^{n-v-1}, given the sign.
Note that decision trees encode DNFs, and the combination of DNFs under most Boolean connectives has at most squared complexity.
Let's try it on a(ba). This decomposes into ¬a ∨ (a ∧ b). The algorithm starts off with empty tables. The table a_s holds the counts for a variable a and a sign s; the sign states whether the elements in the proposition or in its negation are counted.
After processing ¬a, two occurrences of ¬a and one occurrence of b and ¬b are counted.
After processing (a ∧ b), one occurrence of a and b are counted.
The algorithm proceeds with counting the negation of the formula of which the DNF is (a ∧ ¬ b).
The negation of the state space is selected, since the state space with sign 0 totals one element, and the other three. The algorithm derives ¬ite(a, ¬b, 0), which trivializes to a(bb).
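For what it's worth, the counting step is easy to mechanize. A small sketch of my table bookkeeping, assuming, as in the example, that the conjunctions of the DNF are disjoint so no state is counted twice:

```python
def count_table(dnf, nvars):
    """dnf: list of conjunctions, each a dict {var_index: 0 or 1}.
    Returns {(var, value): count}: an occurring literal weighs 2^(n-v),
    an absent variable splits 2^(n-v-1) over both of its signs."""
    table = {}
    for conj in dnf:
        free = nvars - len(conj)          # unconstrained variables
        for v in range(nvars):
            if v in conj:                 # literal occurs: full weight
                table[(v, conj[v])] = table.get((v, conj[v]), 0) + 2 ** free
            else:                         # absent: half weight per sign
                for val in (0, 1):
                    table[(v, val)] = table.get((v, val), 0) + 2 ** (free - 1)
    return table

# the example above: ¬a ∨ (a ∧ b), with a as variable 0 and b as variable 1
phi = [{0: 0}, {0: 1, 1: 1}]
t = count_table(phi, 2)
# ¬a is counted twice and b, ¬b once each; then a and b once more
```

The resulting counts match the tables narrated above: two for ¬a, one each for a, ¬b, and two for b.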
Counting is linear in the DNF of Φ. The algorithm seems to split the DNF too on every recursion, so its complexity seems to be the number of variables times the size of the DNF, which may be exponential, but for certain decision trees may be tractable. Hence the semi-tractable part.
Guess what rests is determining all the mistakes I made. It looks way too simple not to be known.
01-29-10: It might give small trees, but not minimal circuits. Shared variables among trees are not factored out. Also, it doesn't take into account that the left-hand side of a circuit may be expressed in terms of factors of the right-hand side.
This mind never stops.
1/27/10
Technology at an Unbelievable Price
When I read that Apple introduced a new device, the 'iPad,' at an unbelievable price, I thought it must be irony. They make quality products, but are hardly known for their competitive pricing. Still, at a starting price of five hundred bucks, you can own a tablet, a computer consisting of one touchscreen, built by Apple, which is always good for your coolness attribute.
The specs: 0.5 inches thin. 1.5 pounds. 9.7-inch IPS display, full capacitive multitouch, 1 GHz Apple A4 chip, 16, 32, or 64 GB of Flash storage. Extensions and add-ons: 3G, dock, keyboard. Best of all, 10 hours of battery life.
It's smaller and lighter than any netbook, and just looks like an oversized iPhone. What they really did right is running a variant of the iPhone OS on top of it instead of Mac OS. Instead of point-and-click you can now touch, scale, wave, prod, and gesture around applications, folders, photos, puppets, which is exactly the right thing to do on a tablet. Yes, you can type, but that remains a kludge.
And that's the thing: from the start I disliked windows, because of its, well, windows. It's all about pixel real estate, and I always want my application to hold the maximum number of them, preferably almost the whole screen. My MacBook Pro has a functional multitouch pad: why am I still bound to a menu and windows, where I repeatedly need to point at a few pixels, when a wave and a click could easily do the same?
Pointandclick? The same future as VHS.
Waving...
Susskind, 't Hooft and Billiards
Susskind, in his lectures at Stanford, made a bold claim that the universe has a unique future and a unique past. Moreover, his view was that both the future and the past can be determined from any given point in time. A reasonable view for a physicist, and it follows our perception of time as linear. We may not hit rewind, still, and play, but we experience it as a movie.
If I read 't Hooft correctly, who is developing an alternative to QM, then he believes that the future can have multiple pasts, i.e., two states may converge to one new state. There's loss of information. An equally bold claim, but somewhat predicted by mathematics. Multiply or add two numbers and you can't uniquely reconstruct the original values.
It seems the problem boils down to a game of billiards: is it possible to hit the balls such that a new position doesn't tell what the original position was? Often, it would seem so, but there's a way out in the sense that we could consider the state of the whole room, not only the table.
Anyway, in case of doubt, I suggest they ask Ben Davies.
Just kidding here, I don't know anything about physics. Now I am really going back to programming...
1/26/10
SubsetSum?
Why does SUBSETSUM, sometimes, trivialize so fast? Say S = {a, b, ...} is a set of integers adding up to d. Then I can represent each integer, say a, with its bitwise encoding a_n, .., a_0, and you end up with (a bit confusing, but c is the carry, an expression in terms of variables of the following column):
c_n c_{n-1} ... c_1 c_0
a_n a_{n-1} ... a_1 a_0 · s_0
b_n b_{n-1} ... b_1 b_0 · s_1 +
-------------------------------
d_n d_{n-1} ... d_1 d_0
Now, a, b, .. and d are constants, so if you would fill in the zeroes and ones, you'd end up with something like:
c_n c_{n-1} ... c_1
s_0  0   ...  s_0  0
s_1  s_1 ...  0   s_1 +
-----------------------
 0   1   ...  1    0
Now you can manipulate the s_i out by replacing them with the XOR of the rest of the column, the digit and the carry, especially when taking into account the carry expression and the fact that you can swap rows. Guess I should read somewhat. [Abraham D. Flaxman and Bartosz Przydatek, Solving Medium-Density Subset Sum Problems in Expected Polynomial Time.] Right, there's a whole folklore around this problem; the article was on mod M problems with medium density, and it's a probabilistic algorithm with an expected running time.
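As a contrast to the probabilistic algorithm in the paper, tiny instances like the ones I scribble here are already decided by the classic pseudo-polynomial dynamic program, a standard textbook routine:

```python
def subset_sum(nums, target):
    """Classic pseudo-polynomial DP: 'reachable' collects every sum
    obtainable from some subset of the numbers processed so far."""
    reachable = {0}
    for x in nums:
        reachable |= {s + x for s in reachable}
    return target in reachable
```

subset_sum([5, 3, 4], 7) finds 3 + 4. The catch is that the set of reachable sums grows with d itself, not with the number of bits of d, which is exactly where the exponentiality hides.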
Tinkering... It's not too exciting...
1/25/10
AKS, Pascal, Primality and Subsetsum
Something in the back of my mind nags about the square of Pascal, primality and subset sum. I read some of Scott Aaronson's lectures, which got me thinking again about irreversibility, XORSAT, linear algorithms, BPP=P and BQP.
Why is testing primality on the nth line of Pascal's square seen as exponential? Looks like cubic, but prohibitive, to me. Missing stuff. Having a look at Karatsuba and Strassen again.
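For what it's worth, the test I have in mind can be sketched as follows: n ≥ 2 is prime exactly when every interior entry C(n, k) of the nth row is divisible by n. Walking the row costs n steps on numbers of up to roughly n bits, so it is polynomial in n but exponential in the input size log n, which is presumably why it is 'seen as exponential':

```python
def is_prime_pascal(n):
    """n >= 2 is prime iff n divides C(n, k) for all 0 < k < n."""
    if n < 2:
        return False
    c = 1                            # C(n, 0)
    for k in range(1, n):
        c = c * (n - k + 1) // k     # C(n, k) from C(n, k-1)
        if c % n != 0:
            return False
    return True
```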
Stopped programming for a while and am looking at what gives if you simplify SUBSETSUM by manipulating out unknowns. Guess I need a hobby.
Quantum dump, while programming.
1/23/10
It Doesn't Fit
Meme of today: How much information fits into a digital circuit which is of squared form?
If I can combine two minimal terms to derive a minimal term, then that gives a bottom-up strategy for solving instances of problems. How hard is it to check you're near minimal, given the number of operations performed?
It becomes interesting when you look at the encoding of SUBSETSUM, which I trivialize by assuming the numbers and the set have the same dimension n. In the digital encoding you end up with a 'square' term, corresponding to the space-time complexity, with width and breadth cn, and depth therefore 2dn. FACTOR looks similar. (Note that I am looking at propositional formulas encoding instances.)
Now we look at what a variable means. Say we have two, a and b; that gives 2^(2^2), thus 16, different functions. For a variable, you can assume it encodes a mapping on the state space. For example, a encodes 0011 and b encodes 0101. Similarly, (aa), where I omit the operator since we only have one, encodes 1100, and (ab) encodes 1110. My favorite, (aa)a, see the logo/favicon, encodes a tautology. Enumerating the terms gives all functions eventually.
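These encodings are easy to check mechanically: treat each function as a 4-bit truth table and the one operator, nand, as complement-of-and. A sketch (the bit order is my arbitrary choice):

```python
MASK = 0b1111                 # truth tables over (a, b) are 4-bit vectors

def nand(x, y):
    return ~(x & y) & MASK

a, b = 0b0011, 0b0101         # the encodings used above

not_a = nand(a, a)            # (aa), i.e. ¬a: 1100
a_nand_b = nand(a, b)         # (ab): 1110
taut = nand(nand(a, a), a)    # (aa)a, the tautology: 1111

# enumerating terms gives all functions eventually: close {a, b} under nand
funcs = {a, b}
while True:
    new = funcs | {nand(x, y) for x in funcs for y in funcs}
    if new == funcs:
        break
    funcs = new
```

The closure stops at exactly 16 functions, confirming that nand generates every function of two variables.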
There is only a squared number of operations performed, and the depth is also bounded, so the maximal information in the term, and the minimal term size, is, as far as I can see, the Shannon density? Guess it's standard complexity theory; it's been a while.
01-26-10: Thought it over; I defined a tautology here. If you define the 'Shannon density' as the minimal term for a proposition, then of course everything sticks. For FACTOR, the encoding of a product is quadratic in time-space, but the minimal term size of a concrete instance is upper bounded by the number of divisors (up to the square root) after manipulating the term and algebraically removing one unknown.
01-26-10: Perfect, I found the bound again on the number of divisors: the divisor bound asserts that, as n gets large, d(n) ≤ n^(O(1/log log n)),
or the more precise bound d(n) ≤ n^((log 2 + o(1))/log log n).
01-26-10: Of course, it's rather easy to describe an exponential number of bit strings with a term recognizing bit vectors. So the answer is: quadratic information, even though a term may describe an exponential number of symbols.
01-27-10: Oh, right, a variable encodes an exponential number of bit vectors. A term encodes an exponential number of paths to a variable. Variables can occur on even and odd length paths. Can we hope to compress along those paths? I think I tried that; it's And-Or tree compression, and close to a variable picking heuristic from SAT solving.
(Great, in a day I went from logarithmic (which I remembered), to linear, to an exponential bound on the Shannon density of divisors. Messy. Ah well, been five years since I looked at this.)
Oi, Shannon, you home?
1/22/10
An Old Strange Observation
Assume FACTOR and SUBSETSUM are independent of the digital base you choose for their constituents. Then why would they have different complexity? It's a (meaningless) observation that FACTOR just looks harder than SUBSETSUM in its digital circuit form.
Let subsum({5,3,4}) = 7 be the question whether the summation of a subset equals 7. Now, that is equal to the question: is there a vector v such that (5,4,3)·v = 7? We could encode that digitally such that <5><4><3>∗<v> = <x><7><y>, where the constants are zero padded and the rest are unknowns. Now, a blackboard multiplication:

<5><3><4> · v_i
<5><3><4> · v_j
<5><3><4> · v_k +
-----------------
<  x  ><7><  y  >

Where the bits v_i..v_k are part of v. There seems to be some structural relation between FACTOR and SUBSETSUM; if you could divide out unknowns, NP would be easy.
So, more precisely, FACTOR equals <x>∗<y> = <c>, where c is a constant, and SUBSETSUM, apart from its trivial encoding, also reduces to <c>∗<v> = <x><d><y>, where c and d are constants and v is intermixed with zeroes. Great, now what? It feels like the same problem.
Hogwash.
Silliness... How did XORSAT work again? Right, carry cannot be expressed in XOR form.
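The <c>∗<v> = <x><d><y> reduction can be played out numerically. A sketch, where the block width of 8 bits is my own assumption, chosen wide enough that no carry leaks between the digit blocks:

```python
from itertools import product

w = 8                          # block width; all sums must stay below 2**w
B = 1 << w
nums, target = [5, 3, 4], 7
m = len(nums)
c = sum(n * B ** i for i, n in enumerate(nums))    # the zero-padded constant

hits = []
for bits in product((0, 1), repeat=m):
    # v is the selection vector, intermixed with zeroes via the block width
    v = sum(bit * B ** j for j, bit in enumerate(bits))
    middle = (c * v >> (w * (m - 1))) & (B - 1)    # the <d> block of <x><d><y>
    if middle == target:
        hits.append(bits)
# exactly one selection works here: it picks 3 + 4 = 7
```

The middle block of the product c∗v is exactly the subset sum, as long as the padding keeps the blocks from carrying into each other.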
1/21/10
On ConwayKochen
The free will theorem of John H. Conway and Simon B. Kochen states that, if we have a certain amount of "free will", then, subject to certain assumptions, so must some elementary particles. The proof of the theorem relies on three axioms.
Cool, what are we looking at?
Spin is not interesting, we have a particle with a certain behavior, it consistently shows a measurable property in two dimensions. Twin is entanglement, I'll come back to that. Fin is the assumption that information cannot travel faster than the speed of light.
 Fin: There is a maximum speed for propagation of information, not necessarily the speed of light. This assumption rests upon causality.
 Spin: The squared spin component of certain elementary particles of spin one, taken in three orthogonal directions, will be a permutation of (1,1,0).
 Twin: It is possible to "entangle" two elementary particles, and separate them by a significant distance, so that they have the same squared spin results if measured in parallel directions. This is a consequence of, but more limited than, quantum entanglement.
Cool, what are we looking at?
Spin is not interesting: we have a particle with a certain behavior, it consistently shows a measurable property on two of three axes. Twin is entanglement, I'll come back to that. Fin is the assumption that information cannot propagate faster than some maximum speed.
All axioms are true, at least according to quantum mechanics. The only silly thing is that 'entanglement' is nothing more than a mathematical property, and 'a state space collapse' nothing more than a mathematical action. It's entirely similar to: if I have 'x = 3 + y,' then determining a value for 'x,' or 'y,' immediately determines the other side of the equation. Einstein and Bohr had fervent debates about this; Bohr decided that the equations just work, a pragmatic argument. A lot of philosophers unfortunately decided that 'equations' are related to the world, and built 'free will' arguments on that.
Incidentally, there is a lot wrong with the free will theorem. Free will is normally associated with nondeterminism, and that's where the argument goes wrong immediately: nondeterminism cannot be observed. If I give you a coin, it is impossible to state anything probabilistic about the outcome of what you give back to me, except for the fact that it'll be heads or tails. This is essentially different behavior from a coin flip, which follows stochastic rules. So say we have a nondeterministic system E which forces a system P into a state. It remains impossible to observe whether P is a deterministic or a nondeterministic system, since nondeterminism cannot be observed; we will never know whether P flips or turns the coin. Thus the implication, free will for the experimenters plus the axioms implies free will for the particle, cannot be true. At least the anthropomorphic claim cannot hold, even given all the axioms.
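As a toy illustration of that point, here is a Python sketch (all names are mine): any finite trace produced by a genuinely stochastic coin can be reproduced exactly by a deterministic player, so no finite observation separates "flipping" from "choosing".

```python
import random

def stochastic_source(n):
    """A coin that genuinely flips: outcomes follow stochastic rules."""
    return [random.choice("HT") for _ in range(n)]

def deterministic_source(script):
    """A player who simply decides; any fixed script is a valid run."""
    return list(script)

# whatever trace the stochastic coin produces...
trace = stochastic_source(10)
# ...the deterministic player can produce the identical trace, so no
# finite experiment tells the two systems apart
assert deterministic_source("".join(trace)) == trace
```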
Quantum mechanics predicts a 50%/50% distribution of spin up or down, and 100% correlation at the two sites. This cannot be nondeterminism, since that cannot be observed; it is stochastic behavior. Stated differently, if particles had free will we should observe traces where a particle just decided to be up for no apparent reason. But if we have stochastic behavior, then the underlying model can be expected to be fully deterministic, by Einstein's 'God doesn't throw dice' argument. Assuming a die leads to a world where some guy with a beard does something whenever we observe something, and also keeps the distribution in order. Moreover, the anthropomorphic argument now leads to the immediate conclusion that we don't have free will, which is irrelevant, since by the above reasoning it was flawed anyway.
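The two statistics quoted here are easy to reproduce with a local deterministic toy in which a shared hidden value is fixed at the source. This is only a sketch of the parallel-direction case discussed above, not a model of full quantum correlations:

```python
import random

def entangled_pair():
    # a shared hidden value fixed when the pair is created; once created,
    # both measurement outcomes are fully determined (a toy model)
    hidden = random.choice(["up", "down"])
    return hidden, hidden

samples = [entangled_pair() for _ in range(100_000)]
ups = sum(a == "up" for a, _ in samples)
correlated = sum(a == b for a, b in samples)

assert correlated == len(samples)          # 100% correlation at the two sites
assert 0.48 < ups / len(samples) < 0.52    # ~50%/50% marginal distribution
```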
So what do we have? A theorem where the implication cannot hold, the axioms are debatable, and the conclusion contradicts experiment. Great.
I didn't know about Conway-Kochen; this is the condensed result of a discussion over at LtU, for which I should thank the other contributors.
1/20/10
Back in Nand Land
Stopped for a while on QM; it's either the work of a genius or a madman, and I don't really feel like thinking about it anymore. There are a lot of fuzzy conclusions even after looking at it for a short while. It works, but the interpretations of it and the derived theorems are sometimes a mess.
A question I've been wondering about deals with minimal Nand terms. It is trivial to enumerate them: start off with a fixed number of variables, call that set S0, then S1 is the set of all combinations up to isomorphism of S0, S2 the set of all combinations of S0 and S1, etc.
Now of course that leads to a numbering on terms, if you consider a set to be a vector for a moment. A minimization strategy is then a function mapping a term's number to the numbers of terms which encode the same state space. Note that calculating such a number for any given term by computer would be prohibitive, since a term encodes one function out of 2**(2**n), and a term's number will be at least that big. The question is: can you do without the number, what does the mapping look like which relates terms by trivial rewrites, and what are the in-degree and out-degree of terms? And what are the trajectories in Sx: how far, if at all, do you go up or down in x?
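The layered enumeration can be sketched in a few lines of Python, identifying each term with its truth table over N variables, stored as a bitmask over the 2**N input rows (all names here are mine):

```python
# Enumerate Nand terms layer by layer, identifying each term with its
# truth table so equivalent terms collapse onto one function.
N = 2                      # number of variables
ROWS = 2 ** N              # rows of the truth table

def var_table(i):
    # truth table of variable i: bit `row` is set when row assigns i true
    return sum(((row >> i) & 1) << row for row in range(ROWS))

def nand(a, b):
    mask = (1 << ROWS) - 1
    return ~(a & b) & mask

# S0 holds the variables; each next layer nands everything seen so far
seen = {var_table(i): 0 for i in range(N)}   # truth table -> first layer seen
for layer in range(1, 8):
    new = {nand(a, b) for a in seen for b in seen} - set(seen)
    for t in new:
        seen[t] = layer
    if len(seen) == 2 ** ROWS:
        break

# Nand is functionally complete, so all 2**(2**N) = 16 functions appear;
# seen[t] is a crude proxy for the layer of a minimal term computing t.
```

For N = 2 the enumeration saturates after a few layers; the interesting (and expensive) part would be keeping the terms themselves and studying the rewrite trajectories between layers.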
P!=NP suggests there is no trivial strategy, or that trivial strategies have exponential length, but you can still study the relationships and 'possible' confluent functions. With a big computer, that gives an empirical study of the question.
The interesting thing is: if PRIME corresponds to a polynomial deterministic minimization, why wouldn't FACTOR?
It's been years since I was interested in this; it seems I forgot a lot. Some terms have polynomial rewrite strategies, I should look that up again.
Armchair programming and loose thinking.
1/18/10
Are Phenomena on Two Axes in Three Dimensions Strange?
There is a problem known as the Kochen-Specker theorem, often trivially represented as the SPIN axiom. You can measure a spin-1 particle on three axes, and observe that the spin is parallel to the direction (+1), perpendicular to it (0), or antiparallel to it (-1). The SPIN axiom states that the square of the spin is a permutation of (0,1,1): something is happening on two axes.
I really don't know what is going on here, but let's ask another question. Is it strange to observe something on only two axes in three dimensions? The answer: No!
Imagine you throw a ball against a wall, or bounce a fluid in space. If you look at it from three axes, where two are parallel to the wall, it will alternate between a flattened and a stretched circle on those two, and on the remaining axis it'll look like a shrinking and expanding circle. If you assume that it is easy to observe the flattening/stretching, but not the shrinking/expanding, you end up with the SPIN axiom. If you assume it can only vibrate aligned to the spin, you don't need to square. (In the picture you'd bounce in the 0-axis direction.)
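The bouncing-ball picture can be put into a small Python sketch (a toy model, all parameters made up): a volume-preserving ellipsoid oscillates along one axis, and classifying each of the three views as "aspect ratio changes" (1) versus "only uniform scaling" (0) yields a permutation of (1,1,0), whichever axis it bounces along:

```python
import math

def semi_axes(bounce_axis, t):
    # a volume-preserving ellipsoid: the bounce axis oscillates,
    # the other two compensate equally (a toy model, not physics)
    b = 1.0 + 0.5 * math.sin(t)
    a = 1.0 / math.sqrt(b)          # keeps the volume a*a*b constant
    axes = [a, a, a]
    axes[bounce_axis] = b
    return axes

def squared_spin(bounce_axis):
    results = []
    for view in range(3):
        # looking along `view` shows the cross-section of the other two axes
        i, j = [k for k in range(3) if k != view]
        ratios = set()
        for t in (0.0, 1.0, 2.0):
            a = semi_axes(bounce_axis, t)
            ratios.add(round(a[i] / a[j], 9))
        # changing aspect ratio (flatten/stretch) reads as 1,
        # a circle that merely shrinks/expands reads as 0
        results.append(1 if len(ratios) > 1 else 0)
    return results

# whichever axis it bounces along, the three views read (1,1,0) permuted,
# with the 0 on the bounce axis
for axis in range(3):
    assert sorted(squared_spin(axis)) == [0, 1, 1]
```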
Note that this is the reverse question, it says nothing about the paradox or the spin axiom. It is a model which just satisfies the axiom, nothing more. Though now I am wondering if people are actually measuring spin...
The essence of QM: it's damned hard to get periodic behavior out of linear transformations? Or just a gimbal lock?
A Scientific Test for Free Will
Associated with quantum theory is the observation that if a particle can remain in an unknown state until observed, that would give us free will. It is now assumed that from entanglement we could derive free will, since it can be observed even over several hundred kilometers.
Does entanglement imply free will? No, because it is not proof of any nondeterministic behavior. The "I don't know what is in the box until I look in it" kind of nondeterminism is invariant to whether you look into two boxes or one, entangled or not.
If anything, correspondence between two boxes shows determinism by Occam's razor, and from that a local relativistic causal world.
Is there a scientific test for free will/nondeterminism? Yes, it's trivial, with some hand waving. Take any process which is the direct translation of ( red | ( red | green ) ): a nondeterministic choice between red and the nondeterministic choice between red and green. By definition, that equals ( red | green ), the nondeterministic choice between red and green.
Say you build a device like that, run it a hundred thousand times, and it gives a different distribution each time the experiment is repeated; then I would accept that life is nondeterministic.
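A sketch of such a device, under the assumption that each choice is implemented as a fair coin flip (which is exactly the distribution that true nondeterministic choice does not promise):

```python
import random

def flat():
    # ( red | green ) read as a fair probabilistic choice
    return random.choice(["red", "green"])

def nested():
    # ( red | ( red | green ) ): outer choice between red and an inner choice
    inner = random.choice(["red", "green"])
    return random.choice(["red", inner])

n = 100_000
p_flat = sum(flat() == "red" for _ in range(n)) / n      # ~0.50
p_nested = sum(nested() == "red" for _ in range(n)) / n  # ~0.75
```

A stochastic implementation reliably separates the two processes (red about 75% of the time versus 50%), while nondeterministic choice declares them equal by definition; a device whose distribution stayed fixed across repetitions would be behaving stochastically, not freely.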
For the sceptic: no, there cannot be a conclusive proof of nondeterminism.
From Quantum Lambda Calculus to M&Ms to Free Will? Nice trip... Back to compilers.