
## 8/29/10

### Mathematician needs to be Shot

## 8/18/10

### What about Now?

### Why I think Deolalikar's Proof Probably is Wrong

## 8/9/10

### A Proof That P Is Not Equal To NP?

## 8/1/10

### Qualcomm's E-Reading Screen

If they can get it to work cheaply, then this tech is going to end up everywhere.

### A Sin and a Shame

Bits of Marco Devillers

I was reading up on Galois connections, which seem all the rage, and all that.

Now you've got lower and upper adjoints, denoted *f*^{*} and *f*_{*}.

One can shoot people for wasting one's time, right?

Puhleease?
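For what it's worth, the defining law of those adjoints, f^{\*}(m) ≤ k iff m ≤ f_{\*}(k), can be checked mechanically. A minimal Python sketch using multiplication by 3 and floor division by 3 as the adjoint pair; the pair is my own illustration, not anything from the text I was reading:

```python
# A concrete Galois connection on the integers (ordered by <=):
# lower adjoint f^*(m) = m * 3, upper adjoint f_*(k) = k // 3.
# Defining law: f^*(m) <= k  if and only if  m <= f_*(k).

def lower_adjoint(m):
    return m * 3        # f^*: multiply by 3

def upper_adjoint(k):
    return k // 3       # f_*: floor-divide by 3 (rounds toward -infinity)

# Verify the law on a small grid of integers.
for m in range(-20, 21):
    for k in range(-20, 21):
        assert (lower_adjoint(m) <= k) == (m <= upper_adjoint(k))

print("Galois connection law verified")
```

Note that floor division here has to round toward negative infinity (as Python's `//` does); truncating division would break the law for negative numbers.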


This is just a funny post, for, erm, well, fun, really.

Time has a past, a present and a future. And, for some reason, time is fixed. Westerners fix it around the presumed birth date of a guy, Chinese are not really sure, but time surely started with the reign of an emperor, and Arabs are sure time started when some guy took his holidays very seriously some time ago.

I say, we get rid of all the prejudice and start a calendar which starts time at, well, now. Just imagine how convenient it'll be. Make an appointment two weeks from now, just look ahead two weeks in your calendar, rip out a page each day. Or, your favorite birthday of your pet dog, well just three months and fifteen days from your own birthdate.

The good thing, no computer ever needs to synchronize anything anymore, since everything is relative to now, which is well, just now, at this exact moment in time.

Sure, historians might not like it, but hey, it's surely fixable.

Let's face it. The fact is that stuff happens now, not yesterday, and surely not tomorrow.


First off, I don't hold a PhD in mathematics and I didn't read his proof in full; that is for the experts. However, I know a few things, and this is a high-level comment on his attempt, mostly based on what I got from other people.

*What's Up With That P=NP Question Anyway?*

It dives into a fundamental question of computer science: for some problems it is very hard to find a solution, but very easy to verify one.

A good analogy is doing groceries. Say, you did your groceries and you want to fit them into the back of your trunk. Will it fit?

Now, it is *hard* to find whether a solution exists to the above problem. You might need to try all possible ways of packing them into your trunk before you find out that they do, or maybe don't, fit. But it is *easy* to verify any given solution: just pack your stuff according to a description, see if it fits, and you're done.
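To make the asymmetry concrete, here's a toy one-dimensional version of the trunk problem in Python (the volumes, capacity, and the "exactly fill the trunk" simplification are mine, for illustration): finding a packing means trying subsets, which blows up exponentially, while checking a proposed packing is a single pass.

```python
from itertools import combinations

volumes = [3, 7, 2, 8, 5]   # made-up grocery volumes
capacity = 15               # made-up trunk volume

# SEARCH: find a subset of groceries that exactly fills the trunk.
# Worst case, this tries all 2^n subsets.
def find_packing(volumes, capacity):
    for r in range(len(volumes) + 1):
        for subset in combinations(volumes, r):
            if sum(subset) == capacity:
                return list(subset)
    return None

# VERIFY: given a proposed packing, one addition and one comparison.
def verify_packing(packing, capacity):
    return sum(packing) == capacity

packing = find_packing(volumes, capacity)
print(packing, verify_packing(packing, capacity))  # → [7, 8] True
```

The gap between those two functions is exactly the gap the P=NP question asks about: verification is cheap, and nobody knows whether search has to be expensive.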

*What's Up With That Phase Transition Anyway?*

Phase transition is also an easy concept. Given the above 'fitting groceries in my trunk' problem, it basically deals with the question: for which lists of groceries can you find a solution fast?

In case you bought two match-boxes, and try to fit them, you can easily determine a solution. The problem is *underconstrained*. In case you also bought two refrigerators, you can easily see there isn't a solution, just making one fit may already be hard. The problem is *overconstrained*. But for a lot of lists of groceries, which might fit seemingly, or might just not, you're on a *phase transition bound* where, suddenly, there are no computer programs which can determine in a fast manner whether they'll fit or not. The best computer programs may even take the rest of the life of this universe spending time determining that.
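The same picture shows up crisply in random 3-SAT, which is the usual setting for phase-transition talk. A rough, deliberately tiny Python sketch (naive brute-force check, made-up sample sizes, my own construction); the threshold for random 3-SAT is believed to sit near a clause-to-variable ratio of about 4.27:

```python
import random
from itertools import product

def random_3sat(n_vars, n_clauses, rng):
    # A clause is 3 distinct variables, each negated with probability 1/2.
    return [[(v, rng.random() < 0.5)
             for v in rng.sample(range(n_vars), 3)]
            for _ in range(n_clauses)]

def satisfiable(n_vars, clauses):
    # Brute force over all 2^n assignments -- only sane for tiny n_vars.
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[v] != negated for v, negated in clause)
               for clause in clauses):
            return True
    return False

rng = random.Random(42)
n = 10
for ratio in (2.0, 4.3, 7.0):   # under-, near-, over-constrained
    sat_count = sum(satisfiable(n, random_3sat(n, int(ratio * n), rng))
                    for _ in range(20))
    print(f"ratio {ratio}: {sat_count}/20 satisfiable")
```

Under-constrained instances come out almost all satisfiable, over-constrained ones almost never, and the instances near the threshold are the ones where naive search thrashes.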

*Does That Matter?*

The problem is that we don't know whether better programs can exist. And that's the heart of the problem. Oversimplifying the fitting problem again: if P != NP, which is what we assume pragmatically at the moment, no fast algorithm exists; and if P = NP, somewhere a fast algorithm may exist.

*What Did Deolalikar Probably Do?*

I didn't read his proposal. But he starts off with a construction, assumes P=NP, and shows that it, according to what we know now, cannot compute fast solutions since we'll hit phase transitions. But that's the crux: phase transitions only exist because we don't have fast algorithms, which is why most of us assume P!=NP. He just proved his own assumption.

Pragmatically, he may be right. But, pragmatically, for a lot of applications, 22/7 just equals π.


Interesting; as blogged before, I am one of the few who believe that P might equal NP. I couldn't follow the whole argument from the blog, but it seems he uses results from randomized SAT examples and phase transitions. As far as I know, it's possible to generate SAT examples that are either easy or hard exactly on the transition bounds, so it's interesting how that will turn out. (That is not to say that, in general, random SAT structures with a certain ratio aren't 'harder' than others; it's just not a law.)

Guess I am wrong anyway in my interpretation of his proof. Feels too much like: P doesn't equal NP since sat-solving is hard, which reverses an arrow.

Still, I don't see how he lifts phase transitions: he needs to show that *all* possible SAT solvers have phase-transition behavior on (specific) SAT instances, and I just don't see how one can generalize that from the behavior of a few current DPLL-based solvers. (Got bored, read some of the arguments against his construction, starts to make some sense, again.)
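For context, the DPLL solvers mentioned here all share the same skeleton: unit propagation, then backtracking on a chosen literal. A deliberately naive Python sketch of that skeleton (my own toy version, not any particular solver's implementation, and no clever heuristics):

```python
def dpll(clauses):
    """Minimal DPLL. A clause is a list of nonzero ints; -v means 'not v'
    (DIMACS style). Returns True if satisfiable, False otherwise."""
    # Unit propagation: a one-literal clause forces that literal.
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        simplified = []
        for clause in clauses:
            if lit in clause:
                continue                      # clause is satisfied, drop it
            reduced = [l for l in clause if l != -lit]
            if not reduced:
                return False                  # empty clause: conflict
            simplified.append(reduced)
        clauses = simplified
    if not clauses:
        return True                           # every clause satisfied
    lit = clauses[0][0]                       # naive branching choice
    return dpll(clauses + [[lit]]) or dpll(clauses + [[-lit]])

print(dpll([[1, 2], [-1, 2]]))                 # → True
print(dpll([[1, 2], [-1, 2], [-2, 3], [-3]]))  # → False (unsatisfiable)
```

Modern solvers (the CDCL descendants of DPLL) add clause learning, watched literals, and branching heuristics on top of this, which is exactly why the behavior of a few current solvers says so little about *all* possible solvers.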

A distinguished author comments on the proof at the CACM. It didn't pass scrutiny, both on the use of finite model theory (which I don't understand) and, if I read it correctly, the author has similar doubts on whether characterizing an NP problem as being hard on phase transition boundaries of some sort is actually a characterization of an NP problem.

The latter interests me. To most people in the field it feels natural, therefore it must be true. I don't know; I guess one of the results of his failed attempt might actually be that people start looking into whether such an assumption is actually correct. I.e., does the feeling hold up to mathematical rigor?

(Hmm, I wonder whether I got the second part of his argument right. Should look into it. Read a bit further; of course his constructions are way more elaborate than my weak simplification of his approach. He starts off by assuming P=NP and tries to derive a contradiction; no idea at the moment. I find it strange that he actually can derive a contradiction. He poses specific clauses, and derives that determining a solution is impossible, though he assumes a P=NP algorithm?)



From the NYT article:


“They threw out far more workers and hours than they lost output,” said Professor Sum. “Here’s what happened: At the end of the fourth quarter in 2008, you see corporate profits begin to really take off, and they grow by the time you get to the first quarter of 2010 by $572 billion. And over that same time period, wage and salary payments go down by $122 billion.”

Oh my god, Marx was right?
