Are there really hard problems? We know there are from our experience. But how do we know they are really hard? Maybe we are just not smart enough to solve them? Maybe we haven’t checked all the possibilities yet?
So I will claim something like this: There is no mathematical proof that hard problems exist. If they do exist, their existence has to be taken as an axiom. It can’t be proved mathematically.
So let’s define what a hard problem is. A hard problem p is any decision problem that is computable, but is proved not to be in O(n). Or, more generally: there is a function f, increasing and unbounded, such that for any positive integer N there is an n > N for which the number of steps to calculate p(n) is at least f(N). f can be, for example, n, 2n, log2(n) or a(n) defined above. It doesn’t matter, as long as it’s computable.
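Read literally, this definition can only ever be checked over a finite range. Here is a minimal sketch of what such a check looks like, assuming a hypothetical helper steps_to_compute(p, n) that runs p on n and reports how many steps it took; none of these names come from the text, and the function can only refute or weakly support hardness, never establish it.

```python
def is_hard_up_to(p, f, steps_to_compute, max_N, max_n):
    """Finite check of the definition: for every N <= max_N, look for some
    n with N < n <= max_n whose computation of p takes at least f(N) steps.
    A genuinely hard p would pass this for *every* N and arbitrarily large n,
    which no finite search can establish."""
    for N in range(1, max_N + 1):
        if not any(steps_to_compute(p, n) >= f(N)
                   for n in range(N + 1, max_n + 1)):
            return False  # no witness n found for this N within the bound
    return True
```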
So let’s assume that p is a decision function (again, computable) that is proved not to be in O(n) [or, more generally, O(f(n))]. Then, as N runs over the natural numbers, there is a computable function n(N) for which the number of steps to calculate p(n(N)) is at least f(N). It can be seen that n picks out a subset of the natural numbers such that, for every N, the number of steps to calculate p(n(N)) is at least f(N).
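One natural way to read “a computable function n(N)” is as the search sketched below: scan upward from N + 1 until an input is found whose computation takes at least f(N) steps. This is only a sketch under the assumption that the hypothetical step counter steps_to_compute from the previous snippet exists; whether this search really defines a total, computable function is exactly what the rest of the argument puts in question.

```python
def n_of(N, p, f, steps_to_compute):
    """Return the first n > N whose computation of p takes at least f(N)
    steps.  If p really needs f(N) steps infinitely often, this terminates
    for every N; otherwise it may loop forever."""
    n = N + 1
    while True:
        if steps_to_compute(p, n) >= f(N):
            return n
        n += 1
```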
Now we have a series of bits (0 or 1), p(n(N)), which is proved not to be in O(f(n)): calculating each bit has been proved to be a hard problem. Now we can define a new subset of n, let’s call it m, such that m(n) returns 1 if p(n) = p(1). It can be seen that calculating m(n) takes at least f(N) steps for each n(N), and remember that f(N) is increasing. But how long does it take to calculate p(m) for every m? Exactly one step, since p(1) is already known.
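The subset m can be sketched as a membership test against the single value that is already known. In the snippet below, first_value stands for the text’s p(1), computed once in advance; the names are mine, not the text’s.

```python
def m_contains(n_value, p, first_value):
    """Membership test for m: keep n_value exactly when p(n_value) agrees
    with the already-known value p(1).  Note the tension the text exploits:
    deciding membership still evaluates p, but predicting p's value on a
    known member of m does not."""
    return p(n_value) == first_value
```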
So we are now left with an infinite subset of N for which it has been proved that calculating each bit is a hard problem, yet we have just shown it’s not hard. And if m is not an infinite subset, we can take its complement within n, which must then be infinite, and the same argument applies to it. The only way to get out of this is to conclude that at least one of the functions we used here is not computable, and therefore not well defined.
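To make the bookkeeping concrete, here is a toy run of the sketches above, with every missing ingredient filled in: a trivial decision function p, a crude stand-in for the step counter, and f(N) = N. Nothing here is actually hard; it only shows how the pieces fit together, and it reuses n_of and m_contains from the earlier snippets.

```python
def p(n):
    # toy decision function: does n have an even number of 1-bits?
    return bin(n).count("1") % 2 == 0

def steps_to_compute(p_func, n):
    # crude stand-in for "number of steps": here simply the bit length of n
    return n.bit_length()

def f(N):
    # the increasing, unbounded f from the definition
    return N

first_index = n_of(1, p, f, steps_to_compute)   # the text's n(1)
first_value = p(1)                              # the text's p(1), computed once

print(m_contains(first_index, p, first_value))  # membership in m: still evaluates p
print(first_value)                              # p on any member of m: read off, no recomputation
```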
But it does not matter how we prove such a thing. Any proof will lead to a contradiction. The only conclusion is that there is no proof that any problem is hard. But does it mean that every problem is easy? It doesn’t. It just means that there is no proof it is hard. We can take one part of the proof for granted, for example the function n, and define its existence as an axiom. Or we can conclude it exists if there is no proof that it doesn’t exist. It’s infinitely complicated. No proof will ever be found.
Even if we allow functions to exist without being computable, their existence leads to the contradiction above. Does this mean that every computable problem is easy? At least in theory, yes. No computable problem can ever be proved to be hard. It is possible to display the first million bits of s(n) above in no time. We just don’t know how.