Alan Turing proposed the "halting problem", paraphrased: given a program and its input, decide whether it eventually terminates or runs forever.
He also proved that there is no general solution.
The proof is easy, if you know how to think about this kind of problem: if there were a halting-detector program, one could write a program that invokes the halting-detector on itself (it would simply contain the halting-detector's code), and then does the opposite of whatever the detector said it would do.
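To make the construction concrete, here is a minimal Python sketch (the names `make_contrarian` and `pessimist` are mine, not Turing's): given any candidate detector, we build the one program that defeats it.

```python
def make_contrarian(halts):
    """Given any candidate detector halts(program) -> bool,
    build a program that does the opposite of its prediction."""
    def contrarian():
        if halts(contrarian):   # detector predicts "terminates"...
            while True:         # ...so loop forever instead
                pass
        # detector predicts "does not terminate": halt immediately
    return contrarian

# No concrete detector survives its own contrarian. For example,
# a detector that always answers "does not terminate":
pessimist = lambda program: False
make_contrarian(pessimist)()    # returns at once - the pessimist was wrong
```

Whatever `halts` answers about its own contrarian, the contrarian makes that answer false; that is the whole contradiction.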
Imagine that you, with all your smarts and your aptitude for algorithms and abstraction, have found employment as a halting detector, sitting in the software equivalent of a lemonade stand with a sign.
Some program is submitted. You read the program code, and soon come upon the part where it uses a halting detector - yup, there's your exact duplicate over there, waiting to get called and think exactly what you're thinking now.
You completely understand what's going on. (And so will your duplicate.) The program will take your answer, and then do the opposite. You don't have to understand how you yourself actually work; all you have to do is recognize yourself and know that whatever you end up doing, the program then will do the opposite.
What answer are you going to give? What is the program's behavior?
There is a related problem that at first seems simpler, but isn't. In this problem, the program that's submitted for your judgement doesn't do the opposite of what you say it does; it does exactly what you say it does. If you say it terminates, it terminates. If you say it goes off into a corner and plays with itself, well, that's what it does.
Again, how do you decide which answer to give? What is the program's behavior?
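The obliging variant can be sketched the same way (again a hypothetical sketch; `make_conformist` is my name for it). Whichever answer the detector gives comes true, so neither answer is wrong - and nothing selects between them.

```python
def make_conformist(halts):
    """Build a program that does exactly what the detector predicts."""
    def conformist():
        if halts(conformist):
            return              # predicted "terminates": comply
        while True:             # predicted "does not terminate": comply
            pass
    return conformist

# A detector that always answers "terminates" is, here, self-fulfillingly right:
optimist = lambda program: True
make_conformist(optimist)()     # terminates, as predicted
```

The contrarian has no correct answer; the conformist has two. Both break a detector that must pick exactly one.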
Seen like this, the halting problem is not really about computability, or about understanding what a program does. Our programmer in a box completely understands what the input program does. (There are programs that are hard, or perhaps impossible, to understand - I nominate the troff source code - but this isn't one of them!)
The reason the halting problem is unsolvable for the weird program Turing proposes is that the halting problem asks the wrong question. It limits the set of answers to "terminates" or "does not terminate", and doesn't allow for answers like "it does the opposite of what I'm going to say it does" or "it does what I'm going to say it does".
The old result space was too small in the same way that the real numbers are too small for "what's the square root of X" - there are negative X, and complex numbers are needed to represent their roots. That doesn't mean that the square root function is a particularly hard problem to compute.
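The analogy shows up directly in Python's standard library: `math.sqrt` insists on a real result and refuses negative input, while `cmath.sqrt` enlarges the result space and simply answers.

```python
import math
import cmath

try:
    math.sqrt(-4)                    # real result space: no answer exists
except ValueError as e:
    print("math:", e)                # "math domain error"

print("cmath:", cmath.sqrt(-4))      # enlarged result space: 2j
```

Same function, same input - only the vocabulary of allowed answers changed.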
When making statements about systems that the statements are part of, we need to allow for a larger vocabulary!
The question we should have asked is this: what does this program do - where the answer is allowed to refer to the answer itself?
For simple programs, the answer to the new halting problem is still "terminates" or "does not terminate" - same as before; just as there are functions that simply throw their variable away (e.g., f(x) = 2).
For complicated programs that contain the halting-detector, the halting-detector no longer has to deal with the philosophical problem of being in a feedback loop with itself; all it needs to do is describe what the feedback loop does, relative to the halting-detector's outcome.
The new results of the new halting detector in the two example cases are "the opposite of this" and "the same as this".
In both cases, their values are not computable in terms of a specific outcome - termination or non-termination. But they do accurately describe the underlying program's behavior in a short, clear, symbolic way.
It's interesting to think about what other operations one can do with these values. We do know that the opposite of "the opposite of this" is "the opposite of this", and that the same as "the same as this" is "the same as this"; in fact, any such result, applied to itself, will have to be itself - otherwise the halting detector would be lying.
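One way to picture the enlarged answer space (a sketch; the `Behavior` names are mine): add the two self-referential values, and define "opposite" so that each of them is its own fixed point, exactly as the argument above requires.

```python
from enum import Enum

class Behavior(Enum):
    HALTS = "terminates"
    LOOPS = "does not terminate"
    OPPOSITE = "the opposite of this answer"
    SAME = "the same as this answer"

def opposite(b):
    """Negate an answer. The self-referential values are fixed points:
    the opposite of "the opposite of this" is still "the opposite of this"."""
    if b is Behavior.HALTS:
        return Behavior.LOOPS
    if b is Behavior.LOOPS:
        return Behavior.HALTS
    return b    # OPPOSITE and SAME, applied to themselves, stay themselves

assert opposite(Behavior.OPPOSITE) is Behavior.OPPOSITE
assert opposite(opposite(Behavior.HALTS)) is Behavior.HALTS
```

If `opposite(OPPOSITE)` were anything other than `OPPOSITE`, the detector's answer would misdescribe the contrarian program - it would be lying.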
What else can we do with this? Is this good for anything? You tell me. (And then I'll, likely as not, do the opposite.)