Gödel's incompleteness theorem says that a system of sufficient complexity cannot be both complete and consistent [0]. Consistency ensures that no statement can be both true and false (under a given set of axioms). Completeness says that every statement that can be written down can be proved either true or false (under a given set of axioms).
You must sacrifice one or the other. Since we generally care more about consistency (we don't want to derive nonsense), completeness is sacrificed. Meaning, there will be statements we can write down that we will never be able to prove true or false (relative to a set of axioms).
There are many other implications and subtleties, like potentially not being able to show a set of axioms is complete or consistent, potentially not being able to show the equivalence of two sets of axioms, not knowing whether adding an axiom keeps the system consistent, etc.
Turing understood Gödel's incompleteness theorem and applied it to computation. The computer science way of saying the above is that one can never make a program that determines whether other programs halt or not (in finite time). Practically, this means that there will never be a compiler that will tell you whether your program has a bug. This is called the Halting Problem [1], and it relates directly to Gödel's incompleteness theorem, as one can cast mathematics through the lens of computation and running Turing machines.
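Turing's argument can be sketched in a few lines of Python. The `halts` function below is hypothetical (my own naming, nothing standard); the whole point of the diagonal construction is that no total implementation of it can exist:

```python
def make_paradox(halts):
    """Given a claimed decider halts(f, x) -> bool (True iff f(x) halts),
    build a program that defeats it."""
    def paradox(p):
        if halts(p, p):       # if the decider says p(p) halts...
            while True:       # ...loop forever
                pass
        return "halted"       # otherwise, halt immediately
    return paradox

# Suppose some claimed decider answers False for every program.
claims_never_halts = lambda f, x: False
paradox = make_paradox(claims_never_halts)

# But paradox(paradox) halts, so the decider was wrong about it.
print(paradox(paradox))  # prints "halted"
```

Feeding `paradox` to itself defeats any claimed decider: if it answers True, `paradox` loops forever; if it answers False, `paradox` halts. Either way the decider is wrong on at least one input.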
Presumably the "classification" here is some combination of whether models are equivalent or consistent and complete.
> Gödel's incompleteness theorem says that a system of sufficient complexity cannot be both complete and consistent
You've made an important and very common mistake by not qualifying this: it only holds true within the system. If you are using the same logical/mathematical system to evaluate its own completeness or consistency, then Gödel's results apply. Using other systems, we can examine and understand the truth of such sentences, which is to say, a system's completeness and consistency.
Wikipedia includes this constraint:
> For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system.
Few programmers and others on the internet understand this qualification, which often makes their reasoning about it wrong.
> Practically, this means that there will never be a compiler that will tell you whether your program has a bug.
TLA+ and other "external" systems can evaluate whether a program has a bug.
> You've made an important and very common mistake of not qualifying this: it only holds true within the system.
It's not a mistake. I've stated it clearly.
You're effectively qualifying it by saying you can add more axioms to extend the mathematical system so that previously undecidable statements become decidable. This comes at the cost of creating more undecidable statements. I can add "the Riemann hypothesis is true" as an axiom, say, and, assuming it's consistent with the other axioms and actually adds some power that the other axioms didn't, that might be fine, but this will create other statements that are likewise undecidable.
> TLA+ and other "external" systems can evaluate whether a program has a bug.
No, it won't.
From the Wikipedia entry [0]:
> The model checker finds all possible system behaviours up to some number of execution steps, and examines them for violations of desired invariance properties such as safety and liveness.
From this, at best it does this only out to a finite number of execution steps. I suspect this comes at an exponential resource cost. So not only does it not find all bugs, the bugs it does find come at an increasingly intractable cost.
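The bounded nature of this can be illustrated with a toy model checker (a minimal sketch of my own, nothing to do with how TLC is actually implemented): it explores all reachable states up to a step bound and checks an invariant, so a violation that only appears beyond the bound is silently missed.

```python
from collections import deque

def check_invariant(initial, next_states, invariant, max_steps):
    """Breadth-first search of all behaviours up to max_steps.
    Returns a violating state, or None if none was found within the bound."""
    seen = {initial}
    frontier = deque([(initial, 0)])
    while frontier:
        state, depth = frontier.popleft()
        if not invariant(state):
            return state
        if depth == max_steps:
            continue  # bound reached: deeper behaviours go unexplored
        for s in next_states(state):
            if s not in seen:
                seen.add(s)
                frontier.append((s, depth + 1))
    return None

# Toy system: a counter that only increments; the "bug" is reaching 5.
step = lambda n: [n + 1]
safe = lambda n: n != 5

print(check_invariant(0, step, safe, max_steps=3))   # None: bug missed
print(check_invariant(0, step, safe, max_steps=10))  # 5: bug found
```

With branching transitions the number of reachable states can grow exponentially in the bound, which is the intractable cost mentioned above.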
The GP had a few misconceptions about Gödel and halting problems and stuff, but one nitpick here:
>> Practically, this means that there will never be a compiler that will tell you whether your program has a bug.
>
> TLA+ and other "external" systems can evaluate whether a program has a bug.
The main TLA+ model checker is TLC. It's pretty easy to write a TLA+ spec with a bug that TLC can't find (ironically by exploiting unbound model spaces, which are the Does Not Halt of model checking)
Yes, you're right. I should have said "it's impossible to create a program that, in general, for all input programs, determines whether they halt or not (in finite time)."
The whole of mathematics and computer science is, in some sense, analyzing those programs which we can prove to halt.
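For specific programs, halting is often provable by exhibiting a measure that strictly decreases toward a floor on every iteration (a standard termination argument; the example is mine):

```python
def gcd(a, b):
    """Euclid's algorithm for non-negative inputs.
    Termination proof: b >= 0 throughout, and each iteration strictly
    decreases b (since a % b < b), so the loop runs at most b times."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # prints 6
```

No general halting decider exists, but this particular program provably halts: the decreasing measure rules out any infinite loop.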
Can the theorems of Kurt Gödel be classified, though? That is my question.
And if so, would their domains of applicability within the universe of formal systems actually be a classification system of that universe in and of itself?
> The incompleteness theorems apply to formal systems that are of sufficient complexity to express the basic arithmetic of the natural numbers and which are consistent and effectively axiomatized.
Tarski's axioms for geometry are the standard example of a system which is both complete and consistent. It is even decidable - there is an algorithm which you can feed a statement and it outputs whether it is true or not.
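To illustrate what "decidable" means, here is a toy evaluator that brute-forces sentences over a finite structure (this is just my illustration of the concept; Tarski's actual algorithm is quantifier elimination over the real closed field, which handles an infinite domain):

```python
# Decide sentences of the form "for all x, there exists y, P(x, y)"
# over a finite domain by exhaustive checking: an algorithm that always
# terminates with a true/false answer.
def decide_forall_exists(domain, pred):
    """True iff for every x in domain there is some y with pred(x, y)."""
    return all(any(pred(x, y) for y in domain) for x in domain)

domain = range(5)  # arithmetic mod 5
# "Every element has an additive inverse": forall x exists y, x + y = 0 (mod 5)
print(decide_forall_exists(domain, lambda x, y: (x + y) % 5 == 0))  # True
# "Every element is a square": fails mod 5 (2 and 3 are not squares)
print(decide_forall_exists(domain, lambda x, y: (y * y) % 5 == x))  # False
```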
I don't understand your questions. Tarski's axioms are relevant for geometry; they aren't really relevant to anything else, but they're axioms for geometry, so being relevant for geometry is what you'd expect.
The original incompleteness theorems apply to any logical system which can formulate and prove a certain chunk of arithmetic, yes. It turns out that the amount of arithmetic you actually need is incredibly small, so these results apply to many systems including some formulations of geometry.
Oh, by the way, it matters here that "weak" in the source you quote has a very specific technical meaning. Weak axioms impose few restrictions, so they apply to many things; stronger axioms impose more restrictions and apply to more specific things. So (arguably) axioms being weak is a good thing. Generally you want to work with axioms that are as weak as you can get away with.
https://euclideanspace.com/maths/algebra/clifford/index.htm