Discussion:
[isabelle] The presentation of Goedel's First Incompleteness Theorem by Lawrence C. Paulson (and others)
Ken Kubota
2015-12-22 10:58:29 UTC
Dear Gottfried Barrow,

There has been a misunderstanding. I claimed neither that Isabelle is
inconsistent nor the contrary.

However, a proof of Paulson's claimed theorem 'proved_iff_proved_PfP' is not
available in axiomatic (Hilbert-style) deductive systems, because in
Hilbert-style systems this claimed theorem cannot even be expressed, since it
does not consist solely of well-formed formulae. Therefore this claimed theorem is
not a mathematical theorem or metatheorem. For now, please allow me to focus on
this single point.

This can be demonstrated easily by looking at the structure of the claimed
theorem 'proved_iff_proved_PfP' available at
http://www.cl.cam.ac.uk/~lp15/papers/Formath/Goedel-ar.pdf (p. 21)
http://www.owlofminerva.net/files/Goedel_I_Extended.pdf (p. 19)

which can be written
{} > a <-> {} > PfP "a"

if we use '>' for the deduction symbol (turnstile) and '"' for the Goedel
encoding quotes, and simplified without change of meaning to
a <-> > PfP "a"
expressing that 'a' is a theorem if and only if there is a proof of 'a'.

Now, recalling the quotes by Alonzo Church and Peter B. Andrews available at
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-December/msg00018.html
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-December/msg00042.html

in an axiomatic (Hilbert-style) deductive system, the claimed theorem
'proved_iff_proved_PfP' could be either a theorem (a theorem of the object
language) or a metatheorem (a theorem of the mathematical meta-language).

Case 1 (theorem of the object language, language level 1):

As a trivial example for a theorem of the object language, we shall use
(T & T) = T

usually written
> (T & T) = T
as presented in [Andrews, 2002, p. 220 (5211)], with the preceding deduction
symbol (turnstile) in order to express that '(T & T) = T' is a theorem.

In this notation it has, like all theorems of the object language of Q0,
exactly one occurrence of the deduction symbol (turnstile).
Hence, the claimed theorem 'proved_iff_proved_PfP', having two occurrences,
cannot be a theorem of the object language.

Case 2 (theorem of the meta-language, language level 2):

As a trivial example for a theorem of the meta-language, we shall use
If H > A and H > A => B, then H > B.

presented in [Andrews, 2002, p. 224 (5224 = Modus Ponens)], expressing that if
there is a proof of A (from the set of hypotheses H) and a proof of A => B
(from H), then there is a proof of B (from H).
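The meta-level character of rule 5224 can be sketched in a few lines of Python (a hypothetical toy checker, not any actual proof assistant): the rule is a function that consumes and produces whole judgments H > A, i.e., it operates on theorems from the outside rather than being a formula of the object language itself.

```python
# Toy Hilbert-style setting (illustration only, hypothetical encoding):
# a formula is a string or a nested tuple, with ("=>", A, B) for A => B;
# a theorem is recorded as a judgment (hypotheses, formula), mirroring H > A.

def modus_ponens(thm1, thm2):
    """Meta-level rule 5224: from H > A and H > A => B, conclude H > B.

    Note that this function lives outside the object language: it takes
    and returns judgments (hypotheses, formula), not object-level formulae.
    """
    (h1, a) = thm1
    (h2, ab) = thm2
    assert h1 == h2, "hypothesis sets must agree"
    assert ab[0] == "=>" and ab[1] == a, "second theorem must be A => B"
    return (h1, ab[2])                  # the judgment H > B

H = frozenset({"p"})
t1 = (H, "p")                           # H > p
t2 = (H, ("=>", "p", "q"))              # H > p => q
print(modus_ponens(t1, t2))             # (frozenset({'p'}), 'q'), i.e. H > q
```

The two occurrences of the turnstile in the rule correspond to the two judgment arguments of the function, which is exactly the structural criterion discussed below.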

Note that this example shows some of the typical formal criteria of a
metatheorem:
1. Multiple occurrences of the deduction symbol (turnstile).
2. Use of syntactical variables (denoted by bold letters in the works of both
Church and Andrews).
3. Use of the informal words "If" and "then" instead of logical symbols in the
meta-language (according to Church's proposal).

It should be emphasized that metatheorems in proofs can always be replaced by
the proof of the concrete theorems (the syntactical or schematic variable
instantiated) when carrying out the proof, such that metatheorems are actually
not necessary (but reveal properties of the object language that help in
finding proofs).

In the notation of Isabelle (natural deduction) this metatheorem would be
expressed by
[H > A; H > A => B] --> H > B

and, if we would add subscripts for the language levels, by
[H >1 A; H >1 A => B] -->2 H >1 B

So metatheorems infer from theorems of the object language (language level 1)
to another theorem of the object language, and this relation between theorems
of the object language is expressed at a higher level: the meta-language
(language level 2).

But the claimed theorem 'proved_iff_proved_PfP'
a <-> > PfP "a"
cannot be a metatheorem either, since both ways of dealing with it, either
semantically (subcase a) or syntactically (subcase b), fail.

Case 2 Subcase a (semantically):

In the claimed theorem 'proved_iff_proved_PfP'
a <-> > PfP "a"
the right-hand side (PfP "a"), expressing the provability of theorem 'a', is,
by its meaning, itself a metatheorem, not a theorem of the object language, and
we would have some kind of meta-metatheorem like
>1 a <->3 >2 PfP "a"
If we ignore the fact that this meta-metatheorem violates the language level
restrictions and nevertheless proceed further, then from a theorem of the
object language a theorem of the meta-language could be inferred and vice
versa, which would again violate language level restrictions, as for example a
metatheorem would be added to the list of theorems of the object language and
treated as such, leading to a confusion of language levels.

This is, in principle, the construction of the proofs of Andrews and Rautenberg
[cf. Kubota, 2013], in which 'proved_iff_proved_PfP' is used as an implicit
rule, not as a proven theorem/metatheorem. Of course, both also fail simply by
not providing a proof using syntactical means only.

Case 2 Subcase b (syntactically):

In Paulson's claimed theorem 'proved_iff_proved_PfP'
a <-> > PfP "a"
the right-hand side (PfP "a") needs to be a well-formed formula.

But the Goedel encoding in Paulson's proof is implemented by the use of means
which are not available in the object language (i.e., in mathematics).

According to Paulson at
http://www.cl.cam.ac.uk/~lp15/papers/Formath/Goedel-ar.pdf (p. 16)

"[i]t is essential to remember that Gödel encodings are terms (having type tm),
not sets or numbers. [...] First, we must define codes for de Bruijn terms and
formulas.

function quot_dbtm :: "dbtm -> tm"
where
"quot_dbtm DBZero = Zero"
[...]"
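The shape of such a quotation function can be imitated in Python (a minimal sketch, not Paulson's actual code; the constructor names for the toy fragment are assumptions modeled on the quoted excerpt). The point it illustrates is that quotation is defined by recursion over syntax, i.e., it is an operation of the meta-language on term representations:

```python
# Hypothetical toy fragment of two term datatypes, encoded as Python values:
#   dbtm: "DBZero" | ("DBEats", dbtm, dbtm)      (de Bruijn terms)
#   tm:   "Zero"   | ("Eats", tm, tm)            (object-language terms)

def quot_dbtm(t):
    """Encode a de Bruijn term as an object-language term, by recursion
    over the syntax -- a meta-level definition in the style of quot_dbtm."""
    if t == "DBZero":
        return "Zero"
    if isinstance(t, tuple) and t[0] == "DBEats":
        return ("Eats", quot_dbtm(t[1]), quot_dbtm(t[2]))
    raise ValueError("unknown term constructor")

print(quot_dbtm("DBZero"))                        # 'Zero'
print(quot_dbtm(("DBEats", "DBZero", "DBZero")))  # ('Eats', 'Zero', 'Zero')
```

Both datatypes here are plainly meta-level Python values; whether such a function can also be regarded as a wff of the object language is precisely the issue under discussion.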

Paulson's definition goes beyond the use of purely mathematical means. After
the introduction of the definition of quot_dbtm, it is used as follows:

"We finally obtain facts such as the following:
lemma quot_Zero: "'Zero' = Zero"
[...]"

But with its purely syntactical means the object language cannot reason
about its own properties directly.
Propositions in Andrews' logic Q0 have type 'o' (Boolean truth values), and one
could define a function 'foo': o -> o, o -> i, or o -> nat (with nat = (o(oi))
= (i -> o) -> o; natural numbers as "equivalence classes of sets of
individuals" [Andrews, 2002, p. 260]), etc.
But since a type "tm" (for term) does not exist in Q0 [cf. Andrews, 2002, p.
210] or R0, one cannot define a mathematical function 'quot_dbtm': dbtm -> tm.
Of course, there are rules for constructing well-formed formulae (wffs), but
these (in R0 hardcoded) rules are used implicitly when constructing wffs and are
not part (are not theorems) of the object language itself.
Explicit meta-reasoning as with lemma 'quot_Zero' might extend (and, hence,
violate) the object language, as it actually introduces new rules of inference
to the object language, which again may be considered as a confusion of
language levels.

Type "tm" (for term) is a notion of the (technical) meta-language, but not a
mathematical type. Therefore the function 'quot_dbtm' (type: dbtm -> tm) is not
a mathematical well-formed formula (wff), subsequently the Goedel encoding
function ('" "') and the Goedel encoding of proposition 'a' ('"a"') are not
either, and hence the right-hand side (PfP "a") is not a wff and therefore not
a proposition. Finally,
PfP "a"
or
> PfP "a"
cannot be a theorem, and for this reason the claimed theorem
'proved_iff_proved_PfP'
a <-> > PfP "a"
cannot be a metatheorem.

Obviously the (technical) meta-language and the object language in Isabelle are
not strictly separated, since the type "tm" (for term) is treated as a
mathematical type in the construction of wffs of the object language, which is
not mathematically safe. Mathematically, a proposition has only type 'o'
(Boolean truth values), but not a type "tm" (for term).

All definitions of Q0 are only shorthands for established wffs. In my R0
implementation, a definition label added to a wff is used for input (parsing)
and output (display) only, and remains irrelevant for syntactical inference and
can be removed or replaced at any time. This means that a definition label for
Goedel encodings (in this case the quotation marks) must represent a
mathematical well-formed formula (wff) when used in 'proved_iff_proved_PfP',
which may be a function with a mathematical type as input (domain) type such as
the type of truth values (type: o -> *), but not with types of the
meta-language as input (domain) type (type: dbtm -> *, or tm -> *), as this
violates the rules for constructing mathematical wffs [cf. Andrews, 2002, p. 211].

Of course, one could introduce Goedel numbering in order to arithmetize the
object language and reason about the Goedel numbers. But the reasoning would
then be restricted to these Goedel numbers, and there would be no possibility
to relate these Goedel numbers directly to theorems of the object language as
done in the claimed theorem 'proved_iff_proved_PfP'
a <-> > PfP "a"
since the Goedel encodings in Paulson's proof (requiring a type "tm") are not
definable with purely mathematical means of the object language (e.g., in R0).
Since the proposition 'a' has only type 'o' (Boolean truth values), the logical
arithmetic is not stronger than propositional calculus, ruling out Goedel
encodings requiring a type "tm".
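A classical arithmetization of this kind can be sketched for a toy term language (a hypothetical prime-power encoding, chosen arbitrarily for illustration): each term is mapped to a natural number, and all subsequent reasoning takes place over those numbers only. Nothing in the encoding itself relates a number back to the provability of a formula.

```python
# Sketch of a Goedel numbering for a toy term language via prime powers.
# Constructor tags are hypothetical: Zero -> exponent 1, Eats -> exponent 2.

def godel_number(term):
    """Map a toy term ("Zero" or ("Eats", l, r)) to a natural number."""
    if term == "Zero":
        return 2 ** 1                                   # code of Zero: 2
    tag, left, right = term                             # ("Eats", l, r)
    return 2 ** 2 * 3 ** godel_number(left) * 5 ** godel_number(right)

print(godel_number("Zero"))                    # 2
print(godel_number(("Eats", "Zero", "Zero")))  # 4 * 3^2 * 5^2 = 900
```

The encoding is injective on this fragment, so one can reason about the numbers; the step at issue in the text is the further claim that such numbers can be related directly to theoremhood inside the object language.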

The concept of the Goedel encoding function generally violates the type
restrictions for constructing mathematical wffs, since with the type of truth
values as input (domain) type there would be only two different Goedel numbers.
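The counting point can be checked directly (a hypothetical encoding, with arbitrarily chosen code values): any function whose domain is the type of truth values is determined by its two values, so it can yield at most two distinct codes, however it is defined.

```python
# Any encoding o -> nat is fixed by encode(True) and encode(False),
# so the set of reachable Goedel numbers has at most two elements.

def encode(b):
    # hypothetical encoding of the two truth values; the particular
    # numbers 7 and 11 are arbitrary
    return 7 if b else 11

codes = {encode(b) for b in (True, False)}
print(len(codes))   # 2 here; never more than 2 for any such function
```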

As in other claimed proofs, non-mathematical means are used in order to
establish a relation between the object language (proposition 'a') and the
meta-language (its provability: PfP "a") as the translation mechanism between
both language levels necessary for the self-reference part of the antinomy,
since Goedel's antinomy is constructed across language levels. Note that the
antinomy seems to cause inconsistency in axiomatic (Hilbert-style) deductive
systems, but not necessarily in natural deduction [cf. Kubota, 2015, p. 14].

References

Andrews, Peter B. (2002), An Introduction to Mathematical Logic and Type
Theory: To Truth Through Proof. Second edition. Dordrecht / Boston / London:
Kluwer Academic Publishers. ISBN 1-4020-0763-9. DOI: 10.1007/978-94-015-9934-4.

Kubota, Ken (2013), On Some Doubts Concerning the Formal Correctness of Gödel's
Incompleteness Theorem. Berlin: Owl of Minerva Press. ISBN 978-3-943334-04-3.
DOI: 10.4444/100.101. See: http://dx.doi.org/10.4444/100.101

Kubota, Ken (2015), Gödel Revisited. Some More Doubts Concerning the Formal
Correctness of Gödel's Incompleteness Theorem. Berlin: Owl of Minerva Press.
ISBN 978-3-943334-06-7. DOI: 10.4444/100.102. See:
http://dx.doi.org/10.4444/100.102

____________________

Ken Kubota
doi: 10.4444/100
http://dx.doi.org/10.4444/100
Andrei Popescu
2015-12-22 15:50:50 UTC
Dear Ken,

Please forgive me if my tone may seem condescending. I am not an expert in Goedel's theorems, but I have a lot of experience with Isabelle and I have looked at the proofs in question.

First, please let me assure you of the following, concerning Larry’s proofs: The employed concepts and the proved theorems do not suffer from any consequence of Isabelle/HOL being based on a meta-logic or featuring a natural deduction system. These concepts and theorems are easily reproducible in "pure" systems, lacking a logical framework on top of them – including all the HOL and ZF systems, be they based on natural deduction, sequent calculi or Hilbert systems. Any person having experience with logical frameworks and with Isabelle would give you this assurance after superficially inspecting these proofs. The logical framework merely offers some convenience.

Now, I say let’s not be dogmatic about how Goedel’s Second has been proved. The fact that Larry and authors before him took some shortcuts, working at an outer level for as much as possible, is perfectly legitimate. It is part of the beauty of taming a proof of huge complexity. Again, I assure you that there is nothing exotic about the employed primitives that led to the stated theorem. In my opinion, the only question worth asking is whether the end statement is truthful:

theorem Goedel II:
assumes "¬ {} ⊢ Fls"
shows "¬ {} ⊢ Neg (PfP ⌈Fls⌉)"

I hope we agree that the conclusion of the theorem (the "shows" part above) should be "¬ {} ⊢ Neg phi" where phi is a formula that captures the notion "False is provable in the first-order logic of HF," as represented in the language of HF itself. Here, one may argue that taking phi to be "PfP ⌈Fls⌉" is not a priori convincing, especially since the definitions leading to PfP are very technical and span two levels. These aspects could be clarified by unfolding the definitions and performing some simplifications that "reify" this term. But I am not sure: is this what you are arguing?

With best wishes,
Andrei

Ken Kubota
2016-07-08 10:00:47 UTC
Dear Andrei Popescu, dear Ramana Kumar,
dear Members of the Research Community,

As a matter of fairness, I have to correct one of my earlier statements.

Andrei Popescu correctly pointed out at
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-December/msg00059.html
that neither "being based on a meta-logic" (i.e., a logical framework) nor
"featuring a natural deduction system" creates a problem.
There has been a misunderstanding on my part.
However, I have to say that Paulson's definition of the turnstile as a symbol
for the predicate "hfthm" in
"inductive hfthm :: "fm set => fm => bool" (infixl "⊢" 55)"
https://www.isa-afp.org/browser_info/current/AFP/Incompleteness/SyntaxN.html
is hidden in the proof files without being prominently announced in his
articles; this is strongly misleading and inevitably creates misinterpretations.
Usually (such as in [Andrews, 2002]), the turnstile is used to indicate the
provability of a theorem of the logic in which the proof is undertaken, not the
provability of theorems of the embedded logic. Also, for the predicate of
provability of theorems of the embedded (represented, formalized) logic, a
label is usually used, not a symbolic notation (e.g., Goedel uses "Bew", from
"beweisbare Formel", the German expression for "provable formula", and
O'Connor in Coq uses "SysPrf"). Moreover, without some short introduction to
the notation and the type system it is nearly impossible to analyze the proof
in detail unless one has advanced experience with the Isabelle proof assistant,
leaving aside that with a logical framework there are not only three, but four
language levels involved, all of which one would ideally need to understand in
order to verify the proof.

The result of the formal proof
~Bew(g) /\ ~Bew(not g)
can be interpreted in the sense that it is possible to construct a formula with
the form of a sentence (i.e., a boolean-valued well-formed formula with no free
variables) that is neither provable nor refutable.
Nevertheless, the claim of "incompleteness" depends on a philosophical
assumption.
One who shares the semantic view (model theory) will conclude that there is a
meaningful (and true) sentence that is neither provable nor refutable, and will
call the discrepancy between syntax and semantics "incompleteness".
But one who shares the syntactic view will conclude that it is possible to
create a meaningless formula which has the form of a syntactically correct
sentence (similar, in natural language, to a grammatically correct but
meaningless statement like "mathematics is liquid").

Concerning the two objections of Ramana Kumar at
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-December/msg00061.html
the first one was a misunderstanding clarified at
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-January/msg00029.html
But the second, Ramana Kumar's claim that type "tm" can be defined like type
"nat", is not correct. Andrews' logic Q0, being an improved variant of Church's
original simple type theory, only allows the types iota (i) and omicron (o),
and combinations of them.
Andrews' type "nat" is such a combination ( nat := (o(oi)) ). By contrast,
Paulson's type "tm" is an inductively defined type, which is not possible in Q0:
"nominal_datatype tm = Zero | Var name | Eats tm tm"
https://www.isa-afp.org/browser_info/current/AFP/Incompleteness/SyntaxN.html
In an e-mail, Paulson basically argued that there should be ways to introduce
further types (instead of restricting oneself to combinations of 'i' and 'o').
This is, of course, correct, but doesn't justify the specific mode of type
introduction used here, namely the Backus-Naur form (BNF). The BNF was designed
to specify formal grammars in computer science, but (mathematical) types are
objects that must have certain logical properties, such as being non-empty.
Thus the introduction of additional mathematical types should be subject to
proof, and not a matter of mere declaration via the BNF.
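The contrast between the two type disciplines can be made concrete with a small sketch (hypothetical encoding: a Q0 function type is written here as a nested pair (result, argument), so that nat = (o(oi)) becomes ('o', ('o', 'i'))): Q0 types are exactly those generated from the base types 'i' and 'o', and a freestanding name like "tm" is not among them.

```python
# Sketch of Q0's type grammar: types are generated only from the base
# types 'i' (individuals) and 'o' (truth values) by function-type
# formation, encoded here as a pair (result_type, argument_type).

def is_q0_type(ty):
    """Check that a type is built only from i, o, and combinations thereof."""
    if ty in ("i", "o"):
        return True
    return (isinstance(ty, tuple) and len(ty) == 2
            and is_q0_type(ty[0]) and is_q0_type(ty[1]))

nat = ("o", ("o", "i"))     # (o(oi)): sets of sets of individuals
print(is_q0_type(nat))      # True  -- "nat" is a combination of i and o
print(is_q0_type("tm"))     # False -- "tm" is not generated from i and o
```

An inductively declared datatype such as Isabelle's "tm" introduces a new base type by declaration, which is exactly what this grammar, and hence Q0, does not provide.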

With some more knowledge about the concepts behind current proof assistants, I
can now answer Ramana Kumar's question at
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-July/msg00149.html
concerning the motivation of my project announcement at
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-July/msg00147.html
from his perspective.
Current proof assistants seem to follow the semantic approach, in which the
theory is distinguished from a specific linguistic (formal) formulation, or in
which use is made of model-theoretic notions such as satisfiability. For example, in
Isabelle there are implementations of both axiomatic set theory (Isabelle/ZF)
and type theory (Isabelle/HOL), but neither one is explicitly preferred, and
the question of which is more adequate for representing mathematics is not even
raised. (Although, unlike in informal mathematics, in formalized proofs type
theory seems to prevail, e.g., within Isabelle the logic Isabelle/HOL, a
polymorphic version of Church's simple type theory.)
If one follows the syntactic approach, then there is a desire for something
like the ideal formulation or the natural language of formal logic and
mathematics. The main criterion is expressiveness [cf. Andrews, 2002, p. 205],
or, vice versa, the reducibility of mathematics to a minimal set of primitive
rules and symbols. If we set aside for a moment the restriction that Q0 is
still a formulation of simple type theory (i.e., without type variables
and dependent types), all of formal logic and mathematics can be expressed on
the basis of only a "single rule of inference (substitution, from which the
rule of modus ponens is derived)" and "only two basic types (individuals and
truth values) and only two basic constants (identity/equality and its
counterpart, description)"
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-July/msg00147.html
Paulson argued that natural deduction is far superior to Hilbert-style systems
for automated proof. This is, of course, correct, since Hilbert-style
metatheorems become symbolically representable in natural deduction as regular
theorems (and not just metatheorems), but this only reflects the engineer's
perspective with the pragmatic aim of quickly obtaining proofs. For the
philosopher it is not proof automation but expressiveness that is the main
criterion, such that the method with the smallest number of rules of inference,
i.e., a Hilbert-style system, is preferred, and all other rules become derived
rules.
Hence, from the perspective of the syntactic view, Andrews' elegant logic Q0 is
the natural point to start with. Overcoming its restrictions by extending it to
a polymorphic (with type variables) and dependent type theory (with dependent
types) is the motivation of my logic R0, which I hope to be able to publish
soon. A preliminary excerpt with some definitions and a sample proof is
available online at
http://www.kenkubota.de/files/R0_draft_excerpt_with_definitions_and_groups.pdf

Kind regards,

Ken Kubota

____________________

Ken Kubota
doi: 10.4444/100
http://dx.doi.org/10.4444/100
Gottfried Barrow
2016-07-09 09:06:32 UTC
In particular, there is this sentence of yours: "...this only reflects
the engineer's perspective with the pragmatic aim of quickly obtaining
proofs. For the philosopher it is not proof automation but
expressiveness that is the main criterion..."

I completely understand the issue of low status vs. high status, and the
importance of trying to assert one's status, by bandying about labels,
with the hopes that labels will stick.

I myself, at one time, aspired to be a philosopher, and mind you, no
mere philosopher, but the mother-of-all-philosophers, as this would give
me the ability to bandy about the mother-of-all-condensations, along
with, certainly, the mother-of-all-condescensions.

Having aspired, I steeled myself, and struck the philosopher's pose,
after which, my arm fell asleep. Surely, if I could have endured even 5
more minutes, the answer would have come, "42", and therefore, the
prize. It was not to be.

Reevaluating my aspirations, I lowered them, and aspired to be an
engineer, so that I could at least condescend to technicians.

Being a man of action, I went out and bought a little train set, and
spent hours and hours painting the little cars, in intricate detail.

But one day, when I was engineering my little train set, saying, "choo,
choo, choo" to the little train cars, the little kids next door heard
me, because the window was open, and they started laughing at me.

Do you know how bad it hurts to have little kids laugh at you, when
you're saying, "choo, choo, choo"?

So in one massive swoop, I destroyed my little train set, to show those
little kids what a bad martial artist I was, MMA style.

They just kept laughing.

It was no great loss. I was actually only aspiring to be a bad martial
artist, MMA style.

--
GZ
--
Andreas Röhler
2016-07-11 07:58:34 UTC
Post by Gottfried Barrow
Post by Ken Kubota
Paulson argued that natural deduction is far superior to
Hilbert-style systems
for automated proof. [...]
In particular, there is this sentence of yours: "...this only reflects
the engineer's perspective with the pragmatic aim of quickly obtaining
proofs. For the philosopher it is not proof automation but
expressiveness that is the main criterion..."
I completely understand the issue of low status vs. high status [ ... ]
The difference seems that between "What" and "How" rather.
Gottfried Barrow
2016-07-12 10:25:18 UTC
Post by Andreas Röhler
Post by Gottfried Barrow
Post by Ken Kubota
Paulson argued that natural deduction is far superior to
Hilbert-style systems
for automated proof. [...]
In particular, there is this sentence of yours: "...this only
reflects the engineer's perspective with the pragmatic aim of quickly
obtaining proofs. For the philosopher it is not proof automation but
expressiveness that is the main criterion..."
I completely understand the issue of low status vs. high status [ ... ]
The difference seems that between "What" and "How" rather.
No serious discussion was supposed to ensue, and it's a pity the reply
didn't focus on my jokes.

Any seriousness was supposed to get swamped by my attempt at creative
writing. The attempt at creativity was for the purposes of ridiculing an
idea. There was actually no need for a defense to be made. Writing
nothing would have been better, maybe.

Ken wants to compare his raw ideas, valid or not (I wouldn't know), with
Larry Paulson's work, which is the foundation for what has ended up
being more than 30 years' work, founded on Robin Milner's ML, drawing
from Mike Gordon's work, propelled further by Tobias Nipkow et al.

I'm nobody's friend here, but I gotta figure that Isabelle, its logic
included, simply represents what Larry Paulson decided to commit to as a
public work, and that it's not the totality of his mind.

As to "what", "how", and "why", it's not like he hasn't been exposed to
a full gamut of ideas, including that which is pure and impractical.

People want to attack the purity of Isabelle/Pure, but without providing
a serious alternative. Maybe it's the "30 years' work" thing: that's what a
serious work takes.

GZ
Andreas Röhler
2016-07-12 10:58:56 UTC
Post by Gottfried Barrow
Post by Andreas Röhler
Post by Gottfried Barrow
Post by Ken Kubota
Paulson argued that natural deduction is far superior to
Hilbert-style systems
for automated proof. [...]
In particular, there is this sentence of yours: "...this only
reflects the engineer's perspective with the pragmatic aim of
quickly obtaining proofs. For the philosopher it is not proof
automation but expressiveness that is the main criterion..."
I completely understand the issue of low status vs. high status [ ... ]
The difference seems that between "What" and "How" rather.
No serious discussion was supposed to ensue, and it's a pity the reply
didn't focus on my jokes.
Indeed.
Post by Gottfried Barrow
Any seriousness was supposed to get swamped by my attempt at creative
writing. The attempt at creativity was for the purposes of ridiculing
an idea. There was actually no need for a defense to be made. Writing
nothing would have been better, maybe.
Ken wants to compare his raw ideas, valid or not (I wouldn't know),
with Larry Paulson's work, which is the foundation for what has ended
up being more than 30 years' work, founded on Robin Milner's ML,
drawing from Mike Gordon's work, propelled further by Tobias Nipkow et
al.
Well, "valid or not" is indeed the question, nothing else.
Corey Richardson
2016-07-17 05:02:25 UTC
Post by Ken Kubota
This is, of course, correct, but doesn't justify the specific mode of type
introduction used here, namely the Backus-Naur form (BNF). The BNF was designed
to specify formal grammars in computer science, but (mathematical) types are
objects that must have certain logical properties, such as being non-empty.
Thus the introduction of additional mathematical types should be subject to
proof, and not a matter of mere declaration via the BNF.
The datatype declarations written in theory files are surface syntax for
the underlying datatype package, which handles proving non-emptiness
among many other properties. The ML-style declaration is a convenient
abstraction that is easy to use. The following paper explains the
underlying principles:

Dmitry Traytel, Andrei Popescu, and Jasmin C. Blanchette. 2012.
Foundational, Compositional (Co)datatypes for Higher-Order Logic:
Category Theory Applied to Theorem Proving. In Proceedings of the 2012
27th Annual IEEE/ACM Symposium on Logic in Computer Science (LICS '12).
IEEE Computer Society, Washington, DC, USA, 596-605.
DOI=http://dx.doi.org/10.1109/LICS.2012.75

There are additional references at the end of the tutorial for the
datatype package,

https://isabelle.in.tum.de/dist/Isabelle2016/doc/datatypes.pdf
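The non-emptiness obligation mentioned above can be illustrated with a toy sketch (the mini-spec and the iteration are hypothetical illustrations, not the actual package's algorithm): a declared type is inhabited iff some constructor needs only inhabited argument types, which can be computed as a least fixed point over the declarations.

```python
# Toy sketch of the non-emptiness check a datatype package must perform.
# A type is inhabited iff some constructor has only inhabited argument
# types; we iterate to a least fixed point over a (hypothetical) spec
# mapping each type to its constructors' argument-type lists.
spec = {
    "nat":  [[], ["nat"]],           # Zero | Suc nat
    "list": [[], ["elem", "list"]],  # Nil  | Cons elem list
    "elem": [[]],                    # some inhabited base type
    "bad":  [["bad"]],               # its only constructor needs a "bad"
}

def inhabited(spec):
    ok = set()
    changed = True
    while changed:
        changed = False
        for ty, ctors in spec.items():
            if ty not in ok and any(all(a in ok for a in args) for args in ctors):
                ok.add(ty)
                changed = True
    return ok

assert inhabited(spec) == {"nat", "list", "elem"}  # "bad" has no inhabitant
```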

--
cmr
+610481782084
http://octayn.net/
Ken Kubota
2016-07-26 09:46:19 UTC
Post by Corey Richardson
The datatype declarations written in theory files are surface syntax for
the underlying datatype package, which handles proving non-emptiness
among many other properties. The ML-style declaration is a convenient
abstraction that is easy to use. The following paper explains the
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-July/msg00070.html

Thank you for bringing this to my attention. Let me first comment on the
manuals before getting back to the type system.

It would be desirable to have such information - on the nature of the
"datatype" declarations - as well as on the low-level routines for type
inference and type introduction available in the manuals. Also, in order to
avoid misunderstanding, it should be pointed out that "BNF" is an abbreviation
of "Bounded natural functor" and not "Backus-Naur form".

For any type theory, including Isabelle/HOL, the type system is essential.
Hence, one would wish for a concise specification of the logical core, i.e.,
both the type system and the rules of inference. For example, Andrews' system
Q0 - the type system, symbols, wffs, axioms, and the single rule of inference -
is fully specified in a very precise manner on only a few pages [cf. Andrews,
2002, pp. 210-215].

Moreover, one would wish for a tutorial on how to inspect both the types
introduced by the datatype declarations as well as the type of any well-formed
formula, such that the user can look behind the surface and examine the
(actual) type system. In my R0 implementation, it is possible to display the
fully typed expression of any wff (including that of any primitive or derived
type), e.g., the logical AND (&), with a simple print command, which yields the
type (e.g., o -> o -> o [written as "{{o,o},o}", lambda is denoted by '\', and
lambda application is denoted by '_']):
":print &
# wff 47 : [\x.[\y.((=_[\g.((g_T)_T)])_[\g.((g_x)_y)])]]
# := &
# w typd 47 : [\x{o}{o}.[\y{o}{o}.((={{{o,@},@}}_[\g{{{o,o},o}}{{{o,o},o}}.((
g{{{o,o},o}}{{{o,o},o}}_T{o}){{o,o}}_T{o}){o}]{@}){{o,@}}_[\g{{{o,o},o}}{{{o,o},
o}}.((g{{{o,o},o}}{{{o,o},o}}_x{o}{o}){{o,o}}_y{o}{o}){o}]{@}){o}]{{o,o}}]
# { {{o,o},o} }
# := &
type # 0: {{o,o},o} = 35"
http://www.kenkubota.de/files/R0_draft_excerpt_with_definitions_and_groups.pdf (p. 356)
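The definition of conjunction behind wff 47 above, x AND y := [λg. g T T] = [λg. g x y], can be sketched with plain Python lambdas. This is an untyped illustration, not Q0 or R0 itself, and it tests the equality of the two functions with a single discriminating g rather than genuine function equality.

```python
# Sketch of Q0-style conjunction via a Church-like encoding:
#   x AND y  :=  (λg. g T T) = (λg. g x y)
T, F = True, False

def AND(x, y):
    lhs = lambda g: g(T)(T)
    rhs = lambda g: g(x)(y)
    g = lambda a: lambda b: (a, b)  # a g that simply records its arguments
    # In Q0 this is equality of the two functions; here one discriminating
    # g suffices, since it is injective on its pair of arguments.
    return lhs(g) == rhs(g)

assert AND(T, T) and not AND(T, F) and not AND(F, F)
```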

Concerning the type system, the most informative explanation I could find in
the documentation was at the very beginning of Part III (Isabelle/HOL) of The
Isabelle/Isar Reference Manual (February 17, 2016):
"Andrews' book [1] is a full description of the original Church-style
higher-order logic, with proofs of correctness and completeness wrt. certain
set-theoretic interpretations. The particular extensions of Gordon-style HOL
are explained semantically in two chapters of the 1993 HOL book [50]."
http://www.cl.cam.ac.uk/research/hvg/Isabelle/dist/Isabelle2016/doc/isar-ref.pdf (p. 236)

Again, a brief description of the type system in the online documentation,
instead of only literature references, would be desirable.

Please allow me a minor correction here. In both the first edition [Andrews,
1986] referenced by the manual as well as the current second edition [Andrews,
2002], Andrews provides not only a "full description of the original
Church-style higher-order logic" but, more precisely, a description of his own
system Q0, which is roughly equivalent, but much more elegant. For example, in
Church's simple type theory, Modus Ponens is a primitive rule (the fifth of
altogether six rules) [cf. Church, 1940, p. 60; Andrews, 2014, available online
at
http://plato.stanford.edu/archives/spr2014/entries/type-theory-church/#AxiRulInf
], whereas in Andrews' Q0, Modus Ponens is a derived rule [cf. Andrews, 2002,
p. 224 (5224)] obtained from the single rule of inference of Q0. The type
system is identical, allowing only 'i' (iota) and 'o' (omicron), and
combinations of them (i.e., simple type theory, having no type variables).
In the dependent type theory R0, not only type variables, but also an
additional type inference mechanism is implemented, which seems to resemble
"Gordon['s] [...] facility to introduce new types as semantic subtypes from
existing types"
http://www.cl.cam.ac.uk/research/hvg/Isabelle/dist/Isabelle2016/doc/isar-ref.pdf (p. 261)
: "Whenever a sentence of the form p_(oα)c_(α) is inferred [...], type p is
added to c, such that c_p becomes a well-formed formula." [Kubota, 2015, p. 32]
If it isn't already, p becomes (also) a type, by additionally assigning the
type of types (tau) to it.

The abstract of the article by Traytel et al. you referred to via the DOI link
http://dx.doi.org/10.1109/LICS.2012.75
starts as follows: "Interactive theorem provers based on higher-order logic
(HOL) traditionally follow the definitional approach, reducing high-level
specifications to logical primitives." This elegant formulation of the
"definitional approach" coincides with what Andrews calls "expressiveness"
[Andrews, 2002, p. 205], except that the reduction (or, in turn, the
expressiveness of the few primitive symbols and rules) applies not only to
symbols, but also to the rules, such that the basis of Q0 consists of only
- two primitive symbols (identity and description),
- two primitive type symbols (the type of propositions 'o' and the type of
individuals 'i'),
- and only a single rule of inference (substitution),
from which everything can be expressed (within the limits of simple type
theory).
For extending expressiveness to dependent type theory, in R0, type iota ('i')
is replaced by type omega (the universal type), and type tau (the type of
types) is introduced. Furthermore, it is possible to bind type variables with
lambda.

If one consequently follows Andrews' approach of expressiveness at all levels,
then Hilbert-style deductive systems are preferred, since they allow the rules
to be reduced to a single rule of inference. An extra rule for induction such
as that given at
https://www4.in.tum.de/~wenzelm/papers/lausanne2009.pdf (p. 5)
can be avoided by making induction part of the definition of N [cf. Andrews,
2002, p. 260]. In the same spirit, a primitive type for natural numbers can be
avoided by using a combination of 'i' and 'o' as carrier type ( nat := (o(oi))
) [cf. ibid.].
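The idea behind making induction part of the definition of N is the standard impredicative one (stated here in generic notation, in the spirit of [cf. Andrews, 2002, p. 260]; the exact formulation there may differ): a natural number is whatever belongs to every set containing 0 and closed under the successor S, so the induction principle falls out of the definition rather than being postulated.

```latex
N \;\equiv\; \lambda n.\ \forall p.\ \bigl[\, p\,0 \;\wedge\; \forall x.\,\bigl(p\,x \supset p\,(S\,x)\bigr) \,\bigr] \supset p\,n
```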
The pairing operation used by Andrews [cf. Andrews, 2002, p. 208] (see also the
definition of the logical AND (&) above, wff 47) is better suited for lambda
calculus than the one suggested at
https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-175.pdf (p. 13),
and in R0, it can be used to form both a type restricted as well as a
polymorphic pairing operator (by the use of type omega).
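Andrews' pairing operation, pair x y := λg. g x y, can likewise be sketched with untyped Python lambdas (an illustration only; the typed Q0/R0 versions constrain g appropriately):

```python
# Church-style pairing: a pair is the function that applies its argument
# to the two components; the projections pass in the matching selector.
pair = lambda x: lambda y: lambda g: g(x)(y)
fst  = lambda p: p(lambda x: lambda y: x)
snd  = lambda p: p(lambda x: lambda y: y)

p = pair(1)("two")
assert fst(p) == 1 and snd(p) == "two"
```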

According to this approach, a proliferation of primitive types and type
inference mechanisms should be avoided, and rules should be kept as simple as
possible, e.g., as quoted above: "Whenever a sentence of the form p_(oα)c_(α)
is inferred [...]."
In one of the manuals it is stated:
"If you introduce a new type axiomatically, i.e. via typedecl and
axiomatization (§5.5), the minimum requirement is that it has a non-empty
model, to avoid immediate collapse of the logical environment."
http://www.cl.cam.ac.uk/research/hvg/Isabelle/dist/Isabelle2016/doc/isar-ref.pdf (p. 115)
But what does this mean exactly, having "a non-empty model"? Is this the same
as the existence of a proven theorem of the form "p_(oα)c_(α)", which shows
that p is non-empty?

It will take some time until I am able to read through the article by Traytel
et al., but at first glance a more complex semantic reasoning seems to be
behind it:
"An inductive definition specifies the least predicate or set R closed under
given rules: applying a rule to elements of R yields a result within R. For
example, a structural operational semantics is an inductive definition of an
evaluation relation.
[...]
Both inductive and coinductive definitions are based on the Knaster-Tarski
fixed-point theorem for complete lattices. The collection of introduction rules
given by the user determines a functor on subsets of set-theoretic relations."
http://www.cl.cam.ac.uk/research/hvg/Isabelle/dist/Isabelle2016/doc/isar-ref.pdf (p. 238)
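The quoted phrase "the least predicate or set R closed under given rules" can be made concrete with a finite toy example (a sketch only, not Isabelle's actual implementation): iterate the rules from the empty set until nothing new is added, which in the finite case is exactly the Knaster-Tarski least fixed point.

```python
# Least set closed under given rules, as a least fixed point (finite case).
def lfp(step):
    s = set()
    while True:
        t = step(s)
        if t == s:
            return s
        s = t

# Rules: 0 is in R; if n is in R then n+3 is in R (bounded to stay finite).
step = lambda s: s | {0} | {n + 3 for n in s if n + 3 < 12}
assert lfp(step) == {0, 3, 6, 9}
```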

Please allow me, where appropriate, to postpone answers, as I would like to
focus on my forthcoming publication of R0 [Kubota, 2015].

Kind regards,

Ken Kubota

References

Andrews, Peter B. (1986), An Introduction to Mathematical Logic and Type
Theory: To Truth Through Proof. Orlando: Academic Press.

Andrews, Peter B. (2002), An Introduction to Mathematical Logic and Type
Theory: To Truth Through Proof. Second edition. Dordrecht / Boston / London:
Kluwer Academic Publishers. ISBN 1-4020-0763-9. DOI: 10.1007/978-94-015-9934-4.

Andrews, Peter B. (2014), "Church's Type Theory". In: Stanford Encyclopedia of
Philosophy (Spring 2014 Edition). Ed. by Edward N. Zalta. Available online at
http://plato.stanford.edu/archives/spr2014/entries/type-theory-church/ (July 2,
2016).

Church, Alonzo (1940), "A Formulation of the Simple Theory of Types". In:
Journal of Symbolic Logic 5, pp. 56-68.

Kubota, Ken (2015), On the Theory of Mathematical Forms (draft from May 18,
2015). Unpublished manuscript. SHA-512: a0dfe205eb1a2cb29efaa579d68fa2e5
45af74d8cd6c270cf4c95ed1ba6f7944 fdcffaef2e761c8215945a9dcd535a50
011d8303fd59f2c8a4e6f64125867dc4. DOI: 10.4444/100.10. See:
http://dx.doi.org/10.4444/100.10

Kubota, Ken (2015a), Excerpt from [Kubota, 2015] (pp. 356-364, pp. 411-420, and
pp. 754-755). Available online at
http://www.kenkubota.de/files/R0_draft_excerpt_with_definitions_and_groups.pdf
(January 24, 2016). SHA-512: 67a48ee12a61583bd82c286136459c0c
9fd77f5fd7bd820e8b6fe6384d977eda 4d92705dac5b3f97e7981d7b709fc783
3d1047d1831bc357eb57b263b44c885b.

____________________

Ken Kubota
doi: 10.4444/100
http://dx.doi.org/10.4444/100
Ken Kubota
2016-08-09 10:21:26 UTC
Dear Members of the Research Community,

As part of my efforts to try to understand the type system of Isabelle/HOL, I
have gathered the references from the mails written by Corey Richardson and
Michael Norrish and from The Isabelle/Isar Reference Manual, especially from
a) Chapter 10: Higher-Order Logic (pp. 236 f.)
b) § 11.7 Semantic subtype definitions (pp. 260-263)
c) § 11.8 Functorial structure of types (pp. 263 f.).

In all cases, with one exception, I was able to obtain either the link to the
online version or the ISBN number.

The exception is Mike Gordon's text "HOL: A machine oriented formulation of
higher order logic" (Technical Report 68, University of Cambridge Computer
Laboratory), registered at
http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-68.html

As Isabelle is provided by the University of Cambridge Computer Laboratory,
among others, would it be possible for the Isabelle developers to make this
technical report available online as a PDF file? This was done for the first
five reports listed at
http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-table.html

According to the references supplied with the traditional (Mike Gordon's) HOL
variant, a revised version exists: "Technical Report No.\ 68 (revised
version)", quoted from
https://github.com/HOL-Theorem-Prover/HOL/blob/master/Manual/Description/references.tex
Either both versions or the revised version should be made available.

I would like to take this opportunity to thank Michael Norrish for his advice.
The LOGIC part of HOL4 system's documentation at
is the kind of short and precise formulation of the logical core I recently
proposed for Isabelle/HOL, too:
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-July/msg00105.html

In this HOL4 document, it is stated: "From a logical point of view it would be
better to have a simpler substitution primitive, such as 'Rule R' of Andrews'
logic Q0, and then to derive more complex rules from it." (p. 27)
This is exactly the same idea that I presented recently: "The main criterion is
expressiveness [cf. Andrews, 2002, p. 205], or, vice versa, the reducibility of
mathematics to a minimal set of primitive rules and symbols. [...] For the
philosopher it is not proof automation but expressiveness that is the main
criterion, such that the method with the fewest rules of inference,
i.e., a Hilbert-style system, is preferred, and all other rules become derived
rules."
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-July/msg00028.html

Similarly, the HOL4 document argues in favor of definitions instead of
introducing new axioms: "The advantages of postulation over definition have
been likened by Bertrand Russell to the advantages of theft over honest toil.
As it is all too easy to introduce inconsistent axiomatizations, users of the
HOL system are strongly advised to resist the temptation to add axioms, but to
toil through definitional theories honestly." [p. 33]
In the same manner, I wrote: "One consequence [...] is to avoid the use of
non-logical axioms. [...] The appropriate way to deal with objects of certain
properties is to create a set of objects with these properties." [Kubota, 2015,
p. 29]
For this reason, in the R0 implementation, the group axioms are expressed not
as axioms, but as properties in the definition of "Grp" (wff 266):
http://www.kenkubota.de/files/R0_draft_excerpt_with_definitions_and_groups.pdf (p. 359)
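The same definitional idea, group "axioms" packaged as a defining property rather than postulated, can be sketched as a predicate over finite structures (a toy check for illustration, not the R0 definition of "Grp"):

```python
# "Being a group" as a predicate: closure, associativity, identity, inverses.
from itertools import product

def is_group(carrier, op, e):
    return (all(op(a, b) in carrier for a, b in product(carrier, repeat=2))
        and all(op(op(a, b), c) == op(a, op(b, c))
                for a, b, c in product(carrier, repeat=3))
        and all(op(e, a) == a == op(a, e) for a in carrier)
        and all(any(op(a, b) == e for b in carrier) for a in carrier))

assert is_group({0, 1, 2}, lambda a, b: (a + b) % 3, 0)  # Z/3Z is a group
assert not is_group({0, 1}, lambda a, b: a * b, 1)       # 0 has no inverse
```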

Kind regards,

Ken Kubota

Sources

[S1] The Isabelle/Isar Reference Manual (February 17, 2016)
http://www.cl.cam.ac.uk/research/hvg/Isabelle/dist/Isabelle2016/doc/isar-ref.pdf

[S2] Reference in the e-mail written by Corey Richardson
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-July/msg00070.html

Dmitry Traytel / Andrei Popescu / Jasmin C. Blanchette, Foundational,
Compositional (Co)datatypes for Higher-Order Logic: Category Theory Applied to
Theorem Proving
http://dx.doi.org/10.1109/LICS.2012.75
https://www21.in.tum.de/~traytel/papers/lics12-codatatypes/codat.pdf

[S3] Reference in the e-mail written by Michael Norrish
"The chapters from the HOL book mentioned above are available as part of the
HOL4 system's documentation at:
See §2.5.4 in particular for a discussion of how the system can be extended
with new types."

The HOL System: LOGIC (November 6, 2014)

Derived References (from [S1], pp. 260-263)

[15] W. M. Farmer. The seven virtues of simple type theory. J. Applied Logic,
6(3):267-286, 2008.
http://imps.mcmaster.ca/doc/seven-virtues.pdf

[18] M. J. C. Gordon. HOL: A machine oriented formulation of higher order
logic. Technical Report 68, University of Cambridge Computer Laboratory, 1985.

[50] A. Pitts. The HOL logic. In M. J. C. Gordon and T. F. Melham, editors,
Introduction to HOL: A Theorem Proving Environment for Higher Order Logic,
pages 191-232. Cambridge University Press, 1993.
ISBN 0-521-44189-7

[57] M. Wenzel. Type classes and overloading in higher-order logic. In E. L.
Gunter and A. Felty, editors, Theorem Proving in Higher Order Logics: TPHOLs
'97, volume 1275 of Lecture Notes in Computer Science. Springer-Verlag, 1997.
http://www4.in.tum.de/~wenzelm/papers/axclass-TPHOLs97.pdf

[22] F. Haftmann and M. Wenzel. Constructive type classes in Isabelle. In T.
Altenkirch and C. McBride, editors, Types for Proofs and Programs, TYPES 2006,
volume 4502 of LNCS. Springer, 2007.
http://www4.in.tum.de/~wenzelm/papers/constructive_classes.pdf

[27] O. Kuncar and A. Popescu. A consistent foundation for Isabelle/HOL. In C.
Urban and X. Zhang, editors, Interactive Theorem Proving - 6th International
Conference, ITP 2015, Nanjing, China, August 24-27, 2015, Proceedings, volume
9236 of Lecture Notes in Computer Science. Springer, 2015.
http://www21.in.tum.de/~kuncar/kuncar-popescu-jar2016.pdf
http://andreipopescu.uk/pdf/ITP2015.pdf

Standard References

Andrews, Peter B. (1986), An Introduction to Mathematical Logic and Type
Theory: To Truth Through Proof. Orlando: Academic Press. ISBN 0-12-058535-9
(Hardcover). ISBN 0-12-058536-7 (Paperback).

Andrews, Peter B. (2002), An Introduction to Mathematical Logic and Type
Theory: To Truth Through Proof. Second edition. Dordrecht / Boston / London:
Kluwer Academic Publishers. ISBN 1-4020-0763-9. DOI: 10.1007/978-94-015-9934-4.

Andrews, Peter B. (2014), "Church's Type Theory". In: Stanford Encyclopedia of
Philosophy (Spring 2014 Edition). Ed. by Edward N. Zalta. Available online at
http://plato.stanford.edu/archives/spr2014/entries/type-theory-church/ (July 2,
2016).

Church, Alonzo (1940), "A Formulation of the Simple Theory of Types". In:
Journal of Symbolic Logic 5, pp. 56-68.

Further References

Kubota, Ken (2015), On the Theory of Mathematical Forms (draft from May 18,
2015). Unpublished manuscript. SHA-512: a0dfe205eb1a2cb29efaa579d68fa2e5
45af74d8cd6c270cf4c95ed1ba6f7944 fdcffaef2e761c8215945a9dcd535a50
011d8303fd59f2c8a4e6f64125867dc4. DOI: 10.4444/100.10. See:
http://dx.doi.org/10.4444/100.10

Kubota, Ken (2015a), Excerpt from [Kubota, 2015] (pp. 356-364, pp. 411-420, and
pp. 754-755). Available online at
http://www.kenkubota.de/files/R0_draft_excerpt_with_definitions_and_groups.pdf
(January 24, 2016). SHA-512: 67a48ee12a61583bd82c286136459c0c
9fd77f5fd7bd820e8b6fe6384d977eda 4d92705dac5b3f97e7981d7b709fc783
3d1047d1831bc357eb57b263b44c885b.

____________________

Ken Kubota
doi: 10.4444/100
http://dx.doi.org/10.4444/100
Lawrence Paulson
2016-08-09 10:25:43 UTC

Larry Paulson
Post by Ken Kubota
Dear Members of the Research Community, [...]
Ken Kubota
2016-08-16 09:23:27 UTC
Dear Members of the Research Community,

for Mike Gordon's text at
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-August/msg00047.html
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-August/msg00046.html

What Gordon calls "'second order' [lambda]-terms like \a. \x:a. x, [which]
perhaps [...] should be included in the HOL logic" at
is implemented in R0 as the main feature of dependent type theory (i.e., the
binding of type variables with lambda) and called "type abstraction" there:
http://www.kenkubota.de/files/R0_draft_excerpt_with_definitions_and_groups.pdf (pp. 356 ff.)

It might be worth considering publishing this 2001 version of Gordon's paper
within the technical report series in order to ensure long-term (online)
availability of the currently only cached document. In addition, online
availability of the 1985 version (UCAM-CL-TR-68) is desirable, as it is still
of historical interest.

Also, I would like to thank Tom Melham for bringing to my attention his
approach of implementing in the HOL logic the idea of explicit quantification
over type variables as presented by Andrews in 1965: "I have found Andrews'
book [1] invaluable in working out many of the technical details of the
extension to HOL proposed here."
http://www.cs.ox.ac.uk/tom.melham/pub/Melham-1994-HLE.pdf (p. 15)

I have addressed this issue and discussed details in my draft: "It has to be
noted that already in his PhD thesis 'A Transfinite Type Theory with Type
Variables' published in 1965 Andrews is very near to developing a dependent
type theory. [...] In the system Q with type variables the universal quantifier
(∀) becomes a true variable binder in the case of types, but in the case of
terms (objects) a definition depending on λ." [Kubota, 2015, p. 31] However,
for some reason Andrews refrained from directly using lambda as type variable
binder, which would exhibit the reducibility of the variable binders to a
single one, as was done in R0.

2. In the LOGIC part of the documentation of the current HOL system, reference
is made to Rule R of Andrews' logic Q0: "From a logical point of view it would
be better to have a simpler substitution primitive, such as 'Rule R' of
Andrews' logic Q0, and then to derive more complex rules from it."

Is it possible to figure out the author of this sentence? I am considering
quoting it and would like to credit authorship in such a case.

3. Preferring definitions over the use of non-logical axioms as discussed at
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-August/msg00046.html
has a major advantage to be mentioned. If the conditions are not introduced as
axioms, but as properties within the definition, and these conditions later
turn out to be contradictory, they do not render the system inconsistent as a
whole, but only demonstrate that no mathematical object ("model") can satisfy
these conditions, which is formally expressed by the emptiness of the defined
set, which is the desired result (in the simplest case concerning the types,
i.e., a unary predicate; otherwise tuples may be used).

For this reason, and for other reasons, such as not constraining expressiveness
of the system, instead of postulating a (non-logical) Axiom of Infinity
(usually over the type of individuals), the Peano axioms should be part of the
definition of natural numbers (which can be expressed without axioms quite
simply and naturally in dependent type theory), which guarantees that the set
is infinite: "Actually, any set satisfying postulates P1-P4 must be infinite."
[Andrews, 2002, p. 259] Then there is no need to add an Axiom of Infinity as
Axiom 6, which extends Q0 to Q0^inf [cf. Andrews, 2002, p. 259].
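The point that an inductive definition already secures the Peano properties can
be illustrated in a modern proof assistant. The following Lean 4 fragment is my
own sketch (the names N, succ_inj, and zero_ne_succ are mine, not taken from Q0,
R0, or Isabelle): the Peano-style facts are derived from the definition, with no
axiom of infinity postulated.

```lean
-- Sketch (Lean 4): the naturals as an inductive definition.
-- The Peano-style properties below are derived, not postulated.
inductive N where
  | zero : N
  | succ : N → N

-- Successor is injective (one of the Peano postulates):
theorem succ_inj {m n : N} (h : N.succ m = N.succ n) : m = n :=
  N.succ.inj h

-- Zero is not a successor:
theorem zero_ne_succ (n : N) : N.zero ≠ N.succ n :=
  fun h => N.noConfusion h
```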

For the same reasons, the use of language elements which make implicit use of
axioms should be avoided. Gordon correctly states: "The inclusion of
[epsilon]-terms into HOL 'builds in' the Axiom of Choice [...]." Somewhat
earlier on the same page, he quite frankly confesses: "It must be admitted that
the [epsilon]-operator looks rather suspicious."
Implicit use of the Axiom of Choice can be avoided by using the description
operator instead, like in Q0 and R0, where it is the counterpart to identity
(equality) [cf. Andrews, 2002, pp. 211, 213; Kubota, 2015].
For example, the "conditional term Cond t t_1 t_2 (meaning 'if t then t_1 else
t_2')" "defined using the [epsilon]-operator" at
can be defined using the description operator [cf. Andrews, 2002, pp. 235 f.
(5313)]; software verification of the formal proof has been provided in
[Kubota, 2015, pp. 165-174]. The (verification of the) formal proof and the
axioms for R0 have now been made available online at:
http://www.kenkubota.de/files/R0_draft_excerpt_with_axioms_and_proof_of_C_function.pdf

Please note that in Axiom 5 (Axiom of Descriptions) and throughout the whole
system the primitive symbol for identity (equality) 'Q' used in Q0 [cf.
Andrews, 2002, pp. 211, 213] was eliminated and replaced by the identity symbol
'=' in R0, as the symbol 'Q' syntactically has no independent function of its
own:
http://www.kenkubota.de/files/R0_draft_excerpt_with_axioms_and_proof_of_C_function.pdf (pp. 351 f.)
It then becomes even more obvious that description is the counterpart to
identity, extracting the single element from a singleton (unit set), and
eliminating the identity symbol '='.

4. Rule R is indeed presented by Andrews as the "single rule of inference"
[Andrews, 2002, p. 213] of Q0. However, already on the following page, a
variant, Rule R', is specified [cf. p. 214], which also takes into account a
set of hypotheses. Hence, logically, one might regard Rule R' as the general
rule and Rule R only as the special case in which the set of hypotheses is
empty.

Technically, both Rule R and Rule R' are implemented in the dependent type
theory R0 as two different routines due to the access to different parts of the
formulae, but both routines then use the same subroutine. Rule R and Rule R'
are called §s and §s' in R0, where the section sign (§) indicates a rule, and
's' stands for substitution:
http://www.kenkubota.de/files/R0_draft_excerpt_with_definitions_and_groups.pdf (pp. 362 f.)
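The division of labour between the two routines can be sketched in a few lines
of Python. This is a hypothetical illustration only, with strings standing in
for well-formed formulae; the actual R0 routines operate on syntax trees and
enforce the side conditions of Rule R'.

```python
def rule_R_prime(hyps, theorem, lhs, rhs):
    """Sketch of Rule R': from H |- lhs = rhs and H |- theorem, infer
    H |- theorem with one occurrence of lhs replaced by rhs.
    Strings stand in for wffs; binding side conditions are omitted."""
    assert lhs in theorem, "the occurrence to be replaced must exist"
    # The shared substitution subroutine mentioned above:
    return theorem.replace(lhs, rhs, 1)

def rule_R(theorem, lhs, rhs):
    """Rule R as the special case of Rule R' with an empty hypothesis set."""
    return rule_R_prime(set(), theorem, lhs, rhs)

# Replacing 'T & T' by 'T' inside the theorem '(T & T) = T':
assert rule_R("(T & T) = T", "T & T", "T") == "(T) = T"
```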

Rule §\ ('\' stands for lambda) denotes lambda-conversion (or more
specifically, beta-conversion), which in Andrews' Q0 is obtained as theorem
5207 [cf. Andrews, 2002, p. 218 f.] from Axiom Schemata 4_1 - 4_5. Since some
of the Axiom Schemata 4_1 - 4_5 contain restrictions, lambda-conversion was
implemented as a rule in R0. Rules, as the active elements, may carry
restrictions; theorems (including axioms/axiom schemata), as passive elements,
may not. This alternative method of implementing lambda-conversion was
already considered by Andrews in a paragraph to which he directed my attention
when I met him and which is already contained in the first edition (1986):
"5207 could be taken as an axiom schema in place of 4_1 - 4_5, and for some
purposes this would be desirable, since 5207 has a conceptual simplicity and
unity which is not apparent in 4_1 - 4_5." [Andrews, 1986, p. 165; Andrews,
2002, p. 214]

The fact that lambda-conversion is actually a process (i.e., active, dynamic,
and hence, a rule), and not only a (passive, static) theorem, axiom, or axiom
schema, becomes obvious in the online presentation of Q0, where Axiom Schemata
4_1 - 4_5 are replaced by 5207 as new Axiom Schema 4 which makes use of a
function "SubFree" denoting the process of substitution (of all free
occurrences of a variable):
http://plato.stanford.edu/archives/spr2014/entries/type-theory-church/#ForBasEqu

Similarly, in Church's simple theory of types, lambda-conversion
(beta-conversion) is implemented as the second (beta-reduction, also called
beta-contraction) and third (beta-expansion) of altogether six rules of
inference [cf. Church, 1940, p. 60].
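For concreteness, beta-contraction as a rule operating on de Bruijn terms can
be sketched as follows. This is the standard textbook shift/substitute
algorithm in Python, my own illustration rather than code from Q0, R0, or any
of the systems discussed here.

```python
from dataclasses import dataclass

# De Bruijn lambda terms: variables are indices, binders are nameless.
@dataclass(frozen=True)
class Var:
    idx: int

@dataclass(frozen=True)
class Lam:
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def shift(t, d, cutoff=0):
    """Shift free variables (indices >= cutoff) by d."""
    if isinstance(t, Var):
        return Var(t.idx + d) if t.idx >= cutoff else t
    if isinstance(t, Lam):
        return Lam(shift(t.body, d, cutoff + 1))
    return App(shift(t.fn, d, cutoff), shift(t.arg, d, cutoff))

def subst(t, j, s):
    """Substitute s for variable j in t."""
    if isinstance(t, Var):
        return s if t.idx == j else t
    if isinstance(t, Lam):
        return Lam(subst(t.body, j + 1, shift(s, 1)))
    return App(subst(t.fn, j, s), subst(t.arg, j, s))

def beta(t):
    """One beta-contraction step: (lambda. b) a  ~>  b[0 := a]."""
    if isinstance(t, App) and isinstance(t.fn, Lam):
        return shift(subst(t.fn.body, 0, shift(t.arg, 1)), -1)
    return t

# (lambda x. x) y  reduces to  y:
assert beta(App(Lam(Var(0)), Var(5))) == Var(5)
```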

Kind regards,

Ken Kubota

References

Andrews, Peter B. (1986), An Introduction to Mathematical Logic and Type
Theory: To Truth Through Proof. Orlando: Academic Press. ISBN 0-12-058535-9
(Hardcover). ISBN 0-12-058536-7 (Paperback).

Andrews, Peter B. (2002), An Introduction to Mathematical Logic and Type
Theory: To Truth Through Proof. Second edition. Dordrecht / Boston / London:
Kluwer Academic Publishers. ISBN 1-4020-0763-9. DOI: 10.1007/978-94-015-9934-4.

Andrews, Peter B. (2014), "Church's Type Theory". In: Stanford Encyclopedia of
Philosophy (Spring 2014 Edition). Ed. by Edward N. Zalta. Available online at
http://plato.stanford.edu/archives/spr2014/entries/type-theory-church/ (July 2,
2016).

Church, Alonzo (1940), "A Formulation of the Simple Theory of Types". In:
Journal of Symbolic Logic 5, pp. 56-68.

Kubota, Ken (2015), On the Theory of Mathematical Forms (draft from May 18,
2015). Unpublished manuscript. SHA-512: a0dfe205eb1a2cb29efaa579d68fa2e5
45af74d8cd6c270cf4c95ed1ba6f7944 fdcffaef2e761c8215945a9dcd535a50
011d8303fd59f2c8a4e6f64125867dc4. DOI: 10.4444/100.10. See:
http://dx.doi.org/10.4444/100.10

Kubota, Ken (2015a), Excerpt from [Kubota, 2015] (p. 1, pp. 356-364, pp.
411-420, and pp. 754-755). Available online at
http://www.kenkubota.de/files/R0_draft_excerpt_with_definitions_and_groups.pdf
(January 24, 2016). SHA-512: 67a48ee12a61583bd82c286136459c0c
9fd77f5fd7bd820e8b6fe6384d977eda 4d92705dac5b3f97e7981d7b709fc783
3d1047d1831bc357eb57b263b44c885b.

Kubota, Ken (2015b), Excerpt from [Kubota, 2015] (p. 1, pp. 165-174, and pp.
350-352). Available online at
http://www.kenkubota.de/files/R0_draft_excerpt_with_axioms_and_proof_of_C_function.pdf
(August 13, 2016). SHA-512: 8702d932d50f2e1f6b592dc03b6f283e
64ba18f881150b4ae32e4b915249d241 3db34ba4c3dcf0e0cdef25b679d9615f

____________________

Ken Kubota
doi: 10.4444/100
http://dx.doi.org/10.4444/100

Gottfried Barrow
2015-12-23 09:43:44 UTC
Ken,

I email this to you also, to try and be civilized, but you shouldn't
email me back. I'm happy to let Popescu and Kumar finish it all.

But you did address this to me. I stated my rules, but rules require the
use of discretion, or being ignored and broken.

If you don't diss the team on the team's turf, in a way that makes me
feel I should defend the team, and you don't address emails to me,
there's no reason for me to keep showing up.

To try and add some concrete value, I include a base 10 datatype that's
bijective with 'nat':

https://gist.github.com/GezzMC/125ce3352966ec7f7e6c

I did that based on Num.thy, by Florian Haftmann and Brian Huffman. For
the proofs, I brute forced them with a combination of 'auto', 'induct',
and Sledgehammer. I've never before or since been willing to wait so
long on Sledgehammer.
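For readers curious what a base-10 datatype bijective with 'nat' amounts to
arithmetically, here is a small Python sketch (my own illustration, not the
gist or Num.thy): every positive integer has a unique digit string over 1..10,
mirroring the way Num.thy's bijective binary numerals correspond one-to-one
with the positive naturals.

```python
def to_bij10(n):
    """Bijective base-10 digits (1..10) of a positive integer, most
    significant digit first.  Unlike ordinary decimal, there is no zero
    digit, so no leading-zero ambiguity: the representation is unique."""
    assert n >= 1
    digits = []
    while n > 0:
        d = n % 10
        if d == 0:
            d = 10          # use digit 10 instead of a zero digit
        digits.append(d)
        n = (n - d) // 10
    return digits[::-1]

def from_bij10(digits):
    """Inverse conversion: evaluate the digit string."""
    n = 0
    for d in digits:
        n = 10 * n + d
    return n

# Round trips demonstrate the bijection with the positive naturals:
assert all(from_bij10(to_bij10(n)) == n for n in range(1, 2000))
```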

I suppose I should tie that into this thread.

I think I'm starting to remember its purpose for here.

We can consider any person an archetype of some class of people. An
archetype at hand is Larry Paulson, and you're another archetype. To
call myself an archetype, that would mean I'm (willing to admit I'm)
full of myself.

*NO FANBOY, NO LOYALTY*

In this thread of emails (made up of multiple subject lines), Larry
Paulson is getting mentioned a lot, and I'm defending him.

I could say, "And obviously, Larry Paulson doesn't need me to defend
him". It's not the need. I think it's the value of someone knowing that
their stuff is getting used.

The gist link also shows that something is being looked at and used.
However, it also reflects that Isabelle/HOL isn't some perfect work
where I'm saying, "I have a testimony for the world. Isabelle? Pure

Where's the base 10 type? Where are the vectors? And don't tell me to
look outside of 'Complex_Main'.

Criticism? Absolutely, to show I'm no fanboy, but in the end, it happens
to be that Isabelle/HOL is good, as far as I can tell.

*ISABELLE/HOL: HAVE YOU, KEN, AND JAMES MEYER EVEN USED IT?*

This falls under the "if you're so smart" category.

I've always considered that I would need a better understanding of logic
to work at the "meta-logic" or "metalanguage" level. Briefly here, you
challenge the top dogs at the meta-logic level, but nothing I've seen
indicates that you have any decent ability to use Isabelle/HOL.
Shouldn't a smart person be able to blow through the easy stuff?

Previously, I kept James Meyer out of this, who was in your first
address list. In reviewing your emails, I paid more attention to your
reference to him, particularly to his remark about Paulson. Based on
that, I then looked at Meyer's PDF on his site. Keep that in mind.

*MISUNDERSTANDING? NO, NON-VERBOSENESS*
Post by Ken Kubota
There has been a misunderstanding. I claimed neither that Isabelle is
inconsistent nor the contrary.
Context is everything. You suggested that the foundation of Isabelle is
shaky, and warned everyone that it could be inconsistent, all while
bringing attention to your own work.

There was no misunderstanding on my part, only a lack of verboseness,
and a lack of completely covering my tail.

*ISABELLE/HOL: HAVE YOU, JAMES, AND KEN KUBOTA EVEN USED IT?*

First, a little comedic relief.

A logician, a 2nd logician, and a 3rd logician were in a bar looking at
their shoes. The first logician said, "I have concluded that when I send
a long-winded email to the FOM mailing list, most likely, most of the
email is not read". The 2nd and 3rd logicians just kept looking at their
shoes. Not because they were stereotypical nerds, but because the
choices had been, while the 1st logician was speaking, between logic or
shoes, shoes or logic, logic and shoes, shoes and logic, not shoes and
not logic, not(shoes and logic), and so forth, duplicates allowed. So,
clearly, if not shoes and not logic is not an option, shoes are more
interesting than logic any day, even to logicians.

*ISABELLE/HOL: HAVE YOU, KEN, AND MEYER EVEN USED IT?*
Post by Ken Kubota
Independently, James R. Meyer has raised the same concern: "[T]he result, as
for Świerczkowski's proof, is an illogical confusion of meta-language and the
language of HF. [...] Paulson does not include any computer code that proves
that this cannot result in an illogical confusion of meta-language and
object[ ]language." [Meyer, 2015, p. 11]
I did a search, and the PDF is this:

http://www.jamesrmeyer.com/pdfs/ff_swierczkowski.pdf

I'll get the links out of the way, and provide Paulson's:

http://afp.sourceforge.net/entries/Incompleteness.shtml

Here's a pertinent quote, page 11, including a pertinent phrase,
"computer code":

Paulson does not include any computer code that proves that this cannot result
in an illogical confusion of meta-language and object-language. Of course, it
would be a simple matter to change the computer code to apply a different
notation for HF sets and meta-language sets, but clearly the reason that was
not done was because the proof could not proceed without that common notation
for HF and the meta-language.

Paulson's computer assisted proof follows Świerczkowski's proof in including
the assumption that such common notation cannot result in a loss of
independence of the meta-language and the object-language, and that it cannot
result in an illogical confusion of meta-language and object-language. This
assumption is not stated anywhere either in Paulson's description or in his
computer code, but if it is a fully formal proof (as Paulson claims his proof
to be) then that assumption should not be implicit but should be an explicit
statement, and included as part of the computer code.

I will speak for Larry Paulson here, which is meant as a joke, to the
extent that it is a joke.

Larry Paulson gets slightly irritated, at most, when people take pot
shots at his work, but he gets downright furious when "source", as
what's in a THY, is called "computer code".

Language is where I have and could get argumentative with Larry Paulson.
In the end, I recognize what's important, and I try to conform with
their vocabulary, when I'm operating on their domain.

Now the point. Meyer's use of "computer code" indicates that he has
never used Isabelle/HOL in a meaningful way.

You have also said nothing to indicate that you've used Isabelle/HOL in
a meaningful way.

I have used it now for 3.5 years. There is no "changing" things in some
casual sense, as with programming languages.

The axioms are enforced, all the time. Surface changes to notation only
get you a different look, for the same thing.

*SEMANTICS, MONOLITHICISTS LIKE ME DON'T NEED IT*

You're a purist? With the help of this thread, I have new labels for
myself: concretist and monolithicist.

You're all caught up in "meta" and "object". Larry Paulson could have
chosen lots of things to call what he calls "meta-logic" and
"object-logic", like "little-logic" and "big-logic".

Personally, I think he should have chosen "I-be-the-man-logic" and
"I-be-the-beggar-for-axioms-logic".

Ultimately, there's a monolithic logic. It is what it is. It is to you
what you want it to be. It is to Larry Paulson what Larry Paulson wants
it to be. Semantics aren't required for it to do what it does. Bits and
bytes, highs and lows don't need semantics.

Goedel's First Incompleteness Theorem, whether anyone (as opposed to the
machine) thinks it's been implemented in Isabelle (Nominal2 it looks
like, whatever that is), has no bearing on whether it can be proved that
"True = False", which, concretely, should bring a lot of things into
sharp focus, when what's being used is a boolean logic like HOL.

(NOTE: Take the good and leave the bad, anywhere in this email.)

*MY EXPERIENCE, INTUITION, SO FAR SO GOOD*

When I've tried to prove things that aren't true, after seeing the
light, because Isabelle/HOL wouldn't prove them, it has always ended up
making sense.

Likewise with what I've proved true, like with the base 10 datatype
above. It ends up making intuitive sense.

Assuming the consistency of Nominal2, whatever that is, I'll boldly say,
based upon my experience with Isabelle/HOL, that what Larry Paulson
proved is true, in that particular logic.

As to the meaning of what he proved, I don't even care. I care that
"True ~= False". That's 13 characters, with the spaces, and even now,
I'm calling for more neurons to help me out.

There's no burden in knowing very little. I always get to over perform.
Please see my base 10 datatype above. True art? It's ugly, just a means
to the next step.

Regards,
GB
Post by Ken Kubota
Dear Gottfried Barrow,
There has been a misunderstanding. I claimed neither that Isabelle is
inconsistent nor the contrary.
However, a proof of Paulson's claimed theorem 'proved_iff_proved_PfP' is not
available in axiomatic (Hilbert-style) deductive systems, because in
Hilbert-style systems this claimed theorem cannot even be expressed, since it
does not contain well-formed formulae only. Therefore this claimed theorem is
not a mathematical theorem or metatheorem. For now, please allow me to focus on
this single point.
This can be demonstrated easily by looking at the structure of the claimed
theorem 'proved_iff_proved_PfP' available at
http://www.cl.cam.ac.uk/~lp15/papers/Formath/Goedel-ar.pdf (p. 21)
http://www.owlofminerva.net/files/Goedel_I_Extended.pdf (p. 19)
which can be written
{} > a <-> {} > PfP "a"
if we use '>' for the deduction symbol (turnstile) and '"' for the Goedel
encoding quotes, and simplified without change of meaning to
a <-> > PfP "a"
expressing that 'a' is a theorem if and only if there is a proof of 'a'.
Now, recalling the quotes by Alonzo Church and Peter B. Andrews available at
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-December/msg00018.html
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-December/msg00042.html
in an axiomatic (Hilbert-style) deductive system, the claimed theorem
'proved_iff_proved_PfP' could be either a theorem (a theorem of the object
language) or a metatheorem (a theorem of the mathematical meta-language).
As a trivial example for a theorem of the object language, we shall use
(T & T) = T
usually written
(T & T) = T
as presented in [Andrews, 2002, p. 220 (5211)], with the preceding deduction
symbol (turnstile) in order to express that '(T & T) = T' is a theorem.
In this notation it has, like all theorems of the object language of Q0,
exactly one occurrence of the deduction symbol (turnstile).
Hence, the claimed theorem 'proved_iff_proved_PfP', having two occurrences,
cannot be a theorem of the object language.
As a trivial example for a theorem of the meta-language, we shall use
If H > A and H > A => B, then H > B.
presented in [Andrews, 2002, p. 224 (5224 = Modus Ponens)], expressing that if
there is a proof of A (from the set of hypotheses H) and a proof of A => B
(from H), then there is a proof of B (from H).
Note that this example shows some of the typical formal criteria of a
metatheorem:
1. Multiple occurrences of the deduction symbol (turnstile).
2. Use of syntactical variables (denoted by bold letters in the works of both
Church and Andrews).
3. Use of the informal words "If" and "then" instead of logical symbols in the
meta-language (according to Church's proposal).
It should be emphasized that metatheorems in proofs can always be replaced by
the proof of the concrete theorems (the syntactical or schematic variable
instantiated) when carrying out the proof, such that metatheorems are actually
not necessary (but reveal properties of the object language that help finding
proofs).
In the notation of Isabelle (natural deduction) this metatheorem would be
expressed by
[H > A; H > A => B] --> H > B
and, if we would add subscripts for the language levels, by
[H >1 A; H >1 A => B] -->2 H >1 B
So metatheorems infer from theorems of the object language (language level 1)
to another theorem of the object language, and this relation between theorems
of the object language is expressed at a higher level: the meta-language
(language level 2).
But the claimed theorem 'proved_iff_proved_PfP'
a <-> > PfP "a"
cannot be a metatheorem either, since both ways of dealing with it, either
semantically (subcase a) or syntactically (subcase b), fail.
In the claimed theorem 'proved_iff_proved_PfP'
a <-> > PfP "a"
the right-hand side (PfP "a"), expressing the provability of theorem 'a', is,
by its meaning, itself a metatheorem, not a theorem of the object language, and
we would have some kind of meta-metatheorem like
>1 a <->3 >2 PfP "a"
If we ignore the fact that this meta-metatheorem violates the language level
restrictions and nevertheless proceed further, then from a theorem of the
object language a theorem of the meta-language could be inferred and vice
versa, which would again violate language level restrictions, as for example a
metatheorem would be added to the list of theorems of the object language and
treated as such, leading to a confusion of language levels.
This is, in principle, the construction of the proofs of Andrews and Rautenberg
[cf. Kubota, 2013], in which 'proved_iff_proved_PfP' is used as an implicit
rule, not as a proven theorem/metatheorem. Of course, they both fail also
simply by not providing a proof using syntactical means only.
In Paulson's claimed theorem 'proved_iff_proved_PfP'
a <-> > PfP "a"
the right-hand side (PfP "a") needs to be a well-formed formula.
But the Goedel encoding in Paulson's proof is implemented by the use of means
which are not available in the object language (i.e., in mathematics).
According to Paulson at
http://www.cl.cam.ac.uk/~lp15/papers/Formath/Goedel-ar.pdf (p. 16)
"[i]t is essential to remember that Gödel encodings are terms (having type tm),
not sets or numbers. [...] First, we must define codes for de Bruijn terms and
formulas.
function quot_dbtm :: "dbtm -> tm"
where
"quot_dbtm DBZero = Zero"
[...]"
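The shape of such an encoding function can be sketched in Python. Note that
only the DBZero clause appears in the quoted excerpt; the constructors and the
remaining clauses below are illustrative assumptions of mine, not Paulson's
actual definitions.

```python
from dataclasses import dataclass

# De Bruijn terms of the object calculus (shape loosely follows the quoted
# excerpt; DBEats is the hereditarily-finite-set "insert" constructor).
@dataclass(frozen=True)
class DBZero:
    pass

@dataclass(frozen=True)
class DBVar:
    idx: int

@dataclass(frozen=True)
class DBEats:
    t: object
    u: object

def quot_dbtm(t):
    """Recursively assign a code to a de Bruijn term.  Nested tuples stand
    in for terms of type tm.  Only the DBZero clause is from the quoted
    excerpt; the other clauses are illustrative assumptions."""
    if isinstance(t, DBZero):
        return ("Zero",)
    if isinstance(t, DBVar):
        return ("Var", t.idx)
    return ("Eats", quot_dbtm(t.t), quot_dbtm(t.u))

assert quot_dbtm(DBZero()) == ("Zero",)
```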
Paulson's definition goes beyond the use of purely mathematical means. After
this definition, the proof continues with
lemma quot_Zero: "'Zero' = Zero"
[...]"
But with its purely syntactical means the object language cannot explicitly
reason about its own properties directly.
Propositions in Andrews' logic Q0 have type 'o' (Boolean truth values), and one
could define a function 'foo': o -> o, o -> i, or o -> nat (with nat = (o(oi))
= (i -> o) -> o; natural numbers as "equivalence classes of sets of
individuals" [Andrews, 2002, p. 260]), etc.
But since a type "tm" (for term) does not exist in Q0 [cf. Andrews, 2002, p.
210] or R0, one cannot define a mathematical function 'quot_dbtm': dbtm -> tm.
Of course, there are rules for construing well-formed formulae (wffs), but
these (in R0 hardcoded) rules are used implicitly for construing wffs and are
not part (are not theorems) of the object language itself.
Explicit meta-reasoning as with lemma 'quot_Zero' might extend (and, hence,
violate) the object language, as it actually introduces new rules of inference
to the object language, which again may be considered as a confusion of
language levels.
Type "tm" (for term) is a notion of the (technical) meta-language, but not a
mathematical type. Therefore the function 'quot_dbtm' (type: dbtm -> tm) is not
a mathematical well-formed formula (wff), subsequently the Goedel encoding
function ('" "') and the Goedel encoding of proposition 'a' ('"a"') are not
either, and hence the right-hand side (PfP "a") is not a wff and therefore not
a proposition. Finally,
> PfP "a"
or
PfP "a"
cannot be a theorem, and for this reason the claimed theorem
'proved_iff_proved_PfP'
a <-> > PfP "a"
cannot be a metatheorem.
Obviously the (technical) meta-language and the object language in Isabelle are
not strictly separated, since the type "tm" (for term) is treated as a
mathematical type in the construction of wffs of the object language, which is
not mathematically safe. Mathematically, a proposition has only type 'o'
(Boolean truth values), but not a type "tm" (for term).
All definitions of Q0 are only shorthands for established wffs. In my R0
implementation, a definition label added to a wff is used for input (parsing)
and output (display) only, and remains irrelevant for syntactical inference and
can be removed or replaced at any time. This means that a definition label for
Goedel encodings (in this case the quotation marks) must represent a
mathematical well-formed formula (wff) when used in 'proved_iff_proved_PfP',
which may be a function with a mathematical type as input (domain) type such as
the type of truth values (type: o -> *), but not with types of the
meta-language as input (domain) type (type: dbtm -> *, or tm -> *), as this
violates the rules for construing mathematical wffs [cf. Andrews, 2002, p. 211].
Of course, one could introduce Goedel numbering in order to arithmetize the
object language and reason about the Goedel numbers. But the reasoning would
then be restricted to these Goedel numbers, and there would be no possibility
to relate these Goedel numbers directly to theorems of the object language as
done in the claimed theorem 'proved_iff_proved_PfP'
a <-> > PfP "a"
since the Goedel encodings in Paulson's proof (requiring a type "tm") are not
definable with purely mathematical means of the object language (e.g., in R0).
Since the proposition 'a' has only type 'o' (Boolean truth values), the logical
arithmetic is not stronger than propositional calculus, ruling out Goedel
encodings requiring a type "tm".
The concept of the Goedel encoding function generally violates the type
restrictions for construing mathematical wffs, as with the type of truth values
as input (domain) type there would be only two different Goedel numbers.
As in other claimed proofs, non-mathematical means are used in order to
establish a relation between the object language (proposition 'a') and the
meta-language (its provability: PfP "a") as the translation mechanism between
both language levels necessary for the self-reference part of the antinomy,
since Goedel's antinomy is construed across language levels. Note that the
antinomy seems to cause inconsistency in axiomatic (Hilbert-style) deductive
systems, but not necessarily in natural deduction [cf. Kubota, 2015, p. 14].
References
Andrews, Peter B. (2002), An Introduction to Mathematical Logic and Type
Theory: To Truth Through Proof. Second edition. Dordrecht / Boston / London:
Kluwer Academic Publishers. ISBN 1-4020-0763-9. DOI: 10.1007/978-94-015-9934-4.
Kubota, Ken (2013), On Some Doubts Concerning the Formal Correctness of Gödel's
Incompleteness Theorem. Berlin: Owl of Minerva Press. ISBN 978-3-943334-04-3.
DOI: 10.4444/100.101. See: http://dx.doi.org/10.4444/100.101
Kubota, Ken (2015), Gödel Revisited. Some More Doubts Concerning the Formal
Correctness of Gödel's Incompleteness Theorem. Berlin: Owl of Minerva Press.
DOI: 10.4444/100.102. See: http://dx.doi.org/10.4444/100.102
____________________
Ken Kubota
doi: 10.4444/100
http://dx.doi.org/10.4444/100
Gottfried Barrow
2015-12-28 11:41:29 UTC
Greetings,

There's a very narrow purpose to this email. I accomplish that purpose
by what I say in the first few sentences, which is not intended to
create any debate.

Picking a nice number, 50, I'll say that at least 50% of what I've said
in the past, on technical matters, has been wrong. But taking a position
can be the first step in working out ideas, though one person working
out an idea can be everybody else getting spammed.

While not technically wrong, since close enough can be good enough,
"monolithic logic" wasn't the right choice. A better choice would be "black
box logic". It's based on a hardware perspective, which I believe is the
correct "ultimate view of things" for mechanized proofs. Hex was my
friend when looking at address and data buses with a logic analyzer.
There's Wikipedia, even when no one needs it:

https://en.wikipedia.org/wiki/Black_box

Related to my "minimal black box interpretation", for an overall view,
is my "minimal syntactic interpretation", for logic as strings of
characters. For things like this:

term "THE x. x \<in> {}" (*Do not think beyond the string.*)

lemma "EX! y. y = (THE x. x \<in> {})" (*Totalitarianism. HOL is a tyrant.*)
by(rule HOL.ex1_eq(1))

Asking no one in particular, I ask and say, "Do you know how annoying it
is to read Isabelle list emails on the web archive? Graphical symbols,
for \<Rightarrow> and \<Longrightarrow>, get converted to funny looking
'a' symbols."

There is always this open question for me: Do I know enough to do a
major disconnect? Being connected can result in more sophistication, but
there also should be a fixed foundation. People have been working off of
ZFC, for how many years?

Here's a polymorphic vector:

datatype ('a,'size) vD = vC (vG: "'a list")

The problem of length reflects the general problem of HOL functions
being total, and that domains aren't easily restricted, by defining new
types. What I have in mind is fixing the problem, in general, by working
in null space, which will get lifted to option space. You heard it here
first.

The sophistication comes from its use with 'CARD(n)', thanks to a
pointer by Manuel Eberl.

I do pull from 'src/HOL/Library', but I stick it all in one file. 'CARD'
is thanks to 'Phantom_Type' by Andreas Lochbihler, 'Cardinality' by
Brian Huffman & Andreas Lochbihler, and 'Numeral_Type' by Brian Huffman.
Lars Noschinski deserves thanks for some other things.

With the new datatype, thanks to Blanchette and Traytel, and whoever
else I'm missing, I guess Popescu, there are so many possibilities they
become uninteresting, due to the sheer number of them.

The 'datatype' keyword itself encapsulates what can be a beautiful form
of math, a statement so devoid of detail, it's hard to get the meaning.

The vector is above. Here, possibly, are nested countable and nested
finite sets:

datatype 'a csetD = Atom 'a | csetC "'a csetD cset"

datatype 'a fsetD = Atom 'a | fsetC "'a fsetD fset"

Here is a possible generalization of nested multisets (using lists):

datatype 'a msetD = Atom "'a * nat" | msetD "'a msetD list"

Here is what I'm really interested in. Nested sum sets and factor sets
combined:

datatype 'a sumfacD = Atom 'a | sumC "'a sumfacD list" | facC "'a sumfacD list"

It uses lists, but why use sets when I can use lists with an equivalence
relation, which may lead back to using sets. I don't care about sets
anymore, unless I need to. I care about lists, to be used as finite sets.

What's it all worth? Nothing much in an email. The definitions might
need to be tweaked, and the devil is in the details.

Consider a quote from Carl Friedrich Gauss:

I confess that Fermat's Theorem as an isolated proposition has very little
interest for me, because I could easily lay down a multitude of such
propositions, which one could neither prove nor dispose of.

http://www-groups.dcs.st-and.ac.uk/history/Quotations/Gauss.html

Finishing up unfinished business while the gettin' is good, to put it in
amongst a collective gettin'.

Regards,
GB