TULIPS – The Utrecht Logic in Progress Series

Archive

Here we archive past talks (abstracts, handouts, links, …).

2023

Logic, Logicality and Arithmetical Determinacy

Tarski’s model-theoretic definition of logical consequence relies on a division of the expressions of the language under consideration into logical and non-logical. Finding a philosophically motivated and mathematically precise criterion to delineate the logical expressions of a language constitutes the demarcation problem of the logical constants.

In this talk, we propose a novel criterion of logicality designed to address the demarcation problem. It combines insights from the model-theoretic as well as the inferentialist approaches to meaning. According to it, a notion qualifies as logical if its inferential behaviour determines a unique denotation among logically acceptable interpretations.

Time: 15.30 – 17.00

Location: Stijlkamer (Janskerkhof 13, room 0.06)

Partial Combinatory Algebras, between realizability and oracle computation

Abstract: Starting with a bit of history of the notion of PCAs (or “Schönfinkel Algebras”), we examine their role as building blocks of various notions of realizability. An extremely important construction is that of the “realizability topos RT(A)” on a PCA A. Immediately, notions from pure topos theory start to find their way into the study of PCAs. It will be argued that PCAs stand on the threshold between topos theory and abstract higher order computability theory (à la Longley-Normann). Particularly forceful is the work of Takeyuki Kihara.

Time: 16.00 – 17.30

Location: Stijlkamer (Janskerkhof 13, room 0.06)

 

The Hyperintensional Kaplan’s Paradox

David Kaplan famously argued that mainstream semantics for modal logic, which identifies propositions with sets of possible worlds, is affected by a cardinality paradox. A new version of the paradox arises for every logical framework that validates an intuitive principle, when the semantics is extended with impossible worlds to deliver a hyperintensional account of propositions. I discuss two general approaches to a solution: giving up on sets and giving up on worlds, either in the semantics or in the corresponding picture of propositions. I argue that a truthmaker-based approach offers the most promising way to account for hyperintensional propositions, without facing the paradoxical outcome.
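The cardinality core of Kaplan’s original argument can be sketched as follows; the exact modal principle varies across presentations, and this is one standard rendering rather than the talk’s own formulation.

```latex
Kaplan's principle: every proposition could be the unique object of some
attitude or operation $Q$ (e.g.\ being queried):
\[
  \forall p\,\Diamond Q(p).
\]
Identify propositions with subsets of the set of worlds $W$, so there are
$|\mathcal{P}(W)|$ of them. If at each world at most one proposition is
$Q$'d, each proposition needs its own witnessing world, giving an
injection $\mathcal{P}(W) \to W$ and hence
\[
  |\mathcal{P}(W)| \le |W|,
\]
contradicting Cantor's theorem $|\mathcal{P}(W)| > |W|$.
```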

Time: 15.30 – 17.00

Location: Stijlkamer (Janskerkhof 13, room 0.06)

 

Deontic Game Multiplication, Reduction Theorems, and Acceptability Voids

Deontic games are normal form games equipped with a classification of action profiles into those that are acceptable and those that are not. These games have been used in past work to tackle various questions concerning individual and group obligation, group plans and coordination, and methodological individualism. This paper is the first to develop a combinatorial perspective on such games. In particular, we define three ways to combine any two deontic games, resulting in a new game that represents the simultaneous play of both original games. We then study the relation between the deontic status of group actions in the new game and the deontic status of the corresponding sub-actions in the original games, establishing a number of fundamental reduction theorems. From these, we derive several corollaries concerning the existence of acceptability voids in interaction scenarios.
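The abstract does not spell out the three combination operations, so the following is a purely illustrative sketch of one natural candidate: players play both games simultaneously, and a combined profile counts as acceptable iff both of its component profiles are. The data representation and the acceptability rule are assumptions, not the paper’s definitions.

```python
from itertools import product

def combine(game1, game2):
    """Hypothetical 'conjunctive' product of two deontic games.

    A deontic game is modelled as a pair (actions, acceptable): a dict
    mapping each player to her available actions, plus the set of
    acceptable action profiles (tuples ordered by player). In the
    combined game each player picks one action per original game, and a
    combined profile is acceptable iff both component profiles are
    acceptable in their respective games.
    """
    actions1, acc1 = game1
    actions2, acc2 = game2
    players = list(actions1)  # assume the same players in both games
    actions = {p: list(product(actions1[p], actions2[p])) for p in players}
    acceptable = set()
    for profile in product(*(actions[p] for p in players)):
        sub1 = tuple(a for a, _ in profile)  # projection onto game 1
        sub2 = tuple(b for _, b in profile)  # projection onto game 2
        if sub1 in acc1 and sub2 in acc2:
            acceptable.add(profile)
    return actions, acceptable
```

On this rule, an acceptability void (no acceptable profile at all) in either component game immediately induces one in the combined game, which is the flavour of corollary the abstract mentions.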

Time: 15.30 – 17.00

Location: Ravensteynzaal (Kromme Nieuwegracht 80, room 1.06)

A video recording of this talk is available on MS Teams, in the channel ‘Frederik van de Putte’ of the TULIPS team. If you would like to become a guest of the team, let us know.

2022

Derivative normativity and logical pluralism

In a recent article, Gillian Russell (2020) claims that logic isn’t normative: according to her, the usual bridge principles are just derived from general principles for truth and falsity, such as “believe the truth” or “avoid falsity”. For example, we should believe tautologies just because we should believe the truth. Russell argues that this rejection of logical normativity can avoid the collapse objection for logical pluralism. In a different paper, Russell and Christopher Blake-Turner (2021) defend a telic pluralism, where different logics may correspond to different epistemic aims, without assuming that logic is normative.

In a response to that paper, Stei (2020) claims that even if logic is normative in this weak derivative sense, the collapse objection re-emerges. His main point is that the collapse argument can still work even if the bridge principles are derivative (they just need to be true).

In this talk I will argue against Stei’s point. I will show that there is a possible strategy which maintains the derivative normativity of logic and provides a non-trivial logical pluralism. My proposal is different from telic pluralism, for it does not depend on the epistemic aims of the agent. The key to my approach is the possibility of having different normative sources for different logics. I will argue that the distinction between classical and relevant logic can be understood in this way.

References

Blake-Turner, C. and G. Russell, “Logical pluralism without the normativity”, Synthese 198, 2021, pp. 4859-4877.

Russell, G. “Logic isn’t normative”, Inquiry 63(3-4), 2020, pp. 371-388.

Stei, E. “Non-normative logical pluralism and the revenge of the normativity objection”, Philosophical Quarterly 70 (278), 2020, pp. 162–177.

Time: 15.30 – 17.00

Location: Ravensteynzaal (Kromme Nieuwegracht 80, room 1.06).

 

The Computational Complexity of Arimaa

Arimaa is a two-player strategy game played on an 8 x 8 board. Like chess, checkers, and Go, it is a perfect-information, deterministic, turn-based game in which there is at most one winner. Arimaa has been a focus of much attention in applied AI research, as it was deliberately designed to be difficult for computers to play.

This talk considers the more theoretical question: “Given a position in Arimaa and a specified player, does that player have a winning strategy?” As with chess and checkers, this question is trivially solvable in constant time if the game is played on a standard 8 x 8 board (albeit for an impractically large constant). Accordingly, we focus instead on the question for the generalized case of an n x n board. For the aforementioned strategy games, the corresponding question is known to be PSPACE-hard. That is, solving (n x n) chess, checkers, etc. is at least as complex as any computational task in PSPACE, the class of problems solvable by Turing machines using an amount of space (memory) that grows as a polynomial function of the input size n. We prove in this talk that the same is true for Arimaa. Thus Arimaa is about as computationally complex as these other (classic) games. We prove this result by reduction from the known PSPACE-hard problem Generalized Geography, adapting a technique of Storer’s in his 1983 paper originally showing the PSPACE-hardness of (n x n) chess.
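For readers unfamiliar with the reduction source: in Generalized Geography, two players take turns moving a token along the edges of a directed graph to a node not visited before, and the player who cannot move loses. A brute-force sketch of the winning-strategy question (exponential time; the point of PSPACE-hardness is precisely that no polynomial-time algorithm is expected):

```python
def gg_wins(graph, node, visited=None):
    """True iff the player to move from `node` has a winning strategy in
    Generalized Geography on the directed graph `graph` (a dict mapping
    each node to its list of successors). A move goes to a node not yet
    visited; a player with no legal move loses. Exponential brute force,
    for tiny illustrative instances only."""
    visited = (visited or set()) | {node}
    return any(
        succ not in visited and not gg_wins(graph, succ, visited)
        for succ in graph.get(node, [])
    )
```

For example, on a single edge the first player wins by moving and stranding the opponent, while on a directed 3-cycle the first player is the one who eventually gets stuck.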

(Joint work with A. Schipper.)

Time: 15.30 – 17.00

Location: Stijlkamer (Janskerkhof 13, room 0.06).

 

A Theory of Definite Descriptions

According to Russell, a definite description is an expression of the form ‘The F’. Definite descriptions are incomplete symbols that mean nothing in isolation, and only in the context of complete sentences of the form ‘The F is G’, from which they disappear upon analysis. ‘The F is G’ is equivalent to ‘There is exactly one F and it is G’. For Russell, if there is no F, then any sentence of this form is false. Free logicians argue, against Russell, that such sentences can sometimes be true. In free logic, definite descriptions are generally treated as genuine singular terms. But Russell surely had a point that we can draw distinctions of scope, for instance between ‘The F is not G’ and ‘It is not the case that the F is G’. This would require a means of introducing scope distinctions into the logic, which in free logic is generally not done. In my talk I will present a sequent calculus and a dual domain semantics for a theory of definite descriptions in which these expressions are formalised in the context of complete sentences by a binary quantifier I. I forms a formula from two formulas. Ix[F, G] means ‘The F is G’. This approach has the advantage of incorporating scope distinctions directly into the notation. Cut elimination is proved for a system of classical positive free logic with I and it is shown to be sound and complete for the semantics. The system has a number of novel features and is briefly compared to the usual approach of formalising ‘the F’ by a term forming operator in free logic. It does not coincide with Hintikka’s and Lambert’s preferred theories, but the divergence is well-motivated and attractive. Viewed from the perspective of proof-theoretic semantics, the system lets us decide various questions concerning the logic of definite descriptions in a principled way.
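The Russellian truth conditions for Ix[F, G], and the scope distinction just mentioned, can be illustrated with a small evaluation sketch over a finite domain (the helper function is hypothetical and is of course not the sequent calculus of the talk):

```python
def the(F, G, domain):
    """Russellian truth conditions for Ix[F, G] ('The F is G') over a
    finite domain: true iff exactly one object satisfies F, and that
    object also satisfies G. F and G are predicates (callables to bool)."""
    Fs = [x for x in domain if F(x)]
    return len(Fs) == 1 and G(Fs[0])
```

With an empty extension for F, the inner-negation reading ‘The F is not G’ comes out false, while the outer-negation reading ‘It is not the case that the F is G’ comes out true, which is exactly the kind of distinction the binary-quantifier notation is designed to capture.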

Time: 16.00 – 17.15

Location: Ravensteynzaal (Kromme Nieuwegracht 80, room 1.06).

 

  • April 12, Andrew Tedder (Bochum)

The Algebraic Structure of Mares-Goldblatt Models

Mares and Goldblatt have introduced a general framework for providing frame-based semantics for which quantified relevant logics are complete (avoiding the incompleteness, noted by Fine, of these logics w.r.t. the simple ‘constant domain’ extensions of their usual frame-based semantics). In this talk, I’ll generalise and abstract away some of the frame-specific machinery of the Mares-Goldblatt framework in order to get a clearer idea of their algebraic structure. In so doing, I’ll consider issues relating to general completeness proofs for quantified extensions of logics complete w.r.t. certain classes of algebras (especially gaggles, following Dunn), mention some connections to related avenues of research in the model theory of quantified non-classical logics, and suggest some potential applications of the resulting framework.

Time: 16.00 – 17.15

Location: Ravensteynzaal (Kromme Nieuwegracht 80, room 1.06).

 

The first steps in cut-restriction

Cut-elimination is a fundamental result of proof theory. It is the algorithm for eliminating cuts from a sequent calculus proof, leading to cut-free calculi and applications. Although the algorithm is remarkably robust, in the sense that it applies with few amendments to many logics (irrespective of their semantics), it nevertheless fails for most logics. Its importance is such that the invariable response is a move from the sequent calculus to richer proof formalisms in a bid to regain cut-elimination. In this talk I will discuss the first steps in a radically different approach called cut-restriction: identifying how to adapt Gentzen’s age-old cut-elimination arguments to restrict the shape of the cut-formulas when elimination is not possible. I will present the “first level above cut-free”: restricting the standard sequent calculi for bi-intuitionistic logic and S5 to analytic cuts. These are two logics where (analytic) cuts are known to be essential.

This is joint work with Agata Ciabattoni (TU Wien) and Timo Lang (UCL).

Time: 16.00 – 17.15

Location: Online.

 

Generalized Scott-Montague Semantics for Hyperintensional Modal Logics

In Montague–Scott semantics, the modal box operator represents a property of sentential intensions, that is, propositions or sets of possible worlds. It follows that all logics modelled by this kind of semantics are intensional: if A and B necessarily express the same proposition, then so do Box A and Box B. This is objectionable on many readings of Box, notorious examples being propositional attitudes. A more realistic formalization of such examples, it is argued, calls for hyperintensional modal logics, where Box A and Box B are not necessarily equivalent even for necessarily equivalent A and B.
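As a minimal sketch of the kind of semantics at issue (the representation choices are assumptions): in a neighborhood model, evaluating Box at a world is just a membership test on the proposition expressed, so two sentences expressing the same set of worlds can never be told apart.

```python
def box_true(world, prop, neighborhoods):
    """Montague-Scott (neighborhood) evaluation: Box A is true at `world`
    iff the proposition expressed by A, here a frozenset of worlds,
    belongs to the neighborhood set N(world). Intensionality is
    immediate: necessarily equivalent A and B express the same frozenset,
    so Box A and Box B always receive the same value."""
    return prop in neighborhoods[world]
```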

Different semantic frameworks for hyperintensional modal logics have been formulated. In a recent paper (Hyperintensional logics for everyone. Synthese 198, 933-956, 2021), I have introduced a generalization of Montague–Scott semantics in which the Box operator expresses properties of “semantic contents” which in turn determine propositions in the standard sense. The nature of these contents is left undetermined, which renders the semantic framework rather general.

In the first part of the talk, I will outline the generalized Scott–Montague semantic framework and I will show that many well-known semantics for hyperintensional modalities are special cases. In the second part of the talk, I outline recent joint work with M. Pascucci in which we use the framework to give a semantics for some weak modal logics.

Time: 16.00 – 17.15

Location: Online.

 

Default justification logic as a logical theory of arguments

In the 1980s, John Pollock’s work on defeasible reasons started the quest in the AI community for a formal system of defeasible argumentation. My goal in this talk is to present a logic of structured defeasible argumentation using the language of justification logic. One of the key features absent from standard justification logics is the possibility of weighing different epistemic reasons that might conflict with one another. To remedy this, we develop a semantics for “defeaters”: conflicting reasons forming a basis to doubt the original conclusion or to believe an opposite statement. Formally, non-monotonicity of reasons is introduced through default rules with justification logic formulas.

Justification logic formulas are then interpreted as arguments and their acceptance semantics is given in analogy to Dung’s abstract argumentation framework semantics. In contrast to argumentation frameworks, determining arguments’ acceptance in default justification logic simply turns into finding (non-monotonic) logical consequences from a starting theory with justification assertions. We can show that a large subclass of Dung’s frameworks is a special case of default justification logic. By the end of the talk, I will briefly connect this logic with Pollock’s original motivation to deal with the justified-true-belief definition of knowledge.

Time: 16.00 – 17.15

Location: Online.

A video recording of this talk is available on MS Teams, in the channel ‘Stipe Pandzic’ of the TULIPS team. If you would like to become a guest of the team, let us know.

 

The form of proofs and structure of logics

In this TULIPS talk, I would like to provide a high-level overview of the work of my PhD under the supervision of Rosalie Iemhoff and Nick Bezhanishvili. I am studying two topics in the field of proof theory: uniform interpolation and admissible rules. I apply both topics to (intuitionistic) modal logics.

The first is used to describe the form of proof systems in terms of so-called negative results: if a logic does not have uniform interpolation, then it cannot have a sequent calculus with certain nice properties. The aim of my research is to widen the scope to proof formalisms such as nested sequents. The second part is a study of the structure of logics. The admissible rules are those rules that you can add to a logic without changing its set of theorems. One of my main results is a characterization of the admissible rules of six interesting intuitionistic modal logics.

Time: 16.00 – 17.15

Location: Online.

 

2021

An Inferentialist Redundancy Theory of Truth

I present a “fully schematic”, proof-theoretic account of higher-order logic. The framework allows for the explicit definition of truth predicates for arbitrarily strong theories formulated in the framework. Thus, arguably, for any reasonable language, a truth predicate is explicitly definable with purely logical, proof-theoretic means. Enough “truth” should thus be available to the inferentialist to do the theoretical work that opponents claim is wanting. Furthermore, deflationists claim that the truth-predicate does not express a substantive property, but is only required to express generalizations (and such like). On the account presented here, truth-predicates are strictly purely logical, and indeed eliminable since they are logically definable. Deflationism appears to collapse into a redundancy theory of truth.

Time: 16.00 – 17.15

Location: Online.

 

Responsibility in multi-agent teams

Responsibility for team plan failures and other events arising in the context of multi-agent interaction is an interesting concept to model formally. I will talk about previous joint work with Joseph Halpern and Brian Logan on modelling responsibility for the failure of team plans in causal models, and about ongoing work on expressing the same notion in logics of strategic ability.

Time: 16.00 – 17.15

Location: Online.

 

Priority Merge and Intersection: from group attitudes to collective truth-tracking

This talk compares the well-known notion of pooling through intersection, used to define, for instance, distributed knowledge, with the less standard lexicographic merge. We first introduce a (non-hybrid) modal framework capturing both notions on plausibility models. We then compare the logical behavior of the corresponding group attitudes modalities. Finally, we adopt a formal learning theoretic perspective to shed some light on how the two notions behave with respect to collective truth-tracking.
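A toy sketch of the two pooling operations on plausibility relations, represented as sets of pairs (w, v) read as ‘w is at least as plausible as v’. The formulation of priority merge below is one common version and may differ in detail from the paper’s definitions.

```python
def intersect(r1, r2):
    """Pooling through intersection: w is at least as plausible as v for
    the group iff it is so for every member."""
    return r1 & r2

def lex_merge(r1, r2):
    """Lexicographic (priority) merge of two plausibility relations:
    w is group-preferred to v if agent 1 strictly prefers w, or agent 1
    is indifferent and agent 2 finds w at least as plausible as v."""
    strict1 = {(w, v) for (w, v) in r1 if (v, w) not in r1}
    tie1 = {(w, v) for (w, v) in r1 if (v, w) in r1}
    return strict1 | {(w, v) for (w, v) in tie1 if (w, v) in r2}
```

On opposed strict preferences the two notions come apart: intersection leaves the worlds incomparable, while the lexicographic merge sides with the prioritized agent.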

The logic part of the talk is based on joint work with Olivier Roy and Norbert Gratzl (Review of Symbolic Logic, 2021), and the formal learning theoretic part is based on ongoing joint work with Nina Gierasimczuk and Thomas Skafte.

Time: 16.00 – 17.15

Location: Ravensteynzaal (Kromme Nieuwegracht 80, room 1.06).

 

Subject-Matter and Nested Topic-Sensitive Intentional Modals

The framework of topic-sensitive intentional modal operators (TSIMs) described by Berto provides a general platform for representing agents’ intentional states of various kinds. For example, a TSIM can model doxastic states, capturing a notion that given the acceptance of antecedent information P, an agent will have a consequent belief Q. Notably, the truth conditions for TSIMs include a subject-matter filter, so that the topic of the consequent Q must be “included” within that of the antecedent P. In practice, the types of intentional states modeled by TSIMs can be nested within one another: one can have beliefs about beliefs, one can imagine that one knows, etc. The device of the subject-matter filter, however, means that modeling the semantics of such nested TSIMs requires an adequate account of the subject-matter of sentences in which a TSIM appears. In this talk, I will discuss extending earlier work on the subject-matter of intensional conditionals to the special case of TSIMs and argue that the account provided is adequate both formally and philosophically.

Time: 16.00 – 17.15

Location: Online.

 

Ecumenical modal logic 

Recent work on ecumenical systems, where connectives from classical and intuitionistic logics can coexist in peace, has revived the discussion of proof systems for combining logics, called ecumenical systems by Prawitz and others.

In Prawitz’ system, the classical logician and the intuitionistic logician would share the universal quantifier, conjunction, negation, and the constant for the absurd, but they would each have their own existential quantifier, disjunction, and implication, with different meanings.

We extended this discussion to alethic K-modalities: using Simpson’s meta-logical characterization, necessity is shown to be independent of the viewer, while possibility can be either intuitionistic or classical.

We furthermore proposed an internal and pure calculus for ecumenical modalities, where every basic object of the calculus can be read as a formula in the language of the ecumenical modal logic.

(joint work with Elaine Pimentel, Luiz Carlos Pereira, and Emerson Sales, partially published in the proceedings of Dali’20 and WoLLiC’21)

Time: 16.00 – 17.15

Location: Online.

 

Worlds far apart: From Kripke Models to Dynamical Systems

In this talk, we explore the universe of iterated information updates on a logical model. In particular, we show how iterated updates can be understood as a discrete-time dynamical system.

In the first part of this talk, we show how to define distances between pointed Kripke models. These distances can be tailored to the modeler’s needs, putting much weight on propositions that are important for the purpose at hand and discounting those that are not. We then explore the resulting topological spaces, characterize compactness, and determine when different metrics lead to the same topology.
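One very simple instance of such a tailored distance, for illustration only (the talk’s metrics are richer): fix a weighted set of formulas and sum the weights of those on which two pointed models disagree.

```python
def formula_distance(eval1, eval2, weights):
    """Weighted disagreement distance between two pointed Kripke models.
    eval1/eval2 map each formula (any hashable label) to its truth value
    at the respective pointed model; `weights` maps each formula to a
    nonnegative real expressing how much the modeler cares about it."""
    return sum(w for f, w in weights.items() if eval1[f] != eval2[f])
```

Downweighting unimportant propositions, as the abstract describes, corresponds to giving them small (or zero) weights here.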

In the second part, we show two applications of the metrics defined. First, our metrics allow us to classify which sequences of Kripke models converge in the limit. With this in hand, we can classify when a social protocol reaches its aim, and whether this can be reached in finite time, in the limit, or not at all. As a second application, we show that logical product updates with postconditions are continuous in the topology defined. Hence, the space of Kripke models together with iterated product updates forms a discrete-time dynamical system. Time permitting, we will also show that the resulting dynamical system can display non-trivial limit behavior. This is joint work with Rasmus Rendsvig.

Time: 16.00 – 17.30

Location: Online

 

Transparent quantification into hyperpropositional contexts de dicto (joint work with Mari Duží)

We show how to validly quantify into hyperpropositional contexts de dicto in Transparent Intensional Logic. Hyperpropositions are sentential meanings and attitude complements individuated more finely than up to logical equivalence. A hyperpropositional context de dicto is a context in which only co-hyperintensional propositions can be validly substituted. A de dicto attitude ascription is one that preserves the attributee’s perspective when one complement is substituted for another. Being an extensional logic of hyperintensions, Transparent Intensional Logic validates all the rules of extensional logic, including existential quantification. Yet the rules become more exacting when applied to hyperintensional contexts. The rules apply to only some types of entities, because the existence of only some types of entities is entailed. The insight that the paper offers is how a particular logic of hyperintensions is capable of validating quantifying-in in a principled and rigorous manner. This result advances the community-wide understanding of how to logically manipulate hyperintensions.

Time: 16.00 – 17.15

Location: Sweelinckzaal (Drift 21, room 005)

 

Metaphysical Structure

Metaphysical structure is the way things hang together, in and of themselves, and aside from their causes and effects and propensities to behave. Examples include: truth depending on reality, the mind depending on the brain, sets depending on their members, disjunctions depending on their disjuncts, wholes depending on their parts, types being realised by their tokens, determinables being determined by their determinates. These might all be understood as cases of grounding – or rather, they might if we understood what grounding is. In this talk, I investigate parallels between metaphysical construction and familiar logical operators. First, there’s a link between composition (of parts into a whole) and conjunction. Second, I argue, there’s a link between some familiar metaphysical relations and disjunction. On the picture that emerges, metaphysical structure may be understood as logical structure, whilst remaining a genuinely mind-, concept-, and language-independent feature of reality.

Time: 16.00 – 17.15

Location: Online

 

Actualizing Distributed Knowledge in Bounded Groups

The idealizations resulting from the use of Kripke semantics in Epistemic Logic are inherited by formalizations of group epistemic notions. For example, distributed knowledge is often taken to reflect the potential knowledge of a group: what agents would know if they had unbounded means of communication and deductive ability. However, this does not specify whether/how this potential can be actualized, especially since real people are not unbounded reasoners. Inspired by experiments on group reasoning, we identify two dimensions of actualizing distributed knowledge: communication and inference. We build a dynamic framework with effortful actions accounting for both, combining impossible-worlds semantics and action models, and we provide a method for extracting a sound and complete axiomatization.
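The standard Kripke-semantics definition of distributed knowledge, with pooling as intersection of the members’ accessibility relations, can be sketched as follows (the representation choices are mine, not the paper’s):

```python
def distributed_knows(relations, world, fact_worlds):
    """Standard possible-worlds definition of distributed knowledge:
    the group distributedly knows a fact at `world` iff the fact holds
    at every world accessible via the INTERSECTION of the members'
    accessibility relations. `relations` is a list of sets of (w, v)
    pairs; `fact_worlds` is the set of worlds where the fact is true."""
    pooled = set.intersection(*relations)
    reachable = {v for (w, v) in pooled if w == world}
    return reachable <= fact_worlds
```

In the test case below, each agent alone leaves open one extra world, yet the intersection of their relations leaves only the actual world, so the group distributedly knows a fact that no member knows individually; the talk’s question is how bounded agents can actually attain such knowledge.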

Time: 16.00 – 17.15

Location: Online.

 

Classicality and the ST approach to paradoxes

This talk provides an overview of the ST approach to paradoxes detailed and defended in CERvR (2012) and CERvR (2013). The ST approach has a remarkable feature: it gives you (in a certain sense) classical logic for a classical vocabulary, but it doesn’t lead you to absurdities in the presence of tolerance principles or a truth-predicate. It follows that ST cannot be fully classical. Now, explaining exactly what ‘full classicality’ means is much trickier than it might appear at first glance (cf. Barrio, Pailos and Szmuc (2020) and related papers). The non-classicality of ST can be investigated by looking at the validity of metainferences. Alternatively, it can be investigated by looking at inferences, but in relation to properties other than validity. There is a correspondence between these two approaches (this is stated, or at least indicated, in Cobreros, La Rosa and Tranchini (2020)).

References:

Barrio, E. A., Pailos, F., & Szmuc, D. (2020). A hierarchy of classical and paraconsistent logics. JPL, 49(1): 93-120.
CERvR [Cobreros, P., Egré, P., Ripley D., and van Rooij R.] (2012), Tolerant, Classical, Strict, JPL 41(2): 347-385.
CERvR [Cobreros, P., Egré, P., Ripley D., and van Rooij R.] (2013), Reaching transparent truth, Mind 122: 841-866.

Cobreros, P., La Rosa, E. and Tranchini, L. (2020) (I Can’t Get No) Antisatisfaction, Synthese https://rdcu.be/ciJmi

Time: 16.00 – 17.15

Location: Online

 

Complete Proof Systems for Parikh’s Game Logic

Game logic was introduced by Rohit Parikh in the 1980s as a generalisation of propositional dynamic logic (PDL) for reasoning about outcomes that players can force in determined 2-player games. Semantically, the generalisation from programs to games is mirrored by moving from Kripke models to monotone neighbourhood models. Parikh proposed a natural PDL-style Hilbert system which was easily proved to be sound, but its completeness remained an open problem.

In our LICS 2019 paper, we introduce a cut-free sequent calculus for game logic, and two cut-free sequent calculi that manipulate annotated formulas: one for game logic and one for the monotone μ-calculus, the variant of the polymodal μ-calculus where the semantics is given by monotone neighbourhood models instead of Kripke structures. We show that these systems are sound and complete, and that completeness of Parikh’s axiomatization follows. Our approach builds on ideas and results by Afshari & Leigh (LICS 2017) in that we obtain completeness via a sequence of proof transformations between the systems. A crucial ingredient is a validity-preserving translation from game logic to the monotone μ-calculus.

Joint work with Sebastian Enqvist, Johannes Marti, Clemens Kupke and Yde Venema.

Time: 16.00 – 17.15

Location: Online

 

Linear bi-intuitionistic calculus

In this talk I will give an overview of the logic bi-LC, obtained by adding the linearity (Goedel-Dummett) axiom to the bi-intuitionistic calculus bi-IPC. I will discuss some important properties of this logic and highlight the many differences from the Goedel-Dummett intermediate logic LC. I will also mention some open problems about bi-LC and related systems.

Time: 16.00 – 17.15

Location: Online

 

Canonical Formulas for the Lax Logic

In this talk, we will investigate the method of canonical formulas for propositional lax logic. We construct “lax canonical formulas” using the local finiteness of the variety of nuclear implicative semilattices. These formulas provide a uniform axiomatization method for lax logics. Then we extend generalized Esakia duality to account for nuclear implicative semilattice homomorphisms between nuclear Heyting algebras, and explore the feasibility of lax analogues of subframe logics. This leads to the introduction of steady logics: logics characterized by generalized lax canonical formulas that only partially encode the modal structure of nuclear Heyting algebras. Finally, we use these generalized lax canonical formulas to obtain a number of preservation theorems; for example, we will prove that for each Kripke-complete intermediate logic, the least lax logic containing it is also Kripke-complete.

Time: 16.00 – 17.15

Location: Online

 

Recapture Results and Classical Logic

The discussion of whether non-classical approaches to the semantic paradoxes can recover classical reasoning for non-problematic statements (what is sometimes referred to in the literature as ‘classical recapture’) is of the utmost importance for non-classical logicians. To show that classical logic can be recovered in certain contexts, and thus bolster the case for the suitability of their favored logic, these theorists sometimes establish so-called ‘recapture results’.

However, in a recent paper Jack Woods has argued that there is a problem with this strategy: in proving recapture results, non-classical theorists typically employ classical principles that they themselves reject. In view of this, one can conclude that these theorists are not entitled to the results, given that, presumably, the results are not available unless classical logic is used. What non-classical logicians need to show, according to Woods, is that, by the standards of their own logic, classical recapture is possible. In this talk I will try to resist Woods’ objection. In particular, I will suggest that there is an ambiguity in his line of argument, one that can be made visible only if one focuses on the specific instances of the classical principles that are needed to obtain recapture results. Once the ambiguity is removed, one can see that there is no problem with the recapture strategy.

Time: 16.00 – 17.15

Location: Online

 

Disjunction properties in modal proof complexity

When studying the proof complexity of classical logic, one of the basic tools is feasible interpolation, which gives a way to transform circuit lower bounds into length-of-proof lower bounds. A similar role in the proof complexity of modal logics is played by the feasible disjunction property, which is at the core of known lower bounds on modal Frege and extended Frege (EF) proof systems.

In this talk, we will discuss the feasibility of variants of the disjunction property (and of more general extension rules) in some modal logics, and its application to the length-of-proof separation of EF from SF (substitution Frege) proof systems for modal logics of unbounded branching. Finally, we will have a look at the complexity of the disjunction property in basic logics of bounded branching.

Time: 16.00 – 17.15

Location: Online

 

Explanatory proofs (or grounding proofs): philosophical framework, core ideas and results

In this talk we explore the concept of explanatory proofs, namely proofs that not only show that a theorem is true, but also provide reasons why it is true. After a philosophical introduction to this kind of proof, we provide a characterization and a formalization. We conclude the talk by offering a number of results and directions for future research to which explanatory proofs lead.

Time: 16.00 – 17.15

Location: Online

 

  • January 12, Raheleh Jalali (Utrecht)

On Hard Theorems

Given a proof system, how can we specify the “hardness” of its theorems? One way to tackle this problem is to take the length of proofs as the corresponding hardness measure. Following this route, we call a theorem hard when even its shortest proof in the system is “long” in a certain formal sense. Finding hard theorems in proof systems for classical logic has been an open problem for a long time. In recent years, however, as significant progress, many super-intuitionistic and modal logics have been shown to have hard theorems.

In this talk, we will extend the aforementioned result to also cover a variety of weaker logics in the substructural realm. We show that there are theorems in the usual calculi for substructural logics that are hard even for the intuitionistic systems. More precisely, we provide a sequence of substructural tautologies {A_n}_{n=1}^{\infty} such that even their shortest intuitionistic proofs are exponentially long in the length of A_n.

Time: 16.00 – 17.15

Location: Online.

2020

  • December 22, Edi Pavlovic (Helsinki) [joint work with Norbert Gratzl]

A More Unified Approach to Free Logics

Free logics are a family of first-order logics that came about as a result of examining the existence assumptions of classical logic. What those assumptions are varies, but the central ones are that (i) the domain of interpretation is not empty, (ii) every name denotes exactly one object in the domain and (iii) the quantifiers have existential import. Free logics usually reject the claim in (ii) that names need to denote, and of the systems considered in this paper, positive free logic concedes that some atomic formulas containing non-denoting names (including self-identity) are true, while negative free logic rejects even this claim.

These logics have complex and varied axiomatizations and semantics, and the goal of the present work is to offer an orderly examination of the various systems and their mutual relations. This is done by first offering a formalization, using sequent calculi which possess all the desired structural properties of a good proof system, including admissibility of contraction and cut, while streamlining free logics in a way no other approach has. We then present a simple and unified system of generalized semantics, which allows for a straightforward demonstration of the meta-theoretical properties, while also offering insights into the relationship between different logics (free and classical). Finally, we extend the system with modalities by using a labeled sequent calculus, and here we are again able to map out the different approaches and their mutual relations using the same framework.

Time: 16.00 – 17.15

Location: Online.

 

  • December 8, Bogdan Dicher (Lisbon) [joint work with Francesco Paoli]

Metainferential harmony: Harmony without Identity and Cut

Proof-theoretic semantics aims to explain the meaning of the logical constants in terms of the inference rules that govern their behaviour in proofs. One of its central concepts is that of harmony: roughly, the match between the rules stipulating the conditions for introducing a logical constant in a proof and those for eliminating it from a proof. There are many accounts of harmony, most of them developed against a background that assumes the rules of Identity and Cut, taken to codify the reflexivity and transitivity of logical consequence. We have argued elsewhere that the proof-theoretic project should be approached relative to a logic, i.e., relative to a consequence relation, and that the consequence relation relevant for proof-theoretic semantics is the one given by the sequent-to-sequent derivability relation in Gentzen systems. This relation is always reflexive, monotonic, and transitive, but its being so does not depend on the availability of sequent rules codifying these properties. In this talk we investigate the prospects for an account of harmony adequate for logics that lack the structural rules of Identity and Cut.

Time: 16.00 – 17.15

Location: Online.

 

Proof Theory for Modal Tense Logic and Symmetric Modal Logic via Linear Nested Sequents

Modal logics offering the possibility of travelling backwards along the accessibility relation, such as modal tense logic Kt or symmetric modal logic KB, have often proved difficult to capture satisfactorily from a proof-theoretic point of view. In particular, while ordinary sequent systems, e.g., for KB, exist, they rely on an analytic cut rule for completeness. Both tense logic Kt and symmetric modal logic KB have been given cut-free calculi in much more expressive frameworks, arguably the simplest being that of nested sequent calculi. However, the machinery necessary for nested sequents, as well as the immediate connection to the semantics, raises the question whether these logics can also be captured in a simpler framework.

As a possible contender, there has recently been a resurgence of work in the linear nested sequent framework, an intermediate framework between ordinary and full nested sequents. What has been lacking so far, however, are convincing examples of the added expressivity of this framework over ordinary sequents that do not depend on the linear structure of the models for the logics in question.

In this talk I will present joint work with Rajeev Gore showing that modal tense logic Kt and symmetric modal logic KB can indeed be captured in the linear nested sequent framework, thus providing positive answers to both of the above questions.

Time: 16.00 – 17.15

Location: Online

 

  • October 13, Lorenzo Rossi (Munich) [joint work with Michael Glanzberg]

Truth and Quantification

Theories of self-applicable truth have been motivated in two main ways. First, if truth-conditions provide the meaning of (several kinds of) natural language expressions, then self-applicable truth is instrumental to develop the semantics of natural languages. Second, a self-applicable truth predicate is required to express generalizations that would not be expressible in natural languages without it. In order to fulfill their semantic and expressive roles, we argue, the truth predicate has to be studied in its interaction with linguistic constructs that are actually found in natural languages and extend beyond first-order logic — modals, indicative conditionals, arbitrary quantifiers, and more. Here, we focus on truth and quantification. We develop a Kripkean theory of self-applicable truth for the language of Generalized Quantifier Theory. More precisely, we show how to interpret a self-applicable truth predicate for the full class of type ⟨1, 1⟩ (and type ⟨1⟩) quantifiers to be found in natural languages. As a result, we can model sentences such as ‘Most of what Jane said is true’, or ‘infinitely many theorems of T are untrue’, and several others, thus expanding the scope of existing approaches to truth, both as a semantic and as an expressive device.
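
For readers unfamiliar with Generalized Quantifier Theory, the type ⟨1,1⟩ quantifiers mentioned above denote relations between two sets. The sketch below shows the textbook denotations; the example sets and names are invented for illustration and are not taken from the talk:

```python
# Type <1,1> quantifiers as relations between two sets (standard GQ theory).

def every(A, B):
    return A <= B            # 'every A is B'  iff  A is a subset of B

def some(A, B):
    return bool(A & B)       # 'some A is B'   iff  A and B overlap

def most(A, B):
    # 'most A are B' iff the As that are B outnumber the As that are not
    return len(A & B) > len(A - B)

# Hypothetical example: the things Jane said, and the true ones among them.
said = {"s1", "s2", "s3"}
true = {"s1", "s2"}

print(most(said, true))   # 'Most of what Jane said is true' → True
```

Interpreting a self-applicable truth predicate for such quantifiers is, of course, the hard part that the talk addresses; the point here is only the relational shape of their denotations.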

Time: 16.00 – 17.15

Location: Online

 

A Note on Synonymy in Proof-theoretic Semantics

The topic of identity of proofs was put on the agenda of general (or structural) proof theory at an early stage. The relevant question is: When are the differences between two distinct proofs (understood as linguistic entities, proof figures) of one and the same formula so inessential that it is justified to identify the two proofs? The paper addresses another question: When are the differences between two distinct formulas so inessential that these formulas admit of identical proofs? The question appears to be especially natural if the idea of working with more than one kind of derivations is taken seriously. If a distinction is drawn between proofs and disproofs (or refutations) as primitive entities, it is quite conceivable that a proof of one formula amounts to a disproof of another formula, and vice versa. The paper develops this idea.

Time: 16.00 – 17.15

Location: Online

 

Validity in Context. Logical Pluralism and 2D-Semantics

Logical pluralism – the view that there is more than one correct logic – is typically connected to claims about the semantics of the logical vocabulary. A number of authors have claimed that the alleged plurality of correct logical systems can be traced back to the context-sensitivity of the expression “valid” in natural language. In the talk, I explore in what ways the most prominent account of context-sensitivity – Kaplan’s logic LD – can be employed to support this claim. I develop three possible readings of the Kaplanian character of “valid” and assess their weaknesses and strengths.

Time: 16.00 – 17.15

Location: Online.

 

  • June 30, Giuliano Rosella (Torino) [joint work with Vita Saitta & Matteo Plebani]

Truthmakers, Incompatibility and Modality

We present an extension of Fine’s truthmaker semantics, obtained by adding an incompatibility relation to Fine’s state space. The resulting framework allows us to improve the semantic clauses for negation and define important notions such as those of (im)possible state and possible world in a new and natural way. We prove some interesting results concerning the interaction between incompatibility, negation and possible worlds, e.g., that possible worlds behave classically with respect to negation; we also give a semantics for the Logic of Exact Entailment, FDE and S5. At the end, we explore various attempts to use the resources of the framework to provide truthmaking clauses for modal sentences.

Time: 16.00 – 17.00

Location: Online

 

Reasoning with words, types and tensors

Computational linguists attempt to make a computer reason about natural language. For this, words and sentences need to be encoded in a suitable numerical form. This is where vector-based semantics comes in: by embedding words in a space in such a way that words similar in meaning will be measurably similar, we allow for efficient representations that encode (some sort of) relationships between words. This idea and its use in deep neural networks have led to successful applications in the domain of natural language processing; anno 2020 we have access to several sentence encoders (e.g. Skip-Thoughts of Kiros 2015, InferSent of Conneau et al. 2017, Universal Sentence Encoder of Cer et al. 2018) and contextualised word embeddings (ELMo of Peters et al. 2018, BERT of Devlin et al. 2019). These neural vector approaches are able to map arbitrary text to some vectorial encoding, using state-of-the-art deep learning techniques.

In contrast to purely neural approaches are formal compositional approaches to semantics, that take the meaning of a sentence to be built up by the meaning of individual words, and the way those are combined (Montague 1970). In a type-driven approach, words are assigned types that reflect their grammatical role in a sentence or text, and composition of words is driven by a logical system that assigns a function-argument structure to a sequence of words (Moortgat 2010). Such an approach can be neatly integrated with vector-based semantics of meaning, where individual word meaning is given by the way words are distributed in a large text corpus (Coecke et al. 2010, 2013, Moortgat & Wijnholds 2017).

Earlier talks in this series (Moortgat, Greco) discussed theoretical aspects of compositional distributional models; in this talk I will take the point of view of representation learning: how to learn representations for words and sentences in a way that aligns with some of the key ideas of typelogical grammar?

Compositional tensor-based distributional models assume that individual words are to be represented by tensors, whose order is determined by their grammatical type; such tensors represent multilinear maps, where composition is effectuated by function application. Learning such representations has been done, but has often resulted in a large number of parameters to be learnt. We discuss some of these approaches (Grefenstette 2013, Polajnar 2014) as well as a recent new proposal (work in progress) that builds on the adjective skipgram model of Maillard & Clark (2016).

I end with remarks on the feasibility of tensor-based representation learning, and some suggestions of the options that we are currently investigating, to create wide-coverage type-driven embeddings for Dutch (and beyond!).

Time: 16.00 – 17.00

Location: Online

 

Gibbardian Collapse and Trivalent Conditionals

This paper discusses the scope and significance of a triviality result stated by Allan Gibbard for indicative conditionals, showing that any indicative conditional that satisfies the Law of Import-Export, is supraclassical, and stronger than the material conditional, must in fact collapse to the material conditional. Gibbard’s result is taken to pose a dilemma for a truth-functional account of conditionals: give up Import-Export, or embrace the two-valued analysis. We show that this dilemma can be averted in trivalent logics of the conditional based on Reichenbach and de Finetti’s idea that a conditional with a false antecedent is undefined. Import-Export and truth-functionality hold without triviality in such logics. We unravel some implicit assumptions in Gibbard’s proof, and discuss a recent generalization of Gibbard’s result due to Branden Fitelson.
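
The claim that Import-Export holds without triviality in such logics can be checked by brute force for the trivalent case. The sketch below assumes de Finetti's table (a conditional with a non-true antecedent is undefined) and Strong Kleene conjunction; it verifies that A → (B → C) and (A ∧ B) → C receive the same value under every trivalent valuation:

```python
# Brute-force check of the Law of Import-Export under de Finetti's
# trivalent conditional, with Strong Kleene conjunction assumed.
from itertools import product

T, F, U = "t", "f", "u"   # true, false, undefined

def cond(a, b):
    # de Finetti: the conditional takes the value of the consequent when
    # the antecedent is true, and is undefined otherwise
    return b if a == T else U

def conj(a, b):
    # Strong Kleene conjunction
    if a == F or b == F:
        return F
    if a == T and b == T:
        return T
    return U

# Import-Export: A -> (B -> C) agrees with (A & B) -> C on all 27 valuations.
ok = all(cond(a, cond(b, c)) == cond(conj(a, b), c)
         for a, b, c in product([T, F, U], repeat=3))
print(ok)  # → True
```

This only illustrates one half of the paper's point (Import-Export holds truth-functionally in the trivalent setting); Gibbard's collapse argument and its generalizations are about what happens when these principles are combined over a two-valued base.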

Time: 16.00 – 17.00

Location: Online.

 

Describable Nuclea, Negative Translations and Extension Stability

What do soundness/completeness of negative translations of intuitionistic modal logics, extension stability of preservativity/provability logics and the use of nuclea on Heyting Algebra Expansions (HAEs) to form other HAEs have in common? As it turns out, in all those cases one appears to deal with a certain kind of subframe property for a given logic, i.e., the question whether the logic in question survives a transition to at least some types of substructures of its Kripke-style or neighbourhood-style frames. The nucleic perspective on subframe logics was introduced by Silvio Ghilardi and Guram Bezhanishvili (APAL 2007) for the purely superintuitionistic syntax (without modalities or other additional connectives). It has not been used so much in the modal setting, mostly because the focus in the field tends to be on modal logics with a classical propositional base, and nuclea are a rather trivial notion in the boolean setting. However, things are very different intuitionistically. Since the 1970s, nuclea have been studied in the context of point-free topologies (lattice-complete Heyting algebras), sheaves and Grothendieck topologies on toposes, and finally arbitrary Heyting algebras (Macnab 1981). Other communities may know them as “lax modalities” or (a somewhat degenerate case of) “strong monads”.

We marry the nuclei view on subframe properties with the framework of “describable operations” introduced to study subframe logics in Frank Wolter’s PhD Thesis (1993). Wolter’s original setting was restricted to classical modal logics, but with minimal care his setup can be made to work intuitionistically, and nuclea provide the missing ingredient to make it fruitful. From this perspective, we revisit the FSCD 2017 study of soundness and completeness of negative translations in modal logic (joint work with Miriam Polzer and Ulrich Rabenstein) and our present study of extension stability for preservativity logics based on the constructive strict implication (jointly with Albert Visser). Various characterization and completeness results can be obtained in a generic way.

Time: 14.00 – 15.00

Location: Online.

This was a joint event with A|C, the Algebra|Coalgebra Seminar of the ILLC, Amsterdam. (http://events.illc.uva.nl/alg-coalg/)

 

Vector spaces as Kripke frames

The compositional distributional approach in computational linguistics has opened the way for an integration of the lexical aspects of meaning into Lambek’s type-logical grammar program. This approach is based on the observation that a sound semantics for the associative, commutative and unital Lambek calculus can be based on vector spaces by interpreting fusion as the tensor product of vector spaces. We build on this observation and extend it to a ‘vector space semantics’ for the general modal Lambek calculus, based on modal algebras over a field K, i.e. vector spaces endowed with a bilinear binary product and a linear binary relation. We regard K-algebras as ‘Kripke frames’ the complex algebras of which are complete residuated lattices. This perspective makes it possible to establish a systematic connection between vector space semantics and the standard Routley-Meyer semantics of modal substructural logics.

Time: 16.00 – 17.00

Location: Online.

 

  • March 10, Dominik Klein (Bayreuth/Bamberg), joint work with Soroush Rafiee Rad and Ondrej Majer.

Four-Valued Probabilities

First degree entailment (fde) is a four-valued propositional logic that complements the classical truth values True and False with two non-classical truth values Neither and Both. The latter two account for the possibility of the available information being incomplete or containing contradictory evidence about some proposition. In this talk, we present a probabilistic extension of fde, i.e. we allow agents to have probabilistic beliefs about the truth and falsity of a proposition.

More specifically, we present two probabilistic versions of fde, one related to the bi-relational, the other to the four-valued semantics of first degree entailment. We provide sound and complete axiomatizations for both and identify a translation function between the two frameworks. Moreover, we identify policies for conditionalization and aggregation on the probability functions defined. More specifically, we analyze four-valued equivalents of Bayes’ and Jeffrey updating and also provide policies for aggregating information from different sources.
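
As a rough illustration of what a probabilistic belief state over the four fde values might look like, the sketch below assigns a mass to each value, summing to 1. The derived "probability of truth/falsity" reading (truth = True or Both, falsity = False or Both) is our own simplifying assumption for illustration, not necessarily the authors' official definition:

```python
# Minimal sketch of a four-valued belief state: masses over the fde values
# True, False, Neither, Both that sum to 1. The prob_true/prob_false reading
# below is an illustrative assumption, not the talk's formal apparatus.

def make_state(t, f, n, b):
    assert abs(t + f + n + b - 1.0) < 1e-9, "masses must sum to 1"
    return {"T": t, "F": f, "N": n, "B": b}

def prob_true(state):    # mass of values that contain truth (T or B)
    return state["T"] + state["B"]

def prob_false(state):   # mass of values that contain falsity (F or B)
    return state["F"] + state["B"]

# With some mass on Both (contradictory evidence), the probabilities of
# truth and falsity overlap, so they can sum to more than 1.
s = make_state(0.5, 0.2, 0.1, 0.2)
print(prob_true(s), prob_false(s))  # → 0.7 0.4
```

This makes vivid why such probabilities are non-classical: unlike in standard probability theory, the probability of a proposition and of its negation need not sum to 1.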

Time: 16.00 – 17.30

Location: 101, Drift 25

 

2019

Words as tensors; proofs as multilinear maps

In this talk, I report on a current NWO project that aims to combine the ‘proofs-as-programs’ view on derivational semantics as one finds it in typelogical grammars with a vector-based, distributional modelling of lexical semantics.

Typelogical grammars derive from the ‘Syntactic Calculus’ introduced by Lambek in 1958/1961. The Lambek calculus is a precursor of the ‘parsing as deduction’ method in computational linguistics: the conventional linguistic categories are turned into formulas of a grammar logic; the judgment whether a phrase is syntactically well-formed is the outcome of a process of deduction. Present-day categorial grammars address the expressive limitations of the original syntactic calculus by extending the formula language with control operations that allow for restricted forms of reordering and restructuring. These control operations play a role analogous to the exponentials/modalities of linear logic; they situate typelogical grammars in the broader family of substructural logics.

Compositional interpretation, in a typelogical grammar, takes the form of a structure-preserving map (a homomorphism) that sends types and proofs of the syntax to their counterparts in an appropriate semantic algebra. In the formal semantics approach in the tradition of Montague grammar, the target models are based on a set-theoretic universe of individuals, truth values, and function spaces over them. Semantics of open class lexical items (nouns, verbs, …) is ignored: these items are treated as unanalysed non-logical constants.

From a Natural Language Processing perspective, this agnostic attitude with respect to lexical semantics is disappointing. In fact, a core ingredient of the highly successful NLP applications we see today is the modelling of word meanings as vectors in a high-dimensional semantic space representing a word’s context.

To obtain a vector-based compositional interpretation for typelogical grammars, the key idea is simple: a word with an atomic type, say a noun ‘cat’, is interpreted as a vector in Noun space; a word with an n-place functional type, say a transitive verb ‘chase’ requiring a subject and an object, is interpreted as a tensor of rank n+1; the derivation of a phrase ‘dogs chase cats’ then is interpreted as a multilinear map acting on the meanings of the constituting words to produce a vector in Sentence space, where the syntactic combination of the verb with its subject and object is modelled by tensor contraction.
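
The contraction step in this example can be sketched in plain Python. The dimensions and numbers below are invented toy values; real models use learned, high-dimensional tensors:

```python
# Toy sketch of 'dogs chase cats': a transitive verb as a rank-3 tensor,
# composed with subject and object vectors by tensor contraction.
# Noun space and Sentence space are both 2-dimensional here; all numbers
# are invented for illustration.

n, s = 2, 2                      # dim of Noun space, dim of Sentence space

dogs = [1.0, 0.0]                # noun vectors in Noun space
cats = [0.0, 1.0]

# 'chase': shape n x n x s (subject index, object index, sentence index),
# i.e. a bilinear map Noun x Noun -> Sentence.
chase = [[[0.2, 0.8], [0.9, 0.1]],
         [[0.5, 0.5], [0.3, 0.7]]]

# Contract away the subject and object indices:
#   sentence[k] = sum_{i,j} dogs[i] * chase[i][j][k] * cats[j]
sentence = [sum(dogs[i] * chase[i][j][k] * cats[j]
                for i in range(n) for j in range(n))
            for k in range(s)]

print(sentence)  # → [0.9, 0.1]
```

With one-hot noun vectors, as here, contraction simply selects the verb tensor's slice for that subject-object pair; with dense learned vectors it blends all slices multilinearly.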

I discuss some challenges for semantic modelling in terms of finite-dimensional vector spaces and (multi)linear maps arising from tensions between the demands of lexical and derivational aspects of meaning.

Time: 16.00 – 18.00

Location: 203, Drift 25

 

Loeb constructively meets μ

A famous result of Solovay identifies the classical modal Loeb system (GL) as the provability logic of Peano Arithmetic. Under this perspective, Gödel’s Second Incompleteness Theorem and Loeb’s Theorem correspond to explicit definability of fixpoints of modalized formulas in GL. Van Benthem has observed that this definability of fixpoints entails that adding standard fixpoint operators of positive formulas does not increase the expressive power of GL. Visser has analyzed this observation syntactically and extended it to a proof that GL is a retract of the modal μ-calculus in a suitably defined category of interpretations. A fully polished theory of such embeddings and translations involving the fixpoint operator and similar binders is yet to be developed; our current efforts focus on utilizing the theory of higher-order rewrite systems (HRS).

Furthermore, we face some interesting problems when studying intuitionistic analogs of such results. First, while the fixpoint theorem for modalized formulas works in intuitionistic GL, the propositional reasoning underlying derivations of van Benthem and Visser is essentially classical. Deep results about the intuitionistic propositional calculus (IPC) established by Pitts and Ruitenburg can be used to overcome this limitation.

Second, over IPC (and sometimes even classically) it is for several reasons natural to look at a more general syntactic setup replacing the box with Lewis arrow, which arithmetically is standardly interpreted in terms of preservativity (although it does admit many other interpretations). As it turns out, there are two major incompatible ways of computing explicit fixpoints for formulas whose principal connective is the strict implication, each requiring a different base logic. Our approach here can be described as the first study of reverse mathematics of explicit fixpoints.

Time: 16.00 – 18.00

Location: 103, Drift 25

 

Unifying Two Views of Conditionals

In this talk, my goal is to draw together two traditions that often share similar motives, but reach different views about the logic of conditionals. Lewis and Stalnaker developed a semantics for conditionals based on selection functions, sometimes called the `variably-strict’ conditional. Routley and Meyer developed a semantics for conditionals based on a ternary relation, sometimes called an `enthymematic-relevant’ conditional. These traditions agree that the material and strict conditionals are inadequate, and for similar reasons, but traditionally these camps also disagree on some important details.

My approach will blend together the idea that antecedent selections play a role in evaluating all conditionals together with the idea that the logic of conditionals is enthymematic. Much of the work is accomplished by shifting attention from the selection function at individual points of models to a `type-raised’ selection function that `holds at whole propositions’. A conditional proposition [If A, B] can then be seen as the proposition that grounds the maximally extensive type-raised selection function on proposition [A] that embeds into proposition [B]. This perspective allows us to capture all different kinds of conditionals—both counterfactuals and implications—by imposing frame conditions on type-raised selection functions.

Time: 16.00 – 18.00

Location: 103, Drift 25

 

  • June 4, Aybüke Özgün (ILLC), joint work with Francesco Berto (University of St Andrews)

Dynamic Hyperintensional Belief Revision

We propose a dynamic hyperintensional logic of belief revision for non-omniscient agents, reducing the logical omniscience phenomena affecting standard doxastic/epistemic logic as well as AGM belief revision theory. Our agents don’t know all a priori truths; their belief states are not closed under classical logical consequence; and their belief update policies can be subject to ‘framing effects’: logically or necessarily equivalent contents can lead to different revisions. We model both plain and conditional belief, then focus on dynamic belief revision. The key idea we exploit to achieve non-omniscience focuses on topic- or subject-matter-sensitivity: a feature of belief states which is gaining growing attention in the recent literature.
Time: 16:00 – 17:30
Location: 117, Janskerkhof 2-3

 

  • May 7, Amir Tabatabai (Utrecht University)

Steenen voor Brood: A Classical Interpretation of the Intuitionistic Logic

In 1933, Gödel introduced a provability interpretation for the intuitionistic propositional logic, IPC, to establish a formalization for the BHK interpretation, reading intuitionistic constructions as the usual classical proofs [1]. However, instead of using any concrete notion of a proof, he used the modal system S4, as a formalization for the intuitive concept of provability and then translated IPC into S4 in a sound and complete manner. His work then suggested the problem to find the missing concrete provability interpretation of the modal logic S4 to complete his formalization of the BHK interpretation via classical proofs.

In this talk, we will develop a framework for such provability interpretations. We will first generalize Solovay’s seminal provability interpretation of the modal logic GL to capture other modal logics such as K4, KD4 and S4. The main idea is introducing a hierarchy of arithmetical theories to represent the informal hierarchy of meta-theories and then interpreting the nested modalities in the language as the provability predicates of the different layers of this hierarchy. Later, we will combine this provability interpretation with Gödel’s translation to propose a classical formalization for the BHK interpretation. The formalization suggests that the BHK interpretation is nothing but a plural name for different provability interpretations for different propositional logics based on different ontological commitments that we believe in. They include intuitionistic logic, minimal logic and Visser-Ruitenburg’s basic logic. Finally, as a negative result, we will first show that there is no provability interpretation for any extension of the logic KD45, and as expected, there is no BHK interpretation for the classical propositional logic.

[1]. K. Gödel, Eine Interpretation des Intuitionistichen Aussagenkalküls, Ergebnisse Math Colloq., vol. 4 (1933), pp. 39-40.

Time: 16:00 – 17:30

Location: Room 105, Drift 25

 

Monadic Second Order Logic as a Model Companion

I will talk about a connection between monadic second order logic and more traditional first order model theory, which has emerged in my recent joint work with Ghilardi [1,2].
Monadic second order (MSO) logic, when interpreted in discrete structures, is closely related to certain formal models of computation. For example, the MSO-definable sets of finite colored linear orders (“words”) are exactly the regular languages from automata theory. MSO logic and its connection to automata has also been studied on many more structures, including omega-indexed words and binary trees.
In more traditional model theory, one typically studies first order logic as a generalization of certain constructions in mathematics. A fundamental insight there was that the theory of algebraically closed fields can be generalized to a purely logical notion of “existentially closed model”. The syntactic counterpart of this notion is called the “model companion” of a first order theory.
We prove that MSO logic, both on omega-words [1] and on binary trees [2], can be viewed as the model companion of a finitely axiomatized universal first order theory. In each case, this universal theory is closely connected to well-known modal fix-point logics on such structures.
[1] S. Ghilardi and S. J. van Gool, A model-theoretic characterization of monadic second-order logic on infinite words, Journal of Symbolic Logic, vol. 82, no. 1, 62-76 (2017). https://arxiv.org/pdf/1503.08936.pdf
[2] S. Ghilardi and S. J. van Gool, Monadic second order logic as the model companion of temporal logic, Proc. LICS 2016, 417-426 (2016). https://arxiv.org/pdf/1605.01003.pdf

Time: 16:00 – 17:30

Location: Room 302, Drift 25

 

Analyzing the Behavior of Transition Systems

We develop a novel way to analyze the possible long-term behavior of transition systems. These are discrete, possibly non-deterministic dynamical systems. Examples range from computer programs, over neural- and social networks, to ‘discretized’ physical systems. We define when two trajectories in a system—i.e., possible sequences of states—exhibit the same type of behavior (e.g., agreeing on equilibria and oscillations). To understand the system we thus have to analyze the partial order of ‘types of behavior’: equivalence classes of trajectories ordered by extension.

We identify subsystems such that, if absent, this behavior poset is an algebraic domain—and we characterize these behavior domains. Next, we investigate the natural topology on the types of behavior, and we comment on connections to logic. Finally, and motivated by the ‘black box problem’ of neural networks, we mention ways to transform a system into a simple system with essentially the same behavior.

Time: 16:00 – 17:30

Location: Room 302, Drift 25

 

Finite Axiomatizability and its Kin.

In my talk I will first, briefly, discuss some background concerning finite axiomatizability. I will illustrate that some theories, like Peano Arithmetic and Zermelo Fraenkel Set Theory, are not finitely axiomatizable in particularly tenacious ways.

There are two notions that are quite close to finite axiomatizability. The first notion is axiomatizability by a finite scheme and the second notion is *finding a finitely axiomatized extension of the given theory in an expanded language such that the extension is conservative over the original theory*.

Vaught’s theorem tells us that a wide class of recursively enumerable theories is axiomatizable by a finite scheme. I will briefly discuss this result.

I present results by Kleene and Craig & Vaught that show that, for a wide class of recursively enumerable theories, a finitely axiomatized extension in an expanded language exists, such that the extension is conservative over the original theory. I discuss work by Pakhomov & Visser that tells us that, under reasonable assumptions, there is no best such extension.

The paper by Pakhomov & Visser can be found here: https://arxiv.org/abs/1712.01713

Time: 16:00 – 17:30

Location: Room 302, Drift 25

 

The De Jongh Property for Bounded Constructive Zermelo-Fraenkel Set Theory

The theory BCZF is obtained from constructive Zermelo-Fraenkel set theory CZF by restricting the collection schemes to bounded formulas. We prove that BCZF has the de Jongh property with respect to every intermediate logic that is characterised by a class of Kripke frames. For this proof, we will combine Kripke semantics for subtheories of CZF (due to Iemhoff) and the set-theoretic forcing technique.

The paper is available for download under https://eprints.illc.uva.nl/1662/1/djp-illcpp.pdf

 

  • February 19, Allard Tamminga (University of Groningen and Utrecht University)

An Impossibility Result on Methodological Individualism

We prove that the statement that a group of individual agents performs a deontically admissible group action cannot be expressed in a well-established multi-modal deontic logic of agency that models every conceivable combination of actions, omissions, abilities, obligations, prohibitions, and permissions of finitely many individual agents. Our formal result has repercussions for methodological individualism, the methodological precept that any social phenomenon is to be explained ultimately in terms of the actions and interactions of individuals. We show that by making the relevant social and individualistic languages fully explicit and mathematically precise, we can precisely determine the situations in which methodological individualism is tenable. (Joint work with Hein Duijf and Frederik Van De Putte)

 

How to adopt a logic

What is commonly referred to as the Adoption Problem is a challenge to the idea that the principles for logic can be rationally revised. The argument is based on a reconstruction of unpublished work by Saul Kripke. As the reconstruction has it, Kripke essentially extends the scope of Willard Van Orman Quine’s regress argument against conventionalism to the possibility of adopting new logical principles. In this paper, we want to discuss the scope of this challenge. Are all revisions of logic subject to the regress problem? If not, are there interesting cases of logical revisions that are subject to the regress problem? We will argue that both questions should be answered negatively.

Paper: https://carlonicolai.github.io/Adop4.pdf