Registration is free.
Please register with the SAT conference.
Login Credentials will be provided by the SAT conference.
We will keep you posted about the conferencing system (likely Zoom).
See also updates on Twitter: Johannes, Kuldeep, Mate.
Day: Tuesday, July 6th, 2021 (Times are given in GMT+2/CEST/Paris, see your timezone)
|16:00||Setup / Welcome||Johannes K. Fichte, Markus Hecher|
|16:10 - 16:35||Revisiting the c2d Knowledge Compiler
The c2d knowledge compiler has been around for more than a decade. Its main purpose is to compile knowledge bases in CNF form into Negation Normal Form (NNF) circuits that satisfy the properties of decomposability, determinism and smoothness. These circuits are tractable, allowing a number of hard queries to be computed in time linear in the circuit size. Perhaps the most common of these queries are the ones related to model counting and weighted model counting. The c2d knowledge compiler also underlies the ACE system, which compiles Bayesian networks into Arithmetic Circuits (ACs), allowing probabilistic inference on the Bayesian network in time linear in the AC size. In this talk, I will revisit the architectural design and underlying algorithms of the c2d compiler, in light of the upcoming release of c2d version 3.0. I will discuss how c2d performs decompositions, how it does formula caching and how it utilizes the technology underlying SAT solvers based on CDCL. Some of these ingredients of c2d have evolved beyond what has been reported in the earlier publications that appeared when c2d was first released.
|Adnan Darwiche||Slides||Lecture Video|
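The linear-time queries mentioned in the abstract are simple enough to sketch: on a smooth d-DNNF circuit, model counting is a single bottom-up pass that multiplies at AND nodes (decomposability makes children independent) and adds at OR nodes (determinism makes children disjoint). A minimal Python illustration — the node encoding is invented for this example and is not c2d's actual format:

```python
def d_dnnf_count(nodes, root):
    """Model count of a smooth d-DNNF circuit in one linear pass.
    nodes: id -> ('lit', literal) | ('and', [ids]) | ('or', [ids])."""
    memo = {}
    def go(i):
        if i in memo:
            return memo[i]
        op, arg = nodes[i]
        if op == 'lit':
            r = 1                      # a literal has one model over its variable
        elif op == 'and':
            r = 1
            for c in arg:
                r *= go(c)             # decomposability: disjoint variables
        else:                          # 'or'
            r = sum(go(c) for c in arg)  # determinism: disjoint model sets
        memo[i] = r
        return r
    return go(root)

# (x1 v x2) as a smooth d-DNNF:  (x1 AND (x2 OR -x2))  OR  (-x1 AND x2)
circuit = {0: ('lit', 1), 1: ('lit', 2), 2: ('lit', -2), 3: ('or', [1, 2]),
           4: ('and', [0, 3]), 5: ('lit', -1), 6: ('and', [5, 1]),
           7: ('or', [4, 6])}
```

Running `d_dnnf_count(circuit, 7)` yields 3, the model count of x1 ∨ x2 over two variables.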
|16:35 - 17:00||Counting Complexity
Reductions are an essential tool for showing the hardness of problems. For counting problems, due to the well-known “easy to decide, hard to count” behavior, reductions are particularly challenging to define. On the one hand, if they are based too directly on the underlying decision problem, they are not powerful enough to capture the essence of a counting class; on the other hand, if they are not, they may be too powerful. In this talk, I will come back to some old attempts to define reductions in the context of counting, recall the notion of subtractive reductions in its original formulation, and exemplify its usefulness through hardness results in non-monotonic reasoning and query answering.
|17:00 - 17:25||The power of literal equivalence in model counting
The past two decades have seen significant improvements in the scalability of practical model counters, which have been quite influential in many applications from artificial intelligence to formal verification. While most exact counters fall into two categories, search-based and compilation-based, Huang and Darwiche’s remarkable observation ties the two together: the trace of a search-based exact model counter corresponds to a Decision-DNNF formula. Taking advantage of literal equivalences, this paper designs an efficient model counting technique whose trace is a generalization of Decision-DNNF. We first propose a generalization of Decision-DNNF, called CCDD, to capture literal equivalences, then show that CCDD supports model counting in linear time, and finally design a model counter, called ExactMC, whose trace corresponds to CCDD. We perform an extensive experimental evaluation over a comprehensive set of benchmarks and compare the performance of ExactMC with that of the state-of-the-art counters c2d, miniC2D, D4, ADDMC, and Ganak. Our empirical evaluation demonstrates that ExactMC can solve 885 instances while the prior state of the art could solve only 843, a significant improvement of 42 instances.
This paper appeared at AAAI-21.
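The literal equivalences the talk exploits can be illustrated with a toy: two binary clauses (¬x ∨ y) and (x ∨ ¬y) together force x ↔ y, so a counter only needs to branch on one representative per equivalence class. The brute-force sketch below shows the idea only; it is not the CCDD data structure or ExactMC:

```python
from itertools import product

def literal_equivalences(clauses):
    """Map each variable to a representative: x and y are merged when
    the binary clauses (-x v y) and (x v -y) both occur, which
    together force x <-> y."""
    binset = {frozenset(c) for c in clauses if len(c) == 2}
    n = max(abs(l) for c in clauses for l in c)
    rep = list(range(n + 1))                 # tiny union-find
    def find(v):
        while rep[v] != v:
            rep[v] = rep[rep[v]]
            v = rep[v]
        return v
    for x in range(1, n + 1):
        for y in range(x + 1, n + 1):
            if frozenset({-x, y}) in binset and frozenset({x, -y}) in binset:
                rep[find(y)] = find(x)       # x <-> y detected
    return [find(v) for v in range(n + 1)]

def count_with_equivalences(clauses):
    """Brute-force model count enumerating one variable per
    equivalence class; equivalent variables are assigned together."""
    sub = literal_equivalences(clauses)
    reps = sorted(set(sub[1:]))
    models = 0
    for bits in product([False, True], repeat=len(reps)):
        assign = dict(zip(reps, bits))
        full = {v: assign[sub[v]] for v in range(1, len(sub))}
        if all(any(full[abs(l)] == (l > 0) for l in c) for c in clauses):
            models += 1
    return models
```

For (x1 ↔ x2) ∧ (x1 ∨ x3), only two representatives remain and the count is 3; the equivalence halves the enumeration space while preserving the count.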
|17:25 - 17:30||+++++++++ 5 Minute Break +++++++++|
|17:30 - 17:55||SharpSAT-TD: Improving SharpSAT by Exploiting Tree Decompositions
We describe SharpSAT-TD, our submission to Model Counting Competition 2021, which solved the most public instances in both tracks 1 and 2. SharpSAT-TD is based on SharpSAT, with the primary novel feature being the integration of low-width tree decompositions into the decision heuristic of the counter. Another significant new feature is a new preprocessor, implementing, among other techniques, complete vivification and a treewidth-aware version of the B+E algorithm. SharpSAT-TD also extends SharpSAT by directly supporting weighted model counting.
|Tuukka Korhonen, Matti Järvisalo||Slides||Video||Abstract|
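One way a tree decomposition can feed a decision heuristic is to prefer variables that sit in bags near the root: branching on them tends to split the residual formula into independent components early. The sketch below illustrates that idea under an assumed bag/children encoding; it is not SharpSAT-TD's actual scoring function:

```python
def td_variable_order(bags, children, root=0):
    """Order variables by the depth of the shallowest bag containing
    them in a rooted tree decomposition.  bags: list of variable sets;
    children: node -> list of child nodes.  (A sketch of the idea,
    not SharpSAT-TD's heuristic.)"""
    depth_of = {}
    stack = [(root, 0)]
    while stack:
        node, d = stack.pop()
        for v in bags[node]:
            depth_of[v] = min(depth_of.get(v, d), d)   # shallowest bag wins
        for ch in children.get(node, []):
            stack.append((ch, d + 1))
    return sorted(depth_of, key=lambda v: (depth_of[v], v))

# Path decomposition {1,2} - {2,3} - {3,4}: branch on 1 and 2 first.
order = td_variable_order([{1, 2}, {2, 3}, {3, 4}], {0: [1], 1: [2]})
```

Here `order` is `[1, 2, 3, 4]`: the variables shared near the root come first, so their assignment disconnects the two ends of the path.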
|17:55 - 18:20||About Caching in D4 2.0
We present a new caching scheme and a new cache management strategy that have been implemented in the latest release of our compilation-based model counter, D4. The caching scheme consists of storing, for each entry (a CNF formula forming a connected component given the current variable assignment, together with its model count), the corresponding set of variables and the corresponding set of clauses, except those clauses of the CNF formula that are satisfied or not shortened when conditioned by the assignment. The cache management strategy includes a cache cleaning strategy based not only on the ages of the entries but also on the proportion of entries of the same size that led to positive hits. It also includes a cache insertion strategy that saves memory by avoiding storing in the cache every CNF formula encountered during search.
|Jean-Marie Lagniez, Pierre Marquis||Slides||Video||Abstract|
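The kind of entry D4 caches — a connected component of the residual CNF together with its model count — is easiest to see in a toy component-caching counter. This is a sketch of the generic architecture only, not D4's entry representation or its new insertion and cleaning strategies:

```python
def simplify(clauses, lit):
    """Condition on a literal: drop satisfied clauses, shorten the
    rest; return None if an empty clause (conflict) appears."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        d = [l for l in c if l != -lit]
        if not d:
            return None
        out.append(d)
    return out

def split(clauses):
    """Partition the clauses into variable-disjoint components."""
    comps, pool = [], list(clauses)
    while pool:
        comp = [pool.pop()]
        vs = {abs(l) for l in comp[0]}
        changed = True
        while changed:
            changed, rest = False, []
            for c in pool:
                if vs & {abs(l) for l in c}:
                    comp.append(c)
                    vs |= {abs(l) for l in c}
                    changed = True
                else:
                    rest.append(c)
            pool = rest
        comps.append(comp)
    return comps

CACHE = {}

def count(clauses):
    """Count models over the variables occurring in `clauses`,
    caching the count of every component encountered."""
    if clauses is None:
        return 0
    if not clauses:
        return 1
    key = frozenset(frozenset(c) for c in clauses)
    if key in CACHE:
        return CACHE[key]
    comps = split(clauses)
    if len(comps) > 1:
        r = 1
        for comp in comps:
            r *= count(comp)              # components are independent
    else:
        vs = {abs(l) for c in clauses for l in c}
        v = min(vs)
        r = 0
        for lit in (v, -v):
            sub = simplify(clauses, lit)
            if sub is None:
                continue
            left = {abs(l) for c in sub for l in c}
            # variables eliminated besides v become free: each doubles r
            r += count(sub) * 2 ** (len(vs) - len(left) - 1)
    CACHE[key] = r
    return r
```

For example, `count([[1, 2]])` is 3, and `count([[1], [2, 3], [-2, -3]])` is 2 (the second formula splits into two cached components).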
|18:20 - 18:45||DPMC: Weighted Model Counting by Dynamic Programming on Project-Join Trees
We describe the model-counting framework DPMC, which combines the model counters ADDMC and TensorOrder. DPMC is a unifying dynamic-programming framework to compute exact literal-weighted model counts of formulas in conjunctive normal form. At the center of our framework are project-join trees, which specify efficient project-join orders to apply additive projections (variable eliminations) and joins (clause multiplications). In this framework, model counting is performed in two phases. First, the planning phase constructs a project-join tree from a formula. Second, the execution phase computes the model count of the formula, employing dynamic programming as guided by the project-join tree. We empirically evaluate various methods for the planning phase and compare constraint-satisfaction heuristics with tree-decomposition tools. We also investigate the performance of different data structures for the execution phase and compare algebraic decision diagrams with tensors. We show that our dynamic-programming model-counting framework DPMC is competitive with the state-of-the-art exact weighted model counters Cachet, c2d, D4, and miniC2D.
DPMC is the predecessor of the weighted projected model counter ProCount. For projected counting, ProCount processes additive variables and existential variables differently by using project-join trees that are graded. ProCount is competitive with the existing exact weighted projected model counters D4P, projMC, and reSSAT.
Source code, benchmarks, and experimental data are available publicly.
|Jeffrey M. Dudek, Vu H. N. Phan, Moshe Y. Vardi||Slides||Video||Paper|
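The two primitives of the execution phase — joins (clause multiplications) and additive projections (variable eliminations) — can be sketched with dense dict-backed factors in place of ADDs or tensors. The sketch below also replaces the planning phase by a trivial linear order (join everything, then project everything), so it shows the algebra of DPMC's execution phase but none of its planning:

```python
from itertools import product

def clause_factor(clause):
    """Leaf factor: the 0/1 truth table of one clause, stored as
    (sorted variable tuple, {assignment bits -> weight})."""
    vs = tuple(sorted({abs(l) for l in clause}))
    tab = {bits: float(any(bits[vs.index(abs(l))] == (l > 0) for l in clause))
           for bits in product([False, True], repeat=len(vs))}
    return vs, tab

def join(f, g):
    """Pointwise product of two factors (clause multiplication)."""
    fv, ft = f
    gv, gt = g
    vs = tuple(sorted(set(fv) | set(gv)))
    tab = {}
    for bits in product([False, True], repeat=len(vs)):
        a = dict(zip(vs, bits))
        tab[bits] = (ft[tuple(a[v] for v in fv)]
                     * gt[tuple(a[v] for v in gv)])
    return vs, tab

def project(f, v, w_pos, w_neg):
    """Weighted additive projection: sum variable v out of factor f."""
    fv, ft = f
    i = fv.index(v)
    vs = fv[:i] + fv[i + 1:]
    tab = {}
    for bits, val in ft.items():
        key = bits[:i] + bits[i + 1:]
        tab[key] = tab.get(key, 0.0) + val * (w_pos if bits[i] else w_neg)
    return vs, tab

def wmc(clauses, weights):
    """Literal-weighted model count via a trivial (linear) project-join
    order; a real planner would pick a low-width project-join tree.
    weights: var -> (positive weight, negative weight), default (1, 1)."""
    f = ((), {(): 1.0})
    for c in clauses:
        f = join(f, clause_factor(c))
    for v in list(f[0]):
        f = project(f, v, *weights.get(v, (1.0, 1.0)))
    return f[1][()]
```

With unit weights, `wmc([[1, 2]], {})` gives 3.0 (unweighted counting); with weights 0.3/0.7 on variable 1 it gives 1.3, the weighted sum over the three models of x1 ∨ x2.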
|18:45 - 19:10||On Uniformly Sampling Traces of a Transition System
A key problem in constrained random verification (CRV) concerns generation of input stimuli that result in good coverage of the system’s runs in targeted corners of its behavior space. Existing CRV solutions however provide no formal guarantees on the distribution of the system’s runs. In this work, we take a first step towards solving this problem. We present an algorithm based on Algebraic Decision Diagrams for sampling bounded traces(i.e. sequences of states) of a sequential circuit with provable uniformity (or bias) guarantees,while satisfying given constraints. We have implemented our algorithm in a tool called TraceSampler. Extensive experiments show that TraceSampler outperforms alternative approaches that provide similar uniformity guarantees. This work was presented at ICCAD 2020.
|Moshe Y. Vardi, Aditya A. Shrotri, Supratik Chakraborty||Slides||Video||Paper|
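The core of uniform bounded-trace sampling is a counting argument: if each step is drawn with probability proportional to the number of trace completions from the successor, every full trace is equally likely. The sketch below does this with an explicit dict-based transition relation and dynamic programming; TraceSampler performs the analogous computation symbolically with Algebraic Decision Diagrams, and ignores constraints here:

```python
import random

def suffix_counts(trans, length):
    """cnt[k][s] = number of trace suffixes of k further steps from s.
    trans: state -> list of successor states."""
    states = list(trans)
    cnt = [{s: 1 for s in states}]
    for _ in range(length):
        prev = cnt[-1]
        cnt.append({s: sum(prev[t] for t in trans[s]) for s in states})
    return cnt

def sample_trace(trans, init, length, rng=random):
    """Draw one trace uniformly among all traces of `length` steps
    from `init`: each successor is weighted by its completion count."""
    cnt = suffix_counts(trans, length)
    trace, s = [init], init
    for k in range(length, 0, -1):
        succs = trans[s]
        weights = [cnt[k - 1][t] for t in succs]
        s = rng.choices(succs, weights=weights)[0]
        trace.append(s)
    return trace
```

For `trans = {'a': ['a', 'b'], 'b': ['a']}` there are exactly 3 two-step traces from `'a'` (`aaa`, `aab`, `aba`), and each is returned with probability 1/3.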
|19:10 - 19:20||+++++++++ 10 Minute Break +++++++++|
|19:20 - 19:45||Caching in Model Counters: A Journey through Space and Time
We investigate the role of the cache in CDCL-based model counters. Intuitively, we would expect the solving time of a model counter to decrease as the maximum allowed size of its cache increases. We find that constraining the cache size does not have a large influence on the solving time, and demonstrate that this is due to the characteristics of good branching heuristics. These heuristics do not only yield small search trees, but also give rise to components encountered only in small parts of those trees. Consequently, we only need to store the model counts of these components for a short while, and can soon safely delete them to free up cache space for new components. Finally, we demonstrate that, using a machine learning approach, we can predict surprisingly accurately whether the model count of a given component should be kept in the cache, or whether it should be discarded.
|Jeroen Rook, Anna Latour, Holger Hoos, Siegfried Nijssen||Slides||Video||Abstract|
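The finding that component counts are needed only briefly suggests very simple eviction can work well. As a point of reference, here is a bounded component cache with oldest-first eviction — a hypothetical stand-in for the age-based cleaning the talk examines, not the counters' actual cache implementation:

```python
from collections import OrderedDict

class BoundedComponentCache:
    """Component cache with a size cap and oldest-first eviction,
    plus hit/miss counters for instrumentation."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()           # component key -> model count
        self.hits = self.misses = 0

    def get(self, key):
        if key in self.store:
            self.hits += 1
            return self.store[key]
        self.misses += 1
        return None

    def put(self, key, count):
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)   # evict the oldest entry
        self.store[key] = count
```

If the branching heuristic confines each component to a small subtree, entries evicted this way are rarely needed again, which matches the observation that capping the cache size costs little solving time.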
|19:45 - 20:10||The Next Challenge: Model Counting Modulo Theories
The rise of Satisfiability Modulo Theories in the last approximately two decades has enabled cutting-edge progress in program verification, automatic test generation, symbolic execution, program synthesis, type inference, motion planning, security exploit detection, constraint satisfaction, and numerous other exciting areas of research. Yet satisfiability alone is insufficient to handle many of the corresponding quantitative analyses in those same areas; we need to be able to count models as well.
Significant progress has been made in model counting for propositional formulas, and there is increasing interest in model counting for more expressive domains. On the other hand, SMT solvers are able to check satisfiability of constraints over combinations of many different theories. The future seems clear: model counting modulo theories (MCMT) is the next big challenge.
In this talk I will describe recent advances in model counting for various domains including strings, linear and nonlinear arithmetic, arrays, and recursive data structures. My own work includes recent results in model counting for arrays and earlier work in model counting for strings.
The main purpose of the talk will be to point out instances in which satisfiability-checking or all-SAT algorithms for some of these domains have been converted into model-counting algorithms. Finally, we will end with a call to action for SAT and SMT researchers to apply their methods to the problem of model counting modulo theories.
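What "counting models" means beyond propositional logic can be made concrete with a deliberately naive baseline: over bounded integer variables, the model count of a quantifier-free formula is just the number of satisfying assignments, which an MCMT procedure would compute symbolically rather than by enumeration. A toy enumerator (the function name and encoding are invented for illustration):

```python
from itertools import product

def count_bounded(constraint, bounds):
    """Brute-force model count of a formula over bounded integer
    variables.  constraint: dict of values -> bool;
    bounds: name -> (lo, hi) inclusive."""
    names = sorted(bounds)
    ranges = [range(lo, hi + 1) for lo, hi in (bounds[n] for n in names)]
    return sum(1 for vals in product(*ranges)
               if constraint(dict(zip(names, vals))))
```

For instance, x + y ≤ 2 with x, y ∈ [0, 2] has 6 models; symbolic techniques for linear arithmetic compute such counts without enumerating the domain.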
|20:10||++++++++++ Warm Words, Moving to Social Online Gathering +++++++++||Johannes K. Fichte, Markus Hecher||Video|