Model Counting Competition 2022 (Call for Benchmarks)

** Please accept our apologies for cross-posting **

** It would be highly appreciated if you could disseminate this CFB among your colleagues **

Call for Benchmarks

The 3rd International Competition on Model Counting (MC 2022)

The Model Counting Competition MC 2022 invites the submission of collections of (weighted) model counting instances in the DIMACS-based submission formats specified for the competition tracks.
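
For illustration only, the following minimal Python sketch parses a tiny plain DIMACS CNF instance and counts its models by brute force. It is not the authoritative competition format; the track-specific headers and weight annotations are defined on the competition website.

  # Minimal sketch (not the authoritative competition format): parse a tiny
  # DIMACS CNF instance and count its models by exhaustive enumeration.
  from itertools import product

  cnf_text = """c tiny example instance
  p cnf 3 2
  1 -2 0
  2 3 0
  """

  def parse_dimacs(text):
      """Return (num_vars, clauses) from a plain DIMACS CNF string."""
      clauses, num_vars = [], 0
      for line in text.splitlines():
          line = line.strip()
          if not line or line.startswith("c"):
              continue
          if line.startswith("p"):
              num_vars = int(line.split()[2])
              continue
          lits = [int(tok) for tok in line.split() if tok != "0"]
          if lits:
              clauses.append(lits)
      return num_vars, clauses

  def count_models(num_vars, clauses):
      """Brute-force #SAT; feasible only for very small instances."""
      count = 0
      for assignment in product([False, True], repeat=num_vars):
          if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                 for clause in clauses):
              count += 1
      return count

  n, clauses = parse_dimacs(cnf_text)
  print(count_models(n, clauses))  # 4 satisfying assignments for this example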

Setting

Model counting is a vibrant field with recent advances both in theory and in practical solving, as well as a growing range of applications. State-of-the-art solvers rely on techniques from SAT-based solving, knowledge compilation, dynamic programming, or approximate counting by means of sampling with SAT solvers. The success in solving various problems in the area of satisfiability and declarative languages over the last two decades is reflected both in the availability of numerous efficient solver implementations and in the growing number of applications. Designing efficient solvers requires an understanding of the fundamental algorithms underlying the solvers as well as in-depth insight into how to implement these algorithms to obtain efficient and robust tools. Several competitive events are regularly organized for different declarative solving paradigms to evaluate available solvers on a wide range of problems; winners of such events often set new standards in the area. The Model Counting (MC) Competition aims to identify new challenging benchmarks, to promote new solvers for the problem, and to compare them with state-of-the-art solvers.

Benchmarks

Challenging and representative benchmarks are essential for meaningful comparisons of solvers. We invite submissions of both real-world benchmarks and benchmark generators to ensure a diverse benchmark set for the competition. In the case of (randomly) generated benchmarks, we would appreciate it if authors also made the generator publicly available. Submissions of real-world benchmarks are most welcome, regardless of whether they are taken directly from an application or obtained by translation from another formalism. Note that last year's instances are publicly available; please do not resubmit those instances or permuted versions of them. We encourage contributors to provide the instances as a dataset on the public data repository Zenodo (https://zenodo.org/) for submission. Since all instances will be made available to the community after the event, we expect the copyright of the dataset to allow publication under a CC-BY license. If instances are sensitive (e.g., infrastructure or health care), sufficient anonymization has to be applied prior to submission.
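
Purely as an illustration of what we mean by a reproducible generator, the following hypothetical Python sketch produces a random k-CNF instance from a fixed seed. It is not affiliated with the competition, and random instances are no substitute for the real-world benchmarks this call prefers.

  # Hypothetical sketch of a tiny, seedable random k-CNF benchmark generator,
  # only to illustrate that generated benchmarks should ship with a
  # reproducible generator.
  import random

  def random_kcnf(num_vars, num_clauses, k=3, seed=0):
      """Return a DIMACS CNF string for a random k-CNF formula (fixed seed)."""
      rng = random.Random(seed)
      lines = [f"c random {k}-CNF, seed={seed}", f"p cnf {num_vars} {num_clauses}"]
      for _ in range(num_clauses):
          variables = rng.sample(range(1, num_vars + 1), k)
          clause = [v if rng.random() < 0.5 else -v for v in variables]
          lines.append(" ".join(map(str, clause)) + " 0")
      return "\n".join(lines) + "\n"

  print(random_kcnf(num_vars=20, num_clauses=60, seed=42))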

Selection

From the existing and submitted benchmarks, we will select the instances on which submissions to MC 2022 are evaluated. The selected instances and their mappings will be publicly released on Zenodo.

Submission

Before February 15, 2022, contributors of benchmarks are expected to

  • register for benchmark submission at tinyurl.com/cfb-mc22 and decide between
    • (A) providing us with a link to download the instances or
    • (B) asking us for an upload space to place the instances;
  • submit the instances as one tar archive of bz2-compressed files in the format described on the website in Section 2 (see the packaging sketch after this list);
  • include a short abstract (at most 2 pages, single column, PDF) describing the dataset (and the generator, if applicable).
  • We appreciate it if submissions additionally contain the following information:
    • a file (checksums.txt) listing the sha256 checksum of each uncompressed, submitted instance and
    • a file (counts.txt) listing the known (weighted/projected) model counts, the expected hardness, and, if applicable, the runtime and the tool used to obtain the result.
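
As a rough illustration of the packaging described above, the following Python sketch bz2-compresses each instance, records the sha256 checksum of each uncompressed file in checksums.txt, and bundles everything into a single tar archive. The directory layout and file names other than checksums.txt/counts.txt are assumptions; the website remains the authoritative reference.

  # Sketch of one way to package a submission (assumed layout: uncompressed
  # .cnf instances in ./instances). Check the website for the authoritative
  # requirements.
  import bz2, hashlib, tarfile
  from pathlib import Path

  instance_dir = Path("instances")        # assumed location of the instances
  out_tar = Path("my-benchmarks.tar")

  checksum_lines = []
  compressed_files = []
  for cnf in sorted(instance_dir.glob("*.cnf")):
      data = cnf.read_bytes()
      # Checksum of the uncompressed instance, as requested in the call.
      checksum_lines.append(f"{hashlib.sha256(data).hexdigest()}  {cnf.name}")
      bz2_path = cnf.parent / (cnf.name + ".bz2")
      bz2_path.write_bytes(bz2.compress(data))
      compressed_files.append(bz2_path)

  Path("checksums.txt").write_text("\n".join(checksum_lines) + "\n")

  with tarfile.open(out_tar, "w") as tar:
      for f in compressed_files:
          tar.add(f, arcname=f.name)
      tar.add("checksums.txt")
      # counts.txt and the 2-page abstract PDF would be added here as well.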

Organization

Organizers

  • Johannes K. Fichte (TU Wien, Austria)
  • Markus Hecher (TU Wien, Austria; University of Potsdam, Germany)

Results will be presented at the SAT 2022 conference.