** Please accept our apologies for cross-posting **
** It would be highly appreciated if you could disseminate this CFB among your colleagues **
The 5th International Competition on Model Counting (MC 2024)
Model counting is a vibrant field that combines recent advances in theory and practical solving, with various applications. State-of-the-art search engines rely on techniques from SAT-based solving, knowledge compilation, dynamic programming, or approximate solving using sampling with SAT solvers. The success of solving various problems in the area of satisfiability and declarative languages over the last two decades can be seen in both the availability of numerous efficient solver implementations and the growing number of applications. Designing efficient solvers requires both an understanding of the fundamental algorithms underlying the solvers and in-depth insight into how to implement those algorithms to obtain efficient and robust solvers. Several competitive events are regularly organized for different declarative solving paradigms to evaluate the available solvers on various problems. Winners of such events often set new standards in the area. The Model Counting (MC) Competition aims to identify new challenging benchmarks, to promote new solvers for the problem, and to compare them with state-of-the-art solvers.
The Model Counting Competition MC 2024 invites submission of collections of (weighted) model counting instances in the DIMACS-based submission formats specified for the competition tracks.
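To illustrate the task, the following sketch parses a small weighted CNF instance written in a DIMACS-based style (the `c t wmc` header and `c p weight` lines follow recent MC competition conventions; this is a hypothetical example, and the official track descriptions remain the authoritative format specification) and computes its weighted model count by brute force:

```python
from itertools import product

# Hypothetical weighted model counting instance in a DIMACS-based style:
# two clauses (x1 v x2) and (-x1 v x3) over three variables, with
# weight 0.4 for literal 1 and 0.6 for literal -1 (others default to 1).
instance = """\
c t wmc
p cnf 3 2
c p weight 1 0.4 0
c p weight -1 0.6 0
1 2 0
-1 3 0
"""

def weighted_count(text):
    """Brute-force weighted model count of a tiny DIMACS-style instance."""
    clauses, weights, n = [], {}, 0
    for line in text.splitlines():
        tok = line.split()
        if not tok:
            continue
        if tok[:3] == ["c", "p", "weight"]:
            weights[int(tok[3])] = float(tok[4])   # literal -> weight
        elif tok[0] == "p":
            n = int(tok[2])                        # number of variables
        elif tok[0] != "c":
            clauses.append([int(x) for x in tok[:-1]])  # drop trailing 0
    total = 0.0
    for bits in product([False, True], repeat=n):
        assign = {v + 1: b for v, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            w = 1.0
            for v, b in assign.items():
                w *= weights.get(v if b else -v, 1.0)
            total += w
    return total

print(weighted_count(instance))  # 2.0 for this instance
```

Enumerating all 2^n assignments is of course only feasible for toy instances; competition solvers rely on the far more sophisticated techniques mentioned above.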
Challenging and representative benchmarks are essential for meaningful comparisons of solvers. We invite submissions of both real-world benchmarks and benchmark generators to ensure a diverse benchmark set for the competition. In the case of (randomly) generated benchmarks, we would be happy if the authors also made the generator publicly available. Submissions of real-world benchmarks are most welcome, whether they have been taken directly from an application or obtained by translation from another formalism. Note that last year's instances are publicly available; please do not send us those instances or permuted versions again. We encourage contributors to submit their instances as a dataset on the public data repository Zenodo (https://zenodo.org/). Since all instances will be made available to the community after the event, we expect the copyright of the dataset to allow publication under a CC-BY license. If instances are sensitive (e.g., infrastructure/health-care), sufficient anonymization has to be applied prior to submission.
From the existing and submitted benchmarks, we will select the instances on which submissions to MC 2024 will be evaluated. Instances and mappings will be publicly released on Zenodo.
Contributors are expected to submit their benchmark instances before March 30, 2024.
Results will be presented at the SAT 2024 conference.