VMCAI 2020 - 21st International Conference on Verification, Model Checking, and Abstract Interpretation
Welcome to the website of the 21st International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI 2020).
VMCAI provides a forum for researchers from the communities of Verification, Model Checking, and Abstract Interpretation, facilitating interaction, cross-fertilization, and advancement of hybrid methods that combine these and related areas. VMCAI 2020 will be the 21st edition in the series.
Proceedings
https://link.springer.com/book/10.1007/978-3-030-39322-9
Invited Speakers
We are pleased to announce that the invited speakers for VMCAI 2020 are:
Program Schedule
The VMCAI program can be found here.
A remark about the schedule: the official schedule reflects the order of the talks, but the times it displays are not accurate (and we cannot change that). VMCAI will follow the schedule below:
Session | Time | Session | Time |
Morning I | 09:00 - 10:00 | Afternoon I | 14:00 - 15:05 |
BREAK | 10:00 - 10:30 | BREAK | 15:05 - 15:35 |
Morning II | 10:30 - 11:35 | Afternoon II | 15:35 - 16:40 |
MINI BREAK | 11:35 - 11:45 | MINI BREAK | 16:40 - 16:50 |
Morning III | 11:45 - 12:30 | Afternoon III | 16:50 - 17:35 |
LUNCH | 12:30 - 14:00 | | |
Reception
On Monday, January 20th, VMCAI will host a reception from 7pm to 10pm at the New Orleans BioInnovation Center.
Winter School
To complement the conference, we are organizing a winter school. See the Winter School page.
Registration
VMCAI registration is handled through the POPL registration; select VMCAI in step 2 of the registration form.
Sun 19 Jan (time zone: Saskatchewan, Central America)

09:00 - 10:00
09:00 | 60m | Talk | The Siren Song of Temporal Synthesis | Moshe Vardi (Rice University)

10:30 - 12:30
10:30 | 30m | Talk | Witnessing Secure Compilation
11:00 | 30m | Talk | BackFlow: Backward Context-sensitive Flow Reconstruction of Taint Analysis Results | Pietro Ferrara (Università Ca' Foscari, Venezia, Italy), Luca Olivieri (JuliaSoft SRL), Fausto Spoto (U. Verona)
11:30 | 30m | Coffee break | Mini Break
12:00 | 30m | Talk | Fixing Code That Explodes Under Symbolic Evaluation | Sorawee Porncharoenwase (University of Washington), James Bornholt (University of Texas at Austin), Emina Torlak (University of Washington)

12:30 - 14:00
12:30 | 90m | Lunch | Lunch Catering

14:00 - 15:05
14:00 | 65m | Talk | Safety and Robustness for Deep Learning with Provable Guarantees | Marta Kwiatkowska (University of Oxford)

15:35 - 17:45
15:35 | 32m | Talk | The Correctness of a Code Generator for a Functional Language
16:07 | 32m | Talk | Leveraging Compiler Intermediate Representation for Multi- and Cross-Language Verification | Jack Garzella, Marek S. Baranowski, Shaobo He, Zvonimir Rakamaric (University of Utah)
16:40 | 32m | Coffee break | Mini Break
17:12 | 32m | Talk | Putting the Squeeze on Array Programs: Loop Verification via Inductive Rank Reduction | Oren Ish-Shalom (Tel Aviv University, Israel), Shachar Itzhaky (Technion, Israel), Noam Rinetzky (Tel Aviv University), Sharon Shoham (Tel Aviv University)
Mon 20 Jan (time zone: Saskatchewan, Central America)

09:00 - 10:00
09:00 | 60m | Talk | Model Checking for Safe Autonomy | Rajeev Alur (University of Pennsylvania)

10:30 - 12:30
10:30 | 30m | Research paper | A Systematic Approach to Abstract Interpretation of Program Transformations
11:00 | 30m | Talk | Sharing Ghost Variables in a Collection of Abstract Domains
11:30 | 30m | Coffee break | Mini Break
12:00 | 30m | Talk | Harnessing Static Analysis to Help Learn Pseudo-Inverses of String Manipulating Procedures for Automatic Test Generation | Oren Ish-Shalom (Tel Aviv University, Israel), Shachar Itzhaky (Technion, Israel), Roman Manevich (Mellanox Technologies), Noam Rinetzky (Tel Aviv University)

12:30 - 14:00
12:30 | 90m | Lunch | Lunch Catering

14:00 - 15:05
14:00 | 32m | Talk | Synthesizing Environment Invariants for Modular Hardware Verification | Hongce Zhang (Princeton University), Weikun Yang (Princeton University), Grigory Fedyukovich (Florida State University), Aarti Gupta (Princeton University), Sharad Malik (Princeton University)
14:32 | 32m | Talk | Systematic Classification of Attackers via Bounded Model Checking | Eric Rothstein-Morris (Singapore University of Technology and Design), Jun Sun (Singapore Management University), Sudipta Chattopadhyay (Singapore University of Technology and Design)

15:35 - 17:45
15:35 | 32m | Talk | Cheap CTL Compassion in NuSMV | Daniel Hausmann (Friedrich-Alexander-Universität Erlangen-Nürnberg), Tadeusz Litak (FAU Erlangen-Nuremberg, INF 8), Christoph Rauch (FAU Erlangen-Nürnberg, Lehrstuhl 8), Matthias Zinner (FAU Erlangen-Nürnberg)
16:07 | 32m | Talk | A Cooperative Parallelization Approach for Property-Directed k-Induction | Martin Blicha (Università della Svizzera italiana), Antti Hyvärinen (Università della Svizzera italiana), Matteo Marescotti (Università della Svizzera italiana), Natasha Sharygina (USI Lugano, Switzerland)
16:40 | 32m | Coffee break | Mini Break
17:12 | 32m | Talk | Generalized Property-Directed Reachability for Hybrid Systems
Tue 21 Jan (time zone: Saskatchewan, Central America)

09:00 - 10:00
09:00 | 30m | Talk | How to Win First-Order Safety Games | Helmut Seidl (Technische Universität München), Christian Müller (Technical University of Munich), Bernd Finkbeiner (Saarland University)
09:30 | 30m | Talk | Improving Parity Game Solvers with Justifications | Ruben Lapauw (Katholieke Universiteit Leuven), Maurice Bruynooghe (Katholieke Universiteit Leuven), Marc Denecker (Katholieke Universiteit Leuven)

10:30 - 12:30
10:30 | 30m | Talk | Language Inclusion for Finite Prime Event Structures | Andreas Fellner (AIT - Austrian Institute of Technology), Thorsten Tarrach (Austrian Institute of Technology), Georg Weissenbacher (Technische Universität Wien)
11:00 | 30m | Talk | Promptness and Bounded Fairness in Concurrent and Parameterized Systems | Swen Jacobs (CISPA Helmholtz Center for Information Security), Mouhammad Sakr (Saarland University), Martin Zimmermann (University of Liverpool)
11:30 | 30m | Coffee break | Mini Break
12:00 | 30m | Talk | Solving LIA* Using Approximations | Maxwell Levatich (Yale), Nikolaj Bjørner (Microsoft Research), Ruzica Piskac (Yale University, USA), Sharon Shoham (Tel Aviv University)

12:30 - 14:00
12:30 | 90m | Lunch | Lunch Catering

14:00 - 15:05
14:00 | 32m | Talk | Formalizing and Checking Multilevel Consistency | Ahmed Bouajjani (IRIF, Université Paris Diderot), Constantin Enea (IRIF, University Paris Diderot & CNRS), Madhavan Mukund (Chennai Mathematical Institute), Gautham Shenoy R (Chennai Mathematical Institute), S.P. Suresh (Chennai Mathematical Institute)
14:32 | 32m | Talk | Practical Abstractions for Automated Verification of Shared-Memory Concurrency | Wytse Oortwijn (ETH Zürich), Dilian Gurov (KTH Royal Institute of Technology), Marieke Huisman (University of Twente)

15:35 - 17:45
15:35 | 2h10m | Panel | "The Future of Software Verification" | Lenore Zuck (UIC), Michael Whalen (Amazon Web Services and the University of Minnesota), Natarajan Shankar (SRI International, USA), Andreas Podelski (University of Freiburg, Germany), Dirk Beyer (LMU Munich)
Accepted Papers
Call for Papers
21st International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI 2020), January 19-21, 2020, New Orleans, Louisiana, United States.
VMCAI provides a forum for researchers from the communities of Verification, Model Checking, and Abstract Interpretation, facilitating interaction, cross-fertilization, and advancement of hybrid methods that combine these and related areas.
Scope
The program of VMCAI 2020 will consist of refereed research papers as well as invited lectures and tutorials. Research contributions can report new results as well as experimental evaluations and comparisons of existing techniques. Topics include, but are not limited to:
- Program Verification
- Model Checking
- Abstract Interpretation
- Abstract Domains
- Program Synthesis
- Static Analysis
- Type Systems
- Deductive Methods
- Program Logics
- First-Order Theories
- Decision Procedures
- Interpolation
- Horn Clause Solving
- Program Certification
- Separation Logic
- Probabilistic Programming and Analysis
- Error Diagnosis
- Detection of Bugs and Security Vulnerabilities
- Program Transformations
- Hybrid and Cyber-physical Systems
- Concurrent Systems
- Analysis of Numerical Properties
- Case Studies on all of the above topics
Submissions can address any programming paradigm, including concurrent, constraint, functional, imperative, logic, and object-oriented programming.
Important Dates
Abstract submission | 2019-10-01 |
Paper submission | 2019-10-06 |
Notification | 2019-11-07 |
Camera-ready | 2019-11-15 |
Conference | 2020-01-19 – 2020-01-21 |
Submissions
Submissions are restricted to 20 pages in Springer’s LNCS format, not counting references. Additional material may be placed in an appendix, to be read at the discretion of the reviewers and to be omitted in the final version. Formatting style files and further guidelines for formatting can be found at the Springer website.
Submission link: https://easychair.org/conferences/?conf=vmcai2020
Accepted papers will be published in Springer’s Lecture Notes in Computer Science series.
There will be two categories of papers: regular and tool papers. Both types of paper have the same format but will be evaluated differently.
- Regular papers clearly identify and justify an advance to the field of verification, abstract interpretation, or model checking. Where applicable, they are supported by experimental validation (up to 20 pages + refs).
- Tool papers present a new tool, a new tool component, or novel extensions to an existing tool. They should provide a short description of the theoretical foundations with relevant citations, and emphasize design and implementation concerns, including software architecture and core data structures. A regular tool paper should give a clear account of the tool's functionality, discuss the tool's practical capabilities with reference to the type and size of problems it can handle, describe experience with realistic case studies, and, where applicable, provide a rigorous experimental evaluation. Papers that present extensions to existing tools should clearly focus on the improvements or extensions with respect to previously published versions of the tool, preferably substantiated by data on enhancements in terms of resources and capabilities. Authors are strongly encouraged to make their tools publicly available and to submit an artifact (up to 20 pages + refs; short tool papers are also welcome).
Artifacts
VMCAI 2020 introduces the possibility to submit an artifact alongside a paper. Artifacts are any additional material that substantiates the claims made in the paper and, ideally, makes them fully replicable. Submitting an artifact is encouraged but not required. The artifact will be evaluated in parallel with the submission by the artifact evaluation committee (AEC). The AEC will read the paper and evaluate the artifact on the following criteria:
- consistency with and replicability of results in the paper,
- completeness,
- documentation, and
- ease of use.
More information can be found on the artifact webpage: Call for Artifacts
Call for Artifacts
VMCAI 2020 introduces the possibility to submit an artifact alongside a paper. Artifacts are any additional material that substantiates the claims made in the paper and, ideally, makes them fully replicable. For some papers, these artifacts are as important as the paper itself because they provide crucial evidence for the quality of the results.
The goal of artifact evaluation is twofold. On the one hand, we want to encourage authors to provide more substantial evidence for their papers and to reward authors who create artifacts. On the other hand, we want to simplify the independent replication of the results presented in the paper and to ease future comparisons with existing approaches.
Artifacts of interest include (but are not limited to):
- Software, Tools, or Frameworks
- Data sets
- Test suites
- Machine checkable proofs
- Any combination of them
- Any other artifact described in the paper
Artifact submission is optional. However, we highly encourage all authors to also submit an artifact. A successfully evaluated artifact can increase your chance of being accepted since the evaluation result of your artifact is taken into account during paper reviewing. Additionally, badges shown on the title page of the corresponding paper give you credit for good artifact submissions.
We award two types of badges, granted independently of each other. Artifacts that are successfully evaluated by the artifact evaluation committee are granted the replicated badge. Artifacts that are publicly available under a DOI receive the availability badge. Authors may use all granted badges on the title page of the respective paper.
Important Dates
The artifact evaluation will be done in parallel with the review of the paper submission. Artifacts are submitted less than one week after the paper submission.
Artifact registration (abstract submission) | 8th Oct 2019 |
Artifact submission | 14th Oct 2019 (extended) |
Artifact test phase notification | 17th Oct 2019 |
Artifact clarification period | 17th-20th Oct 2019 |
Artifact notification | 7th Nov 2019 |
Artifact Evaluation
All artifacts are evaluated by the artifact evaluation committee. Each artifact will be reviewed by at least two committee members. Reviewers will read the paper and explore the artifact to evaluate how well the artifact supports the claims and results of the paper. The evaluation is based on the following questions.
- Is the artifact consistent with the paper and the claims made by the paper?
- Are the results of the paper replicable through the artifact?
- Is the artifact complete, i.e., how many of the results of the paper are replicable?
- Is the artifact well-documented?
- Is the artifact easy to use?
The artifact evaluation is performed in the following two phases.
- In the test phase, reviewers check whether the artifact is functional, i.e., they look for setup problems (e.g., corrupted or missing files, or crashes on simple examples). If any problems are detected, the authors are informed of the outcome and asked for clarification; they get three days to respond to the reviews in case problems are encountered.
- In the assessment phase, reviewers try to reproduce any experiments or activities and evaluate the artifact with respect to the questions detailed above.
Artifacts Submission
An artifact submission should consist of
- an abstract that summarizes the artifact and explains its relation to the paper, including
  ◦ a URL from which a .zip file containing the artifact can be downloaded (we encourage you to provide a DOI), and
  ◦ the SHA256 checksum of the .zip file, and
- a .pdf file of the submitted paper.
The artifact evaluation chairs will download the .zip file and distribute it to the reviewers. Please also look at the Artifact Packaging Guidelines section for detailed information about the contents of the submission.
The abstract, including the URL of the download link and the SHA256 checksum of your .zip file, together with the .pdf file of your paper, must be submitted via EasyChair:
https://easychair.org/conferences/?conf=vmcai2020ae
We need the checksum to ensure the integrity of your artifact. You can generate the checksum using the following command-line tools.
- Linux:
sha256sum <file>
- Windows:
CertUtil -hashfile <file> SHA256
- macOS:
shasum -a 256 <file>
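As a convenience, you can also verify that the file you uploaded matches the published checksum, for example on Linux as follows (a minimal sketch; replace <expected-sha256> and artifact.zip with your own checksum and file name):
echo "<expected-sha256>  artifact.zip" | sha256sum -c -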
If you cannot submit the artifact as requested or encounter any other difficulties in the submission process, please contact the artifact evaluation chairs prior to submission.
Artifact Packaging Guidelines
We expect authors to package their artifact (.zip file) and write their instructions such that the artifact evaluation committee can evaluate the artifact within a virtual machine provided by us. Submit only the files required to replicate your results in the provided virtual machine. Do not submit a virtual machine image in the .zip file; AEC members will copy your .zip file into the provided virtual machine.
VMCAI 2020 Virtual Machine
Note The final version of the virtual machine is now available from https://doi.org/10.5281/zenodo.3533104.
The VMCAI 2020 virtual machine was created with VirtualBox 6.0.8 and consists of an installation of Ubuntu 19.04 with Linux 5.0.0-31 and the following notable packages.
- OCaml 4.09.0
- OpenJDK 1.8.0_222
- Mono 6.4.0.198
- ruby 2.5.5p157
- Python 2.7.16 and Python 3.7.3
- bash 5.0.3
- cmake 3.13.4-1
- clang 8.0.0.3
- gcc 8.3.0
- Racket 7.4
- VIM 8.1
- Emacs 26.1
- Coq 8.9.1 with CoqIDE
- benchexec 2.2
- TexLive 2019
- A 32bit libc
- VirtualBox guest additions
The VM has a user vmcai2020 with password vmcai2020. The root user has the same password.
In order to save space, the VM does not have an active swap file. Please mention in your submission if you expect that a swap file is needed. You can activate swap for the running session using the following commands.
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
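If you want to confirm that the swap space is active, or remove it again later, the following standard commands can be used (a convenience sketch only, not part of the required setup):
swapon --show
sudo swapoff /swapfile
sudo rm /swapfile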
The artifact evaluation committee will be instructed not to download software or data from external sources. Any additional software required by your artifact must be included in the .zip file, and the artifact must provide instructions for its installation. To include an Ubuntu package in your artifact submission, you can create a .deb file with all the necessary dependencies from inside the VM. Reviewers can then install it with sudo dpkg -i <.deb file>. You can create the necessary .deb files, for example, as follows (a consolidated sketch follows the list).
- If you have only one package without dependencies, you can use apt-get download <packagename>
- If you have only one package without dependencies but with local modifications, e.g., particular configuration files, you can use the dpkg-repack utility
- If you have a package with multiple dependencies, you can use wget together with apt to download them all and put them into a folder: wget $(apt-get install --reinstall --print-uris -qq <packagename> | cut -d"'" -f2)
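To illustrate how these pieces fit together, one possible end-to-end sketch looks as follows (here <packagename> and the debs folder are placeholders; run the download steps inside the VM before packaging your artifact):
mkdir debs && cd debs
wget $(apt-get install --reinstall --print-uris -qq <packagename> | cut -d"'" -f2)
cd ..
# include the debs/ folder in your .zip; reviewers can then install everything with:
sudo dpkg -i debs/*.deb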
If you think the VM is unsuitable for evaluating your artifact, please contact the artifact evaluation chairs prior to artifact submission.
Submission Contents
Your artifact .zip file must contain the following elements.
- The main artifact, i.e., data, software, libraries, scripts, etc. required to replicate the results of your paper.
  ◦ The review will be single-blind. Please make sure that you do not (accidentally) learn the identity of the reviewers (e.g., through analytics or logging).
  ◦ We recommend preparing your artifact in such a way that any computer science expert without dedicated expertise in your field can use it, especially to replicate your results. For example, provide easy-to-use scripts and a detailed README document.
- A license file. Your license needs to allow the artifact evaluation chairs to download the artifact and distribute it to the artifact evaluation committee members, and the committee members must be allowed to evaluate the artifact, e.g., use, execute, and modify it for the purpose of artifact evaluation.
- A README text file that introduces the artifact to the user and guides the user through the replication of your results. Ideally, it should consist of the following parts (a hypothetical skeleton is sketched after this list).
- We recommend describing the structure and content of your artifact.
- It should describe the steps to set up your artifact within the provided VMCAI 2020 VM. To simplify the reviewing process, we recommend providing an installation script (if necessary).
- We would appreciate it if you supported the reviewers not only in the main review phase but also in the test phase. To this end, it is helpful to provide instructions that allow installation and rudimentary testing in as little time as possible, so that technical difficulties surface early.
- Document in detail how to replicate the results of the paper.
- Please document which claims or results of the paper can be replicated with the artifact and how (e.g., which experiment must be performed). Please also explain which claims and results cannot be replicated and why.
- Describe in detail how to replicate the results in the paper, in particular the steps that need to be performed. To simplify the reviewing process, we recommend providing evaluation scripts (where applicable).
- Precisely state the resource requirements (RAM, number of cores, CPU frequency, etc.) that you used to test your artifact. Your resource requirements should be modest and allow replication of the results even on laptops.
- For each task/step of the replication, please provide an estimate of how long it will take (or how long it took for you) and state exactly which machine(s) you used.
- For tasks that require a large amount of resources (hardware or time), we recommend providing the possibility to replicate a subset of the results with reasonably modest resource and time limits, e.g., within 8 hours on a reasonable personal computer. In this case, please also include a script to replicate only a subset of the results. If this is not possible, please contact the artifact evaluation chairs early, but no later than before submission.
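For illustration, a README could be organized roughly as follows (a hypothetical skeleton; the script names are placeholders, not requirements):
- Overview: the structure and content of the artifact.
- Setup: the steps (or a script, e.g., ./install.sh) to install the artifact in the VMCAI 2020 VM.
- Smoke test: a quick check that the installation works (e.g., ./run_smoke.sh, taking only a few minutes).
- Replication: which script reproduces which claim, table, or figure of the paper (e.g., ./run_all.sh), with expected running times and resource requirements per step.
- Limitations: which claims or results of the paper are not covered by the artifact, and why.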
Publication of Artifacts
The artifact evaluation committee uses the submitted artifact only for the artifact evaluation. It will not publicize the artifact or any part of it during or after the evaluation. Artifacts and all associated data will be deleted at the end of the evaluation process. We encourage authors to also make their artifacts permanently available, e.g., on Zenodo or figshare.com, and to refer to them in their papers via a DOI. All artifacts with a DOI known to the artifact evaluation committee are granted the availability badge.
Badges
Our badges were kindly provided by Claus Schätzle and are licensed under CC0. They are also available from GitHub. You will receive an email from the artifact evaluation chairs detailing which badges you can use.
VMCAI Winter School
The VMCAI Winter School is the second winter school on formal methods associated with VMCAI 2020, which will take place in New Orleans, Louisiana on January 16-18, 2020. In the vein of VMCAI, the school is meant to facilitate interaction, cross-fertilization, and advancement of hybrid methods that combine verification, model checking, and abstract interpretation.
The school is aimed primarily at PhD students, but we will also consider applications from senior undergraduate and master's students who intend to continue their studies in the field of verification.
Program
The VMCAI Winter School program will feature lectures and tutorials by experts from both academia and industry. The school will cover several fundamental aspects of formal methods and their applications.
The following speakers and topics are already confirmed:
- Dirk Beyer (Ludwig-Maximilians-Universität München, Germany): Benchmarking and competitions
- Igor Konnov (Interchain Foundation (Switzerland), Austria): Model checking of distributed algorithms: from classics towards Tendermint blockchain
- Corina Pasareanu (NASA Ames and Carnegie Mellon University, USA): Symbolic execution
- Andreas Podelski (University of Freiburg, Germany): Software model checking
- Natasha Sharygina (University of Lugano, Switzerland): Symbolic model checking with interpolation and IC3
- Helmut Seidl (TU Munich, Germany): Abstract interpretation of concurrent programs
- Moshe Vardi (Rice University, USA): Logic, automata, games, and algorithms
- Mike Whalen (Amazon, USA): Reasoning about the security of Amazon Web Services
- Valentin Wüstholz (Consensys Diligence, Germany): Testing and verification of smart contracts
- Damien Zufferey (Max Planck Institute for Software Systems, Germany): Programming abstractions for verifiable software
Schedule
Here is the preliminary schedule for the winter school:
Time | January 16 | January 17 | January 18 |
---|---|---|---|
09:00 - 10:30 | Helmut Seidl: Abstract interpretation of concurrent programs I | Corina Pasareanu: Symbolic execution I | Natasha Sharygina: Symbolic model checking with interpolation and IC3 I |
10:40 - 12:00 | Andreas Podelski: Software model checking I | Dirk Beyer: Benchmarking and competitions I | Mike Whalen: Reasoning about the security of Amazon Web Services |
12:40 - 14:00 | Helmut Seidl: Abstract interpretation of concurrent programs II | Corina Pasareanu: Symbolic execution II | Natasha Sharygina: Symbolic model checking with interpolation and IC3 II |
14:10 - 15:30 | Andreas Podelski: Software model checking II | Dirk Beyer: Benchmarking and competitions II | Moshe Vardi: Logic, automata, games, and algorithms |
15:40 - 17:00 | Igor Konnov: Model checking of distributed algorithms: from classics towards Tendermint blockchain | Valentin Wüstholz: Testing and verification of smart contracts | Damien Zufferey: Programming Abstractions for Verifiable Software |
Registration
The registration for the winter school is free but mandatory. Please take into consideration that the number of attendees and the available support are limited. Register as soon as you are sure you can attend the Winter School.
As part of the registration, you can apply for travel and accommodation support. Furthermore, we offer help with finding roommates to reduce accommodation costs. Students with alternative sources of funding are welcome.
Please fill in the form linked below to register.
Registration form. At this time, we no longer provide support, but we still have a few slots if you come with your own funding.
Venue
The 2nd VMCAI Winter School will take place at the New Orleans BioInnovation Center. The school location and schedule have been chosen to integrate nicely with POPL (https://popl20.sigplan.org/) and VMCAI (https://popl20.sigplan.org/home/VMCAI-2020), which will take place in New Orleans, Louisiana, from January 19-25, 2020.
Organizers
- Dirk Beyer, LMU Munich, Germany
- Damien Zufferey, MPI-SWS, Germany