Model checking is great! I, too, wrote a simple C# model checker to test some tricky serialization code inside Microsoft. It's now open source: https://github.com/ahelwer/FiniteModelChecker
It requires a fair amount of boilerplate (see the examples in the tests directory) but it is so, so much more powerful than manually coding a bunch of unit test cases that it was worth it (and it unearthed a tricky bug!)
I think a great project would be to write a spec in TLA+, run its model checker in simulation mode, then use the generated event traces to execute corresponding event traces in this (or some other) model checker, all the while translating the program's state back into values understood by TLA+ for checking against the TLA+ spec's invariants. Would be a nice lightweight method of enforcing correspondence between spec and code.
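To make the proposed workflow concrete, here's a minimal sketch of the replay step in Python. Everything here is hypothetical (the `Counter` implementation, the `abstract` mapping, the event names): the point is just that each generated event drives the real implementation, and after each step the implementation's state is translated back into spec-level values and checked against the spec's invariant.

```python
# Hypothetical sketch of the trace-replay idea: events generated by a
# TLC simulation run drive the implementation under test, and after
# each step the concrete state is abstracted back to "spec values" and
# checked against the spec's invariant. All names are illustrative.

class Counter:
    """Stand-in for the real implementation under test."""
    def __init__(self):
        self.value = 0

    def apply(self, event):
        if event == "inc":
            self.value += 1
        elif event == "reset":
            self.value = 0

def abstract(impl):
    # Map concrete implementation state into the spec's state space.
    return {"count": impl.value}

def replay(trace, invariant):
    impl = Counter()
    for event in trace:
        impl.apply(event)
        spec_state = abstract(impl)
        if not invariant(spec_state):
            return False, spec_state  # correspondence broken here
    return True, abstract(impl)

ok, final = replay(["inc", "inc", "reset", "inc"],
                   lambda s: 0 <= s["count"] <= 2)
```

The interesting engineering work, of course, is in `abstract`: deciding how much of the implementation's state is observable at the spec's level of abstraction.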
If you let your students model their distributed system with pure functions where each function takes the current state and an event (for example a received message, or a timeout) as inputs and produces a new state together with the actions to be taken (sending of messages), then it's possible (and rather easy) to build the state space in a simulator and verify it. It might explode, but then again, Alloy shows that most bugs in these algorithms already show themselves after the first few steps.
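A minimal sketch of that shape in Python, assuming a toy protocol where two nodes ping-pong an incrementing counter (the model, events, and bound are all made up for illustration). `step(state, event) -> (new_state, actions)` is the pure-function signature described above, and `explore` builds the reachable state space breadth-first, checking an invariant in every state:

```python
from collections import deque

# Toy model: two nodes ping-pong a counter via messages. An event is a
# pending message (destination_node, payload); "actions" are the
# messages the step sends in response. All names are illustrative.

def step(state, event):
    node, count = event
    new_state = dict(state)
    new_state[node] = max(state[node], count)
    # Reply to the other node with an incremented counter, up to a bound.
    actions = [(1 - node, count + 1)] if count < 3 else []
    return new_state, actions

def explore(init_state, init_events, invariant):
    """Breadth-first search over the reachable state space."""
    seen = set()
    queue = deque([(init_state, tuple(init_events))])
    while queue:
        state, pending = queue.popleft()
        key = (tuple(sorted(state.items())), pending)
        if key in seen:
            continue
        seen.add(key)
        assert invariant(state), f"invariant violated in {state}"
        # Nondeterministically deliver any one of the pending messages.
        for i, event in enumerate(pending):
            rest = pending[:i] + pending[i + 1:]
            new_state, actions = step(state, event)
            queue.append((new_state, rest + tuple(actions)))
    return len(seen)

states = explore({0: 0, 1: 0}, [(0, 0)], lambda s: max(s.values()) <= 3)
```

Because `step` is pure, the simulator owns all the nondeterminism (message delivery order), which is exactly what makes exhaustive exploration feasible.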
> If you let your students model their distributed system with pure functions where each function takes the current state and an event (for example a received message, or a timeout) as inputs and produces a new state together with the actions to be taken (sending of messages)...
If they were going to do that, they should have just used Erlang instead. That’s precisely the OTP model.
Cloud types are pretty nice too [1], and they expose a familiar branch/merge model for anyone who has any familiarity with version control systems. Which should be every developer at this point.
I doubt there's one true distributed systems model, but there are some good ones which overlap, and some which excel in different domains.
And now I see this, and I'm already having second thoughts. Prima facie, what I like about DSLabs is its less steep learning curve, since it's written in a mainstream OOP language. Can anyone who has worked on both TLA+ and DSLabs share thoughts?
While I haven't used DSLabs, I did suffer a similar bout of indecision recently trying to decide between Alloy[0] and TLA+[1]. In the end I don't think there is a correct answer, and there are probably more similarities than differences between any two specification languages. In my case I read both Software Abstractions and Specifying Systems (part 1 at least). Each took about a week, which is probably less time than I spent trying to decide between them.
I happen to agree with the argument made by proponents of both TLA+ and Alloy that model checking and specification are better accomplished when detached from an implementation language, but I think you would benefit most from simply starting with one of them (DSLabs or TLA+) and deciding later if you'd like to learn the other.
I started learning TLA+ in October (or so), and after getting reasonably proficient (as in, I can write PlusCal without many issues, and can write and read basic plain TLA+), I'm starting to learn Alloy now. It offers a different approach that seems to suit different domains (say, with less of a "temporal" focus).
Speaking as a person who teaches both TLA+ and Alloy, I have a sneaking suspicion that the best way to learn the idea of model checking is _none of the three_.
I haven't seriously sat down and done a deep dive yet, but I'm really liking what I've seen of Runway: https://runway.systems/. Its advantages over all three are
1. A repl
2. A repl
3. You can try it online
4. Oh my god, there's a repl
The core hasn't been updated in three years, so I'm not as ludicrously-overhyped on applying it to real systems, but to build your model thinking? I really like what I've seen so far.
I don't know about DSLabs' model checker but the advantage you get from TLA+ is that it's plain, simple maths. It can be used for specifying systems at many levels of abstraction that a programming language cannot. I'd be curious about how expressive DSLabs' language is, being based on Java -- can it express temporal properties as well as safety properties?
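To illustrate the distinction being drawn here, a toy sketch (not DSLabs or TLA+ APIs; the trace and predicates are made up): a safety property can be checked state-by-state as an invariant, while a temporal property like "eventually done" is a predicate over an entire behavior, i.e. a sequence of states.

```python
# Toy illustration: a safety property ([] pred in TLA+ notation) holds
# in every state of a behavior; a temporal property like <> pred only
# makes sense over the behavior as a whole. Finite traces stand in for
# the (possibly infinite) behaviors a real checker would consider.

trace = [{"x": 0, "done": False},
         {"x": 1, "done": False},
         {"x": 2, "done": True}]

def always(pred, behavior):
    # [] pred: the predicate holds in every state.
    return all(pred(s) for s in behavior)

def eventually(pred, behavior):
    # <> pred: the predicate holds in at least one state.
    return any(pred(s) for s in behavior)

safety_ok = always(lambda s: s["x"] <= 2, trace)      # an invariant
liveness_ok = eventually(lambda s: s["done"], trace)  # a temporal property
```

A checker that only evaluates predicates on individual states can handle the first kind of property but not the second, which is the expressiveness question being asked about DSLabs.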
The TLA+ toolbox also includes other tools besides the model checker: a pretty printer and a proof system.