A favorite activity for my partner and me is visiting brewery taprooms; often, we’ll combine a trip with some kind of casual game (usually Odin’s Ravens 2e), but we don’t always haul that out with us. A few years ago we grabbed a game called Farkle off the shelf at Denver Beer Company, and we gave it a try.
It’s quite straightforward: the components are just 6d6, and players take turns rolling all of them, drafting dice out of the most recent roll that meet scoring criteria, then either rolling again or banking their points and passing to the next player. There’s a cool press-your-luck angle: if a throw contains no scorable dice, the player has encountered the titular “Farkle,” and their turn ends with zero points.
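The turn structure above is simple enough to sketch in a few lines. A minimal Python sketch follows; since the post doesn’t enumerate the actual scoring criteria, it assumes (as a stand-in) only the common Farkle rule that individual 1s and 5s score, and it omits extras like the hot-dice rule. The function names and the three-roll cap are mine, not from the game’s rules.

```python
import random

def scoring_dice(roll):
    """Return the dice in a roll that meet the (simplified) scoring criteria.

    Stand-in rule: only individual 1s and 5s score.
    """
    return [d for d in roll if d in (1, 5)]

def play_turn(max_rolls=3, rng=random):
    """Simulate one turn: roll, draft scoring dice, reroll the rest.

    Banks after max_rolls throws. A throw with no scoring dice is a
    "Farkle" and the whole turn scores zero.
    """
    dice_left = 6
    banked = 0
    for _ in range(max_rolls):
        roll = [rng.randint(1, 6) for _ in range(dice_left)]
        drafted = scoring_dice(roll)
        if not drafted:
            return 0  # Farkle: the turn ends with nothing
        # Common values: a drafted 1 is worth 100, a drafted 5 is worth 50.
        banked += sum(100 if d == 1 else 50 for d in drafted)
        dice_left -= len(drafted)
        if dice_left == 0:
            break  # all six dice scored; stop here (no hot-dice rule)
    return banked
```

A real simulation would also need a banking *policy* (when to stop rolling), which is exactly the kind of abstract-thought problem discussed below.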
A conversation I encounter with some regularity in the board game space (and rightly so) is the notion of automated testing — codifying all the rules of a design into software, and running repeated simulations to ferret out mathematical edge cases that playtesters might miss. In fact, from a certain viewpoint, it sometimes feels absurd that this isn’t a more common practice.
On the other hand, something has always made me skeptical of the utility of doing so. Correct, working code isn’t exactly easy to write, and in-progress board game rules are usually in a continuous state of flux during testing. Not to mention, playing games well requires a lot of abstract thought, so a simulation that plays well is harder still. And this type of stochastic simulation reveals nothing to a designer about the actual fun, so its overall utility is limited to finding math issues.
With all this in my head, during our most recent Farkle throwdown, I realized that this game could be an easy way to try it out: each turn is discrete, so there’s no interaction between players, and in fact, within a turn, each roll of the dice is discrete too, because drafted scoring dice don’t combine with previously drafted ones.
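Because each throw is independent, even a tiny Monte Carlo loop can estimate something concrete — say, how often a throw of n dice farkles. A minimal sketch, again assuming the simplified stand-in rule that only 1s and 5s score (the function names are mine, not from the post):

```python
import random

def is_farkle(roll, scoring_faces=(1, 5)):
    # Under the simplified rule, a throw farkles when no die
    # shows a scoring face.
    return not any(d in scoring_faces for d in roll)

def farkle_rate(n_dice, trials=100_000, rng=random):
    """Estimate the probability that a throw of n_dice dice farkles."""
    hits = sum(
        is_farkle([rng.randint(1, 6) for _ in range(n_dice)])
        for _ in range(trials)
    )
    return hits / trials
```

Under that stand-in rule the exact answer is (4/6)^n, so the estimate for six dice should land near 8.8%; the full game’s additional scoring combinations (triples, straights, and so on) would make a six-die farkle rarer still.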