Create an interactive projection/visualisation that includes a state machine. This means you need at least two active states (beyond the “idle” state where no human input is detected) and transition rules to switch between them. Does the state machine respond to a gesture? A spatial coordinate? A sound? How do the state machine's transition rules relate to the interactivity within the states themselves? Do the different states/scenes need different interaction schemas, or do they just look/sound/feel different?
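
As a reference for structure only, here is a minimal sketch of the state-machine layer in p5.js (instance mode, TypeScript). Mouse movement and a mouse press stand in for whatever input the piece actually senses (gesture, coordinate, sound); the state names (“idle”, “follow”, “burst”), the timeouts, and the placeholder visuals are assumptions for illustration, not a required design.

```typescript
// Minimal state-machine sketch: one idle state plus two active states,
// with the transition rules gathered in a single function.
import p5 from "p5";

type State = "idle" | "follow" | "burst";

const IDLE_TIMEOUT_MS = 3000;   // no input for 3 s -> back to idle
const BURST_DURATION_MS = 1200; // burst runs for a fixed time, then decays

new p5((s: p5) => {
  let state: State = "idle";
  let lastInputAt = 0;    // time of the most recent detected input
  let burstStartedAt = 0;

  s.setup = () => {
    s.createCanvas(640, 480);
    s.noStroke();
  };

  // All transition rules live here so they stay legible in one place.
  const updateState = () => {
    const now = s.millis();
    if (now - lastInputAt > IDLE_TIMEOUT_MS) {
      state = "idle";                  // any state falls back to idle when quiet
      return;
    }
    if (state === "idle") {
      state = "follow";                // any input wakes the piece up
    } else if (state === "burst" && now - burstStartedAt > BURST_DURATION_MS) {
      state = "follow";                // burst decays back to follow
    }
  };

  s.draw = () => {
    updateState();
    if (state === "idle") {
      s.background(10);                // dark, slow "screensaver" look
      s.fill(60);
      s.circle(s.width / 2, s.height / 2, 40 + 10 * s.sin(s.frameCount * 0.05));
    } else if (state === "follow") {
      s.background(20, 20, 40);
      s.fill(120, 180, 255);
      s.circle(s.mouseX, s.mouseY, 60); // visual tracks the input coordinate
    } else {
      s.background(255, 80, 0);         // burst: the whole frame reacts
      s.fill(255);
      s.circle(s.mouseX, s.mouseY, 200);
    }
  };

  // Input handlers only record events; the rules above decide the state.
  s.mouseMoved = () => { lastInputAt = s.millis(); };
  s.mousePressed = () => {
    lastInputAt = s.millis();
    if (state === "follow") {           // a "gesture" promotes follow -> burst
      state = "burst";
      burstStartedAt = s.millis();
    }
  };
});
```

Note how the interactivity differs per state in this sketch: “follow” uses the input as a continuous coordinate, while “burst” treats it as a discrete trigger. Your own states may need a different split, which is exactly the question the assignment asks you to answer.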

What is gained by adding the state machine layer to the interactive projection?

See also:

Code as Creative Medium - Augmented Projection (p. 68)