
Architecture Overview

CritterWatch is designed around a simple principle: monitored services should own their own telemetry, and the monitoring console should be a passive observer that receives and projects that telemetry without requiring database access to the services it monitors.

High-Level Architecture

Design Principles

Services Own Their Telemetry

Monitored services publish their own state rather than CritterWatch scraping it. This means:

  • CritterWatch never needs database credentials for monitored services
  • Services can publish only what they choose to reveal
  • The observer adds minimal overhead — batched publishing, 1-second intervals
  • Service failures do not cascade to CritterWatch (and vice versa)
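The batching strategy above can be sketched as a small buffer that is flushed on a timer. This is a hypothetical, dependency-free illustration — the real observer lives in the Wolverine.CritterWatch library, and the `TelemetryBatcher` name and message shape here are invented for the sketch:

```typescript
// Illustrative only: names and shapes are assumptions, not the
// Wolverine.CritterWatch API. Shows the batch-and-flush idea.
interface ServiceUpdate {
  serviceId: string;
  state: string;
  at: number; // epoch millis
}

class TelemetryBatcher {
  private buffer: ServiceUpdate[] = [];

  constructor(private publish: (batch: ServiceUpdate[]) => void) {}

  // Called whenever service state changes: cheap, in-memory, no I/O.
  record(update: ServiceUpdate): void {
    this.buffer.push(update);
  }

  // Invoked on a timer (every 1 second in CritterWatch): everything
  // accumulated since the last flush goes out as a single message.
  flush(): number {
    if (this.buffer.length === 0) return 0;
    const batch = this.buffer;
    this.buffer = [];
    this.publish(batch);
    return batch.length;
  }
}
```

Because `record` only appends to an in-memory buffer, the cost inside the monitored service's hot path stays negligible regardless of how chatty the service is.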

Event Sourcing for Operational History

All service state in CritterWatch is stored as events in Marten. This provides:

  • Complete history — every state change is recorded with a timestamp
  • Projection flexibility — new projections can be built from existing events
  • Audit trail — all operator actions are events in the same store
  • Temporal queries — what was the state of the system at time T?
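A temporal query is just a fold over the event stream, replaying every event recorded at or before time T. The event and snapshot shapes below are hypothetical (Marten's actual event types are defined by the application); the sketch only shows the principle:

```typescript
// Hypothetical event shape: real CritterWatch events are Marten
// documents with their own types. Events are assumed in stream order.
interface StateChanged {
  serviceId: string;
  newState: string;
  timestamp: number; // epoch millis
}

// Replay events up to time t to recover the state of the system
// as it existed at that moment.
function stateAt(events: StateChanged[], t: number): Map<string, string> {
  const snapshot = new Map<string, string>();
  for (const e of events) {
    if (e.timestamp <= t) snapshot.set(e.serviceId, e.newState);
  }
  return snapshot;
}
```

The same fold, applied over the full stream, is what a projection materializes ahead of time; applying it with a cutoff is what makes "state at time T" answerable after the fact.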

Command Dispatch via Transport

Operator commands flow back to services through the same RabbitMQ transport used for telemetry. This means:

  • Commands are reliable — durable queues and acknowledgements give at-least-once delivery
  • Commands are asynchronous — the UI does not block waiting for command execution
  • Services can receive commands even during high load (the command queue is separate from the telemetry queue)
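The last point can be made concrete with a minimal in-memory model of the two-queue layout. The broker class and queue names below are invented for illustration — in reality RabbitMQ provides the queues, durability, and acknowledgements:

```typescript
// Toy broker, assumptions throughout: real queues are RabbitMQ.
// Demonstrates why a telemetry backlog cannot delay a command.
class Broker {
  private queues = new Map<string, unknown[]>();

  enqueue(queue: string, message: unknown): void {
    if (!this.queues.has(queue)) this.queues.set(queue, []);
    this.queues.get(queue)!.push(message);
  }

  dequeue(queue: string): unknown | undefined {
    return this.queues.get(queue)?.shift();
  }

  depth(queue: string): number {
    return this.queues.get(queue)?.length ?? 0;
  }
}

const broker = new Broker();
// Simulate a telemetry backlog during high load...
for (let i = 0; i < 10_000; i++) broker.enqueue("telemetry", { seq: i });
// ...yet an operator command lands on its own queue and is
// available to the service immediately, not behind the backlog.
broker.enqueue("commands.billing", { type: "PauseService" });
const next = broker.dequeue("commands.billing");
```

The isolation comes entirely from routing: the service's command consumer never competes with the telemetry consumer for messages.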

Real-Time via SignalR

All telemetry received by CritterWatch is immediately relayed to connected browsers via SignalR. This creates a real-time monitoring experience without polling:

  1. Service publishes ServiceUpdates → RabbitMQ
  2. CritterWatch handler receives → projects into Marten
  3. Handler relays to SignalR hub → browser updates

End-to-end latency from service state change to browser update is typically under 2 seconds.
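Steps 2 and 3 above can be sketched as a single handler that performs a projection step and a relay step. The shapes and names here are hypothetical — the real pipeline is a Wolverine handler appending to Marten and pushing through the SignalR CommunicationHub — but reducing both effects to data makes the flow visible:

```typescript
// Illustrative sketch, not the Wolverine handler: "project" becomes a
// Map write and "relay to SignalR" becomes a callback.
interface ServiceUpdates {
  serviceId: string;
  state: string;
  receivedAt: number; // epoch millis
}

type Snapshot = Map<string, ServiceUpdates>;

function handle(
  snapshot: Snapshot,
  msg: ServiceUpdates,
  relay: (payload: ServiceUpdates) => void,
): void {
  snapshot.set(msg.serviceId, msg); // step 2: project into the store
  relay(msg);                       // step 3: hub pushes to all browsers
}
```

Note that the relay uses the same payload the projection consumed, which is why connected browsers see a state change without polling or re-querying.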

Component Summary

| Component       | Technology                       | Responsibility                              |
| --------------- | -------------------------------- | ------------------------------------------- |
| Observer        | Wolverine.CritterWatch library   | Intercept runtime events, publish telemetry |
| Transport       | RabbitMQ                         | Decouple services from CritterWatch         |
| Handler pipeline | Wolverine                       | Process inbound telemetry                   |
| Event store     | Marten + PostgreSQL              | Persist service state as events             |
| Projection      | Marten ServiceSummaryProjection  | Materialize queryable snapshots             |
| Real-time relay | SignalR CommunicationHub         | Push updates to browsers                    |
| Frontend        | Vue 3 + Pinia                    | Reactive monitoring UI                      |
| HTTP API        | Wolverine.HTTP                   | Queries and command dispatch                |

Scaling Considerations

CritterWatch is designed as a single-instance server. The main scalability lever is the monitored service side: you can have arbitrarily many services and service instances all reporting to a single CritterWatch server. The bottlenecks are the RabbitMQ listener and Marten write throughput, which are sufficient for dozens to hundreds of services.
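A back-of-envelope calculation makes the sizing concrete. With the observer's 1-second batching, each service instance contributes roughly one message per second; the write-capacity figure below is an assumption for illustration, not a measured CritterWatch number:

```typescript
// Sizing sketch: all figures are illustrative assumptions.
const serviceInstances = 250;                       // monitored instances
const messagesPerSecond = serviceInstances * 1;     // one batch/sec each
const assumedAppendCapacity = 5_000;                // assumed Marten appends/sec
const headroom = assumedAppendCapacity / messagesPerSecond;
```

Under these assumptions a single instance retains 20x headroom at 250 services, which is consistent with the "dozens to hundreds" guidance above.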

For very large deployments (hundreds of services, thousands of nodes), contact JasperFx for Enterprise architecture guidance.

Released under the MIT License.