Developing software that solves complex problems can be challenging at times. Often, the literature surrounding these problems makes it even harder to see the forest for the trees!
This blog series aims to be a pragmatic take on building an event sourcing system by leveraging AWS Serverless technologies. It is by no means a complete guide, but it provides concrete patterns that you can use while building your own. Moreover, you’ll find that these patterns can be applied in other architectures as well.
We will first walk through the process of storing aggregates and their changes as events, then rebuilding aggregates from their past events. Afterwards, we will see how publishing new aggregate events to an event stream works and how downstream event handlers receive these events.
Fundamentally, that is the core of what event sourcing does. However, in most event sourcing systems, your event handlers will need to replay events - so we’ll look at how that can be achieved as well.
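To make that core idea concrete, here is a minimal sketch of rebuilding (rehydrating) an aggregate from its stored events. The `BankAccount` aggregate and its `Deposited`/`Withdrawn` events are hypothetical examples, not part of the series' actual code:

```python
from dataclasses import dataclass

# Hypothetical domain events for a bank account aggregate.
@dataclass
class Deposited:
    amount: int

@dataclass
class Withdrawn:
    amount: int

class BankAccount:
    """Aggregate whose state is derived entirely from its events."""

    def __init__(self):
        self.balance = 0
        self.version = 0  # number of events applied so far

    def apply(self, event):
        # Each event mutates state; replaying every past event
        # in order rebuilds the aggregate from scratch.
        if isinstance(event, Deposited):
            self.balance += event.amount
        elif isinstance(event, Withdrawn):
            self.balance -= event.amount
        self.version += 1

def rehydrate(events):
    """Rebuild an aggregate by replaying its event history."""
    account = BankAccount()
    for event in events:
        account.apply(event)
    return account

account = rehydrate([Deposited(100), Withdrawn(30), Deposited(5)])
print(account.balance)  # 75
print(account.version)  # 3
```

In a real system the event list would be loaded from the event store (here, a DynamoDB table) rather than constructed in memory, but the replay loop is the same.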
Table of Contents:
- System Design: Design of an event sourcing system and how Change Data Capture is achieved.
- Aggregate Design: How aggregates are designed and the role events play in altering their state.
- The Event Store and DynamoDB: Designing an event store and implementing the first component: the DynamoDB table.
- Aggregate Persistence: Persisting complex objects with the Repository pattern and Optimistic Locking.
- Change Data Capture: in editing.
- Event Handlers: in draft.
- Replaying Events: in design.
Have any questions or would like further information? Feel free to leave a comment :).