Event Driven Architecture has received a lot of attention, and a number of use cases built on it are making life and the customer experience smoother and less stressful.
In this blog post, I will walk through Event Driven Architecture patterns and event processing patterns.
Definition
Event Driven Architecture is a software architecture pattern built to decouple systems that communicate by publishing and consuming events.
An event represents a change or update in state, e.g. an item added to a shopping cart, or an address updated on an account profile. The main purpose of event driven architecture is to make systems loosely coupled so we can achieve scalability, independent deployment, and independent event/data processing.
An event can travel as a message, but not every message is an event. Messages are the basic unit of communication that message brokers and buses work with. They can be literally anything you want: a string, a number, an object, a command or an event. Messages have no special intent, which makes them less meaningful than events.
An event is distinguished from a message by its clear intent: it describes a change in state. Without this, you can't really call it an event.
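To make this concrete, here is a small illustrative event in Python; the field names and values are just an assumption for the shopping cart example above, not a fixed schema.

```python
# An illustrative event: it records that something happened, with a clear
# intent (the event type) and the state change it describes.
import json
from datetime import datetime, timezone

event = {
    "type": "ItemAddedToCart",                 # intent: what happened
    "occurred_at": datetime.now(timezone.utc).isoformat(),
    "cart_id": "cart-123",                     # illustrative identifiers
    "item_id": "sku-42",
    "quantity": 1,
}

print(json.dumps(event, indent=2))
```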
Event Driven Architecture Patterns
Publisher/Subscriber (pub/sub) pattern
This pattern uses an event broker or event bus as a separator between event producers and consumers, so the systems are decoupled. Producers have no idea who consumes their events or how those events are processed, and consumers subscribe only to the events they are interested in. The event broker/bus is responsible for delivering published events to the subscribed consumers.
This pattern provides flexibility, reliability and scalability. However, it does not replay events to a subscriber that joins after an event has been generated: event history is not stored and cannot be replayed.
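Here is a minimal in-memory sketch of the pattern in Python; a real system would use a broker or bus product, but the roles of producer, broker and subscribers are the same.

```python
from collections import defaultdict

class EventBus:
    """Minimal in-memory stand-in for an event broker/bus."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        # Consumers register only for the event types they are interested in.
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # The producer does not know who consumes the event or how.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
bus.subscribe("OrderPlaced", lambda e: print("billing saw:", e))
bus.subscribe("OrderPlaced", lambda e: print("shipping saw:", e))
bus.publish("OrderPlaced", {"order_id": "ord-1", "total": 25.0})
```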
Event Streaming/Sourcing
The event sourcing/streaming pattern is a little more complex but more flexible. Events are stored in an ordered log, which acts as the event broker/bus between producer and consumer. The log keeps track of the events and their sequence, and it can be retained indefinitely. This gives more flexibility: when a new consumer subscribes, it can read all events from the beginning, and an offset lets existing consumers keep track of their position in the stream. This pattern is often implemented with technologies like Apache Kafka.
Consider Git (as used on GitHub/GitLab) as a good analogy: the full history of changes is kept, and any new consumer/checkout can find its position or replay all events from the beginning.
This comes with complexity and the cost of storing all events and their history.
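As a sketch of how a late-joining consumer replays the log, here is an example using the kafka-python package; the broker address, topic name and group id are assumptions for illustration.

```python
# A new consumer subscribing to an existing event log and reading it from
# the beginning; committed offsets then track its position in the stream.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "account-events",                  # assumed topic name
    bootstrap_servers="localhost:9092",
    group_id="new-reporting-service",
    auto_offset_reset="earliest",      # start from the first stored event
    enable_auto_commit=True,           # committed offsets remember our position
)

for record in consumer:
    # record.offset is this event's position in the ordered log
    print(record.offset, record.value)
```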
Command Query Responsibility Segregation (CQRS)
CQRS is a pattern that separates read and write operations into separate models. The write model, called the command model, handles commands that change the state of the system and produce events. The read model, called the query model, handles queries and maintains its own optimized view of the data. CQRS allows reads and writes to scale independently, enhances performance by optimizing the read model for specific query needs, and provides the flexibility to evolve each model independently.
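Below is a very small sketch of the idea in Python; the in-memory event list and dictionary stand in for an event store and a read-optimized database, which are assumptions for illustration.

```python
# Command model: validates writes, changes state and produces events.
# Query model: keeps its own read-optimized projection, updated from events.
events = []        # stand-in for an event store / broker
read_view = {}     # stand-in for a read-optimized view (cache, read DB)

def handle_add_item(cart_id, item_id, quantity):
    event = {"type": "ItemAdded", "cart_id": cart_id,
             "item_id": item_id, "quantity": quantity}
    events.append(event)
    project(event)

def project(event):
    # Update the denormalized view that queries will read from.
    cart = read_view.setdefault(event["cart_id"], {"items": 0})
    cart["items"] += event["quantity"]

def get_cart_summary(cart_id):
    # Queries never touch the write model.
    return read_view.get(cart_id, {"items": 0})

handle_add_item("cart-1", "sku-42", 2)
print(get_cart_summary("cart-1"))   # {'items': 2}
```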
Server-Sent Events (SSE)
SSE is a communication protocol much like WebSockets, but with unidirectional data flow. SSE enables a consumer to receive a stream of event notifications sent from an API server.
The consumer subscribes to your API by creating a new EventSource object and passing it the URL of an endpoint, over a regular HTTP request. After that, the consumer keeps listening for a response carrying a stream of event notifications.
If there are no more events to send, the server can terminate the connection, or it can keep the connection open until the consumer closes it explicitly.
Being a unidirectional protocol, SSE is a good choice for building event-driven APIs when you care about bandwidth consumption and do not need a full-duplex channel to consumers. However, there are challenges related to security, as there is no straightforward way of challenging API consumers for a token.
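As a sketch, here is a minimal SSE endpoint written with Flask (the framework choice, route and payload are assumptions for illustration); the browser-side subscription with EventSource is shown in the trailing comment.

```python
import json
import time
from flask import Flask, Response

app = Flask(__name__)

@app.route("/events")
def events():
    def stream():
        for i in range(3):                      # illustrative finite stream
            payload = json.dumps({"update": i})
            yield f"data: {payload}\n\n"        # SSE wire format: "data: ..." + blank line
            time.sleep(1)
    return Response(stream(), mimetype="text/event-stream")

if __name__ == "__main__":
    app.run()

# Browser-side subscription:
#   const source = new EventSource("/events");
#   source.onmessage = (e) => console.log(e.data);
```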
Now that we have covered the different patterns of Event Driven Architecture, it is important to understand event processing patterns as well: practically, how should events be processed at the consumer end, and which implementation approaches should be considered with respect to scalability, cost and other aspects.
Event Processing Patterns
Simple Event
Consumers receive events and process them one at a time. A generated event immediately triggers an action precisely as the event occurs. This is the simplest form of event processing, but it only works when events can be processed quickly relative to the rate at which they are generated. We will look at two types of simple event processing patterns here.
1. Event Notification
The event carries only minimal information to tell the consumer that a state change has happened; it does not carry much data. Usually an ID and a link are included, which the consumer can use to query the event producer for the full details. Consumers that want to know what happened follow the link/ID back to the producer, which exposes an interface they can query as needed (a sketch follows the examples below).
Advantage - Simple and easy to implement; systems stay decoupled, with a clean separation between the event and the event processing logic.
Disadvantage - Business logic is harder to debug because the event notification and the event processing logic are separated.
The producer has to expose extra interface endpoints to provide state information.
A number of back and forth calls are needed to complete event processing.
e.g. LinkedIn notifications, Facebook chat message notifications
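Here is a small sketch of the notification style; fetch_order() stands in for a real HTTP call back to the producer's API, and the URL and fields are assumptions for illustration.

```python
# The event carries only an ID and a link; the consumer calls back to the
# producer to fetch the full state before acting on it.
notification = {
    "type": "OrderUpdated",
    "order_id": "ord-1",
    "href": "https://orders.example.com/orders/ord-1",   # illustrative URL
}

def fetch_order(href):
    # In practice this would be an HTTP GET against the producer's interface;
    # stubbed here so the sketch runs on its own.
    return {"order_id": "ord-1", "status": "SHIPPED", "total": 25.0}

def on_notification(event):
    order = fetch_order(event["href"])     # the extra round trip to the producer
    print("processing", event["type"], "->", order["status"])

on_notification(notification)
```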
2. Event Carried State Transfer
Opposite to Event Notification, in this pattern the event carries the changed state. The event itself is self-sufficient, so the consumer can perform its subsequent processing without going back to the producer for information. The event contains the event type, the previous state, the changed state and any further information (see the sketch after the list below).
Advantage - Reduces the consumer's dependency on the event producer for state change information.
Increases the responsiveness of the system, resulting in lower latency.
Disadvantage - Messages can get bulky depending on the structure of the changed data.
The consumer may need to store the changed data, which means redundant data storage and increased storage cost.
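A minimal sketch of event-carried state transfer; the account fields and the local store are assumptions for illustration.

```python
# The event itself carries the previous and new state, so the consumer can
# update its own local copy without calling the producer back.
event = {
    "type": "AddressUpdated",
    "account_id": "acct-7",
    "previous_state": {"city": "Pune"},
    "new_state": {"city": "Mumbai"},
}

# The consumer keeps a redundant local copy of the data it needs.
local_accounts = {"acct-7": {"city": "Pune"}}

def on_event(e):
    local_accounts[e["account_id"]].update(e["new_state"])

on_event(event)
print(local_accounts["acct-7"])   # {'city': 'Mumbai'}
```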
Complex Event
Consumers process a series of events and look for patterns that span multiple events. This approach can be used to detect anomalies or identify trends or fraud. It requires more resources than the simpler patterns, but it can provide valuable insights into the data. Apache Storm is an example of a tool that can be used for complex event processing.
e.g. credit card fraud, biggest-sale-day trends, bank transaction scams/fraud
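To illustrate the idea, here is a small sketch that correlates several events over a time window to flag a suspicious pattern; the window size, threshold and event shape are assumptions, and a real deployment would run this logic inside a CEP engine or stream processor.

```python
# Flag a card when it sees too many charges within a sliding time window.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_CHARGES = 3
recent = defaultdict(deque)   # card_id -> timestamps of recent charges

def on_charge(card_id, timestamp):
    window = recent[card_id]
    window.append(timestamp)
    # Drop charges that have slid out of the window.
    while window and timestamp - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) > MAX_CHARGES:
        print(f"possible fraud on {card_id}: {len(window)} charges in {WINDOW_SECONDS}s")

for t in [0, 10, 20, 30, 40]:             # five charges within one minute
    on_charge("card-99", t)
```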
Event Stream Processing
We use a data streaming platform like Apache Kafka as a pipeline to ingest events and send them to stream processors. The stream processors process or transform the streams, and there may be numerous stream processors serving different application components. Event stream processing can detect patterns in the event stream or aggregate events over a time window.
e.g. geographic movement, weather changes, insurance quotation patterns
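As a sketch of windowed aggregation, here is a simplified example; in production the events would arrive from a platform like Apache Kafka and the aggregation would run in a stream processor, but a plain list stands in for the stream here and the sensor data is made up for illustration.

```python
# Aggregate temperature readings per sensor over one-minute windows.
from collections import defaultdict

stream = [
    {"sensor": "pune-01", "temp": 31.0, "minute": 0},
    {"sensor": "pune-01", "temp": 32.5, "minute": 0},
    {"sensor": "pune-01", "temp": 29.0, "minute": 1},
]

totals = defaultdict(lambda: [0.0, 0])   # (sensor, minute) -> [sum, count]

for event in stream:
    key = (event["sensor"], event["minute"])
    totals[key][0] += event["temp"]
    totals[key][1] += 1

for (sensor, minute), (total, count) in totals.items():
    print(sensor, f"minute {minute}", "average temp:", round(total / count, 2))
```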
I hope this gave you a clearer picture of EDA and event processing patterns. Do not hesitate to contact me if you need more information or have any questions on EDA.