Modern backend systems are no longer just about exposing a few REST endpoints and storing data in a database. They are expected to be resilient, scalable, observable, and easy to evolve over time. That becomes even more important when services need to communicate asynchronously and run consistently across local and Kubernetes environments.
To explore that in a practical way, I built quarkus-ddd-kafka-microservices-demo, a hands-on project that combines:
- Java 21
- Quarkus
- Domain-Driven Design (DDD)
- Hexagonal Architecture
- Kafka-compatible event streaming with Redpanda
- PostgreSQL
- Docker Compose
- Kubernetes local deployment with Kind
This project is not meant to be a toy “Hello World” demo. It is designed to reflect how a real backend platform can be structured when we care about clean boundaries, event-driven workflows, and production-style deployment patterns.
GitHub Repository
github.com/kernel2/quarkus-ddd-kafka-microservices-demo

Why I built this project
I wanted a project that demonstrates more than CRUD.
A lot of backend demos stop at:
- one service
- one database
- one controller
- one Dockerfile
That is useful for learning syntax, but it does not really show how modern systems are built.
I wanted something that demonstrates:
- how to split business capabilities into services
- how to keep the domain model independent from frameworks
- how asynchronous workflows work in practice
- how to move from local development to Kubernetes
- how to keep the code understandable and maintainable
That is why I built this project around three services:
- product-service
- order-service
- payment-service
Together, they model a simple but realistic commerce flow.
The business flow
The domain is intentionally simple so the architecture stays easy to understand.
The main flow looks like this:
- A client creates an order through order-service
- order-service publishes an OrderCreated event
- payment-service consumes that event and processes the payment
- payment-service publishes either PaymentCompleted or PaymentFailed
- order-service consumes the payment result and updates the order status
This gives us a clean example of event-driven communication and eventual consistency.
The system is simple enough to run locally, but rich enough to explain real architecture decisions.
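To make the flow concrete, the three events can be sketched as plain Java records. These payloads are my own illustration, not the repo's actual schemas, and the field names are assumptions:

```java
import java.math.BigDecimal;
import java.util.UUID;

// Hypothetical event payloads for the order/payment flow.
// Field names are illustrative; the repo's actual schemas may differ.
public class OrderFlowEvents {
    // Published by order-service when a new order is accepted.
    record OrderCreated(UUID orderId, UUID productId, int quantity, BigDecimal total) {}

    // Published by payment-service after processing the payment.
    record PaymentCompleted(UUID orderId, UUID paymentId) {}
    record PaymentFailed(UUID orderId, String reason) {}
}
```

Records are a good fit for event payloads: immutable, value-based equality, and trivial to serialize.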
Why Quarkus
Quarkus is a strong fit for this kind of system.
It brings a modern Java developer experience while being very well aligned with cloud-native execution models. In practice, that means:
- fast startup
- low memory footprint
- smooth containerization
- strong support for Kubernetes-style runtimes
For this project, Quarkus makes it easy to combine REST APIs, Kafka messaging, PostgreSQL integration, health endpoints, local dev mode, and container-friendly behavior.
For microservices, that balance is extremely valuable. You still work in Java, but with a framework that feels designed for modern infrastructure constraints.
Why DDD and Hexagonal Architecture
One of the biggest mistakes in microservices is to split infrastructure before understanding the business model.
You end up with services that are technically separate but still messy internally:
- business rules mixed with controllers
- repositories leaking everywhere
- framework annotations in the core domain
- no real boundary between use cases and technical details
To avoid that, I used DDD + Hexagonal Architecture in each service.
Each service is organized into four main layers:
- domain
- application
- infrastructure
- api
Domain
This is the core business model: entities, value objects, domain rules, repository ports, and domain event abstractions. The domain does not depend on Quarkus, JPA, or Kafka.
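As a minimal sketch of what that independence looks like (class and field names are my own, not necessarily the repo's), the entire domain layer compiles against the JDK alone:

```java
import java.math.BigDecimal;
import java.util.Optional;
import java.util.UUID;

// A value object: immutable, validated at construction, no framework annotations.
record Money(BigDecimal amount) {
    Money {
        if (amount.signum() < 0) throw new IllegalArgumentException("amount must be >= 0");
    }
}

// A domain entity with behavior, free of JPA/Quarkus/Kafka imports.
class Product {
    private final UUID id;
    private final String name;
    private final Money price;
    private int stock;

    Product(UUID id, String name, Money price, int stock) {
        this.id = id;
        this.name = name;
        this.price = price;
        this.stock = stock;
    }

    // A domain rule lives here, not in a controller or repository.
    void decreaseStock(int quantity) {
        if (quantity > stock) throw new IllegalStateException("insufficient stock");
        stock -= quantity;
    }

    int stock() { return stock; }
}

// A repository port: the domain defines the contract, infrastructure implements it.
interface ProductRepository {
    Optional<Product> findById(UUID id);
    void save(Product product);
}
```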
Application
This layer orchestrates use cases such as creating products, creating orders, processing payments, and updating order statuses.
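A hedged sketch of what such a use case looks like, wired to an output port it does not implement (names are illustrative):

```java
import java.util.UUID;

// Output port for publishing domain events; Kafka is one possible adapter.
interface EventPublisher {
    void publish(String topic, Object event);
}

// Input data for the use case, independent of HTTP DTOs.
record CreateOrderCommand(UUID productId, int quantity) {}

// The application service: pure orchestration, no framework code.
class CreateOrderUseCase {
    private final EventPublisher events;

    CreateOrderUseCase(EventPublisher events) { this.events = events; }

    UUID handle(CreateOrderCommand cmd) {
        if (cmd.quantity() <= 0) throw new IllegalArgumentException("quantity must be positive");
        UUID orderId = UUID.randomUUID();
        // Persisting the order would happen here via a repository port.
        events.publish("order-created", "OrderCreated:" + orderId);
        return orderId;
    }
}
```

Because the use case only sees the port, it can be unit-tested with a lambda in place of a real broker.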
Infrastructure
This is where adapters live: JPA persistence, Kafka producers and consumers, and all technical implementations of ports.
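As a simplified illustration of the adapter pattern, here the Kafka send is replaced by an in-memory outbox so the sketch stays self-contained; the real adapter would hand the record to the messaging client targeting the order-created topic:

```java
import java.util.ArrayList;
import java.util.List;

// The port, as the application core would define it.
interface OrderEventPublisher {
    void publishOrderCreated(String orderId);
}

// An infrastructure adapter. The broker call is stubbed with an in-memory
// list; in the real service this is where the Kafka producer lives.
class InMemoryOrderEventPublisher implements OrderEventPublisher {
    final List<String> outbox = new ArrayList<>();

    @Override
    public void publishOrderCreated(String orderId) {
        // Serialize however the topic contract requires; a plain string here.
        outbox.add("order-created:" + orderId);
    }
}
```

Swapping this stub for a Kafka-backed implementation touches only the infrastructure layer, which is exactly the point of the port.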
API
This exposes the system through REST: controllers, request and response DTOs, validation, and HTTP error handling.
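A plain-Java sketch of the DTO-to-command mapping a controller performs, without the JAX-RS annotations the real resource class would carry (names are illustrative):

```java
import java.util.UUID;

// Request DTO as it would arrive from JSON deserialization.
record CreateOrderRequest(String productId, int quantity) {}

// Application-layer command, already validated and strongly typed.
record CreateOrderCommand(UUID productId, int quantity) {}

// The mapping and validation a controller does before calling the use case.
class OrderRequestMapper {
    CreateOrderCommand toCommand(CreateOrderRequest req) {
        if (req.quantity() <= 0) {
            throw new IllegalArgumentException("quantity must be positive"); // -> HTTP 400
        }
        return new CreateOrderCommand(UUID.fromString(req.productId()), req.quantity());
    }
}
```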
This separation matters. It keeps the system easier to test, easier to explain, and much easier to evolve.
Service breakdown
1. product-service
This service is responsible for managing products and stock.
Typical responsibilities:
- create a product
- retrieve a product
- list products
- update stock
This service demonstrates:
- clear REST boundaries
- persistence isolation
- clean domain modeling for product data
2. order-service
This is the central orchestration point for order creation.
It is responsible for:
- creating orders
- exposing order APIs
- publishing OrderCreated events
- consuming payment events
- updating order status
This service shows how a domain model interacts with asynchronous workflows without being tightly coupled to the technical message broker.
3. payment-service
This service reacts to order creation and simulates payment processing.
It is responsible for:
- consuming order events
- recording payment state
- publishing payment result events
This service demonstrates event-driven processing in a simple but realistic way.
Event-driven communication with Kafka
Instead of making all services call each other synchronously, I chose an event-driven pattern.
The key topics are:
- order-created
- payment-completed
- payment-failed
The flow is intentionally straightforward:
- order-service emits an order event
- payment-service reacts
- order-service updates itself based on the payment outcome
This gives a concrete example of asynchronous choreography.
It also illustrates an important concept: eventual consistency.
When the order is first created, its final state may not be known yet. That is normal in distributed systems. The final status is reached after the relevant event is processed.
That is exactly the kind of design trade-off modern backend systems need to handle.
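One way to keep that trade-off manageable is to make the status lifecycle explicit, so illegal transitions fail fast. A sketch with illustrative state names (the repo may name them differently):

```java
// Order status transitions driven by payment events.
public enum OrderStatus {
    PENDING, PAID, PAYMENT_FAILED;

    // Only a pending order may move to a terminal payment state.
    OrderStatus onPaymentResult(boolean success) {
        if (this != PENDING) {
            throw new IllegalStateException("payment result for non-pending order: " + this);
        }
        return success ? PAID : PAYMENT_FAILED;
    }
}
```

Encoding the transitions this way also makes duplicate or out-of-order event deliveries visible instead of silently corrupting state.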
Local developer experience
A good architecture is not enough if the project is painful to run.
That is why I made local startup a first-class concern. The project supports a workflow based on:
- Docker Compose for infrastructure
- Quarkus dev mode for the application services
The infrastructure includes:
- PostgreSQL
- Redpanda (Kafka API compatible)
- pgAdmin for database inspection
This gives a practical setup where I can run databases and messaging in containers, start Quarkus services locally in dev mode, debug quickly, and iterate without rebuilding the entire stack.
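A compose file for that setup might look roughly like this. Image tags, ports, and credentials here are illustrative; the repo's docker-compose.yml is the source of truth:

```yaml
# Sketch of the containerized infrastructure (services only, no app containers).
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres
    ports:
      - "5432:5432"
  redpanda:
    image: redpandadata/redpanda:latest
    command: redpanda start --overprovisioned --smp 1
    ports:
      - "9092:9092"
```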
That hybrid setup is extremely productive. It mirrors how many teams actually work:
- infrastructure containerized
- services running locally during development
Kubernetes local deployment with Kind
One part I particularly wanted to include was Kubernetes deployment.
Not because every demo needs Kubernetes, but because many backend systems eventually end up there. If a project is intended to demonstrate cloud-native architecture, it should show how services are deployed and wired together.
For local Kubernetes deployment, I used Kind.
The process is straightforward:
- Create a Kind cluster
- Build the service images
- Load them into Kind
- Apply manifests in dependency order
- Port-forward the services for local testing
```shell
kind create cluster --name quarkus-ddd-demo

docker build -f product-service/Dockerfile -t quarkus-ddd-demo/product-service:local .
docker build -f order-service/Dockerfile -t quarkus-ddd-demo/order-service:local .
docker build -f payment-service/Dockerfile -t quarkus-ddd-demo/payment-service:local .

kind load docker-image quarkus-ddd-demo/product-service:local --name quarkus-ddd-demo
kind load docker-image quarkus-ddd-demo/order-service:local --name quarkus-ddd-demo
kind load docker-image quarkus-ddd-demo/payment-service:local --name quarkus-ddd-demo

kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/redpanda/
kubectl apply -f k8s/postgres/
kubectl apply -f k8s/product-service/
kubectl apply -f k8s/order-service/
kubectl apply -f k8s/payment-service/
```

I structured the manifests by component:
- namespace
- Redpanda
- PostgreSQL
- product-service
- order-service
- payment-service
Each application folder contains:
- Deployment
- Service
- ConfigMap
- Secret placeholder
I also added readiness and liveness probes, which are essential in Kubernetes to ensure services are healthy and routable.
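For reference, a probe block for one of the services looks roughly like this. Quarkus exposes these endpoints through the SmallRye Health extension; the paths and port shown are the defaults and may differ with configuration:

```yaml
# Illustrative probe configuration for a service container.
livenessProbe:
  httpGet:
    path: /q/health/live
    port: 8080
readinessProbe:
  httpGet:
    path: /q/health/ready
    port: 8080
  initialDelaySeconds: 5
```

The distinction matters: a failing readiness probe removes the pod from service endpoints, while a failing liveness probe restarts the container.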
Why this matters
Many demos show application code but stop before deployment. That creates a gap between “it works on my machine” and “this could realistically run in a cluster”.
By including Kind manifests and image loading steps, this project demonstrates the full path from:
- code
- to container
- to cluster
That is an important part of modern backend engineering.
Databases and isolation
Each service owns its persistence boundary. That was an intentional choice.
In local development, it may be tempting to simplify everything into one shared database. But for microservices, isolated data ownership is a better architectural model.
It reinforces:
- service autonomy
- explicit integration boundaries
- cleaner reasoning about responsibilities
For convenience, Kubernetes and local infrastructure can still be managed in a centralized way, but the service boundaries remain clear.
What this project demonstrates technically
1. Microservices are more than splitting code
Real microservices require communication strategy, ownership boundaries, deployment patterns, and resilience thinking.
2. DDD improves clarity
Even in a small system, modeling business concepts properly makes the code more understandable.
3. Hexagonal Architecture keeps the core clean
The business logic stays independent from infrastructure and easier to test.
4. Event-driven design changes the way workflows behave
You trade immediate consistency for decoupling and flexibility.
5. Quarkus is a strong fit for cloud-native Java
It works well both in local dev mode and in containerized environments.
6. Deployment should be part of the conversation
Running on Docker Compose is useful. Running on Kind makes the architecture much more concrete.
Trade-offs and lessons learned
Event-driven systems are powerful, but harder to reason about
Synchronous request/response flows are simpler to understand at first. Asynchronous messaging improves decoupling, but introduces delayed state transitions, more moving parts, and debugging complexity.
Hexagonal Architecture improves maintainability, but adds structure
For very small apps, it may feel verbose. For systems expected to grow, it becomes a strong advantage.
Kubernetes manifests add realism, but also operational overhead
That overhead is worth it when the goal is to show deployability and operational thinking.
Final thoughts
This project was built to show what modern Java backend engineering can look like when we combine:
- clean domain boundaries
- event-driven workflows
- practical local developer experience
- Kubernetes-ready deployment
It is not just about making services talk to each other.
It is about building systems that are:
- understandable
- maintainable
- deployable
- resilient enough to evolve
That is the type of backend work I enjoy most: turning architecture into something practical, explicit, and usable.
About the author
I'm a Full-Stack / Backend Engineer focused on Java, Quarkus, Spring Boot, microservices, cloud-native architectures, and software craftsmanship. I enjoy building systems that balance clean design, performance, and real-world deployability.
Let's connect
If you work on:
- Java microservices
- Quarkus
- Kafka
- DDD
- Kubernetes
- or backend architecture in general
I'd be happy to connect and exchange ideas.