PHP: Symfony Demo meets Modular, Microservice-ready Architecture - Part 1


I've created a Symfony 6-based Application that can serve as an Enterprise Architecture reference for anyone who's planning to build Large Scale Applications with Symfony. It uses a similar 'Blog' theme as the official Symfony Demo and can be found here.

In this part of the Article I'm covering some of the theoretical foundations that led me to create this project in the form it exists in.

In the next part I'll go over the actual code and Module-level Architectural decisions.


Symfony is one of the foundations of modern PHP development. It has been my absolute favorite tool of choice ever since I discovered the wonders of using Frameworks. I know that there are lots of people who prefer Laravel, Zend or Yii, but I am (and probably will be, forever and ever) a die-hard Symfony fan.

After an almost 7-year break from PHP development, I was able to just install the Skeleton App and start coding. I wanted to check out the current best practices for developing Web Apps with Symfony, so I naturally went for the Symfony Demo. Unfortunately, when I started digging through the source code, I grew more and more confused. It seemed to me that, for the Demo app, time had stopped. Yes, it uses the latest stable version of the Framework (5.3 as of 11/2021). But its internal structure and design choices were exactly as I remembered from 2012. Sure, it showcases some really cool and mature features like Twig or Symfony Forms, but the world has, in major part, moved on.

Don't get me wrong. Twig is the best templating engine ever created. In the Java world I've always used JTwig and/or Pebble Templates - the ports of Twig. But I was using them to generate pretty e-mails or to create some type of Expression Engine. Not for serving UI content! The UI is now the domain of Single Page Apps written in Angular/React/Vue/some other fancy Framework.

The same goes for Symfony Forms. I know that there are tons of legacy Apps that still rely on them, and I was always impressed by how they work. Today's World is, however, a World of REST Back-ends that take JSON Requests and respond with JSON Responses.

After analyzing the state of the Symfony Demo App I decided it would be a pretty good idea to create an alternative. Something that takes on the same basic idea (a Blog), but applies all the modern tech and architecture decisions:

  • REST API + Separate Client App
  • Modular, Micro-service ready Architecture
  • Best design and testing practices I know

Size Matters

OK, we've established that the Demo App uses some dated approaches. But why go with modularization? Why even mention micro-services?

PHP started as a personal project of one Rasmus Lerdorf back in 1994. The author was sick of writing CGI scripts and wanted some automation. For many years it was then used as an entry-level structural programming language for simple websites. But in 2021 PHP is widely used not only for simple, small Applications, but also for large, Enterprise-grade platforms handling tons of data every day.

The architecture that may have worked for a small App maintained by a small team of developers (or even a single dev) cannot be successfully applied to a large system without severe performance, development and maintenance penalties. The Symfony Demo App has some shortcuts that aim to make the developer's life easy, like:

  • There is no modularization of any sort
  • There is no layer separation of any kind (Controllers directly using the ORM, etc.)
  • No proper separation between Unit and Integration tests 
  • Traditional, stateful, session-based security is used

I understand that the Demo is a small Application. But at the same time its Readme states: "The "Symfony Demo Application" is a reference application created to show how to develop applications following the Symfony Best Practices." Does it mean that its overall Architecture and design choices are applicable to all Symfony-based Applications, no matter the size?

Small Apps and Big Enterprise Applications have surprisingly little in common. Yes, they can be built with the same tools, but their internal structure has to reflect the target code-base size, Dev team size and performance requirements. Cramming everything into one "Super-module" and ignoring all the necessary layers of abstraction will, sooner or later, lead to an unmaintainable nightmare. In my opinion Symfony lacks a good Demo that shows a possible Architecture for big-scale Applications. But how do we define Scale?

The Scalability Duality

What do we think when someone says that an Application is scalable? 9 out of 10 times we think about performance. An Application is traditionally considered scalable if we can increase the load (number of concurrent users, number of processed rows/tasks, etc.) without any service interruptions.

However, there is also a different meaning of scalability. One that is often completely overlooked: the scalability of the development team. Many project/development managers, when asked "how do we speed up the development of an Application", will immediately respond: "It's simple, we need to add some new Developers to the project". Sound familiar?

We all know that the probability of increasing the productivity of a Development Team 9 times by adding 9 times more Developers is more or less equal to speeding up the birth of a baby 9 times by impregnating 9 women.

In reality, there are always multiple synchronizations and mutual exclusions when working on the same code-base. The amount of such phenomena increases drastically with the addition of every new Developer to the Team. Devs will start getting in each other's way, they will have to wait for one another, they will have different understandings of the business requirements, etc.

How to solve this problem? Well, as it happens we already have a solution. Modularization. Good Modularization.

Modules vs Micro-services

How does a (well implemented) Micro-service based Architecture solve the performance scalability problem? By ensuring the separate deployability of each Service. We can deploy the more heavily loaded Services to more Instances and achieve more processing power.

How exactly do (well implemented) Modules solve the development team scalability problem? By applying a Divide and Conquer strategy. We split the development team into smaller teams/groups. Each micro-team/group works on only one Module. Communication between Modules should be based on API Contracts. Each micro-team can establish its own boundaries without having to know the inner workings of other Modules.

What is the connection between Modules and Micro-services then? In a good Software Architecture the only difference between a Micro-service and a Module should be the method of deployment. In other words: Micro-services are Modules that can be deployed independently.

This simple, short statement has a ton of architectural consequences. It means that:

  • we can start the development of our Application as a Modular Monolith and cut out the Micro-services later, when we know what the load of each Module is
  • a Module should be ready to be cut out from the main App at any given time, without any significant change in the way it functions and interacts with the rest of the System.
    • each Module has to have its own, separate Database. It cannot use the Databases of any other Modules. 
    • All communication between Modules has to be realized via a well established, high level API. Integration via Database is strictly forbidden (unless the Database serves as a Transport Layer for some higher abstraction, for example a Message Bus).
      • Modules should generally be unaware of each other - the communication should not couple them into dependency chains
    • Every Module should be able to Authenticate and Authorize either in a Stateless way or via an API Gateway. Stateful, Session-based Security is not a good solution.
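To make the "API Contracts" rule above more concrete, here's a minimal sketch of what a high-level contract between Modules could look like (all names here are hypothetical, not taken from the actual project):

```php
<?php
declare(strict_types=1);

// Hypothetical contract exposed by the Tags Module to other Modules.
// Consumers depend only on this interface - never on the Tags Module's
// Entities, Repositories or Database.
interface TagsApi
{
    /** @param string[] $tags */
    public function assignTagsToPost(string $postId, array $tags): void;

    /** @return string[] tag names assigned to the given Post */
    public function getTagsForPost(string $postId): array;
}

// A trivial in-memory implementation, just to show the shape of the contract.
// In a Modular Monolith this could be a Service; once the Module is cut out,
// the same interface can be backed by an HTTP or Message Bus client.
final class InMemoryTagsApi implements TagsApi
{
    /** @var array<string, string[]> */
    private array $tagsByPost = [];

    public function assignTagsToPost(string $postId, array $tags): void
    {
        $this->tagsByPost[$postId] = $tags;
    }

    public function getTagsForPost(string $postId): array
    {
        return $this->tagsByPost[$postId] ?? [];
    }
}
```

The key point is that swapping the implementation (in-process vs. remote) doesn't change the consumer's code - which is exactly what "Micro-services are Modules that can be deployed independently" requires.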

Unfortunately, as with everything in the world, there's a catch here. No matter what communication medium we choose - REST, Kafka, RabbitMQ, a Database-backed Queue - it will be turtle-slow in comparison to just performing a simple SQL Join. Network communication isn't free, and going into a Distributed Domain will bring problems that we've never had to solve before, like:

  • Infrastructure outages,
  • Eventual Consistency,
  • Increased network traffic.

How to minimize the severity of all those issues?

Example Scenario

OK, so let's set up an example. The Symfony Demo provides a simple Blog engine with 4 main Entities:

  • Post
  • Comment
  • Tag
  • User

In the original Demo they are located within a single Database and are interconnected via Database Constraints and SQL Join-based Queries. If I were to make a simple Blog App, I'd probably do the same.

But my ultimate goal here is to reuse a similar Domain to build an Example of a Modular Application (respecting all the rules about Modules that I've stated above).

So let's say that each of the original Entities has its own Bounded Context with specific features:

  • Posts
    • create Post
    • update Post
    • delete Post
    • find all Posts
        • with corresponding Tags list for each Post
        • with count of Comments for each Post
        • with pagination
    • find Post by ID
      • with list of Tags
      • with all the Comments
  • Comments
    • create a Comment for Post
      • with the ability to reply to an existing Comment
    • list all Latest Comments for all Posts 
      • with a corresponding Post title for each Comment
      • with pagination
  • Tags
    • list all Tags with Post counts for each Tag
    • list all Posts for a given Tag 
      • with corresponding Tags list for each Post
      • with count of Comments for each Post
      • with pagination
  • Security
    • create User
    • rename User
    • change User password

Looks like we're making 4 Modules. Each Module gets its own, private Database and will be responsible for its share of the App's features. Each Module will also serve as the Data Owner of its part of the Data.

"Can you hear me now?"

At first glance the features are very simple. If they were to be implemented in a single Module, we could implement them very quickly. But now, when we are imposing all of the Modularization restrictions, things are starting to get complicated. For example:

  • creating a new Post will lead to sending this Post's Tags to the Tags Module
  • listing the Posts will require fetching the list of Tags from the Tags Module for each Post on the List
  • posting a Comment will require the Comments Module to verify whether the Post that we're commenting on even exists.
  • listing the Latest Comments with Post Titles will require the Comments Module to internally fetch a list of Posts for the collection of Post IDs and then programmatically join their titles with the Comments 
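The last point is worth spelling out: joining Post titles onto Comments in application code, instead of with an SQL Join, could look roughly like this (a sketch with made-up data shapes):

```php
<?php
declare(strict_types=1);

/**
 * Programmatically joins Post titles onto Comments - what the Comments
 * Module would have to do if it fetched titles from the Posts Module
 * instead of relying on an SQL Join.
 *
 * @param array<int, array{id: string, postId: string, text: string}> $comments
 * @param array<string, string> $titlesByPostId map of Post ID => Post title
 * @return array<int, array{id: string, postId: string, text: string, postTitle: string}>
 */
function joinPostTitles(array $comments, array $titlesByPostId): array
{
    return array_map(
        // array union: keeps the Comment's fields and appends the title
        fn (array $c) => $c + ['postTitle' => $titlesByPostId[$c['postId']] ?? '(unknown post)'],
        $comments
    );
}
```

Simple enough on its own - the real cost is in obtaining `$titlesByPostId`, which is exactly where the questions below come from.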

Nearly every action/feature will require some kind of inter-modular communication. And we will start to ask questions:

  • Should the Modules ask one another for data every time anyone makes a Request to any of them?
    • Won't that create a lot of additional traffic on the network?
    • Is it a good idea to reconcile two or more Data Sets programmatically?
    • Won't it slow the whole Response down to unacceptable values?
  • What if one of the Modules has an outage due to some infrastructure issues?
    • should we create Timeouts and fail the original Request?
    • should we have Retry logic in case of Request failures?
      • How many Retries should there be until we give up?
      • Should the Retries use an incremental back-off multiplier in case of subsequent failures?
        • By what factor should we increment the multiplier?
        • what should be the maximum time between Retries?
    • should we employ a Circuit Breaker pattern to fail fast after a couple of consecutive communication attempts fail?
      • what should be the Response that we send upstream?
      • is there any way we can gracefully fall back?
  • should we maybe use some kind of Caching mechanism to decrease the network load?
    • how would this Cache be invalidated?
  • What if a Transaction in one Module completes successfully, but other Modules fail to store the data?
    • should we have verification and retry mechanisms?

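To make the Retry questions above concrete, here's a minimal sketch of Retry logic with an incremental back-off. Every constant here is an arbitrary example, not a recommendation - and, as argued below, the real answer is to design so that we don't need this machinery on the Request path at all:

```php
<?php
declare(strict_types=1);

/**
 * Retries an operation with an incremental (multiplicative) back-off
 * between attempts. A sketch; production code would also need jitter,
 * a maximum delay cap and probably a Circuit Breaker in front of it.
 *
 * @template T
 * @param callable(): T $operation
 * @return T
 */
function retryWithBackoff(
    callable $operation,
    int $maxAttempts = 3,
    int $baseDelayMs = 100,
    float $factor = 2.0
) {
    $delayMs = $baseDelayMs;
    for ($attempt = 1; ; $attempt++) {
        try {
            return $operation();
        } catch (RuntimeException $e) {
            if ($attempt >= $maxAttempts) {
                throw $e; // give up and let the caller fail the original Request
            }
            usleep((int) ($delayMs * 1000)); // wait before the next attempt
            $delayMs = (int) ($delayMs * $factor); // incremental back-off
        }
    }
}
```

Notice how many policy decisions (attempt count, base delay, factor) leak into every call site - multiplied across every inter-modular call, this is exactly the complexity the questions above are hinting at.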
The above way of thinking may seem reasonable on the surface, as it is a natural extension of the 'extract method/class' refactoring techniques: "I'll just extract this logic into a separate little App and be done with it". Unfortunately this approach is a trap that will, without a shadow of a doubt, lead to one of the worst nightmares in Distributed Computing - Cascading Failures.

To create a truly successful, Modular architecture, we need to change our thinking paradigms: 

A Distributed System is Asynchronous by Nature. Don't try to force it to act as a Synchronous one. 

Whenever we're trying to fetch data from another System, in real time, to do our own work, we are breaking that Rule. Every time we're breaking the Rule, we will have to face all of the issues above. And the System will, sooner or later, fail in Production.

What's the alternative?

Event Driven Data Locality

What if every Module were to keep a local copy of the data it needs to function, in a format that is best suited for that particular Module? Sounds kinda insane, doesn't it? We've all gone to Computer Science classes and know a thing or two about how bad Data de-normalization is. But aren't we de-normalizing our Data already? What about Cache? What about Full Text Search engines like Elasticsearch? What about things like GraphQL? Aren't those examples of Data de-normalization that we're using on a day-to-day basis? Maybe we should make our peace with the fact that truly normalized Data doesn't belong in today's distributed landscape. Instead, we should leverage the underlying potential of de-normalization to solve other issues.

Every dependent Module may need the same Data to perform completely different operations. Hence, there is a very high probability that every Module will need that Data in a slightly different format. On one hand, we can't expect the Source Module to provide its Data in all the shapes and forms that its Clients expect. On the other hand, we can't expect Clients to fetch the needed Data from the Source Module and transform it every time they want to process something.

The only viable solution is to store the data locally. But Cache, in the traditional form of a transparent layer that stores the data locally, has its own huge set of problems. The biggest one is always:

  • how can we know if the Cache has become stale? 
    • Should it be time-based? 
    • Should it expire some time after creation? 
    • Or maybe after last access? 
    • Should there be a Scheduler that periodically checks for validity with the Source?
    • Or maybe should it be event driven?

But maybe, instead of using a problematic "invisible" Caching layer, we should use a type of Data Storage that's an integral part of every Module's Data Model: a Read Model. Read Models and Cache have some similarities in the sense that they both store duplicated, de-normalized Data somewhere where it's easy, convenient and fast to obtain. There are, however, some crucial differences:

  • Cache traditionally resides in the layer above the Data Access layer
    • Read Model is an integral part of the Data Access layer
  • Cache invalidation logic is usually time-based
    • Read Model update logic is always event-based
  • Cache, in its lazy nature, needs to be warmed up to be usable
    • Read Models are always ready to go
  • Distributed Cache needs additional infrastructure and Local Cache is often inefficient in a distributed environment
    • Read Models are usually stored in the same Database as the core Data Store
  • Cache is not a part of the Application Code and doesn't have to be maintained
    • Read Models are a part of the Application and have to be maintained like any other part of the Code
  • Cache, being "invisible" always reflects the latest Data Format available
    • A Read Model reflects the Data Model it's coded for

Read Models do come with additional work, maintenance and their own caveats, but in general are more stable, predictable and way easier to manage, than Cache. They are a part of Architecture, not a band-aid for badly performing processes.
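As an illustration, here's a minimal sketch of a Read Model that is updated purely by Events (class and Event names are made up for this example, not taken from the actual project):

```php
<?php
declare(strict_types=1);

// An Event emitted by the Posts Module (the Process Owner) whenever
// a Post is created.
final class PostCreated
{
    public function __construct(
        public readonly string $postId,
        public readonly string $title,
    ) {}
}

// A Read Model owned by the Comments Module: a local, de-normalized copy
// of Post titles. It is part of the Module's Data Access layer and is
// updated exclusively by consuming Events - never by querying the
// Posts Module at Request time.
final class PostTitleReadModel
{
    /** @var array<string, string> postId => title */
    private array $titles = [];

    // Write side: called by the Module's event consumer.
    public function onPostCreated(PostCreated $event): void
    {
        $this->titles[$event->postId] = $event->title;
    }

    // Read side: always "warm", no remote call needed.
    public function titleOf(string $postId): ?string
    {
        return $this->titles[$postId] ?? null;
    }
}
```

In a real Module the `$titles` array would of course be a table in the Module's own Database, but the shape of the idea is the same: the Read Model is explicit Application code, not an invisible layer.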

Additionally, they play very well with DDD's concept of Bounded Contexts. In a nutshell: a single Object can have different Shapes, depending on the viewing angle. In our case, different Modules can require different sets of properties from a Post Object. Another example would be a hypothetical e-commerce App - an Order placed by a Customer will have different meanings for the Payment Module, the Shipment Module and the Logistics Module.

Isn't that premature optimization? - you may ask. No. There's a big difference between premature optimization and performance by design. Arbitrarily adding Cache to some functionality "because I think it will make it run faster" would definitely qualify as premature optimization. Creating a proper Read Model is performance by design.

Who owns What?

Remember how, somewhere above, I said that each Module will be a Data Owner? Data ownership used to be a big deal. Traditionally we would tend to (consciously or not) define Data Owners and build logic around them. Now, in the distributed world, we have to start thinking in a new category: Process Ownership. Each Module is the owner of a Process:

  • The Posts Module is responsible for managing Posts. If it needs to, it can store Data that traditionally would belong to the Tags and/or Comments Modules. It has to get the job done.
  • The Comments Module is responsible for managing Comments. If it needs some Data (like checking whether a Post exists under some ID, or getting the Title of the Post that a Comment belongs to) it shouldn't have to go and ask for this data. It should have this Data locally stored and ready to go.

The above revelations have led me to the following conclusion: every Module should have all the data it needs to operate stored locally, in a Read Model. Furthermore, it shouldn't ever have to ask for this data. It should receive it in the form of Events.

You know what's great about Events? They can be sent and handled Asynchronously. Remember how I stated that we shouldn't force an Asynchronous System to act Synchronously? This is exactly how we achieve it. Through Events. Every time a Process Owner has performed some work that led to Data Changes, it should send out Events notifying all other Systems that there is new Data in the System. All the interested Consumers should be able to subscribe to those Events and make corresponding changes in their local copies of the Data.
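In a Symfony Application, asynchronous Event delivery like this is typically wired up with the Messenger component. A sketch of the routing configuration could look like this (the transport DSN and class names are placeholders, not taken from the actual project):

```yaml
# config/packages/messenger.yaml (sketch)
framework:
    messenger:
        transports:
            # placeholder DSN; could point at AMQP, Redis, Doctrine, etc.
            async_events: '%env(MESSENGER_TRANSPORT_DSN)%'
        routing:
            # route the integration Event to the asynchronous transport;
            # consumers process it from the queue, decoupled in time
            # from the original Request
            'App\Posts\Event\PostCreated': async_events
```

With a durable transport and retry configuration on the consumer side, the "proper re-delivery strategy" mentioned below comes largely for free.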

This is the only solution that:

  • will be optimal for Network Traffic
    • Events will be sent once per every Data Change
    • In most systems the number of Data Changes is significantly smaller than the number of Data Reads
  • will ensure that Modules can do their jobs even in case of partial System outage
    • because every Module will have all of the needed Data, it won't have to ask for it in real time
    • the Module, as a result of its work, will send out Events that, with a proper re-delivery strategy, will be consumed by all interested parties, including the Modules that were experiencing the outage
    • we won't have to worry about setting Timeouts, performing Retries and configuring Circuit Breakers
    • We won't face dreaded Cascading Failures that, in the worst case scenario, can lead to a DDoS-like, widespread System failure
  • will make de-normalized local data storage (in the form of Read Models) an integral, systematic, explicit part of the Architecture
    • we won't have to worry about Cache management anymore


As they say, there's No Free Lunch. This approach comes with two catches that may potentially be off-putting to you:

  • The initial amount of code and work needed to achieve the same Feature Set as with the traditional approach will be significantly larger.
    • This architecture is intended as a good starting point for Systems that will eventually be developed and managed by a number of independent Development Teams
    • If you expect the System to remain small and to be developed by a small number of people, this approach may be too much for you
  • The distributed nature of the Architecture will break traditional ACID Transactions and throw you to the world of Eventual Consistency
    • There are some financial/banking fields that cannot accept EC due to the importance of the processed data. In general though, EC can be accepted for most applications.
    • Having Eventual Consistency will most probably compel you to create additional logic capable of Base-lining the System.
      • Ideally each Process Owner should be able to periodically emit Base-lining Events that will allow all the dependent Modules to ensure the consistency of their local copies of the Data.
      • Each Module will change over time. That means Modules may start needing data they didn't need before. The Base-lining mechanism should provide a way for all dependent Modules to update their data according to their current needs.
      • Base-lining all the Data may become difficult for long-living Systems. You might want to employ an accurate Data Retention policy to back up and delete old data. You can also do a partial Base-line, for example for a defined period of Time (like the last week, the last month, etc.)
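A Base-lining mechanism can be as simple as the Process Owner re-emitting snapshots of its current Data. A sketch (names are illustrative; a real implementation would page through the Database and publish to a Message Bus):

```php
<?php
declare(strict_types=1);

// A snapshot Event carrying the current state of one Post, re-emitted
// during Base-lining so dependent Modules can reconcile their local copies.
final class PostSnapshot
{
    public function __construct(
        public readonly string $postId,
        public readonly string $title,
    ) {}
}

// The Process Owner's Base-liner: walks over all owned Data and publishes
// one snapshot Event per record via an injected publisher callback.
final class PostBaseliner
{
    /** @param callable(PostSnapshot): void $publish */
    public function __construct(private $publish) {}

    /** @param array<string, string> $allPosts map of postId => title */
    public function baseline(array $allPosts): void
    {
        foreach ($allPosts as $id => $title) {
            ($this->publish)(new PostSnapshot($id, $title));
        }
    }
}
```

Dependent Modules handle a `PostSnapshot` the same way they handle regular change Events - by upserting into their Read Models - so no separate reconciliation code path is needed.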


The question arises about End-to-End testing. Generally, in the Micro-services world there is no such thing. Even if we are creating a Modular Monolith, we won't be able to test the communication between the Modules end to end.

How can we then ensure that the Events sent by one Module can be properly consumed by other Modules? We can use Consumer-Driven Contracts to gain a pretty good degree of safety.
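A Consumer-Driven Contract can be as simple as a schema expectation, pinned down by the consumer and verified in the producer's test suite. A sketch (field names are made up for this example; dedicated tools like Pact formalize the same idea):

```php
<?php
declare(strict_types=1);

/**
 * The Comments Module's contract for the Posts Module's "post created"
 * Event payload: which fields must exist and what type they must have.
 * The producer runs this check against the Events it actually emits.
 *
 * @param array<string, mixed> $event decoded Event payload
 * @throws RuntimeException when the payload violates the contract
 */
function assertMatchesCommentsModuleContract(array $event): void
{
    $required = ['postId' => 'string', 'title' => 'string'];
    foreach ($required as $field => $type) {
        if (!array_key_exists($field, $event)) {
            throw new RuntimeException(
                "Missing field '{$field}' required by the Comments Module"
            );
        }
        if (gettype($event[$field]) !== $type) {
            throw new RuntimeException(
                "Field '{$field}' must be of type {$type}"
            );
        }
    }
}
```

Because each consumer contributes its own contract, a producer learns at build time - not in Production - when a payload change would break one of its consumers.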


What I've learned

Making a good Software Architecture is super hard, mainly because there's no immediate feedback and very limited tooling to measure its quality. It usually takes a long time to be able to truly evaluate whether it's good or not. With every new iteration I try to learn from what I previously did wrong and apply that knowledge to the next Application I'm building. I'm pretty sure I've been able to identify the most crucial weak points of all my previous designs. This iteration is the closest I've ever gotten to achieving a stable, flexible and maintainable Architecture that's going to allow both the Application and the Development Team to scale without any major issues.


Well, that was a long Post. And I haven't even begun describing the actual code. To those who withstood this amount of reading - thank you! I hope that it made sense to you, and I'd like to welcome you to read Part 2 - where I'll be talking about the actual Symfony Application.


What if I have an existing Production-grade Application that I want to migrate to Modules or Micro-services?

So far I've been lucky enough to implement my Architecture ideas in new, fresh, green-field Projects. Migrating the Architecture of a pre-existing Application that's already in Production would be a different story altogether, though not completely impossible. It would surely turn out to be a very difficult, multi-tiered operation. There is no way in the world anyone would be able to do it in one go.

The best way I can think of would be to slowly strip the old Application of its functions, extracting its parts into a new, Modular/Micro-service one. In order to do that, you might need to:

  1. Establish the boundaries of the Module you want to extract
  2. Define all the Data and find all the Data Sources it needs to do its job
  3. Make sure that those Data Sources are emitting proper Events with all the needed Data
    1. This may require additional refactoring of all connected parts of the System
    2. Depending on the Business Context, you might need to Baseline the new Module with all/partial historic Data
  4. Extract the Business Logic into the new Module/Micro-service
    1. it should start emitting Events that are related to the results of its work
  5. Make sure that the Events emitted by the new Module/Micro-service are being consumed by all the Logic that is/was dependent on the extracted Module
    1. Depending on the situation, you may want to run the newly created Module/Micro-service in a "parallel" mode with the old Logic, just to be sure that they both behave in the same way
    2. Ideally you'd want to have a Blue/Green deployment coupled with Canary Releases

The decision to even attempt such an endeavor would have to be evaluated against a concrete Business situation.

  • If the Product I'm taking care of had fierce competition, and that competition could easily outrun my App if I didn't bring any new, innovative features for an extended period of time - I'd rather wait for better times and focus on other means of stabilizing and maintaining the Application.
  • If, on the other hand, my App had a steady source of income and a good set of features, but I wanted to grow and bring new Clients on board - refactoring to a scalable, distributed solution would absolutely be the way to go.




