Data DevSecOps

Environments – Monoliths Versus Microservices

05 AUGUST, 2021

by Alexander Fridman

In the beginning there was nothing. Then there was the monolith, though we used to simply call monoliths “software.”

Today we have two rival architectural types: monoliths and microservices. This post will explain what monoliths and microservice-based architectures are, the differences between the two, and the special considerations for each.

Monoliths

All software was a monolith until ten or so years ago. A monolith is basically software code that sits in a single repository that all developers work on. There can be more than one repository if, say, you have front-end code and back-end code. But we’re still talking about a monolith if all domain code is in one place. There’s nothing wrong with this model. It worked fine from the early years of the software industry in the 1950s until the early 2000s or so.

Things Change

In recent years things changed dramatically. Code bases grew to hundreds of thousands of lines of code per app. The number of developers working on each software project grew to hundreds. The number of daily code changes grew rapidly.

All of this resulted in major problems for the monolith model. Making a small change in one part of the application required recompiling and rebuilding the entire application, which takes time. Moreover, because everything lived in one large code base, bugs were introduced in different parts of the system. When a lot of developers touch the same code base, they can inadvertently step on each other's toes and introduce bugs through a lack of coordinated effort. Overall, the chance of introducing bugs grows as the number of parameters (code lines, developers, changes per day) increases.

There's one more thing to note here: making rapid changes to your code puts a strain on your infrastructure, and DevOps efforts require an infrastructure that can handle a massive code base.

Microservices

The microservices approach was developed to mitigate the issues described above. Instead of having one big monolith of code, the application is split into different microservices. Each is responsible for a small part of the business logic such as registration, login, billing, and invoices. Each sits in a separate repository and is deployed individually. The various microservices usually communicate using RESTful or GraphQL APIs. This separation of concerns, so to speak, minimizes the risk of introducing bugs. Since each microservice handles only a specific area of the application, we can assign dedicated teams to work on them.
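
To make this concrete, here is a minimal sketch of what one such microservice might look like: a hypothetical registration service exposing a single REST endpoint. The framework choice (Flask), route, payload fields, and port are illustrative assumptions, not something prescribed by this post.

    # Minimal sketch of a hypothetical "registration" microservice (Flask).
    # The route, payload fields, and port are illustrative assumptions.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/register", methods=["POST"])
    def register():
        payload = request.get_json(silent=True) or {}
        email = payload.get("email")
        if not email:
            return jsonify({"error": "email is required"}), 400
        # A real service would persist the user and perhaps publish an event;
        # here we just return a confirmation.
        return jsonify({"status": "registered", "email": email}), 201

    if __name__ == "__main__":
        app.run(port=5001)

Because this service owns only registration, the team maintaining it can change and deploy it without touching billing or invoicing code.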

In addition, this architecture enables rapid and frequent deployments. Since we usually need to deploy only a few microservices at a time, or in some cases only one, we deploy only a fraction of our code, which takes a fraction of the time. It’s much more efficient than running a pipeline of unit tests, integration tests, and so on for the whole application.
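
As a rough illustration of that point, a per-service pipeline only has to build, test, and ship its own small code base. The commands below are a sketch; the test path, image tag, and deploy.sh script are hypothetical.

    # Sketch of a per-service pipeline: each microservice runs only its own
    # (much smaller) test suite, build, and deployment, independently of the rest.
    # The test path, image tag, and deploy script are hypothetical.
    import subprocess

    def run_pipeline(service: str) -> None:
        subprocess.run(["pytest", "tests/"], check=True)  # this service's tests only
        subprocess.run(["docker", "build", "-t", f"{service}:latest", "."], check=True)
        subprocess.run(["./deploy.sh", service], check=True)  # hypothetical deploy step

    run_pipeline("billing")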

Containers

The rapid adoption of containers as a deployment vehicle also contributed to the adoption of microservices. Although you can deploy microservices on standard virtual machines, containers suit microservices perfectly: they're small, disposable “envelopes” that typically run a single process and can be created and destroyed easily and rapidly. Each microservice fits into a container image and can be scaled as needed. For instance, if we face a heavy load on the registration side of the app, we only need to scale the relevant containers instead of the entire application, unlike with virtual machines. This can reduce infrastructure costs substantially.
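
For instance, if the registration microservice runs as a container deployment on Kubernetes, only its replica count has to grow under load. The deployment name, namespace, and replica count below are assumptions for the sketch.

    # Sketch: scale only the "registration" containers, leaving the rest of the app alone.
    # Assumes the services run on Kubernetes and kubectl is configured; names are illustrative.
    import subprocess

    def scale_service(deployment: str, replicas: int, namespace: str = "default") -> None:
        subprocess.run(
            ["kubectl", "scale", f"deployment/{deployment}",
             f"--replicas={replicas}", "-n", namespace],
            check=True,
        )

    # Heavy load on registrations? Scale just that microservice.
    scale_service("registration", replicas=5)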

Microservices And Monolith Considerations

Monolith Considerations

Managing a monolith is simpler in some respects, but it can be challenging in others. Let's discuss these considerations in more detail.

Infrastructure Costs

Since each monolith consists of many thousands of lines of code, the source repositories are heavier, which in turn requires more hardware to store them. In addition, if we want to scale, we have to scale the whole application; we can't load balance just parts of it. This can result in spawning a lot of large instances on expensive machines. Deploying such software in a CI/CD pipeline also requires substantial hardware resources because each build can take several hours to complete.

Lack of Agility

Since monoliths contain a lot of legacy code, introducing changes becomes more and more time consuming. And since the large number of developers (usually hundreds on very large projects) introduces a lot of additional bugs, it takes longer to create end-to-end tests and fix bugs, time that's spent at the expense of creating new features.

Easier to Manage

Since a monolith has no separate pieces that need to be kept in sync, this might be the right approach for small startups that want to ship a product quickly. Likewise, if your projects are relatively small and well defined, you can probably stick with the monolith architecture.

Microservices Considerations

The flexibility and agility that microservices provide come at a price. Splitting your application can result in the need to manage hundreds of microservices, each with its own repository. Even understanding where code is located can be a challenge.

Synchronizing microservices can become problematic as well, and we need to take the following into consideration when doing so (a short illustrative sketch follows these lists):

Security

  • Determining which microservices can talk with each other
  • Preventing unauthorized access to the various microservice APIs

Deployments

  • Orchestrating daily deployments of dozens of containers
  • Saving each container’s version history
  • Troubleshooting failed deployments

Communication

  • Making sure that each microservice has the right privileges (authentication tokens for instance) to access an API endpoint
  • Being sure that each microservice knows the URL of the other microservices (microservice discovery)
  • Taking into account the fact that Internet communication can be slow, and it can take time to receive a response from a remote microservice
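
Here is the sketch mentioned above: one microservice (billing) calling another (invoices), with the callee's URL taken from configuration as a stand-in for service discovery, a bearer token to guard the API, and a timeout because remote calls can be slow. The environment variable names, endpoint path, and token handling are illustrative assumptions.

    # Sketch: the "billing" service calling the "invoices" service.
    # Discovery, authentication, and slow-network handling are made explicit.
    # Environment variable names and the endpoint path are illustrative assumptions.
    import os
    import requests

    INVOICES_URL = os.environ.get("INVOICES_SERVICE_URL", "http://invoices:8080")  # discovery stand-in
    SERVICE_TOKEN = os.environ.get("SERVICE_TOKEN", "")  # credential for service-to-service auth

    def fetch_invoice(invoice_id: str) -> dict:
        response = requests.get(
            f"{INVOICES_URL}/invoices/{invoice_id}",
            headers={"Authorization": f"Bearer {SERVICE_TOKEN}"},  # block unauthorized access
            timeout=3,  # remote calls can be slow; fail fast instead of hanging
        )
        response.raise_for_status()
        return response.json()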

All these challenges stem from the fact that we've split our application into pieces. A tool that provides visibility into what's going on is critical to our ability to manage all of this.

How many microservices do we have? How many microservices are deployed? What is the status of each deployed microservice? Do all the microservices adhere to a unified compliance policy?

There are many tools that can answer some of these questions, but enov8's Test Environment Platform is a holistic one that provides answers to all of them. It can save you a lot of time that might otherwise be wasted scrutinizing information from different sources.

Conclusion

Both monoliths and microservices are valid models for building and deploying software, and each has its own pros and cons. The rule of thumb that in the software world there are no perfect solutions, only trade-offs, applies here. Survey your company's needs and use cases before choosing which approach will work best for you.

Post Author

This post was written by Alexander Fridman. Alexander is a veteran in the software industry with over 11 years of experience. He worked his way up the corporate ladder and has held the positions of Senior Software Developer, Team Leader, Software Architect, and CTO. Alexander is experienced in frontend development and DevOps, but he specializes in backend development.

 
