Extracting data from SAP in a plug-and-play way with Amazon AppFlow
04 December 2024 - 6 min. read
Mehmed Dourmouch
DevOps Engineer
Containers, lambdas, and microservices are some of modern software development's most popular and important concepts. However, developers and architects often misunderstand or misuse these terms.
Sometimes, we are told that a project we're developing relies on a microservices architecture. In reality, it's only a bunch of tightly coupled little monoliths in containers.
In this article, we'll make things clear about microservices, the computational objects we can use to implement a service-oriented architecture, and the AWS-managed services that will ease our task.
Unfortunately, things are not this easy. Microservices must adhere to strict rules and have clear definitions. Let's summarize and discuss them very briefly.
Microservices should be:
- independently deployable and independently scalable;
- loosely coupled, communicating only through well-defined APIs;
- organized around a single business capability;
- owners of their own data and state;
- small enough to be built and operated by a single team.
So, what about Containers? Let's see the principles of container design:
- single concern: each container addresses one concern and does it well;
- self-containment: the image contains everything the application needs to run;
- image immutability: the same image runs unchanged across environments;
- process disposability: containers can be stopped and replaced at any moment;
- high observability: containers expose health checks, logs, and metrics;
- runtime confinement: containers declare and respect their resource limits.
As you can see, these definitions and concepts overlap in some places (and we didn't even mention Lambda!), but the devil is in the details: using a specific technology doesn't mean the architecture design will automatically have all the properties we expect.
In fact, containers offer a way to package and execute software in an isolated and portable environment. At the same time, a microservices approach helps to divide a complex problem into multiple simpler, loosely coupled services that communicate using APIs and can be managed by different teams.
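To make the distinction concrete, here is a minimal sketch (in Python, with hypothetical service and data names) of a single-purpose service that owns its data and exposes it only through an HTTP API; packaged in a container image, it could be deployed and scaled independently of its consumers.

```python
# Minimal sketch of a single-purpose "inventory" microservice (hypothetical names):
# it owns its own data and exposes it only through an HTTP API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# The service's private data store; other services must go through the API.
INVENTORY = {"sku-001": 42, "sku-002": 7}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /inventory/sku-001
        sku = self.path.rsplit("/", 1)[-1]
        if sku in INVENTORY:
            body = json.dumps({"sku": sku, "quantity": INVENTORY[sku]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "not found"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), InventoryHandler).serve_forever()
```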
With these key concepts in mind, let's dive deeper into the use of these technologies.
Containers can help with scalability, portability, and resource efficiency but also require careful orchestration, monitoring, and security.
Lambdas can help with agility, cost-effectiveness, and scalability, but troubleshooting performance issues and other problems can be challenging due to the complexity of the implementation.
Microservices can help with modularity, flexibility, and fault tolerance but also introduce network latency, data consistency, and testing issues.
Here are some examples of containers and Lambdas that cannot be considered microservices:
- a container that simply wraps an entire monolithic application;
- a Lambda function that reads and writes directly into another service's database;
- a group of containers that share the same data model and can only be deployed together.
There are a lot of examples, and you surely can think of something you see in your everyday life.
We will not focus on whether a microservice-oriented architecture is better, or for which use cases, because our aim is to ease our lives while managing the infrastructure that will run our applications.
Remember that when we use containers and lambdas to implement a microservice strategy, we need to add at least three components to our architecture: a runtime, an orchestrator, and a discovery service.
As usual, AWS eases our lives because it offers different managed services to choose from; let's see them.
Compute is the most obvious part. We can use Amazon EC2, Fargate (for ECS or EKS), and AWS Lambda. EC2 seems counterintuitive, but microservices are about design, not technology.
Lambda and Fargate offer serverless computing platforms, so their billing is based solely on usage. They are perfect companions for a microservices approach: if a Lambda-based microservice is rarely invoked, its cost will be marginal.
Using Fargate and Lambda also helps adapt to usage patterns: horizontal resource scaling using multiple small compute units optimizes scalability.
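As a sketch of how small these compute units can be, here is a hypothetical Lambda handler for an order-lookup microservice behind an API Gateway proxy integration (the table name, event shape, and field names are assumptions, not a reference implementation).

```python
# Sketch of a Lambda handler for a hypothetical "get order" microservice,
# invoked through an API Gateway proxy integration (event shape assumed).
import json
import os

import boto3

# Each microservice owns its own data store; the table name comes from configuration.
table = boto3.resource("dynamodb").Table(os.environ.get("ORDERS_TABLE", "orders"))

def handler(event, context):
    order_id = (event.get("pathParameters") or {}).get("orderId")
    item = table.get_item(Key={"orderId": order_id}).get("Item")
    return {
        "statusCode": 200 if item else 404,
        "headers": {"Content-Type": "application/json"},
        # default=str keeps DynamoDB Decimal values JSON-serializable.
        "body": json.dumps(item or {"error": "order not found"}, default=str),
    }
```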
Orchestration is a critical component of a microservices architecture because it can automate the deployment, scaling, coordination, and fault tolerance of microservices, also managing the service lifecycle and execution flows.
ECS and EKS (on EC2 and Fargate) offer freedom of choice for the Container World and take away management effort; API Gateway and Step Functions help with the execution and integration of Lambdas and other AWS services.
AWS App Mesh can also be used to handle routing and authorization between services, adding an extra layer of observability.
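For example, a small execution flow between two Lambda-based microservices could be described with a Step Functions state machine; the sketch below uses placeholder ARNs and a hypothetical workflow name.

```python
# Sketch: a small Step Functions state machine (ASL) that chains two hypothetical
# Lambda-based microservices; ARNs and role are placeholders.
import json

import boto3

definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:validate-order",
            "Next": "ChargePayment",
        },
        "ChargePayment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:charge-payment",
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="order-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/order-workflow-role",
)
```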
In a dynamic environment, accessing everything without relying on hardcoded names or IP addresses is mandatory (this requirement applies to every cloud environment).
Microservices should also be able to automatically register their presence with a central registry that keeps track of what is where and facilitates communication.
The discovery service maintains a registry of the available service instances and provides a mechanism for service consumers to query and discover them, handling changes in the service instances due to scaling, failures, or upgrades.
EKS uses the built-in Kubernetes internal DNS, while ECS can rely on Amazon ECS Service Connect (tip: these two services can be used together).
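ECS Service Connect is built on top of AWS Cloud Map, so a registry-based lookup is always available; here is a minimal sketch of a consumer resolving a service instance at runtime instead of hardcoding an address (namespace and service names are hypothetical).

```python
# Sketch: looking up a healthy service instance at runtime via AWS Cloud Map
# (namespace and service names are hypothetical).
import boto3

servicediscovery = boto3.client("servicediscovery")

response = servicediscovery.discover_instances(
    NamespaceName="internal.example",
    ServiceName="inventory",
    HealthStatus="HEALTHY",
)

# Each instance carries attributes such as its IP and port; pick one to call.
for instance in response["Instances"]:
    attrs = instance["Attributes"]
    print(attrs.get("AWS_INSTANCE_IPV4"), attrs.get("AWS_INSTANCE_PORT"))
```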
To deploy microservices independently, you'll need pipelines. You will need to set up an efficient logging system to monitor containers. You'll need communication components (like queues, load balancers, and event triggers) to decouple your application.
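As an example of decoupling, here is a sketch of two services exchanging events through an SQS queue instead of calling each other directly (queue name and message shape are hypothetical).

```python
# Sketch: decoupling two services with an SQS queue (queue name is hypothetical).
import json

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="order-events")["QueueUrl"]

# Producer side: the order service publishes an event and moves on.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"orderId": "42", "status": "CREATED"}),
)

# Consumer side: the shipping service polls for work at its own pace.
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=10
)
for message in messages.get("Messages", []):
    print(json.loads(message["Body"]))
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```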
All these additional components will surely increase the complexity of your architecture, so choose carefully: as we all know, sometimes we want to use a specific technology, bending the business case and design to our imagination or curiosity. However, choosing the design according to the desired technology is dangerous and can lead a project to a spectacular failure, because the wrong design will impact the availability, the manageability, and, thus, the cost of owning the solution.
As always, there's no perfect solution: some use cases are a perfect fit for microservices, while, sometimes, breaking every component down into separate services only increases the overall complexity of a project: do you really need a dedicated user registration service for a local pet store's e-commerce site?
Starting with a monolith to speed up development and ship a product faster could be a better approach than missing the right time to bring a solution to market.
Which use case are you getting your hands dirty with? Are you maintaining a legacy monolith, trying to break it down, or designing a shiny new system from scratch? Let us know in the comments!
(If you are considering refactoring a monolithic architecture into microservices, this article by my colleagues Alessandro and Simone can clarify things).
Good Luck with your infrastructure!
Proud2beCloud is a blog by beSharp, an Italian APN Premier Consulting Partner expert in designing, implementing, and managing complex Cloud infrastructures and advanced services on AWS. Before being writers, we are Cloud Experts working daily with AWS services since 2007. We are hungry readers, innovative builders, and gem-seekers. On Proud2beCloud, we regularly share our best AWS pro tips, configuration insights, in-depth news, tips&tricks, how-tos, and many other resources. Take part in the discussion!