Year-end summary: A turning point for microservices

Mondo Workplace Updated on 2024-01-31

Microservices have long been considered the de facto standard for cloud-native application architectures, and now cloud giants like Amazon and Google are rethinking them.

Translated from "2023 Was a Turning Point for Microservices," a year-in-review article by Joab Jackson, a senior editor at The New Stack covering cloud-native computing and system operations. He has covered IT infrastructure and development for more than 25 years, including roles at IDG and Government Computer News.

Maybe there's something wrong with our understanding of microservices?

That's exactly the question raised by "Towards Modern Development of Cloud Applications" (PDF), a paper that a group of Google engineers (led by Google software engineer Michael Whittaker) presented in June at HotOS '23 (the 19th Workshop on Hot Topics in Operating Systems). As Whittaker et al. point out, microservices are mostly set up incorrectly as an architecture: they conflate logical boundaries (how the code is written) with physical boundaries (how it is deployed). That is where the problems begin. Instead, the Google engineers propose another approach: build your application as a logical monolith and hand it to an automated runtime that decides where to run workloads based on the application's needs and the available resources. With this approach, they were able to cut system latency by up to a factor of 15 and cost by up to a factor of 9.
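The idea of writing a logical monolith whose deployment is decided later can be sketched in a few lines. This is a hypothetical illustration, not Google's actual framework or API: components depend on each other through plain interfaces, and a tiny "runtime" decides at deploy time whether a call stays in-process or crosses a (simulated) serialization boundary.

```python
import json

class Catalog:
    """A logical component: just a class, with no network awareness."""
    def price(self, item: str) -> int:
        return {"book": 12, "pen": 2}[item]

class Checkout:
    def __init__(self, catalog):
        self.catalog = catalog          # a logical dependency, not a URL
    def total(self, items):
        return sum(self.catalog.price(i) for i in items)

class RemoteProxy:
    """Stands in for an RPC stub: serializes arguments the way a real
    runtime would if it placed the component on another machine."""
    def __init__(self, target):
        self.target = target
    def __getattr__(self, name):
        method = getattr(self.target, name)
        def call(*args):
            wire = json.dumps(args)          # the simulated "network" hop
            return method(*json.loads(wire))
        return call

def deploy(colocated: bool) -> Checkout:
    # The placement decision is an implementation detail of deployment,
    # not of the application code: Checkout is unchanged either way.
    catalog = Catalog()
    return Checkout(catalog if colocated else RemoteProxy(catalog))

app = deploy(colocated=True)
print(app.total(["book", "pen"]))   # same answer regardless of placement
```

The point of the sketch is that `Checkout` never knows whether `Catalog` is local or remote; flipping `colocated` changes the physical boundary without touching the logical one.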

"If people can start with orderly modularity, we can treat the deployment architecture as an implementation detail," said Kelsey Hightower, commenting on the work. A few months earlier, the engineering team at Amazon Prime Video had published a blog post explaining that a monolithic architecture outperformed its microservices and serverless approach, at least for its monitoring workload. In fact, Amazon cut the operating costs of that system by 90% by abandoning the microservices architecture.

For a generation of engineers and architects raised on the superiority of microservices, the assertion was truly staggering. "This article is a complete embarrassment for Amazon as a company. There is no internal coordination or communication at all," wrote Donnie Berkholz, an analyst who recently started his own industry analysis firm, Platify. "What's unique about this story is that Amazon was the epitome of service-oriented architecture," said David Heinemeier Hansson, creator of Ruby on Rails and co-founder of Basecamp. "Serverless, on the other hand, only makes things worse."

Amazon's engineers were tasked with monitoring the thousands of audio/video streams that Prime Video delivers to customers. Initially, this work was done by a set of distributed components coordinated by AWS Step Functions, a serverless orchestration service, together with AWS Lambda serverless functions. In theory, serverless should have let the team scale each service independently. In practice, at least for the way the team had implemented the components, they hit hard scaling limits at only 5% of the expected load. The cost of transferring data between the many components would also have made scaling to thousands of streams uneconomical. The team first tried optimizing individual components, but that did not lead to significant improvements. So they moved all the components into a single process, hosted on Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Container Service (Amazon ECS).
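The shape of that consolidation can be sketched abstractly. The stage names and logic below are hypothetical stand-ins, not Prime Video's actual pipeline: three monitoring stages that would each have been a separate orchestrated step, passing data through storage between them, become plain function calls sharing memory inside one process.

```python
# Hypothetical stages of a stream-monitoring pipeline. In a Step
# Functions design, each would be its own function with serialized
# hand-offs; in a single process they are ordinary calls.
def extract_frames(stream_chunk):       # was: step 1, its own service
    return [f"frame-{i}" for i in range(len(stream_chunk))]

def detect_defects(frames):             # was: step 2, its own service
    # Toy rule standing in for real defect detection.
    return [f for f in frames if f.endswith("0")]

def report(defects):                    # was: step 3, its own service
    return {"defect_count": len(defects)}

def monitor(stream_chunk):
    # In-process pipeline: no serialization, no per-step transfer cost.
    return report(detect_defects(extract_frames(stream_chunk)))

print(monitor("x" * 25))
```

The behavior is identical to the orchestrated version; what disappears is the per-step data movement whose cost dominated at scale.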

"Microservices and serverless components are tools that do work at high scale, but whether to use them over monoliths has to be decided on a case-by-case basis," the Amazon team concluded.

Arguably, the term "microservices" was first coined by Peter Rodgers in 2005, although he called them "micro web services." He gave a name to a concept that was gaining traction in the era of web services and service-oriented architecture (SOA).

"The main impetus behind 'micro web services' at the time was to split a single, large, 'monolithic' design into multiple independent component processes, making the codebase more granular and manageable," explains software engineer Amanda Bennett in a blog post. The concept gained widespread adoption in the decades that followed, especially in the era of cloud-native computing, and only in recent years has it begun to draw criticism in some quarters.

Software engineer Alexander Kainz contributed an excellent comparison of monolithic architectures and microservices to TNS.

In their paper, the Google engineers list some of the drawbacks of the microservices approach, including:

Performance: serializing data and sending it over the network to a remote service adds overhead and can become a bottleneck as the application grows more complex.

Understandability: in distributed systems, bugs are often difficult to track down because of the many interactions between microservices.

Management issues: it sounds like an advantage that different parts of the application can be updated on their own schedules. But it leaves developers managing a large number of binaries, each with its own release cadence, and makes it difficult to run end-to-end tests against locally running services.

Brittle APIs: APIs are the linchpin of microservice interoperability, so once a microservice is established, its API cannot be changed without breaking the other microservices that depend on it. As a result, an API can only be extended by adding more APIs, leading to bloat.
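The first drawback on the list is easy to make concrete. This is a toy illustration, not a benchmark of any real RPC stack: the "remote" call does the same work as the direct call, plus a JSON encode/decode round-trip standing in for the serialization a service boundary would require.

```python
import json
import timeit

payload = {"user": 42, "items": list(range(200))}

def in_process(p):
    # The actual work: trivially cheap once the data is in memory.
    return len(p["items"])

def via_wire(p):
    # Microservice-style hop: encode, "send", decode, then the same work.
    return in_process(json.loads(json.dumps(p)))

local = timeit.timeit(lambda: in_process(payload), number=2000)
wire = timeit.timeit(lambda: via_wire(payload), number=2000)
print(f"serialized call ~{wire / local:.0f}x the cost of a direct call")
```

Both paths return the same result; the difference is pure serialization overhead, which real systems pay on every cross-service call (plus network latency, which this sketch omits).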

A new type of microservice?

When The New Stack first covered the news about Amazon, many were quick to point out that the architecture used by the team wasn't exactly monolithic either.

"It's definitely not a story of going from microservices to a monolith," said Adrian Cockcroft, former vice president of Amazon Web Services and now a consultant at Nubank, in an interview with The New Stack. "I think part of the problem is mislabeling."

He points out that in many applications, especially in-house ones, development costs exceed runtime costs. In those cases, Step Functions makes a lot of sense for the development time it saves, but it can become expensive at high workload volumes.

"If you know you're going to end up executing it at a certain scale," Cockcroft said, "you're probably going to build it differently in the first place. So the question is: do you know how to do that, and do you know at what scale it's going to run?"

Google's approach solves this problem by simplifying the developer's job and letting the runtime infrastructure figure out the most cost-effective way to run these applications.

"By delegating all execution responsibilities to the runtime, our solution is able to provide the same benefits as microservices, but with higher performance and lower costs," the Google researchers wrote.

There have been a number of fundamental architectural revisits this year, and microservices are not the only ideals being questioned.

Cloud computing, for example, has also come under scrutiny.

In June, 37signals, the company behind the Basecamp and Hey applications, bought a fleet of Dell servers and left the cloud, breaking with the decade-long habit of moving operations to vaguely defined "off-site" infrastructure in the name of efficiency. "This is the central deceit of cloud marketing: that everything will be so easy that you hardly need anyone to operate it," David Heinemeier Hansson explained in a blog post. "I've never seen it. Not at 37signals, not from anyone else running large internet applications." Cloud computing has some advantages, but reducing the number of operations staff is not usually one of them.

Of course, DHH is a race car driver, so it was natural for him to dig deeper into the hardware layer. But others are willing to back the same bet. Later in the year, Oxide Computer unveiled its new system, hoping to serve others with a similar inclination: running cloud-style workloads more cost-effectively in their own data centers.

That sentiment gets taken more seriously, at least, when the cloud bills come due. FinOps became more visible in 2023, with more and more organizations turning to companies like Kubecost to get control of their cloud spending. How many people were surprised when a Datadog customer received a $65 million bill for cloud monitoring? Arguably, for a company generating billions of dollars in revenue, a $65 million observability bill may be worth it. But as chief architects take a closer look at the engineering decisions of the past decade, they may decide to make some adjustments. And microservices will be no exception.

TNS cloud-native reporter Scott M. Fulton III contributed to this report.
