Serverless Analysis, Starting From Data Models
In the previous article, we explored the workings of Serverless under the hood. In essence, Serverless leverages layered scheduling and rapid cold starts, enabling it to scale down to zero when no events are being processed. It’s akin to a voice-activated light that illuminates when someone is present and automatically switches off when the room is empty.
Having grasped the fundamentals, you might be curious about the practical applications of this impressive technology. Today, we’ll delve into the use cases of Serverless. However, a prerequisite to understanding its applications is comprehending its process model, a concept as crucial as cold starts.
The Serverless Process Model
Let’s revisit the Serverless cold start process from the last session. Recall that the cloud provider manages the container and runtime preparation phases, leaving us to focus solely on function execution. Within the Serverless realm, function execution is handled by a “function service.” When the function trigger signals the arrival of an “event,” the function service creates function instances as needed and executes the corresponding functions. Once a function completes execution, its associated instance bows out, allowing the Serverless application to scale down to zero and enter a power-saving mode.
Now, you might wonder if it’s possible to keep an instance alive after function execution instead of terminating it, allowing it to await the next function invocation. This would eliminate the cold start overhead each time, resulting in faster response times.
Indeed, Serverless anticipates such scenarios. Consequently, from the perspective of the process running the function instance, two models exist:
- Run-to-Completion: In this model, the function instance, once ready, executes the function and terminates immediately. This represents the purest form of Serverless usage.
- Persistent Process: Here, the function instance, after readying itself, doesn’t cease upon function completion. Instead, it returns its result and waits for the next invocation. Note that even in this model, the cloud provider eventually destroys the function instance if no event arrives within a predetermined idle period.
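The contrast between the two models can be sketched in Node.js. This is an illustrative handler shape, not any specific provider’s signature; the `connection` object stands in for an expensive resource such as a database connection:

```javascript
// Run-to-completion: the instance handles one event, then terminates.
// Nothing survives between invocations, so all state must live externally.
function runToCompletionHandler(event) {
  return `processed ${event.id}`; // after returning, the instance is destroyed
}

// Persistent process: the instance stays warm between invocations,
// so expensive setup (connections, caches) can be reused.
let connection = null; // survives across calls while the instance is warm

function persistentHandler(event) {
  if (connection === null) {
    // Stand-in for opening a real database connection.
    connection = { openedAt: Date.now() };
  }
  return `processed ${event.id} via connection opened at ${connection.openedAt}`;
}
```

Calling `persistentHandler` twice reuses the same `connection`, which is exactly the cold-start saving the persistent model buys you.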
Data Orchestration
Most engineers are familiar with the MVC (Model-View-Controller) pattern, a widely successful design paradigm. However, the rise of frontend MVVM frameworks has pushed the View layer toward the client, leading to SPAs (Single-Page Applications). Meanwhile, the backend’s Controller and Model layers have moved further back, giving rise to service-oriented backend applications.
This shift has resulted in a more thorough decoupling of the frontend and backend. Frontend development can proceed independently against mocked data interfaces, while backend teams focus on building those interfaces. However, this separation calls for a data gateway layer in between, whose workload is dominated by network I/O.
Node.js, with its asynchronous, non-blocking nature and JavaScript’s affinity for frontend engineers, naturally took on the mantle of the data gateway layer. This led to the emergence of the Node.js BFF (Backend For Frontend) layer, which orchestrates backend data and interfaces, adapting them into data structures suitable for frontend consumption.
The BFF layer acts as an intermediary, bridging the frontend and backend. Unprocessed data, often referred to as raw data or metadata, is practically unreadable for end-users. Therefore, we need to combine and process relevant data, adding value and making it meaningful. This process of combining and processing is called data orchestration.
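As a minimal illustration of data orchestration (all field names here are invented for the example), the BFF might merge two raw backend payloads into a single structure the page can render directly:

```javascript
// Raw payloads as two backend services might return them.
const userRaw = { uid: 42, name_first: "Ada", name_last: "Lovelace" };
const orderRaw = { uid: 42, order_total_cents: 1999, currency: "USD" };

// Orchestrate: combine and reshape raw data into what the frontend needs.
function orchestrate(user, order) {
  return {
    userId: user.uid,
    displayName: `${user.name_first} ${user.name_last}`,
    lastOrder: `${(order.order_total_cents / 100).toFixed(2)} ${order.currency}`,
  };
}

console.log(orchestrate(userRaw, orderRaw));
// { userId: 42, displayName: 'Ada Lovelace', lastOrder: '19.99 USD' }
```

The value added is purely in combining and reshaping; no business state is created, which is why this layer is a natural fit for stateless functions.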
Traditionally, managing Node.js applications for the BFF layer has been resource-intensive, requiring virtual machines or PaaS platforms. However, since the BFF layer primarily performs stateless data orchestration, we can seamlessly replace the Node.js application with Serverless using the run-to-completion model. This is the essence of the increasingly popular term SFF (Serverless For Frontend).
With the evolution from BFF to SFF understood, let’s trace the new request flow. When the frontend initiates a data request, the function trigger activates our function service. Our function then starts, invokes the backend’s metadata interface, processes the returned metadata into the format required by the frontend, and finally, our Serverless function can take a well-deserved rest.
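That request flow might look like the following run-to-completion SFF function. The backend URL, the response shape, and the handler signature are all hypothetical; a real deployment would use the cloud provider’s own handler convention:

```javascript
// A run-to-completion SFF function: triggered by a frontend request,
// it fetches raw data from the backend, reshapes it, and exits.
// fetchImpl is injectable for testing; it defaults to the global fetch.
async function sffHandler(event, fetchImpl = fetch) {
  // 1. Call the backend's raw-data interface (hypothetical URL).
  const res = await fetchImpl(`https://api.example.com/users/${event.userId}`);
  const raw = await res.json();

  // 2. Adapt the raw data into the structure the frontend expects.
  return {
    statusCode: 200,
    body: JSON.stringify({ id: raw.uid, name: raw.full_name }),
  };
  // 3. On return, the instance shuts down: no server left to manage.
}
```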
Service Orchestration
Service orchestration shares similarities with data orchestration, the key distinction being that it focuses on combining and processing various services offered by the cloud provider. While the concept predates Serverless, its traditional implementation was restricted by the SDK language versions supported by the services. Typically, we’d resort to YAML files or command-line interfaces to orchestrate services. Utilizing these services or APIs involved finding corresponding SDKs in our preferred programming language, loading them into our code, and employing secret keys to call SDK methods for orchestration. Similar to data orchestration, backend operations and deployment costs were substantial, and the absence of an SDK necessitated manual implementation based on the platform’s interfaces or protocols.
Serverless expands the boundaries of SDK utilization. For instance, imagine a web service needing to send verification codes via email. We can achieve this with a run-to-completion Serverless function that utilizes the cloud provider’s SDK to dispatch emails. Concurrently, a persistent Serverless function can generate random string verification codes, store them, and trigger the email-sending Serverless function to deliver the codes to user inboxes. During verification, we can invoke the persistent Serverless function again to validate the codes.
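The verification-code flow can be sketched as follows. Everything here is illustrative: the in-memory `Map` stands in for a real cache (e.g. Redis) that outlives individual invocations, and `sendEmailFunction` stands in for the run-to-completion function that would call the provider’s email SDK:

```javascript
// Persistent function: generates and verifies codes. The Map survives
// between invocations while the instance stays warm; production code
// would use an external store instead.
const codeStore = new Map();

function generateCode(email, sendEmailFn) {
  const code = Math.random().toString(36).slice(2, 8); // random short code
  codeStore.set(email, code);
  sendEmailFn(email, code); // triggers the email-sending function
  return code;
}

function verifyCode(email, code) {
  return codeStore.get(email) === code;
}

// Run-to-completion function: sends one email, then exits.
function sendEmailFunction(email, code) {
  // In reality this would call the cloud provider's email SDK.
  console.log(`sending code ${code} to ${email}`);
}
```

Note how the two process models divide the work: the persistent function holds short-lived state, while the stateless email sender runs to completion and disappears.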
A notable advantage of Serverless is its language agnosticism. This liberates development teams from being confined to a single language, allowing them to leverage the strengths of Java, PHP, Python, Node.js, and others to collaboratively build complex applications.
The open nature of Serverless service orchestration has garnered significant attention from cloud providers. It empowers the creation of diverse and intricate service orchestration scenarios, all while remaining language-independent, significantly expanding the use cases of various cloud services. However, this also places a demand on developers to familiarize themselves with the array of services offered by their chosen cloud provider.
Conclusion
- Serverless offers two process models: run-to-completion and persistent process. The persistent-process model exists mainly to accommodate traditional MVC-style applications and feels less natural; if you are starting with Serverless today, I recommend the run-to-completion model, which maximizes Serverless’s advantages.
- Tracing the history of frontend-backend separation, we saw how the BFF layer emerged and how it can now be replaced by SFF. Whether orchestrating internal interfaces or external data, Serverless plays to its strengths.
- Going a step beyond data orchestration, we can combine Serverless with the cloud provider’s services to achieve service orchestration, building more powerful composite service scenarios and improving development efficiency.
Recommended Reading
Unveiling the Revolution: Exploring the World of Serverless Computing