
Micro-frontends and serverless execution

Author: Natalia Venditto

Date First Published: Wed Jan 04 2023

What are serverless functions?

Serverless functions are, simply put, functions that run in a serverless, fully managed execution context in the cloud.

Serverless execution contexts are typically:

  • Elastic: they scale out and in, up and down, depending on the configuration
  • Auto-scalable: scaling can be pre-configured and doesn’t require manual steps
  • Able to scale down to zero: you don’t pay when you don’t use them
  • Highly available: they are equipped with fault-tolerance mechanisms to auto-recover, and can therefore meet high-availability SLAs (the nines after the 99%)
  • Potentially globally available: they can be deployed in distribution across a CDN with multiple points of presence (PoPs)

As with every service we consume on a pay-as-you-go basis, we need to be mindful of costs and design and implement with execution in mind. That said, most cloud providers offer generous free invocation quotas, as well as mechanisms to limit computing time and memory usage, so you can build and test your application without having to worry about costs.
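
To make this concrete, here is a minimal sketch of an HTTP-triggered serverless function. The handler shape follows the AWS Lambda and API Gateway proxy style purely for illustration; other providers differ mostly in the handler signature:

```ts
// Minimal sketch of an HTTP-triggered serverless function
// (AWS Lambda / API Gateway proxy-style handler; the shape is illustrative).
interface HttpEvent {
  queryStringParameters?: Record<string, string>;
}

export const handler = async (event: HttpEvent) => {
  const name = event.queryStringParameters?.name ?? "world";

  // The platform turns this plain object into an HTTP response.
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```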

Serverless functions that run at the origin

Serverless functions run in a single region, or origin, that we specify when we create them, and they are not globally distributed by default. Because they are bound to a region, users far away from it will experience higher latency. This execution context is meant to be lightweight and fast, and to have a single concern. It is typically used to process data and return a response to the client, although it can serve other use-cases and computing needs.

Another downside is the cold start: when a function is not used for a while, it is unloaded from memory, and when it is called again, the runtime needs to load it and install all its dependencies again.
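
A common way to soften the impact of cold starts is to do expensive initialization once, in module scope, so that warm invocations reuse it. A minimal sketch, assuming a hypothetical createDatabaseClient helper:

```ts
// Sketch: reuse expensive resources across warm invocations.
// createDatabaseClient is a hypothetical helper standing in for any costly
// setup (database connection, SDK client, configuration fetch, etc.).
import { createDatabaseClient } from "./db"; // hypothetical module

// Runs once per cold start, when the function instance is created...
const db = createDatabaseClient();

export const handler = async (event: { id: string }) => {
  // ...while warm invocations reuse the already-initialized client.
  const record = await db.findById(event.id);
  return { statusCode: 200, body: JSON.stringify(record) };
};
```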

Origin functions use-cases

  • Data processing and access
  • Media processing
  • API calls
  • Webhooks (see the sketch after this list)
  • Authentication and authorization flows
  • Analytics
  • Websockets
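
As an example of the webhook use-case, here is a sketch of an origin function that verifies a webhook’s HMAC signature before processing its payload. The "x-signature" header name and WEBHOOK_SECRET variable are assumptions, not a specific provider’s contract:

```ts
// Sketch: an origin function handling a webhook and verifying its HMAC
// signature. The "x-signature" header and WEBHOOK_SECRET are illustrative.
import { createHmac, timingSafeEqual } from "node:crypto";

interface WebhookEvent {
  headers: Record<string, string>;
  body: string;
}

export const handler = async (event: WebhookEvent) => {
  const secret = process.env.WEBHOOK_SECRET ?? "";
  const expected = createHmac("sha256", secret).update(event.body).digest("hex");
  const received = event.headers["x-signature"] ?? "";

  // Constant-time comparison to avoid leaking timing information.
  const valid =
    received.length === expected.length &&
    timingSafeEqual(Buffer.from(received), Buffer.from(expected));

  if (!valid) {
    return { statusCode: 401, body: "Invalid signature" };
  }

  // Assumes a JSON payload; process it: persist it, enqueue a job, call an API.
  const payload = JSON.parse(event.body);
  return { statusCode: 200, body: JSON.stringify({ received: payload.type ?? "event" }) };
};
```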

Edge functions

Edge functions, in contrast, run at the edge of the network, on a CDN’s points of presence (PoPs), as close to the end-user as possible. Because of that, latency is reduced and runtime performance is improved for the end-user, and high availability and fault tolerance are guaranteed.

Additionally, edge functions tend to run in a more lightweight and isolated runtime, with different capabilities and requirements. Because the resources the runtime needs are typically minimal, cold starts can be reduced to a minimum, and the function can be loaded and executed in a much shorter amount of time (down to a few milliseconds).
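
Many edge runtimes expose a Web-standard fetch handler rather than a full Node.js process. A minimal sketch in that style follows; the cf-ipcountry header is Cloudflare-specific and used here only as an illustration of the request metadata available at the PoP:

```ts
// Sketch: an edge function using the Web-standard Request/Response API
// (Cloudflare Workers-style fetch handler; other edge runtimes are similar).
export default {
  async fetch(request: Request): Promise<Response> {
    // Request metadata available at the PoP, e.g. the visitor's country
    // (the cf-ipcountry header is Cloudflare-specific and illustrative).
    const country = request.headers.get("cf-ipcountry") ?? "unknown";

    return new Response(JSON.stringify({ country, servedFrom: "the edge" }), {
      headers: { "Content-Type": "application/json" },
    });
  },
};
```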

Edge functions use-cases

  • Composability
  • Personalization
  • Re-routing
  • Custom content delivery, i.e. A/B testing or customization (see the sketch after this list)
  • Real-time app computations
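
As an example of custom content delivery at the edge, here is a sketch of an A/B test that routes part of the traffic to an alternate variant of a page and keeps visitors on their variant with a cookie. The "/b" path prefix and the "variant" cookie name are assumptions for illustration:

```ts
// Sketch: A/B testing at the edge (Cloudflare Workers-style fetch handler).
// The "variant" cookie name and the "/b" path prefix are illustrative.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    const cookies = request.headers.get("cookie") ?? "";

    // Returning visitors keep the variant stored in their cookie;
    // new visitors are split roughly 50/50.
    const assigned = cookies.includes("variant=b")
      ? "b"
      : cookies.includes("variant=a")
        ? "a"
        : undefined;
    const variant = assigned ?? (Math.random() < 0.5 ? "a" : "b");

    if (variant === "b") {
      url.pathname = `/b${url.pathname}`; // re-route to the "B" variant
    }

    // Fetch the (possibly rewritten) URL from the origin, then re-issue the
    // response with a cookie so the variant assignment is sticky.
    const originResponse = await fetch(url.toString(), {
      method: request.method,
      headers: request.headers,
    });
    const response = new Response(originResponse.body, {
      status: originResponse.status,
      headers: originResponse.headers,
    });
    response.headers.append(
      "Set-Cookie",
      `variant=${variant}; Path=/; Max-Age=86400`
    );
    return response;
  },
};
```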