lwc 301 lifecycle and state design

James Hou edited this page May 13, 2021 · 1 revision

A common issue that arises with sufficiently complex apps is being able to discern where to:

  • First paint state
  • Maintain / propagate state

And unfortunately, at the time of this writing (summer 2021 incoming), there is not yet a first-party redux- or rxjs-style state management library.

So then, in the context of lifecycle design, I'd like to go over the above topics as they apply to the lifecycle of your data. These thoughts should help you make decisions around the tradeoffs of dealing with state using officially documented first-party solutions.

Note: There are a few libraries out there that successfully port concepts and state management paradigms from other frameworks into libraries that LWC can support. Those will not be discussed here, since this wiki is strongly aligned to first-party solutions and there is currently nothing officially supported.

First Paint State

With the increased ergonomics of ES6 in data manipulation, it can be advantageous to create an initial cache from the server and then continue to manipulate / filter it down clientside for consumption in various LWCs on the flexipage. A couple of concepts here:

  1. Using Map<String, Object> to build a cache of data (covered more in depth here).
  2. Optionally, building the cache serverside with @AuraEnabled(cacheable=true).
  3. Sharing that cache via (idempotent) @wire calls across multiple LWCs.
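As a sketch of the first concept, here's how a clientside cache might be built from a server response with a plain ES6 Map. The record shape and field names below are illustrative assumptions, not any platform API:

```javascript
// Sketch: build a Map<Id, record> cache from a server response,
// then filter it down clientside for individual LWCs to consume.
// The record shape here is an illustrative assumption.
const records = [
    { Id: '001A', Name: 'Acme', Industry: 'Energy' },
    { Id: '001B', Name: 'Globex', Industry: 'Tech' },
    { Id: '001C', Name: 'Initech', Industry: 'Tech' }
];

// Key by Id for O(1) lookups anywhere on the flexipage.
const cache = new Map(records.map((rec) => [rec.Id, rec]));

// Derive per-component views without another server round trip.
const techAccounts = [...cache.values()].filter(
    (rec) => rec.Industry === 'Tech'
);

console.log(cache.get('001A').Name); // Acme
console.log(techAccounts.length); // 2
```

Because the Map is built once, each LWC on the flexipage can derive its own filtered view clientside instead of making its own server call.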

Let's dive into the bottom two, since those paired together allow for some interesting design patterns:

Idempotent Requests

First, let's define an idempotent operation as a serverside call that can be made any number of times while guaranteeing that side effects only occur once.

Given that, let's first apply the idea to the getRecord LDS adapter, which is idempotent for a given recordId. The Lightning Platform has a built-in caching layer that lets both native components (e.g. Record Detail) and custom components (e.g. a custom LWC using getRecord) make idempotent calls. In the context of the Lightning Platform, this simply means that the cache is warmed when the first component (native or custom) makes the equivalent of a getRecord call. A second component then reads from that warmed cache and, most likely, avoids a full network round trip to the server.
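That caching behavior can be approximated in plain JavaScript with a memoized fetcher. This is a conceptual sketch only — LDS's real cache also handles field lists, invalidation, and more:

```javascript
// Sketch: a memoized "getRecord"-like fetcher keyed by recordId.
// Repeated calls for the same id return the cached promise, so the
// (simulated) server side effect happens at most once per id.
let serverHits = 0;

function fakeServerFetch(recordId) {
    serverHits += 1; // side effect we want to happen only once per id
    return Promise.resolve({ id: recordId, name: `Record ${recordId}` });
}

const recordCache = new Map();

function getRecordCached(recordId) {
    if (!recordCache.has(recordId)) {
        recordCache.set(recordId, fakeServerFetch(recordId));
    }
    return recordCache.get(recordId); // same promise for every caller
}

// Two "components" requesting the same record:
getRecordCached('001A');
getRecordCached('001A');
console.log(serverHits); // 1 -- second call was served from cache
```

The key property is that callers can't tell (and don't care) whether they triggered the network call or merely read the warmed cache.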

This is extremely powerful because you can use this to your advantage when designing your lifecycles. Here's a concrete example:

  1. You've designed a Flexipage using the native tab component, configured with two tabs.
  2. On the first tab, you put the native Record Detail component.
  3. On the second tab, you put a custom LWC that uses @wire with getRecord, passing the same recordId as the Record Detail.

This is what happens on the platform:

  1. User makes URL request to load a Flexipage.
  2. Flexipage bootstraps; the cache for the current record is cold in the Lightning Platform cache layer.
  3. Contents of the first tab now load the Record Detail, which calls (the equivalent of) getRecord under the hood. The cache for the current record is now warmed.
  4. User clicks the second tab.
  5. Contents of the second tab now load the custom LWC, which calls getRecord but doesn't initiate a serverside request, because the call is idempotent and the data is already cached clientside.
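The steps above can be simulated in plain JavaScript. The cache and "components" below are simplified stand-ins for platform internals, not real APIs:

```javascript
// Sketch: two components (one "native", one custom) sharing a warmed
// record cache, mirroring the flexipage tab flow described above.
const platformCache = new Map();
let networkCalls = 0;

function platformGetRecord(recordId) {
    if (!platformCache.has(recordId)) {
        networkCalls += 1; // cold cache: real server round trip
        platformCache.set(recordId, { id: recordId, name: 'Acme' });
    }
    return platformCache.get(recordId); // warm cache: no network
}

// Steps 2-3: first tab loads Record Detail, warming the cache.
const recordDetailData = platformGetRecord('001A');

// Steps 4-5: second tab loads the custom LWC with the same recordId.
const customLwcData = platformGetRecord('001A');

console.log(networkCalls); // 1 -- second component skipped the server
console.log(recordDetailData === customLwcData); // true
```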

Maintain / Propagate State

This one is a trickier topic since each situation is different, but here are some high level designs that can scale well:

  1. Use a hidden component on the flexipage to warm your data cache (either a Map<String, Object> or an Apex class representing a Data Transfer Object).
    • Then, use the idempotent technique described above to speed up your actual first paint.
  2. Wrap your suite of LWCs with a container responsible for coordinating events, event payloads, initial data access, and downward propagation (to child LWCs through @api props).
    • This is considered data-driven design, because the state of your application is "prop-drilled" from a higher-level component.
  3. Use message-service to loosely couple data payloads on demand, sharing state through a centralized "channel"; subscribers can situationally choose to ignore updates.
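The third design can be sketched as a minimal publish/subscribe channel in plain JavaScript. The real message-service (and Lightning Message Service) APIs differ; this only illustrates the loose coupling:

```javascript
// Sketch: a centralized "channel" that loosely couples publishers
// and subscribers. Subscribers can situationally ignore updates.
// Conceptual stand-in, not the actual message-service API.
function createChannel() {
    const subscribers = new Set();
    return {
        subscribe(callback) {
            subscribers.add(callback);
            return () => subscribers.delete(callback); // unsubscribe handle
        },
        publish(payload) {
            subscribers.forEach((cb) => cb(payload));
        }
    };
}

const channel = createChannel();
const received = [];

// Subscriber A takes every update.
channel.subscribe((payload) => received.push(`A:${payload.type}`));

// Subscriber B situationally opts out of updates it doesn't care about.
channel.subscribe((payload) => {
    if (payload.type === 'recordChange') {
        received.push(`B:${payload.type}`);
    }
});

channel.publish({ type: 'recordChange' });
channel.publish({ type: 'themeChange' });

console.log(received); // ['A:recordChange', 'B:recordChange', 'A:themeChange']
```

Because publishers never reference subscribers directly, LWCs anywhere on the flexipage can share state without knowing about each other.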