# LWC 301: Lifecycle and State Design
A common issue that arises with sufficiently complex apps is being able to discern where to:
- First paint state
- Maintain / propagate state
And unfortunately, at the time of this writing (summer '21 incoming) there is not yet a first-party `redux` or `rxjs` state mgmt library.
So then, in the context of lifecycle design, I'd like to go over the above topics as they relate to the lifecycle of your data. These thoughts should help you make decisions around the tradeoffs of dealing with state (to the extent of officially documented, first-party solutions).
Note: There are a few libraries out there that attempt (successfully!) to convert concepts and state mgmt paradigms from other frameworks into a lib that LWC can support. Those will not be discussed here since this wiki is strongly aligned to first party solutions and there is currently nothing officially supported.
## First paint state

With the increased ergonomics of ES6 for data manipulation, it can be advantageous to create an initial cache from the server and then continue to manipulate / filter it down clientside for consumption in various LWCs on the flexipage. A couple of concepts here:
- Using `Map<String, Object>` to build a cache of data (covered more in depth here).
- Optionally, building the cache with `cacheable=true`.
- Sharing that cache via (idempotent) `@wire` calls across multiple LWCs.
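As a quick sketch of the first concept, here's one way to build a `Map` cache keyed by record Id and filter it down clientside. The payload shape and field names are hypothetical, purely for illustration:

```javascript
// Sketch: build a clientside cache keyed by record Id from a serverside
// payload (hypothetical shape), then derive filtered views for individual LWCs.
const serverPayload = [
    { Id: '001A', Name: 'Acme', Industry: 'Energy' },
    { Id: '001B', Name: 'Globex', Industry: 'Tech' },
    { Id: '001C', Name: 'Initech', Industry: 'Tech' }
];

// Map<String, Object>: one entry per record, keyed by Id.
const cache = new Map(serverPayload.map((record) => [record.Id, record]));

// Each consuming component derives its own view without another server call.
const techAccounts = [...cache.values()].filter((r) => r.Industry === 'Tech');
```

Because lookups are `O(1)` by Id and views are derived locally, the server is only hit once for the initial payload.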
Let's dive into the bottom two, since those paired together allow for some interesting design patterns.
First, let's define an idempotent operation as a serverside call that can be made any number of times while guaranteeing that side effects only occur once.
Given that, let's first apply the definition to the `getRecord` LDS adapter, which is idempotent given the same `recordId`. There is a built-in caching layer in the Lightning Platform that allows both native components (e.g. Record Detail) and custom components (e.g. a custom LWC using `getRecord`) to make idempotent calls. In the context of the Lightning Platform, this simply means that the cache is warmed when the first component (native or custom) uses the equivalent of `getRecord` in its calls. The second component will read from that warmed cache and, most likely, remove the need for a full network call to the server.
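To make the definition concrete, here's a minimal plain-JavaScript sketch of an idempotent operation (the function name and state are hypothetical, not a platform API):

```javascript
// Minimal sketch of an idempotent operation: calling it repeatedly leaves
// the system in the same state as calling it once.
let sideEffects = 0;    // instrumentation: counts how often the effect ran
let provisioned = false;

// Hypothetical serverside operation; safe to call any number of times.
function provisionOnce() {
    if (!provisioned) {
        provisioned = true;
        sideEffects += 1; // the side effect happens at most once
    }
    return provisioned;
}

provisionOnce();
provisionOnce();
provisionOnce();
// sideEffects is still 1 no matter how many times we called it
```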
This is extremely powerful because you can use this to your advantage when designing your lifecycles. Here's a concrete example:
- You've designed a Flexipage using the native tab component, having configured two tabs.
- On the first tab, you put the `Record Detail` native component.
- On the second tab, you put a custom LWC using an `@wire` to `getRecord` with the same `recordId` as the `Record Detail`.
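As a sketch, the custom LWC on the second tab might look something like this (component and field names are hypothetical; this assumes the standard `getRecord` adapter from `lightning/uiRecordApi`):

```js
// secondTabCard.js - hypothetical custom LWC for the second tab
import { LightningElement, api, wire } from 'lwc';
import { getRecord } from 'lightning/uiRecordApi';

const FIELDS = ['Account.Name']; // illustrative field list

export default class SecondTabCard extends LightningElement {
    @api recordId; // same recordId the Record Detail on tab one used

    // Idempotent for a given recordId + fields, so this wire can be
    // served from the clientside cache warmed by the first tab.
    @wire(getRecord, { recordId: '$recordId', fields: FIELDS })
    record;
}
```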
This is what happens on the platform:
- User makes a URL request to load a Flexipage.
- Flexipage bootstraps; the cache for the *current record* in the Lightning Platform cache layer is cold.
- Contents of the first tab now load the `Record Detail`, which calls (the equivalent of?) `getRecord` under the hood. The cache for the *current record* is now warmed.
- User clicks the second tab.
- Contents of the second tab now load the custom LWC, which calls `getRecord` but doesn't initiate a serverside request, due to it being both an idempotent call and cached clientside.
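The sequence above can be simulated in plain JavaScript. This models the shared-cache *behavior*, not the actual platform internals:

```javascript
// Simulation of the two-tab flow: a shared cache warmed by the first
// consumer (modeling Record Detail), reused by the second (custom LWC).
const platformCache = new Map(); // models the Lightning Platform cache layer
let networkCalls = 0;

function getRecordEquivalent(recordId) {
    if (!platformCache.has(recordId)) {
        networkCalls += 1; // cold cache: full trip to the server
        platformCache.set(recordId, { Id: recordId, Name: 'Acme' });
    }
    return platformCache.get(recordId); // warm cache: no network call
}

// Tab 1: Record Detail loads -> cache for the current record is warmed.
const recordDetailData = getRecordEquivalent('001A');

// Tab 2: custom LWC's wire resolves from the warmed cache instead.
const customLwcData = getRecordEquivalent('001A');
// networkCalls is 1: a single server trip served both components
```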
## Maintain / propagate state

This one is a trickier topic since each situation is different, but here are some high-level designs that can scale well:
- Use a hidden component on the flexipage to warm your data cache (either a `Map<String, Object>` or an Apex class representing a Data Transfer Object).
    - Then, use the idempotent technique described above to increase speed on your actual first paint.
- Wrap your suite of LWCs with a container responsible for coordinating events, event payloads, initial data access, and downward propagation (to child LWCs through `@api` props).
    - This is considered data-driven design, because the state of your application is "prop-drilled" from a higher-level component.
- Use `message-service` to loosely couple data payloads on demand, sharing state through a centralized "channel" whose subscribers can situationally choose to ignore updates.
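The last design can be sketched as a simple publish/subscribe channel. This models the loose-coupling shape only; the `createChannel` API below is hypothetical, not the actual `message-service` interface:

```javascript
// Sketch of a loosely coupled "channel": publishers and subscribers never
// reference each other directly, only the shared channel.
function createChannel() {
    const subscribers = new Set();
    return {
        subscribe(handler) {
            subscribers.add(handler);
            return () => subscribers.delete(handler); // unsubscribe function
        },
        publish(payload) {
            subscribers.forEach((handler) => handler(payload));
        }
    };
}

const accountChannel = createChannel();
const received = [];

// A subscriber can situationally choose to ignore updates it doesn't need.
const unsubscribe = accountChannel.subscribe((payload) => {
    if (payload.type === 'ACCOUNT_UPDATED') received.push(payload);
});

accountChannel.publish({ type: 'ACCOUNT_UPDATED', recordId: '001A' }); // kept
accountChannel.publish({ type: 'UNRELATED' });                         // ignored
unsubscribe();
```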