
Dancing between state and effects - a real-world use case #15240

@jlongster

I started this as a gist, but Dan mentioned that this would be a good discussion issue, so here goes. I've been writing with and refactoring code into hooks for a while now. For 95% of code, they are great and very straightforward once you get the basic idea. There are still a few more complex cases where I struggle to find the right answer, though. This is an attempt to explain them.

The use case

This is a real-world use case from an app I'm building: interacting with a list of items. I've simplified the examples into CodeSandboxes to illustrate the basic idea.

Here's the first one: https://codesandbox.io/s/lx55q0v3qz. It renders a list of items, and if you click on any of them, an editable input will appear to edit it (it doesn't save yet). The colored box on the right will change whenever an item rerenders.

If you click around in the items, you can see that when you change the edited item, all items rerender. But the Row component is wrapped with React.memo! They all rerender because the onEdit callback is new each time the app renders, causing all items to rerender.

Maintaining callback identity

We want onEdit to be the same function for all future renders. In this case, it's easy because it doesn't depend on anything. We can simply wrap it in useCallback with an empty dependencies array:

  let onEdit = useCallback(id => {
    setEditingId(id);
  }, []);

Now, you can see clicking around only rerenders the necessary items (only those colors change): https://codesandbox.io/s/k33klz68yr

Implementing saving

We're missing a crucial feature: after editing an item, on blur it should save the value. In my app the way the "save" event gets triggered is different, but doing it on blur is fine here.

To do this, we create an onSave callback in the app and pass it down to each item, which calls it on blur with the new value. onSave takes an updated item, produces a new items array containing it, and sets the items state.

Here it is running: https://codesandbox.io/s/yvl79qj5vj

You'll notice that all items rerender again when saving. The list rerenders twice when you click another item: first when you mouse down and the input loses focus, and then again to edit the different item. So all the colors change once, and then only the editing row's color changes again.

The reason they all rerender is that onSave is now a new callback on every render. But we can't fix it with the same technique as onEdit because it depends on items: we have to create a new callback that closes over items, otherwise previous edits would be lost. This is the "callbacks are recreated too many times" problem with hooks.

One solution is to switch to useReducer. Here's that implementation:
https://codesandbox.io/s/nrq5y77kj0

Note that I still wrap the reducer's dispatch into onEdit and onSave callbacks that are passed down to the row. I find passing callbacks to be clearer in most cases, and they work with any components in the ecosystem that already expect callbacks. We can use useCallback with no dependencies here, since dispatch is always the same.

Note that even when saving an item, only the necessary rows rerender.

The difference between event handlers and dispatch

There's a problem, though. This works in a simple demo, but in my real app onSave both optimistically updates local state and saves it off to the server. That is, it performs a side effect.

It's something like this:

async function onSave(transaction) {
  let { changes, newTransactions } = updateTransaction(transactions, transaction);
  // optimistic update
  setTransactions(newTransactions)
  // save to server
  await postToServer('apply-changes', { changes })
}

There's a big difference between the phases in which an event handler and a dispatched action run. Event handlers run whenever they are triggered (naturally), but the dispatched action (the reducer) runs during rendering. This is why the reducer must be pure.

Here's the reducer from https://codesandbox.io/s/nrq5y77kj0:

  function reducer(state, action) {
    switch (action.type) {
      case "save-item": {
        let { item } = action;
        return {
          ...state,
          items: state.items.map(it => (it.id === item.id ? item : it))
        };
      }
      case "edit-item": {
        return { ...state, editingId: action.id };
      }
    }
  }
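Because the reducer is pure, it can be exercised entirely outside React. Here's a minimal sketch (the sample state shape is an assumption based on the reducer above) showing that the same state and action always produce the same result, with the original state left untouched:

```javascript
// Same reducer as above, with a default case added so unknown actions are a no-op.
function reducer(state, action) {
  switch (action.type) {
    case "save-item": {
      let { item } = action;
      return {
        ...state,
        items: state.items.map(it => (it.id === item.id ? item : it))
      };
    }
    case "edit-item": {
      return { ...state, editingId: action.id };
    }
    default:
      return state;
  }
}

let state = {
  editingId: null,
  items: [{ id: 1, name: "apple" }, { id: 2, name: "banana" }]
};

let next = reducer(state, { type: "save-item", item: { id: 2, name: "cherry" } });
console.log(next.items[1].name);  // "cherry" — the save is applied
console.log(state.items[1].name); // "banana" — the old state is not mutated
```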

How is save-item also supposed to trigger a side effect? First, it's important to understand these three phases:

Event handler -> render -> commit

Events run in the first phase, which causes a render (when dispatches are processed), and when everything is finally ready to be flushed to the DOM, that happens in a "commit" phase, which is when all effects run (more or less).

We need our side effect to run in the commit phase.

Option 1

One option is to use a ref to "mark" the saving effect to be run. Here's the code: https://codesandbox.io/s/m5xrrm4ym8

Basically you create a flag as a ref:

let shouldSave = useRef(false);

Luckily, we've already wrapped the save dispatch in an event handler. Inside onSave we set this flag to true. We can't do it inside the reducer because it must be pure:

  let onSave = useCallback(item => {
    shouldSave.current = true;
    dispatch({ type: "save-item", item });
  }, []);

Finally, we define an effect that always runs after rendering, checks the flag, and resets it:

  useEffect(() => {
    if (shouldSave.current) {
      // save... all the items to the server?
      post(items)
      shouldSave.current = false;
    }
  });

I thought this option was going to work, but I just ran into an issue: we don't know what to save anymore. We certainly don't want to send the entire items array to the server! I suppose we could store the item in the ref, but what happens if multiple events are fired before the effect runs? I suppose you could store an array, but... do we really need that?
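For what it's worth, the "store an array in the ref" idea can be sketched without React. In this reduction (all names are illustrative), pendingSaves stands in for the ref's .current, and runSaveEffect models the body of a dependency-less useEffect draining it after commit:

```javascript
let pendingSaves = []; // would be useRef([]).current in React

// The event handler pushes the item and would also dispatch the action:
function onSave(item) {
  pendingSaves.push(item);
  // dispatch({ type: "save-item", item }) would go here
}

// Models the commit-phase effect: drain the queue so repeats don't re-save.
function runSaveEffect(post) {
  if (pendingSaves.length > 0) {
    let toSave = pendingSaves.splice(0); // take everything and reset the "flag"
    post(toSave);
  }
}

// Two events fire before the effect runs — both items are still captured:
let posted = [];
onSave({ id: 1, name: "Foo" });
onSave({ id: 2, name: "Bar" });
runSaveEffect(items => posted.push(...items));
console.log(posted.length);       // 2
console.log(pendingSaves.length); // 0
```

So the array version does handle multiple events per commit; the question in the text stands, though: it's extra machinery.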

Option 2

Note: I just noticed this option is documented in "How to read an often-changing value from useCallback?", but I disagree with the tone used there. I think this is a fine pattern, and better in many cases than dispatch, even if it's not quite as robust, especially since dispatch is not as powerful as callbacks (see the end of this section).

Keeping around all of the data we need to do the effect might work in some cases, but it feels a little clunky. If we could "queue up" effects from the reducer, that would work, but we can't do that. Instead, another option is to embrace callbacks.

Going back to the version with the naive onSave that forced all items to rerender (https://codesandbox.io/s/yvl79qj5vj), onSave looks like this:

  let onSave = useCallback(
    item => {
      setItems(items.map(it => (it.id === item.id ? item : it)));
    },
    [items]
  );

The core problem is that it depends on items. We need to recreate onSave because it closes over items. But what if it didn't close over it? Instead, let's create a ref:

let latestItems = useRef(items);

And an effect which keeps it up-to-date with items:

useEffect(() => {
  latestItems.current = items
});

Now, the onSave callback can read the ref to always get the up-to-date items, which means we can memoize it with useCallback:

let onSave = useCallback(item => {
  setItems(latestItems.current.map(it => (it.id === item.id ? item : it)));
}, []);

We are intentionally opting to always reference the latest items. The biggest change with hooks, in my opinion, is that they are safe by default: an async function will always reference the exact same state that existed at the time it was called. Classes operate the other way: you access state from this.state, which can be mutated between async work. Sometimes you want that mutable behavior, though, so you can maintain callback identity.
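The pattern boils down to a plain-JS observation (all names here are illustrative): the callback's identity never changes because it closes over a mutable box, not over the data itself, yet it always reads the freshest value:

```javascript
// A hand-rolled stand-in for useRef: just a mutable box.
function makeLatest(initial) {
  return { current: initial };
}

let latestItems = makeLatest([{ id: 1, name: "a" }]);

// Created once, never recreated; it closes over the box, not the array:
let onSave = item =>
  latestItems.current.map(it => (it.id === item.id ? item : it));

// Later, "commit" updates the box (what the keep-up-to-date effect does):
latestItems.current = [{ id: 1, name: "a" }, { id: 2, name: "b" }];

let result = onSave({ id: 2, name: "c" });
console.log(result.map(it => it.name)); // ["a", "c"] — sees the updated data
```

In React, `makeLatest` is `useRef` and the box update happens in the effect; the tradeoff is exactly the class-like mutability described above.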

Here is the running sandbox for it: https://codesandbox.io/s/0129jop840. Notice how you can edit items and only the necessary rows rerender, even though it updates items. Now, we can do anything we want in our callback, like posting to a server:

let onSave = useCallback(item => {
  setItems(latestItems.current.map(it => (it.id === item.id ? item : it)));
  // save to server
  post('/save-item', { item })
}, []);

Basically, if all you need is the latest data since last commit, callbacks can be memoized as well as reducers. The drawback is that you need to put each piece of data you need in a ref. If you have lots of pieces of data and only a few simple effects, reducers would be better, but in my case (and I suspect in many others) it's easier to use callbacks with refs.

It's nice too because in my real app the save process is more complicated. It needs to get changes back from the server and apply them locally as well, so it looks more like this:

let onSave = useCallback(async item => {
  setItems(latestItems.current.map(it => (it.id === item.id ? item : it)));
  // save to server
  let changes = await post('/save-item', { item })
  applyChanges(latestItems.current, changes)
}, []);

Maintainability-wise, it's really nice to see this whole flow in one place. Breaking this up to try to manually queue up effects and do a dance with useReducer feels much more convoluted.

Option 3

I suppose another option would be to try to "mark" the effect to be run in state itself. That way you could do it in useReducer as well, and it would look something like this:

function reducer(state, action) {
  switch (action.type) {
    case "save-item": {
      let { item } = action;
      return {
        ...state,
        items: state.items.map(it => (it.id === item.id ? item : it)),
        itemsToSave: state.itemsToSave.concat([item])
      };
    }
    // ...
  }
}

And an effect would check the itemsToSave state and save those items off. The problem is resetting that state: the effect would have to set state again, causing a useless rerender, and there's no deterministic way to ensure the effect doesn't run multiple times before itemsToSave gets reset.

In my experience, mixing effects into state, causing extra renders, makes things a lot more difficult to maintain and debug.

What's the difference between Option 1 and 2?

Is there a crucial difference between 1 and 2? Yes, but I'd argue it's not a big deal if you can accept it. Remember these three phases:

Event handler -> render -> commit

The big difference is option 1 is doing the side effect in the commit phase, and option 2 is doing it in the event handler phase. Why does this matter?

If, for some reason, an item called onSave multiple times before the next commit phase, option 1 is more robust. A reducer will "queue up" the actions and run them in order, updating state in between, so if you did:

onSave({ id: 1, name: "Foo" })
onSave({ id: 2, name: "Bar" })

which runs the callback twice immediately, the reducer will process the first save and update the items, then process the second save with the already-updated state.

However, with option 2, when the second save is processed, the commit phase hasn't run yet, so the latestItems ref is still stale. The first save will be lost.
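The difference can be made concrete in plain JS (the state shape here is an assumption; the React bits are simulated). Option 1 threads state between the two actions; option 2 computes both updates from the same stale snapshot:

```javascript
function reducer(state, action) {
  if (action.type === "save-item") {
    let { item } = action;
    return { ...state, items: state.items.map(it => (it.id === item.id ? item : it)) };
  }
  return state;
}

let initial = { items: [{ id: 1, name: "a" }, { id: 2, name: "b" }] };

// Option 1: React queues the actions and runs them in order.
let actions = [
  { type: "save-item", item: { id: 1, name: "Foo" } },
  { type: "save-item", item: { id: 2, name: "Bar" } }
];
let viaReducer = actions.reduce(reducer, initial);
console.log(viaReducer.items.map(it => it.name)); // ["Foo", "Bar"] — both saves kept

// Option 2: the ref isn't updated until the commit-phase effect runs,
// so both calls read the same stale snapshot.
let latestItems = { current: initial.items }; // stands in for useRef
function onSave(item) {
  // setItems(...) in React; here we just record what it would compute
  return latestItems.current.map(it => (it.id === item.id ? item : it));
}
onSave({ id: 1, name: "Foo" });                  // computed, then overwritten
let finalItems = onSave({ id: 2, name: "Bar" }); // computed from the stale ref
console.log(finalItems.map(it => it.name)); // ["a", "Bar"] — the first save is lost
```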

However, the ergonomics of option 2 are much better for many use cases, and I think it's fine to weigh these benefits if you'll never need to handle such rapid updates. Concurrent mode might introduce some interesting arguments against that, though.

Another small use case for triggering effects

In case this wasn't already long enough, there's a similar use case I'll describe quickly. You can also add new items to the list by editing data in an empty row, and the state of this "new item" is tracked separately. "Saving" this item doesn't touch the backend; it simply updates the local state, and a separate explicit "add" action is needed to add it to the list.

The hard part is that there is a keybinding for adding the item to the list while editing, something like alt+enter. The problem is that I want to coordinate with the state change: first save the existing field, then add it to the list. The saving process is complicated, so it needs to run through that first (I can't just duplicate it all in onAdd).

This isn't a problem specific to hooks; it's just about coordinating with state changes. When I was working with reducers, I had thought that something like this would be neat: when the new item detects that you want to save and add, it fires an action like { type: 'save-item', fields: { name: 'Foo' }, shouldAdd: true }:

function reducer(state, action) {
  switch (action.type) {
    case "save-item": {
      let { fields } = action;
      let newItem = { ...state.newItem, ...fields };

      if(action.shouldAdd) {
        shouldAdd.current = true
      }

      return { ...state, newItem };
    }
    // ...
  }
}

where shouldAdd is a ref that is checked on commit phase and saves the item off to the server. This isn't possible though.

Another option would be for the item to call onAdd instead of onSave when saving + adding, and you could manually call the reducer yourself to process the changes:

async function onAdd(fields) {
  let action = { type: 'save-item', fields };
  dispatch(action);

  let nextState = reducer(state, action);
  post('/add-item', { newItem: nextState.newItem });
}

This is kind of a neat trick: you manually run the reducer to get the updated state, and React will run the reducer again itself when it processes the dispatch.
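A minimal sketch of that trick, with React's dispatch simulated and a simplified state shape (both are assumptions): dispatching queues the action for React, while calling the reducer directly gives you the next state synchronously, so the side effect knows what to post.

```javascript
// Simplified reducer in the same shape as the one above:
function reducer(state, action) {
  if (action.type === "save-item") {
    return { ...state, newItem: { ...state.newItem, ...action.fields } };
  }
  return state;
}

let state = { newItem: { name: "" } };
let dispatched = [];
let dispatch = action => dispatched.push(action); // stands in for React's dispatch

function onAdd(fields) {
  let action = { type: "save-item", fields };
  dispatch(action); // React will process this during the next render

  // Run the reducer ourselves to learn the next state right now:
  let nextState = reducer(state, action);
  return nextState.newItem; // in the real code: post('/add-item', { newItem })
}

let newItem = onAdd({ name: "Foo" });
console.log(newItem.name);      // "Foo" — computed synchronously
console.log(dispatched.length); // 1 — React still receives the action
```

This works because the reducer is pure: running it twice with the same inputs is harmless.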

Since I ended up liking callbacks for my original problems, I ended up going with a similar approach where I have a ref flag that I just set in onSave:

let [newItem, setNewItem] = useState({})
let latestNewItem = useRef(newItem);
let shouldAdd = useRef(false);

useEffect(() => {
  latestNewItem.current = newItem;
})

useEffect(() => {
  if(shouldAdd.current) {
    setNewItem({})
    post('/add-item', { newItem })
    shouldAdd.current = false;
  }
})

let onSave = useCallback((fields, { add }) => {
  // In my real app, applying the changes to the current item is a bit more complicated than this,
  // so it's not an option to separate on an `onAdd` function that duplicates this logic
  setNewItem({ ...latestNewItem.current, ...fields });

  // This action also should add, mark the effect to be run
  if(add) {
    shouldAdd.current = true;
  }
}, [])

Conclusions

Sorry for the length of this. I figured I'd rather be over-detailed than under-detailed, and I've been brewing these thoughts since hooks came out. I'll try to conclude my thoughts here:

  • Effects are very nice. It feels like we have easy access to the "commit" phase of React, whereas previously it was all in componentDidUpdate and not composable at all. Now it's super easy to attach code to the commit phase, which makes coordinating things with state easier.

  • Reducers have interesting properties, and I can see how they are fully robust in a concurrent world, but for many cases they are too limited. The ergonomics of implementing many effect-ful workflows with them requires an awkward dance, kind of like when you try to track effect states in local state and split up workflows. Keeping a linear workflow in a callback is not only nice, but necessary in many cases for maintainability.

  • Callbacks can be made memoizable without much work. In many cases I think it's easier to use the ref trick than reducers, but the question is: just how dangerous is it? Right now it's not that dangerous, but maybe concurrent mode really is going to break it.

  • If that's the case, we should figure out a better way to weave together effects and state changes.

I hope all of this made sense. Let me know if something is unclear and I'll try to fix it.
