Performance of mergeNodeDataArray method

I’m testing the new mergeNodeDataArray and mergeLinkDataArray for updating model data in React with Redux. They work fine for simple diagrams. However, when I feed them larger amounts of data, I noticed they are significantly slower than updating the model directly with data converters. One simple scenario: in a diagram with several hundred nodes, updating a field of a specific node with mergeNodeDataArray can take several hundred milliseconds, while using a data converter to do the same update takes a few milliseconds. I’m not surprised that these new merge methods are slower, since they have to go over the full model data, but I’d like to understand how much slower they are than the good old binding approach, what their time complexity is, and what performance to expect on complicated diagrams.
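For reference, here is roughly what the two update paths look like in my test, simplified down to a single property change (the key 42 and the color property are just placeholders):

```ts
import * as go from "gojs";

declare const model: go.GraphLinksModel; // a model with several hundred nodes

// Merge path: Redux hands back a brand-new Array in which one object was
// replaced, and GoJS has to diff it against the entire model.
const nextArray = model.nodeDataArray.map(d =>
  d.key === 42 ? { ...d, color: "red" } : d
);
model.commit(m => m.mergeNodeDataArray(nextArray), "merge update");

// Binding path: mutate the single data object through the model, so only
// that node's bindings need to be re-evaluated.
model.commit(m => {
  const data = m.findNodeDataForKey(42);
  if (data !== null) m.setDataProperty(data, "color", "red");
}, "direct update");
```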

You have it right – Model.mergeNodeDataArray is designed for updating an existing model and diagram from a whole Array of node data: some items might be new and thus need to be added to the diagram as new nodes, some might be missing and thus need to be removed from the existing model/diagram, and some might have had properties changed.

Really this is only useful if your architecture assumes immutable data. So even one tiny property change will require a new node data object with its other properties copied over, along with a new Array for the nodeDataArray with all of its items copied over except for the one node data object that was modified. This is the price to be paid for having immutable data.
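As a concrete illustration, a typical Redux-style reducer for a one-property change ends up doing exactly that copying (the names here are invented for the example):

```ts
// Changing one property under immutability means a new object AND a new
// Array, so consumers (including GoJS) only ever see new references.
interface NodeData { key: number; text: string; color?: string; }
interface DiagramState { nodeDataArray: NodeData[]; }
type Action = { type: "changeColor"; key: number; color: string };

function reducer(state: DiagramState, action: Action): DiagramState {
  switch (action.type) {
    case "changeColor":
      return {
        ...state,
        // New Array: every item copied, one replaced by a new object.
        nodeDataArray: state.nodeDataArray.map(d =>
          d.key === action.key ? { ...d, color: action.color } : d
        ),
      };
    default:
      return state;
  }
}
```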

So mergeNodeDataArray needs to run through everything and detect whether there are any changes, whereas calling Model.setDataProperty, when the model data is assumed to be mutable, is quite efficient because the caller already knows exactly which change needs to be made.
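Conceptually the merge has to do something along these lines – this is not our actual implementation, just a sketch of where the time goes, roughly O(n × p) for n data objects with p properties each:

```ts
import * as go from "gojs";

// Call this inside a transaction, e.g. via model.commit.
function conceptualMerge(model: go.Model, newArray: go.ObjectData[]) {
  const seen = new Set<go.Key>();
  for (const newData of newArray) {
    const key = model.getKeyForNodeData(newData);
    seen.add(key);
    const oldData = model.findNodeDataForKey(key);
    if (oldData === null) {
      model.addNodeData(newData);  // only in the new Array: add a node
    } else {
      for (const p in newData) {   // compare every property of every object
        if (oldData[p] !== newData[p]) model.setDataProperty(oldData, p, newData[p]);
      }
    }
  }
  for (const oldData of model.nodeDataArray.slice()) {
    if (!seen.has(model.getKeyForNodeData(oldData))) {
      model.removeNodeData(oldData);  // only in the model: remove the node
    }
  }
}
```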

Because GoJS models assume mutable data, they cannot hold onto the references to immutable data that you might have, so the models have to keep their own copy. We could make this more efficient by keeping, for each node data object in the model, a reference to the immutable data it was last merged from, to speed up the “need-update” check, combined with making ReactDiagram.componentDidUpdate more efficient by not copying blindly.
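If you wanted to implement that idea yourself today, a hypothetical helper might look like this (removal of deleted nodes is omitted for brevity):

```ts
import * as go from "gojs";

// Remember the last immutable object seen per key, so that unchanged
// references can skip the per-property comparison entirely.
const lastSeen = new Map<go.Key, go.ObjectData>();

function fastMergeNodeDataArray(model: go.Model, newArray: go.ObjectData[]) {
  model.commit(m => {
    for (const newData of newArray) {
      const key = m.getKeyForNodeData(newData);
      if (lastSeen.get(key) === newData) continue;  // same reference: untouched
      lastSeen.set(key, newData);
      const oldData = m.findNodeDataForKey(key);
      if (oldData === null) {
        m.addNodeData(newData);
      } else {
        for (const p in newData) {  // pay the per-property cost only here
          if (oldData[p] !== newData[p]) m.setDataProperty(oldData, p, newData[p]);
        }
      }
    }
  }, "fast merge");
}
```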

But it’s never going to be as fast as the original design with the mutable data.

Thanks for your reply – glad to know that what I thought was correct. For our application we would like to store the model data in a Redux store and update it by dispatching Redux actions, but the current performance of these new merge methods is not sufficient for many real-world use cases, where a diagram routinely has to handle several hundred items. The official gojs-react package also uses these merge methods, so I’d expect it to suffer from the same issue. Do you have any plans to implement the optimizations you mentioned in future releases? And are there any good ways to improve overall performance on our side, other than giving up on managing the model with Redux?

We’ll need to talk about that.

Until then, you could adapt the gojs-react code to be a lot more efficient, precisely because you can make assumptions about what might be replaced and when. That’s basically what we would do.
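For example, a hypothetical sketch, assuming the merging happens in componentDidUpdate as it does in the current gojs-react source (verify against your version): with immutable Redux state, unchanged slices keep the same reference, so strict equality is a complete “nothing to do” test.

```tsx
import { ReactDiagram } from "gojs-react";

class FastReactDiagram extends ReactDiagram {
  componentDidUpdate(prevProps: ReactDiagram["props"]) {
    // If none of the data props were replaced, skip the merge entirely.
    if (
      prevProps.nodeDataArray === this.props.nodeDataArray &&
      prevProps.linkDataArray === this.props.linkDataArray &&
      prevProps.modelData === this.props.modelData
    ) {
      return;
    }
    super.componentDidUpdate?.(prevProps);
  }
}
```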