In our application we’re running into performance issues when loading large nodeDataArrays or linkDataArrays. While digging into the Performance tab of Google Chrome I noticed that doModelChanged is called and takes up most of the computing time, which I don’t think is intended.
This is our diagram.ts, a class that extends go.Diagram:
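(The class itself wasn’t carried over into this excerpt; here is a minimal sketch of what the described setup might look like, assuming a “setNew” method that resets both data arrays inside a single transaction. The class name, method signature, and data shapes are hypothetical; only the “setNew” transaction name comes from the question below.)

```ts
import * as go from "gojs";

// Hypothetical reconstruction of diagram.ts; the real code isn't shown here.
export class AppDiagram extends go.Diagram {
  // Replaces all node and link data inside a single "setNew" transaction.
  setNew(nodes: go.ObjectData[], links: go.ObjectData[]): void {
    this.startTransaction("setNew");
    const model = this.model as go.GraphLinksModel;
    model.nodeDataArray = nodes;
    model.linkDataArray = links;
    this.commitTransaction("setNew");
    // this._emitHistory();          // app-specific helper, explained below
    // this._forceRedrawOverview();  // app-specific helper, explained below
    this.focus();
  }
}
```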
Now the actual question: how can I either disable the modelChanged invocation (I couldn’t find anything in the interface) or make sure it only runs after the “setNew” transaction is committed?
Internally there’s a Model Changed listener that notices changes in the model and makes the corresponding changes in the diagram. So when you set Model.nodeDataArray, it knows to delete all of the old Nodes and create new ones corresponding to the contents of the given Array.
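(For illustration, here’s a hedged sketch of listening to those same ChangedEvents yourself; the exact event raised by a full array replacement is an assumption, and none of this is needed for the diagram to update itself.)

```ts
import * as go from "gojs";

declare const myDiagram: go.Diagram;

// Watch the same stream of ChangedEvents that the internal listener reacts to.
// Purely illustrative; the diagram already does this work for you.
myDiagram.model.addChangedListener((e: go.ChangedEvent) => {
  if (e.change === go.ChangedEvent.Property && e.propertyName === "nodeDataArray") {
    // at this point the internal listener would discard all old Nodes
    // and create new ones for the contents of the new Array
    console.log("nodeDataArray was replaced");
  }
});
```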
Are you sure that your code is faster than just creating a new model and setting Diagram.model? That will automatically unregister the internal Changed listener from the old Model and register it on the new one, and it will use a new UndoManager. I don’t know what your _emitHistory and _forceRedrawOverview do, so I don’t know whether you still need to call them. You do need to call Diagram.focus, if that is what you want.
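(In code, that suggestion amounts to roughly the following; the variable names are assumed.)

```ts
import * as go from "gojs";

declare const myDiagram: go.Diagram;
declare const nodes: go.ObjectData[]; // assumed: freshly loaded node data
declare const links: go.ObjectData[]; // assumed: freshly loaded link data

// Replace the whole Model instead of resetting the arrays in a transaction.
// GoJS moves its internal Changed listener to the new Model and gives the
// diagram a fresh UndoManager automatically.
myDiagram.model = new go.GraphLinksModel(nodes, links);
myDiagram.focus(); // only if you still want keyboard focus afterwards
```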
- _forceRedrawOverview: forces a redraw of the GoJS Overview, which also takes some computing time.
- _emitHistory: emits the GoJS diagram transaction history to a Pinia store.
I doubt that my code is faster than just creating a new model. I’ll apply that change and let you know.
How big are your models? I’m wondering if there’s some memory re-use happening in your original code, although the time difference seems too big for that to explain.
If you primarily care about loading time, then you could try virtualization. But it’s significant programming work, and I don’t know what your trade-offs are.
With each release we usually have a number of performance improvements, although the library is mature enough that I’m not sure dramatic improvements are still possible without greatly restricting the functionality.
I have read that page, yeah. It has guided us a decent amount already.
If anything, I think 10 seconds to load roughly 50 thousand items is acceptable. But I’ll continue to push forward, because faster and simpler is always better. I think I can achieve this by slimming down the node and link data types: they’re currently considerably big, and I’m guessing that’s why setting the model (or the node/link data arrays) takes as long as it does.
At this time I’m not interested in virtualization.
If you zoom out in this diagram (for me at least), the diagram becomes unresponsive and laggy. How can you make zooming (almost) always performant? Or is that something that isn’t possible?
The size of each data object (i.e. the number of properties it has) probably won’t make much difference in time. Some, since more memory means more time, but not a lot, and perhaps not noticeable.
Simplifying the node templates would help. Avoid unnecessary bindings, and particularly avoid TwoWay Bindings unless needed. Avoid unnecessary Panels.
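(A minimal sketch of what such a slimmed-down template could look like; the data property names are made up for illustration.)

```ts
import * as go from "gojs";
const $ = go.GraphObject.make;

declare const myDiagram: go.Diagram;

// One Panel, one Shape, one TextBlock, OneWay Bindings only.
myDiagram.nodeTemplate =
  $(go.Node, "Auto",
    $(go.Shape, "RoundedRectangle",
      new go.Binding("fill", "color")),   // OneWay by default
    $(go.TextBlock, { margin: 4 },
      new go.Binding("text", "label"))    // add .makeTwoWay() only if users
                                          // actually edit the text in place
  );
```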
The virtualized samples are inherently slower when scrolling or zooming, since on each “ViewportBoundsChanged” DiagramEvent they need to check which Parts they must instantiate, and which they may destroy because those fall outside the viewport. But there are other optimizations that are possible.
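(A very rough sketch of that mechanism, not the actual sample code; the wholeData array and its precomputed bounds property are assumptions.)

```ts
import * as go from "gojs";

declare const myDiagram: go.Diagram;
// Assumed app-level array holding ALL node data, each with precomputed bounds.
declare const wholeData: Array<{ bounds: go.Rect; key: string }>;

myDiagram.addDiagramListener("ViewportBoundsChanged", e => {
  const viewb = e.diagram.viewportBounds;
  // Batch the changes; a null transaction name skips the UndoManager.
  e.diagram.model.commit(m => {
    for (const d of wholeData) {
      const inView = viewb.intersectsRect(d.bounds);
      const exists = e.diagram.findNodeForData(d) !== null;
      if (inView && !exists) m.addNodeData(d);         // instantiate lazily
      else if (!inView && exists) m.removeNodeData(d); // drop offscreen Parts
    }
  }, null);
});
```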
So I guess you are interested in performance after loading. That’s another reason to avoid virtualization.
Hi, I’d much rather have a 30-second load time with smooth scrolling, zooming, and insertion than vice versa.
Please elaborate on “But there are other optimizations that are possible” so I can implement these.
I’ll also take a look at the (node/link) templates; these are particularly big and currently filled with TwoWay Bindings that are probably unnecessary.