Yes and no.
Having implemented virtual DOM natively in Sciter (1), here are my findings:
In conventional browsers the fastest DOM population method is element.innerHTML = ...
The reason is that element.innerHTML works transactionally:
Lock updates -> parse and populate DOM -> verify DOM integrity -> unlock updates and update rendering tree.
While any "manual" DOM population using Web DOM API methods like appendChild() must do such transaction for any such call, so steps above shall be repeated for each appendChild() - each such call shall left the DOM in correct state.
And virtual DOM reconciliation implementations in browsers can use only public APIs like appendChild().
So, indeed, vDOM is not as performant as it could be.
But that also applies to Svelte style of updates: it also uses public APIs for updating DOM.
A solution could be a native implementation of an Element.patch(vDOM) method (as I did in Sciter) that works on par with Element.innerHTML - transactionally, but without the "parse HTML" phase. Yes, there is still the overhead of the diff operation, but with proper use of key attributes it is an O(N) operation in most cases.
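To illustrate the claim above, a minimal sketch (plain DOM, no framework): the first function pays the transactional cost once for the whole batch, while the second pays it per call, per the description above.
function populateWithInnerHTML(el, labels) {
  // One lock -> parse -> verify -> unlock cycle for the whole batch.
  // (Assumes `labels` are trusted strings.)
  el.innerHTML = labels.map((l) => `<li>${l}</li>`).join('');
}

function populateWithAppendChild(el, labels) {
  for (const l of labels) {
    const li = document.createElement('li');
    li.textContent = l;
    el.appendChild(li); // each call must leave the DOM consistent
  }
}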
The key observation about HTML templates is that usually large portions of them don't change with new data. There is static content, and even with lots of dynamic bindings they're tied together in a static structure.
So the vdom approach of processing all the static parts of a template during a diff is just extremely wasteful, especially for fairly common cases like conditionally rendering a node before some static content.
Ideally you already know what changed, and can just update the parts of the template that depend on it. In JS that typically requires a compiler, complexity, and custom semantics (like Solid). But you can get very, very close to that ideal with plain JS syntax and semantics by capturing the static strings and the dynamic values separately then only comparing the dynamic values on updates.
This is what we do with lit-html and why I think tagged template literals are nearly perfect for HTML templates.
With tagged template literals, an expression like:
html`<h1>Hello ${name}!</h1>`
is passed to the `html` tag function as an array of strings `['<h1>Hello ', '!</h1>']` and an array of values: `[name]`. The strings array is the same object every time you evaluate the expression, so you can compare it against the previous template's strings and, if they match, update only the values in the DOM.
It's a really efficient way to render and update DOM with a pretty simple conceptual model that requires no compiler at all - it's all runtime. I think it's straightforward and powerful enough to be standardized at some point too.
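A minimal sketch of that caching idea (not lit-html's real implementation, which also handles attribute bindings, nested templates, etc.; `html` and `render` here are simplified stand-ins that only handle text positions):
const rendered = new WeakMap();

function html(strings, ...values) {
  return { strings, values };
}

function render({ strings, values }, container) {
  const prev = rendered.get(container);
  if (prev && prev.strings === strings) {
    // Same static structure: touch only the dynamic text parts that changed.
    values.forEach((v, i) => {
      if (v !== prev.values[i]) prev.parts[i].data = String(v);
    });
    prev.values = values;
    return;
  }
  // First render: parse the static strings once, with comment markers
  // standing in for the dynamic (text-only, in this sketch) positions.
  container.innerHTML = strings.join('<!--part-->');
  const walker = document.createTreeWalker(container, NodeFilter.SHOW_COMMENT);
  const markers = [];
  while (walker.nextNode()) {
    if (walker.currentNode.data === 'part') markers.push(walker.currentNode);
  }
  const parts = markers.map((marker, i) => {
    const text = document.createTextNode(String(values[i]));
    marker.replaceWith(text);
    return text;
  });
  rendered.set(container, { strings, values, parts });
}

After the first render(html`<h1>Hello ${name}!</h1>`, el), re-evaluating only writes a single text node when `name` changes.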
Imagine being someone in the semiconductor industry reading this. You're at the absolute pinnacle of high tech and are approaching the limits of material reality to realize a 20% faster CPU. It's a truly superhuman accomplishment.
Software developers: well yes, 99% of the cycles I use are completely needless, but it's still plenty fast enough!
Which we justify with the idea that a framework like React is abstract, hence expressive and productive.
Excuse me? Abstract? React is absurdly low level.
25 years ago I was coding in Borland products. You visually drag your UI together. Link events with a click. Drag data sources and fields into the UI for two-way binding. Tooling chain? What tooling chain. There's a build and a run button. No weird text config files or CLI crap. And every setup is the same.
25 years later we're worrying about painting frames. We're pissing away impressive hardware performance whilst simultaneously not actually achieving abstraction. That's a double fail.
My life, and blood pressure, would be greatly improved if every article about JS frameworks ended with a paragraph that says something like "... but JS frameworks are pretty fast really, so you'll only see a problem if you're changing lots of things on the page at once. And if you're updating lots of DOM nodes in a single action then maybe you need to think hard about your underlying HTML structure and UX instead of worrying about what JS is doing. That's where your problem lies after all."
More often than not, I've noticed the choice of a single-page app itself is pure overhead.
SPA technology brings some key advantages but also a whole new realm of cost and complexity. It's my experience that SPA popularity has convinced many folks to use it when they really don't have a practical reason to justify it.
Inferno.js uses VDOM https://github.com/infernojs/inferno and is faster than Svelte according to these benchmarks https://krausest.github.io/js-framework-benchmark/2023/table.... Sooo, VDOM can improve performance?
Svelte is great. React is great. X, Y and Z are also great. And you know what else they all share? Speed. They are all fast. Definitely fast enough for 99% of all use cases, if not more. The benchmarks they all provide are just benchmarks. I treat them like I treat car range reports from the car makers. I personally use React because I know it well, and it allows me a super speedy development cycle once all the base components are done. I'm sure another person will say "I use Svelte because A, B and C", etc.
I haven't seen any project written in React that feels fast. But the most important thing for me, broken in nearly all JS front ends (because it is too difficult to catch all the edge cases), is navigation: the browser's back and forward buttons almost never work as expected, and bookmarking links is broken because state lives in JS.
I always find these a little funny. "[Commonly held thought with some convenient tweaks] is WRONG...so use our stuff!"
Even open source projects are guilty of this type of grifting, everyone wants to win, even without money in the game.
I take “pure overhead” to mean a cost with literally no benefit. To me that just makes it sound like Svelte is being pushed by idiots, because clearly there’s a substantial benefit (regardless of whether or not VDOM is an optimal strategy).
From the end of this article: “Virtual DOM is valuable because it allows you to build apps without thinking about state transitions, with performance that is generally good enough. That means less buggy code, and more time spent on creative tasks instead of tedious ones.”
Okay great, it’s not pure overhead.
Sadly, it seems like nobody is considering the best optimization: making DOM operations themselves fast. If you could batch DOM operations together, you could avoid a lot of wasted relayout and duplicated calculation.
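For anyone unfamiliar with the relayout problem, here is a sketch of what manual batching looks like today (a transactional API could do this automatically): interleaving reads and writes forces a layout recalculation per item, while grouping all reads before all writes triggers just one.
function resizeInterleaved(items) {
  for (const el of items) {
    const w = el.offsetWidth;        // read: forces synchronous layout
    el.style.height = `${w / 2}px`;  // write: invalidates layout again
  }
}

function resizeBatched(items) {
  const widths = items.map((el) => el.offsetWidth); // phase 1: all reads
  items.forEach((el, i) => {                        // phase 2: all writes
    el.style.height = `${widths[i] / 2}px`;
  });
}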
Reading this as a native developer is a bit like reading about alchemy or astrology - two fields with their own vast suite of terminology and internal logic that doesn't fully correspond to anything real ...
... only to find out that this stuff is actually real and is how a big chunk of the visible web actually works.
I can't trade TSX for any text templates. Being able to write tags and have them syntax-checked with type support is indispensable. I wish those frameworks would embrace TSX rather than trying to drag users back to the dark past.
I've heard the term "virtual dom" for years. This article made me want to understand. It gives this example for explaining "what is a virtual dom?"
function HelloMessage(props) {
return (
<div className="greeting">
Hello {props.name}
</div>
);
}
And that returns "an object representing how the page should now look". Aren't we developers here? How about an object type? I assume it's a DocumentFragment. Is that correct?
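(For reference: React.createElement returns a plain JavaScript object, not a DocumentFragment. A simplified sketch of its shape for the example above; real React elements carry extra internal fields like $$typeof and key:)
const element = {
  type: 'div',
  props: {
    className: 'greeting',
    children: ['Hello ', props.name], // static text plus the dynamic value
  },
};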
Then it talks in broad (i.e. useless) terms about using this object.
So my next question is: what exactly is wrong with using a DocumentFragment to just replace that part of the DOM? For example:
let frag = renderMyModel();
if (destElt.firstElementChild) {
  destElt.replaceChild(frag, destElt.firstElementChild);
} else {
  destElt.appendChild(frag);
}
I do this with a massive DOM tree and rendering is like instant.
Calling the VDOM pure overhead is a strong statement when there are patterns that are more difficult to express in Svelte because of how it manages views.
Once a view is created, it can't be processed by JS. It can't be stored in an array or an object. You can't count or individually wrap children. This makes it harder to create flexible APIs [1].
The question is: are we willing to give up the expressivity of React for extra performance?
I am leaning towards "no", because I believe React's performance issues mainly come from its memoization-based reactivity model rather than the VDOM. When applying `useMemo` in the right places I can create perfectly performant apps (a sketch below). However, this requires profiling and is often unintuitive.
[1]: for example https://news.ycombinator.com/item?id=33990947
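A hedged sketch of the memoization meant above (`sortItems`, `Row`, and `ItemList` are hypothetical names; the point is keeping references stable across renders so React can skip work):
import { memo, useMemo } from 'react';

// Hypothetical expensive derivation.
const sortItems = (items) => [...items].sort((a, b) => a.rank - b.rank);

// memo() skips re-rendering Row when its props are shallow-equal.
const Row = memo(({ item }) => <li>{item.label}</li>);

function ItemList({ items }) {
  // useMemo caches the derived array, so unrelated parent re-renders
  // don't re-sort or invalidate Row's memo() check.
  const sorted = useMemo(() => sortItems(items), [items]);
  return (
    <ul>
      {sorted.map((item) => (
        <Row key={item.id} item={item} />
      ))}
    </ul>
  );
}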
Related:
Virtual DOM is pure overhead (2018) - https://news.ycombinator.com/item?id=27675371 - June 2021 (289 comments)
Virtual DOM is pure overhead (2018) - https://news.ycombinator.com/item?id=19950253 - May 2019 (332 comments)
Rules of thumb are:
- Make it work
- Make it fast
Reality is, most developers just want to get shit done and go home. The bosses of course never want to pay you to "make it fast".
The point of React is, of course, low-overhead JS classes/functions to decompose a large UI. That gets the job done.
I'm switching from Vue.js to Svelte these days, and indeed Svelte is much easier to write and understand.
I'd argue this is essentially just an optimized (and therefore potentially more buggy) virtual dom.
Svelte is being smart and skipping comparisons in the places it knows the result is static. That's nifty. But it also means you have to depend on svelte getting it right every time, in all scenarios.
Long term - I think this is probably the right approach, but it feels very similar to the -O3 C++ optimization flag: there was a fairly long period when enabling that flag was considered risky. Each extra transformation carries opportunities for bugs.
It also means extra work at code generation time - it's building a vdom engine specific to your template (again - this is nifty!). Probably not a huge deal, since js build tooling is seeing a LOT of focus on speed, but it's there.
Note that a virtual DOM is pure overhead if you already have a real DOM to work with.
That's kind of the ignored superpower of a framework like React, which makes the virtual DOM the authority: there might be a DOM, but there also might not be. Whether the virtual DOM reflects to a real DOM, or native UI, or Qt, or an ASCII terminal interface, it doesn't care.
This is also the part that most web devs have the hardest time with: React (and some other frameworks) are not web frameworks. They really are just UI frameworks that happen to (also) work in a browser, even if they were originally born out of a web need way back when.
Really, they should just add DOM morphing into the DOM APIs. React has proved there is some value to this approach; whether it is speed or mental model, there are enough positives to just add it to the platform.
The way I remember it, the message was that it was too hard for most devs to be careful enough in their Backbone.View.render code to consistently write fast DOM manipulation that scaled. That may not be what's in the cited talk, but that's my recollection from the various bits of early marketing by Facebook Engineering. So the sell was alleviating that burden.
Js frameworks are also pure overhead. I have never seen any benchmark where they have ever been better than vanillaJs. /s
Love Svelte, but raw DOM manipulation isn't fast either. I have a recursive Svelte tree component which, as I collapse a node's contents, destroys all of its children. That's fine with small trees, but once they get big there's a noticeable lag when collapsing/uncollapsing. Once I deploy the site it gets faster, but destroying child components is still slow.
I think I'll try just toggling visibility next and skip removing the children (a sketch below). Sure, this is a bit of an edge case, but there's no definite silver bullet here. I don't know enough about Svelte's internals to say whether this is something they could do, e.g. hiding components before destroying them. But it'll still jank (though less visibly), as everything is done on the same UI thread.
Virtual list of course would probably be the optimal solution.
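In plain DOM terms, the visibility-toggle idea mentioned above boils down to something like this (a sketch; in Svelte it would mean binding `hidden` rather than wrapping children in `{#if}`):
function setCollapsed(subtree, collapsed) {
  // Keep the subtree alive and flip `hidden` (display: none) instead of
  // paying to destroy it now and rebuild it on the next expand.
  subtree.hidden = collapsed;
}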
For me, the whole DOM is useless overhead. Why don't we just have some nice transactional browser API? Client-side frameworks don't really need to generate HTML that is immediately parsed. It's a total waste.
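Purely hypothetical (no browser ships anything like this), but a sketch of what such a transactional API might look like: queue mutations, commit once, pay for a single consistency check and layout pass.
// `document.mutate` and `tx` are invented names for illustration only.
document.mutate(container, (tx) => {
  tx.replaceChildren(newHeader, newList);            // queued, not applied yet
  tx.setAttribute(container, 'data-state', 'ready'); // also queued
}); // commit: one consistency check, one layout, one paint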
I absolutely love articles like this. Written by people with more critical thinking skills than technology fetishism.
Great to see.
Can we please talk about how much reactive programming sucks for UIs? I miss all the people who switched from Angular to React for a reason in these discussions... I'd bet none of them are going to move to Svelte.
Is there any discussion in the webdev community about whether using a typesetting engine from the '80s is even a good idea for modern performant UI apps? Or is it just taken for granted and never questioned?
In the first example showing the diff of JSX and React.createElement:
> You can do the same thing without JSX...
Well, the end result is that there is no JSX. It's gone. It's syntactic sugar.
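For readers who haven't seen it, the equivalence in question is roughly this (a sketch; the article shows its own version):
import React from 'react';

// JSX source:
const a = <h1 className="greeting">Hello</h1>;

// What the compiler emits - the JSX is gone entirely:
const b = React.createElement('h1', { className: 'greeting' }, 'Hello');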
Rethinking Reactivity by Rich Harris (2019)
https://twitter.com/dan_abramov/status/1135424423668920326
Above thread summarizes the issue pretty well I think. Optimizing for DOM updates is nice, but you also want to optimize for bundle size and page load time, and at a certain app size the compiler output is always going to be bigger than just using a virtual DOM.
If that's true, then why do your components need a custom compiler to work?
Isn’t Svelte essentially just a cached vdom, or do I have the wrong mental model?
If I see html in a javascript file, I boycott the codebase
And having a custom JS compiler isn't pure overhead?
Anything beyond HTML and CSS is a mistake.
An article that talks about performance and "fast" and "slow" but does not quote a single benchmark?
Sorry, this is nonsense.
(2018)
All abstractions are pure overhead. Let's code everything by hand by flipping bits in memory using a tiny magnetic needle.
So is the JS runtime. So why don't we just write apps in raw WASM?
Poorly written React code isn't performant, and removing the virtual DOM will not fix your problem. It's a hill I'm willing to die on. Many engineers seem to struggle with unnecessary re-renders, to the point where I see long tasks in the performance tab. Clicking a button shouldn't lock the UI thread for 2 seconds.
I like the looks of Svelte, but this argument is a bit strong. The supposed benefit of virtual DOM being:
X application-level virtual DOM changes -> differ detects only Y < X real DOM changes -> Y final DOM operations
is faster than
X application-level virtual DOM changes -> no virtual DOM diffing -> X DOM operations
This depends a lot on how fast the diffing is and how fast the DOM is, but unless DOM operations are instant now (and with CSS, layout reflow, etc., I'm not sure how they could be), there must remain some situations where VDOM has a perf advantage.
I'm quoted in this blog post so I figured I'd respond. I'm a former member of the React team but I haven't worked on it in a long time.
Largely I agree with everything in this article on a factual basis but I disagree on the framing of the trade-offs. Two points in particular:
1. Before open sourcing React we extensively measured performance on real world applications like mobile search and desktop ads management flows. While there are pathological cases where you can notice the overhead of the virtual DOM diff it's pretty rare that it meaningfully affects the user experience in practice, especially when compared to the overhead of downloading static resources like JS and rendering antipatterns like layout thrash. And in the situations where it is noticeable there are escape hatches to fix it (as mentioned in the article).
2. I disagree with the last sentence strongly. I would argue that Svelte makes huge sacrifices in expressive power, tooling and predictability in order to gain performance that is often not noticeable or is easily achieved with React's memoization features. React was always about enabling front-end engineers to take advantage of software engineering best practices so they could level up velocity and quality. I think Svelte's use of a constrained custom DSL is a big step backwards in that respect so while I appreciate the engineering behind it, it's not a technology I am interested in using.
Even though I disagree on these points I think it is a well-written article and is a pretty fair criticism of React outside of those two points, and I could see reasonable people disagreeing on these trade-offs.