Jane Street briefly summarizes some options here: https://blog.janestreet.com/breaking-down-frp/
And they have an interesting talk on the trade-offs and how their own system, Incremental, evolved: https://blog.janestreet.com/seven-implementations-of-increme...
The "2 * x" example is rather odd - why would the reaction to a change in x display many gradual increments of 1 instead of showing the final value once? And then why does z = y + 1, instead of adding 1 to y, repeat all the steps again from x? That's not how real signal frameworks work, and it's also not how you'd imagine they should work.
Then the next cascading example: OK, if the Signal is a button rather than the underlying mechanism behind it, then "computed 1" is also a signal, so why isn't it called that? (Though intuitively you'd think the moving dots are the signals, not the buttons.)
Cheers
* I think the first implementation in JS land was Flapjax, which was around 2008: https://www.flapjax-lang.org/publications/
* The article didn't discuss glitch-freedom, which I think is fairly important.
Mind you, the framework still has a hostile learning curve, but for those who already made that investment, it's a boon.
The push-pull approach described here actually sidesteps the worst glitches because the dirty-flag propagation is just marking, not computing. But the article glosses over what happens during the pull phase when the dependency graph has diamonds. Topological sorting during pull is the standard fix -- Preact Signals and SolidJS both do this -- but it adds complexity that matters if you are rolling your own.
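To make the diamond glitch concrete, here is a tiny simulation (my own illustration, not any library's code) of what a naive eager-push system does on a diamond-shaped graph without topological ordering:

```typescript
// Diamond graph: a → b, a → c, then b and c join into d.
// A naive eager-push system re-evaluates d once per incoming branch,
// exposing an inconsistent intermediate value.
const observed: number[] = []

let a = 1
let b = a + 1   // left branch
let c = a * 2   // right branch
let d = b + c   // join node of the diamond

function setA(v: number) {
  a = v
  // The left branch updates first and eagerly re-evaluates d
  // while c is still stale:
  b = a + 1
  d = b + c
  observed.push(d)   // glitch: fresh b, stale c
  // ...then the right branch updates and d is re-evaluated again:
  c = a * 2
  d = b + c
  observed.push(d)   // now consistent
}

setA(2)
console.log(observed)   // [5, 7] — 5 is the glitch, 7 the correct value
```

With topological sorting (or with mark-then-pull, as the article describes), d would be evaluated exactly once, after both branches settle.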
Flapjax was doing a lot of this right in 2008. It is wild that the JS ecosystem took another 15 years to converge on essentially the same core ideas with better ergonomics.
I've gone with the universal `alien-signals` package for my project (which doesn't use a frontend framework that includes signals). Its benchmarks show it to be by far the fastest, and it enforces strict limits on code complexity. Those limits are also supposed to avoid glitches by design, and now at least some of that is tested[1].
The system as described isn’t actually glitchy, is it? It doesn’t eagerly run any user computations, just dirtying, and that is idempotent so the order is irrelevant. It’s also a bit useless because it only allows you to pull out values of your own initiative, not subscribe to them, but that’s fixable by notifying all subscribers after the dirtying is done, which can’t cause glitches (unless the subscribers violate the rules of the game by triggering more signals).
So now I’m confused whether all the fiddly priority-queue needlepoint is actually needed for anything but the ability to avoid recomputation when an intermediate node decides it doesn’t want to change its output despite a change in one of its inputs. I remember the priority queue being one of the biggest performance killers in Sodium, so that can’t be it, right?..
I’m also confused about whether push-pull as TFA understands it has much to do with Conal Elliott’s definition. I don’t think it does? I feel like I need to reread the paper again.
Also also, some mention of weak references would probably be warranted.
And a lot of literature on the algorithms.
I wrote a bit about the connection here:
https://blog.metaobject.com/2014/03/the-siren-call-of-kvo-an...
(It starts in a slightly different place, but gets there)
Also about constraints as an architectural connector.
https://dl.acm.org/doi/10.1145/2889443.2889456?cid=813164912...
Virtually nothing that is getting sold/branded as "FRP" has anything to do with Conal Elliott's definition.
I once gave a long talk about this here in Berlin, but I don't remember if there was a video.
I've also explained it on twitter a bunch of times, including this memorable sequence:
https://x.com/mpweiher/status/1353716926325915648
Kinda like the Marshall McLuhan scene in Annie Hall ("if only real life were like this")
> Virtually nothing that is getting sold/branded as "FRP" has anything to do with Conal Elliott's definition.
True but not what I meant. The article implicitly (and, in the links at the end, explicitly) refers to his 2009 paper “Push-pull functional reactive programming”, which describes a semantic model together with a specific implementation strategy.
So I was wondering if TFA’s “push-pull” has anything to do with Elliott 2009’s “push-pull”. I don’t think so, because I remember the latter doing wholly push-based recomputation of discrete reactive entities (Events and Reactives) and pull-based only for continuous entities that require eventual sampling (Behaviors).
With that said, I find it difficult to squeeze an actual algorithm out of Elliott’s high-level, semantics-oriented discussion, and usually realize that I misunderstood or misremembered something whenever I reread that paper (every few years). So if the author went all the way to reference this specific work out of all the FRP literature, I’m willing to believe that they are implying some sort of link that I’m not seeing. I would just like to know where it is.
We have been using Signals in production for years via several modern front-end frameworks like Solid, Vue, and others, but few of us are able to explain how they work internally. I wanted to dig into it, especially diving deep into the push-pull based algorithm, the core mechanism behind their reactivity. The subject is fascinating.
Imagine an application as a world where we describe the set of rules that govern it. Once a rule is defined, our program will no longer be able to change it.
For example, we decide that in our world, any y value must be equal to 2 * x. We define this rule, and from then on, whenever x changes, y will automatically adjust. We can define as many rules as we want. They can even depend on each other by deciding that z must be equal to y + 1, and so on.
Now we press the play button: our program starts, the world is running, and the rules we have defined are now in effect over time. (Think of it as our runtime.)
[Interactive module: x = 10, y = x * 2 = 20, z = y + 1 = 21]
And then, we just have to observe. We can modify x and see how y and z automatically adjust to comply with the rules we have established. It's like a spreadsheet where dependent cells automatically update when their sources change. In other words, derived values are reactive to changes in their dependencies.
These derived values behave like pure functions: no side effects, no mutable state. In the next example, time is a source that changes continuously while rotation is derived from it. The square simply reflects the result of this transformation that is declared once.
[Interactive module: time = 0.00 → derived value rotation = f(time)]
This "reactive world" didn't come out of nowhere. The idea emerged in the 1970s and was formalized as Reactive Programming, a paradigm that describes systems where changes in data sources automatically propagate through a graph of dependent computations, which is exactly what Signals do.
Signals are thus heirs to the Reactive Programming paradigm, whose first JavaScript implementations came with libraries like Knockout.js (2010) and then RxJS (2012), which brought reactive ideas to the browser.
Now that we have more context on what Signals are, let's dive into the push-pull based algorithm that is at the core of this system.
A Signal is an abstraction that represents a reactive value that can be read and modified. When a signal changes, all parts of the application that depend on this signal are automatically updated. I went through the exercise of implementing a very basic version:
const signal = <T>(initial: T) => {
  let value: T = initial
  const subs = new Set<(state: T) => void>()
  return {
    get value(): T {
      return value
    },
    set value(v: T) {
      if (value === v) return
      value = v
      for (const fn of Array.from(subs)) fn(v)
    },
    subscribe(fn: (state: T) => void) {
      subs.add(fn)
      return () => subs.delete(fn)
    }
  }
}
We can imagine Signals as the starting point of the rules of our world, the primitive entry points of targeted mutations.
My first thought was "Ok, it's just a simple publish–subscribe pattern with a getter and a setter." The Signal itself works like that, except the function keeps a reference to the current state that can be read and modified. If you have ever used an event emitter, this pattern will seem familiar to you:
const count = signal(0)

// Somewhere in the application
count.subscribe((newValue) => {
  console.log("Count changed to:", newValue)
})

// Anytime and anywhere in the application
count.value += 1
// "Count changed to: 1"
This is what we call the push approach, also known as eager evaluation: when the signal is updated, a notification is immediately pushed to all of its subscribers.
I deliberately use the term "notification" and not "state" because Signals, with the push-pull based algorithm, don't dispatch a state value; they notify subscribers that their own state has changed, which is not the same thing. We will talk about cache invalidation in detail in the next section. Keep in mind that the dot moving between "nodes" is only a notification. (In the next modules, you can click the Signal to see the notifications being dispatched to its subscribers.)
[Interactive module: a signal pushing notifications to two subscribers]
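To make the notification-vs-state distinction concrete, here is a variant of the earlier signal (my own sketch, names are hypothetical) whose subscribers receive no payload at all and must pull the value themselves:

```typescript
// Push-pull sketch: the signal pushes only a "something changed" ping,
// never the value itself; subscribers pull the value when they need it.
const createSignal = <T>(initial: T) => {
  let value = initial
  const subs = new Set<() => void>()   // callbacks take no payload
  return {
    get: () => value,                  // pull side
    set: (v: T) => {
      if (v === value) return
      value = v
      for (const fn of subs) fn()      // push side: notification only
    },
    subscribe: (fn: () => void) => {
      subs.add(fn)
      return () => subs.delete(fn)
    },
  }
}

const count = createSignal(0)
const seen: number[] = []
count.subscribe(() => seen.push(count.get()))  // subscriber pulls on notification
count.set(1)
count.set(1)   // equal value: no notification dispatched
count.set(2)
console.log(seen)   // [1, 2]
```

Decoupling the notification from the value is what later allows computeds to ignore pings entirely until someone actually reads them.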
In this more complex example, we have multiple "nodes" that depend on each other. All of them can notify their own subscribers that their state has changed.
[Interactive module: a signal cascading notifications through computed 1–4]
At this point, we understand that the push-based approach propagates downward through notifications, and now we have to explore how the pull-based approach propagates upward through re-evaluation. What does that mean?
One of the most important aspects of Signals may not be the signal function itself, but the computed. They are reactive derived functions that compute values based on signals or computeds. We can imagine them as signals without a setter.
First, the main difference between signals and computeds is that computeds are lazy. They are invalidated (not updated) whenever one of their dependencies changes. Furthermore, they are updated only when they are read, if they have been invalidated first (our cache system). This is what we call the pull-based algorithm.
Secondly, computeds automatically track their dependencies. They subscribe to changes in the signals/computeds they access during their execution. It's one of the most "magical" aspects of this system that developers love, compared to React where we have to manually specify the dependencies of a useEffect or useMemo with the dependency array. Let's see how we can implement a simple version:
const computed = <T>(fn: () => T) => {
  let cachedValue: T
  // ...
  const _internalCompute = (): void => {
    // ...
    cachedValue = fn()
  }
  return {
    get value() {
      _internalCompute()
      return cachedValue
    }
  }
}
The thing to note here is that accessing the value property of the computed object triggers the _internalCompute function, which re-evaluates the computation and updates the cached value (not actually cached for now, but we will address this later).
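As a preview of that caching, here is a hypothetical variant (my own sketch, not the article's final code; `computedCached` and `setDirty` are illustrative names) that adds a dirty flag so the computation only re-runs after an invalidation:

```typescript
// A computed that caches its value and only recomputes after being
// marked dirty (invalidated) by one of its sources.
const computedCached = <T>(fn: () => T) => {
  let cachedValue!: T
  let dirty = true                 // starts dirty: nothing computed yet
  return {
    setDirty() { dirty = true },   // called by sources on change (invalidation)
    get value(): T {
      if (dirty) {                 // pull: recompute only when invalidated
        cachedValue = fn()
        dirty = false
      }
      return cachedValue
    },
  }
}

let runs = 0
let x = 1
const double = computedCached(() => { runs++; return x * 2 })
console.log(double.value, runs)   // 2 1  — first read computes
console.log(double.value, runs)   // 2 1  — cached, no recomputation
x = 5
double.setDirty()                 // a source change invalidates the cache
console.log(double.value, runs)   // 10 2 — recomputed on the next read
```

The missing piece, covered next, is who calls `setDirty`: that is exactly what automatic dependency tracking provides.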
const count = signal(1)
const doubleCount = computed(() => count.value * 2)
const plusOne = computed(() => doubleCount.value + 1)
// Update the signal…
count.value = 5
console.log(doubleCount.value) // 10
console.log(plusOne.value) // 11
We know this code, right? Now, look at the dependency tree of this program and focus on the "pull" aspect of the algorithm. You can click the computed to see how the dot moves up the tree when we read its value.
[Interactive module: the dependency tree count → doubleCount → plusOne]
We can observe that the computed being read has no knowledge of the entire tree. It only knows what its sources (dependencies) are and what its subscribers (dependents) are.
Check the same module with a more complex dependency tree. We can see what happens when a computed function has multiple dependencies at the same time (this is the case for the lowest node in the tree):
[Interactive module: a signal feeding computed 1–3, where the lowest computed has multiple dependencies]
Some questions remain about the implementation of this system at this point, and this is where Signals become more complex and interesting:
How does a computed function establish the link between its sources and itself (the auto-tracking of dependencies)? The link between signals and computeds is somewhat magical. As mentioned before, there is no need to explicitly declare the dependencies of a computed on signals, as we do in React (with the damned dependency array). The system automatically tracks which signals are accessed during the execution of the computed function. This is what we will discover in this section.
Back to our previous example with the count signal, doubleCount and plusOne computeds.
const count = signal(1)
const doubleCount = computed(() => count.value * 2)
const plusOne = computed(() => doubleCount.value + 1)
// Update the signal…
count.value = 5
console.log(doubleCount.value) // 10
console.log(plusOne.value) // 11
Keep this program in mind. To understand the mechanism of the auto-tracking, the best way is to look at the implementation of a Signal library in detail:
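In lieu of reproducing a real library here, a compact sketch of the idea (my own illustration; real implementations such as alien-signals are considerably more involved) shows how a global STACK of currently-evaluating computeds lets every value read during evaluation register the reader as a subscriber:

```typescript
// Auto-tracking via a global STACK: while a computed's fn runs, it sits on
// the stack, and every signal/computed it reads links itself to it.
type Subscriber = { setDirty: () => void }
const STACK: Subscriber[] = []   // currently-evaluating computeds

const signal = <T>(initial: T) => {
  let value = initial
  const subs = new Set<Subscriber>()
  return {
    get value(): T {
      const current = STACK[STACK.length - 1]
      if (current) subs.add(current)   // auto-track the reading computed
      return value
    },
    set value(v: T) {
      if (v === value) return
      value = v
      for (const s of subs) s.setDirty()   // push phase: invalidate only
    },
  }
}

const computed = <T>(fn: () => T) => {
  let cached!: T
  let dirty = true
  const subs = new Set<Subscriber>()
  const self: Subscriber = {
    setDirty() {
      if (dirty) return
      dirty = true
      for (const s of subs) s.setDirty()   // propagate invalidation downward
    },
  }
  return {
    get value(): T {
      const current = STACK[STACK.length - 1]
      if (current) subs.add(current)
      if (dirty) {                          // pull phase: recompute lazily
        STACK.push(self)
        try { cached = fn() } finally { STACK.pop() }
        dirty = false
      }
      return cached
    },
  }
}

const count = signal(1)
const doubleCount = computed(() => count.value * 2)
const plusOne = computed(() => doubleCount.value + 1)
count.value = 5
console.log(doubleCount.value, plusOne.value)   // 10 11
count.value = 7                                 // invalidates both computeds
console.log(plusOne.value)                      // 15
```

Note how `count.value = 7` only marks nodes dirty; nothing is recomputed until `plusOne.value` is read, which pulls fresh values up through `doubleCount`.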
We have demystified the auto-tracking dependency system, using the global STACK, which enables communication between the currently executing computed and the signals/computeds it accesses during its execution.
We have also seen how the cache system works in the pull-based algorithm, using the dirty flag to know when a computed is invalidated.
As described above, the final flow of signals is now possible by combining both push and pull mechanisms! You can click the signal or a computed to observe the invalidation and re-evaluation of nodes in the tree. Let's play with it!
[Interactive module: a signal and computed 1–3, showing invalidation and re-evaluation]
Note that all setDirty calls are synchronous; each node is invalidated when the dot passes through it. The delay is purely for visual purposes.
And that's it! We now have a complete picture of the push-pull algorithm at the core of Signals. I won't cover it here, but it's worth mentioning that most signal libraries also expose an effect function built on the same tracking mechanism; that belongs more to API design than to the algorithm itself.
The article focused on the algorithm: what makes Signals interesting is not just that they update some UI, but how they propagate change through a reactive graph.
This combination gives us a fine-grained reactivity system already adopted by many frameworks like Solid, Vue, Preact, Angular, Svelte, and others. Each comes with its own API surface, but shares the same underlying logic.
The Signals topic has already been covered in a large number of publications that greatly helped me understand the subject, but none that I found offered an in-depth analysis of implementing the push-pull based algorithm from scratch. To explore this subject in depth, I implemented my own version of the Signal system, certainly very naive compared to the great alien-signals, preact-signals or solidjs-signals, but functional enough to understand the concept.
Note that we may "soon" (maybe?) no longer need to implement this system manually, as this model is being standardized natively in JavaScript: TC39 proposal-signals (currently at Stage 1). This would be a major advancement for the entire JavaScript ecosystem, as it would allow each framework to rely on a common foundation, while retaining the freedom to choose the API that best suits them.
I greatly enjoyed writing this article and building interactive modules for it. If you learned something new or enjoyed reading it, consider supporting my work ☕️ or feel free to connect with me on Bluesky or LinkedIn 👋
I highly recommend taking the time to listen to this podcast episode, which helped me dive deep into this subject: How signals work, on ConTejas Code w/ Kristen Maevyn and Daniel Ehrenberg