One thing I can say about the Wolfram language is that it's actually Lisp with syntax that looks weird at first sight.
However, when you look at rule processing, it's pattern matching on steroids of a kind I haven't seen in the Lisp world. It looks quite powerful and applies throughout the language (e.g. the "Query" book).
Too bad the whole language is closed and so heavily licensed.
I actually think this is just computer science. Why? Because the first "computer scientist" - Alan Turing - was interested in this exact same set of ideas.
The early programs he wrote for the Manchester machines (the Baby and its successor, the Ferranti Mark 1; the Atlas came only after his death) seem to have been focused on a theory he had about how animals got their markings.
They look a little to me (as a non-expert in these areas, reading them in a museum over about 15 minutes, not doing a deep analysis) like a primitive form of cellular automaton algorithm. From the scrawls on the printouts, it's possible that he was playing with the space of algorithms, not just the algorithms themselves.
It might be worth going back and looking at that early work he did and seeing it through this lens.
And that's my point; it's okay to create new names for sub-disciplines, as Wolfram is doing here. Because that's what we have been doing since the days of Aristotle.
The idea, if I understand correctly, is that pattern formation in animals depends on molecules diffusing through the growing system (the body) and reacting where the waves of molecules overlap.
To me, the 1952 paper ("The Chemical Basis of Morphogenesis") is very important, since it shows up in theoretical biology a lot. Seeing generality at all these different levels of emergence is really exciting to me (and it makes me sad when others don't see it). Can you imagine? Set up a few gradients, and now you have coordinates. Put all the bits where they're supposed to go, like, uhhh... GLSL sort of loosely fits as an analogy. How cool is THAT?
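If you want to play with this yourself, here's a minimal Gray-Scott reaction-diffusion sketch in Python/NumPy. To be clear, this is my own toy: Gray-Scott is a later model in the same family Turing started, and the constants below are just a commonly used spot-forming regime, not anything from the 1952 paper.

    import numpy as np

    # Gray-Scott reaction-diffusion on a periodic grid. U is "food",
    # V is the autocatalytic chemical (reaction: U + 2V -> 3V).
    # Spots and stripes emerge from nothing but diffusion plus this local rule.
    n, Du, Dv, f, k = 128, 0.16, 0.08, 0.035, 0.065
    U, V = np.ones((n, n)), np.zeros((n, n))
    U[60:68, 60:68], V[60:68, 60:68] = 0.5, 0.25   # seed a small square

    def lap(Z):  # 5-point Laplacian with wraparound edges
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

    for _ in range(5000):
        uvv = U * V * V
        U += Du * lap(U) - uvv + f * (1 - U)
        V += Dv * lap(V) + uvv - (f + k) * V

    # crude ASCII rendering of the V field
    for row in V[::4, ::2]:
        print("".join(" .:*#"[max(0, min(4, int(c * 10)))] for c in row))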
More recently I've gotten into all sorts of debates on HN with people who like Searle. Often the argument goes "Turing is all wrong, he knows nothing about biology."
Turns out that towards the end of his life he was applying his knowledge to biology. And much of that work has since been experimentally verified, besides!
(P.S., just to be sure: ever wondered how DNA encodes the trick? You started out as a clump of cells, all the same. How did one part decide to become the tip of your nose, and another the tips of your toes? Segmentation controlled by Turing patterns all the way down!)
Not quite. A formal system is a system of syntactic rules defined over an alphabet of symbols. They can be mechanized in principle. Peano arithmetic is one example.
A "logical" semantics can be assigned to such a formal system, but it is not a necessary entailment of the syntax, even if such systems are typically motivated by particular semantic models. Model theory studies how the same formal system affords different interpretations.
Such syntactic systems have computational properties, and that is how computer science kicked off historically.
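To make "mechanized in principle" concrete, here's a toy sketch of mine (not from any particular textbook): Peano-style numerals where addition is purely syntactic string rewriting, with no appeal to what the symbols mean.

    # Peano numerals as plain strings: "0", "S(0)", "S(S(0))", ...
    # Addition is two purely syntactic rewrite rules:
    #   add(0, y)    -> y
    #   add(S(x), y) -> S(add(x, y))
    # The machine never needs to know these strings "mean" numbers.

    def add(x: str, y: str) -> str:
        if x == "0":
            return y
        assert x.startswith("S(") and x.endswith(")")
        return "S(" + add(x[2:-1], y) + ")"

    two, three = "S(S(0))", "S(S(S(0)))"
    print(add(two, three))  # S(S(S(S(S(0))))), i.e. 5 under the usual reading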
I’m involved in the development of the Functional Universe (FU) framework [0], and I see some interesting intersections with Wolfram’s ruliology.
Both start from the idea that simple rules / functions can generate complex structure. Where FU adds a twist is by making a sharp distinction between possibility and history. In FU, we separate aggregation (the space of all admissible transitions - superpositions, virtual processes, rule applications) from composition (the irreversible commitment of one transition that actually enters history).
You can think of ruliology as exploring the space of possible rule evolutions, while FU focuses on how one path gets selected and becomes real, advancing proper time and building causal structure. Rules generate possibilities; commitment creates facts.
So they're not the same thing, but I think they're complementary: ruliology studies the landscape of rules, FU studies the boundary where possibility turns into irreversible history.

[0] https://github.com/VoxleOne/FunctionalUniverse/blob/main/doc...
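A toy illustration of that distinction in Python; to be clear, this is purely my own sketch of the aggregation/composition split as I understand it, not code from the FU repo.

    import random

    # Purely illustrative: "aggregation" enumerates every admissible
    # transition; "composition" irreversibly commits exactly one of them.
    def aggregate(state, rules):
        """All admissible next states: the space of possibility."""
        return [rule(state) for rule in rules]

    def compose(state, rules, history):
        """Commit one transition; once appended, it is a fact, not a possibility."""
        chosen = random.choice(aggregate(state, rules))  # selection left abstract
        history.append(chosen)
        return chosen

    rules = [lambda s: s + 1, lambda s: s * 2]
    state, history = 1, []
    for _ in range(5):
        state = compose(state, rules, history)
    print(history)  # one concrete path out of 2**5 admissible ones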
I don't know, I've been involved in computer science for several decades now and cellular automata haven't really lost their charm. Seems like a cool thing to dedicate your life to!
His last 30 years can be summarized as: "Look at these pictures of cellular automata! I predict that the world can be described with cellular automata like these!"
Given that he’s known for priority disputes and legal action over who did what and when, that “well-accomplished” bit should have an asterisk next to it.
He is. But he's also convinced that cellular automata will replace the standard model as the foundation of physics, and that he will therefore be known to history as the third founder of physics after Newton and Einstein.
Always found this term sounded like a half-baked one. I get that going full Greek roots with "nomology" was a dead end due to prior art. But "regularology" was probably free, or even, at the time, "regulogy" or "regology", though by now those are attached to different notions.
Ruliology provides a powerful descriptive framework, a taxonomy of computational behavior. However, it operates at the level of external dynamics without grounding in a primitive ontology. It tells us how rules behave, not why they exist or what they fundamentally are.
This makes ruliology an invaluable cartography of the computational landscape, but not a foundation. It maps the territory without explaining what the territory is made of.
I am struggling to understand what is new here, other than the word "ruliad", which to me seems very similar to what we already have in theoretical computer science when we talk about languages, sentences, and grammars.
It's just Wolfram explaining how he likes studying things that can be described by simple rules, and how complexity can emerge in spite of (or because of?) the seeming simplicity of those rules. He came up with a word for it, and while I think "ruliology" sounds a bit silly, it does what it says on the tin.
For some reason he doesn't like doing mathematical proofs, so he shuns the practice and invented a new word to describe that way of using formal systems.
But exactly what is the problem here? Other than perhaps a very mechanical view of the universe (which he shares with many other authors) where it is hard to explain things like consciousness and other complex behaviors.
With Wolfram it is usually the grandstanding and taking credit for other people's work. Inventing new words for old things is part and parcel of that. He has a lot in common with Schmidhuber, both are arguably very smart people but the fact that other people can be just as smart doesn't seem to fit their worldview.
Wolfram has failed to live up to his promise of providing tools to make progress on fundamental questions of science.
From my understanding, there are two ideas that Wolfram has championed: Rule 110 is Turing machine equivalent (TME) and the principle of computational equivalence (PCE).
Rule 110 was shown to be TME by Cook (hired by Wolfram) [0] and was used by Wolfram as, in my opinion, empirical evidence to support the claim that Turing machine equivalence is the norm, not the exception (PCE).
At the time ANKOS was written, there was a popular idea that "complexity happens at the edge of chaos." PCE pushes back against that, effectively saying the opposite: that you need a conspiracy to prevent Turing machine equivalence. I don't want to overstate the idea but, in my opinion, PCE is important and provides some potentially deep insight.
But, as far as I can tell, it stops there. What results has Wolfram proved, or paid others to prove? What physical phenomena has Wolfram explained? Entanglement remains a mystery, the MOND vs. dark matter debate rages on, and others have made progress on busy beaver numbers, topology, Turing machine lower bounds, relations between run time and space, etc. The worlds of physics, computer science, mathematics, chemistry, biology, and most other fields continue on using classical and newly developed tools, independent of Wolfram, that have absolutely nothing to do with cellular automata.
Wolfram is building a "new kind of science" tool but has failed to provide any use cases where the tool actually helps advance science.

[0] https://en.wikipedia.org/wiki/Rule_110
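For anyone who hasn't played with it, Rule 110 fits in a few lines of Python. This is just the standard toy simulation; it is emphatically not Cook's universality proof, which works by encoding a cyclic tag system into these dynamics.

    # Elementary CA: each new cell depends on (left, center, right).
    # "Rule 110" means: bit n of the number 110 is the output for the
    # neighborhood whose bits spell n, i.e. left*4 + center*2 + right.
    RULE, WIDTH, STEPS = 110, 64, 32

    def step(cells):
        return [(RULE >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % WIDTH])) & 1
                for i in range(WIDTH)]

    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1  # single live cell
    for _ in range(STEPS):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)  # note: periodic (wraparound) boundary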
Sure, it's typical Wolfram, inviting the typical criticism. If you can understand what he's talking about at all then you won't be very convinced it's new. If you can't understand what he's talking about, then you also won't be interested in the puffery and priority dispute.
Yes, he frequently exhibits an ego the size of Jupiter. But he is very smart†, and he writes well, and this stuff that they're doing is at least interesting. I don't know if it's physics or metaphysics or something else entirely, and it may be just empty tail-chasing, but I reckon it's at least worth paying some attention to.
† and he's also built a long-term business making and selling extremely capable maths tooling, of all things, which I think is worth some respect
Fair enough. However, I feel that there are plenty of others we could give our finite attention to, from whom we would derive as much or more benefit. So that's what I'll do, with no net loss for me.
Yeah, I get the emotional pushback, but putting that aside, he still seems fairly well accomplished (more so than me by a long shot), and at least he is throwing nerdy ideas out there that we can think about or discuss.
The Wolfram Engine (essentially the Wolfram Language interpreter/execution environment) is free: https://www.wolfram.com/engine/. You can download it and run Wolfram code.
In my experience, Alpha works very hard to force you into a natural-language syntax that takes away much of the fun of the rule-based aspects of the Wolfram language.
Isn't this his personal blog? The domain name is "stephenwolfram.com", this is his personal website. Of course there will be "I"'s and "me"'s — this website is about him and what he does.
As for falsifiability:
> You have some particular kind of rule. And it looks as if it’s only going to behave in some particular way. But no, eventually you find a case where it does something completely different, and unexpected.
So I guess to falsify a theory about some rule you just have to run the rule long enough to see something the theory doesn't predict.
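That part, at least, is mechanical. A sketch of mine: run the rule, check the conjecture at every step, and report the first counterexample. The deliberately naive conjecture below, that the live-cell count under Rule 110 never shrinks, fails within a handful of steps.

    # Falsify a claim about a rule by brute-force simulation.
    RULE, WIDTH = 110, 64

    def step(cells):
        return [(RULE >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % WIDTH])) & 1
                for i in range(WIDTH)]

    def falsify(conjecture, cells, max_steps=1000):
        prev = cells
        for t in range(1, max_steps + 1):
            cells = step(cells)
            if not conjecture(prev, cells):
                return t       # first step where the "theory" breaks
            prev = cells
        return None            # survived the budget, which proves nothing

    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1
    never_shrinks = lambda a, b: sum(b) >= sum(a)
    print(falsify(never_shrinks, cells))  # a small integer, not None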
I think the comparison is unfair. Wolfram is endowed with a very generous sense of his own self worth, but, other than the victims of his litigation, I'm not aware that he's hurting anybody.
That kind of rule-based pattern matching does exist outside Lisp, though: Prolog, Erlang.
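To give a flavor of that style, here's a toy term rewriter of my own in Python. It loosely imitates Wolfram's `x_` patterns and ReplaceAll, and is nowhere near the real machinery in Wolfram, Prolog, or Erlang.

    # Terms are nested tuples like ("Plus", x, y); a rule pairs a pattern
    # (strings ending in "_" are named holes) with a builder function.

    def match(pattern, term, bindings):
        """Match term against pattern, extending bindings; None on failure."""
        if isinstance(pattern, str) and pattern.endswith("_"):
            name = pattern[:-1]
            if name in bindings:
                return bindings if bindings[name] == term else None
            return {**bindings, name: term}
        if isinstance(pattern, tuple) and isinstance(term, tuple) \
                and len(pattern) == len(term):
            for p, t in zip(pattern, term):
                bindings = match(p, t, bindings)
                if bindings is None:
                    return None
            return bindings
        return bindings if pattern == term else None

    def rewrite(term, rules):
        """One bottom-up pass: rewrite children, then try rules at this node."""
        if isinstance(term, tuple):
            term = tuple(rewrite(t, rules) for t in term)
        for pattern, build in rules:
            b = match(pattern, term, {})
            if b is not None:
                return build(b)
        return term

    # x + 0 -> x  and  x * 1 -> x
    rules = [(("Plus", "x_", 0), lambda b: b["x"]),
             (("Times", "x_", 1), lambda b: b["x"])]
    print(rewrite(("Plus", ("Times", "a", 1), 0), rules))  # -> 'a'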
https://fr.wikipedia.org/wiki/Turtles_All_the_Way_Down
https://en.wikipedia.org/wiki/Doctor_of_Philosophy
https://en.wikipedia.org/wiki/Wikipedia:Getting_to_Philosoph...
https://xefer.com/2011/05/wikipedia
https://snap.stanford.edu/class/cs224w-2013/projects2013/cs2...
https://youtu.be/kz7DfbOuvOM
https://youtu.be/wQbFkAkThGk
https://en.wikipedia.org/wiki/Reaction%E2%80%93diffusion_sys...
Homeobox genes, right?
https://en.wikipedia.org/wiki/Morphogenesis
Yes, https://en.wikipedia.org/wiki/The_Chemical_Basis_of_Morphoge...
But also:
https://en.wikipedia.org/wiki/Body_plan#Genetic_basis
https://en.wikipedia.org/wiki/Homeobox
https://en.wikipedia.org/wiki/Hox_gene
https://en.wikipedia.org/wiki/Gene_regulatory_network
https://en.wikipedia.org/wiki/Epigenetics
https://en.wikipedia.org/wiki/Cell_potency
https://en.wikipedia.org/wiki/Evo-devo
https://www.youtube.com/watch?v=ydqReeTV_vk
https://en.wikipedia.org/wiki/Formal_system
The study of formal systems is about the logical systems themselves.
Ruliology is the study of what actual systems do when you run them.
It's doing the arithmetic computations and looking at the results, not the abstract algebra.
In theory, theory and practice are the same. In practice, they are not.
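In that spirit, a minimal "run it and look" survey in Python (my own toy, not Wolfram's actual methodology): sweep all 256 elementary cellular automaton rules from the same seed and record one crude statistic about what each did.

    # Sweep the space of all 256 elementary CA rules and record a crude
    # behavioral statistic: live-cell density after a fixed number of steps.
    WIDTH, STEPS = 64, 100

    def run(rule):
        cells = [0] * WIDTH
        cells[WIDTH // 2] = 1
        for _ in range(STEPS):
            cells = [(rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % WIDTH])) & 1
                     for i in range(WIDTH)]
        return sum(cells) / WIDTH

    densities = {rule: run(rule) for rule in range(256)}
    dead = sum(1 for d in densities.values() if d == 0)
    print(dead, "of 256 rules died out from this seed")
    print("rule 30:", densities[30], "  rule 110:", densities[110])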
Isn't he well accomplished, and prolific throughout his life?
http://bactra.org/reviews/wolfram/ is still a classic.
https://en.wiktionary.org/wiki/regula#Latin
https://en.wikipedia.org/wiki/Nomology
https://www.ebi.ac.uk/ols4/ontologies/ro/properties/http%253...
https://www.ycombinator.com/companies/regology
https://en.wikipedia.org/wiki/Formal_system
https://en.wikipedia.org/wiki/Chomsky_hierarchy
But maybe it is more like fractals and emerging complex systems?
https://en.wikipedia.org/wiki/A_New_Kind_of_Science
The rest of his stuff tagged ruliology is more interesting though. Here's one connecting ML and cellular automata: https://writings.stephenwolfram.com/2024/08/whats-really-goi...
Respectfully, I think that is a mistake.
At least Wolfram's ego led him to contribute something interesting.
Wolfram Mathematica (the Jupyter Notebook-like development environment) is paid, but there are free and open source alternatives like https://github.com/WLJSTeam/wolfram-js-frontend.
> WLJS Notebook ... [is] A lightweight, cross-platform alternative to Mathematica, built using open-source tools and the free Wolfram Engine.
https://www.wolframalpha.com/
Didn't find anything on falsifiability criteria: any new theory should be able, at least in principle, to be tested and shown untrue.
You don't falsify tools and frameworks; you judge them by how useful they are.
Ruliology is a bit like that.
https://nedbatchelder.com/blog/200207/stephen_wolframs_unfor...