Nowadays I'd probably just ask Claude to figure it out for me, but pre LLMs, WL was the highest value tool for thought in my toolbox.
(Edit: and they actually offer perpetual licenses!)
Yes, this is all a bit quirky and nuanced, but when you get into it, these things are really good. It's refreshing to see some really smart folks just focused on doing great things, without pressure from VCs and MBAs pushing another hacky quick way to make a buck and cash out.
A typical example of an extraction/exploitation mentality where innovation would be better. Wolfram is in an amazingly good spot to spin up a better "simulation as a service" if they would look at fine-tuning LLMs for compiling natural language (or academic papers) into Mathematica semi-autonomously and very reliably. MathWorld is potentially a huge asset for that sort of thing too.
The reality is that by now we should already be at a level where common programming everywhere looks more like Wolfram.
Maybe agents and LLM-driven code generation are how we eventually get to the next abstraction level; sadly, it won't happen without casualties in the form of smaller team sizes, when so much can be automated away.
I played around with RemoteKernel some time ago (https://taoofmac.com/space/blog/2016/08/10/0830) but this is “better”, although I wish they’d make it hostable in your own cloud provider like materials simulation software and other things we see running in HPC clusters. (I also ran Mathematica in a 512GB/128core VM once for kicks, but it’s just not cost-effective).
I started working on an implementation in Rust called Woxi (https://github.com/ad-si/Woxi) and I hope to find some contributors, as it is such a gargantuan task!
If this were open sourced, it would have the potential to profoundly change the software/IT industry. As expensive proprietary software, however, it is doomed to stay a niche product mainly for academia.
Incidentally, Mathematica + LLMs make a great combination. If you take what is pretty much the biggest mathematical routine library in the world and combine it with interactive visualization tools, and then use an LLM to accelerate things, it becomes an incredible tool. Almost ridiculously powerful for trying things out, teaching, visualizing things, etc.
(I've been using Mathematica since 1992 or so, so I'm familiar with the language, but it's still so much faster to just tell Claude to visualize this or that)
Mathematica seems a little pricey but maybe it would motivate me to learn more math.
I would love to read what non-mathematicians use MatLab, Mathematica, and Maple for.
Mathematica is way, way underappreciated in industry, and even in the sciences.
One would expect 37 years would be enough to create such an alternative.
Jupyter notebooks aren't the same.
What aspects of the Wolfram language should be everywhere? The easy access to lots of datasets? The easy access to lots of mathematical functions? CAS in general?
Jokes and sales pitches aside, we kinda have that already: we have platforms that allow us to run the same code on x86, ARM, Wasm, and so on. It's just that there is no consensus on which platform to use. Nor should there be, since that would slow the progress of new and better ways to program.
We will never have one language to span graphics, full stack, embedded, scientific, high performance, etc without excessive bloat.
I do notice that they have an "Application Server" for Kubernetes, which is pretty curious: https://github.com/WolframResearch/WAS-Kubernetes (though not updated in over a year)
As an engineering undergrad I had a similar feeling about Matlab & Mathematica.
Matlab especially had 'tool boxes' that you bought as add-ons to do specific engineering calcs and simulations and it was great, but I almost always found myself recreating things in python just because it felt slightly more sane to handle data and glue code.
Pandas and Matplotlib and SciPy all used via an ipython notebook were almost a replacement.
It's $195/year for a personal license. And only $75/year for students. Their licensing model is pretty broad.
Matlab and Python are in the same ballpark. Easy syntax and large standard library. Matlab provides a lot more dedicated libraries for niche areas but the overall experience feels the same.
Mathematica doesn't really have a standard counterpart. Jupyter notebooks try to capture the experience but the support for symbolic expressions makes the Mathematica experience very different.
Basically the ideas of Smalltalk and Lisp Machine variations, that are still only partially available in modern IDEs, and proudly ignored by the "vt100 rules and vi first" minded devs.
Good for the product; not so good for people.
I am told that he gave a great deal of agency to people he trusted, though.
In my career, I ran into two [brilliant] individuals that had, at one time, worked with Jobs.
They both hated him.
That said, the parent was talking about it being expensive for use in industry. Personal and student licenses aren't relevant there.
Plus you buy a version of it, and then someone else is on another version, and you don't have the same features, and the tiny community is fragmented.
I’m using xcas now, it’s working pretty well for my humble needs.
But it seems like the proprietary languages have all withered, regardless of price. Even $195 for Mathematica is an obvious concession to this trend. I don't ever remember it being that cheap.
I could write an essay on the benefits of free tooling, but enough has already been written. I'll spare you the slop. ;-)
In retrospect, doing the work in mathematica would have probably stretched my brain more (in a good way!) since it provides a different and powerful way of solving problems vs other languages...maybe I'll have to revisit it. Perhaps even try advent of code with it?
While python did get the job done, it feels like the ceiling (especially for power users) is so much higher in mathematica.
(Mathematica is of course much better than Python at symbolic math, but this isn't what you are asking about)
Reminds me of the “Stop writing Dead Programs” talk. https://news.ycombinator.com/item?id=33270235
Ugly open-source software at least has the potential to grow internally. Long-lived commercial software is a rotting carcass with a fresh coat of paint every now and then.
I don't remember what the pricing has been throughout the years. But I do remember that for some of the time I couldn't really afford Mathematica. And the license I wanted was also a bit too expensive to justify for a piece of software that only I would be using within an organization.
Because it is also about enough other people around you not being able to justify the expense. And about companies not wanting to pay a lot of money for licenses so they can lock their computations into an ecosystem that is very small.
Mathematica is, in the computing world, pretty irrelevant. And I'm being generous when I say "pretty": I have never encountered it in any job or even in academia. People know of it. They just don't use it for work.
It would have been nice if the language and the runtime had been open source. But Wolfram didn't want to go in that direction. That's a perfectly fine choice to make. But it does mean that as a language, Mathematica will never be important. Nor will knowing how to program in it be a marketable skill.
(To Stephen Wolfram it really doesn't matter. He obviously makes a good living. I'm not sure I'd bother with the noise and stress coming from open sourcing something)
The notebooks are also difficult to version control (unreadable diffs for minor changes), and unit testing is clearly just an afterthought. The GUI performance is also bad: put more than a handful of plots on a page, and everything slows to a crawl. What keeps me coming back is the comprehensive function library, and the formula inputs. I find it quite difficult to spot mistakes in mathematical expressions written in Python syntax.
Maybe it will someday be good enough, but not today, and probably not for at least 5 years.
Same about your criticism of error handling and control flow: https://reference.wolfram.com/language/guide/RobustnessAndEr...

To immediately enable Wolfram Compute Services in Version 14.3 Wolfram Desktop systems, run RemoteBatchSubmissionEnvironment["WolframBatch"]. (The functionality is automatically available in the Wolfram Cloud.)
Let’s say you’ve done a computation in Wolfram Language. And now you want to scale it up. Maybe 1000x or more. Well, today we’ve released an extremely streamlined way to do that. Just wrap the scaled up computation in RemoteBatchSubmit and off it’ll go to our new Wolfram Compute Services system. Then—in a minute, an hour, a day, or whatever—it’ll let you know it’s finished, and you can get its results.
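As a minimal sketch of that workflow (the computation and the property query here are illustrative, based on the description in this post, not taken from it):

```wolfram
(* wrap the computation to scale up; RemoteBatchSubmit returns a job
   object immediately rather than blocking the session *)
job = RemoteBatchSubmit[
   Table[NIntegrate[Sin[x]^n/x, {x, 1, 100}], {n, 1, 1000}]];

(* later, after the "finished" notification, pull in the result *)
job["EvaluationResult"]
```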
For decades I’ve often needed to do big, crunchy calculations (usually for science). With large volumes of data, millions of cases, rampant computational irreducibility, etc. I probably have more compute lying around my house than most people—these days about 200 cores worth. But many nights I’ll leave all of that compute running, all night—and I still want much more. Well, as of today, there’s an easy solution—for everyone: just seamlessly send your computation off to Wolfram Compute Services to be done, at basically any scale.
For nearly 20 years we’ve had built-in functions like ParallelMap and ParallelTable in Wolfram Language that make it immediate to parallelize subcomputations. But for this to really let you scale up, you have to have the compute. Which now—thanks to our new Wolfram Compute Services—everyone can immediately get.
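For reference, the parallel primitives mentioned here are used like this (the particular computations are my own illustrations):

```wolfram
(* farm independent subcomputations out to the local parallel kernels *)
ParallelTable[PrimeQ[2^n - 1], {n, 1, 2000}]
ParallelMap[FactorInteger, RandomInteger[{10^12, 10^13}, 100]]
```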
The underlying tools that make Wolfram Compute Services possible have existed in the Wolfram Language for several years. But what Wolfram Compute Services now does is to pull everything together to provide an extremely streamlined all-in-one experience. For example, let’s say you’re working in a notebook and building up a computation. And finally you give the input that you want to scale up. Typically that input will have lots of dependencies on earlier parts of your computation. But you don’t have to worry about any of that. Just take the input you want to scale up, and feed it to RemoteBatchSubmit. Wolfram Compute Services will automatically take care of all the dependencies, etc.
And another thing: RemoteBatchSubmit, like every function in Wolfram Language, is dealing with symbolic expressions, which can represent anything—from numerical tables to images to graphs to user interfaces to videos, etc. So that means that the results you get can immediately be used, say in your Wolfram Notebook, without any importing, etc.
OK, so what kinds of machines can you run on? Well, Wolfram Compute Services gives you a bunch of options, suitable for different computations, and different budgets. There’s the most basic 1 core, 8 GB option—which you can use to just “get a computation off your own machine”. You can pick a machine with larger memory—currently up to about 1500 GB. Or you can pick a machine with more cores—currently up to 192. But if you’re looking for even larger scale parallelism Wolfram Compute Services can deal with that too. Because RemoteBatchMapSubmit can map a function across any number of elements, running on any number of cores, across multiple machines.
OK, so here’s a very simple example—that happens to come from some science I did a little while ago. Define a function PentagonTiling that randomly adds nonoverlapping pentagons to a cluster:

For 20 pentagons I can run this quickly on my machine:

But what about for 500 pentagons? Well, the computational geometry gets difficult and it would take long enough that I wouldn’t want to tie up my own machine doing it. But now there’s another option: use Wolfram Compute Services!
And all I have to do is feed my computation to RemoteBatchSubmit:

Immediately, a job is created (with all necessary dependencies automatically handled). And the job is queued for execution. And then, a couple of minutes later, I get an email:

Not knowing how long it’s going to take, I go off and do something else. But a while later, I’m curious to check how my job is doing. So I click the link in the email and it takes me to a dashboard—and I can see that my job is successfully running:

I go off and do other things. Then, suddenly, I get an email:

It finished! And in the mail is a preview of the result. To get the result as an expression in a Wolfram Language session I just evaluate a line from the email:

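The line in the email presumably reconstructs the job object and asks for its result; the constructor form and the identifier below are my assumptions, not the actual contents of the email:

```wolfram
(* rebuild the job object from its identifier and fetch the finished
   result as an ordinary Wolfram Language expression *)
RemoteBatchJobObject["hypothetical-job-uuid"]["EvaluationResult"]
```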
And this is now a computable object that I can work with, say computing areas

or counting holes:

One of the great strengths of Wolfram Compute Services is that it makes it easy to use large-scale parallelism. You want to run your computation in parallel on hundreds of cores? Well, just use Wolfram Compute Services!
Here’s an example that came up in some recent work of mine. I’m searching for a cellular automaton rule that generates a pattern with a “lifetime” of exactly 100 steps. Here I’m testing 10,000 random rules—which takes a couple of seconds, and doesn’t find anything:

To test 100,000 rules I can use ParallelSelect and run in parallel, say across the 16 cores in my laptop:

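A hedged sketch of what that parallel test might look like; the predicate, the rule space, and the lifetime criterion below are my own guesses at the setup, not the post's actual code:

```wolfram
(* hypothetical test: does a 2-color, range-2 rule, grown from a single
   cell, die out at exactly step 100? (row 1 is the initial condition,
   so the first all-zero row for lifetime 100 is row 101) *)
lifetime100Q[rule_Integer] := Module[{evol},
   evol = CellularAutomaton[{rule, 2, 2}, {{1}, 0}, 150];
   FirstPosition[Total /@ evol, 0] === {101}];

(* test 100,000 random rules across the local parallel kernels *)
ParallelSelect[RandomInteger[2^32, 100000], lifetime100Q]
```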
Still nothing. OK, so what about testing 100 million rules? Well, then it’s time for Wolfram Compute Services. The simplest thing to do is just to submit a job requesting a machine with lots of cores (here 192, the maximum currently offered):

A few minutes later I get mail telling me the job is starting. After a while I check on my job and it’s still running:

I go off and do other things. Then, after a couple of hours I get mail telling me my job is finished. And there’s a preview in the email that shows, yes, it found some things:

I get the result:

And here they are—rules plucked from the hundred million tests we did in the computational universe:

But what if we wanted to get this result in less than a couple of hours? Well, then we’d need even more parallelism. And, actually, Wolfram Compute Services lets us get that too—using RemoteBatchMapSubmit. You can think of RemoteBatchMapSubmit as a souped up analog of ParallelMap—mapping a function across a list of any length, splitting up the necessary computations across cores that can be on different machines, and handling the data and communications involved in a scalable way.
Because RemoteBatchMapSubmit is a “pure Map” we have to rearrange our computation a little—making it run 100,000 cases of selecting from 1000 random instances:

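In code, the rearrangement presumably looks something like this, where lifetime100Q stands for a hypothetical per-rule lifetime test (none of this is the post's actual code):

```wolfram
(* map a 1000-rule search over 100,000 cases; the system splits the
   cases into child jobs across however many cores are needed *)
job = RemoteBatchMapSubmit[
   Function[case, Select[RandomInteger[2^32, 1000], lifetime100Q]],
   Range[100000]];
```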
The system decided to distribute my 100,000 cases across 316 separate “child jobs”, here each running on its own core. How is the job doing? I can get a dynamic visualization of what’s happening:

And it doesn’t take many minutes before I’m getting mail that the job is finished:

And, yes, even though I only had to wait for 3 minutes to get this result, the total amount of computer time used—across all the cores—is about 8 hours.
Now I can retrieve all the results, using Catenate to combine all the separate pieces I generated:

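Presumably along these lines, with job being the object returned by RemoteBatchMapSubmit:

```wolfram
(* each child job returns its own list of hits; Catenate flattens
   them into a single list of rule numbers *)
Catenate[job["EvaluationResult"]]
```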
And, yes, if I wanted to spend a little more, I could run a bigger search, increasing the 100,000 to a larger number; RemoteBatchMapSubmit and Wolfram Compute Services would seamlessly scale up.
Like everything around Wolfram Language, Wolfram Compute Services is fully programmable. When you submit a job, there are lots of options you can set. We already saw the option RemoteMachineClass which lets you choose the type of machine to use. Currently the choices range from "Basic1x8" (1 core, 8 GB) through "Basic4x16" (4 cores, 16 GB) to “parallel compute” "Compute192x384" (192 cores, 384 GB) and “large memory” "Memory192x1536" (192 cores, 1536 GB).
Different classes of machine cost different numbers of credits to run. And to make sure things don’t go out of control, you can set the options TimeConstraint (maximum time in seconds) and CreditConstraint (maximum number of credits to use).
Then there’s notification. The default is to send one email when the job is starting, and one when it’s finished. There’s an option RemoteJobName that lets you give a name to each job, so you can more easily tell which job a particular piece of email is about, or where the job is on the web dashboard. (If you don’t give a name to a job, it’ll be referred to by the UUID it’s been assigned.)
The option RemoteJobNotifications lets you say what notifications you want, and how you want to receive them. There can be notifications whenever the status of a job changes, or at specific time intervals, or when specific numbers of credits have been used. You can get notifications either by email, or by text message. And, yes, if you get notified that your job is going to run out of credits, you can always go to the Wolfram Account portal to top up your credits.
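Putting the options from the last few paragraphs together (the computation and the option values are illustrative; I've left out RemoteJobNotifications, whose exact settings aren't spelled out here):

```wolfram
(* bigSearch[] is a hypothetical placeholder for the computation *)
RemoteBatchSubmit[bigSearch[],
  RemoteMachineClass -> "Compute192x384",  (* 192 cores, 384 GB *)
  TimeConstraint -> 7200,                  (* give up after 2 hours *)
  CreditConstraint -> 500,                 (* spend at most 500 credits *)
  RemoteJobName -> "rule-search"]
```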
There are many properties of jobs that you can query. A central one is "EvaluationResult". But, for example, "EvaluationData" gives you a whole association of related information:

If your job succeeds, it’s pretty likely "EvaluationResult" will be all you need. But if something goes wrong, you can easily drill down to study the details of what happened with the job, for example by looking at "JobLogTabular".
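So, assuming job is the object returned by RemoteBatchSubmit, the property queries described above look like:

```wolfram
job["EvaluationResult"]  (* the computed expression itself *)
job["EvaluationData"]    (* association of result plus related information *)
job["JobLogTabular"]     (* the job log, useful when something went wrong *)
```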
If you want to know all the jobs you’ve initiated, you can always look at the web dashboard, but you can also get symbolic representations of the jobs from:

For any of these job objects, you can ask for properties, and you can for example also apply RemoteBatchJobAbort to abort them.
Once a job has completed, its result will be stored in Wolfram Compute Services—but only for a limited time (currently two weeks). Of course, once you’ve got the result, it’s very easy to store it permanently, for example, by putting it into the Wolfram Cloud using CloudPut[expr]. (If you know you’re going to want to store the result permanently, you can also do the CloudPut right inside your RemoteBatchSubmit.)
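As a sketch, with a hypothetical computation and cloud object name:

```wolfram
(* make permanent storage part of the job itself: the batch job runs
   bigSearch[] and writes the result straight into the Wolfram Cloud *)
RemoteBatchSubmit[CloudPut[bigSearch[], "rule-search-results"]]
```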
Talking about programmatic uses of Wolfram Compute Services, here’s another example: let’s say you want to generate a compute-intensive report once a week. Well, then you can put together several very high-level Wolfram Language functions to deploy a scheduled task that will run in the Wolfram Cloud to initiate jobs for Wolfram Compute Services:

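A hedged sketch of how those pieces might compose (generateReport and the object names are hypothetical):

```wolfram
(* deploy a weekly task to the Wolfram Cloud; each run submits a
   batch job that builds the report and stores it in the cloud *)
CloudDeploy[
  ScheduledTask[
    RemoteBatchSubmit[CloudPut[generateReport[], "weekly-report"]],
    "Weekly"],
  "weekly-report-task"]
```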
And, yes, you can initiate a Wolfram Compute Services job from any Wolfram Language system, whether on the desktop or in the cloud.
Wolfram Compute Services is going to be very useful to many people. But actually it’s just part of a much larger constellation of capabilities aimed at broadening the ways Wolfram Language can be used.
Mathematica and the Wolfram Language started—back in 1988—as desktop systems. But even at the very beginning, there was a capability to run the notebook front end on one machine, and then have a “remote kernel” on another machine. (In those days we supported, among other things, communication via phone line!) In 2008 we introduced built-in parallel computation capabilities like ParallelMap and ParallelTable. Then in 2014 we introduced the Wolfram Cloud—both replicating the core functionality of Wolfram Notebooks on the web, and providing services such as instant APIs and scheduled tasks. Soon thereafter, we introduced the Enterprise Private Cloud—a private version of Wolfram Cloud. In 2021 we introduced Wolfram Application Server to deliver high-performance APIs (and it’s what we now use, for example, for Wolfram|Alpha). Along the way, in 2019, we introduced Wolfram Engine as a streamlined server and command-line deployment of Wolfram Language. Around Wolfram Engine we built WSTPServer to serve Wolfram Engine capabilities on local networks, and we introduced WolframScript to provide a deployment-agnostic way to run command-line-style Wolfram Language code. In 2020 we then introduced the first version of RemoteBatchSubmit, to be used with cloud services such as AWS and Azure. But unlike with Wolfram Compute Services, this required “do it yourself” provisioning and licensing with the cloud services. And, finally, now, that’s what we’ve automated in Wolfram Compute Services.
OK, so what’s next? An important direction is the forthcoming Wolfram HPCKit—for organizations with their own large-scale compute facilities to set up their own back ends to RemoteBatchSubmit, etc. RemoteBatchSubmit is built in a very general way that allows different “batch computation providers” to be plugged in. Wolfram Compute Services is initially set up to support just one standard batch computation provider: "WolframBatch". HPCKit will allow organizations to configure their own compute facilities (often with our help) to serve as batch computation providers, extending the streamlined experience of Wolfram Compute Services to on-premise or organizational compute facilities, and automating what is often a rather fiddly process of job submission (which, I must say, personally reminds me a lot of the mainframe job control systems I used in the 1970s).
Wolfram Compute Services is currently set up purely as a batch computation environment. But within the Wolfram System, we have the capability to support synchronous remote computation, and we’re planning to extend Wolfram Compute Services to offer this—allowing one, for example, to seamlessly run a remote kernel on a large or exotic remote machine.
But this is for the future. Today we’re launching the first version of Wolfram Compute Services. Which makes “supercomputer power” immediately available for any Wolfram Language computation. I think it’s going to be very useful to a broad range of users of Wolfram Language. I know I’m going to be using it a lot.
Not everyone is keen on doing scripting from the command line with vi.
Someone has to pay the bills for the development effort, and when it is based on volunteer work, it is mostly followers and not innovators.
To my knowledge, at least in academia, Wolfram (Mathematica) seems to be used quite a bit by physicists. It is also used in some areas of mathematics (though many mathematicians seem to prefer Maple). Concerning mathematical research, I want to mention that by now some open-source (and often more specialized) CASs also seem to have become more widespread, such as SageMath, SymPy, Macaulay2, PARI/GP, or GAP.
Different languages are better at different things, so it rarely makes much sense to say that one language is better than another in general. Python is definitely much better than Mathematica for "typical" imperative programming tasks (web servers, CLI programs, CRUD apps, etc.), but Mathematica is much better at data processing, symbolic manipulation, drawing plots, and other similar tasks.
> there is no real scoping (even different notebooks share all variables, Module[] is incredibly clumsy)
Scoping is indeed an absolute mess, and the thing that I personally find the most irritating about the language.
> no real control flow (If[] is just a function)
You're meant to program Mathematica by using patterns and operating on lists as a whole, so you should rarely need to use branching/control flow/If[]. It's a very different style of programming that takes quite a while to get used to, but it works really well for some tasks.
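A small illustration of that style (example mine): instead of branching inside one body, you write separate pattern-based definitions and let the matcher dispatch:

```wolfram
(* one Collatz-style step, written as pattern-dispatched cases
   rather than with If[] *)
step[0] := "stopped"
step[n_?EvenQ] := n/2
step[n_?OddQ] := 3 n + 1

step /@ {0, 10, 7}   (* {"stopped", 5, 22} *)
```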
> no real error handling
For input validation, you should use the pattern features to make it impossible to even call the function with invalid input. And for errors in computation, it often makes the most sense to return "Undefined", "Null", "Infinity", or something similar, and then propagate that through the rest of the expression.
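For example (my own illustration):

```wolfram
(* the definition only matches positive input; anything else simply
   returns unevaluated instead of raising an error *)
invSqrt[x_?Positive] := 1/Sqrt[x]

invSqrt[4]    (* 1/2 *)
invSqrt[-4]   (* stays as invSqrt[-4], unevaluated *)
```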
> The notebooks are also difficult to version control (unreadable diffs for minor changes)
Mathematica notebooks tend to do slightly better with version control than Jupyter Notebooks, although they're both terrible. You can work around this with Git clean/smudge filters, or you can just use ".wls"/".py" files directly.
Some of those tools aren't yet fully there, but they also aren't completely dumb; they get more done in a day than trying to build the same workflows with classical programming.
Workato, Boomi, Powerapps, Opal,...
So as great as Mathematica sounds for interactive math and science computations, sounds like a poor tool for building systems that will be deployed and used by many people.
Yes, I definitely agree there. Mathematica is definitely great for interactive use, but I'm not really aware of anyone aside from Wolfram himself who tries to deploy it at scale.
Yup, I use a long "jq" command [0] as a Git clean filter for my Jupyter notebooks, and it works really well. I use a similar program [1] for Mathematica notebooks, and it also works really well.
[0]: https://stackoverflow.com/a/74104693
[1]: https://github.com/JP-Ellis/mathematica-notebook-filter
In many cases, people are free to write their own implementation. Your claim "Source code should enter public domain in a decade at most." means that every software vendor would be obliged after some time to hand out their source code, which is a very strong thing to ask for.
The true crime is the laws that in some cases make such an own implementation illegal (software patents, prohibitions on reverse engineering, ...).
Obviously. Since software is as vital to the modern world as water, making people who deal with it disclose implementation details is a very small ask.
Access to the market is not a right but a privilege. If you want to sell things we can demand things of you.
I can't tell if you're saying that as if it's a good thing, or a bad thing.
https://gitlab.freedesktop.org/xkeyboard-config/xkeyboard-co...
Now, I really could've used something like this on macOS…
Karabiner to the rescue https://genesy.github.io/karabiner-complex-rules-generator/#...
The analogy would be ever-so-slightly more accurate if you said "software is as vital to the modern world as beverages".
It would also be more accurate if all water was free.
Neither of which is the case.
Infringing on that should be justified in terms of protecting the rights of those involved, such as ensuring the quality of goods, enforcement of reasonable contract terms and such. We are involved in the process as participants in the market, and that’s the basis of any legitimacy we have to impose any rules in the market. That includes an obligation to fair treatment of other participants.
If someone writes notes, procedures, a diary, software etc for their own use they are under no obligation to publish it, ever. That’s basic privacy protection. Whether an executable was written from scratch in an assembler or is compiled from high level source code isn’t anyone else’s business. It should meet quality standards for commercial transactions and that’s it. There’s no more obligation to publish source than there is to publish design documents, early versions, or unpublished material. That would be an overreaching invasion of privacy.
People shouldn’t lose their rights to what they own, just because they do so through a company.
I do think reasonable taxation and regulation is justifiable but on the understanding that it is an imposition. There is a give and take when it comes to rights and obligations, but this seems like overreach.