The stock market crashes once in a while. Shit happens. The long-term outlook is unlikely to change nearly as much, unless you think there will be systemic macroeconomic changes.
Example of LLMs doing well in similar tasks: https://arxiv.org/abs/2602.16800
The underlying paper itself is more precise: it compares against LUAR, a 2021 method based on BERT-style embeddings (a model with 82M parameters, roughly 0.2% the size of, e.g., the recent open-source Gemma models). I don't fault the paper's authors for this at all; their method is interesting and more interpretable! But if you check the publication history, the paper was originally uploaded in 2024: https://arxiv.org/abs/2403.08462
A good example of why some folks are bearish on journals.
"AI bad" seems to sell in some circles, and while there are many level-headed criticisms to be made of current AI fads, I don't think this qualifies.
https://www.nature.com/articles/s41599-025-06340-3/figures/2
What applications do you think make the most sense so far?
Because LLMs have already amortized the man-years of collecting, curating, and training on text corpora?
TL;DR, probably never.
15 April 2026 | 09:55 Europe/London
“There’s a growing assumption that you need complex AI to solve problems like authorship analysis, but our findings show that isn’t necessarily the case. By grounding our approach in the science of how language actually works, we can achieve results that are just as good — and often better — while being more transparent.”

Dr Andrea Nini