I wanted a way to make submitting Inventory Changes at work easier, so I took the PDF, used StirlingPDF to convert it to an HTML-bundled zip, converted the .png with the form border and symbols to base64, and then wrote a PowerShell script to replace the <p> tags with variable data from a CSV export of our inventory data. (I tried using ODBC to extract it directly, but once a dev showed me the logical and physical tables and views behind our inventory lookup, I went back to the xlsx export built into our environment and let the ps1 trim and sanitize the input.) Since the conversion places text with absolute positioning, I was able to fine-tune the layout and spacing. I then used my local AI, qwen3.6-27b, to convert the ps1 into a single-file HTML webapp with plain HTML/CSS/JS, no external framework; two JS scripts are loaded via CDN for now.
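The core of that templating step can be sketched in a few lines. This is a hedged illustration, not the actual ps1: the `{Field}` placeholder convention, the column names, and the `fill_template` helper are all made up for the example.

```python
import re

def fill_template(html: str, row: dict) -> str:
    """Replace <p> placeholder tags like <p>{ItemNumber}</p> with CSV values.

    The placeholder syntax and field names are hypothetical; the original
    script matches whatever markup the PDF-to-HTML conversion produced.
    """
    def sub(match):
        key = match.group(1)
        # Trim and sanitize the raw CSV value before it lands in HTML
        value = str(row.get(key, "")).strip()
        value = value.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")
        return f"<p>{value}</p>"
    return re.sub(r"<p>\{(\w+)\}</p>", sub, html)

# One row as it might come out of csv.DictReader on the xlsx/CSV export
row = {"ItemNumber": "A-1001", "Description": '3/4" hex bolt <steel>'}
template = "<p>{ItemNumber}</p><p>{Description}</p>"
print(fill_template(template, row))
```

Escaping the values on the way in matters because the converted HTML positions text absolutely, so a stray `<` in a description would otherwise break the layout.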
Inspired by how well that worked, I vibe-coded a drag-and-drop editor to build forms for other processes: I upload a PNG, which gets converted to base64, and then I can drag and drop text elements to where they need to be and export.
I know how many people feel about AI-coded projects, so these are really only for me. I didn't expect my coworkers to adopt them or anything, but they did.
There's even a package (cmarker) that can translate Markdown to Typst, which could be enough for an MVP.
We have responsive and open standards like HTML and EPUB (zipped XHTML) and they work great. arXiv has HTML papers, and libgen and Anna's Archive often have EPUB versions of books. The issue for me with EPUB is the lack of good readers now.
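The "zipped XHTML" point is easy to see for yourself. Here's a minimal sketch that builds an EPUB-shaped archive with Python's stdlib; a real EPUB also needs `META-INF/container.xml` and an OPF package file, so this is just the container structure, not a valid publication.

```python
import zipfile

# Build a minimal EPUB-like archive to show the "zipped XHTML" structure.
with zipfile.ZipFile("mini.epub", "w") as z:
    # Per the EPUB container spec, "mimetype" should be the first entry,
    # stored uncompressed (ZipInfo defaults to ZIP_STORED).
    z.writestr(zipfile.ZipInfo("mimetype"), "application/epub+zip")
    z.writestr(
        "OEBPS/chapter1.xhtml",
        "<html xmlns='http://www.w3.org/1999/xhtml'>"
        "<body><p>Hello</p></body></html>",
    )

# Any zip tool (or a text editor on the extracted files) can read it back.
with zipfile.ZipFile("mini.epub") as z:
    print(z.namelist())
    print(z.read("mimetype").decode())
```

Which is exactly why "unzip it and edit the XHTML" works as a last-resort EPUB editor.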
Sure, I would like that beautifully designed page to magically become a beautiful single-column document on my phone, but I will take the former over a badly designed text extract where the relevant figure is 10 pages away.
EPUB (= HTML) is good for novels, but there is nothing replacing PDF for science papers. If anything, the LaTeX (or ideally Typst) source would come closest, if properly written (no absolute offsets). That could be used to produce versions for different page sizes.
I'm months into building a pasteboard transform library that normalises the provider-specific data from VS Code, Google Docs, PDFs and a bunch of Chromium apps so I can start pasting everything everywhere exactly how I want it. It's much, much messier than I expected.
Apps put different UTTypes on the pasteboard that are not really compatible with each other. Usually there's a plain text fallback, then rich text/HTML, then provider-specific data. You show how much insane work is needed just to make text selectable with glyph mappings, layout, links, code blocks, rendered styles, etc. But once you copy from that PDF, most viewers still only expose raw text, and often broken raw text at that...
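That fallback cascade can be modelled as a simple preference list. A toy sketch, with real-looking UTType strings but an invented preference order and a hypothetical provider type; this is not an actual macOS API:

```python
# Sketch of picking the richest usable pasteboard representation.
# "com.example.provider-data" is a made-up provider-specific type;
# the public.* identifiers mirror real macOS UTTypes.
PREFERENCE = [
    "com.example.provider-data",   # provider-specific blob (hypothetical)
    "public.rtf",                  # rich text
    "public.html",                 # HTML
    "public.utf8-plain-text",      # plain-text fallback
]

def pick_representation(available: dict) -> tuple:
    """Return (uttype, payload) for the richest type we know how to handle."""
    for uttype in PREFERENCE:
        if uttype in available:
            return uttype, available[uttype]
    raise ValueError("no usable representation on the pasteboard")

pasteboard = {
    "public.utf8-plain-text": "hello",
    "public.html": "<b>hello</b>",
}
print(pick_representation(pasteboard))  # no richer type present, so HTML wins
```

The messy part in practice is that the "richest" representation from one app is often the least compatible one in another, so a fixed preference list like this is exactly what ends up needing per-provider overrides.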
I haven't had a need to use annotations. I guess that could be solved by EPUB editors, but I haven't tested any, apart from plain text editors after unzipping the EPUB.
For justified text - what's the point of stretching each line artificially just so they align at the end? It looks awful to me even when done "correctly". Uneven spaces make it harder to read, and every line aligning on the right also makes it harder to read. With ragged lines, I subconsciously use the difference at the end as an anchor for where I am in the text or where a certain phrase was. Hyphenating words is another thing that doesn't make much sense nowadays - we have enough words with a hyphen naturally in them, so reading a broken-up word is mentally taxing, as I have to figure out whether it's a normal word with a hyphen or a broken-up one.
All the arXiv HTML papers are much better to read in the browser, IMO. And they'll only get better. PDF will likely stay the same.
For small screens like phones or tablets, having to constantly scroll up and down and left and right for a 2-column paper is just painful. PDF is much better on a big screen.
It’s one of the few examples where converting it into a picture and chucking it into a multimodal LLM is a more sensible solution than trying to parse it.
Purely psychologically, I think there’s something that feels more "secure" or long-lasting about PDF’s perceived quasi-immutability compared to formats designed to be edited.
In my experience it's the NON-software engineers who tend to underestimate the complexity.
Except the PDF is not responsive at all, and you can't increase or decrease the font size without scaling the width of the whole page.
> Some vendors have switched to online-only for some documents and it always annoys me.
HTML shouldn't mean online-only. If the vendor isn't trying to make it hard to download, you should always be able to convert to PDF. But PDF to HTML is very hard or impossible.
So macOS does not really give you a clean "this app copied this semantic object" API. Clipboard-history apps generally poll NSPasteboard.changeCount, which already makes provenance fuzzy, since you can observe that the pasteboard changed, but not reliably know the source app.
Pasting is fuzzy too. You know what representations were available, but not what the destination app actually accepted, because that decision happens inside the app and is generally opaque to the OS. So what even is history? Is it the raw object, the fallback text, the richest representation, the thing you intended to paste, or the thing the target app consumed? And even if you define history as "the observed events", polling can also miss states. And once you add transforms (like I want to), you basically have to define your own history model. A coherent OS clipboard-history API will probably never happen without big effort and liability-policy changes from providers.
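Here's a toy model of why polling misses states. `FakePasteboard` and the event shape are invented stand-ins; the real mechanism is `NSPasteboard`'s `changeCount`, which only tells you that something changed, not who wrote it or what was later consumed.

```python
class FakePasteboard:
    """Stand-in for NSPasteboard: exposes only a change counter and contents."""
    def __init__(self):
        self.change_count = 0
        self.contents = None
    def write(self, data):
        self.contents = data
        self.change_count += 1

def poll(pasteboard, last_seen: int, history: list) -> int:
    """One polling tick: record a new event if change_count moved."""
    if pasteboard.change_count != last_seen:
        # Provenance is fuzzy: we log the observed state, not the source app.
        history.append({"count": pasteboard.change_count,
                        "data": pasteboard.contents})
    return pasteboard.change_count

pb, history, seen = FakePasteboard(), [], 0
pb.write("first copy")
pb.write("second copy")   # both writes land between polls...
seen = poll(pb, seen, history)
print(history)            # ...so only the latest state is ever observed
```

Two copies happened, one event was recorded: the "history" is already a lossy reconstruction before transforms even enter the picture.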
I've seen rendering differences on different readers over the years. Rarely, but it happens. Probably not for basic documents or scanned papers. At least with HTML or Markdown you can read the source.