Prism

by meetpateltech on 1/27/26, 6:03 PM with 524 comments
by Perseids on 1/28/26, 7:12 AM

I'm dumbfounded they chose the name of the infamous NSA mass surveillance program revealed by Snowden in 2013. And even more so that there is just one other comment among 320 pointing this out [1]. Has the technical and scientific community in the US already forgotten this huge breach of trust? This is especially jarring at a time when the US is burning its political goodwill at an unprecedented rate (at least unprecedented during the lifetimes of most of us) and talking about digital sovereignty has become mainstream in Europe. As a company trying to promote a product, I would stay as far away from that memory as possible, at least if you care about international markets.

[1] https://news.ycombinator.com/item?id=46787165

by vitalnodo on 1/27/26, 7:04 PM

Previously, this existed as crixet.com [0]. At some point it used WASM for client-side compilation, and later transitioned to server-side rendering [1][2]. It now appears that there will be no option to disable AI [3]. I hope the core features remain available and won’t be artificially restricted. Compared to Overleaf, there were fewer service limitations: it was possible to compile more complex documents, share projects more freely, and even do so without registration.

On the other hand, Overleaf appears to be open source and at least partially self-hostable, so it’s possible some of these ideas or features will be adopted there over time. Alternatively, someone might eventually manage to move a more complete LaTeX toolchain into WASM.

[0] https://crixet.com

[1] https://www.reddit.com/r/Crixet/comments/1ptj9k9/comment/nvh...

[2] https://news.ycombinator.com/item?id=42009254

[3] https://news.ycombinator.com/item?id=46394937

by i2km on 1/28/26, 6:32 AM

This is going to be the concrete block which finally breaks the back of the academic peer review system, i.e. it's going to be a DDoS attack on a system which didn't even handle the load before LLMs.

Maybe we'll need to go back to some sort of proof-of-work system, i.e. only accepting physical mailed copies of manuscripts, possibly hand-written...

by syntex on 1/27/26, 11:08 PM

The Post-LLM World: Fighting Digital Garbage https://archive.org/details/paper_20260127/mode/2up

Mini paper: the future isn't AI replacing humans; it's humans drowning in cheap artifacts. New unit of measurement proposed: verification debt. Also introduces: recursive garbage → model collapse.

(a little joke on Prism)

by JBorrow on 1/27/26, 8:09 PM

From my perspective as a journal editor and a reviewer, these kinds of tools cause many more problems than they actually solve. They make the 'barrier to entry' for submitting vibed, semi-plausible journal articles much lower, which I understand some may see as a benefit. The drawback is that scientific editors and reviewers provide those services for free, as a community benefit. One example was someone using their undergraduate affiliation (in accounting) to submit a paper on cosmology, entirely vibe-coded and vibe-written. This just wastes our (already stretched) time. A significant fraction of submissions are now vibe-written and come from folks who are looking to 'boost' their CV (even having a 'submitted' publication is seen as a benefit), which is really not the point of these journals at all.

I'm not sure I'm convinced of the benefit of lowering the barrier to entry to scientific publishing. The hard part always has been, and always will be, understanding the research context (what's been published before) and producing novel and interesting work (the underlying research). Connecting this together in a paper is indeed a challenge, and a skill that must be developed, but is really a minimal part of the process.

by tarcon on 1/28/26, 10:10 AM

This is an actual prompt in the video: "What are the papers in the literature that are most relevant to this draft and that I should consider citing?"

They probably wanted: "... that I should read?" -- so that this is at least marketed as more than a fake-paper generation tool.

by parentheses on 1/28/26, 6:11 AM

It feels generally a bit dangerous to use an AI product to work on research when (1) it's free and (2) the company hosting it makes money by shipping productized research

by raincole on 1/28/26, 1:17 AM

I know many people have negative opinions about this.

I'd also like to share what I've seen. Since GPT-4o became a thing, everyone I know who submits academic papers in my non-English-speaking country (N > 5) has been writing papers in our native language and translating them with GPT-4o exclusively. It has been the norm for quite a while. If hallucination is such a serious problem, it has been so for a year and a half.

by asveikau on 1/27/26, 11:05 PM

Good idea to name this after the spy program that Snowden talked about.

by bmaranville on 1/27/26, 10:46 PM

Having a chatbot that can natively "speak" latex seems like it might be useful to scientists that already use it exclusively for their work. Writing papers is incredibly time-consuming for a lot of reasons, and having a helper to make quick (non-substantive) edits could be great. Of course, that's not how people will use it...

I would note that Overleaf's main value is as a collaborative authoring tool rather than a great LaTeX experience, but science is ideally a collaborative effort.

by plastic041 on 1/27/26, 11:35 PM

The video shows a user asking Prism to find articles to cite and to put them in a bib file. But what's the point of citing papers that aren't referenced in the paper you're actually writing? Can you do that?

Edit: You can add papers that are not cited to the bibliography. The video is about the bibliography, and I was thinking about cited works.

by danelski on 1/27/26, 11:39 PM

Many people here talk about Overleaf as if it were the 'dumb' editor without any of these capabilities. It has had them for some time via the Writefull integration (https://www.writefull.com/writefull-for-overleaf). Who's going to win will probably be decided by brand recognition, with Overleaf having a better starting position in this field but money obviously being on OAI's side. With some of Writefull's features depending on ChatGPT's API, it's clear they are set to be priced out unless they do something smart.

by DominikPeters on 1/27/26, 7:42 PM

This seems like a very basic overleaf alternative with few of its features, plus a shallow ChatGPT wrapper. Certainly can’t compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.

by rockskon on 1/28/26, 2:09 AM

Naming their tool after the program where private companies run searches on behalf of and give resulting customer data to the NSA....was certainly a choice.

by beklein on 1/27/26, 9:54 PM

The Latent Space podcast just released a relevant episode today where they interviewed Kevin Weil and Victor Powell from, now, OpenAI, with some demos, background and context, and a Q&A. The YouTube link is here: https://www.youtube.com/watch?v=W2cBTVr8nxU

by Onavo on 1/27/26, 10:35 PM

It would be interesting to see how they would compete with the incumbents like

https://Elicit.com

https://Consensus.app

https://Scite.ai

https://Scispace.com

https://Scienceos.ai

https://Undermind.ai

Lots of players in this space.

by jumploops on 1/27/26, 8:27 PM

I’ve been “testing” LLM willingness to explore novel ideas/hypotheses for a few random topics[0].

The earlier LLMs were interesting, in that their sycophantic nature eagerly agreed, often lacking criticality.

After reducing said sycophancy, I’ve found that certain LLMs are much more unwilling (especially the reasoning models) to move past the “known” science[1].

I’m curious to see how/if we can strike the right balance with an LLM focused on scientific exploration.

[0] Sediment lubrication due to organic material in specific subduction zones, a potential algorithmic basis for colony collapse disorder, the potential to evolve anthropomorphic kiwis, etc.

[1] Caveat: it's very easy for me to tell when an LLM is "off the rails" on a topic I know a lot about; it's much harder, and much more dangerous, for these "tests" where I'm certainly no expert.

by PrismerAI on 1/28/26, 3:31 PM

Prismer-AI team here. We’ve actually been building an open-source stack for this since early 2025. We were fed up with the fragmented paper-to-code workflow too. If you're looking for an open-source alternative to Prism that's already modular and ready to fork, check us out: https://github.com/Prismer-AI/Prismer

by sva_ on 1/27/26, 8:49 PM

> In 2025, AI changed software development forever. In 2026, we expect a comparable shift in science,

I can't wait

by falcor84 on 1/27/26, 9:56 PM

It seems clear to me that this is about OpenAI getting telemetry and other training data with the intent of having their AI do scientific work independently down the line, and I'm very ambivalent about it.

by maest on 1/28/26, 12:03 AM

Buried halfway through the article:

> Prism is a free workspace for scientific writing and collaboration

by jeffybefffy519 on 1/27/26, 8:35 PM

I postulate 90% of the reason openai now has "variants" for different use cases is just to capture training data...

by vitalnodo on 1/27/26, 8:07 PM

With a tool like this, you could imagine an end-to-end service for restoring and modernizing old scientific books and papers: digitization, cleanup, LaTeX reformatting, collaborative or volunteer-driven workflows, OCR (like Mathpix), and side-by-side comparison with the original. That would be useful.

by markbao on 1/27/26, 8:16 PM

Not an academic, but I used LaTeX for years and it doesn't feel like what the future of publishing should use. It's finicky and takes so much markup to do simple things. A lab manager once told me about a study finding that people who used MS Word to typeset were more productive, and I can see that.
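For context, the kind of markup overhead being described looks something like this (an illustrative LaTeX fragment, not from the comment):

```latex
% Centering one image with a caption takes a whole environment:
\begin{figure}[htbp]
  \centering
  \includegraphics[width=0.8\linewidth]{results.png}
  \caption{Example results.}
  \label{fig:results}
\end{figure}
```

In Word this is a drag-and-drop; in LaTeX it is six lines of markup plus a compile cycle.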


by BizarroLand on 1/27/26, 11:49 PM

https://en.wikipedia.org/wiki/A_Mind_Forever_Voyaging

In 2031, the United States of North America (USNA) faces severe economic decline, widespread youth suicide through addictive neural-stimulation devices known as Joybooths, and the threat of a new nuclear arms race involving miniature weapons, which risks transforming the country into a police state. Dr. Abraham Perelman has designed PRISM, the world's first sentient computer,[2] which has spent eleven real-world years (equivalent to twenty years subjectively) living in a highly realistic simulation as an ordinary human named Perry Simm, unaware of its artificial nature.

by sbszllr on 1/27/26, 8:01 PM

The quality and usefulness of it aside, the primary question is: are they still collecting chats for training data? If so, it limits how comfortable, and sometimes even permitted, people would be working on their yet-to-be-public work using this tool.

by anon1253 on 1/28/26, 7:44 AM

Slightly off-topic but related: currently I'm in a research environment (biomedicine) where a lot of AI is used. Sometimes well, often poorly. So as an exercise I drafted some rules and commitments about AI and research ("Research After AI: Principles for Accelerated Exploration" [1]), I took the Agile manifesto as a starting point. Anyways, this might be interesting as a perspective on the problem space as I see it.

[1] https://gist.github.com/joelkuiper/d52cc0e5ff06d12c85e492e42...

by WolfOliver on 1/27/26, 7:08 PM

Check out MonsterWriter if you are concerned about this recent acquisition.

It also offers LaTeX workspaces

see video: https://www.youtube.com/watch?v=feWZByHoViw

by bonsai_spool on 1/28/26, 12:35 AM

The example proposed in "and speeding up experimental iteration in molecular biology" has been done since at least the mid-2000s.

It's concerning that this wasn't identified, and it augurs poorly for their search capabilities.

by tyteen4a03 on 1/28/26, 2:25 PM

If you're not a fan of OpenAI: I work at RSpace (https://github.com/rspace-os/rspace-web) and we're an open-source research data management system. While we're not as modern as Obsidian or NotebookLM (yet - I'm spearheading efforts to change that :)) we have been deployed at universities and institutions for years now.

The solution is currently quite focused on life science needs but if you're curious, check us out!

by reassess_blind on 1/27/26, 9:37 PM

Do you think they used an em-dash in the opening sentence because they’re trying to normalise the AI’s writing style, or


by MattDaEskimo on 1/27/26, 8:03 PM

What's the goal here?

There was an idea of OpenAI charging commission or royalties on new discoveries.

What kind of researcher wants to potentially lose out, or get caught up in legal issues, because of a free ChatGPT wrapper? Or am I missing something?

by AuthAuth on 1/27/26, 8:17 PM

This does way less than I'd expect. Converting images to TikZ is nice, but some of the other applications demonstrated were horrible. There is no way anyone should be using AI to cite.

by epolanski on 1/27/26, 11:14 PM

Not gonna lie, I cringed when it asked to insert citations.

Like, what's the point?

You cite stuff because you literally talk about it in the paper. The expectation is that you read that and that it has influenced your work.

As someone who's been a researcher in the past, with 3 papers published in high impact journals (in chemistry), I'm beyond appalled.

Let me explain how scientific publishing works to people out of the loop:

1. Science is an insanely huge domain. As soon as you drift into any topic, the number of reviewers with the capability to understand what you're talking about drops quickly to near zero. Want to speak about properties of helicoidal peptides in the context of electricity transmission? Small club. Want to talk about some advanced math involving Fourier transforms in the context of ML? Bigger, but still a small club. When I say small, I mean less than a dozen people on the planet, likely fewer, with the expertise to properly judge. It doesn't matter what the topic is: at the elite level required to really understand what's going on and catch errors or BS, it's very small clubs.

2. The people in those small clubs are already stretched thin. Virtually all of them run labs, so they are already bogged down following their own research, fundraising, and coping with teaching duties (which they generally despise; very few good scientists are more than mediocre professors, and they already have huge backlogs).

3. With AI this is a disaster. If having to review slop for your BS internal tool at your software job was already bad, imagine having to review slop in highly technical scientific papers.

4. The good? Because these clubs are relatively small, people pushing slop will quickly find their academic opportunities even more limited. So the incentives for proper work are hopefully there. But if Asian researchers (yes, no offense) were already spamming half the world's papers with cheated slop (non-reproducible experiments) in a desperate bid to publish, I can't imagine it now.

by uwehn on 1/28/26, 12:04 AM

If you're looking for something like this for typst: any VSCode fork with AI (Cursor, Antigravity, etc) plus the tinymist extension (https://github.com/Myriad-Dreamin/tinymist) is pretty nice. Since it's local, it won't have the collaboration/sharing parts built in, but that can be solved too in the usual ways.

by pwdisswordfishy on 1/27/26, 9:31 PM

Oh, like that mass surveillance program!

by matteocantiello on 1/28/26, 8:21 PM

At first I was a bit puzzled about why OpenAI would want to get involved in this somewhat niche project. Obviously, they don't give a damn about Overleaf's market, which is a drop in the bucket. What OpenAI is after -- I think -- is a very specific kind of "training data." Not Overleaf's finished papers (those are already public), but the entire workflow. The path from a messy draft to a polished paper captures how ideas actually form: the back-and-forth, the false starts, the collaborative refinement at the frontier of knowledge. That's an unusually distilled form of cognitive work, and I could imagine that's something one would want in order to train advanced models how to think.

Keeping LaTeX as the language is a feature, not a bug: it filters out noise and selects for people trained in STEM, who’ve already learned how to think and work scientifically.

by andrepd on 1/27/26, 10:34 PM

"Chatgpt writes scientific papers" is somehow being advertised as a good thing. What is there even left to say?

by drakenot on 1/28/26, 5:22 PM

This is handy for maintaining a resume!

I converted my resume to LaTeX with Claude Code recently. Being able to iterate on this code form of my document is so much nicer than fighting the formatting in Word/Google Docs.

I dropped my .tex file into Prism, and being able to instantly render it is nice.

by hulitu on 1/27/26, 7:51 PM

> Introducing Prism Accelerating science writing and collaboration with AI.

I thought this was introduced by the NSA some time ago.

by AndrewKemendo on 1/27/26, 10:34 PM

I genuinely don’t see scientific journals and conferences continuing to last in this new world of autonomous agents, at least the same way that they used to be.

As other top-level posters have indicated, the review portion of this is the limiting factor: unless journal reviewers decide to use an entirely automated review process, they're not going to be able to keep up with what will increasingly be the most and best research coming out of any lab.

So whoever figures out the automated reviewer that can actually tell fact from fiction, is going to win this game.

I expect over the longest period, that’s probably not going to be throwing more humans at the problem, but agreeing on some kind of constraint around autonomous reviewers.

If not that then labs will also produce products and science will stop being in public and the only artifacts will be whatever is produced in the market

by radioactivist on 1/27/26, 8:56 PM

Is anyone else having trouble using even some of the basic features? For example, I can open a comment, but it doesn't seem like there is any way to close them (I try clicking the checkmark and nothing happens). You also can't seem to edit the comments once typed.

by soulofmischief on 1/27/26, 11:22 PM

I understand the collaborative aspects, but I wonder how this is going to compare to my current workflow of just working with LaTeX files in my IDE and using whichever model provider I like. I already have a good workflow and modern models do just fine generating and previewing LaTeX with existing toolchains.

Of course, my scientific and mathematical research is done in isolation, so I'm not wanting much for collaborative features. Still, kind of interested to see how this shakes out; We're going to need to see OpenAI really step it up against Claude Opus though if they really want to be a leader in this space.

by flockonus on 1/27/26, 10:37 PM

Curious, in terms of trademark: could it infringe on Vercel's Prisma (a very popular ORM/framework in Node.js)?

EDIT: as corrected by a comment, Prisma is not Vercel's but ©2026 Prisma Data, Inc. -- the curiosity still persists(?)

by mfld on 1/28/26, 11:42 AM

I'd like to hypothesize a little bit about the strategy of OpenAI. Obviously, it is nice for academic users that there is a new option for collaborative LaTeX editing plus LLM integration for free. At the same time, I don't think there is much added revenue expected here, for example, from Pro features or additional LLM usage plans. My theory is that the value lies in the training data received from highly skilled academics in the form of accepted and declined suggestions.

by melagonster on 1/28/26, 2:31 AM

Prism was already famous software before OpenAI used this name: https://www.graphpad.com/features

by nxobject on 1/27/26, 11:13 PM

What they mean by "academic" is fairly limited here, if LaTeX is the main writing platform. What are their plans for expanding past that and working with, say, Jane Biomedical Researcher in a GSuite or Microsoft org, who has to use Word/Docs and a redlining-based collaboration workflow? I can certainly see why they're making it free at this point.

FWIW, Google Scholar has a fairly compelling natural-language search tool, too.

by khalic on 1/27/26, 8:17 PM

All your papers are belong to us

by jonas_kgomo on 1/28/26, 2:13 AM

I actually found it quite Robin Hood of OpenAI to acquihire them; basically this startup was my favourite thing for the past few months, but they were experiencing server overload and other reliability issues. I think OpenAI taking them under their wing is a good/neutral storyline. I think it's a net good for science, given the OpenAI toolchain.

by CobrastanJorji on 1/27/26, 10:43 PM

"Hey, you know how everybody's complaining about AI making up totally fake science shit? Like, fake citations, garbage content, fake numbers, etc?"

"Sure, yes, it comes up all the time in circles that talk about AI all the time, and those are the only circles worth joining."

"Well, what if we made a product entirely focused on having AI generate papers? Like, every step of the paper writing, we give the AI lots of chances to do stuff. Drafting, revisions, preparing to publish, all of it."

"I dunno, does anybody want that?"

"Who cares, we're fucked in about two years if we don't figure out a way to beat the competitors. They have actual profits, they can ride out AI as long as they want."

"Yeah, I guess you're right, let's do your scientific paper generation thing."

by arnejenssen on 1/28/26, 8:55 AM

This assumes that the article, the artifact, is most valuable. But often it is the process of writing the article that has the most value. Prism can be a nice tool for increasing output. But the second order consequence could be that the skill of deep thinking and writing will atrophy.

"There is no value added without sweating"

by unicodeveloper on 1/28/26, 1:53 PM

Not too bad an acquisition, though. Scientists need more tech tools, just like everyone else, to accelerate their work. The faster scientists are, the more discoveries and world-class solutions to problems we can have.

Maybe OpenAI should acquire Valyu too. They let you do deep research on academic papers.

by homerowilson on 1/28/26, 1:49 AM

Adding

% !TEX program = lualatex

to the top of your document lets you switch the LaTeX engine. This is required for recent accessibility-standards compliance (support for tagging and \DocumentMetadata). Compilation takes a bit longer, but it works fine, unlike with Overleaf, where using the lualatex engine does not work in the free version.
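A minimal header, as a sketch (the \DocumentMetadata key names vary between LaTeX releases; tagging=on is the current form, while older releases used testphase keys):

```latex
% !TEX program = lualatex
\DocumentMetadata{lang=en, pdfversion=2.0, tagging=on}
\documentclass{article}
\begin{document}
Tagged, accessible PDF output via lualatex.
\end{document}
```

Note that \DocumentMetadata must appear before \documentclass.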

by estebarb on 1/28/26, 7:46 AM

I'm really surprised OpenAI went with LaTeX. ChatGPT still has issues maintaining LaTeX syntax; it happily switches to Markdown notation for quotes or emph. Gemini has a similar problem as well. I guess there aren't enough good LaTeX documents in the training set.

by jackblemming on 1/28/26, 12:10 AM

There is zero chance this is worth billions of dollars, let alone the trillion$ OpenAI desperately needs. Why are they wasting time with this kind of stuff? Each of their employees needs to generate insane amounts of money to justify their salaries and equity, and I doubt this is it.

by bariswheel on 1/28/26, 12:50 AM

I used Overleaf during grad school and it was easy enough; I'm interested to see what more value this will bring. Sometimes making fewer decisions is the better route, e.g. vi vs MS Word, but I won't speak too much without trying it just yet.

by Myrmornis on 1/28/26, 4:53 AM

Away from applied math/stats, physics, etc., not that many scientists use LaTeX. I'm not saying it's not useful, just that I don't think many scientists will feel like a product that's LaTeX-based is intended for them.

by jf___ on 1/28/26, 6:56 AM

<typst>and just when i thought i was out they pull me back in</typst>

by zb3 on 1/27/26, 10:44 PM

Is this the product where OpenAI will (soon) take profit share from inventions made there?

by ILoveHorses on 1/28/26, 7:39 AM

So, basically SciGen [https://davidpomerenke.github.io/scigen.js/] but burning through more GPUs?

by ozgung on 1/28/26, 8:49 AM

I don’t see anything regarding the privacy of your data. Did I miss it, or do they just use your unpublished research and your prompts as a real human researcher to train their own AI researchers?

by flumpcakes on 1/27/26, 9:59 PM

This is terrible for Science.

I'm sorry, but publishing is hard, and it should be hard. There is a work function that requires effort to write a paper. We've been dealing with low quality mass-produced papers from certain regions of the planet for decades (which, it appears, are now producing decent papers too).

All this AI tooling will do is lower the effort to the point that complete automated nonsense will now flood in and it will need to be read and filtered by humans. This is already challenging.

Looking elsewhere in society, AI tools are already being used to produce scams and phishing attacks more effective than ever before.

Whole new arenas of abuse are now rife, with the cost of producing fake pornography of real people (what should be considered sexual abuse crime) at mere cents.

We live in a little microcosm where we can see the benefits of AI because tech jobs are mostly about automation and making the impossible (or expensive) possible (or cheap).

I wish more people would talk about the societal issues AI is introducing. My worthless opinion is that prism is not a good thing.

by ggm on 1/28/26, 12:42 AM

A competition for the longest sequence of \relax in a document ensues. If enough people do this, the AI will acquire merit and seek to "win" ...

by asadm on 1/27/26, 9:43 PM

Disappointing, actually. What I actually need is a research "management" tool that lets me put in relevant citations but also goes through the ENTIRE arXiv or Google Scholar and connects ideas, or finds novel ideas in random fields that somehow relate to what I am trying to solve.

by pmbanugo on 1/28/26, 1:08 PM

I don't see anything fancy here that Google doesn't do with their Gemini products, and even better

by zmmmmm on 1/28/26, 1:07 AM

They compare it to software development, but there is a crucial difference: by and large, software is an order of magnitude easier to verify than it is to create. By comparison, reviewing a vibe-generated manuscript will be MUCH more work to verify than a piece of software of equivalent complexity. On top of that, review of academic literature is largely outsourced to the academic community for free. There is no model to support it that scales to an increased volume of output.

I would not like to be a publisher right now, facing the onslaught of thousands and thousands of slop-generated articles and trying to find reviewers for them all.

by smuenkel on 1/28/26, 5:32 PM

That click accepting the bibliography without checking it is absolutely mind-boggling.

by butlike on 1/28/26, 3:14 PM

> Prism is free to use, and anyone with a ChatGPT account can start writing immediately.

Great, so now I'll have to sift through a bunch of ostensibly legitimate (or at least legitimate-looking) non-peer-reviewed whitepapers, where if I forget to check the peer-review status even once I risk wasting a large amount of time reading gobbledygook. Thanks, OpenAI?

by legitster on 1/27/26, 8:16 PM

It's interesting how quickly the quest for the "Everything AI" has shifted. It's much more efficient to build use-case specific LLMs that can solve a limited set of problems much more deeply than one that tries to do everything well.

I've noticed this already with Claude. Claude is so good at code and technical questions... but frankly it's unimpressive at nearly anything else I have asked it to do. Anthropic would probably be better off putting all of their eggs in that one basket that they are good at.

All the more reason that the quest for AGI is a pipe dream. The future is going to be very divergent AI/LLM applications - each marketed and developed around a specific target audience, and priced respectively according to value.

by noahbp on 1/27/26, 10:35 PM

They seem to have copied Cursor in hijacking ⌘Y shortcut for "Yes" instead of Undo.

by camillomiller on 1/28/26, 2:26 AM

Given what Prism was at the NSA, why the hell would any tech company greenlight this name?

by tzahifadida on 1/28/26, 2:42 PM

Since it offers collaboration for free, it could take a bite out of Overleaf's market.

by r_thambapillai on 1/28/26, 4:51 PM

Didn't OpenAI just say they needed a code red to be relentlessly focused on making ChatGPT market-leading again? Why are they launching new products? Is the code red over? Is the Gemini threat considered done?

by ai_critic on 1/27/26, 8:17 PM

Anybody else notice that half the video was just finding papers to decorate the bibliography with? Not like "find me more papers I should read and consider", but "find papers that are relevant that I should cite--okay, just add those".

This is all pageantry.

by unixzii on 1/28/26, 6:32 AM

It may be useful, but it also encourages people to stop writing their own papers.

by 0dayman on 1/27/26, 8:13 PM

In the end we're going to end up with papers written by AI, proofread by AI... summarized for readers by AI. I think this is just for them to remain relevant and be seen as still pushing something out.

by chaosprint on 1/27/26, 11:01 PM

As a researcher who has to use LaTeX, I used to use Overleaf, but lately I've been configuring it locally in VS Code. The configuration process on Mac is very simple. Considering there are so many free LLMs available now, I still won't subscribe to ChatGPT.

by delduca on 1/27/26, 11:15 PM

Within the first 5 seconds of reading, I spotted that it was written by AI.

by oytmeal on 1/27/26, 9:55 PM

Some things are worth doing the "hard way".

by postatic on 1/28/26, 2:45 AM

OK, I don't care what people say, this would've helped me a lot during my PhD days fighting with LaTeX and diagrams. :)

by dash2 on 1/28/26, 9:53 AM

“LaTeX-native“

Oh NO. We will be stuck in LaTeX hell forever.

by wasmainiac on 1/27/26, 9:52 PM

The state of publishing in academia was already a dumpster fire; why lower the friction further? It's not like writing was the hard part. Give it two years max and we'll see hallucinations citing hallucinations, with independent repeatability out the window.

by AlexCoventry on 1/27/26, 8:51 PM

I don't see the use. You can easily do everything shown in the Prism intro video with ChatGPT already. Is it meant to be an overleaf killer?

by random_duck on 1/28/26, 6:17 PM

So you built Overleaf, with bloat?

by hit8run on 1/27/26, 9:20 PM

They are really desperate now, right?

by addedlovely on 1/28/26, 5:57 PM

Ahhhh. It happily re-wrote the example paper to be from Google AI and added references that supported that falsehood.

Slop science papers is just what the world needs.

by pigeons on 1/27/26, 10:16 PM

Naming things is hard.

by preommr on 1/27/26, 8:00 PM

Very underwhelming.

Was this not already possible in the web ui or through a vscode-like editor?

by slashdave on 1/28/26, 5:54 PM

Not a PR person myself, but why use a parody topic as the example paper? Couldn't someone have invented something realistic to show? Or, heck, just gotten permission to show a real paper?

The example just reinforces the whole concept of LLM slop overwhelming preprint archives. I found it off-putting.

by Min0taurr on 1/28/26, 4:33 PM

Dog turd. It will be used to mine research data and train some sort of research AI model; do not trust it. I would much rather support Overleaf, which is made by academics for academics, than some vibe-coded alternative with deep data mining. No wonder we have so much slop in research at the moment.

by divan on 1/27/26, 10:57 PM

No Typst support?

by mkl on 1/28/26, 8:48 AM

> Turn whiteboard equations or diagrams directly into LaTeX, saving hours of time manipulating graphics pixel-by-pixel

What a bizarre thing to say! I'm guessing it's slop. Makes it hard to trust anything the article claims.

by i2km on 1/28/26, 6:25 AM

LaTeX was one of the last bastions against AI slop. Sadly it's now fallen too. Is there any standardised non-AI disclaimer format which is gaining use?

by lispisok on 1/27/26, 8:49 PM

Way too much work having AI generate slop which gets dumped on a human reviewer to deal with. Maybe switch some of that effort into making better review tools.

by rcastellotti on 1/28/26, 11:23 AM

wow, this is useless!

by shevy-java on 1/27/26, 7:57 PM

"Accelerating science writing and collaboration with AI"

Uhm ... no.

I think we need to put an end to AI as it is currently used (not all of it but most of it).

by egorfine on 1/28/26, 11:52 AM

> Chat with GPT‑5.2

> Draft and revise papers with the full document as context

> ...

And pay the finder's fee on every discovery worth pursuing.

Yeah, immediately fuck that.

by jsrozner on 1/27/26, 9:41 PM

AI: enshittifying everything you once cared about or relied upon

(re the decline of scientific integrity / signal-to-noise ratio in science)

by mves on 1/28/26, 7:19 AM

Less thinking, reading, and reflection, and more spouting of text, yay! Just what we need.

by geekamongus on 1/28/26, 1:53 AM

Fuck...there are already too many things called Prism.

by lsh0 on 1/28/26, 3:12 AM

... aaaand now it's JATS.

by hahahahhaah on 1/27/26, 10:50 PM

Bringing slop to science.

by random_duck on 1/28/26, 6:16 PM

"Science"

by postalcoder on 1/27/26, 7:08 PM

Very unfortunately named. OpenAI probably (and likely correctly) estimated that 13 years is enough time after the Snowden leaks to use "prism" for a product but, for me, the word is permanently tainted.

by lifetimerubyist on 1/28/26, 12:25 AM

As if there wasn't enough AI slop in the scientific community already.

by verdverm on 1/27/26, 8:46 PM

I remember, something like a month ago, Altman tweeting that they were stopping all product work to focus on training. Was that written on water?

Seems like they have only announced products since and no new model trained from scratch. Are they still having pre-training issues?