Panorix
FEATURE — Built for LLMs

Long screenshots, finally readable by AI.

Why your AI says “I can't read the image” — and how we fix it.

THE PROBLEM

Your AI says “I can't read the image.”

You took a careful long screenshot of a chat, an article, or a database table. You uploaded it to ChatGPT, Claude, or Gemini — and asked the model to summarize, translate, or extract something.

What you didn't see: the LLM backend silently re-compresses your image to save tokens. Long screenshots get the worst of it — text turns to mush, numbers smudge, the whole point of the screenshot is lost.

The AI tells you “I can't read the text in this image.” That's not the AI's fault — it's looking at a blurred copy of what you sent.

OUR SOLUTION

Slice below the compression threshold. Pack lossless. Done.

We slice your screenshot at the height where major LLM backends stop re-compressing. We pack the pieces in a no-compression ZIP. Drop them all into the AI chat, and the model can actually read them.

Capture → Slice at 2880 px → PNG, no loss → ZIP, no compression → Drop into AI
HOW IT WORKS

How it works

Five engineering decisions, in plain words.

01

2880-pixel chunks

The empirical threshold where major LLM backends stop re-compressing your image. Below this height, each chunk passes through the pipeline untouched, on its own.
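The slicing rule itself is simple arithmetic. A minimal sketch in Python — the function name is illustrative, and the 2880 px constant is the threshold stated above:

```python
MAX_CHUNK_PX = 2880  # empirical re-compression threshold described above

def slice_bounds(height: int, chunk: int = MAX_CHUNK_PX) -> list[tuple[int, int]]:
    """Return (top, bottom) pixel rows for each chunk of a capture."""
    return [(top, min(top + chunk, height)) for top in range(0, height, chunk)]

# A 7,000 px capture becomes three chunks: 2880 + 2880 + 1240.
print(slice_bounds(7000))  # → [(0, 2880), (2880, 5760), (5760, 7000)]
```

Every chunk except possibly the last is exactly 2880 px tall, so each stays under the backend's re-compression trigger on its own.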

02

PNG, lossless

Every pixel from the source is preserved. We never re-encode through JPEG, never apply chroma subsampling, never re-quantize.

03

ZIP, no compression

STORE mode only. The PNG bytes inside the archive are bit-for-bit identical to the standalone PNGs — the ZIP is a delivery container, not a compression step.
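STORE-mode packing is a one-liner with Python's standard `zipfile` module; this is a hedged sketch, and `pack_store` is a hypothetical helper name, not Panorix's actual code:

```python
import zipfile
from pathlib import Path

def pack_store(zip_path: str, png_paths: list[str]) -> None:
    # ZIP_STORED skips the deflate pass entirely: the archive is a
    # delivery container, and the PNG bytes inside are bit-for-bit
    # identical to the standalone files.
    with zipfile.ZipFile(zip_path, "w", compression=zipfile.ZIP_STORED) as zf:
        for p in png_paths:
            zf.write(p, arcname=Path(p).name)
```

PNG data is already compressed internally, so a deflate pass would save almost nothing anyway — STORE just makes the round-trip guarantee explicit.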

04

Sequential filenames

panorix-<timestamp>_01.png, _02.png, _03.png… Drop them into the AI chat in file-name order, and the conversation reads top-to-bottom.
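The zero-padding is what makes name order match reading order: without it, `_10` would sort before `_2`. A small illustrative sketch (the helper name and the stem are made up):

```python
def chunk_name(stem: str, index: int) -> str:
    # Two-digit zero-padded suffix: lexicographic order == top-to-bottom order.
    return f"{stem}_{index:02d}.png"

names = sorted(chunk_name("panorix-demo", i) for i in (10, 2, 1))
print(names)  # → ['panorix-demo_01.png', 'panorix-demo_02.png', 'panorix-demo_10.png']
```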

05

99-chunk ceiling

Captures up to 285,120 px tall (99 × 2880) come out as at most 99 separate files. For taller captures, the chunk height grows just enough to fit within 99 chunks — sharpness is still preserved.
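The fallback arithmetic can be sketched in a few lines — the constants come from the text above, and `effective_chunk_height` is an illustrative name, not Panorix's internal API:

```python
import math

MAX_CHUNK_PX = 2880   # default chunk height
MAX_CHUNKS = 99       # file-count ceiling

def effective_chunk_height(capture_px: int) -> int:
    """Default 2880 px; otherwise grow just enough to fit within 99 chunks."""
    if capture_px <= MAX_CHUNK_PX * MAX_CHUNKS:  # up to 285,120 px
        return MAX_CHUNK_PX
    return math.ceil(capture_px / MAX_CHUNKS)

print(effective_chunk_height(285_120))  # → 2880
print(effective_chunk_height(300_000))  # → 3031
```

Growing the chunk height (rather than adding a 100th file) keeps exports inside the limit at the cost of slightly taller slices — each slice is still a lossless PNG.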

HOW TO USE IT

How to use it

Seven steps. No login, no upload.

  1. Capture a long screenshot with Panorix as usual.
  2. Open the Panorix editor — it loads automatically when capture finishes.
  3. (Optional) Annotate or crop the screenshot.
  4. Click Export, then choose “AI Reading — For AI” from the dropdown.
  5. Wait a few seconds while we slice and pack — your browser downloads a ZIP.
  6. Unzip the file, then drag all the chunk PNGs into your AI chat in one batch.
  7. Ask your question. The AI reads each chunk in order, top to bottom.
THE HONEST PART

What AI Reading doesn't do.

AI Reading solves the compression problem. It doesn't solve every problem. Here's what it won't do — so you don't get caught off-guard:

  • Doesn't bypass per-chat image-count limits — you're still subject to whatever cap your AI provider sets.
  • Doesn't improve the AI's image understanding itself. If the model is bad at OCR, even a sharp input has limits.
  • Doesn't replace OCR — the AI still has to read the text. We just make sure the text is sharp enough to be readable.

Stop fighting LLM compression.

Install Panorix, capture, choose AI Reading export, drop the chunks into your AI chat.