Supported endpoints
The container currently supports `/api/v1/marker`. It does not support `/api/v1/table_rec` or `/api/v1/layout` at the moment, but will in an upcoming release.
Authentication
API authentication is not supported in the container. We assume customers will be running our image on their own infrastructure in private networks. You may send the `X-API-Key` header detailed here, but it will be ignored; any value works.
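For example, a request against a local container behaves the same with or without the header. A minimal sketch, assuming the container is mapped to `localhost:8000` (the port is illustrative) and exposes the same form-based interface as the hosted API:

```python
import requests

# Hypothetical local container address; use whatever port you mapped.
BASE_URL = "http://localhost:8000"

with open("example.pdf", "rb") as f:
    response = requests.post(
        f"{BASE_URL}/api/v1/marker",
        files={"file": ("example.pdf", f, "application/pdf")},
        # The header is accepted but ignored; any value (or none) works.
        headers={"X-API-Key": "anything"},
    )

print(response.json())
```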
`use_llm` is supported with your own keys
Many Datalab customers use Marker’s LLM post-processors to improve their parse output, supported by the `use_llm` flag in `/api/v1/marker`.

`use_llm` is supported in the container image, but you must bring your own keys.
The container will support any service supported by Marker, which currently includes:
- OpenAI (via OpenAI’s platform & Azure)
- Anthropic (via Anthropic’s platform only)
- Gemini (via Google AI Studio & Vertex)
- Ollama (if you want a fully on-prem setup)
Google’s models are the best-supported with `use_llm`.
We can’t speak to performance using other models, but you’re welcome to try (and we’d love to hear from you when you do, as would folks in our Discord channel).
Note: `use_llm` will only work with a vision-capable model.
How to call `marker` with `use_llm` in the container
When you call `/api/v1/marker` in the container, you must send your keys and configuration for `use_llm` in the API request itself.
Here’s what that looks like if you want to use Gemini 2.0 Flash (the model we use in production) via OpenRouter:
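A sketch of such a request, assuming the container listens on `localhost:8000`, that `additional_config` is sent as a JSON-encoded form field, and that your Marker version exposes its OpenAI-compatible service via the `openai_base_url`, `openai_model`, and `openai_api_key` config keys (the OpenRouter model slug below is illustrative):

```python
import json
import requests

BASE_URL = "http://localhost:8000"  # wherever your container is running

# Marker's OpenAI service talks to any OpenAI-compatible endpoint,
# so we point it at OpenRouter and select a Gemini 2.0 Flash model.
additional_config = {
    "llm_service": "marker.services.openai.OpenAIService",
    "openai_base_url": "https://openrouter.ai/api/v1",
    "openai_model": "google/gemini-2.0-flash-001",  # illustrative slug
    "openai_api_key": "sk-or-...",  # your OpenRouter key
}

with open("example.pdf", "rb") as f:
    response = requests.post(
        f"{BASE_URL}/api/v1/marker",
        files={"file": ("example.pdf", f, "application/pdf")},
        data={
            "use_llm": True,
            "additional_config": json.dumps(additional_config),
        },
    )

print(response.json())
```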
The `additional_config` parameter is a dictionary of configs passed to `marker` itself.
We allow a small subset of configuration keys on our hosted API via `additional_config`, but all of Marker’s configs are usable in the container.
Below are the keys required for each provider.
Ollama
The Ollama service supports calls via Ollama instances.

Gemini (via Google Vertex)
The Google Vertex service supports Gemini model calls on Google Vertex.

Gemini (via Google AI Studio)
The Google Gemini service supports Gemini calls using a Google AI Studio key.

OpenAI
The OpenAI service supports any OpenAI-compatible interface by setting `openai_base_url`, which defaults to OpenAI’s servers.
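As a rough guide, the per-provider `additional_config` payloads look something like the following. Apart from `openai_base_url`, the key names are assumptions based on Marker’s service classes and may differ between Marker releases, so verify them against the Marker version in your container image:

```python
# Illustrative additional_config payloads per provider. Key names are
# assumptions drawn from Marker's service classes; verify against the
# Marker version baked into your container image.

ollama_config = {
    "llm_service": "marker.services.ollama.OllamaService",
    "ollama_base_url": "http://localhost:11434",  # your Ollama instance
    "ollama_model": "llama3.2-vision",            # any vision-capable model
}

vertex_config = {
    "llm_service": "marker.services.vertex.GoogleVertexService",
    "vertex_project_id": "my-gcp-project",  # your GCP project ID
}

gemini_config = {
    "llm_service": "marker.services.gemini.GoogleGeminiService",
    "gemini_api_key": "AIza...",  # Google AI Studio key
}

openai_config = {
    "llm_service": "marker.services.openai.OpenAIService",
    "openai_base_url": "https://api.openai.com/v1",  # any OpenAI-compatible URL
    "openai_model": "gpt-4o",
    "openai_api_key": "sk-...",
}
```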