Hoarder vs Wallabag: Read-Later Apps Compared

Quick Verdict

Hoarder is the better choice if you want AI-powered automatic tagging and full-page archiving with a modern interface. Wallabag is the better choice if you want a mature, battle-tested Pocket replacement focused on clean article extraction and offline reading. For most people starting fresh in 2026, Hoarder’s AI features and modern stack make it the more compelling option — but Wallabag’s years of stability and lighter resource footprint still earn it a strong recommendation.

Updated March 2026: Verified with latest Docker images and configurations.

Overview

Hoarder and Wallabag solve the same core problem — saving web content to read later on your own terms — but they approach it from different angles.

Hoarder is a newer project built on Next.js and TypeScript. Its headline feature is AI-powered automatic tagging: save a link, and Hoarder uses an LLM (OpenAI, Ollama, or any OpenAI-compatible API) to categorize and tag it for you. It also does full-page archiving, storing complete snapshots of pages so content survives link rot. The interface is modern and snappy, with browser extensions and mobile apps.

Wallabag has been around since 2013 (originally as poche). It is a direct self-hosted replacement for Pocket and Instapaper. Built on PHP with Symfony, it focuses on article extraction — stripping away ads, navigation, and clutter to give you clean, readable text. It has mature browser extensions, solid mobile apps, and import tools for migrating from Pocket, Instapaper, and Pinboard. Wallabag is proven, stable, and widely deployed.

Feature Comparison

| Feature | Hoarder | Wallabag |
|---|---|---|
| AI Auto-Tagging | Yes — LLM-powered (OpenAI, Ollama, compatible APIs) | No |
| Full-Page Archiving | Yes — complete page snapshots | No — article text extraction only |
| Article Extraction | Basic — relies on archiving | Strong — dedicated extraction library (graby) |
| Browser Extension | Chrome, Firefox | Chrome, Firefox, Opera, Safari |
| Mobile Apps | iOS, Android | iOS, Android (mature, well-maintained) |
| Offline Reading | Via archived snapshots | Yes — dedicated offline mode with epub export |
| Tagging | Automatic (AI) + manual | Manual only |
| Search | Full-text via Meilisearch | Full-text built-in |
| Import from Pocket/Instapaper | Limited | Full import support (Pocket, Instapaper, Pinboard, browser bookmarks) |
| API | REST API | REST API (mature, well-documented) |
| RSS Feeds | No | Yes — per-tag and per-category RSS feeds |
| Annotations | No | Yes — highlight and annotate saved articles |
| Multi-User | Yes | Yes |
| License | AGPL-3.0 | MIT |
| Language/Framework | TypeScript / Next.js | PHP / Symfony |
| Primary Database | PostgreSQL + Meilisearch | PostgreSQL, MySQL, or SQLite |

Docker Compose: Hoarder

Hoarder requires PostgreSQL for data storage and Meilisearch for full-text search. Optionally, connect an LLM for AI tagging. (The project has since been renamed Karakeep, which is why the Docker image below is published under ghcr.io/karakeep-app.)
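Before writing the compose file, generate the two random secrets it references (the variable names here mirror the compose config below):

```shell
# Generate random secrets for the compose file.
# NEXTAUTH_SECRET secures sessions; MEILI_MASTER_KEY secures Meilisearch.
NEXTAUTH_SECRET="$(openssl rand -hex 32)"
MEILI_MASTER_KEY="$(openssl rand -hex 32)"
echo "NEXTAUTH_SECRET=$NEXTAUTH_SECRET"
echo "MEILI_MASTER_KEY=$MEILI_MASTER_KEY"
```

Paste each value into the matching placeholder; remember that MEILI_MASTER_KEY must be identical in both the hoarder and meilisearch services.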

Create a docker-compose.yml:

services:
  hoarder:
    image: ghcr.io/karakeep-app/karakeep:0.31.0
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      # Required — database connection
      DATABASE_URL: "postgresql://hoarder:changeme-hoarder-db@hoarder-db:5432/hoarder"
      # Required — Meilisearch connection
      MEILI_ADDR: "http://meilisearch:7700"
      MEILI_MASTER_KEY: "changeme-meili-master-key"  # Must match Meilisearch config
      # Required — encryption key for sessions (generate with: openssl rand -hex 32)
      NEXTAUTH_SECRET: "changeme-generate-a-random-64-char-hex-string"
      NEXTAUTH_URL: "http://localhost:3000"
      # Optional — AI tagging via OpenAI-compatible API
      # OPENAI_API_KEY: "sk-your-openai-key"
      # OPENAI_BASE_URL: "http://ollama:11434/v1"  # Use this for local Ollama
      # INFERENCE_TEXT_MODEL: "gpt-4o-mini"
      # Optional — headless Chrome worker for full-page screenshots
      # (runs as a separate chrome service; see the project docs)
      # BROWSER_WEB_URL: "http://chrome:9222"
    volumes:
      - hoarder_data:/data
    depends_on:
      hoarder-db:
        condition: service_healthy
      meilisearch:
        condition: service_started
    networks:
      - hoarder-net

  hoarder-db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_USER: hoarder
      POSTGRES_PASSWORD: changeme-hoarder-db  # Change this — must match DATABASE_URL above
      POSTGRES_DB: hoarder
    volumes:
      - hoarder_pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U hoarder"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - hoarder-net

  meilisearch:
    image: getmeili/meilisearch:v1.12.3
    restart: unless-stopped
    environment:
      MEILI_MASTER_KEY: "changeme-meili-master-key"  # Must match Hoarder config
      MEILI_ENV: "production"
    volumes:
      - hoarder_meili:/meili_data
    networks:
      - hoarder-net

volumes:
  hoarder_data:
  hoarder_pgdata:
  hoarder_meili:

networks:
  hoarder-net:

Start the stack:

docker compose up -d

Access Hoarder at http://your-server:3000. Create your account on first visit. AI tagging requires uncommenting and configuring the OpenAI environment variables — either with an OpenAI API key or a local Ollama instance. Full-page screenshots additionally rely on a headless Chrome worker, which the project documentation runs as a separate container.
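Hoarder also exposes the REST API noted in the comparison table. As a quick sketch — assuming an API key created in the web UI under user settings — saving a link from the command line could look like this (the /api/v1/bookmarks path and payload shape follow the project's API docs, but verify them against your version):

```shell
# Save a link via the REST API; YOUR_API_KEY is a placeholder.
curl -s "http://your-server:3000/api/v1/bookmarks" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"type": "link", "url": "https://example.com/some-article"}'
```

The saved bookmark is then crawled, archived, and (if configured) auto-tagged in the background.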

Docker Compose: Wallabag

Wallabag works with PostgreSQL, MySQL, or SQLite. This config uses PostgreSQL with Redis for caching.

Create a docker-compose.yml:

services:
  wallabag:
    image: wallabag/wallabag:2.6.14
    restart: unless-stopped
    ports:
      - "8080:80"
    environment:
      # Required — database configuration
      SYMFONY__ENV__DATABASE_DRIVER: "pdo_pgsql"
      SYMFONY__ENV__DATABASE_HOST: "wallabag-db"
      SYMFONY__ENV__DATABASE_PORT: "5432"
      SYMFONY__ENV__DATABASE_NAME: "wallabag"
      SYMFONY__ENV__DATABASE_USER: "wallabag"
      SYMFONY__ENV__DATABASE_PASSWORD: "changeme-wallabag-db"  # Must match PostgreSQL config
      # Required — application secret (generate with: openssl rand -hex 32)
      SYMFONY__ENV__SECRET: "changeme-generate-a-random-hex-string"
      # Required — your domain (update for production)
      SYMFONY__ENV__DOMAIN_NAME: "http://localhost:8080"
      SYMFONY__ENV__SERVER_NAME: "selfhosting.sh Wallabag"
      # Required — Redis for caching and async
      SYMFONY__ENV__REDIS_HOST: "redis"
      SYMFONY__ENV__REDIS_PORT: "6379"
      # Disable self-service registration (the admin account creates users)
      SYMFONY__ENV__FOSUSER_REGISTRATION: "false"
      SYMFONY__ENV__FOSUSER_CONFIRMATION: "false"
    volumes:
      - wallabag_images:/var/www/wallabag/web/assets/images
      - wallabag_data:/var/www/wallabag/data
    depends_on:
      wallabag-db:
        condition: service_healthy
      redis:
        condition: service_started
    networks:
      - wallabag-net

  wallabag-db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_USER: wallabag
      POSTGRES_PASSWORD: changeme-wallabag-db  # Change this — must match Wallabag config
      POSTGRES_DB: wallabag
    volumes:
      - wallabag_pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U wallabag"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - wallabag-net

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    volumes:
      - wallabag_redis:/data
    networks:
      - wallabag-net

volumes:
  wallabag_images:
  wallabag_pgdata:
  wallabag_data:
  wallabag_redis:

networks:
  wallabag-net:

Start the stack:

docker compose up -d

Access Wallabag at http://your-server:8080. Default credentials are wallabag / wallabag — change these immediately after first login.
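Wallabag's REST API uses OAuth2. After creating an API client in the web UI (under the Developer section), fetching a token and saving an article looks roughly like this — the client ID, secret, and token values are placeholders:

```shell
# Request an OAuth2 access token (password grant).
curl -s "http://your-server:8080/oauth/v2/token" \
  -d "grant_type=password" \
  -d "client_id=YOUR_CLIENT_ID" \
  -d "client_secret=YOUR_CLIENT_SECRET" \
  -d "username=wallabag" \
  -d "password=wallabag"

# Save an article using the access_token from the response above.
curl -s "http://your-server:8080/api/entries.json" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -d "url=https://example.com/some-article"
```

Tokens expire, so scripts should refresh them rather than cache one indefinitely.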

Installation Complexity

Hoarder has a straightforward Docker setup with three containers (app, PostgreSQL, Meilisearch). The main complexity is configuring AI tagging — you need either an OpenAI API key (costs money per request) or a local Ollama instance (needs a GPU or beefy CPU). Without AI, Hoarder still works for manual bookmarking and archiving, but you lose the headline feature.

Wallabag also runs three containers (app, PostgreSQL, Redis) with a comparable setup. The Symfony-based configuration uses long environment variable names, which looks verbose but is actually well-documented. The initial database migration runs automatically on first start. Wallabag is more forgiving on hardware — no search engine or AI model to run.

Both are manageable for anyone comfortable with Docker Compose. Wallabag edges ahead on simplicity because it has no optional AI configuration to think about.

Performance and Resource Usage

| Resource | Hoarder | Wallabag |
|---|---|---|
| App RAM | ~200 MB | ~150 MB |
| Search/Cache RAM | ~100 MB (Meilisearch) | ~30 MB (Redis) |
| Database RAM | ~50 MB (PostgreSQL) | ~50 MB (PostgreSQL) |
| Total RAM | ~350 MB | ~230 MB |
| CPU (idle) | Low | Low |
| CPU (archiving/extraction) | Medium — full-page screenshots and archiving | Low — text extraction only |
| Disk usage | Higher — stores full page archives | Lower — stores extracted article text |

Hoarder uses more resources across the board. Full-page archiving stores significantly more data than text extraction, and Meilisearch is heavier than Redis. If you’re running AI tagging locally via Ollama, add another 2-4 GB of RAM for the model.

Wallabag is the lighter option. It stores just the extracted article content, uses Redis for simple caching, and has no AI workload. On a Raspberry Pi 4 or low-end VPS, Wallabag runs comfortably. Hoarder is feasible on the same hardware but will feel the squeeze when archiving many pages simultaneously.
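To sanity-check these figures on your own hardware, docker stats can take a one-shot snapshot of per-container memory and CPU (container names will match the compose service names above):

```shell
# One-shot snapshot of memory and CPU per running container (no live stream).
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.CPUPerc}}"
```

Run it once at idle and once while saving a batch of pages to see the archiving overhead.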

Community and Support

Wallabag has the advantage of time. Active since 2013, it has a large user base, extensive documentation, and a well-established community. The project has over 10,000 GitHub stars, regular releases, and a strong track record. Documentation covers every feature, API endpoint, and configuration option. You will find answers to almost any Wallabag question on forums, Reddit, and the official docs.

Hoarder is newer but growing fast. The community is active on GitHub and Discord. Development pace is rapid, with frequent releases adding features. Documentation is good but not as comprehensive as Wallabag’s — some edge cases require reading GitHub issues. The project has strong momentum and an engaged contributor base.

For stability and long-term confidence, Wallabag wins. For active development pace and feature velocity, Hoarder leads.

Use Cases

Choose Hoarder If…

  • You want AI-powered automatic tagging — save a link and let the LLM categorize it for you
  • You care about full-page archiving — preserving complete snapshots of pages, not just extracted text
  • You prefer a modern, polished UI built with contemporary web technologies
  • You already run Ollama or have an OpenAI API key and want to leverage AI in your workflow
  • You value visual bookmarking — Hoarder stores screenshots and previews of saved pages
  • You are building a research archive where preserving the exact original page matters

Choose Wallabag If…

  • You want a direct Pocket/Instapaper replacement with mature import tools
  • Offline reading is critical — Wallabag’s epub export and dedicated offline mode are strong
  • You need annotations — highlighting and annotating articles within the app
  • You want RSS feeds generated from your saved articles (per tag or category)
  • You are on limited hardware — Wallabag’s lighter footprint fits small servers and Raspberry Pis
  • You value proven stability — Wallabag has 10+ years of production use
  • You prefer an MIT license over AGPL-3.0

Final Verdict

Hoarder is the better pick for most new users in 2026. The AI auto-tagging is genuinely useful — it eliminates the friction of organizing saved content, which is the main reason most people’s bookmarks devolve into an unsearchable mess. Full-page archiving means you actually keep the content, not just a link that might die. The interface is clean and fast.

Wallabag remains the better choice for dedicated readers. If your primary workflow is “save article, read it later on the couch, maybe annotate it,” Wallabag’s superior article extraction, epub export, offline reading, and annotation features serve that use case better than Hoarder does. It is also the safer bet if you want something with a decade of stability behind it.

If you are unsure: start with Hoarder. The AI tagging will save you hours of manual organization, and you can always export your data if you decide to switch later.

FAQ

Can I import my Pocket library into either tool?

Wallabag has mature import support for Pocket, Instapaper, Pinboard, and browser bookmark exports. The import runs through the web UI and preserves tags. Hoarder supports Netscape HTML bookmark format import but does not have a dedicated Pocket importer. For migrating from Pocket, Wallabag is the smoother path.

Does Hoarder’s AI tagging work without internet?

Yes — if you use Ollama for local AI inference. All tagging happens on your hardware with no data sent to external services. You will need 2-4 GB of additional RAM for the language model. Without Ollama or an OpenAI API key, Hoarder still works for manual bookmarking and archiving — AI tagging is optional.
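As a sketch, a local Ollama sidecar could be added to the Hoarder stack like this (the service name, volume name, and model are illustrative, not part of the original config):

```yaml
  # Add alongside the other services in Hoarder's docker-compose.yml
  ollama:
    image: ollama/ollama:latest
    restart: unless-stopped
    volumes:
      - hoarder_ollama:/root/.ollama   # model storage persists across restarts
    networks:
      - hoarder-net

  # ...and register the named volume under the top-level volumes: key
  # volumes:
  #   hoarder_ollama:
```

After `docker compose up -d`, pull a model once (for example, `docker compose exec ollama ollama pull llama3.2`), then point Hoarder at it by setting OPENAI_BASE_URL to http://ollama:11434/v1 and INFERENCE_TEXT_MODEL to the pulled model's name.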

Can Wallabag archive full web pages?

Not in the way Hoarder does. Wallabag extracts the article content (text, images) and stores it in a clean, readable format. It does not take screenshots or store full HTML snapshots with CSS/JavaScript. If the original page design and layout matter to you, Hoarder’s full-page archiving is the better approach. If you just want the text content preserved, Wallabag’s extraction is cleaner and more storage-efficient.

Which has better mobile apps?

Both have native iOS and Android apps. Wallabag’s mobile apps are more mature — they have been developed for years and include dedicated offline reading mode with epub export. Hoarder’s mobile apps are newer but functional. For heavy mobile reading, Wallabag’s apps are the better experience.

Can I annotate articles in Hoarder?

Not currently. Hoarder focuses on saving, archiving, and auto-tagging. Wallabag supports highlighting and annotating saved articles — you can select text and add notes. If annotations are part of your reading workflow, Wallabag is the only option among these two.

Which uses less disk space?

Wallabag — significantly less. Wallabag stores extracted article text, which is compact (a few KB per article). Hoarder stores full-page archives including screenshots and complete HTML, which can be 1-10 MB per page. Over 1,000 saved pages, the difference can be several gigabytes.
