wip - MVP out in 04/2026

AI Products

Foresighter | Strategy

Frame an industry scope -> distill online signals into foresight scenarios for strategy work

What is it about?

The vision is to build a strategy workbench. The Foresighter MVP will help strategists follow how scenarios emerge and develop in an industry.

The workflow now:

  1. Define an industry (e.g. digital operation tools for healthcare in Europe)
  2. Run an AI pipeline from scope through source search to scenarios, with a traceable trail
  3. Explore created scenarios, propositions, pathways and their impact on strategy
  4. Refresh with new sources e.g. every quarter to follow change
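
For the technically minded, the quarterly loop above can be sketched in Python. Every name here is hypothetical and illustrative, not Foresighter's real API:

```python
# Hypothetical sketch of the four-step workflow; names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Scope:
    industry: str  # 1. define an industry

@dataclass
class Run:
    scope: Scope
    scenarios: list[str] = field(default_factory=list)

def run_pipeline(scope: Scope) -> Run:
    # 2. scope -> source search -> excerpts -> scenarios, with a traceable trail
    return Run(scope, scenarios=["Consolidation of hospital IT vendors"])

run = run_pipeline(Scope("digital operation tools for healthcare in Europe"))
# 3. explore run.scenarios, propositions, pathways
# 4. re-run next quarter with fresh sources to follow change
```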

Why am I building it?

In all simplicity: I need it for Product Leadership work.

Having worked in strategy, I know the workflows and can decompose them into an AI-assisted data and analytics pipeline. In addition, foresight and scenario work don’t require company-internal data, so it’s easy to make the tool public.

How am I building it?

I built a set of AI code agents. Knowing what I wanted, it was quite straightforward to iterate with researcher, prototyper and architect agents to find the right choices in:

  • source search and treatment
  • scenario analysis approach
  • tech stack
  • visuals (I spent a lot of time getting one animation right!)

Currently, I’ve run the first source-to-scenarios pipeline with 3000+ excerpts and am iterating on the next approach.

What’s my goal?

  1. Have it at hand when needed for strategy work.
  2. Get at least one person to use it (besides me).

Tech Corner

// STATUS ──────────────────────────────────────────────────────
pipeline MVP complete | frontend in active development

lines_of_code: 38400   |   mvp_estimate: 65% done

// PIPELINE ────────────────────────────────────────────────────
approach        12-phase Python orchestrator

deterministic   content fetching, parsing, DB writes, hashing
                — exact and verifiable, no hallucination risk

llm             Claude Code for excerpt extraction, taxonomy
                tagging, signal creation, scenario generation

reasoning       deterministic for structure and repeatability;
                LLM where language understanding is the task
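
As a toy illustration of the deterministic half of that split: content hashing is exact and repeatable, so it needs no LLM. The function below is a sketch under assumed names, not the project's real code:

```python
# Sketch: a stable, verifiable ID for a fetched excerpt -- pure
# deterministic work, no hallucination risk. Names are illustrative.
import hashlib

def provenance_hash(source_url: str, text: str) -> str:
    """Same input always yields the same hash, so results are auditable."""
    digest = hashlib.sha256(f"{source_url}\n{text}".encode("utf-8"))
    return digest.hexdigest()[:16]

h = provenance_hash("https://example.com/report",
                    "Hospitals adopt digital ops tools.")
```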

// HUMAN IN THE LOOP ───────────────────────────────────────────
where           scenario review before strategic use;
                quarterly refresh is user-triggered

why             wrong scenarios mislead real decisions —
                human review before use is non-negotiable

// EVALUATION ──────────────────────────────────────────────────
how             Pydantic v2 validates all LLM responses;
                phase checkpoints log progress; resume-from-
                phase for debugging mid-pipeline failures
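
A minimal sketch of that validation gate, assuming an illustrative Signal schema (the real models differ):

```python
# Sketch: Pydantic v2 gates LLM responses before they enter the DB.
# The Signal schema is made up for illustration.
from pydantic import BaseModel, Field, ValidationError

class Signal(BaseModel):
    title: str = Field(min_length=5)
    summary: str
    confidence: float = Field(ge=0.0, le=1.0)

raw = '{"title": "Telehealth consolidation", "summary": "...", "confidence": 0.7}'
signal = Signal.model_validate_json(raw)  # raises ValidationError on bad output

try:
    Signal.model_validate_json('{"title": "x", "summary": "", "confidence": 2}')
except ValidationError:
    pass  # malformed LLM output never reaches the database
```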

traces          structlog JSON per phase; task UUID ties run
                together; provenance hashes link signals back
                to source excerpts
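
A stdlib-only sketch of the tracing idea (the pipeline itself uses structlog); field names here are illustrative:

```python
# Sketch: one JSON log line per phase event, all carrying the run's
# task UUID so a whole pipeline run can be stitched back together.
import json
import uuid
from datetime import datetime, timezone

TASK_ID = str(uuid.uuid4())

def log_phase(event: str, phase: int, **fields) -> str:
    record = {
        "task_id": TASK_ID,
        "event": event,
        "phase": phase,
        "ts": datetime.now(timezone.utc).isoformat(),
        **fields,
    }
    return json.dumps(record)

line = log_phase("phase_completed", phase=4, signals_created=120)
```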

// CONTEXT ─────────────────────────────────────────────────────
approach        per-phase system prompts; pgvector stores
                signal embeddings for clustering; each LLM
                call is stateless — context injected via prompt

why             stateless = cheaper and debuggable; semantic
                memory lives in pgvector not the context window
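
A sketch of the stateless pattern: a pgvector nearest-neighbour query shape plus prompt assembly. Table and column names are made up for illustration, and the SQL is shown as a string rather than executed:

```python
# Sketch: context is fetched from pgvector and injected into each
# prompt; no chat history is carried between calls. Names are illustrative.

# pgvector's `<->` is its L2-distance operator for nearest-neighbour search.
NEIGHBOUR_SQL = """
SELECT content FROM signals
ORDER BY embedding <-> %(query_embedding)s
LIMIT 5;
"""

def build_prompt(system: str, neighbours: list[str], task: str) -> str:
    context = "\n".join(f"- {n}" for n in neighbours)
    return f"{system}\n\nRelated signals:\n{context}\n\nTask: {task}"

prompt = build_prompt(
    "You cluster weak signals into scenarios.",
    ["Telehealth reimbursement expands", "Hospital IT budgets tighten"],
    "Propose one scenario title.",
)
```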

// INTEGRATIONS ─────────────────────────────────────────────────
claude_code     Claude for 8 of 12 phases; embeddings for
                signal clustering fed into HDBSCAN

postgresql17    primary store + pgvector for vector search

redis_celery    background jobs for long-running phases

alembic         DB migrations versioned with code

// OTHER TOOLS ──────────────────────────────────────────────────
hdbscan         no fixed k, handles noise naturally —
                vs k-means which forces cluster shape

trafilatura     cleaner text from web than bs4 alone

shadcn_ui       unstyled base, full Tailwind control,
                no fight with the design system