Fair Seas · Sprint 1 · v2

1 Hour AI-Accelerated
Design Sprint

How do I orchestrate a high-speed design sprint using AI and deliver a useful outcome?

AI Sprint · 1 Hour · Self-Directed · Process Retrospective · 6 AI Tools

"I want to run a rapid design sprint right now for making a prototype of something I can share as a design sprint example. I'm choosing save the ocean as the problem to solve because it's so large and undoable and people won't notice as much the nuances of deep intimate knowledge being incorrect."

— My actual starting prompt. Honest about my constraints from the first keystroke.

The problem I chose — and why it has real stakes

Illegal fishing isn't a fringe issue. It's a market failure at global scale.

3B people · rely on the ocean for primary protein
1 in 5 fish · stolen annually as vessels go dark
$26–50B · in diverted trade & lost revenue annually
No 911 · satellites can see vessels, but seeing ≠ proving

I built a repeatable process, not a one-off experiment

v1 was the starting hypothesis — a compressed GV sprint adapted for solo AI-assisted work. Every stage had a defined job and time budget.

1. Define a Big Problem
Scope the challenge and identify the core pain point.
2. Research Problem Space
Gather context and existing data to ground the sprint.
3. Brainstorm Solutions
Ideate rapidly with AI assistance for diverse options.
4. Make
High-fidelity prototyping or drafting phase.
5. Test / Tweak
Review outputs, gather quick feedback, and iterate.

I ran 3 AI assistants simultaneously to research the problem space

Same prompt, three different tools. I evaluated outputs comparatively — not sequentially.

Gemini

Best formatting and content. Generated HMW questions, a user, and a Phase 1 user flow in one shot. I could quickly grok its thinking and agreed with its direction.

Won: Research round
Claude

More verbose on research — hard to digest fast. But Claude's table/bullet format on the elevator pitch iteration got me unstuck faster than the run-on sentences from the other tools.

Won: Pitch iteration
ChatGPT

The only tool to show images with each problem — refreshing, but too long to scan. It did use my professional background from prior threads to tailor its recommendations.

Noted: Memory context

What I narrowed to — and why

After aligning all three tools on the elevator pitch, I chose a user and a single workflow to prototype.

The Problem

Illegal fishing + the evidence gap

Satellites can detect dark vessels. But detection isn't enforcement. Authorities need structured, defensible evidence — AIS gap analysis, SAR cross-reference, zone violations, vessel history — assembled fast enough to act.

The design target: Collapse a 20-minute manual investigation into a 90-second confident decision.

The User

Maritime Intelligence Analyst

Analysts at ocean NGOs, fisheries monitoring centers, and coast guard units, plus supply-chain compliance officers at seafood retailers.

Core pain: Making defensible enforcement decisions with data not built for legal action. Bad escalations damage credibility — so they under-escalate.

The Workflow Chosen — Prototype Target

The "First Look" — triage in 90 seconds

1
The Trigger
An amber flash on their map. A detection flagged for review.
2
The Context Check
AIS silence — a track that stops while the vessel is still moving.
3
The Correlation
Overlay SAR. If a metallic signature is moving where AIS isn't broadcasting — heart rate goes up.
4
The Known Offender Check
Has this vessel been flagged before? Repeat behavior changes everything.
5
The Decision
Escalate with a timestamped reason, or Dismiss with logged justification — protecting the analyst if the vessel resurfaces.
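The five steps above can be sketched as a single triage function. This is a hypothetical illustration, not the prototype's actual logic — the field names (`ais_gap_hours`, `sar_contact`, `prior_flags`) and the 6-hour silence threshold are assumptions invented for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One flagged vessel detection (illustrative fields only)."""
    vessel_id: str
    ais_gap_hours: float   # time since last AIS broadcast
    sar_contact: bool      # SAR shows a moving metallic signature
    prior_flags: int       # times this vessel was flagged before

def first_look(d: Detection) -> str:
    """Mirror the five-step triage: trigger -> context check ->
    correlation -> known-offender check -> logged decision."""
    reasons = []
    if d.ais_gap_hours > 6:                  # step 2: AIS silence
        reasons.append(f"AIS silent {d.ais_gap_hours:.0f}h")
    if d.sar_contact:                        # step 3: SAR correlation
        reasons.append("SAR contact while dark")
    if d.prior_flags > 0:                    # step 4: repeat behavior
        reasons.append(f"flagged {d.prior_flags}x before")
    # Step 5: every outcome carries a logged justification,
    # protecting the analyst either way.
    if d.sar_contact and d.ais_gap_hours > 6:
        return "ESCALATE: " + "; ".join(reasons)
    return "DISMISS: " + ("; ".join(reasons) or "no corroborating signal")
```

The point of the sketch is step 5: dismissal is never silent — it always emits a justification string, which is what protects the analyst if the vessel resurfaces.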

7 directions in 1 hour — across 6 AI prototyping tools

I ran each tool with the same project context and evaluated outputs comparatively.

No winner selected. Choosing without user research would be guessing.

Claude (VS Code) · Project Triton Dashboard · evidence + map focus
Claude (Browser v1) · Dark Fleet · command center
Claude (Browser v2) · Watchstander · accessibility iteration
Figma Make · Save The Ocean · concept
Gemini · Triton Command Dashboard · map-led
Lovable · Project Triton · sci-fi look
Claude (Desktop) · Maritime Dispatcher Triage Dashboard · dispatcher triage view

What I looked for

Information hierarchy, accessibility, tone (tool vs weapon), and how example data was represented across each output.

Comparative judgment

Gemini felt less militaristic. Lovable had the most polish. Claude's VS Code had the clearest evidence hierarchy. All had meaningful problems.

Why no winner yet

Choosing a direction without user research is guessing, not deciding. Stopping here is the right call for a concept sprint.

AI created a problem I had to catch

Not flagged by a tool or a reviewer. Caught during Test/Tweak — which is exactly when it should be caught.

[ Chang Xing 7 screenshot ] Issue: Political bias introduced
What happened

AI-generated prototypes defaulted to real country flags and vessel names as primary signals. "CHANG XING 7 · 🇨🇳 CHN · CRITICAL · 94% match" appeared prominently in the triage queue.

Why it's structurally a problem

Introduces political bias into analyst workflows and encourages operators to assume nationality correlates with guilt. Flag state doesn't indicate vessel origin — flags of convenience are common in IUU (illegal, unreported, and unregulated) fishing.

Root cause

AI defaults to salient data. Salience ≠ relevance. Country flags are visually prominent and culturally legible — so the model used them. That's a structural AI behavior, not a one-off mistake.

Design correction: Data → Signal

The fix wasn't cosmetic — it required rethinking what signals the UI should surface.

Before — Poor indication
CHANG XING 7
🇨🇳 CHN  ·  94% match  ·  CRITICAL

Country of flag state used as the primary identity signal.

After — Behavioral signals
FV-2041 — PACIFIC MARLIN
Risk Score: 82
AIS suppression
EEZ boundary proximity
Suspicious transshipment
Weak enforcement registry

Human review caught what AI missed. AI surfaced nationality cues because they're salient. Product judgment identified the ethical and interpretive risk. Moving fast didn't mean moving carelessly.
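The Data → Signal correction can be made concrete with a small sketch. The signal names come from the "After" card; the weights are hypothetical, chosen only so the example reproduces the card's score of 82 — real weighting would come from analyst research, not from this sketch:

```python
# Illustrative weights — NOT validated; picked so the four example
# signals sum to the card's Risk Score of 82.
SIGNAL_WEIGHTS = {
    "ais_suppression": 30,
    "eez_boundary_proximity": 20,
    "suspicious_transshipment": 20,
    "weak_enforcement_registry": 12,
}

def risk_score(signals: set[str]) -> int:
    """Sum behavioral-signal weights, capped at 100.

    Note what is structurally absent: flag state and vessel name
    are not keys, so nationality can never move the score.
    """
    return min(100, sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals))
```

The design choice is in what the dictionary omits: because nationality is not a signal, a prompt or a model default can't reintroduce it as one.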

What I skipped — and why it happened

Not confessions. A process diagnosis. Each miss traces to a structural gap in v1 — which is exactly what v2 fixes.

No competitive research
v1 had no step that forced market awareness before building. Didn't know what already existed.
No business model defined
Who pays for Fair Seas? v1 never asked. Without a customer definition, the user was floating.
No end artifact defined
Root cause of documentation gaps: not working backwards from the shareout deliverable at the start.
No accessibility constraints
Not set before prompting AI — led to universally low-contrast dark UIs across all 7 prototypes.
No screenshot discipline
Prototype iterations lived in 6 locations and weren't easily recoverable for the retrospective.

The sprint changed the process

Every v2 change traces directly to a Sprint 1 miss. The loop on Research & Explore is intentional — that phase is meant to iterate.

1. Define the Sprint Deliverable
Choose the target artifact: presentation, video, case study, or prototype demo.
2. Research & Explore
Ground the sprint with research, define the anchor card, set accessibility and tone requirements.
3. Make
Prototype and document decisions with screenshots at each step.
4. Test / Tweak
Validate against the deliverable. Review example data for real-world accusation implications.
5. Sprint Closeout
Capture final state, write the anchor card, designate artifacts as verified or illustrative.
Added: Define Sprint Deliverable first
Added: Sprint Anchor Card output
Added: Documentation discipline in Make
Added: Example data review in Test/Tweak
Added: Sprint Closeout + Go/No Go
GO ✓

Sprint 1 closed with a Go decision. The concept has legs. The process is better. What comes next isn't another sprint — it's a structured validation phase before narrowing further.

What comes between sprints

Not every next step is a full sprint. This phase validates before building further — and may loop before moving on.

Research & Validate
Users, competitors, technical feasibility. Don't build further on an assumption.
Synthesize & React
Review findings, update the brief, decide what changes — and what doesn't.
Targeted Iterations
Minimum effective change before the next validation loop.
Focused Design Sprint
One problem space at a time, as the concept matures.

Fair Seas · Sprint 1 · Cami Farley
