Fair Seas · Sprint 1 · v2

1-Hour AI-Accelerated Design Sprint

How do I orchestrate a high-speed design sprint using AI and deliver a useful outcome?

AI Sprint · 1 Hour · Self-Directed · Process Retrospective · 6 AI Tools

I want to build a repeatable process, not a one-off experiment

v1 compressed a typical 5-day design sprint down to a 1-hour solo activity.

1. Define a Big Problem
Identify the core problem to solve.
2. Research Problem Space
Gather context and existing data to ground the sprint.
3. Brainstorm Solutions
Ideate rapidly with AI chatbots for diverse options.
4. Make
High-fidelity AI prototyping using the context defined above.
5. Test / Tweak
Review outputs, iterate.

1. Define a Big Problem

I ran three AI assistants simultaneously to research the problem space: Claude, ChatGPT, and Gemini. I evaluated their outputs in parallel, feeding them back into each other for cross-checking and narrowing.

Five minutes was plenty of time to settle on the problem to solve. Illegal deep-sea fishing enforcement is difficult and impactful: there's too much data to sift through (e.g., satellite SAR) and packaging the evidence is arduous. That's a perfect opportunity for AI-powered software.


"I want to run a rapid design sprint right now for making a prototype of something I can share as a design sprint example. I'm choosing save the ocean as the problem to solve because it's so large and undoable and people won't notice as much the nuances of deep intimate knowledge being incorrect."

My embarrassingly simple starting prompt. I wanted to push the tools to see if they could fill in the gaps.

Gemini output screenshot

Not only did it come up with three strong problems to solve, it synthesized the information into a "2026 Tech State" overview and software opportunities.

Won: Best brainstorming partner · Great synthesis
Claude output screenshot

It gave ideas similar to the other platforms', but there was too much to dig through, and it wasn't synthesized as nicely.

Gave source links · Distracting emoji icons
ChatGPT output screenshot

It was the only tool to show images with each problem, which helped my eyes. And it called out a personal note - that illegal fishing "aligns very well with your regulated systems brain."

Used memory context · Showed photos/images · Too lengthy

"The most aligned concept would be: An AI-powered transparency and compliance platform to detect and prevent illegal fishing using satellite + trade data."

- ChatGPT

2. Research Problem Space + 3. Brainstorm Solutions

What I thought would be distinct research and brainstorm phases ended up happening at the same time. As I asked questions about the problem, the LLMs generated users, workflows, and solution ideas without my always asking.

I wanted to start with a structured deliverable to act as a high-level definition.


"let's focus on research i need to fill in this: What are we building and why now? A clear, jargon-free description of the initiative that anyone can understand - your elevator pitch. This gives readers the "what" immediately, then the subsequent sections explain the "why" (problem), "how it wins" (business case), "how it's possible" (technical feasibility), and "how we'll execute" (the rest)."

My prompt template, which I had made previously with AI.

Deliverable 1: Elevator Pitch
The Elevator Pitch

🌊 Save the Ocean by Turning Satellite Data into Enforceable Evidence

Every year, roughly 1 in 5 wild-caught fish is taken illegally — generating up to $36 billion in illicit profits while accelerating ecosystem collapse and eroding a key natural defense against global warming. The vessels responsible don't need sophisticated tools to hide. They simply turn off their tracking transponders and "go dark."

Satellites can often see ships at sea — but seeing isn't the same as proving wrongdoing. Enforcement depends on structured, defensible evidence.

We're building an AI-powered Maritime Evidence Engine that detects dark-vessel behavior using satellite and vessel-pattern analysis, then automatically transforms that data into enforcement-ready evidence packs. Instead of just surfacing alerts, the platform delivers explainable, auditable reports that regulators and seafood buyers can use to deny market access, escalate inspections, and block illegal catch before it enters the supply chain.

We're not just watching the ocean. We're enforcing its laws.

Skipped

Buyer — who are we selling this to? That changes the user and their motivations; I should NOT have skipped this.

Fact checking / sources — "1 in 5" and "$36 billion" need fact-checking, but they seemed roughly right; proceed at risk.

Used for minimal edits.

Lengthy responses

Claude suggested a bullets-and-table format that would have been better to go with. I regret forcing the elevator-pitch paragraph style.

Fav Format Suggestion

Its initial pitch lacked the urgency of "dark vessels" and "save the ocean." I quickly grew fatigued editing iterations.

Wrangled final pitch text · Painful iterations

After aligning all three tools on the elevator pitch, I used the AI tools to identify the core user & hero workflow.

Deliverable 2: Problem
The Problem

Illegal fishing + the evidence gap

Satellites can detect dark vessels. But detection isn't enforcement. Authorities need structured, defensible evidence — AIS gap analysis, SAR cross-reference, zone violations, vessel history — assembled fast enough to act.

The platform must collapse the "glitch, buoy, or bust" question from a 20-minute manual investigation into a 90-second confident decision. Speed to a confident decision, not just detection.

Skipped

Fact checking — the 20-minute-to-90-second claim (a 13.3× gain) was not fact-checked. It seemed close enough to proceed at risk.
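The evidence items the problem statement names (AIS gap analysis, SAR cross-reference, zone violations, vessel history) could be bundled into one auditable record. Here's a minimal Python sketch; every class and field name is my own illustration, not the sprint's actual design:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the four evidence items from the problem statement,
# assembled into a single auditable record an analyst could escalate.
@dataclass
class EvidencePack:
    vessel_id: str
    ais_gap_hours: float                                 # how long the transponder was silent
    sar_detections: list = field(default_factory=list)   # satellite radar hits during the gap
    zone_violations: list = field(default_factory=list)  # protected areas / EEZ boundaries crossed
    prior_flags: int = 0                                 # vessel history: earlier escalations
    analyst_notes: str = ""                              # human-written, timestamped justification

    def is_escalation_ready(self) -> bool:
        """A pack is defensible only when AIS silence is corroborated by imagery."""
        return self.ais_gap_hours > 0 and len(self.sar_detections) > 0
```

The point of the sketch is the `is_escalation_ready` gate: detection alone (an AIS gap) is never enough; the pack has to pair it with corroborating evidence before it leaves the analyst's desk.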

Deliverable 3: User
Maritime Intelligence Analyst
The User

Maritime Intelligence Analyst

Analysts at ocean NGOs, fisheries monitoring centers, coast guard units, and supply chain compliance officers at seafood retailers.

Core pain: Making defensible enforcement decisions with data not built for legal action. Bad escalations damage credibility — so they under-escalate.

Pain Points
Data volume with no triage
Thousands of anomalies surface daily. Everything looks equally urgent — or equally ignorable.
Explainability gap
They can see something looks wrong but can't articulate why in terms a regulator will accept. "The AI flagged it" isn't defensible.
Jurisdiction confusion
A vessel going dark in international waters triggers different rules than one in an EEZ. They're manually cross-referencing legal frameworks mid-analysis.
Tool fragmentation
AIS in one platform. SAR imagery in another. Port records elsewhere. Assembling a coherent picture means copy-pasting between systems — invisible, unrepeatable, legally fragile.
False positive fatigue
Escalating a bad flag to authorities damages credibility. So they sit on findings, waiting for certainty that never arrives.
Skipped

Will they or their boss pay for this service? — Risky that I defined a primary user when I didn't know who I was selling this to. Step miss noted above in Elevator Pitch.

Persona Icon — I created the icon above after the sprint had ended to make this presentation better.

Synthesized Persona — I synthesized this post-sprint. During the sprint I used a lengthy, unconsolidated persona the AI had spat out.

Deliverable 4: Workflow
The Workflow Chosen — Prototype Target

The "First Look" — triage in 90 seconds

1
The Trigger
An amber flash on their map. A detection flagged for review.
2
The Context Check
AIS silence — a track that stops while the vessel is still moving.
3
The Correlation
Overlay SAR. If a metallic signature is moving where AIS isn't broadcasting — heart rate goes up.
4
The Known Offender Check
Has this vessel been flagged before? Repeat behavior changes everything.
5
The Decision
Escalate with a timestamped reason, or Dismiss with logged justification — protecting the analyst if the vessel resurfaces.
Skipped

Is this the best workflow to choose? — It drives urgency in the prototype, but is this the 80% use case? Seemed valid enough, proceed at risk.
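The five-step "First Look" above can be sketched as a decision rule. This is a hedged illustration, assuming boolean signals an upstream detection pipeline would provide; the function and signal names are mine, not the product's:

```python
def triage(ais_silent: bool, sar_moving_target: bool, known_offender: bool) -> tuple:
    """Sketch of the 'First Look' triage: returns (decision, reasons).

    Every path records reasons, mirroring step 5: escalate with a
    timestamped reason, or dismiss with logged justification.
    """
    reasons = []
    if ais_silent:
        reasons.append("AIS track stopped while vessel still underway")
    if sar_moving_target:
        reasons.append("SAR shows a moving metallic signature with no AIS broadcast")
    if known_offender:
        reasons.append("vessel previously flagged; repeat behavior")
    # The correlation is the key signal: AIS silence alone could be a
    # glitch or a buoy, so only silence plus a SAR target escalates.
    if ais_silent and sar_moving_target:
        return ("escalate", reasons)
    return ("dismiss", reasons or ["no corroborating signal; logged for audit"])
```

Note that a dismissal still carries a justification, which is what protects the analyst if the vessel resurfaces.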

4. Make + 5. Tweak/Test

The most fun: 7 prototypes in 30 min — across 6 tools

Each tool started with a different amount of context, depending on which had access to the conversations above. To tools with no context, I deliberately fed varying amounts to get variety in the prototypes.

I thought Make and Tweak/Test would be two distinct sprint phases, but because each tool needed time to "think," it made sense to tweak some prototypes while I waited for others to finish.

No winner was selected. There were pros and cons to each, and it'd be best to validate with users and more research.

Deliverable 5: Prototypes
7 Prototypes · 6 Tools
Claude VS Code — Project Triton Dashboard
Claude · VS Code
Map + evidence focus. No alert list.
Claude Browser — Dark Fleet
Claude · Browser v1
Dark Fleet command center. Tiny, dim fonts; poor readability.
Claude Browser — Watchstander
Claude · Browser v2
Accessibility iteration, still hard to read.
Figma Make — Save The Ocean
Figma Make
Easiest to read. Confusing map & collapsible lists are a pain for this context.
Gemini — Triton Command Dashboard
Gemini
Triton Command had more approachable colors.
Lovable — Project Triton
Lovable
Project Triton had the most cinematic sci-fi look but used biasing nationality signals (see below analysis).
Claude Code Desktop — Maritime Dispatcher Triage Dashboard
Claude · Desktop
Dispatcher triage view had an interesting action bar at bottom.

Human Judgement

What I evaluated in the final prototypes

Information hierarchy, accessibility, tone, and example data. All had pros and cons, so there's no clear "winner". Figma Make was the friendliest. Lovable was the most cinematic.

Tone and Example Data: What AI Missed

Three of seven prototypes had targeting reticles on the map, which I hadn't asked for. The dark theme was partly my own doing (starting prompts), but the military surveillance conventions crept in on their own. Flags and real vessel names introduced political bias I wasn't comfortable with.

I caught it mid-sprint but didn't have time to resolve it all. Post-sprint, I fixed the example data below. I also added a tone and example-data review to my v2 design sprint process.

Before — AI-generated prototype
Before: AI-generated prototype showing country flags and vessel names
CHANG XING 7🇨🇳 CHN
LUCKY FORTUNE🇹🇼 TWN
After — Human-corrected data
After: Human-corrected prototype with neutral vessel identifiers
FV-2041 – PACIFIC MARLINflag removed
FV-3892flag removed

Design correction

Vessel name nationality cues were replaced with neutral vessel identifiers to avoid unintended geopolitical implications in prototype data.

I could have changed the flags to be fictional; however, post-sprint I did enough analysis to understand there is no user benefit to showing a flag. It's just noise. An efficient compliance system prioritizes behavioral signals and evidence confidence.

Targeting reticles were also removed from the map components: unprompted military conventions that don't belong in a compliance tool.

What I skipped — and why

I evaluated what I missed in this Design Sprint process and decided whether it should lead to a sprint workflow change.

No business model defined
Who pays for Fair Seas? v1 never surfaced this. Without a customer definition, the user and workflow were ill-defined.
No competitive research
What exists today? Post-sprint I found existing solutions that could have helped me sharpen a competitive advantage.
No end artifact defined
I had to do a lot of retroactive artifact gathering after the sprint. Next time I want stronger deliverables defined.
No accessibility constraints
Nearly every AI prototype had readability problems. I need to improve the starting prompts to include some guardrails.
No screenshot discipline
Prototype iterations lived in 6 locations, not easily recoverable for the retrospective.

Updating the Design Sprint Process (v1 → v2)

Learnings from v1 of my Design Sprint Process

Design Sprint Process v1

1. Define a Big Problem
Identify the core problem to solve.
2. Research Problem Space
Gather context and existing data to ground the sprint.
3. Brainstorm Solutions
Ideate rapidly with AI chatbots for diverse options.
4. Make
High-fidelity AI prototyping using the context defined above.
5. Test / Tweak
Review outputs, iterate.
Pain Points:
  • No business model defined: Who pays for Fair Seas? v1 never surfaced this. Without a customer definition, the user and workflow were ill-defined.
  • No competitive research: What exists today? Post-sprint I found existing solutions that could have helped me sharpen a competitive advantage.
  • No end artifact defined: I had to do a lot of retroactive artifact gathering after the sprint. Next time I want stronger deliverables defined.
  • No accessibility constraints: Nearly every AI prototype had readability problems. I need to improve the starting prompts to include some guardrails.
  • No screenshot discipline: Prototype iterations lived in 6 locations, not easily recoverable for the retrospective.

Design Sprint Process v2

1. Define the Sprint Deliverable
Choose the target artifacts: presentation, video, case study, or prototype demo.
2. Research & Explore
Ground the sprint with research, define the anchor card, set accessibility and tone requirements.
3. Make
Prototype and document decisions with screenshots at each step.
4. Test / Tweak
Validate against the deliverable. Review example data for real-world accusation implications.
5. Sprint Closeout
Capture the final state and the anchor card; designate artifacts as verified or illustrative.
Deliverable: Sprint Anchor Card
  • Problem statement (1-2 sentences)
  • Customer (who pays, why)
  • User (who, context, what they need)
  • Key constraints (accessibility, tone, audience)
  • 2–3 source-verified facts to ground design direction
Changes Made:
  • Defined deliverables: Added the step "Define the Sprint Deliverable" first
  • Consolidated inseparable steps: Combined steps 1–3 into one step "Research & Explore"
  • Tone constraint: Added a tone definition (i.e., tool, not weapon) to Research & Explore
  • Documentation rigor: Added artifact "Sprint Anchor Card" to be maintained throughout the Design Sprint
  • Documentation rigor: Added artifacts to the Make step (screenshots, URLs, tips for each prototype) and Test/Tweak (changes made, screenshots)
  • Added human judgement review: of example data in the Test/Tweak phase
  • Finishing cleanly: Added step "Sprint Closeout" to wrap up deliverables
GO ✓

Sprint 1 closed with a Go decision. The concept has legs. The process is better. What comes next isn't another sprint — it's a structured validation phase before narrowing further.

What comes between sprints

Not every next step is a full sprint. This phase validates before building further — and may loop before moving on.

Research & Validate
Users, competitors, technical feasibility. Don't build further on an assumption.
Synthesize & React
Review findings, update the brief, decide what changes — and what doesn't.
Targeted Iterations
Minimum effective change before the next validation loop.
Focused Design Sprint
One problem space at a time, as the concept matures.

Fair Seas · Sprint 1 · Cami Farley
