rowsandall/rowing-courses-spec.md
2026-03-16 10:57:16 +01:00


Rowing Courses Platform — Requirements & Technical Specification

Successor to the measured-courses feature of Rowsandall.com.
Companion service to intervals.icu. Designed for minimal hosting cost and community maintainability.


Background

Rowsandall.com will be shut down by end of 2026. Among its features, the measured courses system is not replicated elsewhere and serves two distinct audiences:

  • On-the-water rowers using CrewNerd (iOS), who sync polygon-defined courses to their phone for real-time navigation and automatic course timing.
  • Challenge organisers who run time-windowed GPS speed orders with handicap scoring across boat classes, age groups, and genders.

This document specifies a replacement that preserves both use cases across two staged deliveries.

The new platform is explicitly a companion to intervals.icu, not a standalone product. intervals.icu serves as the identity provider: users log in via intervals.icu OAuth, so the platform never manages credentials or user accounts. This keeps scope tightly bounded and makes the relationship with intervals.icu structural rather than just stated.


Key references

  • Rowsandall source code (https://git.wereldraadsel.tech/sander/rowsandall): reference implementation. Especially rowers/courses.py, rowers/courseutils.py, rowers/scoring.py, rowers/urls.py, rowers/models.py.
  • intervals.icu API docs (https://intervals.icu/api-docs.html): OAuth flow, activity streams endpoint.
  • intervals.icu forum — OAuth (https://forum.intervals.icu/t/intervals-icu-oauth-support/2759): OAuth setup details.
  • intervals.icu forum — extending (https://forum.intervals.icu/t/extending-intervals-icu/46565): extension/widget framework.
  • intervals.icu forum — rowing migration (https://forum.intervals.icu/t/support-for-rowing-data-migrating-from-rowsandall-com/117915): community context and priorities.
  • CrewNerd integration blog post (https://analytics.rowsandall.com/2024/04/16/rowsandall-crewnerd-courses/): full description of the CrewNerd ↔ Rowsandall course sync protocol.
  • Cloudflare Workers docs (https://developers.cloudflare.com/workers/): runtime, wrangler CLI, D1, KV.
  • Cloudflare D1 docs (https://developers.cloudflare.com/d1/): SQLite at the edge, migrations, local dev.
  • Cloudflare KV docs (https://developers.cloudflare.com/kv/): per-user liked-course storage.
  • Overpass API (https://overpass-api.de/): optional water-proximity check (not used in automated CI).
  • OpenLayers / Leaflet.js (https://openlayers.org, https://leafletjs.com): course map browser.
  • turf.js (https://turfjs.org): client-side polygon intersection for course time calculation.
  • KML spec (https://developers.google.com/kml/documentation): wire format used by CrewNerd.

Architecture overview

GitHub (data + code)          Cloudflare (compute + state)      Clients
─────────────────────         ──────────────────────────────    ────────
courses-library repo          Worker (TypeScript)               CrewNerd (iOS)
  courses/*.json        ←──   serves KML, liked, challenge UI   intervals.icu
  kml/*.kml (cached)    ──→   D1 (SQLite)                       Browser
  site/ (Leaflet)             KV (liked courses per athlete ID)
GitHub Pages (static)    ↑
  map browser            └─── intervals.icu OAuth (identity + GPS)
  leaderboard pages (S2)

Authentication model:

Users log in to the new platform via intervals.icu OAuth. The platform issues its own session token after the OAuth handshake, but never stores passwords or manages registration. The intervals.icu athlete_id is the stable user identifier throughout. CrewNerd continues to authenticate via API key (issued by the new platform after first login, stored in KV against the athlete ID).

This design means:

  • No registration form, no email verification, no password reset to build or maintain.
  • Rowsandall users who already use intervals.icu can log in immediately with no new account.
  • The platform's scope remains clearly bounded as a companion service.

Guiding constraints:

  • Monthly cost: €0 within Cloudflare free tier for realistic rowing-community traffic.
  • Community maintainability: any contributor can run the full stack locally with wrangler dev; no server credentials required.
  • CrewNerd compatibility: existing CrewNerd users experience zero disruption — only a base URL change in the app.
  • No single point of human failure: the platform remains functional without active maintainer involvement.
  • Identity delegated to intervals.icu: the platform never stores passwords or manages user accounts.

Stage 1 — Course library and CrewNerd integration

Goal: Replace the four CrewNerd-facing API endpoints from Rowsandall with a low-cost, maintainable equivalent. Migrate the existing course database. Unblock CrewNerd users on day one of Rowsandall shutdown.

1.1 Course data model

Each course is stored as a single JSON file in the courses/ directory of the library repository. The format mirrors the Rowsandall GeoCourse / GeoPolygon / GeoPoint model, flattened:

{
  "id": "001",
  "name": "Amstel Buiten",
  "country": "NL",
  "center_lat": 52.3512,
  "center_lon": 4.9284,
  "distance_m": 1500,
  "notes": "Optional description shown in CrewNerd and on the course page.",
  "status": "established",
  "polygons": [
    {
      "name": "Start",
      "order": 0,
      "points": [
        {"lat": 52.3500, "lon": 4.9270},
        {"lat": 52.3505, "lon": 4.9275},
        {"lat": 52.3495, "lon": 4.9280}
      ]
    },
    {
      "name": "Finish",
      "order": 1,
      "points": [
        {"lat": 52.3520, "lon": 4.9300},
        {"lat": 52.3525, "lon": 4.9305},
        {"lat": 52.3515, "lon": 4.9310}
      ]
    }
  ]
}

status values:

  • provisional: submitted and structurally valid; not yet proven in a timed row. Served to CrewNerd normally.
  • established: has been used for at least one timed result, or manually endorsed by a curator.

A GitHub Actions workflow maintains courses/index.json — a flat array of all course metadata (id, name, country, center, distance, status) without the polygon detail. This index is what the Worker queries for the course list and near-duplicate detection. The polygons are only fetched when generating KML for a specific course.
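
For illustration, a single entry in index.json would carry the metadata fields listed above. The exact shape is determined by scripts/generate_index.py; this sketch reuses the course schema's key names:

```json
[
  {
    "id": "001",
    "name": "Amstel Buiten",
    "country": "NL",
    "center_lat": 52.3512,
    "center_lon": 4.9284,
    "distance_m": 1500,
    "status": "established"
  }
]
```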

1.2 Course validation (CI, automated)

On every PR that adds or modifies a file under courses/, the Actions workflow runs scripts/validate_course.py. This script performs two checks only:

Check 1 — Structural validity (hard failure, PR blocked):

  • File is valid JSON conforming to the schema above.
  • At least two polygons present.
  • Each polygon has at least three points.
  • Each polygon has non-zero area (cross-product check).
  • No self-intersecting polygon edges.

Check 2 — Distance sanity (hard failure, PR blocked):

  • Total course length (sum of centroid-to-centroid distances along the polygon chain) is between 100 m and 25 000 m.
  • No two consecutive centroids are more than 5 000 m apart.
  • No two polygons overlap (bounding-box pre-check, then full intersection).

Both checks use only standard Python geometry — no external API calls. Failures produce a human-readable error message attached to the PR as a comment.
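
The geometry primitives behind these checks are small. A sketch in TypeScript for consistency with the Worker code elsewhere in this document (the CI script itself is Python; the helper names are illustrative, not the actual script's API):

```typescript
type Pt = { lat: number; lon: number };

// Shoelace formula; a zero result means a degenerate (collinear) polygon.
function polygonArea(points: Pt[]): number {
  let sum = 0;
  for (let i = 0; i < points.length; i++) {
    const a = points[i], b = points[(i + 1) % points.length];
    sum += a.lon * b.lat - b.lon * a.lat;
  }
  return Math.abs(sum) / 2;
}

// Simple vertex-average centroid, sufficient for the sanity checks.
function centroid(points: Pt[]): Pt {
  const n = points.length;
  return {
    lat: points.reduce((s, p) => s + p.lat, 0) / n,
    lon: points.reduce((s, p) => s + p.lon, 0) / n,
  };
}

// Haversine distance in metres, used for centroid-to-centroid spacing.
function haversineM(a: Pt, b: Pt): number {
  const R = 6371000, rad = Math.PI / 180;
  const dLat = (b.lat - a.lat) * rad, dLon = (b.lon - a.lon) * rad;
  const h = Math.sin(dLat / 2) ** 2 +
    Math.cos(a.lat * rad) * Math.cos(b.lat * rad) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}
```

The total-course-length check is then the sum of haversineM over consecutive polygon centroids, compared against the 100 m and 25 000 m bounds.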

Courses that pass both checks are auto-merged and deployed. Status is set to provisional on creation. Promotion to established is a manual label change requiring curator role.

What is explicitly not checked automatically: whether the course is on water, whether it duplicates an existing course, and whether the polygon design is navigationally sensible. These are left to community feedback and the provisional/established distinction.

1.3 Course submission flow

  1. Submitter visits the GitHub Pages course browser (/submit), uploads a KML file (from Google Earth or CrewNerd export), fills in name and country, submits.
  2. Worker endpoint POST /api/courses/submit receives the KML, parses it using the kmltocourse() logic from courses.py, converts to the JSON schema, and opens a draft PR to the library repo via the GitHub API (using a GitHub App installation token stored as a Worker secret).
  3. GitHub Actions runs validation. Pass → PR auto-merged. Fail → PR stays open with error comment; submitter notified via email if provided.
  4. On merge, Actions redeploys Pages and regenerates the KML cache.

Alternatively, technically confident submitters may open a PR directly.

1.4 Authentication and API key issuance (Stage 1)

Even in Stage 1, users need a way to log in to like courses and have those likes synced to CrewNerd. The full intervals.icu OAuth flow is implemented from the start — it is not deferred to Stage 2.

Login flow:

  1. User visits the course browser and clicks "Sign in with intervals.icu".
  2. Worker redirects to https://intervals.icu/oauth/authorize?client_id=...&scope=PROFILE_READ&response_type=code.
  3. intervals.icu redirects to GET /oauth/callback?code=....
  4. Worker exchanges code for tokens, fetches the athlete profile (GET /api/v1/athlete/self), and stores the session in D1 user_sessions.
  5. Worker issues a platform API key (random 32-byte hex, stored in KV as apikey:{key} → athlete_id) and sets it as a secure cookie.

The CrewNerd API key is this platform-issued key — the same scheme as the current Rowsandall API key. It never expires unless the user explicitly revokes it or re-generates it from their profile page.

In Stage 1, the only scope required is PROFILE_READ (to identify the athlete). ACTIVITY_READ is added in Stage 2 for GPS validation. Both scopes should be requested in Stage 1 to avoid a second OAuth prompt when Stage 2 launches, if intervals.icu supports incremental scope grants — confirm with David Tinker (see open question 7).

D1 tables required in Stage 1:

The user_sessions table (see Stage 2 schema) is created in Stage 1. The is_organizer column defaults to 0 and is unused until Stage 2.

1.5 KML generation

The Worker generates KML from the JSON course data on request, implementing the same output format as coursetokml() / getcoursefolder() in courses.py. Key requirements:

  • Polygon coordinates in lon,lat,0 format (KML convention — note longitude first).
  • Polygon points sorted counterclockwise (matching sort_coordinates_ccw() in courses.py).
  • When ?cn=true query parameter is set, polygon names are renamed to CrewNerd convention: first → Start, last → Finish, intermediate → WP1, WP2, etc. This matches the crewnerdify() function.
  • KML envelope includes Style and StyleMap elements with the standard Rowsandall cyan fill (ff7fffff) and outline (ff00ffff).

Pre-generated KML files are stored in kml/{id}.kml in the repository and served as static files where possible. The Worker falls back to on-the-fly generation if the static file is absent (e.g. for very recently merged courses before the next deploy).
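
The two CrewNerd-specific transforms above are small enough to sketch directly (helper names are illustrative, not the actual port of courses.py):

```typescript
// KML convention: longitude first, altitude zero.
function kmlCoordinate(p: { lat: number; lon: number }): string {
  return `${p.lon},${p.lat},0`;
}

// CrewNerd naming per crewnerdify(): first polygon → "Start",
// last → "Finish", intermediates → "WP1", "WP2", …
function crewnerdifyNames(names: string[]): string[] {
  return names.map((_, i) => {
    if (i === 0) return "Start";
    if (i === names.length - 1) return "Finish";
    return `WP${i}`;
  });
}
```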

1.6 CrewNerd API surface

These four endpoints must be present and respond identically to the current Rowsandall endpoints. Authentication is via the Authorization: ApiKey {key} header, matching the existing Rowsandall API key scheme. The API key is looked up in KV (apikey:{key} → athlete_id) to identify the user.

GET  /api/courses/

Returns a JSON array of course metadata from index.json, filtered to status: established | provisional. Supports optional ?lat=&lon=&radius= query parameters for geographic filtering (haversine against center_lat/center_lon).

GET  /api/courses/{id}/

Returns KML for a single course. Accepts ?cn=true for CrewNerd polygon naming. Course ID is the string identifier from the JSON filename (e.g. "001").

GET  /api/courses/kml/liked/

Returns a single KML document containing all courses the authenticated user has liked, as separate <Folder> elements. Liked course IDs are read from Cloudflare KV key liked:{athlete_id}.

GET  /api/courses/kml/

Returns a KML document for the course IDs specified in the ?ids=1,2,3 query parameter.

POST /rowers/courses/{id}/follow/
POST /rowers/courses/{id}/unfollow/

Add or remove a course ID from the liked:{athlete_id} KV entry. Return 200 on success.
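
The follow/unfollow handlers are a read-modify-write on that KV entry. The pure list logic might look like this (function names are illustrative):

```typescript
// The KV value is a JSON array of course IDs; null means "never liked anything".
function addLiked(currentJson: string | null, courseId: string): string {
  const liked: string[] = JSON.parse(currentJson ?? "[]");
  if (!liked.includes(courseId)) liked.push(courseId);
  return JSON.stringify(liked);
}

function removeLiked(currentJson: string | null, courseId: string): string {
  const liked: string[] = JSON.parse(currentJson ?? "[]");
  return JSON.stringify(liked.filter(id => id !== courseId));
}
```

KV is eventually consistent and last-write-wins with no transactions, so two near-simultaneous follows from the same user could race; for a per-user liked list this is an acceptable trade-off.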

1.7 Course map browser (GitHub Pages)

A single-page Leaflet application served from the site/ directory. Features:

  • Map centred on user geolocation on first load, falling back to world view.
  • Loads courses/index.json on init; renders a marker per course.
  • Clicking a marker shows course name, distance, country, status badge, and a link to the course detail page.
  • Course detail page fetches the full JSON and renders the polygon chain on the map.
  • Filter controls: country dropdown, distance range slider, status toggle (provisional / established / both).
  • Search by name (client-side, against the index).
  • "Submit a course" link leading to the submission form.
  • KML download button per course (links to kml/{id}.kml).

No backend calls needed for browsing — entirely static.

1.8 Infrastructure setup (Stage 1)

Repositories:

  • rowing-courses — course data library (JSON files, KML cache, Leaflet site, validation scripts, Actions workflows).
  • rowing-courses-worker — Cloudflare Worker TypeScript source, wrangler.toml, D1 migration files (empty at Stage 1).

Both repos are public on GitHub under a shared organisation (e.g. rowing-courses or similar — to be decided). The organisation has at least two admin members to keep the bus factor above one.

Cloudflare resources:

  • One Worker (free tier: 100k requests/day, 10ms CPU per invocation).
  • One KV namespace: ROWING_COURSES (stores apikey:{key} → athlete_id, and liked:{athlete_id} → JSON array of course IDs).
  • One D1 database: rowing-courses-db (created in Stage 1 for user_sessions; extended in Stage 2 for challenges and results).
  • One GitHub App (for opening PRs from the Worker) — alternatively a fine-grained Personal Access Token scoped to the library repo only.
  • Worker secret: INTERVALS_CLIENT_ID, INTERVALS_CLIENT_SECRET (needed from Stage 1 for login).

GitHub Actions workflows:

  • validate.yml — triggered on PRs modifying courses/**; runs validate_course.py; posts result as PR comment; auto-merges on pass.
  • deploy.yml — triggered on push to main; regenerates index.json and kml/*.kml; deploys to GitHub Pages.

Local development:

git clone https://github.com/rowing-courses/rowing-courses-worker
cd rowing-courses-worker
npm install
wrangler d1 execute rowing-courses-db --local --file=migrations/001_sessions.sql
wrangler dev          # Worker on localhost:8787 with local KV and D1

No external credentials needed for local development. The Worker fetches course data from the live GitHub raw URLs by default; a LOCAL_COURSES_PATH env variable can redirect to a local checkout of the library repo.

1.9 Data migration from Rowsandall

Migration separates cleanly into two categories with different handling: course geometry (not personal data, bulk-export immediately) and user state (personal data, requires explicit user consent).

Course geometry — bulk export, no consent needed:

Course polygon data is geographic fact, not personal data. All courses are exported unconditionally using scripts/export_from_rowsandall.py, which reads the Rowsandall Django database (via ORM or database dump) and writes one JSON file per GeoCourse. All migrated courses receive status: established. Course authorship is not transferred — the submitted_by field is set to "migrated from Rowsandall" for all migrated courses. This script should be run while Rowsandall is still live so the output can be verified against known course times.

User state (liked courses, course ownership) — self-service migration:

Liked-course lists and ownership are personal data under GDPR: they reveal training locations and habits. These are not transferred server-to-server. Instead, Rowsandall provides a self-service export, and users upload it themselves to the new platform.

A "Download my courses" button is added to the Rowsandall courses page (a modest addition to the existing Django app, building on the already-functional /courses/{id}/downloadkml/ endpoint). It produces a ZIP containing:

my-rowsandall-courses.zip
├── courses/
│   ├── 066-amstel-buiten.kml    ← courses this user owns
│   └── 123-charles-river.kml
└── manifest.json                ← {"owned": ["066", "123"], "liked": ["066", "123", "200"]}

No account data, email addresses, or activity data is included in the export.

On the new platform, an authenticated user (logged in via intervals.icu OAuth) uploads this ZIP. The Worker:

  1. Parses manifest.json.
  2. Submits owned courses as provisional PR entries (same pipeline as a normal course submission).
  3. Restores the liked-course list by writing liked:{athlete_id} to KV.

Users are notified of this migration path via the Rowsandall shutdown announcement and a banner on the courses page. The export ZIP can be generated at any time before Rowsandall shuts down.

What is not migrated: challenge history, race results, and workout data. Challenge history could be migrated as a separate effort (see Stage 2 delivery checklist) but is lower priority than the course library and live user state.

1.10 Stage 1 delivery checklist

Rowsandall (prerequisite work, separate scope):

  • scripts/export_from_rowsandall.py — bulk course geometry export to JSON
  • "Download my courses" ZIP export button in Rowsandall Django app

Course library repo:

  • Repository created under GitHub organisation
  • Course JSON schema and example files
  • scripts/validate_course.py (structural + distance checks)
  • scripts/generate_index.py (regenerates courses/index.json)
  • scripts/generate_kml.py (regenerates kml/*.kml cache)
  • validate.yml GitHub Actions workflow (PR validation + auto-merge)
  • deploy.yml GitHub Actions workflow (Pages deployment)
  • Initial course data committed (migrated from Rowsandall)

Cloudflare Worker:

  • wrangler.toml with KV and D1 bindings
  • migrations/001_sessions.sql (user_sessions table)
  • intervals.icu OAuth login flow (GET /oauth/authorize, GET /oauth/callback)
  • Platform API key issuance and storage in KV
  • GET /api/courses/ — course index with geo filtering
  • GET /api/courses/{id}/ — single course KML
  • GET /api/courses/kml/liked/ — liked courses KML bundle
  • GET /api/courses/kml/ — multi-course KML bundle
  • POST /rowers/courses/{id}/follow/ and /unfollow/
  • POST /api/courses/submit — KML upload → GitHub PR
  • POST /api/courses/import-zip — ZIP import (owned courses + liked list)
  • KML generation logic (port of courses.py: coursetokml, getcoursefolder, crewnerdify, sort_coordinates_ccw)

GitHub Pages site:

  • Leaflet map browser with course markers and detail view
  • Filter controls (country, distance, status)
  • Course submission form (KML upload)
  • ZIP import form (for Rowsandall migrants)
  • "Sign in with intervals.icu" link

Stage 2 — Challenges and leaderboards

Goal: Replace the Rowsandall challenge / virtual race / speed order functionality. Full handicap scoring, time-windowed row and submission windows, GPS validation via intervals.icu, organiser moderation tools.

Stage 2 extends the same Worker. D1 already exists from Stage 1 (user_sessions table); Stage 2 adds the challenges, results, and standards tables via new migration files. No new hosting infrastructure is required.

2.1 Data model (D1 — SQLite)

All migrations are versioned SQL files in worker/migrations/ and applied via wrangler d1 migrations apply.

challenges table:

CREATE TABLE challenges (
  id           TEXT PRIMARY KEY,          -- uuid
  name         TEXT NOT NULL,
  course_id    TEXT NOT NULL,             -- references course JSON id
  row_start    TEXT NOT NULL,             -- ISO 8601 datetime
  row_end      TEXT NOT NULL,
  submit_end   TEXT NOT NULL,
  collection_id TEXT,                     -- fk → standard_collections.id (nullable)
  organizer_id TEXT NOT NULL,             -- intervals.icu athlete id
  is_public    INTEGER NOT NULL DEFAULT 1,
  notes        TEXT,
  created_at   TEXT NOT NULL
);

challenge_results table:

CREATE TABLE challenge_results (
  id              TEXT PRIMARY KEY,
  challenge_id    TEXT NOT NULL REFERENCES challenges(id),
  athlete_id      TEXT NOT NULL,           -- intervals.icu athlete id
  activity_id     TEXT NOT NULL,           -- intervals.icu activity id
  raw_time_s      REAL NOT NULL,
  standard_id     TEXT,                    -- fk → course_standards.id (nullable)
  corrected_time_s REAL,
  start_time      TEXT NOT NULL,           -- actual row start (from GPS)
  validation_status TEXT NOT NULL DEFAULT 'pending',
                                           -- pending | valid | invalid | manual_ok | dq
  validation_note TEXT,                    -- human-readable gate-by-gate log for debugging and athlete feedback
  submitted_at    TEXT NOT NULL
);

standard_collections table:

CREATE TABLE standard_collections (
  id        TEXT PRIMARY KEY,
  name      TEXT NOT NULL,
  notes     TEXT,
  is_public INTEGER NOT NULL DEFAULT 0,
  owner_id  TEXT NOT NULL               -- intervals.icu athlete id
);

course_standards table:

CREATE TABLE course_standards (
  id              TEXT PRIMARY KEY,
  collection_id   TEXT NOT NULL REFERENCES standard_collections(id),
  name            TEXT NOT NULL,
  boat_class      TEXT NOT NULL,  -- water | rower | dynamic | coastal | c-boat | churchboat
  boat_type       TEXT NOT NULL,  -- 1x | 2x | 2- | 4x | 4- | 4+ | 8+ | etc.
  sex             TEXT NOT NULL,  -- male | female | mixed
  weight_class    TEXT NOT NULL,  -- hwt | lwt
  age_min         INTEGER NOT NULL DEFAULT 0,
  age_max         INTEGER NOT NULL DEFAULT 120,
  adaptive_class  TEXT NOT NULL DEFAULT 'None',
  skill_class     TEXT NOT NULL DEFAULT 'Open',
  course_distance REAL NOT NULL,
  reference_speed REAL NOT NULL   -- m/s, computed from standard time at upload
);

user_sessions table:

CREATE TABLE user_sessions (
  session_token        TEXT PRIMARY KEY,
  athlete_id           TEXT NOT NULL,
  access_token_enc     TEXT NOT NULL,   -- AES-GCM encrypted
  refresh_token_enc    TEXT NOT NULL,
  expires_at           TEXT NOT NULL,
  is_organizer         INTEGER NOT NULL DEFAULT 0
);

2.2 Handicap scoring

The scoring logic from rowers/scoring.py translates directly. At result submission, the Worker:

  1. Looks up the challenge's collection_id. If null, no handicap is applied (corrected_time_s = raw_time_s).
  2. Queries course_standards for the row matching the athlete's declared category (boat class, type, sex, weight, age). Falls back to the open heavyweight male row for the same boat class if no exact match.
  3. Computes: corrected_time_s = raw_time_s × (athlete_reference_speed / baseline_reference_speed) where baseline is the open HWT male 1x standard for the course distance.
  4. Stores both raw_time_s and corrected_time_s. The leaderboard UI can toggle between the two.

Standard collections are uploaded as CSV (identical format to the existing Rowsandall import in scoring.py): columns name, boatclass, boattype, sex, weightclass, agemin, agemax, adaptiveclass, skillclass, coursedistance, coursetime. The Worker parses the CSV, computes reference_speed per row (coursedistance / coursetime_in_seconds), and inserts into course_standards.
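
In code, the per-row computation reduces to two one-liners (names illustrative, not the actual port of scoring.py):

```typescript
// reference_speed in m/s, computed once at CSV upload time.
function referenceSpeed(courseDistanceM: number, courseTimeS: number): number {
  return courseDistanceM / courseTimeS;
}

// Handicap correction from step 3 above.
function correctedTimeS(
  rawTimeS: number,
  athleteRefSpeed: number,
  baselineRefSpeed: number
): number {
  return rawTimeS * (athleteRefSpeed / baselineRefSpeed);
}
```

For example, a category with reference speed 4.0 m/s against a baseline of 4.5 m/s turns a 400 s raw time into 400 × (4.0 / 4.5) ≈ 355.6 s corrected.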

2.3 GPS validation via intervals.icu OAuth

The OAuth infrastructure is already in place from Stage 1 (login). Stage 2 extends the token scope to include ACTIVITY_READ and uses the stored access token to fetch GPS data for course time validation.

If intervals.icu supports requesting multiple scopes in the initial grant (confirm with David Tinker — see open question 7), both PROFILE_READ and ACTIVITY_READ should be requested at Stage 1 login to avoid a re-authorisation prompt in Stage 2.

Result submission flow:

  1. Athlete navigates to a challenge page and clicks "Submit result" (requires login).
  2. Worker presents a list of the athlete's recent activities fetched from GET /api/v1/athlete/{id}/activities.
  3. Athlete selects the relevant activity.
  4. Worker fetches GPS stream: GET /api/v1/athlete/{id}/activities/{activity_id}/streams?streams=latlng,time.
  5. Worker runs the course validation pipeline (see below).
  6. Validates against the challenge time window: start_time must fall between row_start and row_end.
  7. Sets validation_status = valid and raw_time_s from the computed elapsed time. Any failure sets validation_status = invalid with a descriptive validation_note. The validation log is stored in validation_note so organisers and athletes can see exactly which gates were passed and when.

Submission window enforcement is a simple timestamp check before any GPS fetching: now() > challenge.submit_end returns 403 immediately.
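
That check is deliberately trivial. A sketch (both values are ISO 8601 strings as stored in D1; the function name is illustrative):

```typescript
// Reject before doing any GPS fetching; a closed window means HTTP 403.
function submissionOpen(nowIso: string, submitEndIso: string): boolean {
  return Date.parse(nowIso) <= Date.parse(submitEndIso);
}
```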

Course validation pipeline

The authoritative reference implementation is handle_check_race_course() in rowers/tasks.py (line 1299). The TypeScript port must preserve all of the following behaviour.

Step 1 — GPS track preparation:

The intervals.icu stream returns latlng (array of [lat, lon] pairs) and time (array of elapsed seconds) as separate arrays. These are zipped into an array of {lat, lon, time} objects, then the track is resampled to 100ms resolution using linear interpolation between consecutive samples:

function interpolateTrack(
  points: {lat: number, lon: number, time: number}[],
  intervalMs = 100
): {lat: number, lon: number, time: number}[] {
  if (points.length < 2) return [...points];  // guard: nothing to interpolate
  const result = [];
  for (let i = 0; i < points.length - 1; i++) {
    const a = points[i], b = points[i + 1];
    const steps = Math.max(1, Math.ceil((b.time - a.time) * 1000 / intervalMs));
    for (let s = 0; s < steps; s++) {
      const t = s / steps;
      result.push({
        lat: a.lat + t * (b.lat - a.lat),
        lon: a.lon + t * (b.lon - a.lon),
        time: a.time + t * (b.time - a.time),
      });
    }
  }
  result.push(points[points.length - 1]);
  return result;
}

This step is critical. GPS watches typically record at 1s intervals, giving ~4m resolution at rowing pace. Without interpolation, a narrow gate polygon can be missed entirely if consecutive samples land on either side of it with none inside. At 100ms resolution the gap is ~40cm, sufficient for any gate polygon in practice. The Python version uses pandas resample('100ms') + interpolate() which is slow due to DataFrame overhead; the TypeScript array implementation performs the identical calculation in single-digit milliseconds.

Step 2 — Multi-pass detection:

Many rowers warm up by rowing through part or all of the course before their actual timed attempt. The algorithm must not use the first passage through the start polygon — it must find the best completed passage.

All entry times through the start polygon are found (not just the first), using time_in_path(..., getall=True). For each entry time, the algorithm attempts to complete the full polygon chain from that point forward using coursetime_paths(). All completed attempts are collected; if none complete, the course is marked invalid.

Step 3 — Net time calculation:

For each completed attempt:

  • endsecond = time of exit through finish polygon
  • startsecond = time of exit through start polygon (not entry — the clock starts when the rower clears the start gate, matching real racing practice)
  • net_time = endsecond − startsecond

The best (lowest) net time across all completed attempts is the official result.
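
Selecting the official result from the completed attempts is then a one-pass minimum (the Attempt shape is illustrative, not the actual port):

```typescript
interface Attempt {
  completed: boolean;
  startExitS: number;   // exit time through the start polygon
  finishExitS: number;  // exit time through the finish polygon
}

// Best (lowest) net time over all completed passes; null if none completed.
function bestNetTime(attempts: Attempt[]): number | null {
  const times = attempts
    .filter(a => a.completed)
    .map(a => a.finishExitS - a.startExitS);
  return times.length ? Math.min(...times) : null;
}
```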

Step 4 — Logging:

A per-submission log is written recording: each start polygon entry time found, each gate passage time, whether each attempt completed, and the final selected time. This log is stored in validation_note in D1. It serves two purposes:

  • Organisers can inspect it to verify or override a result.
  • Athletes whose submission was rejected can see exactly which gate they missed and approximately where their GPS track diverged from the course.

The log format should be human-readable plain text, matching the style of the existing Rowsandall course log files. Example:

Course id 66, Record id 12345
Found 2 entrytimes
Path starting at 142.3s
  Gate 0 (Start): passed at 142.3s, 0m
  Gate 1 (WP1): passed at 198.7s, 245m
  Gate 2 (Finish): passed at 287.1s, 498m
  Course completed: true, net time: 144.8s
Path starting at 412.1s
  Gate 0 (Start): passed at 412.1s, 0m
  Gate 1 (WP1): passed at 469.4s, 247m
  Gate 2 (Finish): passed at 554.2s, 501m
  Course completed: true, net time: 142.1s
Best time: 142.1s (attempt 2)

Step 5 — Points calculation (handicap):

velo = course_distance_m / net_time_s
points = 100 × (2 − reference_speed / velo)

Where reference_speed is the athlete's registered category reference speed from course_standards. Points are stored alongside raw and corrected times.
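
Worked through in code (names illustrative): an athlete rowing exactly at their category's reference speed scores 100 points, and faster rows score above 100.

```typescript
function points(
  courseDistanceM: number,
  netTimeS: number,
  referenceSpeedMs: number
): number {
  const velo = courseDistanceM / netTimeS;      // achieved speed, m/s
  return 100 * (2 - referenceSpeedMs / velo);   // formula from step 5
}
```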

Point-in-polygon implementation:

Both coordinate_in_path() and coursetime_paths() from rowers/courseutils.py are ported directly to TypeScript. The point-in-polygon test uses the standard ray casting algorithm — no external library needed. The recursive structure of coursetime_paths() (pass start, slice remaining track, recurse for next gate) is preserved exactly.
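
A self-contained version of the ray casting test, in the classic PNPOLY formulation (the actual port should follow coordinate_in_path() exactly; this is a reference sketch):

```typescript
type LatLon = { lat: number; lon: number };

// Count crossings of a horizontal ray from the point; odd means inside.
function pointInPolygon(p: LatLon, polygon: LatLon[]): boolean {
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const a = polygon[i], b = polygon[j];
    const crosses =
      (a.lat > p.lat) !== (b.lat > p.lat) &&
      p.lon < ((b.lon - a.lon) * (p.lat - a.lat)) / (b.lat - a.lat) + a.lon;
    if (crosses) inside = !inside;
  }
  return inside;
}
```

At gate-polygon scale (tens of metres) treating lat/lon as planar coordinates is accurate enough; no projection step is needed.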

2.4 Challenge organiser interface

The Worker serves a minimal HTML/CSS organiser panel at /organiser/ — no JavaScript framework, plain form posts. Access requires is_organizer = 1 on the session record. The is_organizer flag is set manually by an admin via a protected POST /admin/grant-organiser endpoint.

Organiser capabilities:

  • Create a challenge (form: name, select course from library, row window dates, submission deadline, optional standard collection).
  • Upload a handicap CSV (parsed and stored in a new standard_collection linked to the challenge).
  • View the moderation panel for their challenges: each result shown with validation_status, raw_time_s, corrected_time_s, and a map link showing the GPS track against the course polygons.
  • Override validation_status from invalid to manual_ok with a mandatory note (audit trail).
  • Disqualify a result (sets status to dq).
  • Download results as CSV.

2.5 Leaderboard pages

Challenge leaderboard pages are served at /challenges/{id}/ by the Worker. They are publicly accessible without authentication. The page renders:

  • Challenge metadata (course name, row window, submission deadline, status: open / closed).
  • Results table, sortable by raw time or corrected time, filtered by boat class / sex.
  • Map showing the course polygons (Leaflet, loads from course JSON).
  • Each result links to the intervals.icu activity if the athlete's profile is public.

For challenges with is_public = 0, the leaderboard requires the athlete's session token to be present (private club challenges).

2.6 Infrastructure additions (Stage 2)

New Cloudflare resources:

  • Additional Worker secrets: TOKEN_ENCRYPTION_KEY (for encrypting stored OAuth tokens in D1).

INTERVALS_CLIENT_ID and INTERVALS_CLIENT_SECRET are already present from Stage 1. The D1 database is already provisioned. Stage 2 adds only migration files and the encryption key secret.

intervals.icu OAuth scope addition:

  • If ACTIVITY_READ was not included in the Stage 1 OAuth grant, users will need to re-authorise when they first attempt to submit a challenge result. This is a minor friction point, not a blocker.

Local development with Stage 2 migrations:

wrangler d1 execute rowing-courses-db --local --file=migrations/002_challenges.sql
wrangler d1 execute rowing-courses-db --local --file=migrations/003_standards.sql
wrangler dev

2.7 Stage 2 delivery checklist

  • D1 migration files (002_challenges.sql, 003_standards.sql)
  • TOKEN_ENCRYPTION_KEY secret; encrypt/decrypt helpers for stored tokens
  • Extend OAuth token scope to ACTIVITY_READ (or confirm it was included at Stage 1)
  • Activity list fetch from intervals.icu
  • GPS stream fetch from intervals.icu
  • GPS track interpolation to 100ms resolution (interpolateTrack())
  • Point-in-polygon ray casting (pointInPolygon(), port of coordinate_in_path())
  • Single-gate time detection (timeInPath(), port of time_in_path())
  • Multi-gate course time (coursetimePaths(), port of coursetime_paths())
  • Multi-pass detection — all start entries, best completed time wins
  • Net time calculation (start exit to finish exit, not GPS-start to finish)
  • Validation log generation (gate-by-gate, stored in validation_note)
  • Points calculation (100 × (2 − reference_speed / velo))
  • Challenge CRUD endpoints
  • Standard collection CSV upload and parser
  • Handicap scoring computation
  • Result submission endpoint (with GPS validation)
  • Organiser panel HTML
  • Leaderboard page HTML
  • Admin grant-organiser endpoint
  • Migration of existing Rowsandall challenges and results (optional, lower priority)
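Of the geometry items above, the gate test is the core primitive. A sketch of the even-odd ray-casting test that pointInPolygon() (the port of coordinate_in_path()) would implement — the `LatLon` shape is an assumption; consult rowers/courseutils.py for the authoritative behaviour:

```typescript
type LatLon = { lat: number; lon: number };

// Even-odd ray casting: cast a horizontal ray from the point and count how
// many polygon edges it crosses; an odd count means the point is inside.
// Polygon vertices may be listed in either winding order.
function pointInPolygon(p: LatLon, polygon: LatLon[]): boolean {
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const a = polygon[i];
    const b = polygon[j];
    // Edge straddles the ray's latitude, and the crossing lies east of p.
    const crosses =
      (a.lat > p.lat) !== (b.lat > p.lat) &&
      p.lon < ((b.lon - a.lon) * (p.lat - a.lat)) / (b.lat - a.lat) + a.lon;
    if (crosses) inside = !inside;
  }
  return inside;
}
```

Gate polygons are small (tens of metres), so treating lat/lon as planar coordinates is accurate enough here; no projection is needed.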

Out of scope (both stages)

  • Rowing analytics (stroke rate charts, force curves, etc.) — targeted for intervals.icu native support.
  • Indoor challenge (Concept2 erg) — intervals.icu already handles erg data.
  • User account management and credential storage — identity is fully delegated to intervals.icu OAuth; the platform stores only the athlete ID and session token.
  • Workout upload / import — handled by intervals.icu directly (CrewNerd now exports there natively).
  • Email notifications — can be added as Stage 3 using Cloudflare Email Workers if needed.

Open questions for developer kickoff

  1. CrewNerd base URL configurability. Confirm with Tony Andrews (CrewNerd) whether a configurable Rowsandall base URL already exists in the app, or whether a CrewNerd release is needed before Stage 1 is useful. This affects the Stage 1 deadline and should be the first external conversation to have.

  2. intervals.icu OAuth app registration. Confirm with David Tinker (@david on the intervals.icu forum) whether a community/open-source OAuth app can be registered for this project, or whether each instance operator registers separately. Also confirm the available scopes — specifically whether PROFILE_READ and ACTIVITY_READ can be requested in the same grant or require separate authorisation flows. The redirect URI will be https://{worker-domain}/oauth/callback.

  3. Course ID scheme. Rowsandall uses integer primary keys. The new library uses string identifiers derived from filenames. The migration script should assign stable IDs matching the original Rowsandall IDs (e.g. "066" for course 66) to preserve any bookmarked course URLs and to make the user ZIP migration (see point 8) unambiguous.

  4. Cloudflare account structure. Decision needed on whether to use an existing personal Cloudflare account (quick start, harder to transfer) or set up a dedicated organisation account (recommended for community maintainability, ~30 minutes overhead). Recommendation: dedicated account under a shared organisation email, with at least two account members from day one.

  5. GitHub organisation name. Needs to be chosen before any repos are created; renaming later breaks clone URLs for all contributors.

  6. Standard collection library. Should a set of canonical handicap tables (FISA masters, HOCR categories, KNRB) be included in the initial data migration, or left for organiser community upload? Including them reduces friction for the first challenge organisers to migrate.

  7. intervals.icu OAuth scope strategy. If PROFILE_READ and ACTIVITY_READ can be combined in a single OAuth grant, both should be requested at Stage 1 login. This avoids a re-authorisation prompt when Stage 2 launches. If intervals.icu only supports one scope per grant, Stage 2 users will need to re-authorise — acceptable but slightly awkward. Confirm with David Tinker before finalising the Stage 1 OAuth implementation.

  8. Rowsandall ZIP export feature. Build a "Download my courses" button in the existing Rowsandall Django app, producing a ZIP of owned course KML files plus a manifest.json with owned and liked course ID lists (no account data, no activity data). This is a Rowsandall deliverable, not a new-platform deliverable, and should be scoped and scheduled separately. The new platform's import endpoint (parse ZIP, submit courses as provisional PRs, restore liked list in KV) is a Stage 1 deliverable. The Rowsandall export should be live well before the shutdown announcement so users have time to act on it.
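One possible shape for the manifest.json inside the export ZIP — only the owned and liked course ID lists are specified above; the field names and version key are illustrative and should be fixed when the export is scoped:

```json
{
  "format_version": 1,
  "exported_at": "2026-01-15T12:00:00Z",
  "owned_courses": ["066", "104"],
  "liked_courses": ["012", "066"]
}
```

Keeping the IDs as zero-padded strings matching the original Rowsandall integer keys (per open question 3) makes the import endpoint's matching unambiguous.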