Rowing Courses Platform — Requirements & Technical Specification

Successor to the measured-courses feature of Rowsandall.com.
Companion service to intervals.icu. Designed for minimal hosting cost and community maintainability.


Background

Rowsandall.com will be shut down by end of 2026. Among its features, the measured courses system is not replicated elsewhere and serves two distinct audiences:

  • On-the-water rowers using CrewNerd (iOS), who sync polygon-defined courses to their phone for real-time navigation and automatic course timing.
  • Challenge organisers who run time-windowed GPS speed orders with handicap scoring across boat classes, age groups, and genders.

This document specifies a replacement that preserves both use cases across two staged deliveries.

The new platform is explicitly a companion to intervals.icu, not a standalone product. intervals.icu serves as the identity provider: users log in via intervals.icu OAuth, so the platform never manages credentials or user accounts. This keeps scope tightly bounded and makes the relationship with intervals.icu structural rather than just stated.


Key references

| Resource | URL | Relevance |
| --- | --- | --- |
| Rowsandall source code | https://git.wereldraadsel.tech/sander/rowsandall | Reference implementation. Especially: rowers/courses.py, rowers/courseutils.py, rowers/scoring.py, rowers/urls.py, rowers/models.py |
| intervals.icu API docs | https://intervals.icu/api-docs.html | OAuth flow, activity streams endpoint |
| intervals.icu forum — OAuth | https://forum.intervals.icu/t/intervals-icu-oauth-support/2759 | OAuth setup details |
| intervals.icu forum — extending | https://forum.intervals.icu/t/extending-intervals-icu/46565 | Extension/widget framework |
| intervals.icu forum — rowing migration | https://forum.intervals.icu/t/support-for-rowing-data-migrating-from-rowsandall-com/117915 | Community context and priorities |
| CrewNerd integration blog post | https://analytics.rowsandall.com/2024/04/16/rowsandall-crewnerd-courses/ | Full description of CrewNerd ↔ Rowsandall course sync protocol |
| Cloudflare Workers docs | https://developers.cloudflare.com/workers/ | Runtime, wrangler CLI, D1, KV |
| Cloudflare D1 docs | https://developers.cloudflare.com/d1/ | SQLite-at-edge, migrations, local dev |
| Cloudflare KV docs | https://developers.cloudflare.com/kv/ | Per-user liked-course storage |
| Overpass API | https://overpass-api.de/ | Optional: water-proximity check (not used in automated CI) |
| OpenLayers / Leaflet.js | https://openlayers.org / https://leafletjs.com | Course map browser |
| turf.js | https://turfjs.org | Client-side polygon intersection for course time calculation |
| KML spec | https://developers.google.com/kml/documentation | Wire format used by CrewNerd |

Architecture overview

GitHub (data + code)          Cloudflare (compute + state)      Clients
─────────────────────         ──────────────────────────────    ────────
courses-library repo          Worker (TypeScript)               CrewNerd (iOS)
  courses/*.json        ←──   serves KML, liked, challenge UI   intervals.icu
  kml/*.kml (cached)    ──→   D1 (SQLite)                       Browser
  site/ (Leaflet)             KV (liked courses per athlete ID)
GitHub Pages (static)    ↑
  map browser            └─── intervals.icu OAuth (identity + GPS)
  leaderboard pages (S2)

Authentication model:

Users log in to the new platform via intervals.icu OAuth. The platform issues its own session token after the OAuth handshake, but never stores passwords or manages registration. The intervals.icu athlete_id is the stable user identifier throughout. CrewNerd continues to authenticate via API key (issued by the new platform after first login, stored in KV against the athlete ID).

This design means:

  • No registration form, no email verification, no password reset to build or maintain.
  • Rowsandall users who already use intervals.icu can log in immediately with no new account.
  • The platform's scope remains clearly bounded as a companion service.

Guiding constraints:

  • Monthly cost: €0 within Cloudflare free tier for realistic rowing-community traffic.
  • Community maintainability: any contributor can run the full stack locally with wrangler dev; no server credentials required.
  • CrewNerd compatibility: existing CrewNerd users experience zero disruption — only a base URL change in the app.
  • No single point of human failure: the platform remains functional without active maintainer involvement.
  • Identity delegated to intervals.icu: the platform never stores passwords or manages user accounts.

Stage 1 — Course library and CrewNerd integration

Goal: Replace the four CrewNerd-facing API endpoints from Rowsandall with a low-cost, maintainable equivalent. Migrate the existing course database. Unblock CrewNerd users on day one of Rowsandall shutdown.

1.1 Course data model

Each course is stored as a single JSON file in the courses/ directory of the library repository. The format mirrors the Rowsandall GeoCourse / GeoPolygon / GeoPoint model, flattened:

{
  "id": "001",
  "name": "Amstel Buiten",
  "country": "NL",
  "center_lat": 52.3512,
  "center_lon": 4.9284,
  "distance_m": 1500,
  "notes": "Optional description shown in CrewNerd and on the course page.",
  "status": "established",
  "polygons": [
    {
      "name": "Start",
      "order": 0,
      "points": [
        {"lat": 52.3500, "lon": 4.9270},
        {"lat": 52.3505, "lon": 4.9275},
        {"lat": 52.3495, "lon": 4.9280}
      ]
    },
    {
      "name": "Finish",
      "order": 1,
      "points": [
        {"lat": 52.3520, "lon": 4.9300},
        {"lat": 52.3525, "lon": 4.9305},
        {"lat": 52.3515, "lon": 4.9310}
      ]
    }
  ]
}

status values:

| Value | Meaning |
| --- | --- |
| provisional | Submitted and structurally valid; not yet proven in a timed row. Served to CrewNerd normally. |
| established | Has been used for at least one timed result, or manually endorsed by a curator. |

A GitHub Actions workflow maintains courses/index.json — a flat array of all course metadata (id, name, country, center, distance, status) without the polygon detail. This index is what the Worker queries for the course list and near-duplicate detection. The polygons are only fetched when generating KML for a specific course.
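For the example course above, the corresponding index.json entry would look roughly like this (field names assumed to mirror the course schema; the exact shape is an illustration, not the committed format):

```json
[
  {
    "id": "001",
    "name": "Amstel Buiten",
    "country": "NL",
    "center_lat": 52.3512,
    "center_lon": 4.9284,
    "distance_m": 1500,
    "status": "established"
  }
]
```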

1.2 Course validation (CI, automated)

On every PR that adds or modifies a file under courses/, the Actions workflow runs scripts/validate_course.py. This script performs two checks only:

Check 1 — Structural validity (hard failure, PR blocked):

  • File is valid JSON conforming to the schema above.
  • At least two polygons present.
  • Each polygon has at least three points.
  • Each polygon has non-zero area (cross-product check).
  • No self-intersecting polygon edges.

Check 2 — Distance sanity (hard failure, PR blocked):

  • Total course length (sum of centroid-to-centroid distances along the polygon chain) is between 100 m and 25 000 m.
  • No two consecutive centroids are more than 5 000 m apart.
  • No two polygons overlap (bounding-box pre-check, then full intersection).

Both checks use only standard Python geometry — no external API calls. Failures produce a human-readable error message attached to the PR as a comment.
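The non-zero-area check translates directly to code. A sketch in TypeScript for illustration — the real implementation lives in the Python validate_course.py, and the helper names here are assumptions:

```typescript
// Shoelace formula: the signed area of a polygon is half the sum of the
// cross products of consecutive vertices. A (near-)zero result means the
// points are collinear or duplicated — the polygon fails Check 1.
interface Point { lat: number; lon: number; }

function polygonArea(points: Point[]): number {
  let sum = 0;
  for (let i = 0; i < points.length; i++) {
    const a = points[i];
    const b = points[(i + 1) % points.length];  // wrap around to close the ring
    sum += a.lon * b.lat - b.lon * a.lat;
  }
  return Math.abs(sum) / 2;  // squared degrees; only zero vs non-zero matters here
}

function hasNonZeroArea(points: Point[], epsilon = 1e-12): boolean {
  return polygonArea(points) > epsilon;
}
```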

Courses that pass both checks are auto-merged and deployed. Status is set to provisional on creation. Promotion to established is a manual label change requiring the curator role.

What is explicitly not checked automatically: whether the course is on water, whether it duplicates an existing course, and whether the polygon design is navigationally sensible. These are left to community feedback and the provisional/established distinction.

1.3 Course submission flow

  1. Submitter visits the GitHub Pages course browser (/submit), uploads a KML file (from Google Earth or CrewNerd export), fills in name and country, submits.
  2. Worker endpoint POST /api/courses/submit receives the KML, parses it using the kmltocourse() logic from courses.py, converts to the JSON schema, and opens a draft PR to the library repo via the GitHub API (using a GitHub App installation token stored as a Worker secret).
  3. GitHub Actions runs validation. Pass → PR auto-merged. Fail → PR stays open with error comment; submitter notified via email if provided.
  4. On merge, Actions redeploys Pages and regenerates the KML cache.

Alternatively, technically confident submitters may open a PR directly.

1.4 Authentication and API key issuance (Stage 1)

Even in Stage 1, users need a way to log in to like courses and have those likes synced to CrewNerd. The full intervals.icu OAuth flow is implemented from the start — it is not deferred to Stage 2.

Confirmed by David Tinker (intervals.icu): the OAuth bearer token doubles as a login indicator. A successful token exchange means the user is authenticated with intervals.icu. No separate server-side session table is needed — the Worker encrypts the athlete ID and access token together and stores them in an HTTP-only cookie.

Login flow:

  1. User visits the course browser and clicks "Sign in with intervals.icu".
  2. Worker redirects to https://intervals.icu/oauth/authorize?client_id=...&scope=PROFILE_READ,ACTIVITY_READ&response_type=code.
  3. intervals.icu redirects to GET /oauth/callback?code=....
  4. Worker exchanges code for tokens and fetches the athlete profile (GET /api/v1/athlete/self) to confirm identity and retrieve the athlete ID.
  5. Worker encrypts {athleteId, accessToken, refreshToken, expiresAt} using AES-GCM with TOKEN_ENCRYPTION_KEY.
  6. Worker sets the encrypted blob as an HTTP-only, Secure, SameSite=Lax cookie named rn_session.
  7. On all subsequent authenticated requests, the Worker decrypts the cookie to recover the athlete ID and access token. No D1 lookup needed.

Token refresh in Stage 1: On each authenticated request the Worker decrypts the cookie and checks expiresAt. If the access token has expired, the Worker calls POST https://intervals.icu/oauth/token with the refreshToken from the cookie, receives a new {accessToken, refreshToken, expiresAt}, and rewrites the rn_session cookie before continuing. The refresh token is never written to D1 or KV — the cookie is the only storage. This is sufficient for Stage 1.

D1 is not required in Stage 1. The cookie is entirely self-contained. D1 is introduced in Stage 2 for the is_organizer flag and challenge-related state. At that point the refresh token can optionally migrate to D1 (stored encrypted), which would enable server-side session revocation — currently not possible since there is no server-side session record to invalidate.

CrewNerd API key:

CrewNerd authenticates via Authorization: ApiKey {key} header — it cannot use cookies. Rather than storing API keys in KV or D1, the key is derived deterministically from the athlete ID using HMAC-SHA256 with TOKEN_ENCRYPTION_KEY as the secret:

async function apiKeyForAthlete(athleteId: string, secret: string): Promise<string> {
  const key = await crypto.subtle.importKey(
    'raw', new TextEncoder().encode(secret),
    { name: 'HMAC', hash: 'SHA-256' }, false, ['sign']
  );
  const sig = await crypto.subtle.sign(
    'HMAC', key, new TextEncoder().encode(athleteId)
  );
  const mac = btoa(String.fromCharCode(...new Uint8Array(sig)))
    .replace(/\+/g, '-').replace(/\//g, '_').replace(/=/g, '');
  return `${athleteId}.${mac}`;
}

The same function run twice on the same athlete ID always produces the same key. On an incoming CrewNerd API request, the Worker splits the key on ., recomputes the HMAC for the embedded athlete ID, and compares using constant-time equality — no storage or lookup needed.
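The split and constant-time comparison can be sketched as follows (helper names are illustrative; the XOR accumulator avoids returning early on the first mismatching character, which would leak timing information):

```typescript
// Constant-time string equality: accumulate all character differences
// before deciding, so comparison time does not depend on where they differ.
function constantTimeEqual(a: string, b: string): boolean {
  if (a.length !== b.length) return false;
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a.charCodeAt(i) ^ b.charCodeAt(i);
  }
  return diff === 0;
}

// Split a presented "athleteId.mac" key back into its parts.
function splitApiKey(key: string): { athleteId: string; mac: string } | null {
  const dot = key.indexOf('.');
  return dot > 0 ? { athleteId: key.slice(0, dot), mac: key.slice(dot + 1) } : null;
}
```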

Scopes: both PROFILE_READ and ACTIVITY_READ are requested at Stage 1 login. ACTIVITY_READ is not used until Stage 2 GPS validation, but requesting it upfront avoids a re-authorisation prompt when Stage 2 launches.

Stage 1 secrets required:

  • INTERVALS_CLIENT_ID
  • INTERVALS_CLIENT_SECRET
  • TOKEN_ENCRYPTION_KEY (used for both cookie encryption and API key derivation)

1.5 KML generation

The Worker generates KML from the JSON course data on request, implementing the same output format as coursetokml() / getcoursefolder() in courses.py. Key requirements:

  • Polygon coordinates in lon,lat,0 format (KML convention — note longitude first).
  • Polygon points sorted counterclockwise (matching sort_coordinates_ccw() in courses.py).
  • When ?cn=true query parameter is set, polygon names are renamed to CrewNerd convention: first → Start, last → Finish, intermediate → WP1, WP2, etc. This matches the crewnerdify() function.
  • KML envelope includes Style and StyleMap elements with the standard Rowsandall cyan fill (ff7fffff) and outline (ff00ffff).

Pre-generated KML files are stored in kml/{id}.kml in the repository and served as static files where possible. The Worker falls back to on-the-fly generation if the static file is absent (e.g. for very recently merged courses before the next deploy).
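The ?cn=true renaming is small enough to sketch in full (this mirrors what crewnerdify() does, but is an illustration rather than the actual port):

```typescript
// CrewNerd polygon naming: first gate → Start, last → Finish,
// intermediates → WP1, WP2, ... in chain order.
function crewnerdNames(count: number): string[] {
  return Array.from({ length: count }, (_, i) =>
    i === 0 ? 'Start' : i === count - 1 ? 'Finish' : `WP${i}`);
}
```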

1.6 CrewNerd API surface

These endpoints must be present and respond identically to the current Rowsandall endpoints. Authentication is via Authorization: ApiKey {key} header — the same scheme CrewNerd uses today with Rowsandall, requiring no code change on Tony's side.

API key format: {athleteId}.{HMAC(athleteId, TOKEN_ENCRYPTION_KEY)} — for example i12345.abc123.... The athlete ID is embedded in the key so the Worker can recover it without any KV or D1 lookup.

Verification (using apiKeyForAthlete()):

  1. Split the presented key on . into athleteId and mac.
  2. Recompute HMAC(athleteId, TOKEN_ENCRYPTION_KEY).
  3. Compare using constant-time equality — if it matches, the key is authentic and athleteId is trusted.
  4. Use athleteId directly for all subsequent operations (e.g. liked:{athleteId} KV lookup).

API key issuance for CrewNerd — no browser redirect needed:

CrewNerd already holds the user's intervals.icu bearer token from its existing intervals.icu integration. Rather than requiring users to manually copy a key from a web page, CrewNerd can exchange its existing intervals.icu token for a rownative API key in a single background HTTP call:

POST /api/auth/crewnerd
Authorization: Bearer {intervals_access_token}

← {"api_key": "abc123..."}

The Worker verifies the bearer token by calling GET https://intervals.icu/api/v1/athlete/self with it, extracts the athleteId from the response, constructs the key as {athleteId}.{HMAC(athleteId, TOKEN_ENCRYPTION_KEY)}, and returns it. CrewNerd stores the key and uses it for all subsequent calls. From the user's perspective: tap "Connect to rownative.icu" in CrewNerd, done — no browser redirect, no manual key entry. This requires agreement with Tony Andrews on the endpoint design (see open question 1).

Course endpoints:

GET  /api/courses/

Returns a JSON array of course metadata from index.json, filtered to status: established | provisional. Supports optional ?lat=&lon=&radius= query parameters for geographic filtering (haversine against center_lat/center_lon).
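The geographic filter reduces to a haversine distance from the query point to each course centre. A sketch (withinRadius is an illustrative name, not the Worker's actual function):

```typescript
// Great-circle distance in metres between two lat/lon points (haversine).
function haversineM(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const R = 6371000;  // mean Earth radius, metres
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

interface CourseMeta { id: string; center_lat: number; center_lon: number; }

// Filter the course index to centres within radiusM of the query point.
function withinRadius(courses: CourseMeta[], lat: number, lon: number,
                      radiusM: number): CourseMeta[] {
  return courses.filter(c => haversineM(lat, lon, c.center_lat, c.center_lon) <= radiusM);
}
```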

GET  /api/courses/{id}/

Returns KML for a single course. Accepts ?cn=true for CrewNerd polygon naming. Course ID is the string identifier from the JSON filename (e.g. "001").

GET  /api/courses/kml/liked/

Returns a single KML document containing all courses the authenticated user has liked, as separate <Folder> elements. Liked course IDs are read from Cloudflare KV key liked:{athlete_id}.

GET  /api/courses/kml/

Returns a KML document for the course IDs specified in the ?ids=1,2,3 query parameter.

POST /rowers/courses/{id}/follow/
POST /rowers/courses/{id}/unfollow/

Add or remove a course ID from the liked:{athlete_id} KV entry. Return 200 on success.

1.7 Course map browser (GitHub Pages)

A single-page Leaflet application served from the site/ directory. Features:

  • Map centred on user geolocation on first load, falling back to world view.
  • Loads courses/index.json on init; renders a marker per course.
  • Clicking a marker shows course name, distance, country, status badge, and a link to the course detail page.
  • Course detail page fetches the full JSON and renders the polygon chain on the map.
  • Filter controls: country dropdown, distance range slider, status toggle (provisional / established / both).
  • Search by name (client-side, against the index).
  • "Submit a course" link leading to the submission form.
  • KML download button per course (links to kml/{id}.kml).

No backend calls needed for browsing — entirely static.

1.8 Infrastructure setup (Stage 1)

Repositories:

  • rowing-courses — course data library (JSON files, KML cache, Leaflet site, validation scripts, Actions workflows).
  • rowing-courses-worker — Cloudflare Worker TypeScript source, wrangler.toml, D1 migration files (empty at Stage 1).

Both repos are public on GitHub under a shared organisation (e.g. rowing-courses or similar — to be decided). The organisation has at least two admin members to avoid a bus factor of one.

Cloudflare resources:

  • One Worker (free tier: 100k requests/day, 10ms CPU per invocation).
  • One KV namespace: ROWING_COURSES (stores liked:{athlete_id} → JSON array of course IDs).
  • One GitHub App (for opening PRs from the Worker) — alternatively a fine-grained Personal Access Token scoped to the library repo only.
  • Worker secrets: INTERVALS_CLIENT_ID, INTERVALS_CLIENT_SECRET, TOKEN_ENCRYPTION_KEY.

D1 is not required in Stage 1. Authentication state is carried in an encrypted HTTP-only cookie; liked-course state lives in KV. D1 is introduced in Stage 2 for challenges, results, standards, and the organiser flag.

GitHub Actions workflows:

  • validate.yml — triggered on PRs modifying courses/**; runs validate_course.py; posts result as PR comment; auto-merges on pass.
  • deploy.yml — triggered on push to main; regenerates index.json and kml/*.kml; deploys to GitHub Pages.

Local development:

git clone https://github.com/rownative/worker
cd worker
npm install
wrangler dev          # Worker on localhost:8787 with local KV

No external credentials needed for local development. The Worker fetches course data from the live GitHub raw URLs by default; a LOCAL_COURSES_PATH env variable can redirect to a local checkout of the library repo.

1.9 Data migration from Rowsandall

Migration separates cleanly into two categories with different handling: course geometry (not personal data, bulk-export immediately) and user state (personal data, requires explicit user consent).

Course geometry — bulk export, no consent needed:

Course polygon data is geographic fact, not personal data. All courses are exported unconditionally using scripts/export_from_rowsandall.py, which reads the Rowsandall Django database (via ORM or database dump) and writes one JSON file per GeoCourse. All migrated courses receive status: established. Course authorship is not transferred — the submitted_by field is set to "migrated from Rowsandall" for all migrated courses. This script should be run while Rowsandall is still live so the output can be verified against known course times.

User state (liked courses, course ownership) — self-service migration:

Liked-course lists and ownership are personal data under GDPR: they reveal training locations and habits. These are not transferred server-to-server. Instead, Rowsandall provides a self-service export, and users upload it themselves to the new platform.

A "Download my courses" button is added to the Rowsandall courses page (a modest addition to the existing Django app, building on the already-functional /courses/{id}/downloadkml/ endpoint). It produces a ZIP containing:

my-rowsandall-courses.zip
├── courses/
│   ├── 066-amstel-buiten.kml    ← courses this user owns
│   └── 123-charles-river.kml
└── manifest.json                ← {"owned": ["066", "123"], "liked": ["066", "123", "200"]}

No account data, email addresses, or activity data is included in the export.

On the new platform, an authenticated user (logged in via intervals.icu OAuth) uploads this ZIP. The Worker:

  1. Parses manifest.json.
  2. For each ID in owned: checks whether the course ID already exists in index.json.
    • If the ID exists — the course geometry was already imported via the bulk export. No PR is opened. The course is noted as "already in library".
    • If the ID does not exist — submits the KML as a new provisional course PR (same pipeline as a normal course submission).
  3. Restores the liked-course list by writing all IDs from manifest.liked to liked:{athlete_id} in KV, regardless of whether the IDs exist in the library yet. (A liked ID for a course not yet in the library is harmless — it will resolve once the course is added.)
  4. Returns a summary to the user: how many owned courses were already in the library, how many new PRs were opened, and confirmation that the liked list was restored.
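The dedup decision in step 2 is a pure function of the manifest and the current index. A minimal sketch, with illustrative names:

```typescript
interface Manifest { owned: string[]; liked: string[]; }

// Partition the user's owned course IDs into those already present in
// index.json (no PR needed) and those that must be submitted as new PRs.
function planImport(manifest: Manifest, indexIds: string[]):
    { alreadyInLibrary: string[]; toSubmit: string[] } {
  const known = new Set(indexIds);
  return {
    alreadyInLibrary: manifest.owned.filter(id => known.has(id)),
    toSubmit: manifest.owned.filter(id => !known.has(id)),
  };
}
```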

Example response shown to the user:

Migration complete:
  12 owned courses already in the library ✓
  1 new course submitted for review (provisional)
  23 liked courses restored to your account ✓

This deduplication is important because the bulk export from Rowsandall runs before users migrate, so the vast majority of owned courses will already be present. Without this check, every migrating user would open duplicate PRs for courses already in the library.

Users are notified of this migration path via the Rowsandall shutdown announcement and a banner on the courses page. The export ZIP can be generated at any time before Rowsandall shuts down.

What is not migrated: challenge history, race results, and workout data. Challenge history could be migrated as a separate effort (see Stage 2 delivery checklist) but is lower priority than the course library and live user state.

1.10 Stage 1 delivery checklist

Rowsandall (prerequisite work, separate scope):

  • scripts/export_from_rowsandall.py — bulk course geometry export to JSON
  • "Download my courses" ZIP export button in Rowsandall Django app

Course library repo:

  • Repository created under GitHub organisation
  • Course JSON schema and example files
  • scripts/validate_course.py (structural + distance checks)
  • scripts/generate_index.py (regenerates courses/index.json)
  • scripts/generate_kml.py (regenerates kml/*.kml cache)
  • validate.yml GitHub Actions workflow (PR validation + auto-merge)
  • deploy.yml GitHub Actions workflow (Pages deployment)
  • Initial course data committed (migrated from Rowsandall)

Cloudflare Worker:

  • wrangler.toml with KV binding and secrets
  • intervals.icu OAuth login flow (GET /oauth/authorize, GET /oauth/callback)
  • Encrypted HTTP-only cookie (rn_session) — AES-GCM encrypt/decrypt of {athleteId, accessToken, refreshToken, expiresAt}
  • HMAC-derived CrewNerd API key (apiKeyForAthlete()) — shown on user profile page
  • Platform API key verification on incoming CrewNerd requests
  • GET /api/courses/ — course index with geo filtering
  • GET /api/courses/{id}/ — single course KML
  • GET /api/courses/kml/liked/ — liked courses KML bundle
  • GET /api/courses/kml/ — multi-course KML bundle
  • POST /rowers/courses/{id}/follow/ and /unfollow/
  • POST /api/courses/submit — KML upload → GitHub PR
  • POST /api/courses/import-zip — ZIP import: check each owned ID against index.json, open PR only for IDs not already present; restore liked list in KV unconditionally; return summary to user
  • KML generation logic (port of courses.py: coursetokml, getcoursefolder, crewnerdify, sort_coordinates_ccw)

GitHub Pages site:

  • Leaflet map browser with course markers and detail view
  • Filter controls (country, distance, status)
  • Course submission form (KML upload)
  • ZIP import form (for Rowsandall migrants)
  • "Sign in with intervals.icu" link

Stage 2 — Challenges and leaderboards

Goal: Replace the Rowsandall challenge / virtual race / speed order functionality. Full handicap scoring, time-windowed row and submission windows, GPS validation via intervals.icu, organiser moderation tools.

Stage 2 extends the same Worker. D1 is introduced at this stage: new migration files create the user_sessions, challenges, results, and standards tables. No new hosting infrastructure is required.

2.1 Data model (D1 — SQLite)

All migrations are versioned SQL files in worker/migrations/ and applied via wrangler d1 migrations apply.

challenges table:

CREATE TABLE challenges (
  id           TEXT PRIMARY KEY,          -- uuid
  name         TEXT NOT NULL,
  course_id    TEXT NOT NULL,             -- references course JSON id
  row_start    TEXT NOT NULL,             -- ISO 8601 datetime
  row_end      TEXT NOT NULL,
  submit_end   TEXT NOT NULL,
  collection_id TEXT,                     -- fk → standard_collections.id (nullable)
  organizer_id TEXT NOT NULL,             -- intervals.icu athlete id
  is_public    INTEGER NOT NULL DEFAULT 1,
  notes        TEXT,
  created_at   TEXT NOT NULL
);

challenge_results table:

CREATE TABLE challenge_results (
  id              TEXT PRIMARY KEY,
  challenge_id    TEXT NOT NULL REFERENCES challenges(id),
  athlete_id      TEXT NOT NULL,           -- intervals.icu athlete id
  activity_id     TEXT NOT NULL,           -- intervals.icu activity id
  raw_time_s      REAL NOT NULL,
  standard_id     TEXT,                    -- fk → course_standards.id (nullable)
  corrected_time_s REAL,
  start_time      TEXT NOT NULL,           -- actual row start (from GPS)
  validation_status TEXT NOT NULL DEFAULT 'pending',
                                           -- pending | valid | invalid | manual_ok | dq
  validation_note TEXT,                    -- human-readable gate-by-gate log for debugging and athlete feedback
  submitted_at    TEXT NOT NULL
);

standard_collections table:

CREATE TABLE standard_collections (
  id        TEXT PRIMARY KEY,
  name      TEXT NOT NULL,
  notes     TEXT,
  is_public INTEGER NOT NULL DEFAULT 0,
  owner_id  TEXT NOT NULL               -- intervals.icu athlete id
);

course_standards table:

CREATE TABLE course_standards (
  id              TEXT PRIMARY KEY,
  collection_id   TEXT NOT NULL REFERENCES standard_collections(id),
  name            TEXT NOT NULL,
  boat_class      TEXT NOT NULL,  -- water | rower | dynamic | coastal | c-boat | churchboat
  boat_type       TEXT NOT NULL,  -- 1x | 2x | 2- | 4x | 4- | 4+ | 8+ | etc.
  sex             TEXT NOT NULL,  -- male | female | mixed
  weight_class    TEXT NOT NULL,  -- hwt | lwt
  age_min         INTEGER NOT NULL DEFAULT 0,
  age_max         INTEGER NOT NULL DEFAULT 120,
  adaptive_class  TEXT NOT NULL DEFAULT 'None',
  skill_class     TEXT NOT NULL DEFAULT 'Open',
  course_distance REAL NOT NULL,
  reference_speed REAL NOT NULL   -- m/s, computed from standard time at upload
);

user_sessions table:

CREATE TABLE user_sessions (
  athlete_id           TEXT PRIMARY KEY,  -- intervals.icu athlete id
  refresh_token_enc    TEXT NOT NULL,     -- AES-GCM encrypted, for token refresh
  is_organizer         INTEGER NOT NULL DEFAULT 0
);

Note: the access token and session state are carried in an encrypted HTTP-only cookie (set at login) rather than in D1. D1 stores only the refresh token (needed to obtain new access tokens when the cookie expires) and the is_organizer flag (needed for the organiser panel). The cookie approach was confirmed by David Tinker at intervals.icu as the recommended pattern.

2.2 Handicap scoring

The scoring logic from rowers/scoring.py translates directly. At result submission, the Worker:

  1. Looks up the challenge's collection_id. If null, no handicap is applied (corrected_time_s = raw_time_s).
  2. Queries course_standards for the row matching the athlete's declared category (boat class, type, sex, weight, age). Falls back to the open heavyweight male row for the same boat class if no exact match.
  3. Computes: corrected_time_s = raw_time_s × (athlete_reference_speed / baseline_reference_speed) where baseline is the open HWT male 1x standard for the course distance.
  4. Stores both raw_time_s and corrected_time_s. The leaderboard UI can toggle between the two.

Standard collections are uploaded as CSV (identical format to the existing Rowsandall import in scoring.py): columns name, boatclass, boattype, sex, weightclass, agemin, agemax, adaptiveclass, skillclass, coursedistance, coursetime. The Worker parses the CSV, computes reference_speed per row (coursedistance / coursetime_in_seconds), and inserts into course_standards.
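The two computations in this section — reference speed derived at CSV upload, and the handicap correction from step 3 — can be sketched as follows (function names are illustrative):

```typescript
// reference_speed as computed at CSV upload: course distance over time, m/s.
function referenceSpeed(courseDistanceM: number, courseTimeS: number): number {
  return courseDistanceM / courseTimeS;
}

// Handicap correction: scale the raw time by the ratio of the athlete's
// category reference speed to the baseline (open HWT male 1x) speed.
function correctedTime(rawTimeS: number, athleteRefSpeed: number,
                       baselineRefSpeed: number): number {
  return rawTimeS * (athleteRefSpeed / baselineRefSpeed);
}
```

For example, a 480 s row in a category with a 4.0 m/s standard, measured against a 5.0 m/s baseline, corrects to 384 s — slower categories are credited with proportionally faster corrected times.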

2.3 GPS validation via intervals.icu OAuth

The OAuth infrastructure is already in place from Stage 1 (login). Stage 2 uses the ACTIVITY_READ scope already requested at Stage 1 login, and the stored refresh token in D1 to obtain fresh access tokens when the cookie has expired.

Result submission flow:

  1. Athlete navigates to a challenge page and clicks "Submit result" (requires login).
  2. Worker presents a list of the athlete's recent activities fetched from GET /api/v1/athlete/{id}/activities.
  3. Athlete selects the relevant activity.
  4. Worker fetches GPS stream: GET /api/v1/athlete/{id}/activities/{activity_id}/streams?streams=latlng,time.
  5. Worker runs the course validation pipeline (see below).
  6. Validates against the challenge time window: start_time must fall between row_start and row_end.
  7. Sets validation_status = valid and raw_time_s from the computed elapsed time. Any failure sets validation_status = invalid with a descriptive validation_note. The validation log is stored in validation_note so organisers and athletes can see exactly which gates were passed and when.

Submission window enforcement is a simple timestamp check before any GPS fetching: now() > challenge.submit_end returns 403 immediately.

Course validation pipeline

The authoritative reference implementation is handle_check_race_course() in rowers/tasks.py (line 1299). The TypeScript port must preserve all of the following behaviour.

Step 1 — GPS track preparation:

The intervals.icu stream returns latlng (array of [lat, lon] pairs) and time (array of elapsed seconds) as separate arrays. These are zipped into an array of {lat, lon, time} objects, then the track is resampled to 100ms resolution using linear interpolation between consecutive samples:

function interpolateTrack(
  points: {lat: number, lon: number, time: number}[],
  intervalMs = 100
): {lat: number, lon: number, time: number}[] {
  const result: {lat: number, lon: number, time: number}[] = [];
  if (points.length === 0) return result;
  for (let i = 0; i < points.length - 1; i++) {
    const a = points[i], b = points[i + 1];
    // At least one step per segment, so a pair of samples with duplicate
    // timestamps does not drop point a entirely.
    const steps = Math.max(1, Math.ceil((b.time - a.time) * 1000 / intervalMs));
    for (let s = 0; s < steps; s++) {
      const t = s / steps;
      result.push({
        lat: a.lat + t * (b.lat - a.lat),
        lon: a.lon + t * (b.lon - a.lon),
        time: a.time + t * (b.time - a.time),
      });
    }
  }
  result.push(points[points.length - 1]);
  return result;
}

This step is critical. GPS watches typically record at 1s intervals, giving ~4m resolution at rowing pace. Without interpolation, a narrow gate polygon can be missed entirely if consecutive samples land on either side of it with none inside. At 100ms resolution the gap is ~40cm, sufficient for any gate polygon in practice. The Python version uses pandas resample('100ms') + interpolate() which is slow due to DataFrame overhead; the TypeScript array implementation performs the identical calculation in single-digit milliseconds.

Step 2 — Multi-pass detection:

Many rowers warm up by rowing through part or all of the course before their actual timed attempt. The algorithm must not use the first passage through the start polygon — it must find the best completed passage.

All entry times through the start polygon are found (not just the first), using time_in_path(..., getall=True). For each entry time, the algorithm attempts to complete the full polygon chain from that point forward using coursetime_paths(). All completed attempts are collected; if none complete, the course is marked invalid.

Step 3 — Net time calculation:

For each completed attempt:

- endsecond = time of exit through the finish polygon
- startsecond = time of exit through the start polygon (not entry — the clock starts when the rower clears the start gate, matching real racing practice)
- net_time = endsecond - startsecond

The best (lowest) net time across all completed attempts is the official result.
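The selection logic of Steps 2 and 3 reduces to a fold over completed attempts. A minimal sketch — the Attempt shape and bestNetTime() name are illustrative, standing in for what a ported coursetimePaths() would report per start-polygon entry:

```typescript
// Multi-pass selection (Steps 2-3): collect every attempt that completed the
// full polygon chain and keep the lowest net time.

interface Attempt {
  startSecond: number; // time of exit through the start polygon
  endSecond: number;   // time of exit through the finish polygon
  completed: boolean;  // did this passage clear every gate in order?
}

function bestNetTime(attempts: Attempt[]): number | null {
  const completed = attempts.filter(a => a.completed);
  if (completed.length === 0) return null; // no completed passage: course invalid
  return Math.min(...completed.map(a => a.endSecond - a.startSecond));
}

// A warm-up pass followed by a faster timed attempt:
const attempts: Attempt[] = [
  { startSecond: 142.3, endSecond: 287.1, completed: true },
  { startSecond: 412.1, endSecond: 554.2, completed: true },
];
bestNetTime(attempts); // ≈ 142.1
```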

Step 4 — Logging:

A per-submission log is written recording: each start polygon entry time found, each gate passage time, whether each attempt completed, and the final selected time. This log is stored in validation_note in D1. It serves two purposes:

- Organisers can inspect it to verify or override a result.
- Athletes whose submission was rejected can see exactly which gate they missed and approximately where their GPS track diverged from the course.

The log format should be human-readable plain text, matching the style of the existing Rowsandall course log files. Example:

```
Course id 66, Record id 12345
Found 2 entrytimes
Path starting at 142.3s
  Gate 0 (Start): passed at 142.3s, 0m
  Gate 1 (WP1): passed at 198.7s, 245m
  Gate 2 (Finish): passed at 287.1s, 498m
  Course completed: true, net time: 144.8s
Path starting at 412.1s
  Gate 0 (Start): passed at 412.1s, 0m
  Gate 1 (WP1): passed at 469.4s, 247m
  Gate 2 (Finish): passed at 554.2s, 501m
  Course completed: true, net time: 142.1s
Best time: 142.1s (attempt 2)
```

Step 5 — Points calculation (handicap):

```
velo   = course_distance_m / net_time_s
points = 100 × (2 - reference_speed / velo)
```

Where reference_speed is the athlete's registered category reference speed from course_standards. Points are stored alongside raw and corrected times.
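The formula can be sketched directly — handicapPoints() is an illustrative name, not part of the ported code:

```typescript
// Handicap points (Step 5): an athlete exactly matching the category reference
// speed scores 100; faster scores above 100, slower below.

function handicapPoints(
  courseDistanceM: number,
  netTimeS: number,
  referenceSpeedMs: number // reference speed (m/s) from course_standards
): number {
  const velo = courseDistanceM / netTimeS; // average velocity over the course
  return 100 * (2 - referenceSpeedMs / velo);
}

// 500 m in 120 s (~4.17 m/s) against a 4.0 m/s reference speed:
handicapPoints(500, 120, 4.0); // ≈ 104
```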

Point-in-polygon implementation:

Both coordinate_in_path() and coursetime_paths() from rowers/courseutils.py are ported directly to TypeScript. The point-in-polygon test uses the standard ray casting algorithm — no external library needed. The recursive structure of coursetime_paths() (pass start, slice remaining track, recurse for next gate) is preserved exactly.
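A sketch of the ray casting test itself (the classic PNPOLY formulation); the Pt shape is illustrative, and the real port of coordinate_in_path() may differ in naming:

```typescript
// Ray casting: cast a ray in the +lon direction and toggle on each edge
// crossing; an odd number of crossings means the point is inside.

type Pt = { lat: number; lon: number };

function pointInPolygon(p: Pt, poly: Pt[]): boolean {
  let inside = false;
  for (let i = 0, j = poly.length - 1; i < poly.length; j = i++) {
    const a = poly[i], b = poly[j];
    // Does edge a-b straddle p's latitude, with the crossing east of p?
    if ((a.lat > p.lat) !== (b.lat > p.lat) &&
        p.lon < ((b.lon - a.lon) * (p.lat - a.lat)) / (b.lat - a.lat) + a.lon) {
      inside = !inside;
    }
  }
  return inside;
}

// A square gate polygon:
const gate: Pt[] = [
  { lat: -1, lon: -1 }, { lat: -1, lon: 1 },
  { lat: 1, lon: 1 },   { lat: 1, lon: -1 },
];
pointInPolygon({ lat: 0, lon: 0 }, gate); // → true
pointInPolygon({ lat: 2, lon: 0 }, gate); // → false
```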

### 2.4 Challenge organiser interface

The Worker serves a minimal HTML/CSS organiser panel at /organiser/ — no JavaScript framework, plain form posts. Access requires is_organizer = 1 on the session record. The is_organizer flag is set manually by an admin via a protected POST /admin/grant-organiser endpoint.

Organiser capabilities:

- Create a challenge (form: name, select course from library, row window dates, submission deadline, optional standard collection).
- Upload a handicap CSV (parsed and stored in a new standard_collection linked to the challenge).
- View the moderation panel for their challenges: each result shown with validation_status, raw_time_s, corrected_time_s, and a map link showing the GPS track against the course polygons.
- Override validation_status from invalid to manual_ok with a mandatory note (audit trail).
- Disqualify a result (sets status to dq).
- Download results as CSV.

### 2.5 Leaderboard pages

Challenge leaderboard pages are served at /challenges/{id}/ by the Worker. They are publicly accessible without authentication. The page renders:

- Challenge metadata (course name, row window, submission deadline, status: open / closed).
- Results table, sortable by raw time or corrected time, filtered by boat class / sex.
- Map showing the course polygons (Leaflet, loads from course JSON).
- Each result links to the intervals.icu activity if the athlete's profile is public.

For challenges with is_public = 0, the leaderboard requires the athlete's session token to be present (private club challenges).

### 2.6 Infrastructure additions (Stage 2)

New Cloudflare resources:

- Additional Worker secrets: TOKEN_ENCRYPTION_KEY (for encrypting stored OAuth tokens in D1).

INTERVALS_CLIENT_ID and INTERVALS_CLIENT_SECRET are already present from Stage 1. The D1 database is already provisioned. Stage 2 adds only migration files and the encryption key secret.

intervals.icu OAuth scope addition:

- If ACTIVITY_READ was not included in the Stage 1 OAuth grant, users will need to re-authorise when they first attempt to submit a challenge result. This is a minor friction point, not a blocker.

Local development with Stage 2 migrations:

```sh
wrangler d1 execute rowing-courses-db --local --file=migrations/002_challenges.sql
wrangler d1 execute rowing-courses-db --local --file=migrations/003_standards.sql
wrangler dev
```

### 2.7 Stage 2 delivery checklist

- D1 migration files (002_challenges.sql, 003_standards.sql)
- TOKEN_ENCRYPTION_KEY secret; encrypt/decrypt helpers for stored tokens
- Extend OAuth token scope to ACTIVITY_READ (or confirm it was included at Stage 1)
- Activity list fetch from intervals.icu
- GPS stream fetch from intervals.icu
- GPS track interpolation to 100ms resolution (interpolateTrack())
- Point-in-polygon ray casting (pointInPolygon(), port of coordinate_in_path())
- Single-gate time detection (timeInPath(), port of time_in_path())
- Multi-gate course time (coursetimePaths(), port of coursetime_paths())
- Multi-pass detection — all start entries, best completed time wins
- Net time calculation (start exit to finish exit, not GPS-start to finish)
- Validation log generation (gate-by-gate, stored in validation_note)
- Points calculation (100 × (2 - reference_speed / velo))
- Challenge CRUD endpoints
- Standard collection CSV upload and parser
- Handicap scoring computation
- Result submission endpoint (with GPS validation)
- Organiser panel HTML
- Leaderboard page HTML
- Admin grant-organiser endpoint
- Migration of existing Rowsandall challenges and results (optional, lower priority)

## Out of scope (both stages)

- Rowing analytics (stroke rate charts, force curves, etc.) — targeted for intervals.icu native support.
- Indoor challenge (Concept2 erg) — intervals.icu already handles erg data.
- User account management and credential storage — identity is fully delegated to intervals.icu OAuth; the platform stores only the athlete ID and session token.
- Workout upload / import — handled by intervals.icu directly (CrewNerd now exports there natively).
- Email notifications — can be added as Stage 3 using Cloudflare Email Workers if needed.

## Open questions for developer kickoff

1. CrewNerd integration design — confirm with Tony Andrews. Two sub-questions:

   a. Auth endpoint. The proposed UX requires no browser redirect and no manual key entry. CrewNerd already holds the user's intervals.icu bearer token from its existing intervals.icu integration. A single background call is all that is needed:

      ```
      POST /api/auth/crewnerd
      Authorization: Bearer {intervals_access_token}
      ← {"api_key": "abc123..."}
      ```

      The Worker verifies the token against intervals.icu (GET /api/v1/athlete/self), derives the API key using apiKeyForAthlete(), and returns it. From the user's perspective: tap "Connect to rownative.icu" in CrewNerd, done. Confirm with Tony that (i) CrewNerd can make this call on the user's behalf and store the returned key, and (ii) users who have already connected CrewNerd to intervals.icu do not need to re-authenticate — the existing token can be reused immediately.

   b. Base URL configurability. Confirm whether a configurable Rowsandall base URL already exists in CrewNerd, or whether a new release is needed before Stage 1 is usable. This is the Stage 1 deadline driver.

2. intervals.icu OAuth app registration. Confirm with David Tinker (@david on the intervals.icu forum) whether a community/open-source OAuth app can be registered for this project, or whether each instance operator registers separately. Also confirm the available scopes — specifically whether PROFILE_READ and ACTIVITY_READ can be requested in the same grant or require separate authorisation flows. The redirect URI will be https://{worker-domain}/oauth/callback.

3. Course ID scheme — resolved. Rowsandall integer IDs are preserved exactly as strings (e.g. course 66 → courses/66.json, "id": "66"). No zero-padding. This ensures liked-course migration works without a translation table.

4. Cloudflare account structure — resolved. Dedicated organisation account created under the rownative name. Tony Andrews added as second account member.

5. GitHub organisation name — resolved. rownative org created; rownative/courses and rownative/worker repos to be initialised.

6. Standard collection library. Should a set of canonical handicap tables (FISA masters, HOCR categories, KNRB) be included in the initial data migration, or left for organiser community upload? Including them reduces friction for the first challenge organisers to migrate.

7. intervals.icu OAuth scope strategy — resolved. Confirmed by David Tinker: a successful token exchange is sufficient as a login indicator. The recommended approach is to encrypt the athlete ID and access token and store them in an HTTP-only cookie. Both PROFILE_READ and ACTIVITY_READ are requested in the single Stage 1 OAuth grant — no re-authorisation prompt will be needed when Stage 2 launches.

8. Rowsandall ZIP export feature — resolved. Implemented in rowers/views/racesviews.py as course_export_zip_view(). Produces a ZIP of owned course KML files plus a manifest.json listing owned and liked course IDs. The Rowsandall export endpoint is live; the rownative import endpoint (parse ZIP, submit courses as provisional PRs, restore liked list in KV) remains a Stage 1 deliverable.