Showcase Architecture

This note maps related showcases into cohesive, in-repo learning tracks.

Why this architecture

  • Keep each showcase focused on one learning outcome.
  • Preserve reproducibility and short demo runtime.
  • Avoid monolithic project structure for students.

Ranking Track

  1. projects/learning-to-rank-foundations-showcase
     • Grouped ranking data preparation and relevance labeling.
     • LambdaRank model training.
     • NDCG-focused evaluation and split artifacts.
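As a concrete reference for the evaluation step, here is a minimal NDCG@k in plain Python. The function names are illustrative, not the showcase's actual API:

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k results."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """DCG of the ranked list divided by the DCG of the ideal ordering."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# A query whose results are already in ideal order scores 1.0.
print(ndcg_at_k([3, 2, 1, 0], 4))  # 1.0
```

The ideal-ordering denominator is what makes NDCG comparable across queries with different numbers of relevant results.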

  2. projects/ranking-api-productization-showcase
     • FastAPI ranking endpoints (/health, /model/schema, /score, /rank).
     • Model artifact loading and schema-safe scoring.
     • Structured request logging and OpenAPI export workflow.
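The schema-safe scoring idea can be sketched as a small validation step that rejects incomplete payloads and orders incoming features to match the training schema. The field names below are hypothetical, not the showcase's real schema:

```python
# Hypothetical feature schema; the showcase's actual schema will differ.
SCHEMA = ["query_len", "click_rate", "price"]

def validate_and_order(features: dict) -> list:
    """Reject payloads missing schema fields, then emit values in schema
    order so the model always sees columns as it saw them in training."""
    missing = [f for f in SCHEMA if f not in features]
    if missing:
        raise ValueError(f"missing features: {missing}")
    return [features[f] for f in SCHEMA]

print(validate_and_order({"price": 1.0, "click_rate": 0.2, "query_len": 3}))
# [3, 0.2, 1.0]
```

Doing this at the endpoint boundary keeps column-order bugs out of the model call entirely.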

Forecasting and Observability Track

  1. projects/nyc-demand-forecasting-foundations-showcase
     • TLC-style hourly aggregation and time feature engineering.
     • Explicit time-ordered train/val/test split.
     • Demand forecasting metrics (MAE, RMSE, sMAPE).
     • Optional real TLC download path with synthetic default mode.
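Of the three metrics listed, sMAPE is the least standardized, so a minimal sketch of one common definition (illustrative, not the showcase's code):

```python
def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent.
    Pairs where both values are zero contribute zero error."""
    terms = []
    for a, f in zip(actual, forecast):
        denom = abs(a) + abs(f)
        terms.append(0.0 if denom == 0 else 2.0 * abs(f - a) / denom)
    return 100.0 * sum(terms) / len(terms)

print(smape([100.0, 100.0], [100.0, 100.0]))  # 0.0
```

The zero-denominator guard matters for demand data, where many hour buckets can legitimately be zero.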

  2. projects/demand-api-observability-showcase
     • FastAPI demand serving endpoint (/predict) and health checks.
     • Prometheus metrics endpoint (/metrics) and request latency counters.
     • Optional OpenTelemetry instrumentation hooks.
     • OpenAPI export/check and API behavior tests.
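A rough sketch of what the latency counters behind a /metrics endpoint track, written in plain Python rather than the prometheus_client library; the metric names are illustrative:

```python
class RequestMetrics:
    """Minimal in-process counters mirroring what a Prometheus
    client would export from a /metrics endpoint."""

    def __init__(self):
        self.request_count = 0
        self.latency_sum = 0.0

    def observe(self, seconds: float):
        """Record one handled request and its latency."""
        self.request_count += 1
        self.latency_sum += seconds

    def exposition(self) -> str:
        """Render counters in the Prometheus text exposition format."""
        return (
            "# TYPE demand_requests_total counter\n"
            f"demand_requests_total {self.request_count}\n"
            "# TYPE demand_request_latency_seconds_sum counter\n"
            f"demand_request_latency_seconds_sum {self.latency_sum}\n"
        )

m = RequestMetrics()
m.observe(0.1)
m.observe(0.2)
print(m.exposition())
```

Exporting count and latency sum together is what lets Prometheus derive average latency as rate(sum)/rate(count).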

Intentional Scope Boundaries

  • Full-size raw datasets are excluded to keep clone and run workflows lightweight.
  • Large generated caches are excluded from version control.
  • Each showcase keeps only teaching-critical components and artifacts.