# Ranking Track Deep Dive
Projects:
- projects/learning-to-rank-foundations-showcase
- projects/ranking-api-productization-showcase
## Why This Deep Dive
Use this track when you want to go from ranking model training to production-style ranking inference APIs:
- Learn grouped ranking data and NDCG-focused evaluation.
- Productize ranked inference with contract-first FastAPI endpoints.
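The track's evaluation is NDCG-focused. As a concept refresher, here is a minimal NDCG@k sketch in pure Python; it is illustrative only, and the project itself likely relies on a library implementation rather than this hand-rolled one:

```python
import math

def dcg_at_k(relevances, k):
    # Discounted cumulative gain over the top-k ranked items:
    # each relevance is discounted by log2(position + 1).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    # Normalize by the DCG of the ideal (descending-relevance) ordering.
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Relevance labels of items in the order the model ranked them.
ranked = [3, 2, 3, 0, 1, 2]
print(round(ndcg_at_k(ranked, 5), 4))
```

A perfectly ordered list scores 1.0, and any misordering within the top k pulls the score below that, which is why group-correct evaluation (next phases) matters so much.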
## Phase 1: Ranking Foundations
Key outputs:
- artifacts/eval/ranking_metrics.json
- artifacts/eval/test_rankings_top10.csv
- artifacts/splits/group_split_manifest.json
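One way to sanity-check the split manifest is to verify that no group id appears in more than one split. The sketch below assumes a manifest shaped as split-name-to-group-id lists; the real `group_split_manifest.json` schema may differ, and the group ids here are made up:

```python
# Hypothetical manifest shape; the real group_split_manifest.json may differ.
manifest = {
    "train": ["q1", "q2", "q3"],
    "val": ["q4", "q5"],
    "test": ["q6"],
}

splits = {name: set(groups) for name, groups in manifest.items()}
names = list(splits)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        # Any shared group id means query-level leakage across splits.
        overlap = splits[a] & splits[b]
        assert not overlap, f"group leakage between {a} and {b}: {overlap}"
print("strict group isolation: OK")
```

In practice you would load the real manifest with `json.loads(Path(...).read_text())` and run the same pairwise disjointness check.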
## Phase 2: Ranking API Productization
```bash
cd projects/ranking-api-productization-showcase
make sync
make train-demo
make test
make export-openapi
make dev
```
Key outputs:
- artifacts/model.txt
- artifacts/feature_names.json
- artifacts/model_meta.json
- openapi.json
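Because `artifacts/feature_names.json` records the training-time feature schema, serving code can reject requests that drift from it. A minimal sketch of that check, using illustrative feature names and a hypothetical request payload shape rather than the project's actual contract:

```python
import json

# Illustrative stand-in for the contents of artifacts/feature_names.json.
feature_names = json.loads('["user_ctr", "item_popularity", "query_match_score"]')

# Hypothetical request payload carrying one feature map per candidate item.
payload = {
    "items": [
        {"user_ctr": 0.12, "item_popularity": 0.8, "query_match_score": 0.55},
    ]
}

for item in payload["items"]:
    missing = set(feature_names) - item.keys()
    extra = item.keys() - set(feature_names)
    # Training and serving schemas must match exactly, in both directions.
    assert not missing and not extra, f"schema drift: missing={missing} extra={extra}"
print("request schema matches training features")
```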
## Example: Inspect NDCG Metrics
```bash
cd projects/learning-to-rank-foundations-showcase
python - <<'PY'
import json
from pathlib import Path

path = Path("artifacts/eval/ranking_metrics.json")
print(json.dumps(json.loads(path.read_text()), indent=2))
PY
```
## Example: API Smoke Checks
For complete ranking API request and response examples, see Ranking API docs.
## How To Interpret Outputs
- group_split_manifest.json should show strict group isolation across train/val/test.
- NDCG gains are meaningful only if evaluation is group-correct and leakage-safe.
- API request schema and model feature schema should remain aligned across training and serving.
- Exported OpenAPI should be kept in sync with docs assets to avoid contract drift.
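One simple guard against contract drift is to compare the freshly exported spec against the copy shipped with the docs. The sketch below uses inline stand-in dicts; in the real project you would `json.loads` both `openapi.json` and the docs asset, whose exact path is not specified here:

```python
import json

# Illustrative stand-ins for the exported spec and the docs-asset copy;
# in practice, load both JSON files from disk instead.
exported = {"openapi": "3.1.0", "paths": {"/rank": {"post": {}}}}
docs_copy = json.loads(json.dumps(exported))

# Structural equality ignores key order and whitespace, so it flags
# only genuine contract differences.
assert exported == docs_copy, "openapi.json and docs asset have drifted"
print("OpenAPI contract in sync")
```

Running a check like this in CI (after `make export-openapi`) keeps the published contract honest.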
## Next Step
Continue with Forecasting Track Deep Dive for a time-aware prediction + observability pipeline pattern.