Forecasting Track Deep Dive¶
Projects:
- projects/nyc-demand-forecasting-foundations-showcase
- projects/demand-api-observability-showcase
Why This Deep Dive¶
Use this track when you want a time-aware forecasting workflow that extends into API serving and observability:
- Train and evaluate forecasting models with chronological split discipline.
- Serve predictions through an API with metrics and tracing hooks.
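The chronological split discipline above can be sketched in a few lines. This is a minimal illustration, not the project's actual pipeline; the `pickup_datetime` and `trips` column names and the cutoff date are assumptions.

```python
import pandas as pd

# Hypothetical hourly demand frame; the schema here is illustrative only.
df = pd.DataFrame({
    "pickup_datetime": pd.date_range("2026-01-01", periods=240, freq="h"),
    "trips": range(240),
})

# Chronological split: everything before the cutoff trains, everything at or
# after it evaluates. No shuffling, so the model never sees future rows.
cutoff = pd.Timestamp("2026-01-08")
train = df[df["pickup_datetime"] < cutoff]
test = df[df["pickup_datetime"] >= cutoff]

# The latest training timestamp must precede the earliest test timestamp.
assert train["pickup_datetime"].max() < test["pickup_datetime"].min()
print(len(train), len(test))
```

Unlike a random split, this guarantees evaluation rows always come strictly after training rows in time.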
Phase 1: Forecasting Foundations¶
Quick demo mode:
Key outputs:
- artifacts/eval/metrics_summary.csv
- artifacts/eval/prediction_examples.csv
- artifacts/splits/time_split_manifest.json
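A quick sanity check on the split manifest could look like the sketch below. The field names (`train`/`test` windows with `start`/`end`) are assumptions; the real `time_split_manifest.json` schema may differ, in which case you would load the file with `json.load` and adapt the keys.

```python
import json

# Illustrative manifest shape; in the project you would load the real file:
# manifest = json.load(open("artifacts/splits/time_split_manifest.json"))
manifest = {
    "train": {"start": "2025-01-01T00:00:00Z", "end": "2025-11-30T23:00:00Z"},
    "test": {"start": "2025-12-01T00:00:00Z", "end": "2025-12-31T23:00:00Z"},
}

# A leakage-free split means the training window ends strictly before the
# test window begins (ISO-8601 timestamps compare chronologically as strings).
assert manifest["train"]["end"] < manifest["test"]["start"]
print("chronological separation holds")
```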
Phase 2: Demand API Observability¶
```shell
cd projects/demand-api-observability-showcase
make sync
make train-demo
make test
make export-openapi
make verify
make dev
```
Key outputs:
- artifacts/model.joblib
- artifacts/metrics.json
- openapi.json
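One way to use the exported `openapi.json` is a lightweight contract check that the endpoints you expect are actually declared. The spec below is a minimal stand-in with assumed paths, not the project's real schema:

```python
import json

# Minimal stand-in for the exported schema; in the project you would load
# the real file: spec = json.load(open("openapi.json"))
spec = {
    "openapi": "3.1.0",
    "paths": {
        "/health": {"get": {}},
        "/predict": {"post": {}},
        "/metrics": {"get": {}},
    },
}

# Contract check: every endpoint the smoke checks exercise must exist in the
# exported schema, and /predict must accept POST.
for path in ("/health", "/predict", "/metrics"):
    assert path in spec["paths"], f"missing {path}"
assert "post" in spec["paths"]["/predict"]
print("contract paths present")
```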
Example: Inspect Forecast Metrics¶
```shell
cd projects/nyc-demand-forecasting-foundations-showcase
python - <<'PY'
import pandas as pd
df = pd.read_csv("artifacts/eval/metrics_summary.csv")
print(df.to_string(index=False))
PY
```
Example: Demand API Smoke Checks¶
```shell
curl -s http://127.0.0.1:8000/health
curl -s -X POST http://127.0.0.1:8000/predict \
  -H "content-type: application/json" \
  -d '{"pickup_zone_id":132,"pickup_datetime":"2026-02-13T09:00:00Z"}'
curl -s http://127.0.0.1:8000/metrics | head -n 10
```
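The same smoke logic can be scripted in Python. The sketch below runs against a throwaway stub server so it works offline; the stub's payloads are illustrative assumptions, not the demand API's real responses, and against the live service you would point `base` at `http://127.0.0.1:8000` instead.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stub standing in for the real demand API so the smoke checks are runnable
# offline; response bodies here are made up for illustration.
class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = b'{"status":"ok"}'
        else:  # /metrics: Prometheus-style text exposition (illustrative)
            body = b"demand_api_requests_total 1\n"
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# Same intent as the curl commands: health must report ok, and /metrics
# must return readable telemetry text.
health = json.load(urllib.request.urlopen(f"{base}/health"))
metrics = urllib.request.urlopen(f"{base}/metrics").read().decode()
print(health["status"], metrics.splitlines()[0])
server.shutdown()
```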
For complete demand API request and response examples, see Demand API docs.
How To Interpret Outputs¶
- `time_split_manifest.json` should confirm strict chronological separation.
- Evaluate forecast quality with multiple metrics (`MAE`, `RMSE`, `sMAPE`) rather than one number.
- Prediction API behavior should be checked alongside `/metrics` telemetry for operational readiness.
- Contract checks should keep OpenAPI and runtime endpoint behavior aligned.
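To see why several metrics beat a single number, the three measures can be computed side by side on toy data (the actual/predicted values below are made up for illustration):

```python
import math

# Toy forecasts; MAE, RMSE, and sMAPE weight errors differently, so read
# them together rather than ranking models on one number.
actual = [10.0, 12.0, 8.0, 15.0]
predicted = [11.0, 10.0, 9.0, 18.0]

errors = [p - a for p, a in zip(predicted, actual)]
# MAE: mean absolute error, in the same units as the target.
mae = sum(abs(e) for e in errors) / len(errors)
# RMSE: like MAE but squares errors first, so large misses dominate.
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
# sMAPE: symmetric percentage error, scale-free across demand levels.
smape = 100 * sum(
    abs(p - a) / ((abs(p) + abs(a)) / 2) for p, a in zip(predicted, actual)
) / len(actual)

print(f"MAE={mae:.2f} RMSE={rmse:.2f} sMAPE={smape:.1f}%")
```

Note how the single error of 3 pushes RMSE above MAE; a model that is usually close but occasionally far off will show the gap here.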
Next Step¶
Use the Coverage Matrix to map this track to adjacent topics such as drift monitoring, rollout decisions, and experiment tracking.