
Removing deprecated files#855

Draft
scotthawes wants to merge 40 commits into PriorLabs:main from IFoA-ADSWP:scotthawes-patch-1

Conversation

@scotthawes

No description provided.

Cillian-Williamson and others added 30 commits August 5, 2025 10:59
…training

- Introduced `data_loader.py` for loading datasets from CSV or generating synthetic data, including preprocessing functions.
- Added `evaluation_metrics.py` to provide regression and classification metrics, including RMSE, MAE, accuracy, F1 score, and AUC calculations.
- Created `model_training.py` to define baseline models for regression and classification, along with functions for training, predicting, saving, and loading models. Includes cross-validation and evaluation capabilities.
- Modified the `events.out.tfevents` file to reflect recent training changes.
- Updated `time_left.tsv` with new iteration data, showing adjusted passed and remaining time for each iteration.
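The metrics described for `evaluation_metrics.py` can be sketched in plain NumPy. This is a hypothetical reconstruction from the commit message, not the actual module: function names and return shapes are assumptions, and the ROC AUC here uses the rank-statistic (Mann-Whitney U) formulation, which assumes no tied scores.

```python
import numpy as np


def regression_metrics(y_true, y_pred):
    """RMSE and MAE, as listed for evaluation_metrics.py (sketch)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    mae = float(np.mean(np.abs(y_true - y_pred)))
    return {"rmse": rmse, "mae": mae}


def classification_metrics(y_true, y_score, threshold=0.5):
    """Accuracy, F1, and ROC AUC from predicted probabilities (sketch)."""
    y_true = np.asarray(y_true, dtype=int)
    y_score = np.asarray(y_score, dtype=float)
    y_pred = (y_score >= threshold).astype(int)
    acc = float(np.mean(y_pred == y_true))
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    # ROC AUC via the rank statistic: sum of positive-class ranks,
    # corrected for the minimum possible rank sum (assumes no ties).
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    n_pos = int(np.sum(y_true == 1))
    n_neg = len(y_true) - n_pos
    auc = (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
    return {"accuracy": acc, "f1": f1, "auc": float(auc)}
```

In practice the module may simply wrap `sklearn.metrics`; the point of the sketch is the set of metrics the commit names, not the implementation.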
Add local baselining helpers and update training logs
Here are the files which I have been working on. The baselining notebook contains all of the code used to perform the experiments; the baselining notebook summary markdown file gives a much simpler write-up of the findings.
Update baseline utilities and notebook naming
- Rename n_ensemble -> n_estimators in TABPFN_CONFIG (7.x rename)
- Remove use_wandb (no longer a TabPFN param)
- Switch device from 'cpu' to 'auto' (7.x auto GPU detection)
- Fix DATA_PATH -> DATA_DIR import in data_loader_class.py
- Fix hardcoded LOCAL_CSV_PATH -> uses DATA_DIR from baseline_config
- Update GLOBAL_MAX_TRAIN comment (no longer API limit, now local model limit)
- Replace defunct pre_aux try/except block with clean TabPFNClassifier(n_estimators=8, device='auto')
…tiation

- TabPFNClassifier init now conditional: local uses n_estimators/device, client uses random_state only
- Full rerun complete on eudirectlapse.csv (23K rows, 10K cap, 80/20 split)
- tabpfn_extensions upgraded to fix AutoTabPFNClassifier import
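The conditional instantiation described above (local builds take `n_estimators`/`device`, the API client takes `random_state` only) might be factored as a small kwargs builder. This is a sketch based solely on the commit messages: the `backend` flag and the helper name are assumptions, and the actual switch lives in the repo's baseline config.

```python
def tabpfn_kwargs(backend: str, n_estimators: int = 8,
                  device: str = "auto", random_state: int = 0) -> dict:
    """Build keyword arguments for TabPFNClassifier (hypothetical helper).

    Per the commits: local 7.x builds accept `n_estimators` (renamed
    from `n_ensemble`) and `device='auto'` for GPU detection, while the
    API client accepts only `random_state`.
    """
    if backend == "local":
        return {"n_estimators": n_estimators, "device": device}
    if backend == "client":
        return {"random_state": random_state}
    raise ValueError(f"unknown backend: {backend!r}")


# Usage (requires the tabpfn or tabpfn_client package):
# clf = TabPFNClassifier(**tabpfn_kwargs("local"))
```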
Add scripts/download_datasets.py to fetch 3 additional public insurance
classification datasets programmatically:
  - coil2000.csv: COIL 2000 (Dutch insurer), OpenML ID 298, 9,822 rows, 85 features
  - ausprivauto0405.csv: Australian vehicle insurance 2004-05, CASdatasets GitHub,
    67,856 rows, 6 features, ClaimOcc target (6.8% pos rate)
  - freMTPL2freq_binary.csv: French MTPL binarised (50K sample),
    50,000 rows, 10 features, ClaimIndicator target (5.0% pos rate)
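The freMTPL2 binarisation step might look like the sketch below. Column names (`ClaimNb`, `ClaimIndicator`), the 50K sample size, and the seed are taken or inferred from the commit message; the real logic is in `scripts/download_datasets.py`.

```python
import pandas as pd


def binarise_claims(df: pd.DataFrame, count_col: str = "ClaimNb",
                    target_col: str = "ClaimIndicator",
                    n_sample: int = 50_000, seed: int = 42) -> pd.DataFrame:
    """Sample up to n_sample rows and turn a claim count into a 0/1 target."""
    out = df.sample(n=min(n_sample, len(df)), random_state=seed).copy()
    out[target_col] = (out[count_col] > 0).astype(int)
    return out.drop(columns=[count_col])
```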

Add notebooks/baseline_experiments/07_multi_dataset_benchmark.ipynb which runs
TabPFN vs GLM (+ CatBoost, RandomForest, XGBoost) across all 4 insurance datasets
and produces a ROC/PR AUC comparison table and bar chart figure. All models capped
at 10,000 training samples for fair comparison.
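The 10,000-sample training cap applied uniformly across models might be implemented as below. This is a sketch under the assumption of simple uniform subsampling; the benchmark notebook may stratify by class instead.

```python
import numpy as np


def cap_training_set(X, y, cap: int = 10_000, seed: int = 0):
    """Subsample at most `cap` rows so every model sees the same data budget."""
    X = np.asarray(X)
    y = np.asarray(y)
    if len(X) <= cap:
        return X, y
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=cap, replace=False)
    return X[idx], y[idx]
```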
- Implement `run_domain_finetune_stage_a.py` for controlled fine-tuning experiments on insurance datasets.
- Create batch scripts for fine-tuning trials: `run_finetune_crossover_batch_3000.sh`, `run_finetune_first_batch.sh`, and `run_finetune_stress_batch_2000.sh`.
- Introduce `run_small_finetune_classifier_trial.py` for smoke tests on TabPFN classifier fine-tuning.
- Enhance logging and result tracking in fine-tuning scripts.
@gemini-code-assist
Contributor

Note

The number of changes in this pull request is too large for Gemini Code Assist to generate a review.

