
Fix evaluation metrics bugs: micro_auroc NaN, softmax dim, hardcoded …#195

Open
Ganglet wants to merge 1 commit into ML4SCI:main from Ganglet:fix-eval-metrics-bugs

Conversation


Ganglet commented Mar 20, 2026

Fixes #192

What this PR does

Fixes three bugs in the evaluation pipeline of Classification Transformers.

Changes

1. Fixed micro_auroc always NaN (eval.py:61)
The micro_auroc list was declared but never appended to, so averaging it yielded NaN. Added
micro_auroc.append(auroc_fn(..., average="micro")) inside the eval loop.
2. Fixed softmax missing dim= (eval.py:173)
Added dim=-1 to the softmax call so probabilities are normalized over the class
dimension. Line 158 already passed dim=-1; line 173 did not.
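The effect of the missing argument, in a self-contained sketch (the tensor values are illustrative): torch.softmax requires dim, and F.softmax without dim is deprecated and guesses an axis, so an explicit dim=-1 is the safe form.

```python
import torch

# Logits for two samples over three classes (batch, classes).
logits = torch.tensor([[2.0, 1.0, 0.5],
                       [0.1, 0.2, 3.0]])

# The fix: dim=-1 normalizes over the class axis, so each row sums to 1.
probs = torch.softmax(logits, dim=-1)
```

With dim=-1, each sample's class probabilities sum to 1 regardless of batch shape, which is what the downstream metrics expect.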

3. Replaced hardcoded W&B entity (train.py, eval.py)
Replaced the hardcoded entity="_archil" with an --entity CLI argument that
defaults to the WANDB_ENTITY environment variable, preventing authentication
errors for other contributors.
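A sketch of the CLI change, assuming a standard argparse setup (the parser and function names here are hypothetical, not copied from the repo):

```python
import argparse
import os

def parse_args(argv=None):
    """--entity falls back to $WANDB_ENTITY instead of a hardcoded account."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--entity",
        default=os.environ.get("WANDB_ENTITY"),
        help="W&B entity (team or username); defaults to $WANDB_ENTITY",
    )
    return parser.parse_args(argv)

# Later, e.g.: wandb.init(entity=args.entity, ...)
# Passing entity=None lets W&B fall back to the logged-in user's default
# entity rather than failing auth on someone else's hardcoded account.
```

When neither the flag nor the environment variable is set, args.entity is None, which wandb.init accepts and resolves to the caller's own default entity.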

…W&B entity

Fixes ML4SCI#192:
- Fix micro_auroc never appended in eval loop
- Add dim=-1 to softmax call on eval.py:172
- Replace hardcoded entity with --entity CLI arg and WANDB_ENTITY env var


Development

Successfully merging this pull request may close these issues.

[Bug] DeepLense_Classification_Transformers: Evaluation metrics bugs — micro_auroc always NaN, missing softmax dim, hardcoded W&B entity
