
How to do CLASSIFICATION task with config file in 2.0.0 #2586

Open
panp4n opened this issue Mar 5, 2025 · 3 comments
panp4n commented Mar 5, 2025

How can I run a classification task with a config file in 2.0.0b3?
Is there a parameter in the 2.0.0b3 config file equivalent to task: CLASSIFICATION in 1.2.0?
I updated anomalib from 1.2.0 to 2.0.0b3, and there are too many differences in usage. I think detailed usage documentation is what we need most.

My custom config yaml file:

seed_everything: true
trainer:
  accelerator: auto
  strategy: auto
  devices: auto
  num_nodes: 1
  precision: null
  logger: null
  callbacks: null
  fast_dev_run: false
  max_epochs: null
  min_epochs: null
  max_steps: -1
  min_steps: null
  max_time: null
  limit_train_batches: null
  limit_val_batches: null
  limit_test_batches: null
  limit_predict_batches: null
  overfit_batches: 0.0
  val_check_interval: null
  check_val_every_n_epoch: 1
  num_sanity_val_steps: null
  log_every_n_steps: null
  enable_checkpointing: null
  enable_progress_bar: null
  enable_model_summary: null
  accumulate_grad_batches: 1
  gradient_clip_val: null
  gradient_clip_algorithm: null
  deterministic: null
  benchmark: null
  inference_mode: true
  use_distributed_sampler: true
  profiler: null
  detect_anomaly: false
  barebones: false
  plugins: null
  sync_batchnorm: false
  reload_dataloaders_every_n_epochs: 0

model:
  class_path: anomalib.models.Patchcore
  init_args:
    backbone: wide_resnet50_2
    layers:
    - layer2
    - layer3
    pre_trained: true
    coreset_sampling_ratio: 0.1
    num_neighbors: 9
    pre_processor: true
    post_processor: true
    evaluator: true
    visualizer: true
data:
  class_path: anomalib.data.Folder
  init_args:
    name: hazelnut
    normal_dir: train/good
    root: datasets/MVTec/hazelnut
    abnormal_dir: ['test/crack', 'test/cut', 'test/hole', 'test/print']
    normal_test_dir: test/good
    mask_dir: null
    normal_split_ratio: 0.0
    extensions:
    - .png
    train_batch_size: 16
    eval_batch_size: 16
    num_workers: 8
    train_augmentations: null
    val_augmentations: null
    test_augmentations: null
    augmentations: null
    test_split_mode: from_dir
    test_split_ratio: 0.2
    val_split_mode: same_as_test
    val_split_ratio: 0.5
    seed: null
    
logging:
  log_graph: false
default_root_dir: results
ckpt_path: null
haimat commented Mar 13, 2025

Hello, I have the same issue and don't know how to start with 2.0.
Have you had any luck with this so far?

@syshin0116
Hi, same issue here. I would like to train a classification model without masks. Any updates?

@djdameln (Contributor)

Hi, you can simply omit the task parameter. Anomalib will automatically skip pixel-level evaluation when the mask_dir parameter is absent, so only image-level (classification) metrics are computed.
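
For reference, a minimal data section reflecting this answer might look like the sketch below. It reuses the paths and Folder arguments from the config posted above; treat the exact paths as illustrative assumptions for your own dataset:

```yaml
# Sketch: mask-free classification setup with the Folder datamodule in anomalib 2.0.
# There is no `task` key anymore; leaving `mask_dir` unset (or null) means no
# pixel-level (segmentation) evaluation is performed, only image-level metrics.
data:
  class_path: anomalib.data.Folder
  init_args:
    name: hazelnut
    root: datasets/MVTec/hazelnut
    normal_dir: train/good
    abnormal_dir: ['test/crack', 'test/cut', 'test/hole', 'test/print']
    normal_test_dir: test/good
    mask_dir: null        # absent/null -> image-level classification only
    train_batch_size: 16
    eval_batch_size: 16
    num_workers: 8
```

With a full config like the one posted earlier in this thread, training can then be started via the anomalib CLI, e.g. anomalib train --config config.yaml (assuming the CLI entry point installed with the package).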
