USTC-KnowledgeComputingLab/InhibitoryAttention


Lateral-Inhibition-enhanced-Attention

This repository contains an implementation of Lateral-Inhibition-enhanced Attention, built on the ViT and DeiT models, for image classification.

Install dependencies

uv venv
source .venv/bin/activate
uv sync

Data preparation

The dataset must be prepared as follows:

$ tree dataset_name
dataset_name
├── train
│   ├── class1
│   │   ├── img1.jpeg
│   │   ├── img2.jpeg
│   │   └── ...
│   ├── class2
│   │   ├── img3.jpeg
│   │   └── ...
│   └── ...
├── test
│   ├── class1
│   │   ├── img4.jpeg
│   │   ├── img5.jpeg
│   │   └── ...
│   ├── class2
│   │   ├── img6.jpeg
│   │   └── ...
│   └── ...
└── val
    ├── class1
    │   ├── img7.jpeg
    │   ├── img8.jpeg
    │   └── ...
    ├── class2
    │   ├── img9.jpeg
    │   └── ...
    └── ...
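Before training, it can help to sanity-check that your directory tree matches this layout. Below is a minimal, stdlib-only sketch; the `check_layout` helper and its messages are illustrative, not part of this repository:

```python
from pathlib import Path

def check_layout(root: str, splits=("train", "val", "test")) -> list[str]:
    """Return a list of problems found in an ImageFolder-style dataset tree."""
    problems = []
    root_path = Path(root)
    for split in splits:
        split_dir = root_path / split
        if not split_dir.is_dir():
            problems.append(f"missing split directory: {split_dir}")
            continue
        # Each split must contain one folder per class.
        class_dirs = [d for d in split_dir.iterdir() if d.is_dir()]
        if not class_dirs:
            problems.append(f"no class folders under {split_dir}")
        for cls in class_dirs:
            # A class folder with no files usually indicates a copy error.
            if not any(cls.iterdir()):
                problems.append(f"empty class folder: {cls}")
    return problems
```

An empty return value means the tree has the expected train/val/test structure with non-empty class folders.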

Train Models from Scratch

Example on a 2090 server

  • To train ViT on CIFAR-100 from scratch, the simplest way is:
torchrun main.py --cfg cfgs/CIFAR-100/vit.yaml
  • To test the model on the val and test sets, run:
torchrun main.py --cfg cfgs/CIFAR-100/vit.yaml --test --resume output/CIFAR-100/vit/default/max_acc.pth
  • To get attention maps, run:
torchrun main.py --cfg cfgs/CIFAR-100/vit.yaml --get_attention_map --resume output/CIFAR-100/vit/default/max_acc.pth

You can add --nproc_per_node=4 to use 4 GPUs, and add --master-port=28900 to run multiple commands in one session.
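If you combine these flags often, a small wrapper function can help; `train_multi_gpu` is an illustrative name of ours, not something provided by the repository:

```shell
# Sketch: single-node multi-GPU launch. The flag values (4 GPUs,
# port 28900) are illustrative; adjust them to your machine.
train_multi_gpu() {
  # $1 = config file, $2 = optional rendezvous port (default 28900)
  torchrun --nproc_per_node=4 --master-port="${2:-28900}" main.py --cfg "$1"
}

# usage: train_multi_gpu cfgs/CIFAR-100/vit.yaml 28901
```

Giving each run its own --master-port is what lets several torchrun launches coexist in one session.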

See command.txt for more commands.

About

Inhibitory neurons and Attention (brain-inspired computing group)
