Classification (3)
MLP-Mixer: An all-MLP Architecture for Vision
Tolstikhin, I. O., Houlsby, N., Kolesnikov, A., Beyer, L., Zhai, X., Unterthiner, T., ... & Dosovitskiy, A. (2021). MLP-Mixer: An all-MLP architecture for vision. Advances in Neural Information Processing Systems, 34. Main idea in this paper: the original ViT (Vision Transformer) tackles image classification by using the encoder layers of the Transformer architecture. In this paper, "Att..
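The preview describes Mixer's core idea: replace attention with two plain MLPs, one mixing information across tokens (patches) and one across channels. A minimal NumPy sketch of a single Mixer block under assumed shapes and randomly initialized weights (illustrative only, not the paper's implementation):

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def mlp(x, w1, w2):
    return gelu(x @ w1) @ w2

def mixer_block(x, tok_w1, tok_w2, ch_w1, ch_w2):
    # token-mixing: transpose so the MLP acts across the token (patch) axis
    y = x + mlp(layer_norm(x).T, tok_w1, tok_w2).T
    # channel-mixing: the MLP acts across the channel axis
    return y + mlp(layer_norm(y), ch_w1, ch_w2)

rng = np.random.default_rng(0)
S, C, D = 16, 32, 64  # tokens, channels, hidden width (made-up sizes)
x = rng.normal(size=(S, C))
out = mixer_block(
    x,
    rng.normal(size=(S, D)) * 0.02, rng.normal(size=(D, S)) * 0.02,
    rng.normal(size=(C, D)) * 0.02, rng.normal(size=(D, C)) * 0.02,
)
print(out.shape)  # (16, 32): the block preserves the (tokens, channels) shape
```

Both sub-layers keep the `(tokens, channels)` table shape, so blocks can be stacked exactly like Transformer encoder layers.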
2022.05.24
ADAST: Attentive Cross-domain EEG-based Sleep Staging Framework with Iterative Self-Training
Eldele, E., Ragab, M., Chen, Z., Wu, M., Kwoh, C. K., Li, X., & Guan, C. (2021). ADAST: Attentive cross-domain EEG-based sleep staging framework with iterative self-training. arXiv preprint arXiv:2107.04470. 1. What is Domain Adaptation? Key concepts in Domain Adaptation: 1. Source Domain: the domain of the dataset distribution used for training ..
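The blurb defines the source domain as the data distribution used for training; the target domain is then the differently distributed data the model must actually handle. A toy NumPy sketch on synthetic data of why that shift hurts a classifier fit only on the source domain (an illustration of domain shift, not the ADAST method itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Source domain: labeled data drawn from the training distribution
src_x = np.concatenate([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
src_y = np.array([0] * 100 + [1] * 100)

# Target domain: same task and labels, but the inputs are shifted
shift = np.array([3.0, 0.0])  # made-up covariate shift
tgt_x = np.concatenate([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))]) + shift
tgt_y = np.array([0] * 100 + [1] * 100)

# Nearest-centroid classifier fit on the source domain only
centroids = np.stack([src_x[src_y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    d = ((x[:, None, :] - centroids[None]) ** 2).sum(-1)
    return d.argmin(axis=1)

src_acc = (predict(src_x) == src_y).mean()
tgt_acc = (predict(tgt_x) == tgt_y).mean()
print(f"source acc: {src_acc:.2f}, target acc: {tgt_acc:.2f}")
```

Domain adaptation methods such as ADAST aim to close exactly this gap, typically without labels on the target side.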
2022.04.25
Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
Zhang, L., Song, J., Gao, A., Chen, J., Bao, C., & Ma, K. (2019). Be your own teacher: Improve the performance of convolutional neural networks via self distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 3713-3722). Abstract: In this paper, self-distillation..
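In self-distillation of this kind, classifiers are attached at intermediate depths and the deepest classifier serves as the teacher for the shallower exits. A hedged NumPy sketch of one shallow exit's training loss, combining hard-label cross entropy with a temperature-softened KL term; `alpha` and `T` here are illustrative values, not the paper's:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_distillation_loss(shallow_logits, deep_logits, labels, T=3.0, alpha=0.3):
    # hard-label cross entropy for the shallow exit
    p = softmax(shallow_logits)
    ce = -np.log(p[np.arange(len(labels)), labels]).mean()
    # KL divergence from the deepest exit's softened distribution (the teacher)
    q_t = softmax(deep_logits, T)
    q_s = softmax(shallow_logits, T)
    kd = (q_t * (np.log(q_t) - np.log(q_s))).sum(-1).mean() * T * T
    return (1 - alpha) * ce + alpha * kd

rng = np.random.default_rng(0)
shallow = rng.normal(size=(8, 10))          # logits from a shallow exit
deep = rng.normal(size=(8, 10))             # logits from the deepest exit
labels = rng.integers(0, 10, size=8)
loss = self_distillation_loss(shallow, deep, labels)
print(float(loss))
```

Because the teacher is a deeper branch of the same network, no separately pretrained teacher model is required.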
2021.06.02