Deep Learning (7)
-
[Paper Review] Class-Imbalanced Semi-Supervised Learning with Adaptive Thresholding
Class-Imbalanced Semi-Supervised Learning with Adaptive Thresholding Guo, L. Z., & Li, Y. F. (2022, June). Class-imbalanced semi-supervised learning with adaptive thresholding. In International Conference on Machine Learning (pp. 8082-8094). PMLR. This paper argues that an 'Adaptive Threshold' is needed because the existing FixMatch method suffers from the problem of a high fixed confidence threshold. In particular, imbalanced semi-supervised (lon..
2024.01.22 -
[Paper Review] FixMatch : Simplifying Semi-Supervised Learning with Consistency and Confidence
FixMatch : Simplifying Semi-Supervised Learning with Consistency and Confidence Sohn, K., Berthelot, D., Carlini, N., Zhang, Z., Zhang, H., Raffel, C. A., ... & Li, C. L. (2020). Fixmatch: Simplifying semi-supervised learning with consistency and confidence. Advances in neural information processing systems, 33, 596-608. In semi-supervised learning, this paper, together with MixMatch and ReMixMatch, serves as a backbone archi.. A rough sketch of its confidence-thresholded consistency loss appears after this entry.
2024.01.22 -
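Below is a minimal sketch of the confidence-thresholded consistency loss that FixMatch applies to unlabeled data, written in PyTorch for illustration; `model`, the weak/strong augmented inputs, and the fixed threshold `tau=0.95` are placeholders assumed for this sketch, not the authors' reference implementation.

```python
# Minimal sketch of FixMatch's unlabeled-data loss (illustrative, not the official code).
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, x_weak, x_strong, tau=0.95):
    """Pseudo-label the weakly augmented view, keep only confident predictions,
    and enforce consistency on the strongly augmented view."""
    with torch.no_grad():
        probs_weak = torch.softmax(model(x_weak), dim=-1)  # predictions on weak view
        conf, pseudo_label = probs_weak.max(dim=-1)        # confidence and hard pseudo-label
        mask = (conf >= tau).float()                       # fixed confidence threshold

    logits_strong = model(x_strong)                        # predictions on strong view
    per_sample = F.cross_entropy(logits_strong, pseudo_label, reduction="none")
    return (per_sample * mask).mean()                      # masked consistency loss
```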
[Paper Review] FreeMatch: Self-Adaptive Thresholding for Semi-Supervised Learning
FreeMatch: Self-Adaptive Thresholding for Semi-Supervised Learning Wang, Y., Chen, H., Heng, Q., Hou, W., Fan, Y., Wu, Z., ... & Xie, X. (2022). Freematch: Self-adaptive thresholding for semi-supervised learning. arXiv preprint arXiv:2205.07246. Summary: This paper takes the fixed scalar $\tau$, the hyper-parameter in semi-supervised learning that checks a pseudo-label's confidence and decides whether or not to use it, and makes it adaptive (global thre.. A minimal sketch of this self-adaptive thresholding idea appears after this entry.
2024.01.19 -
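Below is a rough sketch of the self-adaptive (global plus class-wise) threshold update described in the summary above, assuming PyTorch; the momentum value, initialization, and class-wise modulation are simplified relative to the full FreeMatch recipe and are assumptions of this sketch.

```python
# Rough sketch of a self-adaptive threshold via EMAs of model confidence (illustrative).
import torch

class SelfAdaptiveThreshold:
    def __init__(self, num_classes, momentum=0.999):
        self.m = momentum
        self.tau_global = torch.tensor(1.0 / num_classes)             # start at uniform confidence
        self.p_class = torch.full((num_classes,), 1.0 / num_classes)  # class probability tendencies

    @torch.no_grad()
    def update(self, probs_weak):
        """probs_weak: (B, C) softmax outputs on weakly augmented unlabeled data."""
        max_conf = probs_weak.max(dim=-1).values
        self.tau_global = self.m * self.tau_global + (1 - self.m) * max_conf.mean()
        self.p_class = self.m * self.p_class + (1 - self.m) * probs_weak.mean(dim=0)

    def thresholds(self):
        """Class-wise thresholds: global EMA modulated by normalized class tendencies."""
        return self.tau_global * (self.p_class / self.p_class.max())
```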
MLP-Mixer: An all-MLP Architecture for Vision
MLP-Mixer: An all-MLP Architecture for Vision Tolstikhin, I. O., Houlsby, N., Kolesnikov, A., Beyer, L., Zhai, X., Unterthiner, T., ... & Dosovitskiy, A. (2021). Mlp-mixer: An all-mlp architecture for vision. Advances in Neural Information Processing Systems, 34. Main idea in this paper: the existing ViT (Vision Transformer) solved image classification by using the encoder layers of the Transformer architecture. In this paper, "Att.. A sketch of a single Mixer block appears after this entry.
2022.05.24 -
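Below is a rough sketch of a single Mixer block (token-mixing MLP across patches, then channel-mixing MLP across features), assuming PyTorch; the hidden sizes and class names are illustrative choices, not the authors' reference code.

```python
# Rough sketch of one MLP-Mixer block (illustrative layer sizes).
import torch
import torch.nn as nn

class MlpBlock(nn.Module):
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden_dim), nn.GELU(), nn.Linear(hidden_dim, dim))

    def forward(self, x):
        return self.net(x)

class MixerBlock(nn.Module):
    """Token-mixing MLP over the patch dimension, then channel-mixing MLP over features."""
    def __init__(self, num_patches, dim, token_hidden=256, channel_hidden=1024):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = MlpBlock(num_patches, token_hidden)
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = MlpBlock(dim, channel_hidden)

    def forward(self, x):                        # x: (batch, patches, channels)
        y = self.norm1(x).transpose(1, 2)        # mix across the patch (token) dimension
        x = x + self.token_mlp(y).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))  # mix across the channel dimension
        return x
```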
Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation Zhang, L., Song, J., Gao, A., Chen, J., Bao, C., & Ma, K. (2019). Be your own teacher: Improve the performance of convolutional neural networks via self distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 3713-3722). Abstract: In this paper, self-distillation..
2021.06.02 -
[Paper Summary] Enhanced Deep Residual Networks for Single Image Super-Resolution
Enhanced Deep Residual Networks for Single Image Super-Resolution Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, Kyoung Mu Lee; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017, pp. 136-144. Abstract: Recent SR research has progressed alongside the development of DCNNs (deep convolutional neural networks). In particular, residual learning techniques have contributed to performance improvements. The enhanced deep super-resolution network (EDSR), which currently..
2019.12.24