Citation: Hua-Peng Wei, Ying-Ying Deng, Fan Tang, Xing-Jia Pan, Wei-Ming Dong. A comparative study of CNN- and transformer-based visual style transfer. Journal of Computer Science and Technology, 2022, 37(3): 601-614. DOI: 10.1007/s11390-022-2140-7.
[1] Gatys L A, Ecker A S, Bethge M. Image style transfer using convolutional neural networks. In Proc. the 2016 IEEE Conference on Computer Vision and Pattern Recognition, June 2016, pp.2414-2423. DOI: 10.1109/CVPR.2016.265.
[2] Kolkin N, Salavon J, Shakhnarovich G. Style transfer by relaxed optimal transport and self-similarity. In Proc. the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2019, pp.10051-10060. DOI: 10.1109/CVPR.2019.01029.
[3] Huang X, Belongie S. Arbitrary style transfer in real-time with adaptive instance normalization. In Proc. the 2017 IEEE International Conference on Computer Vision, October 2017, pp.1501-1510. DOI: 10.1109/ICCV.2017.167.
[4] Li Y, Fang C, Yang J, Wang Z, Lu X, Yang M H. Universal style transfer via feature transforms. In Proc. the 31st International Conference on Neural Information Processing Systems, December 2017, pp.385-395.
[5] Deng Y, Tang F, Dong W, Sun W, Huang F, Xu C. Arbitrary style transfer via multi-adaptation network. In Proc. the 28th ACM International Conference on Multimedia, October 2020, pp.2719-2727. DOI: 10.1145/3394171.3414015.
[6] Deng Y, Tang F, Dong W, Huang H, Ma C, Xu C. Arbitrary video style transfer via multi-channel correlation. In Proc. the 35th AAAI Conference on Artificial Intelligence, February 2021, pp.1210-1217.
[7] Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez A N, Kaiser Ł, Polosukhin I. Attention is all you need. In Proc. the 31st International Conference on Neural Information Processing Systems, December 2017, pp.6000-6010.
[8] Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, Uszkoreit J, Houlsby N. An image is worth 16x16 words: Transformers for image recognition at scale. In Proc. the 9th International Conference on Learning Representations, May 2021.
[9] Carion N, Massa F, Synnaeve G, Usunier N, Kirillov A, Zagoruyko S. End-to-end object detection with transformers. In Proc. the 16th European Conference on Computer Vision, August 2020, pp.213-229. DOI: 10.1007/978-3-030-58452-8_13.
[10] Yang F, Yang H, Fu J, Lu H, Guo B. Learning texture transformer network for image super-resolution. In Proc. the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2020, pp.5790-5799. DOI: 10.1109/CVPR42600.2020.00583.
[11] Lee K, Chang H, Jiang L, Zhang H, Tu Z, Liu C. ViTGAN: Training GANs with vision transformers. arXiv:2107.04589, 2021. https://arxiv.org/abs/2107.04589, January 2022.
[12] Guo M H, Cai J X, Liu Z N, Mu T J, Martin R R, Hu S M. PCT: Point cloud transformer. Computational Visual Media, 2021, 7(2): 187-199. DOI: 10.1007/s41095-021-0229-5.
[13] Tuli S, Dasgupta I, Grant E, Griffiths T L. Are convolutional neural networks or transformers more like human vision? arXiv:2105.07197, 2021. https://arxiv.org/abs/2105.07197, January 2022.
[14] Naseer M, Ranasinghe K, Khan S, Hayat M, Khan F, Yang M H. Intriguing properties of vision transformers. In Proc. the 35th Conference on Neural Information Processing Systems, December 2021.
[15] Zhou B, Khosla A, Lapedriza A, Oliva A, Torralba A. Learning deep features for discriminative localization. In Proc. the 2016 IEEE Conference on Computer Vision and Pattern Recognition, June 2016, pp.2921-2929. DOI: 10.1109/CVPR.2016.319.
[16] Jing Y, Yang Y, Feng Z, Ye J, Yu Y, Song M. Neural style transfer: A review. IEEE Trans. Visualization and Computer Graphics, 2020, 26(11): 3365-3385. DOI: 10.1109/TVCG.2019.2921336.
[17] Johnson J, Alahi A, Li F F. Perceptual losses for real-time style transfer and super-resolution. In Proc. the 14th European Conference on Computer Vision, October 2016, pp.694-711. DOI: 10.1007/978-3-319-46475-6_43.
[18] Ulyanov D, Vedaldi A, Lempitsky V. Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis. In Proc. the 2017 IEEE Conference on Computer Vision and Pattern Recognition, July 2017, pp.4105-4113. DOI: 10.1109/CVPR.2017.437.
[19] An J, Huang S, Song Y, Dou D, Liu W, Luo J. ArtFlow: Unbiased image style transfer via reversible neural flows. In Proc. the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2021, pp.862-871. DOI: 10.1109/CVPR46437.2021.00092.
[20] Park D Y, Lee K H. Arbitrary style transfer with style-attentional networks. In Proc. the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2019, pp.5880-5888. DOI: 10.1109/CVPR.2019.00603.
[21] Li X, Liu S, Kautz J, Yang M H. Learning linear transformations for fast image and video style transfer. In Proc. the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2019, pp.3809-3817. DOI: 10.1109/CVPR.2019.00393.
[22] Wang Z, Zhao L, Chen H, Qiu L, Mo Q, Lin S, Xing W, Lu D. Diversified arbitrary style transfer via deep feature perturbation. In Proc. the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2020, pp.7786-7795. DOI: 10.1109/CVPR42600.2020.00781.
[23] Wu X, Hu Z, Sheng L, Xu D. StyleFormer: Real-time arbitrary style transfer via parametric style composition. In Proc. the 2021 IEEE/CVF International Conference on Computer Vision, October 2021, pp.14618-14627. DOI: 10.1109/ICCV48922.2021.01435.
[24] Chen M, Radford A, Child R, Wu J, Jun H, Luan D, Sutskever I. Generative pretraining from pixels. In Proc. the 37th International Conference on Machine Learning, July 2020, pp.1691-1703.
[25] Xu Y, Wei H, Lin M, Deng Y, Sheng K, Zhang M, Tang F, Dong W, Huang F, Xu C. Transformers in computational visual media: A survey. Computational Visual Media, 2022, 8(1): 33-62. DOI: 10.1007/s41095-021-0247-3.
[26] Wang Y, Xu Z, Wang X, Shen C, Cheng B, Shen H, Xia H. End-to-end video instance segmentation with transformers. In Proc. the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2021, pp.8741-8750. DOI: 10.1109/CVPR46437.2021.00863.
[27] Chen H, Wang Y, Guo T, Xu C, Deng Y, Liu Z, Ma S, Xu C, Xu C, Gao W. Pre-trained image processing transformer. In Proc. the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2021, pp.12299-12310. DOI: 10.1109/CVPR46437.2021.01212.
[28] Kumar M, Weissenborn D, Kalchbrenner N. Colorization transformer. In Proc. the 9th International Conference on Learning Representations, May 2021.
[29] Liu S, Lin T, He D, Li F, Deng R, Li X, Ding E, Wang H. Paint transformer: Feed forward neural painting with stroke prediction. In Proc. the 2021 IEEE/CVF International Conference on Computer Vision, October 2021, pp.6598-6607. DOI: 10.1109/ICCV48922.2021.00653.
[30] Jiang Y, Chang S, Wang Z. TransGAN: Two pure transformers can make one strong GAN, and that can scale up. In Proc. the 35th Conference on Neural Information Processing Systems, December 2021.
[31] Cordonnier J B, Loukas A, Jaggi M. On the relationship between self-attention and convolutional layers. In Proc. the 8th International Conference on Learning Representations, April 2020.
[32] Xiong R, Yang Y, He D, Zheng K, Zheng S, Xing C, Zhang H, Lan Y, Wang L, Liu T. On layer normalization in the transformer architecture. In Proc. the 37th International Conference on Machine Learning, July 2020, pp.10524-10533.
[33] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. In Proc. the 3rd International Conference on Learning Representations, May 2015.
[34] Dosovitskiy A, Brox T. Generating images with perceptual similarity metrics based on deep networks. In Proc. the 30th International Conference on Neural Information Processing Systems, December 2016, pp.658-666.
[35] Lin T Y, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Dollár P, Zitnick C L. Microsoft COCO: Common objects in context. In Proc. the 13th European Conference on Computer Vision, September 2014, pp.740-755. DOI: 10.1007/978-3-319-10602-1_48.
[36] Phillips F, Mackintosh B. Wiki Art Gallery, Inc.: A case for critical thinking. Issues in Accounting Education, 2011, 26(3): 593-608. DOI: 10.2308/iace-50038.
[37] Kingma D P, Ba J. Adam: A method for stochastic optimization. In Proc. the 3rd International Conference on Learning Representations, May 2015.
[38] Baker N, Lu H, Erlikhman G, Kellman P J. Deep convolutional networks do not classify based on global object shape. PLoS Computational Biology, 2018, 14(12): Article No. e1006613. DOI: 10.1371/journal.pcbi.1006613.
[39] He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In Proc. the 2016 IEEE Conference on Computer Vision and Pattern Recognition, June 2016, pp.770-778. DOI: 10.1109/CVPR.2016.90.
[40] Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg A C, Li F F. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 2015, 115(3): 211-252. DOI: 10.1007/s11263-015-0816-y.
[41] Geirhos R, Rubisch P, Michaelis C, Bethge M, Wichmann F A, Brendel W. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In Proc. the 7th International Conference on Learning Representations, May 2019.
[42] Touvron H, Cord M, Douze M, Massa F, Sablayrolles A, Jégou H. Training data-efficient image transformers &amp; distillation through attention. In Proc. the 38th International Conference on Machine Learning, July 2021, pp.10347-10357.