Saliency Prediction with Relation-Aware Global Attention Module

Abstract
Deep learning methods have achieved great success in the saliency prediction task, and the attention mechanism has been proven effective in enhancing the performance of convolutional neural networks (CNNs) in many studies. In this paper, we propose a new architecture that combines an encoder-decoder structure, multi-level integration, and a relation-aware global attention module. The encoder-decoder structure is the main backbone for extracting deeper features. The multi-level integration constructs an asymmetric path that avoids information loss. The relation-aware global attention module enhances the network both channel-wise and spatial-wise. The architecture is trained and tested on the SALICON 2017 benchmark and obtains competitive results compared with related research.
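The abstract above does not include implementation details, but the spatial branch of a relation-aware global attention module is commonly described as: compute pairwise relations between all spatial positions, treat each position's incoming and outgoing relations as its "relation feature", and map that to a scalar attention weight. The sketch below is a minimal NumPy illustration under those assumptions; the function name `spatial_rga`, the dot-product relation, and the random projection standing in for learned weights are all illustrative, not the paper's actual implementation.

```python
import numpy as np

def spatial_rga(feat, seed=0):
    """Minimal spatial relation-aware global attention sketch (illustrative).

    feat: (C, H, W) feature map.
    Returns (reweighted feature map of the same shape, (H, W) attention map).
    """
    rng = np.random.default_rng(seed)
    C, H, W = feat.shape
    N = H * W
    x = feat.reshape(C, N).T                 # (N, C): one vector per spatial position
    # Pairwise relations r_ij = <x_i, x_j> between every pair of positions.
    rel = x @ x.T                            # (N, N)
    # Stack incoming and outgoing relations as each position's relation feature.
    rel_feat = np.concatenate([rel, rel.T], axis=1)  # (N, 2N)
    # Stand-in for a learned projection: random weights, for illustration only.
    w = rng.standard_normal((2 * N, 1)) / np.sqrt(2 * N)
    attn = 1.0 / (1.0 + np.exp(-(rel_feat @ w)))     # sigmoid gate in (0, 1)
    out = (x * attn).T.reshape(C, H, W)      # reweight each spatial position
    return out, attn.reshape(H, W)
```

In a trained network the projection would be a small learned MLP shared across positions, and an analogous branch over channels (relations between channel vectors) provides the channel-wise enhancement mentioned in the abstract.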
Author(s)
Ge Cao; Kang-Hyun Jo
Issued Date
2021
Type
Article
Keyword
Attention mechanisms; Relation-aware global attention; Saliency prediction
DOI
10.1007/978-3-030-81638-4_25
URI
https://oak.ulsan.ac.kr/handle/2021.oak/9158
https://ulsan-primo.hosted.exlibrisgroup.com/primo-explore/fulldisplay?docid=TN_cdi_springer_books_10_1007_978_3_030_81638_4_25&context=PC&vid=ULSAN&lang=ko_KR&search_scope=default_scope&adaptor=primo_central_multiple_fe&tab=default_tab&query=any,contains,Saliency%20Prediction%20with%20Relation-Aware%20Global%20Attention%20Module&offset=0&pcAvailability=true
Publisher
Communications in Computer and Information Science
Location
Switzerland
Language
English
ISSN
1865-0929
Citation Volume
1405
Citation Number
1
Citation Start Page
309
Citation End Page
316
Appears in Collections:
Engineering > IT Convergence
Authorize & License
  • Authorize: Open access
Files in This Item:
  • There are no files associated with this item.

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.