Paper ID | MLR-APPL-IP-1.8
Paper Title | SELF-ORGANIZED RESIDUAL BLOCKS FOR IMAGE SUPER-RESOLUTION
Authors | Onur Keles, Ahmet Murat Tekalp, Koç University, Turkey; Junaid Malik, Tampere University, Finland; Serkan Kiranyaz, Qatar University, Qatar
Session | MLR-APPL-IP-1: Machine learning for image processing 1
Location | Area E
Session Time | Monday, 20 September, 13:30 - 15:00
Presentation Time | Monday, 20 September, 13:30 - 15:00
Presentation | Poster
Topic | Applications of Machine Learning: Machine learning for image processing
Abstract | It has become standard practice to use convolutional networks (ConvNets) with ReLU non-linearity in image restoration and super-resolution (SR). Although the universal approximation theorem states that a multi-layer neural network can approximate any non-linear function with the desired precision, it does not reveal the best network architecture to do so. Recently, operational neural networks (ONNs), which choose the best non-linearity from a set of alternatives, and their "self-organized" variants (Self-ONNs), which approximate any non-linearity via a Taylor-series expansion, have been proposed to address well-known limitations and drawbacks of conventional ConvNets, such as the network homogeneity that results from using only the McCulloch-Pitts neuron model. In this paper, we propose the concept of self-organized operational residual (SOR) blocks, and present hybrid network architectures that combine regular residual and SOR blocks to strike a balance between the benefits of stronger non-linearity and the overall number of parameters. The experimental results demonstrate that the proposed architectures yield performance improvements in both PSNR and perceptual metrics.
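To make the idea concrete, below is a minimal PyTorch sketch of a self-organized operational layer and an SOR block in the spirit of the abstract. The class names (SelfONNConv2d, SORBlock), the q_order hyperparameter, the tanh input bounding, and the two-layer block layout are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming PyTorch; names and hyperparameters are hypothetical.
import torch
import torch.nn as nn


class SelfONNConv2d(nn.Module):
    """Self-organized operational layer: rather than a fixed ReLU non-linearity,
    it learns a Taylor-series-like expansion by convolving powers of the input
    with separate kernels and summing the results."""

    def __init__(self, in_channels, out_channels, kernel_size=3, q_order=3):
        super().__init__()
        self.q_order = q_order  # number of Taylor terms (assumed hyperparameter)
        self.convs = nn.ModuleList(
            [nn.Conv2d(in_channels, out_channels, kernel_size,
                       padding=kernel_size // 2) for _ in range(q_order)]
        )

    def forward(self, x):
        # Bound the activations so higher powers stay numerically stable
        # (a common choice for this kind of layer; assumed here).
        x = torch.tanh(x)
        # Sum of conv_q(x^q) for q = 1..q_order approximates a learned non-linearity.
        return sum(conv(x ** (q + 1)) for q, conv in enumerate(self.convs))


class SORBlock(nn.Module):
    """Self-organized operational residual block: two operational layers
    wrapped in a skip connection, mirroring a standard residual block but
    with learned non-linearities instead of ReLU."""

    def __init__(self, channels, q_order=3):
        super().__init__()
        self.layer1 = SelfONNConv2d(channels, channels, q_order=q_order)
        self.layer2 = SelfONNConv2d(channels, channels, q_order=q_order)

    def forward(self, x):
        return x + self.layer2(self.layer1(x))


if __name__ == "__main__":
    block = SORBlock(channels=64, q_order=3)
    y = block(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```

A hybrid network along the lines described in the abstract would interleave such SOR blocks with standard convolution-plus-ReLU residual blocks, trading the extra parameters of the Taylor terms (q_order convolutions per layer) against the benefit of the stronger learned non-linearity.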