Bimonthly    Since 1986
ISSN 1000-9000(Print)
CN 11-2296/TP
Publication Details
Edited by: Editorial Board of Journal of Computer Science and Technology
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Distributed by:
China: All Local Post Offices
Other Countries: Springer
  • Table of Contents
      05 May 2019, Volume 34 Issue 3
    Special Section of CVM 2019
    Shi-Min Hu, Hongbo Fu, Marcus Magnor
    Journal of Computer Science and Technology, 2019, 34 (3): 507-508.  DOI: 10.1007/s11390-019-1922-z
    A Large Chinese Text Dataset in the Wild
    Tai-Ling Yuan, Zhe Zhu, Kun Xu, Cheng-Jun Li, Tai-Jiang Mu, Shi-Min Hu
    Journal of Computer Science and Technology, 2019, 34 (3): 509-521.  DOI: 10.1007/s11390-019-1923-y
    In this paper, we introduce a very large Chinese text dataset in the wild. While optical character recognition (OCR) in document images is well studied and many commercial tools are available, the detection and recognition of text in natural images remain challenging, especially for complicated character sets such as Chinese. Lack of training data has always been a problem, especially for deep learning methods, which require massive amounts of it. In this paper, we provide details of a newly created dataset of Chinese text, with about 1 million Chinese characters (from 3,850 unique ones) annotated by experts in over 30,000 street view images. This is a challenging dataset with good diversity, containing planar text, raised text, text under poor illumination, distant text, partially occluded text, etc. For each character, the annotation includes its underlying character, bounding box, and six attributes. The attributes indicate the character's background complexity, appearance, style, etc. Besides the dataset, we give baseline results using state-of-the-art methods for three tasks: character recognition (top-1 accuracy of 80.5%), character detection (AP of 70.9%), and text line detection (AED of 22.1). The dataset, source code, and trained models are publicly available.
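The character-recognition baseline above is scored by top-1 accuracy. As a minimal sketch of how that metric is computed in general (the function and sample data here are illustrative, not part of the dataset's actual tooling):

```python
def top1_accuracy(pred_rankings, labels):
    """Top-1 accuracy over a batch of predictions.

    `pred_rankings[i]` is a list of candidate characters ordered by model
    confidence; a prediction counts as correct when its first entry equals
    the ground-truth label.  (Names here are illustrative, not the
    dataset's actual API.)
    """
    hits = sum(1 for ranking, y in zip(pred_rankings, labels) if ranking[0] == y)
    return hits / len(labels)
```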
    Bidirectional Optimization Coupled Lightweight Networks for Efficient and Robust Multi-Person 2D Pose Estimation
    Shuai Li, Zheng Fang, Wen-Feng Song, Ai-Min Hao, Hong Qin
    Journal of Computer Science and Technology, 2019, 34 (3): 522-536.  DOI: 10.1007/s11390-019-1924-x
    For multi-person 2D pose estimation, current deep learning based methods have exhibited impressive performance, but the trade-offs among efficiency, robustness, and accuracy in the existing approaches remain unavoidable. In principle, bottom-up methods are superior to top-down methods in efficiency, but they perform worse in accuracy. To make full use of their respective advantages, in this paper we design a novel bidirectional optimization coupled lightweight network (BOCLN) architecture for efficient, robust, and general-purpose multi-person 2D (2-dimensional) pose estimation from natural images. In the BOCLN framework, the bottom-up network focuses on global features, while the top-down network places emphasis on detailed features. The entire framework shares global features along the bottom-up data stream, while the top-down data stream aims to accelerate accurate pose estimation. In particular, to exploit the priors of the relationship among human joints, we propose a probability limb heat map to represent the spatial context of the joints and guide the overall pose skeleton prediction, so that each person's pose estimation in cluttered scenes (involving crowds) is as accurate and robust as possible. Benefiting from the novel BOCLN architecture, the time-consuming refinement procedure can thus be simplified to an efficient lightweight network. Extensive experiments and evaluations on public benchmarks confirm that our new method is more efficient and robust, yet still attains competitive accuracy compared with state-of-the-art methods. Our BOCLN shows even greater promise in online applications.
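Heat-map based pose estimators like the one described above ultimately decode joint coordinates from predicted probability maps. A minimal, hypothetical sketch of the standard per-joint peak-picking step (the paper's probability limb heat maps add pairwise spatial context on top of per-joint maps like this):

```python
def decode_heatmap(heatmap, threshold=0.1):
    """Decode one joint's predicted heat map into pixel coordinates.

    Standard peak picking: take the argmax cell and report its confidence;
    below the threshold, the joint is treated as not detected.  This is a
    generic decoding step, not the paper's specific architecture.
    """
    best, best_rc = -1.0, None
    for r, row in enumerate(heatmap):
        for c, v in enumerate(row):
            if v > best:
                best, best_rc = v, (r, c)
    return (best_rc, best) if best >= threshold else (None, best)
```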
    Single Image Super-Resolution via Dynamic Lightweight Database with Local-Feature Based Interpolation
    Na Ding, Ye-Peng Liu, Lin-Wei Fan, Cai-Ming Zhang
    Journal of Computer Science and Technology, 2019, 34 (3): 537-549.  DOI: 10.1007/s11390-019-1925-9
    Single image super-resolution is devoted to generating a high-resolution image from a low-resolution one, and has been a research hotspot because of its significant applications. A novel method based entirely on the single input image itself is proposed in this paper. Firstly, a local-feature based interpolation method, in which both edge pixel properties and location information are taken into consideration, is presented to obtain a better initialization. Then, a dynamic lightweight database of self-examples is built with the aid of our in-depth study on self-similarity, from which adaptive linear regressions are learned to directly map a low-resolution patch into its high-resolution version. Furthermore, a gradual upscaling strategy accompanied by iterative optimization is employed to enhance the consistency at each step. Even though no external information is used, extensive experimental comparisons with state-of-the-art methods on standard benchmarks demonstrate the competitive performance of the proposed scheme in both visual effect and objective evaluation.
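For context, the standard initialization that the local-feature based interpolation above improves upon is plain bilinear interpolation, which weighs the four nearest source pixels only by distance and ignores edge structure. A self-contained sketch of that baseline (not the authors' method):

```python
def bilinear_upscale(img, scale):
    """Upscale a grayscale image (list of rows) by an integer factor
    using plain bilinear interpolation.

    Each output pixel is mapped back into source coordinates and blended
    from its four neighbors by distance alone; edge direction is ignored,
    which is exactly what a local-feature based scheme tries to fix.
    """
    h, w = len(img), len(img[0])
    H, W = h * scale, w * scale
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            # Back-project the output pixel, clamped to the source grid.
            y = min(i / scale, h - 1.0)
            x = min(j / scale, w - 1.0)
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            out[i][j] = (img[y0][x0] * (1 - dy) * (1 - dx)
                         + img[y0][x1] * (1 - dy) * dx
                         + img[y1][x0] * dy * (1 - dx)
                         + img[y1][x1] * dy * dx)
    return out
```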
    Fast and Error-Bounded Space-Variant Bilateral Filtering
    Meng-Ke Yuan, Long-Quan Dai, Dong-Ming Yan, Li-Qiang Zhang, Jun Xiao, Xiao-Peng Zhang
    Journal of Computer Science and Technology, 2019, 34 (3): 550-568.  DOI: 10.1007/s11390-019-1926-8
    The traditional space-invariant isotropic kernel utilized by a bilateral filter (BF) frequently leads to blurry edges and gradient reversal artifacts due to the existence of a large number of outliers in the local averaging window. However, the efficient and accurate estimation of space-variant kernels that adapt to image structures, and the fast realization of the corresponding space-variant bilateral filtering, are challenging problems. To address these problems, we present a space-variant BF (SVBF) and a linear-time, error-bounded acceleration method for it. First, we accurately estimate space-variant anisotropic kernels that vary with image structures in linear time through the structure tensor and the minimum spanning tree. Second, we perform SVBF in linear time using two error-bounded approximation methods, namely, low-rank tensor approximation via higher-order singular value decomposition and exponential sum approximation. Therefore, the proposed SVBF can efficiently achieve good edge-preserving results. We validate the advantages of the proposed filter in applications including image denoising, image enhancement, and image focus editing. Experimental results demonstrate that our fast and error-bounded SVBF is superior to state-of-the-art methods.
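For reference, the space-invariant bilateral filter that the SVBF above generalizes combines a fixed spatial Gaussian with a range kernel on intensity differences. A minimal 1D sketch of that classic baseline (the paper's contribution is making the kernels vary with image structure, which this sketch does not do):

```python
import math

def bilateral_filter_1d(signal, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Classic space-invariant bilateral filter on a 1D signal.

    Each output sample is a normalized average of its neighbors, weighted
    by both spatial distance (sigma_s) and intensity difference (sigma_r).
    """
    out = []
    for i, center in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w_s = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))               # spatial kernel
            w_r = math.exp(-((center - signal[j]) ** 2) / (2 * sigma_r ** 2))  # range kernel
            w = w_s * w_r
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# A noisy step edge: the filter smooths the flat regions but, unlike a
# plain Gaussian, preserves the discontinuity.
noisy_step = [0.0, 0.02, -0.01, 0.01, 1.0, 0.99, 1.02, 1.01]
filtered = bilateral_filter_1d(noisy_step)
```

This edge-preserving behavior is exactly what the space-variant extension aims to improve further when the fixed kernel's averaging window contains many outliers.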
    Defocus Hyperspectral Image Deblurring with Adaptive Reference Image and Scale Map
    De-Wang Li, Lin-Jing Lai, Hua Huang
    Journal of Computer Science and Technology, 2019, 34 (3): 569-580.  DOI: 10.1007/s11390-019-1927-7
    Defocus blur is one of the primary problems in hyperspectral imaging systems equipped with simple lenses. Most previous deblurring methods focus on how to utilize the structure information of a single channel, while ignoring the characteristics of hyperspectral images. In this work, we analyze the correlations and differences among spectral channels, and propose a deblurring framework for defocus hyperspectral images. First, we divide the hyperspectral image channels into two sets, and the set with less blur is treated as a group of spectral bases. Then, according to the inherent correlations of spectral channels, a reference image is derived from the spectral bases to guide the restoration of blurry channels. Finally, considering the disagreement between the reference image and the ground truth, a scale map based on gradient similarity is introduced as a prior in the deblurring framework. Experimental results on a public dataset demonstrate that the proposed method outperforms several image deblurring methods in both visual effect and quality metrics.
    Geometry-Aware ICP for Scene Reconstruction from RGB-D Camera
    Bo Ren, Jia-Cheng Wu, Ya-Lei Lv, Ming-Ming Cheng, Shao-Ping Lu
    Journal of Computer Science and Technology, 2019, 34 (3): 581-593.  DOI: 10.1007/s11390-019-1928-6
    The Iterative Closest Point (ICP) scheme has been widely used for the registration of surfaces and point clouds. However, when working on depth image sequences that contain large geometric planes with few (or even no) details, existing ICP algorithms are prone to tangential drifting and erroneous rotational estimation due to input device errors. In this paper, we propose a novel ICP algorithm that aims to overcome these drawbacks, and provides significantly more stable registration estimates for simultaneous localization and mapping (SLAM) tasks on RGB-D camera inputs. In our approach, the tangential drifting and the rotational estimation error are reduced by: 1) updating the conventional Euclidean distance term with local geometry information, and 2) introducing a new camera stabilization term that prevents improper camera movement in the calculation. Our approach is simple, fast, effective, and readily integrates with previous ICP algorithms. We test our new method with the TUM RGB-D SLAM dataset on state-of-the-art real-time 3D dense reconstruction platforms, i.e., ElasticFusion and Kintinuous. Experiments show that our new strategy outperforms all previous ones on various RGB-D data sequences under different combinations of registration systems and solutions.
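At the core of any ICP variant, including the one above, is a closed-form least-squares rigid alignment between corresponded point sets. A minimal 2D sketch of that inner solve (the paper's actual contribution, the local-geometry and camera-stabilization terms, is not reproduced here):

```python
import math

def align_2d(src, dst):
    """One least-squares rigid alignment step (2D, known correspondences).

    Finds the rotation angle theta and translation (tx, ty) minimizing
    sum ||R p_i + t - q_i||^2 over corresponded points.  This is the
    classic point-to-point solve at the heart of ICP.
    """
    n = len(src)
    cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
    # Accumulate cross-covariance terms of the centered point sets.
    s_xx = s_xy = s_yx = s_yy = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s; xd -= cx_d; yd -= cy_d
        s_xx += xs * xd; s_xy += xs * yd
        s_yx += ys * xd; s_yy += ys * yd
    theta = math.atan2(s_xy - s_yx, s_xx + s_yy)  # optimal 2D rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, (tx, ty)
```

Full ICP alternates this solve with nearest-neighbor correspondence search; the paper modifies the objective itself so the solve stays stable on near-planar scenes.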
    A Survey of 3D Indoor Scene Synthesis
    Song-Hai Zhang, Shao-Kui Zhang, Yuan Liang, Peter Hall
    Journal of Computer Science and Technology, 2019, 34 (3): 594-608.  DOI: 10.1007/s11390-019-1929-5
    Indoor scene synthesis has become a popular topic in recent years. Synthesizing functional and plausible indoor scenes is an inherently difficult task, since it requires considerable knowledge both to choose reasonable object categories and to arrange objects appropriately. In this survey, we propose four criteria that group a wide range of 3D (three-dimensional) indoor scene synthesis techniques into four categories. We comprehensively compare the techniques within and across these categories to reveal their effectiveness and drawbacks, and discuss the remaining open problems.
    Artificial Intelligence and Pattern Recognition
    Blind Image Deblurring via Adaptive Optimization with Flexible Sparse Structure Control
    Ri-Sheng Liu, Cai-Sheng Mao, Zhi-Hui Wang, Hao-Jie Li
    Journal of Computer Science and Technology, 2019, 34 (3): 609-621.  DOI: 10.1007/s11390-019-1930-z
    Blind image deblurring is a long-standing ill-posed inverse problem that aims to recover a latent sharp image given only a blurry observation. So far, existing studies have designed many effective priors w.r.t. the latent image within the maximum a posteriori (MAP) framework in order to narrow down the solution space. These non-convex priors are always integrated into the final deblurring model, which makes the optimization challenging. However, due to unknown image distributions, complex kernel structures and non-uniform noise in real-world scenarios, it is indeed challenging to explicitly design a fixed prior for all cases. Thus we adopt the idea of adaptive optimization and propose sparse structure control (SSC) for the latent image during the optimization process. In this paper, we formulate only the necessary optimization constraints in a lightweight MAP model with no priors. Then we develop an inexact projected gradient scheme to incorporate flexible SSC in MAP inference. Besides the lp-norm based SSC of our previous work, we also train a group of denoising convolutional neural networks (CNNs) to learn the sparse image structure automatically from training data under different noise levels, and we show that CNN-based SSC achieves results similar to the lp-norm based variant while being more robust to noise. Extensive experiments demonstrate that the proposed adaptive optimization scheme with the two types of SSC achieves state-of-the-art results on both synthetic data and real-world images.
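To make the projected-gradient idea concrete: for the l1 instance of lp-norm based sparse structure control, the projection step reduces to element-wise soft thresholding, the proximal operator of the l1 norm. A minimal sketch (the paper also supports general lp and learned CNN-based projections, which are not shown here):

```python
def soft_threshold(v, lam):
    """Proximal operator of the l1 norm (element-wise soft thresholding).

    An inexact projected-gradient scheme alternates a data-fidelity
    gradient step with a sparsity projection; for l1-based sparse
    structure control that projection is exactly this shrinkage, applied
    to each gradient coefficient of the latent image.
    """
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0
```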
    Cloud Detection Using Super Pixel Classification and Semantic Segmentation
    Han Liu, Hang Du, Dan Zeng, Qi Tian
    Journal of Computer Science and Technology, 2019, 34 (3): 622-633.  DOI: 10.1007/s11390-019-1931-y
    Cloud detection plays a very significant role in remote sensing image processing. This paper introduces a cloud detection method based on super pixel level classification and semantic segmentation. Firstly, remote sensing images are segmented into super pixels, which compose a super pixel level remote sensing image database. Though cloud detection is essentially a binary classification task, our database is labeled with four categories to improve the generalization ability: thick cloud, cirrus cloud, building, and other culture. Secondly, the super pixel level database is used to train our cloud detection models based on a convolutional neural network (CNN) and deep forest. A hierarchical fusion CNN is proposed, considering that super pixel level images contain less semantic information than normal images. Taking full advantage of low-level features such as color and texture information, it is more applicable to super pixel level classification. Besides, a distance metric is proposed to refine ambiguous super pixels. Thirdly, an end-to-end cloud detection model based on semantic segmentation is introduced. This model has no restrictions on the input size, and takes less time. Experimental results show that, compared with other cloud detection methods, our proposed method achieves better performance.
    PVSS: A Progressive Vehicle Search System for Video Surveillance Networks
    Xin-Chen Liu, Wu Liu, Hua-Dong Ma, Shuang-Qun Li
    Journal of Computer Science and Technology, 2019, 34 (3): 634-644.  DOI: 10.1007/s11390-019-1932-x
    This paper focuses on the task of searching for a specific vehicle that appears in a surveillance network. Existing methods usually assume the vehicle images are well cropped from the surveillance videos, and then use visual attributes, like colors and types, or license plate numbers to match the target vehicle in the image set. However, a complete vehicle search system should consider the problems of vehicle detection, representation, indexing, storage, matching, and so on. Besides, it is very difficult for attribute-based search to accurately find the same vehicle due to intra-instance changes across different cameras and the extremely uncertain environment. Moreover, license plates may be mis-recognized in surveillance scenes due to low resolution and noise. In this paper, a progressive vehicle search system, named PVSS, is designed to solve the above problems. PVSS consists of three modules: the crawler, the indexer, and the searcher. The vehicle crawler detects and tracks vehicles in surveillance videos and transfers the captured vehicle images, metadata and contextual information to the server or cloud. Then multi-grained attributes, such as visual features and license plate fingerprints, are extracted and indexed by the vehicle indexer. Finally, the vehicle searcher takes as input a query triplet consisting of a vehicle image, a time range, and a spatial scope, and the target vehicle is searched for in the database by a progressive process. Extensive experiments on a public dataset from a real surveillance network validate the effectiveness of PVSS.
    Competitive Cloud Pricing for Long-Term Revenue Maximization
    Jiang Rong, Tao Qin, Bo An
    Journal of Computer Science and Technology, 2019, 34 (3): 645-656.  DOI: 10.1007/s11390-019-1933-9
    We study the pricing policy optimization problem for cloud providers while considering three properties of the real-world market: 1) providers have only incomplete information about the market; 2) the market is evolving, with an increasing number of users and a decreasing marginal cost for providers; 3) it is fully competitive because of the revenue-driven nature of providers and users. As far as we know, no existing work investigates optimal pricing policies under such realistic settings. We first propose a comprehensive model for the real-world cloud market and formulate it as a stochastic game. Then we use the Markov perfect equilibrium (MPE) to describe providers' optimal policies. Next we decompose the problem of computing the MPE into two subtasks: 1) dividing the stochastic game into many normal-form games and calculating their Nash equilibria, for which we develop an algorithm that is guaranteed to converge, and 2) computing the MPE of the original game, which is efficiently solved by an algorithm that combines the Nash equilibria under a mild assumption. Experimental results show that our algorithms are efficient for computing the MPE and that the MPE strategy leads to much higher profits for providers compared with existing policies.
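To illustrate the equilibrium computation in the first subtask on a deliberately tiny example: in a two-provider market with linear demand, each provider's best response to the rival's price has a closed form, and iterating best responses converges to the Nash equilibrium of the stage game. This toy model (all parameters hypothetical) is far simpler than the paper's stochastic game, but shows the fixed-point structure:

```python
def best_response(p_other, a=10.0, b=2.0, d=1.0, cost=1.0):
    """Profit-maximizing price against a rival's price p_other.

    Toy linear-demand duopoly: demand_i = a - b*p_i + d*p_other and
    profit_i = (p_i - cost) * demand_i.  Setting d(profit)/dp_i = 0
    yields the closed-form best response below.
    """
    return (a + d * p_other + b * cost) / (2 * b)

def nash_prices(p0=0.0, p1=0.0, iters=100):
    """Iterate simultaneous best responses until the price pair converges."""
    for _ in range(iters):
        p0, p1 = best_response(p1), best_response(p0)
    return p0, p1
```

With the parameters above, the iteration converges to the symmetric equilibrium price p* = (a + b·cost)/(2b − d) = 4, at which each provider's price is a best response to the other's.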
    BHONEM: Binary High-Order Network Embedding Methods for Networked-Guarantee Loans
    Da-Wei Cheng, Yi Tu, Zhen-Wei Ma, Zhi-Bin Niu, Li-Qing Zhang
    Journal of Computer Science and Technology, 2019, 34 (3): 657-669.  DOI: 10.1007/s11390-019-1934-8
    Networked-guarantee loans may raise systemic-risk concerns for the government and banks in China. Predicting the default of enterprise loans is a typical machine learning based classification problem, and the networked guarantees make this problem very difficult to solve. A complex network is usually stored and represented by an adjacency matrix, which is high-dimensional and sparse, whereas machine learning methods usually need low-dimensional dense feature representations. Therefore, in this paper, we propose a binary high-order network embedding method to learn low-dimensional representations of a guarantee network. We first assign the vertices of this heterogeneous economic network binary roles (guarantor and guarantee), and then define high-order adjacency measures based on these roles and economic domain knowledge. Afterwards, we design a penalty parameter in the objective function to balance the importance of network structure and adjacency. We optimize it with negative sampling based gradient descent algorithms, which overcome the limitation of stochastic gradient descent on weighted edges without compromising efficiency. Finally, we test the proposed method on three real-world network datasets. The results show that this method outperforms other state-of-the-art algorithms in both classification accuracy and robustness, especially on a guarantee network.
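The negative sampling based gradient descent mentioned above follows the generic skip-gram recipe: push the embedding dot product up for observed pairs and down for randomly sampled ones. A small self-contained sketch of that generic update (hyperparameters are hypothetical, and the paper's role-aware high-order adjacency measures are not modeled):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_embeddings(edges, n_nodes, dim=8, lr=0.05, negatives=3, epochs=200, seed=0):
    """Skip-gram-style embedding training with negative sampling.

    For each observed edge (u, v), the dot product emb[u].emb[v] is pushed
    up (label 1); for `negatives` uniformly sampled nodes it is pushed
    down (label 0).  The update is the gradient of the log-likelihood of
    the sigmoid of the dot product.
    """
    rng = random.Random(seed)
    emb = [[rng.uniform(-0.5, 0.5) for _ in range(dim)] for _ in range(n_nodes)]
    for _ in range(epochs):
        for u, v in edges:
            targets = [(v, 1.0)] + [(rng.randrange(n_nodes), 0.0) for _ in range(negatives)]
            for t, label in targets:
                score = sigmoid(sum(a * b for a, b in zip(emb[u], emb[t])))
                g = lr * (label - score)  # scalar gradient factor
                for k in range(dim):
                    emb[u][k], emb[t][k] = emb[u][k] + g * emb[t][k], emb[t][k] + g * emb[u][k]
    return emb
```

After training on a graph with two cliques, embeddings of nodes within a clique end up with a larger dot product than embeddings of nodes in different cliques.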
    Software Systems
    Unit Test Data Generation for C Using Rule-Directed Symbolic Execution
    Ming-Zhe Zhang, Yun-Zhan Gong, Ya-Wen Wang, Da-Hai Jin
    Journal of Computer Science and Technology, 2019, 34 (3): 670-689.  DOI: 10.1007/s11390-019-1935-7
    Unit testing is widely used in software development. One important activity in unit testing is automatic test data generation. Constraint-based test data generation is a technique for the automatic generation of test data, which uses symbolic execution to generate constraints. Unit testing tests only functions instead of the whole program, and individual functions typically have preconditions imposed on their inputs. Conventional symbolic execution cannot detect these preconditions, let alone convert them into constraints. To overcome these limitations, we propose a novel unit test data generation approach using rule-directed symbolic execution to deal with functions with missing input preconditions. Rule-directed symbolic execution uses predefined rules to detect preconditions in the individual function, and generates constraints for inputs based on the preconditions. We introduce implicit constraints to represent preconditions, and unify implicit constraints and program constraints into integrated constraints. Test data generated based on the integrated constraints can explore previously unreachable code and help developers find more functional and logical faults. We have implemented our approach in a tool called CTS-IC, and applied it to real-world projects. The experimental results show that rule-directed symbolic execution can find preconditions (implicit constraints) automatically in an individual function. Moreover, the unit test data generated by our approach achieves higher coverage than similar tools and efficiently mitigates the missing-precondition problem in unit testing of individual functions.
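The idea of rule-detected implicit constraints can be illustrated in a few lines: a rule such as "every division implies denominator != 0" can be checked syntactically and turned into a constraint on the inputs. A toy Python sketch using the standard ast module (CTS-IC targets C and applies many more rules; this merely mirrors the concept):

```python
import ast

SRC = """
def average_rate(total, count):
    return total / count
"""

def implicit_preconditions(source):
    """Rule-based detection of implicit input preconditions.

    One example rule, in the spirit of rule-directed symbolic execution:
    every division or modulo the function performs implies the constraint
    "denominator != 0" on its operand.  Real tools apply many such rules,
    e.g. for array indexing and pointer dereference.
    """
    tree = ast.parse(source)
    constraints = []
    for node in ast.walk(tree):
        if isinstance(node, ast.BinOp) and isinstance(node.op, (ast.Div, ast.FloorDiv, ast.Mod)):
            constraints.append(f"{ast.unparse(node.right)} != 0")
    return constraints
```

A test generator would then conjoin such implicit constraints with the path constraints from symbolic execution, steering generated inputs away from (or deliberately toward) the precondition boundary.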
    Computer Architecture and Systems
    Cacheap: Portable and Collaborative I/O Optimization for Graph Processing
    Peng Zhao, Chen Ding, Lei Liu, Jiping Yu, Wentao Han, Xiao-Bing Feng
    Journal of Computer Science and Technology, 2019, 34 (3): 690-706.  DOI: 10.1007/s11390-019-1936-6
    Increasingly there is a need to process graphs that are larger than the available memory on today's machines. Many systems have been developed with graph representations that are efficient and compact for out-of-core processing. A necessary task in these systems is memory management. This paper presents a system called Cacheap which automatically and efficiently manages the available memory to maximize the speed of graph processing, minimize the amount of disk access, and maximize the utilization of memory for graph data. It has a simple interface that can be easily adopted by existing graph engines. The paper describes the new system, applies it in recent graph engines, and demonstrates integer-factor improvements in the speed of large-scale graph processing.
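Any memory manager of this kind needs a bounded in-memory pool with an eviction rule. A minimal sketch of such a buffer pool using least-recently-used eviction (a generic baseline policy, not Cacheap's actual algorithm, which additionally coordinates with the graph engine about which partitions are worth keeping):

```python
from collections import OrderedDict

class BufferCache:
    """A minimal LRU buffer pool for out-of-core graph chunks.

    Holds at most `capacity` chunks in memory; a miss triggers a (here,
    simulated) disk read via `load_chunk` and may evict the least
    recently used chunk.
    """
    def __init__(self, capacity, load_chunk):
        self.capacity = capacity
        self.load_chunk = load_chunk      # called on a miss (disk read)
        self.pool = OrderedDict()
        self.disk_reads = 0

    def get(self, chunk_id):
        if chunk_id in self.pool:
            self.pool.move_to_end(chunk_id)   # refresh recency on a hit
            return self.pool[chunk_id]
        self.disk_reads += 1
        data = self.load_chunk(chunk_id)
        self.pool[chunk_id] = data
        if len(self.pool) > self.capacity:
            self.pool.popitem(last=False)     # evict least recently used
        return data
```

Counting `disk_reads` under different access traces is exactly how one would compare eviction policies for a graph workload.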
Journal of Computer Science and Technology
Institute of Computing Technology, Chinese Academy of Sciences
P.O. Box 2704, Beijing 100190 P.R. China
E-mail: jcst@ict.ac.cn