Bimonthly    Since 1986
ISSN 1000-9000(Print)
/1860-4749(Online)
CN 11-2296/TP
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal Of Computer Science and Technology
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
Other Countries: Springer
 
  • Table of Contents
      05 March 2016, Volume 31 Issue 2
    Special Section on Video Coding and Assessment
    A Novel Spatial Pooling Strategy for Image Quality Assessment
    Qiaohong Li, Yu-Ming Fang, Jing-Tao Xu
    Journal of Computer Science and Technology, 2016, 31 (2): 225-234.  DOI: 10.1007/s11390-016-1623-9
    A variety of existing image quality assessment (IQA) metrics share a similar two-stage framework: in the first stage, a quality map is constructed by comparing local regions of the reference and distorted images; in the second stage, spatial pooling is applied to obtain the overall quality score. In this work, we propose a novel spatial pooling strategy for image quality assessment based on statistical analysis of the quality map. Our in-depth analysis indicates that the overall image quality is sensitive to the distribution of quality values. Based on this analysis, the quality histogram and statistical descriptors extracted from the quality map are fed into support vector regression to obtain the final objective quality score. Experimental results on three large public IQA databases demonstrate that the proposed spatial pooling strategy greatly improves the quality prediction performance of the original IQA metrics in terms of correlation with human subjective ratings.
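The two-stage pipeline above can be sketched in a few lines. The function name, bin count, and descriptor choice below are illustrative assumptions, not the authors' implementation; the resulting feature vector would be the input to a support vector regressor in the full method.

```python
import statistics

def pooling_features(quality_map, bins=8):
    """Flatten a 2-D quality map into a feature vector:
    a normalized quality histogram plus mean/stdev/min/max descriptors."""
    values = [q for row in quality_map for q in row]
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0       # guard against a constant map
    hist = [0] * bins
    for q in values:
        idx = min(int((q - lo) / width), bins - 1)
        hist[idx] += 1
    n = len(values)
    hist = [h / n for h in hist]          # normalized histogram
    descriptors = [statistics.mean(values),
                   statistics.pstdev(values),
                   lo, hi]
    return hist + descriptors             # input vector for the regressor
```

An off-the-shelf regressor (e.g., an SVR trained on subjective ratings) would then map these vectors to quality scores.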
    Computer Architecture and Systems
    Pragma Directed Shared Memory Centric Optimizations on GPUs
    Jing Li, Lei Liu, Yuan Wu, Xiang-Hua Liu, Yi Gao, Xiao-Bing Feng, Cheng-Yong Wu
    Journal of Computer Science and Technology, 2016, 31 (2): 235-252.  DOI: 10.1007/s11390-016-1624-8
    GPUs have become a ubiquitous choice as coprocessors because of their excellent ability in concurrent processing. In GPU architectures, shared memory plays a very important role in system performance, as it can largely improve bandwidth utilization and accelerate memory operations. However, even for affine GPU applications that contain regular access patterns, optimizing for shared memory is not easy. It often requires programmer expertise and nontrivial parameter selection; improper shared memory usage may even underutilize GPU resources. Even with state-of-the-art high-level programming models (e.g., OpenACC and OpenHMPP), it is still hard to utilize shared memory, since they lack inherent support for describing shared memory optimizations and selecting suitable parameters, let alone maintaining high resource utilization. Targeting higher productivity for affine applications, we propose a data-centric approach to shared memory optimization on GPUs. We design a pragma extension to OpenACC to convey programmers' data management hints to the compiler. Meanwhile, we devise a compiler framework that automatically selects optimal parameters for shared arrays using the polyhedral model. We further propose optimization techniques to expose higher memory- and instruction-level parallelism. Experimental results show that our shared memory centric approaches effectively improve the performance of five typical GPU applications across four widely used platforms by 3.7x on average, without burdening programmers with lots of pragmas.
    Wide Operational Range Processor Power Delivery Design for Both Super-Threshold Voltage and Near-Threshold Voltage Computing
    Xin He, Gui-Hai Yan, Yin-He Han, Xiao-Wei Li
    Journal of Computer Science and Technology, 2016, 31 (2): 253-266.  DOI: 10.1007/s11390-016-1625-7
    The load power range of modern processors has been greatly enlarged by advanced power management techniques such as dynamic voltage and frequency scaling, Turbo Boost, and near-threshold voltage (NTV) technologies. However, because the efficiency of power delivery varies greatly with load conditions, conventional power delivery designs cannot maintain high efficiency over the entire voltage spectrum, and the power saving gained may be offset by power loss in delivery. We propose SuperRange, a wide operational range power delivery unit that combines the power delivery capabilities of an on-chip voltage regulator and an off-chip voltage regulator. On top of SuperRange, we analyze its power conversion characteristics and propose a voltage regulator (VR) aware power management algorithm. Moreover, as more and more cores are integrated on a single chip, multiple SuperRange units can serve as basic building blocks to build, in a highly scalable way, a more powerful power delivery subsystem with larger power capacity. Experimental results show that a SuperRange unit offers 1x and 1.3x higher power conversion efficiency (PCE) than two conventional power delivery schemes in the NTV region, and exhibits an average 70% PCE over the entire operational range. It also exhibits superior resilience in power-constrained systems.
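A minimal sketch of the VR-aware idea, with entirely hypothetical efficiency curves (real PCE characteristics come from circuit measurements, not these formulas): route the load through whichever regulator converts it more efficiently.

```python
def pce_on_chip(load_w):
    # Hypothetical on-chip VR: efficient at light (NTV-region) loads,
    # efficiency falling off as the load grows.
    return max(0.0, 0.90 - 0.02 * load_w)

def pce_off_chip(load_w):
    # Hypothetical off-chip VR: efficient at heavy loads, poor at light ones.
    return min(0.92, 0.50 + 0.03 * load_w)

def choose_regulator(load_w):
    """VR-aware policy sketch: pick the regulator with the higher
    power conversion efficiency (PCE) for the current load."""
    on, off = pce_on_chip(load_w), pce_off_chip(load_w)
    return ("on-chip", on) if on >= off else ("off-chip", off)
```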
    Worst-Case Finish Time Analysis for DAG-Based Applications in the Presence of Transient Faults
    Xiao-Tong Cui, Kai-Jie Wu, Tong-Quan Wei, Edwin Hsing-Mean Sha
    Journal of Computer Science and Technology, 2016, 31 (2): 267-283.  DOI: 10.1007/s11390-016-1626-6
    Tasks in hard real-time systems are required to meet preset deadlines even in the presence of transient faults, and hence the analysis of worst-case finish time (WCFT) must consider the extra time incurred by re-executing faulty tasks. Existing solutions can only estimate the WCFT and usually result in significant under- or over-estimation. In this work, we conclude that a sufficient and necessary condition for a task set to experience its WCFT is that its critical task incurs all expected transient faults. A method is presented to identify the critical task and the WCFT in O(|V|+|E|) time, where |V| and |E| are the numbers of tasks and dependencies between tasks, respectively. This method finds application in testing the feasibility of directed acyclic graph (DAG) based task sets scheduled on a wide variety of fault-prone multi-processor systems, where the processors may be homogeneous or heterogeneous, DVS-capable or DVS-incapable, etc. Common practices, which require the same time complexity as the proposed critical-task method, can either underestimate the worst case by up to 25% or overestimate it by 13%. Based on the critical-task method, a simulated-annealing scheduling algorithm is developed to find an energy-efficient fault-tolerant schedule for a given DAG task set. Experimental results show that the proposed critical-task method outperforms a common practice by up to 40% in terms of energy saving.
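The critical-task condition lends itself to a compact sketch. The helper below is an illustrative reconstruction, not the paper's code: it assumes a fixed fault budget num_faults charged entirely to one candidate task, and scores each task v by (longest path through v) + num_faults * exec_time[v], all in O(|V|+|E|).

```python
from collections import defaultdict, deque

def wcft_critical_task(exec_time, edges, num_faults):
    """Identify the critical task and the WCFT of a DAG task set,
    assuming all num_faults transient faults hit one task."""
    succ, pred = defaultdict(list), defaultdict(list)
    indeg = {v: 0 for v in exec_time}
    for u, v in edges:
        succ[u].append(v); pred[v].append(u); indeg[v] += 1
    # Kahn's algorithm for a topological order.
    order, q = [], deque(v for v in exec_time if indeg[v] == 0)
    while q:
        v = q.popleft(); order.append(v)
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                q.append(w)
    # Longest path ending at v (inclusive) ...
    down = {v: exec_time[v] for v in exec_time}
    for v in order:
        for u in pred[v]:
            down[v] = max(down[v], down[u] + exec_time[v])
    # ... and starting at v (inclusive).
    up = {v: exec_time[v] for v in exec_time}
    for v in reversed(order):
        for w in succ[v]:
            up[v] = max(up[v], up[w] + exec_time[v])
    def score(v):  # longest path through v, plus v's re-executions
        return down[v] + up[v] - exec_time[v] + num_faults * exec_time[v]
    best = max(exec_time, key=score)
    return best, score(best)
```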
    Theory and Algorithms
    A Synthesis of Multi-Precision Multiplication and Squaring Techniques for 8-Bit Sensor Nodes: State-of-the-Art Research and Future Challenges
    Zhe Liu, Hwajeong Seo, Howon Kim
    Journal of Computer Science and Technology, 2016, 31 (2): 284-299.  DOI: 10.1007/s11390-016-1627-5
    Multi-precision multiplication and squaring are the performance-critical operations in implementations of public-key cryptography, such as exponentiation in RSA and scalar multiplication in elliptic curve cryptography (ECC). In this paper, we survey multi-precision multiplication and squaring techniques, focusing in particular on comparing their performance and memory footprint on sensor nodes with 8-bit processors. Our survey differs from previous work in at least three aspects. Firstly, it covers the existing techniques for multi-precision multiplication and squaring on sensor nodes over prime fields. Secondly, we analyze and evaluate each method in a systematic and objective way. Thirdly, we provide suggestions for selecting appropriate multiplication and squaring techniques for concrete implementations of public-key cryptography. At the end of the survey, we propose research challenges on the efficient implementation of multiplication and squaring operations, based on our observations.
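As a concrete instance of the surveyed techniques, here is operand-scanning (schoolbook) multiplication over w-bit limbs, the baseline that most of the compared methods refine; limb width 8 mirrors an 8-bit sensor node. This is a generic textbook sketch, not code from the survey.

```python
def mp_mul_operand_scanning(a, b, w=8):
    """Operand-scanning (schoolbook) multi-precision multiplication.
    a and b are little-endian lists of w-bit limbs, as stored on an
    8-bit MCU; the result has len(a) + len(b) limbs."""
    mask = (1 << w) - 1
    r = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        carry = 0
        for j, bj in enumerate(b):
            t = r[i + j] + ai * bj + carry   # limb product + accumulator
            r[i + j] = t & mask              # keep low w bits
            carry = t >> w                   # propagate the rest
        r[i + len(b)] += carry
    return r

def limbs_to_int(limbs, w=8):
    """Recombine little-endian limbs into a Python integer (for checking)."""
    return sum(l << (w * i) for i, l in enumerate(limbs))
```

Product-scanning (column-wise) and hybrid variants reorder these loops to reduce memory accesses, which is exactly the trade-off the survey compares.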
    Complete Proof Systems for Amortised Probabilistic Bisimulations
    Li-Li Xu, Hui-Min Lin
    Journal of Computer Science and Technology, 2016, 31 (2): 300-316.  DOI: 10.1007/s11390-016-1628-4
    The notion of amortisation has been integrated into quantitative bisimulations to make long-term behavioral comparisons between nondeterministic systems. In this paper, we present proof systems for amortised strong probabilistic bisimulation and its observational congruence on a process algebra with probability and nondeterminism, and prove their soundness and completeness. Our results make it possible to reason about long-term (observable) probabilistic behaviors by syntactic manipulation.
    Utilizing Probabilistic Linear Equations in Cube Attacks
    Yuan Yao, Bin Zhang, Wen-Ling Wu
    Journal of Computer Science and Technology, 2016, 31 (2): 317-325.  DOI: 10.1007/s11390-016-1629-3
    Cube attacks, proposed by Dinur and Shamir at EUROCRYPT 2009, have shown huge power against stream ciphers. In the original cube attacks, a linear system in the secret key bits is exploited for key recovery. However, we find that a number of equations claimed to be linear in the previous literature are actually nonlinear and do not fit into the theoretical framework of cube attacks. Moreover, cube attacks are hard to apply when linear equations are rare. It is therefore of significance to make use of probabilistic linear equations, namely nonlinear superpolys that can be effectively approximated by linear expressions. In this paper, we suggest a way to test out and utilize these probabilistic linear equations, thus extending cube attacks to a wider scope. Concretely, we employ the standard parameter estimation approach and the sequential probability ratio test (SPRT) for the linearity test in the preprocessing phase, and use maximum likelihood decoding (MLD) to solve the probabilistic linear equations in the online phase. As an application, we exhibit a new attack against 672 rounds of Trivium that reduces the number of key bits to search by 7.
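The notion of an "effectively linear" superpoly can be illustrated with a simple randomized check. This is a BLR-style affinity test, a simplification for illustration; the paper itself uses parameter estimation and the SPRT, which this sketch does not reproduce.

```python
import random

def linearity_estimate(superpoly, n, trials=200, seed=1):
    """Probabilistic linearity check of a Boolean superpoly
    p: {0,1}^n -> {0,1}.  Returns the observed fraction of random
    pairs (x, y) satisfying p(x) ^ p(y) ^ p(0) == p(x ^ y):
    1.0 for an exactly affine superpoly, lower for ones that are
    only approximately linear."""
    rng = random.Random(seed)
    zero = (0,) * n
    ok = 0
    for _ in range(trials):
        x = tuple(rng.randint(0, 1) for _ in range(n))
        y = tuple(rng.randint(0, 1) for _ in range(n))
        xy = tuple(a ^ b for a, b in zip(x, y))
        if superpoly(x) ^ superpoly(y) ^ superpoly(zero) == superpoly(xy):
            ok += 1
    return ok / trials
```

A superpoly scoring close to (but below) 1.0 is a candidate "probabilistic linear equation" whose occasional errors the MLD step can absorb.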
    Computer Networks and Distributed Computing
    Survey on Simulation for Mobile Ad-Hoc Communication for Disaster Scenarios
    Erika Rosas, Nicolás Hidalgo, Veronica Gil-Costa, Carolina Bonacic, et al.
    Journal of Computer Science and Technology, 2016, 31 (2): 326-349.  DOI: 10.1007/s11390-016-1630-x
    Mobile ad-hoc communication is a demonstrated solution to mitigate the impact of infrastructure failures during large-scale disasters. A very complex issue in this domain is the design validation of software applications that support decision-making and communication during natural disasters. Such disasters are irreproducible, highly unpredictable, and impossible to scale down, so extensive assessments cannot be conducted in situ. In this context, simulation constitutes the best approach to testing software solutions for natural disaster response. The present survey reviews mobility models, ad-hoc network architectures, routing protocols, and network simulators. Our aim is to provide guidelines for software developers with regard to the performance evaluation of their applications by means of simulation.
    Identify Congested Links Based on Enlarged State Space
    Sheng-Li Pan, Zhi-Yong Zhang, Ying-Jie Zhou, Feng Qian, Guang-Min Hu
    Journal of Computer Science and Technology, 2016, 31 (2): 350-358.  DOI: 10.1007/s11390-016-1631-9
    When paths share a common congested link, they all suffer a performance degradation. Boolean tomography exploits these performance-level correlations between different paths to identify the congested links. Clearly, the congestion of a path becomes distinctly more intensive when it traverses multiple congested links. We adopt an enlarged state space model to capture different congestion levels, and employ a system of integer equations, instead of Boolean equations, to describe the relationships between path states and link states. We recast the problem of identifying congested links as a constraint optimization problem that includes Boolean tomography as a special case. For a logical tree, we propose a top-down algorithm and prove that it always achieves a solution to the problem. Compared with existing algorithms, simulation results show that our proposed algorithm achieves a higher detection rate while keeping a low false positive rate.
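The enlarged state space can be illustrated with a brute-force toy solver (illustrative only; the paper's algorithm exploits the tree structure rather than enumerating assignments): each link carries an integer congestion level, each path's level is the sum over its links, and among all consistent assignments we prefer the least total congestion.

```python
from itertools import product

def infer_link_levels(paths, path_levels, num_links, max_level=2):
    """Enlarged-state-space sketch: paths is a list of link-index lists,
    path_levels the measured integer congestion level of each path.
    Returns a minimum-total-congestion link assignment consistent with
    all the integer equations, or None if none exists.
    Boolean tomography is the special case max_level = 1."""
    best = None
    for levels in product(range(max_level + 1), repeat=num_links):
        consistent = all(sum(levels[l] for l in p) == s
                         for p, s in zip(paths, path_levels))
        if consistent and (best is None or sum(levels) < sum(best)):
            best = levels
    return best
```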
    Data Management and Data Mining
    Content-Based Publish/Subscribe System for Web Syndication
    Zeinab Hmedeh, Harry Kourdounakis, Vassilis Christophides, Cédric du Mouza, et al.
    Journal of Computer Science and Technology, 2016, 31 (2): 359-380.  DOI: 10.1007/s11390-016-1632-8
    Content syndication has become a popular way for timely delivery of frequently updated information on the Web. Today, web syndication technologies such as RSS or Atom are used in a wide variety of applications, ranging from large-scale news broadcasting to medium-scale information sharing in scientific and professional communities. However, they exhibit serious limitations in dealing with information overload in Web 2.0. There is a vital need for efficient real-time filtering methods across feeds, to allow users to effectively follow personally interesting information. In this paper we investigate three indexing techniques for users' subscriptions, based on inverted lists or on an ordered trie, for exact and partial matching. We present analytical models for memory requirements and matching time, and we conduct a thorough experimental evaluation to exhibit the impact of critical parameters of realistic web syndication workloads.
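The inverted-list approach to conjunctive subscription matching can be sketched with a count-based matcher. This is a textbook scheme, not the paper's exact index structures: a subscription fires when every one of its terms occurs in the incoming item.

```python
from collections import defaultdict

class CountingMatcher:
    """Count-based matching over an inverted list for conjunctive
    keyword subscriptions: one pass over the item's distinct terms,
    tallying hits per subscription."""
    def __init__(self):
        self.posting = defaultdict(list)   # term -> [subscription ids]
        self.size = {}                     # subscription id -> #distinct terms

    def subscribe(self, sub_id, terms):
        self.size[sub_id] = len(set(terms))
        for t in set(terms):
            self.posting[t].append(sub_id)

    def match(self, item_terms):
        hits = defaultdict(int)
        for t in set(item_terms):
            for sid in self.posting.get(t, ()):
                hits[sid] += 1
        # A subscription matches when all of its terms were seen.
        return {sid for sid, c in hits.items() if c == self.size[sid]}
```

Trie-based variants trade this per-term counting for shared prefixes across subscriptions, the space/time trade-off the paper's analytical models quantify.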
    Answering Reachability Queries on Incrementally Updated Graphs by Hierarchical Labeling Schema
    Tak-Lam Wong
    Journal of Computer Science and Technology, 2016, 31 (2): 381-399.  DOI: 10.1007/s11390-016-1633-7
    We propose a novel solution schema called the Hierarchical Labeling Schema (HLS) to answer reachability queries in directed graphs. Different from many existing approaches that focus on static directed acyclic graphs (DAGs), our schema focuses on directed cyclic graphs (DCGs) where vertices or arcs may be added to a graph incrementally. Unlike many traditional approaches, HLS does not require the graph to be acyclic when constructing its index; it can therefore be applied to both DAGs and DCGs. When vertices or arcs are added to a graph, HLS updates the index incrementally instead of re-computing it from scratch each time, making it more efficient than many other approaches in practice. The basic idea of HLS is to create a tree for each vertex in the graph and link the trees together, so that whenever two vertices are given, we can immediately tell whether there is a path between them by referring to the appropriate trees. We conducted extensive experiments on both real-world and synthesized datasets, comparing the performance of HLS, in terms of index construction time, query processing time, and space consumption, with two state-of-the-art methodologies: the path-tree method and the 3-hop method. We also conducted simulations to model the situation when a graph is updated incrementally, and studied the performance of different algorithms against HLS on static graphs. Our results show that HLS is highly competitive in practice and is particularly useful when graphs are updated frequently.
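For contrast with HLS, the incremental setting it targets is easy to state with a naive baseline: keep full reachable sets and patch them on each arc insertion. The sketch below is not HLS; it shows the incremental-update behavior (O(1) queries, cycles allowed) at a much higher memory cost than a labeling schema.

```python
from collections import defaultdict

class IncrementalReach:
    """Naive incremental reachability: maintain the full transitive
    closure and update it on each arc insertion, so queries are O(1)
    set lookups.  Cycles are handled naturally."""
    def __init__(self):
        self.succ = defaultdict(set)       # adjacency (kept for reference)
        self.reach = defaultdict(set)      # v -> vertices reachable from v

    def add_arc(self, u, v):
        self.succ[u].add(v)
        new = {v} | self.reach[v]
        # Everything reachable from v becomes reachable from u and
        # from every vertex that already reaches u.
        for w in list(self.reach) + [u]:
            if w == u or u in self.reach[w]:
                self.reach[w] |= new

    def reachable(self, u, v):
        return v in self.reach[u]
```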
    Regular Paper
    A Parallel Markov Cerebrovascular Segmentation Algorithm Based on Statistical Model
    Rong-Fei Cao, Xing-Ce Wang, Zhong-Ke Wu, Ming-Quan Zhou, Xin-Yu Liu
    Journal of Computer Science and Technology, 2016, 31 (2): 400-416.  DOI: 10.1007/s11390-016-1634-6
    For segmenting cerebral blood vessels accurately from time-of-flight magnetic resonance angiography (TOF-MRA) images, we propose a parallel segmentation algorithm based on a statistical model with a Markov random field (MRF). Firstly, we improve the traditional non-local means filter with patch-based Fourier transformation to preprocess the TOF-MRA images; in this step, we mainly exploit the sparseness and self-similarity of the MRA brain image sequence. Secondly, we add MRF information to the finite mixture model (FMM) to fit the intensity distribution of medical images, and make use of the MRF in the image sequence to estimate the proportions of cerebral tissues. Finally, we use the particle swarm optimization (PSO) algorithm to parallelize the parameter estimation of the FMM. A large number of experiments verify the high accuracy and robustness of our approach, especially for narrow vessels. This work will offer significant assistance to physicians in the prevention and diagnosis of cerebrovascular diseases.
    A Parallel Genetic Algorithm Based on Spark for Pairwise Test Suite Generation
    Rong-Zhi Qi, Zhi-Jian Wang, Shui-Yan Li
    Journal of Computer Science and Technology, 2016, 31 (2): 417-427.  DOI: 10.1007/s11390-016-1635-5
    Pairwise testing is an effective test generation technique that requires every pair of parameter values to be covered by at least one test case. It has been proven that generating a minimum test suite is an NP-complete problem. Genetic algorithms have been used by researchers for pairwise test suite generation; however, this is always a time-consuming process, which significantly limits the practical use of genetic algorithms on large-scale test problems. Parallelism is an effective way not only to enhance computation performance but also to improve the quality of the solutions. In this paper, we use Spark, a fast and general parallel computing platform, to parallelize the genetic algorithm to tackle this problem. We propose a two-phase parallelization algorithm comprising fitness evaluation parallelization and genetic operation parallelization. Experimental results show that our algorithm outperforms the sequential genetic algorithm and competes with other approaches in both test suite size and computational performance. As a result, our algorithm is a promising improvement of the genetic algorithm for pairwise test suite generation.
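The pairwise-coverage objective that such a genetic algorithm optimizes can be sketched directly; the function names and the coverage-fraction fitness below are illustrative, not the paper's exact formulation.

```python
from itertools import combinations

def covered_pairs(suite):
    """Set of parameter-value pairs covered by a test suite.
    Each test is a tuple of parameter values; a covered pair is
    ((param_i, value_i), (param_j, value_j)) with i < j."""
    pairs = set()
    for test in suite:
        for (i, vi), (j, vj) in combinations(enumerate(test), 2):
            pairs.add(((i, vi), (j, vj)))
    return pairs

def fitness(suite, all_params):
    """GA fitness sketch: fraction of all required value pairs covered.
    all_params maps parameter index -> list of its possible values."""
    required = set()
    for i, j in combinations(sorted(all_params), 2):
        for vi in all_params[i]:
            for vj in all_params[j]:
                required.add(((i, vi), (j, vj)))
    return len(covered_pairs(suite) & required) / len(required)
```

In the Spark setting, evaluating this fitness over a population is the embarrassingly parallel step: each candidate suite can be scored on a separate partition.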
    Tuning the Learning Rate for Stochastic Variational Inference
    Xi-Ming Li, Ji-Hong Ouyang
    Journal of Computer Science and Technology, 2016, 31 (2): 428-436.  DOI: 10.1007/s11390-016-1636-4
    Stochastic variational inference (SVI) can learn topic models from very large corpora. It optimizes the variational objective using the stochastic natural gradient algorithm with a decreasing learning rate. This rate is crucial for SVI; however, it is often tuned by hand in real applications. To address this, we develop a novel algorithm that tunes the learning rate of each iteration adaptively. The proposed algorithm uses the Kullback-Leibler (KL) divergence to measure the similarity between the variational distribution with the noisy update and that with the batch update, and then optimizes the learning rate by minimizing this KL divergence. We apply our algorithm to two representative topic models: latent Dirichlet allocation and the hierarchical Dirichlet process. Experimental results indicate that our algorithm performs better and converges faster than commonly used learning rates.
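The hand-tuned schedule the paper replaces is the standard Robbins-Monro decreasing rate; a sketch of that schedule and of one stochastic natural-gradient step follows (lam, lam_hat, and rho are generic stand-ins for the global variational parameter, its noisy per-minibatch estimate, and the learning rate).

```python
def svi_learning_rate(t, tau=1.0, kappa=0.7):
    """The standard hand-set SVI schedule rho_t = (t + tau)^(-kappa),
    with kappa in (0.5, 1] so the Robbins-Monro conditions hold.
    The paper's contribution replaces this fixed schedule with a
    KL-divergence-based adaptive choice per iteration."""
    return (t + tau) ** (-kappa)

def svi_update(lam, lam_hat, rho):
    """One stochastic natural-gradient step: blend the current global
    variational parameter with the noisy minibatch estimate."""
    return [(1.0 - rho) * l + rho * lh for l, lh in zip(lam, lam_hat)]
```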
Journal of Computer Science and Technology
Institute of Computing Technology, Chinese Academy of Sciences
P.O. Box 2704, Beijing 100190 P.R. China
Tel.:86-10-62610746
E-mail: jcst@ict.ac.cn
 