Bimonthly    Since 1986
ISSN 1000-9000(Print)
CN 11-2296/TP
Indexed in:
Publication Details
Edited by: Editorial Board of Journal of Computer Science and Technology
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Distributed by:
China: All Local Post Offices
Other Countries: Springer
  • Table of Contents
      15 March 2002, Volume 17 Issue 2
    Greylevel Difference Classification Algorithm in Fractal Image Compression
    CHEN Yisong(陈毅松),LU Jian(卢坚),SUN Zhengxing(孙正兴)and ZHANG Fuyan(张福炎)
    Journal of Computer Science and Technology, 2002, 17 (2): 0-0. 
    This paper proposes the notion of a greylevel difference classification algorithm in fractal image compression. Then an example of the greylevel difference classification algorithm is given as an improvement of the quadrant greylevel and variance classification in the quadtree-based encoding algorithm. The algorithm incorporates the frequency feature in spatial analysis using the notion of average quadrant greylevel difference, leading to an enhancement in terms of encoding time, PSNR value and compression ratio.
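The classification idea can be illustrated with a minimal sketch: compute the mean greylevel of each quadrant of a block and derive a class index from which quadrant means differ "significantly" from the block average. The function names, 3-bit class encoding and threshold below are hypothetical, not the paper's exact scheme.

```python
def quadrant_means(block):
    """Mean greylevel of each quadrant of a square 2^k x 2^k block."""
    n = len(block)
    h = n // 2
    quads = [(0, 0), (0, h), (h, 0), (h, h)]   # TL, TR, BL, BR corners
    means = []
    for r0, c0 in quads:
        s = sum(block[r][c] for r in range(r0, r0 + h)
                            for c in range(c0, c0 + h))
        means.append(s / (h * h))
    return means

def classify(block, threshold=8.0):
    """Class index from which quadrant-mean differences are 'large'.

    During encoding a range block would only be matched against domain
    blocks in the same class, shrinking the search space.
    """
    m = quadrant_means(block)
    avg = sum(m) / 4.0
    bits = 0
    for i, q in enumerate(m[:3]):     # 3 bits suffice; the 4th quadrant follows
        if q - avg > threshold:
            bits |= 1 << i
    return bits
```

A flat block lands in class 0; a block with one bright quadrant lands in a distinct class, so only structurally similar blocks are ever compared.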
    Sequential Combination Methods for Data Clustering Analysis
    QIAN Yuntao(钱云涛),Ching Y.Suen and TANG Yuanyan(唐远炎)
    Journal of Computer Science and Technology, 2002, 17 (2): 0-0. 
    This paper proposes the use of more than one clustering method to improve clustering performance. Clustering is an optimization procedure based on a specific clustering criterion, so clustering combination can be regarded as a technique that constructs and processes multiple clustering criteria. Since global and local clustering criteria are complementary rather than competitive, combining these two types of criteria may enhance clustering performance. In our past work, a simultaneous clustering combination algorithm based on multi-objective programming was proposed, which incorporates multiple criteria into an objective function by a weighting method and solves the problem with constrained nonlinear optimization programming; however, this algorithm has high computational complexity. Here a sequential combination approach is investigated, which first uses global-criterion-based clustering to produce an initial result, then uses local-criterion-based information to improve the initial result with a probabilistic relaxation algorithm or a linear additive model. Compared with the simultaneous combination method, sequential combination has low computational complexity. Results on simulated data and standard test data are reported. It appears that clustering performance improvement can be achieved at low cost through sequential combination.
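The two-stage idea can be sketched in a few lines: a global-criterion step (plain k-means on 1-D data here) produces initial labels, then a local-criterion step relabels each point to the majority label among its nearest neighbours. The nearest-neighbour relabeling is a crude stand-in for the paper's probabilistic relaxation, and all names and parameters are illustrative.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Global-criterion step: plain k-means on 1-D data (illustrative)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: abs(p - centers[j]))
                  for p in points]
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels

def local_refine(points, labels, neighbours=3):
    """Local-criterion step: relabel each point to the majority label
    among its nearest neighbours (stand-in for probabilistic relaxation)."""
    out = []
    for i, p in enumerate(points):
        order = sorted(range(len(points)), key=lambda j: abs(points[j] - p))
        near = [labels[j] for j in order[1:neighbours + 1]]   # skip self
        out.append(max(set(near), key=near.count))
    return out
```

The sequential structure is the point: the cheap global pass fixes the coarse partition, and the local pass only has to polish it, which is far cheaper than optimizing both criteria jointly.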
    Digital Image Watermarking Based on Discrete Wavelet Transform
    DING Wei(丁玮),YAN Weiqi(闫伟齐)and QI Dongxu(齐东旭)
    Journal of Computer Science and Technology, 2002, 17 (2): 0-0. 
    Digital watermarking has recently become a popular research topic. This paper presents several methods for embedding digital watermarks by modifying frequency coefficients in the discrete wavelet transform (DWT) domain. First, the current progress of digital watermarking is briefly reviewed. Then, starting from Pitas's method but discarding his pseudo-random-number step, the authors use a digital image scrambling technique as preprocessing for watermarking. Next, the authors discuss how to embed a 1-bit digital image as a watermark in the frequency domain. Finally, another watermarking method is given in which a 3-D DWT is used to transform a given digital image. Experimental results show that the proposed methods are robust to a large extent.
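The general embed-in-wavelet-coefficients pattern can be sketched in one dimension: take a Haar transform, quantise each detail coefficient to an even or odd multiple of a step size to carry one bit, and invert. This QIM-style rule is an illustration of the idea, not the paper's exact embedding method.

```python
def haar_step(x):
    """One level of a 1-D Haar transform: averages then differences."""
    avg = [(x[2*i] + x[2*i+1]) / 2 for i in range(len(x) // 2)]
    dif = [(x[2*i] - x[2*i+1]) / 2 for i in range(len(x) // 2)]
    return avg, dif

def inv_haar_step(avg, dif):
    """Inverse of haar_step: x0 = a + d, x1 = a - d for each pair."""
    out = []
    for a, d in zip(avg, dif):
        out += [a + d, a - d]
    return out

def embed(signal, bits, alpha=2.0):
    """Quantise each detail coefficient to an even/odd multiple of alpha
    to carry one watermark bit (illustrative QIM-style rule)."""
    avg, dif = haar_step(signal)
    for i, b in enumerate(bits):
        q = 2 * round(dif[i] / (2 * alpha))   # nearest even multiple
        dif[i] = (q + b) * alpha              # odd multiple encodes a 1
    return inv_haar_step(avg, dif)

def extract(signal, n, alpha=2.0):
    """Recover n bits from the parity of the quantised detail coefficients."""
    _, dif = haar_step(signal)
    return [round(dif[i] / alpha) % 2 for i in range(n)]
```

Because the bits live in transform-domain coefficients rather than raw pixels, mild distortion of the signal leaves the recovered parities, and hence the watermark, intact.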
    Performance Evaluation of a Parallel Cascade Semijoin Algorithm for Computing Path Expressions in Object Database Systems
    WANG Guoren(王国仁)and YU Ge(于戈)
    Journal of Computer Science and Technology, 2002, 17 (2): 0-0. 
    With the emergence of new applications, especially Web applications such as E-Commerce, Digital Library and DNA Bank, object database systems show stronger capabilities than other kinds of database systems due to their powerful representation of complex semantics and relationships. One distinguishing feature of object database systems is the path expression, and most queries on an object database are based on path expressions because they are the most natural and convenient way to access the object database, for example, to navigate the hyper-links in a Web-based database. The execution of path expressions is usually extremely expensive on a very large database. Therefore, improving path expression execution efficiency is critical for the performance of object databases. As an important approach to realizing high-performance query processing, the parallel processing of path expressions on distributed object databases is explored in this paper. Up to now, some algorithms have been proposed for computing path expressions and for optimizing their processing in centralized environments, but few approaches have been presented for computing path expressions in parallel. In this paper, a new parallel algorithm for computing path expressions named Parallel Cascade Semijoin (PCSJ) is proposed. Moreover, a new scheduling strategy called the right-deep zigzag tree is designed to further improve the performance of the PCSJ algorithm. Experiments have been conducted in a NOW (network of workstations) distributed and parallel environment. The results show that the PCSJ algorithm outperforms two other parallel algorithms (the parallel version of the forward pointer chasing algorithm (PFPC) and the index splitting parallel algorithm (IndexSplit)) when computing path expressions with restrictive predicates, and that the right-deep zigzag tree scheduling strategy performs better than the right-deep tree scheduling strategy.
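The cascade-semijoin idea itself is easy to sketch sequentially (the paper's contribution is parallelising it and scheduling it as a right-deep zigzag tree). The path A.b.c is evaluated from the tail back to the head: apply the predicate to the leaf class, then repeatedly semijoin each class against the surviving objects one level down. The toy schema and object identifiers below are hypothetical.

```python
def semijoin(parents, link, surviving_children):
    """Keep parents that reference at least one surviving child."""
    return {oid for oid, obj in parents.items()
            if any(ref in surviving_children for ref in obj[link])}

# Hypothetical toy database: companies -> departments -> employees.
employees   = {1: {"salary": 90}, 2: {"salary": 40}, 3: {"salary": 70}}
departments = {10: {"emps": [1]}, 11: {"emps": [2]}, 12: {"emps": [3]}}
companies   = {100: {"depts": [10, 11]}, 101: {"depts": [11]}}

# Predicate on the leaf class, then cascade the semijoin up the path,
# e.g. for "companies with a department employing someone paid > 60".
rich       = {oid for oid, e in employees.items() if e["salary"] > 60}
live_depts = semijoin(departments, "emps", rich)
live_comps = semijoin(companies, "depts", live_depts)
```

Each semijoin only ships object identifiers, never whole objects, which is what makes the cascade attractive to distribute: each site can reduce its own class against a small set of surviving identifiers.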
    A Fast Scalable Classifier Tightly Integrated with RDBMS
    LIU Hongyan(刘红岩),LU Hongjun(陆宏钧)and CHEN Jian(陈剑)
    Journal of Computer Science and Technology, 2002, 17 (2): 0-0. 
    In this paper, we report our success in building efficient scalable classifiers by exploring the capabilities of modern relational database management systems (RDBMS). In addition to high classification accuracy, the unique features of the approach include its high training speed, linear scalability, and simplicity in implementation. More importantly, the major computation required in the approach can be implemented using standard functions provided by the modern relational DBMS. Besides, with the effective rule pruning strategy, the algorithm proposed in this paper can produce a compact set of classification rules. The results of experiments conducted for performance evaluation and analysis are presented.
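The kind of computation that can be pushed into the RDBMS is easy to illustrate: the support counts that a rule-based classifier needs (how often each attribute value co-occurs with each class) are exactly one GROUP BY away. The table and column names below are hypothetical; this is the general pattern, not the paper's specific algorithm.

```python
import sqlite3

# Toy training table; in the paper's setting this would live in the RDBMS.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE train (outlook TEXT, windy TEXT, play TEXT)")
con.executemany("INSERT INTO train VALUES (?, ?, ?)", [
    ("sunny", "no",  "yes"), ("sunny", "yes", "no"),
    ("rain",  "no",  "yes"), ("rain",  "yes", "no"),
    ("sunny", "no",  "yes"),
])

# One standard GROUP BY yields every (attribute value, class) support
# count -- the core counting step of rule induction -- so the classifier
# scales with the database engine rather than with in-memory code.
counts = con.execute(
    "SELECT outlook, play, COUNT(*) FROM train GROUP BY outlook, play"
).fetchall()
```

Because the heavy lifting is a set-oriented SQL query, the training cost grows linearly with table size and inherits the engine's buffering and parallelism for free.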
    Interactive and Symbolic Data Dependence Analysis Based on Ranges of Expressions
    YANG Bo(杨博),ZHENG Fengzhou(郑丰宙),WANG Dingxing(王鼎兴)and ZHENG Weimin(郑纬民)
    Journal of Computer Science and Technology, 2002, 17 (2): 0-0. 
    Traditional data dependence testing algorithms have become very accurate and efficient for simple subscript expressions, but they cannot handle symbolic expressions because of the complexity of data flow and the lack of semantic information about variables in programs. In this paper, a range-based testing and query approach, called DDTQ, is proposed to eliminate data dependences between array references with symbolic subscripts. DDTQ first extracts data dependence information from the symbolic subscripts; a testing algorithm is then used to disprove the dependence based on the ranges of expressions. Assumed dependences that cannot be handled by the disprover are converted into simple questions by a question engine, so that the compiler can resolve them through user interaction in a friendly way. Experiments on the Perfect benchmarks indicate that DDTQ is effective in improving the parallelizing capability of the compiler.
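The core of a range-based disprover fits in a few lines: if the value ranges of two symbolic subscripts cannot overlap, the two references are independent. The bounds below are hypothetical examples of what range analysis might derive, not output of the paper's system.

```python
def ranges_disjoint(a, b):
    """True if intervals a = (lo, hi) and b = (lo, hi) cannot intersect,
    i.e. the two subscripted references are provably independent."""
    return a[1] < b[0] or b[1] < a[0]

# e.g. A[i] written and A[i+n] read, where analysis knows 0 <= i <= 9
# and n >= 10 (hypothetical bounds on the symbols):
write_range = (0, 9)         # range of subscript i
read_range  = (10, 10**9)    # range of subscript i + n, given n >= 10
independent = ranges_disjoint(write_range, read_range)
```

When the intervals do overlap, nothing is proved either way; that is exactly the "assumed dependence" case the paper hands to its question engine for the user to settle.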
    IPULOC—Exploring Dynamic Program Locality with the Instruction Processing Unit for Filling Memory Gap
    HUANG Zhenchun (黄震春)and LI Sanli(李三立)
    Journal of Computer Science and Technology, 2002, 17 (2): 0-0. 
    The memory gap has become an essential factor limiting the peak performance of high-speed CPU-based systems. To fill this gap, enlarging cache capacity has been the traditional method, based on the static program locality principle. However, the order of instructions stored in the I-Cache before being sent to the Data Processing Unit (DPU) is a kind of useful information that has not been utilized before. So an architecture containing an Instruction Processing Unit (IPU) in parallel with the ordinary DPU is proposed. The IPU can prefetch, analyze and preprocess a large number of instructions that would otherwise lie untouched in the I-Cache. It is more efficient than a conventional prefetch buffer, which can only store several instructions for previewing. With the IPU, load instructions can be preprocessed while the DPU is simultaneously executing on data. The architecture is termed the ``Instruction Processing Unit with Lookahead Cache'' (IPULOC for short), and with it the idea of dynamic program locality is presented. This paper describes the principle of IPULOC and gives the quantitative parameters for evaluation. Tools for simulating IPULOC have been developed. The simulation results show that it can improve program locality during program execution, and hence improve the cache hit ratio correspondingly, without further enlarging the on-chip cache that occupies a large portion of chip area.
    Practical Fast Computation of Zernike Moments
    Al-Rawi Mohammed and YANG Jie(杨杰)
    Journal of Computer Science and Technology, 2002, 17 (2): 0-0. 
    A fast computation of Zernike moments from normalized geometric moments is developed in this paper. The computation is multiplication free; only additions are needed to generate Zernike moments. Geometric moments are generated up to high orders using Hatamian's filter by a very simple and straightforward computation scheme. Other kinds of moments (e.g., Legendre, pseudo-Zernike) can be computed with the same algorithm after applying the proper transformations that relate them to geometric moments. Proper normalization of the geometric moments is necessary so that the method can be used for the efficient computation of Zernike moments. To ensure fair comparisons, recursive algorithms are used to generate Zernike polynomials and other coefficients. The computational complexity model and test programs show that the speed-up factor of the proposed algorithm is superior with respect to other fast and/or direct computations. This is perhaps the first time that Zernike moments can be computed at real-time rates, which encourages the use of Zernike moment features in image retrieval systems that support huge databases, such as the XM experimental model stated for the MPEG-7 experimental core. It is concluded that choosing direct computation would be impractical.
    Automatic Segmentation of News Items Based on Video and Audio Features
    WANG Weiqiang(王伟强)and GAO Wen(高文)
    Journal of Computer Science and Technology, 2002, 17 (2): 0-0. 
    The automatic segmentation of news items is a key for implementing the automatic cataloging system of news video. This paper presents an approach which manages audio and video feature information to automatically segment news items. The integration of audio and visual analyses can overcome the weakness of the approach using only image analysis techniques. It makes the approach more adaptable to various situations of news items. The proposed approach detects silence segments in accompanying audio, and integrates them with shot segmentation results, as well as anchor shot detection results, to determine the boundaries among news items. Experimental results show that the integration of audio and video features is an effective approach to solving the problem of automatic segmentation of news items.
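The audio side of the approach can be sketched directly: slice the accompanying audio into frames, mark frames whose short-time energy falls below a threshold, and report contiguous runs as silence segments, which are then intersected with shot and anchor detections to place item boundaries. The frame size and threshold below are illustrative.

```python
def silence_segments(samples, frame=4, threshold=1.0):
    """Return (start, end) frame ranges whose mean energy is below
    threshold -- candidate boundaries between news items."""
    segs, start = [], None
    n_frames = len(samples) // frame
    for f in range(n_frames):
        chunk = samples[f * frame:(f + 1) * frame]
        energy = sum(s * s for s in chunk) / frame   # short-time energy
        if energy < threshold:
            if start is None:
                start = f                 # silence run begins
        elif start is not None:
            segs.append((start, f))       # silence run ends
            start = None
    if start is not None:
        segs.append((start, n_frames))    # run extends to the end
    return segs
```

A silence run alone is only a candidate; in the paper's approach it becomes an item boundary only when it coincides with a shot cut or an anchor shot, which is what makes the audio-visual integration robust.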
    Why RTL ATPG?
    MIN Yinghua(闵应骅)
    Journal of Computer Science and Technology, 2002, 17 (2): 0-0. 
    Register Transfer Level (RTL) Automatic Test Pattern Generation (ATPG) has been of wide concern for two decades. Meanwhile, gate-level ATPG has made remarkable progress in dealing with large circuits. A question is then posed: do we still need RTL ATPG when gate-level ATPG is capable of generating tests for large circuits? This paper attempts to answer this question. The necessity, difficulty, and major interests of RTL ATPG are reviewed.
    A Rejection Model Based on Multi-Layer Perceptrons for Mandarin Digit Recognition
    ZHONG Lin(钟林),LIU Jia(刘加)and LIU Runsheng(刘润生)
    Journal of Computer Science and Technology, 2002, 17 (2): 0-0. 
    High-performance Mandarin digit recognition (MDR) is much more difficult to achieve than its English counterpart, especially in inexpensive hardware implementations. In this paper, a new Multi-Layer Perceptron (MLP) based postprocessor, an a posteriori probability estimator, is presented and used as the rejection model for Viterbi decoding in a speaker-independent Mandarin digit recognition system based on hidden Markov models (HMMs). Poor utterances, which are recognized by the HMMs but have low a posteriori probability, are rejected. After rejecting about 4.9% of the tested utterances, the MLP rejection model boosts the digit recognition accuracy from 97.1% to 99.6%. This performance is better than that of rejection models based on linear discrimination, likelihood ratios or anti-digit models.
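The rejection decision itself is simple once posteriors are available: accept the top-scoring digit only if its estimated posterior clears a threshold, otherwise reject the utterance rather than risk a wrong digit. In the paper the posteriors come from the MLP postprocessor; the threshold and vectors below are illustrative.

```python
def reject_decide(posteriors, threshold=0.9):
    """Return (best_digit, accepted) for one utterance's posterior vector.

    An utterance is rejected (accepted == False) when even its best
    digit hypothesis has low a posteriori probability.
    """
    best = max(range(len(posteriors)), key=posteriors.__getitem__)
    return best, posteriors[best] >= threshold
```

Raising the threshold rejects more utterances but raises accuracy on the accepted ones; the paper's reported trade-off (rejecting ~4.9% to go from 97.1% to 99.6%) is exactly this curve at one operating point.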
    A Mixed-Mode BIST Scheme Based on Folding Compression
    LIANG Huaguo(梁华国),Sybille Hellebrand and Hans-Joachim Wunderlich
    Journal of Computer Science and Technology, 2002, 17 (2): 0-0. 
    In this paper a new scheme for mixed-mode scan-based BIST with complete fault coverage is presented, and the new concepts of folding sets and folding computation are introduced. The scheme applies a single feedback polynomial of an LFSR both to generate pseudo-random patterns and to compress and extend the seeds of folding sets, where the seed of a folding set is encoded as an initial seed of the LFSR. Moreover, these techniques are 100% compatible with scan design. Experimental results show that the proposed scheme outperforms previously published approaches based on the reseeding of LFSRs.
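The shared machinery in such schemes is the LFSR itself: one feedback polynomial expands a short seed into a long pattern sequence, whether the seed is a random starting point or an encoded deterministic seed. A toy Fibonacci LFSR makes this concrete; the width, taps and seed here are illustrative, not the paper's configuration.

```python
def lfsr(seed, taps, width, n):
    """Fibonacci LFSR: return n successive 'width'-bit states.

    The feedback bit is the XOR of the tap positions of the current
    state; each step shifts left and inserts the feedback bit.
    """
    state = seed
    out = []
    for _ in range(n):
        out.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return out
```

Reseeding-style schemes exploit the fact that only the seed needs storing: the same small circuit regenerates the whole deterministic pattern on chip.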
    Deep Performance Analysis of Refined Harmonic Bin Packing Algorithm
    GU Xiaodong(顾晓东),CHEN Guoliang(陈国良)and XU Yinlong(许胤龙)
    Journal of Computer Science and Technology, 2002, 17 (2): 0-0. 
    Refined Harmonic (RH) is one of the best on-line bin packing algorithms. The algorithm was first proposed by Lee & Lee in 1985, and the upper bound of its worst-case performance ratio has been proved to be 1.63596... In this paper, it is proved that 1.63596... is also the lower bound. The average-case performance of RH is also studied for the first time. It is shown that the average-case performance ratio of RH is 1.28243... under the uniform distribution.
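For readers unfamiliar with the family, the simpler Harmonic(k) scheme that RH refines is easy to state in code: an item of size in (1/(j+1), 1/j] goes into a class-j bin, and a class-j bin holds exactly j such items. This sketch is Harmonic(k), not RH itself (RH adds special handling around size 1/2 to get its better ratio); k and the item stream are illustrative.

```python
def harmonic_pack(items, k=6):
    """On-line Harmonic(k): items of size in (1/(j+1), 1/j] share
    class-j bins, j items per bin. Returns the number of bins opened."""
    open_slots = {}               # class j -> slots left in its open bin
    bins = 0
    for size in items:
        j = min(int(1 // size), k)      # harmonic class of this item
        if open_slots.get(j, 0) == 0:
            bins += 1                   # open a fresh class-j bin
            open_slots[j] = j
        open_slots[j] -= 1
    return bins
```

Because each class is packed independently and on-line, the analysis reduces to counting bins per class, which is also why both worst-case and average-case ratios of such algorithms are tractable to pin down exactly.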
    A Non-Collision Hash Trie-Tree Based Fast IP Classification Algorithm
    XU Ke(徐恪),WU Jianping(吴建平),YU Zhongchao(喻中超)and XU Mingwei(徐明伟)
    Journal of Computer Science and Technology, 2002, 17 (2): 0-0. 
    With the development of network applications, routers must support functions such as firewalls, QoS provision, traffic billing, etc. All these functions require the classification of IP packets, which determines how each packet is subsequently processed. In this article, a novel IP classification algorithm is proposed based on the Grid of Tries algorithm. The new algorithm not only eliminates the original's limitations in the case of multiple fields but also shows better performance in regard to both time and space. It has better overall performance than many other algorithms.
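The trie machinery underlying such classifiers can be sketched on one field: a binary trie over address prefixes, walked bit by bit, returning the rule of the longest matching prefix (the full algorithm extends this to multiple fields). Addresses are shortened to 8 bits and the rules are hypothetical.

```python
class TrieNode:
    def __init__(self):
        self.child = [None, None]   # 0-branch and 1-branch
        self.rule = None            # rule stored at this prefix, if any

def insert(root, prefix, length, rule):
    """Store a rule under an 8-bit prefix of the given length."""
    node = root
    for i in range(length):
        bit = (prefix >> (7 - i)) & 1
        if node.child[bit] is None:
            node.child[bit] = TrieNode()
        node = node.child[bit]
    node.rule = rule

def classify(root, addr):
    """Walk the trie bit by bit; remember the deepest rule seen
    (longest-prefix match)."""
    node, best = root, None
    for i in range(8):
        bit = (addr >> (7 - i)) & 1
        node = node.child[bit]
        if node is None:
            break
        if node.rule is not None:
            best = node.rule
    return best

root = TrieNode()
insert(root, 0b10100000, 3, "firewall-drop")   # prefix 101*/3
insert(root, 0b10110000, 4, "qos-high")        # prefix 1011*/4
```

Lookup cost is bounded by the address width regardless of how many rules are installed, which is the property the trie-based classification schemes compete on.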
    A Rate-Based Flow Control Mechanism for Avoiding Congestion
    ZHANG Xiaolin(张孝林),WANG Yuhong(王宇宏)and WU Jieyi(吴介一)
    Journal of Computer Science and Technology, 2002, 17 (2): 0-0. 
    The rate-based flow control mechanisms for the Available Bit Rate (ABR) service are used to share the available bandwidth of a bottleneck switch connected to a bottleneck link fairly and reasonably among many competing users, and to maintain the buffer queue length of the switch at a desired level in order to avoid congestion in Asynchronous Transfer Mode (ATM) networks. In this paper, a control-theoretic approach that applies a Deadbeat-Response (DR) controller to the design of a rate-based flow control mechanism is presented. The mechanism has a simple structure and is robust in the sense that its stability is not sensitive to changes in the number of active Virtual Connections (VCs). Simulation results show that this mechanism not only ensures a fair share of the bandwidth for all active VCs, regardless of the number of hops they traverse, but also has the advantages of fast convergence, no oscillation, and high link bandwidth utilization.
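The deadbeat flavour (drive the error to zero in a fixed, small number of steps with no oscillation) can be shown with a one-source fluid-queue toy: each step the controller picks the source rate so the next queue update removes the entire error from the target length. All numbers are illustrative; real ABR control spans many VCs and feedback delays.

```python
def simulate(steps=12, capacity=10.0, target=20.0):
    """Toy discrete-time queue under a deadbeat-style rate controller.

    Starts with the queue above target; returns the queue-length history.
    """
    queue, rate, hist = 35.0, 10.0, []
    for _ in range(steps):
        # controller: choose the rate that cancels the queue error
        # in one update (deadbeat response for this simple model)
        rate = max(0.0, capacity - (queue - target))
        queue = max(0.0, queue + rate - capacity)   # fluid queue update
        hist.append(queue)
    return hist
```

In this toy model the queue settles exactly on the target after two steps and stays there without oscillating, which is the behaviour the DR design aims to preserve in the much harder multi-VC, delayed-feedback setting.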
Journal of Computer Science and Technology
Institute of Computing Technology, Chinese Academy of Sciences
P.O. Box 2704, Beijing 100190 P.R. China
E-mail: jcst@ict.ac.cn
  Copyright ©2015 JCST, All Rights Reserved