Bimonthly    Since 1986
ISSN 1000-9000(Print)
CN 11-2296/TP
Publication Details
Edited by: Editorial Board of Journal of Computer Science and Technology
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Distributed by:
China: All Local Post Offices
Other Countries: Springer
  • Table of Contents
      15 November 2003, Volume 18 Issue 6
    The Haplotyping Problem: An Overview of Computational Models and Solutions
    Paola Bonizzoni, Gianluca Della Vedova, Riccardo Dondi and Jing Li
    Journal of Computer Science and Technology, 2003, 18 (6): 675-688. 
    The investigation of genetic differences among humans has given evidence that mutations in DNA sequences are responsible for some genetic diseases. The most common mutation is one that involves only a single nucleotide of the DNA sequence, called a single nucleotide polymorphism (SNP). Consequently, computing a complete map of all SNPs occurring in human populations is one of the primary goals of recent studies in human genomics. The construction of such a map requires determining the DNA sequences that form all chromosomes. In diploid organisms like humans, each chromosome consists of two sequences called haplotypes. Distinguishing the information contained in both haplotypes when analyzing chromosome sequences poses several new computational issues, which collectively form an emerging topic of computational biology known as haplotyping. This paper is a comprehensive study of some new combinatorial approaches proposed in this research area, focusing mainly on the formulations and algorithmic solutions of some basic biological problems. Three statistical approaches are briefly discussed at the end of the paper.
    Space-Time Coding and Signal Processing for MIMO Communications
    Inaki Berenguer and Xiaodong Wang
    Journal of Computer Science and Technology, 2003, 18 (6): 689-702. 
    Rapid growth in mobile computing and other wireless multimedia services is inspiring many research and development activities on high-speed wireless communication systems. The main challenges in this area include the development of efficient coding, modulation, and signal processing techniques for improving the quality and spectral efficiency of wireless systems. The recently emerged space-time coding and signal processing techniques for wireless communication systems employing multiple transmit and receive antennas offer a powerful paradigm for meeting these challenges. This paper provides an overview of recent developments in space-time coding and signal processing techniques for multiple-input multiple-output (MIMO) communication systems. We first review the information-theoretic results on the capacities of wireless systems employing multiple transmit and receive antennas. We then describe two representative categories of space-time systems, namely the BLAST system and the space-time block coding system, both of which have been proposed for next-generation high-speed wireless systems. Signal processing techniques for channel estimation and decoding in space-time systems are also discussed. Finally, some other coding and signal processing techniques for wireless systems employing multiple transmit and receive antennas that are currently under intensive research are briefly touched upon.
    Research on Scheduling Algorithms in Web Cluster Servers
    LEI YingChun, GONG YiLi, ZHANG Song and LI GuoJie
    Journal of Computer Science and Technology, 2003, 18 (6): 703-716. 
    This paper quantitatively analyzes the impact of load-balance scheduling algorithms and locality scheduling algorithms on the performance of Web cluster servers, and proposes the Adaptive_LARD algorithm. Compared with the representative LARD algorithm, Adaptive_LARD has two advantages: (1) it adjusts load distribution among the back-ends using the idea of load balancing, avoiding the learning steps of the LARD algorithm and reinforcing its adaptability; (2) by distinguishing between TCP connections accessing disks and those accessing cache memory, it estimates the impact of different connections on the back-ends' load more precisely. Performance evaluations suggest that the proposed method outperforms the LARD algorithm by up to 14.7%.
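The abstract does not spell out Adaptive_LARD itself; as a hedged illustration, the baseline locality-aware request distribution (LARD) idea it refines can be sketched as below. The class name, the load threshold, and the tie-breaking behavior are illustrative assumptions, not the paper's code.

```python
# Minimal sketch of locality-aware request distribution (LARD), the
# baseline policy that Adaptive_LARD builds on.  Each URL sticks to one
# backend (so it stays in that backend's cache) unless the backend
# becomes overloaded, in which case the URL is reassigned to the
# least-loaded backend.

class LardDispatcher:
    def __init__(self, backends, high_load=10):
        self.backends = backends      # backend id -> current connection count
        self.target = {}              # URL -> assigned backend
        self.high_load = high_load    # illustrative overload threshold

    def least_loaded(self):
        return min(self.backends, key=self.backends.get)

    def dispatch(self, url):
        b = self.target.get(url)
        # First request for this URL, or its backend is overloaded:
        # reassign (load balancing); otherwise keep the assignment
        # (cache locality).
        if b is None or self.backends[b] > self.high_load:
            b = self.least_loaded()
            self.target[url] = b
        self.backends[b] += 1
        return b

d = LardDispatcher({"b1": 0, "b2": 0})
first = d.dispatch("/index.html")
second = d.dispatch("/index.html")   # same backend while it is not overloaded
```

Adaptive_LARD's refinement, per the abstract, is to weight disk-bound and cache-bound connections differently when computing each backend's load, rather than counting raw connections as above.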
    Pseudo-Cycle-Based Multicast Routing in Wormhole-Routed Networks
    SONG JianPing, HOU ZiFeng and XU Ming
    Journal of Computer Science and Technology, 2003, 18 (6): 717-724. 
    This paper addresses the problem of fault-tolerant multicast routing in wormhole-routed multicomputers. A new pseudo-cycle-based routing method is presented for constructing deadlock-free multicast routing algorithms. With at most two virtual channels, this technique can be applied to any connected network with an arbitrary topology. Simulation results show that this technique results in negligible performance degradation even in the presence of a large number of faulty nodes.
    Minimizing ADMs on WDM Directed Fiber Trees
    ZHOU FengFeng, CHEN GuoLiang, XU YinLong and GU Jun
    Journal of Computer Science and Technology, 2003, 18 (6): 725-731. 
    This paper proposes a polynomial-time algorithm for the Minimum WDM/SONET Add/Drop Multiplexer problem (MADM) on WDM directed fiber trees, whether or not wavelength converters are used. It runs in O(m^2 n) time, where n and m are the number of nodes of the tree and the number of requests, respectively. Incorporating the work of T. Erlebach et al., the proposed algorithm also reaches the lower bound on the number of required wavelengths with greedy algorithms for the case without wavelength converters. Combined with some previous work, the algorithm greatly reduces the number of required wavelengths while using the minimal number of ADMs for the case with limited wavelength converters. The experimental results confirm the minimal number of required ADMs on WDM directed fiber trees.
    CNB: A Critical-Network-Based Timing Optimization Method for Standard Cell Global Routing
    HONG XianLong, JING Tong, XU JingYu, BAO HaiYun and GU Jun
    Journal of Computer Science and Technology, 2003, 18 (6): 732-738. 
    A novel method for timing optimization in global routing, named critical-network-based (CNB), is presented in this paper. Its essence differs from that of the typical existing methods, namely the net-based (NB) and critical-path-based (CPB) methods. The main contribution of this paper is that the CNB delay reduction method is more efficient than the typical existing ones: it makes it possible to reduce delay with a global view of the critical network. Based on CNB, a timing optimization algorithm for global routing is implemented and tested on Microelectronics Center of North Carolina (MCNC) benchmarks. Experimental results comparing this algorithm with the existing ones show that it is able to control delay efficiently.
    Deterministic VLSI Block Placement Algorithm Using Less Flexibility First Principle
    DONG SheQin, HONG XianLong, WU YuLiang and GU Jun
    Journal of Computer Science and Technology, 2003, 18 (6): 739-746. 
    In this paper, a simple yet effective deterministic algorithm for the VLSI block placement problem is proposed, considering packing area and interconnect wiring simultaneously. The algorithm is based on a principle inspired by observing how ancient professionals solved similar problems. Using this so-called Less Flexibility First principle, blocks with the least packing flexibility in shape and interconnect requirements are greedily packed into the empty space with the least packing flexibility. Experimental results demonstrate that the algorithm, though simple, is quite effective in solving the problem. The same philosophy could also be used to design efficient heuristics for other hard problems, such as placement with preplaced modules and placement with L/T-shaped modules.
    A Method to Build a Super Small but Practically Accurate Language Model for Handheld Devices
    WU GenQing and ZHENG Fang
    Journal of Computer Science and Technology, 2003, 18 (6): 747-755. 
    In this paper, an important question is raised: can a small language model be practically accurate enough? The purpose of a language model, the problems a language model faces, and the factors that affect its performance are then analyzed. Finally, a novel method for language model compression is proposed, which makes large language models usable in handheld devices such as mobile phones, smart phones, personal digital assistants (PDAs), and handheld personal computers (HPCs). The proposed compression method has three aspects. First, the language model parameters are analyzed, and a criterion based on an importance measure of n-grams is used to determine which n-grams to keep and which to remove. Second, a piecewise linear warping method is proposed to compress the uni-gram count values in the full language model. Third, a rank-based quantization method is adopted to quantize the bi-gram probability values. Experiments show that with this compression method the language model can be reduced dramatically, to only about 1M bytes, while performance hardly decreases. This provides good evidence that a language model compressed by a well-designed compression technique is practically accurate enough, which makes the language model usable in handheld devices.
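The rank-based quantization step can be illustrated schematically: probabilities are sorted, split into a fixed number of rank buckets, and each bucket is replaced by a single codebook value. The bucket count and the choice of the bucket mean as the representative are assumptions for illustration, not the paper's exact scheme.

```python
# Schematic sketch of rank-based quantization of n-gram probabilities.
# Sorting by value and bucketing by rank means each stored probability
# can be replaced by a small codebook index, shrinking the model.

def rank_quantize(probs, n_levels=4):
    order = sorted(range(len(probs)), key=lambda i: probs[i])
    bucket_size = -(-len(probs) // n_levels)        # ceiling division
    quantized = [0.0] * len(probs)
    codebook = []
    for b in range(0, len(order), bucket_size):
        idx = order[b:b + bucket_size]
        rep = sum(probs[i] for i in idx) / len(idx)  # bucket mean as codeword
        codebook.append(rep)
        for i in idx:
            quantized[i] = rep
    return quantized, codebook

probs = [0.01, 0.02, 0.20, 0.25, 0.50, 0.55, 0.90, 0.95]
q, cb = rank_quantize(probs, n_levels=4)   # 8 values collapse to 4 codewords
```

In a real model the codebook indices, not the floats, would be stored per bi-gram, which is where the space saving comes from.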
    Towards Robustness to Speech Rate in Mandarin All-Syllable Recognition
    CHEN YiNing, ZHU Xuan, LIU Jia and LIU RunSheng
    Journal of Computer Science and Technology, 2003, 18 (6): 756-761. 
    In Mandarin all-syllable recognition, many insertion errors occur due to the influence of non-consonant syllables. Introducing a duration model into the recognition process is a direct way to reduce these errors, but it usually does not work as well as expected, because duration is sensitive to speech rate. Aiming at this problem, a novel context-dependent duration distribution normalized by speech rate is proposed in this paper and applied to a speech recognition system based on an improved Hidden Markov Model (HMM) framework. To realize this algorithm, the authors employ a new method to estimate the speech rate of a sentence, compute the duration probability combined with speech rate, and apply this duration information in the post-processing stage. With little change to the recognition process and resource demands, the duration model is adopted efficiently in the system. The experimental results indicate that the syllable error rates decrease significantly on two different speech corpora; for insertions in particular, the error rates fall by about sixty to eighty percent.
    Combining Static Analysis and Case-Based Search Space Partitioning for Reducing Peak Memory in Model Checking
    ZHANG WenHui
    Journal of Computer Science and Technology, 2003, 18 (6): 762-770. 
    Memory is one of the critical resources in model checking. This paper discusses a strategy for reducing peak memory in model checking by case-based partitioning of the search space. This strategy combines model checking for verification of the different cases with static analysis or expert judgment for guaranteeing the completeness of the cases. The static analysis is described using PROMELA as the modeling language. The strategy is applicable to a subset of models, including models for verification of certain aspects of protocols.
    Analyzing and Mining Ordered Information Tables
    SAI Ying and Y. Y. Yao
    Journal of Computer Science and Technology, 2003, 18 (6): 771-779. 
    Work in inductive learning has mostly concentrated on classification. However, in many applications it is desirable to order rather than classify instances. To model ordering problems, we generalize the notion of information tables to ordered information tables by adding order relations on attribute values. We then propose a data analysis model based on attribute dependency to describe the properties of ordered information tables. The problem of mining ordering rules is formulated as finding associations between orderings of attribute values and the overall ordering of objects. An ordering rule may state that "if the value of an object x on an attribute a is ordered ahead of the value of another object y on the same attribute, then x is ordered ahead of y". For mining ordering rules, we first transform an ordered information table into a binary information table, and then apply any standard machine learning or data mining algorithm. As an illustration, we analyze in detail Maclean's universities ranking for the year 2000.
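The ordered-to-binary transformation described above can be sketched directly: each ordered pair of objects becomes one row, and each attribute becomes a binary feature recording whether the first object precedes the second on that attribute. The attribute names and toy data below are illustrative, not taken from the Maclean's dataset.

```python
# Sketch of transforming an ordered information table into a binary
# information table: for every ordered pair (x, y) and attribute a,
# record 1 if x's value precedes y's value on a, else 0.

from itertools import permutations

def to_binary_table(table, attributes):
    """table: dict object -> dict attribute -> comparable value
    (here, lower value means ordered ahead, e.g., a rank position)."""
    rows = []
    for x, y in permutations(table, 2):
        row = {"pair": (x, y)}
        for a in attributes:
            row[a] = 1 if table[x][a] < table[y][a] else 0
        rows.append(row)
    return rows

# Toy ranking data with assumed attribute names.
universities = {
    "U1": {"reputation": 1, "funding": 2},
    "U2": {"reputation": 2, "funding": 1},
    "U3": {"reputation": 3, "funding": 3},
}
binary = to_binary_table(universities, ["reputation", "funding"])
# 3 objects yield 6 ordered pairs; a standard rule learner can now relate
# attribute-wise orderings to the overall ordering of the pairs.
```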
    Fast Construction of Plant Architectural Models Based on Substructure Decomposition
    YAN HongPing, Philippe de Reffye, PAN ChunHong and HU BaoGang
    Journal of Computer Science and Technology, 2003, 18 (6): 780-787. 
    Plant structure, representing the physical links among different organs, contains many similar substructures. In this paper, a new method is presented to construct architectural models of most plant species. The plant structure is decomposed into a stem, a set of lateral substructures, and a terminal substructure; this is called substructure decomposition. Based on this decomposition, plant structures are expressed iteratively, and a derivative formula is employed to compute the number of organs in plant structures; the geometrical sizes of 3D plant organs are obtained by borrowing the hydraulic model. Using the 3D organs, a substructure library is built. Based on the substructures stored in the library, one can construct a 3D plant structure according to certain topological and geometrical rules. Experiments with different plant species are included in this paper to demonstrate the validity of the new method. The experimental results show that the approach follows botanical knowledge and constructs plant structures of most plant species with high efficiency. In addition, the method enables users to inspect detailed information of the plant structure.
    Kernel-Based Nonlinear Discriminant Analysis for Face Recognition
    LIU QingShan, HUANG Rui, LU HanQing and MA SongDe
    Journal of Computer Science and Technology, 2003, 18 (6): 788-795. 
    Linear subspace analysis methods have been successfully applied to extract features for face recognition, but because of their linear nature they are inadequate to represent the complex and nonlinear variations of real face images, such as variations in illumination, facial expression, and pose. In this paper, a nonlinear subspace analysis method, Kernel-based Nonlinear Discriminant Analysis (KNDA), is presented for face recognition, which combines the nonlinear kernel trick with the linear subspace analysis method Fisher Linear Discriminant Analysis (FLDA). First, the kernel trick is used to project the input data into an implicit feature space; then FLDA is performed in this feature space, yielding nonlinear discriminant features of the input data. In addition, to reduce the computational complexity, a geometry-based feature vector selection scheme is adopted. A similar nonlinear subspace analysis method is Kernel-based Principal Component Analysis (KPCA), which combines the kernel trick with linear Principal Component Analysis (PCA). Experiments are performed with the polynomial kernel, comparing KNDA with KPCA and FLDA. Extensive experimental results show that KNDA achieves a higher recognition rate than KPCA and FLDA.
    An Efficient Optimization Procedure for Tetrahedral Meshes by Chaos Search Algorithm
    SUN ShuLi and LIU JianFei
    Journal of Computer Science and Technology, 2003, 18 (6): 796-803. 
    A simple and efficient local optimization-based procedure for node repositioning/smoothing of three-dimensional tetrahedral meshes is presented. The initial tetrahedral mesh is optimized with respect to a specified element shape measure by a chaos search algorithm, which is very effective for optimization problems with only a few design variables. Examples show that the presented smoothing procedure provides favorable conditions for the local transformation approach, and that mesh quality can be significantly improved by combining these two procedures with respect to a specified element shape measure. Meanwhile, several commonly used shape measures for tetrahedral elements, long considered equivalent in some weak sense, are briefly re-examined in this paper. A preliminary study indicates that using different measures to evaluate the change of element shape will probably lead to inconsistent results for both well-shaped and poorly shaped elements. The proposed smoothing approach can be utilized as an appropriate and effective tool for evaluating element shape measures and their influence on the mesh optimization process and the optimal solution.
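Chaos search typically drives candidate generation with a chaotic map, commonly the logistic map, whose iterates ergodically cover the unit interval; each chaotic value is scaled into the search range and the best point is kept. A minimal single-variable sketch under that assumption follows; the toy objective stands in for the paper's element shape measure.

```python
# Minimal sketch of chaos search for a one-variable objective.
# The logistic map x' = 4x(1-x) in its chaotic regime densely fills
# (0, 1); scaling its iterates into [lo, hi] gives a deterministic,
# derivative-free sampling of the search interval.

def chaos_search(objective, lo, hi, iters=2000, seed=0.7):
    x = seed                        # avoid fixed points such as 0 and 0.75
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        x = 4.0 * x * (1.0 - x)     # logistic map iterate
        cand = lo + (hi - lo) * x   # scale into the search interval
        f = objective(cand)
        if f < best_f:
            best_x, best_f = cand, f
    return best_x, best_f

# Toy objective: minimize (x - 2)^2 over [0, 5]; the minimum is at x = 2.
xb, fb = chaos_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

In mesh smoothing the "objective" would be the (negated) shape quality of the elements incident to the node being repositioned, with one chaotic sequence per coordinate.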
    Real-Time Ray Casting Rendering of Volume Clipping in Medical Visualization
    CHEN Wei, HUA Wei, BAO HuJun and PENG QunSheng
    Journal of Computer Science and Technology, 2003, 18 (6): 804-814. 
    This paper presents a real-time ray casting rendering algorithm for "volume clipping plane" as an extension of the conventional ray casting technique. For each viewing direction a (moderate) pre-processing step is performed: the ray traverses the entire volume data (no early ray termination). Its intensity and opacity contributions are divided into several segments which are then sorted and stored by depth. At each sampling position along a segment, accumulated transparency and color are stored at a moderate memory overhead. For visualizing real-time volume clipping, only relevant segment contributions (maximum two) at the location of the clipping plane are considered, thus reducing the calculation to meet real-time requirements. Compared with the previous work that involves time-consuming re-clipping, re-traversing and re-shading, the proposed method achieves quality identical to ray casting at real-time speed. The performance is independent of the volume resolution and/or the number of clipping planes along a given viewing direction. Therefore it is suitable for real-time "internal volume inspections", involving one or several cutting planes, typically applied e.g., in medical visualization and material testing applications.
    Scheduling Algorithms Based on Weakly Hard Real-Time Constraints
    TU Gang, YANG FuMin and LU YanSheng
    Journal of Computer Science and Technology, 2003, 18 (6): 815-821. 
    The problem of scheduling weakly hard real-time tasks is addressed in this paper. The paper first analyzes the characteristics of μ-patterns and weakly hard real-time constraints, then presents two scheduling algorithms for weakly hard real-time systems, Meet Any Algorithm and Meet Row Algorithm. Unlike traditional algorithms that guarantee only deadlines, Meet Any Algorithm and Meet Row Algorithm guarantee both deadlines and constraints. They estimate the probability of a task breaking its constraints and increase the task's priority in advance, rather than at the last moment. Simulation results show that these two algorithms outperform other scheduling algorithms dealing with constraints and can largely decrease the worst-case computation time of real-time tasks.
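A common form of weakly hard constraint is "meet at least m deadlines in any window of k consecutive invocations". A sketch of checking such a constraint against a task's recent history, and of the "raise priority in advance" idea, follows; the function names and the priority-raising rule are illustrative assumptions, not the paper's algorithms.

```python
# Sketch of a weakly hard (m, k) constraint check.  A task's history is
# a sequence of 1 (deadline met) / 0 (deadline missed).

def satisfies_m_in_k(history, m, k):
    """True if every window of k consecutive invocations has >= m hits."""
    if len(history) < k:
        return True
    return all(sum(history[i:i + k]) >= m
               for i in range(len(history) - k + 1))

def should_raise_priority(history, m, k):
    """Raise the task's priority in advance: a single miss on the next
    invocation would already violate the (m, k) constraint."""
    window = history[-(k - 1):] if k > 1 else []
    return sum(window) < m

history = [1, 1, 0, 1]                              # one miss so far
ok = satisfies_m_in_k(history, m=2, k=3)            # windows [1,1,0], [1,0,1]
urgent = should_raise_priority(history, m=2, k=3)   # last two are [0, 1]
```

Here `ok` is true (every length-3 window has at least 2 hits), yet `urgent` is also true: one more miss would produce the window [0, 1, 0], which is exactly the situation where raising priority early, rather than at the last moment, pays off.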
    A Super-Resolution Method with EWA (Elliptical Weighted Average)
    JIANG ZhongDing, LIN Hai, BAO HuJun and MA LiZhuang
    Journal of Computer Science and Technology, 2003, 18 (6): 822-832. 
    This paper introduces a practical algorithm for super-resolution, the process of reconstructing a high-resolution image from low-resolution inputs. The emphasis of the work is super-resolving frames from real-world image/video sequences that may contain significant object occlusion or scene changes. As the quality of super-resolved images relies heavily on the correctness of image alignment between consecutive frames, a robust optical flow method is used to accurately estimate motion between each image pair. An efficient and reliable scheme is devised to detect and discard incorrect matches that may degrade the output quality. An elliptical weighted average (EWA) filter is also introduced to model the point spread function (PSF) of the acquisition system in order to improve the accuracy of the model. A number of complex video sequences are tested to demonstrate the applicability and reliability of the proposed algorithm.
    Evaluation and Choice of Various Branch Predictors for Low-Power Embedded Processor
    FAN DongRui, YANG HongBo, GAO GuangRong and ZHAO RongCai
    Journal of Computer Science and Technology, 2003, 18 (6): 833-838. 
    Power is an important design constraint in embedded computing systems. To meet the power constraint, microarchitecture and hardware designed for high performance need to be revisited from both performance and power angles. This paper studies one such component: the branch predictor. As is well known, branch prediction is critical to exploiting instruction-level parallelism effectively, but it may incur additional power consumption due to the hardware resources dedicated to branch prediction and the extra power consumed on mispredicted branches. This paper explores the design space of branch prediction mechanisms and tries to find the most beneficial one for realizing a low-power embedded processor. The sample processor studied is a Godson-like processor: a dual-issue, out-of-order processor with a deep pipeline, supporting the MIPS instruction set.
    Integrating Scene Parallelism in Camera Auto-Calibration
    LIU Yong, WU ChengKe and Hung-Tat Tsui
    Journal of Computer Science and Technology, 2003, 18 (6): 839-847. 
    This paper presents an approach to camera auto-calibration from uncalibrated video sequences taken by a hand-held camera. The novelty of this approach lies in transforming line parallelism into constraints on the absolute quadric during camera auto-calibration. This makes some critical cases solvable and the reconstruction more Euclidean. The approach is implemented and validated using simulated data and real image data, and the experimental results show its effectiveness.
    Semantic Language and Multi-Language MT Approach Based on SL
    GAO QingShi, HU Yue, LI Li and GAO XiaoYu
    Journal of Computer Science and Technology, 2003, 18 (6): 848-852. 
    Any natural language is regarded as a representation of a semantic language. Translation between two languages I and J is regarded as a transformation between two representations. A natural language I is translated into another natural language J in only two steps: one is semantic analysis of language I based on the "semantic-element representation base" of language I; the other is deployment into the representation of language J based on the "semantic-element representation base" of language J. For translating among N natural languages, only N translation systems need to be developed, rather than N(N-1)/2 or 2N systems.
E-mail: jcst@ict.ac.cn
Copyright ©2015 JCST, All Rights Reserved