Bimonthly    Since 1986
ISSN 1000-9000(Print)
CN 11-2296/TP
Publication Details
Edited by: Editorial Board of Journal of Computer Science and Technology
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Distributed by:
China: All Local Post Offices
Other Countries: Springer
  • Table of Contents
      15 May 2005, Volume 20 Issue 3
    Knowledge Map: Mathematical Model and Dynamic Behaviors
    Hai Zhuge and Xiang-Feng Luo
    Journal of Computer Science and Technology, 2005, 20 (3): 289-295 . 
    Knowledge representation and reasoning is a key issue of the Knowledge Grid. This paper proposes a Knowledge Map (KM) model for representing and reasoning about causal knowledge as an overlay in the Knowledge Grid. It extends Fuzzy Cognitive Maps (FCMs) to represent and reason about not only simple cause-effect relations, but also time-delay causal relations, conditional probabilistic causal relations and sequential relations. The mathematical model and dynamic behaviors of KM are presented. Experiments show that, under certain conditions, the dynamic behaviors of KM can transition between different states. Knowing these conditions, experts can control or modify the constructed KM when its dynamic behaviors do not accord with their expectations. Simulations and applications show that KM is more powerful and natural than FCM in emulating the real world.
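    The plain-FCM core that KM extends can be sketched in a few lines; the time-delay, conditional-probabilistic and sequential relations KM adds are not shown. A minimal sketch, in which the sigmoid squashing function and the toy weight matrix are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fcm_step(state, W, squash=lambda x: 1.0 / (1.0 + np.exp(-x))):
    # One synchronous FCM update: each concept's next activation is
    # the squashed weighted sum of its causes' current activations.
    return squash(state @ W)

def run_fcm(state, W, steps=50, tol=1e-6):
    # Iterate until the map settles into a fixed point -- one of the
    # dynamic behaviors (fixed point, limit cycle, ...) a map can show.
    for _ in range(steps):
        nxt = fcm_step(state, W)
        if np.max(np.abs(nxt - state)) < tol:
            return nxt
        state = nxt
    return state

# Toy 3-concept map: c0 promotes c1, c1 promotes c2, c2 inhibits c0.
W = np.array([[0.0, 0.8, 0.0],
              [0.0, 0.0, 0.6],
              [-0.5, 0.0, 0.0]])
final = run_fcm(np.array([0.9, 0.1, 0.1]), W)
```

    With a sigmoid squash, every activation stays in (0, 1); the "states" the abstract mentions correspond to which attractor this iteration settles into.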
    Bridging Real World Semantics to Model World Semantics for Taxonomy Based Knowledge Representation System
    Ju-Hum Kwon, Chee-Yang Song, Chang-Joo Moon, and Doo-Kwon Baik
    Journal of Computer Science and Technology, 2005, 20 (3): 296-308 . 
    As a means to map ontology concepts, a similarity technique is employed. In particular, context-dependent concept mapping is tackled, which needs contextual information from a knowledge taxonomy. Context-based semantic similarity differs from real-world similarity in that it requires contextual information to calculate similarity. The notion of semantic coupling is introduced to derive similarity for a taxonomy-based system. The semantic coupling shows the degree of semantic cohesiveness for a group of concepts toward a given context. In order to calculate the semantic coupling effectively, the edge counting method is revisited for measuring basic semantic similarity by considering the weighting attributes that affect an edge's strength. The attributes of scaling depth effect, semantic relation type, and virtual connection for edge counting are considered. Furthermore, it is shown how the proposed edge counting method can be adapted for calculating context-based similarity. Thorough experimental results are provided for both edge counting and context-based similarity. The results of the proposed edge counting were encouraging compared with other combined approaches, and the context-based similarity also showed understandable results. The novel contributions of this paper come from two aspects. First, the similarity is increased to a viable level for edge counting. Second, a mechanism is provided to derive context-based similarity in a taxonomy-based system, which has emerged as a hot issue in the literature such as the Semantic Web, MDR, and other ontology-mapping environments.
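    For readers unfamiliar with edge counting, one widely used formula combining path length with a depth-scaling attribute (the exponential/tanh form of Li et al.) looks like this; the paper's own weighting of relation type and virtual connections is not reproduced here, and the parameter values are illustrative assumptions:

```python
import math

def edge_counting_sim(path_len, subsumer_depth, alpha=0.2, beta=0.6):
    # Li et al.-style measure: similarity decays exponentially with
    # the shortest path between two concepts, and is scaled up by the
    # depth of their deepest common subsumer, so edges deep in the
    # taxonomy count as "stronger" than edges near the root.
    return math.exp(-alpha * path_len) * math.tanh(beta * subsumer_depth)

# Same path length, different depths: the deeper pair is more similar,
# which is exactly the "scaling depth effect" attribute.
s_deep = edge_counting_sim(path_len=2, subsumer_depth=8)
s_shallow = edge_counting_sim(path_len=2, subsumer_depth=1)
```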
    Multi-Scaling Sampling: An Adaptive Sampling Method for Discovering Approximate Association Rules
    Cai-Yan Jia and Xie-Ping Gao
    Journal of Computer Science and Technology, 2005, 20 (3): 309-318 . 
    One of the obstacles to efficient association rule mining is the explosive expansion of data sets, since it is costly or impossible to scan large databases, especially multiple times. A popular solution for improving the speed and scalability of association rule mining is to run the algorithm on a random sample instead of the entire database. But how to effectively define and efficiently estimate the degree of error with respect to the outcome of the algorithm, and how to determine the sample size needed, have remained entangled research problems until now. In this paper, an effective and efficient algorithm is given, based on PAC (Probably Approximately Correct) learning theory, to measure and estimate sample error. Then, a new adaptive, on-line, fast sampling strategy --- multi-scaling sampling --- is presented, inspired by MRA (Multi-Resolution Analysis) and the Shannon sampling theorem, for quickly obtaining acceptably approximate association rules at an appropriate sample size. Both theoretical analysis and empirical study show that the sampling strategy can achieve a very good speed-accuracy trade-off.
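    A generic sketch of the PAC flavour of this idea: a Hoeffding bound gives a sufficient sample size for a target support error eps and confidence delta, and the sample is grown by doubling, loosely echoing the multi-scaling idea. This is not the paper's exact MRA-based estimator; the constants and the demo database are assumptions:

```python
import math
import random

def hoeffding_sample_size(eps, delta):
    # PAC-style Hoeffding bound: with this many sampled transactions,
    # an itemset's estimated support is within eps of its true support
    # with probability at least 1 - delta.
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps * eps))

def adaptive_support(db, itemset, eps=0.05, delta=0.05, start=128):
    # Keep doubling the sample (reusing earlier draws) until it
    # reaches the PAC-sufficient size, then report estimated support.
    target = hoeffding_sample_size(eps, delta)
    sample = random.choices(db, k=start)
    while len(sample) < target:
        sample += random.choices(db, k=len(sample))  # next scale: double
    return sum(1 for t in sample if itemset <= t) / len(sample)

random.seed(0)
db = [frozenset({1, 2}), frozenset({1}), frozenset({2})] * 1000
est = adaptive_support(db, {1})   # true support of {1} is 2/3
```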
    Fuzzy Constraint-Based Agent Negotiation
    Menq-Wen Lin, K. Robert Lai, and Ting-Jung Yu
    Journal of Computer Science and Technology, 2005, 20 (3): 319-330 . 
    Conflicts between two or more parties arise for various reasons and from various perspectives. Thus, resolution of conflicts frequently relies on some form of negotiation. This paper presents a general problem-solving framework for modeling multi-issue multilateral negotiation using fuzzy constraints. Agent negotiation is formulated as a distributed fuzzy constraint satisfaction problem (DFCSP). Fuzzy constraints are thus used to naturally represent each agent's desires involving imprecision and human conceptualization, particularly when lexical imprecision and subjective matters are concerned. On the other hand, based on fuzzy constraint-based problem-solving, our approach enables an agent not only to systematically relax fuzzy constraints to generate a proposal, but also to employ fuzzy similarity to select the alternative that is subject to its acceptability by the opponents. The task of problem-solving is to reach an agreement that benefits all agents with a high satisfaction degree of fuzzy constraints, and to move towards the deal more quickly, since the search focuses only on the feasible solution space. An application to multilateral negotiation of travel planning is provided to demonstrate the usefulness and effectiveness of our framework.
    Computation on Sentence Semantic Distance for Novelty Detection
    Hua-Ping Zhang, Jian Sun, Bing Wang, and Shuo Bai
    Journal of Computer Science and Technology, 2005, 20 (3): 331-337 . 
    Novelty detection is to retrieve new information and filter redundancy from given sentences that are relevant to a specific topic. In TREC2003, the authors tried an approach to novelty detection with semantic distance computation. The motivation is to expand a sentence by introducing semantic information. Computation of the semantic distance between sentences incorporates WordNet with statistical information. Novelty detection is treated as a binary classification problem: new sentence or not. The feature vector, used in the vector space model for classification, consists of various factors, including the semantic distance from the sentence to the topic and the distance from the sentence to the previous relevant context occurring before it. New sentences are then detected with Winnow and support vector machine classifiers, respectively. Several experiments are conducted to survey the relationship between different factors and performance. It is shown that semantic computation is promising for novelty detection. The ratio of new sentence size to relevant size is further studied given different relevant document sizes. It is found that the ratio decreases at a certain rate (about 0.86). Then another group of experiments is performed, supervised with the ratio. It is demonstrated that the ratio helps to improve novelty detection performance.
    Logical Sentences as the Intent of Concepts
    Yu Sun, Yue-Fei Sui, and You-Ming Xia
    Journal of Computer Science and Technology, 2005, 20 (3): 338-344 . 
    Pragmatics plays an important role in correctly understanding sentences. Much useful information will be lost if the context in which a sentence is asserted is ignored. There are some logical approaches to pragmatics, such as situation theories and context logics. Although these methods associate a sentence with a context or a situation, they consider only the truth value of the sentence. However, a sentence should have more meanings than its truth value, and people care more about what a sentence conveys. Because of the effect of contexts, the meaning of a sentence is not always its semantic meaning, and a sentence may have different pragmatic implications in different contexts. In this paper, a context is considered as some structure in the real world. A sentence from some logical language is conceptualized as a concept, whose intent is the set of sentences implied semantically by the sentence, and whose extent is the set of contexts in which the sentence describes a part of the context. In terms of the tools and theories of concepts, a strictly defined theory is given to study the pragmatics of sentences in contexts in information systems, which cannot be derived from the sentences by using logical reasoning methods.
    Parallel Data Cube Storage Structure for Range Sum Queries and Dynamic Updates
    Hong Gao and Jian-Zhong Li
    Journal of Computer Science and Technology, 2005, 20 (3): 345-356 . 
    I/O parallelism is considered to be a promising approach to achieving high performance in parallel data warehousing systems, where huge amounts of data and complex analytical queries have to be processed. This paper proposes a parallel secondary data cube storage structure (PHC for short) to efficiently support the processing of range sum queries and dynamic updates on a data cube using parallel computing systems. Based on PHC, two parallel algorithms for processing range sum queries and updates are also proposed. Both algorithms have the same time complexity, O(log^d n / P). The analytical and experimental results show that PHC and the parallel algorithms have high performance and achieve optimum speedup.
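    PHC itself is a parallel secondary structure, but the serial idea underneath range-sum cubes is the prefix-sum cube: store prefix sums so that any range sum needs only a constant number of lookups (2^d for d dimensions) rather than a scan of the region. A 2D sketch, with illustrative array contents:

```python
import numpy as np

def prefix_cube(A):
    # Prefix-sum cube P: P[i, j] = sum of A[0..i, 0..j].
    return A.cumsum(axis=0).cumsum(axis=1)

def range_sum(P, lo, hi):
    # Sum of A[lo[0]..hi[0], lo[1]..hi[1]] by inclusion-exclusion
    # on the prefix cube: at most 2^d = 4 lookups in 2D.
    (l0, l1), (h0, h1) = lo, hi
    s = P[h0, h1]
    if l0 > 0:
        s -= P[l0 - 1, h1]
    if l1 > 0:
        s -= P[h0, l1 - 1]
    if l0 > 0 and l1 > 0:
        s += P[l0 - 1, l1 - 1]
    return s

A = np.arange(16).reshape(4, 4)
P = prefix_cube(A)
```

    The trade-off the paper addresses is that a plain prefix-sum cube makes updates expensive; PHC distributes the structure over P processors to get the O(log^d n / P) bound for both operations.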
    Accomplishing Deterministic XML Query Optimization
    Dun-Ren Che
    Journal of Computer Science and Technology, 2005, 20 (3): 357-366 . 
    As the popularity of XML (eXtensible Markup Language) keeps growing rapidly, the management of XML compliant structured-document databases has become a very interesting and compelling research area. Query optimization for XML structured documents stands out as one of the most challenging research issues in this area because of the much enlarged optimization (search) space, which is a consequence of the intrinsic complexity of the underlying data model of XML data. We therefore propose to apply deterministic transformations to query expressions to aggressively prune the search space and quickly achieve a sufficiently improved alternative (if not the optimal) for each incoming query expression. This idea is not just exciting but practically attainable. This paper first provides an overview of our optimization strategy, and then focuses on the key implementation issues of our rule-based transformation system for XML query optimization in a database environment. The performance results we obtained from experimentation show that our approach is a valid and effective one.
    Semi-Closed Cube: An Effective Approach to Trading Off Data Cube Size and Query Response Time
    Sheng-En Li and Shan Wang
    Journal of Computer Science and Technology, 2005, 20 (3): 367-372 . 
    The results of a data cube occupy a huge amount of disk space when the base table has a large number of attributes. A new type of data cube, the compact data cube, such as the condensed cube and the quotient cube, was proposed to solve this problem. It compresses the data cube dramatically. However, its query cost is so high that it cannot be used in most applications. This paper introduces the semi-closed cube to reduce the size of the data cube while achieving almost the same query response time as the data cube does. The semi-closed cube is a generalization of the condensed cube and the quotient cube, and is constructed from a quotient cube. When the query cost of the quotient cube is higher than a given threshold, the semi-closed cube selects some views and picks a fellow for each of them. All the tuples of those views are materialized except those closed by their fellows. To find a tuple of those views, users only need to scan the view and its fellow. Thus, their query performance is improved. Experiments were conducted using a real-world data set. The results show that the semi-closed cube is an effective approach to compressing the data cube.
    Declarative XML Update Language Based on a Higher Data Model
    Guo-Ren Wang and Xiao-Lin Zhang
    Journal of Computer Science and Technology, 2005, 20 (3): 373-377 . 
    With the extensive use of XML in applications over the Web, how to update XML data is becoming an important issue, because the role of XML has expanded beyond traditional applications, in which XML is used for information exchange and data representation over the Web. So far, several languages have been proposed for updating XML data, but they are all based on lower, so-called graph-based or tree-based data models. Update requests are thus expressed in a nonintuitive and unnatural way, and update statements are too complicated to comprehend. This paper presents a novel declarative XML update language which is an extension of the XML-RL query language. Compared with other existing XML update languages, it has the following features. First, it is the only XML data manipulation language based on a higher data model. Second, the language can express complex update requests at multiple levels in a hierarchy in a simple and flat way. Third, the language directly supports the functionality of updating complex objects, while no other update language supports these operations. Lastly, most existing languages use rename to modify attribute and element names, which differs from updates on values. The proposed language modifies tag names, values, and objects in a unified way by introducing three kinds of logical binding variables: object variables, value variables, and name variables.
    Illumination Invariant Recognition of Three-Dimensional Texture in Color Images
    Jie Yang and Mohammed Al-Rawi
    Journal of Computer Science and Technology, 2005, 20 (3): 378-388 . 
    In this paper, illumination-affine invariant methods are presented based on affine moment normalization techniques, Zernike moments, and multiband correlation functions. The methods are suitable for the illumination invariant recognition of 3D color texture. Complex-valued moments (i.e., Zernike moments) and affine moment normalization are used in the derivation of illumination affine invariants, where the real-valued affine moment invariants fail to provide affine invariants that are independent of illumination changes. Three different moment normalization methods have been used, two of which are based on the affine moment normalization technique, and the third is based on reducing the affine transformation to a Euclidean transform. It is shown that, for a change of illumination and orientation, the affinely normalized Zernike moment matrices are related by a linear transform. Experimental results are obtained in two tests: the first uses textures of outdoor scenes, while the second is performed on the well-known CUReT texture database. Both tests show the high recognition efficiency of the proposed recognition methods.
    Blending Canal Surfaces Based on PH Curves
    Chen-Dong Xu and Fa-Lai Chen
    Journal of Computer Science and Technology, 2005, 20 (3): 389-395 . 
    In this paper, a new method for blending two canal surfaces is proposed. The blending surface is itself a generalized canal surface, the spine curve of which is a PH (Pythagorean-Hodograph) curve. The blending surface possesses an attractive property --- its representation is rational. The method is extensible to blend general surfaces as long as the blending boundaries are well-defined.
    Effects of Local-Lag Mechanism on Task Performance in a Desktop CVE System
    Hong Chen, Ling Chen, and Gen-Cai Chen
    Journal of Computer Science and Technology, 2005, 20 (3): 396-401 . 
    Consistency maintenance is a core problem in Collaborative Virtual Environment (CVE) research. The approaches used in Networked Virtual Environments (e.g., the DR algorithm) cannot be used in CVEs, for they cannot prevent short-term inconsistency. Therefore, the local-lag mechanism has been proposed to eliminate short-term inconsistency in CVEs. Choosing a proper lag value is a key problem in the local-lag mechanism. This paper studies the effects of lag value (0ms--900ms) on task performance in a desktop CVE system. Experimental results indicate that the effect of lag value on task performance is not linear. The effect can be separated into four segments by three dividing points: 150ms, 300ms and 600ms. Lag has no effect on task performance in the range 0ms to 150ms. From 150ms to 300ms, lag slightly affects task performance. Lag deteriorates task performance seriously in the range 300ms to 600ms. When lag is longer than 600ms, the task sometimes cannot be accomplished.
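    The local-lag mechanism itself is simple to sketch: a locally issued operation is scheduled lag seconds into the future instead of being applied immediately, so local and remote sites can all execute it at the same wall-clock time. A minimal sketch, where the class and its API are hypothetical and not taken from the paper:

```python
import heapq

class LocalLagQueue:
    # Local-lag: a locally issued operation is not executed at once
    # but scheduled `lag` seconds into the future, giving remote
    # sites time to receive it, so every site can apply it at the
    # same moment and short-term inconsistency never appears.
    def __init__(self, lag):
        self.lag = lag
        self.pending = []           # min-heap of (due_time, op)

    def issue(self, now, op):
        heapq.heappush(self.pending, (now + self.lag, op))

    def due(self, now):
        # Pop every operation whose scheduled time has arrived.
        ready = []
        while self.pending and self.pending[0][0] <= now:
            ready.append(heapq.heappop(self.pending)[1])
        return ready

# Lag of 0.15 s: the paper's first dividing point, below which
# lag was found not to affect task performance.
q = LocalLagQueue(lag=0.15)
q.issue(0.00, "move-A")
q.issue(0.05, "move-B")
```

    The paper's contribution is empirical: it tells a designer which lag values this queue can safely be configured with.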
    Arabic Word Recognition by Classifiers and Context
    Nadir Farah, Labiba Souici, and Mokhtar Sellami
    Journal of Computer Science and Technology, 2005, 20 (3): 402-410 . 
    Given the number and variety of methods used for handwriting recognition, it has been shown that there is no single method that can be called the ``best''. In recent years, the combination of different classifiers and the use of contextual information have become major areas of interest for improving recognition results. This paper addresses a case study on the combination of multiple classifiers and the integration of syntactic-level information for the recognition of handwritten Arabic literal amounts. To the best of our knowledge, this is the first time either of these methods has been applied to Arabic word recognition. Using three individual classifiers with high-level global features, we performed word recognition experiments. A parallel combination method was tested for all possible configuration cases of the three chosen classifiers. A syntactic analyzer makes a final decision on the candidate words generated by the best configuration scheme. The effectiveness of contextual knowledge integration in our application is confirmed by the obtained results.
    Wavelet Energy Feature Extraction and Matching for Palmprint Recognition
    Xiang-Qian Wu, Kuan-Quan Wang, and David Zhang
    Journal of Computer Science and Technology, 2005, 20 (3): 411-418 . 
    According to the fact that the basic features of a palmprint, including principal lines, wrinkles and ridges, have different resolutions, in this paper we analyze palmprints using a multi-resolution method and define a novel palmprint feature, called the wavelet energy feature (WEF), based on the wavelet transform. WEF can reflect the wavelet energy distribution of the principal lines, wrinkles and ridges in different directions at different resolutions (scales); thus it can efficiently characterize palmprints. This paper also analyzes the discriminability of each level of WEF and, according to these discriminabilities, chooses a suitable weight for each level to compute the weighted city block distance for recognition. The experimental results show that the order of the discriminabilities of each level of WEF, from strong to weak, is the 4th, 3rd, 5th, 2nd and 1st level. They also show that WEF is robust to some extent to rotation and translation of the images. Accuracies of 99.24% and 99.45% have been obtained in palmprint verification and palmprint identification, respectively. These results demonstrate the power of the proposed approach.
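    The two building blocks named in the abstract, subband energy and the level-weighted city block distance, can be sketched directly; the weight values below are illustrative placeholders, not the discriminability-derived weights the paper computes:

```python
import numpy as np

def wef(subband):
    # Wavelet energy of one subband: sum of squared coefficients.
    return float(np.sum(np.square(subband)))

def weighted_city_block(f1, f2, weights):
    # Weighted city-block (L1) distance between two WEF vectors;
    # the paper weights each decomposition level by its measured
    # discriminability (4th level strongest, 1st weakest).
    f1, f2, w = (np.asarray(a, dtype=float) for a in (f1, f2, weights))
    return float(np.sum(w * np.abs(f1 - f2)))

# Illustrative placeholder weights for 5 levels, ordered 1st..5th.
weights = np.array([0.05, 0.15, 0.30, 0.35, 0.15])
```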
    Phase Correlation Based Iris Image Registration Model
    Jun-Zhou Huang, Tie-Niu Tan, Li Ma, and Yun-Hong Wang
    Journal of Computer Science and Technology, 2005, 20 (3): 419-425 . 
    Iris recognition is one of the most reliable personal identification methods. In iris recognition systems, image registration is an important component. Accurately registering iris images leads to a higher recognition rate for an iris recognition system. This paper proposes a phase correlation based method for iris image registration with sub-pixel accuracy. Compared with existing methods, it is insensitive to image intensity and can compensate, to a certain extent, for the non-linear iris deformation caused by pupil movement. Experimental results show that the proposed algorithm has an encouraging performance.
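    The integer-pixel core of phase correlation is compact: the inverse FFT of the normalized cross-power spectrum peaks at the translation between the two images. The paper refines this to sub-pixel accuracy and handles iris deformation, which this sketch does not:

```python
import numpy as np

def phase_correlate(a, b):
    # Normalized cross-power spectrum: keep only the phase difference,
    # whose inverse FFT is a delta at the translation between a and b.
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    R = Fa * np.conj(Fb)
    R /= np.abs(R) + 1e-12          # discard magnitude, keep phase
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map shifts larger than half the image size to negative offsets.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))   # translate by (+5, -3)
```

    Because only the phase is kept, a global change in image intensity leaves the correlation peak in place, which is the insensitivity the abstract refers to.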
    Complete Multiple Description Mesh-Based Video Coding Scheme and Its Performance
    Yang-Li Wang and Cheng-Ke Wu
    Journal of Computer Science and Technology, 2005, 20 (3): 426-431 . 
    This paper proposes a multiple description (MD) mesh-based motion coding method, which generates two descriptions for mesh-based motion by subsampling the nodes of a right-angled triangular mesh and dividing them into two groups. Motion vectors associated with the mesh nodes in each group are transmitted over distinct channels. With the nodes in each group, two other regular triangular meshes, besides the original one, can be constructed, and three different prediction images can be reconstructed according to the descriptions available. The proposed MD mesh-based motion coding method is then combined with the pairwise correlating transform (PCT), and a complete MD video coding scheme is proposed. Further measures are taken to reduce the mismatch between the encoder and decoder that occurs when only one description is received and the decoder reconstruction differs from the encoder's. The performance of the proposed scheme is evaluated using computer simulations, and the results show that, compared to Reibman's MD transform coding (MDTC) method, the proposed scheme achieves better redundancy rate-distortion (RRD) performance. In the packet loss scenario, the proposed scheme outperforms the MDTC method.
Journal of Computer Science and Technology
Institute of Computing Technology, Chinese Academy of Sciences
P.O. Box 2704, Beijing 100190 P.R. China
E-mail: jcst@ict.ac.cn
  Copyright ©2015 JCST, All Rights Reserved