Bimonthly    Since 1986
ISSN 1000-9000(Print)
CN 11-2296/TP
Publication Details
Edited by: Editorial Board of Journal of Computer Science and Technology
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Distributed by:
China: All Local Post Offices
Other Countries: Springer
  • Table of Contents
      05 November 2012, Volume 27 Issue 6
    Special Section on Computational Visual Media
    Shi-Min Hu, Ralph R. Martin
    Journal of Computer Science and Technology, 2012, 27 (6): 1091-1091.  DOI: 10.1007/s11390-012-1286-0
    Abstract   PDF(150KB) ( 1411 )   Chinese Summary
    With the rapid development of various technologies from the Internet to mobile phones and cameras, visual data is now widely available in huge quantity and great variety, bringing significant opportunities for novel ways of processing visual information, as well as commercial applications.
    Multi-Scale Salient Features for Analyzing 3D Shapes
    Yong-Liang Yang (杨永亮) and Chao-Hui Shen (沈超慧)
    Journal of Computer Science and Technology, 2012, 27 (6): 1092-1099.  DOI: 10.1007/s11390-012-1287-z
    Abstract   PDF(4713KB) ( 2078 )   Chinese Summary
    Extracting feature regions from mesh models is crucial for shape analysis and understanding, and is widely useful for 3D content-based applications in the graphics and geometry fields. In this paper, we present a new algorithm for extracting multi-scale salient features on meshes, based on robust estimation of curvature at multiple scales. The correspondence between salient features and scales of interest arises naturally: detailed features appear at small scales, while features carrying more global shape information show up at large scales. We demonstrate that this multi-scale description of features accords with human perception and can further be used for applications such as feature classification and viewpoint selection. Experiments show that our method is a helpful multi-scale tool for analyzing 3D shapes.
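    The multi-scale principle the abstract describes (detail at small scales, global structure at large scales) can be illustrated on a 1D height profile as a toy stand-in for mesh curvature. The smoothing scales and the test signal below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def gaussian_smooth(y, sigma):
    # Discrete Gaussian smoothing via convolution with a reflect-padded signal.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    ypad = np.pad(y, radius, mode="reflect")
    return np.convolve(ypad, k, mode="valid")

def multiscale_saliency(y, sigmas=(1.0, 4.0)):
    # Curvature proxy: absolute second difference of the smoothed signal,
    # computed at each analysis scale.
    maps = []
    for s in sigmas:
        ys = gaussian_smooth(y, s)
        maps.append(np.abs(np.gradient(np.gradient(ys))))
    return maps

# A profile with a sharp bump (small-scale feature) sitting on a broad hill.
t = np.linspace(0, 1, 200)
y = np.exp(-((t - 0.5) / 0.2) ** 2) + 0.1 * np.exp(-((t - 0.3) / 0.01) ** 2)
fine, coarse = multiscale_saliency(y)
# The sharp bump dominates the fine-scale map; its dominance over the broad
# hill shrinks at the coarse scale.
```

The fine-scale map peaks at the sharp bump, while the ratio of bump saliency to hill saliency drops at the coarse scale, matching the intuition that large scales favor global shape information.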
    A Multi-Channel Salience Based Detail Exaggeration Technique for 3D Relief Surfaces
    Yong-Wei Miao (缪永伟), Senior Member, CCF, Jie-Qing Feng (冯结青), Senior Member, CCF, Jin-Rong Wang(王金荣), and Renato Pajarola
    Journal of Computer Science and Technology, 2012, 27 (6): 1100-1109.  DOI: 10.1007/s11390-012-1288-y
    Abstract   PDF(8306KB) ( 1530 )   Chinese Summary
    Visual saliency draws the viewer's attention to the fine-scale mesostructure of complex 3D shapes. Building on a multi-channel salience measure and a salience-domain shape modeling technique, we present a novel visual-saliency-based shape depiction scheme that exaggerates salient geometric details of an underlying relief surface. Our multi-channel salience measure combines three feature maps: a 0-order feature map of local height distribution, a 1-order feature map of normal difference, and a 2-order feature map of mean curvature variation. The original relief surface is first manipulated by a salience-domain enhancement function; the detail-exaggeration surface is then obtained by replacing the surface normals of the original surface with the corresponding normals of the manipulated surface. The advantage of our detail exaggeration technique is that it adaptively alters the shading of the original shape to reveal visually salient features while keeping the desired appearance unimpaired. Experimental results demonstrate that our non-photorealistic shading scheme enhances surface mesostructure effectively and thus improves the shape depiction of relief surfaces.
    Connectivity-Based Segmentation for GPU-Accelerated Mesh Decompression
    Jie-Yi Zhao (赵杰伊), Min Tang(唐敏), and Ruo-Feng Tong (童若锋), Member, CCF
    Journal of Computer Science and Technology, 2012, 27 (6): 1110-1118.  DOI: 10.1007/s11390-012-1289-x
    Abstract   PDF(3921KB) ( 2321 )   Chinese Summary
    We present a novel algorithm to partition large 3D meshes for GPU-accelerated decompression. Our formulation focuses on minimizing the vertices replicated between patches, and on balancing the numbers of faces per patch for efficient parallel computing. First we generate a topology model of the original mesh and remove vertex positions. Then we assign patch centers using geodesic farthest point sampling and cluster the faces according to their geodesic distance to the centers. After the segmentation we swap boundary faces to fix jagged boundaries and store the boundary vertices for whole-mesh preservation. The decompression of each patch runs on a GPU thread, and we evaluate performance on various large benchmarks. In practice, the GPU-based decompression algorithm runs more than 48 times faster on an NVIDIA GeForce GTX 580 GPU than a single-core CPU implementation.
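    The segmentation steps mentioned above, farthest point sampling of patch centers followed by distance-based clustering, can be sketched on an edge-weighted graph (a stand-in for the mesh connectivity; Dijkstra distances approximate geodesics, and the path graph used here is only a toy example):

```python
import heapq

def dijkstra(adj, src):
    # Shortest graph distances from src; adj maps vertex -> [(neighbor, weight)].
    dist = {v: float("inf") for v in adj}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def farthest_point_sampling(adj, k, start):
    # Greedily pick each new center as the vertex farthest from all chosen centers.
    centers = [start]
    mind = dijkstra(adj, start)
    while len(centers) < k:
        nxt = max(mind, key=mind.get)
        centers.append(nxt)
        for v, d in dijkstra(adj, nxt).items():
            mind[v] = min(mind[v], d)
    return centers

def cluster(adj, centers):
    # Assign every vertex to its nearest center (ties go to the lower index).
    best = {v: (float("inf"), -1) for v in adj}
    for i, c in enumerate(centers):
        for v, d in dijkstra(adj, c).items():
            if d < best[v][0]:
                best[v] = (d, i)
    return {v: i for v, (_, i) in best.items()}

# A 10-vertex path graph with unit edge weights.
path = {v: [] for v in range(10)}
for v in range(9):
    path[v].append((v + 1, 1.0))
    path[v + 1].append((v, 1.0))
centers = farthest_point_sampling(path, 2, 0)
labels = cluster(path, centers)
```

On the path graph, sampling from vertex 0 picks the opposite endpoint as the second center, and clustering splits the path in half, which is the balance property the paper's segmentation aims for.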
    Affective Image Colorization
    Xiao-Hui Wang(王晓慧), Jia Jia(贾珈), Han-Yu Liao(廖捍宇), and Lian-Hong Cai(蔡莲红)
    Journal of Computer Science and Technology, 2012, 27 (6): 1119-1128.  DOI: 10.1007/s11390-012-1290-4
    Abstract   PDF(7185KB) ( 1768 )   Chinese Summary
    Colorization of gray-scale images has attracted much attention for a long time. An important role of image color is to convey emotions (through color themes). A colorization with an undesired color theme is less useful, even if it is semantically correct; however, this aspect has rarely been considered. Automatic colorization respecting both the semantics and the emotions is undoubtedly a challenge. In this paper, we propose a complete system for affective image colorization. The user only needs to assist object segmentation and to provide text labels and an affective word. First, the text labels, together with other object characteristics, are jointly used to filter Internet images, giving each object a set of semantically correct reference images. Second, we select a set of color themes according to the affective word, based on art theories. With these themes, a generic algorithm selects the best reference for each object, balancing various requirements. Finally, we propose a hybrid texture synthesis approach for colorization. To the best of our knowledge, this is the first system able to efficiently colorize a gray-scale image semantically in an emotionally controllable fashion. Our experiments show the effectiveness of the system, especially its benefit compared with the previous Markov random field (MRF) based method.
    A Customized Framework to Recompress Massive Internet Images
    Shou-Hong Ding(丁守鸿), Fei-Yue Huang(黄飞跃), Zhi-Feng Xie(谢志峰), Yong-Jian Wu(吴永坚), Bin Sheng(盛斌), and Li-Zhuang Ma(马利庄), Member, CCF
    Journal of Computer Science and Technology, 2012, 27 (6): 1129-1139.  DOI: 10.1007/s11390-012-1291-3
    Abstract   PDF(6809KB) ( 1828 )   Chinese Summary
    Massive numbers of Internet images place a heavy burden on device storage capacity and transmission bandwidth. To improve user experience and save costs, many Internet applications therefore focus on achieving appropriate image recompression. In this paper, we propose a novel framework to efficiently customize image recompression for a variety of applications. First, we evaluate the input image's compression level and, using a prior learnt from massive images, predict an initial compression level that is very close to the final output of our system. Then, we iteratively recompress the input image to different levels and measure the perceptual similarity between the input image and each new result with a block-based coding quality method. According to the output of the quality assessment method, the system pipeline either updates the target compression level, switches to subjective evaluation, or returns the final recompression result. We organize subjective evaluations for different applications and obtain corresponding assessment reports. Finally, based on these reports, we set up a series of appropriate parameters for customizing image recompression. Our framework has been successfully applied to many commercial applications, such as web portals, e-commerce, and online games.
    A Novel Approach Towards Large Scale Cross-Media Retrieval
    Bo Lu (逯波), Guo-Ren Wang (王国仁), Member, CCF, ACM, IEEE, and Ye Yuan (袁野)
    Journal of Computer Science and Technology, 2012, 27 (6): 1140-1149.  DOI: 10.1007/s11390-012-1292-2
    Abstract   PDF(3682KB) ( 1541 )   Chinese Summary
    With the rapid development of the Internet and multimedia technology, cross-media retrieval aims to retrieve all related multi-modality media objects upon submission of a query media object. Unfortunately, the complexity and heterogeneity of multi-modality data pose two major challenges for cross-media retrieval: 1) how to construct a unified and compact model for media objects with multiple modalities, and 2) how to improve retrieval performance over large scale cross-media databases. In this paper, we propose a novel method dedicated to solving these issues and achieving effective and accurate cross-media retrieval. Firstly, a multi-modality semantic relationship graph (MSRG) is constructed using the semantic correlation amongst the multi-modality media objects. Secondly, all the media objects in the MSRG are mapped onto an isomorphic semantic space. Further, an efficient index, the MK-tree, based on heterogeneous data distribution is proposed to manage the media objects within the semantic space and improve retrieval performance. Extensive experiments on real large scale cross-media datasets indicate that our proposal dramatically improves the accuracy and efficiency of cross-media retrieval, significantly outperforming existing methods.
    Artificial Intelligence and Pattern Recognition
    Synthesizing Distributed Protocol Specifications from a UML State Machine Modeled Service Specification
    Jehad Al Dallal and Kassem A. Saleh
    Journal of Computer Science and Technology, 2012, 27 (6): 1150-1168.  DOI: 10.1007/s11390-012-1293-1
    Abstract   PDF(558KB) ( 1889 )   Chinese Summary
    The object-oriented paradigm is widely applied in designing and implementing communication systems. Unified Modeling Language (UML) is a standard language used to model the design of object-oriented systems. A protocol state machine is a UML-adopted diagram that is widely used in designing communication protocols. It has two key advantages over traditional finite state machines: modeling concurrency and modeling nested hierarchical states. In a distributed communication system, each entity has its own protocol that defines when and how the entity exchanges messages with the other communicating entities in the system. The order of the exchanged messages must conform to the overall service specification of the system. In object-oriented systems, both the service and the protocol specifications are modeled as UML protocol state machines. Protocol specification synthesis methods must be applied to automatically derive the protocol specification from the service specification; otherwise, a time-consuming process of design, analysis, and error detection and correction has to be applied iteratively until the design of the protocol becomes error-free and consistent with the service specification. Several synthesis methods have been proposed in the literature for models other than UML protocol state machines; because of the unique features of protocol state machines, these methods are inapplicable to services modeled as UML protocol state machines. In this paper, we propose a synthesis method that automatically synthesizes the protocol specification of distributed protocol entities from the service specification, given that both types of specifications are modeled as UML protocol state machines. Our method is based on the latest UML version (UML 2.3), and it is proven to synthesize protocol specifications that are syntactically and semantically correct. As an example application, the synthesis method is used to derive the protocol specification of the H.323 standard used in Internet calls.
    Hierarchical Structures on Multigranulation Spaces
    Xi-Bei Yang(杨习贝), Yu-Hua Qian(钱宇华), Member, CCF, IEEE, and Jing-Yu Yang(杨静宇), Member, CCF, IEEE
    Journal of Computer Science and Technology, 2012, 27 (6): 1169-1183.  DOI: 10.1007/s11390-012-1294-0
    Abstract   PDF(460KB) ( 1693 )   Chinese Summary
    Though many hierarchical structures have been proposed to analyze the finer or coarser relationships between two granulation spaces, these structures can only be used to compare single-granulation spaces. However, the concept of multigranulation plays a fundamental role in the development of granular computing, so comparing two multigranulation spaces has become a necessity. To solve this problem, two types of multigranulation spaces are considered: the partition-based multigranulation space and the covering-based multigranulation space. Three different hierarchical structures are then proposed on these two multigranulation spaces, respectively. Not only are the properties of these hierarchical structures discussed, but their relationships with multigranulation rough sets are also investigated in depth. It is shown that the first hierarchical structure is consistent with the monotonic varieties of the optimistic multigranulation rough set, the second is consistent with the monotonic varieties of the pessimistic multigranulation rough set, and the third is consistent with the monotonic varieties of both.
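    The finer/coarser relation underlying such hierarchical structures is easy to state for partition-based granulation spaces: partition P is finer than Q when every block of P is contained in some block of Q. A minimal sketch (the example universe {1, 2, 3, 4} is illustrative):

```python
def is_finer(P, Q):
    # P is finer than Q iff every block of P is a subset of some block of Q.
    return all(any(p <= q for q in Q) for p in P)

P = [{1}, {2}, {3, 4}]   # finer partition of {1, 2, 3, 4}
Q = [{1, 2}, {3, 4}]     # coarser partition of the same universe
```

The relation is reflexive and antisymmetric up to equality of partitions, which is what makes the hierarchical (lattice-like) comparison of granulation spaces possible.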
    Automatic Prosodic Break Detection and Feature Analysis
    Chong-Jia Ni(倪崇嘉), Ai-Ying Zhang(张爱英), Wen-Ju Liu(刘文举), and Bo Xu(徐波)
    Journal of Computer Science and Technology, 2012, 27 (6): 1184-1196.  DOI: 10.1007/s11390-012-1295-z
    Abstract   PDF(727KB) ( 2681 )   Chinese Summary
    Automatic prosodic break detection and annotation are important for both speech understanding and natural speech synthesis. In this paper, we discuss automatic prosodic break detection and feature analysis. The paper makes contributions in two respects. One is that we use a classifier combination method to detect Mandarin and English prosodic breaks using acoustic, lexical, and syntactic evidence. Our proposed method achieves better performance on both the Mandarin prosodic annotation corpus — the Annotated Speech Corpus of Chinese Discourse — and the English prosodic annotation corpus — the Boston University Radio News Corpus — when compared with the baseline system and with experimental results reported by other researchers. The other is feature analysis for prosodic break detection: the roles of different features, such as duration, pitch, energy, and intensity, are analyzed and compared for Mandarin and English prosodic break detection. Based on the feature analysis, we also verify some linguistic conclusions.
    Data Structures in Multi-Objective Evolutionary Algorithms
    Najwa Altwaijry and Mohamed El Bachir Menai
    Journal of Computer Science and Technology, 2012, 27 (6): 1197-1210.  DOI: 10.1007/s11390-012-1296-y
    Abstract   PDF(1946KB) ( 1659 )   Chinese Summary
    Data structures used for an algorithm can have a great impact on its performance, particularly for the solution of large and complex problems, such as multi-objective optimization problems (MOPs). Multi-objective evolutionary algorithms (MOEAs) are considered an attractive approach for solving MOPs, since they are able to explore several parts of the Pareto front simultaneously. The data structures for storing and updating populations and non-dominated solutions (archives) may affect the efficiency of the search process. This article comparatively describes data structures used in MOEAs for realizing populations and archives, emphasizing their computational requirements and the general applicability reported in the original work.
    Social Network-Aware Interfaces as Facilitators of Innovation
    Elena Garcia-Barriocanal, Miguel-Angel Sicilia, and Salvador Sánchez-Alonso
    Journal of Computer Science and Technology, 2012, 27 (6): 1211-1221.  DOI: 10.1007/s11390-012-1297-x
    Abstract   PDF(733KB) ( 1567 )   Chinese Summary
    KnoE: A Web Mining Tool to Validate Previously Discovered Semantic Correspondences
    Jorge Martinez-Gil, Member, ACM, and José F. Aldana-Montes
    Journal of Computer Science and Technology, 2012, 27 (6): 1222-1232.  DOI: 10.1007/s11390-012-1298-9
    Abstract   PDF(878KB) ( 1558 )   Chinese Summary
    The problem of matching schemas or ontologies consists of finding corresponding entities in two or more knowledge models that belong to the same domain but have been developed separately. Although many techniques and tools now address this problem, the complex nature of matching makes existing solutions for real situations not fully satisfactory. The Google Similarity Distance has appeared recently; its purpose is to mine knowledge from the Web using the Google search engine in order to semantically compare text expressions. Our work consists of developing a software application for validating results discovered by schema and ontology matching tools using the philosophy behind this distance. Moreover, we are interested in using not only Google but also other popular search engines with this similarity distance. The results reveal three main facts. Firstly, some web search engines can help us to validate semantic correspondences satisfactorily. Secondly, there are significant differences among the web search engines. Thirdly, the best results are obtained with combinations of the web search engines that we have studied.
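    The distance this tool builds on, the Normalized Google Distance of Cilibrasi and Vitányi, is computed from search-engine hit counts. A sketch with made-up hit counts (the counts and the index size N below are illustrative assumptions, not real search-engine data):

```python
from math import log

def ngd(fx, fy, fxy, n):
    # Normalized Google Distance from raw hit counts:
    #   NGD(x, y) = (max(log fx, log fy) - log fxy) / (log n - min(log fx, log fy))
    # fx, fy: hits for each term alone; fxy: hits for both terms; n: index size.
    lx, ly, lxy = log(fx), log(fy), log(fxy)
    return (max(lx, ly) - lxy) / (log(n) - min(lx, ly))

N = 1e10                            # assumed number of indexed pages
related = ngd(1e6, 8e5, 4e5, N)     # terms that often co-occur
unrelated = ngd(1e6, 8e5, 2e2, N)   # terms that rarely co-occur
```

Terms that co-occur frequently get a small distance and terms that rarely co-occur get a large one, which is what lets hit counts serve as evidence for or against a candidate semantic correspondence.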
    A Multi-Threaded Semantic Focused Crawler
    Punam Bedi, Member, ACM, Senior Member, IEEE, Anjali Thukral, Hema Banati, Abhishek Behl, and Varun Mendiratta
    Journal of Computer Science and Technology, 2012, 27 (6): 1233-1242.  DOI: 10.1007/s11390-012-1299-8
    Abstract   PDF(1440KB) ( 2447 )   Chinese Summary
    Heart Rate Extraction from Vowel Speech Signals
    Abdelwadood Mesleh, Dmitriy Skopin, Sergey Baglikov, and Anas Quteishat
    Journal of Computer Science and Technology, 2012, 27 (6): 1243-1251.  DOI: 10.1007/s11390-012-1300-6
    Abstract   PDF(3778KB) ( 2124 )   Chinese Summary
    JacUOD: A New Similarity Measurement for Collaborative Filtering
    Hui-Feng Sun, Jun-Liang Chen, Gang Yu, Chuan-Chang Liu, Yong Peng, Guang Chen, and Bo Cheng
    Journal of Computer Science and Technology, 2012, 27 (6): 1252-1260.  DOI: 10.1007/s11390-012-1301-5
    Abstract   PDF(662KB) ( 4045 )   Chinese Summary
    Collaborative filtering (CF) has been widely applied in recommender systems, since it can help users discover their favorite items. Similarity measurement, which measures the similarity between two users or items, is critical to CF. However, traditional similarity measurement approaches for memory-based CF leave substantial room for improvement. In this paper, we propose a novel similarity measurement, named Jaccard Uniform Operator Distance (JacUOD), to measure similarity effectively. Our JacUOD approach aims at unifying similarity comparison for vectors in different multidimensional vector spaces. Compared with traditional similarity measurement approaches, JacUOD properly handles the dimension-number difference between vector spaces. We conduct experiments on the well-known MovieLens datasets, taking user-based CF as an example to show the effectiveness of our approach. The experimental results show that JacUOD achieves better prediction accuracy than traditional similarity measurement approaches.
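    As a point of reference for what JacUOD improves on, here is a toy user-based CF predictor using the traditional cosine similarity over co-rated items. The rating matrix is made up, and this sketches the baseline family of approaches, not JacUOD itself:

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity restricted to items both users have rated (rating > 0).
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    va, vb = a[mask], b[mask]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

def predict(ratings, user, item, k=2):
    # Similarity-weighted average over the k most similar users who rated the item.
    sims = [(cosine_sim(ratings[user], ratings[u]), u)
            for u in range(len(ratings)) if u != user and ratings[u, item] > 0]
    top = [t for t in sorted(sims, reverse=True)[:k] if t[0] > 0]
    if not top:
        return 0.0
    return sum(s * ratings[u, item] for s, u in top) / sum(s for s, _ in top)

ratings = np.array([[5, 3, 0, 1],
                    [4, 0, 0, 1],
                    [1, 1, 0, 5],
                    [1, 0, 0, 4]], dtype=float)
pred = predict(ratings, user=0, item=3)
```

Restricting the comparison to co-rated items is exactly where dimension-number differences between users' rating vectors arise, the issue the JacUOD measurement is designed to handle.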
    Local Community Detection Using Link Similarity
    Ying-Jun Wu(吴英骏), Han Huang(黄翰), Member, CCF, ACM, IEEE, Zhi-Feng Hao(郝志峰), and Feng Chen(陈丰)
    Journal of Computer Science and Technology, 2012, 27 (6): 1261-1268.  DOI: 10.1007/s11390-012-1302-4
    Abstract   PDF(877KB) ( 2367 )   Chinese Summary
    Exploring local community structure is an appealing problem that has drawn much recent attention in the area of social network analysis. As complete information about a network is often difficult to obtain, for networks such as web pages, research papers, or Facebook users, community structure can only be detected from a given source vertex with limited knowledge of the entire graph. Existing approaches measure community quality well, but they depend heavily on the source vertex, apply overly strict policies when agglomerating new vertices, and rely on predefined parameters that are difficult to obtain. This paper proposes a method that finds local community structure by analyzing the link similarity between the community and a candidate vertex. Inspired by the fact that elements of the same community are likely to share common links, we explore community structure heuristically by giving priority to vertices with high link similarity to the community. A three-phase process is also used to improve the quality of the resulting community structure. Experimental results show that our method performs effectively not only on computer-generated graphs but also on real-world graphs.
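    The core heuristic, absorbing the neighbor whose links overlap most with the community, can be sketched as greedy expansion driven by Jaccard link similarity. The stopping threshold and the toy graph are illustrative assumptions, not the paper's exact criterion:

```python
def jaccard(a, b):
    # Jaccard similarity of two vertex sets.
    return len(a & b) / len(a | b) if a | b else 0.0

def local_community(adj, seed, threshold=0.4):
    # Greedily absorb the frontier vertex whose closed neighborhood best
    # overlaps the community's link set; stop when similarity drops too low.
    comm = {seed}
    frontier = set(adj[seed])
    while frontier:
        comm_links = comm | set().union(*(adj[v] for v in comm))
        score = lambda v: jaccard(set(adj[v]) | {v}, comm_links)
        best = max(frontier, key=score)
        if score(best) < threshold:
            break
        comm.add(best)
        frontier = (frontier | set(adj[best])) - comm
    return comm

# Two triangles {0,1,2} and {3,4,5} joined by the single edge 2-3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
```

Seeded anywhere inside one triangle, the expansion absorbs the rest of that triangle but rejects the bridge vertex, whose links overlap the community too little, so the detected community stops at the natural boundary.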
    Graphics, Visualization, and Image Processing
    Hybrid Parallel Bundle Adjustment for 3D Scene Reconstruction with Massive Points
    Xin Liu (刘鑫), Wei Gao (高伟), and Zhan-Yi Hu (胡占义)
    Journal of Computer Science and Technology, 2012, 27 (6): 1269-1280.  DOI: 10.1007/s11390-012-1303-3
    Abstract   PDF(10254KB) ( 1539 )   Chinese Summary
    Bundle adjustment (BA) is a crucial but time-consuming step in 3D reconstruction. In this paper, we tackle a special class of BA problems, called Massive-Points BA (MPBA) problems, in which the reconstructed 3D points greatly outnumber the camera parameters. This is often the case when high-resolution images are used. We present the design and implementation of a new bundle adjustment algorithm for efficiently solving MPBA problems. The use of hardware parallelism, on multi-core CPUs as well as GPUs, is explored, and by careful memory-usage design the graphics-memory limitation is effectively alleviated. Several modern acceleration strategies for bundle adjustment, such as mixed-precision arithmetic, embedded point iteration, and preconditioned conjugate gradients, are explored and compared. Using several high-resolution image datasets, we generate a variety of MPBA problems with which the performance of five bundle adjustment algorithms is evaluated. The experimental results show that our algorithm is up to 40 times faster than classical Sparse Bundle Adjustment, while maintaining comparable precision.
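    One of the acceleration strategies compared, preconditioned conjugate gradients, can be sketched with the simplest (Jacobi, i.e., diagonal) preconditioner on a small symmetric positive-definite system; BA solvers typically apply the same iteration to the much larger normal equations with more elaborate block preconditioners:

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=100):
    # Jacobi-preconditioned conjugate gradients for a symmetric
    # positive-definite matrix A: solves A x = b iteratively.
    M_inv = 1.0 / np.diag(A)          # inverse of the diagonal preconditioner
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    z = M_inv * r                     # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # new search direction
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b)
```

The appeal for MPBA problems is that each iteration needs only matrix-vector products, so the huge point blocks never have to be factorized directly.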
    Fast Image Correspondence with Global Structure Projection
    Qing-Liang Lin(林庆樑), Bin Sheng(盛斌), Yang Shen(沈洋), Zhi-Feng Xie(谢志峰), Zhi-Hua Chen(陈志华), and Li-Zhuang Ma(马利庄), Member, CCF
    Journal of Computer Science and Technology, 2012, 27 (6): 1281-1288.  DOI: 10.1007/s11390-012-1304-2
    Abstract   PDF(10007KB) ( 1386 )   Chinese Summary
    This paper presents a method for recognizing images containing flat objects, based on global keypoint structure correspondence. The technique works in two steps: reference keypoint selection and structure projection. The use of a global keypoint structure extends the orderless bag-of-features image representation, which the proposed matching technique exploits for computational efficiency. Our method excels on datasets of images containing "flat objects" such as CD covers, books, and newspapers. Its efficiency and accuracy have been tested on a database of natural pictures containing both flat objects and other kinds of objects, and the results show that our method works well in both cases.
    Machine Learning and Data Mining
    A Kernel Approach to Multi-Task Learning with Task-Specific Kernels
    Wei Wu(武威), Hang Li(李航), Member, CCF, ACM, IEEE, Yun-Hua Hu(胡云华), Member, ACM, and Rong Jin(金榕), Member, IEEE
    Journal of Computer Science and Technology, 2012, 27 (6): 1289-1301.  DOI: 10.1007/s11390-012-1305-1
    Abstract   PDF(1292KB) ( 1803 )   Chinese Summary
    Several kernel-based methods for multi-task learning have been proposed, which leverage relations among tasks as regularization to enhance the overall learning accuracy. These methods assume that the tasks share the same kernel, which could limit their applications because in practice different tasks may need different kernels. The main challenge of introducing multiple kernels into multiple tasks is that models from different reproducing kernel Hilbert spaces (RKHSs) are not comparable, making it difficult to exploit relations among tasks. This paper addresses the challenge by formalizing the problem in the square integrable space (SIS). Specifically, it proposes a kernel-based method which makes use of a regularization term defined in SIS to represent task relations. We prove a new representer theorem for the proposed approach in SIS, derive a practical method for solving the learning problem, and conduct a consistency analysis of the method. We discuss the relationship between our method and an existing method, and give an SVM (support vector machine)-based implementation of our method for multi-label classification. Experiments on an artificial example and two real-world datasets show that the proposed method outperforms the existing method.
    A Unified Active Learning Framework for Biomedical Relation Extraction
    Hong-Tao Zhang (张宏涛), Min-Lie Huang (黄民烈), and Xiao-Yan Zhu (朱小燕), Member, CCF
    Journal of Computer Science and Technology, 2012, 27 (6): 1302-1313.  DOI: 10.1007/s11390-012-1306-0
    Abstract   PDF(744KB) ( 2332 )   Chinese Summary
    Supervised machine learning methods have been employed with great success in the task of biomedical relation extraction. However, existing methods are not practical enough, since manual construction of large training data is very expensive. Therefore, active learning is urgently needed for designing practical relation extraction methods with little human effort. In this paper, we describe a unified active learning framework. In particular, our framework systematically addresses several practical issues in the active learning process, including a strategy for selecting informative data, a data diversity selection algorithm, an active feature acquisition method, and an informative feature selection algorithm, in order to meet the challenges posed by the immense amount of complex and diverse biomedical text. The framework is evaluated on protein-protein interaction (PPI) extraction and is shown to achieve promising results with a significant reduction in editorial effort and labeling time.
Journal of Computer Science and Technology
Institute of Computing Technology, Chinese Academy of Sciences
P.O. Box 2704, Beijing 100190 P.R. China
E-mail: jcst@ict.ac.cn