Bimonthly    Since 1986
ISSN 1000-9000(Print)
CN 11-2296/TP
Publication Details
Edited by: Editorial Board of Journal of Computer Science and Technology
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Distributed by:
China: All Local Post Offices
Other Countries: Springer
  • Table of Contents
      01 January 2011, Volume 26 Issue 1
    Special Section on Natural Language Processing
    Journal of Computer Science and Technology, 2011, 26 (1): 1-2.  DOI: 10.1007/s11390-011-1105-z
      Natural Language Processing (NLP) is a field of computer science and linguistics concerned with the interactions between computers and human (natural) languages. There have been significant successes in this area over the past decades, which suggest that NLP is, and will continue to be, a major area of computer science and information technology.
      The goal of this special section is to present high-quality contributions that explicate the reasoning involved in different areas of NLP at both theoretical and practical levels. The special section received an enthusiastic response: we received 55 submissions in total. After careful review, we accepted 8 papers of high technical quality, covering a wide range of topics that reflect new trends in NLP.
      The paper "A New Multiword Expression Metric and Its Applications" by Fan Bu et al. proposes a knowledge-free, unsupervised, and language-independent Multiword Expression Distance (MED) to measure the distance from an n-gram to its semantics and applies it to two NLP applications.
      The paper "Chinese New Word Identification: A Latent Discriminative Model with Global Features" by Xiao Sun et al. presents a piece of work that makes use of the Latent Dynamic CRF and semi-CRF model for Chinese new word detection and POS tagging as a combined task.
      The paper "Multi-Domain Sentiment Classification with Classifier Combination" by Shou-Shan Li et al. proposes a multiple classifier combination approach for the issue of multi-domain sentiment classification. They first train single domain classifiers separately with domain specific data and then combine the classifiers for the final decision.
      The paper "Learning Noun Phrase Anaphoricity in Coreference Resolution via Label Propagation" by Guo-Dong Zhou and Fang Kong introduces a method that incorporates a label-propagation algorithm into the task of noun phrase anaphoricity determination.
      The paper "Kernel-Based Semantic Relation Detection and Classification via Enriched Parse Tree Structure" by Guo-Dong Zhou and Qiao-Ming Zhu proposes a new kernel-based method for semantic relation detection and classification by making the convolution tree kernel sensitive to context and adding latent semantic information to the parse tree.
      The paper "Improvement of Machine Translation Evaluation by Simpler Linguistically Motivated Features" by Mu-Yun Yang et al. presents a machine translation evaluation metric using features involving POS tags and parser analyses in the framework of regression SVM.
      The paper "Using Syntactic-Based Kernels for Classifying Temporal Relations" by Seyed Abolghasem Mirroshandel et al. proposes a number of novel kernels which extend the tree kernel to handle information about events and times in a sentence for the task of temporal relation classification.
      The paper "Transfer Learning via Multi-View Principal Component Analysis" by Yang-Sheng Ji et al. aims at addressing the shortcomings of existing transfer learning approaches and, by treating the common features in the source and target domains as two separate views, presents a novel multi-view PCA algorithm to learn the latent representations of these two views.
      We believe this special issue will help encourage the NLP community to address the challenges in NLP and think about the problem from a broader point of view. We hope you find this special issue well worth the effort.
    We thank all the authors who submitted papers for their contributions and our dedicated reviewers for their professional reviewing services. We are grateful to Mr. Rui Xia for his great help with the review process. The guest editors sincerely hope that the readers will enjoy reading this special section and greatly benefit from the works.
    A New Multiword Expression Metric and Its Applications
    Fan Bu(布凡), Xiao-Yan Zhu(朱小燕), Member, CCF, and Ming Li(李明), Fellow, ACM, IEEE
    Journal of Computer Science and Technology, 2011, 26 (1): 3-13.  DOI: 10.1007/s11390-011-1106-y
    Multiword Expressions (MWEs) appear frequently and ungrammatically in natural languages. Identifying MWEs in free texts is a very challenging problem. This paper proposes a knowledge-free, unsupervised, and language-independent Multiword Expression Distance (MED). The new metric is derived from an accepted physical principle, measures the distance from an n-gram to its semantics, and outperforms other state-of-the-art methods on MWEs in two applications: question answering and named entity extraction.
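The MED itself is defined in the paper via an information-distance principle; as a rough, hypothetical illustration of the general idea behind frequency-based multiword scoring (not the paper's actual formula), a PMI-style cohesion score can be computed from n-gram counts:

```python
import math

# Toy corpus frequencies (hypothetical counts, for illustration only).
unigram = {"hot": 50, "dog": 40, "red": 60}
bigram = {("hot", "dog"): 30, ("red", "dog"): 2}
total = 1000  # total tokens in the toy corpus

def cohesion(w1, w2):
    """PMI-style cohesion: high when the bigram occurs far more often
    than chance predicts from its parts. Negating it yields a
    distance-like score, loosely mirroring the idea of measuring how
    far an n-gram is from behaving as a single semantic unit."""
    p1 = unigram[w1] / total
    p2 = unigram[w2] / total
    p12 = bigram[(w1, w2)] / total
    return math.log(p12 / (p1 * p2))

# "hot dog" should look far more like a multiword expression than "red dog".
assert cohesion("hot", "dog") > cohesion("red", "dog")
```

In the paper the score is grounded in information distance and estimated without any such toy counts; the sketch only shows why relative frequencies can separate true MWEs from free combinations.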
    Chinese New Word Identification: A Latent Discriminative Model with Global Features
    Xiao Sun, De-Gen Huang, Senior Member, CCF, Hai-Yu Song, and Fu-Ji Ren, Member, IEEE
    Journal of Computer Science and Technology, 2011, 26 (1): 14-24.  DOI: 10.1007/s11390-011-1107-x
    Chinese new words are particularly problematic in Chinese natural language processing. With the fast development of the Internet and the explosion of information, it is impossible to maintain a complete system lexicon for Chinese natural language processing applications, as new words outside the dictionary are constantly being created. New word identification and POS tagging are usually performed as separate procedures, so lexical features cannot be fully exploited. A latent discriminative model, which combines the strengths of the Latent Dynamic Conditional Random Field (LDCRF) and the semi-CRF, is proposed to detect new words of any type, together with their POS tags, from Chinese text that has not been pre-segmented. Unlike the semi-CRF, the proposed latent discriminative model applies the LDCRF to generate candidate entities, which accelerates training and decreases computational cost. The complexity of the proposed hidden semi-CRF can be further adjusted by tuning the number of hidden variables and the number of candidate entities taken from the N-best outputs of the LDCRF model. A new-word-generating framework is proposed for model training and testing, under which the definitions and distributions of new words conform to those in real text. A global feature, called the "Global Fragment Feature", is adopted for new word identification. We tested our model on the corpus from SIGHAN-6. Experimental results show that the proposed method can detect even low-frequency new words together with their POS tags with satisfactory results, and that it performs competitively with state-of-the-art models.
    Multi-Domain Sentiment Classification with Classifier Combination
    Shou-Shan Li(李寿山), Chu-Ren Huang(黄居仁), and Cheng-Qing Zong(宗成庆)
    Journal of Computer Science and Technology, 2011, 26 (1): 25-33.  DOI: 10.1007/s11390-011-1108-9

    State-of-the-art studies on sentiment classification are typically domain-dependent and domain-restricted. In this paper, we aim to reduce domain dependency and improve overall performance simultaneously by proposing an efficient multi-domain sentiment classification algorithm. Our method employs multiple classifier combination: we first train single-domain classifiers separately on domain-specific data, and then combine the classifiers for the final decision. Our experiments show that this approach performs much better than both the single-domain classification approach (using the training data individually) and the mixed-domain classification approach (simply combining all the training data). In particular, classifier combination with the weighted sum rule obtains an average error reduction of 27.6% over single-domain classification.
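The weighted sum rule described above can be sketched as follows; the classifier outputs and weights are hypothetical stand-ins (the paper trains real single-domain classifiers on domain-specific data):

```python
# Sketch of the classifier-combination idea: per-domain classifiers each
# output class probabilities, and a weighted sum picks the final label.

def combine(prob_dicts, weights):
    """Weighted-sum rule over per-classifier probability distributions."""
    classes = prob_dicts[0].keys()
    scores = {c: sum(w * p[c] for w, p in zip(weights, prob_dicts))
              for c in classes}
    return max(scores, key=scores.get)

# Three hypothetical domain classifiers scoring one test document.
outputs = [
    {"pos": 0.9, "neg": 0.1},   # e.g., a books-domain classifier
    {"pos": 0.4, "neg": 0.6},   # e.g., a DVD-domain classifier
    {"pos": 0.7, "neg": 0.3},   # e.g., an electronics-domain classifier
]
weights = [0.5, 0.2, 0.3]       # higher weight for more relevant domains

assert combine(outputs, weights) == "pos"
```

How the weights are chosen is the crux of the method; the fixed values here are only for illustration.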

    Learning Noun Phrase Anaphoricity in Coreference Resolution via Label Propagation
    Guo-Dong Zhou (周国栋), Senior Member, CCF, Member, ACM, IEEE, and Fang Kong (孔芳), Member, CCF
    Journal of Computer Science and Technology, 2011, 26 (1): 34-44.  DOI: 10.1007/s11390-011-1109-8

    Knowledge of noun phrase anaphoricity might be profitably exploited in coreference resolution to bypass the resolution of non-anaphoric noun phrases. Surprisingly, however, recent attempts to incorporate automatically acquired anaphoricity information into coreference resolution systems have fallen far short of expectations. This paper proposes a global learning method for determining the anaphoricity of noun phrases via a label propagation algorithm, to improve learning-based coreference resolution. To reduce the heavy computational burden of the label propagation algorithm, we employ the weighted support vectors as critical instances representing all the anaphoricity-labeled NP instances in the training texts. In addition, two kinds of kernels, i.e., the feature-based RBF (Radial Basis Function) kernel and the convolution tree kernel with approximate matching, are explored to compute the anaphoricity similarity between two noun phrases. Experiments on the ACE2003 corpus demonstrate the great effectiveness of our method in anaphoricity determination of noun phrases and its application in learning-based coreference resolution.
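A minimal sketch of the label-propagation building block, on a tiny hand-made similarity graph (toy data, not the paper's kernels or the ACE2003 setup): labeled seed instances spread their anaphoricity labels to unlabeled instances through edge weights.

```python
# Minimal label propagation: labeled nodes are clamped, unlabeled nodes
# repeatedly take the similarity-weighted average of their neighbors.

def label_propagation(W, labels, n_iter=50):
    """W: symmetric similarity matrix; labels: {node: +1/-1} seeds.
    Returns a score per node; the sign gives the predicted label."""
    n = len(W)
    f = [labels.get(i, 0.0) for i in range(n)]
    for _ in range(n_iter):
        g = []
        for i in range(n):
            if i in labels:            # clamp labeled instances
                g.append(labels[i])
            else:                      # average of weighted neighbors
                s = sum(W[i][j] * f[j] for j in range(n))
                z = sum(W[i][j] for j in range(n))
                g.append(s / z if z else 0.0)
        f = g
    return f

# Node 0 labeled anaphoric (+1), node 3 non-anaphoric (-1); 1 and 2 unlabeled.
W = [[0, 1, 0, 0],
     [1, 0, 0.2, 0],
     [0, 0.2, 0, 1],
     [0, 0, 1, 0]]
scores = label_propagation(W, {0: 1.0, 3: -1.0})
assert scores[1] > 0 and scores[2] < 0   # each follows its nearest seed
```

In the paper the similarity matrix comes from the RBF or convolution tree kernels, and weighted support vectors stand in for the full training set to keep this iteration tractable.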

    Kernel-Based Semantic Relation Detection and Classification via Enriched Parse Tree Structure
    Guo-Dong Zhou (周国栋), Senior Member, CCF, Member, ACM, IEEE and Qiao-Ming Zhu (朱巧明), Senior Member, CCF
    Journal of Computer Science and Technology, 2011, 26 (1): 45-56.  DOI: 10.1007/s11390-011-1110-2

    This paper proposes a tree kernel method for semantic relation detection and classification (RDC) between named entities. It resolves two critical problems in previous tree kernel methods for RDC. First, a new tree kernel is presented to better capture the inherent structural information in a parse tree by extending the standard convolution tree kernel with context-sensitiveness and approximate matching of sub-trees. Second, an enriched parse tree structure is proposed to reliably derive the necessary structural information, e.g., proper latent annotations, from a parse tree. Evaluation on the ACE RDC corpora shows that both the new tree kernel and the enriched parse tree structure contribute significantly to RDC, and that our tree kernel method significantly outperforms the state-of-the-art ones.

    Improvement of Machine Translation Evaluation by Simple Linguistically Motivated Features
    Mu-Yun Yang (杨沐昀), Member, CCF, IEEE, Shu-Qi Sun (孙叔琦), Jun-Guo Zhu (朱俊国), Sheng Li (李生), Tie-Jun Zhao (赵铁军), Senior Member, CCF, Member, IEEE, and Xiao-Ning Zhu (朱晓宁)
    Journal of Computer Science and Technology, 2011, 26 (1): 57-67.  DOI: 10.1007/s11390-011-1111-1

    Adopting the regression SVM framework, this paper proposes a linguistically motivated feature engineering strategy to develop an MT evaluation metric with better correlation with human assessments. In contrast to the current practice of "greedy" combination of all available features, six features are suggested according to human intuition about translation quality. The contribution of the linguistic features is then examined and analyzed via a hill-climbing strategy. Experiments indicate that, compared with either the SVM-ranking model or previous attempts at exhaustive linguistic features, the regression SVM model with six linguistically motivated features generalizes better across different datasets, and that augmenting these linguistic features with proper non-linguistic metrics achieves additional improvements.

    Using Syntactic-Based Kernels for Classifying Temporal Relations
    Seyed Abolghasem Mirroshandel, Gholamreza Ghassem-Sani, and Mahdy Khayyamian
    Journal of Computer Science and Technology, 2011, 26 (1): 68-80.  DOI: 10.1007/s11390-011-1112-0

    Temporal relation classification is one of the demanding contemporary tasks of natural language processing. It can be used in various applications such as question answering, summarization, and language-specific information retrieval. In this paper, we propose an improved algorithm for classifying temporal relations, between events or between events and times, using support vector machines (SVM). Along with gold-standard corpus features, the proposed method aims at exploiting useful automatically generated syntactic features to improve classification accuracy. Accordingly, a number of novel kernel functions are introduced and evaluated. Our evaluations clearly demonstrate that adding syntactic features results in a considerable improvement over the state-of-the-art method for classifying temporal relations.

    Transfer Learning via Multi-View Principal Component Analysis
    Yang-Sheng Ji (吉阳生), Jia-Jun Chen (陈家骏), Member, CCF, Gang Niu (牛罡), Lin Shang (商琳), Member, CCF, and Xin-Yu Dai (戴新宇), Member, CCF
    Journal of Computer Science and Technology, 2011, 26 (1): 81-98.  DOI: 10.1007/s11390-011-1113-z

    Transfer learning aims at leveraging the knowledge in labeled source domains to predict the unlabeled data in a target domain, where the distributions differ across domains. Among the various methods for transfer learning, one class of algorithms focuses on the correspondence between bridge features and all the other domain-specific features, and then conducts transfer learning via this single-view correspondence. However, the single-view correspondence may prevent these algorithms from further improvement due to incorrect correlation discovery. To tackle this problem, we propose a new method for transfer learning from a multi-view correspondence perspective, called the Multi-View Principal Component Analysis (MVPCA) approach. MVPCA discovers the correspondence between bridge features, which are representative across all domains, and the specific features of each domain, and conducts transfer learning by dimensionality reduction in a multi-view way, which can better depict the knowledge transfer. Experiments show that MVPCA can significantly reduce the cross-domain prediction error of a baseline non-transfer method. With multi-view correspondence information incorporated into a single-view transfer learning method, MVPCA can further improve the performance of one state-of-the-art single-view method.
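The per-view dimensionality-reduction step underlying MVPCA is ordinary PCA; the sketch below shows that building block alone, via power iteration on toy data (the two-view wiring between bridge and domain-specific features is the paper's contribution and is not reproduced here):

```python
# Dominant principal direction of a centered dataset by power iteration
# on the scatter matrix, computed without any linear-algebra library.

def first_component(X, n_iter=100):
    """X: list of rows. Returns the unit-norm first principal direction."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]
    v = [1.0] * d
    for _ in range(n_iter):
        # w = (Xc^T Xc) v, computed as Xc^T (Xc v) to avoid forming Xc^T Xc
        proj = [sum(r[j] * v[j] for j in range(d)) for r in Xc]
        w = [sum(Xc[i][j] * proj[i] for i in range(n)) for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Points spread mostly along the x-axis: the first component is ~(±1, 0).
X = [[0.0, 0.1], [1.0, -0.1], [2.0, 0.05], [3.0, -0.05], [4.0, 0.0]]
v = first_component(X)
assert abs(abs(v[0]) - 1.0) < 0.05 and abs(v[1]) < 0.05
```

MVPCA runs this kind of reduction jointly over the bridge-feature view and each specific-feature view so that the latent representations stay in correspondence.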

    Distributed Computing and Systems
    On/Off-Line Prediction Applied to Job Scheduling on Non-Dedicated NOWs
    Mauricio Hanzich, Porfidio Hernández, Francesc Giné, Francesc Solsona, and Josep L. Lérida
    Journal of Computer Science and Technology, 2011, 26 (1): 99-116.  DOI: 10.1007/s11390-011-1114-y

    This paper proposes a prediction engine designed for non-dedicated clusters, which is able to estimate the turnaround time of parallel applications, even in the presence of the serial workload of the workstation owner. The prediction engine can be configured to work with three different estimation kernels: a Historical kernel, a Simulation kernel based on analytical models, and an integration of both, named the Hybrid kernel. These estimation proposals were integrated into a scheduling system, named CISNE, which can be executed in an on-line or off-line mode. The accuracy of the proposed estimation methods was evaluated under different job scheduling policies in both a real and a simulated cluster environment. In both environments, we observed that the Hybrid system gives the best results because it combines the ability of a simulation engine to capture the dynamism of a non-dedicated environment with the accuracy of the historical methods in estimating the application runtime from the state of the resources.
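The three estimation styles can be caricatured in a few lines. All numbers and formulas below are hypothetical illustrations of the general idea, not CISNE's actual kernels: a historical kernel averages past runtimes of similar jobs, an analytical kernel scales a base runtime by the owner's serial load, and a hybrid blends the two.

```python
def historical_estimate(past_runtimes):
    """History-based kernel: mean runtime of previously observed similar jobs."""
    return sum(past_runtimes) / len(past_runtimes)

def analytical_estimate(base_runtime, owner_load):
    """Toy analytical model: the parallel job slows down as the
    workstation owner's serial load grows (owner_load in [0, 1))."""
    return base_runtime / (1.0 - owner_load)

def hybrid_estimate(past_runtimes, base_runtime, owner_load, alpha=0.5):
    """Blend of history and model; alpha weights the historical part."""
    h = historical_estimate(past_runtimes)
    a = analytical_estimate(base_runtime, owner_load)
    return alpha * h + (1 - alpha) * a

past = [100.0, 120.0, 110.0]          # seconds, previous similar runs
estimate = hybrid_estimate(past, base_runtime=90.0, owner_load=0.25)
assert 100.0 < estimate < 120.0       # between history (110) and model (120)
```

The actual Hybrid kernel drives a simulation of the non-dedicated cluster rather than a closed-form slowdown factor; the point of the sketch is only the blending of two estimators.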

    Theoretical Treatment of Target Coverage in Wireless Sensor Networks
    Yu Gu(谷雨), Bao-Hua Zhao(赵保华), Yu-Sheng Ji(计宇生), Member, IEEE, and Jie Li(李颉), Senior Member, ACM, IEEE
    Journal of Computer Science and Technology, 2011, 26 (1): 117-129.  DOI: 10.1007/s11390-011-1115-x

    Target coverage is an important yet challenging problem in wireless sensor networks, especially when both coverage and energy constraints must be taken into account. Due to its nonlinear nature, previous studies of this problem have mainly focused on heuristic algorithms; the theoretical bound remains unknown. Moreover, the most popular method used in the previous literature, i.e., discretization of continuous time, has yet to be justified. This paper fills in these gaps with two theoretical results. The first is a formal justification of the method: we use a simple example to illustrate the procedure of transforming a solution in the time domain into a corresponding solution in the pattern domain with the same network lifetime, obtain two key observations, and then formally prove these observations and use them as the basis to justify the method. The second result is an algorithm that guarantees a network lifetime of at least (1-ε) of the optimal network lifetime, where ε can be made arbitrarily small depending on the required precision. The algorithm is based on column generation (CG) theory, which decomposes the original problem into two sub-problems and iteratively solves them in a way that approaches the optimal solution. Moreover, we develop several constructive approaches to further optimize the algorithm. Numerical results verify the efficiency of our CG-based algorithm.

    Security of the SMS4 Block Cipher Against Differential Cryptanalysis
    Bo-Zhan Su(苏波展), Wen-Ling Wu(吴文玲), Senior Member, CCF, and Wen-Tao Zhang(张文涛)
    Journal of Computer Science and Technology, 2011, 26 (1): 130-138.  DOI: 10.1007/s11390-011-1116-9

    SMS4 is a 128-bit block cipher used in the WAPI standard for wireless networks in China. In this paper, we analyze the security of the SMS4 block cipher against differential cryptanalysis. Firstly, we prove three theorems and one corollary that reflect relationships of 5- and 6-round SMS4. Next, by these relationships, we clarify the minimum number of active S-boxes in 6-, 7- and 12-round SMS4 respectively. Finally, based on the above results, we present a family of about 2^14 differential characteristics for 19-round SMS4, which leads to an attack on 23-round SMS4 with 2^118 chosen plaintexts and 2^126.7 encryptions.

    Algorithm and Complexity
    NuMDG: A New Tool for Multiway Decision Graphs Construction
    Sa'ed Abed, Member, ACM, IEEE, Yassine Mokhtari, Otmane Ait-Mohamed, Member, ACM, IEEE, and Sofiène Tahar, Senior Member, IEEE, Member, ACM
    Journal of Computer Science and Technology, 2011, 26 (1): 139-152.  DOI: 10.1007/s11390-011-1117-8

    Multiway Decision Graphs (MDGs) are a canonical representation of a subset of many-sorted first-order logic. This subset generalizes the logic of equality with abstract types and uninterpreted function symbols. The distinction between abstract and concrete sorts mirrors the hardware distinction between data path and control. Here we consider ways to improve MDG construction. Efficiency is achieved through the use of the Generalized If-Then-Else (GITE) operator commonly found in Binary Decision Diagram packages. Accordingly, we review the main algorithms used in MDG verification techniques. In particular, Relational Product and Pruning by Subsumption are defined uniformly through this single GITE operator, which leads to a more efficient implementation, and we provide their correctness proofs. This work can be viewed as a way of accommodating ROBDD algorithms to the realm of abstract sorts and uninterpreted functions. The new tool, called NuMDG, accepts an extended SMV language supporting abstract data sorts. Finally, we present experimental results demonstrating the efficiency of the NuMDG tool and evaluating its performance on a set of benchmarks from the SMV package.

    Incremental Alignment Manifold Learning
    Zhi Han (韩志), De-Yu Meng(孟德宇), Zong-Ben Xu (徐宗本), and Nan-Nan Gu (古楠楠)
    Journal of Computer Science and Technology, 2011, 26 (1): 153-165.  DOI: 10.1007/s11390-011-1118-7

    A new manifold learning method, called the incremental alignment method (IAM), is proposed for nonlinear dimensionality reduction of high-dimensional data with intrinsic low dimensionality. The main idea is to incrementally align the low-dimensional coordinates of the input data patch by patch to iteratively generate the representation of the entire dataset. The method consists of two major steps: the incremental step and the alignment step. The incremental step searches for the next neighborhood patch to be aligned, and the alignment step iteratively aligns the low-dimensional coordinates of that patch to generate the embeddings of the entire dataset. Compared with existing manifold learning methods, the proposed method excels in several respects: high efficiency, easy out-of-sample extension, good metric preservation, and avoidance of the local minima issue. All these properties are supported by a series of experiments performed on synthetic and real-life datasets. In addition, the computational complexity of the proposed method is analyzed, and its efficiency is theoretically argued and experimentally demonstrated.

    Computer Graphics and Visualization
    MCGIM-Based Model Streaming for Realtime Progressive Rendering
    Bin Sheng (盛斌), Wei-Liang Meng (孟维亮), Han-Qiu Sun (孙汉秋), Member, ACM, IEEE, and En-Hua Wu (吴恩华), Senior Member, CCF, ACM, IEEE
    Journal of Computer Science and Technology, 2011, 26 (1): 166-175.  DOI: 10.1007/s11390-011-1119-6

    While most mesh streaming techniques focus on optimizing the transmission order of polygon data, few approaches have addressed the streaming problem using geometry images (GIM). In this paper, we present a new approach which first partitions a mesh into several surface patches and then converts these patches into a multi-chart geometry image (MCGIM). After the resampled MCGIM and normal map atlas are obtained, we hierarchically construct a regular geometry image representation by adopting a quadtree structure. In this way, the encoded nodes can be transmitted in arbitrary order with high transmission flexibility, and the rendering quality of partially transmitted models can be greatly improved by using the normal texture atlas. Meanwhile, only the geometry on the silhouette with respect to the current viewpoint needs to be refined and transmitted, so the amount of data transferred per frame is minimized. In particular, our approach also allows users to encode and transmit the mesh data via the JPEG2000 technique, making our mesh streaming method suitable for transmitting 3D animation models using Motion JPEG2000 video. Experimental results have demonstrated the effectiveness of our approach, which enables one server to stream the MCGIM texture atlas to the clients. The transmitted model can also be rendered in a multiresolution manner with GPU acceleration on the client side, thanks to the regular geometry structure of the MCGIM.

    Light Space Cascaded Shadow Maps Algorithm for Real Time Rendering
    Xiao-Hui Liang (梁晓辉), Senior Member, CCF, Shang Ma (马上), Li-Xia Cen (岑丽霞), and Zhuo Yu (于卓)
    Journal of Computer Science and Technology, 2011, 26 (1): 176-186.  DOI: 10.1007/s11390-011-1120-0

    Owing to its generality and efficiency, Cascaded Shadow Maps (CSMs) play an important role in real-time shadow rendering for large-scale and complex virtual environments. However, CSMs suffer from a redundant rendering problem: objects are rendered undesirably into different shadow map textures when the view direction and light direction are not perpendicular. In this paper, we present a light space cascaded shadow maps algorithm. The algorithm splits a scene into non-intersecting layers in light space and generates one shadow map for each layer through irregular frustum clipping and scene organization, ensuring that no shadow sample point ever appears in multiple shadow maps. A succinct shadow determination method is given to choose the optimal shadow map when rendering scenes. We also combine the algorithm with stable cascaded shadow maps and a soft shadow algorithm to avoid shadow flickering and produce soft shadows. The results show that the algorithm effectively improves the efficiency and shadow quality of CSMs by avoiding redundant rendering, and can produce high-quality shadows in large-scale dynamic environments with real-time performance.
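For context, the cascade boundaries that standard CSMs start from are usually computed with the well-known "practical" split scheme, which blends a logarithmic and a uniform subdivision of the view range. This is the conventional CSM formula, not this paper's light-space layer partitioning:

```python
# Practical split scheme for cascaded shadow maps: each cascade's far
# distance interpolates between a logarithmic and a uniform split of
# the view frustum's [near, far] depth range.

def csm_splits(near, far, num_cascades, lam=0.5):
    """Return the far distance of each cascade. lam=1 is fully
    logarithmic, lam=0 fully uniform."""
    splits = []
    for i in range(1, num_cascades + 1):
        t = i / num_cascades
        log_d = near * (far / near) ** t          # logarithmic split
        uni_d = near + (far - near) * t           # uniform split
        splits.append(lam * log_d + (1 - lam) * uni_d)
    return splits

splits = csm_splits(near=1.0, far=1000.0, num_cascades=4)
assert len(splits) == 4
assert abs(splits[-1] - 1000.0) < 1e-9                  # reaches the far plane
assert all(a < b for a, b in zip(splits, splits[1:]))   # strictly increasing
```

The paper's contribution is what happens after splitting: partitioning in light space and clipping so each shadow sample lands in exactly one map.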

    Effectively Discriminating Fighting Shots in Action Movies
    Shu-Gao Ma(马述高) and Wei-Qiang Wang(王伟强), Member, ACM, IEEE
    Journal of Computer Science and Technology, 2011, 26 (1): 187-194.  DOI: 10.1007/s11390-011-1121-z

    Fighting shots are the highlights of action movies, and an effective approach to discriminating fighting shots is very useful for many applications, such as movie trailer construction, movie content filtering, and movie content retrieval. In this paper, we present a novel method for this task. Our approach first extracts reliable motion information of local invariant features through a robust keypoint tracking computation; foreground keypoints are then distinguished from background keypoints by a sophisticated voting process. Next, the parameters of the camera motion model are computed from the motion information of the background keypoints, and this model is used as a reference to compute the actual motion of the foreground keypoints. Finally, feature vectors are extracted to characterize the motions of the foreground keypoints, and a support vector machine (SVM) classifier is trained on the extracted feature vectors to discriminate fighting shots. Experimental results on representative action movies show that our approach is very effective.

    Saliency-Based Fidelity Adaptation Preprocessing for Video Coding
    Shao-Ping Lu (卢少平), Student Member, CCF, ACM, and Song-Hai Zhang (张松海), Member, CCF, ACM, IEEE
    Journal of Computer Science and Technology, 2011, 26 (1): 195-202.  DOI: 10.1007/s11390-011-1122-y

    In this paper, we present a video coding scheme which applies the technique of visual saliency computation to adjust image fidelity before compression. To extract visually salient features, we construct a spatio-temporal saliency map by analyzing the video using a combined bottom-up and top-down visual saliency model. We then use an extended bilateral filter, in which the local intensity and spatial scales are adjusted according to visual saliency, to adaptively alter the image fidelity. Our implementation is based on the H.264 video encoder JM12.0. Besides evaluating our scheme with the H.264 reference software, we also compare it to a more traditional foreground-background segmentation-based method and a foveation-based approach which employs Gaussian blurring. Our results show that the proposed algorithm can improve the compression ratio significantly while effectively preserving perceptual visual quality.
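The saliency-adaptive filtering idea can be sketched in one dimension. This is a generic bilateral filter whose spatial scale grows in low-saliency regions; the parameter values and the particular scale mapping are illustrative assumptions, not the paper's extended-filter settings:

```python
import math

# 1-D bilateral filter with a saliency-dependent spatial scale: background
# (low-saliency) samples are smoothed more aggressively before encoding,
# while salient samples keep their fidelity.

def bilateral_1d(signal, saliency, sigma_s=1.0, sigma_r=20.0, radius=2):
    out = []
    for i, center in enumerate(signal):
        # Less salient -> larger spatial sigma -> stronger smoothing.
        s_spatial = sigma_s * (2.0 - saliency[i])
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * s_spatial ** 2)) *
                 math.exp(-((signal[j] - center) ** 2) / (2 * sigma_r ** 2)))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

signal = [10.0, 10.0, 30.0, 10.0, 10.0]           # a small detail "bump"
flat = bilateral_1d(signal, saliency=[0.0] * 5)   # background: smooth a lot
kept = bilateral_1d(signal, saliency=[1.0] * 5)   # salient: smooth less
# The high-saliency result stays closer to the original detail.
assert abs(kept[2] - signal[2]) < abs(flat[2] - signal[2])
```

In the paper the same trade-off is made per pixel in 2-D, with both the intensity and spatial scales driven by the spatio-temporal saliency map, so that bits saved on background detail raise the overall compression ratio.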

E-mail: jcst@ict.ac.cn