Bimonthly    Since 1986
ISSN 1000-9000(Print)
/1860-4749(Online)
CN 11-2296/TP
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Computer Science and Technology
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
Other Countries: Springer
 
  • Table of Contents
      05 January 2014, Volume 29 Issue 1
    Editorial
    Editorial: Moving Forward to Respond to Rapid Changes of Computer Science and Technology
    Guo-Jie Li
    Journal of Computer Science and Technology, 2014, 29 (1): 1-1.  DOI: 10.1007/s11390-013-1406-5
    Introduction to the Six Leading Editors
    Journal of Computer Science and Technology, 2014, 29 (1): 2-3.  DOI: 10.1007/s11390-013-1407-4
    Computer Networks and Distributed Computing
    Effective Object Identification and Association by Varying Coverage Through RFID Power Control
    Shung Han Cho, Kyung Hoon Kim, and Sangjin Hong
    Journal of Computer Science and Technology, 2014, 29 (1): 4-20.  DOI: 10.1007/s11390-013-1408-3
    This paper presents an effective power scheduling strategy for energy-efficient identification and association of multiple objects. The proposed method can be utilized in many heterogeneous surveillance systems with visual sensors and RFID (radio-frequency identification) readers, where energy efficiency as well as the association rate is critical. The positions and trajectory estimates of multiple objects are used to decide the power level of RFID readers. Several key parameters, including time windows and distance separations, are defined in the method in order to minimize the effects of RFID coverage uncertainty. A power cost model is defined and incorporated into the method to minimize energy consumption and to maximize association performance. The proposed method computes the power cost using the range of the outermost position for possible single and group associations at every sampling time. An RFID reader is activated with the proper coverage range when the power cost for the current time is lower than the power cost for the next time sample. The simplicity of the power cost model relieves the problematic combinatorial comparisons in multiple-object cases. A performance comparison simulation against the minimum and maximum energy consumption shows that the proposed method achieves fast single associations with less energy consumption. Finally, a realistic comparison simulation with fixed-range RFID readers demonstrates that the proposed method outperforms the fixed ranges in terms of single association rate and energy consumption.
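As an illustration of the activation rule described above, the following sketch (not the authors' code; the quadratic power model and all function names are assumptions) activates a reader only when covering the outermost tracked object now costs less than doing so at the next predicted sample:

```python
import math

def power_cost(r, alpha=2.0):
    """Illustrative power model: cost grows polynomially with coverage radius."""
    return r ** alpha

def required_range(reader_pos, object_positions):
    """Smallest coverage radius that reaches the outermost object."""
    return max(math.dist(reader_pos, p) for p in object_positions)

def should_activate(reader_pos, positions_now, positions_next):
    """Activate now only if covering the objects at the current time is
    cheaper than covering their predicted positions at the next sample."""
    cost_now = power_cost(required_range(reader_pos, positions_now))
    cost_next = power_cost(required_range(reader_pos, positions_next))
    return cost_now < cost_next

# Objects moving away from the reader: activate now, while they are close.
print(should_activate((0, 0), [(3, 4)], [(6, 8)]))
```

The single scalar cost per time sample is what sidesteps pairwise combinatorial comparisons when many objects are tracked at once.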
    Dynamic I/O-Aware Scheduling for Batch-Mode Applications on Chip Multiprocessor Systems of Cluster Platforms
    Fang Lv, Hui-Min Cui, Lei Wang, Lei Liu, Cheng-Gang Wu, Xiao-Bing Feng, and Pen-Chung Yew
    Journal of Computer Science and Technology, 2014, 29 (1): 21-37.  DOI: 10.1007/s11390-013-1409-2
    Efficiency of batch processing is becoming increasingly important for many modern commercial service centers, e.g., clusters and cloud computing datacenters. However, periodic resource contention has become the major performance obstacle for concurrently running applications on mainstream CMP servers. I/O contention is one such obstacle, which may seriously impede both the co-running performance of batch jobs and the system throughput. In this paper, a dynamic I/O-aware scheduling algorithm is proposed to lower the impact of I/O contention and to enhance the co-running performance in batch processing. We set up our environment on an 8-socket, 64-core server in the Dawning Linux Cluster. Fifteen workloads ranging from 8 jobs to 256 jobs are evaluated. Our experimental results show significant improvements in the throughput of the workloads, ranging from 7% to 431%. Meanwhile, noticeable improvements in the slowdown of workloads and the average runtime of each job can be achieved. These results show that a well-tuned dynamic I/O-aware scheduler is beneficial for batch-mode services. It can also enhance resource utilization via throughput improvement on modern service platforms.
    Improving Scalability of Cloud Monitoring Through PCA-Based Clustering of Virtual Machines
    Claudia Canali, Riccardo Lancellotti
    Journal of Computer Science and Technology, 2014, 29 (1): 38-52.  DOI: 10.1007/s11390-013-1410-9
    Cloud computing has recently emerged as a leading paradigm to allow customers to run their applications in virtualized large-scale data centers. Existing solutions for monitoring and management of these infrastructures consider virtual machines (VMs) as independent entities with their own characteristics. However, these approaches suffer from scalability issues due to the increasing number of VMs in modern cloud data centers. We claim that scalability issues can be addressed by leveraging the similarity among VMs' behavior in terms of resource usage patterns. In this paper, we propose an automated methodology to cluster VMs starting from the usage of multiple resources, assuming no knowledge of the services executed on them. The innovative contribution of the proposed methodology is the use of the statistical technique known as principal component analysis (PCA) to automatically select the most relevant information for clustering similar VMs. We apply the methodology to two case studies, a virtualized testbed and a real enterprise data center. In both case studies, the automatic data selection based on PCA allows us to achieve high performance, with a percentage of correctly clustered VMs between 80% and 100% even for short time series (1 day) of monitored data. Furthermore, we estimate the potential reduction in the amount of collected data to demonstrate how our proposal may address the scalability issues related to monitoring and management in cloud computing data centers.
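The pipeline the abstract describes — project each VM's resource-usage features with PCA, then cluster in the reduced space — can be sketched as follows. This is a minimal NumPy illustration on synthetic data, not the authors' methodology; the metric set, component count, and k-means initialization are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 10 VMs x 6 resource metrics (e.g. mean CPU, memory,
# disk and network usage) forming two behavioral groups.
group_a = rng.normal(0.2, 0.02, size=(5, 6))
group_b = rng.normal(0.8, 0.02, size=(5, 6))
X = np.vstack([group_a, group_b])

# PCA via SVD: keep the top-k components carrying most of the variance.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
Z = Xc @ Vt[:k].T          # VMs projected onto the top-k principal components

# Minimal k-means on the reduced representation.
centers = Z[[0, -1]]        # crude init: one seed from each end of the data
for _ in range(10):
    labels = np.argmin(((Z[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([Z[labels == c].mean(axis=0) for c in (0, 1)])

print(labels)
```

Clustering on the PCA projection rather than on all raw metric streams is what gives the claimed reduction in data that must be collected and compared per VM.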
    TuLP: A Family of Lightweight Message Authentication Codes for Body Sensor Networks
    Zheng Gong, Pieter Hartel, Svetla Nikova, Shao-Hua Tang, and Bo Zhu
    Journal of Computer Science and Technology, 2014, 29 (1): 53-68.  DOI: 10.1007/s11390-013-1411-8
    A wireless sensor network (WSN) commonly requires a lower level of security for public information gathering, whilst a body sensor network (BSN) must be secured with strong authenticity to protect personal health information. In this paper, some practical problems with the message authentication codes (MACs) proposed in popular security architectures for WSNs are reconsidered. The analysis shows that the recommended MACs for WSNs, e.g., CBC-MAC (TinySec), OCB-MAC (MiniSec), and XCBC-MAC (SenSec), might not be exactly suitable for BSNs. In particular, an existential forgery attack on XCBC-MAC is elaborated. Considering the hardware limitations of BSNs, we propose a new family of tunable lightweight MACs based on the PRESENT block cipher. The first scheme, named TuLP, is a new lightweight MAC with a 64-bit output. The second scheme, named TuLP-128, is a 128-bit variant which provides higher resistance against internal collisions. Compared with existing schemes, our lightweight MACs are both time- and resource-efficient on hardware-constrained devices.
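The kind of weakness motivating such an analysis can be shown with the textbook existential forgery against raw CBC-MAC on variable-length messages (this classic attack is illustrative only, not the paper's specific attack on XCBC-MAC; the HMAC-based toy block function is a stand-in for a real cipher):

```python
import hmac
import hashlib

BLOCK = 16

def E(key, block):
    """Toy 128-bit 'block cipher' stand-in (keyed PRF via HMAC-SHA256)."""
    return hmac.new(key, block, hashlib.sha256).digest()[:BLOCK]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_mac(key, msg):
    """Raw CBC-MAC over full blocks (zero IV, no length encoding)."""
    state = bytes(BLOCK)
    for i in range(0, len(msg), BLOCK):
        state = E(key, xor(state, msg[i:i + BLOCK]))
    return state

key = b"k" * 16
m1 = b"A" * BLOCK
t1 = cbc_mac(key, m1)

# Forgery: the tag of m1 is also valid for m1 || (m1 XOR t1),
# computed without knowing the key.
forged = m1 + xor(m1, t1)
assert cbc_mac(key, forged) == t1
```

The chaining cancels: the second block XORs the first tag back out, so the final state is again E(m1). Fixed-length or length-prepended variants close exactly this gap, which is why deployed sensor-network MACs must be vetted carefully.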
    Trust-Based Personalized Service Recommendation: A Network Perspective
    Shui-Guang Deng, Long-Tao Huang, Jian Wu, and Zhao-Hui Wu
    Journal of Computer Science and Technology, 2014, 29 (1): 69-80.  DOI: 10.1007/s11390-013-1412-7
    Recent years have witnessed a growing trend of Web services on the Internet. There is a great need of effective service recommendation mechanisms. Existing methods mainly focus on the properties of individual Web services (e.g., functional and non-functional properties) but largely ignore users' views on services, thus failing to provide personalized service recommendations. In this paper, we study the trust relationships between users and Web services using network modeling and analysis techniques. Based on the findings and the service network model we build, we then propose a collaborative filtering algorithm called Trust-Based Service Recommendation (TSR) to provide personalized service recommendations. This systematic approach for service network modeling and analysis can also be used for other service recommendation studies.
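A minimal sketch of the trust-weighted collaborative-filtering idea behind such recommenders (purely illustrative; the data layout and weighting scheme are assumptions, not the paper's TSR algorithm):

```python
def trust_weighted_score(user, service, trust, ratings):
    """Predict a user's score for a service as the trust-weighted average
    of the scores given by the users that this user trusts.

    trust:   {user: {neighbor: trust value in (0, 1]}}
    ratings: {user: {service: observed score}}
    Returns None when no trusted neighbor has rated the service.
    """
    num = den = 0.0
    for neighbor, t in trust.get(user, {}).items():
        score = ratings.get(neighbor, {}).get(service)
        if score is not None:
            num += t * score
            den += t
    return num / den if den else None

# Hypothetical example: alice trusts bob far more than carol.
trust = {"alice": {"bob": 0.9, "carol": 0.3}}
ratings = {"bob": {"s1": 4.0}, "carol": {"s1": 2.0}}
print(trust_weighted_score("alice", "s1", trust, ratings))
```

The prediction is pulled toward bob's score because his trust weight dominates; personalization comes from each user having a distinct trust neighborhood rather than a global popularity ranking.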
    Effective Error-Tolerant Keyword Search for Secure Cloud Computing
    Bo Yang, Xiao-Qiong Pang, Jun-Qiang Du, and Dan Xie
    Journal of Computer Science and Technology, 2014, 29 (1): 81-89.  DOI: 10.1007/s11390-013-1413-6
    The existing solutions to keyword search in the cloud can be divided into two categories: searching on exact keywords and searching on error-tolerant keywords. An error-tolerant keyword search scheme permits searches on encrypted data with only an approximation of some keyword. Such a scheme is suitable for the case where users' search input might not exactly match the pre-set keywords. In this paper, we first present a general framework for searching on error-tolerant keywords. Then we propose a concrete scheme, based on a fuzzy extractor, which is proven secure against an adaptive adversary under a well-defined security definition. The scheme is suitable for all similarity metrics, including Hamming distance, edit distance, and set difference. It does not require the user to construct or store anything in advance, other than the key used to calculate the trapdoor of keywords and the key to encrypt data documents. Thus, our scheme tremendously eases the users' burden. What is more, our scheme is able to transform the server's search for error-tolerant keywords on ciphertexts into a search for exact keywords on plaintexts. The server can use any existing approach to exact keyword search to search plaintexts in an index table.
    Data Management and Data Mining
    On Unsupervised Training of Multi-Class Regularized Least-Squares Classifiers
    Tapio Pahikkala, Antti Airola, Fabian Gieseke, and Oliver Kramer
    Journal of Computer Science and Technology, 2014, 29 (1): 90-104.  DOI: 10.1007/s11390-013-1414-5
    In this work we present the first efficient algorithm for unsupervised training of multi-class regularized least-squares classifiers. The approach is closely related to the unsupervised extension of the support vector machine classifier known as maximum margin clustering, which has recently received considerable attention, though mostly for the binary classification case. We present a combinatorial search scheme that combines steepest descent strategies with powerful meta-heuristics for avoiding bad local optima. The regularized least-squares based formulation of the problem allows us to use matrix algebraic optimization, enabling constant-time checks of the intermediate candidate solutions during the search. Our experimental evaluation indicates the potential of the novel method and demonstrates its superior clustering performance over a variety of competing methods on real-world datasets. Both time complexity analysis and experimental comparisons show that the method can scale well to practical-sized problems.
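The supervised building block — the closed-form dual solution of multi-class regularized least squares, whose cheap re-evaluation is what makes a combinatorial search over labelings feasible — can be sketched as below. This is a NumPy illustration on assumed toy data; the unsupervised search itself is omitted:

```python
import numpy as np

def rls_fit(K, Y, lam=1.0):
    """Closed-form dual solution of multi-class regularized least squares:
    A = (K + lam * I)^{-1} Y, one coefficient column per class."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), Y)

def rls_predict(K_test_train, A):
    """Predicted class = argmax over the class-wise real-valued outputs."""
    return (K_test_train @ A).argmax(axis=1)

# Toy data: two well-separated 2-D blobs, linear kernel.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.3, (10, 2)), rng.normal(2, 0.3, (10, 2))])
y = np.array([0] * 10 + [1] * 10)
Y = np.eye(2)[y]                      # one-hot label matrix
K = X @ X.T                           # linear kernel

A = rls_fit(K, Y, lam=0.1)
pred = rls_predict(K, A)
print((pred == y).mean())
```

Because changing one candidate label only swaps entries of Y, matrix-algebraic updates of this solution can be evaluated much faster than refitting from scratch, which is the efficiency the abstract refers to.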
    Minimizing the Discrepancy Between Source and Target Domains by Learning Adapting Components
    Fatemeh Dorri, Ali Ghodsi
    Journal of Computer Science and Technology, 2014, 29 (1): 105-115.  DOI: 10.1007/s11390-013-1415-4
    Predicting the response variables of the target dataset is one of the main problems in machine learning. Predictive models are desired to perform satisfactorily in a broad range of target domains. However, that may not be plausible if there is a mismatch between the source and target domain distributions. The goal of domain adaptation algorithms is to solve this issue and deploy a model across different target domains. We propose a method based on kernel distribution embedding and Hilbert-Schmidt independence criterion (HSIC) to address this problem. The proposed method embeds both source and target data into a new feature space with two properties: 1) the distributions of the source and the target datasets are as close as possible in the new feature space, and 2) the important structural information of the data is preserved. The embedded data can be in lower dimensional space while preserving the aforementioned properties and therefore the method can be considered as a dimensionality reduction method as well. Our proposed method has a closed-form solution and the experimental results show that it works well in practice.
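The HSIC quantity the method relies on has a simple empirical estimator, trace(KHLH)/(n-1)^2 with centering matrix H. A sketch with a linear-kernel example (illustrative of the criterion only, not the paper's embedding algorithm):

```python
import numpy as np

def hsic(K, L):
    """Empirical Hilbert-Schmidt independence criterion between two kernel
    matrices K and L: trace(K H L H) / (n - 1)^2, H = I - (1/n) * ones."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y_dep = 2 * x + 0.1 * rng.normal(size=100)   # strongly dependent on x
y_ind = rng.normal(size=100)                 # independent of x

lin = lambda v: np.outer(v, v)               # linear kernel on 1-D samples
print(hsic(lin(x), lin(y_dep)), hsic(lin(x), lin(y_ind)))
```

A large HSIC indicates strong statistical dependence; the proposed embedding maximizes it between the projected data and the original structure while matching the source and target distributions.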
    On Density-Based Data Streams Clustering Algorithms: A Survey
    Amineh Amini, Teh Ying Wah, and Hadi Saboohi
    Journal of Computer Science and Technology, 2014, 29 (1): 116-141.  DOI: 10.1007/s11390-013-1416-3
    Clustering data streams has drawn lots of attention in the last few years due to their ever-growing presence. Data streams put additional challenges on clustering, such as limited time and memory and the requirement of a single pass over the data. Furthermore, discovering clusters with arbitrary shapes is very important in data stream applications. Data streams are infinite and evolve over time, and we do not have any knowledge about the number of clusters. In a data stream environment, due to various factors, some noise appears occasionally. Density-based methods are a remarkable class of algorithms for clustering data streams: they can discover clusters of arbitrary shape, detect noise, and do not need the number of clusters in advance. Due to data stream characteristics, however, traditional density-based clustering is not directly applicable, and recently many density-based clustering algorithms have been extended for data streams. The main idea in these algorithms is to use density-based methods in the clustering process while overcoming the constraints imposed by the nature of data streams. The purpose of this paper is to shed light on the literature on density-based clustering over data streams. We not only summarize the main density-based clustering algorithms on data streams and discuss their uniqueness and limitations, but also explain how they address the challenges of clustering data streams. Moreover, we investigate the evaluation metrics used to validate cluster quality and measure algorithms' performance. It is hoped that this survey will serve as a steppingstone for researchers studying data stream clustering, particularly density-based algorithms.
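A recurring ingredient in the surveyed algorithms is the damped-window micro-cluster, whose summary statistics decay exponentially so stale stream data fades out. A minimal 1-D sketch in the style of DenStream (the class layout and parameter names are assumptions):

```python
class MicroCluster:
    """Damped-window micro-cluster: each point's contribution decays as
    2^(-lam * age), so old data gradually stops influencing the summary."""

    def __init__(self, lam=0.25):
        self.lam = lam
        self.w = 0.0        # decayed weight (decayed point count)
        self.ls = 0.0       # decayed linear sum (1-D for brevity)
        self.t = 0.0        # time of last update

    def _fade(self, now):
        f = 2 ** (-self.lam * (now - self.t))
        self.w *= f
        self.ls *= f
        self.t = now

    def insert(self, x, now):
        self._fade(now)
        self.w += 1.0
        self.ls += x

    def center(self):
        return self.ls / self.w

mc = MicroCluster(lam=1.0)
mc.insert(10.0, now=0.0)
mc.insert(20.0, now=1.0)   # the older point now counts only half
print(mc.center())
```

Keeping only such constant-size decayed summaries, rather than the points themselves, is how these algorithms satisfy the one-pass, bounded-memory constraints while still letting clusters drift with the stream.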
    Computer Graphics and Multimedia
    Accurate Approximation of the Earth Mover’s Distance in Linear Time
    Min-Hee Jang, Sang-Wook Kim, Christos Faloutsos, and Sunju Park
    Journal of Computer Science and Technology, 2014, 29 (1): 142-154.  DOI: 10.1007/s11390-013-1417-2
    Color descriptors are one of the important features used in content-based image retrieval. The dominant color descriptor (DCD) represents a few perceptually dominant colors in an image through color quantization. For image retrieval based on DCD, the earth mover's distance (EMD) and the optimal color composition distance were proposed to measure the dissimilarity between two images. Although providing good retrieval results, both methods are too time-consuming to be used in a large image database. To solve the problem, we propose a new distance function that calculates an approximate earth mover's distance in linear time. To calculate the dissimilarity in linear time, the proposed approach employs a space-filling curve over the multidimensional color space. To improve the accuracy, the proposed approach uses multiple curves and adjusts the color positions. As a result, our approach achieves an order-of-magnitude time improvement while incurring only small errors. We have performed extensive experiments to show the effectiveness and efficiency of the proposed approach. The results reveal that our approach achieves almost the same results as the EMD in linear time.
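The 1-D reduction that makes linear time possible can be sketched as follows: map each dominant color to an index on a space-filling curve, then compute the exact 1-D EMD as the integral of the CDF difference. Here a Morton (Z-order) curve stands in for the authors' curve, and their multi-curve averaging and position adjustment are omitted:

```python
import numpy as np

def morton_key(r, g, b, bits=4):
    """Interleave the bits of a quantized RGB triple (Z-order curve index)."""
    key = 0
    for i in range(bits - 1, -1, -1):
        for c in (r, g, b):
            key = (key << 1) | ((c >> i) & 1)
    return key

def emd_1d(pos, w1, w2):
    """Exact EMD between two normalized weight vectors placed on the same
    1-D positions: the integral of the absolute CDF difference."""
    order = np.argsort(pos)
    pos = np.asarray(pos, dtype=float)[order]
    d = (np.asarray(w1) - np.asarray(w2))[order]
    cdf = np.cumsum(d)[:-1]            # CDF difference at interior positions
    return float(np.sum(np.abs(cdf) * np.diff(pos)))

# Two dominant-color descriptors mapped onto the curve.
colors = [(0, 0, 0), (7, 7, 7), (15, 15, 15)]
pos = [morton_key(*c) for c in colors]
img_a = [0.5, 0.5, 0.0]     # weights of the dominant colors in image A
img_b = [0.0, 0.5, 0.5]     # weights of the dominant colors in image B
print(emd_1d(pos, img_a, img_b))
```

Sorting by a precomputed curve index and one cumulative-sum pass replaces the transportation-problem solve of the full EMD, at the cost of distortion wherever the curve separates nearby colors, hence the paper's use of multiple curves.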
    Movie Scene Recognition Using Panoramic Frame and Representative Feature Patches
    Guang-Yu Gao, Hua-Dong Ma
    Journal of Computer Science and Technology, 2014, 29 (1): 155-164.  DOI: 10.1007/s11390-013-1418-1
    Recognizing scene information in images or videos, such as locating objects and answering "Where am I?", has attracted much attention in the computer vision research field. Many existing scene recognition methods focus on static images and cannot achieve satisfactory results on videos, which contain more complex scene features than images. In this paper, we propose a robust movie scene recognition approach based on panoramic frames and representative feature patches. More specifically, the movie is first efficiently segmented into video shots and scenes. Secondly, we introduce a novel key-frame extraction method using panoramic frames, and a local feature extraction process is applied to get the representative feature patches (RFPs) of each video shot. Thirdly, a latent Dirichlet allocation (LDA) based recognition model is trained to recognize the scene within each individual video scene clip. The correlations between video clips are considered to enhance the recognition performance. When our proposed approach is applied to recognize scenes in realistic movies, the experimental results show that it achieves satisfactory performance.
    Theory and Algorithms
    Related-Key Impossible Differential Attack on Reduced-Round LBlock
    Long Wen, Mei-Qin Wang, and Jing-Yuan Zhao
    Journal of Computer Science and Technology, 2014, 29 (1): 165-176.  DOI: 10.1007/s11390-013-1419-0
    LBlock is a 32-round lightweight block cipher with a 64-bit block size and an 80-bit key. This paper identifies 16-round related-key impossible differentials of LBlock, which are better than the 15-round related-key impossible differentials used in the previous attack. Based on these 16-round related-key impossible differentials, we can attack 23 rounds of LBlock, while the previous related-key impossible differential attacks could only work on 22-round LBlock. This makes our attack on LBlock the best attack in terms of the number of attacked rounds.

Journal of Computer Science and Technology
Institute of Computing Technology, Chinese Academy of Sciences
P.O. Box 2704, Beijing 100190 P.R. China
Tel.: 86-10-62610746
E-mail: jcst@ict.ac.cn
 
  Copyright ©2015 JCST, All Rights Reserved