Bimonthly | Since 1986
ISSN 1000-9000(Print)
CN 11-2296/TP
Publication Details
Edited by: Editorial Board of Journal of Computer Science and Technology
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Distributed by:
China: All Local Post Offices
Other Countries: Springer
30 Most Downloaded Articles

Implementing a 1GHz Four-Issue Out-of-Order Execution Microprocessor in a Standard Cell ASIC Methodology
Wei-Wu Hu, Ji-Ye Zhao, Shi-Qiang Zhong, Xu Yang, Elio Guidetti, and Chris Wu
Abstract views: 78561 | PDF (541KB) downloads: 50425
This paper introduces the microarchitecture and physical implementation of the Godson-2E processor, a four-issue superscalar RISC processor that supports the 64-bit MIPS instruction set. The adoption of aggressive out-of-order execution and memory hierarchy techniques helps Godson-2E achieve high performance. The Godson-2E processor has been physically designed in a 7-metal 90nm CMOS process using a cell-based methodology with some bit-sliced manual placement and a number of hand-crafted cells and macros. The processor runs at 1GHz and achieves a SPEC CPU2000 rate higher than 500.
Towards Automated Provisioning and Emergency Handling in Renewable Energy Powered Datacenters
Chao Li, Rui Wang, Yang Hu, Ruijin Zhou, Ming Liu, Long-Jun Liu, Jing-Ling Yuan, Tao Li, and De-Pei Qian
2014, 29 (4): 618-630. DOI: 10.1007/s11390-014-1454-5
Abstract views: 2556 | PDF (3656KB) downloads: 25483
Designing eco-friendly systems has been at the forefront of computing research. Faced with growing concern about server energy expenditure and climate change, both industry and academia have started to show strong interest in computing systems powered by renewable energy sources. Existing proposals on this issue mainly focus on optimizing resource utilization or workload performance. The key supporting hardware structures for cross-layer power management and emergency handling mechanisms are often left unexplored. This paper presents GreenPod, a research framework for exploring scalable and dependable renewable power management in datacenters. An important feature of GreenPod is that it enables joint management of server power supplies and virtualized server workloads. Its interactive communication portal between servers and power supplies allows datacenter operators to perform real-time renewable-energy-driven load migration and power emergency handling. Based on our system prototype, we discuss an important topic: virtual machine (VM) workload survival in the face of an extended utility outage and an insufficient onsite renewable power budget. We show that whether a VM can survive depends on the operating frequencies and workload characteristics. The proposed framework can greatly encourage and facilitate innovative research in dependable green computing.
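The survival question the abstract raises can be sketched as a simple budget check: given an onsite renewable power budget, can a host keep a VM alive by scaling to a lower operating frequency? This is an illustrative sketch only; the function name, frequency levels and per-frequency power numbers are hypothetical, not from the paper.

```python
# Hypothetical sketch: pick the highest CPU frequency whose power draw fits
# the renewable power budget. All numbers below are toy values.

def vm_survives(power_budget_w, freq_levels_ghz, power_at_freq_w, min_freq_ghz):
    """Return the highest frequency (GHz) at which the host fits within the
    power budget, or None if even the lowest acceptable frequency does not."""
    for f in sorted(freq_levels_ghz, reverse=True):
        if f >= min_freq_ghz and power_at_freq_w[f] <= power_budget_w:
            return f
    return None

levels = [1.0, 1.6, 2.4]
power = {1.0: 40.0, 1.6: 70.0, 2.4: 120.0}  # toy per-frequency power model
print(vm_survives(80.0, levels, power, min_freq_ghz=1.0))  # 1.6
print(vm_survives(30.0, levels, power, min_freq_ghz=1.0))  # None
```

Whether a VM survives thus depends on both the available frequency levels and the workload's minimum acceptable frequency, mirroring the paper's observation.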
End-to-End Utilization Control for Aperiodic Tasks in Distributed Real-Time Systems
Yong Liao, Xu-Dong Chen, Guang-Ze Xiong, Qing-Xin Zhu, and Nan Sang
Abstract views: 22385 | PDF (585KB) downloads: 20407
An increasing number of DRTSs (Distributed Real-Time Systems) are employing an end-to-end aperiodic task model. The key challenges of such DRTSs are guaranteeing utilization on multiple processors to achieve overload protection, and meeting the end-to-end deadlines of aperiodic tasks. This paper proposes an end-to-end utilization control architecture and an IC-EAT (Integration Control for End-to-End Aperiodic Tasks) algorithm, which features a distributed feedback loop that dynamically enforces the desired utilization bound on multiple processors. IC-EAT integrates admission control with feedback control, which is able to dynamically determine the QoS (Quality of Service) of incoming tasks and guarantee the end-to-end deadlines of admitted tasks. Then an LQOCM (Linear Quadratic Optimal Control Model) is presented. Finally, experiments demonstrate that, for end-to-end DRTSs whose control matrix G falls into the stable region, IC-EAT is convergent and stable. Moreover, it is capable of providing better QoS guarantees for end-to-end aperiodic tasks and improving the system throughput.
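The feedback idea above can be sketched in a few lines: a controller measures utilization, compares it against the desired bound, and adjusts the admitted task rate accordingly. This is a generic proportional-control sketch under a toy linear plant model, not the paper's LQOCM; the gain and utilization model are illustrative assumptions.

```python
# Minimal sketch of feedback-driven utilization control: a proportional
# controller steers measured utilization toward a setpoint by adjusting the
# admitted task rate. Gain and the linear plant model are toy assumptions.

def control_utilization(setpoint, gain, steps, per_task_util, demand):
    admitted = 0.0          # admitted task rate (tasks/s)
    history = []
    for _ in range(steps):
        utilization = min(1.0, admitted * per_task_util)  # toy plant model
        error = setpoint - utilization
        admitted = max(0.0, min(demand, admitted + gain * error))
        history.append(utilization)
    return history

u = control_utilization(setpoint=0.7, gain=20.0, steps=50,
                        per_task_util=0.01, demand=200.0)
print(round(u[-1], 2))  # converges near the 0.7 utilization bound
```

In the paper's setting the same loop runs in a distributed fashion across processors, with admission control deciding the QoS of each incoming task.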
A Comprehensive and Adaptive Trust Model for Large-Scale P2P Networks
Xiao-Yong Li and Xiao-Lin Gui, Senior Member, CCF
2009, 24 (5): 868-882.
Abstract views: 4268 | PDF (735KB) downloads: 20027

Based on human psychological cognitive behavior, a Comprehensive and Adaptive Trust (CAT) model for large-scale P2P networks is proposed. Firstly, an adaptive trusted decision-making method based on HEW (Historical Evidences Window) is proposed, which can not only reduce risk and improve system efficiency, but also solve the trust forecasting problem when direct evidence is insufficient. Then, a direct trust computing method based on the IOWA (Induced Ordered Weighted Averaging) operator and a feedback trust converging mechanism based on DTT (Direct Trust Tree) are set up, which give the model better scalability than previous studies. At the same time, two new parameters, the confidence factor and the feedback factor, are introduced to adaptively assign the weights to direct trust and feedback trust, which overcomes the shortcoming of traditional methods, in which the weights are assigned subjectively. Simulation results show that, compared to existing approaches, the proposed model achieves remarkable enhancements in the accuracy of trust decision-making and has better dynamic adaptation capability in handling various dynamic behaviors of peers.
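The adaptive weighting of direct and feedback trust can be sketched as a convex combination whose weight grows with the amount of direct evidence. The confidence-factor formula below is a common textbook choice used for illustration, not the paper's actual definition.

```python
# Illustrative sketch of adaptively weighting direct trust and feedback
# trust: with little direct evidence, feedback dominates; with enough direct
# evidence, direct trust dominates. The min(n/m, 1) formula is a stand-in.

def combined_trust(direct_trust, feedback_trust, n_direct_evidence, m=10):
    # Confidence in direct trust grows with the amount of direct evidence.
    confidence = min(1.0, n_direct_evidence / m)
    feedback_weight = 1.0 - confidence
    return confidence * direct_trust + feedback_weight * feedback_trust

print(combined_trust(0.9, 0.5, n_direct_evidence=0))   # 0.5: only feedback counts
print(combined_trust(0.9, 0.5, n_direct_evidence=10))  # 0.9: full confidence
```

This mirrors the role of the paper's confidence and feedback factors: the weights adapt to the evidence rather than being fixed subjectively.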

Middleware for Wireless Sensor Networks: A Survey
Miao-Miao Wang, Jian-Nong Cao, Jing Li, and Sajal K. Das
2008, 23 (3): 305-326.
Abstract views: 15720 | PDF (6603KB) downloads: 18441

Wireless Sensor Networks (WSNs) have found more and more applications in a variety of pervasive computing environments. However, supporting the development, maintenance, deployment and execution of applications over WSNs remains a nontrivial and challenging task, mainly because of the gap between the high-level requirements of pervasive computing applications and the underlying operation of WSNs. Middleware for WSNs can help bridge the gap and remove impediments. In recent years, research has been carried out on WSN middleware from different aspects and for different purposes. In this paper, we provide a comprehensive review of the existing work on WSN middleware, seeking a better understanding of the current issues and future directions in this field. We propose a reference framework to analyze the functionalities of WSN middleware in terms of the system abstractions and the services provided. We review the approaches and techniques for implementing the services. On the basis of this analysis and by using a feature tree, we provide a taxonomy of the features of WSN middleware and their relationships, and use the taxonomy to classify and evaluate existing work. We also discuss open problems in this important area of research.

GBP-WAHSN: A Group-Based Protocol for Large Wireless Ad Hoc and Sensor Networks
Jaime Lloret, Miguel Garcia, Jesus Tomás, and Fernando Boronat
Abstract views: 13365 | PDF (8581KB) downloads: 13426
Grouping nodes improves the performance of the whole network by reducing the average network delay and avoiding unnecessary message forwarding and additional overhead. Many routing protocols for ad hoc and sensor networks have been designed, but none of them is based on groups. In this paper, we first define group-based topologies, and then show how some wireless ad hoc sensor network (WAHSN) routing protocols perform when the nodes are arranged in groups. In our proposal, connections between groups are established as a function of the proximity of the nodes and the neighbors' available capacity (based on the nodes' energy). We describe the proposed architecture, the messages needed for its proper operation, and its mathematical description. We have also simulated how much time is needed to propagate information between groups. Finally, we show a comparison with other architectures.
Clustering Text Data Streams
Yu-Bao Liu, Jia-Rong Cai, Jian Yin, and Ada Wai-Chee Fu
Abstract views: 14154 | PDF (1181KB) downloads: 12563
Clustering text data streams is an important issue in the data mining community and has a number of applications such as news group filtering, text crawling, document organization, and topic detection and tracking. However, most methods are similarity-based approaches that only use the TF*IDF scheme to represent the semantics of text data, and they often lead to poor clustering quality. Recently, researchers have argued that the semantic smoothing model is more efficient than the existing TF*IDF scheme for improving text clustering quality. However, the existing semantic smoothing model is not suitable for a dynamic text data context. In this paper, we first extend the semantic smoothing model to the text data stream context. Based on the extended model, we then present two online clustering algorithms, OCTS and OCTSM, for the clustering of massive text data streams. In both algorithms, we also present a new cluster statistics structure, named the cluster profile, which can capture the semantics of text data streams dynamically and at the same time speed up the clustering process. Some efficient implementations of our algorithms are also given. Finally, we present a series of experimental results illustrating the effectiveness of our technique.
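The cluster-profile idea can be sketched as an online loop: each cluster keeps a running term-frequency profile, and an arriving document joins the most similar cluster or opens a new one. This minimal sketch uses plain term counts and cosine similarity; the paper's semantic smoothing is omitted, and the threshold is an illustrative assumption.

```python
# Minimal online text-stream clustering sketch: running term-count profiles
# stand in for the paper's cluster profiles; semantic smoothing is omitted.
import math
from collections import Counter

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_stream(docs, threshold=0.3):
    profiles = []                 # one Counter ("cluster profile") per cluster
    assignments = []
    for doc in docs:
        vec = Counter(doc.split())
        sims = [cosine(vec, p) for p in profiles]
        if sims and max(sims) >= threshold:
            k = sims.index(max(sims))
            profiles[k].update(vec)   # fold the document into the profile
        else:
            profiles.append(vec)      # open a new cluster
            k = len(profiles) - 1
        assignments.append(k)
    return assignments

stream = ["stock market falls", "market stock rally", "football cup final"]
print(cluster_stream(stream))  # [0, 0, 1]
```

Because each document touches only the per-cluster profiles rather than all past documents, the scheme processes the stream in one pass, which is the property the cluster profile is designed to provide.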
Cited: Baidu(75)
Wavelet Based Image Authentication and Recovery
Rafiullah Chamlawi, Asifullah Khan, and Adnan Idris
Abstract views: 13246 | PDF (564KB) downloads: 11319
In this paper, we propose a secure semi-fragile watermarking technique based on the integer wavelet transform with a choice of two watermarks to be embedded. A self-recovering algorithm is employed that hides the image digest in some wavelet subbands for detecting possible illicit manipulation of objects in the image. The semi-fragility makes the scheme tolerant of JPEG lossy compression with a quality factor as low as 70%, and locates the tampered area accurately. In addition, the system ensures more security because the embedded watermarks are protected with private keys. The computational complexity is reduced by using a parameterized integer wavelet transform. Experimental results show that the proposed scheme guarantees the safety of the watermark, recovery of the image, and localization of the tampered area.
Cited: Baidu(43)
Geometric Bone Modeling: From Macro to Micro Structures
Oded Zaideman and Anath Fischer
2010, 25 (3): 614-622.
Abstract views: 2688 | PDF (23500KB) downloads: 11310

There is major interest within the bio-engineering community in developing accurate and non-invasive means of visualizing, modeling and analyzing bone micro-structures. Bones are composed of hierarchical bio-composite materials characterized by complex multi-scale structural geometry. The process of reconstructing a volumetric bone model is usually based on CT/MRI scanned images. Meshes generated by current commercial CAD systems cannot be used for further modeling or analysis. Moreover, recently developed methods are only capable of capturing the micro-structure of small volumes (biopsy samples). This paper examines the problem of re-meshing a 3D computerized model of bone micro-structure. The proposed method is based on the following phases: defining sub-meshes of the original model in a grid-based structure, re-meshing each sub-mesh using the neural network (NN) method, and merging the sub-meshes into a global mesh. Applying the NN method to micro-structures proved to be quite time-consuming. Therefore, a parallel, grid-based approach was applied, yielding a simpler structure in each grid cell. The performance of this method is analyzed, and the method is demonstrated on real bone micro-structures. Furthermore, the method may be used as the basis for generating a multi-resolution bone geometric model.

Simultaneous Minimization of Capacity and Conflict Misses
Zhiyuan Li
Abstract views: 13642 | PDF (748KB) downloads: 11054
Loop tiling (or loop blocking) is a well-known loop transformation to improve temporal locality in nested loops which perform matrix computations. When targeting caches that have low associativities, one of the key challenges for loop tiling is to simultaneously minimize capacity misses and conflict misses. This paper analyzes the effect of the tile size and the array-dimension size on capacity misses and conflict misses. The analysis supports the approach of combining tile-size selection (to minimize capacity misses) with array padding (to minimize conflict misses).
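The transformation the paper analyzes can be sketched directly. Tiling bounds the working set so that a tile of each array fits in cache; in a C implementation one would additionally pad array dimensions, as the paper proposes, to avoid cache-conflict misses (padding has no direct analogue in pure Python, so it appears here only as a comment).

```python
# Sketch of loop tiling (blocking) for matrix multiplication: the three
# outer loops walk over tiles, the three inner loops stay within a tile so
# the touched sub-blocks of A, B and C fit in cache.

def tiled_matmul(A, B, n, tile):
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a * B[k][j]
    return C

n = 4
I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # identity
M = [[float(i * n + j) for j in range(n)] for i in range(n)]
print(tiled_matmul(I, M, n, tile=2) == M)  # True: I x M = M
```

The tile size plays the role the paper studies: chosen too large, tiles overflow the cache (capacity misses); chosen without regard to array dimensions, tiles of a low-associativity cache can map to the same sets (conflict misses), which padding addresses.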
Revocable Ring Signature
Dennis Y. W. Liu, Joseph K. Liu, Yi Mu, Willy Susilo and Duncan S. Wong
Abstract views: 15323 | PDF (418KB) downloads: 10775
Group signature allows the anonymity of a real signer in a group to be revoked by a trusted party called the group manager. It also gives the group manager the absolute power of controlling the formation of the group. Ring signature, on the other hand, does not allow anyone to revoke the signer anonymity, while allowing the real signer to form a group (also known as a ring) arbitrarily without being controlled by any other party. In this paper, we propose a new variant of ring signature, called Revocable Ring Signature. The signature allows a real signer to form a ring arbitrarily while allowing a set of authorities to revoke the anonymity of the real signer. This new variant inherits the desirable properties of both group signature and ring signature in such a way that the real signer will be responsible for what it has signed, as the anonymity is revocable by authorities, while the real signer still has the freedom of ring formation. We provide a formal security model for revocable ring signature and propose an efficient construction which is proven secure under our security model.
Cited: Baidu(43)
A Robust and Fast Non-Local Means Algorithm for Image Denoising
Yan-Li Liu, Jin Wang, Xi Chen, Yan-Wen Guo, and Qun-Sheng Peng
Abstract views: 13735 | PDF (3844KB) downloads: 10370
In this paper, we propose a robust and fast image denoising method. The approach integrates the non-local means algorithm with the Laplacian pyramid. Given an image to be denoised, we first decompose it into a Laplacian pyramid. Exploiting the redundancy property of the Laplacian pyramid, we then perform non-local means on every level image of the pyramid. Essentially, we use the similarity of image features in the Laplacian pyramid as weights to denoise the image. Since the features extracted in the Laplacian pyramid are localized in spatial position and scale, they describe the image much better, and computing the similarity between them is more reasonable and more robust. Also, based on the efficient Summed Square Image (SSI) scheme and the Fast Fourier Transform (FFT), we present an accelerating algorithm to break the bottleneck of the non-local means algorithm: the similarity computation over comparison windows. After this speedup, our algorithm is fifty times faster than the original non-local means algorithm. Experiments demonstrate the effectiveness of our algorithm.
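The core non-local means computation the paper builds on can be sketched on a 1-D signal for brevity: each sample becomes a weighted average of all samples, with weights decaying in the squared distance between the small patches around them. Patch radius and the filtering parameter h are illustrative choices; the paper's Laplacian-pyramid integration and SSI/FFT acceleration are omitted.

```python
# 1-D non-local means sketch: weights come from patch similarity, not from
# spatial proximity, which is the defining idea of the algorithm.
import math

def nlm_1d(signal, patch_radius=1, h=0.5):
    n = len(signal)
    pad = patch_radius
    padded = [signal[0]] * pad + list(signal) + [signal[-1]] * pad
    patches = [padded[i:i + 2 * pad + 1] for i in range(n)]
    out = []
    for i in range(n):
        weights = []
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(patches[i], patches[j]))
            weights.append(math.exp(-d2 / (h * h)))
        total = sum(weights)
        out.append(sum(w * s for w, s in zip(weights, signal)) / total)
    return out

noisy = [1.0, 1.1, 0.9, 5.0, 1.0, 1.05]
print([round(v, 2) for v in nlm_1d(noisy)])
```

The quadratic cost of comparing every pair of patches is exactly the bottleneck the paper attacks with SSI and FFT.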
CASA: A New IFU Architecture for Power-Efficient Instruction Cache and TLB Designs
Han-Xin Sun, Kun-Peng Yang, Yu-Lai Zhao, Dong Tong, and Xu Cheng
Abstract views: 8831 | PDF (2317KB) downloads: 9954
The instruction fetch unit (IFU) usually dissipates a considerable portion of total chip power. In traditional IFU architectures, as soon as the fetch address is generated, it needs to be sent to the instruction cache and TLB arrays for instruction fetch. Since limited work can be done by the power-saving logic after the fetch address generation and before the instruction fetch, previous power-saving approaches usually suffer from the unnecessary restrictions of traditional IFU architectures. In this paper, we present CASA, a new power-aware IFU architecture, which effectively reduces the unnecessary restrictions on power-saving approaches and provides sufficient time and information for the power-saving logic of both the instruction cache and the TLB. By analyzing, recording, and utilizing the key information of the dynamic instruction flow early in the front-end pipeline, CASA brings the opportunity to maximize power efficiency and minimize performance overhead. Compared to the baseline configuration, the leakage and dynamic power of the instruction cache are reduced by 89.7% and 64.1% respectively, and the dynamic power of the instruction TLB is reduced by 90.2%. Meanwhile, the performance degradation in the worst case is only 0.63%. Compared to previous state-of-the-art power-saving approaches, the CASA-based approach saves IFU power more effectively, incurs less performance overhead, and achieves better scalability. It is promising that CASA can stimulate further work on architectural solutions for power-efficient IFU designs.
CLASCN: Candidate Network Selection for Efficient Top-k Keyword Queries over Databases
Jun Zhang, Zhao-Hui Peng, Shan Wang, and Hui-Jing Nie
Abstract views: 7265 | PDF (477KB) downloads: 9868
Keyword Search Over Relational Databases (KSORD) enables casual or Web users to easily access databases through free-form keyword queries. Improving the performance of KSORD systems is a critical issue in this area. In this paper, a new approach, CLASCN (Classification, Learning And Selection of Candidate Networks), is developed to efficiently perform top-k keyword queries in schema-graph-based online KSORD systems. In this approach, the Candidate Networks (CNs) from trained keyword queries or executed user queries are classified and stored in the databases, and the top-k results from the CNs are learned for constructing CN Language Models (CNLMs). The CNLMs are used to compute similarity scores between a new user query and the CNs from the query. The CNs with relatively large similarity scores, which are the most promising ones to produce top-k results, will be selected and executed. Currently, CLASCN is only applicable to past queries and New All-keyword-Used (NAU) queries, which are frequently submitted queries. Extensive experiments also show the efficiency and effectiveness of our CLASCN approach.
Topology-Based Recommendation of Users in Micro-Blogging Communities
Marcelo G. Armentano, Daniela Godoy, and Analia Amandi
2012, 27 (3): 624-634. DOI: 10.1007/s11390-012-1249-5
Abstract views: 4509 | PDF (1460KB) downloads: 9568
Nowadays, more and more users share real-time news and information in micro-blogging communities such as Twitter, Tumblr or Plurk. In these sites, information is shared via a followers/followees social network structure in which a follower will receive all the micro-blogs from the users he/she follows, named followees. With the increasing number of registered users in this kind of sites, finding relevant and reliable sources of information becomes essential. The reduced number of characters present in micro-posts along with the informal language commonly used in these sites make it difficult to apply standard content-based approaches to the problem of user recommendation. To address this problem, we propose an algorithm for recommending relevant users that explores the topology of the network considering different factors that allow us to identify users that can be considered good information sources. Experimental evaluation conducted with a group of users is reported, demonstrating the potential of the approach.
Server-Based Data Push Architecture for Multi-Processor Environments
Xian-He Sun, Surendra Byna, and Yong Chen
Abstract views: 16565 | PDF (543KB) downloads: 9541
Data access delay is a major bottleneck in utilizing current high-end computing (HEC) machines. Prefetching, where data is fetched before the CPU demands it, has been considered an effective solution for masking data access delay. However, current client-initiated prefetching strategies, where a computing processor initiates prefetching instructions, have many limitations. They do not work well for applications with complex, non-contiguous data access patterns. While technology advances continue to widen the gap between computing and data access performance, trading computing power for reduced data access delay has become a natural choice. In this paper, we present a server-based data-push approach and discuss its associated implementation mechanisms. In the server-push architecture, a dedicated server called the Data Push Server (DPS) initiates and proactively pushes data closer to the client in time. Issues such as what data to fetch, when to fetch, and how to push are studied. The SimpleScalar simulator is modified with a dedicated prefetching engine that pushes data for another processor to test DPS-based prefetching. Simulation results show that the L1 cache miss rate can be reduced by up to 97% (71% on average) over a superscalar processor for SPEC CPU2000 benchmarks that have high cache miss rates.
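One decision a push server must make, "what data to fetch", can be sketched with a generic stride detector over recent miss addresses: if the last deltas agree, push the next few blocks ahead of demand. This two-delta detector is a textbook scheme used for illustration, not necessarily the DPS prediction logic.

```python
# Sketch of a push decision: detect a stable stride in the client's recent
# miss addresses and return the next addresses to push proactively.

def push_candidates(miss_addresses, depth=3):
    """Given recent miss addresses, return addresses to push next."""
    if len(miss_addresses) < 3:
        return []
    d1 = miss_addresses[-1] - miss_addresses[-2]
    d2 = miss_addresses[-2] - miss_addresses[-3]
    if d1 != d2 or d1 == 0:
        return []                    # no stable stride detected
    last = miss_addresses[-1]
    return [last + d1 * k for k in range(1, depth + 1)]

print(push_candidates([0x1000, 0x1040, 0x1080]))  # [4288, 4352, 4416], i.e. 0x10c0..
print(push_candidates([1, 2, 4]))                 # []: irregular pattern
```

The paper's point is precisely that such logic can live in a dedicated server with spare computing power, rather than competing with the client processor for cycles.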
Orthogonal Methods Based Ant Colony Search for Solving Continuous Optimization Problems
Xiao-Min Hu, Jun Zhang, and Yun Li
Abstract views: 13347 | PDF (828KB) downloads: 9474
Research into ant colony algorithms for solving continuous optimization problems forms one of the most significant and promising areas in swarm computation. Although traditional ant algorithms are designed for combinatorial optimization, they have shown great potential in solving a wide range of optimization problems, including continuous optimization. Aimed at solving continuous problems effectively, this paper develops a novel ant algorithm termed "continuous orthogonal ant colony" (COAC), whose pheromone deposit mechanisms enable ants to search for solutions collaboratively and effectively. By using the orthogonal design method, ants in the feasible domain can explore their chosen regions rapidly and efficiently. By implementing an "adaptive regional radius" method, the proposed algorithm can reduce the probability of being trapped in local optima and therefore enhance the global search capability and accuracy. An elitist strategy is also employed to reserve the most valuable points. The performance of COAC is compared with that of two other ant algorithms for continuous optimization, API and CACO, by testing seventeen functions in the continuous domain. The results demonstrate that the proposed COAC algorithm outperforms the others.
Cryptanalysis of a Type of CRT-Based RSA Algorithms
Bao-Dong Qin, Ming Li, and Fan-Yu Kong
Abstract views: 14323 | PDF (401KB) downloads: 9413
It is well known that the Chinese Remainder Theorem (CRT) can greatly improve the performance of the RSA cryptosystem in both running time and memory requirements. However, if the implementation of CRT-based RSA is careless, an attacker can reveal some secret information by exploiting hardware fault cryptanalysis. In this paper, we present some fault attacks on a type of CRT-RSA algorithms, namely BOS-type schemes, including the original BOS scheme proposed by Blömer, Otto, and Seifert at CCS 2003 and its modified scheme proposed by Liu et al. at DASC 2006. We first demonstrate that if some special signed messages such as m = 0, ±1 are handled carelessly, they can be exploited by an adversary to completely break the security of both the BOS scheme and Liu et al.'s scheme. Then we present a new permanent fault attack on the BOS scheme with a success probability of about 25%. Lastly, we propose a polynomial-time attack on Liu et al.'s CRT-RSA algorithm, which combines physical fault injection and lattice reduction techniques when the public exponent is short.
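The classic CRT fault attack that BOS-type schemes are designed to defend against (Boneh, DeMillo and Lipton's attack) is easy to demonstrate with toy parameters: if one CRT half of a signature is computed faultily, gcd((s')^e - m, N) reveals a prime factor of N. The primes and message below are deliberately tiny, for illustration only.

```python
# Toy demonstration of the Bellcore/BDL fault attack on CRT-RSA: a single
# fault in the mod-q half of the signature leaks the secret factor p.
import math

p, q = 1009, 1013
N = p * q
e = 17
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # private exponent (Python 3.8+ modular inverse)
m = 123456 % N

def crt_sign(sp, sq):
    """Garner recombination of the two CRT half-signatures."""
    q_inv = pow(q, -1, p)
    h = (q_inv * (sp - sq)) % p
    return (sq + h * q) % N

sp = pow(m, d % (p - 1), p)  # correct mod-p half
sq = pow(m, d % (q - 1), q)  # correct mod-q half
assert pow(crt_sign(sp, sq), e, N) == m   # sanity: correct signature verifies

faulty = crt_sign(sp, (sq + 1) % q)       # inject a fault into the mod-q half
leaked = math.gcd(pow(faulty, e, N) - m, N)
print(leaked == p)  # True: the faulty signature leaks the factor p
```

The faulty signature is still correct modulo p but wrong modulo q, so (s')^e - m is divisible by p and not by q, and the gcd factors N. This is the leak that BOS-type checking schemes try to close, and that the paper shows can be re-opened by careless handling of special messages.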
Lighting Estimation of a Convex Lambertian Object Using Redundant Spherical Harmonic Frames
Wen-Yong Zhao, Shao-Lin Chen, Yuan Zheng, and Si-Long Peng
2013, 28 (3): 454-467. DOI: 10.1007/s11390-013-1347-z
Abstract views: 3670 | PDF (23295KB) downloads: 9402
An explicit lighting estimation from a single image of a Lambertian object is influenced by two factors: data incompletion and noise contamination. Measuring lighting consistency purely with the orthogonal spherical harmonic basis cannot achieve an accurate estimation. We present a novel signal-processing framework to represent the lighting field. We construct a redundant spherical harmonic frame with geometric symmetry on the sphere S². Spherical harmonic frames are defined over the generating rotation matrices about symmetry axes of finite symmetry subgroups of SO(3), and the generating functions are spherical harmonic basis functions. Compared with the orthogonal spherical harmonic basis, redundant spherical harmonic frames not only describe the multidirectional lighting distribution intuitively, but also resist noise theoretically. Subsequently, we analyze the relationship of the irradiance to the incoming radiance in terms of spherical harmonic frames, and reconstruct the lighting function filtered by the Lambertian BRDF (bidirectional reflectance distribution function). The experiments show that the frame coefficients of spherical harmonic frames can characterize complex lighting environments finely and robustly.
Performance of IEEE 802.15.4 Clusters with Power Management and Key Exchange
Fereshteh Amini, Moazzam Khan, Jelena Misic, and Hossein Pourreza
Abstract views: 13323 | PDF (2974KB) downloads: 9252
The IEEE 802.15.4 specification is a recent low-data-rate wireless personal area network standard. While basic security services are provided for, there is a lack of more advanced techniques that are indispensable in modern personal area network applications. In addition, the performance implications of those services are not known. In this paper, we describe a secure data exchange protocol based on the ZigBee specification and built on top of the IEEE 802.15.4 link layer. This protocol includes a key exchange mechanism. We assume that all nodes apply a power management technique based on the constant event sensing reliability required by the coordinator. Power management generates random sleep times at every node, which on average fairly distributes the sensing load among the nodes. Key exchange is initiated by a cluster coordinator after a given number of sensing packets have been received by the coordinator. We develop a simulation model of the key exchange and power management technique, integrate it into the cluster's reliable sensing function, and evaluate the impact of the security function and its periodicity on cluster performance.
Delay and Capacity Trade-offs in Mobile Wireless Networks with Infrastructure Support
Zhuo Li (李卓), Wen-Zhong Li (李文中), Member, CCF, ACM, IEEE, Song Guo (郭嵩), Senior Member, IEEE, Member, ACM, Sang-Lu Lu, (陆桑璐), Senior Member, CCF, Member, ACM, IEEE, and Dao-Xu Chen (陈道蓄), Senior Member, CCF, Member, ACM, IEEE
2012, (2): 328-340. DOI: 10.1007/s11390-012-1226-z
Abstract views: 4749 | PDF (664KB) downloads: 9226
In this paper, we investigate the trade-offs between delay and capacity in mobile wireless networks with infrastructure support. We consider three different mobility models: the independent and identically distributed (i.i.d.) mobility model, the random walk mobility model with constant speed, and the Lévy flight mobility model. For the i.i.d. mobility model and the random walk mobility model with speed Θ(1/√n), we obtain theoretical results for the average packet delay when the capacity is Θ(1) and Θ(1/√n) respectively, where n is the number of nodes. We find that the optimal average packet delay is achieved at a capacity determined by the number of gateways K. It is proved that the average packet delay D(n) divided by the capacity λ(n) is bounded below by n/(K·W). When ω(√n) ≤ K < n, the critical average delay for capacity, compared with static hybrid wireless networks, is Θ(K²/n). The Lévy flight mobility model is based on human mobility and is more sophisticated. For this model with parameter α, it is found that D(n)/λ(n) > O(n^((1-η)·(α+1)/2) ln n) when K = O(n^η) (0 ≤ η < 1). We also prove that when ω(√n) ≤ K < n, the critical average delay is Θ(n^(α-1/2)·K).
WNN-Based Network Security Situation Quantitative Prediction Method and Its Optimization
Ji-Bao Lai , Hui-Qiang Wang, Xiao-Wu Liu, Ying Liang, Rui-Juan Zheng, and Guo-Sheng Zhao
Abstract views: 13549 | PDF (615KB) downloads: 8572
The accurate and real-time prediction of the network security situation is the premise and basis of preventing intrusions and attacks in a large-scale network. In order to predict the security situation more accurately, a quantitative prediction method of the network security situation based on a Wavelet Neural Network with Genetic Algorithm (GAWNN) is proposed. After analyzing the past and current network security situation in detail, we build a network security situation prediction model based on a wavelet neural network that is optimized by an improved genetic algorithm, and then adopt GAWNN to predict the non-linear time series of the network security situation. Simulation experiments prove that the proposed method has advantages over the Wavelet Neural Network (WNN) method and the Back Propagation Neural Network (BPNN) method with the same architecture in convergence speed, functional approximation and prediction accuracy. Moreover, the prediction results reveal the system security trend and the patterns by which security analysts and administrators can adjust security policies in near real time.
Histogram-Based Estimation of Distribution Algorithm: A Competent Method for Continuous Optimization
Nan Ding, Shu-De Zhou, and Zeng-Qi Sun
Abstract views: 11598 | PDF (384KB) downloads: 8526
Designing efficient estimation of distribution algorithms (EDAs) for optimizing complex continuous problems is still a challenging task. This paper utilizes a histogram probabilistic model to describe the distribution of the population and to generate promising solutions. The advantage of the histogram model, its intrinsic multimodality, makes it well suited to describing the solution distribution of complex and multimodal continuous problems. To make the histogram model explore and exploit the search space more efficiently, several strategies are brought into the algorithm: the surrounding effect reduces the population size needed to estimate the model with a certain number of bins, and the shrinking strategy guarantees the accuracy of optimal solutions. Furthermore, this paper shows that the histogram-based EDA can give comparable or even much better performance than the predominant EDAs based on Gaussian models.
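The loop the paper builds on can be sketched in its barest form for a 1-D objective: estimate a histogram from the fittest samples, then sample the next generation from it. Bin count, population size and the absence of range shrinking are simplified stand-ins for the paper's surrounding and shrinking strategies; all parameters are illustrative.

```python
# Bare-bones histogram-based EDA for a 1-D continuous objective: select the
# elite, fit a histogram to it, sample the next population from the model.
import random

def histogram_eda(f, lo, hi, bins=20, pop=200, elite=50, gens=30, seed=1):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        xs.sort(key=f)
        best = xs[:elite]                        # selection step
        width = (hi - lo) / bins
        counts = [0] * bins                      # estimate the histogram model
        for x in best:
            counts[min(bins - 1, int((x - lo) / width))] += 1
        total = sum(counts)
        xs = []                                  # sample the next generation
        for _ in range(pop):
            r = rng.uniform(0, total)
            acc, b = 0.0, 0
            for b, c in enumerate(counts):
                acc += c
                if r <= acc:
                    break
            xs.append(rng.uniform(lo + b * width, lo + (b + 1) * width))
    return min(xs, key=f)

x = histogram_eda(lambda v: (v - 3.0) ** 2, lo=-10.0, hi=10.0)
print(abs(x - 3.0) < 0.5)  # the minimum near x = 3 is located
```

The multimodality the abstract emphasizes is visible here: nothing forces the histogram to be unimodal, so mass can persist in several bins at once, unlike a single Gaussian model.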
Summarizing Software Artifacts: A Literature Review
Najam Nazar, Yan Hu, He Jiang
2016, 31 (5): 883-909. DOI: 10.1007/s11390-016-1671-1
Abstract views: 1685 | PDF (2126KB) downloads: 8515
This paper presents a literature review of the field of summarizing software artifacts, focusing on bug reports, source code, mailing lists and developer discussions. From Jan. 2010 to Apr. 2016, numerous summarization techniques, approaches, and tools were proposed to satisfy the ongoing demand for improving software performance and quality and for facilitating developers' understanding of the problems at hand. Since the aforementioned artifacts contain both structured and unstructured data at the same time, researchers have applied different machine learning and data mining techniques to generate summaries. Therefore, this paper first intends to provide a general perspective on the state of the art, describing the types of artifacts, approaches to summarization, as well as the common portions of experimental procedures shared among these artifacts. Moreover, we discuss the applications of summarization, i.e., what tasks at hand have been achieved through summarization. Next, this paper presents tools that are generated for summarization tasks or employed during summarization tasks. In addition, we present the different summarization evaluation methods employed in the selected studies, as well as other important factors that are used for the evaluation of generated summaries, such as adequacy and quality. Moreover, we briefly present modern communication channels and the commonalities and complementarities among different software artifacts. Finally, some thoughts about the challenges applicable to the existing studies in general, as well as future research directions, are also discussed. This survey of existing studies will allow future researchers to have a wide and useful background knowledge of the main and important aspects of this research field.
ROPAS: Cross-Layer Cognitive Architecture for Mobile UWB Networks
Chittabrata Ghosh, Bin Xie, and Dharma P. Agrawal
Abstract (11678)      PDF(pc) (1228KB) (8324)
Allocating bandwidth to unlicensed users without significantly increasing the interference on existing licensed users is a challenge for Ultra Wideband (UWB) networks. We present a novel Rake Optimization and Power Aware Scheduling (ROPAS) architecture for UWB networks. Since UWB communication is rich in multipath effects, a Rake receiver is used for path diversity. The optimized Rake receiver in our ROPAS architecture reduces the computational complexity, in terms of the number of multiplications and additions, needed to derive the weight attached to each finger of the Rake receiver. Our work uses Cognitive Radio (CR) for dynamic channel allocation among the requesting users while limiting the average power transmitted in each sub-band. In the ROPAS architecture, dynamic channel allocation is achieved by a CR-based cross-layer design between the PHY and Medium Access Control (MAC) layers. Additionally, the maximum number of parallel transmissions within a frame interval is formulated as an optimization problem. The optimal decision is based on the distance between a transmitter-receiver pair, the bit error rate, and the frequency of requests by a particular application. Moreover, the optimization problem incorporates priority levels to differentiate among requesting applications. This provides fairness and higher throughput among services with the varying power constraints and data rates required in a UWB network.
WWW Business Applications Based on the Cellular Model
Toshio Kodama, Tosiyasu L. Kunii, and Yoichi Seki
Abstract (11874)      PDF(pc) (1263KB) (8181)
A cellular model based on the Incrementally Modular Abstraction Hierarchy (IMAH) is a novel model that can represent the architecture of, and changes in, cyberworlds, preserving invariants from a general level to a specific one. We have developed a data processing system called the Cellular Data System (CDS). In developing business applications, CDS prevents combinatorial explosion during business design and testing. In this paper, we first design and implement a widely applicable algebra on the presentation level. We then develop two general business applications using CDS, 1) a customer information management system and 2) an estimate system, and verify their effectiveness.
Constructing Maximum Entropy Language Models for Movie Review Subjectivity Analysis
Bo Chen, Hui He, and Jun Guo
Abstract (11264)      PDF(pc) (451KB) (7938)
Document subjectivity analysis has become an important aspect of web text content mining. The problem is similar to traditional text categorization, so many related classification techniques can be adapted to it. One significant difference, however, is that more linguistic or semantic information is required to better estimate the subjectivity of a document. This paper therefore focuses on two aspects: how to extract useful and meaningful language features, and how to construct appropriate language models efficiently for this special task. For the first issue, we apply a Global-Filtering and Local-Weighting strategy to select and evaluate language features over a series of n-grams of different orders and within various distance windows. For the second issue, we adopt Maximum Entropy (MaxEnt) modeling to construct our language model framework. Besides the classical MaxEnt models, we construct two kinds of improved models with Gaussian and exponential priors, respectively. Detailed experiments show that, with well-selected and well-weighted language features, MaxEnt models with exponential priors are significantly better suited to the text subjectivity analysis task.
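As background on the modeling idea (not the authors' feature set or implementation), a MaxEnt model with a Gaussian prior on the weights is equivalent to L2-regularized logistic regression; a minimal binary version trained by plain gradient ascent might look like this:

```python
import math

def train_maxent_l2(xs, ys, sigma2=1.0, lr=0.1, epochs=200):
    """Binary MaxEnt (logistic) model with a Gaussian prior on the weights.

    A Gaussian prior with variance sigma2 corresponds to an L2 penalty
    w_k^2 / (2 * sigma2) on each weight; training maximizes the penalized
    log-likelihood by gradient ascent.
    """
    dim = len(xs[0])
    w = [0.0] * dim
    for _ in range(epochs):
        grad = [-wk / sigma2 for wk in w]        # gradient of the log-prior
        for x, y in zip(xs, ys):
            z = sum(wk * xk for wk, xk in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))       # P(y = 1 | x)
            for k in range(dim):
                grad[k] += (y - p) * x[k]        # log-likelihood gradient
        w = [wk + lr * gk for wk, gk in zip(w, grad)]
    return w

def predict(w, x):
    """Label 1 iff the model puts more than half its mass on class 1."""
    z = sum(wk * xk for wk, xk in zip(w, x))
    return 1 if z > 0 else 0
```

The toy trainer above handles only the binary case with dense features; the feature-selection strategy and the exponential-prior variant described in the abstract are separate concerns.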
Algebraic Construction for Zero-Knowledge Sets
Rui Xue, Ning-Hui Li, and Jiang-Tao Li
Abstract (13838)      PDF(pc) (421KB) (7847)
Zero-knowledge sets (ZKS) are a cryptographic primitive introduced by Micali, Rabin, and Kilian at FOCS 2003, and have been intensively studied since. However, all existing ZKS schemes follow the basic structure of Micali et al.: they employ a Merkle tree as the underlying structure and mercurial commitments as the commitment units for nodes of the tree, and the proof for any query consists of an authentication chain. In this paper we propose a new algebraic scheme that is completely different from all existing schemes. Our new scheme is computationally secure under the standard strong RSA assumption, and uses neither mercurial commitments nor a tree structure. In fact, the prover in our construction commits to the desired set without any trapdoor information, which is another key difference from previous approaches.
Short Group Signatures Without Random Oracles
Bo Qin, Qian-Hong Wu, Willy Susilo, Yi Mu, Yu-Min Wang, and Zheng-Tao Jiang
Abstract (14206)      PDF(pc) (526KB) (7812)
We propose short group signature (GS) schemes that are provably secure without random oracles. Our basic scheme is about 14 times shorter than the Boyen-Waters GS scheme from Eurocrypt 2006, and 42% shorter than the recent GS schemes due to Ateniese et al. The security proofs are given in the Universally Composable model, so they remain valid not only when our scheme is executed in isolation, but also when it is composed with other secure cryptographic primitives. We also present several new computational assumptions and justify them in the generic group model. These assumptions are useful in the design of high-level protocols and may be of independent interest.
An Efficient Clustering Algorithm for k-Anonymisation
Grigorios Loukides and Jian-Hua Shao
Abstract (13337)      PDF(pc) (3594KB) (7745)
K-anonymisation is an approach to protecting individuals from being identified from data. Good k-anonymisations should retain data utility and preserve privacy, but few methods have considered these two conflicting requirements together. In this paper, we extend our previous work on a clustering-based method for balancing data utility and privacy protection, and propose a set of heuristics to improve its effectiveness. We introduce new clustering criteria that treat utility and privacy on equal terms and propose sampling-based techniques to optimally set up its parameters. Extensive experiments show that the extended method achieves good accuracy in query answering and is able to prevent linking attacks effectively.
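To illustrate the general idea of clustering-based k-anonymisation (a toy sketch for purely numeric quasi-identifiers, not the authors' clustering criteria or heuristics): group records into clusters of at least k near neighbours, then generalise each attribute to its cluster-wide range, so every published record is indistinguishable from at least k-1 others:

```python
def k_anonymise(records, k):
    """Greedy clustering-based k-anonymisation sketch for numeric records.

    Repeatedly seed a cluster with an unassigned record and add its k-1
    nearest unassigned neighbours (L1 distance), then generalise each
    attribute of the cluster to its [min, max] range.
    Assumes len(records) >= k.
    """
    def l1(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    remaining = sorted(records)
    clusters = []
    while len(remaining) >= k:
        seed = remaining.pop(0)
        remaining.sort(key=lambda r: l1(seed, r))
        clusters.append([seed] + remaining[:k - 1])
        remaining = remaining[k - 1:]
    if remaining:                       # fewer than k left over: merge them
        clusters[-1].extend(remaining)
    out = []
    for cluster in clusters:
        ranges = tuple(
            (min(r[d] for r in cluster), max(r[d] for r in cluster))
            for d in range(len(cluster[0])))
        out.extend([ranges] * len(cluster))
    return out
```

Tighter clusters give narrower ranges and hence higher utility, which is exactly the utility/privacy trade-off the paper's criteria are designed to balance.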
AHT Bézier Curves and NUAHT B-Spline Curves
Gang Xu and Guo-Zhao Wang
Abstract (9839)      PDF(pc) (2756KB) (7548)
In this paper, we present two new unified mathematical models of conics and polynomial curves, called algebraic hyperbolic trigonometric (AHT) Bézier curves and non-uniform algebraic hyperbolic trigonometric (NUAHT) B-spline curves of order n, which are generated over the space span{sin t, cos t, sinh t, cosh t, 1, t, ..., t^(n-5)}, n >= 5. The two kinds of curves share most of the properties of Bézier curves and B-spline curves in polynomial space. In particular, they can represent exactly some remarkable transcendental curves such as the helix, the cycloid and the catenary. The subdivision formulae of these new kinds of curves are also given. The generation of the tensor product surfaces is straightforward. Using the new models, we present the control mesh representations of two classes of minimal surfaces.
Engineering the Divide-and-Conquer Closest Pair Algorithm
Minghui Jiang and Joel Gillespie
Abstract (10903)      PDF(pc) (891KB) (7545)
We improve the famous divide-and-conquer algorithm of Bentley and Shamos for the planar closest-pair problem. For n points in the plane, our algorithm keeps the optimal O(n log n) time complexity and, using a circle-packing property, computes at most 7n/2 Euclidean distances, improving Ge et al.'s bound of (3n log n)/2 Euclidean distances. We present experimental results from comparative studies of four different versions of the divide-and-conquer closest-pair algorithm and propose two effective heuristics.
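The classic Bentley-Shamos scheme that the paper refines can be sketched as follows. This is a plain O(n log^2 n) illustration (the strip is re-sorted at each level), not the authors' 7n/2-distance variant:

```python
import math

def closest_pair(points):
    """Smallest pairwise Euclidean distance among >= 2 points.

    Divide and conquer: sort by x, split in half, recurse, then scan a
    vertical strip of width d around the split line, comparing each strip
    point only to its few successors in y-order.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def solve(pts):
        n = len(pts)
        if n <= 3:                              # brute force small cases
            return min(dist(p, q) for i, p in enumerate(pts)
                       for q in pts[i + 1:])
        mid = n // 2
        x_mid = pts[mid][0]
        d = min(solve(pts[:mid]), solve(pts[mid:]))
        # candidates within d of the dividing line, sorted by y
        strip = sorted((p for p in pts if abs(p[0] - x_mid) < d),
                       key=lambda p: p[1])
        for i, p in enumerate(strip):
            for q in strip[i + 1:i + 8]:        # O(1) neighbours suffice
                if q[1] - p[1] >= d:
                    break
                d = min(d, dist(p, q))
        return d

    return solve(sorted(points))                # pre-sort by x (then y)
```

The constant 7 in the strip scan comes from a packing argument: at most a fixed number of points at pairwise distance >= d fit in a d x 2d rectangle, which is the same style of circle-packing reasoning the paper sharpens.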
Leakage Current Estimation of CMOS Circuit with Stack Effect
Yong-Jun Xu, Zu-Ying Luo, Xiao-Wei Li, Li-Jian Li, and Xian-Long Hong
Abstract (2111)      PDF(pc) (510KB) (7426)
Leakage current of CMOS circuits increases dramatically as technology scales down, and has become a critical issue for high-performance systems. Subthreshold, gate, and reverse-biased junction band-to-band tunneling (BTBT) leakage are considered the three main components of total leakage current. Although accurate leakage models have been widely discussed, how to accurately estimate the leakage current of large-scale circuits within acceptable time remains unsolved. In this paper, the authors first examine the stack effect in CMOS technology and propose a new, simple gate-level leakage current model. Then, a table-lookup-based total leakage current simulator is built on top of the model. To validate the simulator, accurate leakage current is simulated at the circuit level for comparison using the popular simulator HSPICE. Some further studies, such as maximum leakage current estimation, minimum leakage current generation, and a high-level average leakage current macromodel, are introduced in detail. Experiments on the ISCAS85 and ISCAS89 benchmarks demonstrate that the two proposed leakage current estimation methods are very accurate and efficient.
New Sealed-Bid Electronic Auction with Fairness, Security and Efficiency
Chia-Chi Wu, Chin-Chen Chang, and Iuon-Chang Lin
Abstract (10436)      PDF(pc) (304KB) (7324)
Electronic sealed-bid auction schemes usually share a common drawback: the third party (auction host) can conspire with a malicious bidder to leak all bidding prices before the opening stage, allowing the malicious bidder to win the auction with an optimal bidding price. Recently, Liaw et al. proposed an auction protocol for electronic online bidding in which they designed a deposit deduction certification for government procurement. However, it also has the above-mentioned flaw. Moreover, we found additional security drawbacks in their protocol. First, a bidder can forge a bidding receipt to claim that he or she is a valid auction winner. Second, it may suffer from a third-party forging attack. Third, their protocol leaks some of the bidders' private information to the third party, such as the bidder's bank account number and authorization code, so it cannot protect the bidders' privacy at all. In this paper, we not only point out the drawbacks of the previous scheme but also propose a new electronic auction scheme that overcomes them. Furthermore, our online sealed-bid auction scheme reduces the computational complexity.
A Cloud-Based BPM Architecture with User-End Distribution of Non-Compute-Intensive Activities and Sensitive Data
Yan-Bo Han (韩燕波), Jun-Yi Sun (孙君意), Gui-Ling Wang (王桂玲) and Hou-Fu Li (李厚福)
2010, 25 (6): 1157-1167.   DOI: 10.1007/s11390-010-1092-5
Abstract (4968)      PDF(pc) (652KB) (7302)
While cloud-based BPM (Business Process Management) shows potential for inherent scalability and expenditure reduction, issues such as user autonomy, privacy protection and efficiency have emerged as major concerns. Users may have their own rudimentary or even full-fledged BPM systems at their end, possibly embodied by local EAI systems, but still intend to make use of cloud-side infrastructure services and BPM capabilities, which may appear as PaaS (Platform-as-a-Service) services. A whole business process may contain a number of non-compute-intensive activities for which cloud computing is over-provision. Moreover, some users fear data leakage and loss of privacy if their sensitive data is processed in the cloud. This paper proposes and analyzes a novel architecture for cloud-based BPM that supports user-end distribution of non-compute-intensive activities and sensitive data. An approach to the optimal distribution of activities and data that synthetically utilizes both user-end and cloud-side resources is discussed. Experimental results show that with suitable distribution schemes, data privacy can be satisfactorily protected, and resources on both sides can be utilized at lower cost.
Efficient Optimization of Multiple Subspace Skyline Queries
Zhen-Hua Huang, Jian-Kui Guo, Sheng-Li Sun, and Wei Wang
Abstract (9023)      PDF(pc) (546KB) (7177)
We present the first efficient sound and complete algorithm, AOMSSQ, for optimizing multiple subspace skyline queries simultaneously. We first identify three performance problems of the naïve approach, SUBSKY, which can be used to process an arbitrary single-subspace skyline query. We then propose a cell-dominance computation algorithm, CDCA, to efficiently overcome the drawbacks of SUBSKY; in particular, a novel pruning technique in CDCA dramatically decreases the query time. Finally, based on the CDCA algorithm and the sharing mechanism between subspaces, we present and discuss the AOMSSQ algorithm and prove it sound and complete. Detailed theoretical analyses and extensive experiments demonstrate that our algorithms are both efficient and effective.
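For background, a subspace skyline keeps the points not dominated by any other point when only the chosen dimensions are compared. A naïve quadratic baseline, purely illustrative and unrelated to the SUBSKY/CDCA internals, is:

```python
def dominates(p, q, dims):
    """p dominates q in the given subspace: no worse in any chosen
    dimension and strictly better in at least one (smaller is better)."""
    return (all(p[d] <= q[d] for d in dims)
            and any(p[d] < q[d] for d in dims))

def subspace_skyline(points, dims):
    """Naive O(n^2) skyline of `points` restricted to dimensions `dims`."""
    return [p for p in points
            if not any(dominates(q, p, dims) for q in points)]
```

Evaluating many subspace queries by re-running this baseline repeats most of the dominance work, which is the redundancy that sharing results between subspaces is meant to eliminate.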
Malware-Propagative Mobile Ad Hoc Networks: Asymptotic Behavior Analysis
Vasileios Karyotis, Anastasios Kakalis, and Symeon Papavassiliou
Abstract (12599)      PDF(pc) (541KB) (7013)
In this paper, the spreading of malicious software over ad hoc networks, where legitimate nodes are prone to propagate the infections they receive from either an attacker or their already infected neighbors, is analyzed. Considering the Susceptible-Infected-Susceptible (SIS) node infection paradigm we propose a probabilistic model, on the basis of the theory of closed queuing networks, that aims at describing the aggregated behavior of the system when attacked by malicious nodes. Because of its nature, the model is also able to deal more effectively with the stochastic behavior of attackers and the inherent probabilistic nature of the wireless environment. The proposed model is able to describe accurately the asymptotic behavior of malware-propagative large scale ad hoc networking environments. Using the Norton equivalent of the closed queuing network, we obtain analytical results for its steady state behavior, which in turn is used for identifying the critical parameters affecting the operation of the network. Finally, through modeling and simulation, some additional numerical results are obtained with respect to the behavior of the system when multiple attackers are present, and regarding the time-dependent evolution and impact of an attack.
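The SIS dynamics assumed by the model can be illustrated with a toy discrete-time simulation (hypothetical parameters; the paper's analysis proceeds via closed queuing networks, not a simulation of this form):

```python
import random

def simulate_sis(adj, beta, delta, infected0, steps, seed=0):
    """Discrete-time SIS epidemic on a graph given as adjacency lists.

    Per step, every infected node recovers with probability delta, and
    every susceptible node is infected by each infected neighbour
    independently with probability beta.  Returns the infected fraction
    after each step.
    """
    rng = random.Random(seed)          # fixed seed for reproducibility
    infected = set(infected0)
    history = []
    for _ in range(steps):
        cured = {v for v in infected if rng.random() < delta}
        newly = {v for v in range(len(adj))
                 if v not in infected
                 and any(u in infected and rng.random() < beta
                         for u in adj[v])}
        infected = (infected - cured) | newly
        history.append(len(infected) / len(adj))
    return history
```

Because recovered nodes immediately become susceptible again, the infected fraction settles around a steady-state level determined by beta, delta and the topology, which is the asymptotic behavior the queuing-network model characterizes analytically.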
A Protocol for a Private Set-Operation
Rong-Hua Li and Chuan-Kun Wu
Abstract (11270)      PDF(pc) (340KB) (6885)
A new private set-operation problem is proposed. Suppose there are n parties, each owning a secret set. Let one of them, say P, be the leader, S be P's secret set, and t (less than n-1) be a threshold value. For each element w of S, if w appears more than t times in the other parties' sets, then P learns which parties' sets include w; otherwise P cannot know whether w appears in any party's set. For this problem, a secure protocol is proposed in the semi-honest model based on a semantically secure homomorphic encryption scheme, a secret sharing scheme, and the polynomial representation of sets. The protocol needs only a constant number of communication rounds.
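The polynomial representation of sets used as a building block encodes a set as the monic polynomial whose roots are the set's elements. In the actual protocol the coefficients would be handled under homomorphic encryption; the plaintext idea is simply:

```python
def set_to_poly(elems):
    """Coefficients (lowest degree first) of p(x) = prod(x - w)
    over the elements w, i.e. the polynomial whose roots are the set."""
    coeffs = [1]                             # the constant polynomial 1
    for w in elems:
        shifted = [0] + coeffs               # x * p(x)
        scaled = [w * c for c in coeffs] + [0]   # w * p(x), padded
        coeffs = [a - b for a, b in zip(shifted, scaled)]
    return coeffs

def evaluate(coeffs, x):
    """Horner evaluation of the polynomial at x."""
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

def is_member(coeffs, w):
    """w belongs to the encoded set iff the polynomial vanishes at w."""
    return evaluate(coeffs, w) == 0
```

Membership then reduces to a polynomial evaluation, an operation that homomorphic encryption can perform over encrypted coefficients without revealing the set.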
An Improved HEAPSORT Algorithm with n log n - 0.788928n Comparisons in the Worst Case
Xiao-Dong Wang and Ying-Jie Wu
Abstract (14288)      PDF(pc) (302KB) (6737)
A new variant of HEAPSORT is presented in this paper. The algorithm is not an internal sorting algorithm in the strong sense, since extra storage for n integers is necessary. The basic idea is similar to the classical HEAPSORT, but the heap is rebuilt in a different way: only one comparison is used at each node. The new reheapify procedure walks down a path in the heap until a leaf is reached, relaxing the requirement that the element placed in the root go immediately to its final destination. The new algorithm requires about n log n - 0.788928n comparisons in the worst case and n log n - n comparisons on average, which is only about 0.4n more than necessary. It beats, on average, even the clever variants of QUICKSORT if n is not very small. The difference between the worst case and the best case indicates that there is still room for improvement by constructing the heap more carefully.
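The one-comparison-per-node reheapify idea is in the spirit of bottom-up heapsort: walk the path of larger children to a leaf using a single comparison per level, then climb back up to place the displaced element. A sketch of that general technique (not the paper's exact n log n - 0.788928n variant, which also uses extra storage):

```python
def bottom_up_sift(a, root, end):
    """Re-insert a[root] into the max-heap a[root:end].

    Phase 1 descends the path of larger children to a leaf, moving each
    larger child up with ONE comparison per node; phase 2 climbs back up
    until the displaced element fits.
    """
    item = a[root]
    j = root
    while 2 * j + 1 < end:                   # phase 1: descend to a leaf
        c = 2 * j + 1
        if c + 1 < end and a[c + 1] > a[c]:  # pick the larger child
            c += 1
        a[j] = a[c]
        j = c
    while j > root and a[(j - 1) // 2] < item:   # phase 2: climb back up
        a[j] = a[(j - 1) // 2]
        j = (j - 1) // 2
    a[j] = item

def heapsort(a):
    """In-place heapsort built on the bottom-up sift."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):      # build the max-heap
        bottom_up_sift(a, i, n)
    for end in range(n - 1, 0, -1):          # repeatedly extract the max
        a[0], a[end] = a[end], a[0]
        bottom_up_sift(a, 0, end)
    return a
```

The saving over classical sift-down is that the descent never compares the displaced element itself; since the element was just taken from a leaf, it tends to belong near the bottom, so the climb in phase 2 is short on average.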
Progress and Challenge of Artificial Intelligence
Zhong-Zhi Shi and Nan-Ning Zheng
Abstract (8286)      PDF(pc) (438KB) (6689)
Artificial Intelligence (AI) is generally considered a subfield of computer science that is concerned with simulating, extending and expanding human intelligence. AI has enjoyed tremendous success over the last fifty years. In this paper we focus only on visual perception, granular computing, agent computing and the semantic grid. Human-level intelligence is the long-term goal of AI; reaching it will require joint research on the basic theory and technology of intelligence across brain science, cognitive science, artificial intelligence and other disciplines. A new cross-disciplinary field, intelligence science, is developing rapidly. Future challenges are given in the final section.
