Bimonthly    Since 1986
ISSN 1000-9000(Print)
/1860-4749(Online)
CN 11-2296/TP
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal Of Computer Science and Technology
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
Other Countries: Springer
 
  • Table of Contents
      05 September 2009, Volume 24 Issue 5
    Preface
    Preface
    Ling Liu and Wei-Song Shi
    Journal of Computer Science and Technology, 2009, 24 (5): 805-807. 

    We have witnessed the necessity of collaboration and resource sharing in many distributed applications, e.g., grid computing platforms for e-Science, the PlanetLab experimental platform for distributed systems and overlay networks, P2P file sharing for fast information dissemination, cooperative caching for Web content delivery, enterprise collaboration in E-commerce, location-based services, and so on. In such loosely coupled open computing systems, trust and reputation management becomes essential for building healthy collaboration among participants that have no prior knowledge of one another.
    Reputation-based trust management is widely recognized as an effective way for an open system to identify and avoid malicious nodes and to protect the system from possible misuses and abuses in a decentralized networked computing environment. In reputation-based trust systems, trust captures the expectation, inferred from available evidence, that other nodes will behave cooperatively. The trust-based reputation of a node in an open system is typically built on the history of its behavior in communication and transactions with other nodes. Reputation and trust management is highly interdisciplinary, involving researchers from communication and information systems, artificial intelligence, game theory, as well as the social sciences and evolutionary biology. With the increasing importance of reputation-based trust management in large-scale data-intensive systems, two research issues of particular importance confront the data and knowledge management community: how to deal with massive amounts of historical data in reputation management, and how to exploit reputation-based trust inference to build more reliable distributed information systems. Furthermore, trust management also includes the construction and management of trust hierarchies in operating systems and distributed applications, and trust assignments for downloaded programs and for software updates from vendors and applications, with or without signatures.
    This special section selects eight peer-reviewed papers that represent recent progress in trust and reputation management of future computing systems and applications, including P2P systems, E-commerce, desktop grid, online ratings and Web services.
    Reputation mechanisms are one of the key techniques for trust assessment in large-scale decentralized systems. The effectiveness of reputation-based trust management fundamentally relies on the assumption that an entity's future behavior may be predicted from its current and past behavior. However, a key challenge in designing a good reputation mechanism is the capability of handling dishonest behaviors. In the paper "On the Modeling of Honest Players in Reputation Systems'', Zhang, Wei and Yu investigate the modeling of honest entities in decentralized systems by building a statistical model for the transaction histories of honest players. This statistical model serves as a profiling tool to identify suspicious entities. By combining it with existing trust schemes, the authors show that their approach can be applied to entities whose transaction records are consistent with the statistical model. This approach limits the manipulation capability of adversaries, and thus can significantly improve the quality of reputation-based trust assessment.
    The continued advance of the Internet, in particular the wide deployment of Internet-enabled business-to-consumer (B2C) E-business solutions, has enabled many Small and Medium Enterprises (SMEs) to respond to the globalization challenge and opportunity by extending the geographic reach of their operations. Although many existing technologies are available for making transactions more secure, there remains the risk that an unknown provider will not comply with the protocol used. Thus, the decision of whom to trust and with whom to engage in a transaction becomes more difficult and usually falls on the shoulders of the individual nodes. In such an environment, reputation systems are one effective way to assist consumers in decision making.
The paper titled "On Desideratum for B2C E-Commerce Reputation Systems'' by Gutowska, Sloane and Buckley proposes a novel reputation model dedicated to distributed reputation systems for B2C E-commerce applications. This model can overcome the drawbacks of existing approaches by considering a number of issues that have some bearing on trust and reputation, such as age of ratings, transaction value, credibility of referees, number of malicious incidents, collusion, and unfair ratings.
    Peer-to-Peer Desktop Grid (P2PDG) has emerged as a pervasive cyber-infrastructure, tackling large-scale applications with high impacts. To handle trustworthiness issues of these services, trust and reputation schemes are proposed to establish trust among peers in P2PDG. In the paper titled "H-Trust: A Group Trust Management System for Peer-to-Peer Desktop Grid'', Zhao and Li propose a group trust management system, called H-Trust, inspired by the h-index aggregation technique. Leveraging the robustness of the h-index algorithm under incomplete and uncertain circumstances, H-Trust offers a robust personalized reputation evaluation mechanism for both individual and group trusts with minimal communication and computation overheads. The H-Trust scheme consists of five phases: trust recording, local trust evaluation, trust query, spatial-temporal update, and group reputation evaluation. Simulation-based experimental results demonstrate that H-Trust is robust and can identify and isolate malicious peers in large-scale systems even when a large portion of peers are malicious.
    Considering reputation systems in the context of decentralized systems built on distributed hash tables, Bonnaire and Rosas propose a new metric for reputation systems on top of a Distributed Hash Table in the paper titled "WTR: A Reputation Metric for Distributed Hash Tables Based on a Risk and Credibility Factor''. WTR uses a notion of risk to make applications aware of certain behaviors of malicious nodes. Simulation results show that the proposed metric can significantly reduce the number of malicious transactions, and that it also provides very strong resistance to several traditional attacks on reputation systems. Furthermore, the proposed solution scales easily and can be adapted to various types of Distributed Hash Table based systems.
    With the evolutionary development of E-commerce, online feedback-based rating systems are gaining increased popularity. A major challenge in building a trustworthy online rating system is dealing with unfair ratings from dishonest raters. In E-commerce systems, it is observed that collaborative dishonest raters can intentionally provide unfair ratings to boost or downgrade the rating scores of certain products or the reputation of other users. In the paper titled "Dishonest Behaviors in Online Rating Systems: Cyber Competition, Attack Models, and Attack Generator'', Yang and his colleagues argue that the lack of unfair rating data from real human users and of realistic attack behavior models has become an obstacle toward developing reliable rating systems. To address this problem, the authors design and launch a rating challenge to collect unfair rating data from real human users. To broaden the scope of the data collection, a comprehensive signal-based unfair rating detection system is also developed. Based on the analysis of real attack data, the paper identifies important features of unfair ratings, builds attack models, and develops an unfair rating generator. The models and the generator developed in this paper can be directly used to test current rating aggregation systems, as well as to assist the design of future rating systems.
    The decentralized nature of P2P systems demands enhanced trust between peers in order to enable reliable communication and exchange of services. In the paper titled "A Comprehensive and Adaptive Trust Model for Large-Scale P2P Networks'', Li and Gui propose an adaptive trusted decision-making method, which can reduce risk and improve system efficiency considerably. The novelty of this paper is its approach to determining the weights used in the general trust model. Two new parameters, the confidence factor and the feedback factor, are introduced to adaptively assign the weights to direct trust and feedback trust. This approach overcomes the weakness of traditional methods, in which the weights are assigned subjectively. Simulation-based experimental results show that the proposed model noticeably improves the accuracy of trust decision-making and offers better dynamic adaptation in handling various types of dynamic peer behaviors.
    In the paper "RCCtrust: A Combined Trust Model for Electronic Community'', Zhang and his colleagues propose RCCtrust for deriving reputation in electronic communities. RCCtrust combines Reputation-based, Content-based, and Context-based mechanisms to provide more accurate, fine-grained and efficient trust management for the electronic community. Concretely, RCCtrust extracts trust-related information from user-generated content and community context on the Web to extend reputation-based trust models. Following studies in sociology, RCCtrust exploits similarities between pairs of users to depict differentiated trust scales. The experimental results show that RCCtrust outperforms both a pure user-similarity-based method and a linear-decay trust-aware technique in accuracy and coverage for a Recommender System.
    Web services continue to gain popularity as a new distributed computing paradigm. Most Web services are built with XML documents via loosely coupled, self-describing software. In the paper titled "A Review-Based Reputation Evaluation Approach for Web Services'', Li, Du and Tian argue that reputation evaluation is an efficient way to mitigate threats in Web service environments. They note that the current feedback-based approach is inaccurate and ineffective because of its inherent limitations (e.g., the feedback quality problem), which greatly degrade the usefulness of feedback for service reputation evaluation. To tackle this problem, the authors present a novel trust evaluation approach that first assesses review quality in terms of multiple metrics and then improves service reputation evaluation based on the filtered reviews. Experimental results show the effectiveness and efficiency of the proposed approach through a comparison with naive feedback-based approaches.
    In summary, we are pleased to present this selection of eight articles in this special section on trust and reputation management in future computing systems and applications. We believe that this collection represents the state-of-the-art progress in the field of reputation and trust management. We trust that you will enjoy reading this special section.

    Special Section on Trust and Reputation Management in Future Computing Systems and Applications
    On the Modeling of Honest Players in Reputation Systems
    Qing Zhang, Wei Wei, and Ting Yu, Member, ACM
    Journal of Computer Science and Technology, 2009, 24 (5): 808-819. 

    Reputation mechanisms are a key technique for trust assessment in large-scale decentralized systems. The effectiveness of reputation-based trust management fundamentally relies on the assumption that an entity's future behavior may be predicted based on its past behavior. Though many reputation-based trust schemes have been proposed, they can often be easily manipulated and exploited, since an attacker may adapt its behavior and render the above assumption invalid. In other words, existing trust schemes are in general effective only when applied to honest players, who usually act with certain consistency, rather than to adversaries, who can behave arbitrarily. In this paper, we investigate the modeling of honest entities in decentralized systems. We build a statistical model for the transaction histories of honest players. This statistical model serves as a profiling tool to identify suspicious entities. It is combined with existing trust schemes to ensure that they are applied to entities whose transaction records are consistent with the statistical model. This approach limits the manipulation capability of adversaries, and thus can significantly improve the quality of reputation-based trust assessment.

    On Desideratum for B2C E-Commerce Reputation Systems
    Anna Gutowska, Andrew Sloane, and Kevan A. Buckley
    Journal of Computer Science and Technology, 2009, 24 (5): 820-832. 

    This paper reviews existing approaches to reputation systems, their constraints as well as available solutions. Furthermore, it presents and evaluates a novel and comprehensive reputation model devoted to distributed reputation systems for Business-to-Consumer (B2C) E-commerce applications that overcomes the discussed drawbacks. The algorithm offers a comprehensive approach as it considers a number of issues that have a bearing on trust and reputation, such as age of ratings, transaction value, credibility of referees, number of malicious incidents, collusion and unfair ratings. Moreover, it also extends existing frameworks based on information about past behaviour with other aspects affecting online trading decisions which relate to the characteristics of the providers, such as the existence of trustmark seals, payment intermediaries, privacy statements, security/privacy strategies, purchase protection/insurance, alternative dispute resolutions, as well as the existence of first-party information.
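The aggregation idea behind such a model can be illustrated with a small sketch: each rating is discounted by its age and weighted by the transaction value and the referee's credibility before averaging. The field names, the exponential decay, and the multiplicative weighting below are illustrative assumptions, not the paper's actual formulas.

```python
import math

def reputation(ratings, now, half_life=30.0):
    """Weighted average of rating scores in [0, 1].

    Each rating's weight combines three of the factors the model
    considers: freshness (exponential decay with a configurable
    half-life, in days), transaction value, and referee credibility.
    """
    num = den = 0.0
    for r in ratings:
        age = now - r["time"]                              # days since rated
        freshness = math.exp(-math.log(2) * age / half_life)
        weight = freshness * r["value"] * r["credibility"]
        num += weight * r["score"]
        den += weight
    return num / den if den else 0.0
```

A rating as old as the half-life counts half as much as a fresh one, so stale opinions fade without ever being discarded outright.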

    H-Trust: A Group Trust Management System for Peer-to-Peer Desktop Grid
    Huanyu Zhao, Student Member, IEEE, and Xiaolin Li, Member, ACM, IEEE
    Journal of Computer Science and Technology, 2009, 24 (5): 833-843. 

    Peer-to-Peer Desktop Grid (P2PDG) has emerged as a pervasive cyber-infrastructure tackling many large-scale applications with high impacts. As a burgeoning research area, P2PDG can support numerous applications, including scientific computing, file sharing, web services, and virtual organization for collaborative activities and projects. To handle trustworthiness issues of these services, trust and reputation schemes are proposed to establish trust among peers in P2PDG. In this paper, we propose a robust group trust management system, called H-Trust, inspired by the H-index aggregation technique. Leveraging the robustness of the H-index algorithm under incomplete and uncertain circumstances, H-Trust offers a robust personalized reputation evaluation mechanism for both individual and group trusts with minimal communication and computation overheads. We present the H-Trust scheme in five phases: trust recording, local trust evaluation, trust query, spatial-temporal update, and group reputation evaluation. The rationale for its design and the analysis of the algorithm are further investigated. To validate the performance of the H-Trust scheme, we designed the H-Trust simulator HTrust-Sim to conduct multi-agent-based simulations. Simulation results demonstrate that H-Trust is robust and can identify and isolate malicious peers in large-scale systems even when a large portion of peers are malicious.
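The H-index rule at the core of H-Trust is simple to state: a peer's score is the largest h such that at least h of its received ratings are h or higher. A minimal sketch of that aggregation step follows; the integer rating scale is an assumption here, and the paper's recording, query, and update phases are not modeled.

```python
def h_index_reputation(ratings):
    """Aggregate a peer's received transaction ratings into one score
    using the h-index rule: the largest h such that at least h of the
    ratings are >= h.  `ratings` is a list of non-negative integers.
    """
    ranked = sorted(ratings, reverse=True)
    h = 0
    for i, r in enumerate(ranked, start=1):
        if r >= i:          # the i-th best rating still supports h = i
            h = i
        else:
            break
    return h
```

Because the score depends on many ratings jointly, a handful of inflated or missing reports moves it little, which is the robustness property the paper leverages.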

    WTR: A Reputation Metric for Distributed Hash Tables Based on a Risk and Credibility Factor
    Xavier Bonnaire and Erika Rosas
    Journal of Computer Science and Technology, 2009, 24 (5): 844-854. 

    The growing number of popular peer-to-peer applications during the last five years has led researchers to focus on how to build trust in such very large scale distributed systems. Reputation systems have been shown to be a very good solution for building trust in the presence of malicious nodes. We propose in this paper a new metric for reputation systems on top of a Distributed Hash Table that uses a notion of risk to make applications aware of certain behaviours of malicious nodes. We show that our metric is able to significantly reduce the number of malicious transactions, and that it also provides very strong resistance to several traditional attacks on reputation systems. We also show that our solution can easily scale, and can be adapted to various Distributed Hash Tables.

    Dishonest Behaviors in Online Rating Systems: Cyber Competition, Attack Models, and Attack Generator
    Ya-Fei Yang, Member, IEEE, Qin-Yuan Feng, Yan (Lindsay) Sun, Member, IEEE, and Ya-Fei Dai
    Journal of Computer Science and Technology, 2009, 24 (5): 855-867. 

    Recently, online rating systems have been gaining popularity. Dealing with unfair ratings in such systems has been recognized as an important but challenging problem. Many unfair-rating detection approaches have been developed and evaluated against simple attack models. However, the lack of unfair rating data from real human users and of realistic attack behavior models has become an obstacle toward developing reliable rating systems. To solve this problem, we design and launch a rating challenge to collect unfair rating data from real human users. To broaden the scope of the data collection, we also develop a comprehensive signal-based unfair rating detection system. Based on the analysis of real attack data, we discover important features in unfair ratings, build attack models, and develop an unfair rating generator. The models and generator developed in this paper can be directly used to test current rating aggregation systems, as well as to assist the design of future rating systems.

    A Comprehensive and Adaptive Trust Model for Large-Scale P2P Networks
    Xiao-Yong Li and Xiao-Lin Gui, Senior Member, CCF
    Journal of Computer Science and Technology, 2009, 24 (5): 868-882. 

    Based on human psychological and cognitive behavior, a Comprehensive and Adaptive Trust (CAT) model for large-scale P2P networks is proposed. First, an adaptive trusted decision-making method based on HEW (Historical Evidences Window) is proposed, which can not only reduce risk and improve system efficiency, but also solve the trust forecasting problem when direct evidence is insufficient. Then, a direct trust computing method based on the IOWA (Induced Ordered Weighted Averaging) operator and a feedback trust converging mechanism based on DTT (Direct Trust Tree) are set up, which give the model better scalability than previous studies. At the same time, two new parameters, the confidence factor and the feedback factor, are introduced to adaptively assign the weights to direct trust and feedback trust, which overcomes the shortcoming of traditional methods, in which the weights are assigned subjectively. Simulation results show that, compared to existing approaches, the proposed model achieves remarkable improvements in the accuracy of trust decision-making and better dynamic adaptation in handling various dynamic behaviors of peers.
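The adaptive weighting of direct and feedback trust can be sketched as a convex combination whose weight tracks the confidence factor: the more direct evidence a node holds, the more its own observations count. The specific formula below is an illustrative assumption; the paper derives its weights from the HEW and IOWA machinery rather than a fixed expression.

```python
def overall_trust(direct_trust, feedback_trust, confidence):
    """Combine direct and feedback trust (both in [0, 1]) adaptively.

    `confidence` plays the role of the confidence factor: it is clamped
    to [0, 1] and used as the weight on direct trust, so the weights
    shift automatically instead of being fixed subjectively.
    """
    alpha = max(0.0, min(1.0, confidence))
    return alpha * direct_trust + (1.0 - alpha) * feedback_trust
```

With no direct history (confidence near 0) the node falls back on feedback trust; with ample history it relies almost entirely on its own experience.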

    RCCtrust: A Combined Trust Model for Electronic Community
    Yu Zhang, Student Member, CCF, Hua-Jun Chen, Xiao-Hong Jiang, Hao Sheng, and Zhao-Hui Wu, Senior Member, IEEE
    Journal of Computer Science and Technology, 2009, 24 (5): 883-892. 

    Previous trust models mainly focus on reputational mechanisms based on explicit trust ratings. However, the large amount of user-generated content and community context published on the Web is often ignored. Without enough information, previous trust models suffer from several problems: first, they cannot determine in which field one user trusts another, so many models assume that trust exists in all fields; second, some models are not able to delineate the variation of trust scales, and therefore assume that each user trusts all his friends to the same extent; third, since these models focus only on explicit trust ratings, the trust matrix is very sparse. To solve these problems, we present RCCtrust, a trust model that combines Reputation-, Content- and Context-based mechanisms to provide more accurate, fine-grained and efficient trust management for the electronic community. We extract trust-related information from user-generated content and community context on the Web to extend reputation-based trust models. We introduce role-based and behavior-based reasoning functionalities to infer users' interests and category-specific trust relationships. Following studies in sociology, RCCtrust exploits similarities between pairs of users to depict differentiated trust scales. The experimental results show that RCCtrust outperforms both a pure user-similarity-based method and a linear-decay trust-aware technique in accuracy and coverage for a Recommender System.

    A Review-Based Reputation Evaluation Approach for Web Services
    Hai-Hua Li, Xiao-Yong Du, Member, CCF, and Xuan Tian
    Journal of Computer Science and Technology, 2009, 24 (5): 893-900. 

    Web services are commonly perceived as an environment that offers both opportunities and threats. In this environment, one way to minimize threats is to use reputation evaluation, which can be computed, for example, through transaction feedback. However, the current feedback-based approach is inaccurate and ineffective because of its inherent limitations (e.g., the feedback quality problem). As the main source of feedback, existing on-line reviews vary greatly in quality, mainly because (1) they have no standard expression format, and (2) dishonest comments may exist among the reviews due to malicious attacks. To date, the review quality problem has not been well solved, which greatly degrades the usefulness of reviews for service reputation evaluation. Therefore, we first present a novel evaluation approach for review quality in terms of multiple metrics. Then, we further improve service reputation evaluation based on the filtered reviews. Experimental results show the effectiveness and efficiency of our proposed approach compared with naive feedback-based approaches.
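The two-step idea (score each review's quality on several metrics, drop the low-quality ones, then aggregate only the survivors) can be sketched as follows. The metric names, their equal weighting, and the threshold are hypothetical placeholders, not the paper's actual metrics.

```python
def service_reputation(reviews, quality_threshold=0.5):
    """Filter reviews by an aggregate quality score, then average the
    ratings of the reviews that pass.  Returns None when no review is
    judged trustworthy enough to support a reputation estimate.
    """
    def quality(review):
        # Equal-weight mean of per-metric quality scores in [0, 1];
        # the three metric names are illustrative assumptions.
        metrics = (review["length_score"],
                   review["consistency_score"],
                   review["rater_history_score"])
        return sum(metrics) / len(metrics)

    kept = [r for r in reviews if quality(r) >= quality_threshold]
    if not kept:
        return None
    return sum(r["rating"] for r in kept) / len(kept)
```

Filtering before aggregation is what distinguishes this scheme from naive feedback averaging: a flood of low-quality or dishonest reviews never reaches the reputation computation.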

    Architecture and High Performance Computer Systems
    ArchSim: A System-Level Parallel Simulation Platform for the Architecture Design of High Performance Computer
    Yong-Qin Huang, Senior Member, CCF, Hong-Liang Li, Xiang-Hui Xie, Lei Qian, Zi-Yu Hao, Feng Guo, and Kun Zhang
    Journal of Computer Science and Technology, 2009, 24 (5): 901-912. 

    A high performance computer (HPC) is a huge and complex system whose architecture design faces increasing difficulties and risks. Traditional methods, such as theoretical analysis, component-level simulation and sequential simulation, are not applicable to system-level simulations of HPC systems. Even parallel simulation using large-scale parallel machines has many difficulties with scalability, reliability, generality, and efficiency. To meet the current needs of HPC architecture design, this paper proposes a system-level parallel simulation platform: ArchSim. We first introduce the architecture of the ArchSim simulation platform, which is composed of a global server (GS), local server agents (LSAs) and entities. Secondly, we describe some key techniques of ArchSim, including the synchronization protocol, the communication mechanism and the distributed checkpointing/restart mechanism. We then test the main performance indices of ArchSim with the phold benchmark and analyze the extra overhead introduced by ArchSim. Finally, based on ArchSim, we construct a parallel event-driven interconnection network simulator and a system-level simulator for a small-scale HPC system with 256 processors. The results of the performance test and the HPC system simulations demonstrate that ArchSim achieves a high speedup ratio and high scalability on parallel host machines and supports system-level simulations for the architecture design of HPC systems.

    Parallel LDPC Decoding on GPUs Using a Stream-Based Computing Approach
    Gabriel Falcão, Student Member, IEEE, Shinichi Yamagiwa, Member, IEEE, Vitor Silva, and Leonel Sousa, Member, ACM, Senior Member, IEEE
    Journal of Computer Science and Technology, 2009, 24 (5): 913-924. 

    Low-Density Parity-Check (LDPC) codes are powerful error correcting codes adopted by recent communication standards. LDPC decoders are based on belief propagation algorithms, which make use of a Tanner graph and very intensive message-passing computation, and usually require hardware-based dedicated solutions. With the exponential increase in the computational power of commodity graphics processing units (GPUs), new opportunities have arisen for general-purpose processing on GPUs. This paper proposes the use of GPUs for implementing flexible and programmable LDPC decoders. A new stream-based approach is proposed, based on compact data structures to represent the Tanner graph. It is shown that such a challenging application for stream-based computing, with its irregular memory access patterns, memory bandwidth demands and recursive flow control constraints, can be efficiently implemented on GPUs. The proposal was experimentally evaluated by programming LDPC decoders on GPUs using the Caravela platform, a generic interface tool for managing kernel execution regardless of the GPU manufacturer and operating system. Moreover, to assess the obtained results comparatively, we have also implemented LDPC decoders on general purpose processors with Streaming Single Instruction Multiple Data (SIMD) Extensions. Experimental results show that the proposed solution efficiently decodes several codewords simultaneously, reducing the processing time by one order of magnitude.
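The kind of compact Tanner-graph structure that suits stream processing can be sketched by flattening the parity-check matrix into contiguous edge arrays, so that each check node's neighbor indices occupy one dense slice rather than a sparse matrix row. This CSR-style layout is an illustrative assumption, not the paper's exact data structure.

```python
def build_tanner_arrays(H):
    """Flatten a binary parity-check matrix H (list of 0/1 rows) into
    two flat arrays: `edges` lists, check node by check node, the
    indices of connected bit nodes; `offsets[i]:offsets[i+1]` bounds
    check node i's slice.  Dense, contiguous slices like this are what
    stream kernels prefer over irregular sparse-matrix accesses.
    """
    edges, offsets = [], [0]
    for row in H:
        edges.extend(j for j, v in enumerate(row) if v)
        offsets.append(len(edges))
    return edges, offsets
```

A message-passing kernel for check node i would then read exactly `edges[offsets[i]:offsets[i+1]]`, a predictable access pattern that maps well onto GPU memory.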

    Adaptive Execution of Jobs in Computational Grid Environment
    Sarbani Roy and Nandini Mukherjee, Member, IEEE
    Journal of Computer Science and Technology, 2009, 24 (5): 925-938. 

    In a computational grid, jobs must adapt to the dynamically changing heterogeneous environment with the objective of maintaining the quality of service. In order to enable adaptive execution of multiple jobs running concurrently in a computational grid, we propose an integrated performance-based resource management framework supported by a multi-agent system (MAS). The multi-agent system initially allocates the jobs to different resource providers based on a resource selection algorithm. Later, during runtime, if the performance of any job degrades or quality of service cannot be maintained for some reason (resource failure or overloading), the multi-agent system assists the job in adapting to the system. This paper focuses on the part of our framework that supports adaptive execution, which is achieved through reallocation and local tuning of jobs. Mobile as well as static agents are employed for this purpose. The paper summarizes the design and implementation and demonstrates the efficiency of the framework through experiments on a local grid test bed.

    Verification and Test
    Scan Cell Positioning for Boosting the Compression of Fan-Out Networks
    Ozgur Sinanoglu, Mohammed Al-Mulla, Noora A. Shunaiber, and Alex Orailoglu, Member, IEEE
    Journal of Computer Science and Technology, 2009, 24 (5): 939-948. 

    Ensuring a high manufacturing test quality of an integrated electronic circuit mandates the application of a large-volume test set. Even if the test data can fit into the memory of an external tester, the consequent increase in test application time translates into elevated production costs. Test data compression solutions have been proposed to address the test time and data volume problem by storing and delivering the test data in a compressed format, and subsequently expanding the data on-chip. In this paper, we propose a scan cell positioning methodology that accompanies a compression technique in order to boost the compression ratio and squeeze the test data even further. While we present the application of the proposed approach in conjunction with the fan-out based decompression architecture, the approach can be extended for use with other compression solutions as well. The experimental results confirm the compression enhancement of the proposed methodology.

    Aspect-Oriented Modeling and Verification with Finite State Machines
    Dian-Xiang Xu, Senior Member, IEEE, Omar El-Ariss, Wei-Feng Xu, Senior Member, IEEE, and Lin-Zhang Wang, Member, CCF, ACM, IEEE
    Journal of Computer Science and Technology, 2009, 24 (5): 949-961. 

    Aspect-oriented programming modularizes crosscutting concerns into aspects with the advice invoked at the specified points of program execution. Aspects can be used in a harmful way that invalidates desired properties and even destroys the conceptual integrity of programs. To assure the quality of an aspect-oriented system, rigorous analysis and design of aspects are highly desirable. In this paper, we present an approach to aspect-oriented modeling and verification with finite state machines. Our approach provides explicit notations (e.g., pointcut, advice and aspect) for capturing crosscutting concerns and incremental modification requirements with respect to class state models. For verification purposes, we compose the aspect models and class models in an aspect-oriented model through a weaving mechanism. Then we transform the woven models and the class models not affected by the aspects into FSP (Finite State Processes), which are to be checked by the LTSA (Labeled Transition System Analyzer) model checker against the desired system properties. We have applied our approach to the modeling and verification of three aspect-oriented systems. To further evaluate the effectiveness of verification, we created a large number of flawed aspect models and verified them against the system requirements. The results show that the verification has revealed all flawed models. This indicates that our approach is effective in quality assurance of aspect-oriented state models. As such, our approach can be used for model-checking state-based specification of aspect-oriented design and can uncover some system design problems before the system is implemented.

    References | Related Articles | Metrics
    Interactive Fault Localization Using Test Information
    Dan Hao, Member, CCF, ACM, Lu Zhang, Senior Member, CCF, Member, ACM, Tao Xie, Hong Mei, Senior Member, CCF, and Jia-Su Sun, Senior Member, CCF
    Journal of Computer Science and Technology, 2009, 24 (5): 962-974. 
    Abstract   PDF(1240KB) ( 2017 )   Chinese Summary

    Debugging is a time-consuming task in software development. Although various automated approaches have been proposed, they are not effective enough. In manual debugging, on the other hand, developers have difficulty choosing breakpoints. To address these problems and help developers locate faults effectively, we propose an interactive fault-localization framework that combines the benefits of automated approaches and manual debugging. Until the fault is found, this framework continuously recommends checking points based on statements' suspiciousness scores, which are calculated from the execution information of test cases and the feedback the developer provides at earlier checking points. We first propose a naive approach as an initial implementation of this framework. However, with this naive approach, as with manual debugging, a developer's incorrect estimation of whether the faulty statement is executed before a checking point (breakpoint) may cause the debugging process to fail. We therefore propose a second, robust approach based on the same framework, which handles cases where developers make mistakes during the fault-localization process. We performed two experimental studies, and the results show that the two interactive approaches are quite effective compared with existing fault-localization approaches. Moreover, the robust approach can help developers find faults even when they make incorrect estimations at some checking points.
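As an illustration of suspiciousness-based recommendation, here is a small Python sketch that ranks statements by a Tarantula-style score computed from test coverage and pass/fail outcomes. The scoring formula is an assumption for illustration; the paper's framework additionally folds in developer feedback at each checking point, which this sketch omits.

```python
def suspiciousness(coverage, results):
    """coverage: {test: set of statement ids}; results: {test: True if passed}.
    Returns statements ranked by a Tarantula-style suspiciousness score."""
    failed = [t for t, ok in results.items() if not ok]
    passed = [t for t, ok in results.items() if ok]
    stmts = set().union(*coverage.values())
    scores = {}
    for s in stmts:
        ef = sum(1 for t in failed if s in coverage[t])  # failing tests covering s
        ep = sum(1 for t in passed if s in coverage[t])  # passing tests covering s
        f_ratio = ef / len(failed) if failed else 0.0
        p_ratio = ep / len(passed) if passed else 0.0
        scores[s] = f_ratio / (f_ratio + p_ratio) if f_ratio + p_ratio else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical data: statement 3 is covered by every test, so it scores lower
# than statements covered mostly by the failing test t1.
cov = {"t1": {1, 2, 3}, "t2": {1, 3}, "t3": {2, 3}}
res = {"t1": False, "t2": True, "t3": True}
ranking = suspiciousness(cov, res)
```

The framework would recommend the top-ranked statement as the next checking point and re-rank after the developer reports whether the program state there is correct.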

    References | Related Articles | Metrics
    Computer Network and Internet
    Leapfrog: Optimal Opportunistic Routing in Probabilistically Contacted Delay Tolerant Networks
    Ming-Jun Xiao, Member, CCF, Liu-Sheng Huang, Senior Member, CCF, Qun-Feng Dong, An Liu, and Zhen-Guo Yang
    Journal of Computer Science and Technology, 2009, 24 (5): 975-986. 
    Abstract   PDF(657KB) ( 2615 )   Chinese Summary

    Delay tolerant networks (DTNs) experience frequent and long-lasting network disconnection for various reasons such as mobility, power management, and scheduling. One primary concern in DTNs is to route messages so as to keep the end-to-end delivery delay as low as possible. In this paper, we study the single-copy message routing problem and propose an optimal opportunistic routing strategy --- Leapfrog Routing --- for probabilistically contacted DTNs, where nodes contact each other with fixed probabilities. We derive an iterative formula for the minimum expected opportunistic delivery delay from each node to the destination, and discover that under the optimal opportunistic routing strategy, messages are delivered from high-delay nodes to low-delay nodes in a leapfrog manner. Rigorous theoretical analysis shows that this routing strategy is optimal among all possible strategies. Moreover, we apply the idea of the reverse Dijkstra algorithm to design an algorithm that, given a destination, determines for each node the routing selection function under the Leapfrog Routing strategy. The computation overhead of this algorithm is only O(n^2), where n is the number of nodes in the network. In addition, through extensive simulations based on real DTN traces, we demonstrate that our algorithm significantly outperforms previous ones.
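The reverse-Dijkstra idea can be sketched as follows: treat the expected wait before a contact with per-slot probability p as 1/p (a geometric wait), and run Dijkstra outward from the destination over these expected waits. This Python sketch is a single-next-hop simplification for illustration, not the paper's full opportunistic recursion; the node names and contact probabilities are made up.

```python
import heapq

def expected_delays(contacts, dest):
    """contacts: {(u, v): p} symmetric per-slot contact probabilities.
    Returns the minimum expected delivery delay to dest for each node,
    using 1/p as the expected wait before a (u, v) contact occurs."""
    adj = {}
    for (u, v), p in contacts.items():
        adj.setdefault(u, []).append((v, p))
        adj.setdefault(v, []).append((u, p))
    delay = {dest: 0.0}
    heap = [(0.0, dest)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > delay.get(u, float("inf")):
            continue  # stale heap entry
        for v, p in adj.get(u, []):
            nd = d + 1.0 / p  # wait for a contact, then hand the message over
            if nd < delay.get(v, float("inf")):
                delay[v] = nd
                heapq.heappush(heap, (nd, v))
    return delay

# Hypothetical 3-node DTN: relaying a->b->dst (expected 2 + 4 = 6 slots)
# beats the direct a->dst contact (expected 10 slots).
d = expected_delays({("a", "b"): 0.5, ("b", "dst"): 0.25, ("a", "dst"): 0.1}, "dst")
```

Running the search from the destination is what makes a single pass suffice for all sources, matching the O(n^2) bound stated above.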

    References | Related Articles | Metrics
    Estimation of a Population Size in Large-Scale Wireless Sensor Networks
    Shao-Liang Peng, Member, CCF, ACM, IEEE, Shan-Shan Li, Xiang-Ke Liao, Yu-Xing Peng, and Nong Xiao, Member, CCF, ACM, IEEE
    Journal of Computer Science and Technology, 2009, 24 (5): 987-inside back cover. 
    Abstract   PDF(3155KB) ( 2155 )   Chinese Summary

    Efficient estimation of population size is a common requirement for many wireless sensor network applications. Examples include counting the number of nodes alive in the network and measuring the scale and shape of physically correlated events. These tasks must be accomplished at extremely low overhead due to the severe resource limitations of sensor nodes, which poses a challenge for large-scale sensor networks. In this article we design FLAKE, a novel measurement technique based on sparse sampling that is generic in that it applies to arbitrary wireless sensor networks (WSNs). It can efficiently estimate system size, event scale, and other global aggregate or summation information over the whole network at low communication cost. This functionality is useful in many applications but hard to achieve when each node has only limited, local knowledge of the network. FLAKE therefore comprises two main components: an Injected Random Data Dissemination (sampling) method, and a sparse sampling algorithm based on Inverse Sampling, which it improves upon by achieving a target variance with small error and low communication cost. FLAKE uses approximately uniform random data dissemination and sparse sampling, an unstructured and localized method. Finally, we provide experimental results demonstrating the effectiveness of our algorithm on both small-scale and large-scale WSNs. Our measurement technique appears to be a practical and appropriate choice.
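The Inverse Sampling component can be illustrated with a toy centralized estimator: mark k items (standing in for the injected data), draw without replacement until m marked items have been seen, and estimate the population as k times the number of draws divided by m. This Python sketch shows only the statistical idea; FLAKE's distributed dissemination and variance control are not modeled, and all names here are hypothetical.

```python
import random

def inverse_sampling_estimate(population, marked, m, rng):
    """Draw from population without replacement until m marked items are
    seen; estimate the population size as |marked| * draws / m."""
    draws, seen_marked = 0, 0
    pool = list(population)
    rng.shuffle(pool)  # stand-in for encountering nodes in random order
    for item in pool:
        draws += 1
        if item in marked:
            seen_marked += 1
            if seen_marked == m:
                break
    return len(marked) * draws / m

rng = random.Random(42)
nodes = range(1000)                          # true population size: 1000
marked = set(rng.sample(range(1000), 50))    # 50 injected "seed" data items
est = inverse_sampling_estimate(nodes, marked, 10, rng)
```

Stopping after a fixed number of marked hits (rather than a fixed number of draws) is what lets the estimator target a variance, which FLAKE further improves on in the distributed setting.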

    References | Related Articles | Metrics
Journal of Computer Science and Technology
Institute of Computing Technology, Chinese Academy of Sciences
P.O. Box 2704, Beijing 100190 P.R. China
Tel.:86-10-62610746
E-mail: jcst@ict.ac.cn