
Surveys


A Survey on Task and Participant Matching in Mobile Crowd Sensing
Yue-Yue Chen, Pin Lv, De-Ke Guo, Tong-Qing Zhou, Ming Xu. 2018, 33(4): 768-791. DOI: 10.1007/s11390-018-1855-y
Mobile crowd sensing is an innovative paradigm which leverages the crowd, i.e., a large group of people with their mobile devices, to sense various information in the physical world. With the help of sensed information, many tasks can be fulfilled in an efficient manner, such as environment monitoring, traffic prediction, and indoor localization. Task and participant matching is an important issue in mobile crowd sensing, because it determines the quality and efficiency of a mobile crowd sensing task. Hence, numerous matching strategies have been proposed in recent research work. This survey aims to provide an up-to-date view on this topic. We propose a research framework for the matching problem, including the participant model, the task model, and solution design. The participant model is made up of three kinds of participant characteristics, i.e., attributes, requirements, and supplements. The task models are separated according to application backgrounds and objective functions. Both offline and online solutions from the recent literature are discussed. Some open issues are introduced, including matching strategies for heterogeneous tasks, context-aware matching, online strategies, and leveraging historical data to finish new tasks.

Visual Simulation of Multiple Fluids in Computer Graphics: A State-of-the-Art Report
Bo Ren, Xu-Yun Yang, Ming C. Lin, Nils Thuerey, Matthias Teschner, Chenfeng Li. 2018, 33(3): 431-451. DOI: 10.1007/s11390-018-1829-0
Realistic animation of various interactions between multiple fluids, possibly undergoing phase change, is a challenging task in computer graphics. The visual scope of multi-phase multi-fluid phenomena covers complex tangled surface structures and rich color variations, which can greatly enhance visual effects in graphics applications. Describing such phenomena requires more complex models that handle the calculation of interactions, dynamics, and spatial distribution of multiple phases; these models are computationally involved, and real-time performance is hard to obtain. Recently, a diverse set of algorithms has been introduced to simulate these complex multi-fluid phenomena based on the governing physical laws, with novel discretization methods to accelerate the overall computation while ensuring numerical stability. By sorting through the target phenomena of recent research in the broad subject of multiple fluids, this state-of-the-art report summarizes recent advances in multi-fluid simulation in computer graphics.

Indexing Techniques of Distributed Ordered Tables: A Survey and Analysis
Chen Feng, Chun-Dian Li, Rui Li. 2018, 33(1): 169-189. DOI: 10.1007/s11390-018-1813-8
Many NoSQL (Not Only SQL) databases have been proposed to store and query huge amounts of data. Some of them, like BigTable, PNUTS, and HBase, can be modeled as distributed ordered tables (DOTs). Many additional indexing techniques have been presented to support queries on non-key columns for DOTs. However, there has been no comprehensive analysis or comparison of these techniques, which makes it difficult for users to select or propose a proper indexing technique for a given workload. This paper proposes a taxonomy based on six indexing issues to classify indexing techniques on DOTs and provides a comprehensive review of the state-of-the-art techniques. Based on the taxonomy, we propose a performance model named QSModel to estimate the query time and storage cost of these techniques, and run experiments on a practical workload from Tencent to evaluate this model. The results show that the maximum error rates of the query time and storage cost are 24.2% and 9.8%, respectively. Furthermore, we propose IndexComparator, an open-source project that implements representative indexing techniques. Therefore, users can select the best-fit indexing technique based on both theoretical analysis and practical experiments.

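As a concrete illustration of the non-key-query problem these indexing techniques address, here is a minimal single-node sketch of an ordered table with one secondary index. This is not code from the survey; the class, column names, and index layout are invented for illustration only.

```python
from bisect import bisect_left, insort

class OrderedTable:
    """Toy single-node stand-in for a distributed ordered table (DOT):
    rows are kept sorted by primary key, so key-range scans are cheap,
    but a query on a non-key column would need a full scan unless a
    secondary index is maintained alongside the table."""

    def __init__(self):
        self._keys = []        # sorted primary keys
        self._rows = {}        # primary key -> row dict
        self._city_index = {}  # secondary index: city value -> set of primary keys

    def put(self, key, row):
        if key not in self._rows:
            insort(self._keys, key)
        else:
            # On update, drop the stale secondary-index entry first.
            self._city_index.get(self._rows[key]["city"], set()).discard(key)
        self._rows[key] = row
        self._city_index.setdefault(row["city"], set()).add(key)

    def scan(self, lo, hi):
        """Primary-key range scan over [lo, hi) -- the access pattern DOTs are built for."""
        i, j = bisect_left(self._keys, lo), bisect_left(self._keys, hi)
        return [self._rows[k] for k in self._keys[i:j]]

    def query_by_city(self, city):
        """Non-key query answered via the secondary index instead of a full scan."""
        return [self._rows[k] for k in sorted(self._city_index.get(city, ()))]
```

The design questions the survey's taxonomy covers (where the index lives, how it is kept consistent with the table, and what it costs in storage and write amplification) all generalize this single-node picture to the distributed case.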
Spear and Shield: Evolution of Integrated Circuit Camouflaging
Xue-Yan Wang, Qiang Zhou, Yi-Ci Cai, Gang Qu. 2018, 33(1): 42-57. DOI: 10.1007/s11390-018-1807-6
Intellectual property (IP) protection is one of the core problems in hardware security. The semiconductor industry still lacks effective and proactive defenses to shield IPs from reverse engineering (RE) based attacks. The integrated circuit (IC) camouflaging technique fills this gap by replacing some conventional logic gates in the IPs with specially designed logic cells (called camouflaged gates) without changing the functions of the IPs. The camouflaged gates can perform different logic functions while maintaining an identical look to RE attackers, thus preventing them from obtaining the layout information of the IP directly from RE tools. Since it was first proposed in 2012, circuit camouflaging has become one of the hottest research topics in hardware security, focusing on two fundamental problems. How to choose the types of camouflaged gates and decide where to insert them in order to simultaneously minimize the performance overhead and optimize the RE complexity? How can an attacker de-camouflage a camouflaged circuit and complete the RE attack? In this article, we review the evolution of circuit camouflaging through this spear-and-shield race. First, we introduce the design methods of four different kinds of camouflaged cells based on true/dummy contacts, static random access memory (SRAM), doping, and emerging devices, respectively. Then we elaborate on four representative de-camouflaging attacks: the brute force attack, the IC testing based attack, the satisfiability-based (SAT-based) attack, and the circuit partition based attack, together with the corresponding countermeasures: clique-based camouflaging, CamoPerturb, AND-tree camouflaging, and equivalent class based camouflaging, respectively. We argue that current research efforts should focus on reducing the overhead introduced by circuit camouflaging and on defeating de-camouflaging attacks. We point out that exploring features of emerging devices could be a promising direction. Finally, as a complement to circuit camouflaging, we conclude with a brief review of other state-of-the-art IP protection techniques.

A Survey on Human Performance Capture and Animation
Shihong Xia, Lin Gao, Yu-Kun Lai, Ming-Ze Yuan, Jinxiang Chai. 2017, 32(3): 536-554. DOI: 10.1007/s11390-017-1742-y
With the rapid development of computing technology, three-dimensional (3D) human body models and their dynamic motions are widely used in the digital entertainment industry. Human performance mainly involves human body shapes and motions. Key research problems in human performance animation include how to capture and analyze the static geometric appearance and dynamic movement of human bodies, and how to simulate human body motions with physical effects. In this survey, following the main research directions of human body performance capture and animation, we summarize recent advances in the key research topics, namely human body surface reconstruction, motion capture and synthesis, and physics-based motion simulation, and further discuss future research problems and directions. We hope this will help readers gain a comprehensive understanding of human performance capture and animation.

A Survey on Pre-Processing in Image Matting
Gui-Lin Yao. 2017, 32(1): 122-138. DOI: 10.1007/s11390-017-1709-z
Pre-processing is an important step in digital image matting, which aims to classify more accurate foreground and background pixels from the unknown region of the input three-region mask (trimap). This step has no relation to the well-known matting equation and only compares color differences between the current unknown pixel and the known pixels. These newly classified pure pixels are then fed to the matting process as samples to improve the quality of the final matte. However, in the research field of image matting, the importance of the pre-processing step is still unclear. Moreover, there are no corresponding review articles for this step, and the quantitative comparison of trimaps and alpha mattes after this step remains unaddressed. In this paper, the necessity and importance of the pre-processing step in image matting are first discussed in detail. Next, current pre-processing methods are introduced in two categories: static thresholding methods and dynamic thresholding methods. Analyses and experimental results show that static thresholding methods, especially the most popular iterative method, can make accurate pixel classifications in general trimaps with relatively few unknown pixels. In much larger trimaps, however, these methods are limited by their conservative color and spatial thresholds. In contrast, dynamic thresholding methods can make more aggressive classifications in more difficult cases, but still suffer strongly from noise and false classifications. In addition, the sharp boundary detector is further discussed as a prior for pure pixels. Finally, summaries and a more effective approach for pre-processing compared with the existing methods are presented.

Summarizing Software Artifacts: A Literature Review
Najam Nazar, Yan Hu, He Jiang. 2016, 31(5): 883-909. DOI: 10.1007/s11390-016-1671-1
This paper presents a literature review in the field of summarizing software artifacts, focusing on bug reports, source code, mailing lists, and developer discussions. From Jan. 2010 to Apr. 2016, numerous summarization techniques, approaches, and tools were proposed to satisfy the ongoing demand for improving software performance and quality and for helping developers understand the problems at hand. Since the aforementioned artifacts contain both structured and unstructured data at the same time, researchers have applied different machine learning and data mining techniques to generate summaries. Therefore, this paper first provides a general perspective on the state of the art, describing the types of artifacts, the approaches to summarization, and the common portions of experimental procedures shared across these artifacts. Moreover, we discuss the applications of summarization, i.e., which tasks have been accomplished through summarization. Next, this paper presents tools that were built for, or employed during, summarization tasks. In addition, we present the different summarization evaluation methods employed in the selected studies, as well as other important factors used for evaluating generated summaries, such as adequacy and quality. We also briefly present modern communication channels and the commonalities and complementarities among different software artifacts. Finally, some thoughts about the challenges applicable to the existing studies in general, as well as future research directions, are discussed. This survey of existing studies will give future researchers a broad and useful background on the main aspects of this research field.

A Survey of Visual Analytic Pipelines
Xu-Meng Wang, Tian-Ye Zhang, Yu-Xin Ma, Jing Xia, Wei Chen. 2016, 31(4): 787-804. DOI: 10.1007/s11390-016-1663-1
Visual analytics has been widely studied in the past decade. One key to making visual analytics practical for both research and industrial applications is the appropriate definition and implementation of the visual analytics pipeline, which provides effective abstractions for designing and implementing visual analytics systems. In this paper, we review previous work on visual analytics pipelines and their individual modules from multiple perspectives: data, visualization, model, and knowledge. For each module, we discuss the various representations and descriptions of pipelines found in the literature and compare their commonalities and differences.

Subgroup Discovery Algorithms: A Survey and Empirical Evaluation
Sumyea Helal. 2016, 31(3): 561-576. DOI: 10.1007/s11390-016-1647-1
Subgroup discovery is a data mining technique that discovers interesting associations among different variables with respect to a property of interest. Existing subgroup discovery methods employ different strategies for searching, pruning, and ranking subgroups. It is crucial to learn which features of a subgroup discovery algorithm should be considered for generating quality subgroups. A number of reviews have been conducted on subgroup discovery; although they provide a broad overview of some popular subgroup discovery methods, they employ few datasets and measures for subgroup evaluation, and in the light of the existing measures the subgroups cannot be appraised from all perspectives. Our work performs an extensive analysis of some popular subgroup discovery methods, using a wide range of datasets and defining new measures for subgroup evaluation. The results will help in understanding the major subgroup discovery methods, uncovering gaps for further improvement, and selecting the suitable category of algorithms for specific application domains.

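Subgroup quality is scored by an interestingness measure; one standard choice in the subgroup discovery literature is Weighted Relative Accuracy (WRAcc), which trades off subgroup size against the lift in the target share. A minimal sketch (illustrative; not code from the survey):

```python
def wracc(subgroup_mask, target_mask):
    """Weighted Relative Accuracy of a subgroup:
        WRAcc = (n_s / N) * (p_s - p)
    where n_s is the subgroup size, N the dataset size, p_s the share of
    positive (target) examples inside the subgroup, and p the share of
    positives overall. Masks are parallel lists of 0/1 flags."""
    N = len(target_mask)
    n_s = sum(subgroup_mask)
    if n_s == 0:
        return 0.0
    p = sum(target_mask) / N
    p_s = sum(t for s, t in zip(subgroup_mask, target_mask) if s) / n_s
    return (n_s / N) * (p_s - p)
```

A subgroup covering 4 of 10 examples, 3 of them positive, against an overall positive rate of 0.4, scores 0.4 * (0.75 - 0.4) = 0.14; larger values mean the subgroup is both sizable and unusually rich in the target property.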
Survey on Simulation for Mobile Ad-Hoc Communication for Disaster Scenarios
Erika Rosas, Nicolás Hidalgo, Veronica Gil-Costa, Carolina Bonacic, et al. 2016, 31(2): 326-349. DOI: 10.1007/s11390-016-1630-x
Mobile ad-hoc communication is a demonstrated solution for mitigating the impact of infrastructure failures during large-scale disasters. A very complex issue in this domain is the design validation of software applications that support decision-making and communication during natural disasters. Such disasters are irreproducible, highly unpredictable, and impossible to scale down, so extensive assessments cannot be conducted in situ. In this context, simulation constitutes the best approach to testing software solutions for natural disaster response. The present survey reviews mobility models, ad-hoc network architectures, routing protocols, and network simulators. Our aim is to provide guidelines for software developers with regard to the performance evaluation of their applications by means of simulation.

A Synthesis of Multi-Precision Multiplication and Squaring Techniques for 8-Bit Sensor Nodes: State-of-the-Art Research and Future Challenges
Zhe Liu, Hwajeong Seo, Howon Kim. 2016, 31(2): 284-299. DOI: 10.1007/s11390-016-1627-5
Multi-precision multiplication and squaring are the performance-critical operations in implementations of public-key cryptography, such as exponentiation in RSA and scalar multiplication in elliptic curve cryptography (ECC). In this paper, we provide a survey of multi-precision multiplication and squaring techniques, with a particular focus on comparing their performance and memory footprint on sensor nodes using 8-bit processors. Our survey differs from previous work in at least three aspects. Firstly, it covers the existing techniques for multi-precision multiplication and squaring on sensor nodes over prime fields. Secondly, we analyze and evaluate each method in a systematic and objective way. Thirdly, the survey provides suggestions for selecting appropriate multiplication and squaring techniques for concrete implementations of public-key cryptography. At the end of the survey, based on our observations, we propose research challenges on the efficient implementation of the multiplication and squaring operations.

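The classic baseline among such techniques is operand-scanning (schoolbook) multiplication. The sketch below uses Python with 8-bit limbs to mimic an 8-bit processor's word size; it illustrates the algorithm's structure only, not the register-allocation and instruction-level optimizations such surveys compare.

```python
def int_to_limbs(x, n):
    """Split an integer into n little-endian 8-bit limbs."""
    return [(x >> (8 * i)) & 0xFF for i in range(n)]

def limbs_to_int(limbs):
    """Recombine little-endian 8-bit limbs into an integer."""
    return sum(b << (8 * i) for i, b in enumerate(limbs))

def mp_mul_operand_scanning(a, b):
    """Operand-scanning multi-precision multiplication: for each limb of a,
    multiply it by every limb of b, accumulating partial products into the
    result array and propagating an 8-bit carry along the row."""
    r = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        carry = 0
        for j, bj in enumerate(b):
            t = r[i + j] + ai * bj + carry  # fits in 16 bits + carry
            r[i + j] = t & 0xFF             # low byte stays in place
            carry = t >> 8                  # high byte carries to the next limb
        r[i + len(b)] += carry              # final carry of this row
    return r
```

The product-scanning (Comba) variant and the hybrid method reorder the same partial products to cut down on memory accesses, which is exactly the kind of tradeoff a performance/memory-footprint comparison on 8-bit MCUs examines.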
A Survey of Blue-Noise Sampling and Its Applications
Dong-Ming Yan, Jian-Wei Guo, Bin Wang, Xiao-Peng Zhang, Peter Wonka. 2015, 30(3): 439-452. DOI: 10.1007/s11390-015-1535-0
In this paper, we survey recent approaches to blue-noise sampling and discuss their beneficial applications. We discuss the sampling algorithms that use points as sampling primitives and classify the sampling algorithms based on various aspects, e.g., the sampling domain and the type of algorithm. We demonstrate several well-known applications that can be improved by recent blue-noise sampling techniques, as well as some new applications such as dynamic sampling and blue-noise remeshing.

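The oldest way to generate a blue-noise point set is dart throwing, which enforces a Poisson-disk condition by rejecting any candidate that lands too close to an already accepted point. A naive O(n^2) sketch in the unit square (illustrative only; the modern techniques such surveys cover are far faster):

```python
import math
import random

def dart_throwing(r, max_failures=2000, seed=1):
    """Naive dart throwing: repeatedly propose a uniform random point in the
    unit square and accept it only if it lies at least distance r from every
    accepted point. Stop after max_failures consecutive rejections, which
    serves as a crude test that the domain is (nearly) saturated."""
    rng = random.Random(seed)
    points, failures = [], 0
    while failures < max_failures:
        candidate = (rng.random(), rng.random())
        if all(math.dist(candidate, p) >= r for p in points):
            points.append(candidate)
            failures = 0
        else:
            failures += 1
    return points
```

The resulting point set has the defining blue-noise property: no two samples closer than r, with otherwise unstructured (non-grid) placement, which is why such distributions are prized for stippling, texture synthesis, and remeshing.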
SRAM-Based FPGA Systems for Safety-Critical Applications: A Survey on Design Standards and Proposed Methodologies
Cinzia Bernardeschi, Luca Cassano, Andrea Domenici. 2015, 30(2): 373-390. DOI: 10.1007/s11390-015-1530-5
As ASIC design costs become affordable only for very large-scale production, FPGA technology is becoming the leading technology for applications that require small-scale production. FPGAs can be considered a technology crossing between hardware and software. Only a small number of standards for the design of safety-critical systems give guidelines and recommendations that take the peculiarities of FPGA technology into consideration. The main contribution of this paper is an overview of the existing design standards that regulate the design and verification of FPGA-based systems in safety-critical application fields. Moreover, the paper surveys significant published research proposals and existing industrial guidelines on the topic, and collects and reports lessons learned from industrial and research projects involving the use of FPGA devices.

Social Influence Study in Online Networks: A Three-Level Review
Hui Li, Jiang-Tao Cui, Jian-Feng Ma. 2015, 30(1): 184-199. DOI: 10.1007/s11390-015-1512-7
Social network analysis (SNA) views social relationships in terms of network theory, consisting of nodes and ties. Nodes are the individual actors within the networks; ties are the relationships between the actors (in the sequel, we use the terms node and individual interchangeably). A relationship could be friendship, communication, trust, etc. These relationships and ties are driven by social influence, which is the most important phenomenon that distinguishes social networks from other networks. In this paper, we present an overview of representative research work in the study of social influence. These studies can be classified into three levels, namely the individual, community, and network levels. Throughout the study, we unveil a series of future research directions and possible applications based on the state of the art.

Survey of Large-Scale Data Management Systems for Big Data Applications
Lengdong Wu, Liyan Yuan, Jiahuai You. 2015, 30(1): 163-183. DOI: 10.1007/s11390-015-1511-8
Today, data is flowing into various organizations at an unprecedented scale. The ability to scale out to process an increased workload has become an important factor in the proliferation and popularization of database systems. Big data applications demand, and consequently lead to, the development of diverse large-scale data management systems in different organizations, ranging from traditional database vendors to newly emerging Internet-based enterprises. In this survey, we investigate, characterize, and analyze large-scale data management systems in depth, and develop comprehensive taxonomies for various critical aspects covering the data model, the system architecture, and the consistency model. We map the prevailing highly scalable data management systems to the proposed taxonomies, not only to classify the common techniques but also to provide a basis for analyzing current system scalability limitations. To overcome these limitations, we predict and highlight the principles that future efforts on next-generation large-scale data management systems need to follow.

A Survey of Phase Change Memory Systems
Fei Xia, De-Jun Jiang, Jin Xiong, Ning-Hui Sun. 2015, 30(1): 121-144. DOI: 10.1007/s11390-015-1509-2
As the scale of applications increases, the demand for main memory capacity increases in order to serve large working sets. It is difficult for DRAM (dynamic random access memory) based memory systems to satisfy this capacity requirement due to DRAM's limited scalability and high energy consumption. Compared with DRAM, PCM (phase change memory) has better scalability, lower leakage energy, and non-volatility, and PCM memory systems have become a hot topic of academic and industrial research. However, PCM technology has three drawbacks, long write latency, limited write endurance, and high write energy, which raise challenges to its adoption in practice. This paper surveys architectural research on optimizing PCM memory systems. First, it introduces the background of PCM. Then, it surveys research efforts on PCM memory systems in performance optimization, lifetime improvement, and energy saving, and compares and summarizes these techniques along multiple dimensions. Finally, it concludes with a discussion of possible future research directions for PCM memory systems.

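One widely cited family of write optimizations exploits the fact that PCM cells need not be erased before writing: a line can be read first and only the differing bits programmed, a scheme often called data-comparison write. A toy sketch of the bit-flip accounting (an illustrative model, not code from the paper):

```python
def dcw_bits_programmed(old_line, new_line):
    """Data-comparison write: read the currently stored line, XOR it with
    the incoming data, and program only the bits that differ. Returns how
    many cells are actually programmed; a naive write would program every
    bit of the line regardless of its previous contents."""
    diff = old_line ^ new_line          # 1-bits mark cells that must change
    return bin(diff).count("1")
```

Because programming is the slow, energy-hungry, endurance-limiting operation in PCM, cutting the number of programmed bits addresses all three drawbacks listed in the abstract at once, which is why read-before-write schemes recur throughout this line of work.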
Name-Face Association in Web Videos: A Large-Scale Dataset, Baselines, and Open Issues
Zhi-Neng Chen, Chong-Wah Ngo, Wei Zhang, Juan Cao, Yu-Gang Jiang. 2014, 29(5): 785-798. DOI: 10.1007/s11390-014-1468-z
Associating faces appearing in Web videos with names presented in the surrounding context is an important task in many applications. However, the problem is not well investigated, particularly under large-scale realistic scenarios, mainly due to the scarcity of datasets constructed in such circumstances. In this paper, we introduce a Web video dataset of celebrities, named WebV-Cele, for name-face association. The dataset consists of 75,073 Internet videos of over 4,000 hours, covering 2,427 celebrities and 649,001 faces. This is, to our knowledge, the most comprehensive dataset for this problem. We describe the details of dataset construction, discuss several interesting findings obtained by analyzing the dataset, such as celebrity community discovery, and provide experimental results for name-face association using five existing techniques. We also outline important and challenging research problems that could be investigated in the future.

Allocating Bandwidth in Datacenter Networks: A Survey
Li Chen, Baochun Li, Bo Li. 2014, 29(5): 910-917. DOI: 10.1007/s11390-014-1478-x
Datacenters have played an increasingly essential role as the underlying infrastructure in cloud computing. As implied by the essence of cloud computing, resources in these datacenters are shared by multiple competing entities, which can be either tenants that rent virtual machines (VMs) in a public cloud such as Amazon EC2, or applications that embrace data parallel frameworks like MapReduce in a private cloud maintained by Google. It has been generally observed that with traditional transport-layer protocols allocating link bandwidth in datacenters, network traffic from competing applications interferes with each other, resulting in a severe lack of predictability and fairness of application performance. Such a critical issue has drawn a substantial amount of recent research attention on bandwidth allocation in datacenter networks, with a number of new mechanisms proposed to efficiently and fairly share a datacenter network among competing entities. In this article, we present an extensive survey of existing bandwidth allocation mechanisms in the literature, covering the scenarios of both public and private clouds. We thoroughly investigate their underlying design principles, evaluate the tradeoffs involved in their design choices, and summarize them in a unified design space, with the hope of conveying some meaningful insights for better designs in the future.

A Survey on Silicon PUFs and Recent Advances in Ring Oscillator PUFs
Ji-Liang Zhang, Gang Qu, Yong-Qiang Lv, Qiang Zhou. 2014, 29(4): 664-678. DOI: 10.1007/s11390-014-1458-1
A silicon physical unclonable function (PUF) is a popular hardware security primitive that exploits the intrinsic variation of the IC manufacturing process to generate chip-unique information for various security-related applications. For example, the PUF information can be used as a chip identifier, a secret key, the seed for a random number generator, or the response to a given challenge. Due to the unpredictability and irreplicability of IC manufacturing variation, the silicon PUF has emerged as a promising hardware security primitive and gained a lot of attention over the past few years. In this article, we first survey the current state of the art of silicon PUFs; we then analyze known attacks on PUFs and the corresponding countermeasures; after that, we discuss PUF-based applications and highlight some recent research advances in ring oscillator PUFs; and we conclude with some challenges and opportunities in PUF research and applications.

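The idea behind a ring-oscillator (RO) PUF can be illustrated with a toy software model: manufacturing variation gives each oscillator a slightly different frequency, and comparing a fixed pair of oscillators yields one response bit. The sketch below stands in for process variation with a seeded Gaussian offset; it is purely illustrative (real RO PUFs must also contend with measurement noise and environmental drift, which this model omits).

```python
import random

def ro_puf_response(chip_seed, num_pairs=16):
    """Toy ring-oscillator PUF model. chip_seed stands in for one chip's
    manufacturing variation: every RO gets a nominal frequency of 1.0 plus
    a small chip-specific Gaussian offset. Each response bit compares the
    frequencies of one fixed pair of ROs."""
    rng = random.Random(chip_seed)
    freqs = [1.0 + rng.gauss(0.0, 0.01) for _ in range(2 * num_pairs)]
    # Bit i is 1 iff the first RO of pair i oscillates faster than the second.
    return [int(freqs[2 * i] > freqs[2 * i + 1]) for i in range(num_pairs)]
```

The same chip (seed) always reproduces the same response, while different chips yield, with high probability, different responses; this reproducible-yet-unclonable behavior is what makes the response usable as a chip identifier or key material.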
A Survey on Data Dissemination in Wireless Sensor Networks
Xiao-Long Zheng, Meng Wan. 2014, 29(3): 470-486. DOI: 10.1007/s11390-014-1443-8
Wireless sensor networks (WSNs) have been applied in a variety of application areas. Most WSN systems, once deployed, are intended to operate unattended for a long period. During their lifetime, it is necessary to fix bugs, reconfigure system parameters, and upgrade the software in order to achieve reliable system performance. However, manually collecting all nodes and reconfiguring them through serial connections to a computer is infeasible: it is labor-intensive and inconvenient due to the harsh deployment environments. Hence, multi-hop data dissemination is desired to facilitate such tasks. This survey discusses the challenges and requirements of data dissemination in WSNs, reviews existing work, introduces relevant techniques, presents performance metrics and comparisons of state-of-the-art approaches, and finally suggests possible future directions in data dissemination studies. The survey elaborates and compares existing approaches in two categories, structure-less schemes and structure-based schemes, classified by whether or not network structure information is used during the dissemination process. In the existing literature, the boundary between these categories is treated as rigid, and analysis of the tradeoffs between them is limited. Moreover, no survey has discussed emerging techniques such as constructive interference (CI), even though such techniques have the potential to change the framework of data dissemination. In short, even though many efforts have been made, data dissemination in WSNs still needs further work to embrace new techniques and to improve its efficiency and practicability.

A Survey of Visual Analytics Techniques and Applications: State-of-the-Art Research and Future Challenges
Guo-Dao Sun, Ying-Cai Wu, Rong-Hua Liang, Shi-Xia Liu. 2013, 28(5): 852-867. DOI: 10.1007/s11390-013-1383-8
Visual analytics employs interactive visualizations to integrate users' knowledge and inference capability into numerical/algorithmic data analysis processes. It is an active research field with applications in many sectors, such as security, finance, and business. The growing popularity of visual analytics in recent years creates the need for a broad survey that reviews and assesses the recent developments in the field. This report reviews and classifies recent work into a set of application categories, including space and time, multivariate data, text, graph and network, and other applications. More importantly, this report presents an analytics space, inspired by the notion of a design space, which relates each application category to the key steps in visual analytics: visual mapping, model-based analysis, and user interaction. We explore and discuss this analytics space to deepen the current understanding of the field and to identify its research trends.

A Survey of Commonsense Knowledge Acquisition
Liang-Jun Zang, Cong Cao, Ya-Nan Cao, Yu-Ming Wu, Cun-Gen Cao. 2013, 28(4): 689-719. DOI: 10.1007/s11390-013-1369-6
Collecting massive commonsense knowledge (CSK) for commonsense reasoning has been a long-standing challenge within artificial intelligence research. Numerous methods and systems for acquiring CSK have been developed to overcome the knowledge acquisition bottleneck. Although specific commonsense reasoning tasks have been presented that allow researchers to measure and compare the performance of their CSK systems, we compare the systems at a higher level, along the following aspects: the CSK acquisition task (what CSK is acquired from where), the techniques used (how CSK can be acquired), and the CSK evaluation methods (how to evaluate the acquired CSK). In this survey, we first present a categorization of CSK acquisition systems and the great challenges in the field. Then, we review and compare the CSK acquisition systems in detail. Finally, we summarize the current progress in this field and explore some promising future research issues.

Arabic Bank Check Processing: State of the Art
Irfan Ahmad, Sabri A. Mahmoud. 2013, 28(2): 285-299. DOI: 10.1007/s11390-013-1332-6
In this paper, we present a general model for Arabic bank check processing indicating the major phases of a check processing system. We then survey the available databases for Arabic bank check processing research. The state of the art in the different phases of Arabic bank check processing is surveyed (i.e., pre-processing, check analysis and segmentation, feature extraction, and legal and courtesy amount recognition). The open issues for future research are stated, and areas that need improvement are presented. To the best of our knowledge, this is the first survey of Arabic bank check processing.

Internet of Things: Objectives and Scientific Challenges. Hua-Dong Ma (马华东), Member, CCF, ACM, IEEE. 2011, 26(6): 919-924. DOI:10.1007/s11390-011-1189-5
The Internet of Things (IoT) aims at enabling the interconnection and integration of the physical world and cyberspace. It represents the trend of future networking and leads the third wave of the IT industry revolution. In this article, we first introduce the background and related technologies of IoT and discuss its concepts and objectives. Then, we present the challenges and key scientific problems involved in IoT development. Moreover, we introduce the current research project supported by the National Basic Research Program of China (973 Program). Finally, we outline future research directions.
Tag-Aware Recommender Systems: A State-of-the-Art Survey. Zi-Ke Zhang (张子柯), Tao Zhou (周涛), and Yi-Cheng Zhang (张翼成). 2011, 26(5): 767-777. DOI:10.1007/s11390-011-0176-1
In the past decade, social tagging systems have attracted increasing attention from both the physics and computer science communities. Besides studying the underlying structure and dynamics of tagging systems, many efforts have been made to exploit tagging information to reveal user behaviors and preferences, extract latent semantic relations among items, make recommendations, and so on. Specifically, this article summarizes recent progress on tag-aware recommender systems, emphasizing the contributions of three mainstream perspectives and approaches: network-based methods, tensor-based methods, and topic-based methods. Finally, we outline some other tag-related studies and future challenges for tag-aware recommendation algorithms.
Personalized News Recommendation: A Review and an Experimental Investigation. LI Lei, WANG Ding-Ding, SHU Shun-Zhi, LI Chao. 2011, 26(5): 754-766. DOI:10.1007/s11390-011-0175-2
Online news articles, as a new format of press releases, have sprung up on the Internet. For convenience and recency, more and more people prefer to read news online instead of in paper-format press releases. However, news events may be released at a rate of hundreds, or even thousands, per hour. A challenging problem is how to efficiently select, from a large corpus of newly published press releases, the specific news articles to recommend to an individual reader, such that the selected items match the reader's preferences as closely as possible. This is the problem of personalized news recommendation. Recently, personalized news recommendation has become a promising research direction, as the Internet provides fast access to real-time information from multiple sources around the world. Existing personalized news recommender systems strive to adapt their services to individual users by exploiting both user and news content information. A variety of techniques have been proposed to tackle personalized news recommendation, including content-based and collaborative filtering systems and hybrids of the two. In this paper, we provide a comprehensive investigation of existing personalized news recommenders. We discuss several essential issues underlying the problem of personalized news recommendation and explore possible solutions for performance improvement. Further, we provide an empirical study on a collection of news articles obtained from various news websites and evaluate the effect of different factors on personalized news recommendation. We hope our discussion and exploration provide insights for researchers interested in personalized news recommendation.
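The content-based technique surveyed in the abstract above can be illustrated with a minimal sketch (a toy example, not code from the paper; the whitespace tokenization, sample data, and function names are all hypothetical choices): each candidate article is scored by the cosine similarity between its term-frequency vector and a profile aggregated from the user's reading history.

```python
from collections import Counter
from math import sqrt

def tf_vector(text):
    """Term-frequency vector of a document (toy whitespace tokenization)."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = sqrt(sum(c * c for c in u.values()))
    nv = sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(profile_texts, candidates, k=2):
    """Rank candidate articles by similarity to the user's reading history."""
    profile = tf_vector(" ".join(profile_texts))
    scored = sorted(candidates, key=lambda a: cosine(profile, tf_vector(a)),
                    reverse=True)
    return scored[:k]

history = ["market stocks economy rally", "economy inflation interest rates"]
articles = ["stocks slide as economy cools",
            "local team wins championship game",
            "interest rates and inflation outlook"]
print(recommend(history, articles, k=2))
```

A collaborative filtering recommender would instead score articles by the reading behavior of similar users; the hybrids mentioned in the abstract combine both signals.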
Overview of Center for Domain-Specific Computing. Jason Cong (丛京生). 2011, 26(4): 632-635. DOI:10.1007/s11390-011-1163-2
In this short article, we would like to introduce the Center for Domain-Specific Computing (CDSC), established in 2009 and primarily funded by the US National Science Foundation with an award from the 2009 Expeditions in Computing Program. In this project we look beyond parallelization and focus on customization as the next disruptive technology to bring orders-of-magnitude power-performance efficiency improvement for applications in a specific domain.
New Methodologies for Parallel Architecture. Dong-Rui Fan (范东睿), Member, CCF, IEEE, Xiao-Wei Li (李晓维), and Guo-Jie Li (李国杰), Fellow, CCF. 2011, 26(4): 578-587. DOI:10.1007/s11390-011-1158-z
Moore's law continues to grant computer architects ever more transistors for the foreseeable future, and parallelism is the key to continued performance scaling in modern microprocessors. In this paper, we systematically present the achievements of our research project on parallel architecture, supported by the National Basic Research 973 Program of China. The innovative approaches and techniques for solving significant problems in parallel architecture design are summarized, including architecture-level optimization, compiler- and language-supported technologies, reliability, power-performance-efficient design, test and verification challenges, and platform building. Two prototype chips, the multi-heavy-core Godson-3 and the many-light-core Godson-T, are described to demonstrate highly scalable and reconfigurable parallel architecture designs. We also present some of our achievements appearing in ISCA, MICRO, ISSCC, HPCA, PLDI, PACT, IJCAI, Hot Chips, DATE, IEEE Trans. VLSI, IEEE Micro, IEEE Trans. Computers, etc.
The Godson Processors: Its Research, Development, and Contributions. Wei-Wu Hu (胡伟武), Senior Member, CCF, Yan-Ping Gao (高燕萍), Member, CCF, Tian-Shi Chen (陈天石), and Jun-Hua Xiao (肖俊华), Member, CCF. 2011, 26(3): 363-372. DOI:10.1007/s11390-011-1139-2
The Godson project, with an R&D history of 10 years, is an independent national program of China that aims at developing advanced microprocessor technologies through fundamental research and commercialization of the chip technology. We give a comprehensive presentation of the Godson project, including its history, technical roadmaps, and several unique technical merits.
Gradient Domain Mesh Deformation - A Survey. Wei-Wei Xu and Kun Zhou. 2009, 24(1): 6-18.
This survey reviews the recent development of gradient domain mesh deformation methods. Different from other deformation methods, the gradient domain deformation method is a surface-based, variational optimization method. It directly encodes the geometric details in differential coordinates, also called Laplacian coordinates in the literature. By preserving the Laplacian coordinates, the mesh details can be well preserved during deformation. Due to the locality of the Laplacian coordinates, the variational optimization problem can be cast into a sparse linear system, and a fast sparse linear solver can be adopted to generate deformation results interactively, or even in real time. The nonlinear nature of gradient domain mesh deformation has led to two categories of deformation methods: linearization methods and nonlinear optimization methods. Basically, the linearization methods only need to solve the linear least-squares system once. They are fast and easy to understand and control, although the deformation result might be suboptimal. Nonlinear optimization methods can reach an optimal solution of the deformation energy function by iterative updating. Since the computation of nonlinear methods is expensive, reduced deformable models should be adopted to achieve interactive performance. The nonlinear optimization methods relieve the user of the burden of specifying transformations at deformation handles, and they can be extended to incorporate various nonlinear constraints, such as volume and skeleton constraints. We review representative methods and related approaches of each category comparatively, hoping to help the reader understand the motivation behind the algorithms. Finally, we discuss the relation between physical simulation and gradient domain mesh deformation to reveal why the latter can achieve physically plausible deformation results.
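The linearization approach described in the abstract above — preserve the Laplacian coordinates in a least-squares sense while constraining handle vertices — can be sketched in a few lines of NumPy. This is an illustrative toy on a 2D polyline with uniform Laplacian weights and soft handle constraints, not the implementation of any surveyed method; the handle positions and constraint weight are arbitrary choices.

```python
import numpy as np

# Toy 2D "mesh": a polyline of 5 vertices on a straight line.
V = np.array([[0., 0.], [1., 0.], [2., 0.], [3., 0.], [4., 0.]])
n = len(V)

# Uniform Laplacian of interior vertices: L v_i = v_i - mean(neighbors).
L = np.zeros((n - 2, n))
for i in range(1, n - 1):
    L[i - 1, i] = 1.0
    L[i - 1, i - 1] = -0.5
    L[i - 1, i + 1] = -0.5
delta = L @ V            # differential (Laplacian) coordinates to preserve

# Handle constraints: pin vertex 0 in place, move vertex 4 upward.
C = np.zeros((2, n))
C[0, 0] = 1.0
C[1, n - 1] = 1.0
targets = np.array([[0., 0.], [4., 2.]])

# Stack weighted soft constraints under the Laplacian rows and solve
# the sparse linear least-squares system once (the linearization step).
w = 10.0                 # constraint weight, chosen arbitrarily
A = np.vstack([L, w * C])
b = np.vstack([delta, w * targets])
V_def, *_ = np.linalg.lstsq(A, b, rcond=None)
print(V_def.round(3))
```

Real systems factor the sparse normal equations once (e.g., by Cholesky) and back-substitute per edit, which is what makes interactive rates possible.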
Survey on Anonymity in Unstructured Peer-to-Peer Systems. Ren-Yi Xiao. 2008, 23(4): 660-671.
Although anonymizing peer-to-peer (P2P) networks often incurs extra cost in terms of transfer efficiency, many systems try to mask the identities of their users for privacy considerations. Through comparison and analysis of existing approaches, we investigate the properties of unstructured P2P anonymity and summarize current attack models on these designs. Most of these approaches are path-based: they require peers to pre-construct anonymous paths before transmission, and thus suffer significant overhead and poor reliability. We also discuss the open problems in this field and propose several future research directions.
Middleware for Wireless Sensor Networks: A Survey. Miao-Miao Wang, Jian-Nong Cao, Jing Li, and Sajal K. Das. 2008, 23(3): 305-326.
Wireless sensor networks (WSNs) have found more and more applications in a variety of pervasive computing environments. However, supporting the development, maintenance, deployment, and execution of applications over WSNs remains a nontrivial and challenging task, mainly because of the gap between the high-level requirements of pervasive computing applications and the low-level operation of WSNs. Middleware for WSNs can help bridge this gap and remove impediments. In recent years, research has been carried out on WSN middleware from different aspects and for different purposes. In this paper, we provide a comprehensive review of the existing work on WSN middleware, seeking a better understanding of the current issues and future directions in this field. We propose a reference framework to analyze the functionalities of WSN middleware in terms of the system abstractions and the services provided, and we review the approaches and techniques for implementing the services. On the basis of this analysis, and using a feature tree, we provide a taxonomy of the features of WSN middleware and their relationships, and we use the taxonomy to classify and evaluate existing work. We also discuss open problems in this important area of research.
Computational Mechanisms for Metaphor in Languages: A Survey. Chang-Le Zhou, Yun Yang, and Xiao-Xi Huang. 2007, 22(2): 308-319.
Metaphor computation has attracted more and more attention because metaphor is, to some extent, the focus of the mechanisms of mind and language. However, it encounters problems due not only to the rich expressive power of natural language but also to the cognitive nature of human beings. Machine understanding of metaphor is therefore becoming a bottleneck in natural language processing and machine translation. This paper first suggests how a metaphor is understood and then presents a survey of current computational approaches, in terms of their linguistic historical roots, underlying foundations, methods and techniques currently used, advantages, limitations, and future trends. A comparison between metaphors in the English and Chinese languages is also introduced because, compared with its development in English, Chinese metaphor computation is just at its starting stage; a separate summary of current progress in Chinese metaphor computation is therefore presented. In conclusion, a few suggestions are proposed for further research on metaphor computation, especially Chinese metaphor computation.
Beyond Knowledge Engineering. Ru-Qian Lu and Zhi Jin. 2006, 21(5): 790-799.
Knowledge engineering stems from E. A. Feigenbaum's proposal in 1977, but it will enter a new decade with new challenges. This paper first summarizes three knowledge engineering experiments we have undertaken to show the possibility of separating knowledge development from intelligent software development. We call this the ICAX mode of intelligent application software generation. The key to this mode is to generate the knowledge base, which is the source of intelligence of ICAX software, independently of and in parallel with intelligent software development. This gives birth to a new and more general concept, "knowware". Knowware is a commercialized knowledge module with documentation and intellectual property; it is computer operable but free of any built-in control mechanism, meets some industrial standards, and is embeddable in software/hardware. The process of developing, applying, and managing knowware is called knowware engineering. Two different knowware life cycle models are discussed: the furnace model and the crystallization model. Knowledge middleware is a class of software functioning in all aspects of the knowware life cycle models. Finally, this paper presents some examples of building knowware in the domain of information system engineering.
Some Issues on Computer Networks: Architecture and Key Technologies. Guan-Qun Gu and Jun-Zhou Luo. 2006, 21(5): 708-722.
The evolution of computer networks has gone through several major steps, and the research focus of each step has kept changing and evolving: from ARPANET to OSI/RM, then to HSN (high-speed network) and HPN (high-performance network). During this evolution, computer networks, represented by the Internet, have made great progress and gained unprecedented success. However, with the appearance and intensification of tussles, along with the three difficult problems of modern networks (service customizing, resource control, and user management), it has become clear that the traditional Internet and its architecture no longer meet the requirements of the next-generation network, into which the current Internet must evolve. To provide valuable guidance for research on the next-generation network, this paper first analyzes some dilemmas facing the current Internet and its architecture, and then surveys recent influential research work and progress in computer networks and related areas, including new-generation network architecture, network resource control technologies, network management and security, distributed computing and middleware, wireless/mobile networks, new-generation network services and applications, and foundational theories of network modeling. Finally, this paper concludes that research on the next-generation network should pay more attention to the high-availability network and its corresponding architecture, key theories, and supporting technologies.
Study on Parallel Computing. Guo-Liang Chen, Guang-Zhong Sun, Yun-Quan Zhang, and Ze-Yao Mo. 2006, 21(5): 665-673.
In this paper, we present a general survey on parallel computing. The main contents include the parallel computer system, which is the hardware platform of parallel computing; the parallel algorithm, which is its theoretical base; and parallel programming, which is its software support. After that, we also introduce some parallel applications and enabling technologies. We argue that parallel computing research should form an integrated methodology of "architecture --- algorithm --- programming --- application". Only in this way can parallel computing research develop continuously and stay realistic.
Trends in Computing with DNA. Natasa Jonoska. 2004, 19(1): 0-0.
As an emerging research area, DNA computation, or more generally biomolecular computation, extends into other fields such as nanotechnology and material design, and is developing into a new sub-discipline of science and engineering. This paper provides a brief survey of some concepts and developments in this area. In particular, several approaches are described for biomolecular solutions of the satisfiability problem (using bit strands, DNA tiles, and graph self-assembly). Theoretical models such as the primer splicing systems, as well as the recent model of forbidding and enforcing, are also described. We review some experimental results on the self-assembly of DNA nanostructures and nanomechanical devices, as well as the design of an autonomous finite state machine.
The Haplotyping Problem: An Overview of Computational Models and Solutions. Paola Bonizzoni, Gianluca Della Vedova, Riccardo Dondi, and Jing Li. 2003, 18(6): 0-0.
The investigation of genetic differences among humans has given evidence that mutations in DNA sequences are responsible for some genetic diseases. The most common mutation is one that involves only a single nucleotide of the DNA sequence, called a single nucleotide polymorphism (SNP). As a consequence, computing a complete map of all SNPs occurring in human populations is one of the primary goals of recent studies in human genomics. The construction of such a map requires determining the DNA sequences that form all chromosomes. In diploid organisms like humans, each chromosome consists of two sequences called haplotypes. Distinguishing the information contained in both haplotypes when analyzing chromosome sequences poses several new computational issues, which collectively form a new emerging topic of computational biology known as haplotyping. This paper is a comprehensive study of some new combinatorial approaches proposed in this research area, and it mainly focuses on the formulations and algorithmic solutions of some basic biological problems. Three statistical approaches are briefly discussed at the end of the paper.
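The central combinatorial difficulty sketched in the abstract above — a genotype with heterozygous SNP sites is consistent with many haplotype pairs — can be made concrete with a toy enumeration (an illustrative sketch of my own; the 0/1/2 encoding and the function name are hypothetical, not from the paper). A genotype with k heterozygous sites admits 2^(k-1) distinct unordered haplotype pairs, which is why extra criteria such as parsimony or pedigree data are needed to pick one.

```python
from itertools import product

def haplotype_pairs(genotype):
    """Enumerate all haplotype pairs consistent with a genotype.
    Per-site encoding (hypothetical): 0 = homozygous 0/0,
    1 = homozygous 1/1, 2 = heterozygous (one chromosome has 0, the other 1)."""
    het = [i for i, g in enumerate(genotype) if g == 2]
    pairs = []
    # Each heterozygous site can be phased two ways; fixing the phase of the
    # first such site removes the swap symmetry, leaving 2^(k-1) distinct pairs.
    for bits in product([0, 1], repeat=len(het)):
        if het and bits[0] == 1:
            continue  # skip mirror-image assignments of the same pair
        h1 = tuple(g if g != 2 else bits[het.index(i)]
                   for i, g in enumerate(genotype))
        h2 = tuple(g if g != 2 else 1 - bits[het.index(i)]
                   for i, g in enumerate(genotype))
        pairs.append((h1, h2))
    return pairs

# Two heterozygous sites -> 2^(2-1) = 2 consistent phasings.
print(haplotype_pairs([0, 2, 1, 2]))
```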
An Overview of Duration Calculus. Zhou Chaochen. 1998, 13(6): 552-null.
ISSN 1000-9000 (Print), 1860-4749 (Online); CN 11-2296/TP
Journal of Computer Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, P.O. Box 2704, Beijing 100190, P.R. China
Tel.: 86-10-62610746; E-mail: jcst@ict.ac.cn