Bimonthly    Since 1986
ISSN 1000-9000(Print)
/1860-4749(Online)
CN 11-2296/TP
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal Of Computer Science and Technology
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
Other Countries: Springer
 
  • Table of Contents
      05 November 2015, Volume 30 Issue 6
    Special Section on Networking and Distributed Computing for Big Data
    Preface
    Wenwu Zhu, Yonggang Wen, Zhi Wang
    Journal of Computer Science and Technology, 2015, 30 (6): 1161-1162.  DOI: 10.1007/s11390-015-1590-6
    Research on computer network and distributed systems has been driven by emerging technologies and applications.
    Big data has emerged as an important research topic and application domain, and has been known for its
    volume, velocity, variety, and veracity. It challenges conventional designs in computer networking and distributed
    computing, ranging from transmission and storage to processing and system architecture. This special section is an effort to
    promote research to address the big data challenges from the networking and distributed computing perspectives.
    The goal of this special section is to present the state-of-the-art and high-quality research papers in the area
    of computer networking and distributed computing for big data, with topics including network architectures to
    support big data analytics, distributed systems for big data, parallel and distributed algorithms for big data, etc.
    .......
    Towards Cost-Effective Cloud Downloading with Tencent Big Data
    Zhen-Hua Li, Gang Liu, Zhi-Yuan Ji, Roger Zimmermann
    Journal of Computer Science and Technology, 2015, 30 (6): 1163-1174.  DOI: 10.1007/s11390-015-1591-5
    The cloud downloading scheme, first proposed by us in 2011, has effectively optimized hundreds of millions of users' downloading experiences. Moreover, people have started to build a variety of useful Internet services on top of cloud downloading. In brief, by using cloud facilities to download (and cache) the requested file from the “best-effort” Internet on behalf of the user, cloud downloading ensures data availability and remarkably enhances the data delivery speed. Although this scheme seems simple and straightforward, designing a real-world cloud downloading system involves complicated and subtle trade-offs (between deployment cost and user experience) when serving a large number of users: 1) how to plan the cloud cache capacity to achieve a high and affordable cache hit ratio, 2) how to accelerate the data delivery from the cloud to numerous users, 3) how to handle the dense user requests for highly popular files, and 4) how to judge a potential downloading failure of the cloud. This paper addresses these design trade-offs from a practical perspective, based on big data from a nationwide commercial cloud downloading system, i.e., Tencent QQXuanfeng. Its running traces help us find reasonable design strategies and parameters, and its real-world performance confirms the efficacy of our design. Our study provides solid experiences and valuable heuristics for the developers of similar and relevant systems.
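The first trade-off above — planning cloud cache capacity for a target hit ratio — can be explored with a toy simulation. The sketch below is not Tencent's system: it assumes a hypothetical LRU cache and Zipf-distributed file popularity (the `zipf_s`, `cache_size`, and request counts are illustrative) just to show how the hit ratio grows with cache capacity.

```python
import itertools
import random
from collections import OrderedDict

def lru_hit_ratio(num_files, cache_size, num_requests, zipf_s=0.8, seed=42):
    """Simulate an LRU file cache under Zipf-distributed popularity and
    return the fraction of requests served from the cache."""
    rng = random.Random(seed)
    # Zipf weights: file i is requested with weight proportional to 1/(i+1)^s.
    cum = list(itertools.accumulate(1.0 / (i + 1) ** zipf_s
                                    for i in range(num_files)))
    cache = OrderedDict()                  # keys ordered by recency of use
    hits = 0
    for _ in range(num_requests):
        f = rng.choices(range(num_files), cum_weights=cum)[0]
        if f in cache:
            hits += 1
            cache.move_to_end(f)           # mark as most recently used
        else:
            cache[f] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict the least recently used
    return hits / num_requests

# A larger cache should serve a visibly larger share of requests.
small = lru_hit_ratio(10_000, 100, 20_000)
large = lru_hit_ratio(10_000, 2_000, 20_000)
```

Sweeping `cache_size` in such a model gives the kind of hit-ratio-versus-cost curve that capacity planning weighs.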
    Quantifying the Influence of Websites Based on Online Collective Attention Flow
    Yong Li, Jiang Zhang, Xiao-Feng Meng, Chang-Qing Wang
    Journal of Computer Science and Technology, 2015, 30 (6): 1175-1187.  DOI: 10.1007/s11390-015-1592-4
    The availability of network big data, such as those from online users' surfing records, communication records, and e-commerce records, makes it possible for us to probe into and quantify the regular patterns of users' long-range and complex interactions between websites. If we see the Web as a virtual living organism, according to the metabolic theory, the websites must absorb “energy” to grow, reproduce, and develop. We are interested in the following two questions: 1) where does the “energy” come from? 2) will the websites generate macro influence on the whole Web based on the “energy”? Our data consist of more than 30 000 online users' surfing log data from China Internet Network Information Center. We treat influence as metabolism and users' attention flow as the energy for the websites. We study how collective attention distributes and flows among different websites by the empirical attention flow network. Different from traditional studies which focused on information flow, we study users' attention flow, which is not only a “reversed” way to study Web structure and transmission mode, but also the first step to understand the underlying dynamics of the World Wide Web. We find that the macro influence of websites scales sub-linearly against the collective attention flow dwelling time, which is not consistent with the heuristic that the longer users dwell on a website, the greater its influence will be. Further analysis finds a super-linear scaling relationship between the influence of websites and the attention flow intensity. This is a website version of Kleiber's law. We further notice that the development cycle of the websites can be split into three phases: the uncertain growth phase, the partially accelerating growth phase, and the fully accelerating growth phase. We also find that compared with the widespread hyperlink analysis models, the attention flow network is an effective theoretical tool to estimate and rank websites.
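The sub-linear and super-linear scaling relationships above are power laws of the form y ≈ a·x^b, whose exponent b is conventionally estimated by least squares on log-transformed data. A minimal sketch of that standard technique (not the paper's estimation procedure, which works on the empirical attention flow network):

```python
import math

def scaling_exponent(x, y):
    """Least-squares estimate of b in y ~ a * x**b on log-log axes:
    b < 1 means sub-linear scaling, b > 1 super-linear."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    mx = sum(lx) / len(lx)
    my = sum(ly) / len(ly)
    num = sum((a - mx) * (c - my) for a, c in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic check: data generated from y = 2 * x**0.75 (sub-linear)
# should recover the exponent 0.75.
xs = [10, 100, 1000, 10000]
ys = [2 * v ** 0.75 for v in xs]
b = scaling_exponent(xs, ys)
```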
    From Interest to Location: Neighbor-Based Friend Recommendation in Social Media
    Jin-Qi Zhu, Li Lu, Chun-Mei Ma
    Journal of Computer Science and Technology, 2015, 30 (6): 1188-1200.  DOI: 10.1007/s11390-015-1593-3
    Recent years have witnessed the tremendous development of social media, which attracts a vast number of Internet users. The tweets these users posted provide an effective way of understanding user behaviors. A large amount of previous work benefits from mining user interest to make friend recommendation. However, the potentially strong but inconspicuous relation between location and interest interaction among social media users is overlooked in these studies. Different from previous research, we propose a new concept named neighbor-based friend recommendation (NBFR) to improve the friend recommendation results. By recommending surrounding users who have similar interest to each other, social media users are provided a unique opportunity to interact with surrounding people they may want to know. Based on this concept, we first mine users' interest from short tweets, and then propose to model the user interest with multiple topics under the hypercube structure for friend recommendation. At the same time, we also offer a topic matching shortcut algorithm for more extensive recommendation. The evaluations using data gathered from real users demonstrate the advantage of NBFR compared with the traditional recommendation approaches.
    Enhancing Telco Service Quality with Big Data Enabled Churn Analysis: Infrastructure, Model, and Deployment
    Hui Li, Di Wu, Gao-Xiang Li, Yi-Hao Ke, Wen-Jie Liu, Yuan-Huan Zheng, Xiao-La Lin
    Journal of Computer Science and Technology, 2015, 30 (6): 1201-1214.  DOI: 10.1007/s11390-015-1594-2
    The penetration of mobile phones is nearly saturated in both developing and developed regions. In such a circumstance, how to prevent subscriber churn has become an important issue for today's telecom operators, as the cost to acquire a new subscriber is much higher than that to retain an existing subscriber. In this paper, we propose to leverage the power of big data to mitigate the problem of subscriber churn and enhance the service quality of telecom operators. As the information hub, telecom operators have accumulated a huge volume of valuable data on subscriber behaviors, service usage, and network operations. To enable efficient big data processing, we first build a dedicated distributed cloud infrastructure that integrates both online and offline processing capabilities. Second, we develop a complete churn analysis model based on deep data mining techniques, and utilize inter-subscriber influence to improve prediction accuracy. Finally, we use real datasets obtained from a large telecom operator in China to verify the accuracy of our churn analysis models. The dataset contains the information of over 3.5 million subscribers, which generate over 600 million call detail records (CDRs) per month. The empirical results demonstrate that our proposed method can achieve around 90% accuracy for T + 1 testing periods and identify subscribers with high negative influence successfully.
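The inter-subscriber influence idea can be pictured with a hypothetical score over the CDR call graph: a subscriber whose contacts have already churned gets a higher churn-risk score. This is only an illustrative proxy, not the paper's deep-data-mining model; the edge format and the score definition are assumptions made here.

```python
from collections import defaultdict

def influence_scores(cdr_edges, churned):
    """Score each active subscriber by the fraction of call-graph contacts
    who have already churned (a crude inter-subscriber influence proxy)."""
    contacts = defaultdict(set)
    for caller, callee in cdr_edges:          # treat calls as undirected ties
        contacts[caller].add(callee)
        contacts[callee].add(caller)
    scores = {}
    for user, peers in contacts.items():
        if user in churned:
            continue                          # only score still-active users
        scores[user] = sum(1 for p in peers if p in churned) / len(peers)
    return scores

# Hypothetical CDR edges (caller, callee); "b" and "c" have already churned.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("d", "a")]
scores = influence_scores(edges, churned={"b", "c"})
```

In this toy graph, "a" (two of three contacts churned) scores far higher than "d" (no churned contacts), which is the signal a churn model can exploit as a feature.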
    SEIP: System for Efficient Image Processing on Distributed Platform
    Tao Liu, Yi Liu, Qin Li, Xiang-Rong Wang, Fei Gao, Yan-Chao Zhu, De-Pei Qian
    Journal of Computer Science and Technology, 2015, 30 (6): 1215-1232.  DOI: 10.1007/s11390-015-1595-1
    Nowadays, there exist numerous images on the Internet, and with the development of cloud computing and big data applications, many of those images need to be processed for different kinds of applications by using specific image processing algorithms. Meanwhile, there already exist many kinds of image processing algorithms and their variations, while new algorithms are still emerging. Consequently, an ongoing problem is how to improve the efficiency of massive image processing and support the integration of existing implementations of image processing algorithms into the systems. This paper proposes a distributed image processing system named SEIP, which is built on Hadoop, and employs an extensible in-node architecture to support various kinds of image processing algorithms on distributed platforms with GPU accelerators. The system also uses a pipeline-based framework to accelerate massive image file processing. A demonstration application for image feature extraction is designed. The system is evaluated in a small-scale Hadoop cluster with GPU accelerators, and the experimental results show the usability and efficiency of SEIP.
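The pipeline idea can be sketched with Python generators: each stage consumes the previous stage's stream, so loading and processing of different images overlap in a streaming fashion. The stage names and stand-in payloads below are hypothetical, not SEIP's actual API.

```python
def pipeline(stages, items):
    """Chain stages as generators so items flow through all stages
    one by one instead of being processed in bulk per stage."""
    stream = iter(items)
    for stage in stages:
        stream = stage(stream)
    return list(stream)

def load(stream):
    for name in stream:
        yield (name, "pixels:" + name)      # stand-in for reading the file

def extract(stream):
    for name, data in stream:
        yield (name, "feature:" + data)     # stand-in for a GPU feature kernel

out = pipeline([load, extract], ["img1", "img2"])
```

In a real system each stage would run on its own worker (or GPU) with queues between stages; the generator chain only illustrates the dataflow structure.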
    An Efficient Algorithm for Distributed Outlier Detection in Large Multi-Dimensional Datasets
    Xi-Te Wang, De-Rong Shen, Mei Bai, Tie-Zheng Nie, Yue Kou, Ge Yu
    Journal of Computer Science and Technology, 2015, 30 (6): 1233-1248.  DOI: 10.1007/s11390-015-1596-0
    The distance-based outlier is a widely used definition of outlier. A point is distinguished as an outlier on the basis of the distances to its nearest neighbors. In this paper, to solve the problem of outlier computing in distributed environments, DBOZ, a distributed algorithm for distance-based outlier detection using Z-curve hierarchical tree (ZH-tree) is proposed. First, we propose a new index, ZH-tree, to effectively manage the data in a distributed environment. ZH-tree has two desirable advantages, including a clustering property to help search the neighbors of a point, and a hierarchical structure to support space pruning. We also design a bottom-up approach to build ZH-tree in parallel, whose time complexity is linear in the number of dimensions and the size of the dataset. Second, DBOZ is proposed to compute outliers in distributed environments. It consists of two stages. 1) To avoid calculating the exact nearest neighbors of all the points, we design a greedy method and a new ZH-tree based k-nearest neighbor searching algorithm (ZHkNN for short) to obtain a threshold LW. 2) We propose a filter-and-refine approach, which first filters out the unpromising points using LW, and then outputs the final outliers through refining the remaining points. Finally, the efficiency and the effectiveness of ZH-tree and DBOZ are verified through a series of experiments.
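The Z-curve underlying ZH-tree maps multi-dimensional points to one dimension by interleaving coordinate bits, so points nearby in space tend to be nearby on the curve — the clustering property the index exploits. A minimal sketch of the standard Morton encoding (the tree construction itself is omitted):

```python
def z_value(coords, bits=8):
    """Morton/Z-curve value: interleave coordinate bits, most significant
    first, so spatially close points tend to be close on the curve."""
    z = 0
    for bit in range(bits - 1, -1, -1):
        for c in coords:
            z = (z << 1) | ((c >> bit) & 1)
    return z

# Sorting 2-D points by Z-value groups the unit square's corners together
# and places the farther point (2, 2) last.
pts = [(2, 2), (1, 0), (0, 1), (1, 1), (0, 0)]
order = sorted(pts, key=z_value)
```

Sorting by `z_value` is also what makes a parallel bottom-up build convenient: each node can sort its local points and form curve-contiguous blocks independently.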
    Computer Networks and Distributed Computing
    Infrastructure-Free Floor Localization Through Crowdsourcing
    Hai-Bo Ye, Tao Gu, Xian-Ping Tao, Jian Lv
    Journal of Computer Science and Technology, 2015, 30 (6): 1249-1273.  DOI: 10.1007/s11390-015-1597-z
    Mobile phone localization plays a key role in the fast-growing location-based applications domain. Most of the existing localization schemes rely on infrastructure support such as GSM, Wi-Fi or GPS. In this paper, we present FTrack, a novel floor localization system to identify the floor level in a multi-floor building on which a mobile user is located. FTrack uses the mobile phone's sensors only without any infrastructure support. It does not require any prior knowledge of the building such as floor height or floor levels. Through crowdsourcing, FTrack builds a mapping table which contains the magnetic field signature of users taking the elevator/escalator or walking on the stairs between any two floors. The table can then be used for mobile users to pinpoint their current floor levels. We conduct both simulation and field studies to demonstrate the efficiency, scalability and robustness of FTrack. Our field trial shows that FTrack achieves an accuracy of over 96% in three different buildings.
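The core lookup in such a mapping table can be sketched as nearest-signature matching: resample the observed magnetic sequence and the stored signatures to a common length, then pick the closest floor pair. This is a simplified stand-in for FTrack's actual matching; the distance measure, table format, and the sample values are assumptions made here.

```python
def resample(sig, n):
    """Linearly resample a signal to n points so signatures recorded over
    different durations can be compared sample by sample."""
    if len(sig) == 1:
        return [sig[0]] * n
    out = []
    for i in range(n):
        pos = i * (len(sig) - 1) / (n - 1)
        lo = int(pos)
        hi = min(lo + 1, len(sig) - 1)
        frac = pos - lo
        out.append(sig[lo] * (1 - frac) + sig[hi] * frac)
    return out

def closest_floor_pair(observed, table, n=50):
    """Return the (from_floor, to_floor) key whose stored magnetic
    signature has the smallest Euclidean distance to the observation."""
    obs = resample(observed, n)
    def dist(key):
        ref = resample(table[key], n)
        return sum((a - b) ** 2 for a, b in zip(obs, ref))
    return min(table, key=dist)

# Hypothetical crowdsourced table of magnetic signatures (in microtesla).
table = {(1, 2): [40, 42, 45, 47], (2, 3): [47, 45, 42, 40]}
match = closest_floor_pair([40.5, 42.2, 44.8, 46.9], table)
```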
    A Family of Stable Multipath Dual Congestion Control Algorithms
    Ying Liu, Hong-Ying Liu, Ke Xu, Meng Shen
    Journal of Computer Science and Technology, 2015, 30 (6): 1274-1289.  DOI: 10.1007/s11390-015-1598-y
    We consider the problem of multipath congestion control in the Internet. The aim is to take advantage of multiple paths diversity to achieve efficient bandwidth allocation and improve network efficiency. But there exist some potential difficulties when one directly uses the well-known network utility maximization model to design stable multipath congestion control algorithms for the alternative paths. In this paper, we propose a generalized multipath utility maximization model to consider the problem of joint routing and rate control, which can be reduced to specific models with different parameter settings. And then we develop a family of multipath dual congestion control algorithms which are stable in the absence of delays. We also derive decentralized and scalable sufficient conditions for a particular scheme when propagation delays exist in networks. The simulation results show that the proposed multipath dual congestion control algorithms with appropriate parameter settings can achieve stable resource shares while maintaining fairness among the involved users.
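A classical dual decomposition in the spirit of the model above (though not the paper's exact algorithm) updates link prices by gradient steps on the capacity constraints, while the source, given current path prices, places its rate on the cheapest path; the time-averaged per-path rates then approach the optimal split. A toy sketch with logarithmic utility and two hypothetical links `l1`, `l2`:

```python
def multipath_dual(paths, capacity, w=1.0, iters=20000, step=0.002):
    """Toy dual algorithm: link prices follow the capacity-constraint
    gradient; given path prices q, the source sends w/q on the cheapest
    path (the maximizer of w*log(X) - sum_i q_i * x_i)."""
    price = {l: 0.5 for l in capacity}           # initial link prices
    avg = [0.0] * len(paths)                     # running average of rates
    for t in range(iters):
        q = [max(sum(price[l] for l in p), 1e-6) for p in paths]
        best = min(range(len(paths)), key=lambda i: q[i])
        x = [0.0] * len(paths)
        x[best] = w / q[best]
        for i in range(len(paths)):
            avg[i] += (x[i] - avg[i]) / (t + 1)  # average out the chattering
        for l in capacity:                       # dual (sub)gradient step
            load = sum(x[i] for i, p in enumerate(paths) if l in p)
            price[l] = max(0.0, price[l] + step * (load - capacity[l]))
    return avg, price

# One flow over two disjoint single-link paths with capacities 1 and 2:
# averaged rates should approach [1, 2] and each link price w/3 = 1/3.
avg, price = multipath_dual([["l1"], ["l2"]], {"l1": 1.0, "l2": 2.0})
```

The instantaneous rate chatters between paths — exactly the kind of oscillation that motivates the stability analysis in the paper — while the running average settles at the fair allocation.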
    Throughput Optimization in Cognitive Radio Networks Ensembling Physical Layer Measurement
    Yan-Chao Zhao, Jie Wu, Sang-Lu Lu
    Journal of Computer Science and Technology, 2015, 30 (6): 1290-1305.  DOI: 10.1007/s11390-015-1599-x
    Wireless networks are developed under the fashion of wider spectrum utilization (e.g., cognitive radio) and multi-hop communication (e.g., wireless mesh networks). In these paradigms, how to effectively allocate the spectrum to different transmission links with minimized mutual interference becomes the key concern. In this paper, we study the throughput optimization via spectrum allocation in cognitive radio networks (CRNs). The previous studies incorporate either the conflict graph or the SINR model to characterize the interference relationship. However, the former model neglects the accumulative interference effect and leads to unwanted interference and sub-optimal results, while work based on the latter model relies heavily on the accuracy of the estimated RSS (receiving signal strength) among all potential links. Both are inadequate to characterize the complex relationship between interference and throughput. To this end, by considering the features of CRs, like spectrum diversity and non-continuous OFDM, we propose a measurement-assisted SINR-based cross-layer throughput optimization solution. Our work concerns features in different layers: in the physical layer, we present an efficient RSS estimation algorithm to improve the accuracy of the SINR model; in the upper layer, a flow level SINR-based throughput optimization problem for WMNs is modelled as a mixed integer non-linear programming problem which is proved to be NP-hard. To solve this problem, a centralized (1 - ε)-optimal algorithm and an efficient distributed algorithm are provided. To evaluate the algorithms' performance, real-world traces are used to illustrate the effectiveness of our scheme.
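The accumulative-interference point can be made concrete: a receiver's SINR divides the intended signal power by noise plus the sum of all active interferers' received power, so two interferers that are each tolerable in a pairwise (conflict-graph) check can jointly push the SINR below threshold. A small illustration with made-up RSS values:

```python
def sinr(rss, active_links, target, noise=1e-9):
    """SINR at `target`'s receiver with accumulated interference, where
    rss[(i, j)] is the power from link i's transmitter at link j's
    receiver (made-up linear-scale values, not dB)."""
    signal = rss[(target, target)]
    interference = sum(rss[(i, target)] for i in active_links if i != target)
    return signal / (noise + interference)

# Each interferer alone keeps link "a" above an SINR threshold of 2,
# but their accumulated interference pushes it below the threshold.
rss = {("a", "a"): 1.0, ("b", "a"): 0.3, ("c", "a"): 0.3}
alone = sinr(rss, ["a", "b"], "a")
together = sinr(rss, ["a", "b", "c"], "a")
```

A conflict graph built from pairwise checks would declare both coexistence patterns safe; the SINR model catches the accumulated case — which is why the accuracy of the underlying RSS estimates matters so much.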
    Cognitive Power Management in Wireless Sensor Networks
    Seyed Mehdi Tabatabaei, Vesal Hakami, Mehdi Dehghan
    Journal of Computer Science and Technology, 2015, 30 (6): 1306-1317.  DOI: 10.1007/s11390-015-1600-8
    Dynamic power management (DPM) in wireless sensor nodes is a well-known technique for reducing idle energy consumption. DPM controls a node's operating mode by dynamically toggling the on/off status of its units based on predictions of event occurrences. However, since each mode change induces some overhead in its own right, guaranteeing DPM's efficiency is no mean feat in environments exhibiting non-determinism and uncertainty with unknown statistics. Our solution suite in this paper, collectively referred to as cognitive power management (CPM), is a principled attempt toward enabling DPM in statistically unknown settings and gives two different analytical guarantees. Our first design is based on learning automata and guarantees better-than-pure-chance DPM in the face of non-stationary event processes. Our second solution caters for an even more general setting in which event occurrences may take on an adversarial character. In this case, we formulate the interaction of an individual mote with its environment in terms of a repeated zero-sum game in which the node relies on a no-external-regret procedure to learn its mini-max strategies in an online fashion. We conduct numerical experiments to measure the performance of our schemes in terms of network lifetime and event loss percentage.
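The learning-automata design can be illustrated with the classical linear reward-inaction (L_R-I) update, which shifts probability mass toward an action only when the environment rewards it. This is a generic textbook scheme shown on a made-up two-action (e.g., sleep vs. stay-awake) environment, not the paper's exact automaton:

```python
import random

def lri_update(probs, action, rewarded, lr=0.1):
    """Linear reward-inaction (L_R-I): on reward, move probability mass
    toward the chosen action; on no reward, leave the vector unchanged."""
    if not rewarded:
        return probs
    return [p + lr * (1 - p) if i == action else p * (1 - lr)
            for i, p in enumerate(probs)]

# Hypothetical stationary environment: action 0 is rewarded with
# probability 0.8, action 1 with probability 0.2.
rng = random.Random(1)
probs = [0.5, 0.5]
for _ in range(500):
    a = 0 if rng.random() < probs[0] else 1
    probs = lri_update(probs, a, rng.random() < [0.8, 0.2][a])
```

The update preserves a valid probability vector by construction, and under a stationary reward process the automaton tends to absorb on the better action — the "better-than-pure-chance" guarantee class the paper builds on for non-stationary settings.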
    Theory and Algorithms
    Privacy Petri Net and Privacy Leak Software
    Le-Jun Fan, Yuan-Zhuo Wang, Jing-Yuan Li, Xue-Qi Cheng, Chuang Lin
    Journal of Computer Science and Technology, 2015, 30 (6): 1318-1343.  DOI: 10.1007/s11390-015-1601-7
    Private information leak behavior has been widely discovered in malware and suspicious applications. We refer to such software as privacy leak software (PLS). Nowadays, PLS has become a serious and challenging problem to cyber security. Previous methodologies are of two categories: one focuses on the outbound network traffic of the applications; the other dives into the inside information flow of the applications. We present an abstract model called Privacy Petri Net (PPN) which is more applicable to various applications and more intuitive and vivid to users. We apply our approach to both malware and suspicious applications in the real world. The experimental result shows that our approach can effectively find the categories, content, procedure, destination and severity of the private information leaks for the target software.
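Ordinary Petri net semantics, on which a model like PPN builds, are compact to state: a transition is enabled when its input places hold enough tokens, and firing it consumes those tokens and produces tokens in its output places. The sketch below shows plain Petri net firing on a hypothetical read-then-send leak pattern; it is not the PPN formalism itself.

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from input places, produce tokens
    in output places, and return the new marking."""
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical leak pattern: private data is read, then sent to the network.
m0 = {"private_data": 1, "read": 0}
m1 = fire(m0, {"private_data": 1}, {"read": 1})
m2 = fire(m1, {"read": 1}, {"sent": 1})
```

A leak analysis over such a net asks reachability questions: can a marking with a token in a "sent" place be reached from a marking where only private sources hold tokens?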
    Understanding Sybil Groups in the Wild
    Jing Jiang, Zi-Fei Shan, Xiao Wang, Li Zhang, Ya-Fei Dai
    Journal of Computer Science and Technology, 2015, 30 (6): 1344-1357.  DOI: 10.1007/s11390-015-1602-6
    Sybil attacks are a well-known and powerful kind of attack against online social networks (OSNs). In a sybil attack, a malicious attacker generates a sybil group consisting of multiple sybil users, and controls them to attack the system. However, data confidentiality policies of major social network providers have severely limited researchers' access to large-scale datasets of sybil groups. A deep understanding of sybil groups can provide important insights into the characteristics of malicious behavior, as well as numerous practical implications on the design of security mechanisms. In this paper, we present an initial study to measure sybil groups in a large-scale OSN, Renren. We analyze sybil groups at different levels, including individual information, social relationships, and malicious activities. Our main observations are: 1) user information in sybil groups is usually incomplete and of poor quality; 2) sybil groups have special evolution patterns in connectivity structure, including bursty actions to add nodes, and a monotonous merging pattern that lacks non-singleton mergings; 3) several sybil groups have strong relationships with each other and compose sybil communities, and these communities cover a large number of users and pose great potential threats; 4) some sybil users are not banned until a long time after registration in some sybil groups. The characteristics of sybil groups can be leveraged to improve the security mechanisms in OSNs to defend against sybil attacks. Specifically, we suggest that OSNs should 1) check information completeness and quality, 2) learn from dynamics of community connectivity structure to detect sybil groups, 3) monitor sybil communities and inspect them carefully to prevent collusion, and 4) inspect sybil groups that behave normally even for a long time to prevent potential malicious behaviors.
    Zero-Correlation Linear Cryptanalysis of Reduced-Round SIMON
    Xiao-Li Yu, Wen-Ling Wu, Zhen-Qing Shi, Jian Zhang, Lei Zhang, Yan-Feng Wang
    Journal of Computer Science and Technology, 2015, 30 (6): 1358-1369.  DOI: 10.1007/s11390-015-1603-5
    In June 2013, the U.S. National Security Agency proposed two families of lightweight block ciphers, called SIMON and SPECK respectively. These ciphers are designed to perform excellently on both hardware and software platforms. In this paper, we mainly present zero-correlation linear cryptanalysis on various versions of SIMON. Firstly, by using the miss-in-the-middle approach, we construct zero-correlation linear distinguishers of SIMON, and zero-correlation linear attacks are presented based on careful analysis of the key recovery phase. Secondly, multidimensional zero-correlation linear attacks are used to reduce the data complexity. Our zero-correlation linear attacks perform better than impossible differential attacks proposed by Abed et al. in ePrint Report 2013/568. Finally, we also use the divide-and-conquer technique to improve the results of linear cryptanalysis proposed by Javad et al. in ePrint Report 2013/663.
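For context, SIMON's round function (from the public SIMON/SPECK specification) is the Feistel map (x, y) → (y ⊕ f(x) ⊕ k, x) with f(x) = ((x ⋘ 1) & (x ⋘ 8)) ⊕ (x ⋘ 2); key-recovery attacks peel such rounds off around a distinguisher. A sketch for 16-bit words (the SIMON32 word size):

```python
def rotl(x, r, width=16):
    """Rotate a width-bit word left by r bits."""
    mask = (1 << width) - 1
    return ((x << r) | (x >> (width - r))) & mask

def simon_round(x, y, k, width=16):
    """One SIMON encryption round: (x, y) -> (y ^ f(x) ^ k, x),
    with f(x) = (x<<<1 & x<<<8) ^ (x<<<2)."""
    f = (rotl(x, 1, width) & rotl(x, 8, width)) ^ rotl(x, 2, width)
    return y ^ f ^ k, x

def simon_unround(x, y, k, width=16):
    """Invert one round given its subkey, as done when peeling outer
    rounds off a distinguisher during key recovery."""
    f = (rotl(y, 1, width) & rotl(y, 8, width)) ^ rotl(y, 2, width)
    return y, x ^ f ^ k
```

Guessing the subkeys of the outer rounds and applying `simon_unround` lets an attacker evaluate the zero-correlation masks at the distinguisher's boundary.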
    Solving Closest Vector Instances Using an Approximate Shortest Independent Vectors Oracle
    Cheng-Liang Tian, Wei Wei, Dong-Dai Lin
    Journal of Computer Science and Technology, 2015, 30 (6): 1370-1377.  DOI: 10.1007/s11390-015-1604-4
    Given an n-dimensional lattice L and some target vector, this paper studies the algorithms for the approximate closest vector problem (CVPγ) by using an approximate shortest independent vectors problem oracle (SIVPγ). More precisely, if the distance between the target vector and the lattice is no larger than (c/(γn))·λ1(L) for an arbitrarily large but finite constant c > 0, we give randomized and deterministic polynomial time algorithms to find a closest vector, while previous reductions were only known for (1/(2γn))·λ1(L). Moreover, if the distance between the target vector and the lattice is larger than some quantity with respect to λn(L), using the SIVPγ oracle and Babai's nearest plane algorithm, we can solve CVPγn in deterministic polynomial time. Specifically, if the approximation factor γ ∈ (1, 2) in the SIVPγ oracle, we obtain a better reduction factor for CVP.
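Babai's nearest plane algorithm, used in the reduction above, rounds the target onto successive translated sublattices determined by the Gram-Schmidt vectors of the basis; the quality of the answer depends on how well-reduced that basis is. A stdlib-only sketch of the standard algorithm (the example basis and target are illustrative):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Gram-Schmidt orthogonalization (without normalization)."""
    ortho = []
    for b in basis:
        v = list(b)
        for u in ortho:
            mu = dot(b, u) / dot(u, u)
            v = [vi - mu * ui for vi, ui in zip(v, u)]
        ortho.append(v)
    return ortho

def babai_nearest_plane(basis, target):
    """Walk from the last basis vector down, rounding the target onto the
    nearest translated sublattice; return the lattice vector found."""
    ortho = gram_schmidt(basis)
    residual = list(target)
    for i in range(len(basis) - 1, -1, -1):
        c = round(dot(residual, ortho[i]) / dot(ortho[i], ortho[i]))
        residual = [r - c * bi for r, bi in zip(residual, basis[i])]
    return [t - r for t, r in zip(target, residual)]

# With the (trivially reduced) standard basis of Z^2 the algorithm simply
# rounds each coordinate: the lattice point nearest (2.4, 3.6) is (2, 4).
v = babai_nearest_plane([[1.0, 0.0], [0.0, 1.0]], [2.4, 3.6])
```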
Journal of Computer Science and Technology
Institute of Computing Technology, Chinese Academy of Sciences
P.O. Box 2704, Beijing 100190 P.R. China
Tel.:86-10-62610746
E-mail: jcst@ict.ac.cn
 
  Copyright ©2015 JCST, All Rights Reserved