
Category Archives: JAVA Projects

A Decentralized Service Discovery Approach on Peer-to-Peer Networks

ABSTRACT:

Service-Oriented Computing (SOC) is emerging as a paradigm for developing distributed applications. A critical issue in utilizing SOC is having a scalable, reliable, and robust service discovery mechanism. However, traditional service discovery methods that use centralized registries easily suffer from problems such as performance bottlenecks and vulnerability to failures in large-scale service networks, and can therefore fail to function properly. To address these problems, this paper proposes a peer-to-peer-based decentralized service discovery approach named Chord4S. Chord4S utilizes the data distribution and lookup capabilities of the popular Chord protocol to distribute and discover services in a decentralized manner. Data availability is further improved by distributing published descriptions of functionally equivalent services to different successor nodes that are organized into virtual segments on the Chord4S circle. Based on this service publication approach, Chord4S supports QoS-aware service discovery. Chord4S also supports service discovery with wildcard(s). In addition, the Chord routing protocol is extended to support efficient discovery of multiple services with a single query. This enables late negotiation of Service Level Agreements (SLAs) between service consumers and multiple candidate service providers. The experimental evaluation shows that Chord4S achieves higher data availability and supports efficient queries with reasonable overhead.
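To illustrate the underlying Chord mechanism the abstract builds on, the sketch below shows how a service description could be hashed onto an identifier circle and resolved to its successor node. This is a minimal, centralized simulation of consistent hashing only; the names, hash width, and ring layout are illustrative assumptions, and it omits Chord4S's finger tables, virtual segments, and QoS-aware publication.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.TreeMap;

// Minimal Chord-style ring: services are hashed onto an identifier circle
// and stored at the first node whose ID follows the key (its "successor").
public class SimpleServiceRing {
    private final TreeMap<Long, String> ring = new TreeMap<>(); // nodeId -> node address

    // Hash an arbitrary string (node address or service description) to a 32-bit ID.
    static long hashId(String s) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-1").digest(s.getBytes(StandardCharsets.UTF_8));
            // take the first 4 bytes as an unsigned 32-bit identifier
            return ((d[0] & 0xFFL) << 24) | ((d[1] & 0xFFL) << 16) | ((d[2] & 0xFFL) << 8) | (d[3] & 0xFFL);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    void addNode(String nodeAddress) {
        ring.put(hashId(nodeAddress), nodeAddress);
    }

    // The node responsible for a service description is the successor of its key.
    String lookup(String serviceDescription) {
        long key = hashId(serviceDescription);
        Long nodeId = ring.ceilingKey(key);           // first node clockwise from the key
        if (nodeId == null) nodeId = ring.firstKey(); // wrap around the circle
        return ring.get(nodeId);
    }

    public static void main(String[] args) {
        SimpleServiceRing r = new SimpleServiceRing();
        r.addNode("node-a:8080");
        r.addNode("node-b:8080");
        r.addNode("node-c:8080");
        System.out.println(r.lookup("weather-forecast-service"));
    }
}
```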

DOWNLOAD


A Highly Scalable Key Pre-Distribution Scheme for Wireless Sensor Networks

ABSTRACT:

Given the sensitivity of potential WSN applications and because of resource limitations, key management emerges as a challenging issue for WSNs. One of the main concerns when designing a key management scheme is network scalability. Indeed, the protocol should support a large number of nodes to enable large-scale deployment of the network. In this paper, we propose a new scalable key management scheme for WSNs that provides good secure connectivity coverage. For this purpose, we make use of unital design theory. We show that a basic mapping from unitals to key pre-distribution allows us to achieve high network scalability. Nonetheless, this naive mapping does not guarantee a high key-sharing probability. Therefore, we propose an enhanced unital-based key pre-distribution scheme that provides high network scalability and a good key-sharing probability, approximately lower bounded by 1 - 1/e ≈ 0.632. We conduct approximate analysis and simulations and compare our solution to existing methods on different criteria such as storage overhead, network scalability, network connectivity, average secure path length, and network resiliency. Our results show that the proposed approach enhances network scalability while providing high secure connectivity coverage and overall improved performance. Moreover, for an equal network size, our solution significantly reduces the storage overhead compared to existing solutions.
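For intuition about the key-sharing probability mentioned above, the sketch below runs a small Monte Carlo estimate for a plain random key pre-distribution, where two nodes can communicate securely only if their key rings intersect. The pool and ring sizes are illustrative assumptions, and this is not the paper's unital-based construction.

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Monte Carlo estimate of the key-sharing probability for a simple random
// key pre-distribution: each node gets ringSize keys drawn from a pool of
// poolSize keys, and two nodes can establish a secure link if their rings
// intersect. The unital-based construction chooses rings more cleverly.
public class KeySharingEstimate {
    public static void main(String[] args) {
        int poolSize = 1000, ringSize = 30, trials = 100000; // illustrative values
        Random rnd = new Random(42);
        int shared = 0;
        for (int t = 0; t < trials; t++) {
            Set<Integer> a = randomRing(rnd, poolSize, ringSize);
            Set<Integer> b = randomRing(rnd, poolSize, ringSize);
            a.retainAll(b);                 // intersection of the two key rings
            if (!a.isEmpty()) shared++;
        }
        System.out.printf("estimated key-sharing probability: %.3f%n", (double) shared / trials);
    }

    static Set<Integer> randomRing(Random rnd, int poolSize, int ringSize) {
        Set<Integer> ring = new HashSet<>();
        while (ring.size() < ringSize) ring.add(rnd.nextInt(poolSize));
        return ring;
    }
}
```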

DOWNLOAD


A Method for Mining Infrequent Causal Associations and Its Application in Finding Adverse Drug Reaction Signal Pairs

ABSTRACT:

In many real-world applications, it is important to mine causal relationships where an event or event pattern causes certain outcomes with low probability. Discovering this kind of causal relationship can help us prevent or correct negative outcomes caused by their antecedents. In this paper, we propose an innovative data mining framework and apply it to mine potential causal associations in electronic patient data sets where the drug-related events of interest occur infrequently. Specifically, we created a novel interestingness measure, exclusive causal-leverage, based on a computational, fuzzy recognition-primed decision (RPD) model that we previously developed. On the basis of this new measure, a data mining algorithm was developed to mine the causal relationship between drugs and their associated adverse drug reactions (ADRs). The algorithm was tested on real patient data retrieved from the Veterans Affairs Medical Center in Detroit, Michigan. The retrieved data included 16,206 patients (15,605 male, 601 female). The exclusive causal-leverage was employed to rank the potential causal associations between each of the three selected drugs (i.e., enalapril, pravastatin, and rosuvastatin) and 3,954 recorded symptoms, each of which corresponded to a potential ADR. The top 10 drug-symptom pairs for each drug were evaluated by the physicians on our project team. The numbers of symptoms considered likely real ADRs for enalapril, pravastatin, and rosuvastatin were 8, 7, and 6, respectively. These preliminary results indicate the usefulness of our method in finding potential ADR signal pairs for further analysis (e.g., epidemiology studies) and investigation (e.g., case review) by drug safety professionals.

DOWNLOAD


An efficient Mailing System for Detecting Spam Mails

ABSTRACT:

Compromised machines are one of the key security threats on the Internet; they are often used to launch various security attacks such as spamming, spreading malware, DDoS, and identity theft. Given that spamming provides a key economic incentive for attackers to recruit a large number of compromised machines, we focus on detecting the compromised machines in a network that are involved in spamming activities, commonly known as spam zombies. We develop an effective spam zombie detection system named SPOT by monitoring outgoing messages of a network. SPOT is designed based on a powerful statistical tool called the Sequential Probability Ratio Test, which has bounded false positive and false negative error rates. In addition, we evaluate the performance of the developed SPOT system using a two-month e-mail trace collected in a large US campus network. Our evaluation studies show that SPOT is an effective and efficient system for automatically detecting compromised machines in a network.

For example, among the 440 internal IP addresses observed in the e-mail trace, SPOT identifies 132 of them as being associated with compromised machines. Out of the 132 IP addresses identified by SPOT, 126 can be either independently confirmed (110) or highly likely (16) to be compromised. Moreover, only seven internal IP addresses associated with compromised machines in the trace are missed by SPOT. In addition, we also compare the performance of SPOT with two other spam zombie detection algorithms based on the number and percentage of spam messages originated or forwarded by internal machines, respectively, and show that SPOT outperforms these two detection algorithms.
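The core of such a detector is the Sequential Probability Ratio Test itself. The sketch below shows a generic SPRT over a stream of spam/ham verdicts for a single internal IP address; the spam probabilities and target error rates are illustrative assumptions rather than values from the SPOT evaluation.

```java
// Sketch of a Sequential Probability Ratio Test (SPRT) in the spirit of SPOT:
// each outgoing message is classified as spam (true) or ham (false) by a
// content filter, and the accumulated log-likelihood ratio decides between
// H1 ("machine is a spam zombie") and H0 ("machine is clean").
public class SprtSpamZombie {
    // Assumed P(message is spam | compromised) and P(message is spam | clean).
    static final double THETA1 = 0.9, THETA0 = 0.2;
    // Desired false positive (alpha) and false negative (beta) rates.
    static final double ALPHA = 0.01, BETA = 0.01;

    private final double upper = Math.log((1 - BETA) / ALPHA);  // accept H1
    private final double lower = Math.log(BETA / (1 - ALPHA));  // accept H0
    private double llr = 0.0;

    /** @return +1 if flagged compromised, -1 if cleared, 0 if still undecided */
    int observe(boolean isSpam) {
        llr += isSpam ? Math.log(THETA1 / THETA0)
                      : Math.log((1 - THETA1) / (1 - THETA0));
        if (llr >= upper) return +1;
        if (llr <= lower) return -1;
        return 0; // keep monitoring this IP address
    }

    public static void main(String[] args) {
        SprtSpamZombie test = new SprtSpamZombie();
        boolean[] stream = {true, true, false, true, true, true};
        for (boolean msg : stream) {
            int verdict = test.observe(msg);
            if (verdict != 0) {
                System.out.println(verdict > 0 ? "flag as spam zombie" : "clear");
                break;
            }
        }
    }
}
```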

DOWNLOAD


An enhanced model for achieving high throughput using variant buffer size methodology

ABSTRACT:

We consider the sizing of network buffers in 802.11-based networks. Wireless networks face a number of fundamental issues that do not arise in wired networks. We demonstrate that the use of fixed-size buffers in 802.11 networks inevitably leads to either undesirable channel under-utilization or unnecessarily high delays. We present two novel dynamic buffer sizing algorithms that achieve high throughput while maintaining low delay across a wide range of network conditions. Experimental measurements demonstrate the utility of the proposed algorithms in a production WLAN and a lab test bed.
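As a rough illustration of delay-aware buffer sizing (not the specific algorithms evaluated in the paper), the sketch below sizes a queue to the smoothed service rate multiplied by a target drain time, so the buffer limit shrinks when the channel slows down. All constants are assumptions chosen for illustration.

```java
// Illustrative sketch of delay-based buffer sizing for an 802.11 queue:
// the buffer limit is (measured service rate) x (target drain time), so the
// queue never adds much more than roughly MAX_DRAIN_TIME_SEC of delay.
public class DynamicBufferSizer {
    private double serviceRatePktPerSec = 100.0;            // smoothed estimate
    private static final double SMOOTHING = 0.1;            // EWMA weight (assumed)
    private static final double MAX_DRAIN_TIME_SEC = 0.05;  // ~50 ms target (assumed)
    private static final int MIN_BUF = 5, MAX_BUF = 400;    // clamp bounds (assumed)

    /** Call whenever a packet finishes transmission, with its observed service time. */
    int onPacketServiced(double serviceTimeSec) {
        double instantaneousRate = 1.0 / serviceTimeSec;
        serviceRatePktPerSec = (1 - SMOOTHING) * serviceRatePktPerSec
                             + SMOOTHING * instantaneousRate;
        int limit = (int) Math.round(serviceRatePktPerSec * MAX_DRAIN_TIME_SEC);
        return Math.max(MIN_BUF, Math.min(MAX_BUF, limit));
    }

    public static void main(String[] args) {
        DynamicBufferSizer sizer = new DynamicBufferSizer();
        // channel slows down (service times grow), so the buffer limit shrinks
        for (double t : new double[]{0.002, 0.004, 0.010, 0.020}) {
            System.out.println("buffer limit = " + sizer.onPacketServiced(t) + " packets");
        }
    }
}
```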

DOWNLOAD


Annotating Search Results from Web Databases

ABSTRACT:

An increasing number of databases have become web accessible through HTML form-based search interfaces. The data units returned from the underlying database are usually encoded into the result pages dynamically for human browsing. For the encoded data units to be machine processable, which is essential for many applications such as deep web data collection and Internet comparison shopping, they need to be extracted and assigned meaningful labels. In this paper, we present an automatic annotation approach that first aligns the data units on a result page into different groups such that the data in the same group have the same semantics. Then, for each group, we annotate it from different aspects and aggregate the different annotations to predict a final annotation label for it. An annotation wrapper for the search site is automatically constructed and can be used to annotate new result pages from the same web database. Our experiments indicate that the proposed approach is highly effective.

DOWNLOAD


ANOMALY BASED INTRUSION DETECTION SYSTEM


ABSTRACT:

Alert aggregation is an important subtask of intrusion detection. The goal is to identify and to cluster different alerts—produced by low-level intrusion detection systems, firewalls, etc.—belonging to a specific attack instance which has been initiated by an attacker at a certain point in time. Thus, meta-alerts can be generated for the clusters that contain all the relevant information whereas the amount of data (i.e., alerts) can be reduced substantially. Meta-alerts may then be the basis for reporting to security experts or for communication within a distributed intrusion detection system. We propose a novel technique for online alert aggregation which is based on a dynamic, probabilistic model of the current attack situation. Basically, it can be regarded as a data stream version of a maximum likelihood approach for the estimation of the model parameters. With three benchmark data sets, we demonstrate that it is possible to achieve reduction rates of up to 99.96 percent while the number of missing meta-alerts is extremely low. In addition, meta-alerts are generated with a delay of typically only a few seconds after observing the first alert belonging to a new attack instance.

DOWNLOAD


Anonymization for Secure Data in Web

ABSTRACT:

In this paper, we consider the collaborative data publishing problem for anonymizing horizontally partitioned data at multiple data providers. We consider a new type of “insider attack” by colluding data providers who may use their own data records (a subset of the overall data) to infer the data records contributed by other data providers. The paper addresses this new threat and makes several contributions. First, we introduce the notion of m-privacy, which guarantees that the anonymized data satisfies a given privacy constraint against any group of up to m colluding data providers. Second, we present heuristic algorithms exploiting the monotonicity of privacy constraints for efficiently checking m-privacy given a group of records. Third, we present a data provider-aware anonymization algorithm with adaptive m-privacy checking strategies to ensure high utility and m-privacy of anonymized data with efficiency. Finally, we propose secure multi-party computation protocols for collaborative data publishing with m-privacy. All protocols are extensively analyzed, and their security and efficiency are formally proved. Experiments on real-life datasets suggest that our approach achieves utility and efficiency better than or comparable to existing and baseline algorithms while satisfying m-privacy.

DOWNLOAD


A Secure Cloud server System using Proxy Re-Encryption Model

ABSTRACT:

A cloud storage system, consisting of a collection of storage servers, provides long-term storage services over the Internet. Storing data in a third party’s cloud system causes serious concern over data confidentiality. General encryption schemes protect data confidentiality, but also limit the functionality of the storage system because a few operations are supported over encrypted data. Constructing a secure storage system that supports multiple functions is challenging when the storage system is distributed and has no central authority. We propose a threshold proxy re-encryption scheme and integrate it with a decentralized erasure code such that a secure distributed storage system is formulated. The distributed storage system not only supports secure and robust data storage and retrieval, but also lets a user forward his data in the storage servers to another user without retrieving the data back. The main technical contribution is that the proxy re-encryption scheme supports encoding operations over encrypted messages as well as forwarding operations over encoded and encrypted messages. Our method fully integrates encrypting, encoding, and forwarding. We analyze and suggest suitable parameters for the number of copies of a message dispatched to storage servers and the number of storage servers queried by a key server. These parameters allow more flexible adjustment between the number of storage servers and robustness.

DOWNLOAD


A Secure payment System for banking transactions

ABSTRACT:

Anonymity has received increasing attention in the literature due to users' growing awareness of their privacy. Anonymity provides protection for users to enjoy network services without being traced. While anonymity-related issues have been extensively studied in payment-based systems such as e-cash and peer-to-peer (P2P) systems, little effort has been devoted to wireless mesh networks (WMNs). On the other hand, the network authority requires conditional anonymity such that misbehaving entities in the network remain traceable. Here, we propose a security architecture that ensures unconditional anonymity for honest users and traceability of misbehaving users for network authorities in WMNs. The proposed architecture strives to resolve the conflicts between the anonymity and traceability objectives, in addition to guaranteeing fundamental security requirements including authentication, confidentiality, data integrity, and non-repudiation. A thorough analysis of security and efficiency is provided, demonstrating the feasibility and effectiveness of the proposed architecture.

DOWNLOAD


A Secure Protocol for Spontaneous Wireless Ad Hoc Networks Creation

ABSTRACT:

This paper presents a secure protocol for spontaneous wireless ad hoc networks that uses a hybrid symmetric/asymmetric scheme and the trust between users to exchange the initial data and the secret keys that will be used to encrypt the data. Trust is based on the first visual contact between users. Our proposal is a complete self-configured secure protocol that is able to create the network and share secure services without any infrastructure. The network allows sharing resources and offering new services among users in a secure environment. The protocol includes all functions needed to operate without any external support. We have designed and developed it for devices with limited resources. Network creation stages are detailed, and the communication, protocol messages, and network management are explained. Our proposal has been implemented in order to test the protocol procedure and performance. Finally, we compare the protocol with other spontaneous ad hoc network protocols in order to highlight its features, and we provide a security analysis of the system.
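The hybrid symmetric/asymmetric idea can be illustrated with standard Java cryptography APIs: bulk data is encrypted under a fresh symmetric session key, and the session key itself travels under the peer's public key. The sketch below uses default JCA cipher modes purely for brevity and omits the paper's trust establishment, key agreement messages, and network management; it is not the paper's protocol.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

// Minimal hybrid encryption demo: AES protects the payload, RSA protects the
// AES session key. Default cipher modes are used only to keep the sketch
// short; production code would pin modes, padding, and IV handling.
public class HybridExchangeDemo {
    public static void main(String[] args) throws Exception {
        // Receiver's long-term asymmetric key pair.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair receiver = kpg.generateKeyPair();

        // Sender: create a fresh AES session key and encrypt it for the receiver.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey sessionKey = kg.generateKey();

        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.ENCRYPT_MODE, receiver.getPublic());
        byte[] wrappedKey = rsa.doFinal(sessionKey.getEncoded());

        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, sessionKey);
        byte[] ciphertext = aes.doFinal("hello spontaneous network".getBytes(StandardCharsets.UTF_8));

        // Receiver: unwrap the session key with the private key, then decrypt the data.
        rsa.init(Cipher.DECRYPT_MODE, receiver.getPrivate());
        SecretKey recovered = new SecretKeySpec(rsa.doFinal(wrappedKey), "AES");
        aes.init(Cipher.DECRYPT_MODE, recovered);
        System.out.println(new String(aes.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}
```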

DOWNLOAD


A System for Denial-of-Service Attack Detection

ABSTRACT:

Interconnected systems, such as web servers, database servers, and cloud computing servers, are now under threat from network attackers. As one of the most common and aggressive means, Denial-of-Service (DoS) attacks cause serious impact on these computing systems. In this paper, we present a DoS attack detection system that uses Multivariate Correlation Analysis (MCA) for accurate network traffic characterization by extracting the geometrical correlations between network traffic features. Our MCA-based DoS attack detection system employs the principle of anomaly-based detection in attack recognition. This makes our solution capable of detecting known and unknown DoS attacks effectively by learning the patterns of legitimate network traffic only. Furthermore, a triangle-area-based technique is proposed to enhance and speed up the process of MCA. The effectiveness of our proposed detection system is evaluated using the KDD Cup 99 dataset, and the influences of both non-normalized and normalized data on the performance of the proposed detection system are examined. The results show that our system outperforms two other previously developed state-of-the-art approaches in terms of detection accuracy.
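The triangle-area idea can be sketched as follows: each pair of traffic features is projected onto a plane, and the area of the triangle formed with the axes summarizes how the pair co-varies; a record whose triangle-area map deviates too far from a normal profile is flagged. The distance measure and threshold below are placeholders, not the paper's exact procedure.

```java
// Sketch of a triangle-area map (TAM) over a traffic feature vector: for each
// pair of features (i, j), the area |f_i * f_j| / 2 summarizes the pairwise
// correlation. A record is flagged when its map deviates from a profile of
// legitimate traffic beyond an (illustrative) threshold.
public class TriangleAreaMap {
    // Lower-triangular matrix of pairwise triangle areas for one record.
    static double[][] tam(double[] features) {
        int n = features.length;
        double[][] area = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < i; j++)
                area[i][j] = Math.abs(features[i] * features[j]) / 2.0;
        return area;
    }

    // Simple dissimilarity between a record's map and a "normal" profile map.
    static double distance(double[][] a, double[][] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < i; j++)
                d += Math.abs(a[i][j] - b[i][j]);
        return d;
    }

    public static void main(String[] args) {
        double[] normalProfile = {10, 2, 0.5};   // e.g. averaged legitimate features
        double[] incoming      = {300, 80, 0.4}; // suspicious record
        double dist = distance(tam(incoming), tam(normalProfile));
        System.out.println(dist > 100.0 ? "anomalous" : "normal"); // threshold is illustrative
    }
}
```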

DOWNLOAD


Attribute-based Encryption System for Secured Data Storage

ABSTRACT:

Cloud computing has emerged as one of the most influential paradigms in the IT industry in recent years. Since this new computing technology requires users to entrust their valuable data to cloud providers, there have been increasing security and privacy concerns on outsourced data. Several schemes employing attribute-based encryption (ABE) have been proposed for access control of outsourced data in cloud computing; however, most of them suffer from inflexibility in implementing complex access control policies. In order to realize scalable, flexible, and fine-grained access control of outsourced data in cloud computing, in this paper, we propose hierarchical attribute-set-based encryption (HASBE) by extending ciphertext-policy attribute-set-based encryption (ASBE) with a hierarchical structure of users. The proposed scheme not only achieves scalability due to its hierarchical structure, but also inherits flexibility and fine-grained access control in supporting compound attributes of ASBE. In addition, HASBE employs multiple value assignments for access expiration time to deal with user revocation more efficiently than existing schemes. We formally prove the security of HASBE based on the security of the ciphertext-policy attribute-based encryption (CP-ABE) scheme by Bethencourt et al. and analyze its performance and computational complexity. We implement our scheme and show that it is both efficient and flexible in dealing with access control for outsourced data in cloud computing with comprehensive experiments.

DOWNLOAD


Automatic Testing System for Web Application

ABSTRACT:

AJAX-based Web 2.0 applications rely on stateful asynchronous client/server communication, and client-side runtime manipulation of the DOM tree. This not only makes them fundamentally different from traditional web applications, but also more error prone and harder to test. We propose a method for testing AJAX applications automatically, based on a crawler to infer a state-flow graph for all (client-side) user interface states. We identify AJAX-specific faults that can occur in such states (related to, e.g., DOM validity, error messages, discoverability, back-button compatibility) as well as DOM-tree invariants that can serve as oracles to detect such faults. Our approach, called ATUSA, is implemented in a tool offering generic invariant checking components, a plugin mechanism to add application-specific state validators, and generation of a test suite covering the paths obtained during crawling. We describe three case studies, consisting of six subjects, evaluating the type of invariants that can be obtained for AJAX applications as well as the fault revealing capabilities, scalability, required manual effort, and level of automation of our testing approach.

DOWNLOAD


Automatic Vehicle Detection in Surveillance Video

ABSTRACT:

We present an automatic vehicle detection system for aerial surveillance. In this system, we move away from existing frameworks for vehicle detection in aerial surveillance, which are either region based or sliding-window based, and design a pixel-wise classification method for vehicle detection. The novelty lies in the fact that, in spite of performing pixel-wise classification, relations among neighboring pixels in a region are preserved in the feature extraction process. We consider features including vehicle colors and local features. For vehicle color extraction, we utilize a color transform to separate vehicle colors and non-vehicle colors effectively. For edge detection, we apply moment preserving to adjust the thresholds of the Canny edge detector automatically, which increases the adaptability and accuracy of detection in various aerial images. Afterward, a dynamic Bayesian network (DBN) is constructed for the classification purpose. We convert regional local features into quantitative observations that can be referenced when applying pixel-wise classification via the DBN. Experiments were conducted on a wide variety of aerial videos. The results demonstrate flexibility and good generalization ability of the proposed method on a challenging data set with aerial surveillance images taken at different heights and under different camera angles.

DOWNLOAD


Back-Pressure-Based Packet-by-Packet Adaptive Routing in Communication Networks

ABSTRACT:

Back-pressure-based adaptive routing algorithms where each packet is routed along a possibly different path have been extensively studied in the literature. However, such algorithms typically result in poor delay performance and involve high implementation complexity. In this paper, we develop a new adaptive routing algorithm built upon the widely studied back-pressure algorithm. We decouple the routing and scheduling components of the algorithm by designing a probabilistic routing table that is used to route packets to per-destination queues. The scheduling decisions in the case of wireless networks are made using counters called shadow queues. The results are also extended to the case of networks that employ simple forms of network coding. In that case, our algorithm provides a low-complexity solution to optimally exploit the routing–coding tradeoff.
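For reference, the classic back-pressure forwarding rule that the paper builds on can be sketched as below: for a given destination, a node forwards to the neighbor with the largest positive queue differential. The probabilistic routing tables and shadow queues introduced by the paper are not shown; this is only the baseline rule.

```java
import java.util.Map;

// Sketch of the classic back-pressure rule at a single node: forward to the
// neighbor with the largest (strictly positive) queue differential,
// ownBacklog - neighborBacklog, for the packet's destination.
public class BackPressureHop {
    /**
     * @param ownBacklog       this node's queue length for the destination
     * @param neighborBacklogs neighbor id -> that neighbor's queue length for the same destination
     * @return the chosen next hop, or null if every differential is non-positive (hold the packet)
     */
    static String chooseNextHop(int ownBacklog, Map<String, Integer> neighborBacklogs) {
        String best = null;
        int bestDiff = 0; // only forward on a strictly positive differential
        for (Map.Entry<String, Integer> e : neighborBacklogs.entrySet()) {
            int diff = ownBacklog - e.getValue();
            if (diff > bestDiff) {
                bestDiff = diff;
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Integer> neighbors = Map.of("B", 4, "C", 9, "D", 1);
        System.out.println(chooseNextHop(7, neighbors)); // prints D (differential 6)
    }
}
```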

DOWNLOAD


BANDWIDTH ESTIMATION SYSTEMS IN NETWORKS

ABSTRACT:

We develop a stochastic foundation for bandwidth estimation of networks with random service, where bandwidth availability is expressed in terms of bounding functions with a defined violation probability. Exploiting properties of a stochastic max-plus algebra and system theory, the task of bandwidth estimation is formulated as inferring an unknown bounding function from measurements of probing traffic. We derive an estimation methodology that is based on iterative constant rate probes. Our solution provides evidence for the utility of packet trains for bandwidth estimation in the presence of variable cross traffic. Taking advantage of statistical methods, we show how our estimation method can be realized in practice, with adaptive train lengths of probe packets, probing rates, and replicated measurements required to achieve both high accuracy and confidence levels. We evaluate our method in a controlled test bed network, where we show the impact of cross traffic variability on the time-scales of service availability, and provide a comparison with existing bandwidth estimation tools.

DOWNLOAD


BLOCKING INTRUDERS FOR SECURED NETWORK MAINTENANCE


ABSTRACT:

Anonymizing networks such as Tor allow users to access Internet services privately by using a series of routers to hide the client’s IP address from the server. The success of such networks, however, has been limited by users employing this anonymity for abusive purposes such as defacing popular websites. Website administrators routinely rely on IP-address blocking for disabling access to misbehaving users, but blocking IP addresses is not practical if the abuser routes through an anonymizing network. As a result, administrators block all known exit nodes of anonymizing networks, denying anonymous access to misbehaving and behaving users alike. To address this problem, we present Nymble, a system in which servers can “blacklist” misbehaving users, thereby blocking users without compromising their anonymity. Our system is thus agnostic to different servers’ definitions of misbehavior — servers can blacklist users for whatever reason, and the privacy of blacklisted users is maintained.

DOWNLOAD


Boolean Retrieval for Medical Data

ABSTRACT:

Extended Boolean retrieval (EBR) models were proposed nearly three decades ago, but have had little practical impact, despite their significant advantages compared to either ranked keyword or pure Boolean retrieval. In particular, EBR models produce meaningful rankings; their query model allows the representation of complex concepts in an and-or format; and they are scrutable, in that the score assigned to a document depends solely on the content of that document, unaffected by any collection statistics or other external factors. These characteristics make EBR models attractive in domains typified by medical and legal searching, where the emphasis is on iterative development of reproducible complex queries of dozens or even hundreds of terms. However, EBR is much more computationally expensive than the alternatives. We consider the implementation of the p-norm approach to EBR, and demonstrate that ideas used in the max-score and wand exact optimization techniques for ranked keyword retrieval can be adapted to allow selective bypass of documents via a low-cost screening process for this and similar retrieval models. We also propose term independent bounds that are able to further reduce the number of score calculations for short, simple queries under the extended Boolean retrieval model. Together, these methods yield an overall saving from 50 to 80 percent of the evaluation cost on test queries drawn from biomedical search.
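The p-norm operators at the heart of extended Boolean retrieval can be written directly from their definitions. The sketch below scores a small (A OR B) AND C query from per-term scores in [0, 1]; it shows only the scoring formula, not the max-score/wand-style pruning the paper adapts.

```java
// Sketch of p-norm extended Boolean scoring: term scores in [0,1] are
// combined with soft AND/OR operators. With p = 1 both reduce to averaging;
// as p grows they approach strict Boolean AND/OR.
public class PNormScoring {
    static double orScore(double[] s, double p) {
        double sum = 0;
        for (double x : s) sum += Math.pow(x, p);
        return Math.pow(sum / s.length, 1.0 / p);
    }

    static double andScore(double[] s, double p) {
        double sum = 0;
        for (double x : s) sum += Math.pow(1.0 - x, p);
        return 1.0 - Math.pow(sum / s.length, 1.0 / p);
    }

    public static void main(String[] args) {
        // query: (termA OR termB) AND termC, with per-term document scores
        double p = 2.0;
        double ab = orScore(new double[]{0.8, 0.1}, p);
        double doc = andScore(new double[]{ab, 0.6}, p);
        System.out.printf("document score = %.3f%n", doc);
    }
}
```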

DOWNLOAD


Cache based User Location Querying System

ABSTRACT:

This paper proposes the distributed cache invalidation mechanism (DCIM), a client-based cache consistency scheme implemented on top of a previously proposed architecture for caching data items in mobile ad hoc networks (MANETs), namely COACS, in which special nodes cache the queries and the addresses of the nodes that store the responses to these queries. We previously proposed a server-based consistency scheme named SSUM, whereas in this paper we introduce DCIM, which is entirely client-based. DCIM is a pull-based algorithm that implements adaptive time to live (TTL), piggybacking, and prefetching, and provides near-strong consistency capabilities. Cached data items are assigned adaptive TTL values that correspond to their update rates at the data source; items with expired TTL values are grouped in validation requests to the data source to refresh them, whereas unexpired items with high request rates are prefetched from the server. In this paper, DCIM is analyzed to assess the delay and bandwidth gains (or costs) when compared to polling every time and to push-based schemes. DCIM was also implemented using ns2 and compared against client-based and server-based schemes to assess its performance experimentally. The consistency ratio, delay, and overhead traffic are reported versus several variables, and DCIM is shown to be superior to the other systems.
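A minimal version of the adaptive-TTL idea can be sketched as follows: the TTL attached to a cached item tracks the item's observed update inter-arrival time at the source, so volatile items expire quickly while stable ones stay valid longer. The smoothing and scaling constants below are assumptions for illustration, not DCIM's exact rules.

```java
// Illustrative adaptive-TTL computation for a single cached item: TTL is a
// fraction of the smoothed update inter-arrival time observed at the source.
public class AdaptiveTtl {
    private double avgUpdateIntervalSec = 60.0; // smoothed inter-update time
    private long lastUpdateEpochSec = 0;
    private static final double SMOOTHING = 0.125; // EWMA weight (assumed)
    private static final double SCALE = 0.5;       // TTL = SCALE * average interval (assumed)

    /** Call when the server reports a new version of the item. */
    void onSourceUpdate(long nowEpochSec) {
        if (lastUpdateEpochSec > 0) {
            double interval = nowEpochSec - lastUpdateEpochSec;
            avgUpdateIntervalSec = (1 - SMOOTHING) * avgUpdateIntervalSec + SMOOTHING * interval;
        }
        lastUpdateEpochSec = nowEpochSec;
    }

    /** TTL to attach to the cached copy, in seconds. */
    long currentTtlSec() {
        return Math.max(1, Math.round(SCALE * avgUpdateIntervalSec));
    }

    public static void main(String[] args) {
        AdaptiveTtl ttl = new AdaptiveTtl();
        ttl.onSourceUpdate(100);
        ttl.onSourceUpdate(130);  // updates arriving ~30 s apart
        ttl.onSourceUpdate(165);
        System.out.println("TTL = " + ttl.currentTtlSec() + " s");
    }
}
```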

DOWNLOAD


Caching techniques in MANET and various routing protocols in MANET

ABSTRACT:

We address cooperative caching in wireless networks, where the nodes may be mobile and exchange information in a peer-to-peer fashion. We consider both cases of nodes with large- and small-sized caches. For large-sized caches, we devise a strategy where nodes, independent of each other, decide whether to cache some content and for how long. In the case of small-sized caches, we aim to design a content replacement strategy that allows nodes to successfully store newly received information while maintaining the good performance of the content distribution system. Under both conditions, each node takes decisions according to its perception of what nearby users may store in their caches and with the aim of differentiating its own cache content from that of the other nodes. The result is the creation of content diversity within the nodes’ neighborhood, so that a requesting user likely finds the desired information nearby. We simulate our caching algorithms in different ad hoc network scenarios and compare them with other caching schemes, showing that our solution succeeds in creating the desired content diversity, thus leading to resource-efficient information access.

DOWNLOAD


CAPACITY OF DATA COLLECTION IN ARBITRARY WIRELESS SENSOR NETWORKS

ABSTRACT:

How to efficiently collect sensing data from all sensor nodes is critical to the performance of wireless sensor networks. In this paper, we aim to understand the theoretical limitations of data collection in terms of possible and achievable maximum capacity. Previously, the study of data collection capacity has concentrated only on large-scale random networks. However, in most practical sensor applications, the sensor network is not deployed uniformly and the number of sensors may not be as large as in theory. Therefore, it is necessary to study the capacity of data collection in an arbitrary network. In this paper, we derive upper and constructive lower bounds for data collection capacity in arbitrary networks. The proposed data collection method leads to order-optimal performance for any arbitrary sensor network. We also examine the design of data collection under a general graph model and discuss performance implications.

DOWNLOAD


CFR: AN EFFICIENT CONGESTION FREE ROUTER

ABSTRACT:

Resource management is a complicated problem in multiprocessor systems. When tasks with real-time characteristics are scheduled on processors, resource constraints such as CPU and memory have to be met; however, it is usually difficult for a system to ensure load balancing and resource constraints simultaneously. Many algorithms have been proposed to handle this problem. In this paper, a load balancing policy is proposed to manage and integrate the processing resources of a multiprocessor system that must process the packets of packet flows on time. The policy dynamically predicts the wait time before a packet is processed, based on the load of the processor and the timestamp of the packet, and decides on the migration of packets between processors to guarantee that packets can be processed before their deadlines. This policy can not only increase multiprocessor utilization but also enhance the capacity of the system against load jitter through packet migration between processors. We use simulation experiments to show that the policy can reduce the possibility of packet delay and increase the number of concurrent flows that the system can serve.

DOWNLOAD


CHANNEL ALLOCATION SYSTEM IN MOBILE NETWORKS

ABSTRACT:

In wireless Multicast Broadcast Service (MBS), the common channel is used to multicast the MBS content to the Mobile Stations (MSs) on the MBS calls within the coverage area of a Base Station (BS), which causes interference to the dedicated channels serving the traditional calls, and degrades the system capacity. The MBS zone technology is proposed in Mobile Communications Network (MCN) standards to improve system capacity and reduce the handoff delay for the wireless MBS calls. In the MBS zone technology, a group of BSs form an MBS zone, where the macro diversity is applied in the MS, the BSs synchronize to transmit the MBS content on the same common channel, interference caused by the common channel is reduced, and the MBS MSs need not perform handoff while moving between the BSs in the same MBS zone. However, when there is no MBS MS in a BS with the MBS zone technology, the transmission on the common channel wastes the bandwidth of the BS. It is an important issue to determine the condition for the MBS Controller (MBSC) to enable the MBS zone technology by considering the QoS for traditional calls and MBS calls. In this paper, we propose two Dynamic Channel Allocation schemes: DCA and EDCA by considering the condition for enabling the MBS zone technology. Analysis and simulation experiments are conducted to investigate the performance of DCA and EDCA.

DOWNLOAD


Clone Node Detection in Wireless Sensor Networking

ABSTRACT:

Wireless sensor networks are vulnerable to the node clone attack, and several distributed protocols have been proposed to detect it. However, they require assumptions that are too strong to be practical for large-scale, randomly deployed sensor networks. In this paper, we propose two novel node clone detection protocols with different tradeoffs on network conditions and performance. The first one is based on a distributed hash table (DHT), by which a fully decentralized, key-based caching and checking system is constructed to catch cloned nodes effectively. The protocol performance in terms of efficient storage consumption and a high security level is theoretically deduced through a probability model, and the resulting equations, with necessary adjustments for real application, are supported by the simulations. Although the DHT-based protocol incurs communication cost similar to previous approaches, it may be considered a little high for some scenarios. To address this concern, our second distributed detection protocol, named randomly directed exploration, presents good communication performance for dense sensor networks through a probabilistic directed forwarding technique along with random initial direction and border determination. The simulation results uphold the protocol design and show its efficiency in communication overhead and satisfactory detection probability.

DOWNLOAD


Cloud Data Protection for the Masses

ABSTRACT:

Offering strong data protection to cloud users while enabling rich applications is a challenging task. We explore a new cloud platform architecture called Data Protection as a Service, which dramatically reduces the per-application development effort required to offer data protection, while still allowing rapid development and maintenance.

DOWNLOAD


CloudMoV: Cloud-based Video Streaming System

ABSTRACT:

The rapidly increasing power of personal mobile devices (smartphones, tablets, etc.) is providing much richer content and social interactions to users on the move. This trend, however, is throttled by the limited battery lifetime of mobile devices and unstable wireless connectivity, making it infeasible to deliver the highest possible quality of service to mobile users. The recent cloud computing technology, with its rich resources to compensate for the limitations of mobile devices and connections, can potentially provide an ideal platform to support the desired mobile services. Tough challenges arise on how to effectively exploit cloud resources to facilitate mobile services, especially those with stringent interaction delay requirements. In this paper, we propose the design of a novel cloud-based mobile social TV system (CloudMoV). The system effectively utilizes both PaaS (Platform-as-a-Service) and IaaS (Infrastructure-as-a-Service) cloud services to offer the living-room experience of video watching to a group of disparate mobile users who can interact socially while sharing the video. To guarantee good streaming quality as experienced by mobile users with time-varying wireless connectivity, we employ a surrogate for each user in the IaaS cloud for video downloading and social exchanges on behalf of the user. The surrogate performs efficient stream transcoding that matches the current connectivity quality of the mobile user. Given that battery life is a key performance bottleneck, we advocate the use of burst transmission from the surrogates to the mobile users, and carefully decide the burst size to achieve high energy efficiency and streaming quality. Social interactions among the users, in terms of spontaneous textual exchanges, are effectively achieved by efficient designs of data storage with BigTable and dynamic handling of large volumes of concurrent messages in a typical PaaS cloud. These designs for flexible transcoding capabilities, battery efficiency of mobile devices, and spontaneous social interactivity together provide an ideal platform for mobile social TV services. We have implemented CloudMoV on Amazon EC2 and Google App Engine and verified its superior performance based on real-world experiments.

DOWNLOAD


Clustering Sentence-Level Text Using a Novel Fuzzy Relational Clustering Algorithm

ABSTRACT:

In comparison with hard clustering methods, in which a pattern belongs to a single cluster, fuzzy clustering algorithms allow patterns to belong to all clusters with differing degrees of membership. This is important in domains such as sentence clustering, since a sentence is likely to be related to more than one theme or topic present within a document or set of documents. However, because most sentence similarity measures do not represent sentences in a common metric space, conventional fuzzy clustering approaches based on prototypes or mixtures of Gaussians are generally not applicable to sentence clustering. This paper presents a novel fuzzy clustering algorithm that operates on relational input data; i.e., data in the form of a square matrix of pairwise similarities between data objects. The algorithm uses a graph representation of the data, and operates in an Expectation-Maximization framework in which the graph centrality of an object in the graph is interpreted as a likelihood. Results of applying the algorithm to sentence clustering tasks demonstrate that the algorithm is capable of identifying overlapping clusters of semantically related sentences, and that it is therefore of potential use in a variety of text mining tasks. We also include results of applying the algorithm to benchmark data sets in several other domains.

DOWNLOAD


Coding for Cryptographic Security Enhancement Using Stopping Sets

ABSTRACT:

In this paper we discuss the ability of channel codes to enhance cryptographic secrecy. Toward that end, we present the secrecy metric of degrees of freedom in an attacker’s knowledge of the cryptogram, which is similar to equivocation. Using this notion of secrecy, we show how a specific practical channel coding system can be used to hide information about the ciphertext, thus increasing the difficulty of cryptographic attacks. The system setup is the wiretap channel model, where transmitted data traverse independent packet erasure channels with public feedback for authenticated automatic repeat request (ARQ). The code design relies on puncturing nonsystematic low-density parity-check codes with the intent of inflicting an eavesdropper with stopping sets in the decoder. Furthermore, the design amplifies errors when stopping sets occur, such that a receiver must guess all the channel-erased bits correctly to avoid an expected error rate of one half in the ciphertext. We extend previous results on the coding scheme by giving design criteria that reduce the effectiveness of a maximum-likelihood attack to that of a message-passing attack. We further extend the security analysis to models with multiple receivers and collaborative attackers. Cryptographic security is enhanced in all these cases by exploiting properties of the physical layer. The enhancement is accurately presented as a function of the degrees of freedom in the eavesdropper’s knowledge of the ciphertext, and is even shown to be present when eavesdroppers have better channel quality than legitimate receivers.

DOWNLOAD


CONTENT BASED IMAGE RETRIEVAL SYSTEM

ABSTRACT:

Content-based image retrieval (CBIR) is one of the most popular and rapidly growing research areas in digital image processing. Most of the available image search tools, such as Google Images and Yahoo! Image Search, are based on textual annotation of images. In these tools, images are manually annotated with keywords and then retrieved using text-based search methods, and the performance of these systems is not satisfactory. The goal of CBIR is to extract the visual content of an image automatically, such as color, texture, or shape. This paper introduces the problems and challenges involved in the design and creation of CBIR systems based on a free-hand sketch (sketch-based image retrieval, SBIR). Building on existing methods, we describe a possible way to design and implement a task-specific descriptor that can bridge the information gap between a sketch and a colored image, thereby enabling efficient search. The descriptor is constructed through a special sequence of preprocessing steps so that the transformed full-color image and the sketch can be compared. We have studied EHD, HOG, and SIFT. Experimental results on two sample databases showed good results. Overall, the results show that the sketch-based system gives users intuitive access to search tools. SBIR technology can be used in several applications such as digital libraries, crime prevention, and photo-sharing sites. Such a system has great value in apprehending suspects and identifying victims in forensics and law enforcement; a possible application is matching a forensic sketch to a gallery of mug shot images. Research on retrieving images based on the visual content of a query picture has intensified recently and draws on a wide spectrum of image processing methodologies.

DOWNLOAD


Content Sharing in Peer-to-Peer Networks

ABSTRACT:

With the growing number of smartphone users, peer-to-peer ad hoc content sharing is expected to occur more often. Thus, new content sharing mechanisms should be developed, as traditional data delivery schemes are not efficient for content sharing due to the sporadic connectivity between smartphones. To accomplish data delivery in such challenging environments, researchers have proposed the use of store-carry-forward protocols, in which a node stores a message and carries it until a forwarding opportunity arises through an encounter with other nodes. Most previous works in this field have focused on the prediction of whether two nodes would encounter each other, without considering the place and time of the encounter. In this paper, we propose discover-predict-deliver as an efficient content sharing scheme for delay-tolerant smartphone networks. In our proposed scheme, contents are shared using the mobility information of individuals. Specifically, our approach employs a mobility learning algorithm to identify places indoors and outdoors. A hidden Markov model is used to predict an individual’s future mobility information. Evaluation based on real traces indicates that with the proposed approach, 87 percent of contents can be correctly discovered and delivered within 2 hours when the content is available only in 30 percent of nodes in the network. We implement a sample application on commercial smartphones, and we validate its efficiency to analyze the practical feasibility of the content sharing application. Our system results in approximately a 2 percent CPU overhead and reduces the battery lifetime of a smartphone by 15 percent at most.

DOWNLOAD


CONTINUOUS QUERY PROCESSING SYSTEM FOR LOCATION BASED SERVICES

ABSTRACT:

The emerging location-detection devices, together with ubiquitous connectivity, have enabled a large variety of location-based services (LBS), which are becoming popular among mobile users. The mobile user's location plays a key role in providing the service, but it is also a dimension of the user's privacy, so it is necessary to keep user information anonymous to other parties. Since one important issue in LBS is to provide an accurate service, it is important to use the mobile user's accurate location; however, using the location accurately raises concerns about the user's privacy. One solution for meeting this requirement is to use an anonymizer, which applies K-anonymity to cloak the user's location into a K-anonymizing spatial region (K-ASR). Traditional K-anonymity methods need complex query processing algorithms at the server side and have the drawback of allowing user tracking. In this paper, we propose a new model that lets mobile users retrieve results quickly while increasing user privacy.
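A simple way to picture the anonymizer's cloaking step is shown below: the region around the querying user is grown until it contains at least K users, and only that region is sent to the LBS server. The grid step and user model are illustrative assumptions, and real anonymizers also enforce minimum region sizes and temporal constraints.

```java
import java.util.List;

// Sketch of K-ASR cloaking: grow a square region around the querying user
// until it covers at least k users, then report the region instead of the
// exact position. Assumes allUsers contains at least k points (including
// the querying user); otherwise the loop would not terminate.
public class KAsrCloaking {
    record Point(double x, double y) {}
    record Region(double minX, double minY, double maxX, double maxY) {}

    static Region cloak(Point user, List<Point> allUsers, int k, double step) {
        double half = step;
        while (true) {
            Region r = new Region(user.x() - half, user.y() - half,
                                  user.x() + half, user.y() + half);
            long inside = allUsers.stream()
                    .filter(p -> p.x() >= r.minX() && p.x() <= r.maxX()
                              && p.y() >= r.minY() && p.y() <= r.maxY())
                    .count();
            if (inside >= k) return r;  // region now hides the user among k users
            half += step;               // otherwise keep growing the region
        }
    }

    public static void main(String[] args) {
        List<Point> users = List.of(new Point(0, 0), new Point(1, 2), new Point(3, 1),
                                    new Point(8, 8), new Point(2, 2));
        Region r = cloak(new Point(0, 0), users, 4, 1.0);
        System.out.println("cloaked region: (" + r.minX() + "," + r.minY() + ") to ("
                           + r.maxX() + "," + r.maxY() + ")");
    }
}
```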

DOWNLOAD


CREDIT CARD FRAUD DETECTION SYSTEM USING GENETIC ALGORITHM

ABSTRACT:

Due to the rise and rapid growth of e-commerce, the use of credit cards for online purchases has dramatically increased, causing an explosion in credit card fraud. As credit cards become the most popular mode of payment for both online and regular purchases, cases of fraud associated with them are also rising. In real life, fraudulent transactions are scattered among genuine transactions, and simple pattern matching techniques are often not sufficient to detect those frauds accurately. Implementation of efficient fraud detection systems has thus become imperative for all credit card issuing banks to minimize their losses. Many modern techniques based on artificial intelligence, data mining, fuzzy logic, machine learning, sequence alignment, genetic programming, etc., have evolved for detecting various credit card fraudulent transactions. A clear understanding of all these approaches will certainly lead to an efficient credit card fraud detection system. This paper presents a survey of various techniques used in credit card fraud detection mechanisms and evaluates each methodology based on certain design criteria.

DOWNLOAD


Cooperative Secure Multi-Cloud Verification System

ABSTRACT:

Provable data possession (PDP) is a technique for ensuring the integrity of data in storage outsourcing. In this paper, we address the construction of an efficient PDP scheme for distributed cloud storage to support the scalability of service and data migration, in which we consider the existence of multiple cloud service providers to cooperatively store and maintain the clients’ data. We present a cooperative PDP (CPDP) scheme based on homomorphic verifiable response and hash index hierarchy. We prove the security of our scheme based on multi-prover zero-knowledge proof system, which can satisfy completeness, knowledge soundness, and zero-knowledge properties. In addition, we articulate performance optimization mechanisms for our scheme, and in particular present an efficient method for selecting optimal parameter values to minimize the computation costs of clients and storage service providers. Our experiments show that our solution introduces lower computation and communication overheads in comparison with non-cooperative approaches.

DOWNLOAD


Crime Detection in Credit Card Fraud

ABSTRACT:

Identity crime has become prominent because there is so much real identity data available on the Web, and confidential data is accessible through unsecured mailboxes. It has also become easy for perpetrators to hide their true identities. This can happen in a myriad of insurance, credit, and telecommunications frauds, as well as other more serious crimes. In addition, identity crime is prevalent and costly in developed countries that do not have nationally registered identity numbers. Credit card fraud is an element of identity fraud. It can have far-reaching effects, since the information on the card can be used to perpetrate other types of identity theft crimes. Anything from using the signature on the back of a stolen card to loaning a credit card to a friend or family member can give someone what they need to open other credit card accounts or bank accounts in the victim's name. Credit applications are Internet or paper-based forms with written requests by potential customers for credit cards, mortgage loans, and personal loans. Credit application fraud is a specific case of identity crime, involving synthetic identity fraud and real identity theft. This paper proposes a new multilayered detection system complemented with two additional layers: communal detection (CD) and spike detection (SD). CD finds real social relationships to reduce the suspicion score and is tamper-resistant to synthetic social relationships; it is a whitelist-oriented approach on a fixed set of attributes. SD finds spikes in duplicates to increase the suspicion score and is probe-resistant for attributes; it is an attribute-oriented approach on a variable-size set of attributes.

DOWNLOAD


Criminal Face Detection

ABSTRACT:

A criminal record generally contains personal information about a particular person along with a photograph. To identify any criminal, we need some identification regarding the person, which is given by an eyewitness. In most cases the quality and resolution of the recorded image segments are poor, and it is hard to identify a face. To overcome this sort of problem, we are developing software for identification. Identification can be done in many ways, such as fingerprints, eyes, and DNA; one such application is face identification. The face is our primary focus of attention in social intercourse, playing a major role in conveying identity and emotion. Although the ability to infer intelligence or character from facial appearance is suspect, the human ability to recognize faces is remarkable.

DOWNLOAD


Cut Detection in Wireless Sensor Networks

ABSTRACT:

A wireless sensor network can get separated into multiple connected components due to the failure of some of its nodes, which is called a “cut”. In this article we consider the problem of detecting cuts by the remaining nodes of a wireless sensor network. We propose an algorithm that allows (i) every node to detect when the connectivity to a specially designated node has been lost, and (ii) one or more nodes (that are connected to the special node after the cut) to detect the occurrence of the cut. The algorithm is distributed and asynchronous: every node needs to communicate with only those nodes that are within its communication range. The algorithm is based on the iterative computation of a fictitious “electrical potential” of the nodes. The convergence rate of the underlying iterative scheme is independent of the size and structure of the network.
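A toy, synchronous version of the "electrical potential" iteration can convey the idea: nodes repeatedly average their neighbors' states while a source node injects a constant, and nodes cut off from the source see their potential decay toward zero. The update rule and detection threshold below are simplified stand-ins for the distributed, asynchronous algorithm described in the paper.

```java
import java.util.List;

// Toy synchronous simulation of potential-based cut detection: each node's
// state becomes the average of its neighbors' states (plus a constant
// injection at the source). Nodes still connected to the source converge to
// clearly positive potentials; isolated nodes decay toward zero.
public class PotentialCutDetection {
    public static void main(String[] args) {
        // adjacency list: node 0 is the source; node 3 has been cut off from the rest
        List<List<Integer>> adj = List.of(
                List.of(1, 2),   // node 0 (source)
                List.of(0, 2),   // node 1
                List.of(0, 1),   // node 2
                List.of());      // node 3 (isolated by the cut)
        double sourceStrength = 10.0;
        double[] x = new double[adj.size()];

        for (int iter = 0; iter < 200; iter++) {
            double[] next = new double[x.length];
            for (int i = 0; i < x.length; i++) {
                double sum = (i == 0) ? sourceStrength : 0.0; // source injection
                for (int j : adj.get(i)) sum += x[j];
                next[i] = sum / (adj.get(i).size() + 1);
            }
            x = next;
        }
        for (int i = 0; i < x.length; i++) {
            System.out.printf("node %d potential %.3f -> %s%n", i, x[i],
                    x[i] < 0.1 ? "CUT from source" : "connected");
        }
    }
}
```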

DOWNLOAD


Data embedding using Pixel-pair matching technique

ABSTRACT:

This paper proposes a new data-hiding method based on pixel pair matching (PPM). The basic idea of PPM is to use the values of pixel pair as a reference coordinate, and search a coordinate in the neighborhood set of this pixel pair according to a given message digit. The pixel pair is then replaced by the searched coordinate to conceal the digit. Exploiting modification direction (EMD) and diamond encoding (DE) are two data-hiding methods proposed recently based on PPM. The maximum capacity of EMD is 1.161 bpp and DE extends the payload of EMD by embedding digits in a larger notational system. The proposed method offers lower distortion than DE by providing more compact neighborhood sets and allowing embedded digits in any notational system. Compared with the optimal pixel adjustment process (OPAP) method, the proposed method always has lower distortion for various payloads. Experimental results reveal that the proposed method not only provides better performance than those of OPAP and DE, but also is secure under the detection of some well-known steganalysis techniques.
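The EMD scheme that PPM generalizes is easy to state for a pixel pair: the extraction function f(g1, g2) = (g1 + 2*g2) mod 5 assigns a base-5 digit to every pair, and any target digit can be reached by changing at most one pixel by plus or minus one. The sketch below implements that pair-wise embedding and extraction (boundary pixel values 0 and 255 are not handled); it is not the paper's full PPM method.

```java
// Classic EMD embedding for a pixel pair: the extraction function
// f(g1, g2) = (g1 + 2*g2) mod 5 maps a pair to a base-5 digit, and one
// +/-1 change to a single pixel suffices to reach any desired digit.
public class EmdPairEmbedding {
    static int extract(int g1, int g2) {
        return Math.floorMod(g1 + 2 * g2, 5);
    }

    /** Embed one base-5 digit d into the pair (g1, g2); returns the modified pair. */
    static int[] embed(int g1, int g2, int d) {
        int s = Math.floorMod(d - extract(g1, g2), 5);
        switch (s) {
            case 1: g1 += 1; break;   // f increases by 1
            case 2: g2 += 1; break;   // f increases by 2
            case 3: g2 -= 1; break;   // f decreases by 2 (i.e. +3 mod 5)
            case 4: g1 -= 1; break;   // f decreases by 1 (i.e. +4 mod 5)
            default: break;           // s == 0: digit already encoded
        }
        return new int[]{g1, g2};
    }

    public static void main(String[] args) {
        int[] stego = embed(120, 57, 3);
        System.out.println("stego pair: " + stego[0] + "," + stego[1]
                + "  extracted digit: " + extract(stego[0], stego[1]));
    }
}
```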

DOWNLOAD


Data Transmission Using Multi-Tasking-Sockets

ABSTRACT:

This project presents a new socket class that supports both TCP and UDP communication and provides several advantages over other socket classes found in typical socket programming articles. First of all, this class does not have limitations such as the need to provide a window handle; that limitation is inconvenient if all you want is a simple console application, so this library avoids it. It also provides threading support automatically, handling socket connection and disconnection to a peer, and it offers some options not yet found in other socket classes. It supports both client and server sockets: a server socket is a socket that can accept many connections, and a client socket is a socket that is connected to a server socket. You may still use this class to communicate between two applications without establishing a connection; in that case, you create two UDP server sockets (one for each application). The class also reduces the coding needed to create chat-like applications and IPC (inter-process communication) between two or more applications (processes). Reliable communication between two peers is supported with TCP/IP, with error handling. You may use the smart addressing operation to control the destination of the data being transmitted (UDP only); the TCP operation of this class deals only with communication between two peers.
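For comparison with what such a socket class automates, the sketch below is a plain java.net thread-per-connection TCP echo server: it shows the accept loop, the per-client handler thread, and line-based I/O. The port number is arbitrary, and the project's additional features (UDP mode, smart addressing, connection events) are not reproduced.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal thread-per-connection TCP echo server: accept clients, spawn a
// handler thread for each, and echo back every received line.
public class ThreadedEchoServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5000)) {
            System.out.println("listening on port 5000");
            while (true) {
                Socket client = server.accept();           // blocks until a client connects
                new Thread(() -> handle(client)).start();  // one handler thread per client
            }
        }
    }

    static void handle(Socket client) {
        try (client;
             BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println("echo: " + line);              // send each received line back
            }
        } catch (Exception e) {
            System.err.println("client error: " + e.getMessage());
        }
    }
}
```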

DOWNLOAD


Decision Trees for Uncertain Data

ABSTRACT:

Traditional decision tree classifiers work with data whose values are known and precise. We extend such classifiers to handle data with uncertain information. Value uncertainty arises in many applications during the data collection process. Example sources of uncertainty include measurement/quantization errors, data staleness, and multiple repeated measurements. With uncertainty, the value of a data item is often represented not by one single value, but by multiple values forming a probability distribution. Rather than abstracting uncertain data by statistical derivatives (such as mean and median), we discover that the accuracy of a decision tree classifier can be much improved if the “complete information” of a data item (taking into account the probability density function (pdf)) is utilized.

We extend classical decision tree building algorithms to handle data tuples with uncertain values. Extensive experiments have been conducted that show that the resulting classifiers are more accurate than those using value averages. Since processing pdfs is computationally more costly than processing single values (e.g., averages), decision tree construction on uncertain data is more CPU demanding than that for certain data. To tackle this problem, we propose a series of pruning techniques that can greatly improve construction efficiency.
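The fractional-split idea behind uncertain decision trees can be sketched concretely: when an attribute value is a small discrete pdf, a tuple contributes probability mass to both sides of a candidate split instead of falling entirely on one side, and entropy is computed over those fractional class counts. The data and split point below are illustrative, and the paper's pruning techniques are not shown.

```java
import java.util.List;

// Sketch of uncertain split evaluation: each tuple's attribute is a small
// discrete pdf, so at a candidate split the tuple contributes fractional
// weight to the left and right branches; entropy over these fractional
// class counts then scores the split.
public class UncertainSplit {
    record Sample(double value, double prob) {}
    record Tuple(int classLabel, List<Sample> pdf) {}

    /** Probability mass of the tuple's pdf that falls at or below the split point. */
    static double massLeft(Tuple t, double split) {
        return t.pdf().stream().filter(s -> s.value() <= split).mapToDouble(Sample::prob).sum();
    }

    static double entropy(double[] classWeights) {
        double total = 0, h = 0;
        for (double w : classWeights) total += w;
        for (double w : classWeights) {
            if (w > 0) {
                double p = w / total;
                h -= p * Math.log(p) / Math.log(2);
            }
        }
        return h;
    }

    public static void main(String[] args) {
        List<Tuple> data = List.of(
                new Tuple(0, List.of(new Sample(1.0, 0.7), new Sample(3.0, 0.3))),
                new Tuple(1, List.of(new Sample(2.5, 0.4), new Sample(4.0, 0.6))));
        double split = 2.0;
        double[] left = new double[2], right = new double[2];
        for (Tuple t : data) {
            double l = massLeft(t, split);
            left[t.classLabel()] += l;          // fractional membership in the left branch
            right[t.classLabel()] += 1.0 - l;   // remainder goes to the right branch
        }
        System.out.printf("left entropy %.3f, right entropy %.3f%n", entropy(left), entropy(right));
    }
}
```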

DOWNLOAD

