Saturday, December 22, 2018
New Hoarding Technique for Handling Disconnection in Mobile
Literature Survey on New Hoarding Technique for Handling Disconnection in Mobile

Submitted by Mayur Rajesh Bajaj (IWC2011021) in partial fulfilment for the award of the degree of Master of Technology in Information Technology (Specialization: Wireless Communication and Computing), under the guidance of Dr. Manish Kumar, INDIAN INSTITUTE OF INFORMATION TECHNOLOGY, ALLAHABAD (A University Established under sec. 3 of UGC Act, 1956 vide notification no. F. 9-4/99-U.3 dated 04.08.2000 of the Govt. of India) (A Centre of Excellence in Information Technology Established by Govt. of India).

Table of Contents
1. Introduction
2. Related Work and Motivation
   2.1 Coda: The Pioneering System for Hoarding
   2.2 Hoarding Based on Data Mining Techniques
   2.3 Hoarding Techniques Based on Program Trees
   2.4 Hoarding in a Distributed Environment
   2.5 Hoarding Content for Mobile Learning
   2.6 Mobile Clients Through Cooperative Hoarding
   2.7 Comparative Discussion of the Above Techniques
3. Problem Definition
4. New Approach Suggested
   4.1 Zipf's Law
   4.2 Object Hotspot Prediction Model
5. Schedule of Work
6. Conclusion
References
1. Introduction

Mobile devices are computers with wireless communication capabilities that can access worldwide data services from any location while roaming. Nowadays mobile devices support applications such as multimedia, the World Wide Web and other high-profile applications which demand continuous connections, and mobile devices are lacking here. Mobile devices with wireless communication are often disconnected from the network due to the cost of wireless communication or the unavailability of the wireless network. The period during which a mobile device is disconnected from its network is called an offline period. Such offline periods may appear for different reasons: intentional (e.g., the available connection is too expensive for the user) or unintentional (e.g., lack of infrastructure at a given time and location).

During offline periods the user can only access materials located on the device's local memory. Mobile systems typically have a relatively small amount of memory, which is often not enough to store all the data needed for ongoing activities to continue. In such a case, a decision must be taken on which part of the data has to be hoarded. Often we cannot rely on the user's own judgement of what he or she will need, so some sort of smart prefetching is desirable. Uninterrupted operation in offline mode will be in high demand, and mobile computer systems should provide support for it. Seamless disconnected operation can be achieved by loading the files that the user will access in the future from the network to the local storage. This preparation process for disconnected operation is called hoarding.

Some of the parameters which complicate the hoarding process are prediction of the user's future access pattern, handling of hoard misses, limited local hoard memory, unpredictable disconnections and reconnections, activities on hoarded objects at other nodes, and the instability of communication bandwidth in the downstream and upstream directions. An important point is to measure the quality of the hoard and to try to improve it continuously. An often used metric in the evaluation of caching proxies is the hit ratio. The hit ratio is calculated by dividing the number of requests served from the hoard by the total number of requests. It is a good measure for hoarding systems, though a better measure is the miss ratio: the percentage of accesses for which the hoard is ineffective. In this work we give a brief overview of the techniques proposed in earlier work and also present the idea behind the new hoarding technique.
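To make the metric concrete, here is a minimal Python sketch that computes the hit and miss ratios of a hoard against an offline access trace; the trace format and function name are ours, chosen only for illustration.

def hoard_hit_miss_ratio(offline_requests, hoarded_objects):
    """offline_requests: object ids requested while disconnected.
    hoarded_objects: set of object ids loaded into the hoard before disconnection."""
    if not offline_requests:
        return 0.0, 0.0
    hits = sum(1 for obj in offline_requests if obj in hoarded_objects)
    hit_ratio = hits / len(offline_requests)
    return hit_ratio, 1.0 - hit_ratio

# Example: three of four offline requests are served from the hoard.
print(hoard_hit_miss_ratio(["a", "b", "c", "a"], {"a", "b"}))   # (0.75, 0.25)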
2. Related Work and Motivation

Before the early 1990s there was little research on hoarding. Since then, however, interest has increased dramatically among researchers around the globe and many techniques have been developed. Here we list a few of the techniques and discuss them briefly:
• Coda: The Pioneering System for Hoarding
• Hoarding Based on Data Mining Techniques
  - SEER Hoarding System (inspired by clustering techniques)
  - Association Rule-Based Techniques
  - Hoarding Based on Hypergraphs
  - Probability Graph Based Technique
• Hoarding Techniques Based on Program Trees
• Hoarding in a Distributed Environment
• Hoarding Content for Mobile Learning
• Mobile Clients Through Cooperative Hoarding

2.1 Coda

Coda is a distributed file system based on a client-server architecture, where there are many clients and a comparatively small number of servers. It is the first system that enabled users to work in disconnected mode. The concept of hoarding was introduced by the Coda group as a means of enabling disconnected operation. Disconnections in Coda are assumed to occur involuntarily due to network failures or voluntarily due to the isolation of a mobile client from the network. Voluntary and involuntary disconnections are handled the same way. The cache manager of Coda, called Venus, is designed to work in disconnected mode by serving client requests from the cache when the mobile client is disconnected from the network. Requests for files that are not in the cache during disconnection are reported to the client as failures. The hoarding system of Coda lets users select the files that they will hopefully need in the future. This information is used to decide what to load into the local storage. For disconnected operation, files are loaded into the client's local storage; because the original copies are kept at stationary servers, there is the notion of replication and of how to manage locks on the local copies. When the disconnection is voluntary, Coda handles this case by obtaining exclusive locks on the files. However, in the case of involuntary disconnection, the system should defer the conflicting lock requests for an object until reconnection time, which may not be predictable. Venus differs from earlier cache managers in that it incorporates user profiles in addition to the recent reference history. Each workstation maintains a list of pathnames, called the hoard database. These pathnames identify objects of interest to the user at the workstation that maintains the hoard database. Users can modify the hoard database via scripts, which are called hoard profiles. Multiple hoard profiles can be defined by the same user, and a combination of these profiles can be used to modify the hoard database. Venus provides the user with an option to specify two time points during which all file references will be recorded. Due to the limitations of the mobile cache space, users can also specify priorities to provide the hoarding system with hints about the importance of file objects. Precedence is given to high-priority objects during hoarding, where the priority of an object is a combination of the user-specified priority and a parameter indicating how recently it was accessed.
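The survey does not give Venus's exact priority formula; the sketch below only illustrates the idea of blending a user-specified priority with a recency component. The weighting scheme and the recency decay are our assumptions, not Coda's actual policy.

def venus_style_priority(user_priority, references_since_last_access, recency_weight=0.5):
    """Illustrative only: blend a user-assigned priority (0..1) with a recency score.
    Coda/Venus combines these two factors, but not necessarily in this exact way."""
    recency_score = 1.0 / (1.0 + references_since_last_access)   # recent accesses score higher
    return (1.0 - recency_weight) * user_priority + recency_weight * recency_score

# A high-priority object not accessed recently vs. a low-priority one accessed just now:
print(venus_style_priority(0.9, 50), venus_style_priority(0.2, 0))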
Venus performs hierarchical cache management, which means that a directory is not purged unless all of its subdirectories have already been purged. In summary, the Coda hoarding mechanism is based on a least recently used (LRU) policy plus the user-specified profiles that update the hoard database used for cache management. It relies on user intervention to determine what to hoard in addition to the objects already maintained by the cache management system. In that respect, it can be classified as semi-automated. Researchers later developed more innovative techniques with the aim of minimizing user intervention in determining the set of objects to be hoarded. These techniques are discussed in the following sections.

2.2 Hoarding Based on Data Mining Techniques

Extracting interesting patterns from large collections of data is the basis of data mining. In the earlier history of hoarding-related work, researchers applied several data mining techniques to mobile hoarding. Mainly clustering and association rule mining techniques were adopted from the data mining domain.

2.2.1 SEER Hoarding System

To automate the hoarding process, the authors developed a hoarding system called SEER that can make hoarding decisions without user intervention. The basic idea in SEER is to organize users' activities as projects in order to provide more accurate hoarding decisions. A distance measure needs to be defined in order to apply clustering algorithms that group related files. SEER uses the notion of semantic distance based on the file reference behaviour of the files for which the semantic distance needs to be calculated. Once the semantic distances between pairs of files are calculated, a standard clustering algorithm is used to partition the files into clusters. The developers of SEER also employ some filters based on the file type and other conventions introduced by the specific file system they assume. The basic architecture of the SEER predictive hoarding system is provided in Figure 1. The observer monitors user behaviour (i.e., which files are accessed at what time) and feeds the cleaned and formatted access paths to the correlator, which then generates the distances among files in terms of user access behaviour. The distances are called semantic distances and they are fed to the cluster generator, which groups the objects with respect to their distances. The aim of clustering is, given a set of objects and a similarity or distance matrix that describes the pairwise distances or similarities among those objects, to group the objects that are close or similar to each other. Calculation of the distances between files is done by looking at high-level file references, such as open or status inquiry, as opposed to individual reads and writes, which are claimed to obscure the process of distance calculation.

[Figure 1. Architecture of the SEER Predictive Hoarding System]

The semantic distance between two file references is based on the number of intervening references to other files between these two file references. This definition is further enhanced by the notion of lifetime semantic distance. The lifetime semantic distance between an open of file A and an open of file B is the number of intervening file opens (including the open of B). If file A is closed before B is opened, then the distance is defined to be zero. The lifetime semantic distance relates two references to different files; however, it needs to be converted into a distance measure between two files instead of two file references. The geometric mean over the file references is calculated to obtain the distance between the two files. Keeping all pairwise distances takes a lot of space; therefore, only the distances among the closest files are represented (closest is determined by a parameter K: the K closest pairs for each file are considered). The developers of SEER used a variation of an agglomerative (i.e., bottom-up) clustering algorithm called k nearest neighbour, which has low time and space complexity. An agglomerative clustering algorithm first considers individual objects as clusters and tries to combine them to form larger clusters until all the objects are grouped into one single cluster. The algorithm they used is based on merging subclusters into larger clusters if they share at least kn neighbours. If two files share fewer than kn but more than kf close files, then the files in the clusters are replicated to form overlapping clusters instead of being merged. SEER works on top of a user-level replication system such as Coda and leaves the hoarding process to the underlying file system after providing the hoard database. The files that are in the same project as the file that is currently in use are included in the set of files to be hoarded. During disconnected operation, hoard misses are counted to give feedback to the system.
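A minimal sketch of the distance computation described above, written from this survey's description rather than from the SEER code: it measures reference-level lifetime distances over an open/close trace, aggregates them per file pair with a geometric mean, and keeps only the K closest neighbours of each file.

import math
from collections import defaultdict

def file_pair_distances(trace, K=20):
    """trace: list of ('open'|'close', filename) events in time order.
    Returns {file: [(geometric_mean_distance, other_file), ...]} with the K closest files.
    Pairs where A closes before B opens are skipped here, rather than stored as zero,
    to keep the geometric mean well defined (a simplification of the rule above)."""
    samples = defaultdict(list)   # (A, B) -> reference-level lifetime distances
    open_files = {}               # filename -> value of the open counter when it was opened
    opens_seen = 0
    for kind, name in trace:
        if kind == 'open':
            opens_seen += 1
            for a, opened_at in open_files.items():
                if a != name:
                    # intervening opens between A's open and this open of B, including B itself
                    samples[(a, name)].append(opens_seen - opened_at)
            open_files[name] = opens_seen
        else:                     # 'close'
            open_files.pop(name, None)
    nearest = defaultdict(list)
    for (a, b), distances in samples.items():
        gm = math.exp(sum(math.log(d) for d in distances) / len(distances))
        nearest[a].append((gm, b))
    return {a: sorted(pairs)[:K] for a, pairs in nearest.items()}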
2.2.2 Association Rule-Based Techniques

Association rule overview: Let I = {i1, i2, ..., im} be a set of literals, called items, and let D be a set of transactions such that every T in D is a subset of I. A transaction T contains a set of items X if X is a subset of T. An association rule is denoted by an implication of the form X => Y, where X and Y are subsets of I and X and Y are disjoint. A rule X => Y is said to hold in the transaction set D with confidence c if c% of the transactions in D that contain X also contain Y. The rule X => Y has support s in the transaction set D if s% of the transactions in D contain X union Y. The problem of mining association rules is to find all the association rules that have support and confidence greater than user-specified thresholds. The thresholds for confidence and support are called minconf and minsup respectively.

In the association rule based technique for hoarding, the authors describe an application-independent and generic technique for determining what should be hoarded prior to disconnection. The method utilizes association rules, extracted by data mining techniques, for determining the set of items that should be hoarded to a mobile computer prior to disconnection. The proposed method was implemented and tested on synthetic data to estimate its effectiveness. The process of automated hoarding via association rules can be summarized as follows (a sketch appears after the construction details below):

Step 1: Requests of the client in the current session are run through an inferencing mechanism to construct the candidate set prior to disconnection.
Step 2: The candidate set is pruned to form the hoard set.
Step 3: The hoard set is loaded into the client cache.

The need for separate steps for constructing the candidate set and the hoard set arises from the fact that users also move from one machine to another that may have fewer resources. The construction of the hoard set must adapt to such potential changes.

Construction of the candidate set: An inferencing mechanism is used to construct the candidate set of data items that are of interest to the client that is about to disconnect. The candidate set of the client is constructed in two steps: 1. the inferencing mechanism finds the association rules whose heads (i.e., left hand sides) match the client's requests in the current session; 2. the tails (i.e., right hand sides) of the matching rules are added to the candidate set.

Construction of the hoard set: The client that issued the hoard request has limited resources. The storage resource is of particular importance for hoarding since there is limited space to load the candidate set. Therefore, the candidate set obtained in the first step should be reduced to the hoard set so that it fits the client cache. Each data item in the candidate set is associated with a priority. These priorities, together with various heuristics, are incorporated when determining the hoard set. The priorities are used to sort the data items in descending order. The hoard set is constructed from the data items with the highest priority in the candidate set, taking just enough of them to fill the cache.
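The following Python sketch follows the two-phase construction just described. The rule representation and the use of rule confidence as the item priority are our assumptions for illustration, not details fixed by the paper.

def build_candidate_set(rules, session_requests):
    """rules: list of (head_set, tail_set, confidence) association rules.
    The tails of rules whose heads are satisfied by the current session go into
    the candidate set, with each item keeping the best confidence seen."""
    session = set(session_requests)
    candidates = {}
    for head, tail, confidence in rules:
        if set(head) <= session:                    # rule head matches the current session
            for item in tail:
                candidates[item] = max(confidence, candidates.get(item, 0.0))
    return candidates

def build_hoard_set(candidates, item_sizes, cache_capacity):
    """Prune the candidate set to fit the cache, highest priority first."""
    hoard, used = set(), 0
    for item, _priority in sorted(candidates.items(), key=lambda kv: kv[1], reverse=True):
        size = item_sizes.get(item, 1)
        if used + size <= cache_capacity:
            hoard.add(item)
            used += size
    return hoard

# Example with two mined rules and a session that requested items 'a' and 'b':
rules = [({'a'}, {'c', 'd'}, 0.9), ({'b'}, {'e'}, 0.6)]
cands = build_candidate_set(rules, ['a', 'b'])
print(build_hoard_set(cands, {'c': 1, 'd': 1, 'e': 1}, cache_capacity=2))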
2.2.3 Hoarding Based on Hypergraphs

The hypergraph based approach presents a low-cost automatic data hoarding technique based on rules and a hypergraph model. It first uses data mining to extract sequence relevancy rules from the broadcast history, then formulates the hypergraph model, sorts the data into clusters through hypergraph partitioning methods, and orders them topologically. Finally, according to the data invalidation window and the current access record, the data in the corresponding clusters is collected.

Hypergraph model: A hypergraph is defined as H = (V, E), where V = {v1, v2, ..., vn} is the vertex collection of the hypergraph and E = {e1, e2, ..., em} is the hyperedge collection (there are supposed to be m hyperedges in total). A hypergraph is an extension of a graph in which each hyperedge can connect two or more vertices. A hyperedge is a collection of vertices of the hypergraph, ei = {vi1, vi2, ..., vik} with vi1, vi2, ..., vik in V. In this model, the vertex collection V corresponds to the history of broadcast data, in which each vertex corresponds to a broadcast data item, and each hyperedge corresponds to a sequence pattern. A sequence pattern shows the order of data items; a sequence pattern of size K can be expressed as p = <p1, p2, ..., pK>. The use of hypergraphs in hoarding is discussed in the paper in detail.
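As a rough illustration of the model rather than the paper's algorithm, the sketch below mines size-K sequence patterns from a broadcast/access history with a sliding window and stores each frequent pattern as a hyperedge over the involved data items. The window-based mining and the min_support threshold are our assumptions; real hypergraph partitioning is outside the scope of this sketch.

from collections import Counter

def mine_sequence_hyperedges(history, K=3, min_support=2):
    """history: list of data item ids in broadcast/access order.
    Returns a list of hyperedges; each hyperedge is the set of items of a
    size-K sequence pattern that occurs at least min_support times."""
    counts = Counter()
    for i in range(len(history) - K + 1):
        pattern = tuple(history[i:i + K])        # ordered size-K sequence
        if len(set(pattern)) == K:               # ignore patterns with repeated items
            counts[pattern] += 1
    return [set(p) for p, c in counts.items() if c >= min_support]

# Example: the sequence (a, b, c) repeats, so {a, b, c} becomes a hyperedge.
print(mine_sequence_hyperedges(list("abcxabcyabz"), K=3, min_support=2))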
2.2.4 Probability Graph Based Technique

This paper proposes low-cost automated hoarding for mobile computing. An advantage of the approach is that it does not exploit application-specific heuristics, such as directory structure or file extensions. This application independence makes the algorithm applicable to any predictive caching system that addresses data hoarding. The most distinctive feature of the algorithm is that it uses a probability graph to represent data relationships and updates the graph while the user's requests are being processed. Before disconnection, a clustering algorithm divides the data into groups. Then the groups with the highest priority are selected into the hoard set until the cache is filled. Analysis shows that the overhead of this algorithm is much lower than that of previous algorithms.

Probability graph: An important parameter used to construct the probability graph is the look-ahead period. It is a fixed number of file references that defines what it means for one file to be opened 'soon' after another. In other words, for a particular file reference, only references within the look-ahead period are considered related. In fact, the look-ahead period is an approximation used to avoid traversing the whole trace. Unlike constructing a probability graph from a local file system, in the context of mobile data access the data set is dynamically collected from remote data requests. Thus, the authors implement a variation of the algorithm used to construct the probability graph, as illustrated in Figure 2.

[Figure 2. Constructing the probability graph]

The basic idea is simple: if a reference to data object A follows the reference to data object B within the look-ahead period, then the weight of the directed arc from B to A is incremented by one. The look-ahead period affects the absolute weights of the arcs; a larger look-ahead period produces more arcs and larger weights. A's dependency on B is represented by the weight of the arc from B to A divided by the total weight of the arcs leaving B.

Clustering: Before constructing the final hoard set, data objects are clustered into groups based on the dependencies among them. The main objective of the clustering phase is to guarantee that closely related data objects are partitioned into the same group. In the subsequent selecting phase, data objects are selected into the hoard set at the granularity of a group. This design provides more continuity in user operation when disconnected.

Selecting groups: The following four kinds of heuristic information are applicable for calculating the priority of a group (a sketch of the graph construction follows this list):
• total access time of all data objects;
• average access time of data objects;
• access time of the first data object;
• average access time per byte.
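A minimal sketch of the graph construction and dependency computation described above; the data structures, and the way groups would later be formed from the dependency values, are our own simplifications.

from collections import defaultdict, deque

def build_probability_graph(request_trace, look_ahead=5):
    """request_trace: list of data object ids in access order.
    Returns (arc_weight, dependency): arc_weight[b][a] counts how often a reference
    to a followed a reference to b within the look-ahead period, and
    dependency[b][a] = arc_weight[b][a] / total weight of arcs leaving b."""
    arc_weight = defaultdict(lambda: defaultdict(int))
    window = deque(maxlen=look_ahead)        # the most recent references
    for a in request_trace:
        for b in window:
            if b != a:
                arc_weight[b][a] += 1        # a follows b within the look-ahead period
        window.append(a)
    dependency = {}
    for b, outgoing in arc_weight.items():
        total = sum(outgoing.values())
        dependency[b] = {a: w / total for a, w in outgoing.items()}
    return arc_weight, dependency

_, dep = build_probability_graph(["b", "a", "c", "b", "a"], look_ahead=2)
print(dep["b"])   # a's and c's dependency on b, e.g. {'a': 0.67, 'c': 0.33}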
2.3 Hoarding Techniques Based on Program Trees

A hoarding tool based on program execution trees was developed by the authors, running under the OS/2 operating system. Their method is based on analyzing program executions to construct a profile for each program depending on the files the program accesses. They propose a solution to the hoarding problem in the case of informed disconnections: the user tells the mobile computer that there is an imminent disconnection, and the cache is filled intelligently so that the files that will be used in the future are already there when needed.

[Figure 3. Sample program tree]

This hoarding mechanism lets the user make the hoarding decision. The hoarding options are presented to the user through a graphical user interface, and the working sets of applications are captured automatically. The working sets are detected by logging the user's file accesses in the background. During hoarding, this log is analyzed and trees that represent the program executions are constructed. A node denotes a file, and a link from a parent to one of its child nodes tells us that the child is either opened or executed by the parent. The roots of the trees are the initial processes. Program trees are constructed for each execution of a program, which captures multiple contexts of execution of the same program. This has the advantage that the whole context is captured across different execution times of the program. Finally, hoarding is performed by taking the union of all the execution trees of a running program. A sample program tree is provided in Figure 3. Due to the storage limitations of mobile computers, the number of trees that can be stored for a program is limited to 15 LRU program trees. Hoarding through program trees can be thought of as a generalization of a program execution obtained by looking at past behaviour. The hoarding mechanism is enhanced by letting the user rule out data files. Data files are automatically detected using three complementary heuristics:
1. By looking at filename extensions and observing the filename conventions in OS/2, files can be distinguished as executables, batch files, or data files.
2. Directory inferencing is used as a spatial locality heuristic. The files whose pathnames differ from the running program in the top-level directory are assumed to be data files, while the programs in the same top-level directory are assumed to be part of the same program.
3. Modification times of the files are used as the final heuristic to determine the type of a file. Data files are assumed to be modified more recently and more frequently than executables.
The authors also devised a parametric model for evaluation, which is based on recency and frequency.

2.4 Hoarding in a Distributed Environment

Another hoarding mechanism, presented for a specific application in a distributed system, assumes a specific architecture such as infostations, where mobile users are connected to the network via wireless local area networks (LANs) that offer high bandwidth and are a cheaper option compared to wireless wide area networks (WANs). The hoarding process is handed over to the infostations in that model, and it is assumed that what the user wants to access is location-dependent. Hoarding is proposed to bridge the capacity and cost trade-off between wireless WANs and wireless LANs. The infostations do the hoarding, and when a request is not found in the infostation, the WAN is used to get the data item. The hoarding decision is based on the user access patterns coupled with the user's location information. Items frequently accessed by mobile users are recorded together with spatial information (i.e., where they were accessed). A region is divided into hoarding areas and each infostation is responsible for one hoarding area.
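A small sketch of the location-dependent bookkeeping described for the infostation model: accesses are recorded per hoarding area and each area's infostation hoards its most frequently accessed items. That general idea comes from the description above; the exact policy and the class layout here are assumed.

from collections import defaultdict, Counter

class InfostationHoarder:
    """Per-hoarding-area access statistics; the infostation of each area hoards the
    most frequently accessed items of that area."""
    def __init__(self):
        self.area_counts = defaultdict(Counter)   # hoarding area id -> item access counts

    def record_access(self, area, item):
        self.area_counts[area][item] += 1

    def hoard_for_area(self, area, capacity):
        """Items the infostation of this area should hoard, most popular first."""
        return [item for item, _ in self.area_counts[area].most_common(capacity)]

h = InfostationHoarder()
for item in ["map_a", "map_a", "menu", "map_b"]:
    h.record_access("area-1", item)
print(h.hoard_for_area("area-1", capacity=2))   # ['map_a', 'menu']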
2.5 Hoarding Content for Mobile Learning

Hoarding in the learning context is the process of automatically choosing what part of the overall learning content should be prepared and made available for the next offline period of a learner equipped with a mobile device. We can break the hoarding process into a few steps, discussed in more detail below (a pipeline sketch follows this list):
1. Predict the entry point of the current user for his or her next offline learning session. We call it the 'starting point'.
2. Create a 'candidate for caching' set. This set should contain related documents (objects) that the user might access from the selected starting point.
3. Prune the set: the objects that probably will not be needed by the user should be excluded from the candidate set, thus making it smaller. This should be done based on user behaviour observations and domain knowledge.
4. Assign a priority to all objects remaining in the hoarding set after pruning. Using all the knowledge available about the user and the current learning domain, every object left in the hoarding set should be assigned a priority value. The priority should express how important the object is for the next user session and should be higher if we suppose that there is a higher probability that the object will be used sooner.
5. Sort the objects based on their priority and produce an ordered list of objects.
6. Hoard, starting from the beginning of the list (thus putting in the device cache those objects with higher priority) and continuing with the ones with smaller weights until the available memory is filled.
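The skeleton below mirrors the six steps above; every heuristic in it (last visited object as starting point, reachability as candidate set, visited objects pruned, hop distance as priority, unit-size objects) is a placeholder standing in for the behaviour-based and domain-knowledge-based logic the paper leaves open.

def hoard_learning_content(course_graph, user_history, memory_limit):
    """course_graph: {object_id: [linked object ids]} describing the learning content.
    user_history: object ids the learner has visited, oldest first."""
    # Step 1 (naive placeholder): assume the next session starts where the last one ended.
    start = user_history[-1] if user_history else next(iter(course_graph))
    # Step 2: candidate set = everything reachable from the starting point.
    candidates, frontier = {start}, [start]
    while frontier:
        for nxt in course_graph.get(frontier.pop(), []):
            if nxt not in candidates:
                candidates.add(nxt)
                frontier.append(nxt)
    # Step 3 (one possible heuristic): drop objects the learner has already visited.
    candidates -= set(user_history[:-1])
    # Steps 4 and 5: priority = how soon the object is reachable from the starting point.
    ordered = sorted(candidates, key=lambda obj: hop_distance(course_graph, start, obj))
    # Step 6: fill the available memory in priority order (unit-size objects assumed).
    return ordered[:memory_limit]

def hop_distance(graph, src, dst):
    """Breadth-first hop count, standing in for a real 'will be used sooner' estimate."""
    from collections import deque
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")

graph = {"intro": ["lesson1", "lesson2"], "lesson1": ["quiz1"], "lesson2": [], "quiz1": []}
print(hoard_learning_content(graph, ["intro"], memory_limit=3))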
2.6 Mobile Clients Through Cooperative Hoarding

Recent research has shown that mobile users often move in groups. Cooperative hoarding takes advantage of the fact that even when disconnected from the network, clients may still be able to communicate with each other in ad-hoc mode. By performing hoarding cooperatively, clients can share their hoard content during disconnections to achieve higher data availability and reduce the risk of critical cache misses. Two cooperative hoarding schemes, GGH and CAP, have been proposed. GGH improves hoard performance by allowing clients to take advantage of what their peers have hoarded when making their own hoarding decisions. CAP, on the other hand, selects the best client in the group to hoard each object, so as to maximise the number of unique objects hoarded and minimise access cost. Simulation results show improvements compared to existing schemes. Details of GGH and CAP are given in the paper.

2.7 Comparative Discussion of the Above Techniques

The hoarding techniques discussed above vary depending on the target system, and it is difficult to make an objective comparative evaluation of their effectiveness. We can classify the hoarding techniques as automated or not. In that respect, being the initial hoarding system, Coda is semi-automated and needs human intervention for the hoarding decision. The rest of the hoarding techniques discussed are fully automated; however, user supervision is always desirable to give a final touch to the set of files to be hoarded. Among the automated hoarding techniques, SEER and the program tree based ones assume a specific operating system and use semantic information about the files, such as naming conventions or file reference types, to construct the hoard set. The ones based on association rule mining and on the infostation environment do not make any operating system specific assumptions; therefore, they can be used in generic systems. Coda handles both voluntary and involuntary disconnections well. The infostation-based hoarding approach is also inherently designed for involuntary disconnections, because hoarding is done while the user passes through the range of the infostation area. However, the time of disconnection can be predicted with a certain error bound by considering the direction and the speed of the moving client and predicting when the user will go out of range. The program tree based methods are specifically designed for previously announced disconnections. The scenario assumed in the case of infostations is a distributed wireless infrastructure, which makes it unique among the hoarding mechanisms. This case is especially important in today's world, where peer-to-peer systems are becoming more and more popular.

3. Problem Definition

The new technique that we plan to design for hoarding will be used on a mobile network. The goals that we have set are:
a. Finding a solution having optimal hit ratio in the hoard at the local node.
b. The technique should not have high time complexity, because there is not much time to perform the hoarding operation once the disconnection becomes known.
c. Optimal utilization of hoard memory.
d. Support for both intentional and unintentional disconnection.
e. Proper handling of conflicts in hoarded objects upon reconnection.
However, our priority will be the hit ratio rather than the other goals. We will make certain assumptions about the other issues if we find any scope for improvement in hit ratio.

4. New Approach

4.1 Zipf's Law

Zipf's law is a mathematical tool to describe the relationship between words in a text and their frequencies. Considering a long text and assigning ranks to all words by their frequencies in this text, the occurrence probability P(i) of the word with rank i satisfies the formula below, known as Zipf's first law, where C is a constant:

P(i) = C / i    .... (1)

This formula is further extended into a more generalized form, known as the Zipf-like law:

P(i) = C / i^α    .... (2)

Obviously, the probabilities over all N ranks must sum to one:

Σ (i = 1..N) P(i) = 1    .... (3)

Now, according to (2) and (3), we have

C = 1 / ( Σ (i = 1..N) 1 / i^α )    .... (4)

Our work is to dynamically calculate α for different streams; then, according to formulas (2) and (4), the hot spot can be predicted based on the rank of an object.

4.2 Object Hotspot Prediction Model

4.2.1 Hotspot Classification
We classify hot spots into two categories: "permanent hotspot" and "stage hotspot". A permanent hotspot is an object which is frequently accessed on a regular basis. Stage hotspots can be further divided into two types: "cyclical hotspot" and "sudden hotspot". A cyclical hotspot is an object which becomes popular periodically. If an object becomes a focus of interest suddenly, it is a sudden hotspot.

4.2.2 Hotspot Identification
Hotspots in distributed stream-processing storage systems can be identified via a ranking policy (sorted by access frequencies of objects). In our design, the hotspot objects will be inserted into a hotspot queue. The maximum queue length is determined by the cache size and the average size of hotspot objects. If an object's rank is smaller than the maximum hotspot queue length (in this case, the rank is high), it will be considered a "hotspot" in our system; otherwise it will be considered a "non-hotspot". The objects in the queue will be handled by the hotspot cache strategy.

4.2.3 Hotspot Prediction
This is our main section of interest; here we will try to determine the prediction model for hoard content with optimal hoard hit ratio.
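To make the intended use of formulas (2) and (4) concrete, the sketch below estimates α from an observed access stream with a least-squares fit in log-log space, computes C from formula (4), and marks as hotspots the objects whose rank falls within a queue length derived from the cache size. The fitting method and the queue-length rule are our assumptions, since the actual prediction model is still to be worked out.

import math
from collections import Counter

def estimate_alpha(sorted_freqs):
    """Estimate the Zipf exponent as the negated least-squares slope of
    log(frequency) versus log(rank). sorted_freqs: counts in descending order."""
    if len(sorted_freqs) < 2:
        return 1.0
    xs = [math.log(rank) for rank in range(1, len(sorted_freqs) + 1)]
    ys = [math.log(f) for f in sorted_freqs]
    mean_x, mean_y = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return -slope

def predict_hotspots(access_stream, cache_size, avg_object_size):
    """Rank objects by observed frequency; objects whose rank fits in the hotspot
    queue (cache_size / avg_object_size entries) are predicted to be hotspots."""
    ranked = Counter(access_stream).most_common()            # [(object, count), ...] by rank
    freqs = [count for _, count in ranked]
    alpha = estimate_alpha(freqs)
    C = 1.0 / sum(1.0 / (i ** alpha) for i in range(1, len(freqs) + 1))   # formula (4)
    probs = {obj: C / (rank ** alpha)                                      # formula (2)
             for rank, (obj, _) in enumerate(ranked, start=1)}
    max_queue_len = max(1, cache_size // avg_object_size)
    hotspots = [obj for rank, (obj, _) in enumerate(ranked, start=1) if rank <= max_queue_len]
    return hotspots, alpha, probs

stream = ["a"] * 8 + ["b"] * 4 + ["c"] * 2 + ["d"]
print(predict_hotspots(stream, cache_size=2, avg_object_size=1))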
5. Schedule of Work

| Work | Planned Period | Remarks |
| Studying previous work on Hoarding | July - Aug 2012 | Complete |
| Identifying Problem | Sept 2012 | Complete |
| Innovating New Approach | Oct 2012 | Ongoing |
| Integrating with Mobile Arena as solution to Hoarding | Nov - Dec 2012 | - |
| Simulation and Testing | Jan 2013 | - |
| Optimization | Feb 2013 | - |
| Simulation and Testing | Mar 2013 | - |
| Writing Thesis Work / Journal Publication | Apr - May 2013 | - |

6. Conclusion

In this literature survey we have discussed previous related work on hoarding. We have also given the requirements for the new technique that is planned to be designed, and we are suggesting a new approach that falls under the category of hoarding with data mining techniques. Recent studies have shown that the use of the proposed technique, i.e. the Zipf-like law, for caching of web content has improved the hit ratio to a great extent. With this work we are expecting improvements in the hit ratio of the local hoard.

References

[1] James J. Kistler and Mahadev Satyanarayanan. Disconnected Operation in the Coda File System. ACM Transactions on Computer Systems, vol. 10, no. 1, pp. 3-25, 1992.
[2] Mahadev Satyanarayanan. The Evolution of Coda. ACM Transactions on Computer Systems, vol. 20, no. 2, pp. 85-124, 2002.
[3] Geoffrey H. Kuenning and Gerald J. Popek. Automated Hoarding for Mobile Computers. In Proceedings of the 16th ACM Symposium on Operating Systems Principles (SOSP 1997), October 5-8, St. Malo, France, pp. 264-275, 1997.
[4] Yucel Saygin, Ozgur Ulusoy, and Ahmed K. Elmagarmid. Association Rules for Supporting Hoarding in Mobile Computing Environments. In Proceedings of the 10th IEEE Workshop on Research Issues in Data Engineering (RIDE 2000), February 28-29, San Diego, pp. 71-78, 2000.
[5] Rakesh Agrawal and Ramakrishnan Srikant. Fast Algorithms for Mining Association Rules. In Proceedings of the 20th International Conference on Very Large Data Bases, Chile, 1994.
[6] Guo Peng, Hu Hui, and Liu Cheng. The Research of Automatic Data Hoarding Technique Based on Hyper Graph. In Proceedings of the 1st International Conference on Information Science and Engineering (ICISE), 2009.
[7] Huan Zhou, Yulin Feng, and Jing Li. Probability Graph Based Data Hoarding for Mobile Environment. Information & Software Technology, pp. 35-41, 2003.
[8] Carl Tait, Hui Lei, Swarup Acharya, and Henry Chang. Intelligent File Hoarding for Mobile Computers. In Proceedings of the 1st Annual International Conference on Mobile Computing and Networking (MOBICOM '95), Berkeley, CA, 1995.
[9] Anna Trifonova and Marco Ronchetti. Hoarding Content for Mobile Learning. International Journal of Mobile Communications, vol. 4, no. 4, pp. 459-476, 2006.
[10] Kwong Yuen Lai, Zahir Tari, and Peter Bertok. Improving Data Accessibility for Mobile Clients through Cooperative Hoarding. In Proceedings of the 21st International Conference on Data Engineering (ICDE), 2005.
[11] G. Zipf. Human Behavior and the Principle of Least Effort. Addison-Wesley, 1949.
[12] Chentao Wu, Xubin He, Shenggang Wan, Qiang Cao, and Changsheng Xie. Hotspot Prediction and Cache in Distributed Stream-Processing Storage Systems. In Proceedings of the 28th IEEE International Performance Computing and Communications Conference (IPCCC), 2009.
[13] Lei Shi, Zhimin Gu, Lin Wei, and Yun Shi. An Applicative Study of Zipf's Law on Web Cache. International Journal of Information Technology, vol. 12, no. 4, 2006.
[14] Web link: http://en.wikipedia.org/wiki/Zipf%27s_law