Internet Applications

The Internet is the largest computer network in the world. It originated in the late 1960s, when the United States Defense Department developed ARPAnet (Advanced Research Projects Agency network), an experimental network that connected research and defense sites.

Internet: Applications

The Internet has many important applications. Of the various services available via the Internet, the three most important are e-mail, web browsing, and peer-to-peer services. E-mail, also known as electronic mail, is the most widely used and successful of Internet applications. Web browsing is the application that had the greatest influence on the dramatic expansion of the Internet and its use during the 1990s. Peer-to-peer networking is the newest of these three Internet applications, and also the most controversial, because its uses have created problems related to the access and use of copyrighted materials.

E-Mail

Whether judged by volume, popularity, or impact, e-mail has been and continues to be the principal Internet application, despite the fact that the underlying technologies have not been altered significantly since the early 1980s. In recent years, the continuing rapid growth in the use and volume of e-mail has been fueled by two factors. The first is the increasing number of Internet Service Providers (ISPs) offering this service; the second is that the number of physical devices capable of supporting e-mail has grown to include highly portable devices such as personal digital assistants (PDAs) and cellular telephones. The volume of e-mail also continues to increase because there are more users, and because users now have the ability to attach documents of various types to e-mail messages. While this has long been possible, the formulation of Multipurpose Internet Mail Extensions (MIME) and its adoption by software developers has made it much easier to send and receive attachments, including word-processed documents, spreadsheets, and graphics. The result is that the volume of traffic generated by e-mail, measured in terms of the number of data packets moving across the network, has increased dramatically in recent years, contributing significantly to network congestion.

Session Reconstruction for Rich Internet Applications

Input traces: It is assumed that the session reconstruction tool has the log for a single user.

Since the log on the server usually contains the traces of different users, the session reconstruction tool relies on other methods, such as [11], to extract the traffic of a single user. This assumption is necessary since the tool needs to compare the requests generated after performing an action with the input log. Access to the application during reconstruction: It is assumed that the reconstruction is done off-line.

This means that during the reconstruction process there is no access to the server, and the tool only exploits previously collected HTTP traces.

This assumption ensures that the tool remains effective even when the server is not available (for example, because of an attack or a bug in the application). Moreover, replaying the session off-line without accessing the server provides a sandboxed environment, which is especially desirable during forensic analysis.

User-Input Actions: Regarding actions that include input values from users, we make two assumptions. First, it is assumed that the input values passed into the generated requests are not encoded in a non-standard way; otherwise, the session reconstruction tool cannot recover the actual values entered by the user. The second assumption concerns the domain of user-input values.

It is assumed that the tool can produce acceptable values for a user-input action, using some preset libraries of possible inputs. This is necessary to be able to automatically input values that will not be blocked by client-side validation.

Note that this does not mean that the tool should somehow guess the correct user input values; those values will be found in the log. Instead, the tool should be able to provide some inputs that allow the session to continue. The algorithm used to extract user interactions is shown in Algorithm 1.

The main procedure takes care of initialization. The recursive session reconstruction procedure, SR, starts at line 8.

In this approach, the algorithm extracts all possible candidate actions of the current state of the application, S_n (line 15), and tries them one by one. Since the requests can be generated in different orders, the order of elements in R_s does not matter for the Match function (line 18). The algorithm then appends the chosen action a to the currently found action sequence and continues to find the rest of the actions in the remaining trace (line 20). The algorithm stops when all requests in the input trace have been matched.

The session reconstruction algorithm starts from the initial state of the application (line 7). The output contains all solutions to the problem.

Each solution includes a sequence of user actions that matches the input trace. At each state, there may be several actions which are correct; in this case, the algorithm finds several correct solutions to the problem. However, in practice the algorithm can be made faster by adding a switch parameter, findAll, that stops it after the first solution is found. During the execution of the algorithm, the Client, the Robot, and the Proxy collaborate to perform several tasks: the Client lists the possible actions on the current state (line 15), the Robot triggers the action on the current state (line 17), and the Proxy responds to the requests generated while the Client executes the action. However, in practice there may be a large number of candidate actions at a given state, so the algorithm needs a smarter way to order candidate actions from the most promising to the least promising.
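
For illustration, a minimal Python sketch of this recursive search is given below. The Client, Robot, and Proxy objects and the helper functions (candidate_actions, execute, match) are hypothetical stand-ins for the components described above, not the actual implementation of Algorithm 1.

    # Illustrative sketch of the recursive session reconstruction search (SR).
    # The client, robot and proxy objects and their methods are hypothetical
    # stand-ins for the components described in the text.

    def reconstruct(client, robot, proxy, trace, find_all=False):
        """Return every action sequence whose generated traffic matches `trace`."""
        solutions = []
        sr(client, robot, proxy, trace, [], solutions, find_all)
        return solutions

    def sr(client, robot, proxy, remaining, prefix, solutions, find_all):
        if not remaining:                        # all requests in the trace are matched
            solutions.append(list(prefix))       # record one complete solution
            return
        state = client.snapshot()                # remember the current application state
        for action in client.candidate_actions():       # candidates of the current state
            generated = robot.execute(action, proxy)     # proxy replays recorded responses
            rest = match(generated, remaining)            # order-insensitive comparison;
                                                          # returns the unmatched tail or None
            if rest is not None:
                prefix.append(action)
                sr(client, robot, proxy, rest, prefix, solutions, find_all)
                prefix.pop()
                if solutions and not find_all:            # stop after the first solution
                    client.restore(state)
                    return
            client.restore(state)                         # undo the trial action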

The signature of an action is the traffic that was generated when the action was performed previously (possibly from another state of the application). Note that the session reconstruction algorithm does not have the signature of every action; the signature for an action is extracted once the action has been evaluated for the first time.

To apply signature-based ordering, the session reconstruction tool must be able to identify different instances of the same action at different states. We need an id that remains the same across states; in each state, the session reconstruction tool calculates the id of each possible action and uses this id to retrieve the action's signature from previous states. Actions whose signature matches the next expected traffic are tried first; then the algorithm tries actions that do not have any signature; finally, the least promising actions are tried (those whose signature match is lower than the threshold).
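
A rough sketch of this ordering step follows, with the caveat that the names used here (action_id, signatures, similarity, threshold) are assumptions for illustration rather than the actual interface of the tool.

    # Hypothetical sketch of signature-based ordering of candidate actions.
    # `signatures` maps a state-independent action id to the traffic recorded the
    # last time that action was executed; `similarity` scores how well a signature
    # matches the next expected requests of the trace (0.0 .. 1.0).

    def order_candidates(candidates, signatures, expected_requests,
                         action_id, similarity, threshold=0.5):
        promising, unknown, unlikely = [], [], []
        for action in candidates:
            signature = signatures.get(action_id(action))     # same id across states
            if signature is None:
                unknown.append(action)                         # never executed before
            elif similarity(signature, expected_requests) >= threshold:
                promising.append(action)                       # signature matches the trace
            else:
                unlikely.append(action)                        # match below the threshold
        # most promising first, then unseen actions, then the unlikely ones
        promising.sort(key=lambda a: similarity(signatures[action_id(a)],
                                                expected_requests), reverse=True)
        return promising + unknown + unlikely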

If two actions have the same match value, they can be tried in any order. Example: Consider the simple example in the figure. In this example, we assume that clicking on a product displays some information about the product, but does not add any new possible user actions to the page.

At the initial state, the priority of the two href elements is minimal, since their initiating requests do not appear in the input trace. The priority value for the remaining actions is 0. Assume that actions are tried in the order P1, P2, P3. The algorithm will try clicking on P1 and P2 to discover the first interaction, i.e., Click P2. In addition, it learns the signatures of clicking on P1 and P2. At the second state, the correct action, Click P1, is therefore selected immediately. At the third state, Click P3 also gets a priority of 0.

At this state the correct action is selected immediately. To sum up, the 3 actions are found after trying 4 actions. At each state the algorithm extracts the list of candidate actions (line 17) and executes them one by one using the client (the for loop). The client needs to carry out several tasks to execute an action: it needs to initiate several requests, process the responses, and update its state.

These tasks can take a long time for the client to finish; therefore, the total runtime can often be decreased by using several clients. After the candidate actions have been extracted, the algorithm does not need to wait for a client to finish executing an action; it assigns the next candidate action to the next available client.
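
The multi-client evaluation described above might be organized as in the following sketch, using Python's standard thread pool; evaluate_action is a hypothetical helper that executes one action on one client and returns the traffic it generated.

    # Hypothetical sketch: evaluating candidate actions concurrently so that the
    # algorithm does not have to wait for one client to finish before starting
    # the next candidate.
    from concurrent.futures import ThreadPoolExecutor
    from queue import Queue

    def evaluate_candidates(candidates, clients, evaluate_action):
        free_clients = Queue()
        for client in clients:
            free_clients.put(client)            # all clients start out idle

        def run(action):
            client = free_clients.get()         # borrow an idle client
            try:
                return action, evaluate_action(client, action)
            finally:
                free_clients.put(client)        # hand the client back to the pool

        # at most len(clients) actions are evaluated at the same time
        with ThreadPoolExecutor(max_workers=len(clients)) as pool:
            return list(pool.map(run, candidates))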

In this approach, several actions can be evaluated concurrently, which potentially decreases the runtime of the algorithm. For each action, we need to extract all the information required to execute that action (we call such information the parameters of the action). For a click action, the only required parameter is the element that is the target of the click.

However, for actions that involve user inputs, more parameters must be determined: first, the set of input elements, and second, the values assigned to these elements (the value parameters). We assume that the client can provide the list of input elements at each state. To detect the value parameters of user-input actions, we propose the following approach (sketched in code after this list):

1. At each state, the system performs each user-input action using an arbitrary set of values, x, chosen from the domain of the input elements of that action.
2. The system observes the requests T generated after performing the user-input action.
3. If the next expected traffic is exactly the same as T, but with different user-input values y instead of x, the system concludes that the user performed the user-input action using y.
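
A simplified sketch of this idea follows. The request representation (url_path and params attributes) and the perform_with_values helper are assumptions made for illustration; for brevity the sketch also compares requests in order, although the real matching is order-insensitive.

    # Hypothetical sketch of recovering the values a user typed into a form.
    # The action is replayed with arbitrary placeholder values x; if the next
    # requests in the trace have the same shape but different values y, then y
    # are taken to be the values the user actually entered.

    def recover_input_values(perform_with_values, action, placeholders,
                             expected_requests):
        generated = perform_with_values(action, placeholders)   # the requests T
        if len(generated) != len(expected_requests):
            return None
        recovered = {}
        for sent, expected in zip(generated, expected_requests):
            if sent.url_path != expected.url_path:
                return None                         # not the same request at all
            for name, value in sent.params.items():
                if name not in expected.params:
                    return None
                if value in placeholders.values():
                    recovered[name] = expected.params[name]   # y, the user's real value
                elif expected.params[name] != value:
                    return None                     # difference not caused by user input
        return recovered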

Example: The text box at the top of the running example illustrates this technique. However, the technique is only effective if the user-input data is passed as is; if the submitted data is encoded in any way, the actual values entered by the user cannot be extracted from the logs.

Another challenge is randomness in the generated traffic. Both the client side and the server side of the application can contribute to this randomness: the client side can generate different requests after performing the same action from the same state, and the server side may respond with different responses.

The responses are served by the proxy by replaying a recorded trace; therefore, there is no randomness in the responses during the reconstruction. However, the session reconstruction algorithm still needs to handle randomness in the client-generated requests. If the execution of an action generates random requests, the algorithm cannot detect the correct action, since executing the action produces requests that differ from the requests in the input trace.

The Match function (line 18 in Algorithm 1) needs to detect the existence of randomness and flexibly find the appropriate responses to the set of requests, as explained in Section 2.
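
One way such a tolerant Match function could be written is sketched below; the set of parameter names treated as random (for example, cache-busting timestamps) and the request attributes are assumptions for illustration, not the paper's actual implementation.

    # Hypothetical sketch of an order-insensitive Match that tolerates random
    # request parameters (for example, cache-busting timestamps added by the client).

    RANDOM_PARAMS = {"_", "ts", "rand"}             # assumed names of random parameters

    def normalize(request):
        # drop parameters that are expected to vary between runs of the same action
        params = tuple(sorted((k, v) for k, v in request.params.items()
                              if k not in RANDOM_PARAMS))
        return (request.method, request.url_path, params)

    def match(generated, remaining):
        """Return the unmatched tail of the trace, or None if the requests
        generated by the action do not match the head of the trace."""
        if len(generated) > len(remaining):
            return None
        pool = [normalize(r) for r in remaining[:len(generated)]]
        for request in generated:                   # order of requests does not matter
            key = normalize(request)
            if key in pool:
                pool.remove(key)
            else:
                return None
        return remaining[len(generated):]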

Microsoft and Netscape dominate the market for web browsers, with Microsoft's Internet Explorer holding about three-quarters of the market, and Netscape holding all but a small fraction of the balance. During the first few years of web growth, the competition between Microsoft and Netscape for the browser market was fierce, and both companies invested heavily in the development of their respective browsers.

Changes in business conditions toward the end of the 1990s and growing interest in new models of networked information exchange caused each company to focus less intensely on the development of web browsers, resulting in a marked slowing of their development and an increasing disparity between the standards being developed by W3C and the support offered by Internet Explorer or Netscape Navigator.

Now, the future of the web browser may be short-lived, as standards developers and programmers elaborate the basis for network-aware applications that eliminate the need for the all-purpose browser. It is expected that as protocols such as XML and the Simple Object Access Protocol (SOAP) grow more sophisticated in design and functionality, an end user's interactions with the web will be framed largely by desktop applications calling on the services of specific types of documents retrieved from remote sources.

The open source model has important implications for the future development of web browsers. Because open source versions of Netscape have been developed on a modular basis, and because the source code is available with few constraints on its use, new or improved services can be added quickly and with relative ease. In addition, open source development has accelerated efforts to integrate web browsers and file managers.

These efforts, which are aimed at reducing functional distinctions between local and network-accessible resources, may be viewed as an important element in the development of the "seamless" information space that Berners-Lee envisions for the future of the web.

Peer-To-Peer Computing

One of the fastest growing, most controversial, and potentially most important areas of Internet applications is peer-to-peer (P2P) networking. Peer-to-peer networking is based on the sharing of physical resources, such as hard drives, processing cycles, and individual files, among computers and other intelligent devices.

Unlike client-server networking, where some computers are dedicated to serving other computers, each computer in peer-to-peer networking has equivalent capabilities and responsibilities. There are two basic P2P models in use today. The first model is based on a central host computer that coordinates the exchange of files by indexing the files available across a network of peer computers.

This model has been highly controversial because it has been employed widely to support the unlicensed exchange of commercial sound recordings, software, and other copyrighted materials. Under the second model, which may prove ultimately to be far more important, peer-to-peer applications aggregate and use otherwise idle resources residing on low-end devices to support high-demand computations.

For example, a specially designed screensaver running on a networked computer may be employed to process astronomical or medical data.

The Future

The remarkable developments of the late 1990s and early 2000s suggest that making accurate predictions about the next generation of Internet applications is difficult, if not impossible. Two aspects of the future of the Internet that one can be certain of, however, are that network bandwidth will be much greater, and that greater bandwidth and its management will be critical factors in the development and deployment of new applications.

What will greater bandwidth yield? In the long run, it is difficult to know, but in the short term it seems reasonable to expect new communication models, videoconferencing, increasingly powerful tools for collaborative work across local and wide area networks, and the emergence of the network as a computational service of unprecedented power.
