dc.description.abstract |
Cloud computing is a modern technology that has attracted considerable research interest. In this thesis, we discuss latency and its impact on the cloud computing environment, and propose a new design that reduces the impact of latency on cloud computing services, improving communication speed in cloud-based services and minimizing processing delay.
To conduct this thesis, we followed a methodology that includes determining a design strategy for the study, including the proposed design, and defining the deliverables. The requirements of the study were: developing a load-balancing mechanism using an energy-consumption load scheduling method, and developing an evaluation tool to measure the performance of the proposed design. The instruments used to build the knowledge base for these requirements were a review of relevant literature, including similar studies, and implementation of the proposed design using the CloudSim simulator.
The results of the thesis are discussed and presented in table and graph format because it is important to understand the communication delay. The results are compared between the existing cloud computing design and the proposed one to show the difference in latency.
The time taken to process a request on every server of the proposed cloud was less than on every server of the existing cloud. This indicates that its response time is low, resulting in less communication delay compared with the existing cloud. In the proposed cloud, Server1 consumed less energy than Servers 2, 3, and 4 according to the energy-consumption calculation. Since Server4 requires high energy, load assigned to it would instead move to Server1, as presented in the table.
The proposed cloud computing design uses an energy-consumption load scheduling mechanism that distributes loads according to the energy consumption of the servers. This mechanism gives the proposed design lower communication delay.
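The scheduling idea can be sketched as follows. This is a minimal illustration only, not the thesis implementation (which uses the CloudSim simulator in Java); the server names and per-task energy costs here are hypothetical:

```python
def schedule(loads, energy_per_task):
    """Assign each load to the server whose accumulated energy
    consumption, plus the cost of the new task, is lowest."""
    totals = {s: 0.0 for s in energy_per_task}      # energy consumed so far
    assignment = {s: [] for s in energy_per_task}   # tasks given to each server
    for load in loads:
        # Pick the server with the smallest projected energy use.
        best = min(totals, key=lambda s: totals[s] + energy_per_task[s])
        assignment[best].append(load)
        totals[best] += energy_per_task[best]
    return assignment

# Hypothetical per-task energy costs (illustrative values, not from the thesis):
energy = {"Server1": 1.0, "Server2": 2.0, "Server3": 3.0, "Server4": 4.0}
result = schedule(["t1", "t2", "t3", "t4"], energy)
```

With these costs, most tasks flow to the cheapest server (Server1), while the most expensive server (Server4) receives load last, mirroring the behaviour described above.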
Therefore, both the lower response time and the distribution of loads according to the servers' energy consumption lead to reduced latency |
en_US |