
CHAPTER 1
INTRODUCTION
1.1 INFORMATION CONSISTENCY
Numerous developments are being introduced in the era of Cloud Computing, the Internet-based development and use of computer technology. Powerful processors that were once too expensive for most organizations have become affordable through the pooling of processing power and its provision on demand. The development of high-speed Internet with increased bandwidth has improved the quality of service, leading to better customer satisfaction, which is the primary goal of any organization.
The migration of data from the user's computer to remote data centers has provided the customer with great and reliable convenience. Amazon Simple Storage Service is a well-known example from one of the pioneers of cloud services. Such services eliminate the need to maintain data on a local system, which is a huge boost to quality of service. However, this leaves customers at the mercy of the cloud service provider: any downtime makes users unable to access their own data. Since every coin has two sides, cloud computing has its own share of security threats, and there may be further threats yet to be discovered. From the user's point of view, data security is the most important aspect, as it ultimately determines customer satisfaction. Users have limited control over their own data, so conventional cryptographic measures cannot be adopted directly. Thus, data stored on the cloud should be verified periodically to ensure it has not been modified without informing the owner. Data that is rarely used is sometimes moved to lower-tier storage, making it more vulnerable to attack. Beyond storage, cloud computing also lets the user modify data, append information to it, or delete it permanently. To assure data integrity, hashing algorithms can be used to create checksums that alert the user to any modification of the data.
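The checksum idea above can be sketched with the standard `java.security.MessageDigest` API. This is a minimal illustration, not the report's actual implementation; the file contents are represented as a byte array for simplicity.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: verifying stored data by comparing checksums, as described above.
public class ChecksumDemo {

    // Compute a SHA-256 checksum and render it as a lowercase hex string.
    public static String sha256Hex(byte[] data) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] original = "cloud file contents".getBytes();
        String storedChecksum = sha256Hex(original); // recorded when the file is uploaded

        byte[] retrieved = "cloud file contents".getBytes();
        // If the checksums differ, the file was modified without the owner's knowledge.
        System.out.println(storedChecksum.equals(sha256Hex(retrieved))
                ? "integrity OK" : "ALERT: data modified");
    }
}
```

The owner keeps the checksum recorded at upload time and recomputes it on each retrieval; any mismatch signals an unauthorized modification.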
1.2 PROBLEM DEFINITION
Firstly, traditional cryptographic primitives for data security protection cannot be adopted directly, because users lose control of their data under Cloud Computing. In matters relating to cloud services, the user is at a disadvantage regarding the security of a file. The file is stored on a server that is a pooled resource: anyone with the user's credentials can access the file, and if an attacker learns the password as well as the encryption keys, the attacker can modify the file contents, exposing the stored information to an unauthorized user. The problem, then, is what happens if someone copies your work and claims it as his own. Anything we design or invent is governed by the principle of whether or not it guarantees customer satisfaction.
Hence, the underlying problem is whether the customer can rest assured that his data is safe from unauthorized access.

1.3 PROJECT PURPOSE
In our proposed system, we assure the user that his information is safe by implementing a system that offers three levels of security. On the data security side, the system is divided into three modules: the "IP triggering" module, the "client authentication" module, and the "redirecting" module. The system generates a user password and a key that is used for client authentication.
The algorithm generates two keywords of 8-character length, consisting of combinations of letters, special characters, and numbers, which are used for client authorization and file authorization.
Why do we use keys of only 8 characters? The purpose of our system is to prevent illegal data access even if the user's credentials are compromised. By testing against weak algorithms, which are easier to crack, we design our system to be more robust.
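A key generator of the kind described above can be sketched with `java.security.SecureRandom`. The exact character alphabet is an assumption; the report does not specify which special characters are used.

```java
import java.security.SecureRandom;

// Sketch: generating an 8-character keyword from letters, digits, and
// special characters, as described for client and file authorization.
public class RandomKeyGenerator {
    // Assumed alphabet; the report does not list the actual character set.
    private static final String ALPHABET =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%&*";
    private static final SecureRandom RANDOM = new SecureRandom();

    public static String generateKey(int length) {
        StringBuilder key = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            key.append(ALPHABET.charAt(RANDOM.nextInt(ALPHABET.length())));
        }
        return key.toString();
    }

    public static void main(String[] args) {
        System.out.println("client key: " + generateKey(8));
        System.out.println("file key:   " + generateKey(8));
    }
}
```

`SecureRandom` is preferred over `java.util.Random` here because the keys guard access to user data.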


1.4 PROJECT FEATURES
Our scheme prevents illegal access to users' data. A user, after registering on the system, gains the advantage of several layers of security. The primary job of our system is to inform the user that his data has been accessed from an unregistered IP, using mail-triggering events. At login, an attacker who tries to access the file with credentials stolen from the victim is presented with a dialog box asking for a key. Any key the attacker enters will not be accepted. The attacker is given three tries. After the third try, the attacker is granted access to a fake file, which is implemented by the redirection module.
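The three-attempt flow above can be sketched as follows. The key check and file paths are placeholders; a real system would consult the authentication server and record the client's IP address for the mail-triggering event.

```java
// Sketch of the three-attempt redirection flow described above.
public class RedirectionModule {
    private static final int MAX_ATTEMPTS = 3;

    // Returns the path the client is served: the real file if the correct key
    // is entered within three tries, otherwise the dummy (fake) file.
    public static String serveFile(String correctKey, String[] enteredKeys) {
        int tries = Math.min(MAX_ATTEMPTS, enteredKeys.length);
        for (int attempt = 0; attempt < tries; attempt++) {
            if (correctKey.equals(enteredKeys[attempt])) {
                return "/storage/real/document.txt";
            }
        }
        // The attacker is silently redirected and never learns the file is fake.
        return "/storage/fake/document.txt";
    }

    public static void main(String[] args) {
        System.out.println(serveFile("Xk3$pQ!9", new String[] {"guess1", "guess2", "guess3"}));
    }
}
```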

1.5 MODULES DESCRIPTION
1.5.1 CLIENT MODULE
In this module, the server receives a query sent from the client. Depending on the query, the server serves the client the required files. Before the server serves the request, client authorization takes place: the server matches the client's credentials for security. Only if they match the database is the request serviced and the corresponding file served. If an unauthorized user is detected by any means, redirection to the dummy file takes place.

1.5.2 SYSTEM MODULE
Three different network entities can be identified as follows:
USERS
Users, who have data to be stored in the cloud and rely on the cloud for data computation, comprise both individual consumers and organizations.

CLOUD SERVICE PROVIDER (CSP)
A CSP is an entity with significant resources and expertise in building and managing distributed cloud storage servers, and it owns and operates live Cloud Computing systems.

THIRD PARTY INSPECTOR(TPI)
An optional TPI, who has expertise and capabilities that consumers may not have, is trusted to assess and expose the risks of cloud storage services on behalf of the consumers upon request.

1.5.3 CLOUD DATA STORAGE MODULE
The user's data is stored on cloud servers with the help of the CSP and processed sequentially; the user contacts the servers via the CSP to access or retrieve his own data. Occasionally, the user may need to perform fine-grained modifications on the data. Users, if provided with suitable security means, can perform data modifications at the server level without storing the data on their own systems. The optional TPI can monitor the data on behalf of users who cannot spare the time. In our proposed system, every communication between the user and the server is authenticated, which makes the system reliable.

1.5.4 CLOUD AUTHENTICATION SERVER
The Authentication Server (AS) implements the functionality most authentication servers would, with three levels of security added to the traditional client authentication practice. In the first addition, the client authentication information is sent to the masked router. The AS used in this proposed system also acts as a ticketing authority, regulating approvals on the system network. Other functions include updating client lists, reducing client authentication time, and revoking a user's access.

1.5.5 UNAUTHORIZED DATA MODIFICATION AND CORRUPTION MODULE
The key aspect of our proposed system is to prevent unauthorized access to the file, which could result in data modification or even corruption. The system should also provide information about the unauthorized user, such as the time of access and the IP address of the intruder.

1.5.6 ANTAGONIST MODULE

Threats can originate from two different sources. A cloud service provider with malicious intent may move the data to less secure storage and may also hide data losses that occur due to various errors.
From the other direction, a person who is able to compromise a number of cloud storage servers may perform data modification attacks while remaining undetected by the cloud service provider.

CHAPTER 2
LITERATURE SURVEY
The literature survey is an important step in the software development process. Before developing the tool, the time factor, economy, and company strength are determined, followed by the operating system and language to be used. Developers need considerable external support once they start building the tool; this support can be obtained from senior programmers, from books, or from websites. The above considerations were taken into account in developing the proposed system.
Before developing any tool, it is important to determine its economy, its advantages and disadvantages, its opportunities in the near future, and many other such considerations. Once these questions are settled, the next step is to decide which operating system and language to use for developing the tool.

2.1 CLOUD COMPUTING
Cloud computing provides a virtually unlimited platform and infrastructure to store, execute, and secure clients' or customers' data and programs. As a customer, you are not required to own the system or the infrastructure as a whole; it can simply be accessed or rented, which is an advantage for the customer and reduces expenditure.
Instead of running data on one computer, or even on several, it is hosted in the "cloud": an assemblage of computers and servers accessed via the Internet. Cloud computing lets you access all your documents, data, and applications from anywhere in the world, freeing people from being confined to the desktop and making it easy for people in different parts of the world to collaborate.

2.2 EXISTING SYSTEM
Firstly, traditional cryptographic primitives for data security protection cannot be adopted directly, because users lose control of their data under Cloud Computing. In matters relating to cloud services, the user is at a disadvantage regarding the security of a file. The file is stored on a server that is a pooled resource: anyone with the user's credentials can access the file, and if an attacker learns the password as well as the encryption keys, the attacker can modify the file contents, exposing the stored information to an unauthorized user. The problem, then, is what happens if someone copies your work and claims it as his own. Anything we design or invent is governed by the principle of whether or not it guarantees customer satisfaction.
Hence, the underlying problem is whether the customer can rest assured that his data is safe from unauthorized access.

2.3 PROPOSED SYSTEM
In our proposed system, we assure the user that his information is safe by implementing a system that offers three levels of security. On the data security side, the system is divided into three modules: the "IP triggering" module, the "client authentication" module, and the "redirecting" module. The system generates a user password and a key that is used for client authentication.
The algorithm generates two keywords of 8-character length, consisting of combinations of letters, special characters, and numbers, which are used for client authorization and file authorization.
Why do we use keys of only 8 characters? The purpose of our system is to prevent illegal data access even if the user's credentials are compromised. By testing against weak algorithms, which are easier to crack, we design our system to be more robust.
Advantages:
• Our scheme prevents illegal access to users' data.
• The user is informed when his data has been accessed from an unregistered IP.
• The attacker is given access only to a fake file.

2.4 SOFTWARE DESCRIPTIONS
2.4.1 JAVA
Java is a high-level programming language and one of the rapidly advancing technologies in the computing and information fields. With other machine languages, we must either compile or interpret a program in order to run it on a system; Java's advantage is that it is both compiled and interpreted. The Java compiler translates a program into an intermediate code called bytecode. This platform-independent code is then interpreted by the interpreter on the Java platform.
It has the following buzzwords:-
• Simplicity
• Object-oriented
• Platform independence
• Robust
• Secure
• Dynamic
• Portable
• Multithreaded
• High performance
• Architectural neutral

Package in JAVA:
A package in Java is a mechanism to encapsulate a group of classes, interfaces, and sub-packages.
Packages are used for multiple purposes:
• They prevent naming conflicts. For example, two packages can each contain a class with the same name, such as a Student class for CSE students and another Student class for EE students.
• Packages support data encapsulation (data hiding).
• They provide access control.
• They make it easier to locate interfaces, classes, enumerations, and annotations.
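The naming-conflict point above can be illustrated with two classes named `Date` that ship with the JDK in different packages; fully qualified names keep them apart.

```java
// Illustration: packages prevent naming conflicts. The JDK itself contains
// two distinct classes called Date, in java.util and java.sql.
public class PackageDemo {
    public static void main(String[] args) {
        // Fully qualified names disambiguate the two Date classes.
        java.util.Date utilDate = new java.util.Date();
        java.sql.Date sqlDate = new java.sql.Date(utilDate.getTime());
        System.out.println(utilDate.getClass().getName()); // java.util.Date
        System.out.println(sqlDate.getClass().getName());  // java.sql.Date
    }
}
```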

2.4.2 JAVA PLATFORM
Generally, a platform is defined as the hardware or software environment on which our programs run. A few well-known ones are Windows 2000, Linux, Solaris, and macOS. A platform can be viewed as a blend of the operating system and the hardware. The Java platform is a software-only platform that runs on top of other, hardware-based platforms. The Java platform has two components:
• The Java Virtual Machine(JVM)
• The Java Programming Application Interface(Java API)
The Java Virtual Machine is the base of the Java platform and has been ported onto various hardware-based platforms. The Java API is a large collection of libraries of classes and interfaces; these libraries are known as packages. The accompanying diagram shows the layers of the Java platform and how the Java Virtual Machine and the Java API shield the program from the hardware.
2.4.3 ODBC
ODBC stands for Open Database Connectivity, a standard programming interface for database vendors and application developers. ODBC is the de facto standard for Windows applications to connect and interact with database systems; before it, programmers had to use a different programming interface for each database. ODBC has made the choice of database system easier and more efficient. Application developers have far more important things to worry about than the syntax needed to port their program from one database to another when business needs suddenly change.

Through the ODBC Administrator in the Control Panel, you can specify the particular database that is associated with a data source that an ODBC application program is written to use. Think of an ODBC data source as a door with a name on it: each door leads to a particular database. For instance, the data source named Sales Figures might be a SQL Server database, while the Accounts Payable data source could refer to an Access database. The physical database referred to by a data source can live anywhere on the LAN.
2.4.4 JDBC
Java Database Connectivity (JDBC) was produced by Sun Microsystems to establish an independent standard database API for Java. It provides a generic SQL database access mechanism, which in turn offers a consistent interface to a variety of Relational Database Management Systems (RDBMS). The database connectivity layer uses drivers to maintain interface consistency: a driver must be provided for each database platform to give it JDBC support. To gain broader acceptance of JDBC, Sun based JDBC's framework on ODBC.
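The JDBC access pattern can be sketched as below. The connection URL, credentials, table, and column names are hypothetical placeholders; substitute those of your own database, and make sure the matching JDBC driver is on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch of the standard JDBC pattern: open a connection, run a
// parameterized query, iterate over the results.
public class JdbcSketch {

    // Build a JDBC URL for a hypothetical MySQL database.
    static String url(String host, String database) {
        return "jdbc:mysql://" + host + "/" + database;
    }

    public static void main(String[] args) throws SQLException {
        // Hypothetical credentials; a real driver for this URL must be installed.
        try (Connection con = DriverManager.getConnection(
                     url("localhost", "cloudfiles"), "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT name FROM files WHERE owner = ?")) {
            ps.setString(1, "alice");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        }
    }
}
```

The try-with-resources blocks ensure the connection, statement, and result set are closed even when a `SQLException` is thrown.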

JDBC GOAL
Hardly any software packages are designed without goals in mind. JDBC is one whose numerous goals drove the development of the API. These goals, in conjunction with early reviewer feedback, settled the JDBC class library into a solid framework for building database applications in Java.
The goals that were set for JDBC are important: they give some insight into why certain classes and functionalities behave the way they do. The design goals of JDBC are as follows:
• SQL-level API.
• SQL conformance.
• JDBC must be implementable on top of common database interfaces.
• Provide a Java interface that is consistent with the rest of the Java system.
• Keep it simple.
• Keep the common cases simple.
• Use strong, static typing wherever possible.
Java is also unusual in that each Java program is both compiled and interpreted: the compiler translates it into an intermediate language called Java bytecode, and this platform-independent code is then interpreted and run on the target computer.

Fig 2.1 JDBC Class Diagram

CHAPTER 3

REQUIREMENT ANALYSIS
3.1 FUNCTIONAL REQUIREMENTS
By the definitions of software engineering, a functional requirement defines a function of a software system or its components. A function is defined by a set of inputs, its behavior, and the resulting outputs. Calculations, technical inputs, data modification and interpretation, and the various other functions that together state what a system is supposed to accomplish make up the functional requirements. The cases where the system realizes its functional requirements, expressed as use cases, are called the behavioral aspects of the functional requirements.
The system here has to perform the following tasks:
• Accept the user ID and password together with the secret key and verify them against the database. If a match is found, the session continues; otherwise an error message is shown.
• An encryption algorithm is used to encrypt the original file into an encrypted file.
• A decryption algorithm must exist that can decrypt the encrypted file and thus recover the original file.
• The owner of the file should be alerted with an update message if any information in the file is modified.
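The encryption and decryption requirements above can be sketched with the standard `javax.crypto` API. This is an illustration only, not the project's actual algorithm; key management (storing and distributing the secret key) is outside its scope, and the default "AES" transformation (ECB mode) is used purely for brevity.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Sketch of the encrypt/decrypt functional requirements using AES.
public class FileCipher {

    // "AES" defaults to AES/ECB/PKCS5Padding; fine for a sketch, but a real
    // system should use an authenticated mode such as AES/GCM with an IV.
    public static byte[] encrypt(SecretKey key, byte[] plain) throws Exception {
        Cipher c = Cipher.getInstance("AES");
        c.init(Cipher.ENCRYPT_MODE, key);
        return c.doFinal(plain);
    }

    public static byte[] decrypt(SecretKey key, byte[] encrypted) throws Exception {
        Cipher c = Cipher.getInstance("AES");
        c.init(Cipher.DECRYPT_MODE, key);
        return c.doFinal(encrypted);
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] encrypted = encrypt(key, "old file contents".getBytes());
        byte[] decrypted = decrypt(key, encrypted);
        System.out.println(new String(decrypted)); // original file recovered
    }
}
```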

3.2 NON-FUNCTIONALREQUIREMENTS
A non-functional requirement, as defined by systems engineering and requirements engineering, is a requirement that specifies criteria used to judge the operation of a system, rather than specific behaviors. This stands in contrast to functional requirements, which define specific behaviors or functions. The plan for implementing functional requirements is detailed in the system design; the plan for implementing non-functional requirements is detailed in the system architecture.
"Constraints", "quality attributes", "quality goals", "quality-of-service requirements", and "non-behavioral requirements" are some of the popular terms used for non-functional requirements.
The common quality attributes are as follows:

3.2.1 ACCESSIBILITY
Accessibility describes the degree to which a product, device, service, or environment is usable by as many people as possible.
In this project, people registered with the cloud can use it to store and retrieve data with a secret key that is sent to their email IDs. The user interface is simple, efficient, and user-friendly. The project and its features should always be available to the user without fault. The failure of any small part of the software could lead to the breakdown of the system and, ultimately, the collapse of the entire project, resulting in low user satisfaction. In other words, the project should aim at a long life cycle with downtime approaching zero.

3.2.2MAINTAINABILITY
Maintainability, as defined by software engineering, is the ease with which a software product can be modified. The aspects kept in mind are:
• Meeting new requirements.
• Correcting defects.
Based on user requirements and expectations, new features can be added easily by extending the appropriate files of the existing project with the help of various programming languages.
The programming of this project being very simple, it is easy to understand, to correct existing defects, and to make changes to the project.

3.2.3 SCALABILITY
Scalability is the capability of a system to handle increased bandwidth and throughput when new hardware components are added.
The system should operate normally even under low bandwidth and a large number of users. Scalability, in other words, means the continuous availability of the system under any circumstances, without breakdown. It should be resilient to extreme circumstances such as data overflow, too many simultaneous operations, network unavailability, a busy server, or an unavailable client. The ability of the application to keep functioning without interruption when its content or size changes, whether increased or decreased, is what defines scalability. Scaling up is often easier than shrinking to a smaller size, because developers tend to make use of the full size; shrinking an application often means deploying it in a constrained environment while still expecting uninterrupted functioning.

3.2.4 PORTABILITY
Portability has always been one of the key concerns of high-level programming. Broadly, portability is the ability to reuse existing code instead of writing new code when moving software from one environment to another.
This project should be capable of executing under different operating conditions, provided its minimum predefined configuration is met. In such cases, only the system files and dependent assemblies need reconfiguration.
3.3 HARDWARE REQUIREMENTS
• Processor : Dual-core processor
• Hard Disk : 40 GB
• Floppy Drive : 1.44 MB
• Monitor : VGA, high-resolution monitor
• Input Devices : Standard keyboard and mouse
• RAM : 256 MB

3.4 SOFTWARE REQUIREMENTS
• Operating system : Windows 10/8/7/XP

• Front End : Java, Swing (JFC), RMI, J2ME

• Back End : MS Access

• Tool : Eclipse

CHAPTER 4
DESIGN
4.1 DESIGN GOALS
To enable secure outsourcing of files under the aforementioned model, our mechanism design should achieve the following security and performance guarantees:
4.1.1 INPUT/OUTPUT PRIVACY
The cloud server must not be able to derive any sensitive information from the customer's private data while performing encryption and transfer.
4.1.2 EFFICIENCY
The local computation done by the customer should be substantially less than the work delegated to the cloud. The computation burden on the cloud server should be within comparable time complexity to existing practical algorithms for file encryption and decryption.

4.2 SYSTEM ARCHITECTURE
Here the client sends a query to the server, and based on the query the server sends the corresponding file to the client. Client authorization is done by checking the user ID and password: on the server side, the client's name and password are checked for security. If they are valid, the server receives the queries from the client, searches the database for the corresponding files, and sends the file found to the client. If the server detects an intruder, it sets an alternative path for that intruder: an intruder who tries to access any file is asked for the password multiple times and is finally directed to a fake file. The intruder will not know that the file obtained is fake; he will believe the file he got is the original one.

Fig 4.1 System Architecture

4.3 DATA FLOW DIAGRAM
The Data Flow Diagram (DFD), also called a bubble chart, is a simple graphical representation of a system. The system is represented in terms of the input data to the system, the various processing steps carried out on these data, and the output data generated by the system.

Fig 4.2 Data Flow Diagram

4.4 SEQUENCE DIAGRAM
Sequence diagrams are an easy way of describing the system's behavior. They focus on the interaction between the system and its environment. This UML diagram shows interactions arranged in a time sequence along two dimensions: the vertical dimension represents time, and the horizontal dimension represents the different objects. The vertical line, also called the object's lifeline, represents the object's presence during the interaction.

Fig 4.3 Sequence Diagram

4.5 USE CASE DIAGRAM
A use-case diagram is a graph of users or actors and a set of use cases enclosed by a system boundary, together with the participation associations between the actors and the use cases and the generalizations among the use cases.
A use case thus describes the system's typical behavior from the outside (actors or users) and the inside (use cases). An ellipse bearing its name is used to show a use case, which is initiated by actors or users.
An actor, or user, is one who communicates with a use case. The actor's name is written beneath the actor symbol, and an arrow is used to show the interaction between the actor and the use case.

Fig 4.4 Use Case Diagram

4.6 CLASS DIAGRAM

Fig 4.5 Class Diagram

4.7 ACTIVITY DIAGRAM
An activity diagram consists of numerous states that represent operations; the transition from one state to the next is triggered by the completion of an operation. A rounded box bearing the operation's name is used in the diagram to indicate the execution of that operation. An activity diagram shows the inner state of an object.

Fig 4.6 Activity Diagram

CHAPTER 5
IMPLEMENTATION
Among the various stages of the project, the part that converts the theoretical design into a working system is known as implementation, making it one of the critical phases in developing a successful system.
In the implementation phase we carefully plan and probe the existing system, keeping in mind the constraints of implementation.

5.1 MAIN MODULES

5.1.1 CLIENT MODULE
In this module, the server receives a query sent from the client. Depending on the query, the server serves the client the required files. Before the server serves the request, client authorization takes place: the server matches the client's credentials for security. Only if they match the database is the request serviced and the corresponding file served. If an unauthorized user is detected by any means, redirection to the dummy file takes place.

5.1.2 SYSTEM MODULE

Figure 1 above illustrates the network architecture of cloud data storage. Three different network entities can be identified as follows:

USERS
Users, who have data to be stored in the cloud and rely on the cloud for data computation, comprise both individual consumers and organizations.

CLOUD SERVICE PROVIDER (CSP)
A CSP is an entity with significant resources and expertise in building and managing distributed cloud storage servers, and it owns and operates live Cloud Computing systems.

THIRD PARTY INSPECTOR(TPI)
An optional TPI, who has expertise and capabilities that consumers may not have, is trusted to assess and expose the risks of cloud storage services on behalf of the consumers upon request.

5.1.3 CLOUD DATA STORAGE MODULE

The user's data is stored on cloud servers with the help of the CSP and processed sequentially; the user contacts the servers via the CSP to access or retrieve his own data. Occasionally, the user may need to perform fine-grained modifications on the data. Users, if provided with suitable security means, can perform data modifications at the server level without storing the data on their own systems. The optional TPI can monitor the data on behalf of users who cannot spare the time. In our proposed system, every communication between the user and the server is authenticated, which makes the system reliable.

5.1.4 CLOUD AUTHENTICATION SERVER
The Authentication Server (AS) implements the functionality most authentication servers would, with three levels of security added to the traditional client authentication practice. In the first addition, the client authentication information is sent to the masked router. The AS used in this proposed system also acts as a ticketing authority, regulating approvals on the system network. Other functions include updating client lists, reducing client authentication time, and revoking a user's access.

5.1.5 UNAUTHORIZED DATA MODIFICATION AND CORRUPTION MODULE
The key aspect of our proposed system is to prevent unauthorized access to the file, which could result in data modification or even corruption. The system should also provide information about the unauthorized user, such as the time of access and the IP address of the intruder.

5.1.6 ANTAGONIST MODULE
Threats can originate from two different sources. A cloud service provider with malicious intent may move the data to less secure storage and may also hide data losses that occur due to various errors.
From the other direction, a person who is able to compromise a number of cloud storage servers may perform data modification attacks while remaining undetected by the cloud service provider.

CHAPTER 6
TESTING
The main goal of testing is to find errors in the program. Testing is the way to find faults, or places where the code does something other than intended, to expose weaknesses in a working product, and to correct them. It is used to check the functionality of requirements, sub-parts of the program, assemblies, and the finished product. It is the process of exercising software to ensure that the hardware and software system meets its objectives and client expectations and does not fail, now or in the future. There are different levels of testing, and each type addresses a specific testing requirement.
TYPES OF TESTS
6.1 UNIT TESTING
Unit testing is one of the levels of software testing. It covers the testing of the individual units or components of the program. Unit testing validates the internal parts of the program, checking through logical testing whether each function performs properly and whether given inputs produce the proper outputs. A unit is the smallest part of the program, and all decision flows and program flows should be tested. It is the testing of individual software units of the program or system; it is the first level of testing and is done before integration testing. This is structural testing, relying on complete knowledge of the system's construction. Unit tests perform the most basic level of testing and ensure that each individual part behaves accurately in the given fashion. It is performed by the software testers. The benefit of unit testing is that it reduces errors in the new system and removes defects from the one being developed. It is an important level of testing because defects are recognized and debugged at the earliest stage, giving a better understanding of the code from the start so that no defects remain in the later stages.

6.2 INTEGRATION TESTING
Integration testing is the software testing phase in which the units of the program are combined and tested as a group, to check whether the integrated system runs as one program and delivers its output to the system for further processing. It comes after unit testing and before validation testing. Integration testing has several approaches, such as bottom-up and top-down testing. The testing is event-driven and is more concerned with the final output of screens. Integration tests show that the components of the program have passed unit testing successfully and that the combination of the program's parts is correct and accurate. Integration testing focuses mainly on the errors, defects, or problems that arise after combining the unit parts or components of the program. The main point is to ensure every unit has been tested before combination for further integration testing. Integration testing is also known as 'I&T' (Integration and Testing), 'String Testing', and sometimes 'Thread Testing'.

6.3 VALIDATION TESTING
Validation testing is the last phase of testing; an engineering validation test (EVT) is performed at the end of the development phase, after integration. Here we are checking: are we building the right product? Determining errors in the components and correcting them as early as possible in the design cycle is an important way to keep projects on time and within the given budget. Too frequently, product design and performance problems are not detected until late in the product development cycle, when they cost more to fix; when defects are found at a late stage and the product has deviated from what was intended, a ticket is raised to the development team for the change. Verification is the process of confirming that the service, product, or system meets the specifications, regulations, or conditions imposed at the start of the software implementation or development stage. Verification starts with the requirements of the components, then covers design, implementation, development, and finally production; it is usually an internal process. Validation, in contrast, is the process of gathering evidence that provides a high degree of assurance that the service, product, or system meets its requirements. It generally involves working with the requirements of the end users and other product stakeholders.

6.4 SYSTEM TESTING
System testing is testing performed on a complete, integrated system (hardware and/or software) to evaluate whether the system meets its specified requirements. System testing is black-box testing: it does not require knowledge of the inner design or of the code logic on which the implementation is based.
As a rule, system testing is performed by a team that is independent of the development team, to ensure impartial validation. Once the software has successfully passed unit, integration, and validation testing, system testing gives better assurance that the outcome meets the client's requirements. System testing is a comprehensive form of testing; it is used to detect defects both in the system as a whole and in its internal parts. It is carried out on the entire system on the basis of the Functional Requirement Specification (FRS) and/or the System Requirement Specification (SRS). System testing does not test only the design, but also the behavior and the requirements of the end user or customer. It is also important to test the system at and beyond the boundaries stated in the software/hardware requirement specifications. An overview of the test cases follows.

6.5 TESTING OF INITIALIZATION AND UI COMPONENTS

Serial Number of Test Case: TC 03
Module Under Test: User Login
Description: When the user tries to log in, the user's details are verified against the database
Input: User ID and password
Output: If the login details are correct, the user is logged in and the user page is displayed; if the login details are incorrect, an error is thrown
Remarks: Test successful

Table 6.3: Test Case for User Login

Serial Number of Test Case: TC 04
Module Under Test: File Upload
Description: When the user stores a file, the file is stored with the IP in the database
Input: User selects the file to be submitted
Output: If the details are correct, the file is stored in the database
Remarks: Test successful

Table 6.4: Test Case for File Upload

Serial Number of Test Case: TC 05
Module Under Test: IP and Secret Key Verification
Description: When the user wants to download a file, the user ID, password, and IP are verified against the database
Input: Secret key
Output: If the login details and the IP are correct, the user can download the file
Remarks: Test successful

Table 6.5: Test Case for Verifying Secret Key and IP

CHAPTER 7
SNAPSHOT

Fig 7.1 Screen Layout of Main Page

Fig 7.2 Screen Layout when ID and Password are correct

Fig 7.3 Screen Layout of User Login

Fig 7.4 Snapshot of User Login Asking Unique Key

Fig 7.5 Screen Layout for Administrator

Fig 7.6 Screen Layout to add New User

Fig 7.7 Snapshot of Available Resources which User can Download

Fig 7.8 Screen Layout Showing Restricted IP

Fig 7.9 Screen Layout of Available Resources

Fig 7.10 Snapshot of Hacker Information

Fig 7.11 Snapshot of Adding Resources

Fig 7.12 Screen Layout of Blocking an IP

Fig 7.13 Snapshot of Removing Blocked IP

CHAPTER 8
CONCLUSION AND FUTURE ENHANCEMENT
8.1 CONCLUSION
In this project, we examined the security issues associated with storing information on the cloud. To prevent illegal access to a user's data on the cloud, we devised an efficient system architecture that supports efficient operations on data such as modification, deletion, and appending. We implemented IP triggering, which sends a mail to the user's registered email address on unauthorized access of data. At the second level, we used a password login system with a key to prevent access to files by anyone other than the sole owner. Even if the key is exposed, the system detects illegal access by comparing the IP against its database; if they do not match, it redirects the request to a dummy file, thus preventing the user's data from being corrupted or modified.
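The IP comparison and dummy-file redirect described above can be sketched as follows. This is a hedged illustration only: the stored-IP lookup, file names, and alerting hook are stand-ins for the project's actual database and mail logic.

```java
import java.util.Map;

// Illustrative sketch of the IP check: serve the real file only when the
// request IP matches the one on record, otherwise serve a dummy file.
public class IpGate {
    private final Map<String, String> registeredIps; // userId -> registered IP

    IpGate(Map<String, String> registeredIps) {
        this.registeredIps = registeredIps;
    }

    String resolveFile(String userId, String requestIp) {
        String known = registeredIps.get(userId);
        if (known != null && known.equals(requestIp)) {
            return "real-file.dat";
        }
        // In the real system, this branch would also trigger the alert
        // mail to the owner's registered email address.
        return "dummy-file.dat";
    }

    public static void main(String[] args) {
        IpGate gate = new IpGate(Map.of("alice", "10.0.0.5"));
        System.out.println(gate.resolveFile("alice", "10.0.0.5"));    // real-file.dat
        System.out.println(gate.resolveFile("alice", "203.0.113.9")); // dummy-file.dat
    }
}
```

The design choice here is that a mismatched IP is never told it failed; it silently receives harmless dummy content while the owner is alerted.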
Cloud computing is still a growing field, so many vulnerabilities remain undiscovered and a fair number of challenges remain. The most promising development we can expect from cloud computing is to give users a greater degree of control over their own data. Another is automating the system to check for any modification of data, by using hashing algorithms such as SHA to calculate checksums that reveal modification attacks.
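The checksum idea above can be sketched with the standard `java.security.MessageDigest` API. The file contents and class name are made-up examples; the point is only that a stored SHA-256 digest no longer matches once the data is altered.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class FileChecksum {
    // Computes the SHA-256 checksum of the given bytes as a hex string.
    static String sha256Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }

    public static void main(String[] args) {
        byte[] original = "cloud file contents".getBytes(StandardCharsets.UTF_8);
        String stored = sha256Hex(original); // checksum kept by the owner
        byte[] tampered = "cloud file CONTENTS".getBytes(StandardCharsets.UTF_8);
        // A re-computed digest that differs from the stored one signals
        // that the file was modified without the owner's knowledge.
        System.out.println(stored.equals(sha256Hex(original)));  // true: intact
        System.out.println(stored.equals(sha256Hex(tampered)));  // false: modified
    }
}
```

In an automated setup, the owner would store the digest at upload time and periodically re-fetch and re-hash the file to verify integrity.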

8.2 FUTURE ENHANCEMENT
• We shall implement a hashing algorithm that will ensure the integrity of a file over a period of time.
• A mobile alert functionality will keep users updated about any modification attacks performed on their files.
• We will implement an additional layer of security by using the owner's MAC address, which is unique to each user.

CHAPTER 9
BIBLIOGRAPHY
9.1 ABBREVIATIONS
OOPS – Object Oriented Programming Concepts
TCP/IP – Transmission Control Protocol/Internet Protocol
JDBC – Java Database Connectivity
EIS – Enterprise Information Systems
BIOS – Basic Input/Output System
RMI – Remote Method Invocation
JNDI – Java Naming and Directory Interface
ORDBMS – Object Relational Database Management System
CSP – Cloud Service Provider
J2ME – Java 2 Micro Edition

9.2 REFERENCES
1. Amazon.com, “Amazon Web Services (AWS),” online at http://aws.amazon.com, 2008.

2. N. Gohring, “Amazon’s S3 down for several hours,” online at http://www.pcworld.com/businesscenter/article/142549/amazons_s3_down_for_several_hours.html, 2008.

3. A. Juels and B. S. Kaliski Jr., “PORs: Proofs of Retrievability for Large Files,” Proc. of CCS ’07, pp. 584–597, 2007.

4. H. Shacham and B. Waters, “Compact Proofs of Retrievability,” Proc. of Asiacrypt ’08, Dec. 2008.

5. K. D. Bowers, A. Juels, and A. Oprea, “Proofs of Retrievability: Theory and Implementation,” Cryptology ePrint Archive, Report 2008/175, 2008, http://eprint.iacr.org/.

6. G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song, “Provable Data Possession at Untrusted Stores,” Proc. of CCS ’07, pp. 598–609, 2007.

7. G. Ateniese, R. D. Pietro, L. V. Mancini, and G. Tsudik, “Scalable and Efficient Provable Data Possession,” Proc. of SecureComm ’08, pp. 1–10, 2008.

8. T. S. J. Schwarz and E. L. Miller, “Store, Forget, and Check: Using Algebraic Signatures to Check Remotely Administered Storage,” Proc.

9.3 SITES REFERRED

http://java.sun.com
http://www.sourcefordgde.com
http://www.networkcomputing.com/
http://www.roseindia.com/
http://www.java2s.com/


CHAPTER 1
INTRODUCTION

1.1 INTRODUCTION
Wireless Sensor Networks (WSN) [1-5] are highly distributed networks of small, lightweight wireless nodes, deployed in large numbers to monitor the environment or a system by measuring physical parameters such as temperature, pressure, or relative humidity. Building such sensors has been made possible by recent advances in Micro Electro Mechanical System (MEMS) technology. A sensor node resembles a computer, with a processing unit of limited computational power, limited memory, sensors, a communication device, and a power source in the form of a battery. In a typical application, a WSN is scattered in a region where it is meant to collect data through its sensor nodes. The applications of sensor networks are endless, limited only by human imagination [1][2][3].


Figure 1.1: Wireless Sensor Network [3]
The sensor unit consists of a sensor and an Analog to Digital Converter (ADC). The sensor unit is responsible for collecting information as the ADC requests and returning the analog data it has sensed. The ADC is a translator: it tells the CPU what the sensor has sensed, and also informs the sensor unit what to do. The communication unit is tasked with receiving commands or queries from the outside world and transmitting data from the CPU to it. The CPU is the most complex unit: it interprets the command or query to the ADC, monitors and controls power if necessary, processes received data, computes the next hop to the sink, and so on. The power unit supplies power to the sensor unit, the processing unit, and the communication unit. Each node may also contain two optional components, a location finding system and a mobilizer. If the application requires knowledge of location with high accuracy, the node should possess a location finding system, and a mobilizer may be needed to move sensor nodes when required to carry out the assigned tasks.
1.2 ARCHITECTURE OF WIRELESS SENSOR NETWORK

The protocol stack combines power and routing awareness, integrates data with networking protocols, and communicates power-efficiently through the wireless medium. The stack comprises the application, transport, network, data link, and physical layers, together with the power management, mobility management, and task management planes. Depending on the sensing task, different types of application software can be built and used on the application layer. The transport layer helps to maintain the flow of data if the sensor network application requires it. The network layer takes care of routing the data supplied by the transport layer. Since the environment is noisy and sensor nodes can be mobile, the MAC protocol must be power-aware and able to minimize collisions with neighbors' broadcasts. The physical layer addresses the needs of simple but robust modulation, transmission, and receiving techniques. In addition, the power, mobility, and task management planes monitor the power, movement, and task distribution among the sensor nodes. These planes help the sensor nodes coordinate the sensing task and lower the overall power consumption [6-7].
1.3 APPLICATIONS OF WIRELESS SENSOR NETWORKS
Wireless Sensor Networks have of late found applications in vast areas. In this section we present some of the prominent areas of application of WSN; the list would be very long if we were to exhaust all of them.
Military applications: the military uses of sensor nodes include battlefield surveillance and monitoring, guidance systems of intelligent missiles, and detection of attacks by weapons of mass destruction.
Medical applications: sensors can be extremely valuable in patient diagnosis and monitoring [5]. Patients can wear small sensor devices that monitor their physiological data, for example heart rate or blood pressure.
Environmental monitoring: this includes traffic, habitat, wildfire monitoring, and so on.
Industrial applications: this includes industrial sensing and diagnostics, for example machines, factories, supply chains, and so on.
Infrastructure protection applications: this includes power grid monitoring, water distribution monitoring, and so on.
Miscellaneous applications: sensors will soon find their way into a host of commercial applications at home and in industry. Smart sensor nodes can be built into appliances at home, such as ovens, refrigerators, and vacuum cleaners, which enables them to interact with one another and be remote-controlled.
In many applications, the data obtained by the sensing nodes needs to be kept confidential and must be authentic [6]. In the absence of security, a false or malicious node could intercept private information, or could send false messages to nodes in the network. The major attacks are: DoS, wormhole attack, sinkhole attack, Sybil attack, selective forwarding attack, passive information gathering, node capturing, false or malicious node, hello flood attack, and so on.
1.4 DESIGN ISSUES IN WIRELESS SENSOR NETWORKS
Since the performance of a routing protocol is closely related to the architectural model, in this section we attempt to capture the architectural issues and highlight their implications [8].
Network dynamics: There are three main components in a sensor network: the sensor nodes, the sink, and the monitored events. Aside from the very few setups that utilize mobile sensors, most of the network architectures assume that sensor nodes are stationary. On the other hand, supporting the mobility of sinks or cluster heads (gateways) is sometimes deemed necessary.
Node deployment: Another consideration is the topological deployment of the nodes, which is application-dependent and affects the performance of the routing protocol. The deployment is either deterministic or self-organizing. In deterministic situations, the sensors are manually placed and data is routed through predetermined paths. In self-organizing systems, however, the sensor nodes are scattered randomly, creating an infrastructure in an ad hoc manner.
Energy considerations: During the creation of an infrastructure, the process of setting up the routes is greatly influenced by energy considerations. Since the transmission power of a wireless radio is proportional to distance squared, or an even higher order in the presence of obstacles, multi-hop routing will consume less energy than direct communication. On the other hand, multi-hop routing introduces significant overhead for topology management and medium access control. Direct routing would perform well enough if all the nodes were very close to the sink. Most of the time, sensors are scattered randomly over an area of interest, and multi-hop routing becomes unavoidable.
Data delivery models: Depending on the application of the sensor network, the data delivery model to the sink can be continuous, event-driven, query-driven, or hybrid. In the continuous delivery model, each sensor sends data periodically. In event-driven and query-driven models, the transmission of data is triggered when an event occurs or a query is generated by the sink. Some networks apply a hybrid model using a combination of continuous, event-driven, and query-driven data delivery. The routing protocol is highly influenced by the data delivery model, especially with regard to the minimization of energy consumption and route stability [8].
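The energy trade-off described under the energy considerations above can be sketched numerically. The sketch assumes the common radio model E(d) = k·d^α with the free-space exponent α = 2; the constant k and the distances are made-up values, not measurements.

```java
// Illustrative energy comparison: direct transmission vs. two equal hops
// through a relay, under the free-space model E(d) = k * d^2.
public class HopEnergy {
    static double txEnergy(double distance, double alpha) {
        return Math.pow(distance, alpha); // k omitted: it cancels in the comparison
    }

    public static void main(String[] args) {
        double d = 100.0; // source-to-sink distance (arbitrary units)
        double direct = txEnergy(d, 2);
        // Two hops of d/2 each through a relay halfway to the sink.
        double twoHops = 2 * txEnergy(d / 2, 2);
        System.out.println(direct);  // 10000.0
        System.out.println(twoHops); // 5000.0: multi-hop costs half as much
    }
}
```

With α = 2, splitting the path into n equal hops divides the radiated energy by n, which is why multi-hop routing wins whenever the per-hop overhead stays small.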
1.5 CLASSIFICATION OF ATTACKS IN WSN
Passive and active attacks: Attacks can be classified into two major categories according to the interruption of the communication act, namely passive attacks and active attacks. A passive attack obtains data exchanged in the network without interrupting the communication. An active attack implies disruption of the normal functionality of the network, meaning information interruption, modification, or fabrication. Examples of passive attacks are eavesdropping, traffic analysis, and traffic monitoring. Examples of active attacks include jamming, impersonating, modification, denial of service (DoS), and message replay.
Traffic analysis: Traffic analysis is the process of intercepting and examining messages in order to deduce information from patterns in communication.
DoS attack or DDoS attack: A denial-of-service (DoS) attack or distributed denial-of-service (DDoS) attack is an attempt to make a computer resource unavailable to its intended users. Although the means to carry out, motives for, and targets of a DoS attack may vary, it generally consists of the concerted efforts of a person or persons to prevent an Internet site or service from functioning efficiently or at all, temporarily or indefinitely. Perpetrators of DoS attacks typically target sites or services hosted on high-profile web servers, for example banks, credit card payment gateways, and even root name servers [12][13].
Replay attack: A replay attack is a breach of security in which information is stored without authorization and then retransmitted to trick the receiver into unauthorized operations, for example false identification or authentication, or a duplicate transaction. For instance, messages from an authorized user who is logging into a network may be captured by an attacker and resent (replayed) the following day. Even though the messages may be encrypted, and the attacker may not know what the actual keys and passwords are, the retransmission of valid logon messages is sufficient to gain access to the network. A replay attack can be prevented using strong digital signatures that include timestamps, together with the inclusion of unique information from the previous transaction, such as the value of a constantly incremented sequence number.
Internal versus external attacks: Attacks can also be classified into external attacks and internal attacks, according to the attacks' origin. Some papers refer to outsider and insider attacks. External attacks are carried out by nodes that do not belong to the domain of the network. Internal attacks come from compromised nodes, which are actually part of the network. Internal attacks are more severe compared with external attacks, since the insider knows valuable and secret information and has privileged access rights.
Attacks on different layers of the Internet model: Attacks can be further classified by the five layers of the Internet model. Some attacks can be launched at multiple layers.
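The replay defense sketched in the discussion above (timestamps plus monotonically increasing sequence numbers) can be illustrated with a small receiver-side check. The class name, freshness window, and message fields are illustrative assumptions, not part of any cited protocol.

```java
// Receiver-side replay guard: accept a message only if its sequence
// number is strictly greater than the last one seen and its timestamp
// falls within an assumed freshness window.
public class ReplayGuard {
    static final long MAX_AGE_MS = 30_000; // assumed freshness window
    private long lastSeq = -1;

    boolean accept(long seq, long timestampMs, long nowMs) {
        if (seq <= lastSeq) return false;                    // replayed message
        if (nowMs - timestampMs > MAX_AGE_MS) return false;  // stale message
        lastSeq = seq;
        return true;
    }

    public static void main(String[] args) {
        ReplayGuard guard = new ReplayGuard();
        long now = 1_000_000;
        System.out.println(guard.accept(1, now, now));          // true: fresh
        System.out.println(guard.accept(1, now, now + 10));     // false: replay
        System.out.println(guard.accept(2, now, now + 60_000)); // false: too old
    }
}
```

In a real deployment the sequence number and timestamp would be covered by the message's digital signature, so an attacker cannot simply rewrite them before replaying.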

1.6 DESIGN ISSUES OF ROUTING PROTOCOLS
Initially, WSNs were mainly motivated by military applications. Later on, the civilian application domains of wireless sensor networks were considered, such as environmental and species monitoring, production and healthcare, smart homes, and so on. These WSNs may consist of heterogeneous and mobile sensor nodes; the network topology may be as simple as a star topology; and the scale and density of a network vary depending on the application. To meet this general trend toward diversification, the following important design issues [9] of the sensor network must be considered.
Fault tolerance: Some sensor nodes may fail or be blocked due to lack of power, physical damage, or environmental interference. The failure of a sensor node should not affect the overall task of the wireless sensor network; this is reliability. Fault tolerance is the ability to sustain sensor network functionality without any interruption due to sensor node failures.
Scalability: The number of sensor nodes deployed in the sensing area may be on the order of hundreds, thousands, or more, and routing schemes must be scalable enough to respond to events.
Production costs: Since sensor networks consist of a large number of nodes, the cost of a single node is important to justify the overall cost of the network, and hence the cost of sensors must be kept low.
Operating environment: We can set up a sensor network in the interior of large machinery, at the bottom of an ocean, in a biologically or chemically contaminated field, in a battlefield beyond enemy lines, in a home or a large building, in a large warehouse, attached to animals, attached to fast-moving vehicles, or in a forest area for habitat monitoring, and so on.
Power consumption: Since the transmission power of a wireless radio is proportional to distance squared, or an even higher order in the presence of obstructions, multi-hop routing will consume less energy than direct communication. However, multi-hop routing introduces significant overhead for topology management and medium access control. Direct routing would perform well enough if all the nodes were close to the sink. Sensor nodes are equipped with a limited power source.
