
Object-oriented programming (OOP) is a computer programming model that organizes software design around data, or objects, rather than functions and logic. An object can be defined as a data field that has unique attributes and behavior.

The organization of an object-oriented program also makes the method well suited to collaborative development, where projects are divided among groups.

Principles of OOP

Object-oriented programming is based on the following principles:

  • Encapsulation: The implementation and state of each object are privately held inside a defined boundary, or class. Other objects do not have access to this class or the authority to make changes; they are only able to call a list of public functions, or methods. This characteristic of data hiding provides greater program security and avoids unintended data corruption.
  • Abstraction: Objects only reveal internal mechanisms that are relevant for the use of other objects, hiding any unnecessary implementation code. This concept helps developers more easily make changes and additions over time.
  • Inheritance: Relationships and subclasses between objects can be assigned, allowing developers to reuse a common logic while still maintaining a unique hierarchy. This property of OOP forces a more thorough data analysis, reduces development time and ensures a higher level of accuracy.
  • Polymorphism: Objects can take on more than one form depending on the context. The program will determine which meaning or usage is necessary for each execution of that object, cutting down the need to duplicate code.
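The four principles above can be sketched in a short Python example; the class and method names here are illustrative, not from any particular library:

```python
class Account:
    """Encapsulation: the balance is held privately and changed only via methods."""
    def __init__(self, balance):
        self._balance = balance          # private state behind a defined boundary

    def deposit(self, amount):           # public method: the only way to mutate state
        self._balance += amount

    def describe(self):                  # abstraction: callers see behavior, not internals
        return f"{type(self).__name__} balance: {self._balance}"

class SavingsAccount(Account):
    """Inheritance: reuses Account's logic while adding its own behavior."""
    def add_interest(self, rate):
        self.deposit(self._balance * rate)

def print_summary(account):
    """Polymorphism: works with any object that provides describe()."""
    print(account.describe())

a = Account(100)
s = SavingsAccount(200)
s.add_interest(0.05)
for acct in (a, s):                      # each object takes its own form here
    print_summary(acct)
```

Calling `print_summary` on either object works without duplicated code, because the program determines at run time which `describe` applies.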

Organizations produce and gather data as they operate. Contained in a database, data is typically organized to model relevant aspects of reality in a way that supports processes requiring this information. Knowing how this can be managed effectively is vital to any organization.

What is a Database Management System (or DBMS)?


Organizations employ Database Management Systems (or DBMS) to help them effectively manage their data and derive relevant information out of it. A DBMS is a technology tool that directly supports data management. It is a package designed to define, manipulate, and manage data in a database.

Some general functions of a DBMS:

  • Designed to allow the definition, creation, querying, update, and administration of databases
  • Define rules to validate the data and relieve users of framing programs for data maintenance
  • Convert an existing database, or archive a large and growing one
  • Run business applications, which perform the tasks of managing business processes, interacting with end-users and other applications, to capture and analyze data

Some well-known DBMSs are Microsoft SQL Server, Microsoft Access, Oracle, SAP, and others.

Components of DBMS

A DBMS has several components, each performing very significant tasks in the database management system environment. Below is a list of components within the database and its environment.


Software
This is the set of programs used to control and manage the overall database. This includes the DBMS software itself, the Operating System, the network software being used to share the data among users, and the application programs used to access data in the DBMS.


Hardware
This consists of a set of physical electronic devices such as computers, I/O devices, storage devices, etc., and provides the interface between the computers and real-world systems.


Data
Data is the most important component: a DBMS exists to collect, store, process and provide access to it. The database contains both the actual or operational data and the metadata.


Procedures
These are the instructions and rules for using the DBMS and for designing and running the database. Documented procedures guide the users who operate and manage it.


Database Access Language
This is used to move data to and from the database: to enter new data, update existing data, or retrieve required data. The user writes a set of appropriate commands in a database access language and submits these to the DBMS, which then processes the data, generates a set of results, and displays them in a user-readable form.
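As a minimal sketch of a database access language in action, the SQL commands below (run here through Python's built-in sqlite3 module) define a table, enter new data, update it, and retrieve results. The table and column names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")       # throwaway in-memory database
cur = conn.cursor()

# Data definition: create a table.
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")

# Data manipulation: enter new data, then update it.
cur.execute("INSERT INTO employees (name, salary) VALUES (?, ?)", ("Alice", 50000))
cur.execute("UPDATE employees SET salary = salary + 5000 WHERE name = ?", ("Alice",))

# Query: the DBMS processes the commands and returns results in readable form.
rows = cur.execute("SELECT name, salary FROM employees").fetchall()
print(rows)                              # [('Alice', 55000.0)]
```

The user only writes declarative commands; the DBMS decides how to execute them.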


Query Processor
This transforms the user queries into a series of low level instructions. This reads the online user’s query and translates it into an efficient series of operations in a form capable of being sent to the run time data manager for execution.


Run Time Database Manager
Sometimes referred to as the database control system, this is the central software component of the DBMS that interfaces with user-submitted application programs and queries, and handles database access at run time. Its function is to convert the operations in users' queries into low-level instructions. It provides control to maintain the consistency, integrity and security of the data.


Data Manager
Also called the cache manager, this is responsible for handling data in the database and providing recovery capabilities that allow the system to recover the data after a failure.


Database Engine
The core service for storing, processing, and securing data, this provides controlled access and rapid transaction processing to address the requirements of the most demanding data consuming applications. It is often used to create relational databases for online transaction processing or online analytical processing data.


Data Dictionary
This is a reserved space within a database used to store information about the database itself. A data dictionary is a set of read-only tables and views containing information about the data used in the enterprise, ensuring that the database representation of the data follows one standard as defined in the dictionary.


Report Writer
Also referred to as the report generator, it is a program that extracts information from one or more files and presents the information in a specified format. Most report writers allow the user to select records that meet certain conditions and to display selected fields in rows and columns, or also format the data into different charts.

When multiple transactions execute concurrently in an uncontrolled or unrestricted manner, several problems can arise. These are commonly referred to as concurrency problems in a database environment. The five concurrency problems that can occur in a database are:

(i). Temporary Update Problem
(ii). Incorrect Summary Problem
(iii). Lost Update Problem
(iv). Unrepeatable Read Problem
(v). Phantom Read Problem 

These are explained as following below.

  1. Temporary Update Problem:
    The temporary update (dirty read) problem occurs when one transaction updates an item and then fails, but the updated item is read by another transaction before the item is reverted to its original value.

    Example: suppose transaction 1 updates X and then fails for some reason, so X reverts to its previous value. If transaction 2 reads X before the change is rolled back, it has read the incorrect (uncommitted) value of X.



  2. Incorrect Summary Problem:
    Consider a situation, where one transaction is applying the aggregate function on some records while another transaction is updating these records. The aggregate function may calculate some values before the values have been updated and others after they are updated.

    Example: suppose transaction 2 is calculating the sum of a set of records while transaction 1 is updating them. The aggregate function may read some values before transaction 1 updates them and others after, so the computed sum corresponds to no consistent state of the data.

  3. Lost Update Problem:
    In the lost update problem, an update made to a data item by one transaction is lost, because it is overwritten by an update made by another transaction.

    Example: suppose transaction 1 changes the value of X, but before its change is recorded permanently, transaction 2 also writes X. Transaction 2's write overwrites transaction 1's, so the update done by transaction 1 is lost.

  4. Unrepeatable Read Problem:
    The unrepeatable read problem occurs when two or more read operations of the same transaction read different values of the same variable.

    Example: transaction 2 reads the variable X; a write operation in transaction 1 then changes the value of X. When transaction 2 performs another read, it reads the new value of X written by transaction 1, so its two reads of the same variable disagree.

  5. Phantom Read Problem:
    The phantom read problem occurs when a transaction reads a variable once but when it tries to read that same variable again, an error occurs saying that the variable does not exist.

    Example: transaction 2 reads the variable X; transaction 1 then deletes X without transaction 2's knowledge. When transaction 2 tries to read X again, it is not able to do so.
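One of these anomalies, the lost update problem, can be made concrete with a small deterministic Python sketch; the starting value 100 and the two updates are illustrative:

```python
# A toy "database" holding one item, X.
db = {"X": 100}

# Both transactions read X before either one writes it back.
t1_x = db["X"]      # transaction 1 reads X = 100
t2_x = db["X"]      # transaction 2 reads X = 100

t1_x -= 10          # transaction 1 computes its new value, 90
t2_x += 50          # transaction 2 computes its new value, 150

db["X"] = t1_x      # transaction 1 writes X = 90
db["X"] = t2_x      # transaction 2 writes X = 150, overwriting it

print(db["X"])      # 150 -- transaction 1's update is lost (140 was expected)
```

A concurrency control mechanism (for example, locking X for the duration of each transaction) would force the two read-modify-write sequences to run one after the other.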

A strong entity is not dependent on any other entity in the schema. A strong entity always has a primary key. Strong entities are represented by a single rectangle. The relationship between two strong entities is represented by a single diamond.

Various strong entities, when combined together, create a strong entity set.


A weak entity is dependent on a strong entity for its existence. Unlike a strong entity, a weak entity does not have a primary key; it instead has a partial discriminator key. A weak entity is represented by a double rectangle.
The relationship between a strong entity and a weak entity is represented by a double diamond.





Difference between Strong and Weak Entity:

S.NO | Strong Entity | Weak Entity
1. | A strong entity always has a primary key. | A weak entity has a partial discriminator key.
2. | A strong entity is not dependent on any other entity. | A weak entity depends on a strong entity.
3. | A strong entity is represented by a single rectangle. | A weak entity is represented by a double rectangle.
4. | The relationship between two strong entities is represented by a single diamond. | The relationship between a strong and a weak entity is represented by a double diamond.
5. | A strong entity may or may not have total participation. | A weak entity always has total participation.
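In relational terms, a weak entity is typically implemented with a composite key that combines the strong entity's primary key with the partial discriminator. A sketch using a hypothetical employee/dependent pair (a classic strong/weak example), run through Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Strong entity: has its own primary key.
cur.execute("""CREATE TABLE employee (
    emp_id INTEGER PRIMARY KEY,
    name   TEXT)""")

# Weak entity: its key combines the owner's key with a partial
# discriminator (dep_name); it cannot exist without its employee.
cur.execute("""CREATE TABLE dependent (
    emp_id   INTEGER REFERENCES employee(emp_id),
    dep_name TEXT,
    PRIMARY KEY (emp_id, dep_name))""")

cur.execute("INSERT INTO employee VALUES (1, 'Alice')")
cur.execute("INSERT INTO dependent VALUES (1, 'Bob')")
rows = cur.execute("SELECT * FROM dependent").fetchall()
print(rows)                              # [(1, 'Bob')]
```

The dependent row is identified only through its owning employee, mirroring the existence dependency described above.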

Data Replication is the process of storing data in more than one site or node. It is useful in improving the availability of data. It is simply copying data from one server to another so that all users can share the same data without any inconsistency. The result is a distributed database in which users can access data relevant to their tasks without interfering with the work of others.

Data replication encompasses duplication of transactions on an ongoing basis, so that the replica is kept in a consistently updated state and synchronized with the source. (This contrasts with fragmentation, in which a particular relation resides at only one location.)

There can be full replication, in which the whole database is stored at every site. There can also be partial replication, in which some frequently used fragments of the database are replicated and others are not.

Types of Data Replication –

  1. Transactional Replication – In transactional replication, users receive full initial copies of the database and then receive updates as data changes. Data is copied in real time from the publisher to the receiving database (the subscriber) in the same order as the changes occur at the publisher; therefore, in this type of replication, transactional consistency is guaranteed. Transactional replication is typically used in server-to-server environments. It does not simply copy the data changes, but rather consistently and accurately replicates each change.
  2. Snapshot Replication – Snapshot replication distributes data exactly as it appears at a specific moment in time and does not monitor for updates to the data. The entire snapshot is generated and sent to subscribers. Snapshot replication is generally used when data changes are infrequent. It is a bit slower than transactional replication because on each attempt it moves multiple records from one end to the other. Snapshot replication is a good way to perform the initial synchronization between the publisher and the subscriber.
  3. Merge Replication – Data from two or more databases is combined into a single database. Merge replication is the most complex type of replication because it allows both publisher and subscriber to independently make changes to the database. Merge replication is typically used in server-to-client environments. It allows changes to be sent from one publisher to multiple subscribers.

 A tree whose elements have at most 2 children is called a binary tree. Since each element in a binary tree can have only 2 children, we typically name them the left and right child.

A Binary Tree node contains the following parts:

  1. Data
  2. Pointer to left child
  3. Pointer to right child
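The three parts listed above map directly onto a minimal node class; this is a sketch in Python, not tied to any particular library:

```python
class Node:
    """A binary tree node: data plus pointers to left and right children."""
    def __init__(self, data):
        self.data = data
        self.left = None    # pointer to left child
        self.right = None   # pointer to right child

# Build a small tree:    1
#                       / \
#                      2   3
root = Node(1)
root.left = Node(2)
root.right = Node(3)
print(root.left.data, root.right.data)   # 2 3
```

Since each node has at most two children, every tree built from such nodes is by construction a binary tree.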

 

AVL Tree-

  • AVL trees are a special kind of binary search tree.
  • In an AVL tree, the heights of the left and right subtrees of every node differ by at most one.
  • AVL trees are also called self-balancing binary search trees.

Example-

The following tree is an example of an AVL tree-

This tree is an AVL tree because-

  • It is a binary search tree.
  • The difference between height of left subtree and right subtree of every node is at most one.

The following tree is not an example of an AVL tree-

AVL Tree Operations-

Like BST operations, the commonly performed operations on an AVL tree are-

  1. Search Operation
  2. Insertion Operation
  3. Deletion Operation

Case-01:

  • After the operation, the balance factor of each node is either 0 or 1 or -1.
  • In this case, the AVL tree is considered to be balanced.
  • The operation is concluded.

Case-02:

  • After the operation, the balance factor of at least one node is not 0 or 1 or -1.
  • In this case, the AVL tree is considered to be imbalanced.
  • Rotations are then performed to balance the tree.

AVL Tree Rotations-

Kinds of Rotations-

There are 4 kinds of rotations possible in AVL Trees-

  1. Left Rotation (LL Rotation)
  2. Right Rotation (RR Rotation)
  3. Left-Right Rotation (LR Rotation)
  4. Right-Left Rotation (RL Rotation)
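The single left and right rotations underlying the four kinds above can be sketched as follows; the double rotations (LR and RL) are compositions of these two. The Node class here is a hypothetical minimal one:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(y):
    """LL case: lift y's left child x; y becomes x's right child."""
    x = y.left
    y.left = x.right        # x's old right subtree moves under y
    x.right = y
    return x                # x is the new subtree root

def rotate_left(x):
    """RR case: lift x's right child y; x becomes y's left child."""
    y = x.right
    x.right = y.left        # y's old left subtree moves under x
    y.left = x
    return y                # y is the new subtree root

# LL imbalance 3 -> 2 -> 1 (a left chain), fixed by one right rotation:
root = Node(3, left=Node(2, left=Node(1)))
root = rotate_right(root)
print(root.key, root.left.key, root.right.key)   # 2 1 3
```

A full AVL implementation would also track heights and apply these rotations whenever a node's balance factor leaves {-1, 0, 1}.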

Cases Of Imbalance And Their Balancing Using Rotation Operations-

Case-01: The imbalance is in the left subtree of the left child (LL imbalance). A single right rotation at the imbalanced node restores balance.

Case-02: The imbalance is in the right subtree of the right child (RR imbalance). A single left rotation at the imbalanced node restores balance.

Case-03: The imbalance is in the right subtree of the left child (LR imbalance). A left rotation on the left child followed by a right rotation on the imbalanced node restores balance.

Case-04: The imbalance is in the left subtree of the right child (RL imbalance). A right rotation on the right child followed by a left rotation on the imbalanced node restores balance.

A splay tree is a binary search tree with the additional property that recently accessed elements are quick to access again. Like other self-balancing binary search trees, it reorganizes itself as it is used.

All normal operations on a binary search tree are combined with one basic operation, called splaying.

Advantages

  1. Good performance for a splay tree depends on the fact that it is self-optimizing.

  2. Frequently accessed nodes will move nearer to the root where they can be accessed more quickly.

  3. Comparable performance: Average-case performance is as efficient as other trees.

  4. Small memory footprint: Splay trees do not need to store any bookkeeping data.

Disadvantages

  1. The most significant disadvantage of splay trees is that the height of a splay tree can be linear.
  2. This will be the case, for example, after accessing all n elements in non-decreasing order.
  3. Since the height of a tree corresponds to the worst-case access time, this means that the actual cost of a single operation can be high.
  4. The representation of splay trees can change even when they are accessed in a 'read-only' manner.

Zig step: this step is done when p, the parent of the accessed node x, is the root. The tree is rotated on the edge between x and p. Zig steps exist to deal with the parity issue; a zig is done only as the last step in a splay operation, and only when x has odd depth at the beginning of the operation.


Zig-zig step: this step is done when p is not the root and x and p are either both right children or both left children. Consider the case where x and p are both left children. The tree is rotated on the edge joining p with its parent g, then rotated on the edge joining x with p. Note that zig-zig steps are the only thing that differentiates splay trees from the rotate-to-root method introduced by Allen and Munro[4] prior to the introduction of splay trees.


Zig-zag step: this step is done when p is not the root and x is a right child and p is a left child or vice versa. The tree is rotated on the edge between p and x, and then rotated on the resulting edge between x and g.

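The three steps can be sketched as a bottom-up splay in Python; this is a minimal illustration with explicit parent pointers, not a production implementation:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = self.parent = None

def rotate_up(x):
    """Rotate x above its parent, fixing all parent/child links."""
    p, g = x.parent, x.parent.parent
    if p.left is x:                 # x is a left child: right rotation
        p.left = x.right
        if x.right:
            x.right.parent = p
        x.right = p
    else:                           # x is a right child: left rotation
        p.right = x.left
        if x.left:
            x.left.parent = p
        x.left = p
    p.parent = x
    x.parent = g
    if g:                           # reattach x under the old grandparent
        if g.left is p:
            g.left = x
        else:
            g.right = x

def splay(x):
    """Move x to the root using zig, zig-zig and zig-zag steps."""
    while x.parent:
        p, g = x.parent, x.parent.parent
        if g is None:
            rotate_up(x)            # zig: p is the root
        elif (g.left is p) == (p.left is x):
            rotate_up(p)            # zig-zig: rotate p above g first,
            rotate_up(x)            # then x above p
        else:
            rotate_up(x)            # zig-zag: rotate x above p,
            rotate_up(x)            # then x above g
    return x

# Left chain 10 -> 5 -> 2; splaying 2 brings it to the root via a zig-zig.
root, mid, leaf = Node(10), Node(5), Node(2)
root.left, mid.parent = mid, root
mid.left, leaf.parent = leaf, mid
root = splay(leaf)
print(root.key)                     # 2
```

After the splay, the accessed node sits at the root, so a repeated access costs O(1).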

 

Searching-

  • Searching is a process of finding a particular element among several given elements.
  • The search is successful if the required element is found.
  • Otherwise, the search is unsuccessful.

Searching Algorithms-

Searching Algorithms are a family of algorithms used for the purpose of searching.

The searching of an element in the given array may be carried out in the following two ways-

  1. Linear Search
  2. Binary Search

Linear Search-

  • Linear Search is the simplest searching algorithm.
  • It traverses the array sequentially to locate the required element.
  • It searches for an element by comparing it with each element of the array one by one.
  • So, it is also called Sequential Search.

Linear Search Algorithm is applied when-

  • No information is given about the array.
  • The given array is unsorted or the elements are unordered.
  • The list of data items is smaller.

Linear Search Algorithm-

Consider-

  • There is a linear array ‘a’ of size ‘n’.
  • Linear search algorithm is being used to search an element ‘item’ in this linear array.
  • If search ends in success, it sets loc to the index of the element otherwise it sets loc to -1.
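Following the description above, a direct Python sketch (array `a`, searched element `item`, result `loc` with -1 on failure):

```python
def linear_search(a, item):
    """Compare item with each element of a, one by one."""
    for i in range(len(a)):
        if a[i] == item:
            return i        # success: loc = index of the element
    return -1               # failure: loc = -1

a = [7, 3, 9, 14, 3]
print(linear_search(a, 9))   # 2
print(linear_search(a, 5))   # -1
```

No assumption is made about the order of the elements, which is why linear search works on unsorted arrays.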

Time Complexity Analysis-

Linear Search time complexity analysis is done below-

Best case-

In the best possible case,

  • The element being searched may be found at the first position.
  • In this case, the search terminates in success with just one comparison.
  • Thus in best case, linear search algorithm takes O(1) operations.

Worst Case-

In the worst possible case,

  • The element being searched may be present at the last position or not present in the array at all.
  • In the former case, the search terminates in success with n comparisons.
  • In the latter case, the search terminates in failure with n comparisons.
  • Thus in worst case, linear search algorithm takes O(n) operations.

 

CASE Tools

CASE tools are a set of software application programs used to automate SDLC activities. CASE tools are used by software project managers, analysts and engineers to develop software systems.

There are a number of CASE tools available to simplify various stages of the Software Development Life Cycle, such as analysis tools, design tools, project management tools, database management tools and documentation tools, to name a few.

Using CASE tools accelerates the development of a project to produce the desired result, and helps uncover flaws before moving ahead with the next stage of software development.

Components of CASE Tools

CASE tools can be broadly divided into the following parts based on their use at a particular SDLC stage:

  • Central Repository - CASE tools require a central repository, which can serve as a source of common, integrated and consistent information. Central repository is a central place of storage where product specifications, requirement documents, related reports and diagrams, other useful information regarding management is stored. Central repository also serves as data dictionary.

  • Upper Case Tools - Upper CASE tools are used in planning, analysis and design stages of SDLC.

  • Lower Case Tools - Lower CASE tools are used in implementation, testing and maintenance.

  • Integrated Case Tools - Integrated CASE tools are helpful in all the stages of SDLC, from requirement gathering to testing and documentation.

    CASE tools can be grouped together if they have similar functionality, process activities and capability of getting integrated with other tools.

    Scope of Case Tools

    The scope of CASE tools goes throughout the SDLC.

    Case Tools Types

    Now we briefly go through various CASE tools

    Diagram tools

    These tools are used to represent system components, data and control flow among various software components and system structure in a graphical form. For example, Flow Chart Maker tool for creating state-of-the-art flowcharts.

    Process Modeling Tools

    Process modeling is a method to create the software process model, which is used to develop the software. Process modeling tools help managers to choose a process model or modify it as per the requirements of the software product. For example, EPF Composer.

    Project Management Tools

    These tools are used for project planning, cost and effort estimation, project scheduling and resource planning. Managers have to strictly comply project execution with every mentioned step in software project management. Project management tools help in storing and sharing project information in real-time throughout the organization. For example, Creative Pro Office, Trac Project, Basecamp.

    Documentation Tools

    Documentation in a software project starts prior to the software process, goes throughout all phases of SDLC and after the completion of the project.

    Documentation tools generate documents for technical users and end users. Technical users are mostly in-house professionals of the development team who refer to system manual, reference manual, training manual, installation manuals etc. The end user documents describe the functioning and how-to of the system such as user manual. For example, Doxygen, DrExplain, Adobe RoboHelp for documentation.

    Analysis Tools

    These tools help to gather requirements, automatically check for any inconsistency, inaccuracy in the diagrams, data redundancies or erroneous omissions. For example, Accept 360, Accompa, CaseComplete for requirement analysis, Visible Analyst for total analysis.

    Design Tools

    These tools help software designers to design the block structure of the software, which may further be broken down into smaller modules using refinement techniques. These tools provide detailing of each module and the interconnections among modules. For example, Animated Software Design.

    Configuration Management Tools

    An instance of software is released under one version. Configuration Management tools deal with –

    • Version and revision management
    • Baseline configuration management
    • Change control management

    CASE tools help in this by automatic tracking, version management and release management. For example, Fossil, Git, AccuRev.

    Change Control Tools

    These tools are considered a part of configuration management tools. They deal with changes made to the software after its baseline is fixed or when the software is first released. CASE tools automate change tracking, file management, code management and more. They also help in enforcing the change policy of the organization.

    Programming Tools

    These tools consist of programming environments like IDE (Integrated Development Environment), in-built modules library and simulation tools. These tools provide comprehensive aid in building software product and include features for simulation and testing. For example, Cscope to search code in C, Eclipse.

    Prototyping Tools

    A software prototype is a simulated version of the intended software product. The prototype provides the initial look and feel of the product and simulates a few aspects of the actual product.

    Prototyping CASE tools essentially come with graphical libraries. They can create hardware independent user interfaces and design. These tools help us to build rapid prototypes based on existing information. In addition, they provide simulation of software prototype. For example, Serena prototype composer, Mockup Builder.

    Web Development Tools

    These tools assist in designing web pages with all allied elements like forms, text, scripts, graphics and so on. Web tools also provide a live preview of what is being developed and how it will look after completion. For example, Fontello, Adobe Edge Inspect, Foundation 3, Brackets.

    Quality Assurance Tools

    Quality assurance in a software organization is monitoring the engineering process and methods adopted to develop the software product in order to ensure conformance of quality as per organization standards. QA tools consist of configuration and change control tools and software testing tools. For example, SoapTest, AppsWatch, JMeter.

    Maintenance Tools

    Software maintenance includes modifications to the software product after it is delivered. Automatic logging and error reporting techniques, automatic error ticket generation and root cause analysis are a few CASE tools which help software organizations in the maintenance phase of SDLC. For example, Bugzilla for defect tracking, HP Quality Center.


 7 Different Types of White Box testing techniques | White box Testing Tools

White box testing is a popular kind of testing that has attracted many users because of its functionality. There are different types of white box testing techniques available. Choosing the right technique helps you save a great deal of time.

It is a known fact that every web application and piece of software requires testing. There are different kinds of testing, and the right kind is chosen based on the actual requirements.

A proper testing activity before launch helps you overcome any kind of error. Errors are classified as major or minor depending on the web application. An effective process of condition coverage allows testers to enhance quality.

What is White Box testing?

It is important for every tester to know and understand the process before starting, in order to enjoy quality results. Testing is generally practiced depending on the necessity. It is necessary to have a set of independent paths while testing, because it helps in organizing the process.

A proper white box testing definition helps you understand its objective. White box testing is one of the popular activities performed by testers for various reasons. It allows professionals to test the design, internal structure, and coding of the software. Hence, an organized testing activity gives a wide range of information before the launch.

Why use white box testing in software testing?

Every software producer prefers to have glitch-free or error-free software for obvious reasons. The best part of white box testing is that the tester has access to view the code in the software. With access to the raw script, it is easier for the tester to find errors quickly.

White box testing is one of the mandatory steps followed across the world. It concentrates on authenticating the input and output flow from the application. Hence, it helps in improving the functionalities. Functionalities include design, security, and usability from time to time.

What do you verify in White Box Testing?

It is important to understand the contents of white box testing to determine its value. White box testing is considered the first step of the testing activity. It uncovers most of the minor errors without compromising on quality.

A good example of white box testing shows the importance of verification. As the first step of the testing process, it is generally performed by developers before submitting the project. White box testing is also known as clear box testing, structural testing, code-based testing, open box testing, and so on. Most traditional testers prefer to call it transparent box testing or glass box testing.

The following parameters are generally verified in white box testing

  • Output and input flow
  • Security elements
  • Usability
  • Design and so on.

When to perform white box testing

White box testing is largely based on checking the internal functionality of the application. Hence, it focuses on elements related to internal testing.

White box testing examples help you perform white box testing. The methodology is widely used in web applications because it allows developers to add several functions. Most of the functions are pre-defined, because that helps them suit the requirements.

White box testing is commonly performed in the initial stage of testing or in the final stage of development. Most of the time, developers complete these steps because it helps testers save a lot of time.

7 Different types of white-box testing

  1. Unit Testing
  2. Static Analysis
  3. Dynamic Analysis
  4. Statement Coverage
  5. Branch testing Coverage
  6. Security Testing
  7. Mutation Testing

Unit Testing

Unit testing is one of the basic steps, performed in the early stages. Most testers prefer performing it to check whether a specific unit of code is functional or not. Unit testing is one of the common steps performed for every activity because it helps in removing basic and simple errors.
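As a minimal sketch, here is a unit test written with Python's built-in unittest framework, checking one specific unit of code (a deliberately simple, hypothetical add function):

```python
import unittest

def add(x, y):
    """The unit under test: a hypothetical, deliberately simple function."""
    return x + y

class TestAdd(unittest.TestCase):
    def test_add_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative(self):
        self.assertEqual(add(-2, -3), -5)

# Run the unit tests programmatically and report the result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())    # True
```

Each test exercises the unit in isolation, so a failure points directly at the code under test rather than at its surroundings.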

Static Analysis

As the term says, the step involves testing some of the static elements in the code. The step is conducted to figure out any of the possible defects or errors in the application code.

The static analysis is an important step because it helps in filtering simple errors in the initial stage of the process.

Dynamic Analysis

Dynamic analysis is the step that follows static analysis in general path testing. Most people prefer performing both static and dynamic analysis at the same time.

The dynamic analysis helps in analyzing and executing the source code depending on the requirements. The final stage of the step helps in analyzing the output without affecting the process.

Statement Coverage

Statement coverage is one of the pivotal steps involved in the testing process. It offers a whole lot of advantages in terms of execution from time to time.

The process takes place to check whether all the functionalities are working or not. Most testers use this step because it is designed to execute every statement at least once. As the process runs, we can figure out the possible errors in the web application.

Branch Testing Coverage

The modern-day software and web applications are not coded in a continuous mode because of various reasons. It is necessary to branch out at some point in time because it helps in segregating effectively.

Branch coverage testing gives a wide room for testers to find quick results. It helps in verifying all the possible branches in terms of lines of code. The step offers better access to find and rectify any kind of abnormal behavior in the application easily.

Security Testing

It is a known fact that security is one of the primary protocols that needs to be in place all the time. Most companies prefer having a regular security testing activity for obvious reasons. It is essential to have a process in place to protect the application or software automatically.

Security testing is more like a process because it comes with a lot of internal steps to complete. It verifies and rectifies any kind of unauthorized access to the system. The process helps in avoiding any kind of breach because of hacking or cracking practices.

Security testing requires a set of techniques, which deal with a sophisticated testing environment.

Mutation Testing

Mutation testing is the last step in the process and requires a lot of time to complete effectively. It is generally conducted to re-check for any kind of bugs in the system.

The step is carried out to ensure using the right strategy because of various reasons. It gives enough information about the strategy or a code to enhance the system from time to time.