
There are three stages in the process of component-based development. These are:

 1. Qualification,
 2. Adaptation (also known as wrapping),
 3. Composition (all in the context of components).

Component qualification examines reusable components. These are identified by the characteristics of their interfaces, i.e., the services provided and the means by which consumers access these services. The interface alone, however, does not always give the whole picture of whether a component will fit the requirements and the architectural style, so qualification is a process of discovery by the software engineer. It ensures that a candidate component will perform the function required and is compatible with, or adaptable to, the architectural style of the system. The three important characteristics examined are performance, reliability and usability.

Component adaptation is required because components very rarely integrate immediately with the system. Depending on the component type (e.g., COTS or in-house), different strategies are used for adaptation (also known as wrapping). The most common approaches are:

White box wrapping: The implementation of the component is modified directly in order to resolve any incompatibilities. This is, obviously, only possible if the source code is available for a component, which is extremely unlikely in the case of COTS.

 Grey box wrapping: This relies on the component library providing a component extension language or API that enables conflicts to be removed or masked.

Black box wrapping: This is the most common case, where access to source code is not available, and the only way the component can be adapted is by pre / post-processing at the interface level.
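For illustration, here is a minimal sketch of black-box wrapping in C, assuming a hypothetical binary-only routine vendor_distance_miles() whose source is unavailable; the wrapper adapts the component purely by post-processing at the interface level:

#include <math.h>
#include <stdio.h>

/* Stub standing in for a binary-only COTS routine we cannot modify. */
double vendor_distance_miles(double x1, double y1, double x2, double y2)
{
    return sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));
}

/* Black-box wrapper: the system expects kilometres, so we adapt the
   component at the interface level, without touching its internals. */
double distance_km(double x1, double y1, double x2, double y2)
{
    return vendor_distance_miles(x1, y1, x2, y2) * 1.609344;
}

int main(void)
{
    printf("%.3f km\n", distance_km(0, 0, 3, 4));  /* 5 miles -> 8.047 km */
    return 0;
}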

It is the job of the software engineer to determine whether the effort required to wrap a component adequately is justified, or whether it would be “cheaper” (from a software engineering perspective) to engineer a custom component that removes these conflicts. Also, once a component has been adapted, it is necessary to check its compatibility for integration and to test for any unexpected behavior that may have emerged due to the modifications made.

Ans: The following are the similarities and differences between the Cleanroom software engineering and object-oriented (OO) software engineering paradigms.

Similarities
 Lifecycle - both rely on incremental development
 Usage - cleanroom usage model similar to OO use case
 State Machine Use - cleanroom state box and OO transition diagram
 Reuse - explicit objective in both process models

Key Differences
 Cleanroom relies on decomposition while OO relies on composition
 Cleanroom relies on formal methods while OO allows informal use case definition and testing
 The OO inheritance hierarchy is a design resource whereas the cleanroom usage hierarchy is the system itself
 OO practitioners prefer graphical representations while cleanroom practitioners prefer tabular representations
 Tool support is good for most OO processes, but tool support is usually found only in cleanroom testing, not design.

 A formal method in software development is a method that provides a formal language for describing a software artifact (e.g., specifications, designs, source code) such that formal proofs are possible, in principle, about properties of the artifact so expressed.

Mathematics supports abstraction and therefore is a powerful medium for modeling. Because they are exact, mathematical specifications are unambiguous and can be validated to uncover contradictions and incompleteness. Mathematics allows a developer to validate a specification for functionality. It is possible to demonstrate that a design matches a specification, and that some program code is a correct reflection of a design.

How is the mathematics of formal languages applied in software development? What engineering issues have been addressed by their application? Formal methods are of global concern in software engineering. They are directly applicable during the requirements, design, and coding phases and have important consequences for testing
and maintenance.

They have influenced the development and standardization of many programming languages, the programmer’s most basic tool. They are important in ongoing research that may change standard practice, particularly in the areas of
specifications and design methodology. They are entwined with lifecycle models that may provide an alternative to the waterfall model, namely rapid prototyping, the Cleanroom variant on the spiral model, and “transformational” paradigms.

The concept of formalism in formal methods is borrowed from certain trends in 19th and 20th century mathematics. Formal methods are merely an adoption of the axiomatic method, as developed by these trends in mathematics, for software engineering. Mastery of formal methods in software requires an understanding of this mathematics background. Mathematical topics of interest include formal logic, both the propositional calculus and predicate logic, set theory, formal languages, and automata such as finite state machines.
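As a small, hedged illustration (not drawn from any particular method), a pre/postcondition specification in predicate-logic style for an integer square-root routine might read:

{ n >= 0 }
r := isqrt(n)
{ r >= 0 and r*r <= n < (r+1)*(r+1) }

The proof obligation is then to show, formally, that the code establishes the postcondition whenever the precondition holds.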

The OECD, an international organization working in the area of data privacy and information security, established an ad hoc process of meetings (the first on 1-2 July 1997 and the second on 22 October 1997) on approaches being taken in major industrial countries for the regulation of content conduct on the Internet. The meetings acknowledged the primary role of the private sector in regulating the Internet. However, at the joint OECD/Business and Industry Advisory Committee forum held on 25 March 1998 in Paris, the OECD resolved to do no further work in this area. On 19 April 2006, the OECD task force on spam recommended that governments and industry step up their coordination to combat the global problem of spam. It called on governments to establish clear national anti-spam policies and give enforcement authorities more power and resources. Coordination and cooperation between public and private sectors are critical, the report notes.

• It is faster than a stream cipher.
• If any block contains a transmission error, it will not affect the other blocks.
• It is not as efficient in hardware, but may be used, for example, to connect a keyboard to the CPU (central processing unit).
• Block ciphers can be easier to implement in software, because they avoid the time-consuming bit manipulation of stream ciphers and treat data in computer-sized blocks.
• A block cipher is more suitable for trading applications.
• Short blocks at the end of a message can be padded with blanks or zeros.
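As a hedged sketch of the last point (assuming an 8-byte block size and a hypothetical helper name), zero-padding the short final block might look like this in C:

#include <string.h>

#define BLOCK_SIZE 8

/* Copy the short final block of msg into buf and fill the rest with zeros.
   (If len is an exact multiple of BLOCK_SIZE, this produces an all-zero block.) */
void pad_final_block(unsigned char buf[BLOCK_SIZE],
                     const unsigned char *msg, size_t len)
{
    size_t rem = len % BLOCK_SIZE;          /* bytes left over in the last block */
    memcpy(buf, msg + (len - rem), rem);    /* the genuine trailing bytes        */
    memset(buf + rem, 0, BLOCK_SIZE - rem); /* zero padding up to the block size */
}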

If you need to line up cells next to each other, you can resize and move the layout cells as needed. You can change the size of a layout cell by using one of its resize handles.

You cannot click and drag a cell to move it to a new position. If you need to move a layout cell to reposition it in a document, follow these steps:

  1. Click the border of a layout cell to select it.
  2. To move the layout cell, do one of the following:

    1. Use the arrow keys.
    2. Hold down Shift and use the arrow keys to move a layout cell 5 pixels at a time.

• With no arguments and with no return value.
• With no arguments and with return value
• With arguments and with no return value
• With arguments and with return value
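A minimal C sketch of these four combinations (illustrative names only):

#include <stdio.h>

void greet(void)      { printf("hello\n"); }  /* no arguments, no return value   */
int  get_answer(void) { return 42; }          /* no arguments, with return value */
void show(int n)      { printf("%d\n", n); }  /* arguments, no return value      */
int  square(int n)    { return n * n; }       /* arguments, with return value    */

int main(void)
{
    greet();
    printf("%d\n", get_answer());
    show(7);
    printf("%d\n", square(7));
    return 0;
}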

Besides the three storage class specifiers discussed so far, namely automatic, external and static, there is a register storage class. Registers are special storage areas within a computer's CPU. All the arithmetic and logical operations are carried out in these registers.

For the same program, the execution time can be reduced if certain values are stored in registers rather than in memory. Such programs are smaller in size (as fewer instructions are required) and need fewer data transfers; the reduction is in the machine code, not in the source code. Register variables are declared by preceding the declaration with the register reserved word, as follows:

register int m;

Points to remember:

• These variables are stored in the registers of the computer. If registers are not available, they are kept in memory.
• Usually only 2 or 3 register variables are used in a program.
• The scope is the same as for an automatic variable: local to the function in which it is declared.
• The address operator ‘&’ cannot be applied to a register variable.
• If a register is not available, the variable is treated like an automatic variable.
• They are usually associated with integer variables, but other types of the same size (short or unsigned) are allowed.
• They can be formal arguments in functions.
• Pointers to register variables are not allowed.
• These variables can also be used as loop indices to increase efficiency.
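A small illustrative C fragment using a register variable as a loop index (a sketch; the compiler is free to ignore the hint):

#include <stdio.h>

int main(void)
{
    register int i;             /* hint: keep the loop index in a CPU register */
    int sum = 0;

    for (i = 1; i <= 100; i++)  /* loop indices are a typical use */
        sum += i;

    /* Note: &i would be a compile-time error, registers have no address. */
    printf("sum = %d\n", sum);  /* prints 5050 */
    return 0;
}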

In single-file programs, static variables are defined within functions and individually have the same scope as automatic variables. But unlike automatic variables, static variables retain their values throughout the execution of the program.

Points to remember:
• The static specifier precedes the declaration, and the value cannot be accessed outside the defining function.
• Static variables may have the same names as external variables, but the local variables take precedence in the function; external variables thus maintain their independence from locally defined auto and static variables.
• The initial value must be expressed as a constant, not an expression.
• Zeros are assigned to all variables whose declarations do not include explicit initial values; hence static variables always have assigned values.
• Initialization is done only on the first execution.
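These points can be seen in a short C sketch:

#include <stdio.h>

void counter(void)
{
    static int count = 0;  /* initialized only on the first execution */
    count++;               /* the value is retained between calls     */
    printf("call #%d\n", count);
}

int main(void)
{
    counter();  /* prints: call #1 */
    counter();  /* prints: call #2 */
    counter();  /* prints: call #3 */
    return 0;
}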

External variables are not confined to a single function. Their scope ranges from the point of declaration through the entire remaining program. Therefore, their scope may be the entire program, or two or more functions, depending upon where they are declared.


Points to remember:
• These are global and can be accessed by any function within their scope.
Therefore a value may be assigned in one function and used in another.
• There is a difference between external variable definition and declaration.
• An external definition is written like any other variable declaration:

            • It usually lies outside, or before, the function accessing it.
            • It allocates the storage space required.
            • Initial values can be assigned as part of the definition.
            • The extern specifier is not required in an external variable definition.

• A declaration is required if the external variable definition comes after the function definition.
• A declaration begins with the extern specifier.
• Storage space is allocated only where the external variable is defined.
• External variables can be assigned initial values as part of the variable definition, but the values must be constants rather than expressions.
• If an initial value is not included, the variable is automatically assigned a value of zero.
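A minimal single-file C sketch of an external variable assigned in one function and used in another:

#include <stdio.h>

int total = 0;   /* external definition: storage is allocated here,
                    and the default initial value would be zero anyway */

void add(int n)   { total += n; }                    /* assigned in one function */
void report(void) { printf("total = %d\n", total); } /* ...and used in another   */

int main(void)
{
    add(5);
    add(7);
    report();  /* prints: total = 12 */
    return 0;
}

/* In a second source file, the declaration would be: extern int total; */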

Variables local to a function are automatic, i.e., declared within the function. Their scope lies within the function itself. Automatic variables defined in different functions are treated as different variables, even if they have the same name. Automatic is the default storage class for variables declared in a function.


Points to remember:
• The auto keyword is optional; there is no need to write it.
• All the formal arguments also have the auto storage class.
• The initialization of auto variables can be done:

           • in declarations
           • using an assignment expression in a function

• If not initialized, the variable holds an unpredictable (garbage) value.
• The value is not retained after exit from the function.
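A brief C sketch of these points (the auto keyword is spelled out only for illustration):

#include <stdio.h>

void demo(void)
{
    auto int n = 0;  /* 'auto' is optional; locals are automatic by default */
    n++;             /* n is created afresh on every call...                */
    printf("n = %d\n", n);  /* ...so this always prints: n = 1 */
}

int main(void)
{
    demo();
    demo();  /* still prints n = 1: the value was not retained */
    return 0;
}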

LCDs are the screens of choice for portable computers and lightweight displays. They consume very little electricity and have advanced technologically to quite good resolutions and color support. They were developed by the company RCA in the 1960s. LCDs function simply by blocking available light so as to render display patterns.

LCDs can be of the following types:

  1. Reflective LCDs: Display is generated by selectively blocking reflected light.
  2. Backlit LCDs : Display is due to a light source behind LCD panel.
  3. Edgelit LCDs : Display is due to a light source adjacent to the LCD panel.

 

LCD Technology
The technology behind LCD is called Nematic Technology because the molecules of the liquid crystals used are nematic i.e. rod-shaped. This liquid is sandwiched between two thin plastic membranes. These crystals have the special property that they can change the polarity and the bend of the light and this can be controlled by grooves in the plastic and by applying electric current.

Passive Matrix
In a passive matrix arrangement, the LCD panel has a grid of horizontal and vertical conductors, and each pixel is located at an intersection. When a current is received by the pixel, it becomes dark. This is the technology most commonly used.

Active Matrix
This is called TFT (Thin Film Transistor) technology. Here there is a transistor at every pixel acting as a relay: it receives a small current and amplifies it to activate the pixel. Since the controlling current is smaller, it can travel faster, and hence response times are much faster. However, TFTs are much more difficult to fabricate and are costlier.

We have discussed resolutions and vertical and horizontal refresh rates in the section on Video Cards. Let us now look at them from the monitor's point of view. We have the following definitions (from the manual of a monitor available in the market):

Horizontal Frequency: The time to scan one line connecting the right edge to the left edge of the screen horizontally is called the horizontal cycle, and the inverse of the horizontal cycle is called the horizontal frequency. Its unit is kHz (kilohertz).

Vertical Frequency: Like a fluorescent lamp, the screen has to repeat the same image many times per second to display an image to the user. The frequency of this repetition is called the vertical frequency, or refresh rate.
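As a rough worked example (ours, not from the manual): at 800×600 resolution with a 60 Hz refresh rate, about 600 visible lines are redrawn 60 times per second, so the horizontal frequency must be at least 600 × 60 = 36,000 lines per second, i.e., about 36 kHz; real monitors need somewhat more to allow for retrace time.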

If the resolution generated by the video card and the monitor resolution are properly matched, you get a good quality display. However, the actual resolution achieved is a physical quality of the monitor. In color systems, the resolution is limited by convergence (do the beams of the 3 colors converge exactly on the same dot?) and by the dot pitch. In monochrome monitors, the resolution is limited only by the highest frequency signals the monitor can handle.

A 3-D accelerator is no magic technology. It is simply an accelerator chip that has the built-in ability to carry out the mathematics and the algorithms required for 3-D image generation and rendering. A 3-D image is simply an illusion, a projection of 3-D reality on a 2-D screen. These are generated by projection and perspective effects, depth and lighting effects, transparency effects, and techniques such as ray-tracing (tracing the path of light rays emitted from a light source) and Z-buffering (a buffer storing the depth of each pixel).

The other popular way is to attach a disk drive to a PC via a SCSI interface. This is the common drive choice for servers or high-end workstations, with drive capacities ranging from 100MB to 20GB and a rotation speed of 7200 RPM. SCSI is a common I/O interface between the adapter and disk drives or any other peripherals, i.e., CD-ROM drives, tape drives, printers, etc.

IDE devices are connected to the PC motherboard via a 40-wire ribbon cable. The common drive used today for workstations has capacities of 40MB to 100MB and a rotation speed of 7200 RPM. The controller is embedded on the disk drive itself; it is an interface between the disk controller and an adapter located on the motherboard. It has a good access time of 20ms and data transfer rates of about 1 Mbps under ideal conditions. Drives are reasonably cheap. The latest version of the IDE specification enables four IDE channels, each one capable of supporting two IDE devices.

 The debugger is a program that allows the user to test and debug the object file. The user can employ this program to perform the following functions.

1. Make changes in the object code.
2. Examine and modify the contents of memory.
3. Set breakpoints, execute a segment of the program and display register contents after the execution.
4. Trace the execution of the specified segment of the program and display the register and memory contents after the execution of each instruction.
5. Disassemble a section of the program, i.e., convert the object code into the source code or mnemonics.

In summary, to run an assembly program you may require on your computer:
1. A text editor like Notepad
2. MASM, TASM or an emulator
3. LINK.EXE; it may be included with the assembler
4. DEBUG.COM for debugging, if the need be.

A loader is a program which assigns absolute addresses to the program. These addresses are generated by adding the address at which the program is loaded into memory to all the offsets. The loader comes into action when you want to execute your program, bringing it from secondary memory such as disk. The file name extension for loading is .exe or .com; after loading, the program can be executed by the CPU.

For modularity of your programs, it is better to break your program into several subroutines. It is even better to put common routines, like reading a hexadecimal number, writing a hexadecimal number, etc., which could be used by many of your other programs, into a separate file. These files are assembled separately. After each file has been successfully assembled, they can be linked together to form a large file which constitutes your complete program. The file containing the common routines can be linked to your other programs also. The program that links your programs together is called the linker.

This was an old method that required the programmer to translate each opcode into its numerical machine language representation by looking up a table of the microprocessor's instruction set, which contains both assembly and machine language instructions. Manual assembly is acceptable for short programs but becomes very inconvenient for large programs. The Intel SDK-85 and most of the earlier university kits were programmed using manual assembly.

Assembly language is used primarily for writing short, specific, efficient interfacing modules/subroutines. The basic idea of using assembly is to support the HLL with some highly efficient but non-portable routines. It is worth mentioning here that UNIX is mostly written in C but has about 5-10% machine-dependent assembly code. Similarly, in telecommunication applications, assembly routines exist for enhancing efficiency.

We use a Computer Network for the following reasons:

a) Resource sharing: A network is needed because of the desire to share sharable programs, data, and equipment, making them available to anyone on the network without regard to the physical location of the resource and the user. You can also share processing load across various networked resources.

b) High reliability: A network may have alternative sources of supply (e.g., replicated files, multiple CPUs, etc.). In case one resource fails, the others can be used and the system continues to operate at reduced performance. This is a very important property for military, banking, air traffic control, and many other applications.

c) Cost-benefit advantage: A network may consist of many powerful small computers, one per user, with data and applications kept on one or more shared and powerful file server machines. This is called the client-server model. Such a model offers a much better price/performance ratio than old mainframes. At present many server services have been moved to Internet-based resources set up by a third party and shared by many (called the cloud). This allows users to use powerful server applications and data services without maintaining servers. Such a system may bring down the cost further. However, such models still have several issues that are being debated.

d) Scalability: The ability to increase system performance gradually by adding more processors (incremental upgrade).

e) Powerful communication medium: Networks make cooperation among far-flung groups of people easy where it previously had been impossible.
In the long run, the use of networks to enhance human-to-human communication may prove more important than technical goals such as improved reliability.

One of the most popular applications of networks is the World Wide Web, which is an application of the Internet. Let us introduce you to the Internet in the next subsection.

The Internet is characterized by Client Server Computing, which consists of three basic components:

1. The Web client which may be the web browser;
2. The web server that services the request of the web client; and
3. The network that connects the client and the server, such as a LAN, a WAN or the Internet.

For exchanging information between the client and the server, an application-level protocol, the Hypertext Transfer Protocol (HTTP), is used. This protocol uses the services of TCP/IP for reliable communication of information over the Internet. However, HTTP is not suitable for all kinds of applications, especially those that require a large amount of data transfer in real time; for example, the Voice over IP (VoIP) application requires real-time transfer of voice data. For such applications, different application-level protocols have been designed; for VoIP, a protocol named Real-time Transport Protocol (RTP) has been designed. Such protocols may run over the unreliable User Datagram Protocol (UDP) instead of reliable TCP.

The HTTP protocol allows us to access a web page by a web client running a browser. A Web page is a document or resource of information that may be available on a Web Server.
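As a hedged sketch of this client-server exchange (POSIX sockets in C, with example.com standing in for an arbitrary web server):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;    /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;  /* HTTP runs over reliable TCP */

    if (getaddrinfo("example.com", "80", &hints, &res) != 0)
        return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0)
        return 1;

    /* The web client's request for a page... */
    const char *req = "GET / HTTP/1.1\r\n"
                      "Host: example.com\r\n"
                      "Connection: close\r\n\r\n";
    write(fd, req, strlen(req));

    /* ...and the web server's reply: status line, headers, HTML body. */
    char buf[1024];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    freeaddrinfo(res);
    return 0;
}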

A Web browser is a software application that enables you to find, retrieve, and display information available on the World Wide Web (WWW). A browser also allows you to traverse information resources on the WWW. As you know, information on the Web is organized and formatted using the tags of a markup language called Hypertext Markup Language, or HTML. A web browser converts the HTML tags and their content into a formatted display of information. Thus, a web browser allows you to see the rich web contents of a website. Some of the popular web browsers are Internet Explorer, Mozilla Firefox, Apple Safari, Google Chrome, and Opera.

Before we discuss the advantages and disadvantages of e-learning, you should know that e-learning is just another model for learning; it cannot replace all other forms of learning models. However, it provides several opportunities that may be of benefit in creating certain learning instances. Some of these opportunities are:

1. It opens up new possibilities for course material, but these require the constant support of a course team.

2. The level of participation of students in learning may improve, as it provides anytime, anywhere learning; but in any case the students have to be motivated by the course team from time to time.

3. E-learning does improve the IT skills of individuals and may improve their time management skills.

4. The content, like recorded lectures, may be viewed by a student at any time; however, the interactive support that requires a teacher at the other end may still be available in slotted time only.

5. It allows you to measure student activities very easily, but beware: too much interference in a student's style of learning is not advisable.

6. E-learning gives flexibility in curriculum design and reuse of contents; however, the expert team has to work constantly to make that happen.

7. The general understanding of e-learning as a cost-effective mechanism is often misleading. Please note that e-learning is, first of all, a teaching-learning process. Any good teaching-learning process is rigorous and requires substantial costs.

Developing e-learning contents is a specialized activity. The quality of e-learning relates to the achievement of the objectives of the content by the learners. Better quality e-learning content can be created if you follow a proper process of e-content generation.

Analysis Phase: Analysis requires identifying the learning objectives for the development of content for the target audience. This phase also lists the financial, technological and time constraints of the e-learning project. It also enables identification of the gap between the existing knowledge of the target audience and what they should know after going through the course. This facilitates the design phase.

Design Phase: In most organizations the design phase involves development of a storyboard that may include a concept flow, text, graphics, video, audio, animation if needed. In this phase you may also design the basic questions that must be answered by the learner after going through the learning content. You may also design the interface and interactivity during this step.

Implementation Phase: The implementation phase brings the design to life as course material. You may take the help of various experts in this phase, including a content expert, graphics expert, interaction designer, web designer, etc.

Verification Phase: During the verification phase, the contents so produced are tested to determine whether they convey what they are expected to convey. This phase may also be used to check the usability features of the product. You may perform verification with e-learning experts or with a sample of the target audience.

In this sorting algorithm, multiple swaps take place in one pass. Smaller elements move, or ‘bubble’, up to the top of the list; hence the name given to the algorithm.

In this method, adjacent members of the list to be sorted are compared. If the item on top is greater than the item immediately below it, then they are swapped. This process is carried on till the list is sorted.

The detailed algorithm follows:

Algorithm: BUBBLE SORT
1. Begin
2. Read the n elements
3. for i = 1 to n
4.     for j = n downto i+1
5.         if a[j] <= a[j-1]
6.             swap(a[j], a[j-1])
7. End // of Bubble Sort

Total number of comparisons in Bubble sort:
= (N-1) + (N-2) + ... + 2 + 1
= (N-1)*N / 2 = O(N^2)

This inefficiency is due to the fact that an item moves only to the next position in each pass.
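A straightforward C rendering of the pseudocode above (a sketch, following the same pass structure):

#include <stdio.h>

void bubble_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)        /* pass i                        */
        for (int j = n - 1; j > i; j--)    /* bubble from the bottom upward */
            if (a[j] < a[j - 1]) {         /* adjacent pair out of order?   */
                int t = a[j];              /* swap them                     */
                a[j] = a[j - 1];
                a[j - 1] = t;
            }
}

int main(void)
{
    int a[] = {5, 1, 4, 2, 8};
    bubble_sort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);  /* prints: 1 2 4 5 8 */
    printf("\n");
    return 0;
}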

Complexity refers to the rate at which the required storage or consumed time grows as a function of the problem size. The absolute growth depends on the machine used to execute the program, the compiler used to construct the program, and many other factors. We would like to have a way of describing the inherent complexity of a program (or piece of a program), independent of machine/compiler considerations. This means that we must not try to describe the absolute time or storage needed. We must instead concentrate on a “proportionality” approach, expressing the complexity in terms of its relationship to some known function. This type of analysis is known as asymptotic analysis. It may be noted that we are dealing with the complexity of an algorithm, not that of a problem: a simple problem could still be solved by an algorithm of high time complexity, and vice versa.

 All decision problems fall into sets of comparable complexity, called complexity classes.

The complexity class P is the set of decision problems that can be solved by a deterministic machine in polynomial time. This class corresponds to the set of problems which can be effectively solved even in the worst case. We will consider algorithms belonging to this class for analysis of time complexity. Not all algorithms in this class make practical sense, as many of them have higher-order complexity. These are discussed later.

The complexity class NP is the set of decision problems that can be solved by a non-deterministic machine in polynomial time. This class contains many problems, like the Boolean satisfiability problem, the Hamiltonian path problem and the Vertex cover problem.

The second method of representing a two-dimensional array in memory is the column major representation. Under this representation, the first column of the array occupies the first set of the memory locations reserved for the array, the second column occupies the next set, and so forth. The schematic of a column major representation is shown below.

Consider the following two-dimensional array:
a b c d
e f g h
i j k l
To make its equivalent column major representation, we perform the following process:

Transpose the elements of the array; then the representation will be the same as the row major representation of this transpose.
By applying the above-mentioned process, we get {a, e, i, b, f, j, c, g, k, d, h, l}

[Schematic of column major representation: Col 0, Col 1, Col 2, ..., Col i stored one after another]
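The address arithmetic behind the two representations can be sketched in C as follows (for the 3×4 example above; offsets are from the start of the array):

#include <stdio.h>

#define ROWS 3
#define COLS 4

int row_major_offset(int i, int j) { return i * COLS + j; } /* rows stored end to end    */
int col_major_offset(int i, int j) { return j * ROWS + i; } /* columns stored end to end */

int main(void)
{
    /* element (1,2) is 'g' in the a..l example above */
    printf("row major: %d\n", row_major_offset(1, 2)); /* 6 in {a,b,c,d,e,f,g,...}   */
    printf("col major: %d\n", col_major_offset(1, 2)); /* 7 in {a,e,i,b,f,j,c,g,...} */
    return 0;
}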

 Internal Modems: Internal Modems plug into expansion slots in your PC. Internal Modems are cheap and efficient. Internal Modems are bus-specific and hence may not fit universally.

External Modems: These are connected externally to the PC through a serial or parallel port, and into a telephone line at the other end. They can usually connect to any computer with the right port, and have a range of indicators for troubleshooting.

Pocket Modems: Small external modems used with notebook PCs.

PC-Card Modems: PC-Card modems are used with the PCMCIA slots found in notebooks. They are like external modems which fit into an internal slot. Thus, they give the advantages of both external and internal modems, but are more expensive.


Interlacing is a technique in which, instead of scanning the image one line at a time, it is scanned alternately, i.e., alternate lines are scanned at each pass. This achieves a doubling of the frame rate with the same amount of signal input. Interlacing is used to keep bandwidth (amount of signal) down. Presently, only the 8514/A display adapters use interlacing.

DPI (Dots Per Inch) is a measure of the actual sharpness of the on-screen image. This depends on both the resolution and the size of the image. Practical experience shows that a smaller screen has a sharper image at the same resolution than does a larger screen. This is because it requires more dots per inch to display the same number of pixels: DPI is the number of horizontal pixels divided by the horizontal width in inches. A 15-inch monitor is 12 inches wide; a 10-inch monitor is 8 inches wide. To display a VGA image (640×480), the 15-inch monitor will require 640/12, i.e., about 53 DPI, and the 10-inch monitor 640/8 = 80 DPI.


 The following conventions must be used for pseudo-code.

  1. Give a valid name for the pseudo-code procedure. (See the sample code for insertion sort at the end.)
  2. Use line numbers for each line of code.
  3. Use proper indentation for every statement in a block structure.
  4. For flow control statements use if-else. Always end an if statement with an end-if. The if, else and end-if should be aligned vertically.

Ex: if (conditional expression)
        statements (note the indentation)
    else
        statements
    end-if

5. Use := or <--- operator for assignments.

Ex: i := j or i <--- j

    n := 2 to length[A] or n <--- 2 to length[A]

  6. Array elements can be represented by specifying the array name followed by the index in square brackets. For example, A[i] indicates the ith element of the array A.
  7. For looping or iteration use for or while statements. Always end a for loop with end-for and a while with end-while.
  8. The conditional expression of for or while can be written as shown in rule (4). You can separate two or more conditions with “and”.
  9. If required, we can also put comments between the symbols /* and */.

A simple pseudo-code for insertion sort using the above conventions:

INSERTION-SORT (A)
1. for j <--- 2 to length[A]
2.     key <--- A[j]
3.     i <--- j - 1    /* insert A[j] into sorted sequence A[1...j-1] */
4.     while i > 0 and A[i] > key
5.         A[i+1] <--- A[i]
6.         i <--- i - 1
7.     end-while
8.     A[i+1] <--- key
9. end-for

Pseudo-code is a compact and informal high-level description of a computer programming algorithm that uses the structural conventions of some programming language. Unlike an actual computer language such as C, C++ or Java, pseudo-code typically omits details that are not essential for understanding the algorithm, such as function (or subroutine) mechanics, variable declarations, semicolons, special keywords and so on. Any version of pseudo-code is acceptable as long as its instructions are unambiguous and it resembles a programming language in form. Pseudo-code is independent of any programming language: it cannot be compiled or executed, and it does not follow any strict syntax rules.

Flow charts can be thought of as a graphical alternative to pseudo-code. A flowchart is a schematic representation of an algorithm, or the step-by-step solution of a problem, using some geometric figures (called flowchart symbols) connected by flow-lines for the purpose of designing or documenting a program.

The purpose of using pseudo-code is that it may be easier to read than conventional programming languages, which enables (or helps) programmers to concentrate on the algorithm without worrying about all the syntactic details of a particular programming language. In fact, one can write pseudo-code for a given problem without even knowing what programming language will be used for the final implementation.

Example: The following pseudo-code “finds the maximum of three numbers”.

Input parameters : a , b , c
Output parameter : x

Find Max (a, b, c, x)

{
x = a
if (b > x)    /* if b is larger than x then update x */
x = b
if (c > x)    /* if c is larger than x then update x */
x = c
}

The first line of a function consists of the name of the function followed by parentheses; in the parentheses we pass the parameters of the function. The parameters may be data, variables, arrays, and so on, that are available to the function. In the above algorithm, the parameters are the three input values a, b, and c, and the output parameter x, which is assigned the maximum of the three input values.

 There are basically 5 fundamental techniques which are used to design an algorithm efficiently:

  1. Divide-and-Conquer
  2. Greedy method
  3. Dynamic Programming
  4. Backtracking
  5. Branch-and-Bound

In this section we will briefly describe these techniques with appropriate examples.

  1. Divide & conquer technique is a top-down approach to solve a problem. The algorithm which follows divide and conquer technique involves 3
  • steps:
    Divide the original problem into a set of sub problems.
    Conquer (or Solve) every sub-problem individually, recursive.
    Combine the solutions of these sub problems to get the solution of
    original problem.
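A tiny C illustration of the three steps, finding the maximum of an array by divide-and-conquer (an illustrative example, not a prescribed one):

#include <stdio.h>

int max_dc(const int a[], int lo, int hi)
{
    if (lo == hi)                        /* smallest sub-problem: one element */
        return a[lo];
    int mid   = (lo + hi) / 2;           /* divide                            */
    int left  = max_dc(a, lo, mid);      /* conquer the left half             */
    int right = max_dc(a, mid + 1, hi);  /* conquer the right half            */
    return left > right ? left : right;  /* combine the two solutions         */
}

int main(void)
{
    int a[] = {3, 9, 2, 7, 5};
    printf("%d\n", max_dc(a, 0, 4));  /* prints 9 */
    return 0;
}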

The greedy technique is used to solve an optimization problem. An optimization problem is one in which we are given a set of input values which are required to be either maximized or minimized (known as the objective function) with respect to some constraints or conditions. A greedy algorithm always makes the choice (greedy criterion) that looks best at the moment, in order to optimize the given objective function. That is, it makes a locally optimal choice in the hope that this choice will lead to an overall globally optimal solution. The greedy method does not always guarantee the optimal solution, but it generally produces solutions that are very close in value to the optimal.

The dynamic programming technique is similar to the divide-and-conquer approach. Both solve a problem by breaking it down into several sub-problems that can be solved recursively. The difference between the two is that in the dynamic programming approach, the results obtained from solving smaller sub-problems are reused (by maintaining a table of results) in the calculation of larger sub-problems. Thus dynamic programming is a bottom-up approach that begins by solving the smaller sub-problems, saving these partial results, and then reusing them to solve larger sub-problems until the solution to the original problem is obtained. Reusing the results of sub-problems (by maintaining a table of results) is the major advantage of dynamic programming, because it avoids the re-computation (computing results twice or more) of the same sub-problems. Thus the dynamic programming approach takes much less time than naïve or straightforward methods, such as the divide-and-conquer approach, which solves problems top-down and can involve many re-computations. The dynamic programming approach always guarantees an optimal solution.

The term “backtrack” was coined by the American mathematician D.H. Lehmer in the 1950s. Backtracking can be applied only to problems which admit the concept of a “partial candidate solution” and a relatively quick test of whether it can possibly be completed to a valid solution. Backtracking algorithms try each possibility until they find the right one. Backtracking is a depth-first search of the set of possible solutions. During the search, if an alternative doesn't work, the search backtracks to the choice point, the place which presented the different alternatives, and tries the next alternative. When the alternatives are exhausted, the search returns to the previous choice point and tries the next alternative there. If there are no more choice points, the search fails.

Branch-and-Bound (B&B) is a rather general optimization technique that applies where the greedy method and dynamic programming fail B&B design strategy is very similar to backtracking in that a state-spacetree is used to solve a problem. Branch and bound is a systematic method for solving optimization problems. However, it is much slower. Indeed, it often leads to exponential time complexities in the worst case. On the other hand, if applied carefully, it can lead to algorithms that run reasonably fast on average. The general idea of B&B is a BFS-like search for the optimal solution, but not all nodes get expanded (i.e., their children generated). Rather, a carefully selected criterion determines
which node to expand and when, and another criterion tells the algorithm when an optimal solution has been found. Branch and Bound (B&B) is the most widely used tool for solving large scale NP-hard combinatorial optimization problems.

 

EFS is designed to be implemented by a user and to be transparent; it can be used where it was not initially intended. EFS allows for Recovery Agents, and the default Recovery Agent is the Administrator. These agents have configured public keys that are used to enable the file recovery process. But the system is designed in such a way that only file recovery is possible; the recovery agent cannot learn the user's private key.


Data Recovery is provided for those companies and organizations that have a requirement to access data if an employee leaves or the encryption key is lost.

The policy for implementing Data Recovery is defined at a Domain Controller, and this policy will be enforced on every computer in that domain. In case EFS is implemented on a machine that is not part of a domain, the system will automatically generate and save Recovery Keys.

IPSec is a framework for ensuring secure private communications over IP networks. IPSec provides security for the transmission of critical and sensitive information over unprotected networks such as the Internet. IPSec VPNs use the services defined within IPSec to ensure confidentiality, integrity, and authenticity of data communications over a public network, like the Internet. IPSec operates at the network layer, protecting and authenticating IP packets between participating IPSec devices. IPSec provides the following network security services:

1. Data Confidentiality - The IPSec sender can encrypt packets before transmitting them across a network.
2. Data Integrity - The receiver can authenticate packets sent by the IPSec sender to ensure that the data has not been altered during transmission.
3. Data Origin Authentication - The IPSec receiver can authenticate the source of the IPSec packets sent. This service is dependent upon the data integrity service.
4. Anti-Replay - The IPSec receiver can detect and reject replayed packets.

In Windows 2000, you have two options for IPSec implementation: Transport Mode and L2TP Tunnel Mode. Transport mode is designed for securing communication between nodes on an internal network. L2TP Tunnel Mode is designed for securing communications between

The security systems and methods above are for securing the operating system and data on the physical hard disk. Such security is of no use if an attacker is able to sniff network packets.

Network Address Translation (NAT) is used to mask internal IP addresses with the IP address of the external Internet connection. Networks require NAT in their security policies to add an additional security "layer" between the Internet and the intranet. NAT functions by taking a request from an internal client and making that request to the Internet on behalf of the internal client. In this configuration, clients on the internal network (the local LAN) are not required to have a public IP address, thus conserving public IP addresses. The internal clients can be provided with IP addresses from the private network blocks. Private IP addresses are not routed on the Internet, and the address ranges are:

Private IP Addresses
10.0.0.0 - 10.255.255.255
172.16.0.0 - 172.31.255.255
192.168.0.0 - 192.168.255.255

However, Microsoft has designated a range for automatic private addressing, 169.254.0.0 - 169.254.255.255.

NAT is an integral part of Routing and Remote Access Services (RRAS), as well as part of Internet Connection Sharing (ICS). The version of NAT used by ICS is a scaled-down form of the full version, and does not allow for the level of configuration that the RRAS NAT allows. ICS is for a small office or for a home network, where there is one Internet connection that is to be shared by the entire network. All users connect via a single interface, usually connected via a modem, DSL, or cable access point.

When data/files are moved from one folder to another, what happens to the security permissions that were set to secure those files? When files are secured on an NTFS partition, how may their security settings be altered if those files are moved? In other words, if a file is defined as having Everyone - Allow - Read & Execute permissions, what will happen to those permissions if the file is moved to another folder? The rules in Windows 2000 regarding copying and moving files are the same as they were in Windows NT 4.0. By default, a file will keep the permissions assigned to it when it is moved to another folder on the same NTFS partition. If the file is moved to another NTFS partition, the file will inherit the permissions of the destination folder or partition. If a file is copied to any location, it will inherit the permissions of the destination folder or partition.

When installed, Windows 2000 creates a set of folders to store program and data files. Windows folders and subfolders correspond to DOS directories and subdirectories, but system folders do not.

Some of the system folders are:

  1. Desktop
  2. My Documents
  3. My Computer
  4. My Network Places
  5. Recycle Bin
  6. Internet Explorer

Descriptions of these folders are given below:

Desktop

The desktop includes:

the My Documents, My Computer, and My Network Places system folders. Files and folders can be created and saved here.

If the user creates folders or saves files on the desktop, these are stored in the Desktop folder under the user's own user profile.

My Documents
This icon is a shortcut to the actual folder that the user uses for data files.

My Computer
This is responsible for displaying:

• All local drives
• Shared network drives
• Mapped drives
• Control Panel icon

This is a completely virtual folder, i.e., no file can be created or saved in it. The My Computer folder is a system folder.

My Network Places
This is another virtual folder; it is responsible for providing access to all the network resources. Here you find the list of rights/privileges for all the jobs on your system.

Job list includes:

  • Accessing this computer from the Network
  • Backup files and directories
  • Restore files and directories (yes, it is a different right/privilege)
  • Load and unload device drivers --> Configure hardware, reserved for Administrators.

You can view in detail the list of groups with each right/privilege of networked computers. It provides the same functionality as was provided by the Network Neighborhood in Windows 95/98.

Recycle Bin
This folder is used to store files that are temporarily deleted from the system and has options for permanent deletion or restoring of files to their original locations.

Internet Explorer
Viewing Folders as Web Pages

Windows 2000 provides an opportunity to display each folder as a web page.

This feature can be activated/deactivated for all folders using the Web View option on the General tab of the Folder Options dialog box.

If "Enable web content in folders" is checked, the info pane is available at all times for all folders.

If "Use Windows classic folders" is selected, only a simple list of icons is viewed, without web content.

Four special attributes are associated with every file and folder for controlled access.

On new files that are created by users, these four attributes are always off.
These special attributes are:

  1. System
  2. Archive
  3. Read Only
  4. Hidden

Windows Explorer
It is an all-purpose system utility; it lets the user organise files in folders, allows searching for documents, and also permits data editing.

Windows explorer supports two views:

  1. Single folder view
  2. Two-pane explorer view.

    Using the single folder view, the contents of the current drive or folder can be viewed, whereas using the two-pane explorer view, all the drives, folders and resources on the user's computer and the network can be viewed in a tree structure.

    Arranging files and folders
    The contents of a folder window can be sorted by name, type, size or date. To sort files within a folder, pull down the View menu, choose Arrange Icons, and choose any among the following options:
    a. By name
    b. By type
    c. By size
    d. By Date

Even the width of the folder panes can be changed, by pointing to the vertical dividing line between the panes. When the mouse pointer changes to a two-headed arrow, click and drag.

In VBScript there are two kinds of procedures: the Sub procedure and the Function procedure.

Sub Procedures
A Sub procedure is a series of VBScript statements, enclosed by Sub and End Sub statements, that perform actions but don't return a value. A Sub procedure can take arguments (constants, variables, or expressions that are passed by a calling procedure). If a Sub procedure has no arguments, its Sub statement must include an empty set of parentheses ().
The following code shows a Sub procedure that uses two intrinsic, or built-in, VBScript functions, MsgBox and InputBox, to prompt a user for some information. It then displays the result of a calculation based on that information. The calculation is performed in a Function procedure created using VBScript; the Function procedure is shown after the following discussion.

Sub ConvertTemp()
temp = InputBox("Please enter the temperature in degrees F.", 1)
MsgBox "The temperature is " & Celsius(temp) & " degrees C."
End Sub

Function Procedures
A Function procedure is a series of VBScript statements enclosed by the Function and End Function statements. A Function procedure is similar to a Sub procedure, but can also return a value. A Function procedure can take arguments (constants, variables, or expressions that are passed to it by a calling procedure). If a Function procedure has no arguments, its Function statement must include an empty set of parentheses (). A Function returns a value by assigning a value to its name in one or more statements of the procedure. The return type of a Function is always a Variant.

In the following example, the Celsius function calculates degrees Celsius from degrees Fahrenheit. When the function is called from the ConvertTemp Sub procedure, a variable containing the argument value is passed to the function. The
result of the calculation is returned to the calling procedure and displayed in a message box.

Sub ConvertTemp()
temp = InputBox("Please enter the temperature in degrees F.", 1)
MsgBox "The temperature is " & Celsius(temp) & " degrees C."
End Sub
Function Celsius(fDegrees)
Celsius = (fDegrees - 32) * 5 / 9
End Function

A constant is a meaningful name that takes the place of a number or string and never changes. VBScript defines a number of intrinsic constants. You can get detailed information about these intrinsic constants from the VBScript Language Reference.

Creating Constants

You create user-defined constants in VBScript using the Const statement. This lets you create string or numeric constants with meaningful names and allows you to assign them literal values. For example:

Const MyString = "This is my string."
Const MyAge = 49

Note that the string literal is enclosed in quotation marks (" "). Quotation marks are the most obvious way to differentiate string values from numeric values. Date literals and time literals are represented by enclosing them in number signs (#). For example:

Const CutoffDate = #6-1-97#

You may want to adopt a naming scheme to differentiate constants from variables. This will save you from trying to reassign constant values while your script is running. For example, you might want to use a "vb" or "con" prefix on your constant names, or you might name your constants in all capital letters. Differentiating constants from variables eliminates confusion as you develop more complex scripts.

A variable is a convenient placeholder that refers to a computer memory location where you can store program information that may change while your script is running. For example, you might create a variable called ClickCount to store the number of times a user clicks an object on a particular Web page. Where the variable is stored in computer memory is unimportant. What is important is that you only have to refer to a variable by name to see its value or to change its value. In VBScript, variables are always of one fundamental data type, Variant.

Declaring Variables
You declare variables explicitly in your script using the Dim statement, the Public statement, and the Private statement. For example:
Dim DegreesFahrenheit

Dim Top, Bottom, Left, Right
You can also declare a variable implicitly by simply using its name in your script. That is not generally a good practice because you could misspell the variable name in one or more places, causing unexpected results when your script is running. For that reason, the Option Explicit statement is available to require explicit declaration of all variables.

Naming Restrictions
Variable names follow the standard rules for naming anything in VBScript. A variable name:

1- Must begin with an alphabetic character.
2- Cannot contain an embedded period.
3- Must not exceed 255 characters.
4- Must be unique in the scope in which it is declared.

Scope and Lifetime of Variables

The scope of a variable is determined by where you declare it. When you declare a variable within a procedure, only code within that procedure can access or change the value of that variable. It has local scope and is called a procedure-level variable. If you declare a variable outside a procedure, you make it visible to all the procedures in your script. This is a script-level variable, and it has script-level scope.

How long a variable exists defines its lifetime. The lifetime of a script-level variable extends from the time it is declared until the time the script is finished running. At procedure level, a variable exists only as long as you are in the procedure. When the procedure exits, the variable is destroyed. Local variables are ideal as temporary storage space when a procedure is executing. You can have local variables of the same name in several different procedures because each is recognized only by the procedure in which it is declared.

Assigning Values to Variables
Values are assigned to variables by creating an expression as follows: the variable is on the left side of the expression and the value you want to assign to the variable is on the right, with the '=' sign being the assignment operator. For example:

B = 200

Scalar Variables and Array Variables
Most of the time, you just want to assign a single value to a variable you have declared. A variable containing a single value is a scalar variable. At other times, it is convenient to assign more than one related value to a single variable. Then you can create a variable that can contain a series of values. This is called an array variable. Array variables and scalar variables are declared in the same way, except that the declaration of an array variable uses parentheses ( ) following the variable name. In the following example, a single-dimension array containing 11 elements is declared:

Dim A(10)

Although the number shown in the parentheses is 10, all arrays in VBScript are counted from base 0, so this array actually contains 11 elements. In such an array, the number of array elements is always the number shown in parentheses plus one. This kind of array is called a fixed-size array.

You assign data to each of the elements of the array using an index into the array. Beginning at zero and ending at 10, data can be assigned to the elements of an array as follows:

A(0) = 256
A(1) = 324
A(2) = 100
........
A(10) = 55

Similarly, the data can be retrieved from any element using an index into the
particular array element you want. For example:
...........
Somevariable = A(8)
...........

Arrays are not limited to a single dimension. You can have as many as 60 dimensions, although most people cannot comprehend more than three or four dimensions. Multiple dimensions are declared by separating an array's size numbers in the parentheses with commas. In the following example, the MyTable variable is a two-dimensional array consisting of 6 rows and 11 columns: Dim MyTable(5, 10)

In a two-dimensional array, the first number is always the number of rows;
the second number is the number of columns.

You can also declare an array whose size changes while your script is running. This is called a dynamic array. The array is initially declared within a procedure using either the Dim statement or using the ReDim statement. However, for a
dynamic array, no size or number of dimensions is placed inside the parentheses.

For Example :

Dim MyArray()
ReDim AnotherArray()

To use a dynamic array, you must subsequently use ReDim to determine the number of dimensions and the size of each dimension. In the following example, ReDim sets the initial size of the dynamic array to 25. A subsequent ReDim
statement resizes the array to 30, but uses the Preserve keyword to preserve the contents of the array as the resizing takes place.

ReDim MyArray(25)
........
ReDim Preserve MyArray(30)

There is no limit to the number of times you can resize a dynamic array, but you should know that if you make an array smaller than it was, you lose the data in the eliminated elements.

VBScript is a member of Microsoft's Visual Basic family of development products. Other members include Visual Basic (Professional and Standard Editions) and Visual Basic for Applications, which is the scripting language for Microsoft Excel. VBScript is a scripting language for HTML pages on the World Wide Web and corporate intranets. VBScript is powerful and has almost all the features of Visual Basic. One of the things you should be concerned about is the safety and security of client machines that access your Web site. Microsoft took this consideration into account when creating VBScript. Potentially dangerous operations that can be done in Visual Basic have been removed from VBScript, including the capability to access dynamic link libraries directly and to access the file system on the client machine. 

 Advantages of FDM:

  1. Users can be added to the system by simply adding another pair of transmitter modulator and receiver demodulator.
  2. FDM systems support full-duplex information flow (both sides communicating simultaneously), which is required by most applications.

Disadvantages of FDM:

  1. In an FDM system, the initial cost is high. This may include the cable between the two ends and the associated connectors for the cable.
  2. A problem with one user can sometimes affect the others.
  3. Each user requires a precise carrier frequency for transmission of its signals.

 ST Connectors: ST stands for Straight Tip. This is a slotted bayonet-type connector with a long ferrule, and a common connector for multi-mode fibers. The ST connector has been the mainstay of optical fiber connectors for many years. It can be found in almost every communications room worldwide, and is used mainly in data communications systems. The simple-to-use bayonet locking mechanism reduces the risk of accidental disconnection of fiber connections.

SC (Standard Connector) Connectors: A push/pull connector that can also be used with duplex fiber connections. The SC connector comprises a polymer body with a ceramic ferrule barrel assembly plus a crimp-over sleeve and rubber boot. These connectors are suitable for 900 µm and 2-3 mm cables. The connector is precision made to demanding specifications. The combination of a ceramic ferrule with a precision polymer housing provides consistent long-term mechanical and optical performance.

MT Connector: The MT-RJ connector is a development of the now legendary MT ferrule. MT stands for Multi-fiber Connector. The MT ferrule in its various designs has the ability to connect anything from 2 fibers in the MT-RJ to 72 fibers in the latest versions of the MPO connector.

 Fiber optic cable connectors are hardware installed on fiber cable ends to provide cable attachment to a transmitter, receiver, or other cable. In order for information to be transmitted efficiently, the fiber cores must be properly aligned. Connectors are usually devices that can be connected and disconnected repeatedly.

 The BNC connector (Bayonet Neill-Concelman) is a miniature quick-connect/disconnect RF connector used for coaxial cable. It features two bayonet lugs on the female connector; mating is achieved with only a quarter turn of the coupling nut. BNCs are ideally suited for terminating miniature-to-subminiature coaxial cable (e.g., RG-58 and RG-59 through RG-179 and RG-316). The connector is used with radio, television, and other radio-frequency electronic equipment, test instruments, and video signals, and was once a popular computer network connector. BNC connectors are made to match the characteristic impedance of the cable at either 50 ohms or 75 ohms. They are usually used at frequencies below 3 GHz and voltages below 500 volts.

 RJ stands for registered jack. RJ45 is a standard type of connector for network cables. RJ45 connectors are most commonly seen with Ethernet cables and networks. RJ45 connectors feature eight pins to which the wire strands of a cable interface electrically. Standard RJ-45 pin-outs define the arrangement of the individual wires needed when attaching connectors to a cable. RJ-45 connectors are of two types: male RJ-45 and female RJ-45.

 1. A class member function can be declared to be pure virtual by specifying the keyword 'virtual' in front of the declaration and putting '= 0' at the end of the function declaration.

2. A pure virtual function itself does nothing; it acts as a prototype in the base class and gives a derived class the responsibility to define the function.

3. Because pure virtual functions are not defined in the base class, the base class cannot have direct instances or objects. That is, a class with a pure virtual function acts as an abstract class that cannot be instantiated, though its concrete derived classes can be.

4. We cannot have objects of a class having a pure virtual function, but we can have pointers to it, which can in turn hold references to its concrete derived classes.

5. Pure virtual functions implement run-time polymorphism just as normal virtual functions do: the binding of functions to the appropriate objects is delayed until run time, meaning the function to invoke is decided at run time.

6. Pure virtual functions are meant to be overridden.

7. Only functions that are members of some class can be declared pure virtual; that means we cannot declare regular functions or friend functions as pure virtual.

8. The corresponding functions in the derived class must agree with the pure virtual function's name and signature; that is, both must have the same name and signature.

9. For a class to be abstract, at least one pure virtual function is a must.

10. The pure virtual functions in an abstract base class are never implemented. Because no objects of that type are ever created, there is no reason to provide implementations, and the ADT (Abstract Data Type) works purely as the definition of an interface to objects which derive from it.

11. It is possible, however, to provide an implementation to a pure virtual function. The function can then be called by objects derived from the ADT, perhaps to provide common functionality to all the overridden functions.
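To make the points above concrete, here is a minimal C++ sketch (the class names Shape and Rectangle are assumed for illustration):

#include <iostream>

// Abstract class: Shape cannot be instantiated because area() is pure virtual.
class Shape {
public:
    virtual double area() const = 0;   // pure virtual function
    virtual ~Shape() {}
};

// Concrete derived class: must define area() to become instantiable.
class Rectangle : public Shape {
    double w, h;
public:
    Rectangle(double w, double h) : w(w), h(h) {}
    double area() const { return w * h; }
};

int main() {
    // Shape s;                          // error: cannot create an object of an abstract class
    Shape *p = new Rectangle(3.0, 4.0);  // but a base-class pointer is allowed
    std::cout << p->area() << std::endl; // prints 12: resolved at run time
    delete p;
    return 0;
}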

 1. The function call takes slightly longer due to the virtual mechanism, and it also makes it more difficult for the compiler to optimize because it doesn't know exactly which function is going to be called at compile time. 

2. In a complex system, virtual functions can make it a little more difficult to figure out where a function is being called from. 

3. Virtual functions will usually not be inlined. 

4. The size of an object increases due to the virtual pointer.

  The virtual functions must be the members of some class.

 A class member function can be declared virtual by specifying the keyword 'virtual' in front of the function declaration. The syntax for declaring a virtual function is as follows:

virtual <return type> <function name>(<argument list>)
{ // function body }

 Virtual Functions enables derived (sub) class to provide its own implementation for the function already defined in its base (super) class.

 Virtual functions give derived-class functions the power to override a function in the base class with the same name and signature.

 Virtual functions cannot be static members.

 Only functions that are members of some class can be declared virtual; that means we cannot declare regular functions or friend functions as virtual.

 A virtual function can be a friend of another class.

 A virtual function in a base class must be defined, even though it may not be used.

 If a virtual function is called through a pointer that holds a reference to a base-class object, then the base-class version of the function will be called.

 The corresponding functions in the derived class must agree with the virtual function’s name and signature that means both must have same name and signature.

 C++ provides a solution to invoke the exact version of a member function, decided at runtime, using virtual functions. They are the means by which functions of the base class can be overridden by functions of the derived class. The keyword virtual provides the mechanism for defining virtual functions: when declaring the base-class member function, the keyword virtual is used with those functions which are to be bound dynamically.

The general syntax to declare a virtual function uses the following format:

class class_name // this denotes the base class of a C++ virtual function
{
public:
virtual return_type member_function_name(arguments) // this denotes the C++ virtual function
{
// function body
}
};

Virtual functions should be defined in the public section of a class to realize their full potential benefits. Such a declaration allows the decision of which function to use to be made at runtime, based on the type of the object pointed to by the base pointer rather than the type of the pointer itself. The examples of virtual functions provided in this unit illustrate the use of a base pointer to point to different objects for executing different implementations of the virtual functions.
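As an illustration of this mechanism, here is a minimal sketch (the class names Base and Derived are assumed):

#include <iostream>

class Base {
public:
    virtual void show() { std::cout << "Base::show" << std::endl; }
    virtual ~Base() {}
};

class Derived : public Base {
public:
    void show() { std::cout << "Derived::show" << std::endl; }  // same name and signature
};

int main() {
    Base b;
    Derived d;
    Base *ptr = &b;
    ptr->show();   // prints Base::show
    ptr = &d;      // the same base pointer now points to a derived object
    ptr->show();   // prints Derived::show: the call is bound at run time
    return 0;
}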

 

  1. The biggest advantage of polymorphism is the creation of reusable code: classes once written, tested, and implemented can easily be reused without caring about what is written inside them.

2. Polymorphic variables help with memory use, in that a single variable can be used to
store multiple data types (integers, strings, etc.) rather than declaring a different
variable for each data format to be used.

3. Applications are easily extendable: once an application is written using the concept of polymorphism, it can easily be extended by providing new objects that conform to the original interface. It is unnecessary to recompile the original program when adding new types; only re-linking is necessary for the new changes to take effect alongside the old application. This is one of the great achievements of C++ object-oriented programming. In programming there has always been a need for adding and customizing; by utilizing polymorphism, time and work effort are reduced, and future maintenance becomes easier.

4. It provides easier maintenance of applications

5. It helps in achieving robustness in applications.

 Block input/output functions read or write a block (a specific number of bytes) from or to a file. A block can be a record, a set of records, or an array. These functions are also defined in the standard library:

• fread( )
• fwrite( )

These two functions allow reading and writing of blocks of data. Their syntax is:

size_t fread(void *buf, size_t num_bytes, size_t count, FILE *fp);
size_t fwrite(const void *buf, size_t num_bytes, size_t count, FILE *fp);

In the case of fread(), buf is a pointer to a memory area that receives the data from the file; in fwrite(), it is a pointer to the information to be written to the file. num_bytes specifies the number of bytes in each item, and count the number of items to be read or written. These functions are quite helpful in the case of binary files. Generally they are used to read or write arrays of records from or to a file. The use of these functions is shown in the following program.
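Since the program itself is not reproduced here, the following minimal sketch shows the idea (the record structure and the file name records.dat are assumed):

#include <stdio.h>

struct record {      /* a sample record type */
    int id;
    float value;
};

int main(void) {
    struct record out[3] = { {1, 10.5f}, {2, 20.0f}, {3, 7.25f} };
    struct record in[3];
    FILE *fp;

    fp = fopen("records.dat", "wb");
    if (fp == NULL) return 1;
    fwrite(out, sizeof(struct record), 3, fp);   /* write 3 records as one block */
    fclose(fp);

    fp = fopen("records.dat", "rb");
    if (fp == NULL) return 1;
    fread(in, sizeof(struct record), 3, fp);     /* read the block back */
    fclose(fp);

    printf("First record: id=%d value=%.2f\n", in[0].id, in[0].value);
    return 0;
}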

 If the file contains data in the form of digits, real numbers, characters, and strings, then character input/output functions are not enough, as the values would be read in the form of characters. Also, if we want to write data in some specific format to a file, it is not possible with the functions described above. Hence C provides a set of formatted input/output functions. These are defined in the standard library and are discussed below:

fscanf() and fprintf()

These functions are used for formatted input and output. They are identical to scanf() and printf(), except that the first argument is a file pointer that specifies the file to be read or written, and the second argument is the format string. The syntax for these functions is:

int fscanf(FILE *fp, const char *format, ...);
int fprintf(FILE *fp, const char *format, ...);

Both functions return an integer: fscanf() returns the number of input items successfully matched and assigned, while fprintf() returns the number of characters written.
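A minimal sketch of their use follows (the file name student.txt and the values are assumed):

#include <stdio.h>

int main(void) {
    int roll = 42;
    float marks = 87.5f;
    FILE *fp;

    fp = fopen("student.txt", "w");
    if (fp == NULL) return 1;
    fprintf(fp, "%d %f\n", roll, marks);   /* write formatted data to the file */
    fclose(fp);

    fp = fopen("student.txt", "r");
    if (fp == NULL) return 1;
    fscanf(fp, "%d %f", &roll, &marks);    /* read the values back in the same format */
    fclose(fp);

    printf("roll=%d marks=%.1f\n", roll, marks);
    return 0;
}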

 If we want to read a whole line from a file, calling a character input function once per character is tedious; instead, C provides string input/output functions with which we can read or write a set of characters at one time. These are defined in the standard library and are discussed below:

• fgets( )
• fputs( )

These functions are used to read and write strings. Their syntax is:

int fputs(const char *str, FILE *stream);
char *fgets(char *str, int num, FILE *stream);

The integer parameter in fgets( ) indicates that at most num-1 characters are to be read, terminating at end-of-file or end-of-line. The end-of-line character will be placed in the string str before the string terminator, if it is read. If end-of-file is encountered before any characters are read, a null pointer is returned; otherwise str is returned. The fputs( ) function returns a non-negative number on success, or EOF if unsuccessful.
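A minimal sketch of their use follows (the file name notes.txt is assumed):

#include <stdio.h>

int main(void) {
    char line[81];
    FILE *fp;

    fp = fopen("notes.txt", "w");
    if (fp == NULL) return 1;
    fputs("first line\n", fp);             /* write whole strings at a time */
    fputs("second line\n", fp);
    fclose(fp);

    fp = fopen("notes.txt", "r");
    if (fp == NULL) return 1;
    while (fgets(line, 81, fp) != NULL)    /* read at most 80 characters per call */
        printf("%s", line);
    fclose(fp);
    return 0;
}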

 ANSI C provides a set of functions for reading and writing character by character or one byte at a time. These functions are defined in the standard library. They are listed and described below:

• getc()

• putc()

getc( ) is used to read a character from a file and putc( ) is used to write a character to a file. Their syntax is as follows:

int putc(int ch, FILE *stream);

int getc(FILE *stream);

The file pointer indicates the file to read from or write to. The character ch is formally declared as an integer in the putc( ) function, but only the low-order byte is used. On success putc( ) returns the character written (in integer form), or EOF on failure. Similarly, getc( ) returns an integer, of which only the low-order byte is used; it returns EOF when end-of-file is reached. getc( ) and putc( ) are defined in stdio.h as macros, not functions.

fgetc() and fputc()
Apart from the above two macros, C also defines equivalent functions to read / write
characters from / to a file. These are:

int fgetc(FILE *stream);
int fputc(int c, FILE *stream);

To check the end of file, C includes the function feof( ) whose prototype is:

int feof(FILE *fp);

It returns a non-zero value if end-of-file has been reached, or 0 if not. The following code fragment explains the use of these functions.
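Since the fragment is not reproduced here, the following minimal sketch shows the idea (the file names source.txt and copy.txt are assumed):

#include <stdio.h>

int main(void) {
    FILE *src = fopen("source.txt", "r");
    FILE *dst = fopen("copy.txt", "w");
    int ch;

    if (src == NULL || dst == NULL) return 1;
    while ((ch = getc(src)) != EOF)   /* read character by character */
        putc(ch, dst);                /* and write each one to the copy */
    if (feof(src))                    /* confirm the loop ended at end-of-file */
        printf("End of file reached.\n");
    fclose(src);
    fclose(dst);
    return 0;
}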

 After opening the file, the next thing needed is a way to read or write it. There are several functions and macros defined in the header file stdio.h for reading and writing files. These functions can be categorized according to the form and type of data read or written to a file. They are classified as:

• Character input/output functions

• String input/output functions

• Formatted input/output functions

• Block input/output functions.

 In Java all drawing takes place via a Graphics object. This is an instance of the class

java.awt.Graphics.

Initially, the Graphics object you use will be passed as an argument to an applet's paint() method. The drawing can be done on Applets, Panels, Frames, Buttons, Canvases, etc.

Each Graphics object has its own coordinate system, and methods for drawing strings, lines, rectangles, circles, polygons, etc. Drawing in Java starts with a particular Graphics object. You get access to the Graphics object through the paint(Graphics g) method of your applet.

Each draw method call will look like

g.drawString("Hello World", 0, 50);

where g is the particular Graphics object with which you're drawing. For convenience's sake, in this unit the variable g will always refer to a pre-existing object of the Graphics class. This is not a rule; you are free to use some other name for the particular Graphics context, such as myGraphics or appletGraphics or anything else.

 Graphics Context and Graphics Class

• Enables drawing on the screen
• A Graphics object manages the graphics context
• It controls how objects are drawn
• Class Graphics is abstract
• It cannot be instantiated
• This contributes to Java's portability
• The Component class method paint takes a Graphics object.

The Graphics class is the abstract base class for all graphics contexts. It allows an application to draw onto components that are realized on various devices, as well as on to off-screen images.
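A minimal applet sketch follows (the class name HelloApplet is assumed for illustration):

import java.applet.Applet;
import java.awt.Graphics;

public class HelloApplet extends Applet {
    public void paint(Graphics g) {          // g is the Graphics context supplied by the system
        g.drawString("Hello World", 0, 50);  // draw text at (0, 50)
        g.drawLine(0, 60, 100, 60);          // a horizontal line
        g.drawRect(0, 70, 100, 40);          // an outlined rectangle
    }
}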

Constructors:
public ServerSocket(int port): creates a server socket on the specified port with a queue length of 50. A port number of 0 creates a socket on any free port.

public ServerSocket(int port, int QueLen): creates a server socket on the specified port with a queue length of QueLen.

public ServerSocket(int port, int QueLen, InetAddress localAdd): creates a server socket on the specified port with a queue length specified by QueLen. On a multihomed host, localAdd specifies the IP address to which this socket binds.

Methods:
public Socket accept(): listens for a connection to be made to this socket and accepts it.
public void close(): closes the socket.

 

 Constructors

public Socket(InetAddress addr, int port): creates a stream socket and connects it to the specified port number at the specified IP address.
public Socket(String host, int port): creates a stream socket and connects it to the specified port number at the specified host.

Methods:
InetAddress getInetAddress(): returns the InetAddress associated with the socket.
int getPort(): returns the remote port to which the socket is connected.
int getLocalPort(): returns the local port to which the socket object is bound.
public InputStream getInputStream(): returns the InputStream associated with the socket.
public OutputStream getOutputStream(): returns the OutputStream associated with the socket.
public synchronized void close(): closes the socket.
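A minimal sketch tying these constructors and methods together follows (the port number 5000 and the echo behaviour are assumed for illustration):

import java.io.*;
import java.net.*;

public class EchoServer {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(5000);  // bind to port 5000
        Socket client = server.accept();               // wait for a client connection
        BufferedReader in = new BufferedReader(
                new InputStreamReader(client.getInputStream()));
        PrintWriter out = new PrintWriter(client.getOutputStream(), true);
        String line = in.readLine();                   // read one line from the client
        out.println("Echo: " + line);                  // send it back
        client.close();
        server.close();
    }
}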

The following are the basic steps to be followed to develop a distributed application using RMI:

• Design and implement the components of your distributed application.

• Compile sources and generate stubs.

• Make classes network accessible.

• Start the application. 
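As a sketch of the first step, a remote interface might look like the following (the interface name Hello is assumed; the implementation class would extend java.rmi.server.UnicastRemoteObject and be registered with the RMI registry):

import java.rmi.Remote;
import java.rmi.RemoteException;

// Every remote interface extends Remote, and every remote
// method must declare RemoteException.
public interface Hello extends Remote {
    String sayHello() throws RemoteException;
}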

In this case of clustering, the hierarchical decomposition is done with a bottom-up strategy: it starts by creating atomic (small) clusters, adding one data object at a time, and then merges them together into ever bigger clusters. The procedure is iterative and continues until all the data points are brought under one single big cluster, at which point the termination conditions are met.

Basic algorithm of agglomerative clustering

  1. Compute the proximity matrix.
  2. Treat each data point as a cluster of its own.
  3. Repeat:
  4. Merge the two clusters that are closest together.
  5. Update the proximity matrix.
  6. Until only one cluster remains.
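A minimal sketch of this algorithm follows, in Python for readability (the sample one-dimensional data and the single-linkage distance are assumed for illustration):

def agglomerative(points):
    # Step 2: start with each data point in its own cluster
    clusters = [[p] for p in points]
    while len(clusters) > 1:          # Step 6: stop when one cluster remains
        best = None
        # Steps 1 and 5: (re)compute pairwise cluster distances
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]   # Step 4: merge the two closest clusters
        del clusters[j]
        print(clusters)               # show the hierarchy as it forms

agglomerative([1.0, 1.5, 5.0, 5.2, 9.0])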

 

  • Hierarchical clustering does not have a single mathematical objective.
  • The methodologies applied for calculating the similarity index between clusters do not apply fully in every situation; each technique has its own merits and demerits.
  • Due to its high space and time complexity, this clustering algorithm is not suitable for huge data.

 Hierarchical clustering analysis is an algorithm that groups data points with similar properties, and these groups are termed "clusters". As a result of hierarchical clustering, we get a set of clusters, and these clusters are always different from each other. Clustering of the data is classified as:

 Agglomerative Clustering (building up clusters using a bottom-up strategy)
 Divisive Clustering (splitting clusters using a top-down strategy)

    The hierarchical clustering technique in terms of space and time complexity:

     Space complexity: When the number of data points is large, the space required for the hierarchical clustering technique is large, since the similarity matrix must be stored in RAM. The space complexity is of the order of the square of n.

    Space complexity = O(n²), where n is the number of data points.

     Time complexity: The time complexity is also very high, because we have to execute n iterations, and in each iteration update and restore the similarity matrix. The time complexity is of the order of the cube of n.

    Time complexity = O(n³), where n is the number of data points.

      The accounting cycle is the process of identifying, analyzing, and recording the accounting transactions of a company. It has seven to eight steps, beginning when a transaction occurs and ending with its inclusion in the financial statements. The following is a depiction of the steps in the accounting cycle. Let us discuss them in brief:

    1. Financial transactions occur, such as selling inventory, buying raw materials, or making lease payments.

    2. Those transactions are noted in the appropriate financial journal, depending
    on what the money was spent on or originated from. Debits are used to
    indicate money spent and credits are used for money that is received.

    3. The transactions are then posted to the accounts in the general ledger, which is the list of all the financial accounts of the business that they impact, such as rent, wages, or marketing.

    4. At the end of the accounting period, you run a trial balance to see if all the
    numbers balance.

    5. If the trial balance does not balance, the next step is to find the cause of the imbalance and correct it.

    6. At the end of the accounting period, adjustment entries must be posted to
    accounts for accruals and deferrals.

    7. Once the accounts are balanced, financial statements are prepared.

    8. At the end of the period, the books are closed out and new revenue and
    expense accounts created with zero balances. These are used for the next
    accounting period.

    A trial balance can trace mathematical inaccuracy in the general ledger. However, there are a number of errors that cannot be detected by the trial balance. Let us discuss those errors in brief:

    Error of complete omission: The transaction was not entered into the system.

    Error of original entry: The double-entry transaction includes the wrong amounts on both sides.

    Error of reversal: A double-entry transaction was entered with the correct amounts, but the account to be debited was credited and the account to be credited was debited.

    Error of Principle: The entered transaction violates the fundamental principles of accounting. For example, the amount entered was correct and the appropriate side was chosen, but the type of an account was wrong (e.g., expense account instead of liability account).

    Error of Commission: The transaction amount is correct, but the account debited or credited is wrong. It is similar to the principle error described above, but commission error is usually a result of oversight, while principle error is a
    consequence of a lack of knowledge of accounting principles.

     

     Ledger: Accounts are prepared on the basis of entries made in the journal. The book that contains the accounts is called a ‘ledger’. A ledger is also called a secondary book, as the entries in the ledger are made subsequent to the journal.

     Journal: Transactions are first entered in this book to show which account should be debited and which should be credited. The journal is also called a primary book, as it is a book of first entry. Transactions are recorded in it in chronological order.

     Ans : As per the accounting equation, the broad categories of the account are:

    Assets: Includes all the resources which the firm has.

    Liabilities: Amounts that the firm owes to outsiders

    Capital: Amounts that the firm owes to the owners or proprietors who have invested in the firm.

    Expenses: Amounts that have been spent, or even lost, in carrying on operations.

    Incomes: Amounts earned by the firm.
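    For reference, these categories are tied together by the accounting equation: Assets = Liabilities + Capital (incomes increase capital, while expenses reduce it).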

    Accounts may be classified in another manner:

    Personal Account : Personal accounts relate to persons, debtors, or creditors. Examples: ABC & Co., Ram Account, etc.

    Real Account : Accounts which relate to assets of the firm. For example,
    Machinery, Furniture, Cash, Plant, Land etc.

    Nominal Account : Accounts which relate to expenses, losses, gains, revenue, etc., like wages, salary, interest, commission, etc. The net result of all the nominal accounts is reflected as profit or loss, which is transferred to the capital account. Nominal accounts are, therefore, temporary.

    On the basis of the above three classifications of accounts, the three basic rules for recording transactions are:

    1. Personal account
      Debit the receiver and credit the giver

    2. Real Account
    Debit what comes in and credit what goes out

    3. Nominal Account
    Debit all expenses/losses and credit all income/gains