
Chapter 6 - Foundations of System Development
Brief Contents
I. Introduction to System Development

II. Structured Development Approaches
i. Structured development
ii. 4GLs + Software Prototyping
iii. Computer-Aided Software Engineering (CASE)
iv. Object-Oriented Development (OO)

III. System Integration
i. ERP Systems
ii. Middleware

IV. Systems as Planned Organizational Change
i. Business process reengineering (BPR)
ii. Business process management (BPM)
iii. Total quality management (TQM)

V. 3-tier System Architecture

VI. Review Questions
i. What are the goals of the traditional system development life cycle approach?
ii. Define the components of a computer-aided software engineering system.
iii. What is a platform inter-organizational system? Give a few examples.
iv. What are the five steps in building a Web Service?
v. Describe the different types of IT-enabled organizational change.

I. Introduction to System Development
What is system development?
System development is a general term for the structured, organized processes used to develop information technology and embedded software systems, managed as projects. A system development approach refers to the framework that is used to structure, plan and control the process of developing an information system. There are several system development approaches, such as structured development, Fourth-Generation Languages (4GLs), software prototyping, Computer-Aided Software Engineering (CASE), Object-Oriented (OO) development, and client-server computing. Since no single approach is suitable for all projects, each of the available approaches is best suited to specific types of projects, based on various technical, organizational, project and team considerations. Each system development approach has its own System Development Life Cycle (SDLC) methodology, such as the Waterfall model or the Spiral model.

Systems development is the process of defining, designing, testing, and implementing a new software application or program. It could include the internal development of customized systems, the creation of database systems, or the acquisition of third party developed software. Written standards and procedures must guide all information systems processing functions. The organization’s management must define and implement standards and adopt an appropriate system development life cycle methodology governing the process of developing, acquiring, implementing, and maintaining computerized information systems and related technology.

Why does an organization need system development?
There are three major reasons why an organization needs system development (that is, building a new system to replace an old one entirely, or retrofitting the old system):
(1) the old system no longer operates as expected;
(2) organizational growth, such as progressively wider information requirements, progressively mounting data-processing volume, and changes in accounting principles, forces the compilation of a new system, because the old system is no longer effective;
(3) in a competitive marketplace, the efficiency of information determines a company's success or failure in reaching opportunities.

The System Development Life Cycle (SDLC) is the process of creating or altering systems, and the models and methodologies that people use to develop these systems. The concept generally refers to computer or information systems.
In software engineering, the SDLC concept underpins many kinds of software development methodologies. These methodologies form the framework for planning and controlling the creation of an information system: the software development process.


The SDLC is a logical process used by a systems analyst to develop an information system, including requirements, validation, training, and user ownership. Any SDLC should result in a high-quality system that meets or exceeds customer expectations, reaches completion within time and cost estimates, works effectively and efficiently in the current and planned information technology infrastructure, and is inexpensive to maintain and cost-effective to enhance.



These days, the term "systems development" means much more than the design and development of a single software application. It has more to do with the accomplishment of a critical objective by a business or governmental organization, which more than likely requires the careful planning and coordination of multiple applications under construction. Systems development also considers all business resources required for successful results: hardware, people, processes, and software. Furthermore, it represents a new approach to long-range planning that stands in sharp contrast to the gradual evolution of applications over time -- an evolution which has led to often difficult integration requirements as organizations try to squeeze more performance and capabilities from their legacy systems.


Why are system development approaches important?
Developing reliable, stable software is a labor-intensive and expensive business, and the risk of software development is high. Despite the rapid growth of the software industry, documented reports of software project failures are easy to find on the Internet, which highlights the need for disciplined approaches to developing software systems. Finally, large-scale software systems are difficult to develop: they need long development times and large teams, which bring problems of technology change, requirement change, and communication. This is why system development approaches are important.

II. Structured Development Approaches

Structured development
An organization's information system, like a company's products, has a life cycle. The System Development Life Cycle (SDLC) covers an information system (IS) from conception to retirement. There are five major stages: (1) system identification, selection, and planning; (2) system analysis; (3) system design; (4) system implementation; and (5) system maintenance. Structured development appeared in the 1970s; it follows the SDLC and handles the complexities of system design and development with more discipline, higher reliability and fewer errors, and more efficient use of resources. The objective of structured development is to make the IS process more standard and efficient.

Figure: System Development Life Cycle (SDLC)
References: Leonard M. Jessup, Joseph S. Valacich, "Information Systems Foundations", 2000. P.420

The traditional System Development Life Cycle (SDLC) is the oldest method for building information systems. It is intended to develop information systems in a very deliberate, structured and methodical way, working through each stage of the life cycle in turn.
Ref.: http://www.texasittraining.com/

Structured Development follows the "Waterfall" model
Figure: System Development Methodologies - Waterfall model. Framework type: Linear
Stage 1: System identification, system selection and system planning
requires feasibility analysis, including technical feasibility and economic feasibility. A project plan and statement of work are produced in this stage.
(Key idea for this stage: think about why the organization needs system development)
Stage 2: System analysis
involves modeling organizational data, organizational processes, and organizational processing logic. It does not involve any details of system implementation.
(Key idea for this stage: think about what the system needs to do)
Stage 3: System design
requires designing forms and reports; designing interfaces and dialogues; designing databases and files; designing processing and logic.
(Key idea for this stage: think about how to meet the system's requirements)
Stage 4: System implementation
involves software programming and testing; system conversion, documentation, training and support.
Stage 5: System maintenance
involves the maintenance process: (1) obtain maintenance request; (2) transform requests into changes; (3) design changes; and (4) implement changes.

SDLC Stage | Key Participants | Tools / Techniques
(1) System Identification, Selection and Planning | Project manager | TCO, project management software
(2) System Analysis | System analyst, users | Interviews, observing users at work, DFD
(3) System Design | System analyst or designer | System flowchart, structure chart
(4) System Implementation | Development team, users | Direct cutover, parallel conversion, pilot testing, staged conversion
(5) System Maintenance | Internal IS staff, external consultant |

The table shows the key participants, tools and techniques in each SDLC stage.
References: Leonard M. Jessup, Joseph S. Valacich, "Information Systems Foundations", 2000

The SDLC is a suitable approach when building a large or complex system, because the development workflow clearly separates the different processes and proceeds step by step. Each stage must be completed before the next stage can begin, and each stage provides clear inputs for the next. As a result, the system is more stable and better matches user requirements.[g3-9202]

The System Development Life Cycle (SDLC) is a type of methodology used to build information systems through phases such as planning, analysis, design, and implementation. Several SDLC models exist. The oldest is the waterfall model: a sequential software development process in which progress is seen as flowing downwards, like a waterfall, through the phases of requirements analysis and definition, system and software design, implementation and unit testing, integration and system testing, and operation and maintenance. This model is mostly used for large systems engineering projects where a system is developed at several sites. It is only appropriate when the requirements are well understood and changes will be fairly limited during the design process. The waterfall model has the following properties:
  • A number of documents are approved in each phase
  • The following phase should not start until the previous phase has finished
  • In practice, these stages overlap and feed information into each other
  • In practice it is therefore not a simple linear model, but a sequence of iterations of the development activities

Classical Waterfall model approach
Figure: 5 phases of the Waterfall model

Requirements analysis and definition
  • The system's services, constraints and goals are established by consultation with system users
  • These are then defined in detail and serve as the system specification

System and software design
  • System design process:
    Partitions the requirements to either hardware or software systems and sets up an overall system architecture
  • Software design process:
    Identifies and describes the fundamental software system abstractions and their relationships

Implementation and unit testing
  • The software design is realized as a set of programs or program units
  • Unit testing verifies that each unit meets its specification

Integration and system testing
  • Program units are integrated and tested as a complete system
  • Ensures that the software requirements have been met
  • If everything is fine, the system is delivered to the customer

Operation and maintenance
  • The system is installed and put into practical use
  • Maintenance
    - Corrects errors that were not discovered in earlier stages of the life cycle
    - Improves the implementation of the system units
    - Enhances the system's services as new requirements are discovered

Ref: http://en.wikipedia.org/wiki/Waterfall_model
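The strictly sequential flow described above can be sketched in code: each phase must finish before the next begins, and each phase's output becomes the next phase's input. The phase functions below are hypothetical placeholders, not part of any real methodology toolkit; only the ordering constraint is the point.

```python
# Minimal sketch of the waterfall model: phases run strictly in order,
# and each consumes the artifact produced by the previous phase.
# All phase functions are hypothetical placeholders.

def requirements(goals):
    # Consult users and produce a system specification
    return {"spec": sorted(goals)}

def design(artifact):
    # Partition the specification into an overall architecture
    return {**artifact, "architecture": "3-tier"}

def implementation(artifact):
    # Realize the design as program units
    return {**artifact, "units": ["ui", "logic", "db"]}

def integration_and_test(artifact):
    # Integrate the units and test the system as a whole
    return {**artifact, "tested": True}

def operation_and_maintenance(artifact):
    # Install the system and put it into practical use
    return {**artifact, "deployed": True}

WATERFALL_PHASES = [requirements, design, implementation,
                    integration_and_test, operation_and_maintenance]

def run_waterfall(goals):
    artifact = goals
    for phase in WATERFALL_PHASES:  # no phase starts before the previous finishes
        artifact = phase(artifact)
    return artifact

system = run_waterfall(["billing", "reporting"])
print(system["tested"], system["deployed"])  # True True
```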

The purpose of the SDLC is to divide the IS development life cycle into structured and methodical phases. The SDLC can be divided into ten phases, during which defined IT work products are created or modified. Each phase depends on the others. Depending on the size and complexity of the project, phases may be combined or may overlap.

Figure: Definition of 10 phases of SDLC

Pros and Cons of SDLC:

Pros:
  • Easy to control
  • Easy to handle large projects
  • Detailed steps for each phase
  • Costs and completion targets can be evaluated
  • Documentation is produced at each phase, and the model fits with other engineering process models
  • Development and design standards
  • Easy to maintain

Cons:
  • Increased development time
  • High development cost
  • Difficulty of accommodating change after the process is underway
  • Iterations are costly and involve significant rework
  • Inflexible partitioning of the project into distinct stages makes it difficult to respond to changing customer requirements
  • After a number of iterations, it is normal to freeze parts of the development and ignore remaining errors and omissions

Ref: http://en.wikipedia.org/wiki/Systems_Development_Life_Cycle

The Systems Development Life Cycle (SDLC) is a conceptual model used in project management that describes the stages involved in an information system development project, from an initial feasibility study through maintenance of the completed application. Various SDLC methodologies have been developed to guide the processes involved, including the waterfall model (the original SDLC method), rapid application development (RAD), joint application development (JAD), the fountain model and the spiral model.


Feasibility: a study used to determine whether the project should get the go-ahead. If the project is to proceed, the feasibility study will produce a project plan and budget estimates for the future stages of development

Analysis: includes a detailed study of the business needs of the organization. Options for changing the business process may be considered

Design: focuses on high-level design (what programs are needed and how they will interact), low-level design, and interface design

Implementation: translates the design into code

Testing: programs are written as a series of individual modules, each subjected to separate and detailed testing. The separate modules are then brought together and tested as a complete system

Maintenance: inevitably the system will need maintenance, since software will undergo change once it is delivered to the customer

Some organizations apply the SDLC but modify its specifics. For example, the National Aeronautics and Space Administration (NASA) has taken extra steps in the development of their information systems, adding three steps to the generic five-step model. NASA's 8-step SDLC: (1) Concept and initiation; (2) Requirements; (3) Architectural design; (4) Detailed design; (5) Implementation; (6) Integration and test; (7) Acceptance and delivery; and (8) Sustaining engineering and operations.
Figure: Comparison of NASA SDLC and generic SDLC
Why do some organizations apply the SDLC but modify its specifics?
(1) Each organization has specific demands it must meet to operate at maximum efficiency;
(2) the SDLC is just a standard guideline for implementing systems and sometimes doesn't cover all issues;
(3) by modifying the SDLC to suit specific applications, time and money can be saved and maximum efficiency can be reached more easily.
References: Leonard M. Jessup, Joseph S. Valacich, "Information Systems Foundations", 2000. P.421
NASA homepage http://www.nasa.gov/
NASA - Wikipedia http://en.wikipedia.org/wiki/NASA

4GLs + Software Prototyping

A 4GL is a programming language or programming environment designed with a specific purpose in mind, such as the development of commercial business software.

Characteristics and Functions of Fourth-Generation Languages:
(1) Database management;
(2) Data dictionary;
(3) Links to other database management systems;
(4) Interactive query facilities;
(5) Report generator;
(6) Non-procedural language;
(7) Selection and sorting;
(8) Word processor, text and graphics editor;
(9) Programming interface and reusable code;
(10) Reusable software components and repositories;
(11) Backup and recovery;
(12) Screen generator;
(13) Macros library;
(14) Software development library;
(15) Security and privacy safeguards;
(16) Data analysis and modeling tools.
References: 4GLs- Wikipedia, the free encyclopedia http://en.wikipedia.org/wiki/Fourth-generation_programming_language [g3-0386]
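Characteristic (6), a non-procedural language, is easiest to see by contrast with a third-generation language. SQL is a classic example of the 4GL style: the programmer states what result is wanted (selection and sorting, items (4), (5) and (7) above) and the system works out how to produce it. The sketch below uses Python's standard sqlite3 module; the table and data are invented for illustration.

```python
import sqlite3

# Invented sample data for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("acme", 120.0), ("globex", 75.5), ("acme", 30.0)])

# 3GL style: a procedural loop spelling out HOW to select, total and sort.
totals = {}
for customer, amount in conn.execute("SELECT customer, amount FROM orders"):
    totals[customer] = totals.get(customer, 0) + amount
report_3gl = sorted(totals.items(), key=lambda kv: -kv[1])

# 4GL style: one declarative statement saying WHAT report is wanted.
report_4gl = conn.execute(
    "SELECT customer, SUM(amount) FROM orders "
    "GROUP BY customer ORDER BY SUM(amount) DESC").fetchall()

print(report_3gl)  # [('acme', 150.0), ('globex', 75.5)]
print(report_4gl)  # [('acme', 150.0), ('globex', 75.5)]
```

Both produce the same report; the difference is that the declarative version leaves selection, grouping and sorting to the database engine.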

Software prototyping emerged in the early 1980s.
Franz Edelman described software prototyping as "a quick and inexpensive process of system development": "A prototype is a software system that is created quickly - often within hours, days or weeks, rather than months or years."
Characteristics of software prototyping:
(1) Not a standalone, complete development methodology, but rather an approach to handling selected portions of a larger, more traditional development methodology (e.g. incremental, spiral or rapid application development (RAD)).
(2) Attempts to reduce inherent project risk by breaking a project into smaller segments and providing more ease of change during the development process.
(3) The user is involved throughout the process, which increases the likelihood of user acceptance of the final implementation.
(4) Small-scale mock-ups of the system are developed following an iterative modification process until the prototype evolves to meet the users' requirements.
(5) While most prototypes are developed with the expectation that they will be discarded, it is possible in some cases to evolve from prototype to working system.
(6) A basic understanding of the fundamental business problem is necessary to avoid solving the wrong problem.
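The iterative character of points (2) and (4) can be sketched as a feedback loop: build a small mock-up, have the user review it, refine it, and repeat until it meets the requirements. The user_review function below is a hypothetical stand-in for real user feedback, and the loop is only a sketch of the process.

```python
# Sketch of the iterative prototyping loop: refine a small mock-up
# until it satisfies the user's requirements.
# user_review is a hypothetical stand-in for real user feedback.

def user_review(prototype, requirements):
    # Report which requirements the prototype does not yet cover.
    return [r for r in requirements if r not in prototype["features"]]

def prototype_loop(requirements, max_iterations=10):
    prototype = {"features": [], "version": 0}
    for _ in range(max_iterations):
        missing = user_review(prototype, requirements)
        if not missing:                               # user accepts the prototype
            return prototype
        prototype["features"].append(missing[0])      # refine one aspect per cycle
        prototype["version"] += 1
    return prototype

final = prototype_loop(["search", "export", "login"])
print(final["version"], final["features"])  # 3 ['search', 'export', 'login']
```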

Advantages of prototyping:
(1) "Addresses the inability of many users to specify their information needs, and the difficulty of systems analysts to understand the user's environment, by providing the user with a tentative system for experimental purposes at the earliest possible time." (Janson and Smith, 1985)
(2) "Can be used to realistically model important aspects of a system during each phase of the traditional life cycle." (Huffaker, 1986)
(3) Improves both user participation in system development and communication among project stakeholders.
(4) Especially useful for resolving unclear objectives; developing and validating user requirements; experimenting with or comparing various design solutions or
investigating both performance and the human computer interface.
(5) Potential exists for exploiting knowledge gained in an early iteration as later iterations are developed.
(6) Helps to easily identify confusing or difficult functions and missing functionality.
(7) May generate specifications for a production application.
(8) Encourages innovation and flexible designs.
(9) Provides quick implementation of an incomplete, but functional, application.

Disadvantages of prototyping:
(1) Prototype may not have enough checks and balances incorporated.
(2) Approval process and control is not strict.
(3) Requirements may frequently change significantly.
(4) Identification of non-functional elements is difficult to document.
(5) Developers may prototype too quickly, without enough time to analyze system structures, and may misunderstand customers' objectives.
References: Software prototyping- Wikipedia, the free encyclopedia http://en.wikipedia.org/wiki/Software_prototyping#Advantages_of_prototyping [g3-0386]

Figure: The prototyping process
References: Leonard M. Jessup, Joseph S. Valacich, "Information Systems Foundations", 2000. P.471

A prototype does not need to be a functional program; we can draw the user interface design on paper, or design the layout with action elements (e.g. text boxes, radio buttons, pull-down menus) that have no function behind them. The main purpose is to let the user understand the program design, capture user requirements, and show the program workflow. This process is very important for the customer and the developer to reach a common consensus.

Figure: Waterfall-prototyping SDLC model; Framework: Linear
The waterfall-prototyping SDLC model is among the early adaptations of the waterfall model; the prototyping stage addresses some of the waterfall model's shortcomings. The most important stage in this model is the iterative development of the prototype until it simulates the entire business requirements of the client application.

Figure: Prototyping SDLC model; Framework: Iterative
The iterative prototyping SDLC model is most suitable for a project developing an online system that requires extensive user dialog, or for a less well-defined expert or decision support system.
References: http://www.synlog.net/images/Waterfall-2.jpg

Reference: 1GL, 2GL and 3GL
1GL: A first-generation programming language is a machine-level programming language. Originally, no translator was used to compile or assemble the first-generation language. The first-generation programming instructions were entered through the front panel switches of the computer system.The main benefit of programming in a first-generation programming language is that the code a user writes can run very fast and efficiently, since it is directly executed by the CPU. However, machine language is a lot more difficult to learn than higher generational programming languages, and it is far more difficult to edit if errors occur.

2GL: Second-generation programming language is a generational way to categorize assembly languages. The term was coined to provide a distinction from higher-level third-generation programming languages (3GL) such as COBOL and from earlier machine code languages. Second-generation programming languages have the following properties:
The code can be read and written by a programmer.
To run on a computer it must be converted into a machine readable form, a process called assembly.
The language is specific to a particular processor family and environment.

3GL: Third-generation programming language (3GL) is a refinement of a second-generation programming language. Whereas a second generation language is more aimed to fix logical structure to the language, a third generation language aims to refine the usability of the language in such a way to make it more user friendly. Most "modern" languages (BASIC, C, C++, C#, Pascal, and Java) are also third-generation languages.
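The difference in abstraction level between the generations can be made concrete: a single 3GL statement compiles down to many lower-level, assembly-like instructions of the kind a 1GL/2GL programmer would write by hand. Python's standard dis module can show the bytecode behind one high-level line (a rough analogue only, since bytecode targets a virtual machine rather than a CPU).

```python
import dis

# One high-level (3GL-style) statement...
def average(numbers):
    return sum(numbers) / len(numbers)

# ...expands into many lower-level, assembly-like instructions.
instructions = list(dis.get_instructions(average))
print(len(instructions), "bytecode instructions for one source line")
print(average([2, 4, 6]))  # 4.0
```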


Computer-Aided Software Engineering (CASE)
A technique for using computers to help with one or more phases of the software life-cycle, including the systematic analysis, design, implementation and maintenance of software. Adopting the CASE approach to building and maintaining systems involves software tools and training for the developers who will use them.

For example, the analysis of complex electrode configurations is an important task in the design of high-voltage apparatus and test set-ups. Computer simulations in the design phase can reveal weaknesses and possible faults before expensive prototypes are built. Multi-purpose field analysis programs are available, providing graphical user interfaces coupled with computer-aided design (CAD) environments. Through these aided tools, we can get a clear picture of the infrastructure in the analysis and design phases. Integrated environments, however, are still expensive and dependent on high-performance computer equipment. Most field analysis programs can be fed with scripts describing the problem to be investigated, which allows the field analysis task to be treated as a software programming task. Software programming is the domain of software engineering, where a large number of strategies and computer tools exist to facilitate project management and speed up project schedules. As a result, this can help reduce the high costs of development.

CASE Benefits
The key benefits of CASE are increases in software quality and development productivity. Through explicit design notation schemes, CASE provides a basis for communicating complex requirements and design information among developers and users. The result is better vision and understanding of the business problem and how the system works, and a clearer understanding of the system's design. With their disciplined, highly structured engineering approach and emphasis on rigid design rules, CASE tools verify consistency and completeness at early stages of the development process.

CASE Trends
For some time, many organizations have been using computer-aided tools for construction, testing, documentation, and system management. A significant part of today's development effort is spent on designing and building code. CASE is beginning to shift the concentration of effort to the requirements analysis and specification phases. This shift should be substantial, and the quality of application systems will almost certainly be improved. Meeting the user's needs and intentions is expected to become a simpler process, and the code and test effort will be reduced, thereby shortening the overall development cycle.
In addition, overall maintenance and inherent costs should be reduced. The typical increase in enhancement effort, seen today as older systems require change, will be reduced because the explicit design notation of CASE will make it easier for maintenance programmers to understand existing systems.

The UML framework of views


Two key ideas of Computer-aided Software System Engineering (CASE) are:
- The harnessing of computer assistance in software development and/or software maintenance processes.
- An engineering approach to software development and/or maintenance.

There are various CASE tools, such as data modeling tools, source code generation tools, and Unified Modeling Language (UML) tools.


CASE (computer-aided software engineering) is the use of a computer-assisted method to organize and control the development of software, especially on large, complex projects involving many software components and people. Using CASE allows designers, code writers, testers, planners, and managers to share a common view of where a project stands at each stage of development. CASE helps ensure a disciplined, check-pointed process. A CASE tool may portray progress (or lack of it) graphically. It may also serve as a repository for or be linked to document and program libraries containing the project's business plans, design requirements, design specifications, detailed code specifications, the code units, test cases and results, and marketing and service plans.


Common CASE risks
- Inadequate Standardization : Linking CASE tools from different vendors (design tool from Company X, programming tool from Company Y) may be difficult if the products do not use standardized code structures and data classifications. File formats can be converted, but usually not economically. Controls include using tools from the same vendor, or using tools based on standard protocols and insisting on demonstrated compatibility. Additionally, if organizations obtain tools for only a portion of the development process, they should consider acquiring them from a vendor that has a full line of products to ensure future compatibility if they add more tools.
- Unrealistic Expectations : Organizations often implement CASE technologies to reduce development costs. Implementing CASE strategies usually involves high start-up costs. Generally, management must be willing to accept a long-term payback period. Controls include requiring senior managers to define their purpose and strategies for implementing CASE technologies.
- Quick Implementation : Implementing CASE technologies can involve a significant change from traditional development environments. Typically, organizations should not use CASE tools the first time on critical projects or projects with short deadlines because of the lengthy training process. Additionally, organizations should consider using the tools on smaller, less complex projects and gradually implementing the tools to allow more training time.
-Weak Repository Controls : Failure to adequately control access to CASE repositories may result in security breaches or damage to the work documents, system designs, or code modules stored in the repository. Controls include protecting the repositories with appropriate access, version, and backup controls.

CASE Environments
Computer-Aided Software Engineering (CASE) appeared in the 1980s with the aim of automating structured techniques. It aims to enhance software quality, shorten the time for systems development and reduce maintenance costs. According to Carma McClure, a CASE environment includes:
(1) Information repository - an important part of CASE, since it stores and organizes all the information needed to create, modify and develop a software system. The information repository is linked to the active data dictionary during execution; therefore, changes in one section require changes in the other.
(2) Front-end tools - the key requirement for these tools is good graphics for drawing diagrams of program structures, data entities and their relationships to each other. Automatic design analysis for checking the consistency and completeness of a design is also an important part of front-end tools.
(3) Back-end tools - used to automatically generate source code.
(4) Development workstation - processes all required graphical operations in CASE-developed systems.
References: Computer-Aided Software Engineering - wikipedia http://en.wikipedia.org/wiki/Computer-aided_software_engineering

Classification of CASE tools
There are four classes of CASE tools: (1) Life-cycle support; (2) Integration dimension; (3) Construction dimension; and (4) Knowledge based CASE dimension.
Life-cycle support includes upper CASE tools and lower CASE tools. Upper CASE tools support strategy, planning and construction of the conceptual-level product, and ignore the design area. They support traditional diagrammatic languages such as data flow diagrams, Entity-Relationship (ER) diagrams and structure charts. Lower CASE tools concentrate on the back-end activities of the software life cycle and hence support activities like physical design, debugging, construction, testing, integration of software components, maintenance, reengineering and reverse engineering.
Integration dimension covers CASE frameworks, Integrated Computer-Aided Software Engineering (ICASE) tools and Integrated Project Support Environments (IPSE). ICASE tools concentrate on producing a complete program from the diagrams developed by systems analysts, and they can generate tables for a database based on detailed system specifications. An IPSE supports software development, usually integrated in a coherent framework like a software engineering environment. These tools are used to produce systems with a longer effective operational life, speed up the development process, result in systems that are more flexible and adaptable to changing business conditions, and produce excellent documentation.
References: http://www.cs.uct.ac.za/mit_notes_devel/SE/Latest/html/ch02s07.html
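The role of a back-end (lower CASE) tool, automatically generating source code from a design description, can be sketched in miniature. The entity description below is invented, and a real CASE tool would emit far richer code from far richer models; the sketch only shows the principle of deriving code from a specification.

```python
# Miniature sketch of a CASE back-end tool: generate source code
# from a design-level entity description. The entity is invented.

def generate_class(entity_name, fields):
    lines = [f"class {entity_name}:",
             f"    def __init__(self, {', '.join(fields)}):"]
    for field in fields:
        lines.append(f"        self.{field} = {field}")
    return "\n".join(lines)

source = generate_class("Customer", ["name", "email"])
print(source)

# The generated code is valid Python and can be executed directly.
namespace = {}
exec(source, namespace)
customer = namespace["Customer"]("Ada", "ada@example.com")
print(customer.name)  # Ada
```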

Object-Oriented Development (OO)

Object-oriented programming (OOP) is a programming paradigm that uses “objects” – data structures consisting of data fields and methods, together with their interactions – to design applications and computer programs. Programming techniques may include features such as information hiding, data abstraction, encapsulation, modularity, polymorphism, and inheritance. It was not commonly used in mainstream software application development until the early 1990s. [g3-3241]

Object-oriented development can reduce development time, reduce the time and resources required to maintain existing applications, increase code reuse, and provide a competitive advantage to organizations that use it.
Benefits of OOD:
Faster Development
Reuse of Previous work
Increased Quality
Modular Architecture
Better Mapping to the Problem Domain

Advantages of using the Object-Oriented Development Approach
This approach models a software system as a collection of collaborating objects that are developed and used over a long time, so it is very reliable. Objects and their data interact with other objects through messages, which are sent and received by the objects and manipulate the objects' data in the process. The approach also allows the software engineer to develop consistent models of a software system more easily, since the same set of models is used throughout the whole development process. Moreover, no time is wasted transforming and updating models between stages, and changes to an object-oriented system are localized in the objects themselves: if a component needs to change, only the related object needs changing, which saves development time. Therefore, the structure of a system developed using the object-oriented approach is more stable than one developed using the structured approach.
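The ideas above (encapsulation, inheritance, polymorphism, and objects interacting through messages) can be seen in a few lines of code. The shapes below are an invented illustration, not an example from the text.

```python
# Invented illustration of the core OO features named above.

class Shape:                       # abstraction: a common interface
    def __init__(self, name):
        self._name = name          # encapsulation: data lives inside the object
    def area(self):
        raise NotImplementedError
    def describe(self):            # objects interact through messages (method calls)
        return f"{self._name}: area {self.area()}"

class Rectangle(Shape):            # inheritance: reuses Shape's behaviour
    def __init__(self, w, h):
        super().__init__("rectangle")
        self._w, self._h = w, h
    def area(self):
        return self._w * self._h

class Circle(Shape):
    def __init__(self, r):
        super().__init__("circle")
        self._r = r
    def area(self):
        return round(3.14159 * self._r ** 2, 2)

# polymorphism: the same message yields shape-specific behaviour
for s in [Rectangle(3, 4), Circle(1)]:
    print(s.describe())
```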


Object-Oriented Development (OOD) has been touted as the next great advance in software engineering. It promises to reduce development time, reduce the time and resources required to maintain existing applications, increase code reuse, and provide a competitive advantage to organizations that use it. While the potential benefits and advantages of OOD are real, excessive hype has led to unrealistic expectations among executives and managers. Even software developers often miss the subtle but profound differences between OOD and classic software development.

In short, OOD offers significant benefits in many domains, but those benefits must be weighed realistically. Many pitfalls await those who venture into OOD development, and they threaten to undermine the acceptance and use of object-oriented development before its promise can be achieved. Because of the excitement surrounding OOD, expectations are high, and delays and failures, when they come, will have a greater negative impact.

Object-Oriented Languages

There are two major object-oriented programming languages in use today. These are:

C++ is an object-oriented version of C. It is compatible with C (it is actually a superset), so that existing C code can be incorporated into C++ programs. C++ programs are fast and efficient, qualities which helped make C an extremely popular programming language. It sacrifices some flexibility in order to remain efficient, however. C++ uses compile-time binding, which means that the programmer must specify the specific class of an object, or at the very least, the most general class that an object can belong to. This makes for high run-time efficiency and small code size, but it trades off some of the power to reuse classes.

Java is the latest, flashiest object-oriented language. It has taken the software world by storm due to its close ties with the Internet and Web browsers. It is designed as a portable language that can run on any web-enabled computer via that computer's Web browser. As such, it offers great promise as the standard Internet and Intranet programming language.

Object-Oriented applying in Client-Server

The diagram below shows how the user requests "Employee" information for processing and submits the "Pay Raise" records to the server.



Client-Server computing

Client-Server computing is an architecture that emerged in the early 1990s. It provides flexibility in hardware requirements: the workload of the system can be split between the client side, the server side, or a middle server (application server).
Client-Server computing is separated into three logical layers. The first is the “Presentation Layer” (PL), which includes I/O (keyboard, mouse, display unit, etc.); the second is the “Application Layer” (AL), which includes the business logic (business rules, computations, processes, etc.); the last is the “Data Management Layer” (DML), which includes the database management system (SQL Server, Oracle, Access, etc.).
Under Client-Server computing, three different development approaches can be used.
The first is the 2-Tier Client-Server, Thin-Client approach.
This style separates a server side and a client side. The server holds the roles of the “Application Layer” and the “Data Management Layer”; the client holds the role of the “Presentation Layer” only. All computation is processed on the server side, so the client side can use a less powerful computer.

The second is the 2-Tier Client-Server, Fat-Client approach.
This style is similar to the first, but the roles differ slightly. The server holds the role of the “Data Management Layer” only; the client acts as the “Application Layer” and “Presentation Layer”. This style reduces the load on the server side, but a more powerful computer on the client side is necessary.

The third is the 3-Tier Client-Server.
This style separates the whole system into three parts: client, application server and database server. Each part holds a different role: the client holds the “Presentation Layer”, the application server is the “Application Layer”, and the database server is the “Data Management Layer”. This distributes the workload across different areas.

3-Tier Client-Server is the most widely used architecture today. It distributes the workload across three layers (Presentation, Application and Data), as shown in the diagram below: all the computation runs in the back end (Application and Data layers), and the user side (Presentation layer) is kept transparent.
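The three-layer split can be sketched in Python (the payroll lookup, the class names and the figures are invented for illustration): only the data layer touches storage, only the application layer holds business rules, and the presentation layer merely formats input and output.

```python
# Sketch of the three logical layers, using a hypothetical payroll lookup.

# Data Management Layer: the only code that touches storage.
class DataLayer:
    def __init__(self):
        self._db = {"E001": {"name": "Ann", "salary": 1000}}
    def fetch_employee(self, emp_id):
        return dict(self._db[emp_id])        # hand out a copy, not the raw record

# Application Layer: business rules live here, not on the client.
class ApplicationLayer:
    def __init__(self, data):
        self._data = data
    def pay_raise(self, emp_id, percent):
        emp = self._data.fetch_employee(emp_id)
        emp["salary"] = round(emp["salary"] * (1 + percent / 100), 2)
        return emp

# Presentation Layer: only formats results; it has no business logic.
def present(app, emp_id, percent):
    emp = app.pay_raise(emp_id, percent)
    return f"{emp['name']}: new salary {emp['salary']}"

app = ApplicationLayer(DataLayer())
print(present(app, "E001", 10))   # Ann: new salary 1100.0
```

In a real 3-tier deployment each class would run on a separate machine, with network calls between them; the division of responsibility is the point here.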

Client-Server computing also has some disadvantages, notably networking issues.

“Traffic congestion on the network has been an issue since the inception of the client-server paradigm. As the number of simultaneous client requests to a given server increases, the server can become overloaded. Contrast that to a P2P network, where its aggregated bandwidth actually increases as nodes are added, since the P2P network's overall bandwidth can be roughly computed as the sum of the bandwidths of every node in that network.
The client-server paradigm lacks the robustness of a good P2P network. Under client-server, should a critical server fail, clients’ requests cannot be fulfilled. In P2P networks, resources are usually distributed among many nodes. Even if one or more nodes depart and abandon a downloading file, for example, the remaining nodes should still have the data needed to complete the download.”
More information for Client-Server Computer at below links:


Client-server describes the relationship between two computer programs in which one program, the client program, makes a service request to another, the server program. Standard networked functions such as email exchange, web access and database access, are based on the client-server model


Advantages of client-server:

-A client-server architecture enables the roles and responsibilities of a computing system to be distributed among several independent computers that are known to each other only through a network.

-Greater ease of maintenance. For example, it is possible to replace, repair, upgrade, or even relocate a server while its clients remain both unaware and unaffected by that change.

-All data is stored on the servers, which generally have far greater security controls than most clients.

-Servers can better control access and resources, to guarantee that only those clients with the appropriate permissions may access and change data.

-Since data storage is centralized, updates to that data are easier to administer.

Disadvantages of client-server:

-Traffic congestion on the network has been an issue since the inception of the client-server paradigm. As the number of simultaneous client requests to a given server increases, the server can become overloaded.

-Under client-server, should a critical server fail, clients' requests cannot be fulfilled.


Dynamic Systems Development Method (DSDM) is a software development methodology originally based upon the Rapid Application Development methodology. DSDM is an iterative and incremental approach that emphasises continuous user involvement.
Its goal is to deliver software systems on time and on budget while adjusting for changing requirements along the development process. DSDM is one of a number of Agile methods for developing software, and it forms a part of the Agile Alliance.


III. System Integration
ERP Systems
ERP (Enterprise Resource Planning) is a system integrated from different modules. It can help a company standardize working procedures and simplify the system architecture and design (a single set of applications, a single vendor and a single database). But the implementation success rate is very low, because users need to change their operation flows, culture, organization structure and operational roles.


Figure: Information integration through ERP system

ERP stands for Enterprise Resource Planning. ERP is a way to integrate the business data, functions and processes of an organization or multiple organizations into one single system. In general, ERP systems will have many components including hardware and software. In order to achieve integration, most ERP systems use a unified database to store data for different functions found throughout the organization.
The term ERP originally referred to how a large organization planned to use organization-wide resources. In the past, ERP systems were used in larger, more industrial types of companies. However, the use of ERP has changed and become far more comprehensive. Today the term can refer to any type of company, no matter what industry it falls in; in fact, ERP systems are used in almost every type of organization.
Nowadays, ERP systems can cover a wide range of functions and integrate them into one unified database. For example, functions such as Human Resources (HR), Supply Chain Management (SCM), Customer Relationship Management (CRM), Financial Resource Management (FRM) and Manufacturing Resource Planning (MRP) were all once stand-alone software applications, usually housed with their own database and network.
References: http://www.extolcorp.com/solution/sea_ecerp.html

ERP delivers a single database that contains all data for the various software modules that typically address areas such as: Manufacturing, Financials, Project management, Human resources, Customer relationship management, Data services and Access control.
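The single-database idea can be sketched with Python's built-in sqlite3 module (the table, the two "module" functions and the figures are all invented for illustration): an HR module and a Finance module work against one shared table, so there is no duplicate, out-of-sync copy of the employee data.

```python
import sqlite3

# One shared database; the HR and Finance "modules" both use it directly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id TEXT PRIMARY KEY, name TEXT, salary REAL)")

def hr_hire(emp_id, name, salary):          # HR module writes employee records
    conn.execute("INSERT INTO employees VALUES (?, ?, ?)", (emp_id, name, salary))

def finance_payroll_total():                # Finance module reads the same table
    return conn.execute("SELECT SUM(salary) FROM employees").fetchone()[0]

hr_hire("E001", "Ann", 1000)
hr_hire("E002", "Bob", 2000)
print(finance_payroll_total())  # 3000.0
```

A real ERP package adds much more (workflow, security, reporting), but the unified store is what makes the modules consistent with each other.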


ERP (enterprise resource planning) is an industry term for the broad set of activities that helps a business manage the important parts of its business. The information made available through an ERP system provides visibility for key performance indicators (KPIs) required for meeting corporate objectives. ERP software applications can be used to manage product planning, parts purchasing, inventories, interacting with suppliers, providing customer service, and tracking orders. ERP can also include application modules for the finance and human resources aspects of a business. Typically, an ERP system uses or is integrated with a relational database system.


Middleware
Middleware is a distributed software layer, or “platform”, which abstracts over the complexity and heterogeneity of the underlying distributed environment with its multitude of network technologies, machine architectures, operating systems and programming languages. It provides services that facilitate interaction among multiple processes and machines, concurrently and across platforms. In addition, it provides a simple, consistent, and integrated distributed programming environment. There are four different types of middleware: transaction-oriented, message-oriented, object-oriented, and procedural middleware. Sun Java RMI and Sun Enterprise JavaBeans are common examples. The following picture shows the position of the middleware.


Source: http://en.wikipedia.org/wiki/Middleware

Middleware services can help you achieve significant benefits:
  • Create an agile infrastructure that enables business process integration
  • Increase business flexibility and decrease IT complexity
  • Reduce the costs of business integration
  • Simplify integration through lifecycle methodologies and tool expertise
  • Improve the management of infrastructures
  • Improve time to value and make the most of existing technology skills
  • Increase visibility and improve management and quality of IT services
  • Create an IT infrastructure designed to address regulatory requirements
  • Increase the value of existing IT investments
  • Enhance information availability, quality and value


Middleware is software that functions as a conversion or translation layer. Middleware is especially integral to modern information technology based on XML, SOAP, Web services and service-oriented architecture. There are several types of middleware in use today.

Type of middleware:
Transaction processing monitor
Messaging middleware
Distributed processing
Database middleware
Common interface
Application server middleware
Universal computing
Network login
Enterprise integration

Another categorization of middleware:
  1. Message Oriented Middleware. This is a large category and includes asynchronous store and forward application messaging capabilities as well as integration brokers that perform message transformation and routing or even business process coordination.
  2. Object Middleware. This category consists largely of Object Request Brokers that were mentioned on one of the earlier definitions.
  3. RPC Middleware. This type of middleware provides for calling procedures on remote systems, hence the name Remote Procedure Call. Unlike message oriented middleware, RPC middleware represents synchronous interactions between systems and is commonly used within an application.
  4. Database Middleware. Database middleware allows direct access to data structures and provides interaction directly with databases. There are database gateways and a variety of connectivity options. Extract, Transform, and Load (ETL) packages are included in this category.
  5. Transaction Middleware. This category as used in the Middleware Resource Center includes traditional transaction processing monitors (TPM) and web application servers. One could make the case for splitting the category.
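Category 1, message-oriented middleware, can be sketched with Python's standard-library queue module (the topic name and message payloads are invented): the sender's call returns immediately, and the middleware stores each message until the receiver asks for it.

```python
from queue import Queue

# Store-and-forward: the sender does not wait for the receiver; the
# middleware holds messages per topic until the receiver is ready.
class MessageQueue:
    def __init__(self):
        self._queues = {}

    def send(self, topic, message):          # asynchronous: returns immediately
        self._queues.setdefault(topic, Queue()).put(message)

    def receive(self, topic):                # receiver drains at its own pace
        return self._queues[topic].get()

mom = MessageQueue()
mom.send("orders", {"id": 1, "item": "widget"})
mom.send("orders", {"id": 2, "item": "gadget"})
# The receiving application may start much later:
print(mom.receive("orders")["item"])  # widget
```

Real message-oriented middleware adds persistence, delivery guarantees and transformation/routing on top of this basic store-and-forward behavior.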

Example of Middleware:
Open Database Connectivity (ODBC) enables applications to make a standard call to all the databases that support the ODBC interface.
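A rough Python analogy to this standard-call idea is the DB-API, sketched below with the built-in sqlite3 driver (the helper function is invented for illustration): the application code issues the same cursor/execute/fetch calls regardless of which database driver sits behind the connect function.

```python
import sqlite3

# Analogy to ODBC: the application uses one standard calling interface;
# only the connect function (the driver) is database-specific.
def row_count(connect, dsn):
    conn = connect(dsn)               # only this line varies per database
    cur = conn.cursor()               # the rest is the standard interface
    cur.execute("SELECT 1")
    return len(cur.fetchall())

print(row_count(sqlite3.connect, ":memory:"))  # 1
```

Swapping in another DB-API driver (for another database) would leave `row_count` untouched, which is exactly the portability ODBC aims for.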

Enterprise Application Integration (EAI)
Enterprise Application Integration (EAI) is a popular type of middleware. EAI tools usually use a message broker to transfer data between applications. They allow users to specify business processes and make data integration subject to rules that govern those processes. As an example, a rule might state that data moves automatically from the purchasing application to the accounts receivable application only after the appropriate person has signed off on the purchase. Organizations obtain a central module plus the interfaces needed to connect the applications. To handle special integration needs, EAI vendors provide custom programming to modify the EAI modules to fit the organization's specific needs. There are many EAI tools on the market, from vendors such as Sun Microsystems, each approaching the problem of integration from a different angle and presenting a different solution. Generally, there are four overarching purposes for which EAI tools can be used to improve efficiency:
(1) Data / information integration: EAI tools usually come with built-in application programming interfaces (APIs) through which they can communicate effectively with otherwise incompatible legacy systems, removing the need for multiple point-to-point connections between applications.
(2) Process integration: EAI tools provide the opportunity to bridge the gap between applications. Whereas data integration standardises data across an enterprise, process integration standardises access to technology and resources.
(3) Vendor independence: EAI tools are designed to allow for the future integration of new applications. By extracting rules and business policies from current data and applications and implementing them in the EAI system, it becomes possible to apply these rules to new applications added in the future with little breakdown.
(4) Common facade: many EAI tool packages provide the option of a complete front-end solution. A single access point brings many benefits: it can reduce the complexity of many business processes within an enterprise, and a single interface removes the need to train users to operate a range of different applications.
References: http://en.wikipedia.org/wiki/Enterprise_application_integration
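The sign-off rule mentioned above can be sketched as a tiny Python broker (the class, the list standing in for the accounts receivable application, and the purchase record are all invented for illustration): the broker holds purchasing data and forwards it only once the business rule is satisfied.

```python
# Sketch of the rule from the text: data moves from purchasing to
# accounts receivable only after the appropriate person signs off.
class EAIBroker:
    def __init__(self):
        self.accounts_receivable = []   # stand-in for the target application
        self._pending = []              # purchases held by the broker

    def submit_purchase(self, purchase):
        self._pending.append(purchase)  # held until the business rule is met

    def sign_off(self, purchase_id):
        for p in self._pending:
            if p["id"] == purchase_id:
                self._pending.remove(p)
                self.accounts_receivable.append(p)  # rule satisfied: forward
                return True
        return False                    # unknown or already-forwarded purchase

broker = EAIBroker()
broker.submit_purchase({"id": 7, "amount": 250})
print(len(broker.accounts_receivable))  # 0  (not yet signed off)
broker.sign_off(7)
print(len(broker.accounts_receivable))  # 1
```

A commercial EAI suite expresses such rules declaratively rather than in code, but the broker-in-the-middle structure is the same.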

EAI is one type of middleware; it uses a centralized, coordinated approach to integration.
An EAI tool acts as a message broker that transfers data between different systems / applications.
References: http://www.stayinfront.com/images/EAI_2.jpg

The future of middleware
The trend is for middleware to link large, distributed mainframe systems with mobile devices in real time; for example, mobile payments involving more than one bank, or a large chain of retail stores using Radio Frequency Identification Devices (RFIDs) to locate items on store shelves. Middleware offers fast, scalable and disposable solutions. However, it requires users to plan, cost, monitor and evaluate carefully, because uncontrolled proliferation of middleware can lead to unexpected expense and slower overall system performance. Moreover, an incorrect and overly complex network of middleware can bring the whole system down, causing great losses to the organization.
Middleware - Wikipedia http://en.wikipedia.org/wiki/Middleware
Leonard M. Jessup, Joseph S. Valacich, "Information Systems Foundations", 2000.

Methods of integration

There are many methods of integrating systems; the following, summarized from the web, are given for reference.

Vertical Integration is process of integrating subsystems according to their functionality by creating functional entities also referred to as silos. The benefit of this method is that the integration is performed quickly and involves only the necessary vendors, therefore, this method is cheaper in the short term. On the other hand, cost-of-ownership can be substantially higher than seen in other methods, since in case of new or enhanced functionality, the only possible way to implement (scale the system) would be by implementing another silo. Reusing subsystems to create another functionality is not possible.
Star Integration or also known as Spaghetti Integration is a process of integration of the systems where each system is interconnected to each of the remaining subsystems. When observed from the perspective of the subsystem which is being integrated, the connections are reminiscent of a star, but when the overall diagram of the system is presented, the connections look like spaghetti, hence the name of this method. The cost varies due to the interfaces which subsystems are exporting. In a case where the subsystems are exporting heterogeneous or proprietary interfaces, the integration cost can substantially rise. Time and costs needed to integrate the systems increase exponentially when adding additional subsystems. From the feature perspective, this method often seems preferable, due to the extreme flexibility of the reuse of functionality.
Horizontal Integration or Enterprise Service Bus (ESB) is an integration method in which a specialized subsystem is dedicated to communication between other subsystems. This allows cutting the number of connections (interfaces) to only one per subsystem which will connect directly to the ESB. The ESB is capable of translating the interface into another interface. This allows cutting the costs of integration and provides extreme flexibility. With systems integrated using this method, it is possible to completely replace one subsystem with another subsystem which provides similar functionality but exports different interfaces, all this completely transparent for the rest of the subsystems. The only action required is to implement the new interface between the ESB and the new subsystem.
The horizontal scheme can be misleading, however, if it is thought that the cost of intermediate data transformation or the cost of shifting responsibility over business logic can be avoided.

ref by: http://en.wikipedia.org/wiki/System_integration
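The horizontal (ESB) method can be sketched in Python (the bus class, the legacy inventory subsystem and its interface are all invented for illustration): each subsystem registers exactly one adapter with the bus, and the bus translates between interfaces, so replacing a subsystem only requires writing a new adapter.

```python
# Sketch of an Enterprise Service Bus: one connection per subsystem,
# with interface translation done at the bus.
class ServiceBus:
    def __init__(self):
        self._adapters = {}

    def register(self, name, adapter):       # one connection per subsystem
        self._adapters[name] = adapter

    def call(self, target, request):         # all traffic goes through the bus
        return self._adapters[target](request)

# A legacy subsystem exporting its own (hypothetical) interface:
def legacy_inventory(request):               # expects an "ITEM" key
    return {"stock": 5} if request["ITEM"] == "widget" else {"stock": 0}

bus = ServiceBus()
# The adapter translates the bus's request format into the legacy interface:
bus.register("inventory", lambda req: legacy_inventory({"ITEM": req["item"]}))

print(bus.call("inventory", {"item": "widget"})["stock"])  # 5
```

Swapping in a replacement inventory system means registering a new adapter under the same name; the other subsystems never see the change, which is the flexibility the text describes.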


IV. Interorganizational System Development
An Interorganizational System (IOS) is one which allows the flow of information to be automated between organizations in order to reach a desired supply-chain management system, which enables the development of competitive organizations. This supports forecasting client needs and the delivery of products and services. IOS helps to better manage buyer-supplier relationships by encompassing the full depths of tasks associated with business processes company-wide. In doing these activities, an organization is able to increase the productivity automatically; therefore, optimizing communication within all levels of an organization as well as between the organization and the supplier. For example, each t-shirt that is sold in a retail store is automatically communicated to the supplier who will, in turn, ship more t-shirts to the retailer.
Organizations might pursue an IOS for the following reasons:

  1. Reduce the risk in the organization
  2. Pursue economies of scale
  3. Benefit from the exchange of technologies
  4. Increase competitiveness
  5. Overcome investment barriers
  6. Encourage global communication

Characteristics of Interorganizational Systems
-At least two parties are needed to create an IOS, so the partners in the venture must have a willingness to cooperate and the ability to perform the work.
-Standards play a major role in permitting many IOS efforts to get off the ground.
-Education of potential partners is often more of a hurdle than the technology.
-Coordination of joint systems often entails using a third party.
-The various efforts need to be synchronized.
-Work processes are often re-evaluated.
-Technical issues are minor compared to the relationship issues.
-IOS often requires more openness than traditional system development.


Interorganizational Systems (IOS) include electronic data interchange (EDI), supply chain management (SCM), electronic funds transfer, electronic forms, electronic messaging, and shared databases. Such systems provide the foundation for electronic business (e-business) -- a matter of great economic concern in today's world.

GM and other automobile manufacturers use an interorganizational system to control suppliers and parts stock; the system interacts with multiple vendors to ensure that the people making the cars have all the parts they need at any given time.

Ref: S. Gregor


Interorganizational information system (IOS) standards development in industrial groups is proving to be an extremely productive and effective endeavour. With countless stakeholders, varying opinions and firms that are more accustomed to competition than cooperation, many industrial groups have leveraged the use of a non-profit, voluntary-consensus, standards development consortium to act as a separate entity and lead agent towards industry-wide standardization initiatives. We employ common empirical data collection techniques (management interviews, observations in consortia work groups, meeting minutes, consortia charters and others) and provide a comparative analysis of the consortia and the IOS standards development process across nine such industries. The results are summarized and a formal IOS Standards Development Cycle is introduced based on a synthesized understanding from across the industries. The IOS Standards Development Cycle in industrial groups is found to include the following steps: (1) Choreography and Modularity, (2) Reach Consensus and Prioritize, (3) Standardize and Document, (4) Review and Test, (5) Implement and Deploy and (6) Certification and Compliance. We define, provide illustrations and highlight effective practices found in each step. Comparisons are made to other development processes and a discussion is provided regarding the value and role of private consortia in IOS standards development.

V. Systems as Planned Organizational Change

Business process reengineering (BPR)

Business process reengineering is the radical redesign and transformation of a business in order to achieve dramatic improvements in cost, quality, speed, and service. It is a strategy of making major improvements to business processes so that a company can become a much stronger and more successful competitor in the marketplace. The business process reengineering cycle can be broken down into four phases: identify processes; review, update and analyze the as-is process; design the to-be process; and test and implement the to-be process. The proponents of business process reengineering, Michael Hammer and James A. Champy, suggest that companies waste too much time passing tasks from one department to another; they believe that appointing one team to handle the whole process is far more efficient.
Source: http://en.wikipedia.org/wiki/Business_process_reengineering

Business Process Reengineering (BPR) is the analysis and redesign of workflow within and between organizations. BPR's heyday was in the early 1990s. Michael Hammer and James Champy, the authors of "Reengineering the Corporation" suggested seven principles of BPR, these principles are used to streamline the work process and thereby achieve significant levels of improvement in quality, time management and cost.
The seven basic principles of BPR are as follows:
(1) Organize around outcomes, not tasks;
(2) Identify all the processes in an organization and prioritize them in order of redesign urgency;
(3) Integrate information processing work into the real work that produces the information;
(4) Treat geographically dispersed resources as though they were centralized;
(5) Link parallel activities in the workflow instead of just integrating their results;
(6) Put the decision point where the work is performed and build control into the process;
(7) Capture information once and at the source.
Varun Grover, William J. Kettinger, "Business Process Change: Reengineering Concepts, Methods and Technologies", 1995, pp. 249-250.

Business process management (BPM)

Business process management is a form of continuous process management; it focuses on aligning all aspects of an organization with the wants and needs of the clients. It promotes continuous improvement of business efficiency, innovation, flexibility, and integration with technology, and it focuses on both people and technology. The business process management life cycle comprises design, modeling, execution, monitoring and optimization.

Business Process Management Life-cycle

Source: http://en.wikipedia.org/wiki/Business_process_management

Business process management (BPM) is a management approach focused on aligning all aspects of an organization with the wants and needs of clients. It is a holistic management approach that promotes business effectiveness and efficiency while striving for innovation, flexibility, and integration with technology. Business process management attempts to improve processes continuously. It could therefore be described as a "process optimization process." It is argued that BPM enables organizations to be more efficient, more effective and more capable of change than a functionally focussed, traditional hierarchical management approach.
Ref: http://en.wikipedia.org/wiki/Business_process_management

BPM is a subset of infrastructure management, the administrative area of concern dealing with maintenance and optimization of an organization's equipment and core operations.

The major benefits of Business Process Management (BPM) are as follows:
(1) Improves process quality;
(2) Improves customer satisfaction;
(3) Generates continuous process improvement;
(4) Reduces costs;
(5) Improves the customer experience;
(6) Improves business agility
References: http://www.bpmenterprise.com/content/c060125a.asp

Total quality management (TQM)

TQM stands for total quality management. TQM is a management approach for an organization, centered on quality, based on the participation of all its members. It focuses on long-term success through identifying and prioritizing customer requirements, setting and aligning goals, and providing deliverables that warrant customer satisfaction (as well as customer delight). It also measures results to continually provide value and benefits to all members of the organization and to society.

Total Quality Management (TQM) is a management approach to long-term success through customer satisfaction. In a TQM effort, all members of an organization participate in improving processes, products, services and the culture in which they work.
The methods for implementing this approach come from the teachings of such quality leaders as Philip B. Crosby, W. Edwards Deming, Armand V. Feigenbaum, Kaoru Ishikawa and Joseph M. Juran.
A core concept in implementing TQM is Deming's 14 Points, a set of management practices to help companies increase their quality and productivity:
1. Create constancy of purpose for improving products and services.
2. Adopt the new philosophy.
3. Cease dependence on inspection to achieve quality.
4. End the practice of awarding business on price alone; instead, minimize total cost by working with a single supplier.
5. Improve constantly and forever every process for planning, production and service.
6. Institute training on the job.
7. Adopt and institute leadership.
8. Drive out fear.
9. Break down barriers between staff areas.
10. Eliminate slogans, exhortations and targets for the workforce.
11. Eliminate numerical quotas for the workforce and numerical goals for management.
12. Remove barriers that rob people of pride of workmanship, and eliminate the annual rating or merit system.
13. Institute a vigorous program of education and self-improvement for everyone.
14. Put everybody in the company to work accomplishing the transformation.

Reference Link: http://www.asq.org/learn-about-quality/total-quality-management/overview/overview.html

TQM Deming Wheel

Plan -- Prepare and plan in a structured way by learning from the past and setting benchmarks for change.
Do -- If your goal is far-reaching, start small and evaluate your results before going wider.
Check -- Analyze the results of what you have done and find out how to apply what you have learned to future activities.
Act -- Do what you need to do to make your process better and easier to replicate.

TQM is an approach to quality that emphasizes continuous improvement, a philosophy of "doing it right the first time", and striving for zero defects and the elimination of all waste. It is a concept of using quality methods and techniques to strategic advantage within firms.

Principles of TQM
The key principles of TQM are as follows:
  • Management Commitment
    1. Plan (drive, direct)
    2. Do (deploy, support, participate)
    3. Check (review)
    4. Act (recognize, communicate, revise)
  • Employee Empowerment
    1. Training
    2. Suggestion scheme
    3. Measurement and recognition
    4. Excellence teams
  • Fact Based Decision Making
    1. SPC (statistical process control)
    2. DOE, FMEA
    3. The 7 statistical tools
    4. TOPS (FORD 8D - Team Oriented Problem Solving)
  • Continuous Improvement
    1. Systematic measurement and focus on CONQ
    2. Excellence teams
    3. Cross-functional process management
    4. Attain, maintain, improve standards
  • Customer Focus
    1. Supplier partnership
    2. Service relationship with internal customers
    3. Never compromise quality
    4. Customer driven standards

Reference Link: http://www.isixsigma.com/library/content/c031008a.asp

Total Quality Management (TQM) views quality as the heart of the production process. TQM emphasizes constant improvement of the product and the prevention of errors, rather than relying on post-production inspection to reject faulty items and correct mistakes. A feature of TQM is that closer links are forged between top management and shop-floor operators: operators are encouraged to take more decisions and accept more responsibility. As a consequence, middle management, and the formal structures that go with layers of management, are being reduced or eliminated.
TQM organizations are customer-service oriented. They often collect, analyze and act on customer information, integrate customer knowledge with other information, and use the planning process to drive action throughout the organization, both to manage day-to-day activities and to achieve future goals.
References: TQM - wikipedia http://en.wikipedia.org/wiki/Total_quality_management

What is six sigma?

Six Sigma is defined as a type of business improvement methodology. Its main objective is to implement a rigorous process to systematically eliminate defects and inefficiency. It was originally developed by Motorola in the early 1980s and, because of its effectiveness, has become extremely popular in many corporate and small business environments around the world.

Six Sigma's main purpose or objective is to deliver high performance, value and reliability to the customer. It is regarded and used around the world as one of the major themes for TQM (Total Quality Management).

Reference Link: http://www.tech-faq.com/six-sigma.shtml
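Six Sigma's defect focus is usually quantified as defects per million opportunities (DPMO); a process operating at the six sigma level produces about 3.4 DPMO. A minimal sketch of the calculation (the sample figures are hypothetical):

```python
# Defects Per Million Opportunities (DPMO), the standard Six Sigma
# defect metric. The example figures below are hypothetical.

def dpmo(defects, units, opportunities_per_unit):
    """DPMO = defects / (units x opportunities per unit) x 1,000,000."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# e.g. 17 defects found in 1,000 invoices, each with 5 error opportunities:
# dpmo(17, 1000, 5) -> 3400.0, far from the 3.4 DPMO of a six sigma process
```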


Six Sigma projects follow one of two project methodologies, each comprising five phases, known by the acronyms DMAIC and DMADV.
  • DMAIC is used for projects aimed at improving an existing business process.
  • DMADV is used for projects aimed at creating new product or process designs.

The DMAIC project methodology has five phases:
  • Define high-level project goals and the current process.
  • Measure key aspects of the current process and collect relevant data.
  • Analyze the data to verify cause-and-effect relationships. Determine what the relationships are, and attempt to ensure that all factors have been considered.
  • Improve or optimize the process based upon data analysis using techniques like Design of experiments.
  • Control to ensure that any deviations from target are corrected before they result in defects. Set up pilot runs to establish process capability, move on to production, set up control mechanisms and continuously monitor the process.


The DMADV project methodology, also known as DFSS ("Design For Six Sigma"), features five phases:
  • Define design goals that are consistent with customer demands and the enterprise strategy.
  • Measure and identify CTQs (characteristics that are Critical To Quality), product capabilities, production process capability, and risks.
  • Analyze to develop and design alternatives, create a high-level design and evaluate design capability to select the best design.
  • Design details, optimize the design, and plan for design verification. This phase may require simulations.
  • Verify the design, set up pilot runs, implement the production process and hand it over to the process owners.


New information systems and organizational change
New information systems can be powerful instruments of organizational change, enabling organizations to redesign their structure, scope, power relationships, workflows, products, and services.

Information technology can promote various degrees of organizational change, ranging from incremental to far-reaching. Four kinds of structural organizational change are enabled by information technology: (1) automation, (2) rationalization, (3) reengineering, and (4) paradigm shifts. Each carries different rewards and risks.


The development history of TQM:

American companies began to adopt TQM in the 1980s, after noticing the quality advantages of Japanese manufacturers. Six Sigma was then developed in the 1990s. In 1996, Abrahamson argued on the topic of fashionable management, citing quality circles as an example. Total Quality Management subsequently adopted Six Sigma as a way to measure improvement.


VI. 3-Tier System Architecture
Multi-tier architecture is very popular in software engineering nowadays. It is a client-server architecture in which presentation, application processing, and data management are kept separate. The most widespread variant is the "3-tier architecture".
The independent modules include the user interface, the functional process logic, and information data storage and access; these modules are often called tiers.
The presentation tier refers to the user interface and is the topmost level of the application. It displays related information, for example shopping cart contents or a program control console. It communicates only with the user and passes input and results to and from the other tiers; most importantly, it does not perform any business logic.
The application tier handles the logic: it receives input from the presentation tier and controls the application's functionality by performing detailed processing.
The data tier consists of the database or information storage. After the application tier has done its processing, data is passed to this tier, so information can be stored in and retrieved by the application tier. Keeping data in a separate tier improves scalability and performance.
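The separation described above can be sketched in code. In this minimal illustration (the class names and the shopping cart example are illustrative, not a real framework), each tier talks only to the tier directly below it, and the presentation tier contains no business logic:

```python
# Minimal 3-tier sketch. All names are illustrative.

class DataTier:
    """Data tier: stores and retrieves information."""
    def __init__(self):
        self._rows = {}              # in-memory stand-in for a database
    def save(self, key, value):
        self._rows[key] = value
    def load(self, key):
        return self._rows.get(key)

class ApplicationTier:
    """Application tier: business logic; never talks to the user directly."""
    def __init__(self, data):
        self.data = data
    def add_to_cart(self, user, item, price):
        cart = self.data.load(user) or []
        cart.append((item, price))
        self.data.save(user, cart)
        return sum(p for _, p in cart)    # business rule: running total

class PresentationTier:
    """Presentation tier: formats output; performs no business logic."""
    def __init__(self, app):
        self.app = app
    def show_cart_total(self, user, item, price):
        total = self.app.add_to_cart(user, item, price)
        return f"Cart total: ${total:.2f}"
```

Because only the data tier knows where data lives, moving the database would require no change to the presentation code, which matches the "no client modification if database location changes" point below.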


3-Tier client-server architectures have 3 essential components:
  • A Client PC
  • An Application Server
  • A Database Server
3-Tier Architecture Considerations:
  • Client program contains presentation logic only
    • Less resources needed for client workstation
    • No client modification if database location changes
    • Less code to distribute to client workstations
  • One server handles many client requests
    • More resources available for server program
    • Reduces data traffic on the network
Advantages of 3 Tier:
  • Development Issues:
    • Complex application rules are easy to implement in the application server
    • Business logic is off-loaded from the database server and client, which improves performance
    • Changes to business logic are automatically enforced by the server – changes require only new application server software to be installed
    • Application server logic is portable to other database server platforms by virtue of the application software
  • Performance:
    • Superior performance for medium- to high-volume environments
Disadvantages of 3 Tier:
  • Development Issues:
    • More complex structure
    • More difficult to set up and maintain
  • Performance:
    • The physical separation of application servers containing business logic functions and database servers containing databases may moderately affect performance.

Ref: Channu Kambalyal


The three-tier architecture aims to solve a number of recurring design and development problems, making application development easier and more efficient. The interface layer offers the user a friendly and convenient way to communicate with the system, while the application logic layer performs the controlling functionality and manipulates the underlying logical connections of information flows; finally, data modeling is handled by the database layer, which stores, indexes, manages, and models the information needed by all applications.

Reference: A Three-Tier System Architecture Design and Development for Hurricane Occurrence Simulation

VII. Review Questions
Question 1 : What are the goals of the traditional system development life cycle approach?

The goals of the traditional SDLC approach include:
1. More discipline
- It eliminates personal variations by establishing standards for processes and documentation. The resulting discipline increases programmer productivity and helps teams deal with more complex programs.

2. More modularized
- Developers divide the software into independent modules as applications grow in size. This divide-and-conquer approach helps reduce the complexity of the development process.

3. Higher reliability and fewer errors
- The goal is to find errors as early as possible, so that only the parts in which errors or mistakes are found need to be redone.

4. More efficient use of resources
- By imposing a time and cost control system, the approach contributes to cost savings, increased productivity, and better allocation of human resources.

Source: McNurlin

The goal of systems analysis is to determine where the problem is in an attempt to fix the system. This step involves breaking the system down into pieces and drawing diagrams to analyze the situation: analyzing project goals, breaking down the functions that need to be created, and engaging users so that definite requirements can be defined. Requirements gathering sometimes requires individuals or teams from both the client and the service provider side to obtain detailed and accurate requirements.


Besides the goals mentioned above, the traditional SDLC approach serves other purposes: solving problems, pursuing opportunities, and fulfilling directives. It reaches these goals by providing explicit guidelines that allow the use of less-experienced staff for system development, as all steps are clearly outlined. Even junior staff members who have never managed a project can follow the "recipe" (1. system identification, selection, and planning; 2. system analysis; 3. system design; 4. system implementation; and 5. system maintenance) in the SDLC to produce adequate systems. Reliance on individual expertise is reduced. Use of an SDLC can have the added benefit of providing training for junior staff, again because the sequence of steps and the tasks to be performed in each step are clearly defined. Last but not least, the traditional SDLC approach promotes consistency among projects, which can reduce costs.

Question 2 : Define the components of a computer-aided software engineering system.

The components of a CASE system include:
1. An information repository
- This is the "heart" of a CASE system. It acts as a database that stores and organizes all the information about the system. This information is linked to an active data dictionary, so that any change is reflected during program execution.

2. Front-end tools
- These are graphical tools for drawing diagrams of program structures, data entities and their relationships, data flows, and screen layouts. They also automate design analysis, checking the consistency and completeness of a design.

3. Back-end tools
- These are code generators that produce source code from the design.

4. Development workstation
- A powerful workstation is needed so that CASE-developed systems can manipulate all the graphical representations.

Source: McNurlin

Question 3 : What is a platform inter-organizational system? Give a few examples.

A platform inter-organizational system provides the infrastructure for the operation of a business ecosystem, a region, or an industry.
Examples: (1) American Airlines' SABRE computer reservation system. (2) The platforms developed by Sony, Nintendo, and Microsoft for the video game industry.

Source: McNurlin

Inter-organizational systems support the formation of company networks and the collaborative production of goods and services.
They improve profitability and efficiency by tapping the full potential of business application systems and modern communication infrastructure (e.g., the Internet or mobile communication). They have an impact on company and industry structures. Additionally, they create the flexibility that enables companies to adapt quickly to changing business situations.

Example: American Airlines
Sabre was developed to help American Airlines improve the way in which the airline booked reservations.
http://en.wikipedia.org/wiki/Sabre_(computer_system)

Question 4 : What are five steps in building a Web Service?

Step 1 Expose the Code
- Wrap the code in an XML wrapper; for example, a currency conversion Web Service can then plug into a credit card processing application.

Step 2 Write a Service Description
- The currency conversion Web Service's description is written using the Web Services Description Language (WSDL).

Step 3 Publish the Service
- The currency conversion description is posted, along with its URL, in a Universal Description, Discovery, and Integration (UDDI) registry.

Step 4 Find a Currency Conversion Web Service
- The requesting application sends a request, in the form of an XML document in a Simple Object Access Protocol (SOAP) envelope, to one or more registries. The UDDI registry is searched for a matching currency conversion Web Service, and the reply is returned to the sender as an XML document in a SOAP envelope.

Step 5 Invoke a Web Service
- The application can then bind to the selected conversion service and send requests to it, using XML documents in SOAP envelopes for both request and reply.
Source: McNurlin
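The exchanges in Steps 4 and 5 all carry XML documents wrapped in SOAP envelopes. A minimal sketch of building such a request with Python's standard library (the ConvertCurrency operation and its element names are hypothetical, not taken from any real service):

```python
# Sketch of a SOAP request envelope for a hypothetical currency
# conversion service. Operation and element names are illustrative.
from xml.etree import ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request(amount, from_ccy, to_ccy):
    """Return a SOAP envelope (as an XML string) for a conversion request."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, "ConvertCurrency")   # hypothetical operation
    ET.SubElement(call, "Amount").text = str(amount)
    ET.SubElement(call, "From").text = from_ccy
    ET.SubElement(call, "To").text = to_ccy
    return ET.tostring(env, encoding="unicode")
```

The reply in Step 5 would be another XML document in the same envelope format, which the caller parses the same way.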

Question 5 : Describe the different types of IT-enabled organization change.

There are four types of IT-enabled organization change.
1. Automation
- It helps employees perform their tasks more efficiently and effectively. Examples: (1) calculating paychecks and payroll statements; (2) instant access to customer records for a bank teller; (3) an airline reservation system for travel agents.

2. Rationalization of procedures
- This is the streamlining of standard operating procedures. It removes obvious bottlenecks so that automation can make operating procedures more efficient.

3. Business reengineering
- A more powerful change in which business processes are analyzed, simplified, and redesigned. Using information technology, organizations can improve speed, service, and quality. This change reorganizes workflows so that repeated steps can be combined to cut waste; business reengineering therefore requires a new vision of how processes should be organized.

4. Paradigm shift
- It involves rethinking the nature of the business, defining a new business model, and changing the nature of the organization.

In general, slow-moving, incremental change strategies produce small returns but carry little risk, whereas fast-moving, comprehensive change produces high returns but has a greater chance of failure.
Occasionally a paradigm shift turns out to carry (1) low risk but high return or (2) high risk but low return, but such outcomes are relatively rare.

Source: Laudon

What are the typical tasks in the development process life cycle?
Professional system developers and the customers they serve share a common goal: building information systems that effectively support business process objectives. To ensure that cost-effective, quality systems are developed that address an organization's business needs, developers employ some kind of system development process model to direct the project's life cycle. Typical activities include the following:
· System conceptualization
· System requirements and benefits analysis
· Project adoption and project scoping
· System design
· Specification of software requirements
· Architectural design
· Detailed design
· Unit development
· Software integration & testing
· System integration & testing
· Installation at site
· Site testing and acceptance
· Training and documentation
· Implementation
· Maintenance