1.0 Introduction In this report I will be concentrating on the failure of software systems. To understand why software systems fail we first need to understand what software systems are. Software systems are a type of information system, because a software system is essentially a means for hardware to process information. Flynn’s definition of an information system is: “An information system provides procedures to record and make available information, concerning part of an organization, to assist organization-related activities.” Humans have been processing information manually for thousands of years, but the vast increase in demand for information this century has meant that a new method of information processing has been needed. Software systems have provided a new means that is much faster and more efficient, and as a result a huge number of organisations have become software dependent.
Some of these systems are used to safeguard the lives of many people, which means that if they were to fail the consequences could be devastating. Here are some fields where software systems are used heavily and where failure could be very dangerous – aviation, hospitals, space exploration, nuclear power stations and communications. I will be looking at some examples of actual software failure in these fields to explain the reasons why systems fail. 2.0 Reasons for Systems Failure If software system failures can be so dangerous, why can they not be completely eliminated? According to Parnas, “The main reason is that software can never be guaranteed to be 100% reliable. Software systems are discrete-state systems that do not have repetitive structures. The mathematical functions that describe the behaviour of software systems are not continuous, and traditional engineering mathematics do not help in their verification.” In other words, some software can be so large that thorough testing is almost impossible, and so bugs in the software can go unnoticed.
An example of this was when an Atlas-Agena rocket veered off course ninety miles up, and ground control had to destroy the $18.5 million rocket. The reason for this – a missing hyphen. However, there are many more reasons for software system failure, and most of them stem from human negligence. Software system failures fall into two broad categories: failures in the design of the software and failures in its implementation.
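The missing-hyphen anecdote above can be made concrete with a toy sketch (hypothetical code, not the actual Atlas-Agena guidance software): a single dropped character in a constant is enough to turn a stabilising correction into a destabilising one.

```python
# Hypothetical illustration: how one missing character (here, a minus sign)
# can flip the effect of a simple control law.

def fly(gain, steps=10):
    """Apply a proportional correction to a course error once per cycle."""
    error = 1.0
    for _ in range(steps):
        error += gain * error  # correction applied each guidance cycle
    return abs(error)

damped = fly(-0.5)    # intended constant: the error decays towards zero
diverged = fly(0.5)   # same constant with the minus sign dropped: it grows
```

With the intended gain the course error shrinks every cycle; with the sign flipped, the “correction” amplifies the error instead, which is the kind of behaviour that forces ground control to destroy a vehicle.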
These are the main reasons for systems failure:
Poor software design – fundamental flaws in the design of the software.
Incorrect requirements specification – the brief is inconsistent or missing vital information.
Political / commercial pressures – these can lead to developers skipping parts of the system to save time or money. There are also cases of rivalry between sub-contractors, which damages the design of the system.
Incorrect analysis and assumptions – predictions based on incorrect assumptions about the real world or its behaviour.
Improperly tested software implemented in a high-risk environment – this is almost guaranteed to lead to systems failure.
Poor user interface – makes it difficult or even impossible for the user to operate the software system.
Incorrect fit between software and hardware – incorrect specification of the hardware type in the brief, or upgrading the hardware without upgrading the software (or vice versa).
Inadequate training given to the operators – the people who have to use the software system are not taught properly how to use it, or are expected to learn on their own.
Over-reliance on the software system – the operators expect the software to work in all conditions and to perform miracles for them.
I will be looking at these types of systems failure with examples. 2.1 Poor software design – the Denver airport automated luggage handling system An example of poor software design is the Denver International Airport luggage controller. In this case Jones says that the senior executives did not have a sufficient background in software systems and as a result accepted “nonsensical software claims at face value”. The airport boasted that its new “automated baggage handling system, with a contract price of $193 million, will be one of the largest and most sophisticated systems of its type in the world.
It was designed to provide the high-speed transfer of baggage to and from aircraft, thereby facilitating quick turnaround times for aircraft and improved services to passengers.” The baggage system, which came into operation in October 1995, included “over 17 miles of track; 5.5 miles of conveyors; 4,000 telecarts; 5,000 electric motors; 2,700 photocells; 59 laser bar code reader arrays; 311 radio frequency readers; and over 150 computers, workstations, and communication servers. The automated luggage handling system (ALHS) was originally designed to carry up to 70 bags per minute to and from the baggage check-in.” However, there were fundamental flaws that were identified but not addressed in the development and testing stages. ABC News later reported that “In tests, bags were being misloaded, misrouted or fell out of telecarts, causing the system to jam.” Dr. Dobb’s Journal (January 1997) also carried an article in which the author claims that his software simulation of the automatic baggage handling system of the Denver airport mimicked the real-life situation. He found that the consultants had performed a similar simulation and, as a result, had recommended against the installation of the system. However, the city overruled the consultants’ report and gave the go-ahead (the contractors who were building the system never saw the report).
The report into the failure of the Denver ALHS says that the Federal Aviation Administration had required the designers (BAE Automated Systems Incorporated) to properly test the system before the opening date of 28th February 1995. Problems with the ALHS had already caused the airport’s opening date to be postponed, and no further delays could be tolerated by the city.
The report speculates that delays had already cost the airport $360 million by February 1995. The lack of testing inevitably led to problems with the ALHS. One problem occurred when the photo eye at a particular location could not detect the pile of bags on the belt and hence could not signal the system to stop. The baggage system loaded bags into telecarts that were already full, resulting in some bags falling onto the tracks and again causing the telecarts to jam. This caused a further problem: the system lost track of which telecarts had been loaded or unloaded during a previous jam, so when it came back on-line it failed to show that the telecarts were loaded. Also, the timing between the conveyor belts and the moving telecarts was not properly synchronized, causing bags to fall between the conveyor belt and the telecarts.
The bags then became wedged under the telecarts. This eventually caused so many problems that a major overhaul of the system was needed. The government report concluded that the ALHS at the new airport was afflicted by “serious mechanical and software problems”. However, one cannot help wondering how much of the blame lay with the city for failing to demand proper testing. Denver International Airport had to install a $51 million alternative system to get around the problem, although United Airlines continued to use the ALHS.
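The lost-state failure described above can be sketched in a few lines (a hypothetical illustration, not BAE’s actual controller code): if telecart load state is held only in volatile memory, a restart after a jam forgets which carts are full, and the controller loads them again.

```python
# Hypothetical sketch of the state-loss bug: a cart registry kept only in
# memory is wiped by a restart, so an already-full cart gets loaded again.

class CartController:
    def __init__(self, persisted_state=None):
        # Restore the cart registry from persistent storage if available;
        # otherwise assume every cart is empty -- the dangerous default.
        self.loaded = dict(persisted_state or {})

    def load_bag(self, cart_id):
        if self.loaded.get(cart_id):
            return "refused"       # cart already full: do not load again
        self.loaded[cart_id] = True
        return "loaded"

before_jam = CartController()
before_jam.load_bag(7)                        # cart 7 is now full

restarted_without_state = CartController()    # registry lost in the restart
restarted_with_state = CartController(before_jam.loaded)
```

Without persisted state, the restarted controller happily loads cart 7 a second time – the real system’s spilled bags and jammed tracks; with the registry restored, the second load is refused.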
A copy of the report can be found at http://www.bts.gov/smart/cat/rc9535br.html. 2.2 Political / commercial pressures – the Challenger disaster There are many examples of failures occurring because of such pressures. One of the most famous is the Challenger disaster. On 28th January 1986 the Challenger space shuttle exploded shortly after launch, killing all seven astronauts on board. This was initially blamed on the design of the booster rockets and on the decision to allow the launch to proceed in cold weather. However, it was later revealed that “there was a decision along the way to economize on the sensors and on their computer interpretation by removing the sensors on the booster rockets.
There is speculation that those sensors might have permitted earlier detection of the booster-rocket failure, and possible early separation of the shuttle in an effort to save the astronauts. Other shortcuts were also taken so that the team could adhere to an accelerated launch sequence.” (Neumann).
This was not the first time there had been problems with space shuttle missions. A presidential commission was set up, and the Chicago Tribune reported what some astronauts said: “that poor organization of shuttle operations led to such chronic problems as crucial mission software arriving just before shuttle launches and the constant cannibalization of orbiters for spare parts.” Obviously the pressures of getting a space shuttle launch and mission to run smoothly and on time are huge. However, there has to be a limit on how many short cuts can be taken. Another example of commercial pressure is the case of a Fortune 500 company.
(A Fortune 500 company is one that appears in a listing of the top 500 U.S. companies ranked by revenues, according to Fortune magazine’s classic list.) According to Jones, “the client executive and the senior software manager disliked each other so intensely that they could never reach agreement on the features, schedules, and effort for the project (a sales support system of about 3000 function points)”. They both appealed to their higher executives to dismiss the other person. The project was eventually abandoned, after incurring expenses of some $500,000. Jones reported a similar case in a different Fortune 500 company: “two second-line managers on an expert system (a project of about 2500 function points) were political opponents. They both devoted the bulk of their energies to challenging and criticizing the work products of the opposite teams.” Not surprisingly, the project was abandoned after costing the company $1.5 million.
2.3 Incorrect analysis and assumptions – the Three Mile Island accident Incorrect assumptions can seem very obvious once they are thought about, but that does not stop them from creeping in. According to Neumann, a Gemini V rocket landed a hundred miles off course because of an error in the software. When calculating elapsed time since launch, the programmer treated the Earth’s position relative to the Sun as a fixed constant, not realising that the Earth does not return to the same point relative to the Sun 24 hours later. As a result the error accumulated while the rocket was in space. The Three Mile Island II nuclear accident, on 28th March 1979, was also blamed on assuming too much. The accident started in the cooling system when one of the pipes became blocked, causing the temperature of the fuel rods to rise from 600 degrees to over 4,000 degrees.
Instruments to measure the temperature of the reactor core were not standard equipment at the time; however, thermocouples had been installed and could measure high temperatures. Unfortunately, once the temperature exceeded 700 degrees the thermocouples had been programmed to produce a string of question marks instead of displaying the reading. After the reactor started to overheat, the turbines shut down automatically. This did not stop the rods from overheating, however, because someone had left the valves for the secondary cooling system closed, and there was no way of knowing this at the time since there was no reading of the reactor core temperature. Operators testified to the commission that there were so many valves that sometimes they would get left in the wrong position, even though their positions are supposed to be recorded and even padlocked. This is also a case of the designers blaming the operators and vice versa.
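The thermocouple behaviour described above amounts to masking a reading rather than reporting it, and can be sketched as follows (hypothetical code; only the 700-degree threshold is taken from the account above):

```python
# Hypothetical sketch of the Three Mile Island display logic: readings above
# a threshold are replaced with question marks, hiding exactly the values
# the operators most need to see in an emergency.

DISPLAY_LIMIT = 700  # degrees; threshold reported in the accident accounts

def display_temperature(reading):
    """Return the string shown to the operator for a thermocouple reading."""
    if reading > DISPLAY_LIMIT:
        return "????"            # out-of-range values are masked, not shown
    return f"{reading} deg"
```

A 650-degree reading is displayed normally, while a 4,000-degree reading – the one signalling a crisis – is reduced to question marks, leaving the operators blind at the worst possible moment.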
In the end the operators had to concede reluctantly that large valves do not close themselves. Petroski says, “Contemporaneous explanations of what was going on during the accident at Three Mile Island were as changeable as the weather forecasts, and even as the accident was in progress, computer models of the plant were being examined to try to figure it out.” Many assumptions had been made about how high the temperature of the reactor core could go and about the state of the valves in the secondary cooling system. This shows that even in an environment where safety is supposed to be the number one issue, people are still too busy to think about all the little things all the time, and high-pressure situations develop that compromise the safety of hundreds of thousands of people. It took until August 1993 for the site to be declared safe. Facts are taken from Neumann and Perrow. 2.4 Not properly tested software implemented in a high-risk environment – the London Ambulance Service The failure of the London Ambulance Service (LAS) on Monday and Tuesday 26 an ….