Client/Server technology is common across most infrastructures, and most of you are already familiar with it or have at least used it. It consists of at least a client and a server, along with a way for the two to communicate. The client can either host the application and do the processing locally, or serve as an interface to an application running on a server; likewise, the server either hosts the application and does the processing itself, or acts as a file store.
Either way, both sides are required for Client/Server technology. In the environment that I work in, one example of this technology is our Finance system: an application is installed on the local computer while the data lie on the server. Because the data are centrally managed, everyone’s information stays consistent.
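The pattern described above can be sketched in a few lines of Python. This is an illustrative toy, not any specific finance package: the server holds the data centrally, and the client acts as an interface that asks the server for records. The record names and values are made up.

```python
import socket
import threading

# Minimal sketch of the client/server pattern (illustrative only):
# the server holds the data centrally, and the client acts as an
# interface that requests records from it over a socket.

RECORDS = {"invoice-1001": "142.50"}  # the centrally managed data store

def serve_once(sock):
    """Server side: answer a single client request from the data store."""
    conn, _ = sock.accept()
    with conn:
        key = conn.recv(1024).decode()
        conn.sendall(RECORDS.get(key, "NOT FOUND").encode())

def fetch(port, key):
    """Client side: connect to the server and request one record."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(key.encode())
        return c.recv(1024).decode()
```

To try it, bind a listening socket on a free local port, run `serve_once` in a background thread, and call `fetch` with the same port. Because every client reads from the same `RECORDS` store, each user sees the same, consistent data.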
When the client serves as an interface to an application located on and processed by a server, the client is called a thin-client. A thin-client has no local hard drive; the only memory it has is RAM. The benefit of this configuration is that a thin-client is much cheaper than a regular personal computer. Deployment is also much easier because there is no operating system to configure and no software to install; all of that is served up by the server.
Since the application lives only on the server, updating it is also less time-consuming, since you have only one system to worry about. If there is a bug in an application or corruption occurs, the troubleshooting process is simplified for the same reason. Obviously there is a down-side to this configuration: what if the server fails completely? A single point of failure is the issue here, but there are solutions, and one that comes to mind is clustering. Clustering means having multiple servers, all configured identically; to the users the cluster appears as one server. When one server goes down the cluster carries on, giving you time to rebuild and restore the failed system. All of this is transparent to the users, unless your cluster’s resources were already stretched to the limit, in which case you might experience degraded performance.
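The failover behavior that makes a cluster look like "one server" can be sketched simply: the client tries identically configured nodes until one answers. Real clusters use shared virtual addresses and heartbeats, so this is only the client-visible idea; the node functions below are stand-ins, not a real clustering API.

```python
# Sketch of transparent failover: each "node" is one identically
# configured server in the cluster. The client just tries the next
# node when one is down, so the cluster appears as a single server.

def query_cluster(nodes, request):
    """Try each node in turn; failover is invisible to the caller."""
    for node in nodes:
        try:
            return node(request)       # first healthy node answers
        except ConnectionError:
            continue                   # this node is down -- try the next
    raise RuntimeError("entire cluster unavailable")

def healthy_node(request):
    return f"ok: {request}"

def failed_node(request):
    raise ConnectionError("node down")
```

Calling `query_cluster([failed_node, healthy_node], "balance")` succeeds even though the first node is down; only when every node fails does the user see an outage.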
Regardless, degraded performance is better than no performance. One down-side that is unavoidable is the cost of this configuration and of its maintenance. You will need very powerful servers in a large environment, and you will need skilled professionals to keep the cluster humming along. The more common configuration today, though perhaps not for much longer, is having the application loaded on each individual personal computer so that the processing is performed locally, with the data store located on the server(s).
The benefit of this configuration is a more balanced load between the client and the server.
Performance is normally higher in this configuration, and server maintenance is lighter. The cost of personal computers is so low these days that sufficiently powerful systems are attainable by most companies without breaking the bank. The down-side of this configuration is the initial roll-out of the application and the configuration of the personal computers. There is also another layer to the troubleshooting process: first you need to take into consideration the ID-Ten-T error, then determine whether the issue exists on the server or on the local computer.
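That troubleshooting order can be written down as a simple decision chain. The yes/no inputs and return labels below are illustrative assumptions about checks an administrator might make, not a formal procedure.

```python
def diagnose(reproducible_by_admin, server_reachable, local_app_ok):
    """Walk the extra troubleshooting layer in order.

    Inputs are illustrative yes/no checks, not a formal procedure.
    """
    # Step 1: rule out the ID-Ten-T (user) error first.
    if not reproducible_by_admin:
        return "user error"
    # Step 2: decide whether the problem lies with the server...
    if not server_reachable:
        return "server or network"
    # ...or with the local computer.
    if not local_app_ok:
        return "local computer"
    return "needs deeper investigation"
```

The point of the ordering is cost: confirming the problem is reproducible is cheap, while digging into the server or reimaging a workstation is not.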
Both examples of Client/Server technologies have their benefits and down-sides. Regardless of which configuration you are using, you must always account for data recovery. In both configurations the data lie on the server(s), and if there is a failure you need to be able to recover, or at least have your resume updated. Even with redundant systems, data recovery is ultimately necessary.
In my environment I restore “missing” data on a weekly basis; even though I can prove who deleted it and when, the spotlight still shines on me when emergencies arise. Backups are an administrator’s best friend.
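As a closing sketch, a simple backup routine might look like the following. The paths, retention count, and archive format are assumptions chosen for illustration, not a description of any particular backup product.

```python
import shutil
import time
from pathlib import Path

def backup(data_dir, backup_root, keep=4):
    """Archive the data directory with a timestamp and prune old copies.

    Illustrative sketch: real backup jobs would also verify archives
    and copy them off-site, but the keep-N-rotations idea is the same.
    """
    backup_root = Path(backup_root)
    backup_root.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    # shutil.make_archive appends ".zip" and returns the archive path.
    archive = shutil.make_archive(str(backup_root / stamp), "zip", data_dir)
    # Keep only the newest `keep` archives; delete the rest.
    for old in sorted(backup_root.glob("*.zip"))[:-keep]:
        old.unlink()
    return archive
```

Scheduled weekly (via cron or Task Scheduler), a routine like this is what turns "who deleted my file?" from an emergency into a restore.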