Objective

Using the Victorian Rural Ambulance Services Special Report No. 51 (the report), this paper will analyse different concepts of quality in an ambulance service and then explore how to define and measure quality in relation to the report. Included within this objective will be a review of the relevant literature relating to service quality. Furthermore, an analysis of the different methods of optimising location decisions will be undertaken. This will be followed by the application of two of the analysed methods to findings contained within the report that relate to the location of ambulance service resources. These analytical stages will be followed by conclusions and reflections.
Part 1: Literature Review

The notion of 'service quality' is attracting increasing attention from academics and industry alike. Indeed, this is not surprising when one considers that service industries are the new dominant force in economic activity. Yet, what is service quality and how can it be measured? Pioneers in the area define service quality in terms of customer satisfaction (Grönroos 1984; Parasuraman et al 1985), which is seen as the degree of commensurability between a customer's expectation and perception of a service. However, latterly, others have come to see service quality as a derivative of a comparison between performance and ideal standards (Teas 1993) or simply from the solitary perception of performance (Cronin & Taylor 1992).
Perhaps the only aspect on which there appears to be agreement is that service quality is distinct from customer satisfaction. Unsurprisingly, however, there is disagreement over which factor is the precursor within the relationship (Robinson 1999).
Whilst Silvestro et al (1990) report that, historically, the measurement of service quality has either been ignored or seen as too difficult (Voss 1985, Smith 1987), they offer empirical evidence to show that it is on the increase. Moreover, despite widespread disagreement on how best to measure service quality (Robinson 1999), a variety of techniques for measuring service quality have arisen during the past two decades. Whilst SERVQUAL is perhaps the most recognisable technique, others, such as the Technical/Functional approach and benchmarking, are widely practised. SERVQUAL has, since its creation, been the method most widely used to measure service quality. Developed by Parasuraman et al (1988), SERVQUAL attempts to measure quality by determining the difference between a customer's expectation and their actual perception of a service; a simple sketch of this gap calculation is given below.
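By way of illustration, the following minimal sketch (in Python, using hypothetical dimension names and 7-point scores; SERVQUAL proper uses 22 paired items across five dimensions) computes gap scores as perception minus expectation:

# Minimal SERVQUAL-style gap calculation (hypothetical survey data).
# Real SERVQUAL uses 22 paired items over five dimensions (tangibles,
# reliability, responsiveness, assurance, empathy); the 7-point Likert
# scores below are illustrative only.

expectations = {"reliability": 6.5, "responsiveness": 6.8, "empathy": 5.9}
perceptions  = {"reliability": 5.7, "responsiveness": 6.1, "empathy": 6.0}

gaps = {dim: perceptions[dim] - expectations[dim] for dim in expectations}
overall_gap = sum(gaps.values()) / len(gaps)

for dim, gap in gaps.items():
    print(f"{dim:>15}: gap = {gap:+.1f}")   # negative = perception falls short
print(f"{'overall':>15}: gap = {overall_gap:+.2f}")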
However, more recently, SERVQUAL has attracted mounting criticism. In a comprehensive review of the literature, Buttle (1996) highlights several shortcomings of SERVQUAL, as does Robinson (1999), who concludes that: "It is questionable whether SERVQUAL is a reliable measure of service quality or, indeed, whether it is measuring service quality at all." Yet simply to dismiss SERVQUAL as a means by which to measure quality would, at this stage, be somewhat hasty, if not ill-considered. Indeed, Youssef et al (1996), in a review of healthcare quality in NHS hospitals, found that SERVQUAL was the most suitable means by which to measure quality. As such, this would have comparative value to the case study. Conceived by Grönroos (1983), the Technical/Functional model of service quality concerns what is being delivered, the technical aspect, and how it is delivered, the functional aspect. Whilst not employed to the same extent as the SERVQUAL model, a recent study of the private banking industry by Lassar et al (2000) compares the two methods and concludes that the Technical/Functional model is better than SERVQUAL at predicting customer satisfaction. Benchmarking is "the practice of recognising and examining the best industrial and commercial practices in an industry or in the world and using this knowledge as the basis for improvement" (Naylor 1996).
Benchmarking is used worldwide and no doubt receives attention due to the tangible nature of its underlying methodology. Benchmarking has long been utilised in the UK NHS and, whilst it may therefore have comparable value to the case study, the benefits and potential pitfalls of the technique in this arena are well reported (Bullivant 1996).
Indeed, Black et al (2001) warn of the potential dangers of externally imposed benchmarks and an over-reliance on performance indicators as measures of quality. Again, the situation of externally imposed benchmarks has relevance to the case study, where standards are set by an external body. Whilst restrictions on space prevent discussion, it is worth noting that other techniques, such as the Six Sigma method (see Behara et al 1995) and a systems approach (see Johnson et al 1995), also exist to measure quality. In conclusion, whilst a universally agreeable definition of service quality remains elusive, it is clear that service quality generally equates to an, albeit indefinable, evaluation regarding the superiority of a service.
Indeed, any attempt at a more exacting description than this is futile. Furthermore, any attempt to conclude which system for measuring quality is most effective would be equally futile. The basis of a preference for a particular method may be as varied as its extolled merits and deficiencies, and would appear to depend on the prevailing circumstances.

Analysis

As Crosby (1979) famously stated, "Quality is free." Well, is it? This may have been true of 1960s manufacturing, where end consumers directly compensated the manufacturer for their product and, thus, the costs of implementing quality control processes were effectively covered by the financial savings brought about by lower defect rates, a reduced need for after-sales servicing and a decrease in warranty claims. However, in the arena of a 21st-century public sector service industry, where additional processes and requirements are met from a limited budget, the majority of which is provided by government funding, and where the consumer does not always directly compensate the service provider, it may be argued that quality is not free.
Indeed, it is apparent from the case study (s. 4.13) that the rural ambulance service is continually seeking a balance between the cost of providing a high quality service and the risk to the community should quality be reduced. Thus, in real terms, the provision of a quality service may be very costly. Consider, if you will, a situation where a rural ambulance service is required to meet previously defined response times. The only way these times may be achievable is by substantial investment in technology, or by an increase in the number and location of ambulance crews.
Neither of these solutions comes without a price, whether financial, as alluded to by Crosby (1979), or political. Yet, are response times the only indicator of quality and, if not, how can other aspects of quality be identified, defined and, thus, measured? Historically, response times have been the holy grail of emergency service performance indicators and, at the time of the report, the Victorian Rural Ambulance Service (VRAS) relied predominantly on this factor to measure its effectiveness (s. 4.25). However, problems with accurately obtaining this measurement, related to differing approaches and inconsistencies (see s. 4.26), serve to highlight the danger of relying on a single measurement. It would also appear from the report that VRAS places great emphasis on the evaluation of financial performance as a means by which to judge overall performance. Whilst this is also a useful indicator, there are growing concerns that an over-reliance on this measurement can lead to long-term problems (Kloot 1999).
The other measurement used by VRAS to monitor quality, as highlighted by s. 4.27 of the report, is the comparison of the number of external compliments and complaints received.
Whilst it is accepted that this method is only used to provide a broad indication, it must be noted that this measurement is unsolicited and, as such, reliant on the impulses of third parties. Indeed, if a patient is the recipient of poor service, they may not always be capable of raising a complaint at a later date. It is evident from the report that there is a preference for the use of benchmarking as the means by which quality is to be measured.
To this end, it is necessary to identify performance indicators capable of being benchmarked in order to assess quality. Indeed, as alluded to above, the accurate collection of data relating to response times would be one of several suitable measurements. However, whilst the measurement of response times is undoubtedly an important factor in providing a quality service, it is by no means the only factor. The VRAS's main responsibilities are given as the maintenance of a suitable standard of clinical care in providing the initial healthcare response, ensuring the operation of an effective communication system, and determining the location and availability of ambulance service resources in order to provide a timely and quality ambulance service (s. 2.4).
As such, any initial benchmarks should seek to measure the performance of these responsibilities. It is important to note that benchmarking is only as effective as the data on which it is based, and it is apparent from the report that, historically, there have been problems with collecting quality data. As such, a crucial first step after the identification of possible areas for benchmarking would be to establish means of accurate and consistent data collection relating to the respective areas. The importance of this step is recognised by Bullivant (1996) in his review of benchmarking in the UK NHS.
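As a simple illustration of such a check, the sketch below (with invented response times and an assumed 15-minute target, neither of which is drawn from the report) computes the proportion of responses meeting the benchmark:

# Hypothetical benchmark check: what proportion of emergency responses
# meet an assumed target? Times (minutes) and target are illustrative,
# not drawn from the report.

response_times = [8.2, 11.5, 14.9, 16.3, 9.7, 21.0, 12.4, 13.8, 17.5, 10.1]
target_minutes = 15.0

within_target = sum(t <= target_minutes for t in response_times)
compliance = within_target / len(response_times)

print(f"{compliance:.0%} of responses within {target_minutes:.0f} minutes")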
Furthermore, in relation to communications, the report highlights the current governmental objective to apply computer-aided dispatch systems to all of the State's emergency service operations in order to improve various aspects of the service's quality (s. 4.34-4.36).
This point neatly illustrates how technology can be used to improve quality. However, there can, of course, be no guarantee. One need look no further than the London Ambulance Service's well-publicised and disastrous attempts to computerise their dispatch system (Page et al 1993).
Indeed, this point also serves to emphasise the earlier submission that quality is not always free; take, for example, the huge cost of installing a new computer system. In relation to the standard of clinical care, the report identifies the importance of maintaining the required level of clinical expertise amongst ambulance officers. Yet, rather than merely maintaining a standard, continual improvement with a view to long-term development has been identified as an important factor in providing a quality service in a healthcare setting (Gummesson 2001).
Indeed, using the example of the Shouldice Hospital in Canada, Gummesson also points to patient follow-ups as a vital aspect in the provision of a quality service. Gummesson's (2001) point serves to illustrate the dangers of relying on internal benchmarks alone as a means by which to measure quality; a point well made by Black et al (2001).
Whilst internal benchmarking and performance measurements, many of which are provided in the report (see Table 6B), can promote accountability to stakeholders (Kloot 1999), they fail to measure the expectations and perceptions of those same stakeholders. It is noticeable from the report that one of the potential uses of benchmarking is given as 'making the community aware of the quality of service it can expect' (s. 4.15).
Indeed, as suggested by Robledo (2001), if service quality is dependent upon customer expectation then we ought to consider a more active approach to the management of customer expectation. It is strange, then, that whilst the report recognises the importance of customers' expectations, it stops short of suggesting that they be measured. Expanding the ideas of Roth and Giffi (1990), Everett et al (1997) conclude that "The customer specifies quality, and his or her satisfaction is the basis for measuring quality performance." As such, it may be argued that VRAS needs to employ one of the previously discussed techniques for capturing direct customer input, rather than relying on benchmarking alone, as would largely appear to be the case.
On this point, the report does highlight VRAS's commitment (at the time of the report) to obtain operational accreditation under the international quality standard ISO 9002. Accreditation will, if approved, ensure the continuing development of standard operating procedures which should, in turn, improve overall quality. Indeed, recent research led Dick et al (2002) to conclude that service firms which consider quality accreditation (ISO 9000) to be important "have an increased usage and emphasis on both the internal (conformance) and customer-based (exception-gap) quality measures."

Conclusions

So, how can we now define what quality is in the case of VRAS? Well, take your pick. However, before you do, remember to include aspects of timeliness, clinical excellence, continual professional development, communications, appropriateness of resource location, patient outcomes and, last but by no means least, customer satisfaction.
Furthermore, how can we measure quality in this case? Whilst a definitive answer remains elusive, it is apparent that the measurement must have both an internal focus and an external, customer-related, focus. Indeed, the fact that the precise measurement of service quality is by no means easy does not negate the responsibility to try. Today, VRAS has a well-established Clinical Quality Assurance programme and places great emphasis on the continual professional development of its employees. It also actively seeks feedback from those who have used the service. In times of global markets and increasing competition, the quality of service that one provides has become ever more crucial and so, therefore, has its measurement.
Indeed, it is apparent from the case study that, as well as financial pressure, VRAS is coming under increasing pressure to compete with the private sector. So whilst quality may not be free, it is certainly competitive. Indeed, the winner of this competition may well be the organisation best able to measure quality and then strike its harmonious balance with cost.

Part 2

The location of ambulance service resources is recognised in the report as 'perhaps the most important responsibility' under the control of VRAS.
Indeed, the report states: "Decisions concerning the positioning of resources in rural areas will have the most impact in achieving a balance between the quality of service delivered to the community and the cost of those services." (s. 4.47) So, what methods are available for optimising location decisions? Numerous models exist that seek to optimise location decisions and, whilst restrictions on space prevent a full analysis, it is appropriate to examine several of the more widely used methods. Perhaps the simplest is the Factor Rating Method. This involves devising a list of factors that need to be considered when selecting a location. Potential locations are then typically given a mark between 1 and 5 for each factor.
Factors can also be weighted according to their level of importance. Using this method, several locations can be compared and the best location, that with the highest score, chosen (a sketch of this calculation follows below). The Load-Distance Method is used to calculate the distance that loads must travel to and from a potential location; the method then selects a location that minimises this distance. Whilst the Load-Distance Method can be used to evaluate several potential sites, the Centre of Gravity Model can be used to find the general area in which a site should be located. The method uses linear equations based on transportation factors to calculate the ideal location; in essence, the ideal coordinates are the load-weighted means of the existing coordinates, x* = Σ(li·xi)/Σli and y* = Σ(li·yi)/Σli.
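A minimal sketch of the Factor Rating Method follows; the sites, factors, ratings and weights are entirely hypothetical (with response time weighted most heavily, echoing VRAS's responsibilities) and serve only to demonstrate the mechanics of weighted scoring:

# Factor Rating Method sketch: hypothetical sites, 1-5 ratings and
# weights. Note how weighting can reverse the unweighted result.

weights = {"response_time": 0.50, "site_cost": 0.25, "labour": 0.15, "expansion": 0.10}

sites = {
    "Site A": {"response_time": 2, "site_cost": 5, "labour": 5, "expansion": 5},
    "Site B": {"response_time": 5, "site_cost": 3, "labour": 3, "expansion": 2},
}

def weighted_score(ratings):
    return sum(weights[f] * ratings[f] for f in weights)

raw_best = max(sites, key=lambda s: sum(sites[s].values()))
weighted_best = max(sites, key=lambda s: weighted_score(sites[s]))
print("Unweighted preference:", raw_best)       # Site A (17 vs 13)
print("Weighted preference:  ", weighted_best)  # Site B (3.90 vs 3.50)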
A previous weakness of both the Load-Distance and Centre of Gravity models, as identified by Naylor (1996), has been their reliance on Euclidean, or straight-line, distance and, whilst computer software is now available to address this problem (Krajewski and Ritzman 1999), Naylor also establishes various other weaknesses, relating to the possible neglect of financial and humanistic factors, that remain valid. Break-Even Analysis is used to compare different location alternatives on the basis of total cost. It involves identifying the fixed and variable costs of each location and then calculating how these costs will vary with output; a decision is then made in favour of the most economic location at the projected level of activity, as sketched below. Again, this model has its weaknesses, in that it relies on the precise estimation of costs at different locations, which may not always be possible.
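The sketch below illustrates the break-even logic with hypothetical fixed and variable costs; the figures are invented for demonstration only:

# Break-even comparison of hypothetical locations: total annual cost as
# a function of expected workload (cases handled). All figures invented.

locations = {
    "Location X": {"fixed": 200_000, "variable": 45.0},  # $/year, $/case
    "Location Y": {"fixed": 320_000, "variable": 25.0},
}

def total_cost(loc, cases):
    c = locations[loc]
    return c["fixed"] + c["variable"] * cases

for cases in (3_000, 6_500, 9_000):
    cheapest = min(locations, key=lambda l: total_cost(l, cases))
    print(f"{cases} cases/year -> {cheapest}")

# Break-even volume where the two cost lines cross:
x, y = locations["Location X"], locations["Location Y"]
breakeven = (y["fixed"] - x["fixed"]) / (x["variable"] - y["variable"])
print(f"Break-even at {breakeven:.0f} cases/year")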
Schmenner (1994) notes that, historically, the field of operations management has largely ignored service industries, concentrating instead on those factors that most influence the location of manufacturing facilities. As such, Schmenner proposes a service facility location model based on choosing a 'general area' for the service operation and then a 'particular site'. Influences on these decisions are described as 'musts' or 'wants'. Fig. 1 – Model to show location decision making of service firms. It is noticeable that all of the approaches outlined above pay little attention to long-term strategic planning when choosing where to locate a facility. Indeed, this may be a criticism of all of the models.
As Owen and Daskin (1998) conclude: "Decision makers must select sites that will not simply perform well according to the current system state, but that will continue to be profitable for the facility's lifetime, even as environmental factors change, populations shift, and market trends evolve." Indeed, as if to emphasise this point, the report highlights the need for additional ambulance stations at Romsey, in the North West region, due to population expansion (s. 4.49).
Finally, different authors have drawn up differing lists of the factors they consider to be influential on facility location, some of which are represented in Table 1 below.

Table 1 – Factors influencing facility location.

Lockyer et al (1988): proximity to market; integration with other organisations; availability of labour and skills; availability of amenities; availability of transport; availability of inputs; availability of services; suitability of land and climate; regional regulations; room for expansion; safety requirements; site costs; political, economic and cultural situation; special grants, regional taxes and import/export barriers.

Naylor (1996): economic policies of governments at supranational, national, regional and local level; international risks; raw material and energy resources; location of markets; transport links; climate and quality of life; labour supply and training opportunities; competitors and allies; availability of sites.

Krajewski & Ritzman (1999): favourable labour climate; proximity to markets; quality of life; proximity to suppliers; proximity to resources; proximity to parent company's facilities; utilities, taxes and real estate costs.
In summary, from the views represented above, it would appear that there is a consensus that markets, labour, resources and financial implications largely influence facility location. Yet, are these factors equally applicable to service industries? It has been recognised that the criteria for choosing the location of service facilities differ from those used in manufacturing (Evans 1993, Krajewski and Ritzman 1999).
For example, it is obvious that service facility location decisions need not be concerned with issues relating to product distribution or proximity to raw materials. In his study of the dominant factors in the location of 926 American Midwestern service firms, of a variety of sizes and types, Schmenner (1994) determines that infrastructure, proximity to customers and the ability to attract quality labour are the three most important 'general area' influences, whilst the chief influences on particular site choice include adequate parking, an attractive building, attractive rent or costs, and specialised space needs. Indeed, to take this argument a stage further, are these factors equally applicable in the arena of the emergency services? It has been suggested that the only factor of real importance in the provision of emergency services is response time (Noori and Radford 1995).
Using response time as the solitary influence, the location of an emergency service facility can be determined by calculating the travelling times from all possible locations to all those areas that require emergency service coverage (Evans 1993).
So what of other possible solitary influences on emergency service facility location? We may take the example of locating an Operations Centre at one of the service branches located within VRAS's Area 8 (see Map 1 above), using the solitary objective that the maximum travelling time to any other branch should be the smallest.
Fig. 2 (see Appendix 1) determines this location by measuring the distances between the eleven possible locations, as provided by Map 1, and then calculating the travelling times. It is, in effect, an adaptation of the Centre of Gravity Model based solely on travelling time. Using this criterion, it can be seen that Leongatha would be the ideal location.
Indeed, if we look at Map 1, it is no coincidence that Leongatha is the most central of the eleven locations. Alternatively, if we use lowest average travelling time as the only criterion, it can be seen that Mirboo North becomes the ideal location. This occurs because, whilst Mirboo North is not the most centrally located branch, it is closer to a higher concentration of branches than Leongatha. Indeed, Figure 3 illustrates the differences that occur when the criterion of average, as opposed to smallest maximum, travelling time is used. However, it can also be seen from Map 1 that the operations centre for Area 8 is actually located at Morwell.
So, whilst our example was largely hypothetical, this would nevertheless suggest that travelling times alone are not the only factors that influence emergency service resource location; a sketch of the two criteria is given below. Indeed, this point is truer still if we consider that not all emergency service resources physically provide emergency assistance. We have used hypothetical criteria to locate the operations centre, yet it could feasibly be located anywhere within the area, provided it could communicate as required. Fig. 2 – Charts to show the distances and travelling times between Area 8 branch locations. Figure 3 – Chart to show Areas in rank order.
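The sketch below demonstrates the two criteria, smallest maximum and lowest average travelling time, on an invented four-branch distance matrix with an assumed average speed; the paper's own figures use the distances measured from Map 1:

# Adaptation of the Centre of Gravity idea used in Fig. 2: choose the
# branch that minimises either the maximum or the average travelling
# time to all other branches. Distance matrix and speed are hypothetical.

distances_km = {  # symmetric straight-line distances between branches
    ("A", "B"): 20, ("A", "C"): 60, ("A", "D"): 70,
    ("B", "C"): 50, ("B", "D"): 55, ("C", "D"): 10,
}
branches = ["A", "B", "C", "D"]
SPEED_KMH = 80  # assumed average road speed

def dist(a, b):
    return 0 if a == b else distances_km.get((a, b), distances_km.get((b, a)))

def travel_times(origin):
    return [dist(origin, b) / SPEED_KMH * 60 for b in branches if b != origin]

minimax = min(branches, key=lambda b: max(travel_times(b)))
min_avg = min(branches, key=lambda b: sum(travel_times(b)) / (len(branches) - 1))
print("Smallest maximum travelling time:", minimax)  # branch B
print("Lowest average travelling time:  ", min_avg)  # branch C

Note that, as in the Leongatha and Mirboo North example, the two criteria can select different branches.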
Whilst response time influences are no doubt an overriding factor in the location of ambulance stations, simply to negate the effect of any other influence, however slight, would be imprudent. Indeed, whilst it is apparent that some of the methods outlined above are unsuitable for service facilities, the Factor Rating Method can readily address the situation by applying appropriate weightings to response time influences. We also need to define exactly what we are attempting to optimally locate. Whilst the report recommends the reduction of communication centres from 5 to 2, to date, some 5 years later, this has not been done (VRAS website).
As such, it would appear appropriate to define criteria for the location of the 113 branches that are currently located throughout rural Victoria. It can be seen from the previous map that the branches are fairly uniformly spread throughout the state, with the noticeable exceptions of areas 7 and 4.
Indeed, Australia being as it is, this may simply be because these areas are uninhabited, or so sparsely populated that it is not deemed financially justifiable to locate resources there. Whatever the reason, it highlights the problem of locating limited resources. Using four fictitious potential sites, Figure 4 below shows eleven factors that have been identified as influencing the location of ambulance stations. It can be seen that the factors that would potentially influence response time have been allocated greatest importance, closely followed by financial factors. This serves to reflect the responsibilities of VRAS, as stated earlier.
Fig. 4 – Factors influencing the location of VRAS branches. Indeed, Figure 4 illustrates the importance that weightings can have on the overall outcome of a location decision. It is noticeable that before the application of weightings Area 4 was most desirable; however, Area 1 becomes most desirable once the weightings have been applied. Applying Schmenner's (1994) service-specific facility location model (see Figure 5 below), it can be seen that the area and site 'musts' and 'wants' are comparable to those factors weighted most heavily in the Factor Rating Method. It is no coincidence that the factors relating to response times, and those factors with financial implications, take precedence in both applications. Fig. 5 – Model to show location of VRAS stations.
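A sketch of Schmenner-style screening is given below; the sites, 'musts' and 'wants' are hypothetical and simply illustrate that sites failing any 'must' are eliminated before the 'wants' are compared:

# Schmenner-style 'musts' and 'wants' screening (hypothetical sites and
# criteria). Sites failing any 'must' are eliminated; survivors are
# ranked on how many 'wants' they satisfy.

sites = {
    "Area 1": {"within_response_limit": True,  "affordable": True,  "room_to_expand": True},
    "Area 2": {"within_response_limit": False, "affordable": True,  "room_to_expand": True},
    "Area 3": {"within_response_limit": True,  "affordable": True,  "room_to_expand": False},
}
musts = ["within_response_limit", "affordable"]
wants = ["room_to_expand"]

feasible = [s for s, attrs in sites.items() if all(attrs[m] for m in musts)]
ranked = sorted(feasible, key=lambda s: sum(sites[s][w] for w in wants), reverse=True)
print("Feasible sites:", feasible)   # Area 2 is screened out
print("Ranked by wants:", ranked)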
Whilst we have looked at those factors that may influence facility location, and at the methods that can be employed to determine it, it is a fact of facility location that opening one facility can often necessitate closing another. Indeed, in the field of Victorian rural ambulance location, where ambulance resources are distributed over significant geographical areas, the operations centres are regularly required to move ambulance resources around the state to maximise operational coverage (VRAS website).
As such, what factors should be considered when selecting a plant or facility for closure? In an examination of multi-plant manufacturing firms, Kirkham et al (1998) suggest that the key factors in selecting sites for closure are the size of the site, a limited range of activities at the site, difficulties of access and expansion, labour problems, the age of capital equipment (both machinery and buildings) and the site's distance from head office. However, these factors relate to manufacturing firms and are, therefore, not wholly applicable to service industries or, indeed, the emergency services. Thus, whilst some of the factors may have a bearing on the location of operations centres or communication centres, they would appear to have relatively little influence on the selection of ambulance stations for possible closure.
Again, it is apparent that response time influences would take precedence.

Conclusions

The criteria for locating emergency service facilities are dependent upon their type. Operations centres and communications centres may well opt for central locations on existing sites for financial reasons; yet, in the age of reliable, high-technology communication systems, their location is not so fundamental. The real issue is the location of those emergency service resources that are required to provide a physical response. Herein lies a problem which, although not unique to Australia, is perhaps most associated with it.
A factor, often disguised by the distorted image provided by the maps we so readily relate to, is the sheer size and sparsely populated nature of large tracts of the country. Let us not forget that, in terms of size, the state of Victoria is larger than the UK. As such, great reliance is placed on the 'flying doctor' who is, in a vast number of rural cases, the only viable means by which to transport patients to hospital. However, whilst the problems routinely encountered by Australian emergency services may differ from those of UK services, facility location for the provision of emergency response is still overwhelmingly dependent on response times.
As such, many of the facility location models are of little use here and we are forced, instead, to look to an adaptation of the Centre of Gravity Model based solely on travelling time. Whilst it is true of all companies that great importance should be attached to facility location decisions, it is perhaps of even greater importance when considering the location of ambulance facilities, which operate on limited budgets and where errors may lead not simply to financial loss, but to the loss of life.

Reflection

Having virtually no direct operations management experience, and perhaps a limited foreseeable use for its wisdom, before the commencement of the module I was somewhat sceptical about its relevance, and thus its benefit, to me. However, having read Goldratt's "The Goal" and commenced my study of the module, my understanding and, equally, my interest in the subject increased demonstrably. Indeed, I now see applications of operations management knowledge in areas that I had never previously perceived; so much so that I find myself subconsciously identifying and evaluating constraints and service quality in places as innocuous as supermarkets and football stadiums.
In this sense, one may conclude that the study of operations management has saddled me with a burden that I must now carry with me wherever I go. However, to believe this would be to underestimate the importance and relevance of my recently gained knowledge. Whilst initially couched mainly in terms of manufacturing, my additional study, especially in the area of quality, has seen an expansion of my operations management thinking into the service sector industries with which I am more personally familiar. Indeed, I actually found the production of the coursework enjoyable and rewarding. Upon reflection, it is apparent that this enjoyment was borne of an enthusiasm fostered by my learning experience. Having previously been identified as an accommodator (Kolb 1984), my learning experience on this module would add weight to that finding.
That is to say, following a period of reflection, I applied my recently gained knowledge in order to complete this assignment. Indeed, the feedback provided after the completion of the first assignment was of valuable assistance in this reflective stage. Expanding this further, looking at Kolb's Learning Cycle (1984) below, it can be seen how the four stages of the model apply to my learning experience within the operations management module. Again, we can see the instrumental role played by the period of reflection.
Having recently come from a regulatory background, I was used to issues being black and white and, as such, I have enjoyed the considerable opportunities to debate issues that have arisen within the module. On reflection, it is obvious that my first degree, in law, has been a considerable influence on my style. Indeed, this style has helped develop and deepen my understanding by ensuring a full review and comparison of the appropriate issues.

References

Behara, R. S., Fontenot, G. F. and Gresham, A. (1995), "Customer satisfaction measurement and analysis using six sigma", International Journal of Quality & Reliability Management, Vol. 12 No. 3, pp. 9-18.
Black, S., Briggs, S. and Keogh, W. (2001), "Service quality performance measurement in public/private sectors", Managerial Auditing Journal, 16/7, pp. 400-405.
Bullivant, J. N. R. (1996), "Benchmarking in the UK National Health Service", International Journal of Health Care Quality Assurance, 9/2, pp. 9-14.
Buttle, F. (1996), "SERVQUAL: review, critique, research agenda", European Journal of Marketing, Vol. 30 No. 1, pp. 8-32.
Cronin, J. J. and Taylor, S. A. (1992), "Measuring service quality: a reexamination and extension", Journal of Marketing, Vol. 56, July, pp. 55-68.
Crosby, P. B. (1979), "Quality is Free", McGraw Hill.
Dick, G., Gallimore, K. and Brown, J. C. (2002), "Does ISO 9000 accreditation make a profound difference to the way service quality is perceived and measured?", Managing Service Quality, Vol. 12 No. 1, pp. 30-42.
Evans, J. R. (1993), "Applied Production and Operations Management", 4th Ed., West, Minneapolis.
Everett, A. and Lawrence, C. (1997), "An international study of quality improvement approach and firm performance", International Journal of Operations & Production Management, Vol. 17 No. 9, pp. 842-873.
Grönroos, C. (1983), "Strategic Management and Marketing in the Service Sector" (report no. 83-104), Marketing Science Institute, Cambridge MA, cited in Lassar, M. A., Manolis, C. and Winsor, R. D. (2000), "Service quality perspectives and satisfaction in private banking", Journal of Services Marketing, Vol. 14 No. 3, pp. 244-271.
Grönroos, C. (1984), "A service quality model and its marketing implications", European Journal of Marketing, Vol. 18 No. 4, pp. 36-44.
Gummesson, E. (2001), "Are you looking forward to your surgery?", Managing Service Quality, Vol. 11 No. 1, pp. 7-9.
Johnson, R. L., Tsiros, M. and Lancioni, R. A. (1995), "Measuring service quality: a systems approach", Journal of Services Marketing, Vol. 9 No. 5, pp. 6-19.
Kirkham, J. D., Richbell, S. M. and Watts, H. D. (1998), "Downsizing and facility location: plant closures in multi-plant manufacturing firms", Management Decision, 36/3, pp. 189-197.
Kloot, L. (1999), "Performance measurement and accountability in Victorian local government", International Journal of Public Sector Management, Vol. 12 No. 7, pp. 565-583.
Kolb, D. A. (1984), "Experiential Learning: Experience as the Source of Learning and Development", Prentice Hall, Englewood Cliffs.
Krajewski, L. J. and Ritzman, L. P. (1999), "Operations Management: Strategy & Analysis", 5th Ed., Addison-Wesley, Reading MA.
Lassar, M. A., Manolis, C. and Winsor, R. D. (2000), "Service quality perspectives and satisfaction in private banking", Journal of Services Marketing, Vol. 14 No. 3, pp. 244-271.
Lockyer, K., Muhlemann, A. and Oakland, J. (1988), "Production and Operations Management", 5th Ed., Pitman.
Naylor, J. (1996), "Operations Management", Financial Times/Prentice Hall, London.
Noori, H. and Radford, R. (1995), "Production and Operations Management", McGraw Hill, New York.
Owen, S. H. and Daskin, M. S. (1998), "Strategic facility location: a review", European Journal of Operational Research, 111, pp. 423-447.
Page, D., Williams, P. and Boyd, D. (1993), "Report of the Public Inquiry into the London Ambulance Service", South West RHA.
Parasuraman, A., Zeithaml, V. A. and Berry, L. L. (1985), "A conceptual model of service quality and its implications for future research", Journal of Marketing, Vol. 49 No. 4, pp. 41-50.
Parasuraman, A., Zeithaml, V. A. and Berry, L. L. (1988), "SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality", Journal of Retailing, Vol. 64 No. 1, pp. 12-40.
Robinson, S. (1999), "Measuring service quality: current thinking and future requirements", Marketing Intelligence & Planning, 17/1, pp. 21-32.
Robledo, M. A. (2001), "Measuring and managing service quality: integrating customer expectations", Managing Service Quality, Vol. 11 No. 1, pp. 22-31.
Roth, A. V. and Giffi, C. A. (1990), "Critical factors for achieving world class manufacturing", Operations Management Review, Vol. 10 No. 2, pp. 1-29.
Schmenner, R. W. (1994), "Service firm location decisions: some Midwestern evidence", International Journal of Service Industry Management, Vol. 5 No. 3, pp. 35-56.
Silvestro, R., Johnston, R., Fitzgerald, L. and Voss, C. (1990), "Quality measurement in service industries", International Journal of Service Industry Management, Vol. 1 No. 2, pp. 54-66.
Smith, S. (1987), "How to quantify quality", Management Today, October 1987.
Teas, R. K. (1993), "Expectations, performance evaluation, and customers' perceptions of quality", Journal of Marketing, Vol. 57 No. 4, pp. 18-34.
Voss, C. (1985), "Field service management", in Voss, C., Armistead, C., Johnston, B. and Morris, B., Operations Management in Service Industries and the Public Sector, John Wiley & Sons.
VRAS website: web >
Youssef, F. N., Nel, D. and Bovaird, T. (1996), "Health care quality in NHS hospitals", International Journal of Health Care Quality Assurance, 9/1, pp. 15-28.

Appendix 1

All the distances contained within the charts are actual distances that have been calculated using Microsoft Atlas. An example, showing a measurement between Korumburra and Traralgon, is given below.
It is recognised that straight-line distance has been used and, therefore, the distances take no account of geographic constraints. Using these distances, the travelling times have been calculated using the standard formula: time equals distance divided by speed. As such, the travelling times are not actual 'on the road' times and, thus, can only be used for demonstrative purposes. Microsoft Atlas – used to calculate distances.
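For completeness, the calculation reduces to the following (the distance and average speed values are assumed for illustration, not taken from the charts):

# Travelling-time estimate used in Appendix 1: straight-line distance
# divided by an assumed average speed, converted to minutes.

def travel_minutes(distance_km, speed_kmh=80):
    """Return the estimated travelling time in minutes."""
    return distance_km / speed_kmh * 60

print(travel_minutes(48))  # e.g. 48 km at 80 km/h -> 36.0 minutes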