Contents
- Pre-requisites
- Independent variable
- Dependent variable
- Data management
- Analysis of survey results
- Mapping landscape quality
- Conclusions
- References
In this section, the Community Preferences Method for landscape quality assessment is described in detail. This provides a “how to do it” manual for anyone to measure and map landscape quality. It draws together the basic method plus the experience of having applied it in ten landscape quality assessment projects together with three assessments of the visual impacts of developments.
Although the Community Preferences Method is not overly complicated or technical, it does require close attention to detail and a guard against errors creeping in at any stage. Quality control is paramount – constantly checking calculations and cross-checking. Furthermore, the Community Preferences Method requires considerable work. It is estimated that a typical project such as the Lake District or the Mt Lofty Ranges surveys each took around 1,500 hours, much of it in taking the photographs and in report preparation. The statistical analysis part was relatively minor, around 50 hours. Mapping often has to be fitted into the agency’s other priorities so can further extend finalization of the project.
Pre-requisites for carrying out a landscape quality assessment project include the following:
- A sponsoring agency to pay for the project and to lend its legitimacy to it when inviting people to participate. People from the agency may also assist throughout the project by distributing invitations to participate in rating the scenes, in scoring the landscape components, and in mapping the results. The sponsoring organization would also utilize the results of the project upon its completion.
- A list of organizations that can be contacted by email and invited to participate in the survey. This involves compiling a list of individuals and organizations – both government and non-government.
- Access to the area to be investigated. Normally, this will involve a vehicle, but it can also require walking over parts of the area and gaining access by boat. Fine sunny cloud-free weather is preferred during photography sessions.
- Photographic equipment. A digital single-lens reflex (DSLR) camera is best as it enables the focal length of the lens to be set at 50 mm, which approximates human vision and also standardizes the photographs. In most simpler digital cameras and smart phones, the focal length cannot be set. Some researchers are now using a 75 mm lens as it better represents the landscape as seen.
- Computer and statistical programs. Excel is sufficient for much of the analysis and is excellent for producing graphs; with the Data Analysis Tools add-in installed, it can carry out multiple regression, ANOVAs, F-tests and t-tests. Statistical programs such as SPSS, SAS, R, GenStat or Minitab may also be used, and many open-source statistical programs are available – see Wikipedia for hyperlinked lists and for a comparison of statistical packages, both open source and proprietary, showing the range of statistical functions each performs. The project also needs access to statistical knowledge – how to use statistics to analyze the data gathered in the survey.
- Access to a GIS facility to map the landscape quality based on the survey’s results. Ideally, this will be the sponsoring agency.
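As an illustration of the kind of test the statistical tools mentioned above perform, the sketch below computes Welch's t statistic for comparing the mean ratings of two groups of respondents. The ratings are invented purely for illustration; a statistical package would also report the corresponding p-value.

```python
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples of ratings."""
    var_a = statistics.variance(sample_a)  # sample variance, group A
    var_b = statistics.variance(sample_b)  # sample variance, group B
    se = (var_a / len(sample_a) + var_b / len(sample_b)) ** 0.5
    return (statistics.mean(sample_a) - statistics.mean(sample_b)) / se

# Hypothetical mean ratings of one scene from two respondent groups.
group_a = [6.0, 7.0, 6.5, 7.5, 6.8]
group_b = [5.0, 5.5, 6.0, 5.2, 5.8]
t = welch_t(group_a, group_b)  # positive: group A rated the scene higher
```

A large positive or negative t indicates the two groups' mean ratings differ by more than their sampling variability would suggest.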
Figure 1 summarizes the Community Preferences Method from beginning through to its completion.
Figure 1 Community Preferences Method
The basic model of the method is shown in Figure 2. The landscape is the independent variable because it remains constant regardless of who views, rates, measures or examines it. Human observers, on the other hand, are not constant; their preferences and reactions to a landscape can vary widely, influenced by a range of personal, interpersonal, cultural and other factors.
Figure 2 Landscape assessment components
The independent and the dependent variables are examined below.
The independent variable comprises the landscape to be assessed in the survey region. It involves photographing the landscapes and selecting, based on landscape units, scenes for inclusion in the survey. These may include benchmark photos to relate the survey area to the wider region. It also involves measuring the landscape components present in the landscapes.
Photographs are surrogates for an on-site assessment of the landscape. A review of the literature regarding the use of photographs established that, provided they meet specified criteria, the preferences recorded from photographs will be similar to in situ ratings of the landscape.
Few studies provide guidance for photographs, but it is essential to ensure that, as far as practical, the photographs are standardized to minimize variations other than in the landscape they represent. In this way, the ratings will be of the quality of the landscape, not the quality of its representation by a photograph. The criteria are: color (not black and white), 50 mm focal length, landscape (i.e. horizontal) format, extending the scene to the horizon, providing lateral and foreground context of a single landscape unit, sunny cloud-free conditions, avoiding composition, avoiding extraneous and transitory features, and photographing from ground level, not from the air. These criteria are illustrated below.
Photograph in color, not black and white
Black and white photographs emphasize the formalist qualities but lose the life-giving quality that color conveys (Figure 3). Shuttleworth (1980) found that black and white photographs gave more extreme ratings and had lower correlations with field assessments than color.
Figure 3 Color Versus black and white images
Photograph at 50 mm focal length, which represents what the eye sees. Photographs at 35 mm render objects very small (Sevenant & Antrop, 2011). For a technical explanation of why the 50 mm lens is best, see Banks et al (2014).
A growing number of researchers, however, are using 75 mm lenses as these seem to better represent the landscape as seen. This is particularly important for surveys involving wind farms: the 75 mm lens lifts them in the landscape, whereas with the 50 mm lens they appear diminutive. This is an area requiring further research, as the longer focal length may provide more life-like images. MacDonald (2015) provides a comprehensive account of this.
The focal length of a digital (crop-sensor) camera is multiplied by 1.5 to give the equivalent on a conventional 35 mm camera (e.g. a digital 35 mm lens equates to a 52.5 mm lens on a conventional camera). While covering a wider landscape, wider angles render distant objects small and somewhat insignificant, which can affect ratings of the scene (Figure 4).
Figure 4 Influence of the focal length on the image (near Zermatt, Switzerland)
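The 1.5× conversion above can be expressed as a one-line calculation. The crop factor of 1.5 is the figure stated in the text and applies to typical APS-C sensors; other sensor sizes have different factors.

```python
CROP_FACTOR = 1.5  # typical APS-C digital sensor, as stated in the text

def equivalent_focal_length(digital_mm: float) -> float:
    """35 mm (full-frame) equivalent of a crop-sensor focal length."""
    return digital_mm * CROP_FACTOR

# A 35 mm lens on a crop-sensor camera behaves like a 52.5 mm lens
# on a conventional 35 mm camera.
print(equivalent_focal_length(35))  # 52.5
```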
A digital single-lens reflex (DSLR) camera utilizes the standard 3:2 aspect ratio (i.e. width:height) of 35 mm photographs, based on the former 36 mm × 24 mm image size. While DSLR cameras usually have this ratio (e.g. 3216 × 2136 pixels), smaller digital cameras and smart phones use a squarer 4:3 ratio to match computer display screens (Figure 5).
Figure 5 Comparison of image sizes with digital cameras
If spliced photos are used, avoid combining more than two as the resulting image is very wide and thin (Figure 6).
Two photos spliced
Three photos spliced
Figure 6 Spliced photos
Photograph in the horizontal landscape format, not the vertical portrait format (Nassauer, 1983). The issue with format is consistency in standardizing the survey scenes; the two formats should not be mixed in the same survey (Figure 7). The photographs should extend where possible to the horizon and avoid close-up confined views. Include some sky to help convey the landscape's character.
Figure 7 Horizontal versus vertical format
Avoid photographic composition of a scene to frame a view or to lead the viewer into a scene; such composition can enhance its appearance and increase its rating (Figure 8). Aim for good lateral and foreground context to scenes, of a single landscape unit, and of typical representative scenes, not unusual (i.e. rare) scenes.
Figure 8 Trees are beautiful but should not be used to frame or lead into the scene
Although Law & Zube (1983) found that framing the scene had no influence on ratings, Svobodova et al (2014) found that compositions based on the Golden Section and the Rule of Thirds (the image is divided into three horizontal sections and three vertical sections creating points of interest at their intersections), together with the position of the horizon in the photograph significantly influenced ratings of the scene. Placing positive elements at the intersection points significantly increased the ratings of the landscape but placing negative elements on these points made negative ratings even more negative. Moreover, placing the horizon in the lower third of the photo (thus increasing the dominance of the sky) reduced the ratings of the scene but having the landscape fill at least half the scene increased the rating.
Minimize extraneous features such as people, sheep or cattle, wildlife, fences, electricity poles and wires, and excavations or other eyesores, each of which can influence preferences either positively or negatively (Figure 9). Hull & McCarthy (1988) found wildlife in the scene lifted preferences slightly. Scenes should not include features of an ephemeral nature as these are not part of the permanent landscape scene. Simplify, simplify, simplify!
Figure 9 Minimize extraneous features
Where necessary, remove such objects from photographs digitally. In a study of urban familiar places, Herzog et al (1976) included a scene with an “inadvertent speck” which turned out on closer examination to be a young woman in a miniskirt. They wrote: “A typical reaction from male subjects was ‘Wow! Look at that chick in the miniskirt!’ The scene loaded 0.51 on the Entertainment dimension and 0.37 on the Commercial dimension. Clearly, the decision to exclude people from the scenes was a wise one.”!
Avoid transitory effects of special atmospheric lighting such as sunsets or particularly vivid side lighting (Figure 10). Heavy cloud dampens the color saturation, while spectacular cloud formations can enhance the scene. The rating of a sunlit scene with extensive cloud cover averaged 1.2 lower (on a 1 – 10 scale) than cloudless scenes (Lothian, 2000). Interestingly, however, scenes with a few scattered clouds averaged nearly 0.1 higher than cloudless scenes. Where clouds are present, ensure that the landscape is sunlit. Herzog & Bosley (1992) argued that mist and haze reduce the clarity of the scene and its understandability (in Kaplan's terms), which would lower ratings. This finding supports standardizing scenes with cloud-free conditions.
Figure 10 Avoid transitory effects of special atmospheric lighting
Aim for sunny cloud-free conditions to standardize scenes against a blue sky. Avoiding the strong side lighting of morning and evening reduces the potential time to around six hours. Avoid heavy cloud though some clouds are acceptable providing they do not distract from the landscape. Figure 11 shows the same scene under a range of conditions.
Figure 11 Aim for sunny cloud-free conditions
High sun angle
A further consideration is the low angle of the sun in winter. Figure 12 shows the sun angle for mid-summer and mid-winter for Adelaide, Australia and Vancouver, Canada. While the literature usually suggests taking photographs between 10 am and 4 pm (i.e. 1000 – 1600 hours), the graphs indicate that for Vancouver in mid-winter the sun angle lies between 0° and 16°, with the sun setting before 4 pm; photographs there in mid-winter would be restricted to between 10 am and 2 pm. The maximum sun angle, only 16° in mid-winter, rises to 63° in mid-summer. In mid-summer, when the sun rises at 4 am and sets after 8 pm, photographs could be taken between, say, 6 am and 6 pm.
Figure 12 Sun angle for 22 June and 22 December, Adelaide and Vancouver
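The noon sun angles quoted above can be approximated from latitude and date alone. The sketch below uses a standard approximation for solar declination; the latitude and day numbers are illustrative and the result is accurate only to a degree or so.

```python
import math

def noon_sun_elevation(latitude_deg: float, day_of_year: int) -> float:
    """Approximate solar elevation (degrees) at local solar noon."""
    # Approximate solar declination; day 81 is near the March equinox.
    declination = 23.44 * math.sin(
        math.radians(360.0 / 365.0 * (day_of_year - 81)))
    # At solar noon, elevation = 90 minus the angular distance between
    # the observer's latitude and the sun's declination.
    return 90.0 - abs(latitude_deg - declination)

# Vancouver (49.3 deg N): about 17 deg at the winter solstice (day 356)
# and about 64 deg at the summer solstice (day 173), close to the
# mid-winter 16 deg and mid-summer 63 deg figures quoted above.
winter = noon_sun_elevation(49.3, 356)
summer = noon_sun_elevation(49.3, 173)
```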
It is therefore difficult to be prescriptive about what sun angle is appropriate; each location needs to be evaluated on its own requirements. If the survey must be completed within a certain period of time, and that period falls in winter, then it is a matter of making the best of the conditions. This was the situation for the author's River Murray survey, in which the photography was carried out from May to August, winter-time; fortunately, it was very dry and sunny during much of this period.
Problems caused by the low sun angle for photography include the long shadows, the strong back lighting and the loss of features, and the strong reflections off water (Figure 13). The presence of heavy cloud cover, mist and rain in winter further reduces photographic opportunities.
Figure 13 Scenes with back lighting
Photograph from eye level from the ground. Using a tripod will usually place the camera well below eye level and the scene will include more of the immediate foreground. Scenes can be included from hills and mountain tops of the valleys and vistas below, but these scenes should include some foreground to provide context as otherwise the scene can appear as though it was taken from an aircraft (Figure 14). Photographs are not normally to be taken from the air as this is not the usual way the landscape is viewed. However, where aerial oblique photographs are used, ground based scenes should also be included for comparison of ratings (Ramsay, 1992).
Figure 14 Include foreground in vista photographs
Overall, the ratings should reflect the quality of the scene, not the quality of the photograph. Scott & Canter (1997) showed the importance of asking participants to rate the scene, not the photograph. Standardizing photographs as far as possible through the application of these criteria will assist in ensuring this is achieved.
In some instances, it may be necessary to draw from existing collections of photographs. Caution is needed to avoid selecting photographs which are well composed, have appealing lighting or clouds, or have people or other extraneous features. The author’s experience is that over 95% of such collections will be rejected because of such defects (Lothian, 2000, 2009, 2013).
With Photoshop and similar programs, photographs can be altered, for example to remove extraneous objects. While this can be used to remove electricity poles and the like, such manipulation risks the photograph ceasing to represent accurately the landscape and should therefore be used minimally to edit out unnecessary objects and not to change colors or remove intrinsic features of the landscape.
Photographing a region
When photographing a region, aim to traverse as many roads as are accessible throughout the area. Photographing always involves balancing the need to gain sufficient coverage of the region with the time involved and also accomplishing what can be done in the hours of daylight available. Waiting for the right weather conditions can be trying; when conditions are right, time is of the essence. In large regions where considerable travel is necessary, much time can be spent in reaching the destination. Photographs should not be taken from the vehicle; get out of the vehicle and photograph over the fence if it borders the road. While this takes more time, it is essential that features such as fences be excluded as far as possible.
Figure 15 shows the network of routes taken in two studies, the Mount Lofty Ranges in South Australia, and the Lake District in England. Where there are areas which are inaccessible by vehicle, traversing by foot may be required, but time and weather may not permit this. Sometimes it is possible to gain photos of such areas from others such as walkers who are familiar with the area.
Figure 15 Routes taken for photography, Lake District & Mount Lofty Ranges Projects
Table 1 summarizes the number of survey scenes per unit area of the survey, or per length for linear studies such as the coast or a river. The linear surveys ranged from 5.6 – 16.7 miles per scene (9 to 26 km per scene), while the area surveys were mainly between 3 and 40 sq. miles per scene (7 and 102 sq. km per scene), excluding the South Australian survey which was nearly 2,400 sq. miles per scene (6,100 sq. km per scene).
Table 1 Area per survey scene
The smaller the area, the more fine-grained the analysis of landscape quality can be; for a large area, it must necessarily be broad-brushed but for smaller surveys, the survey can reach down into small areas. In the South Australian survey, much of the northern arid region is flat and relatively featureless and a few scenes suffice to cover the variations, whereas in a more complex area such as the Mt Lofty Ranges or the Lake District where there is considerable variation, more scenes are required to adequately represent it. Figure 16 shows the close relationship between the size of the survey area and the area per scene.
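The area-per-scene figures in Table 1 are a simple ratio of survey area to scene count. The helper below makes the calculation explicit; the figures used (an area of roughly 2,360 sq. km, approximating the Lake District, and a 150-scene survey) are illustrative assumptions, not values from Table 1.

```python
SQ_KM_PER_SQ_MILE = 2.58999  # exact conversion factor

def area_per_scene_sq_km(survey_area_sq_km: float, n_scenes: int) -> float:
    """Average area represented by each survey scene, in sq. km."""
    return survey_area_sq_km / n_scenes

# Illustrative: ~150 scenes over ~2,360 sq. km gives about 15.7 sq. km
# per scene, i.e. about 6 sq. miles per scene - within the 7 - 102
# sq. km per scene range typical of the area surveys in Table 1.
per_scene_km2 = area_per_scene_sq_km(2360, 150)
per_scene_mi2 = per_scene_km2 / SQ_KM_PER_SQ_MILE
```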
One option is to establish a maximum area per scene and to increase the size of the survey say from 130 scenes to 200-300 scenes. This could be achieved through holding several consecutive surveys but raises the issue of varying numbers of respondents per survey.
In summary, photographs that are used should be standardized: be in color, 50 mm focal length equivalent, landscape format, extend scene to horizon, provide lateral & foreground context of single landscape unit, aim for sunny cloud-free conditions, avoid composition, avoid extraneous and transitory features and photograph from ground level, not from the air. These guidelines aim to minimize variations in the photographs so that the ratings are of the landscapes they represent rather than of the quality of the photograph. Use digital manipulation sparingly to remove unnecessary features, not to enhance the scene. Photograph the landscape throughout the region, using every point of access available and if necessary, supplementing it with photos from others familiar with the area.
Where the landscape quality assessment is of a limited area such as a small region within a country, the range of landscape ratings is likely to be relatively narrow. In the Barossa survey, for example, the ratings ranged from 5 to 6.5, a span of only 1.5 units (Lothian, 2005b). The scenes for South Australia covered a wider range, 3 to 8 (Lothian, 2000). If the Barossa scenes were rated only relative to the scenic quality within that area, they would cover a wider range but these ratings could not be compared to other areas because they have not been benchmarked against a State standard. Therefore, it is necessary to include in the survey, benchmark photographs from elsewhere in the wider region to enable the ratings to reflect a regional or even a national perspective. The regional scenes should cover a wider range of scenic quality than is evident within the survey area. By this means the results from various surveys can be compared one with another.
In the Mount Lofty Ranges study, no benchmark scenes were included because it was believed at the outset that a full span of ratings would result, similar to the South Australian survey. In the event, the ratings ranged from 3 to 8, which justified this assumption.
Apart from the author’s studies, Prineas & Allen (1992) is the only other study which included ten photographs from outside the survey region (including from elsewhere in the world) in a survey of 90 photographs of a World Heritage Area. This is a ratio of 11%. To ensure the benchmark scenes provide the State-wide context for ratings and influence the ratings of the study area, a higher ratio is considered necessary. In a survey of 150 scenes, generally 20 – 30 are included from outside the area, a ratio of 13 – 20%, leaving 120 – 130 from the area of study.
Number of photographs in a survey
Prior to the Internet, the number of scenes in a survey was determined by concerns of fatigue affecting the performance of participants. Surveys typically were limited to around 80 scenes. However, with the Internet, fatigue is not an issue as the surveys enable participants to leave the survey for a while and return later. The author’s surveys generally contain around 150 scenes. It is also possible to include several surveys, each of say 100 – 150 scenes, for a study. However, these may have issues in gaining sufficient participants.
Using an Internet-based survey, the scenes can be viewed at the participant’s own pace, fast or slow, assuming adequate connection speed. Even a survey of 150 scenes can be processed by some participants very quickly, in 15 – 20 minutes. The brain is able to rapidly discriminate the appropriate rating for a scene (Herzog, 1984, 1985) and rapid evaluation minimizes the likelihood of analysis and revision.
Classify the landscape units in the study area
The photographs selected for the survey aim to sample the range of landscapes present in the study region. This may be achieved by classifying the region into landscape units of broadly similar characteristics.
Previous physiographic or geomorphic classifications for the survey region may be examined and adapted where possible for the visual landscape. Photographing the region provides familiarity with its characteristics from which the landscape units may be classified. Landscape units need not be overly complex. The objective is to differentiate the region’s landscape sufficiently to ensure the photographs adequately sample its characteristics. Figure 17 illustrates the identification of landscape character areas for two surveys.
Figure 17 Examples of landscape unit classification: broadscale for South Australia and fine scale for the Barossa region
In some studies, a description of the landscape units is sufficient (Table 2).
Table 2 Landscape units for three surveys
The identification of landscape units across a region provides guidance for photography and crucially provides the basis for the selection of photos to ensure each unit is represented in the survey.
Photographs are allocated to each of the landscape units and a selection made. The selection for the survey is not usually on the basis of area as some units such as extensive plains are large in area but with little variation. Rather the selection should capture the diversity of the landscape units. The South Australian landscape includes an extensive arid region which covers 86% of the State (Figure 17), large tracts of which lack diversity. In the study of the State’s landscape (Lothian, 2000), this extensive arid region was represented by only 29% of the scenes. In contrast, the coast, agricultural regions and particularly the Mt Lofty Ranges are far more diverse and required more photographs to capture their complexity.
Statistical good practice indicates that three replicates of each type of scene should be included. Three versions of a particular type of scene provide reasonable replication of its characteristics and help estimate the variance. Variance is a measure of how spread out the data are from the overall mean; the standard deviation is the square root of the variance.
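For example, with three hypothetical replicate ratings of the same scene type (the values are invented for illustration), the sample variance and standard deviation are computed as follows:

```python
import statistics

# Three hypothetical replicate ratings (1 - 10 scale) of one scene type.
replicates = [6.2, 6.8, 6.5]

mean = statistics.mean(replicates)          # 6.5
variance = statistics.variance(replicates)  # sample variance, ~0.09
std_dev = statistics.stdev(replicates)      # square root of variance, ~0.3
```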
The scenic quality of scenes derives from their content, including land forms, trees and water, which trigger responses in participants; such features are termed landscape components. Alternative terms include landscape dimensions (Williamson & Chalmers, 1982), scenic quality indicators (Chenoweth et al, 1997), attributes (Preston, 2001), and visual features (Wu et al, 2006). Scoring such characteristics in the scenes allows the analysis of ratings to proceed beyond mere description of the ratings to understanding the contribution of landscape components to the scene. Multiple linear regression analysis allows these landscape components (the independent variables) to be compared with the ratings (the dependent variable) and to identify and quantify which landscape components contribute to the ratings and their relative significance. Depending on the selection of the landscape components, the regression models can explain a large proportion (say 85%) of the variance of the data. These models are described further below.
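A minimal sketch of such a regression follows, using NumPy's least-squares solver. The component names, scores and ratings are entirely hypothetical, invented to show the mechanics; a real analysis would use the survey's mean ratings and component scores.

```python
import numpy as np

# Hypothetical data: mean rating (1 - 10) for six scenes, and mean scores
# (1 - 5) for three landscape components: naturalness, water, land form.
ratings = np.array([4.1, 5.3, 6.0, 6.8, 7.5, 8.2])
components = np.array([
    [2, 1, 1],
    [3, 1, 2],
    [3, 2, 3],
    [4, 2, 3],
    [4, 3, 4],
    [5, 3, 5],
], dtype=float)

# Add an intercept column and fit by ordinary least squares.
X = np.column_stack([np.ones(len(ratings)), components])
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)

# R-squared: proportion of rating variance explained by the components.
pred = X @ coef
ss_res = np.sum((ratings - pred) ** 2)
ss_tot = np.sum((ratings - ratings.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

The coefficients indicate each component's contribution to the rating, and R² the proportion of variance the model explains, as in the 85% figure cited above.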
Table 3 Landscape Components in Australian preference surveys
A review of Australian preference surveys found that the most common landscape components were land cover (trees), land forms, water, and built forms (Table 3).
Table 4 summarizes the landscape components the author has used in South Australian landscape studies. Naturalness and diversity have been used in most surveys as have trees and water.
Table 4 Landscape components of South Australian surveys
Landscape components are usually scored on their visual significance in the scene. While this can be measured objectively using surrogates such as the percentage of the scene, it is best evaluated by people scoring each scene on a 1 – 5 (low – high) scale. This scoring scale has been found to provide sufficient discrimination, and its use differentiates component scoring from the 1 – 10 rating scale. The landscape components are scored by groups of up to 30 people, 30 being a figure recommended in the Manchester study (Robinson et al, 1976).
In some surveys, further objective assessments of landscape components have been undertaken by the author. Table 5 shows the assessments by the author in the Lake District study.
Table 5 Landscape components assessed by author, Lake District study
The dependent variable comprises the ratings by people of the photographs; their ratings depend on the quality of the landscape. Gaining these ratings is achieved by means of an Internet-based survey. The survey involves assembling the scenes for the survey including benchmark scenes, defining the number of scenes, defining the sample of participants, providing instructions for participants, placing the survey on an on-line survey instrument, launching the survey, and inviting participation in the survey.
Use of the Internet
Early in its use, several researchers examined the efficacy of the Internet for landscape preference surveys (Bishop, 1997; Wherrett, 1999, 2000). The advantages of Internet surveys over traditional questionnaires include the following:
- Does not require postal surveys or interviews so therefore less expensive to implement;
- Automation of responses;
- Potentially enormous sample size; caution is needed however as the sample can potentially include virtually the entire world;
- Rapid response and greater user control over the speed of the survey;
- Improved randomization of scenes;
- Improved accuracy of response as the results do not have to be transferred by hand from sheets as in paper-based surveys.
The rapid rise of access to the Internet has been a defining revolution over the past few decades. Figure 18 shows that in the mid-1990s, household Internet access scarcely existed but now approaches 95% in developed countries.
Sources: Australia – Australian Bureau of Statistics, 2018, Household Use of Information Technology, 2016 – 17; UK – Internet Access – Households and Individuals, Great Britain: 2020; US – Martin, M., 2021, Computer and Internet Use in the United States: 2018, American Community Survey Reports.
Figure 18 Household Internet Access, Australia, United Kingdom, United States, 1996 – 2020
On-line survey instruments
The on-line survey instrument used in several of the author's studies is Survey Monkey. It is one of many such instruments, which include Question Pro, eSurvey Pro, Zoomerang, Survey Gizmo, Free online survey, Fluid surveys, Qualtrics, Survey Expression, Google Consumer Surveys, and Smart-Survey. The alternatives were assessed and Survey Monkey chosen as it had more features than most (including question randomization) and proved easy to use. It also provides a rapid answering service for queries. It is well able to handle landscape quality surveys, as there are hundreds of thousands of surveys running at any one time (pers comm, Survey Monkey).
Survey Monkey requires images to be less than 150 kb for rapid appearance on computer screens. Using IrfanView, all images are compressed to 900 pixels width, which brings most under 150 kb. Where they still exceed 150 kb, IrfanView can be used to reduce them to slightly under 900 pixels width.
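IrfanView is a desktop tool, but the same batch step can be scripted. The sketch below uses the Pillow library as an assumed alternative; the function name and the quality-stepping loop are illustrative, not part of the method.

```python
from pathlib import Path
from PIL import Image

MAX_WIDTH = 900          # pixels, as used with IrfanView above
MAX_BYTES = 150 * 1024   # Survey Monkey's ~150 kb image limit

def compress_scene(src: str, dst: str) -> None:
    """Resize a survey photograph to 900 px width and save it under 150 kb."""
    img = Image.open(src).convert("RGB")
    height = round(img.height * MAX_WIDTH / img.width)
    img = img.resize((MAX_WIDTH, height))
    # Step the JPEG quality down until the file fits under the limit.
    for quality in range(85, 30, -5):
        img.save(dst, "JPEG", quality=quality)
        if Path(dst).stat().st_size <= MAX_BYTES:
            break
```

Run over a folder of survey scenes, this reproduces the compress-then-check workflow described above.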
The survey proper comprises the following sections:
- Introduction to the survey, its purpose, what it covers, emphasizing that no qualifications or experience are required, the survey includes the opportunity to make comments and ask for a summary of its results. It also includes an email address for queries.
- Instructions covering how it works, how long it will take, and hints for doing the survey.
- Demographic questions to enable respondents to be compared with the nation’s population to determine the representativeness of the survey. The questions generally cover: age, gender, education level, birthplace, home postcode and familiarity with the area being surveyed. Participants should be 18 years or over in age as the aesthetic preferences of children can differ from adults (Zube et al, 1983).
- Sample scenes are then shown, not for rating but to indicate the range of scenes that the survey will contain. The full span of likely ratings should be shown. This also serves to cue the respondent’s mind to the rating scale which is shown.
- The survey scenes are then shown. The survey instrument randomizes these continually so that the issue of one scene affecting the rating of the next is avoided. The instrument also indicates progress with the survey.
- Following completion of the survey scenes, respondents are thanked for participating. It also leads to a box for making comments on the survey and for the respondent’s email address, if they wish to receive a summary of the results.
To invite participation in a survey, a letter is emailed to potential participants. The Internet is searched for groups likely to be interested and local sources of information such as newspapers are canvassed. For the Lake District survey, the groups included:
- Walking, rambling, rock climbing, cycling, angling clubs and mountain rescue organizations;
- Councils, including parish councils – staff and elected councilors;
- Campsites, B&Bs, hotels;
- Tourist attractions;
- Miscellaneous groups.
Over 1500 email addresses were obtained to provide the basis for invitations. Experience indicates that at least 10% of email addresses will be invalid, being inaccurate or out of date. On-line survey companies can also provide access to potential participants. For example, Survey Monkey Audience provides respondents located in the US and internationally who can be accessed for a price.
In addition, Citizen Science offers access to ordinary people who like to assist scientific projects at their own time and expense. citizensciencecenter.com and CitizenScience.org are good starting points. There are many websites which can assist in making the survey attractive to respondents and not look like spam. Social media sites such as Facebook and Twitter also provide the means for accessing many people who may be willing to participate in the survey.
Where a list of potential organizations is available, a generic letter is prepared and then tailored for each group. Where a name is available, the invitation is addressed personally. Clubs and churches are asked to notify their members and congregations through their newsletters. The generic letter for the Lake District survey is shown in Figure 19.
The invitation may ask the participant to forward it on to others who may be interested in participating, however, if tight control is sought over participation this may be omitted (Wherrett, 1999). Consideration should also be given whether participants should be drawn solely from the survey region, from a wider area (e.g. State), from elsewhere in the nation or from other nations as well. The instructions regarding distribution should be clear on this matter.
Following is the format of the Lake District survey:
- Page 2: instructions;
- Page 3: demographics;
- Pages 4 – 7: example scenes, selected to cover the range of landscape quality in the survey; these help to cue the participant’s mind to rating the scenes;
- Subsequent pages: one scene per page for rating, with a circle beneath each rating number on which to click the rating;
- A concluding page appeared at the end of the survey.
In compiling the survey, a rating scale is used to translate the participant’s subjective assessment of a scene into a number. Instead of using adjectives such as superb, attractive, beautiful, wonderful and stunning, or boring, mediocre and ugly, the number forces a choice. It is a surrogate for the degree of pleasure or displeasure a person gains from viewing a scene. Experience with thousands of participants in surveys indicates that most find this an easy and even an enjoyable process.
The rating scale provides a measure of scenic quality and approximates an interval scale (Stevens, 1946). Ranking of photographs, on the other hand, provides only a relative measure – scene A is better than scene B but not as good as scene C. Moreover, ranking does not enable results to be compared between regions, nor does it facilitate statistical analysis.
A common rating scale should be used to enable comparison of results; surveys which use scales of 1 – 5 or 1 – 15 instead of 1 – 10 are difficult to convert. The scale should run from low to high, not the reverse, which tends to confuse participants. Participants tend to assume the low – high continuum and, when the opposite is used, may revert to it during the survey, which renders analysis of the ratings problematic.
On the basis of Kant’s dictum that beauty has no ideal (Lothian, 1999), the scale should arguably have no upper limit, as a ceiling implies a finite limit to beauty. For analytical purposes, however, it is necessary to bound the scale at its upper end.
A baseline such as 1 is also needed to anchor the scale. This is preferred over a baseline of zero, as it is difficult to conceptualize a landscape of zero value – i.e. a complete absence of aesthetic appeal. In an interval scale, a zero may represent the minimum amount of scenic beauty available to observers in the area being evaluated (Hull, 1987); however, it does not possess the quality of an absolute zero. Even a flat, featureless landscape, which some might regard as having the pre-requisites of a zero score, has its appeal, as papers on the Canadian prairies testify (e.g. Rees, 1977; Evernden, 1983). As evident from the scenes in Figure 20, scenes lacking any variation in land form, and without land cover, land use or water, still rated nearly 4.1.
The size of the sample should be sufficient to reduce sampling error to ≤ 5% (i.e. 0.05), which requires a minimum of around 380 participants. The confidence interval falls quickly to 5% as the sample size increases towards 400 (Figure 21). Increasing the sample size further reduces the confidence interval, but at increasingly slower rates. If larger samples can be obtained without cost, such as via the Internet, they will lower the confidence interval yet further; however, if cost is involved, the survey can be stopped when it reaches 400.
Understanding confidence interval & confidence level
The confidence interval (also called the margin of error) is the plus-or-minus figure usually reported in opinion poll results. With a confidence interval of 4 and an average response of 47%, the figure lies between 43% (47 − 4) and 51% (47 + 4). The confidence level is the degree of certainty – the probability that the true population value lies within the confidence interval. A 95% confidence level means you can be 95% certain. Combining the two, you can say that you are 95% sure that the true percentage for the population is between 43% and 51%.
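The sample-size arithmetic described above can be checked with a short calculation. This is a generic sketch of the standard margin-of-error formula for proportions, assuming a 95% confidence level (z = 1.96) and the most conservative proportion p = 0.5; it is not drawn from the surveys themselves.

```python
import math

def sample_size(margin=0.05, z=1.96, p=0.5):
    """Minimum sample size for a given margin of error (large population)."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

def margin_of_error(n, z=1.96, p=0.5):
    """Margin of error (as a proportion) for a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

print(sample_size(0.05))               # 385 respondents for a ±5% margin
print(round(margin_of_error(400), 3))  # 0.049, i.e. just under 5%
```

The formula returns 385 for a ±5% margin, consistent with the minimum of around 380 participants cited above; at 400 respondents the margin falls just below 5%.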
Table 6 summarizes the thirteen surveys carried out by the author, with all but the first one (SA Landscapes) using Internet surveys. These indicate the large samples that are possible through use of the Internet. “Useable responses” generally means completed surveys. The confidence interval for most of these surveys is less than 5%.
Table 6 Summary of survey responses
Table 7 shows the five surveys in which the time taken to complete the survey was recorded. In the earlier surveys, when some respondents used dial-up Internet connections, there was only a 6% difference in completion time between dial-up and broadband.
Table 7 Average survey time
Table 8 Breakdown of times taken to rate each scene
Table 8 shows the breakdown of times taken to rate each scene over three surveys and reinforces the point that rating is best performed extremely quickly, which better ensures that the rating is based on an immediate affective judgement. The number of participants falls markedly beyond 10 seconds; the lengthier times presumably reflect participants who applied cognitive analysis before rating the scene. If 10 seconds is taken as the threshold for affective judgement, then 89% assessed the scenes on that basis while 11% used cognitive judgement. Figure 22 shows the distribution of times per scene for two studies.
Scoring landscape components
While the Internet survey is underway, a small group or groups of people, up to 30 in each, score the various attributes present in the scenes – for example, the visual significance of trees or water, or the naturalness or diversity present in the scenes. These are scored on a 1 – 5 scale. These surveys are also loaded onto Survey Monkey and participants are invited by email.
The figure of 30 respondents was derived from the Manchester study where a statistician advised that “In statistical terms, quality scores for a number of survey units for a minimum sample of 30 observers are needed to prove normality” (Robinson et al, 1976).
The scoring of landscape components is absolutely vital for determining the contribution of each of the components to overall landscape quality. Without these scores, all the survey produces is the ratings of the landscape. The scores for each of these components for each scene enable their contribution to the ratings to be assessed and the interactions between components, e.g. land cover and naturalness, to be quantified using multiple regression analysis.
Inclusion of respondents
Data management involves preparing the data for analysis. The first step is deciding which surveys to include. Normally all completed surveys are included, but should surveys in which, say, only 50% of the scenes were rated also be included? Figure 23 shows the number of scenes completed by participants in the survey of the Lake District. Of 540 respondents, 314 completed all 145 scenes, while 51 rated fewer than 50 scenes. Thirty respondents failed to proceed through the survey because they used old browsers such as Internet Explorer. In this survey, only those who rated no scenes were omitted, which left 506 respondents.
The respondent means are examined to identify strategic bias, which occurs where a respondent seeks to use the survey for their own objectives, such as promoting the region by giving high scores. As it is the respondents who introduce strategic bias, the respondent means rather than the scene means are analyzed. Figure 24 shows the distribution of respondent means for the Lake District survey: a few were near 10 and a couple near 1, the lowest score. In some surveys there have been several respondents who averaged either 10 or 1. In the Lake District project, four respondents had means between 9 and 10 – they must have rated most scenes either 9 or 10, showing little discrimination of judgement. These surveys were all rejected.
Following the selection of respondents, the confidence interval can be calculated (www.surveysystem.com/sscalc.htm).
The normality of the responses can be ascertained by plotting a histogram of respondent ratings and also a Q-Q plot, which shows how well a set of values fits a normal distribution; if the data lie along the diagonal line, they are normally distributed. Figure 25 shows the histogram and Figure 26 the Q-Q plot for the respondent ratings in the Lake District survey. Both show a typical normal distribution of responses.
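In practice the Q-Q plot would be produced with a statistics package, but the underlying idea – comparing sample quantiles with theoretical normal quantiles – can be sketched using only the Python standard library. The sample data below are invented for illustration; values of the resulting correlation near 1.0 correspond to points lying close to the Q-Q diagonal.

```python
from statistics import NormalDist, mean, stdev

def qq_correlation(values):
    """Correlation between sorted sample values and normal quantiles.
    Near 1.0 means the data lie close to the Q-Q diagonal (near-normal)."""
    xs = sorted(values)
    n = len(xs)
    nd = NormalDist(mean(values), stdev(values))
    # Theoretical quantiles at plotting positions (i + 0.5) / n
    qs = [nd.inv_cdf((i + 0.5) / n) for i in range(n)]
    mx, mq = mean(xs), mean(qs)
    cov = sum((x - mx) * (q - mq) for x, q in zip(xs, qs))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sq = sum((q - mq) ** 2 for q in qs) ** 0.5
    return cov / (sx * sq)

# Hypothetical respondent means clustered around the middle of the 1-10 scale
sample = [5.8, 6.1, 6.4, 5.5, 6.0, 6.7, 5.9, 6.2, 5.6, 6.3, 6.0, 5.7]
print(round(qq_correlation(sample), 3))  # close to 1.0 for near-normal data
```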
Another view of the distribution of ratings is gained by plotting the scene means in ascending order. Figure 27 shows this for the Lake District survey. The distribution forms an ‘S’ curve, arching upwards at the top ratings and curving down at the lower ratings. This suggests a tendency to place slightly more extreme values on scenes of very low or very high scenic quality, a phenomenon common in surveys of this nature (pers. comm. Prof. Terry Daniel, Dept of Psychology, Univ. of Arizona).
The ratings of the benchmark scenes, if any were included in the survey, can be examined and compared with the ratings from previous surveys. They should then be removed so that only the scenes from the study area are analyzed.
ANALYSIS OF SURVEY RESULTS
The analysis of the survey’s results is the most exciting stage of a landscape quality assessment project, because it is here that new knowledge and understanding of human perception of landscapes emerges. This is knowledge that no one previously possessed, which is exciting in itself; in addition, the detailed analysis yields many further insights and understandings.
The first step in the analysis of the data is to examine the demographics of the respondents to ascertain the extent to which they match those of the wider population. A word of warning: respondents generally do not match the community, being usually older and far better educated than the population as a whole; however, surveys generally find it very difficult to gain the participation of members of the community who lack interest (Tucker et al., 2006). Chi-square tests are conducted to assess the significance of differences between the participants and the community.
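A chi-square goodness-of-fit test of this kind can be sketched as follows. The counts and census proportions are hypothetical; a statistics package would normally supply the p-value, so here the statistic is simply compared against the critical value for p = 0.05 with three degrees of freedom.

```python
def chi_square_gof(observed, expected_props):
    """Chi-square goodness-of-fit: do observed counts match expected proportions?"""
    total = sum(observed)
    expected = [p * total for p in expected_props]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical age bands: survey counts vs census proportions (invented figures)
observed = [40, 180, 220, 60]        # survey respondents per age band
census = [0.25, 0.30, 0.28, 0.17]    # population proportions per band

chi2 = chi_square_gof(observed, census)
# Critical value for p = 0.05 with 3 degrees of freedom is 7.815;
# a statistic above this indicates a significant difference.
print(round(chi2, 1), chi2 > 7.815)
```

In this invented example the survey over-represents the middle age bands, so the statistic far exceeds the critical value, mirroring the age and education differences reported in Table 9.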
It is evident from Table 9 that in all seven surveys, the proportions of participants differed significantly from those of the wider South Australian population in respect of age and education. In respect of gender and birthplace, three of the seven surveys differed significantly.
Table 9 Significance of difference of survey participants with South Australian population (p value)
Figure 28 compares the characteristics of the respondents for four surveys by the author with those of the South Australian community (2001 Census). It is evident that the survey respondents were generally:
- Older – surveys have more 45 – 64 year olds;
- More educated – surveys have many more with higher degrees and slightly more with degrees;
- Gender balance and birthplace were similar to those of the population.
Given that the survey respondents differed significantly from the community, particularly in respect of age and education, do these differences matter? One test is that, if the differences affected results, the mean ratings across the range of respondent characteristics would be expected to show this, e.g. different ratings for different age groups. Table 10 and Figure 29 show the means across four characteristics from six surveys by the author. The differences in respondent characteristics had no appreciable influence on the results. Although these means were for the entire data set, if there were major differences between different sets of respondents, these would be evident in the ratings.
Table 10 Average ratings across respondent characteristics in surveys
Comments by participants
Where the survey invited comments, either on the survey or on the area being studied, a summary of these is provided. It is generally useful to classify the comments; for example, the comments in the Barossa Valley survey covered: photographs, the survey, the Barossa landscape, tourism and development issues. Some comments cover several topics. Although photographers among the respondents often complain about the quality of the survey photographs, there is usually an equal number of positive comments about the photographs from other respondents. Useful insights about the survey area are often included.
Location of respondents
If the survey asked for the postcode of the respondent, these can be compiled and analyzed. Of interest is the number of respondents who live close to the survey area, which can be compared with the proportion of the region’s population who live there. In the Flinders Ranges survey, for example, participants from Flinders Ranges postcodes comprised 5.1% of the survey’s participants, considerably higher than their 1.3% of the State population. In the Lake District study, 57% of respondents were resident in the area, whereas in the Mt Lofty Ranges study 38% of respondents were resident and a further 14% lived outside it but commuted through the area.
As familiarity with the survey area generally has a positive effect on ratings of landscape quality, familiarity should be included as a question. It can also be linked to a question on whether the respondent resides in or near the area. Figure 30 illustrates the results for the Lake District survey, where 76% of respondents, and nearly all of the residents, were either very or extremely familiar with it. The ratings of those who resided within the Lake District were 3% higher than those of non-residents. While it is often said that familiarity breeds contempt, with landscape the opposite applies: the more familiar one is with a landscape, the more it is loved. Thus the ratings of the Lake District increased with greater familiarity, and respondents who were extremely familiar with it rated it 14% higher than those who had never visited it.
Analysis of overall ratings
The analysis aims to uncover as full an understanding as possible of the ratings that have been obtained and to explain these by reference to the landscape components. Analysis of the ratings commences with the general and moves progressively to the specific. Analysis may then cover sub-regions or areas, and each of the landscape components.
Table 11 summarizes the overall ratings statistics for nine of the author’s studies. The minimum values were in the 2s or 3s while the maximums were in the 7s or 8s; none reached 9. Across the 1 – 10 rating scale, ratings ranged from 2.40 to 8.88, a span of nearly 6.5, or two-thirds of the scale. The means per survey ranged from a low of 5.30 for the Barossa study to a high of 6.61 for the World’s Best Landscapes survey. In the surveys with water scenes – the coast, River Murray, Lake District and World landscapes studies – the means were higher than in the remaining studies.
Table 11 Summary statistics, nine surveys
Table 12 summarizes the ratings of all scenes in nine surveys conducted by the author. Ratings 5 and 6 account for nearly two-thirds (64%) of scenes, and there are no 1, 9 or 10 ratings.
Table 12 Ratings of scenes (excluding benchmark scenes) for nine surveys
Figure 31 shows the distribution of ratings for the South Australian surveys and highlights the normal distribution of the ratings around the middle of the rating scale.
A breakdown of the overall ratings by landscape unit provides a useful point of comparison. Figure 32 depicts the ratings for the landscape units in the South Australian landscape study. The boxplot (also called a box and whiskers plot) shows the median as a thick line, the box is the interquartile range (i.e. the 25% – 75% of values), and the whiskers show the highest and lowest values. The box plot provides a useful visual image of the variance of data and the relative position of differing groups.
A fascinating finding, common to all the author’s surveys except the Barossa survey, was that the standard deviation was low for high ratings but increased as the ratings decreased (Figure 33). Standard deviation is a measure of the consistency of opinion among respondents: a low SD suggests opinions are fairly similar, whereas a high SD suggests diverse opinions. This indicates that respondents rate scenes of high quality more consistently than scenes of lower quality. A similar pattern was found by Lamb & Purcell (1990) for respondents assessing the naturalness of scenes, and by Williamson & Chalmers (1982). It suggests that the community’s judgement of what it prefers is more homogeneous than its judgement of what it dislikes. This may be why high quality landscapes have universal appeal and attract many visitors, while lesser landscapes have lesser appeal.
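The relationship can be checked directly from the raw ratings by computing the mean and standard deviation per scene. The following sketch uses invented ratings for three hypothetical scenes to illustrate the pattern: the higher-rated scene shows tighter agreement (lower SD).

```python
from statistics import mean, stdev

# Hypothetical scene ratings: agreement tightens as scenic quality rises
scenes = {
    "mountain_lake": [9, 8, 9, 9, 8, 9],
    "rolling_hills": [6, 7, 5, 7, 6, 8],
    "flat_paddock":  [3, 5, 2, 6, 4, 7],
}

for name, ratings in scenes.items():
    print(f"{name}: mean={mean(ratings):.2f}, sd={stdev(ratings):.2f}")
```

With real data, plotting each scene’s SD against its mean (as in Figure 33) reveals whether the downward trend holds across the whole survey.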
Analysis of landscape components
One of the most exciting parts of the analysis is that of the landscape component scores. These scores derive from surveys separate from the main ratings survey, in which small groups of respondents score components of the landscape on a 1 – 5 scale. The excitement lies in the insights the analysis provides into the contribution of these components to the ratings of landscape quality, and in the relationships detected between components, e.g. the influence of trees and water on naturalness.
From the Lake District project, Figure 34 shows histograms of the distribution of scores for the cultural and land cover components: the former is middle-ranking, while the latter is skewed towards the higher scores, indicating a strong land cover component.
Figure 35 compares the component scores with the standard deviations of their distributions. The trend in the cultural graph is upward, indicating similarity of opinion (i.e. low SD) when the cultural score is low but an increasing range of opinion (i.e. high SD) as the score increases. Thus there is diverse opinion when the feature is prominent in the landscape. Contrast this with the trend for the land cover graph, which indicates narrow opinion (low SD) when the score is high but widening opinion (high SD) as the score falls. These may be summarized thus:
- Up slope trend line: diverse opinion when component is prominent (score 4-5) in the landscape;
- Down slope trend line: similar opinion when component is prominent (score 4-5) in the landscape.
The down sloping trend in the data thus suggests the component will be an important contributor to landscape quality.
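Whether a component shows the up-slope or down-slope pattern can be determined from the least-squares slope of standard deviation against component score. The scores and SDs below are invented to illustrate a down-sloping (land-cover-like) component.

```python
def trend_slope(scores, sds):
    """Least-squares slope of standard deviation against component score."""
    n = len(scores)
    mx = sum(scores) / n
    my = sum(sds) / n
    num = sum((x - mx) * (y - my) for x, y in zip(scores, sds))
    den = sum((x - mx) ** 2 for x in scores)
    return num / den

# Hypothetical component: opinion narrows (SD falls) as the score rises,
# the down-slope pattern indicating an important contributor to quality
scores = [1, 2, 3, 4, 5]
sds = [2.1, 1.9, 1.6, 1.3, 1.0]
print(round(trend_slope(scores, sds), 2))  # -0.28, a negative (down) slope
```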
From an examination of the component scores in other studies by the author, two in particular stand out (Figure 36). In the Barossa survey, an increase in the natural component score corresponded with greater diversity of opinion, while in the Flinders survey, opinion was consistent when the scores for terrain were high but less consistent as the score reduced.
Examination of the algorithms for common components across all surveys indicates that there was little consistency (Table 13). For example, three of the land form equations were positive (Barossa, Lake District, Mt Lofty Ranges) while two were negative (R. Murray, Flinders Ranges). Three of the five land cover algorithms were negative, indicating that consistent opinion often occurs when the land cover is prominent in the landscape. Three of the four water equations were negative, suggesting that opinion about water is generally consistent. All five of the diversity equations were positive, suggesting consistent opinion when diversity is low, with consistency decreasing as the landscape becomes more diverse. This variation in the consistency of opinion regarding landscape components is a potentially rich area for further research.
Table 13 Algorithms for standard deviations vs component scores
Note: A positive algorithm is of the form y = 1.29x while a negative algorithm is of the form y = –1.73x. Only the slope of the equation is shown.
Apart from examining the components’ histograms and relationship of the scores with their standard deviations, the further value in scoring the components lies, firstly, in relating the component scores with the ratings obtained in the survey, and, secondly, in relating each component to other components to assess whether they correlate.
Table 14 Lake District: Correlations of ratings with components
Table 14 summarizes the correlations of the ratings with the landscape components for the Lake District survey. The highest correlations are for land form, diversity and naturalness. Figure 37 displays the graphs for land form and diversity, showing that as the scores for each rose, so too did the ratings. Thus in the Lake District, these three components – land form, diversity and naturalness – have a strong influence in creating landscape quality.
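Correlations of this kind are simple Pearson coefficients between the per-scene ratings and each component’s scores. A minimal sketch, using invented per-scene data rather than the Lake District figures:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Hypothetical per-scene data: mean rating plus component scores (1-5)
ratings = [4.2, 5.1, 5.8, 6.4, 7.3, 7.9]
land_form = [1, 2, 2, 3, 4, 5]
diversity = [2, 2, 3, 3, 4, 4]

print(round(pearson(ratings, land_form), 2))  # strongly positive
print(round(pearson(ratings, diversity), 2))  # strongly positive
```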
In the River Murray survey, the visual significance of the cliffs which line part of the river was found to have a positive influence on ratings (Figure 38): at the lowest score of 1 the ratings averaged around 4.9, while in scenes where the cliffs were very significant they averaged over 7.9, a range of 3 units.
Relating components, one with another, reveals some interesting links. Table 15 provides the correlations between the components from the Lake District survey.
Table 15 Lake District: correlations of components with components
Although none of the correlations were particularly high, the highest included:
- 0.52 between cultural and stone walls, an unsurprising link; but also a negative -0.70 correlation between cultural and naturalness;
- 0.63 correlation between diversity and land form and 0.52 between diversity and land cover, indicating land form and land cover as driving diversity in the landscape;
- 0.57 correlation between naturalness and land forms, indicating the importance of the mountains in creating a sense of naturalness in the Lake District;
- An unsurprising negative correlation of -0.56 between stone walls and water – there are not many underwater stone walls!
Figure 39 shows the graphs for the stone walls and cultural scores, and the land form and diversity scores, showing that as one component increases, so too does the other.
A particularly interesting finding from the Barossa Valley survey was that the presence of vines actually had a slightly negative influence on ratings (Figure 40) – the slope of the trend line is −0.14, indicating that as the vine score increased (i.e. more vines), the ratings fell. However, as Figure 41 shows, there is an inverse relationship between trees and vines: as the tree score fell (i.e. fewer trees), the vine score increased, and vice versa. The reason is that most of the vineyards had been cleared of trees; what trees remained were around their perimeters, along creek lines or along roadsides. Thus in the Barossa Valley, Australia’s premier wine region, the presence of vines did not add to landscape quality but had a slightly negative effect; it was the presence of trees in the area that created the pleasing landscape.
In the Coastal Viewscapes survey, the 1 – 1.5 scores are deleted from Figure 42 as they represent cliffs and rocky foreshore without a beach. With these omitted, the quality of the beach had a substantial influence on the ratings – the trend line indicates that for each unit increase in beach quality, ratings increased by 0.6. When combined with the scores for the presence of seaweed on the beaches (Figure 43), however, low beach scores occurred where there was the most seaweed and, conversely, where seaweed was absent, the beach quality scores were high. Clearly, the public finds seaweed en masse objectionable in terms of scenic quality, although this does not detract from its ecological and habitat value.
In the Flinders Ranges survey, the vegetation was scored on an arid – lush scale; if it looked arid, it scored 1 or 2, if it looked lush it scored 4 or 5. The visual significance of the vegetation in the landscape was also scored. Comparison of these two scores (Figure 44) indicates that arid vegetation was not visually significant while lush vegetation scored high in its visual significance. In the same study, Figure 45 indicates that rockfaces contributed to how spectacular the landscape appeared.
These quantitative findings can be very valuable for managers of these resources, enabling them to identify objectively the components which contribute to, or detract from, the quality of the landscape, and to act accordingly. The adage that you cannot manage it until you can measure it applies particularly to scenic quality. Having quantified the relationships between the components, one is in a far better position to identify key areas on which to focus management attention.
Landscape quality model development
Multiple regression analysis provides the means for deriving algorithms which relate all of the components to the ratings. The components that are scored (e.g. trees, water, land forms, naturalness) provide the independent variables and the scenic quality ratings provide the dependent variable. It is assumed that these ratings are dependent on the various components that are scored; multiple regression analysis enables this assumption to be tested and quantifies the influence of each component on scenic quality. In contrast to simple linear regression, which uses a single predictor variable, multiple regression analyzes many variables concurrently. The formula derived describes the best fit between the competing variables and its strength, helping to identify the key factors influencing scenic quality ratings.
As well as providing insights into the components which influence scenic quality and their respective strengths, the models can also be used to estimate the scenic quality of a scene that has not been rated. By scoring the relevant components and entering these into the model, the scenic rating of the scene can be derived.
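A minimal sketch of the regression step, using ordinary least squares solved via the normal equations so that no statistics library is required. The scene data – land form and naturalness scores against ratings – are invented for illustration; in practice a statistics package would also report the R² and the significance of each coefficient.

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y.
    X: rows of predictor scores; an intercept column is added."""
    rows = [[1.0] + list(r) for r in X]
    k = len(rows[0])
    # Build the normal-equation matrix A = X'X and vector b = X'y
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for i in reversed(range(k)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef  # [intercept, coef1, coef2, ...]

# Hypothetical scenes: (land form score, naturalness score) -> mean rating
X = [(1, 2), (2, 2), (2, 3), (3, 3), (4, 4), (5, 4), (3, 5), (5, 5)]
y = [4.0, 4.6, 5.2, 5.7, 6.8, 7.5, 6.4, 8.0]

intercept, b_form, b_natural = fit_linear(X, y)
print(f"rating = {intercept:.2f} + {b_form:.2f}*landform + {b_natural:.2f}*naturalness")
```

The fitted coefficients quantify each component’s contribution: here both predictors carry positive weights, so higher land form and naturalness scores predict higher ratings, and the equation can then be applied to score-only (unrated) scenes.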
Table 16 shows algorithms derived from four studies. It shows the equation for all of the components and also equations for just one or two components.
Table 16 Multiple regression algorithms for four surveys
It is particularly interesting that in most instances the simpler equation (highlighted) has a high coefficient of determination (R²) and may be used as a substitute for the more complex equation. The R² indicates how much of the variance is explained by the equation; for example, in the River Murray Model 1 the R² is 0.814, which means that 81.4% of the variance is explained by the equation. In the Lake District 1 model, it explains 94%. Such models can be used in the confidence that the equation closely matches what would be produced by ratings. These results also indicate a good selection of components for scoring.
These equations may be used as a substitute for deriving the ratings of a landscape. For example, in the Flinders Ranges, scoring the landscape according to how spectacular it is and entering this figure into the equation will yield the landscape rating. Thus a spectacular score of 5 yields 8.34 (2.84 + 1.10 × 5), and a score of 4 yields 7.24.
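Applying such an equation is trivial to automate. The sketch below encodes the Flinders Ranges single-component equation quoted above (rating = 2.84 + 1.10 × spectacular score):

```python
def flinders_rating(spectacular_score):
    """Predicted rating from the Flinders Ranges single-component equation
    quoted in the text: rating = 2.84 + 1.10 * spectacular (scored 1-5)."""
    return 2.84 + 1.10 * spectacular_score

print(round(flinders_rating(5), 2))  # 8.34
print(round(flinders_rating(4), 2))  # 7.24
```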
Table 17 and Figure 46 illustrate how the equations may be simplified by using fewer components. Use of just one factor resulted in an R² of 0.68, thus explaining 68% of the variance. If this is considered too low, three factors explain 80%. Adding further components lifts the explanation further, but with diminishing returns, as shown in Figure 46. Thus a fair measure of the landscape quality of a coastal scene in South Australia could be derived by scoring (out of 5) the sense of tranquillity it inspires and the extent and quality of the beach.
Table 17 Model components and correlation coefficients – Coastal viewscapes survey
MAPPING LANDSCAPE QUALITY
Mapping of landscape quality involves interpreting and applying the understanding gained from the analysis of the ratings of scenes together with the scoring of their components. It also requires an understanding of the principles involved in mapping.
A rating of a scene derived from the survey applies to the whole scene, as exemplified by scene 116 (Figure 47). It reflects the assessment made by a viewer who unconsciously aggregates all the various components into a single judgement. Ideally this is made not cognitively but affectively, based on one’s likes and dislikes.
A scene generally comprises several parts, for example, different land forms such as plains, hills and mountains. Scene 116 comprises a valley floor in the foreground with tall, stately trees, and a mountain with rockfaces in the background. The flat, plain-like foreground would attract a rating of 4 – 5 if it existed without the mountain behind, while the mountain of itself rates 7 – 8 (Figure 48). The rating of 7.73 derived for the scene expresses respondents summarizing or averaging the entire scene in their minds.
If the scene included more plain, the rating would be accordingly lower, as exemplified by scene 68 (Figure 49). In scene 68, if the plain existed without the mountain behind, it would rate 4 – 5; the rating of 5.67 reflects the presence of the mountain behind the plain. A way to test this is to turn, say, 90º or 180º from the mountain and rate only the plain.
The distance across the plains to the mountains is an important factor in the rating derived. Where the mountains are close, the rating is higher than where they are more distant and therefore less significant in the scene – e.g. compare the ratings of scenes 116 and 68.
The ratings reflect the landscape in view. For example, scene 28 has a rating of 7.14, but this actually comprises a series of ratings from the foreground fells through the lower slopes to the upper slopes (Figure 50). If the landscape stopped at any one of these points, that would be the highest rating it achieved. So if the scene comprised only the foreground fells, it would rate in the range of, say, 4.50 – 5.50; a low hill might attain a rating of 6 – 6.5; and higher land forms in turn achieve the higher ratings.
Another aspect of this example is that ratings grade like contours around the landscape, rising and falling progressively. Thus one part of the landscape may be rated 5 and another 6, but the rating grades gradually between the two rather than changing suddenly. An exception may be where a mountain rises from a lake, the lake having a rating of, say, 6.0 and the mountain rising to 7.5 (Figure 51).
It is important to distinguish what the scene comprises. The rating reflects what is viewed from a location; it is not the rating of the viewing location itself but of what is seen from it. Thus a mountain viewed from a plain may rate, say, 8, but this rating applies to the mountain, not to the plain from which it was viewed.
In mapping, the rating applies to the landscape viewed, not the landscape from which the view was taken.
The rating scale of 1 (low) to 10 (high) has been used consistently in surveys for rating the scenes. In mapping there are two numbering options:
- Adopt the integer rating option, in which the number used (e.g. 6) covers ratings within that integer (i.e. from 6.00 to 6.99);
- Adopt the nearest whole number option, in which a number covers the range from 5.50 to 6.49, rounded to the nearest integer. Thus 6.42 becomes 6 and 6.72 becomes 7.
Each option has advantages and disadvantages. Option 1 is the more easily understood: a map containing areas marked 5, 6 or 7 would be taken to mean within the integer range. Option 2 is possibly more accurate, as it enables a figure of, say, 6.89 to be rounded up to 7 rather than remaining as 6.
On balance, since the expectation is that a number refers to an integer, it is preferable to adopt Option 1, the integer approach, to avoid confusion. Thus the ratings shown on the map refer to a range, say 6.00 to 6.99.
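The two numbering options can be expressed as simple functions. This sketch uses floor(r + 0.5) for Option 2 rather than Python’s built-in round(), which rounds halves to even numbers:

```python
import math

def integer_band(rating):
    """Option 1: the integer band containing the rating (e.g. 6.00-6.99 -> 6)."""
    return math.floor(rating)

def nearest_integer(rating):
    """Option 2: the nearest whole number (e.g. 5.50-6.49 -> 6)."""
    return math.floor(rating + 0.5)

for r in (6.42, 6.72, 6.89):
    print(r, integer_band(r), nearest_integer(r))
```

Under Option 1 all three examples map to 6; under Option 2, the values 6.72 and 6.89 become 7.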
Resources for mapping
The information available to assist in mapping landscape quality typically comprises:
- Set of scenes each with their own rating; the scenes cover the range of landscape types present in the study area;
- Analysis of the scenes by landscape types, which provides mean ratings for each landscape type;
- Scoring of components by respondents, typically covering land cover, land form, land use, water, naturalness, diversity;
- Analysis of the components and the relationship of ratings to the scores for each component together with the strength of interactions between components;
- Analysis of comparison scenes, e.g. with and without water;
- Photographs taken of the study area; these may total some thousands and provide images of most of the area, which is very valuable when mapping;
- Maps that cover the study area at varying scales;
- Google Earth® which can be very useful in defining boundaries between landscape units, e.g. identifying the boundary of stands of trees;
- The consultant's familiarity with the study area.
Generic ratings for study area
Table 18 Generic ratings, Lake District
From the survey results, generic ratings can be derived that apply at the broad scale when mapping landscape quality. These are then refined for the particular area of landscape under review. Table 18 lists the generic ratings of landscape types in the Lake District.
These integers provide the baseline for mapping and are modified by local circumstances, in particular by the detailed understanding gained of each landscape type. From these, and from an area-by-area analysis of the applicable ratings, the generic ratings in Table 19 were derived. These differentiate ratings by the height and steepness of terrain: high, steep terrain rates higher than low, flat terrain. Special provision was made for the high peaks (> 850 m) which, because of their height, rate higher than lower peaks.
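Deriving a generic rating for a landscape type is essentially averaging the scene ratings that fall within that type. A minimal Python sketch, using hypothetical landscape types and ratings rather than the actual survey data:

```python
from collections import defaultdict

# Hypothetical scene ratings, each tagged with its landscape type
scene_ratings = [
    ("high fells", 7.14), ("high fells", 7.30),
    ("low hills", 6.20), ("low hills", 6.40),
    ("valley floor", 5.10),
]

by_type = defaultdict(list)
for landscape_type, rating in scene_ratings:
    by_type[landscape_type].append(rating)

# Mean rating per landscape type, rounded to two decimals
generic = {t: round(sum(r) / len(r), 2) for t, r in by_type.items()}
```

In practice these means are then adjusted for local circumstances, as described above, before being carried through to the map.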
Table 19 Ratings of Lake District
Mapping landscape quality
Having defined the ratings for the various landscape types, the next step is to map them. This can be facilitated by access to a Geographical Information System (GIS), with the consultant working closely with the technicians to translate the ratings to the map. Generally, this is an iterative process, with modifications and adjustments made as it continues.
It is also useful to plot the landscape quality on a paper map. This may be done prior to the GIS mapping and provides the consultant with an initial view of the outcome, and adjustments can be made at that stage.
Figures 52 – 58 are landscape quality maps derived by hand and by GIS.
The maps are the vital end product of the landscape quality assessment process. Although they take much time and effort to produce, the journey is worthwhile, and the end result should stand the test of time as the best possible assessment of an area's landscape quality.
By measuring the area covered by each rating, the proportion of the study area at each rating can be derived. Table 20 and Figure 59 summarize the overall figures for six of the seven studies the author has conducted (no figures were derived for the Barossa study).
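The proportion of the study area per rating is a straightforward area tally. A minimal sketch with hypothetical areas (not figures from any of the actual studies):

```python
# Hypothetical mapped area (km^2) per integer rating
areas = {5: 120.0, 6: 300.0, 7: 450.0, 8: 130.0}

total = sum(areas.values())

# Percentage of the study area falling within each rating
percent = {rating: round(100 * a / total, 1) for rating, a in areas.items()}
# The percentages necessarily sum to 100 (within rounding).
```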
Table 20 Percentage area rating by survey area
The Community Preferences Method is a practical yet flexible and robust instrument for measuring the aesthetic quality of landscapes and for mapping the results. A typical project covering a region may take 4 – 6 months to complete, the major time demands being photography, preparing and posting the Internet survey, analyzing the results, and mapping landscape quality for the region.