
The implementation of the sonification concept as an assistive technology for students with dyslexia

Author: Iloka Benneth Chiemelie
Published: 23 July 2016

CHAPTER 1
INTRODUCTION
The issue of dyslexia has been a topic of great interest in the modern literature. The reason is that students with dyslexia lack the necessary competence to undertake their class work, because their disability limits their ability to cognitively derive meaning from words and to compose sentences from individual words. On the other hand, the literature also suggests that dyslexic students, if given the necessary care, have the potential to contribute to the economic development of a state. As such, this paper aims to accord them that care by proposing sonification as an assistive technology that can improve their understanding of the topics discussed in their classes and, in the long run, their cognitive and intellectual ability.
Basically, this paper is divided into five sections. The first section is the introduction, which gives a clear description of how the paper will be undertaken; the research problem and objectives are clearly stated here. The second section is the literature review. In order to understand the topic of discussion, a review of the literature is presented on the three elements identified in the topic: sonification, assistive technology and dyslexia. The third section is the experimental study, in which an experiment is conducted with students in the classroom to understand how sonification can help improve their overall intellectual competence and academic performance. The fourth section is the discussion, in which the findings from the experimental study are linked to the literature review and used to support existing theories as well as to lay the foundation for the development of new theories. The fifth section is the conclusion, in which a general summary of what was done in this paper is presented so that readers can grasp the contents in the context of this paper.
1.1 BACKGROUND OF DYSLEXIA
Although dyslexia was officially recognised in the UK as a disability under the Disability Discrimination Act of 1995, knowledge of the problems associated with such hidden disabilities has long been widespread (Dale and Taylor, 2001). Dyslexia is a serious disability across the globe and affects a huge number of people. In the UK alone, it has been reported that about 4 per cent of the country's population is severely dyslexic, with another 6 per cent being moderately dyslexic (BDA, 2006). The total number of people with dyslexia in the UK therefore makes up about 10 per cent of the country's population. If this is the figure in an advanced country where access to quality health care is assured, the number of people living with the same condition in developing and underdeveloped countries is likely to be even more worrying.
Taylor et al. (2007) listed the possible difficulties experienced by people with dyslexia as: reading hesitantly; misreading, which makes understanding difficult; difficulty in organising thoughts clearly; poor time management and planning; and erratic spelling.
The first case of dyslexia was reported by Pringle-Morgan in 1896 (Pringle-Morgan, 1896). Pringle-Morgan and Hinshelwood (an ophthalmologist) speculated that the difficulty with reading and writing was caused by "congenital word blindness" (Hinshelwood, 1917), and it was widely believed that dyslexia was caused by visual processing difficulties.
While this view is no longer generally accepted, some current literature still maintains that dyslexia is caused by a disorder in visual processing. Stein and Talcott (1999) reported on visual search difficulties caused by a reduced ability to control ocular movement correctly. Additionally, individuals with dyslexia are less sensitive to certain variables, such as contrast sensitivity and visual persistence, when compared with typical readers (Lovegrove, 1993). Notwithstanding these attempts to link dyslexia with visual difficulties, it is widely believed by researchers that dyslexia is a linguistic disorder and, more precisely, that it is caused by a disorder in phonological processing (Vicari et al., 2005). People with dyslexia normally experience difficulty analysing and processing the phonological elements of spoken words (Snowling, 1987; Snow et al., 1998). For instance, a dyslexic reader might have problems subdividing words into their individual phonemes (Shaywitz, 1998; Pennington et al., 1990). Thus, it is possible that some individuals have "linguistic" causes of dyslexia, while others have "visual" causes, or that some cases are caused by both factors. It is therefore important that researchers appreciate the differences that exist between these possible causes. To be precise, dyslexic readers differ in the extent to which they can make use of phonological reading and spelling strategies. Research has shown that the severity of a dyslexic individual's phonological difficulties can determine their level of reading ability (Snowling, 2001). Simmons and Singleton (2000) also commented that dyslexic students tend to experience difficulties with reading comprehension that are not usually accounted for by their inability to understand individual words on a page of text; rather, this difficulty can be accounted for by their construction of inferences when processing a passage of text.
A survey by the UK Higher Education Statistics Agency (HESA, 2006) revealed that in the 2003/2004 academic year, the number of first-year undergraduate students in the UK with a declared disability of dyslexia was 15,600. Hatcher et al. (2002) stated that the number of students with dyslexia has been growing rapidly in recent years. Richardson and Wydell (2003) found that university students with dyslexia are more likely to drop out during their first year of study and less likely to complete their course, but that with appropriate support the completion rate of students with dyslexia can equal that of students without disabilities. Famous people with dyslexia include Thomas Edison, Albert Einstein and Michael Faraday (Dyslexia.com, 2013). See Appendix 1 for more information about famous and talented people with dyslexia.
For this reason, it is important that students with dyslexia be assisted with any form of technology that can help bolster their cognitive competence and encourage them not to drop out of school. They will then be able to acquire the skills needed to contribute greatly towards the development of the society they live in.
1.1.2 ASSISTIVE TECHNOLOGY
As the name implies, the definition can be derived directly from the word "assistive", which means to help or support somebody or something. In the context of this study, assistive technology is defined as any technology that can be used to support people with dyslexia. Such technology includes hearing aids, visual aids, sound aids and a host of others. However, this paper focuses on the idea of adopting sonification as an assistive technology to help dyslexic students.
Research has shown that assistive technology can compensate for certain skill deficits, such as reading and spelling (Raskind and Higgins, 1999; Higgins and Raskind, 2000). It is extremely helpful for dyslexic people because it gives them access to reading materials that they would otherwise struggle with or be unable to read, and it can support students and adults with dyslexia in many settings, such as at home, at school and on the job. This research explains the technologies that support disabled students in their learning process, as well as the processes behind sonification and its relevant uses.
Assistive technology is technology used by individuals with disabilities to perform functions that would otherwise be difficult or impossible. It includes mobility devices such as walkers and wheelchairs, as well as hardware, software and peripherals that help people with disabilities access computers or other information technologies. For example, individuals with limited hand function can use a keyboard with large keys or a special mouse to operate a computer; blind people can use software that converts on-screen text to a computer-generated voice; people with low vision can use software that enlarges on-screen content; deaf people can use a TTY (text telephone); and individuals with speech impairments can use a device that speaks aloud as they type on a keyboard (Boyle et al., 2005).
1.1.3 SONIFICATION
Information scientists are extensively studying the idea of "sound as information." According to the NSF report (Kramer et al., 1999) by the International Community for Auditory Display (ICAD), sonification is the process of using non-verbal sound to convey information. For instance, auditory icons (Hermann, 2002) display information through sound by drawing on commonly held meanings of everyday sounds. Consider the sound of a bottle filling up, which can be used to indicate the progress of a file download.
The sonification concept is a branch of auditory display. Auditory display can generally be used to describe any form of display that makes use of non-verbal sounds to communicate information. Sonification, as such, is a type of auditory display that adopts non-speech audio to represent information. Kramer et al. (1999) further broadened the concept by elaborating that sonification is the conversion of data relations into perceived relations in a non-speech sound signal in order to facilitate communication or interpretation. Thus, the main objective of sonification is to translate the relationships in data into non-speech sound(s) and to make use of human beings' auditory perceptual abilities to make those data relationships comprehensible.
Since dyslexia, as discussed above, is a specific language disorder that impairs a student's ability to read and write, it can be argued that sonification is an appropriate solution for this disability. As an assistive technology, sonification conveys information as non-speech sounds, so dyslexic students do not need to worry about their reading and writing disabilities: they can process the information through non-speech sound, which will increase their overall comprehension of the information conveyed to them.
1.2 PROBLEM STATEMENT
It is well known that dyslexic people in Malaysia face challenges in studying and do not have enough support to overcome their problems. People with dyslexia can become frustrated and discouraged because reading and spelling are so difficult for them. Young learners, in particular, may dislike being separated from their friends during reading class or having to see a specialist reading instructor. Nevertheless, helping them is necessary to ensure that they can function properly and make positive contributions in their lives. Many successful people have dyslexia, and it has not stopped them from achieving their objectives in life.
The study of dyslexia in Malaysia is extremely limited, and information about dyslexia is not widely disseminated. Only a small amount of research has been carried out on applications that ease the lives of people with disabilities, and very little of it addresses dyslexia. In addition, there is no current research on the sonification concept for dyslexia. This thesis therefore proposes a guideline for designers creating assistive tools for dyslexia that use sonification concepts to help overcome learning disabilities.
This study proposes a solution based on the use of the sonification concept in assistive technology to help students with dyslexia in their learning process. It therefore explains the sonification concept, assistive technologies and their contribution to dyslexic students' learning.
1.3  RESEARCH AIMS
Current technology appears to be biased against people with dyslexia in terms of gaining knowledge. Unequal access to the same knowledge within the current education system is an issue, and this motivates the implementation of a sonification application as an assistive technology for people challenged by dyslexia. To achieve this, the proposed thesis incorporates research on the technology and resources currently available to people challenged by dyslexia. In Malaysia, the study of dyslexia is extremely limited, and information about dyslexia is not widely disseminated; currently, there is no research on the sonification concept for dyslexia.
The aim of the thesis is to prepare a guideline and framework on the sonification concept in assistive technologies for dyslexic students, which will be a very useful aid to people with dyslexia in overcoming their learning disabilities.
In this information technology age, computer technology becomes more advanced day by day. Without awareness of and attention to people with disabilities, they will certainly be left behind. There should not be a technology gap between people with dyslexia and those without. The aim of this research is to analyse and focus on sonification concepts in assistive technologies for dyslexia, and to develop the following:
1.      An understanding of the field of sonification and dyslexia.
2.      An understanding of the potential of sonification to answer a range of scientific questions.
1.4  RESEARCH OBJECTIVES
The purpose of this research is to put forward guidelines for designers creating assistive tools for dyslexia using sonification concepts. Thus, the main objectives of this research are as follows:
1.      To provide a set of guidelines for future designers to understand students' perceptions of sonification in assistive technologies.
2.      To compare the control group and dyslexic students across various sonification-based tasks in assistive technologies.
1.5  RESEARCH SCOPES
Two groups of participants are involved in this research. The first group comes from a primary school in Labuan; these participants are considered typical students (the control group) and the group consists of thirty students. The second group consists of dyslexic students; it is assumed that the participants referred by the dyslexia association have been diagnosed with dyslexia. This group also consists of thirty students. All students are aged between 7 and 10 years old.
1.6  EXPECTED CONTRIBUTION
The expectation of this research is that a new framework will be developed that enhances the development of sonification as an assistive technology, helping dyslexic students to understand information just as typical students do and thereby increasing their academic performance.
1.7  HYPOTHESES
Hypothesis 1
H0: There is no significant difference between the control group and dyslexic students in terms of the matching task.
H1: There is a significant difference between the control group and dyslexic students in terms of the matching task.
Hypothesis 2
H0: There is no significant difference between the control group and dyslexic students in terms of the comparison task.
H1: There is a significant difference between the control group and dyslexic students in terms of the comparison task.
Hypothesis 3
H0: There is no significant difference between the control group and dyslexic students in terms of the classification task.
H1: There is a significant difference between the control group and dyslexic students in terms of the classification task.
Hypothesis 4
H0: There is no significant difference between the control group and dyslexic students in terms of the ordering task.
H1: There is a significant difference between the control group and dyslexic students in terms of the ordering task.
Hypothesis 5
H0: There is no significant difference between the control group and dyslexic students in terms of the association task.
H1: There is a significant difference between the control group and dyslexic students in terms of the association task.
Hypothesis 6
H0: There is no significant difference between the control group and dyslexic students in terms of the prediction task.
H1: There is a significant difference between the control group and dyslexic students in terms of the prediction task.
Hypothesis 7
H0: There is no significant difference between the control group and dyslexic students in terms of the finding task.
H1: There is a significant difference between the control group and dyslexic students in terms of the finding task.
Hypothesis 8
H0: There is no significant difference between the control group and dyslexic students in terms of the memorization task.
H1: There is a significant difference between the control group and dyslexic students in terms of the memorization task.
Hypothesis 9
H0: There is no significant difference between the control group and dyslexic students in terms of the navigation task.
H1: There is a significant difference between the control group and dyslexic students in terms of the navigation task.
Hypothesis 10
H0: There is no significant difference between the control group and dyslexic students in terms of the identification task.
H1: There is a significant difference between the control group and dyslexic students in terms of the identification task.
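Each pair of hypotheses above can be evaluated with a two-sample significance test comparing the two groups' scores on the corresponding task. The study's actual analysis is carried out in SPSS (see Chapter 4); purely as an illustrative sketch, the Python snippet below shows the equivalent independent-samples t-test, where the variable names and scores are hypothetical values, not data from this study.

# A minimal sketch only (not the study's SPSS analysis): an independent-samples
# t-test comparing hypothetical task scores for the two groups.
from scipy import stats

control_scores  = [8, 7, 9, 6, 8, 7, 9, 8]   # hypothetical matching-task scores (control group)
dyslexic_scores = [6, 5, 7, 5, 6, 4, 7, 6]   # hypothetical matching-task scores (dyslexic group)

t_stat, p_value = stats.ttest_ind(control_scores, dyslexic_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# Reject H0 at the 5% level when p < 0.05; otherwise retain H0 for that task.
if p_value < 0.05:
    print("Significant difference between the groups on this task (reject H0).")
else:
    print("No significant difference between the groups on this task (retain H0).")

The same comparison would be repeated for each of the ten tasks listed above.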
1.8  FRAMEWORK/FLOWCHART OF RESEARCH

Figure 1.1: Framework of research
The above diagram describes the framework through which the topic of the dissertation was chosen. Having reviewed numerous studies on the subject of dyslexia and the impact it can have on students' potential to reach their peak (both socially and academically), the topic was chosen with the aim of achieving new improvements for these students by adopting sonification as an assistive technology. In order to test the usability of sonification as an assistive technology for dyslexic students, a test was conducted with both the target group (dyslexic students) and a reference group (typical students).
1.9  REPORT ORGANIZATION
This section briefly outlines each chapter of the research study. The study consists of five chapters: Introduction, Literature Review, Methodology, Results, and Discussion and Conclusion. Each chapter has its own subtitles and descriptions and covers a different area of the study, which makes it easier for the reader to understand clearly what is included in the research and keeps the report organised and systematic. The brief outline of each chapter is as follows:
Chapter 1, the Introduction, introduces the report as a whole. It describes the problem statement, aims, objectives, scope, expected contribution and framework/flowchart of this research study.
Chapter 2 is the Literature Review, which develops the ideas of this research study from journal articles and reviews of related applications. Other sources of information and journal articles are used as references for this area of study.
Chapter 3 presents the Methodology. It describes the methods used to conduct this research, including the general research design, sampling method, data collection method and other related procedures, together with further discussion.
Chapter 4 presents the Results. The results of this research are derived from the Statistical Package for the Social Sciences (SPSS). Tables are included to present the results in a simpler form, and a brief explanation is given for all relevant tests and results.
Finally, Chapter 5, Discussion and Conclusion, is the last chapter of this research study. It discusses the results obtained and provides recommendations, implications and a conclusion at the end of the chapter. The conclusion includes a summary of the overall project and of the whole research study.
1.10  SUMMARY
In summary, this chapter has presented the overall process and flow of this paper. From this chapter, it can be seen that the main topic of interest is sonification (the independent variable) and how it can be used as an assistive technology to help improve the cognitive processing ability of students with dyslexia (the dependent variable). Additionally, this is an experiment-based research study in the sense that dyslexic students will undertake different information-processing tasks (both with and without sonification) to understand how the concept of sonification can be used to improve their cognitive processing ability, and how significantly the technology contributes to this.
CHAPTER 2
LITERATURE REVIEW
2.1. CHAPTER INTRODUCTION
The focus of this chapter is to review the relevant literature in relation to the topic of discussion, namely the adoption of the sonification concept as an assistive technology for students with dyslexia. This review of the literature therefore provides insight into the topics of sonification, dyslexia and the adoption of sonification as an assistive technology.
2.2 FRAMEWORK FOR REVIEW
Figure 2.1: The framework for the literature review
The framework above represents how the literature review will be undertaken. The first step is to understand what dyslexia and assistive technology are; the second is to illustrate how the sonification concept can be used as a form of support for students with dyslexia.
2.3 DEFINITION OF SONIFICATION
Sonification is defined as the transformation of data relations into perceived relations in an acoustic signal for the purpose of facilitating communication or interpretation. While auditory display can be either speech- or non-speech-based, sonification deals only with non-speech sounds and is aimed at providing the listener with output that is denser than human speech.
Thomas Hermann made the statement that a technique can only be described as sonification if:
1.      The sound reflects objective properties or relations in the input data.
2.      The transformation is systematic. This means that there is a precise definition of how the data (and optional interactions) cause the sound to change.
3.      The sonification is reproducible: given the same data and identical interactions (or triggers) the resulting sound has to be structurally identical.
4.      The system can intentionally be used with different data, and also be used in repetition with the same data.
Hermann's definition is sufficient for describing a sonification system as a whole, but sonification on its own, as a subject, deals with transforming data into non-speech sounds to aid people in interpreting the meaning of complex data.
2.4 CONCEPT OF SONIFICATION
The sonification concept is a branch of auditory display. Auditory display can generally be used to describe any form of display that makes use of non-verbal sounds to communicate information. Sonification, as such, is a type of auditory display that adopts non-speech audio to represent information. Kramer et al. (1999) further broadened the concept by elaborating that sonification is the conversion of data relations into perceived relations in a non-speech sound signal in order to facilitate communication or interpretation. Thus, the main objective of sonification is to translate the relationships in data into non-speech sound(s) and to make use of human beings' auditory perceptual abilities to make those data relationships comprehensible.
Sonification is an approach to information display used in many different fields and, as Kramer (1994) emphasised, a complete understanding of the field of sonification requires experience across many human domains of knowledge. The theories that provide the background for research and design in sonification come from fields such as audio engineering, audiology, computer science, informatics, linguistics, mathematics, music, psychology and telecommunications, among others; yet sonification is not based on a single set of uniform principles or rules (see Edworthy, 1998). Instead, the theory that defines sonification in practice is an amalgamation of important insights drawn from the convergence of these diverse fields.
In 1999, a sonification report was presented by Kramer et al. (1999), in which they identified four issues that need to be tackled in order to describe sonification theoretically: 1) a taxonomic description of sonification techniques based on psychological principles or display applications; 2) a definition of the types of data and user tasks relevant to sonification; 3) an understanding of how data are mapped to acoustic signals; and 4) a discussion of the factors that limit the usability of sonification. By addressing these four topics, this paper aims to provide broad information on sonification as well as an overview of theories on the sonification concept. Numerous contributions from authors in different fields have broadened the sonification concept, and this breadth can be seen from the many fields in which sonification is applied. However, this paper focuses on the applicability of sonification in academia as an assistive technology for students with dyslexia.
2.5 CLASSIFICATIONS AND FUNCTIONS OF SONIFICATION
Since sound has inherent characteristics that make it useful to human beings as a means of information display, the functions that auditory displays perform can be grouped according to the features of sound. The functions of auditory displays fall into four broad categories: (1) alarms, alerts and warnings; (2) status, process and monitoring messages; (3) data exploration (Buxton, 1989; Edworthy, 1998; Kramer, 1994; Walker & Kramer, 2004); and (4) art and entertainment.
2.5.1 ALERT FUNCTIONS
The words alert and notification refer to sounds used to indicate that something has occurred, or is about to occur, with the intention that the listener should take immediate action in the environment where the alert is issued (see Buxton, 1989; Sanders & McCormick, 1993; Sorkin, 1987). Alerts and notifications are usually simple and straightforward, carrying relatively little information. For instance, an alarm may indicate that the user should end one activity and switch to another, but it says little about what is actually going on; the alarm does not mean that the current activity is finished, only that the time expected for completing it has elapsed.
Warnings and alarms are a stronger form of alert sound, meant to convey the occurrence of an event that may have adverse effects and that requires urgent action from the user to mitigate or eliminate those effects (see Haas & Edworthy, 2006). Warning signals presented in auditory form capture attention more effectively than visual signals (Spence & Driver, 1997). Alarms usually carry more information than notifications, because they alert users to an issue that could have adverse effects or even endanger the user's life.
2.5.2 STATUS AND PROCESS INDICATING FUNCTIONS
While sound can perform basic alerting functions, there are cases where sound is required to convey more detailed information about the event taking place. The current or ongoing status of a system or process is one such case, where sound is needed to present the human listener with an auditory display containing information about the dynamic status or progress of a process. In this case, sound exploits the listener's ability to detect changes in the auditory environment, or serves the user's need to keep their eyes focused on other tasks (Kramer et al., 1999, p. 3). Auditory displays have been developed for numerous uses, including the monitoring of ongoing processes (see Gaver, Smith, & O'Shea, 1991; Walker & Kramer, 2005), the presentation of patient data at an anesthesiologist's workstation (Fitch & Kramer, 1994), blood pressure monitoring in a hospital environment (M. Watson, 2006), and telephone hold time (Kortum, Peres, Knott, & Bushey, 2005).
2.5.3 DATA EXPLORATION FUNCTIONS
Another function of auditory display is data exploration. This is what is generally meant by the term "sonification," and it is intended to encode information about a particular data set or a relevant aspect of that data set. Sonification designed for data exploration differs from status or process indicators in that its sounds offer a more comprehensive view of the data in the system, rather than condensing the information to capture the temporary state of an event, as alerts and process indicators do. Typical examples of sonification designed for data exploration are auditory graphs (see L. M. Brown & Brewster, 2003; Flowers & Hauer, 1992, 1993, 1995; Smith & Walker, 2005) and interactive sonification (see Hermann & Hunt, 2005).
2.5.4 ART AND ENTERTAINMENT FUNCTIONS
Besides the three functions discussed above, sonification can also be used as a form of art and entertainment. With the advancement of information and communication technology, as well as other music equipment, sounds can be played, mixed, shuffled or created on a single computer system. Some of these sounds can be sonified to present vivid information in a non-speech format. A good example is techno music, in which only instruments are played and used to create a scenario for an occurring phenomenon. Sonification can likewise serve as entertainment and art in instrumental or operatic performance, where viewers simply sit back and enjoy the musical presentation without any spoken words.
2.6 SONIFICATION TECHNIQUES AND APPROACHES
de Campo (2006) proposed a sonification design space based on three approaches: (1) event-based, (2) model-based and (3) continuous. The definitional lines in this taxonomic description of sonification are broad and overlapping. The treatment of sonification approaches below is based on de Campo's proposal.
2.6.1 EVENT-BASED SONIFICATION
The event-based approach describes the type of sonification in which data are presented by parameter mapping (de Campo, 2006; Hermann & Hunt, 2005). Parameter mapping means that changes in some data dimension are represented by changes in an acoustic dimension in order to produce the sonification (Hermann & Hunt, 2005). By definition, sonification represents changes in data with some feature of sound (Kramer et al., 1999), so the dimensions of sound that are manipulated must be mapped in a way that corresponds to changes in the data. Sound has numerous dimensions that can be varied, which allows for a large design space when mapping data to audio (see Kramer, 1994; Levitin, 1999). In order to adopt parameter mapping for sonification, the data dimensions must be scaled so that display becomes possible; as a result of this rescaling, parameter mapping can yield lower-quality acoustic results than other approaches, such as the model-based approach discussed below.
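As a concrete illustration of event-based parameter mapping, the Python sketch below rescales a small series of hypothetical data values onto a pitch range and renders one short tone per data point; the data values, pitch range and tone length are assumptions chosen for illustration, not parameters proposed by the cited authors.

import numpy as np
from scipy.io import wavfile

# A minimal parameter-mapping sketch: each data value is mapped to the pitch
# of a short sine tone. All values below are illustrative assumptions.
data = np.array([3.0, 5.5, 2.0, 8.0, 6.5])        # hypothetical data series
f_min, f_max = 220.0, 880.0                        # target pitch range in Hz
sample_rate, tone_seconds = 44100, 0.3

# Linearly rescale the data onto the chosen frequency range
norm = (data - data.min()) / (data.max() - data.min())
freqs = f_min + norm * (f_max - f_min)

# Render one short sine tone per data point and concatenate them
t = np.linspace(0, tone_seconds, int(sample_rate * tone_seconds), endpoint=False)
signal = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

wavfile.write("parameter_mapping.wav", sample_rate, (signal * 32767).astype(np.int16))

Any other acoustic dimension, such as loudness or tempo, could be substituted for pitch in the same way.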
2.6.2 MODEL-BASED APPROACHES
This approach differs from the event-based approach in that, instead of mapping data parameters to sound parameters, it features a virtual model with which the listener can interact, such that the model's properties are described by the data (de Campo, 2006, p. 2). A model consists of a virtual object with which the user can interact, and the user's input drives the sonification such that the model is "a dynamic system capable of a dynamic behaviour that can be perceived as sound" (Bovermann, Hermann, & Ritter, 2006, p. 78). The model-based approach relies heavily on active manipulation of the sonification by the user and tends to involve high-dimensional data.
2.6.3 CONTINUOUS SONIFICATION
This approach to sonification is possible when data form a time series and are sampled at a rate at which the quasi-analog signal can be translated into sound directly (de Campo, 2006). The most prototypical method of continuous sonification is audification, whereby the waveform of the data is translated directly into sound (Kramer, 1994). For example, seismic data have been audified in order to facilitate the categorisation of seismic events with accuracies of over 90% (see Dombois, 2002; Speeth, 1961). The approach may require shifting the frequency of the waveform into the audible range and amplifying it so that it is loud enough for human ears to perceive.
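A rough sketch of this idea is shown below, under the assumption that a densely sampled data series is available: the series itself is normalised and written out as an audio waveform, so that playing it back at an audio sample rate shifts it into the audible range. The synthetic series, sample rate and file name are illustrative assumptions.

import numpy as np
from scipy.io import wavfile

# A minimal audification sketch: the data waveform itself becomes the audio signal.
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=200_000))   # hypothetical dense time series

# Normalise to [-1, 1] so the waveform can be played back directly
series = series - series.mean()
series = series / np.max(np.abs(series))

# Writing the data at an audio sample rate effectively speeds it up into the audible range
wavfile.write("audification.wav", 44100, (series * 32767).astype(np.int16))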
2.7 SONIFICATION AND AUDITORY DISPLAY
Sonification is a relatively recent subject within auditory display. In an information system, auditory displays offer a pathway between the source of information and its receiver (see Kramer, 1994; Shannon, 1998, 1949). In the case of auditory display, the data of interest are conveyed to the human listener in the form of sound. This general process is illustrated below.
Figure 2.2: General process in a communication system


Source as adapted from: Bruce and Walker (2010)
While the investigation of audio as an information display goes back more than 50 years (see Frysinger, 2005), digital display technology has made the auditory display of information ubiquitous in recent years. Edworthy (1998) argued that the spread of auditory displays and audio interfaces across the globe cannot be avoided, given their ease of use and cost efficiency and the fact that computers are now capable of producing sounds. Devices ranging from cars and computers to phones and microwaves now use internal sounds to communicate messages to their human users, for example to signal that a system is turning on or off or performing a process.
The benefits of and motivations for displaying information with sound could be discussed at length, because sound offers numerous advantages to its human users; here they are discussed only briefly. First, auditory displays exploit human beings' ability to detect patterns, temporal changes and flows in auditory signals (Bregman, 1990; Flowers, Buhman, & Turnage, 1997; Flowers & Hauer, 1995; Garner & Gottwald, 1968; Kramer et al., 1999; McAdams & Bigand, 1993; Moore, 1997). Auditory display is therefore an appropriate means of communicating complex information, changes over time and warning signs. Second, in many working environments the operator is often unable to look at, or unable to see, a visual display, and sight is not required to understand information carried by an auditory display. The listener may be unable to use a visual display because the visual system is occupied with another task (Fitch & Kramer, 1994; Wickens & Liu, 1988), because the receiver is visually impaired, either physically or because of environmental factors such as smoke (Fitch & Kramer, 1994; Kramer et al., 1999; Walker, 2002; Walker & Kramer, 2004; Wickens, Gordon, & Liu, 1998), or because the visual system is overtaxed with information (see Brewster, 1997; M. L. Brown, Newsome, & Glinert, 1989). Third, auditory and voice systems have proven to be highly compatible with tasks that require verbal or categorical processing (Salvendy, 1997; Wickens & Liu, 1988; Wickens, Sandry, & Vidulich, 1983). Additional features of auditory perception that support sound as an effective way to represent data include our ability to monitor and process multiple auditory streams at a time (Fitch & Kramer, 1994) and to detect audio information rapidly, especially in stressful environments (Kramer et al., 1999; Moore, 1997). Finally, as mobile devices and their visual displays become ever smaller, sound becomes an increasingly compelling mode of information display (Brewster & Murray, 2000). For more discussion of the benefits and potential problems of auditory displays, see Kramer (1994; Kramer et al., 1999), Sanders and McCormick (1993), Johannsen (2004) and Stokes (1990).
2.7.1 Earcons
In cases where there is no clear iconic representation for the items being presented, earcons can be used to produce an effective sonification. Earcons are abstract, synthetic and usually musical tones or sound patterns that can be used in structured combinations. They are non-verbal audio messages consisting of short motives: rhythmic sequences of pitches with varying intensity, timbre and register (Brewster and Edwards, 1992). Blattner et al. (1989) defined a system of hierarchical earcons, in which a specific structure is given to single earcons that are grouped together: an earcon can be viewed as a node in a tree that inherits all the properties of the earcons above it. As stated by Brewster and Edwards (1992), there are a maximum of five levels to this tree, since there are five varying parameters: rhythm, pitch, timbre, register and dynamics. Earcons can thus be combined to produce complex audio messages, and the process of combining the auditory properties can be automated to create new yet consistent sounds. Through such means, a hierarchical system of earcons can easily be extended as a "family of sounds". Earcons have many uses, such as adding context to a menu in a user interface and helping the user maintain awareness of his or her current location in the tree.
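To make the idea of building earcons from rhythm and pitch concrete, the sketch below renders two short motifs and joins them into a compound earcon; the motif pitches and durations, and the idea that one motif acts as the "family" root while the other acts as a child variant, are illustrative assumptions rather than a design from the cited papers.

import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100

def motif(pitches_hz, durations_s):
    """Render a rhythmic sequence of pitches as one mono signal (a simple earcon motif)."""
    parts = []
    for f, d in zip(pitches_hz, durations_s):
        t = np.linspace(0, d, int(SAMPLE_RATE * d), endpoint=False)
        parts.append(np.sin(2 * np.pi * f * t) * np.hanning(t.size))  # soft attack/decay envelope
    return np.concatenate(parts)

family_root   = motif([440, 440], [0.15, 0.15])             # hypothetical "family" earcon
child_variant = motif([440, 660, 880], [0.10, 0.10, 0.20])  # hypothetical child earcon in that family

compound_earcon = np.concatenate([family_root, child_variant])
wavfile.write("earcon.wav", SAMPLE_RATE, (compound_earcon * 32767).astype(np.int16))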
2.7.2 Auditory icons
Auditory icons are brief sounds used to represent functions, actions and objects (Gaver, 1986). They capitalise on users' previous knowledge of, and natural auditory associations with, the causes and sources of sounds. Their main purpose is to be the auditory equivalent of visual icons, which are commonly used in personal computing to represent objects or processes with graphical symbols. Icons generally make information easier to display because of their ability to present a great deal of information in a concise and easily recognised way (Blattner et al., 1989). Because the visual system can process different dimensions such as shape and colour in parallel, a large amount of information can be transferred through a visual icon, and the same applies to the auditory system and its processing dimensions. According to Hemenway (1982), it is easier to locate and present icons than words because meaning can be derived directly from the object they represent. Kotler et al. (1969) also showed how cultural and linguistic barriers can be bypassed by using icons. Auditory icons can be mapped to the actual object or event they represent either directly or indirectly: in a direct relation, the sound made by the target event itself is used, while an indirect relation provides a surrogate for the target (Keller and Stevens, 2004). Objects are thus represented by the sound-producing events involved; for instance, the sound of running water or paper-towel dispensing can be used to represent restrooms. Icons vary in their directness and auditory similarity to the actual object they represent (Walker and Kramer, 2004); as long as the sound produced can be associated with the sound of an object or event, it is classified as an auditory icon. While the utility of auditory icons in computer applications is limited by problems with representing abstract concepts (Walker et al., 2006), auditory icons are still very useful for representing items in the real world.
2.7.3 Parameter mapping
Parameter mapping sonification (PMS) is the most common technique for representing multi-dimensional data as sound (Worrall, 2009). PMS is sometimes referred to as a sonic scatter plot (Flowers et al., 1997; Flowers, 2005), nth-order parameter mapping (Scaletti, unknown) or multivariate data mapping, in which multiple variables are mapped into a single sound (Kramer, 1994). This implies that data dimensions are mapped onto sound parameters, which may be physical (e.g. frequency, amplitude), psychophysical (e.g. pitch, loudness) or perceptually coherent complexes (e.g. timbre, rhythm). Thus, parameter mapping in sonification essentially involves designing how the data will be encoded by the system and decoded by the listeners.
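The mapping stage of PMS can be sketched as follows, where several hypothetical data dimensions are each assigned to one sound parameter and rescaled into that parameter's range; the column meanings, parameter choices and ranges are assumptions for illustration, and the synthesis step is omitted.

import numpy as np

# Hypothetical records with three data dimensions (e.g. temperature, rainfall, price)
records = np.array([
    [3.1, 0.2, 10.0],
    [5.4, 0.8, 12.5],
    [4.2, 0.5, 11.0],
])

# data column -> (sound parameter, output range); illustrative assignments only
mapping = {0: ("pitch_hz", (220.0, 880.0)),
           1: ("loudness", (0.2, 1.0)),
           2: ("tempo_bpm", (60.0, 180.0))}

for col, (param, (lo, hi)) in mapping.items():
    x = records[:, col]
    scaled = lo + (x - x.min()) / (x.max() - x.min()) * (hi - lo)   # linear rescaling
    print(param, np.round(scaled, 2))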
2.8 APPLICATIONS OF SONIFICATION FOR DYSLEXICS
The idea of using sounds to diagnose illness, and possibly save lives, is neither new nor unusual in the hospital environment, where the stethoscope is standard equipment for doctors. Medical students are taught to listen to tissues rubbing in the lungs, gases bubbling in the intestines and blood pumping through the veins. Other indicators, such as body temperature or blood CO2 level, can be measured and shown in graphs. However, graphs can be distracting during tasks that require intense visual concentration, and synthesized sounds can be used to represent these indicators as well. It was shown in a simulated operation that medical students performed better when eight dynamic variables about the patient's health were presented to them as sounds instead of graphs, and even better with sounds alone than with sounds and graphs combined (Fitch and Kramer, 1994). The images produced by X-ray, CAT scan and magnetic resonance imaging (MRI) equipment are usually used to look for signs of disease in a patient's body, but it is extremely difficult to detect affected regions of a patient's brain in an MRI image because of the nature of the brain tissue. These unhealthy regions can be distinguished more easily by mapping image textures into sounds that are heard when a specific region of interest is selected with a computer mouse (Martins et al., 1996). Listening to sounds can thus help doctors diagnose illnesses that could otherwise be dangerous.
Medicine is not the only area where sound can provide new insights into data relations and offer opportunities for new and better ways to undertake a task. Other areas are discussed below; however, the focus remains on how these concepts can be used to support students with dyslexia.
Blind people rely on natural sounds more than most of us. As a result of their disability, they have learned to listen for useful sounds and to filter out those that are not useful. When walking through town, the sound of a constant line of traffic is important for navigating in a straight line and maintaining direction, while the voices of people passing by are less useful to them (Swan, 1996). Electronic aids have been developed to help blind people become more mobile and independent. One such device registers the traveller on a digital map using global positioning satellites (GPS), with virtual sounds emanating from landmarks and buildings providing guidance along a predefined route (Loomis et al., 1994).
Multimedia computer programs have been designed to make maps and text more accessible to visually impaired people. For instance, "Audiograf" is a program that generates sounds from parts of a selected diagram: a line between two points sounds like a plucked string, and text selections are heard as speech (Kennel, 1996). "Mathtalk" is another system that augments a text-to-speech translator with non-verbal cues to make it easier for listeners to understand mathematical expressions. The cues provide an auditory overview of expressions through graphic symbols such as parentheses and subscripts; an opening parenthesis, for example, has a rising tone, while a closing parenthesis has a falling tone (Stevens et al., 1994). There are many devices of this kind designed to help people with all kinds of disabilities.
Sounds are usually very important when people work together as a group. For example, builders on construction sites coordinate their activities by paying attention to workmates hammering, shovelling and revving engines. The importance of sound was shown in an experiment in which two people were paired to produce as many bottles of cola as possible in a computer game simulating a factory. The factory was made up of nine interconnected machines, such as a heater, a bottler and conveyors, each with an on/off control. The two participants were seated in different rooms; each could see and control production in one half of the plant and talk to the other through a microphone. It was found that the co-workers produced more cola when they could hear the status of the machines through the clanking of bottles, the boiling of water and other activities taking place in the plant. These sounds helped them to track ongoing processes, monitor the performance of individual machines, stay aware of activities elsewhere in the factory and talk about the factory more fluently. The task was also more enjoyable when the sounds were turned on (Gaver et al., 1991).
Sounds can be extremely useful in circumstances where shifting the eyes to gain information slows down performance (Ballas, 1994), such as driving an emergency vehicle or piloting a jet plane. In an experiment dating back to 1945, it was found that it took pilots just an hour to learn to fly using a sonified instrument panel in which turning was heard as a sweeping pan, tilt as a change in pitch, and speed as differences in the rate of a repeating sound (Kramer, 1994a, p. 34).
As can be seen from the above analysis, there are numerous concepts of sonification, and the choice of adopted concept depends on the information being processed, the audience that will decode the information and the environment where such information is being decoded.
Data sonification is also known as auralisation; it is the process of converting visual information into non-speech sounds (Shepherd, 1995). An innovative approach to this problem is the vOICe system (Meijer, 2000), which is capable of reading both printed and on-screen images.
Thus, the question in this case is whether sonification can be used to help students with dyslexia, and the answer is a bold yes. Students with dyslexia have been identified as experiencing difficulties with reading and writing because they are not able to process single words. This issue applies only to reading and writing, which implies that these students have no problem with listening and speaking. It must therefore be reiterated that sonification uses sound to communicate the real meaning of a complex message, requiring only listening ability. When exposed to the right sounds, dyslexic students can improve their knowledge of a subject through sonification, because a similar approach is already used to guide blind and deaf people in modern life. While dyslexic students are neither blind nor deaf, they exhibit comparable characteristics in the sense that they cannot process individual words and find it difficult to read or write.
Figure 2.3: The process of sonification
Source as adapted from: Thomas (2008).
From Figure 2.3, it is evident that sonification is an important assistive technology for dyslexic students because it brings information together and processes it through a sonification algorithm that these students can use to adapt to the learning environment and improve their overall academic achievement. Additionally, this technology has been practised for years with both deaf and blind people, and it has advanced in both efficiency of development and effectiveness of use.
Although it is possible to adopt sonification as an assistive technology to help students with dyslexia, this paper questions the degree of success of such a tool. For a student to be competent in speech and writing, the student needs to acquire these skills through training, that is, by understanding each letter of the alphabet. This is not the objective of sonification, which is used only to convey information about complex messages. So the question is: how can a student actually understand the message when he or she does not understand the alphabet and phonetics used? A typical reader knows that "A" is the first letter of the English alphabet and can be used to make words like "Apple" or sentences like "I eat an Apple." However, this is the basic problem of dyslexic students as discussed above: they do not reliably recognise that an "A" is an "A". They might, for instance, perceive "A" as "C", so that "Apple" (a fruit) becomes "Capple" (a meaningless word). How effective can decoding a message be for someone who does not know the alphabet used to encode it? That is the question this paper raises. Even if the student learns to process the sounds, there remains the further question of representing the processed sounds in writing, because dyslexic students find writing difficult when they cannot process single words. It can therefore be stated that the adoption of sonification as a corrective measure to help students with dyslexia is limited in application, but is still feasible if well implemented, because sonification (sound messages) can also be used to teach the alphabet itself, making students competent once they learn the individual letters.
2.9 DATA PROPERTIES AND TASK DEPENDENCY
The nature of the data to be presented and of the task to be undertaken by the listener are important in determining the system to be used for the sonification of an information display. The display designer must consider, among other things: what the user needs to accomplish; which parts of the information are relevant to the user's task; the amount of information the user needs to accomplish the task; the kind of display to present; and how to manipulate the data (e.g., filtering, transforming or data reduction).
All these issues pose major challenges in sonification design, since the nature of the data and the task necessarily constrain the data-to-display mapping design space. One reason is that some attributes of sound are perceived categorically (e.g., timbre), whereas other attributes are perceived along a perceptual continuum (e.g., frequency, intensity). Other challenges arise from more cognitive, or "top-down", components of sonification use. For example, Walker (2002) has shown that conceptual dimensions (such as size, temperature and price) determine how listeners will interpret the data.
2.9.1 Data types
Information can be classified as quantitative (numerical) or qualitative (verbal), and the design of an auditory display to accommodate quantitative data may differ from the design of a display that presents qualitative information. Data can also be described in terms of the scale on which they are measured. Nominal data classify or categorise; no meaning beyond group membership is attached to the magnitude of numerical values for nominal data. Ordinal data take on a meaningful order with regard to some quantity, but the distance between points on ordinal scales may vary.
Interval and ratio scales have the characteristics of both meaningful order and meaningful distances between points on the scale (see S. S. Stevens, 1946). Data can also be discussed in terms of whether they exist as discrete pieces of information (e.g., events or samples) or as a continuous flow of information. Barrass (1997, 2005) is one of the few researchers to consider the role of different types of data in auditory display and to make suggestions about how information type should influence mappings. As one example, nominal/categorical data types (e.g., different cities) should be represented by categorically changing acoustic variables, such as timbre. Interval data may be represented by more continuous acoustic variables, such as pitch or loudness (but see S. S. Stevens, 1975; Walker, in press, for more discussion of this issue).
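As a hedged illustration of this type-dependent choice of acoustic variables, the sketch below assigns a hypothetical nominal variable (city) to a categorical timbre and a hypothetical interval variable (temperature) to a continuous pitch; the city names, temperature range and pitch range are assumptions, not mappings taken from Barrass or Walker.

import numpy as np

# Nominal data -> categorical acoustic variable (timbre), chosen per category
TIMBRES = {"Atlanta": "sine", "Boston": "square", "Chicago": "sawtooth"}

def pitch_for(temperature_c, lo=-10.0, hi=40.0, f_min=220.0, f_max=880.0):
    """Interval data -> continuous acoustic variable (pitch in Hz) by linear rescaling."""
    frac = np.clip((temperature_c - lo) / (hi - lo), 0.0, 1.0)
    return f_min + frac * (f_max - f_min)

for city, temp in [("Atlanta", 28.0), ("Boston", 12.0), ("Chicago", -3.0)]:
    print(city, "->", TIMBRES[city], "timbre,", round(float(pitch_for(temp)), 1), "Hz")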
Nevertheless, there remains a paucity of research aimed at studying the factors within a data set that can affect perception or comprehension. For example, data that are generally slow-changing, with relatively few inflection points (e.g., rainfall or temperature) might be best represented with a different type of display than data that are rapidly-changing with many direction changes (e.g., EEG or stock market activity). Presumably, though, research will show that data set characteristics such as density and volatility will affect the best choices of mapping from data to display. This is beginning to be evident in the work of Hermann, Dombois, and others who are using very large and rapidly changing data sets, and are finding that audification and model-based sonification are more suited to handle them. Even with sophisticated sonification methods, data sets often need to be pre-processed, reduced in dimensionality, or sampled to decrease volatility before a suitable sonification can be created. On the other hand, smaller and simpler data sets such as might be found in a high-school science class may be suitable for direct creation of auditory graphs and auditory histograms.
2.10 TAXONOMIC DESCRIPTION OF AUDITORY DISPLAY AND SONIFICATION
The taxonomic description of auditory display in general, and of sonification in particular, can be made through numerous classifications and categories. The categories are often based either on the functions of the display system or on the sonification approach used, and both can provide a logical basis for a taxonomy. In this paper, however, auditory display and sonification are discussed and classified in relation to both function and technique.
While sonification is clearly a subset of auditory display, it is not entirely clear where the boundary between the two concepts should be drawn. Definition by category in the field of sonification is loose and somewhat flexible; for instance, auditory representations of box-and-whisker plots, of equal-interval time-series data and of diagrammatic information are all referred to as sonification, even though their display formats are clearly and distinctively different. The name sonification should therefore be viewed as less important than the display's ability to communicate complex messages to the intended audience. The taxonomic description presented in this paper is meant to reflect conventional naming in the literature, but it should not be taken as a strict boundary between auditory display and sonification, nor as essential to the successful creation of displays.
2.11 MODEL OF INTERACTION IN SONIFICATION
In order to understand the interaction between sonification and users, it is important to base the discussion of sonification approaches on the nature of the interaction. Interaction can be considered a dimension along which different displays are classified, and it ranges from completely non-interactive to completely user-initiated. In some cases, the listener makes use of a display without having the option of manipulating it; such a display, which is simply triggered and then plays out on its own, is known as "concert mode" (Walker & Kramer, 1996) or "tour based" (Franklin & Roberts, 2004). At the other end, the listener may be able to actively control the presentation of the sonification. Sonifications towards this interactive end of the spectrum have been called "conversation mode" (Walker & Kramer, 1996) or "query based" (Franklin & Roberts, 2004) sonification. The mode of interaction between the user and the device is thus essential to understanding how an approach can be carried out. Users are mostly allowed to manipulate the system in art and entertainment types of sonification; in data exploration tasks that are used to test a user's ability, the user is usually not allowed to manipulate the interaction.
2.12 LIMITATIONS OF SONIFICATION
Although research is shedding light on the extent to which a given task and data set can be adapted to represent information with sound, the major limiting factors in the adoption of sonification have been, and seem likely to remain, the perceptual and information-processing capabilities of the user. These limitations are described below.
2.12.1 AESTHETICS AND MUSICALITY
Edworthy (1998) made a clear point about the independence of display performance and aesthetics. Although sound can aesthetically enhance a listener's interaction with a system, the person's performance may not necessarily be influenced by the presence or absence of sound. In the field of sonification, aesthetics and musicality remain heavily debated topics. The use of musical sound, as opposed to pure tones, has been recommended because of the ease with which musical sounds are perceived (L. M. Brown et al., 2003), but it has yet to be proven whether musical sounds improve listener performance more than presumably less aesthetically pleasing sounds. While raising issues of aesthetics and musicality is important, the advice remains that sonifications should be designed to be as aesthetically pleasing as possible while still conveying the intended message. Kramer (1994) identified listener annoyance as a factor that can deter the use of auditory displays, and the designer should therefore make every effort to ensure that annoyance is avoided as much as possible.
2.12.2 INDIVIDUAL DIFFERENCES AND TRAINING
Another factor that can limit the usability of sonification is individual differences and training. People naturally differ in their perception of a program, their mode of adoption, and their skills and competence. In addition, sonification programs require users to be competent in basic operations such as adjusting volume, switching between programs, and recording or playing back sounds. The level of understanding an individual possesses in this area therefore determines how much that person can gain from a sonification program: the more competent the individual, the better he or she will be able to make use of the program and improve his or her performance.
It can thus be seen from these limitations that extra care should be taken in the design framework of any sonification program, because users differ in their understanding of the sonification process, in how they adopt sonification for decoding messages, and in their aesthetic appreciation of the sound system. However, as the literature suggests, these differences should not prevent any program from adopting sonification as an information-processing aid, because it offers numerous benefits that can improve an individual's performance.
2.13 DYSLEXIA: AN INTERNATIONAL PERSPECTIVE
The term "dyslexia" was coined in 1887, following the case of a boy who experienced extreme difficulty learning to read and write despite displaying normal intellectual and physical abilities. In line with this discovery, research on the topic throughout the 20th century focused on the idea of dyslexia as the product of a visual disorder that causes people to read backwards or upside-down. In the 1970s, however, a new suggestion emerged that dyslexia is a product of difficulty in processing the phonological form of speech, which makes it hard for people to associate word sounds with the visual letters that make up the written word. Recent studies using imaging techniques have shown that there are differences in the development and functioning of the brain of a dyslexic person compared with that of a typical reader. Even today, after a century of research, dyslexia remains one of the most controversial topics in education, psychology and developmental neurology. The controversy stems from the incomplete and varying accounts of dyslexia, which give rise to contradictory theories about its causes, subtypes and characteristics (Ministry of Health – New Zealand, 2010).
Dyslexia is broadly accepted as a form of learning disability with certain biological traits that distinguish it from other learning disabilities. It is the most common learning disability worldwide and is estimated to affect 3 to 20% of the world's population (Ministry of Health – New Zealand, 2010). In New Zealand, the Specific Learning Disabilities Federation of New Zealand (SPELD NZ) estimates that 7.1% of all students in the country have specific learning disabilities (Chapman et al., 2003). Similar studies conducted in the UK indicate that 10% of students in UK higher education have dyslexia.
Although the term dyslexia is used in many countries across the globe, there is no international agreement on what it means or how it should be diagnosed. Some countries, such as New Zealand, have yet to fully embrace the term: the country's Ministry of Health does not officially recognise dyslexia as a disability, although it regards it as diagnosable (Ministry of Health – New Zealand, 2010). These differences between countries show that the topic is complex and that careful definition of ideas is necessary to ensure that this paper does not contradict the subject in its global perspective. These definitions are discussed below.
2.13.1 INTERNATIONAL DEFINITION OF DYSLEXIA
All the definitions presented in this paper are from English-speaking countries, and they reveal small differences in how the condition is viewed and how it is thought to be caused. It should be noted that dyslexia is primarily a medical term and is not usually used to describe educational issues. North Americans, for instance, prefer the terms "learning disability" or "specific learning disability", and the UK and Australia likewise prefer "specific learning disability". However, the increasing use of "dyslexia" in research and by the public means that these terms are often used interchangeably. That will not be the case here: this paper will strictly use the word "dyslexia" to ensure coherence and flow.
In the United States, the Office of Special Education and Rehabilitative Services within the US Department of Education provides funding and is responsible for improving outcomes for people of all ages with disabilities. In line with the government's No Child Left Behind agenda (US Department of Education, 2001) and the Individuals with Disabilities Education Act (US Department of Education, 2004), the Office offers numerous supports and services to parents, individuals, school districts and states as a means of helping people with learning disabilities.
There has been a notable shift in the US definition of the disability, moving away from the traditional IQ-achievement discrepancy towards definitions based on identification of the elements that constitute the disability (Aaron, 1997; Stanovich, 1998, 1999). This shift can be seen in the changes in the National Institute of Child Health and Human Development's (NICHD) definition of dyslexia over the past decades. In the 1980s, the NICHD definition amounted to the following:
When a child's difficulty cannot be linked to low intelligence, poor eyesight, poor hearing, inadequate educational opportunities or any other problem, then the child is said to be dyslexic.
Many parents, teachers and researchers viewed this definition as unsatisfactory, which led NICHD to adopt a new definition in 1994:
Dyslexia is one of several distinct learning disabilities. It is a specific language-based disorder of constitutional origin characterized by difficulties in single-word decoding, usually reflecting insufficient phonological processing. These difficulties in single-word decoding are often unexpected in relation to age and other cognitive and academic abilities; they are not the result of generalized developmental disability or sensory impairment. Dyslexia is manifest by variable difficulty with different forms of language, often including, in addition to problems with reading, a conspicuous problem with acquiring proficiency in spelling and writing (Lyon et al., 2003).
The above was a working definition, and it was revised in 2003 as follows:
Dyslexia is a specific learning disability that is neurobiological in origin. It is characterised by difficulties with accurate and/or fluent word recognition and by poor spelling and decoding abilities. These difficulties typically result from a deficit in the phonological component of language that is often unexpected in relation to other cognitive abilities and the provision of effective classroom instruction. Secondary consequences may include problems in reading comprehension and reduced reading experience that can impede the growth of vocabulary and background knowledge.
From the definitions reviewed, it can be seen that the exclusionary definition of the 1980s has given way to a more descriptive one. Dyslexia is now seen as a specific learning disability, which reflects a change in the primary understanding of the condition since its first definition. The most noticeable change is the shift from single-word decoding in the earlier definition to a definition that focuses on difficulties with accurate word recognition and decoding. The current definition also recognises poor spelling and lack of reading fluency as elements of dyslexia.
A further addition in the current definition is the requirement that children have been provided with appropriate and effective classroom instruction. The definition also presents phonological difficulties as the causal model used to guide assessment. The 2003 NICHD definition has since been adopted by the International Dyslexia Association.
The history of dyslexia in Canada is similar to that of the USA (Klassen, 2002). In both countries, the operational definition of dyslexia is set by individual states and provinces, and the definitions and services used to determine access to support vary among them. These inconsistencies, even within a single country, have added to the confusion surrounding dyslexia (Shaw et al., 1995). There are likewise variations in the definition of learning disability within Canada, but one distinctive element of the Canadian system is that there is a nationally agreed definition, formulated by the Learning Disabilities Association of Canada (2002):
Learning disabilities refer to a number of disorders that affect the acquisition, organization, retention, understanding or use of verbal or nonverbal information. These disorders affect people who otherwise demonstrate at least average abilities essential for thinking and/or reasoning. Learning disabilities are therefore distinct from intellectual deficiency.
While that definition addresses learning disabilities in general, there is also a specific definition of dyslexia adopted and used by the Canadian government, taken from the British Columbia Health Guide (date unknown):
Dyslexia is defined as difficulty with the alphabet, reading, writing and spelling in spite of normal or average intelligence, conventional teaching, and adequate socio-cultural opportunity. Dyslexia can be genetic and hereditary. It is not caused by poor vision. Dyslexia can be identified through psychological and educational tests that assess language and other academic abilities, IQ and problem-solving skills, and it is only diagnosed if the reading disability is not the product of other conditions.
The Canadian Dyslexia Association offers a somewhat different definition, stating that:
Dyslexia results from differences in brain organization. It can cause problems with reading, writing, spelling and speaking, irrespective of average or superior intelligence and of conventional reading instruction and sociocultural opportunities. The association states that the biological basis for dyslexia is hereditary.
The term "dyslexia" was initially avoided in British education because preference was given to "specific learning difficulties". However, the term has gained currency in everyday use and has recently been included in key British policy documents (Department of Education and Skills, 2001, 2004). The department worked closely with the British Psychological Society on a report intended to clarify dyslexia within an educational context (British Psychological Society, 1999). The report stressed the importance of defining dyslexia descriptively, without explanatory elements. A working definition was presented as a starting point, separate from rationales and research initiatives. This working definition remains the British Psychological Society's current definition of dyslexia:
Dyslexia is said to be present when accurate and fluent word reading and/or spelling develops incompletely or with great difficulty. The focus is on literacy at the "word level" and on the notion that the problem is severe and persistent despite the learning opportunities offered to the person. It provides the basis for a staged process of assessment through teaching.
If the records of Australia and other English-speaking countries were traced, it would be seen that their definitions also vary slightly from those presented here, but constraints of time and space mean that these countries will not be covered individually. This lack of consensus on what dyslexia really means is troubling, because it gives researchers on the subject no clear track for understanding the topic of discussion.
There are, however, recurring elements across the definitions: dyslexia is identified only when there is no other explanation for the cognitive and intellectual difficulties present in an individual; the focus is on the recognition of words; it is not a visual impairment, since any visual disorder would be attributed to sight problems; and it can be genetic and hereditary.
Drawing on these elements, this paper defines dyslexia as the lack of intellectual and cognitive ability to process single words, irrespective of the learning and sociocultural opportunities offered to the affected individual. It can be genetic or hereditary, and it is identified only when there is no other identifiable cause of the individual's inability to process single words. It is focused mainly on the basics of word processing.
2.13.2 CAUSES AND CHARACTERISTICS OF DYSLEXIA
Just as associations and countries vary in their definitions of dyslexia, there is no agreement on its causes and characteristics. The one point of agreement is that dyslexia is an unexpected difficulty in learning to read, where reading is defined as the process of extracting information from written text and using it to construct meaning (Vellutino et al., 2004). While this is one characteristic that individuals with dyslexia may display, the literature reports other possible characteristics that can serve as indications of dyslexia. These include, but are not limited to, difficulties with:
1.        Formation of letters;
2.        Meaning of letters;
3.        Association of sounds (phonetics) with symbols (grapheme);
4.        Writing letter of the alphabet in its proper order;
5.        Spelling and writing;
6.        Finding a word in the dictionary;
7.        Following instructions;
8.        Expression opinions and ideas in writing;
9.        Distinguishing left from right, east from west;
10.    Telling time, days of the week, months of the year;
11.    Short term or working memory;
12.    Inconsistent performance and grades;
13.    Lack of organization;
14.    Task automatisation; and
15.    Balance (Davis and Braun, 1994; British Psychological Society, 1999; Bright Solutions for Dyslexia, date unknown).
It is important to note that these characteristics vary among individuals, and no single individual will have problems with all of them (that is, the problems will be specific to one or a combination of these issues, but not all of them at once). Equally, individuals who have problems with one or more of these issues are not necessarily dyslexic, because dyslexia is identified only when there is no other explanation for the cause of these issues.
The precise causes of dyslexia, which give rise to the problems stated above, are still not entirely clear. This literature review presents three main deficit theories that might explain the identified characteristics of dyslexia: 1) the phonological theory (Ramus et al., 2003; Lyon et al., 2003; Shaywitz et al., 1999; Blomert et al., 2004; Padget, 1998; Frith, 1997), the most researched and developed theory of recent decades; 2) the cerebellar theory (Ramus et al., 2003; Nicolson et al., 2001); and 3) the magnocellular (auditory and visual) theory (Ramus et al., 2003; Blomert et al., 2004; Heiervang et al., 2002; Pammer & Vidyasagar, 2005; Stein, 2001). Numerous versions of each theory have appeared in the literature over the past decades; this paper presents the most current and prominent version of each.
1) The phonological theory – this theory is concerned with verbal sounds and states that dyslexic people experience difficulties with representing, storing and/or retrieving speech sounds. According to this theory, the difficulty a dyslexic reader experiences in learning to read stems from a deficit in learning an alphabetic system, which requires mastering grapheme-phoneme correspondences. That is, there is an impairment in the individual's ability to relate written letters to their speech sounds, implying a direct link between the cognitive deficit and the reading difficulty.
This theory is supported by evidence that dyslexic readers perform very poorly on tasks requiring phonological competence. There is also evidence that dyslexic readers have poor verbal short-term memory and are slow at retrieving the meanings of words, both of which reflect a basic phonological deficit (Snowling, 2000; Ramus et al., 2003). At the neurological level, anatomical work and brain imaging point to a dysfunction in the left hemisphere that is regarded as a basic sign of the phonological deficit (Lyon et al., 2003; Temple et al., 2001; Marshall, 2003; Frith, 1997). Although the evidence supports the phonological theory, a quote from Frith (1997) summarises its current status: "the exact nature of the phonological deficit is still hugely elusive."
2) The cerebellar theory – this theory states that the cerebellum of dyslexic people is mildly dysfunctional, giving rise to a number of difficulties such as problems with balance, motor skills, phonological skill and rapid processing (Nicolson et al., 2001; Ramus et al., 2003; Fawcett, 2001). Some of these skills are not language-based, and the phonological theory cannot explain all the issues associated with dyslexia.
While problems with motor skills and automatisation point to the cerebellum, this account has been questioned in dyslexia because the cerebellum was not thought to be linked to language; recall that dyslexia is identified when there is no other explanation for the cause of the problem, whereas here a cause (the cerebellum) is proposed. Nevertheless, modern studies provide evidence that the cerebellum is involved in both language and cognitive skills, including reading (Fulbright et al., 1999). Support for this theory comes from evidence of poor performance by dyslexics on motor, time-estimation and balance tasks (Fawcett et al., 1996; Fawcett & Nicolson, 1999), and from brain imaging showing anatomical, metabolic and activation differences in the cerebellum of dyslexics (Brown et al., 2001; Ramus et al., 2003).
3) The magnocellular (auditory and visual) theory – visual and auditory disorders were previously treated separately, but there is now evidence that they result from a common magnocellular dysfunction (Stein and Walsh, 1997; Ramus et al., 2003; Tallal et al., 1998). This theory holds that the causes lie in the perception of short or rapidly varying sounds, or in difficulty processing the letters and words on a page of text. It does not exclude the possibility of a phonological deficit, but instead stresses the contribution of the visual and auditory systems to reading problems.
Evidence supporting this theory includes differences in the anatomy of dyslexic readers' visual and auditory magnocellular pathways (Stein, 2001) and the co-existence of visual and auditory problems in certain dyslexics (van Ingelghem et al., 2001).
Summarising these theories: the phonological theory explains the difficulties faced by dyslexic people in terms of linking sounds with symbols in reading and spelling; the cerebellar theory attributes the central processing problem to learning and automaticity; and the magnocellular theory holds that the problems a dyslexic reader displays result from visual and auditory deficits.
Each theory also has weaknesses. The phonological theory does not clearly explain the occurrence of sensory or motor disorders in a significant proportion of dyslexic people; the magnocellular theory does not clearly explain the absence of sensory and motor disorders in many dyslexic people; and the cerebellar theory attempts to combine both kinds of problem even though they do not co-occur in all cases.
Recent studies have produced findings that move beyond deficit theories towards what is known as the transactional theory of dyslexia. The transactional theory draws on cognitive (Anderson, 2003), socio-cultural (Gee, 2001) and instructionally focused learning perspectives (Clay, 2001). It postulates that the ability to read is not inherent in the reader but varies with the complex social contexts and events in which reading occurs. On this view, understanding the natural differences between readers is more important and more productive than the common diagnostic categories (McEneaney et al., 2006).
Advances in anatomical and brain-imaging research have also led to recognition, though not universally, that dyslexia is a neurological disorder with a possible genetic origin, since it often runs in families (Ramus et al., 2003; Lyon et al., 2003). Some researchers believe they have identified a gene responsible for dyslexia and, since the gene is dominant, that dyslexia is an inheritable condition (Cardon et al., 1994; Grigorenko et al., 1997). However, more recent studies have found no evidence that the identified gene is associated with or linked to dyslexia (Field & Kaplan, 1998). The possibility of a genetic basis for dyslexia (if there is one) is therefore still strongly debated and continues to be a focus of current research.
Researchers also agree that brain-imaging studies have demonstrated differences in the anatomy, organization and functioning of a dyslexic person's brain, but it is still unknown whether these differences are causes or effects of the reading difficulties (Lyon et al., 2003; Brown et al., 2001; Stein, 2001). There are also numerous reports that dyslexia is more common in males than in females, with ratios ranging from 1.5:1 to 4.5:1 depending on the study (Wadsworth et al., 1992; Shaywitz et al., 1990; Ansara et al., 1981; Miles et al., 1998); however, it is not clear whether this reflects selection factors and/or bias. Unless new research examines the genders on an equal footing, this paper will maintain the position that dyslexia occurs in men and women at an equal rate.
Over recent decades researchers have made significant advances on the possible causes of dyslexia, and a neurological basis for the disability is now recognised. Unfortunately, there is still no agreement on its exact causes. There is broad agreement that the problems underlying dyslexia can be linked to phonology, but it is increasingly clear that phonology is not the only problem. For this paper, therefore, the three theories of phonology, the cerebellum and the magnocellular system will be adopted as the causes of dyslexia.
2.14 ASSISTIVE TECHNOLOGY: WHAT IS IT ALL ABOUT?
As the name implies, the definition follows directly from the word "assistive", which means to help or support somebody or something. In the context of this study, assistive technology is defined as any technology that can be used to support people with dyslexia. Such technology includes hearing aids, visual aids, sound aids and a host of others. This paper, however, focuses on adopting sonification as an assistive technology to help dyslexic students.
As stated earlier, it is important to help dyslexic students because they are capable of contributing to the economic development of a nation; yet studies have found that dyslexic students are more likely to drop out in their first year of study, and less likely to complete their courses, as a result of frustration with their difficulty in reading and processing words (Richardson and Wydell, 2003).
Assistive Technology (AT) is used to help people with many forms of disability, from cognitive difficulties to physical impairment. Assistive technology for children with learning disabilities is defined as any device, piece of equipment or system that helps an individual circumvent, work around or compensate for specific learning deficits. In general, assistive technology compensates for a student's skill deficits or area of disability; a student might, for example, use corrective reading software or listen to audio books. Research has shown that assistive technology can improve certain skill deficits, such as reading and spelling (Raskind and Higgins, 1999; Higgins and Raskind, 2000). It is extremely helpful for dyslexic people because it gives them access to reading materials that they would otherwise find troublesome or impossible to read, and it supports learners and adults with dyslexia in many settings, such as at home, at school and on the job. This research explains the technologies that support disabled students in their learning process, as well as the processes behind sonification and its relevant uses.
Assistive technology is any technology used by individuals with disabilities to perform functions that would otherwise be difficult or impossible. It includes mobility devices such as walkers and wheelchairs, as well as hardware, software and peripherals that help people with disabilities access computers or other information technologies. For example, individuals with restricted hand function can use a keyboard with large keys or an alternative mouse to operate a computer; blind people can use software that reads on-screen text aloud in a computer-generated voice; people with low vision can use software that enlarges words on the screen; deaf people can use a TTY (text telephone); and individuals with speech impairments can use a device that speaks aloud as they type on a keyboard (Boyle et al., 2005).
2.15 USABILITY EVALUATION OF SONIFICATION
Although sonification has been associated with numerous positive effects on user performance, as discussed in this paper, its limitations raise questions about the extent of its applicability and the outcomes of such applications. It is therefore important to evaluate the usability of sonification as a program for improving the academic performance of students with dyslexia. For the purposes of this evaluation, usability is defined as the extent to which a product can be used by an individual to achieve specific goals with a certain level of effectiveness, efficiency and satisfaction under specified conditions. Five elements are commonly used to evaluate the usability of a program, and these will be used to evaluate the usability of sonification. They are described below.
2.15.1 LEARNABILITY
For an application to be considered usable, it should allow new users to start using it easily. Sonification meets this requirement because it involves mapping information into acoustic form so that the user can decode complex information easily. Little skill or training is required; in the simplest case, usability requires no more than turning on the system and listening to the resulting sounds. It can therefore be stated that sonification is easy to begin using.
2.15.2 EFFICIENCY
The second element is the ability of the system to increase the user's performance compared with similar existing applications. This is an area that has drawn much research attention, as researchers have yet to prove that the adoption of acoustic sounds improves performance: while music is pleasing to the ear, it is unclear whether the information contained in the sound can reliably be decoded into simpler messages and thereby improve individual performance, and training and individual differences have been cited as reasons for this uncertainty. Sonification can therefore be said to be capable of improving a user's performance, but the level of improvement varies between users. Researchers (L. M. Brown et al., 2003) have nevertheless argued that this should not be a stumbling block to the development and adoption of sonified systems, as the associated benefits outweigh the potential limitations.
2.15.3 MEMORABILITY
This factor holds that an application should be easy to memorise and recall, allowing users who have used it previously to return to it without difficulty. This is a clear characteristic of sonification, because sounds are easy to memorise and recall and allow users to reuse the system without problems. Take the beep of a microwave, for instance: the beep indicates that the microwave has finished, and this is easy for users to memorise, recall and reuse.
2.15.4 ERRORS
This element holds that an application should be free from errors, especially catastrophic ones. This case is somewhat nuanced, in the sense that errors originate with the designer rather than the program, since the program simply performs whatever it has been designed to do. On that basis, sonification can be said to fulfil this attribute: if it is properly programmed to perform a function, it will be capable of doing so without error.
2.15.5 SATISFACTION
The users of a program should be satisfied with the application and enjoy using it. This is not entirely predictable, because the level of satisfaction and enjoyment obtained from a program depends on the user. Under normal conditions, however, sonification is capable of providing a positive user experience that yields satisfaction and enjoyment, because it involves transforming information into sound, and music has long been described as a source of human pleasure. Sonification can therefore increase this pleasure by helping the individual understand complex messages in a simple and engaging way.
Based on the elements discussed above, sonification can be considered usable and capable of yielding a high level of user satisfaction: sonified programs are easy to use, efficient, memorable, free of errors and satisfying. Sonification can therefore be recommended as an appropriate tool for assisting dyslexic students in order to improve their academic performance and self-esteem.
2.16 FRAMEWORK ON SONIFICATION CONCEPT IN ASSISTIVE TECHNOLOGIES FOR DYSLEXIA STUDENTS
From the discussion above, it can be seen that dyslexia is a serious condition in that it limits students' ability to undertake their education in a normal setting: they may be unable to read, write, speak or listen, or may have some combination of these difficulties. Given the differences that exist between individual cases of dyslexia, the framework for designing the sonification concept as an assistive technology for such people is illustrated below.
Figure 2.4: Framework for designing sonification as an assistive technology
As the figure shows, the framework is centred on three phases. First, the designer should investigate, through research, the form of dyslexia experienced by the student and base the design on the findings of that investigation. Once the design is completed, the second phase is to test the product and amend it where necessary to correct any shortfalls against the design requirements; the test should be conducted with the same subjects who will use the device. Once the second phase is complete, the device can be implemented and adopted for use.
2.17 KEY FINDINGS FROM LITERATURE REVIEW
Numerous findings emerge from the above analysis, and all of them are significant for this research. They include the finding that dyslexia is a serious disability affecting millions of people across the globe, and that it should be treated seriously because it reduces the likelihood of affected people completing their studies and contributing to the economic development of their nation.
It was also noted that developing technologies to assist these students is important because such technologies decode complex messages into a simple, easy-to-understand format. The effectiveness of this approach was nonetheless questioned, given that sonification is about sound while dyslexic students have problems with reading and writing: how can someone who struggles to read and write understand the message being communicated? This was shown to be feasible because language is learned culturally, and people learn to speak and listen from early childhood.
2.18 SUMMARY OF CHAPTER 2
In this chapter, a concise literature review has been presented on the three elements (sonification, dyslexia and assistive technology) contained in the topic of discussion. The presentation has been in-depth and broad, and it can be concluded that dyslexia is a form of disability affecting millions of people across the world, limiting individuals' ability to process information, read and write. It was also found that helping these students could potentially increase the economic performance of a state, and sonification was deemed an appropriate approach to doing so. This is because sonification adopts acoustic designs that are naturally pleasing to humans as a means of conveying information, and since the process relies on the auditory system, it allows users to multi-task in ways that are not usually possible with visual displays. This chapter has thus met the objectives of this paper by presenting clear definitions of, and an approach to, sonification as an assistive technology for dyslexic students.
CHAPTER 3
RESEARCH METHODOLOGY
3.1. CHAPTER INTRODUCTION
It should be reiterated that the previous chapter defined the theories relevant to the topic of discussion, and these theories will be used in the research process. This chapter remains focused on the research objective, which is to prepare a guideline and framework on the sonification concept as an assistive technology for students with dyslexia, as a useful aid for people with dyslexia in overcoming their learning disabilities. This reflects the research topic and provides the background from which the primary research methodology is developed.
3.2 OBJECTIVES OF EVALUATION
The objective of this paper is to test the various approaches to designing sonification as an assistive technology for students with dyslexia and to identify the best approach for helping these students. The evaluation will therefore centre on applying the framework designed earlier: understanding the need, designing and amending the design to ensure it meets that need, and then adopting the design for this purpose. Value will be measured by how well the sonification concept aids different forms of dyslexic disability.
3.3 INDEPENDENT AND DEPENDENT VARIABLES
For this paper, the independent variable is defined as the factor that is measured, manipulated or selected by the experimenter in order to understand its relation to the phenomenon observed in the experiment. The dependent variable is the factor that is observed and measured to determine the effect of the independent variable.
As the figure (Figure 4) above illustrates, the sonification concept is the independent variable and dyslexia is the dependent variable. Sonification is independent because it is not influenced by any change in dyslexia, whereas dyslexia can be influenced by changes in sonification (comparing those who use it with those who do not). At this stage it is not known whether sonification can actually improve academic performance in people with dyslexia, hence the Yes, No and Maybe attributes in relation to dyslexia being reduced.
3.4 SUBJECTS
A total of twenty students (10 dyslexic students and 10 non-dyslexic students) were selected to take part in the test. The severity of dyslexia in the dyslexic students is unknown; their dyslexia was established from their own statements and those of their classmates. A common feature of all these individuals is that they experience difficulty in processing individual words, reading and writing, and this was also used to classify them as dyslexic.
The main aim of the investigation is to understand the extent to which sonification can improve these students' academic performance by improving their understanding, assimilation and memorisation of a topic or message. To this end, the students were given the same test, in the same environment and with the same materials, as discussed below.
The test is based on 10 tasks designed to assess the participants' identification abilities. These tasks are listed and discussed below.
3.4.1 Matching Task – this task is designed to test the participant's ability to match sonified elements to their real-world counterparts: for instance, matching the sound of a drum beating to a drum drawn on the task table, or identifying whether a sound is "high" or "low" in terms of volume.
3.4.2 Comparison Task – this task is similar to the matching task but is designed to test the participants' sense of comparison. The participants are played two sounds (e.g. the sound of a car and the sound of a trumpet) and asked to compare them by identifying each sound in relation to its real-life source.
3.4.3 Classification Task – the classification task is designed to assess how participants relate a sound to an event that has taken place, is taking place or will take place in the near future: for instance, the sound of an alarm denoting danger or a notification.
3.4.4 Ordering (Sorting) Task – this task tests the participants' ability to put sounds, or the events they relate to, in order. For instance, in a manufacturing plant the mixing sound comes before the processing sound, which in turn comes before the packing, capping and transportation sounds.
3.4.5 Association Task – the association task tests the participants' ability to associate related events: for instance, the sound of cars speeding past can be associated with rush hour, while the sound of a bell can be associated with the end of break at school or an alarm in a hotel.
3.4.6 Prediction Task – this task tests the participants' ability to predict the next event from the current one. For instance, the end of the music at a party could signal the end of the party or a pause for an announcement.
3.4.7 Finding Task – the participants are also tested on their ability to identify missing sounds in a group of related sounds. At a music concert, hearing all the instruments except the drums means the drums are missing, and this task tests the participants' ability to recognise that no drum is being played.
3.4.8 Memorization Task – there are natural sounds we have long been exposed to, which are easy to remember and apply in daily life, and there are sounds heard for the first time that we can nevertheless learn to associate with events. This section tests the participants' ability to memorise new sounds to which they have been exposed.
3.4.9 Navigation Task – in a given activity, the current event can be a clear indication of the next one. The same is true of sounds: one sound can provide a clue to what will happen next. For instance, the sound of thunder is a clue that the sound of rainfall may follow. This section tests the participants' ability to relate sounds to what will happen next.
3.4.10 Identification Task – finally, the last test concerns sound identification. While this seems easy, the sounds of related items can be quite difficult to tell apart, so this is a good test of cognitive processing ability. This task tests the participants' ability to identify events and happenings in the everyday environment.
3.5 PARTICIPANT
The participants were grouped in the following order:
3.5.1 Participant group A – group "A" comprises non-dyslexic students. Normality in this sense is based on the criterion that they have no impairment that might limit cognitive processing abilities such as reading, writing, memorising and applying educational terms.
3.5.2 Participant group B – group "B" comprises dyslexic students. The term dyslexic is used for these students on the criterion that they currently have dyslexia and show signs of cognitive impairment such as difficulty processing single words, reading and writing. This group is made up of 11 dyslexic students with different levels of cognitive processing disability.
3.6 STIMULI
The stimuli for this analysis target cognitive processing abilities. To understand the contribution of sonification to students' academic competence, the participants were given a series of sound tests on the computer. Prior to the computer-based sound test, the participants were given a mini lecture and asked to answer questions in writing. The idea was to establish whether dyslexia is a product of reading and writing impairment, and to measure accurately the level of improvement sonification can bring to dyslexic students.
3.7 EXPERIMENTAL DESIGN
The experiment is designed to test the participants in 10 different areas of cognitive processing, as discussed above. All of the questions test the students' cognitive ability in areas such as information processing, matching, memorising, assimilation, thinking and recall. The experiment is computer-based and built with the Microsoft PowerPoint presentation tool. PowerPoint was chosen because it presents visuals clearly and is easy to use, as all participants are expected to have used it at some point. Navigation is also easy: the participant simply clicks the left arrow, enter or next button. The tasks are discussed and illustrated below.
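The stimuli themselves were embedded in the PowerPoint slides. Purely as an illustration of how comparable non-speech stimuli (a single high or low note, and ascending or descending sequences) might be generated programmatically, a minimal Python sketch is given below; the helper names make_tone, make_sequence and save, the chosen frequencies and the file names are assumptions of this sketch rather than a record of the actual materials.

import math
import struct
import wave

SAMPLE_RATE = 44100

def make_tone(freq_hz, seconds=0.5, amplitude=0.4):
    # One sine tone rendered as 16-bit mono PCM frames.
    n = int(seconds * SAMPLE_RATE)
    return b"".join(
        struct.pack("<h", int(amplitude * 32767 *
                              math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)))
        for i in range(n))

def make_sequence(freqs_hz, seconds_each=0.3):
    # Concatenate tones, e.g. an ascending scale or its reverse.
    return b"".join(make_tone(f, seconds_each) for f in freqs_hz)

def save(frames, path):
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(frames)

save(make_tone(880), "high_note.wav")                       # "High (1 note)"
save(make_tone(220), "low_note.wav")                        # "Low (1 note)"
save(make_sequence([262, 330, 392, 523]), "ascending.wav")  # ascending sequence
save(make_sequence([523, 392, 330, 262]), "descending.wav") # descending sequence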
3.7.1 SECTION A: MATCHING TASK
The purpose of this task is to investigate whether the respondents are able to match the answer options to the non-speech sound played.
Question 1
(a) High (1 note)
(b) Low (1 note)
Question 2
(a) Ascending (Sequence - High)
(b) Descending (Sequence - Low)
Question 3
(a) Train
(b) Car
3.7.2 SECTION B: COMPARISON TASK
The purpose of this task is to test whether the respondents are able to differentiate the non-speech sounds played. The sounds presented vary in pitch and volume.
Question 4 – Which note is HIGH?
(a) High (1 note)
(b) Low (1 note)
Question 5– Which is ASCENDING order?
(a) Ascending (Sequence - High)
(b) Descending (Sequence - Low)
Question 6 – Which sound is LOUD?

(a) Loud
(b) Slow
3.7.3 SECTION C: CLASSIFICATION TASK
The purpose of this task is to test whether the respondents can categorise the non-speech sounds played into different categories.
Question 7 – Which 2 notes are LOW?
(a) High (1 note)
(b) Low (1 note)
(c) Low (1 note)
(d) High (1 note)
Question 9 – Which 2 notes are ASCENDING order?
(a) Decreasing (Sequence - Low)
(b) Ascending (Sequence - High)
(c) Ascending (Sequence - High)
(d) Decreasing (Sequence - Low)
3.7.4 SECTION D: ORDERING TASK
The purpose of this task is to test whether the respondents are able to differentiate the sequence of the non-speech sounds played.
Question 10 – Which is ASCENDING order?
(a) Ascending (2 notes)
(b) Descending (2 notes)
Question 11 – Which is DESCENDING order?
(a) Descending (Sequence)
(b) Ascending (Sequence)

Question 12 – Which is ASCENDING order?
(a) Loud to Slow (Near to Far)
(b) Slow to Loud (Far to Near)
3.7.5 SECTION E: ASSOCIATION TASK
The purpose of this task is to test whether the respondents can relate the non-speech sound played to a graph or image.
Question 13 – Low (1 note)
(a) Point up (Graph)
(b) Point down (Graph)
Question 14 – Ascending (Sequence)
(a) Going up (Escalator)
(b) Going down (Escalator)
Question 15 – Raining
(a) Umbrella
(b) Cap
3.7.6 SECTION F: PREDICTION TASK
(No specific answer – depends on the respondent)
The purpose of this task is to find out which sound the respondents will prefer or expect. The choices given to the respondents differ in pitch.
Question 16 – Descending (2 notes)
(a) Low (1 note)
(b) High (1 note)
Question 17 – Ascending (Sequence)
(a) Ascending (2 notes)
(b) Descending (2 notes)
Question 18 – Descending (Random)
(a) Descending (Random)
(b) Ascending (Random)
3.7.7 SECTION G: FINDING TASK
The purpose of this task is to investigate whether the respondents are able to find the hidden pattern in the non-speech sound played.
Question 19 – 4 notes
(a) 2 notes
(b) 2 notes
Question 20 – 8 notes
(a) 4 notes
(b) 4 notes
Question 21 – 9 notes
(a) 6 notes
(b) 6 notes
3.7.8 SECTION H: MEMORIZATION TASK
The purpose of this task is to measure the respondents' ability to memorise the non-speech sound played. The retention time is varied for each question in this task.
Question 22 – Low (1 note) 30 second
(a) Low
(b) High
Question 23 – Ascending (Sequence) 60 second
(a) Descending
(b) Ascending
Question 24 – Door Bell i 90 second
(a) Door bell ii
(b) Door bell i
3.7.9 SECTION I: NAVIGATION TASK
The purpose of this task is to test whether the respondents are able to recognise the direction of the non-speech sound played, from left to right or vice versa.
Question 25 – Select which speaker the sound starts. (Left to Right - 1 note)
(a) Left
(b) Right
Question 26 – Select which speaker the sound stops. (Left to Right - 4 notes) Random Descending
(a) Left
(b) Right
Question 27 – Select which speaker the sound stops. (Right to Left)
(a) Left
(b) Right
3.7.10 SECTION J: IDENTIFICATION TASK
The purpose of this task is to test the respondents' ability to identify different pitches, objects and so on from the non-speech sound played.
Question 28 – What is the pattern of the notes?
(a) High, Low, High
(b) Low, High, Low
Question 29 – How many sounds are there? (Drum, Guitar, Piano, Guitar, Piano, Trumpet, Drum - 12 notes)
(a) 3
(b) 4
Question 30 – Pick the animals that you hear in the sound. (Forest/Jungle)
(a) Owl
(b) Cat
(c) Cricket
(d) Wolf
3.8 MATERIAL USED AND EXPERIMENTAL PROCEDURE
3.8.1 Materials – the material used for this study is a computer. The students were required to wear earphones so as not to distract each other, and they answered each question by clicking directly on the boxes presented with it.
3.8.2 Procedure – the participants answered each question following the procedure above. Once a question was completed, the participant clicked next to see the following question, and the process continued until all the questions were answered. All attempted questions are graded automatically, and the participants can see their score for each question.
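As a rough illustration of the automatic grading step described above, a scoring sketch in Python might look as follows; the answer key shown is a hypothetical subset of the questions, and Section F, which has no fixed correct answer, is omitted.

ANSWER_KEY = {   # hypothetical subset: question -> correct option
    "Q4": "a", "Q5": "a", "Q6": "a",
    "Q10": "a", "Q11": "a", "Q25": "a",
}

def grade(responses):
    # Score one participant: 1 mark per question answered correctly.
    marks = {q: int(responses.get(q) == correct)
             for q, correct in ANSWER_KEY.items()}
    return marks, sum(marks.values())

marks, total = grade({"Q4": "a", "Q5": "b", "Q6": "a"})
print(marks, total)   # the per-question scores shown back to the participant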
3.9 SUMMARY OF CHAPTER 3
In this chapter, an experimental study of selected students with and without dyslexia was described. The tasks were designed to test the participants across a range of cognitive processing abilities and to establish how well they can decode messages through sonification. The design also gave them insight into the material before undertaking the sonification task. The aim is to establish whether dyslexia is a result of visual, writing and reading impairment, as suggested in the literature review.
3.10 RESULT ANALYSIS
At an observational level, both groups of participants appeared comfortable with computer basics; that is, they understood the actions needed to move to the next page of the test after answering the questions on a given page. This is important because their cognitive processing ability should not be affected in any way by unfamiliarity with the computer. Earphones were provided for all students to ensure that the sounds from their computers did not disturb others and that their answers were not influenced by those of other students. In general, the experiment went as planned. The empirical findings are analysed in the next chapter.
CHAPTER 4
RESULTS AND DISCUSSION
4.1. CHAPTER INTRODUCTION
In this chapter, the performance of both groups of participants on the 10 tasks is reviewed. The review is based on independent-samples t-test analysis. Each task is analysed by comparing the performance of dyslexic students against that of non-dyslexic students in order to identify the specific areas in which the dyslexic students are impaired.
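Each of the tables below follows the standard SPSS independent-samples output: Levene's test determines whether equal variances may be assumed, and the Sig. (2-tailed) value of the t-test in the appropriate row then tests the group difference. A small sketch of the same computation is given below, assuming scipy is available; the scores shown are placeholders for illustration only, not the study's data.

from scipy import stats

# Placeholder scores for one task (proportion correct per participant).
normal_scores   = [0.9, 0.8, 1.0, 0.9, 0.7, 0.8, 0.9, 1.0, 0.8, 0.9]
dyslexic_scores = [0.8, 0.7, 0.9, 0.8, 0.6, 0.7, 0.9, 0.8, 0.7, 0.8]

# Levene's test: Sig. > .05 means equal variances may be assumed.
levene_stat, levene_p = stats.levene(normal_scores, dyslexic_scores)
equal_var = levene_p > 0.05

# Independent-samples t-test; its Sig. (2-tailed) tests the group difference.
t_stat, p_two_tailed = stats.ttest_ind(normal_scores, dyslexic_scores,
                                       equal_var=equal_var)
print(f"Levene Sig. = {levene_p:.3f}, t = {t_stat:.3f}, "
      f"Sig. (2-tailed) = {p_two_tailed:.3f}")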
4.3 T-TEST ANALYSIS
4.3.1 MATCHING TEST
Independent Samples Test (MatchTask)
Levene's Test for Equality of Variances: F = 3.247, Sig. = .077
Equal variances assumed: t = 1.592, df = 58, Sig. (2-tailed) = .117, Mean Difference = .06667, Std. Error Difference = .04188, 95% CI of the Difference [-.01717, .15050]
Equal variances not assumed: t = 1.592, df = 56.795, Sig. (2-tailed) = .117, Mean Difference = .06667, Std. Error Difference = .04188, 95% CI of the Difference [-.01720, .15054]
Interpretation
In Levene's Test for Equality of Variances the significance value obtained is .077, which is above .05, so equal variances can be assumed. The corresponding t-test Sig. (2-tailed) value is .117, which is also above .05, indicating that there is no significant difference between the non-dyslexic and dyslexic students on the matching task. This is encouraging because it implies that dyslexic students are able to match events to the sounds played; for example, they will understand that somebody is travelling on hearing the sound of an aeroplane or a moving vehicle.
4.3.2 COMPARISON TASK
Independent Samples Test (CompTask)
Levene's Test for Equality of Variances: F = 15.448, Sig. = .000
Equal variances assumed: t = 2.408, df = 58, Sig. (2-tailed) = .019, Mean Difference = .14444, Std. Error Difference = .05998, 95% CI of the Difference [.02439, .26450]
Equal variances not assumed: t = 2.408, df = 48.542, Sig. (2-tailed) = .020, Mean Difference = .14444, Std. Error Difference = .05998, 95% CI of the Difference [.02389, .26500]
Interpretation
For the comparison task, Levene's test gives a significance of .000, so equal variances cannot be assumed and the "equal variances not assumed" row is read: its t-test Sig. (2-tailed) value is .020, which is below .05. This is a concern for the application of sonification as an assistive technology for students with dyslexia, because it indicates a significant difference between the groups: the dyslexic students were less able to compare non-speech sounds, for example to distinguish a low tone from a high tone. The implication is that dyslexia may involve an auditory processing component that affects students' ability to judge the degree of non-speech sounds they are exposed to in daily life. For instance, if a loud alarm sounds as a warning, a dyslexic student might not process the sound as a warning, and this could affect their safety in dangerous conditions.
4.3.3 CLASSIFICATION TASK
Independent Samples Test (ClassTask)
Levene's Test for Equality of Variances: F = .249, Sig. = .619
Equal variances assumed: t = -.324, df = 58, Sig. (2-tailed) = .747, Mean Difference = -.06667, Std. Error Difference = .20596, 95% CI of the Difference [-.47893, .34560]
Equal variances not assumed: t = -.324, df = 57.939, Sig. (2-tailed) = .747, Mean Difference = -.06667, Std. Error Difference = .20596, 95% CI of the Difference [-.47894, .34561]
Interpretation 
With a Levene's significance of .619, equal variances can be assumed, and the t-test Sig. (2-tailed) value of .747 shows that there is no significant difference between the two groups of participants in classifying sounds. This implies that dyslexic students are capable of differentiating, say, a bird from a horse by associating their non-speech sounds with the correct class. This is encouraging because it means an event can be rendered as non-speech sounds that dyslexic students can classify and use to derive meaning from complicated texts.
4.3.4 ORDERING TASK
Independent Samples Test (OrderTask)
Levene's Test for Equality of Variances: F = .018, Sig. = .893
Equal variances assumed: t = 2.193, df = 58, Sig. (2-tailed) = .032, Mean Difference = .13333, Std. Error Difference = .06079, 95% CI of the Difference [.01165, .25501]
Equal variances not assumed: t = 2.193, df = 57.724, Sig. (2-tailed) = .032, Mean Difference = .13333, Std. Error Difference = .06079, 95% CI of the Difference [.01164, .25503]
Interpretation
So far, the only task that has shown a difference between the two groups of respondents is the comparison task, which is worth recalling here because it is closely related to the ordering task. The ordering task involves placing events in sequence, from what is happening now to what will happen later. Levene's significance here is .893, which is well above .05, so equal variances can be assumed; the t-test Sig. (2-tailed) value, however, is .032, which is below .05 and therefore indicates a significant difference between the groups on this task as well. In other words, the dyslexic students also found it harder to put non-speech sounds in order, which is consistent with the comparison-task finding rather than offsetting it. In practical terms, alarm sounds intended to alert dyslexic students to a developing event may therefore need to be designed with particular care, for example by increasing gradually and markedly in volume.
4.3.5 ASSOCIATION TASK
Independent Samples Test (AssocTask)
Levene's Test for Equality of Variances: F = .973, Sig. = .328
Equal variances assumed: t = .592, df = 58, Sig. (2-tailed) = .556, Mean Difference = .03333, Std. Error Difference = .05632, 95% CI of the Difference [-.07940, .14606]
Equal variances not assumed: t = .592, df = 57.662, Sig. (2-tailed) = .556, Mean Difference = .03333, Std. Error Difference = .05632, 95% CI of the Difference [-.07941, .14608]
 Interpretation
The results so far are increasingly encouraging for the possibility of sonification being used as an assistive technology for students with dyslexia. This is further supported by the association task: Levene's significance is .328, so equal variances are assumed, and the t-test Sig. (2-tailed) value of .556 shows no significant difference between dyslexic and non-dyslexic students in associating non-speech sounds with an event. This implies, for example, that dyslexic students can associate the sound of a siren with someone who needs emergency medical attention or with an escort transporting money or VIPs, and can respond appropriately even though the people involved will not know that they are dyslexic. Their safety, as well as their cognitive processing ability, therefore receives a green light from sonification as an assistive technology in this area.
4.3.6 PREDICTION TASK
Independent Samples Test
                                Levene's Test (equality of variances)    t-test for Equality of Means
                                F       Sig.      t        df        Sig. (2-tailed)   Mean Diff.   Std. Error Diff.   95% CI Lower   95% CI Upper
PredictTask
  Equal variances assumed       5.993   .017     1.266     58        .210              .10000       .07897            -.05808        .25808
  Equal variances not assumed                    1.266     53.537    .211              .10000       .07897            -.05836        .25836
 Interpretation
The purpose of this task is to understand which of two sounds participants will choose over the other. Participants are played two sounds and asked to choose between them according to which they judge to be higher, lower, ascending or descending. There is no single correct answer; respondents simply choose which sound they would like to hear first. However, the value obtained (.017) is lower than 0.05, as in the comparison task, implying a significant difference between non-dyslexic and dyslexic students. The implication is that non-dyslexic students are better placed to arrange non-speech sounds in their correct order of sound level. This is disappointing with respect to the objectives of this paper, as it points clearly to sound-level difficulties in dyslexic students.
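The stimuli for such a two-alternative trial can be generated as simple frequency sweeps, one rising and one falling. The sketch below is an illustrative way of producing such a pair; the frequencies, duration and file names are assumptions, not the study's materials.

```python
# A minimal sketch of the two-alternative trial described above: one
# ascending and one descending frequency sweep. Frequencies and
# durations are illustrative only.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100
DURATION = 1.0                       # seconds per sweep

def sweep(f_start: float, f_end: float) -> np.ndarray:
    """Generate a linear frequency sweep from f_start to f_end."""
    t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
    freqs = np.linspace(f_start, f_end, t.size)
    phase = 2 * np.pi * np.cumsum(freqs) / SAMPLE_RATE   # integrate frequency over time
    return np.sin(phase)

ascending = sweep(300.0, 900.0)
descending = sweep(900.0, 300.0)

wavfile.write("ascending.wav", SAMPLE_RATE, (ascending * 32767).astype(np.int16))
wavfile.write("descending.wav", SAMPLE_RATE, (descending * 32767).astype(np.int16))
```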
4.3.7 FINDING TASK
Independent Samples Test
                                Levene's Test (equality of variances)    t-test for Equality of Means
                                F       Sig.      t        df        Sig. (2-tailed)   Mean Diff.   Std. Error Diff.   95% CI Lower   95% CI Upper
FindTask
  Equal variances assumed       1.343   .251     1.429     58        .158              .11111       .07775            -.04452        .26675
  Equal variances not assumed                    1.429     55.522    .159              .11111       .07775            -.04467        .26689
 Interpretation
As stated earlier, the objective of this test is to understand how well respondents can find a missing event within a group of events. For instance, one item asked them to notice that the "drum" was absent from a piece of music; another tested whether they noticed that a "key" tone should come before the "engine" tone when starting a car. There is no significant difference between the two groups of respondents in this area: the score of .251 is well above .05, and the t-test Sig. (2-tailed) of .158 points the same way. It can therefore be stated that dyslexic students are able to find a missing event from a group of events.
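In code terms, the finding task reduces to comparing an expected sequence of sonified events with the sequence that was actually played. The sketch below is a minimal illustration of that check; the event names are taken from the examples above and are otherwise hypothetical.

```python
# A minimal sketch of the "finding" task: given the expected sequence of
# sonified events and the sequence actually played, report what is
# missing. Event names are illustrative.
def missing_events(expected: list[str], played: list[str]) -> list[str]:
    """Return expected events that never occurred, preserving their order."""
    remaining = list(played)
    missing = []
    for event in expected:
        if event in remaining:
            remaining.remove(event)
        else:
            missing.append(event)
    return missing

if __name__ == "__main__":
    expected = ["key", "engine", "seatbelt_chime"]
    played = ["engine", "seatbelt_chime"]
    print(missing_events(expected, played))   # ['key']
```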
4.3.8 MEMORISING TASK
Independent Samples Test
                                Levene's Test (equality of variances)    t-test for Equality of Means
                                F       Sig.      t        df        Sig. (2-tailed)   Mean Diff.   Std. Error Diff.   95% CI Lower   95% CI Upper
MemoTask
  Equal variances assumed       9.560   .003     .430      58        .668              .02222       .05162            -.08111        .12556
  Equal variances not assumed                    .430      37.434    .669              .02222       .05162            -.08234        .12678
 Interpretation
With a score of .003, there is a highly significant difference between dyslexic and non-dyslexic students in the ability to memorise non-speech sounds. The task tested whether respondents could remember which non-speech sound had been played at which time: for instance, an ascending non-speech sound was played for 30 seconds and then replaced by a descending one, and respondents had to identify which sound was played first or last. Dyslexic students scored very low on this task, implying that while they can process non-speech sounds, they struggle to memorise them. This adds to the comparison difficulty identified above; since this is also a kind of comparative test, dyslexic students' difficulty in comparing non-speech sounds by volume may be attributable to their difficulty in memorising those sounds. On this reading, dyslexia can be linked to a memory-related disability.
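For concreteness, the sketch below shows one way a memorisation trial of this kind could be structured in code: two contrasting sounds are presented in a random order with a retention delay, and the respondent reports which came first. The delay, labels and playback callable are illustrative assumptions rather than the study's procedure.

```python
# A minimal sketch of a memorisation trial: two contrasting sounds are
# presented in a random order with a delay, and the respondent must say
# which was played first. Durations and labels are illustrative.
import random
import time

def run_trial(play, delay_seconds: float = 2.0) -> dict:
    """Run one trial. `play` is any callable that plays a named sound."""
    order = ["ascending", "descending"]
    random.shuffle(order)
    for name in order:
        play(name)                      # e.g. play a 30-second sweep
        time.sleep(delay_seconds)       # retention interval between sounds
    answer = input("Which sound was played first (ascending/descending)? ")
    return {"order": order, "answer": answer, "correct": answer == order[0]}

if __name__ == "__main__":
    result = run_trial(lambda name: print(f"[playing {name} sound]"))
    print(result)
```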
4.3.9 NAVIGATION TASK
Independent Samples Test
                                Levene's Test (equality of variances)    t-test for Equality of Means
                                F       Sig.      t        df        Sig. (2-tailed)   Mean Diff.   Std. Error Diff.   95% CI Lower   95% CI Upper
NaviTask
  Equal variances assumed       1.183   .281     .000      58        1.000             .00000       .06485            -.12982        .12982
  Equal variances not assumed                    .000      56.137    1.000             .00000       .06485            -.12991        .12991
Interpretation
The objective of this task is to understand whether respondents can identify the direction a non-speech sound is coming from, that is, whether they can easily notice where sounds are emanating from. With a score of .281, and a t-test Sig. (2-tailed) of 1.000, there is no significant difference between the two groups of students, which implies that dyslexic students can identify the direction of an emanating sound. While this matters in education, it is also significant in everyday life, because it means dyslexic students can identify the direction of an alarm sound and take the necessary action to move away from the place it is coming from.
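Over headphones or stereo speakers, directional cues of this kind can be approximated by panning a tone between the left and right channels. The sketch below is a minimal constant-power panning example; the frequency, duration, pan value and file name are illustrative assumptions.

```python
# A minimal sketch of conveying direction with sound: a mono tone is
# panned left or right using constant-power panning. Parameters are
# illustrative.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100

def panned_tone(pan: float, freq: float = 440.0, seconds: float = 1.0) -> np.ndarray:
    """Return a stereo signal; pan = -1.0 is hard left, +1.0 is hard right."""
    t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    mono = np.sin(2 * np.pi * freq * t)
    angle = (pan + 1.0) * np.pi / 4.0            # map [-1, 1] onto [0, pi/2]
    left, right = np.cos(angle) * mono, np.sin(angle) * mono
    return np.stack([left, right], axis=1)

# A cue that appears to come from the listener's right-hand side.
wavfile.write("cue_right.wav", SAMPLE_RATE, (panned_tone(0.8) * 32767).astype(np.int16))
```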
4.3.10 IDENTIFICATION TASK
Independent Samples Test
                                Levene's Test (equality of variances)    t-test for Equality of Means
                                F       Sig.      t        df        Sig. (2-tailed)   Mean Diff.   Std. Error Diff.   95% CI Lower   95% CI Upper
IdenTask
  Equal variances assumed       3.324   .073     .848      58        .400              .27778       .32765            -.37808        .93363
  Equal variances not assumed                    .848      54.475    .400              .27778       .32765            -.37898        .93454
Interpretation
The last task in this paper examines students' ability to identify non-speech sounds and link them to the real-world activities they represent, for instance the sound of a moving car, an airplane, a gunshot, people clapping, or laughter. Although the score of .073 is above .05, the margin is not large, which may indicate that dyslexic students experience some difficulty identifying non-speech sounds in terms of their real-life referents; for instance, they might take the sound of a "horse" to mean "car", or vice versa. This is a concern, because it could limit the application of sonification as an assistive technology, which relies on non-speech sounds to represent complex messages about the real world.
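Scoring such an identification task amounts to marking each response against an answer key of real-world labels. The sketch below illustrates that scoring step; the clip names and labels are hypothetical.

```python
# A minimal sketch of scoring the identification task: each played sound
# has a correct real-world label, and responses are marked against it.
# Clip names and labels are illustrative.
ANSWER_KEY = {
    "clip_01": "moving car",
    "clip_02": "airplane",
    "clip_03": "clapping",
    "clip_04": "horse",
}

def score(responses: dict) -> float:
    """Return the proportion of clips identified correctly."""
    correct = sum(1 for clip, label in responses.items()
                  if ANSWER_KEY.get(clip) == label)
    return correct / len(ANSWER_KEY)

if __name__ == "__main__":
    print(score({"clip_01": "moving car", "clip_02": "airplane",
                 "clip_03": "laughing", "clip_04": "car"}))   # 0.5
```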
4.4 DISCUSSION
From the data analysis above, it can be seen that there is a strong link between dyslexia and memory-related impairment. Dyslexic students found it difficult to identify the volume level of non-speech sounds (whether low or high), to differentiate between two sounds played at an interval, and to memorise sounds in relation to their real-world application. The reason behind the first two difficulties may lie in the third: if they cannot memorise the sounds, they cannot determine whether one is higher or lower than another.
However, there was no significant difference between dyslexic and non-dyslexic students on the other tasks. For instance, dyslexic students can associate non-speech sounds with missing events, navigate events through sound, and associate sounds with unfolding events. Since education largely involves teaching students what should and should not be done, it can be stated that sonification can be used as an assistive technology to improve the academic performance of dyslexic students. Dyslexic students can associate sounds with real-life events, navigate difficult tasks through non-speech sounds, match non-speech sounds to real-life events, classify those events, and place them in their order of occurrence. Sonification should therefore be a valuable assistive technology, as dyslexic students will learn more with it than they would without it.
4.5 IMPLICATION
The findings above support theories which state that sonification is capable of improving the academic performance of dyslexic students. This implies that dyslexic students should be exposed to more sonified information, because they can draw more meaning from it. Additionally, this paper supports theories that dyslexia is a product of memory-related impairment, which makes it difficult for students to read meaning into situations and so results in their difficulties with reading and writing. A sonification programme should therefore be designed to incorporate memory-related activities, such as repetition, to improve students' ability to memorise the situation at hand.
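As a final illustration of the repetition idea suggested above, the sketch below shows a simple drill loop in which a sonified item is replayed until the student recalls its meaning, up to a fixed number of attempts. The item name and the play/ask callables are placeholders for a real teaching interface, not part of the study's design.

```python
# A minimal sketch of a repetition drill: a sonified item is replayed
# until the student recalls its meaning, up to a fixed number of
# attempts. The play/ask callables are placeholders for a real interface.
def repetition_drill(item: str, play, ask, max_repeats: int = 5) -> bool:
    """Replay `item` until it is recalled correctly or attempts run out."""
    for _ in range(max_repeats):
        play(item)                 # present the sonified item again
        if ask(item):              # check whether the student recalls it
            return True
    return False

if __name__ == "__main__":
    recalled = repetition_drill(
        "fire_alarm",
        play=lambda item: print(f"[playing the sound for {item}]"),
        ask=lambda item: input("What does that sound mean? ").strip() == item,
    )
    print("Recalled" if recalled else "Needs more practice")
```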
CHAPTER 5
CONCLUSIONS
5.1. CHAPTER INTRODUCTION
With the objective of this paper – evaluating whether sonification can be used as an assistive technology for students with dyslexia – already met, this chapter reviews the contents of the paper as a whole and presents a clear conclusion that can serve as a background for future research, together with the overall findings of this paper.
5.2. IMPLICATIONS OF THIS RESEARCH
From the findings, this research was found to be significant for both theory and practice. In theory, this paper supports the notion that dyslexia is the product of an impairment that limits people's ability to process single words, read sentences and write. In practice, it was found that sonification is a useful assistive technology for helping people with dyslexia.
On the other hand, it was found that dyslexic students are capable of matching the performance of non-dyslexic students if given the right support. This was clear in question 23 of the experiment, and in most tasks more broadly, where both groups of students averaged the same score.
5.3. DIRECTIONS FOR FUTURE RESEARCH
With sonification already linked to improved academic performance in this paper, it is suggested that future research on supporting dyslexia should look into other assistive technologies, or that sonification should be examined against other forms of disability in order to understand whether it can also help other disabled people.
5.4 SUMMARY OF THE STUDY
From the beginning of this paper, it was stated that its main purpose is to study sonification as an assistive technology for students with dyslexia. In order to undertake the project, a literature review was presented which defined dyslexia as an impairment that limits students' ability to process single words and read meaning into sentences. It was also stated that dyslexia is genetic and hereditary, and that it is diagnosed only when there is no other explanation for a student's inability to process words, read and write. It also became clear that dyslexia is not cognitive inefficiency but a disability, one which some countries, such as New Zealand, have yet to recognise as a disability in their Ministries of Health and Education.
The literature review defined assistive technology as technology that can be used to support people with a disability and improve their ability to perform tasks they could not previously perform. Sonification, in turn, was defined as the process of transforming complex information into sounds that can be used to teach people and improve their ability to draw meaning from information. At that point it became clear that sonification might be the right solution for improving the academic performance of students with dyslexia; nevertheless, this paper went on to conduct primary research.
An experimental study was conducted to review the impact of sonification on the academic performance of both non-dyslexic and dyslexic students. The findings revealed a significant positive impact on both groups, as they scored relatively high, and showed that, given the right support, dyslexic students are capable of matching the performance of non-dyslexic students, as can be seen in question 23, where both groups of students averaged the same score.
REFERENCES
Aaron, P. G. (1997). The Impending Demise of the Discrepancy Formula. Review Educational Research, 67, 461-502.
Anderson, M. L. (2003). Embodied Cognition: A Field Guide. Artificial Intelligence, 149, 91-130.
Ansara, A.; Geschwind, N.; Galaburda, A. M.; Albert, M.; Gartrell, M. (1981). Sex Differences in Dyslexia. Towson, MD: Orton Dyslexia Society.
B. N. Walker and G. Kramer, "Ecological psychoacoustics and auditory displays: Hearing, grouping, and meaning making," in Ecological psychoacoustics, J. Neuhoff, Ed. New York: Academic Press, 2004, pp. 150-175.
B. N. Walker, A. Nance, and J. Lindsay, "Spearcons: Speech-based earcons improve navigation performance in auditory menus," presented at International Conference on Auditory Display, London, England, 2006.
Ballass JA (1994) Delivery of Information Through Sound. In: Kramer G (ed) Auditory Display: Sonification, Audification and Auditory Interfaces, SFI Studies in the Sciences of Complexity, Proceedings Volume XVIII. Addison-Wesley, Reading, Mass., pp 79–94
BDA (2006), “The British Dyslexia Association”, available at: www.bda-dyslexia.org.uk.
Blomert, L.; Mitterer, H.; Paffen, C. (2004). In Search of the Auditory, Phonetic, and/or Phonological Problems in Dyslexia: Context Effects in Speech Perception. Journal of Speech, Language, and Hearing Research, 47, 1030-1047.
Bovermann, T., Hermann, T., & Ritter, H. (2006). Tangible data scanning sonification model. Proceedings of the 12th International Conference on Auditory Display (ICAD06) (pp. 77-82), London, UK.
Boyle, B et al., 2005. A Model for Training in Trans-European Assistive Technology Projects, in Pruski & Knopps, Assistive Technology: From Virtuality to Reality, IOS Press, 2005, Amsterdam, pp. – 705 – 710.
Bregman, A. S. (1990). Auditory scene analysis: The perceptual organization of sound. Cambridge, MA: MIT Press.
Brewster, S. (1997). Using non-speech sound to overcome information overload. Displays, 17, 179-189.
Brewster, S., & Murray, R. (2000). Presenting dynamic information on mobile computers. Personal Technologies, 4(4), 209-212.
Bright Solutions for Dyslexia. (date unknown). Symptoms of Dyslexia, http://www.dys-add.com/symptoms.html
British Psychological Society. (1999). Dyslexia, Literacy and Psychological Assessment.
Brown, E. E.; Eliez, S.; Menon, V.; Rumsey, J. M.; White, C. D.; Reiss, A. L. (2001). Preliminary Evidence of Widespread Morphological Variations of the Brain in Dyslexia, Neurology, 56, 781-783.
Brown, J., Culkin, N. and Fletcher, J. (2001), “Human factors in business-to-business research over the internet”, International Journal of Market Research, Vol. 43 No. 4, pp. 425-40.
Brown, L. M., & Brewster, S. A. (2003). Drawing by ear: Interpreting sonified line graphs. Proceedings of the International Conference on Auditory Display (ICAD2003) (pp. 152-156), Boston, MA.
Brown, M. L., Newsome, S. L., & Glinert, E. P. (1989). An experiment into the use of auditory cues to reduce visual workload. Proceedings of the ACM CHI 89 Human Factors in Computing Systems Conference (CHI 89) (pp. 339-346).
Buxton, W. (1989). Introduction to this special issue on nonspeech audio. Human-computer Interaction, 4, 1-9.
Cardon, L. R.; Smith, S. D.; Fulker, D. W.; Kimberling, W. J.; Pennington, B. F.; DeFries, J. C. (1994). Quantitative Trait Locus for Reading Disability on Chromosome 6. Science, 266, 276-279.
Chapman, J. W.; Tunmer, W. E. (2003). Reading Difficulties, Reading-Related Self-Perceptions, and Strategies for Overcoming Negative Self-Beliefs. Reading and Writing Quarterly, 19, 5-24.
Clay, M. M. (2001). Change over Time of Children’s Literacy Achievement. Portsmouth, NH: Heinermann.
D.R. Worrall, “An introduction to data sonification,” in R. T. Dean (ed.), The Oxford Handbook of Computer Music and Digital Sound Culture, Oxford: Oxford University Press, 2009.
Dale, M. and Taylor, B. (2001), “How adult learners make sense of their dyslexia”, Disability and Society, Vol. 16 No. 7, pp. 997-1008.
Davis, R. D.; Braun, E. M. (1994). The Gift of Dyslexia. Perigee: New York.
de Campo, A. (2006). Data sonification design space map. Unpublished manuscript.
Department of Education and Skills. (2001.) Special Education Needs Code of Practice.
Department of Education and Skills. (2004). Removing Barriers to Achievement.
Dombois, F. (2002). Auditory seismology - On free oscillations, focal mechanisms, explosions, and synthetic seismograms. Proceedings of the 8th International Conference on Auditory Display (pp. 27-30), Kyoto, Japan.
Dyslexia.com (2013), “Famous people with the gift of dyslexia.” Available at: http://www.dyslexia.com/famous.htm [Accessed on: 10/03/2013].
Edworthy, J. (1998). Does sound help us to work better with machines? A commentary on Rautenberg's paper 'About the importance of auditory Principles of Sonification: An Introduction to Auditory Display and Sonification Page 26 of 32 alarms during the operation of a plant simulator'. Interacting with Computers, 10, 401-409.
Fawcett, A. J. (2001). Dyslexia: Theory and Good Practice. Whurr: London.
Fawcett, A. J.; Nicolson, R. I. (1999). Performance of Dyslexic Children on Cerebellar and Cognitive Tests. Journal of Motor Behaviour, 31, 68-78
Fawcett, A. J.; Nicolson, R. I.; Dean, P. (1996). Impaired Performance of Children with Dyslexia on a range of Cerebellar Tasks. Annals of Dyslexia, 46, 259-283.
Field, L. L.; Kaplan, B. J. (1998). Absence of Linkage of Phonological Coding Dyslexia to Chromosome 6p23-p21.3 in a Large Family Data Set. The American Journal of Human Genetics, 63, 1448-1456.
Fitch WT, Kramer G (1994) Sonifying ther Body Electric: Superiority of an Auditory over a Visual Display in a Complex Multivariate System. In: Kramer G (ed) Auditory Display: Sonification, Audification and Auditory Interfaces. SFI Studies in the Sciences of Complexity, Proceedings Volume XVIII. Addison Wesley, Reading, Mass., Web proceedings <http://www.santafe.edu/˜icad>
Flowers, J. H., & Hauer, T. A. (1992). The ear's versus the eye's potential to assess characteristics of numeric data: Are we too visuocentric? Behavior Research Methods, Instruments & Computers, 24(2), 258-264.
Flowers, J. H., & Hauer, T. A. (1993). "Sound" alternatives to visual graphics for exploratory data analysis. Behavior Research Methods, Instruments & Computers, 25(2), 242-249.
Flowers, J. H., & Hauer, T. A. (1995). Musical versus visual graphs: Crossmodal equivalence in perception of time series data. Human Factors, 37(3), 553-569.
Flowers, J. H., Buhman, D. C., & Turnage, K. D. (1997). Cross-modal equivalence of visual and auditory scatterplots for exploring bivariate data samples. Human Factors, 39(3), 341-351.
Franklin, K. M., & Roberts, J. C. (2004). A path based model for sonification. Proceedings of the Eighth International Conference on Information Visualization (IV '04) (pp. 865-870).
Fricker, R.D. Jr and Schonlau, M. (2002), “Advantages and disadvantages of internet research surveys: evidence from the literature”, Field Methods, Vol. 14 No. 4, pp. 347-67.
Frith, U. (1997). Brain, Mind and Behaviour in Dyslexia. In Hulme, C.; Snowling, M. Dyslexia: Biology, Cognition and Intervention. Whurr: London.
Frysinger, S. P. (2005). A brief history of auditory data representation to the 1980s. Proceedings of the International Conference on Auditory Display (ICAD 2005), Limerick, Ireland.
Fulbright, R. K.; Jenner, A. R.; Mencl, W. E.; Pugh, K. R.; Shaywitz, B. A.; Shaywitz, S. E.; Frost, S. J.; Skudlarski, P.; Constable, R. T.; Lacadie, C. M.; Marchione, K. E.; Gore, J. C. (1999). The Cerebellum’s role in reading: A Functional MR Imaging Study. American Journal of Neuroradiology, 20, 1925-1930.
Furrer, O. and Sudharshan, D. (2001), “Internet marketing research: opportunities and problems”, Qualitative Marketing Research, Vol. 4 No. 3, pp. 123-9.
Garner, W. R., & Gottwald, R. L. (1968). The perception and learning of temporal patterns. The Quarterly Journal of Experimental Psychology, 20(2).
Gaver WW (1994) Using and Creating Auditory Icons. In: Kramer G (ed) (1994) Auditory Display: Sonification, Audification and Auditory Interfaces. SFI Studies in the Sciences of Complexity, Proceedings Volume XVIII. Addison Wesley, Reading, Mass., pp 417–446
Gaver, W. W., Smith, R. B., & O'Shea, T. (1991). Effective sounds in complex systems: The ARKola simulation. Proceedings of the ACM Conference on Human Factors in Computing Systems CHI'91, New Orleans.
Gee, J. P. (2001). Reading as Situated Language: A Sociocognitive Perspective. Journal of Adolescent & Adult Literacy, 44, 714-725.
Greenspan, R. (2003), “Google gains overall, competition builds niches,” June 2, available at: www.clickz.com/stats/sectors/software/article.php/3362591
Grigorenko, E. L.; Wood, F. B.; Meyer, M. S.; Hart, L. A.; Speed, W. C.; Shuster, A.; Pauls, D. L. (1997). Susceptibility Loci for Distinct Components of Developmental Dyslexia on Chromosomes 6 and 15. American Journal of Human Genetics, 60, 27-39.
Grossnickle, J. and Raskin, O. (2001), “What’s ahead on the internet”, Marketing Research, No. Summer, pp. 9-13.
Haas, E., & Edworthy, J. (2006). An introduction to auditory warnings and alarms. In M. S. Wogalter (Ed.), Handbook of Warnings (pp. 189-198).
Hatcher, J., Snowling, M. and Griffiths, Y. (2002), “Cognitive assessment of dyslexic students in higher education”, British Journal of Educational Psychology, Vol. 72 No. 1, pp. 119-33.
Heiervang, E.; Stevenson, J.; Hugdahl, K. (2002). Auditory Processing in Children with Dyslexia. Journal of Child Psychology and Psychiatry, 43, 931-938.
Hermann, T. (2002), “Sonification for exploratory data analysis”, dissertation thesis, available at: http://sonification.de/publications/media/Hermann2002-SFE.pdf (accessed April 27, 2008).
Hermann, T., & Hunt, A. (2005). An introduction to interactive sonification. IEEE Multimedia, 12(2), 20-24.
Higgins, E. L. and Raskind, M. H, 2000. Speaking to Read: The Effects of Continuous vs. Discrete Speech Recognition Systems on the Reading and Spelling of Children With Learning Disabilities. Journal of Special Education Technology, 15 (1), 19-30.
Hinshelwood, J. (1917), Congenital Word Blindness, H.K. Lewis, London.
Ilieva, J., Baron, S. and Healey, N.M. (2002), “Online surveys in marketing research: pros and cons”, International Journal of Marketing Research, Vol. 44 No. 3, pp. 361-76.
J.H. Flowers, “Thirteen years of reflection on auditory graphing: Promises, pitfalls, and potential new directions,” in Proceedings of the First Symposium on Auditory Graphs, Limerick, Ireland, July 10, 2005
J.H. Flowers, D.C. Buhman and K.D. Turnage, “Crossmodal equivalence of visual and auditory scatterplots for exploring bivariate data samples,” in Human Factors, Volume 39, 1997, pp. 341-351.
Johannsen, G. (2004). Auditory displays in human-machine interfaces. Proceedings of the IEEE, 92(4), 742-758.
K. Hemenway, "Psychological issues in the use of icons in command menus," presented at CHI'82 Conference on Human Factors in Computer Systems, New York, 1982.
Kennel AR (1996) AudioGraf: A Diagram reader for Blind People. In: Proceedings of ASSETS’96 Second Annual ACM Conference on Assistive Technologies, April 11–12, 1996, Vancouver, Canada. ACM Press, New York, pp 51–56
Klassen, R. M. (2002). The Changing Landscape of Learning Disabilities in Canada: Definitions and Practice from 1989-2000. School Psychology International, 23, 1-21.
Kortum, P., Peres, S. C., Knott, B., & Bushey, R. (2005). The effect of auditory progress bars on consumer's estimation of telephone wait time. Proceedings of the Human Factors and Ergonomics Society 49th Annual Meeting (pp. 628-632), Orlando, FL.
Kramer G (1994a) Auditory Display: Sonification, Audification and Auditory Interfaces. SFI Studies in the Sciences of Complexity, Proceedings Volume XVIII. Addison Wesley, Reading, Mass.
Kramer, G. (1994). An introduction to auditory display. In G. Kramer (Ed.), Auditory display: Sonification, audification, and auditory interfaces (pp. 1-78). Reading, MA: Addison Wesley.
Kramer, G., Walker, B. N., Bonebright, T., Cook, P., Flowers, J., Miner, N., et al. (1999). The Sonification Report: Status of the Field and Research Agenda. Report prepared for the National Science Foundation by members of the International Community for Auditory Display. Santa Fe, NM: International Community for Auditory Display (ICAD).
Kramer, G., Walker, B., Bonebright, T., Cook, P., Flowers, J., Miner, N. and Neuhoff, J. (1999), “Sonification report: status of the field and research agenda”, technical report, International Community for Auditory Display, Santa Fe, NM, available at: www.icad.org/node/400 (accessed April 27, 2008).
Levitin, D. J. (1999). Memory for musical attributes. In P. Cook (Ed.), Music, Cognition, and Computerized Sound: An Introduction to Psychoacoustics. (pp. 209-227). Cambridge, MA: MIT Press.
Loomis JM, Gollege RG, Klatzky RL, Spiegle JM, Teitz J (1994) Personal Guidance System for the Visually Impaired. In: Proceedings of ASSETS’94 First Annual ACM Conference on Assistive Technologies, Oct 31–Nov 1, 1994. Los Angeles, Calif. ACM Press, New York, pp 85–91
Lovegrove, W. (1993), “Weakness in the transient visual system: a causal factor in dyslexia”, in Tallal, P. and Galaburda, A.M. (Eds), Temporal Information Processing in the Nervous System: Special Reference to Dyslexia and Dysphasia, Academy of Sciences, New York, NY, pp. 57-69.
Lyon, R. G.; Shaywitz, S. E.; Shaywitz, B. A. (2003). Defining Dyslexia, Comorbidity, Teachers Knowledge of Language and Reading. Annals of Dyslexia, 53, 1-14.
M. M. Blattner, D. A. Sumikawa, and R. M. Greenberg, "Earcons and icons: Their structure and common design principles," Human-Computer Interaction, vol. 4, pp. 11-44, 1989.
Malhotra, N.K. (2004), Marketing Research: An Applied Orientation, 4th ed., Prentice Hall, Englewood Cliffs, NJ.
Marshall, A. (2003). Brain Scans show Dyslexics Read Better with Alternative Strategies. www.dyslexia.com/science/different_pathways.htm
Martins ACG, Rangayyan RM, Portelo LA, Amaro E, Ruschioni RA (1996) Auditory Display and Sonification of Textured Images. In: Frysinger S, Kramer G (eds) Proceedings of the Third International Conference on Auditory Display ICAD’96, Palo Alto, Calif. Web proceedings <http://www.santafe.edu/˜icad>
McAdams, S., & Bigand, E. (1993). Thinking in sound: the cognitive psychology of human audition. Oxford: Oxford University Press.
McDaniel, C. and Gates, R. (2005), Marketing Research, 6th ed., John Wiley & Sons, New York, NY.
McEneaney, J. E.; Lose, M. K.; Schwartz, R. M. (2006). A Transactional Perspective on Reading Difficulties and Response to Intervention. Reading Research Quarterly, 41, 117-128.
Meijer, P. (2000) Sensory substitution, http://ourworld.compuserve.com/homepages/ Peter_Meijer/sensub.htm.
Miles, T. R.; Haslum, M. N; Wheeler, T. J. (1998). Gender Ratios in Dyslexia. Annals of Dyslexia, 48, 27-55.
Miller, T.W. (2001), “Can we trust the data of online research?”, Marketing Research, Vol. 13, Summer, pp. 26-32
Moore, B. C. J. (1997). An introduction to the psychology of hearing (4th ed.). San Diego, Calif.: Academic Press.
Nicolson, R.; Fawcett, A. J.; Dean, P. (2001). Dyslexia, Development and the Cerebellum. Trends in Neurosciences, 24, 515-516.
P. Keller and C. Stevens, "Meaning from environmental sounds: Types of signal-referent relations and their effect on recognizing auditory icons," Journal of Experimental Psychology: Applied, vol. 10, pp. 3-12, 2004.
P. Kolers, "Some formal characteristics of pictograms," American Scientist, vol. 57, pp. 348- 363, 1969.
Padget, S. Y. (1998). Lessons from Research on Dyslexia: Implications for a Classification System for Learning Disabilities, Learning Disability Quarterly, 21, 167-178.
Pammer, K.; Vidyasagar, T. R. (2005). Integration of the Visual and Auditory Networks in Dyslexia: A Theoretical Perspective. Journal of Research in Reading, 28, 320-331.
Pennington, B., Orden, G. and Smith, S. (1990), “Phonological processing skills and deficits in adult dyslexics”, Child Development, Vol. 61 No. 6, pp. 1753-78.
Pringle-Morgan, W. (1896), “A case of congenital word blindness”, British Medical Journal, Vol. 2, p. 178.
Ramus, R.; Rosen, S.; Dakin, S.; Day, B.; Castellote, J.; White, S.; Frith, U. (2003). Theories of Developmental Dyslexia: Insights from a Multiple Case Study of Dyslexic Adults. Brain, 126, 841-865.
Raskind, M. H. and Higgins, E. L, 1999. Speaking to Read: The Effects of Speech  Recognition Technology on the Reading and Spelling Performance of Children with Learning Disabilities. Annals of Dyslexia, 49, 251-281.
Ray, N.M. and Tabor, S.W. (2003), “Cyber surveys come of age”, Marketing Research, Spring, pp. 32-7.
Richardson, J. and Wydell, T. (2003), “The representation and attainment of students with dyslexia in UK higher education”, Reading and Writing, Vol. 16 No. 5, pp. 475-503.
S. Brewster, P. C. Wright, and A. D. N. Edwards, "A detailed investigation into the effectiveness of earcons," presented at First International Conference on Auditory Display, Santa Fe, New Mexico, 1992.
Salvendy, G. (1997). Handbook of human factors and ergonomics (2nd ed.). New York: Wiley.
Sanders, M. S., & McCormick, E. J. (1993). Human factors in engineering and design (7th ed.). New York: McGraw-Hill.
Scholl, N., Mulders, S. and Drent, R. (2002), “Online qualitative market research: interviewing the world at a fingertip”, Qualitative Market Research, Vol. 5 No. 3, pp. 210-23.
Shannon, C. E. (1998/1949). Communication in the presence of noise. Proceedings of the IEEE, 86(2), 447-457.
Smith, D. R., & Walker, B. N. (2005). Effects of auditory context cues and training on performance of a point estimation sonification task. Journal of Applied Cognitive Psychology, 19(8), 1065-1087.
Shaw, S.; Cullen, J.; McGuire, J.; Brinckerhoff, L. (1995). Operationalising a Definition of Learning Disabilities. Journal of Learning Disabilities, 28, 586-597.
Shaywitz, S. (1998), “Current concepts: dyslexia”, New England Journal of Medicine, Vol. 338, pp. 307-12.
Shaywitz, S. E.; Fletcher, J. M.; Holahan, J. M.; Shneider, A. E.; Marchione, K. E.; Stuebing, K. K.; Francis, D. J.; Pugh, K. R.; Shaywitz, B. A. (1999). Persistence of Dyslexia: The Connecticut Longitudinal Study at Adolescence. Pediatrics, 104, 1351-1359.
Shaywitz, S. E.; Shaywitz, B. A.; Fletcher, J. M.; Escobar, M. D. (1990). Prevalence of Reading Disability in Boys and Girls. Journal of the American Medical Association, 264, 998-1002.
Shepherd, I.D.H. (1995) Multi-sensory GIS: mapping out the research frontier, in: Waugh, T. (Ed.) Proceedings of the 6th International Symposium on Spatial Data Handling, pp.356- 410 (London: Taylor & Francis).
Simmons, F. and Singleton, C. (2000), “The reading comprehension abilities of dyslexic students in higher education”, Dyslexia, Vol. 6 No. 3, pp. 178-92.
Snow, C., Burns, M. and Griffin, P. (1998), Preventing Reading Difficulties in Young Children, National Academy Press, Washington, DC.
Snowling, M. (1987), Dyslexia: A Cognitive Developmental Perspective, Basil Blackwell, Oxford.
Snowling, M. (2001), Dyslexia, Blackwell, Oxford.
Snowling, M. J. (2000). Dyslexia. 2nd ed. Oxford: Blackwell.
Sorkin, R. D. (1987). Design of auditory and tactile displays. In G. Salvendy (Ed.), Handbook of human factors (pp. 549-576). New York: Wiley & Sons.
Spence, C., & Driver, J. (1997). Audiovisual links in attention: Implications for interface design. In D. Harris (Ed.), Engineering Psychology and Cognitive Ergonomics Vol. 2: Job Design and Product Design (pp. 185- 192). Hampshire: Ashgate Publishing.
Stanovich, K. E. (1998). Refining the Phonological Core Deficit Model. Child Psychology and Psychiatry Review, 3, 17-21.
Stanovich, K. E. (1999). The Sociopsychometrics of Learning Disabilities. Journal of Learning Disabilities, 22, 350-361.
Stein, J. (2001). The Magnocellular Theory of Dyslexia. Dyslexia, 7, 12-36.
Stein, J. and Talcott, J. (1999), “Impaired neuronal timing in developmental dyslexia: the magnocellular hypothesis”, Dyslexia, Vol. 5 No. 1, pp. 56-77.
Stein, J.; Walsh, V. (1997). To See but not to Read: The Magnocellular Theory of Dyslexia. Trends in Neuroscience, 20, 147-152.
Stevens RD, Brewster SA, Wright PC, Evans ADN (1994) Design and Evaluation of an Auditory Glance at Algebra for Blind readers. In: Kramer G, Smith S (eds) Proceedings of the Second International Conference on Auditory Display ICAD ‘94, Santa Fe Institute, Santa Fe, New Mexico. 7–9 Nov, 1994
Stokes, A., Wickens, C. D., & Kite, K. (1990). Display Technology: Human Factors Concepts. Warrendale, PA: Society of Automotive Engineers.
Swan N (1996) Ways of Seeing. The Health Report. Radio National Transcripts, Monday, 19 February 1996; http://www.abc.net.au/rn/talks/8.30/helthrpt/hstories/hr190201.htm
Tallal, P.; Merzenich, M. M.; Miller, S.; Jenkins, W. (1998). Language Learning Impairments: Integrating Basic Science, Technology, and Remediation. Experimental Brain Research, 123, 210-219.
Taylor, M., Duffy, S., and Hughes, G. (2007), "The use of animation in higher education teaching to support students with dyslexia", Education + Training, Vol. 49 No. 1, pp. 25-35.
Temple, E.; Poldrack, R. A.; Salidis, J.; Deutsch, G. K.; Tallal, P.; Merzenich, M. M. (2001). Disrupted Neural Responses to Phonological and Orthographical Processing in Dyslexic Children: an fMRI Study. Neuroreport, 12, 299-307.
Thomas, H. (2008), "Taxonomy and definitions for sonification and auditory display." Proceedings of the 14th International Conference on Auditory Display, Paris, France, June 24-27, 2008. Available at: http://pub.uni-bielefeld.de/download/2017235/2280244 [Accessed on: 3-3-2013].
Tingling, P., Parent, M. and Wade, M. (2003), “Extending the capabilities of internet-based research: lessons from the field”, Internet Research, Vol. 13 No. 3, pp. 223-35.
UK Higher Education Statistics Agency (HESA) (2006), “First year UK domiciled HE students with a disability”, available at: www.hesa.ac.uk/
US Department of Education. (2001). No Child Left Behind Act.
US Department of Education. (2004). Individuals with Disabilities Education Act.
van Ingelghem, M.; van Wieringen, A.; Wouters, J.; Vandenbussche, E.; Onghena, P.; Ghesquière, P. (2001). Psychophysical Evidence for a General Temporal Processing Deficit in Children with Dyslexia. Neuroreport, 12, 3603-3607.
Vellutino, F. R.; Fletcher, J. M.; Snowling, M. J.; Scanlon, D. M. (2004). Specific Reading Disability (Dyslexia): What have we Learned in the Past Four Decades? Journal of Child Psychology and Psychiatry, 45, 2-40.
Vicari, S., Finzi, A., Menghini, D., Marotta, L., Baldi, S. and Petrosini, L. (2005), “Do children with developmental dyslexia have an implicit learning deficit?”, Journal of Neurology, Neurosurgery and Psychiatry, Vol. 76 No. 10, pp. 1392-7.
W. W. Gaver, "Auditory icons: Using sound in computer interfaces," Human-Computer Interaction, vol. 2, pp. 167-177, 1986.
Wadswoth, S. J.; DeFries, J. C.; Stevenson, J.; Gilger, J. W.; Pennington, B. F. (1992). Gender Ratios among Reading Disabled Children and their Siblings as a Function of Parental Impairment. Journal of Child Psychology and Psychiatry, 33, 1229-1239.
Walker, B. N. (2002). Magnitude estimation of conceptual data dimensions for use in sonification. Journal of Experimental Psychology: Applied, 8, 211-221.
Walker, B. N., & Kramer, G. (2004). Ecological psychoacoustics and auditory displays: Hearing, grouping, and meaning making. In J. Neuhoff (Ed.), Ecological psychoacoustics (pp. 150-175). New York: Academic Press.
Walker, B. N., & Kramer, G. (2005). Mappings and metaphors in auditory displays: An experimental assessment. ACM Transactions on Applied Perception, 2(4), 407-412.
Watson, M. (2006). Scalable earcons: Bridging the gap between intermittent and continuous auditory displays. Proceedings of the 12th International Conference on Auditory Display (ICAD06), London, UK.
Wickens, C. D., & Liu, Y. (1988). Codes and modalities in multiple resources: A success and a qualification. Human Factors, 30(5), 599-616.
Wickens, C. D., Gordon, S. E., & Liu, Y. (1998). An introduction to human factors engineering. New York: Longman.
Wickens, C. D., Sandry, D. L., & Vidulich, M. (1983). Compatibility and resource competition between modalities of input, central processing, and output. Human Factors, 25(2), 227-248.
Wilson, A. and Laskey, N. (2003), “Internet-based marketing research: a serious alternative to traditional research methods?”, Marketing Intelligence & Planning, Vol. 21 No. 2, pp. 79-84. 