Show simple item record

dc.contributor.advisor  Gilbert, Juan
dc.contributor.advisor  Seals, Cheryl  en_US
dc.contributor.advisor  Grandjean, Peter  en_US
dc.contributor.author  Williams, Andrea  en_US
dc.date.accessioned  2009-02-23T15:53:42Z
dc.date.available  2009-02-23T15:53:42Z
dc.date.issued  2007-08-15  en_US
dc.identifier.uri  http://hdl.handle.net/10415/1386
dc.description.abstract  In the software development cycle, developers often hold usability studies to test the accuracy and effectiveness of software and to gather user feedback on its satisfaction and usability. In practice, usability studies give developers insight into the mind of the user and unveil errors, major and minor, within the system. As with any activity that involves users and studies, planning and budgeting are required to assess the cost of usability testing and of the participants in the study. Planning can be time consuming because activities such as designing the study, enlisting participants, and possibly running the study several times must take place. In planning, developers must weigh different usability methods, heuristic evaluation, and observation of tasks performed during the study; these activities can become burdensome and intimidating to companies not familiar with this practice or not sure which practices will benefit them most. Budgeting for studies within the development cycle is often a tug of war: although several tests might prove beneficial over the life of the project, in the short term the budget might not allow for testing at all, or it might allow only a single test with a select number of participants. Problems with planning and budgeting often cause studies to be drastically cut down in size or eliminated altogether. Determining the best factors for a study can be difficult because a study should be designed to fit the particular company, its size, and its goal for the study. Some companies are not familiar enough with study design and must either hire someone to implement their study, implement it themselves, or neglect it altogether.
In all cases, the outcome can become costly if proper judgment is not used in selecting the type of study, the number of participants, the type of participants, or the number of runs (trials) needed for that particular study. The aim of this experiment was to find a plausible solution to selecting participants for studies by using Applications Quest. Applications Quest is data-mining software that clusters applications based on holistic comparisons. Holistically reviewing an application means considering each and every attribute of the application such that no single attribute weighs more heavily than another. For committees, holistically reviewing applications is time consuming and difficult because humans cannot effectively compare attributes subjectively while reproducing their results. Applications Quest holistically compares applications and recommends applicants that represent diversity, with diversity not defined by race or ethnicity. Because the algorithm compares each application with the same rigor, the results are reproducible and justifiable. Applications Quest takes a group of size N and from that group selects participants representative of the population. The selected participants help reduce costs by minimizing the number of participants necessary while still maintaining result quality. Two approaches were compared: random selection and Applications Quest selection. Random sampling draws samples intended to be representative of the larger population. Each method was used to evaluate the same data in this experiment. Although the random trials were able to compete with the results of Applications Quest, Applications Quest produced results that were insignificantly different as well as consistently reproducible.
The random trials were unpredictable, and that unpredictability cannot guarantee the certainty or reliability needed when selecting participants. These findings s  en_US
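The abstract describes the two selection strategies only at a high level, and the record does not include Applications Quest's actual algorithm. As a rough illustration of the idea — clustering-style selection of representatives with every attribute weighted equally, versus random sampling — the following is a minimal, hypothetical sketch; the function names, the greedy k-medoid-style heuristic, and the numeric attribute vectors are all assumptions, not the thesis's method.

```python
import random

def distance(a, b):
    # Equal-weight comparison: every attribute counts the same,
    # echoing the "holistic" comparison described in the abstract.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def select_representatives(pool, k):
    # Greedy k-medoid-style selection (an assumption, not Applications
    # Quest itself): repeatedly add the candidate that most reduces the
    # total distance from every application to its nearest chosen
    # representative. Deterministic, so the result is reproducible.
    chosen = []
    while len(chosen) < k:
        best, best_cost = None, float("inf")
        for cand in pool:
            if cand in chosen:
                continue
            trial = chosen + [cand]
            cost = sum(min(distance(p, r) for r in trial) for p in pool)
            if cost < best_cost:
                best, best_cost = cand, cost
        chosen.append(best)
    return chosen

def random_selection(pool, k, seed=None):
    # Baseline: plain random sampling, which varies from run to run
    # unless a seed is fixed.
    rng = random.Random(seed)
    return rng.sample(pool, k)

# Tiny example: two tight groups plus a central point.
pool = [(0, 0), (0, 1), (10, 10), (10, 11), (5, 5)]
reps = select_representatives(pool, 2)
```

Calling `select_representatives` twice on the same pool returns the same participants, while `random_selection` without a fixed seed does not — the reproducibility contrast the abstract draws between the two methods.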
dc.language.iso  en_US  en_US
dc.rights  EMBARGO_NOT_AUBURN  en_US
dc.subject  Computer Science and Software Engineering  en_US
dc.title  Usability Size N  en_US
dc.type  Thesis  en_US
dc.embargo.length  MONTHS_WITHHELD:24  en_US
dc.embargo.status  EMBARGOED  en_US
dc.embargo.enddate  2011-02-23  en_US


