"Hardware add-ons, software updates, modifications, and new releases are commonplace in computing life. As a result, the number and types of decisions have grown considerably in the past 20 years" (Piccano, 2011, p. 187). It's obvious to state that in today's world, technology hardware and software evaluation is a major task of school leaders, administrators and technology specialists. In addition to hardware and software, now there are also decisions to be made about applications (apps), web 2.0 tools, cloud storage and instructional technology usage in general. Considering Apple apps alone, there were 20,000 education and learning apps developed by the year 2012 (Rao, 2012). Three years later, we can only imagine how many more education applications are available (and we're just talking Apple apps here!) According to Piccano (2011), "even after hardware or software has been selected, new models or new versions quickly appear - requiring yet another decision" (p.187).
To help sort through all of these decisions, Picciano (2011) offers seven criteria for selecting hardware and six for selecting software. The hardware criteria are: "performance (how well does the hardware work?); compatibility (does the hardware work with other equipment?); modularity/expandability (can the hardware grow as applications grow?); ergonomics (is the hardware designed with people in mind?); software availability (is the software you wish to use currently available?); vendor (what is the reputation of the manufacturer in terms of technical support, maintenance, and industry position?); cost (what are the costs?)" (p. 190). The software criteria include "efficiency (how well are the programs written?); ease of use (how easy is the software to use?); documentation (what is the quality and quantity of the documentation?); hardware requirements (what hardware is needed to run the software?); vendor (same) and cost (same)" (p. 197).
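To make these criteria a bit more concrete, here is one way a technology committee might turn Picciano's hardware list into a simple weighted scoring rubric. This is only a sketch in Python: the criterion names come from the text, but the weights, the 1-5 rating scale, and the example laptop ratings are my own illustrative assumptions, not Picciano's.

```python
# A minimal sketch of Picciano's hardware criteria as a weighted scoring rubric.
# The weights and the 1-5 rating scale are illustrative assumptions, not from the text.

HARDWARE_CRITERIA = {
    "performance": 3,                # how well does the hardware work?
    "compatibility": 2,              # does it work with existing equipment?
    "modularity/expandability": 2,   # can it grow as applications grow?
    "ergonomics": 1,                 # is it designed with people in mind?
    "software availability": 2,      # is the software you need available for it?
    "vendor": 1,                     # support, maintenance, industry position
    "cost": 3,                       # what are the costs?
}

def score_option(ratings, weights=HARDWARE_CRITERIA):
    """Combine a committee's 1-5 ratings into a single weighted score."""
    missing = set(weights) - set(ratings)
    if missing:
        raise ValueError(f"Missing ratings for: {sorted(missing)}")
    total_weight = sum(weights.values())
    return sum(ratings[c] * w for c, w in weights.items()) / total_weight

# Hypothetical example: two laptop models rated by a review committee.
laptop_a = {"performance": 4, "compatibility": 5, "modularity/expandability": 3,
            "ergonomics": 4, "software availability": 5, "vendor": 4, "cost": 2}
laptop_b = {"performance": 3, "compatibility": 4, "modularity/expandability": 4,
            "ergonomics": 3, "software availability": 4, "vendor": 3, "cost": 5}

print(f"Laptop A: {score_option(laptop_a):.2f}  Laptop B: {score_option(laptop_b):.2f}")
```

A district would, of course, set its own weights; the point is simply that writing the criteria down forces the committee to decide what matters most before the sales pitch begins.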
More important than hardware and software decisions are decisions about the educational effectiveness of the program (whether it be hardware or software). At first glance, it seemed that this important factor was left out of Picciano's hardware and software criteria guidelines. However, in Appendix C of the text, Picciano compiles an extensive list of criteria for developing a software evaluation form or checklist for schools or districts. According to Picciano (2011), "an evaluation form helps define the evaluation procedure itself, and administrators and teachers should work together to develop what they feel will work best in their schools" (p. 208). The evaluation factors are grouped into categories: general, content, appropriateness, questioning techniques, approach/motivation, evaluator's field test results, creativity, learner control, learning objectives, goals and outcomes, feedback, simulations, teacher modifiability, evaluation and record keeping, documentation and support material, technical quality, start-up and implementation, graphics and audio, probeware and peripherals included in the software package, and hardware and marketing issues (pp. 289-294). All of these factors, together with the hardware and software criteria, will greatly help any school or district determine the most effective technology components to enhance its curriculum.
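Picciano leaves the exact design of the evaluation form up to each school, so the sketch below is only one guess at how a district might tally results under these categories. The category names come from the Appendix C list above; the yes/no/not-applicable item structure and the summary calculation are assumptions of mine, not something prescribed in the text.

```python
# A sketch of a district software evaluation form, assuming a simple
# yes / no / not-applicable checklist of items under each category.
from collections import Counter

FORM_CATEGORIES = [
    "general", "content", "appropriateness", "questioning techniques",
    "approach/motivation", "evaluator's field test results", "creativity",
    "learner control", "learning objectives, goals and outcomes", "feedback",
    "simulations", "teacher modifiability", "evaluation and record keeping",
    "documentation and support material", "technical quality",
    "start-up and implementation", "graphics and audio",
    "probeware and peripherals", "hardware and marketing issues",
]

def summarize(responses):
    """responses maps category -> list of 'yes' / 'no' / 'na' answers.
    Returns the share of 'yes' answers per category (ignoring 'na')."""
    summary = {}
    for category in FORM_CATEGORIES:
        counts = Counter(responses.get(category, []))
        answered = counts["yes"] + counts["no"]
        summary[category] = counts["yes"] / answered if answered else None
    return summary

# Hypothetical example: a teacher's partial review of a math app.
review = {
    "content": ["yes", "yes", "no"],
    "feedback": ["yes", "na", "yes"],
    "technical quality": ["no", "no"],
}
for category, share in summarize(review).items():
    if share is not None:
        print(f"{category}: {share:.0%} of applicable items met")
```

However a school formats its form, the value is in the conversation it forces: teachers and administrators have to agree on what "effective" looks like before a purchase is made.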
References
Picciano, A. G. (2011). Educational leadership and planning for technology (5th ed.). Upper Saddle River, NJ: Pearson.
Rao, L. (2012). Apple: 20,000 education iPad apps developed; 1.5 million devices in use at schools. TechCrunch. http://techcrunch.com/2012/01/19/apple-20000-education-ipad-apps-developed-1-5-million-devices-in-use-at-schools/
Instructional Software Evaluation Factors: http://www-personal.umich.edu/~sdbest/techplan/maps/App_e.htm
20,000 education apps? It makes you wonder how many "perfect" apps go unnoticed. Scholastic has put together a list of 50 apps for teachers that they think are worth trying out (http://www.scholastic.com/teachers/article/50-fab-apps-teachers). That is less than 1% of the apps you mentioned. No matter how the evaluation plays out, there is no way we can successfully evaluate them all. I know it only takes one successful app to make a difference, but I would always wonder what else is out there.
It's amazing that there are so many criteria that need to be "graded" when determining software and hardware choices. I love the idea of a software evaluation form mentioned above. It falls in line with the idea that planning for technology and researching what is best for students should be the first priority in acquiring educational technology. Picciano (2011) says that technology should be used to enhance instruction and not supplement it (p. 251). I agree that the tools discussed above would help stakeholders answer many important questions regarding software purchases.