

2018 | 6/2018 (80), part 2 | 23–38

Article title

Evaluation as Reflective Practice

Languages of publication

PL EN

Abstracts

EN
Reflective practice has become an influential concept in the evaluation field. Greater use of reflective practice is advocated both for evaluators’ own professional development and as a means to enhance dialogue, stakeholder involvement and organisational learning in the evaluation process. The aim of the paper is to examine the evaluation endeavour from the latter perspective, i.e. to present evaluation approaches that offer the opportunity for collaborative reflective practice. To this end, evaluation as reflective practice is discussed at three levels: (1) the organisational level – the model of single-, double- and triple-loop learning is discussed in relation to formative, summative and developmental evaluation; (2) the evaluator’s level – the different roles performed by the evaluator are considered from the point of view of promoting collaborative reflective practice; and (3) the broader socio-political level – in relation to the concept of civil society, since evaluation can contribute not only to greater rigour and effectiveness of public spending but also to social empowerment, appreciation of diversity and trust-building (improving democratic policy-making).

Pages

23–38

Dates

online
2019-01-08

Contributors

  • Poznań University of Technology, Faculty of Management Engineering, Chair of Entrepreneurship and Business Communication

References

  • 1. Alkin, M. (2013). Evaluation Roots: A Wider Perspective of Theorists’ Views and Influence. Thousand Oaks, CA: SAGE Publications.
  • 2. Argyris, C. & Schön, D. (1978). Organizational Learning: A Theory of Action Perspective. Reading, MA: Addison-Wesley.
  • 3. Bateson, G. (1973). Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution and Epistemology. London: Paladin, Granada.
  • 4. Campbell, D. (1984). Can we be scientific in applied social science? Evaluation Studies Review Annual, 9. Beverly Hills, CA: Sage Publications.
  • 5. Chen, H.-T. (1996). A comprehensive typology for program evaluation. Evaluation Practice, 17(2), 121–130. https://doi.org/10.1177/109821409601700204.
  • 6. Cruise, P. (1999). Values, Program Evaluation and the New Public Management. International Journal of Organization Theory and Behavior, 2(3/4), 383–412.
  • 7. De Laat, B. & Williams, K. (2014). Evaluation Use Within the European Commission (EC): Lessons for the Evaluation Commissioner. In: M. Loud & J. Mayne (eds), Enhancing Evaluation Use: Insights from Internal Evaluation Units (pp. 147–174). Beverly Hills, CA: Sage Publications.
  • 8. Dickson, R. & Saunders, M. (2014). Developmental evaluation: Lessons for evaluative practice from the SEARCH Program. Evaluation, 20(2), 176–194.
  • 9. Donaldson, S. & Picciotto, R. (2016). Evaluation for an Equitable Society. Charlotte, NC: Information Age Publishing, Inc.
  • 10. Fiol, C. & Lyles, M. (1985). Organizational Learning. The Academy of Management Review, 10(4), 803–813.
  • 11. Flood, R. & Romm, N. (1996). Diversity Management: Triple-loop Learning. Chichester: John Wiley & Sons.
  • 12. Gamble, J. (2008). A Developmental Evaluation Primer. Ottawa: The J.W. McConnell Family Foundation.
  • 13. Horsch, K. (1998). Evaluation in the 21st Century: Interview with Carol H. Weiss. The Evaluation Exchange, 4(2), 5–6.
  • 14. Isaacs, W. (1993). Taking flight: Dialogue, collective thinking, and organizational learning. Organizational Dynamics, 22(2), 24–39.
  • 15. Jones, D. & Stubbe, M. (2004). Communication and the reflective practitioner: A shared perspective from sociolinguistics and organisational communication. International Journal of Applied Linguistics, 14(2), 185–211. https://doi.org/10.1111/j.1473-4192.2004.00059.x.
  • 16. Kubera, P. (2017). Conceptual Framework for Evaluation of Economic Impacts of RDI Instruments. Journal Association SEPIKE, 17, 90–96.
  • 17. Luo, H. (2010). The Role for an Evaluator: A Fundamental Issue for Evaluation of Education and Social Programs. International Education Studies, 3(2), 42–50.
  • 18. Markiewicz, A. & Patrick, I. (2016). Developing Monitoring and Evaluation Frameworks. Thousand Oaks, CA: SAGE Publications, Inc.
  • 19. McDavid, J., Huse, I. & Hawthorn, L. (2013). Program Evaluation and Performance Measurement: An Introduction to Practice (2nd ed.). Thousand Oaks, CA: SAGE Publications.
  • 20. Morabito, S. (2002). Evaluator Roles and Strategies for Expanding Evaluation Process Influence. American Journal of Evaluation, 23(3), 321–330.
  • 21. OECD. (2010). Glossary of Key Terms in Evaluation and Results Based Management. Paris: OECD.
  • 22. Patton, M. (1996). A world larger than formative and summative. Evaluation Practice, 17(2), 131–144. https://doi.org/10.1177/109821409601700205.
  • 23. Patton, M. (2007). Utilization-Focused Evaluation (4th ed.). Thousand Oaks, CA: Sage.
  • 24. Patton, M. (2011). Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York: Guilford Press.
  • 25. Patton, M. (2012a). Developmental Evaluation: Applying complexity concepts to enhance innovation and use. Report from an Expert Seminar with Dr Michael Quinn Patton. Wageningen: Centre for Development Innovation, Wageningen University & Research centre.
  • 26. Patton, M. (2012b). Essentials of Utilization-Focused Evaluation (3rd ed.). Thousand Oaks, CA: SAGE Publications.
  • 27. Patton, M. (2017). Developmental evaluation. In: J. Pokorski, Z. Popis & K. Herman-Pawłowska (eds), Theory-based evaluation in complex environments (pp. 7–19). Warsaw: PARP.
  • 28. Preskill, H. & Torres, R. (1999). Evaluative Inquiry for Learning in Organizations. Thousand Oaks, CA: SAGE Publications.
  • 29. Preskill, H., Zuckerman, B. & Matthews, B. (2003). An exploratory study of process use: Findings and implications for future research. American Journal of Evaluation, 24(4), 423–442. https://doi.org/10.1177/109821400302400402.
  • 30. Reynolds, M. (2007). Evaluation based on critical systems heuristics. In: B. Williams & I. Imam (eds), Using Systems Concepts in Evaluation: An Expert Anthology (pp. 101–122). Point Reyes, CA: Edge Press.
  • 31. Reynolds, M. & Holwell, S. (2010). Introducing Systems Approaches. In: M. Reynolds & S. Holwell (eds), Systems Approaches to Managing Change: A Practical Guide (pp. 1–24). London: Springer.
  • 32. Reynolds, M. & Williams, B. (2012). Systems thinking and Equity-focused evaluations. In: M. Segone & M. Bamberger (eds), Evaluation for equitable development results (pp. 115–141). New York: UNICEF.
  • 33. Rossi, P., Freeman, H. & Lipsey, M. (2004). Evaluation: A Systematic Approach (7th ed.). Thousand Oaks, CA: SAGE Publications.
  • 34. Schön, D. (1983). The reflective practitioner. New York: Basic Books.
  • 35. Schwandt, T. (2001). Dictionary of qualitative inquiry (2nd ed.). Thousand Oaks, CA: SAGE.
  • 36. Scriven, M. (1986). New frontiers of evaluation. Evaluation Practice, 7, 7–44.
  • 37. Scriven, M. (1991). Beyond formative and summative evaluation. In: G. McLaughlin & D. Phillips (eds), Evaluation and Education: At Quarter Century (pp. 19–64). Chicago, IL: University of Chicago Press.
  • 38. Shadish, W., Cook, T. & Leviton, L. (1991). Foundations of program evaluation. Newbury Park, CA: Sage.
  • 39. Skolits, G., Morrow, J. & Burr, E. (2009). Reconceptualizing Evaluator Roles. American Journal of Evaluation, 30(3), 275–295.
  • 40. Smith, T. (2014). Reflective Practice, Collaboration, and Stakeholder Communication: Where Does the Field of Evaluation Stand? PhD diss., University of Tennessee. Retrieved from: http://trace.tennessee.edu/utk_graddiss/2860/.
  • 41. Stake, R. & Trumbull, D. (1982). Naturalistic generalizations. Review Journal of Philosophy & Social Science, 7, 1–12.
  • 42. Stame, N. (2004). Theory-based Evaluation and Types of Complexity. Evaluation, 10(1), 58–76.
  • 43. Stevahn, L., King, J., Ghere, G. & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43–59. http://dx.doi.org/10.1177/1098214004273180.
  • 44. Stufflebeam, D., Madaus, G. & Scriven, M. (2000). Program Evaluation: A Historical Overview. In: D. Stufflebeam, G. Madaus & T. Kellaghan (eds), Evaluation Models: Viewpoints on Educational and Human Services Evaluation (2nd ed., pp. 3–18). Boston: Kluwer.
  • 45. Thompson, N. (2000). Theory and practice in the human services (2nd ed.). Buckingham: Open University Press.
  • 46. Thompson, N. & Pascal, J. (2012). Developing critically reflective practice. Reflective Practice, 13(2), 311–325. http://dx.doi.org/10.1080/14623943.2012.657795.
  • 47. Tosey, P., Visser, M. & Saunders, M. (2012). The origins and conceptualizations of ‘triple-loop’ learning: A critical review. Management Learning, 43(3), 291–307.
  • 48. Ulrich, W. (2000). Reflective Practice in Civil Society: The Contribution of Critically Systemic Thinking. Reflective Practice, 1(2), 247–268. Retrieved from: http://wulrich.com/downloads/ulrich_2000a.pdf.
  • 49. Ulrich, W. (2005). A brief introduction to critical systems heuristics (CSH). Web site of the ECOSENSUS project, Open University, Milton Keynes, UK, 14 October 2005. Retrieved from: http://www.ecosensus.info/about/index.html.
  • 50. Ulrich, W. (2008). Reflections on Reflective Practice (1/7): The Mainstream Concept of Reflective Practice. Retrieved from: http://wulrich.com/bimonthly_march2008.html.
  • 51. Vince, R. (2002). Organizing Reflection. Management Learning, 33(1), 63–78. https://doi.org/10.1177/1350507602331003.
  • 52. Weiss, C. (1988). If Program Decisions Hinged Only on Information: A Response to Patton. Evaluation Practice, 9(3), 15–28.
  • 53. Yuthas, K., Dillard, J. & Rogers, R. (2004). Beyond agency and structure: Triple-loop learning. Journal of Business Ethics, 51(2), 229–243.

Identifiers

DOI

10.7172/1644-9584.80.2
YADDA identifier

bwmeta1.element.desklight-d58b7dc5-4efb-44b2-9912-83aec53440c0