Educational Research and Reviews


Full Length Research Paper

Application of context input process and product model in curriculum evaluation: Case study of a call centre

Derya Kavgaoglu* and Bulent Alci

Yildiz Technical University (YTU), Istanbul, 34349, Turkey.


  •  Received: 11 April 2016
  •  Accepted: 15 August 2016
  •  Published: 10 September 2016

 ABSTRACT

This research, carried out in reputable dedicated call centres within the Turkish telecommunication sector, aims to evaluate competence-based curricula designed by means of internal funding, using Stufflebeam’s context, input, process, product (CIPP) model. The research uses a general survey model within the scope of descriptive research. The data collection instrument is the professional competence development curriculum CIPP evaluation scale developed by the researchers. Participants are 622 call centre agents who served in the Black Sea, Central Anatolia and Eastern Anatolia Regions in 2014 and 2015. Statistical analyses were conducted with Statistical Package for the Social Sciences (SPSS) v23.0 and AMOS v21.0 software. In addition to difference analyses, structural equation modelling (SEM) was applied. For the construct validity of the scale, exploratory and confirmatory factor analyses were conducted in turn. In the scores on the dimensions of the CIPP evaluation scale, significant differences by gender and education background were observed between the opinions of the participants.

Key words: CIPP model, curriculum evaluation, competence-based curriculum development, adult education, talent management.


 INTRODUCTION

Nowadays, competence-based education and talent management have become the most important areas on which modern businesses place emphasis, with the aim of benefiting maximally from skilled labour. The process that provides the most reliable information on how well the efforts in these spheres run is evaluation, which lies at the centre of these applications. In non-competent hands, the processes that fall within the responsibility of a business’s Human Resources, Training, Learning and Development, or Career and Competence units may turn into a mere education business. Aware of this demand, the market manages to extract unearned income from this need through routine curriculums (education programs) by creating brands and fashion.

However, education is a scientific process, not a fashion trend. Noticing needs, determining competences, discovering development areas and making purposeful instructional designs, and most importantly, finding out whether these efforts feed a real development, that is, ‘curriculum evaluation’, necessitate domain expertise and professional knowledge. In this sense, allocating extensive budgets to education and then measuring its outcomes by entertaining the participants with popular activities and satisfaction questionnaires can never be evidence of a real educational activity. With reference to the definitions in the literature, curriculum evaluation can be explained as a scientific process comprising a range of systematic enquiries into the efficiency of an applied curriculum, integrating data collection, analysis, comparison, decision-making and judgement practices (Demirel, 2006; Doll, 1992; Erden, 1993; Erturk, 1975; Sonmez, 2010; Taba, 1962; Tyler, 1949; Worthen and Sanders, 1987; Varis, 1996). A curriculum evaluation specialist should first know that:

1. Curriculum evaluation, the most important step of curriculum development, is an evidence-based reasoning process. Ornstein and Hunkins (2014) explain this through the hourglass metaphor comprising cognition, observation and interpretation.

2. Curriculum evaluation should be cyclic, not linear. Taba (1962) and Varis (1996) particularly emphasize the interaction between the components of a curriculum.

3. Curriculum evaluation is shaped by the questions of the evaluation specialist, and the philosophy taken as a basis has an immediate effect on the evaluation (Doll, 1992; Talmage, 1985; Ornstein and Hunkins, 2014).

4. Evaluation is an indispensable complement which enables not only the learner, but also the curriculum, the education and the instructor to renew themselves.

5. Curriculum evaluation is an area of specialization (Erturk, 1975). Curriculum development and evaluation is a profession requiring specialization in subjects ranging from educational psychology to social psychology, from educational statistics to the economics of education, and from the philosophy of education to curriculum design.

6. Curriculum evaluation requires team work. It is an interaction process which requires sharing not only with decision-makers, but with all partners (Usun, 2012).

7. An evaluation specialist may evaluate his/her curriculum at the beginning, during the course, and at the end of the curriculum, according to the evaluation vision s/he adopts. This triple classification is known as diagnostic evaluation, formative evaluation and summative evaluation (Demirel, 2006; Erturk, 1975; Karip, 2007; Ozcelik, 1998; Sonmez, 2010; Tekin, 2007).

8. Curriculum evaluation approaches may be studied in two key dimensions with two breakdowns: objective and subjective from the philosophical point of view, and qualitative and quantitative from the methodological point of view. In an objective evaluation, information is gathered from the outside through objective measurement instruments; in a subjective evaluation, information is gathered from the inside through qualitative research techniques such as ethnography, case study, observation, interview and so on (Worthen and Sanders, 1987).

More than fifty models have been recommended for curriculum evaluation. The key reason for this variety is the difference in evaluation philosophies (Worthen and Sanders, 1987). The 22-component classification of evaluation approaches made by Stufflebeam (2001) may be seen as the most comprehensive overview. Considering both operating time and modern curriculum evaluation needs, Stufflebeam (2001) argued that nine of the approaches in question (in the scope of development- and responsibility-centred evaluation: decision/responsibility, customer-oriented and accreditation; in the scope of social mission and advocacy: beneficial, customer-oriented, democratic-thinking and constructivist; in the scope of answers and methods: case study and outcome monitoring) are the most robust and promising, and reported a very positive evaluation of the average service score, utilization ratio, applicability ratio, compliance ratio and accuracy ratio of these nine approaches.

Target-oriented evaluation, management-oriented evaluation, cooperation-oriented evaluation, participant-oriented evaluation, competitor-oriented evaluation, qualitative evaluation, specialist-oriented evaluation and customer-oriented evaluation may be seen as the key curriculum evaluation approaches. The constraints relating to these approaches may be expressed as follows:

(i) In the target-oriented evaluation approach, attention is attached to the targets and their achievability. Neglecting the context and unexpected products (outputs), encouraging linear and rigid approaches, and pushing participants to study not for learning but for success in tests may be considered the weaknesses of the model (Worthen and Sanders, 1987).

(ii) In the management-oriented evaluation approach, the attention of curriculum evaluation moves from the targets to the management. Being restricted by the qualities of the manager, in such matters as the possibility of being unbiased, fair and democratic and of determining the educational needs properly, may be considered the weakness of the model (Worthen and Sanders, 1987).

(iii) Cooperation-oriented evaluation approaches are in principle based on the participation of all partners in evaluation. Here the evaluation is restricted to receiving data from the partners and is focused mainly on curriculum development rather than active participation, while in participant-focused evaluation, active participation of the participants is at stake; however, the objectivity and consolidation of the participants’ evaluations may be limited (Worthen and Sanders, 1987; Karatas, 2007).

(iv) In the competitor-oriented evaluation approaches, the key philosophy is to get the opinions of two different evaluation specialists, one for and one against the curriculum (Unal, 2013). As stated by Usun (2012), these models may be considered disadvantageous as they are costly, require hard effort for the timing and preparation of evidence, and face difficulties in finding unbiased juries and in terms of the potential addressing and presentation skills.

(v) In qualitative evaluation approaches, the subjective judgements of a specialist are handled as the primary evaluation strategy (Worthen and Sanders, 1987). The educational criticism model (Eisner) and the specialist/accreditation model are examples of this approach. The quality of the evaluation is restricted by the specialist’s expert knowledge and analysis competence in both the specialist-oriented approach and the educational criticism model.

(vi) In the customer-oriented approach, the evaluation of the educational product and program by public and private entities is applied. In the model, program management with a market culture and market mentality is at issue. The profile of the learner turns into a customer profile, and the demand of the learner turns into customer demand. Education centres ceaselessly compete in order to protect their market shares (Celik, 2010).

In this research, in contrast to all of these models, the context, input, process, product (CIPP) model is recommended for the program evaluation processes of modern businesses. The reasons for preferring the CIPP model may be briefly expressed as follows. The model has been applied in 134 PhD dissertations at 81 universities, spanning 39 disciplines. Moreover, 55 published studies applying this model have been cited in such disciplines as agriculture, management, communication, distance education, primary education, secondary education and higher education, public administration, health services, international development, law, philanthropy, psychology, religion and sociology. Furthermore, the area of application of this model is very extensive: among those using the model or making agreements for its use are public and private sector officials, program and project personnel, international distribution personnel, agricultural extension agents, school managers, church officers, doctors, nurses, military leaders and evaluators (Stufflebeam, 2014). The CIPP evaluation model is an education evaluation model focused on improvement and accountability. It is a comprehensive structure enabling the evaluation of programs, projects, personnel, products, entities, principles and evaluation systems in a formative and summative manner (Stufflebeam and Coryn, 2014). It is a rational approach enabling cost effectiveness at the commencement, planning, implementation and completion stages of necessary development studies (Stufflebeam, 2014). The model is based on professional standards containing the principles of effectiveness, applicability, authenticity, accuracy and evaluation accountability (Stufflebeam, 2014). In the model, the root term of evaluation is value; this term refers to the scope of the ideals that a society, group or individual holds. The CIPP model expects the evaluator and the customer to define and clarify the evaluative values and the values that may support the customer’s evaluation (Stufflebeam, 2014).

The key concept of the CIPP model comprises the evaluations of context, input, process and product expressed through its acronym letters; the key functions of these categories are summarized as follows (Stufflebeam, 2014):

1. In context evaluations, the evaluator studies the needs, problems, assets and opportunities, together with the related contextual conditions and dynamics. Decision-makers use this stage for establishing targets and priorities and for monitoring how the program targets correspond to the determined needs and problems (Stufflebeam, 2014). The targets, the issues, the harmony of interests, needs and expectations, the education environment, the education periods and the time schedule may be seen as examination spheres through which the contextual dimension of an instructional design may be evaluated.

2. In input evaluations, the evaluators pay attention to the evaluation of all resources allocated for meeting the targeted needs and achieving the targets. Program-based alternative approaches, procedural plans, staffing terms and conditions, budget and cost effectiveness may be considered in this scope (Stufflebeam, 2014). In the evaluation of instructional designs, educational materials, content themes and the participants’ views on facilitation by the instructor may be considered the key examination areas.

3. In process evaluations, the evaluators monitor, document, study and report on the implementation of program plans. They provide feedback throughout the implementation of the program and, upon its completion, report on whether the program was carried out as targeted and required (Stufflebeam, 2014). In the process evaluation dimension of an instructional design, the process management by the instructor, the activities, and the instructional methods and techniques used may be examined.

4. The product evaluation at the end of the program serves to determine and review all program achievements. The key questions of the product evaluation are as follows: Has the program achieved its targets? Has it addressed the targeted needs and problems successfully? What are the side effects of the program? Were there also positive results in parallel to the negative ones? Are the achievements of the program worth the expense? (Stufflebeam, 2014). In the product evaluation aspect of an instructional design, questions evaluating all of the evaluation activities and self-evaluation questions may be used, and the investment decision may be reconsidered in light of these data.

In research conducted on the CIPP model domestically and abroad, the opinions of the program’s partners were collected through a measurement instrument built by the researcher according to the model. The opinions did not only demonstrate participant satisfaction, but also provided information on how steadily the program proceeded in its context, input, process and product aspects. The data obtained may guide the program development process (Akozbek, 2008; Al-Kkathami, 2012; Bachenheimer, 2011; Bayhan, 2011; Chen, 2009; Dincer, 2013; Farsi and Sharif, 2014; Gelen, 2015; Karatas, 2007; Mahshid et al., 2015; Oncu, 2014; Reeves and Michael, 1973; Selvi, 2009; Sercek, 2014; Smith and Benjamaporn, 2012; Tseng et al., 2010; Tugba, 2010; Tunç, 2010; Usmani et al., 2012; Unal, 2013).

In this research, the Call Center Professional Competence Development Program (CCPCDP) applied at the call centre is likewise evaluated through the CIPP model. Each training activity under the CCPCDP was developed by the researcher based on the competences required by the positions (result orientation, reassurance, internal/external customer orientation, team work, communication, continuous learning and development, quality orientation, flexibility, resolution and energy, use of initiative, and analytical thinking). Diction and rhetoric, active communication skills, customer-focused selling skills, customer service and quality, and overcoming stress are the trainings included in the program. For these trainings, the researcher again chose learning environments supplied with modern education methods and techniques and moved experiential learning to the centre, aiming to develop the competences of the participants so that they can meet application-oriented work practices and overcome real work and life problems.


The purpose, importance and problem of the study

The research criticizes the labelling, under the name of ‘training’ and with significant budgets, of motivational activities which have fairly become a fashion trend, a significant part of which provide no intellectual knowledge or experience, fail to address professional competence, and have no effect on achieving a corporate vision. It argues that educational intentions focused on such unscientific targets as per-person/time quotas, unit performance targets (KPIs) and the concept of educational cost cannot in reality replace a ‘training needs analysis’ study; that programs evaluated through participant satisfaction questionnaires upon completion of training cannot develop human resources; and that activities which are distant from the scientific education management concept and which only focus on entertaining the participants cannot go beyond generating short-lived motivation once participants return to work. Accepting these observations, the research addresses the issue of how a science-based program evaluation process may be applied in such a way that all partners benefit maximally. The research analyses the CIPP program evaluation model and exemplifies it with the evaluation practice performed for the Call Center Professional Competence Development Education Program (CCPCDEP).

The research describes an evaluation process that may guide program evaluation activity in modern businesses. Through research questions addressing the four categories of the CIPP model, a framework is drawn for how a training program should be evaluated in all its aspects. In this context, the relevance of the research may be summarized as follows:

The research shows that program development and evaluation is an area of specialization in education; underlines that there is a need for a model and for scientific methods of program evaluation; demonstrates that the training activities of businesses should be built on targets based on professional competences and on formulas that may ensure achievement of corporate visions, in contrast to motivational activities; and offers a practical manual that may be used by modern businesses in the process of evaluating training programs.

The problem statement of the research is as follows:

What are the opinions of the participants in relation to the evaluation of the Call Centre Professional Competence Development Training Program through the CIPP model?

The research seeks answers to the following sub-problems:

1. Do the opinions of the participants of the Call Centre Professional Competence Development Training Program in relation to the context, input, process and product aspects of the program differ by gender?

2. Do the opinions of the participants of the Call Centre Professional Competence Development Training Program in relation to the context, input, process and product aspects of the program differ by training area?


 METHODOLOGY

Research model

The research data are collected through the general survey model within the scope of descriptive research. Karasar (2016) defines survey models as research approaches suitable for describing a situation that existed previously or still exists. The important point in this approach is to study the existing situation without changing it. Accordingly, this research prefers the general survey model to determine the participants’ existing views in relation to the program.


Data collection instruments

As the data collection instrument, the professional competence development program CIPP evaluation scale developed by the researchers for the research problem is used. The scale comprises 59 items, with response options designed as a five-point Likert scale. The items are scored from ‘Completely agree’ (5) to ‘Completely disagree’ (1). The reliability coefficient of the scale is calculated as 0.98. The obtained value also satisfies the 0.60 lower-limit criterion envisaged in the literature (Cronbach, 1990; Punch, 2005).
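
For concreteness, the reliability check can be sketched as below. The snippet assumes the reported coefficient is Cronbach’s alpha (suggested by the Cronbach, 1990 citation) and uses random placeholder data, not the study’s responses.

```python
# A minimal sketch of computing a scale reliability coefficient (assumed here
# to be Cronbach's alpha). The demo data are random placeholders, not the
# study's responses; the study reports 0.98 on its real 622 x 59 data.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 622 respondents x 59 items, each scored from
# 'Completely disagree' (1) to 'Completely agree' (5).
rng = np.random.default_rng(42)
responses = pd.DataFrame(rng.integers(1, 6, size=(622, 59)))
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```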


Participants

The participants are 622 call centre communication agents who were serving in 2014 and 2015 in the Black Sea, Central Anatolia and Eastern Anatolia operations of the call centre, throughout Turkey, where the research was conducted. Of the call centre agents participating in the research, 41.3% were in Çorum, 39.7% in Ağrı and 19% in Samsun; 63.2% were women, 41.8% were high-school graduates, 70.3% were between 20 and 25 years old, 44.9% were from the equal-weight education track, 98.9% had 5 years or less of call centre experience, and 78.3% had total work experience of 5 years or less.

For the development of the data collection instrument, Stufflebeam’s (2014) principles in relation to the CIPP model, the 77 scales included in the manual “Endustri ve Orgut Psikolojisi Alaninda Kullanilan Olcekler El Kitabi” by Çelik and Telman (2013), and the 302 scales included in “Psikoloji ve Egitimde Kullanilan Guncel Olcekler” by Akin (2012) were studied; the scales of graduate and post-graduate theses developed using the CIPP model were analysed; the opinions of instructors and professionals of the field were obtained; and the questionnaire items were developed according to the model in the scope of this information. The questionnaire, revised on the basis of expert opinions, was administered as a pilot; its comprehensibility was tested, and as no problem arose, the field application stage began. The questionnaires were distributed to a total of 865 participants on April 15, 2015, with the support and instructions of the relevant operational managers: 155 in Samsun Province in the Black Sea Region, 330 in Çorum Province in the Central Anatolia Region, and 380 in Ağrı Province in the Eastern Anatolia Region.

The research was completed at all locations as of May 15, 2015. In order to obtain sound data, 243 questionnaires filled in incorrectly or incompletely were excluded from the evaluation, and the research was performed on the 622 questionnaires completed accurately and completely. The construct validity of the scale was tested through factor analysis. To reveal the factor pattern, principal components analysis was chosen as the factoring method and varimax, an orthogonal rotation method, as the rotation method. To test the suitability of the data set for factor analysis, the Kaiser-Meyer-Olkin (KMO) test of sampling adequacy and Bartlett’s test of sphericity were applied. The KMO value was determined as 0.97, above the acceptable limit of 0.70, and as Bartlett’s test of sphericity was significant at the 0.05 level, the data set was considered suitable for factor analysis. The Professional Competence Development Training Questionnaire, found through exploratory factor analysis to comprise six dimensions, was then evaluated through confirmatory factor analysis.
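
A sketch of this construct-validity workflow is given below, with the third-party Python package factor_analyzer standing in for SPSS; the input file name and column layout are assumptions for illustration.

```python
# Sketch of the exploratory step: KMO sampling adequacy, Bartlett's test of
# sphericity, then principal-components extraction with varimax rotation.
# factor_analyzer stands in for SPSS; the CSV layout is hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

responses = pd.read_csv("cipp_scale_items.csv")  # 622 rows x 59 item columns

# Suitability of the data set for factor analysis.
chi2, p_value = calculate_bartlett_sphericity(responses)
_, kmo_overall = calculate_kmo(responses)
print(f"Bartlett chi2 = {chi2:.1f}, p = {p_value:.4f}")  # expect p < 0.05
print(f"KMO = {kmo_overall:.2f}")                        # study reports 0.97

# Six-factor solution, as found for this scale.
efa = FactorAnalyzer(n_factors=6, rotation="varimax", method="principal")
efa.fit(responses)
print(pd.DataFrame(efa.loadings_, index=responses.columns).round(2))
```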

The path diagram is provided in Figure 1. According to the analysis results, the paths and regression weights in the model are significant. The GFI, CFI and NFI values obtained in the analysis of the structural equation model of the research questionnaire indicate good fit, and the χ²/df and RMSEA values are at an acceptable fit level. The standardized values for the path diagram of the model are provided in Figure 1.
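
The confirmatory step might be sketched as follows with the Python package semopy standing in for AMOS. The two-factor model description and item names are purely illustrative assumptions; the study’s full six-dimension, 59-item measurement model is not reproduced here.

```python
# Hedged sketch of the confirmatory factor analysis: fit a measurement model
# as a structural equation model and read the reported fit indices
# (chi2/df, GFI, CFI, NFI, RMSEA). semopy stands in for AMOS; the model
# below is a two-factor illustration, not the study's full model.
import pandas as pd
from semopy import Model, calc_stats

responses = pd.read_csv("cipp_scale_items.csv")  # hypothetical item data

model_desc = """
context1 =~ item1 + item2 + item3
input1   =~ item4 + item5 + item6
"""
model = Model(model_desc)
model.fit(responses)

stats = calc_stats(model)  # includes chi2, DoF, GFI, CFI, NFI, TLI, RMSEA
print(stats.T)
```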


In the subsequent part, the correlation analysis provided in Table 1 gives an idea of the direction and strength of the correlations between the research variables. Accordingly, there is a strong positive correlation, at the 0.01 significance level, between the dimensions of the research and the Professional Competence Development Training Program’s CIPP evaluation levels (general status).
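
As a minimal sketch, the correlations of Table 1 could be reproduced as below; the column names are hypothetical stand-ins for the study’s dimension scores.

```python
# Minimal sketch of the Table 1 correlation analysis: Pearson correlations
# among the six dimension scores and the overall CIPP score. Column names
# are hypothetical stand-ins for the study's variables.
import pandas as pd

scores = pd.read_csv("cipp_dimension_scores.csv")
cols = ["context1", "input", "process", "product1", "product2",
        "context2", "cipp_general"]
print(scores[cols].corr(method="pearson").round(2))
```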


Data analysis

The research findings were obtained from analyses conducted with SPSS v23.0 and AMOS v21.0 software. The statistical analyses of the research were made by applying one-way ANOVA, Tukey’s test, the Tamhane T2 test, the independent groups t-test, and correlation analysis techniques. In addition to the difference analyses, structural equation modelling is applied. The construct validity of the scale is evaluated first by exploratory factor analysis and then by confirmatory factor analysis as part of the structural equation modelling.
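
The difference analyses named above might look as follows in Python, with scipy and statsmodels standing in for SPSS. The data file and column names are assumptions; Tamhane’s T2 (used when variances are unequal) has no direct equivalent in these libraries and is omitted.

```python
# Sketch of the difference analyses: independent-groups t-test by gender,
# Levene's homogeneity test, one-way ANOVA across training areas, and
# Tukey's HSD post-hoc comparisons. Column names ("gender", "area",
# "cipp_general") are hypothetical.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("participants_scored.csv")

# First sub-problem: do CIPP scores differ by gender?
male = df.loc[df["gender"] == "male", "cipp_general"]
female = df.loc[df["gender"] == "female", "cipp_general"]
print(stats.ttest_ind(male, female))

# Second sub-problem: do CIPP scores differ by training area?
groups = [g["cipp_general"].to_numpy() for _, g in df.groupby("area")]
print(stats.levene(*groups))    # homogeneity of variances (Table 3)
print(stats.f_oneway(*groups))  # one-way ANOVA (Table 4)

# Post-hoc pairwise comparisons when the ANOVA is significant (Table 5).
# Tamhane's T2 (for unequal variances) is not covered by these packages.
print(pairwise_tukeyhsd(df["cipp_general"], df["area"]))
```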



 RESULTS

Findings in relation to the first sub problem of the research

The relationship between the participants’ scores on the Professional Competence Development Training Program CIPP evaluation scale, including its dimensions, and gender is analysed by the independent groups t-test; the results are presented in Table 2.

According to Table 2, there is no significant difference between the process evaluation scores and the gender of the participants (p>0.05). However, there is a statistically significant difference by gender in product evaluation – 1, context evaluation – 1, input evaluation, product evaluation – 2, context evaluation – 2 and the Professional Competence Development Training Program CIPP evaluation scale (general status) (p<0.05). Reviewing the findings, the males’ levels on product evaluation – 1, context evaluation – 1, input evaluation, product evaluation – 2, context evaluation – 2 and the CIPP evaluation scale (general status) are higher than those of the females.


Findings in relation to the second sub problem of the research

There is no statistically significant difference between the participants’ context evaluation – 1, input evaluation and context evaluation – 2 scores and the training area (p>0.05), while there is a statistically significant difference between the process evaluation, product evaluation – 1, product evaluation – 2 and the Professional Competence Development Training Program’s CIPP evaluation scale (general status) scores and the training area (p<0.05).

The relationship between the participants’ Professional Competence Development Training Program CIPP evaluation scale scores, including its dimensions, and the training area is analysed by one-way ANOVA. For the homogeneity analysis of the group variances, the Levene test is applied; its results are provided in Table 3. Reviewing Table 3, the group variances of the variables are equal (p>0.05). In order to determine which groups are statistically different from each other, Tukey’s test is applied among the paired comparison tests. The findings of the one-way ANOVA are provided in Table 4.


In Table 4, there is no statistically significant difference between the participants’ context evaluation – 1, input evaluation and context evaluation – 2 scores and the training area (p>0.05). However, there is a statistically significant difference between the process evaluation, product evaluation – 1, product evaluation – 2 and the Professional Competence Development Program’s CIPP evaluation scale (general status) scores and the training area (p<0.05).

As a result of the one-way ANOVA, it is found that the distribution differs from the others for at least one group. In order to determine which groups are statistically different from each other, Tukey’s test and the Tamhane T2 test were performed among the paired comparison tests; the findings are provided in Table 5. According to the analysis findings, the levels of process evaluation, product evaluation – 1, product evaluation – 2 and the Professional Competence Development Training Program CIPP evaluation scale (general status) of the participants from the verbal track are higher than those of the participants from the equal-weight track (p<0.05).



 DISCUSSION

It has been determined that the scores on the CIPP model’s context, input and product dimensions vary by gender, with females scoring lower than males. This may be interpreted as reflecting that elaborative and normative thinking styles are more dominant in women; in other words, gender stereotypes strongly influence thinking styles. There are studies in the literature supporting this finding (Kus and Altun, 2012; Kavgaoglu and Altun, 2016; Deaux, 1985; Dinc and Bal, 2008; Sternberg, 2009; Hogg and Vaughan, 2007; Kaufman, 2002; Saracaloglu et al., 2008; Tucker, 1999). Similarly, in the research conducted by the Social Structures Research and Development Association (Tokageder, 2014), among the common features of successful women managers, ambitious and elaborative (26.7%) visions in private life and elaborative (13.8%) visions in business life, distinctively from males, are emphasized.

Another striking finding is that there is no significant difference between the views of females and males on the process dimension of the CIPP model. The Professional Competence Development Program is designed in such a manner that instructors frequently and appropriately apply the principles of adult education and activities focused on participant interaction and active participation. Therefore, the participants, irrespective of gender and education level, may engage in the training, change the direction of the process, solve problems together, make judgements and, most importantly, enjoy themselves while doing all of this. Another finding supporting this is the inverse correlation detected between the input and product dimensions of the CIPP model in the confirmatory factor analysis of the research. Specific to this research, this finding may be interpreted as follows: even when location facilities, training materials, and physical and digital application environments are lacking, participants who think that they have made significant gains may give very positive feedback on the product dimension.

Another finding of the research is that the scores on the CIPP dimensions vary by training area. The participants from the verbal track scored the soft-skill learning environments, which suit their thinking and learning styles, higher than the equal-weight participants. Observations made during the application processes also support that students from the digital and equal-weight tracks are more willing and successful in technical training. The departments of the instructors participating in the research are mainly digital and equal weight, and it was observed that they are similarly more willing and successful in the processes of technical training, their specific area, and have difficulties in managing soft skills.

In the literature, there are also studies that verify the correlation between department and thinking style. For example, in the research conducted by Tucker (1999), the thinking styles of students in the accounting department were studied by age, department, education period and gender. It was found that the dominant thinking styles of these students are rule-based, elaborative, hierarchic, traditional and extroversive. In the research conducted by Kaufman (2002), the thinking styles of students studying the authorship profession in the journalism and creative writing departments were studied, and a difference in dominant thinking styles by the education received was found.

Accordingly, it was determined that students of the journalism department dominantly use the rule-based style, while students receiving creative writing education dominantly use the self-reliant thinking style. In the research conducted by Saracaoglu et al. (2008), it was found that the thinking styles of education faculty students vary both by the tracks of the high schools they graduated from and by their departments in the university. Students who graduated from the science and mathematics tracks in high school dominantly use the self-reliant, elaborative and traditionalist thinking styles; alumni of the Turkish-mathematics track dominantly use the integrated and innovative thinking styles; and graduates of the verbal track dominantly use the rule-based, hierarchic and singular thinking styles. When the thinking styles of the students are examined by their university departments, the integrated thinking styles of primary school teaching students are more dominant than those of prospective science and social sciences teachers.

In the research conducted by Durdukoca (2011), the thinking styles of prospective teachers were analysed by department, and a significant difference was observed in all thinking styles except the hierarchic, introvert and traditionalist styles between primary education mathematics teaching and social sciences teaching, in favour of primary education mathematics teaching. In the present research, in which the CIPP model is applied, the verbal-track participants scored the learning environments appropriate to their thinking and learning styles, as exemplified in the literature, higher than the equal-weight participants. Both the discussions conducted with the instructors and the direct observations of the education processes support these findings. While the students from the digital and equal-weight tracks were more willing and successful in technical training, the interest, attention and success of the verbally educated students in soft skills were greater. In particular, in the process activities in which self-expression, team work and social interaction were essential, the learning motivation of the verbally educated students was observed to be higher than that of the equal-weight and digital students.


 CONCLUSION

The research findings may be summarized as follows: in the scores based on the dimensions of the CIPP evaluation scale, significant differences were found by gender, education level and training area. In the different dimensions of the CIPP questionnaire, males scored higher than females; high-school graduates scored higher than high-school and upper-secondary students; and verbal-track students scored higher than equal-weight students.


 SUGGESTIONS

1. It should be taken into account that gender is an important predictor of expectation and perception in instructional design processes. Both the instructional design and the operation and evaluation processes should be considered from the elaborative perspective in addition to the integrated perspective. Using open-ended questions that enable the elaborative profile to express itself, instead of only multiple-choice, matching or gap-filling questions, in the evaluation processes; planning class and program periods by considering individual differences; arranging learning environments not only functionally but also so that participants feel comfortable, interact socially and can learn while enjoying themselves; making the teaching style of instructors comprehensive; and letting empathy and tolerance dominate class management may be considered among the recommendations of this research for practitioners and other researchers, specific to the sub-problem examining the gender variable.

2. If, as the research findings suggest, the process aspect of the training is arranged according to adult learning principles, it may become the aspect that participants enjoy most and score at the highest level. Therefore, practitioners may be recommended to arrange the instructional design and education materials in a way that supports active participation.

3. Researchers and practitioners may consider the differences in department/education area and thinking style in the design of soft-skill training programs, and work with homogeneous groups. When working with heterogeneous groups, for participants from the digital and equal-weight areas, instructional designs may weight illustrations from business processes, low detail intensity, big pictures, a pragmatic philosophy, demonstration and practice, and problem solving; for participants from the verbal area, open-ended questions and processes enabling examination, discussion, social interaction and self-expression may be used.


 CONFLICT OF INTERESTS

The authors have not declared any conflict of interests.



 REFERENCES

Akozbek A (2008). Evaluation of High School 1st Class Mathematics Instruction Program Using CIPP Evaluation Model and According to the Views of Teachers and Students (Common High Schools, Trade Vocational High Schools, Industrial Vocational High Schools), Yildiz Technical University, Institute of Social Sciences, Department of Educational Programs and Instruction, Unpublished master's dissertation.

 

Bachenheimer BA (2011). A management-based CIPP evaluation of a Northern New Jersey school district's Digital Backpack program. Ph.D. thesis, University of Florida.

 

Celik V (2010). School Culture and Management. Ankara: PEGEMA Publications.

 

Bayhan M (2011). Evaluation of In-Service Training Program Applied to Contractual Execution and Protection Officers Using CIPP Evaluation Model. Atatürk University, Institute of Educational Sciences, Educational Programs and Instruction. Unpublished master's dissertation.

 

Smith B, Benjamaporn P (2012). Evaluation of Public Health Communication Performance by Stufflebeam's CIPP Model: A Case Study of Thailand's Department of Disease Control. ASBBS Annual Conference: Proceedings of ASBBS 19(1), Las Vegas, February 2012.

 

Chen C (2009). A case study in the evaluation of English training courses using a version of the CIPP model as an evaluative tool, Durham theses, Durham University. 


 

Cronbach LJ (1990). Essentials of Psychological Testing. Fifth Ed., New York: Harper Collins.

 

Deaux K (1985). Sex and Gender. Annual Rev. Psychol. 36:49-81.

 

Demirel O (2006). Curriculum Development in Education. 9th ed. Ankara: Pegema Publications.

 

Dinc P, Bal P (2008). Success of High School Students in Geometry and Comparison of Their Thinking Styles. J. Institute Soc. Sci. Çukurova University 17(1).

 

Dincer B (2013). Evaluation of English Curriculum for the 7th Classes According to Stufflebeam's Context, Input, Process and Product (CIPP) Model. Adnan Menderes University, Institute of Social Sciences. Unpublished doctoral dissertation.

 

Durdukoca SF (2011). Evaluation of Thinking Style of Prospective Teachers According to a Range of Variables. 2nd International Conference on New Trends in Education and Their Implications, 27-29 April 2011, Antalya.

 

Erden M (1993). Curriculum Evaluation in Education 3rd ed. Ankara: Anı Publications.

 

Erturk S (1975). Curriculum Development in Education. 2nd ed. Ankara: Cihan Press.

 

Farsi M, Sharif M (2014). Stufflebeam's CIPP Model and Program Theory: A Systematic Review. Int. J. Language Learn. Appl. Linguistics World 6(3):400-406. EISSN: 2289-2737, ISSN: 2289-3245.

 

Gelen KN (2015). Competence of Sports Management Training Program According to CIPP Evaluation Model and Commission Standards of Sports Management Accreditation. Abant Izzet Baysal University, Institute of Social Sciences, Department of Sports Management. Unpublished doctoral dissertation.

 

Hogg M, Vaughan G (2006). Social Psychology. (Translated by: İ. Yildiz and A. Gelmez). Ankara: Utopya Publications.

 

Karatas H (2007). Curriculum Evaluation of Lesson: English II of the Department of Modern Languages of Yıldız Technical University According to the Views of Teachers and Students Using the Context, Input, Process and Product (CIPP) Model. YTU, Faculty of Education. Unpublished master's dissertation.

 

Kaufman JC (2002). "Thinking Styles in Creative Writers and Journalists". Unpublished doctoral dissertation. Yale University, Connecticut.

 

Kavgaoglu D, Altun S (2016). "Examination of Thinking Styles of Teachers According to Their Branch and Gender", J. Int. Educ. Sci. 3(6):136-149.

 

Kus D, Altun S (2012). "Examination of Thinking Styles of Teachers According to Their Branch and Gender", 1. EJER: Eurasian Educational Research Congress, 24-26 April, Istanbul.

 

Ornstein AC, Hunkins FP (2012). Curriculum: Foundations, Principles and Issues. Pearson.

 

Ornstein AC, Hunkins FP (2014). Curriculum: Foundations, Principles and Issues. Translated by Asım Arı. Konya: Eğitim Publications.

 

Oncu S (2014). Instance of CIPP Model in the Evaluation of Clinical Skills Training. Ege University, Institute of Health Sciences, Department of Medicine Training. Unpublished doctoral dissertation.

 

Ozcelik DA (1998). Educational Programs and Instruction: General Teaching Method. Ankara: ÖSYM Publications.

 

Punch K (2005). Introduction to Social Research: Quantitative and Qualitative Approaches. Second Ed., Sage Publications Inc., California.

 

Reeves JM, Michael WB (1973). The Application of the Stufflebeam Educational Decision-Making Model to the Evaluation of a Dental Team Training Program Involving Use of Paraprofessionals. ERIC database; paper presented at the 1973 Northeastern Educational Research Association Convocation, November 2, 1973, Ellenville, New York.

 

Saracaloglu AS, Yenice N, Karasakaloglu N (2008). Comparison of Thinking Styles of Students of the Faculty of Education with Respect to a Range of Variables. Paper presented in International Social Sciences Education Symposium, Canakkale.

 

Selvi H (2009). Evaluation of Training Programs Used in Driving Courses of the Ministry of National Education Using Stufflebeam's Program Evaluation Model. Abant Izzet Baysal University, Department of Educational Sciences, Department of Measurement and Evaluation in Education, Unpublished master's dissertation.

 

Sercek GO (2014). Evaluation of Associate Degree Tourism Training Program According to CIPP Program. Dicle University, Institute of Educational Sciences, Department of Educational Sciences, Department of Educational Programs and Instruction. Unpublished doctoral dissertation.

 

Mahshid SA, Soheila E, Nikoo Y, Shahnaz K, Babak H (2015). The Evaluation of Reproductive Health PhD Program in Iran: A CIPP Model Approach. 7th World Conference on Educational Sciences (WCES-2015), 05-07 February 2015, Novotel Athens Convention Center, Athens, Greece. Procedia Soc. Behav. Sci. 197:88-97.


 

Sonmez V (2010). Instructor's Handbook in Curriculum Development. Ankara: Anı Publications.

 

Sternberg RJ (2009). Thinking Styles. (Translated by E. Gungor). Istanbul: Redhouse Training Books.

 

Stufflebeam DL (2001). Evaluation Models. New Directions for Evaluation: A Publication of the American Evaluation Association 89, Spring 2001.

 

Stufflebeam DL, Coryn CLS (2014). Evaluation Theory, Models and Applications. Jossey-Bass: USA.

 

Taba H (1962). Curriculum Development; Theory and Practice. Harcourt, Brace And World.

 

Tekin H (2007). Measurement and Evaluation in Education. Ankara: Yargı Publishing House.

 

Tokageder (2014). Young Women and Employment Investigation Report. United Nations Development Program, Sponsorship of Sabancı Foundation.


 

Tseng KH, Diez CR, Lou SJ, Tsai HL, Tsai TS (2010). Using the Context, Input, Process and Product Model to Assess an Engineering Curriculum. World Transactions on Engineering and Technology Education 8(3).

 

Tucker RW (1999). "An Examination of Accounting Students Thinking Styles". Unpublished doctoral dissertation. University of Idaho, Moscow.

 

Tugba C (2010). Evaluation of Anadolu University E-Certification Program of Technology Application in Primary Education Using the Context, Input, Process and Product (CIPP) Model and According to the Views of Learners. Anadolu University, Institute of Social Sciences, Department of Distance Education. Unpublished master's dissertation.

 

Tunç F (2010). Evaluation of an English Language Teaching Program at a Public University Using CIPP Model. Middle East Technical University, Department of Educational Sciences. Unpublished master's dissertation.

 

Tyler RW (1949). Basic Principles of Curriculum and Instruction. London: The University of Chicago Press, Ltd.

 

Usmani MAW, Suraiya KM, Zamil AM, Shammot MM (2012). Meta Evaluation of a Teachers' Evaluation Programme Using CIPP Model. Arch. Des Sci. 65(7).

 

Usun S (2012). Curriculum Evaluation in Education. Ankara: Anı Publications.

 

Unal M (2013). Evaluation of Erasmus Program of the European Union for the Student Mobility for Studies According to the Context, Input, Process and Product (CIPP) Model. Gazi University, Institute of Educational Sciences, Department of Educational Sciences. Unpublished doctoral dissertation.

 

Varis F (1996). Curriculum Development in Education: Theory and Technique. Ankara: Alkım Publishing House.

 

Worthen BR, Sanders JR (1987). Educational Evaluation: Alternative Approaches and Practical Guidelines. New York: Longman Inc.

 



