Research on variables related to academic performance is becoming increasingly important in universities, with special interest in those factors that may be subject to intervention in order to improve students' academic performance (Davis & Thomas, 1999). The purpose of this study was, among others, to identify the main factors associated with the accreditation of students of the Bachelor's Degree in Educational Sciences at a Mexican public university in a standardized test, the General Graduation Exam. The sample was made up of 277 test takers; descriptive analyses and chi-square (X²) tests were carried out, and a logistic regression model was generated. The main results show that the test takers' age, the mother's schooling and the grade point average in the degree are the factors most strongly related to passing the exam. The study presents empirical evidence that can be useful for obtaining more favorable student results in this type of standardized evaluation and for decision making aimed at improving educational processes in the university.
Research on the factors associated with students' academic achievement has a long history and is increasingly relevant in universities, mainly in terms of variables that can be analyzed and targeted by interventions for institutional improvement (Davis & Thomas, 1999). The indicators most commonly used to measure academic performance have been grades and objective or performance tests created ad hoc. However, in recent decades several approaches have been reconsidered and large-scale educational evaluation practices have been analyzed. According to Jiménez (2017), large-scale evaluation has been implemented and maintained in response to educational and social demands for the standardization of content and its verification through the learning achieved by students, relying on instruments designed to determine educational quality and accountability, both of educational centers at the different levels and of educational systems.
According to Tourón (2009), the construct of academic performance is considered the main indicator of the learning that students achieve in their passage through formal education. Its conceptual and operational delimitation has been a complex task, since it is the result of the relationship among multiple psychosocial, affective, cognitive and socio-family factors, among others. Given the complexity of this construct, educational research has proposed multiple and diverse models, both theoretical and empirical, referring to different sets of variables linked to the student, their family, their school and the education system. Identifying which variables explain academic performance is a complex task, since they make up a strongly interwoven network; it therefore becomes necessary to know the relationships among them and to delimit their effects and their explanation.
The influence of socioeconomic level and family factors on academic performance is particularly relevant. The Coleman Report (1966), a pioneering study, showed that students' learning was almost entirely due to the socioeconomic conditions in which students develop, and that the school had very little influence on the academic achievement of its students. In this sense, it is considered that the level of commitment to homework, the type of work and level of schooling of the parents, as well as the climate, structure, cultural environment, family income and parenting styles, are variables that explain, to a large extent, a student's school success or failure (Casanova, Cruz, de la Torre & de la Villa, 2005; Jones & White, 2000; Ruiz, 2001).
Although academic performance has traditionally been reflected by means of a quantitative and/or qualitative grade, which could be taken as the result of a certain amount of learning, in the institutional sphere these grades constitute the social and legal criterion of students' academic performance. The most direct way to establish them is through exams or tests prepared by the teacher, which may suffer from construction defects or subjective criteria, making it impossible to draw comparisons within the same educational center and, above all, with other educational centers (Page, 1990). Hence the importance of large-scale evaluations through the implementation of reliable and valid standardized tests, not only to determine students' academic performance and the educational quality offered by schools, but also as a reference used in educational research to compare the results of these evaluations with the different variables that explain academic achievement.
The standardized evaluation of Higher Education in Mexico
At the end of the 1980s, evaluation became especially important in government public policies, as a transition took place from the welfare state to the evaluative state (Neave, 1990). Since then, the importance given to evaluative practices has been decisive in the different instances of government and agencies of the public sector, both as a means for government agencies to be accountable and transparent and as a basis for conditioning and allocating budget resources.
In the educational field, federal agencies and programs have been created to evaluate the quality of Higher Education Institutions (IES) in the country, such as the National Commission for the Evaluation of Higher Education (CONAEVA), the National Center for the Evaluation of Higher Education (CENEVAL), the Interinstitutional Committees for the Evaluation of Higher Education (CIEES) and the Council for the Accreditation of Higher Education (COPAES), among others. According to Tuirán (2011), the work carried out by these organizations has driven the turn toward a culture of quality in Mexico and constitutes one of the most important pillars of the modernization of higher education in the country.
In this context of a growing boom in educational evaluation, given the dissatisfaction of large sectors of society with the quality of education and the demand that higher education guarantee the quality of professional training, the National Association of Universities and Institutions of Higher Education (ANUIES) proposed the creation of an institution, external to the universities, responsible for evaluating the quality of professional training through the performance of its students. Thus, in 1994 the National Center for the Evaluation of Higher Education (CENEVAL) was created, a non-profit civil association whose main activity is the design and application of instruments for the evaluation of knowledge, skills and competences, as well as the analysis and dissemination of test results (CENEVAL, 2018). However, the most critical opinions suggest that the creation of CENEVAL was an operational response to the establishment of an educational policy that privileges accountability and competitiveness in an increasingly globalized market (Silva, 2007).
Currently, most state public universities contract CENEVAL services to evaluate both the admission process of their applicants (National Entrance Examination, EXANI) and the quality of their graduates (General Graduation Exam, EGEL). With regard to the latter, CENEVAL has constructed 41 exams for different degree programs offered by the public and private universities of the country. In 2015 alone, 80 application dates were scheduled in 706 user institutions, where 181,437 exams were administered (CENEVAL, 2018).
These data show the growing importance of this type of standardized test for IES, whose students' results carry increasing weight in policies for evaluating the quality of their Educational Programs. In addition, to determine the quality of their education, IES use not only the performance of their graduates as an indicator but also self-evaluations carried out through the Interinstitutional Committees for the Evaluation of Higher Education (CIEES), as well as accreditation by organizations recognized by the Council for the Accreditation of Higher Education (COPAES). These organizations consider EGEL results as part of the indicators for evaluating the quality of educational programs, and these results carry ever greater weight in the accreditation of the quality of said programs (De Vries, 2007).
According to Gago Huguet (2000), the importance of the EGEL lies in its usefulness as an external evaluation tool that allows students and training institutions to verify their effectiveness and achievements in light of national standards. It also allows institutions to use the instrument as an option for obtaining the degree, and to draw on valid and reliable judgments that support curriculum planning and evaluation processes, in order to undertake actions that improve the academic training of their graduates by adapting study plans and programs. However, there is no empirical evidence showing what use universities make of EGEL results or how these institutions implement concrete actions for educational improvement.
The type of evaluation implemented through the EGEL is mainly summative; that is, it is always carried out at the end of a process with the purpose of determining the value of a final product. It is frequently linked to accountability, and its ultimate purpose is to provide information that serves as a basis for decision making (Jornet, 2017). The results of these standardized evaluations have important consequences for IES, linked not only to their financing but also to their prestige. Universities invest a significant amount of human and financial resources in each of the evaluation processes to which they are submitted, mainly to achieve the accreditation of the quality of their educational programs, as well as to have their educational programs enter and remain in the Registry of Programs of High Academic Performance (Padrón EGEL) instituted by CENEVAL.
In the Educational Program of the Bachelor's Degree in Educational Sciences of the Autonomous University of Baja California (UABC), students take the EGEL in the knowledge area of Pedagogy / Educational Sciences. The instrument is structured in four training areas: i) Didactics and Curriculum; ii) Educational Policy, Management and Evaluation; iii) Teaching, Training and Educational Guidance; and iv) Educational Research. In each of these areas, the EGEL assesses the level of knowledge and academic skills of recent graduates of the Bachelor's Degree in Pedagogy-Educational Sciences.
Undoubtedly, the EGEL is currently the standardized instrument most widely used by IES to evaluate the academic performance of their graduates, but there is no empirical research that delves into the predictive factors or context variables that positively affect performance on this exam. It should be noted that not all variables related to academic performance can be the subject of intervention to support improvement decisions in educational institutions, such as reducing failure or dropout rates. Hence the importance of identifying and studying variables that educational institutions can act upon: educational centers can hardly influence the socioeconomic and psychosocial factors of their students, but pedagogical and institutional aspects present an area of opportunity for the study of variables that could contribute to educational improvement (Montero Rojas, Villalobos Palma & Valverde Bermúdez, 2007).
Purpose of the study
• Identify the main factors associated with passing the General Graduation Exam (EGEL) among students of the Bachelor's Degree in Educational Sciences of the UABC (Mexico).
• Describe the academic performance of the students in each of the training areas included in the EGEL.
• Compare the EGEL results of graduates who attended the Bachelor's Degree in the schooled modality (UABC, Mexicali campus) with those who graduated under the semi-schooled modality (UABC, Ensenada campus).
2.1 Spatio-temporal context and participants
The analysis considered students graduated from the Bachelor's Degree in Educational Sciences of the UABC at two academic units: the Faculty of Administrative and Social Sciences, Ensenada campus, where the degree is offered in the semi-schooled modality; and the Faculty of Human Sciences, Mexicali campus, where the program operates under the schooled modality. The main difference between the two modalities is that in the semi-schooled one, students only attend classes on Friday afternoons and Saturdays, with two hours per week per subject; during the week, students carry out learning activities and academic tasks autonomously. Those in the schooled modality attend classes from Monday to Friday, with four hours of attendance per subject. It should be noted that both Educational Programs operate with the same syllabus.
An intentional (purposive) sample was used, comprising a total of 277 students: 225 women and 52 men.
The inclusion criterion was that students had taken the EGEL corresponding to the area of Pedagogy-Educational Sciences during the 2016 school periods.
2.2 Sources of information and description of variables
The main source of information was the results obtained by the students in the General Graduation Exam. A survey was also used to collect the other variables contemplated in the study. The dependent variable was the students' EGEL results; the independent variables were: grade point average in the degree, sex, age, schooling of the father and of the mother, and whether the student held a scholarship.
2.3 Procedure and data analysis
The design of this research corresponds to a retrospective, cross-sectional, comparative and relational study (Méndez, Namihira, Moreno & Sosa, 2001). In general terms, the procedure was carried out in two stages: the first consisted of organizing the data in the statistical package SPSS, version 24.0; the second, of the statistical analyses themselves: descriptive, chi-square (X²) and logistic regression.
Given that the main variables of the study were measured on nominal scales, in order to include them in the regression model they were dichotomized as dummy variables. In this way they become binary variables (0, 1) and, on that basis, can operate as quantitative variables (Jiménez, 2004).
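As a minimal sketch of this recoding step (the category labels mirror Table 1, but the example records are invented for illustration, not taken from the study's data), the dichotomization can be expressed as:

```python
# Dichotomize nominal variables into 0/1 dummy variables so they can
# enter a binary logistic regression. Example records are invented.

def dichotomize(value, positive_categories):
    """Return 1 if the value belongs to the 'positive' group, else 0."""
    return 1 if value in positive_categories else 0

# EGEL testimony: "Satisfactory" and "Outstanding" both count as passed.
results = ["Without testimony", "Satisfactory", "Outstanding"]
passed = [dichotomize(r, {"Satisfactory", "Outstanding"}) for r in results]

# Age: Youth (0, under 25) vs. Adult (1), as in Table 1.
ages = [22, 24, 31]
adult = [1 if a >= 25 else 0 for a in ages]

print(passed, adult)  # [0, 1, 1] [0, 0, 1]
```

Each recoded variable then enters the model as an ordinary numeric predictor.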
Following the guidelines of Aguayo Canela (2007), prior to running a binary logistic regression it is recommended to start with an analysis of the relationship between the variable to be explained and the possible independent variables; those independent variables that show statistically significant relationships with the dependent variable are then included in the multivariate model. Given the type of scale of the variables, a chi-square analysis was performed to assess the association between them. Based on this analysis, the following variables were identified as significantly related to the EGEL variable:
i) Didactics and Curriculum area; ii) Educational Policy, Management and Evaluation area; iii) Teaching, Training and Educational Guidance area; iv) Educational Research area; v) age; vi) mother's schooling; and vii) grade point average in the degree. These variables were recoded as follows (see Table 1):
|Variables|Original scale|Recoding|
|---|---|---|
|EGEL testimony|Without testimony (0), Satisfactory (1), Outstanding (2)|Failed (0), Passed (1)|
|Didactics and Curriculum|Without testimony (0), Satisfactory (1), Outstanding (2)|Failed (0), Passed (1)|
|Educational Policy, Management and Evaluation|Without testimony (0), Satisfactory (1), Outstanding (2)|Failed (0), Passed (1)|
|Teaching, Training and Educational Guidance|Without testimony (0), Satisfactory (1), Outstanding (2)|Failed (0), Passed (1)|
|Educational Research|Without testimony (0), Satisfactory (1), Outstanding (2)|Failed (0), Passed (1)|
|Age|Scale|Youth (0, under 25), Adults (1, over 25)|
|Mother's schooling|Ordinal with 8 categories, from no studies to postgraduate studies|Basic studies (0), Higher studies (1)|
* This variable kept its original scale, since when dichotomized into "regular" and "outstanding" it did not show a significant relationship with the EGEL testimony in the chi-square test.
Table 2 shows the EGEL results of the undergraduate students in Educational Sciences, by training area, for the general sample. As can be seen, only 56% of the test takers passed the exam. In the results by training area, the areas of Didactics and Curriculum and of Teaching, Training and Educational Guidance are those in which students obtain the highest accreditation percentages, with nearly identical results between the two. The areas of Educational Policy, Management and Evaluation and of Educational Research show the lowest accreditation rates, the latter being the least favored.
|Area|Without testimony (not accredited)|Satisfactory (accredited)|Outstanding (accredited)|% accredited|
|---|---|---|---|---|
|General testimony of performance|123 (44%)|126 (46%)|28 (10%)|56%|
|Didactics and Curriculum|88 (32%)|166 (60%)|23 (8%)|68%|
|Educational Policy, Management and Evaluation|107 (38%)|132 (48%)|38 (14%)|62%|
|Teaching, Training and Educational Guidance|90 (32%)|166 (60%)|21 (8%)|68%|
|Educational Research|121 (44%)|129 (46%)|27 (10%)|56%|
When comparing the results by educational modality, as shown in Table 3, the test takers from the semi-schooled modality obtain better results, both in the general testimony of performance (an 11-point difference) and in each of the training areas included in the exam: in Educational Research the difference is ten percentage points; in Didactics and Curriculum, nine points; in Teaching, Training and Educational Guidance, seven; and in Educational Policy, Management and Evaluation the difference is minimal, only two points.
|Area|Educational modality|Without testimony (not approved)|Satisfactory (approved)|Outstanding (approved)|% approved|
|---|---|---|---|---|---|
|General testimony of performance|Schooled|86 (48%)|78 (43%)|16 (9%)|52%|
||Semi-schooled|36 (37%)|48 (50%)|12 (13%)|63%|
|Didactics and Curriculum|Schooled|63 (35%)|105 (58%)|12 (7%)|65%|
||Semi-schooled|25 (26%)|60 (62%)|11 (12%)|74%|
|Educational Policy, Management and Evaluation|Schooled|70 (39%)|86 (48%)|24 (13%)|61%|
||Semi-schooled|36 (37%)|46 (48%)|25 (15%)|63%|
|Teaching, Training and Educational Guidance|Schooled|63 (35%)|105 (58%)|12 (7%)|65%|
||Semi-schooled|27 (28%)|60 (63%)|9 (9%)|72%|
|Educational Research|Schooled|84 (47%)|81 (45%)|15 (8%)|53%|
||Semi-schooled|36 (37%)|48 (50%)|12 (13%)|63%|
As suggested by Aguayo Canela (2007), a relational analysis (chi-square, X²) was carried out between the dependent variable EGEL testimony and the contextual variables included in the data collection instrument. Table 4 shows the variables that had a significant relationship with the EGEL testimony:
|Variable|X²|Sig.|
|---|---|---|
|Educational Policy, Management and Evaluation|218.288|.000|
|Teaching, Training and Educational Guidance|166.915|.000|
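The association test used here is a Pearson chi-square on a contingency table of the two recoded variables. A minimal sketch follows; the counts are invented for illustration, not the study's data, and since the dichotomized table has one degree of freedom the p-value can be obtained from the complementary error function:

```python
import math

# Pearson chi-square for a 2x2 table (pass/fail vs. a dichotomized
# predictor). Counts are illustrative only.
table = [[40, 60],   # predictor = 0: failed, passed
         [80, 97]]   # predictor = 1: failed, passed

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
n = sum(row_totals)

chi2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (table[i][j] - expected) ** 2 / expected

# With 1 degree of freedom, chi-square is a squared standard normal,
# so the p-value is erfc(sqrt(chi2 / 2)).
p_value = math.erfc(math.sqrt(chi2 / 2))
print(round(chi2, 3), round(p_value, 3))
```

Predictors whose p-value falls below the chosen significance level are retained for the multivariate model.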
Table 5 shows the summary of the binary logistic regression model, with the EGEL testimony as the dependent variable and the seven recoded independent variables mentioned in the previous section. The Cox and Snell R² is a generalized coefficient of determination used to estimate the proportion of variance of the dependent variable explained by the independent variables (Aguayo Canela, 2007). In this case, its value indicates that the independent variables explain 73% of the variance of the EGEL testimony in the participants. If we consider instead the Nagelkerke R², a corrected version of the previous indicator¹, the percentage of variance explained rises to 97.8%.
|−2 Log likelihood|Cox and Snell R²|Nagelkerke R²|
|---|---|---|
| |.730|.978|
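Both pseudo-R² statistics can be computed from the log-likelihoods of the null and fitted models. A sketch of the formulas follows; the log-likelihood values are invented purely to illustrate the calculation, not taken from the study:

```python
import math

# Pseudo-R2 statistics for a binary logistic regression.
# ll_null: log-likelihood of the intercept-only model;
# ll_model: log-likelihood of the fitted model; n: sample size.

def cox_snell_r2(ll_null, ll_model, n):
    """Cox & Snell generalized R2: 1 - (L0/L1)^(2/n)."""
    return 1 - math.exp(2 * (ll_null - ll_model) / n)

def nagelkerke_r2(ll_null, ll_model, n):
    """Rescales Cox & Snell so its maximum attainable value is 1."""
    max_r2 = 1 - math.exp(2 * ll_null / n)
    return cox_snell_r2(ll_null, ll_model, n) / max_r2

ll_null, ll_model, n = -190.0, -8.0, 277  # invented values
cs = cox_snell_r2(ll_null, ll_model, n)
nk = nagelkerke_r2(ll_null, ll_model, n)
print(round(cs, 3), round(nk, 3))
```

Because the Cox and Snell statistic cannot reach 1 even for a perfect model, the Nagelkerke version is always at least as large, which is why the two reported values differ.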
Table 6 shows the classification table of the logistic regression model. The model correctly classifies all students who passed the EGEL, and 98.4% of those who failed it. Globally, 99.3% of all cases are correctly classified.
|Observed|Predicted: Failed|Predicted: Approved|Correct percentage|
|---|---|---|---|
|EGEL testimony recoded: Failed| | |98.4|
1. The Cox and Snell R² has a maximum value of less than 1, even for a "perfect" model. In contrast, the Nagelkerke R² rescales this statistic to cover the full range from 0 to 1 (Aguayo Canela, 2007).
Based on the previous classification, Table 7 shows the values of each variable included in the proposed regression model. From these values it is possible to state the regression equation that predicts the EGEL testimony of the undergraduate students in Educational Sciences:
|B|Standard error|Wald|df|Sig.|Exp(B)|
|---|---|---|---|---|---|
From the previous values, the following logistic regression equation is estimated:
P(TESTIMONY EGEL = 1) = 1 / (1 + e^(32.084 − 1.184·DIDAC − 1.745·POLIT − 0.279·DOCEN − 39.717·INVEST + 0.475·EDAD − 0.417·ESCMADRE − 17.699·PROM))
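In an equation of this form, a larger exponent lowers the predicted pass probability. A minimal sketch of how such an equation is evaluated for one student profile; the coefficient and predictor values below are hypothetical, not the study's estimates:

```python
import math

def pass_probability(z):
    """P(TESTIMONY EGEL = 1) = 1 / (1 + e^z), matching the equation's form."""
    return 1.0 / (1.0 + math.exp(z))

# Hypothetical linear predictor: intercept plus coefficient * dummy value.
coefficients = {"const": 2.0, "PROM": -4.0}  # invented for illustration
student = {"PROM": 1}                        # e.g., a high grade point average
z = coefficients["const"] + sum(coefficients[k] * v for k, v in student.items())

p = pass_probability(z)
print(round(p, 3))  # predicted pass probability for this hypothetical profile
```

Note that with e^(+z) in the denominator, a negative coefficient (as for the grade point average here) raises the predicted probability of passing.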
5. Discussion
This study recognizes that explaining students' academic performance is an extremely complex task, since multiple factors affect the learning process. Indeed, performance is not the product of a single aptitude, but the result of a sum of interwoven elements that act in and upon the person who learns, such as institutional, pedagogical, psychosocial and socio-demographic factors (Tourón, 2009).
The study also highlights the importance of large-scale evaluations (reliable and valid standardized exams) as a measure of students' academic performance and of the quality offered by schools. In addition, the results of these evaluations are a reference for educational research on their relationship with the different factors that could explain academic achievement.
The utility of the EGEL as an external evaluation instrument is likewise highlighted, since it allows students and institutions to verify their effectiveness and achievements in light of national standards. The results of the examination thus make it possible to identify areas of opportunity for improving curricular contents and educational practices in the different Educational Programs.
The main findings of the descriptive analyses show:
• From highest to lowest, the results obtained by the graduates in the training areas of the EGEL, for the general sample, are: Didactics and Curriculum; Teaching, Training and Educational Guidance; Educational Policy, Management and Evaluation; and Educational Research. The differences between the first two are minimal, with nearly identical results; by contrast, the results in Educational Research are notably lower. These results provide guidelines for the diagnosis, discussion and analysis of educational proposals that can help strengthen and improve the least favored training areas.
• Students of the Bachelor's Degree in Educational Sciences who graduated from the semi-schooled modality (blended learning) pass the EGEL at a higher rate and obtain more favorable results than graduates of the schooled modality. When comparing modalities by training area, the largest differences appear in Educational Research (where students of the schooled modality obtain very low scores), Didactics and Curriculum, and Teaching, Training and Educational Guidance; in Educational Policy, Management and Evaluation the difference is minimal. A possible explanation may be that students of the semi-schooled modality carry out more academic and learning activities autonomously.
Regarding the results of the logistic regression analysis, three variables were incorporated into the proposed model as significantly related to EGEL results:
• Two variables show a negative relationship, the age of the graduates and the mother's schooling, while the grade point average is positively related: the younger the graduates, the lower the probability of passing the exam; likewise, the lower the mother's schooling, the lower the probability. Conversely, the higher the grade point average, the greater the probability of passing.
• The results of the study can be useful for the design and implementation of pedagogical strategies that strengthen the training areas of the Educational Program of the Bachelor's Degree in Educational Sciences.
• The study presents empirical evidence that contributes to the discussion of an extremely complex topic, the variables associated with academic performance, and above all offers concrete results that could support institutional decisions aimed at educational improvement.
In summary, the results presented here provide a better understanding of the factors that affect passing the EGEL among students of the degree in Educational Sciences. It is hoped that these analyses will be replicated in other areas of knowledge and deepened within the field of education, allowing for better results in future generations and contributing methodological elements for the training of educational agents.
Aguayo Canela, M. (2007). Cómo hacer una regresión logística con SPSS "paso a paso". Sevilla: Fundación Andaluza Beturia para la Investigación en Salud. Disponible en: http://www.fabis.org/html/archivos/docuweb/Regres_log_lr.pdf
Casanova, P., Cruz, M., de la Torre, M. J. & de la Villa, M. (2005). Influence of family and socio-demographic variables on students with low academic achievement. Educational Psychology, 25(4), 423-435.
CENEVAL (2018). Centro Nacional para la Evaluación de la Educación Superior. Padrón EGEL: programas de alto rendimiento académico. Resultados. Disponible en: http://padronegel.ceneval.edu.mx/portal_idap/principal.jsf
Coleman, J. S., et al. (1966). Equality of educational opportunity. Washington, DC: US Department of HEW-Office of Education.
Davis, A. y Thomas, M. (1999). Escuelas eficaces y profesores eficientes. Madrid: La Muralla.
De Vries, W. (2007). La acreditación mexicana desde una perspectiva comparada. Revista Complutense
Gago, A. (2000). El CENEVAL y la evaluación externa de la educación en México. Revista Electrónica de Investigación Educativa, 2, 106-114.
Jiménez, E. (2004). Introducción al análisis multivariable (primera parte). Recuperado el 20 de mayo de 2011, de http://www.scribd.com/doc/61268649/Analisis-multivariable
Jiménez, J. A. (2017). La evaluación de los egresados de formación profesional en México: Reflejo de la implementación de la política de competitividad en la educación superior. Archivos Analíticos de Políticas Educativas, 25(48). http://dx.doi.org/10.14507/epaa.25.2868
Jones, I. & White, S. (2000). Family composition, parent involvement, and young children's academic achievement. Early Child Development and Care, 161, 71-82.
Jornet, J. M. (2014). Asignaturas pendientes en las evaluaciones a gran escala. En M. C. Cardona y E. Chiner (Eds.), Investigación educativa en escenarios diversos, plurales y globales (pp. 115-128). Madrid: EOS.
Méndez, I., Namihira, D., Moreno, L. y Sosa, C. (2001). El protocolo de investigación. Lineamientos para su elaboración y análisis. México: Trillas.
Neave, G. (1990). La educación superior bajo la evaluación estatal. Tendencias en Europa Occidental, 1986-1988. Universidad Futura, 5 (2), 5-16.
Page, M., Moreal, B., Calleja, J. A., Cerdán, J., Echevarría, M. J., García, C., Gaviria, J. L., Gómez, C., Jiménez, S. C., López, B., Martín-Javato, L., Mínguez, A. L., Sánchez, A. y Trillo, C. (1990). Hacia un modelo causal del rendimiento académico. Madrid, España: Centro de Publicaciones del Ministerio de Educación y Ciencia (CIDE).
Montero Rojas, E., Villalobos Palma, J. y Valverde Bermúdez, A. (2007). Factores institucionales, pedagógicos, psicosociales y sociodemográficos asociados al rendimiento académico en la Universidad de Costa Rica: Un análisis multinivel. RELIEVE, 13(2), 215-234. www.uv.es/RELIEVE/v13n2/RELIEVEv13n2_5.htm
Revista Electrónica de Investigación y Evaluación Educativa [www.uv.es/RELIEVE], p. 231
Ruiz, C. (2001). Factores familiares vinculados al bajo rendimiento. Revista Complutense de Educación, 12(1), 81-113.
Silva, C. (2007). Evaluación y burocracia: medir igual a los diferentes. Revista de la Educación Superior,
Tourón, J. (2009). El establecimiento de estándares de rendimiento en los sistemas educativos. Estudios sobre Educación, 16, 127-146.
Tuirán, R. (2011). La educación superior en México: avances, rezagos y retos. Suplemento Campus Milenio, 27 de febrero.