


CHAPTER 3

METHODS

The purpose of the study was to examine the effect of real-world problem-solving applications on gifted and nongifted students' achievement and classroom environments. Therefore, the research question for this portion of the evaluation was: What is the effect of using real-world problem-solving applications on gifted and nongifted students' achievement and classroom environments?

The primary research hypotheses for the study were:

H-1: There are significant differences on achievement scores by group status (experimental or control), academic status (gifted or nongifted), and their interaction (group x academic).

H-2.0: There are significant differences on involvement scores by group status, academic status, and their interaction (group x academic).

H-2.1: There are significant differences on affiliation scores by group status, academic status, and their interaction (group x academic).

H-2.2: There are significant differences on rule clarity scores by group status, academic status, and their interaction (group x academic).

H-2.3: There are significant differences on task orientation scores by group status, academic status, and their interaction (group x academic).

H-2.4: There are significant differences on satisfaction scores by group status, academic status, and their interaction (group x academic).

H-2.5: There are significant differences on innovation scores by group status, academic status, and their interaction (group x academic).

Research Design

A nonequivalent control-group design was used to test the hypotheses in the study. In this quasi-experimental design, the experimenter had manipulative control over the independent variable, but random assignment of students to groups was not possible. The independent variable was the real-world mathematical applications program. The dependent variables were posttest performance on the CES and the BMI. The study examined the influence of the treatment on the academic performance and classroom environment of the student participants, and the use of pretests and posttests provided the criteria for determining whether posttest differences could be attributed to the treatment. Figure 1 illustrates the nonequivalent control-group design.


In this research design, a total of 320 students participated in the study. Of these, 160 students (80 gifted and 80 nongifted) received the experimental treatment, the real-world, problem-solving mathematics applications. Likewise, 160 students (80 gifted and 80 nongifted) received the control treatment.

Participants

The sample was drawn from the population of sixth-, seventh-, and eighth-grade students in a suburban school district with a total enrollment of 12,000. The total sample for the study contained 320 students, of whom 82.8% were African American and 16.8% were Hispanic. These were intact groups assigned by the principal and teachers prior to the beginning of the 2001-2002 school year. The study was conducted during the spring semester, so all classes had been intact for one semester prior to the commencement of the study. The sample reflected the ethnic breakdown of the school population. The district identified the gifted students in the study according to the Texas Education Agency (TEA) guidelines for the identification of gifted students.

Instruments

The classroom environment was measured by the Classroom Environment Scale (CES). The participating students' achievement was measured using the Bevil Mathematics Inventory (BMI).

Classroom Environment Scale (CES)

The CES used in the study was a shortened version of a self-report survey instrument (Fraser, 1982, 1986; Fraser & Fisher, 1983, 1986; Fisher, 1986; Moos, 1979). The instrument was designed "to focus on students' perceptions of their content area classes rather than their general impressions of school as a whole" (Waxman & Huang, 1998, p. 99). The CES was administered prior to and after the treatment. A description of each of the eight CES scales and a sample item from each follows.

  1. Involvement. The involvement component of the CES measured the extent to which students participated actively and attentively in their mathematics class discussions and activities; for example, "In my mathematics class, I really pay attention to what the teacher is saying" (Waxman & Huang, 1998).

  2. Affiliation. The affiliation component of the CES measured the extent to which students knew, helped, and were friendly toward each other in their mathematics class; for example, "I know other students in my mathematics class really well" (Waxman & Huang, 1998).

  3. Teacher support. The teacher support component of the CES measured the extent to which the mathematics teacher helped students and took a personal interest in them; for example, "My mathematics teacher takes a personal interest in me" (Waxman & Huang, 1998).

  4. Task orientation. The task orientation component of the CES measured the extent to which the mathematics class was business-like and emphasized completing classwork; for example, "Getting a certain amount of classwork done is very important in my mathematics class" (Waxman & Huang, 1998).

  5. Order and organization. The order and organization component of the CES measured the extent to which the mathematics class was under control and had orderly behavior; for example, "My mathematics class is well organized" (Waxman & Huang, 1998).

  6. Rule clarity. The rule clarity component of the CES measured the extent to which rules were clearly stated in mathematics class and students were aware of the consequences of breaking rules; for example, "In my mathematics class, there is a clear set of rules to follow" (Waxman & Huang, 1998).

  7. Satisfaction. The satisfaction component of the CES measured the extent to which students enjoyed their school work; for example, "I enjoy the schoolwork in my mathematics class" (Waxman & Huang, 1998).

  8. Innovation. The innovation component of the CES measured the extent to which variety and new ideas were tried in the mathematics class; for example, "New and different ways of solving problems are not tried very often in this class" (Fisher, 1986).

Validity. In the Waxman and Huang (1998) classroom environment study, the mean correlation between all scales was .20 when the individual student was used as the unit of analysis and .39 when the class was used as the unit of analysis. These results indicated that the short CES has adequate discriminant validity, that is, relatively low correlations between scales (Waxman & Huang, 1998).

Reliability. Fraser (1982) and Fraser and Fisher (1983, 1986) studied several CES short-form subscales (24 items) in a sample of 116 middle school science classes. In analyzing the class means, they found that internal reliabilities were acceptable. Correlations with the Real Form (long form) "ranged from .78 to .95, and the subscales significantly discriminated between classes" (Smith, 1989, p. 177). Waxman and Huang (1998) reported that the CES reliability coefficients in their study on classroom environments were reasonably satisfactory for scales composed of relatively few items.

Table 3 reports the alpha reliability and discriminant validity coefficients for the CES subscales. In order to ensure adequate reliability and validity of the classroom environment scales used in the study, internal consistency (Cronbach alpha) reliability and discriminant validity (correlations between scales) were estimated for each scale. These coefficients were computed using both the individual student and the class means of students as the units of statistical analysis. Using the individual as the unit of analysis, the alpha coefficients of these scales for the present study ranged from .64 to .88. The overall reliability coefficients in the present study are reasonably satisfactory for scales that are composed of relatively few items, and they are similar to reliability coefficients found in past studies (Waxman, Huang, Anderson, & Weinstein, 1997).

Table 3
Alpha Reliability Coefficients Regarding the Subscales of the Classroom Environment Scale

Learning Environment Subscale    No. of Items    M       SD      Alpha Reliability    Correlation with Other Scales

Involvement                      4               2.39    .53     .70                  .26
Affiliation                      4               2.49    .52     .64                  .17
Rule Clarity                     4               2.64    .43     .83                  .15
Task Orientation                 4               2.58    .56     .86                  .22
Satisfaction                     4               2.48    .56     .88                  .27
Innovation                       4               2.70    .34     .79                  .18

Using the individual student as the unit of analysis, the mean correlation between all scales was 0.21, with values ranging from 0.14 to 0.27. This result indicates that the survey instrument had adequate discriminant validity, especially when using the individual student as the unit of analysis.
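
For readers who wish to reproduce this discriminant validity check, a minimal sketch follows (in Python with pandas). It assumes the six subscale scores are held in a data frame with one column per subscale; the column names and file name are illustrative, not the actual variables used in the study.

import pandas as pd

def mean_interscale_correlation(scores: pd.DataFrame) -> float:
    """Mean Pearson correlation between all pairs of subscale scores.

    A low mean inter-scale correlation is taken as evidence of
    discriminant validity (the subscales tap distinct constructs).
    """
    corr = scores.corr()          # full correlation matrix
    n = len(corr)
    pairs = [corr.iloc[i, j]      # off-diagonal entries, each pair counted once
             for i in range(n) for j in range(i + 1, n)]
    return sum(pairs) / len(pairs)

# Illustrative usage (hypothetical file with columns involvement, affiliation,
# rule_clarity, task_orientation, satisfaction, innovation):
# df = pd.read_csv("ces_subscale_scores.csv")
# print(round(mean_interscale_correlation(df), 2))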

Bevil Mathematics Inventory (BMI)

Validity. To test the validity of the BMI, the researcher administered the instrument to a group of experts in the fields of mathematics and evaluation. Experts included Michelle Rohr, Director of Math, Houston Independent School District; Dr. Anne Papacontinapolous, Director of Rice University Mathematics School Program; Sandra Harris, fourth-grade teacher; Merle Granek, fifth-grade teacher; Shirley Corte, fifth-grade teacher; Dorothy Brandley, fourth-grade teacher; Meredith Gartner, third-grade teacher; and Charlotte Haynes, Director of Grants for Houston Independent School District and former high school algebra and calculus teacher. Each of these educators had from 20 to 25 years of teaching experience. The panel of experts was asked to assess the content of each item and of the test as a whole by employing a scale of 0 (statement is not valid) to 2 (statement is valid and measures what it is intended to measure), with an intervening 1 to indicate that the respondent was unsure of the statement's validity. (See Appendix F, Scale for Bevil Mathematics Inventory.) The BMI was sent to the panel of mathematics experts twice (see Appendix G, Letter to Mathematics Educators). After the first BMI was sent to the panel, revisions were made to the test. The revised BMI was then sent to the panel of experts again for validation. A mean score of 1.80 was computed from the validation sheet administered to the panel. The panel of experts agreed that the revised survey was a valid instrument for use in the study.

Reliability. Reliability for the BMI was assessed through the rational equivalence procedure. This type of reliability determines "how all items on a test relate to all other items and to the total test" (Gay, 1996, p. 167). Cronbach's alpha coefficient was used to estimate the rational equivalence (internal consistency) reliability of the instrument. The procedure required that each item be scored dichotomously; in this investigation, correct or incorrect (1 or 0). An internal consistency reliability of .823 was computed for the test as a whole.
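
As a point of reference, this computation can be sketched in a few lines of Python. With items scored 1 (correct) or 0 (incorrect), Cronbach's alpha corresponds to the Kuder-Richardson formula 20; the score matrix and file name below are hypothetical stand-ins for the actual BMI response data.

import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) matrix of 0/1 item scores."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]                               # number of items
    item_variances = x.var(axis=0, ddof=1)       # variance of each item
    total_variance = x.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Illustrative usage with a hypothetical 0/1 response matrix:
# responses = np.loadtxt("bmi_item_scores.csv", delimiter=",")
# print(round(cronbach_alpha(responses), 3))     # the study reported .823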

Treatment

This section describes the non-treatment and treatment conditions for the control and experimental groups. Mathematics was taught at the participating intermediate and middle schools each day, Monday through Friday, for one hour. Table 4 reports the time sequence of treatments by group status. The total time expended for the treatments was 14 weeks.

Table 4
Time Sequence of Treatments by Group Status

Time             Control Group              Experimental Group

Week One
  Day 1          CES                        CES
  Day 2          BMI                        BMI
Week Two         Decimal fractions          Decimal fraction concepts; Soaring with Numbers: Introduction, Chapter 1, Extensions & Beyond
Week Three       Customary measurement      Measurement concepts; Soaring with Numbers: Chapter 2, Extensions & Beyond
Week Four        Customary measurement      Measurement concepts; Soaring with Numbers: Chapter 3, Extensions & Beyond
Week Five        Metric measurement         Soaring with Numbers: Chapter 4, Extensions & Beyond
Week Six         Metric measurement         Soaring with Numbers: Chapter 5, Extensions & Beyond
Week Seven       Metric measurement         Soaring with Numbers: Chapter 6, Extensions & Beyond
Week Eight       Area, perimeter, volume    Soaring with Numbers: Chapter 7, Extensions & Beyond
Week Nine        Area, perimeter, volume    Soaring with Numbers: Chapter 8, Extensions & Beyond
Week Ten         Problem-solving            Creative problem-solving
Week Eleven      Problem-solving            Creative problem-solving
Week Twelve      Problem-solving            Exchange of ideas within classes
Week Thirteen                               Writing own journals of creative problems from facts on the Internet
Week Fourteen
  Day 1          CES                        CES
  Day 2          BMI                        BMI

Group One: Control Group

The control group participated in the mathematics curriculum for 14 weeks without the treatment program. Control-group instruction consisted of studying concepts of customary and metric measurement, decimal fractions, area, perimeter, and volume. Prior to the beginning of the study, control-group teachers were asked to avoid any instructional practices or statements that might bias the control group. Control-group teachers met with the researcher on a weekly basis to discuss the different concepts emphasized in the curriculum. The control-group teachers received in-service training on administering the BMI and the CES prior to the beginning of the instruction and again at the conclusion of the study. After the CES and BMI posttests had been administered by the control-group teachers, the curriculum book, Soaring with Numbers, was given to student participants in the control group. Control-group teachers received a copy of Soaring with Numbers in addition to other manipulatives.

Group Two: Experimental Group

Soaring with Numbers. The experimental-group treatment program was built on the principle that students learn mathematical concepts through the practical application of knowledge presented in a narrative text. To make the mathematics meaningful, students solved real-world application problems that integrated several disciplines. In exploring concepts, students made connections across mathematics. For variety and practice, students also encountered problems in which several concepts had to be combined to solve a single problem. To ensure that the different concepts were understood, students constructed their own graphs. As the chapters progressed, the mathematical problems increased in complexity. The researcher met with the experimental-group teachers weekly to discuss progress in the real-world mathematical applications curriculum, give individual demonstrations of activities, and discuss any problems in implementing the curriculum. When necessary, the researcher met with teachers twice a week.

Instructor training. The instructors' training for Soaring with Numbers involved a one-day inservice the week prior to beginning the program. Inservices and meetings were then held weekly at each of the schools to confer with teachers, demonstrate activities, determine which teachers needed guidance, and establish what needed to be accomplished prior to the next meeting. The training covered: (a) the purpose of Soaring with Numbers, (b) the program's objectives, (c) an overview of the chapters, (d) lesson plans, (e) an overview of concepts, (f) the teacher's role as facilitator, (g) materials required, and (h) time required (see Appendix H). Training was conducted with all 10 teachers for equal lengths of time to maximize the consistency of the treatment.

Each teacher was asked to maintain a daily audiotaped journal of students' progress with Soaring with Numbers and the teacher's feelings about each chapter, including any surprises. The teacher was to track students who were absent during a class session.

Classroom instruction. Classroom instruction combined whole-class instruction with small-group interaction. The teacher presented an explanation during the first 15 minutes of the class. Following the explanation, students were engaged in group activities for the remaining 45 minutes of the one-hour class period. The teacher acted as a coach, facilitating and encouraging students to ask questions and explain ideas to one another, and sometimes gave small-group explanations. The classroom was typical of a class engaged in active learning, and the noise level was that of a working community of mathematicians.

Content focus. Teachers were instructed to select mathematical tasks that engaged students' interests and intellects and to focus on developing students into solvers of meaningful, real-world application problems. The teachers were instructed not to be overly concerned with obtaining a right answer, but rather to pose questions that led students to speculate, to try alternative solutions, and to decide whether or not their approaches were valid (NCTM, 1991).

Equipment and materials. Teachers made a variety of materials available to students, as appropriate for Soaring with Numbers. Students were permitted to use the computer for making graphs and tables and for accessing The World Almanac and Guinness Book of World Records on CD, as well as to use the Internet as a research tool. In addition, they were able to check large-number calculations with calculators. Students thus had access to a variety of tools rather than being limited to conventional mathematical symbols, and they needed these tools for communicating mathematically (NCTM, 1991).

Teacher questioning. Teachers were instructed to emphasize tasks that focused on thinking and reasoning. To encourage active participation, teachers would generally ask, "Why? How did that work? Explain more. Can anyone think of another solution to the problem?" These question-driven discussions were held in small groups and with the whole class to facilitate and uncover students' understanding. Teachers encouraged students to write explanations for their solutions.

Classroom environment. Students were allowed to work independently and collaboratively on their activities. Teachers were instructed that, as facilitators, they were to create an environment in which students used their time wisely and explored concepts. It was proposed that, by creating a positive atmosphere and by listening to each student, teachers would help students feel free to question new approaches to learning the concepts. To enhance students' learning in communities, classes started on time and materials were made readily available on the cluster tables. The teachers' years of experience in the classroom helped to engage students in the mathematical content.

Student interaction. Teachers were instructed, as facilitators, to promote classroom discourse in which students listened to, responded to, and questioned the teacher and one another. Students were encouraged to share ideas about different solutions to a problem, as well as to support their ideas and solutions in response to other students' challenges or counterarguments. One goal was to accustom students to paying attention to one another's ideas, to reasoning together, and to accepting responsibility for helping others.

Data Collection Procedures

Approval of the Committee for the Protection of Human Subjects and approval from the research department at the participating school district were obtained prior to data collection. In addition, parents of students participating in the study signed permission letters that described the study and the students' participation in it. All parents of students in the control and experimental groups signed these consent letters. Students also signed a permission slip indicating that they desired to be participants in the study; these permission slips were returned to the school.

Four schools participated in the study during the spring semester, 2002. Participants in both control and experimental groups were administered the CES and BMI on Tuesday and Wednesday prior to beginning the treatment. On Tuesday the control and experimental groups were given the CES. The following day, Wednesday, the control and experimental groups were given the BMI.

The BMI is a multiple-choice test that requires about 40 minutes to complete. Students were provided with space to solve the problems in their BMI booklets. When they completed the test, they were permitted to read a library book until all students in that specific group had completed the test.

The implementation of the real-world mathematical applications curriculum began on Day 3 of Week 1. This implementation continued for 13 1/2 weeks. On the 3rd and 4th days of Week 14, students were administered the posttests of the CES and BMI.

Data Analysis Procedures

The research methodology included null hypotheses, instrumentation, subjects, and analysis. Table 5 lists the null hypotheses and the pretest and posttest instrumentation employed with each. The data for all hypotheses were analyzed using the analysis of covariance (ANCOVA).

In testing Hypothesis 1, the independent variable was the real-world mathematical applications program and the dependent variable was the mathematics knowledge acquired, as measured by the BMI. Hypothesis 2 required testing of the dependent variable of classroom environment, as measured by the CES. The pretests of the BMI and the CES provided the covariates for null hypotheses 1 and 2, respectively.

Table 5
Null Hypotheses and Pretest and Posttest Instrumentation

Null Hypothesis                                                                Pretest and Posttest Instrumentation

H0-1: There are no significant differences on achievement scores by group (experimental or control), academic status (gifted or nongifted), and their interaction (group x academic).    BMI
H0-2.0: There are no significant differences on involvement scores by group status, academic status, and their interaction (group x academic).    CES
H0-2.1: There are no significant differences on affiliation scores by group status, academic status, and their interaction (group x academic).    CES
H0-2.2: There are no significant differences on rule clarity scores by group status, academic status, and their interaction (group x academic).    CES
H0-2.3: There are no significant differences on task orientation scores by group status, academic status, and their interaction (group x academic).    CES
H0-2.4: There are no significant differences on satisfaction scores by group status, academic status, and their interaction (group x academic).    CES
H0-2.5: There are no significant differences on innovation scores by group status, academic status, and their interaction (group x academic).    CES

Stevens (1986) listed seven assumptions when using an ANCOVA as a data analysis procedure: (a) the independent variable should not affect the covariate; (b) observations are independent; (c) observations on the dependent variables follow a bivariate normal distribution in each group; (d) the population covariance matrices for the p dependent variables are equal or approximately equal (largest/smallest ratio less than 1.5); (e) the covariates are measured without error, so that they are perfectly reliable; (f) there is a linear relationship between the dependent variable and the covariate; and (g) there is homogeneity of the regression hyperplanes (p. 269).

The homogeneity of regression hyperplanes, or common slope, was examined through the MANOVA command in the SPSS program. First, the analysis statement specified the dependent variable. Second, the effects were listed in order in the design command: the covariate was listed first, then the independent variable, and finally the independent-variable-by-covariate interaction. Third, the sequential method was used to test the common-slopes assumption (Tabachnick & Fidell, 1996).
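
The same sequence can be reproduced outside SPSS; the sketch below uses Python's statsmodels only as an illustration. It first fits a sequential (Type I) model with the covariate listed first, then the factor, then the covariate-by-factor interaction to check the common-slopes assumption, and then fits the 2 x 2 (group x academic status) ANCOVA with the pretest as covariate. The column names posttest, pretest, group, and academic, and the file name, are assumptions for illustration, not the actual variable names from the study.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def check_common_slopes(df: pd.DataFrame, factor: str) -> pd.DataFrame:
    """Sequential (Type I) test of the covariate-by-factor interaction.

    A nonsignificant interaction supports the homogeneity-of-regression
    (common slope) assumption for that factor.
    """
    model = smf.ols(f"posttest ~ pretest + C({factor}) + pretest:C({factor})",
                    data=df).fit()
    return sm.stats.anova_lm(model, typ=1)       # covariate entered first, as in the text

def run_ancova(df: pd.DataFrame) -> pd.DataFrame:
    """2 x 2 ANCOVA: pretest covariate, group and academic status as factors."""
    model = smf.ols("posttest ~ pretest + C(group) * C(academic)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)

# Illustrative usage with a hypothetical data file:
# df = pd.read_csv("study_scores.csv")
# print(check_common_slopes(df, "group"))
# print(check_common_slopes(df, "academic"))
# print(run_ancova(df))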

Evaluating Assumptions

Seven assumptions were examined and evaluated: normality, linearity, outliers, multicollinearity and singularity, homogeneity of variance, reliability of covariates, and homogeneity of regression.

Normality. Because the assumption of normality applies to the sampling distribution of means, and not to the raw scores, skewness by itself posed no problem. Given the large sample size and the use of two-tailed tests, normality of the sampling distribution of means was anticipated.

Linearity. Plots of residuals were run and examined. No curvilinearity was revealed.

Outliers. A histogram and the maximum values of the dependent variables were examined. Even though there was a little skewness, no outliers were visible.
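
A hedged sketch of these two diagnostics (residual plots for linearity, a histogram and maximum value of the dependent variable for outliers) is shown below, using matplotlib and statsmodels with illustrative variable names rather than the study's actual ones.

import matplotlib.pyplot as plt
import statsmodels.formula.api as smf

def assumption_diagnostics(df, dependent="posttest", covariate="pretest"):
    """Residual plot (linearity) plus histogram and maximum of the DV (outliers)."""
    model = smf.ols(f"{dependent} ~ {covariate}", data=df).fit()

    fig, (ax_resid, ax_hist) = plt.subplots(1, 2, figsize=(10, 4))

    # A patternless residual cloud suggests no curvilinearity.
    ax_resid.scatter(model.fittedvalues, model.resid, s=10)
    ax_resid.axhline(0, linestyle="--")
    ax_resid.set(xlabel="Fitted values", ylabel="Residuals", title="Linearity check")

    # Histogram and maximum value of the dependent variable flag potential outliers.
    ax_hist.hist(df[dependent], bins=20)
    ax_hist.set(xlabel=dependent, title="Distribution of the dependent variable")
    print("Maximum value of the DV:", df[dependent].max())

    plt.tight_layout()
    plt.show()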

Multicollinearity and singularity. Since there was only one covariate tested in this investigation, this assumption was met.

Homogeneity of variance. Since the N's were equal in each cell (group), this assumption was met.

Reliability of covariates. When an experimental study employs cognitive variables as covariates, it is assumed that the covariates are measured without error, that is, that they are perfectly reliable (Tabachnick & Fidell, 1996). Additionally, Cohen, Cohen, West, and Aiken (2003) noted that unreliable covariates can lead only to underestimation of the magnitude of the treatment effect, not to spurious results indicating that a treatment effect exists when in fact it does not (pp. 350-351).

Homogeneity of regression. This assumption was tested by examining the interaction between the covariate (pretest) and each independent variable for each hypothesis.

As reported in Table 6, the interaction effects for all seven null hypotheses were nonsignificant.

Table 6
Evaluation of Assumptions by Academic Status and Group Status

                                             F Values
Null Hypothesis (covariate)           Academic Status    Group Status

H0-1 (achievement)                    1.15               .65
H0-2.0 (involvement)                  2.36               .49
H0-2.1 (affiliation)                  .73                2.93
H0-2.2 (rule clarity)                 3.07               3.12
H0-2.3 (task orientation)             1.26               3.27
H0-2.4 (satisfaction)                 2.49               2.25
H0-2.5 (innovation)                   1.46               .85

Note. For all covariate interactions, p > .05.
