Article Review #5

Designing for deeper learning in a blended computer science course for middle school students – Grover, Pea & Cooper (2015)

My research skills clearly peaked at the end of our article review period. Of all the papers I read over the past five weeks, this one had the most solidly designed study. And (excitingly!), it’s directly applicable to my teaching content. It was also 40 pages long and went deep into the details of the experiment, so I’ll do my best not to get lost in the weeds.

The researchers developed a 7-week curriculum for middle school students titled “Foundations for Advancing Computational Thinking” (FACT), whose goal was to “prepare and motivate middle school learners for future engagement with algorithmic problem solving” (pg.199). Sounds boring, but this is actually very important in building capacity for future computer science work in secondary school and beyond. Algorithmic problem solving (specifically “serial execution, looping constructs, and conditional logic” (pg.201)) is transferable between programming languages and is foundational to the development of computational thinking. The other goals of the study were to change the students’ perception of CS and to encourage “deeper learning” (pg.201).

A quick note on this study’s definition of “deeper learning”: this concept is concerned not just with content but also with a student’s ability to problem solve, collaborate, communicate, and engage in self-directed learning (pg.204). Deeper learning extends beyond the cognitive domain and works to include important skills from the intrapersonal and interpersonal domains. The researchers chose a “deeper learning” framework because of its focus on the transferability of skills: students learn in one setting and are able to apply what they’ve learned in another.

Transferability of skills was actually built into the assessments used to collect data for the study. During the 7-week course, students learned the basics of algorithmic problem solving using the very kid-friendly Scratch platform. Scratch uses block-based coding that allows students to focus on the problem rather than get stuck hunting for syntax errors (*Disclaimer: I’ve had really good luck using Scratch in my own classroom). Usually Scratch is used for game creation, but for this course it was used as a space to test algorithms with a variety of learning goals. At the end of the course, students were given the “preparation for future learning” (PFL) assessment, in which they had to apply the computational thinking knowledge they developed using block-based code to text-based code, specifically Pascal and a “Java-like” language (pg.201).
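To make the block-to-text transfer concrete, here’s a hypothetical illustration (my own, not from the paper): the same logic a student might build in Scratch with a “repeat” block and an “if” block, written in the kind of text-based form the PFL assessment asked them to read. The class and method names are my invention.

```java
public class TransferExample {
    // Count how many numbers from 1 to n are even: serial execution,
    // a looping construct, and conditional logic all in one small piece.
    static int countEvens(int n) {
        int count = 0;                    // variable initialization
        for (int i = 1; i <= n; i++) {    // loop: "repeat n times"
            if (i % 2 == 0) {             // conditional: "if i is even"
                count = count + 1;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countEvens(10)); // prints 5
    }
}
```

It’s easy to see why loops and variables were the sticking points: in Scratch the loop counter is invisible machinery, while in text form the student has to manage `i` and `count` explicitly.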

The FACT course was piloted in two iterations at the same middle school. The first iteration was a more traditional face-to-face course that used online tools, while the second iteration was delivered entirely online through the OpenEdX MOOC platform. The researchers used feedback from the first iteration to significantly inform the design of the second. Findings were collected through pre- and post-assessments, the PFL assessment, final projects, and interviews.

They did not run a control group (one not exposed to FACT), so the findings for this study can really only be compared between the two iterations or discussed as a whole. Overall, they found that students in the MOOC iteration showed a similar or better understanding of algorithmic structures. Both groups of students also demonstrated their knowledge more effectively in the final project and interview than on the post-assessment. The separate PFL test left the researchers feeling “cautiously optimistic,” although they felt that the test itself was too hard (pg.222). Students were able to transfer some of their skills to text-based problems but struggled with loops and variables, which also showed on their post-assessments. The open-ended questions on the post-assessment also revealed that students gained a better understanding of the breadth of topics in computer science and its opportunities for problem solving and creativity.

At the time of publication, this study was one of the first to have developed an online introduction to CS course that provided empirical evidence of learning gains for middle school students (pg.224). We all anecdotally support middle school students building up their computational thinking, but it’s important to have the data. At this age students are going through some serious cognitive development, and it’s critical to slip in some analytical reasoning to support their future STEM studies. Let’s get more pre-teens practicing their algorithmic problem solving skills!

Reference

Grover, S., Pea, R., & Cooper, S. (2015). Designing for deeper learning in a blended computer science course for middle school students. Computer Science Education, 25(2), 199–237. https://doi.org/10.1080/08993408.2015.1033142

Article Review #4

Mobile game development: Improving student engagement and motivation in introductory computing courses – Kurkovsky, 2013

I had much better luck this week finding a quality article. In fact, I may have even found a journal I can stick with through the next review assignment (and maybe for nerdy professional reading). I chose this particular article because it sounded like I’d get some affirmation for my current curriculum choices. A little self-serving, I know, but I wanted to poke around the data behind integrating game development into computer science courses because it’s clearly a trend that is picking up speed and has been a hit in my own classroom.

This article started off so hopefully. It included a lengthy lit review supporting the use of game development to improve student engagement in intro computer science courses at universities. What many studies noted was that game development rarely makes it into intro courses because building a full computer game takes high-level programming skills. But the creation of casual mobile games is totally within the capabilities of intro-level students. Mobile game development provides an accessible, engaging, and practical application of many computer science concepts. I basically highlighted everything in this section; that’s how excited I was to see all this research affirming my current beliefs about teaching computer science. To sum it up, games are a slick way to teach programming concepts, as they allow students to see “the connection between technical material and their everyday lives” (pg.141). They appeal to non-CS majors, women, and minorities (pg.143). Games help students understand that computer science is more than coding, an idea that hopefully gets them in the door and keeps them engaged throughout the semester.

For their study, the researchers created learning modules based on core Java programming concepts, with an opportunity to practice and apply that knowledge by enhancing a mobile game. Some of these modules included variations on crowd favorites such as Battleship, Connect Four, Space Invaders, and Frogger. Students were not asked to build the games from scratch; instead, they were given an almost-functional game so that they could focus on smaller programming objectives while also customizing the interface and/or enhancing the game logic. Honestly, it all sounded awesome: if you have to learn Java, then this seems like the way to do it.
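Here’s a hypothetical sketch of what that module style might look like, using the Connect Four variation as an example. The paper doesn’t show its actual code, so the class and method names below are my own invention: the idea is that the scaffolding (the board, the game loop) is provided, and the student fills in one focused piece of logic.

```java
public class ConnectFourLogic {
    // Board dimensions, provided to students as part of the scaffolding.
    static final int ROWS = 6, COLS = 7;

    // The kind of small objective a student might implement: does `player`
    // have four in a row horizontally anywhere on the board?
    // Cells hold a player id; 0 means empty.
    static boolean hasHorizontalWin(int[][] board, int player) {
        for (int r = 0; r < ROWS; r++) {
            int run = 0; // consecutive cells owned by this player
            for (int c = 0; c < COLS; c++) {
                run = (board[r][c] == player) ? run + 1 : 0;
                if (run == 4) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[][] board = new int[ROWS][COLS];
        // Player 1 drops four pieces in a row on the bottom row.
        board[5][2] = board[5][3] = board[5][4] = board[5][5] = 1;
        System.out.println(hasHorizontalWin(board, 1)); // prints true
    }
}
```

A task like this exercises nested loops, state tracking, and conditionals against a game the student can immediately play, which is exactly the engagement hook the lit review was promising.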

The experiment was set up in introductory CS courses at two different universities: one school was more selective and enrolled only STEM majors in the course; the other was less selective and drew a wide variety of majors. At each site, professors were given test groups (with access to the game development features) and control groups. The researchers assessed the effectiveness of the mobile game modules through student grades/completion, a student survey, and two questionnaires bookending the semester.

And then it all went terribly, terribly wrong. Okay, maybe not wrong, but their findings were severely disappointing after the huge build-up for game development at the beginning of the article. The researchers referred to their findings as a “mixed bag” (pg.153). Yikes. In the end, the variation between the two universities hurt the study, because nothing conclusive could be said about whether the game development features had a positive effect when one student population was so clearly better prepared from the start. They actually saw negative results in student interest at the more selective school; a suggested explanation was that students were anticipating a traditionally taught course and found the new modules jarring (pg.153). Happily, there was a (limited) positive effect on student engagement overall, and the test group did as well as the control group (pg.154).

Regardless of the findings, the researchers remain stalwart in their belief that game development is a positive teaching tool, and hold that more research on the topic must be done. I’m as baffled as they are as to why the study went awry. I’ll admit I got deeply suckered into the lit review section and now want to overlook these particular bad-to-middling findings, but I think this is a “fail forward” moment, as the researchers noted that they would continue testing iterations of their modules. Clearly, there is a plethora of studies supporting game development in CS courses, but the modules the researchers developed for this study are so similar to those I’m looking at to teach Java next year that I’m still kind of nervous/curious to know why they didn’t see better results. Or maybe we can just blame it on Java…

Reference

Kurkovsky, S. (2013). Mobile game development: Improving student engagement and motivation in introductory computing courses. Computer Science Education, 23(2), 138–157. https://doi.org/10.1080/08993408.2013.777236