Tool Review #1 – WorkFlowy

There are multiple pieces of my life all converging into chaos right now (I’m sure I’m not alone), so I chose to try WorkFlowy to see if I could get a little more organized during this assignment. This tool is clean and dirt simple to use, though I’m so used to the colorful, sparkling interfaces of other tools that WorkFlowy is also a bit boring. In a nutshell — you make a bulleted list. And that’s it. You can check out my full list on the left.

There were only a handful of features to try out. Zooming allows you to choose one section to focus on at a time, which is helpful if you’re working with a giant list. I’m someone who likes to cross things off a list instead of having them disappear, so I really liked the strikethrough for completing tasks. Notes brought a little more flexibility to the tool so you weren’t limited to only creating new bullets. 

As you can see in the screenshot, I wanted to insert a photo but wasn’t able to. Part of me wanted WorkFlowy to become a holding pen for the details of those bullet points: embedded media, links with the photo/headline, etc. Nope. So, I learned to be content with the black and white bare bones list.

One feature that I found useful, but felt a little clunky, was tagging bullet points with hashtags. You can then filter your list to show only the items with a specific tag. My tag #now marks the things that need to be completed ASAP. It works, and it’s in line with the limited functionality of the tool, but it’s nothing to write home about compared to the search/sort features of other tools.

When it comes down to it, you could do all of this in Google Docs, but the beauty of this tool is how weirdly simple it is in a world that is chock-full of busy digital media. It’s a better organized version of my phone’s Notes app, which is great because I use Notes all the time but it often becomes disjointed. I’m not sure it would be at all interesting to students; I see it being created then quickly forgotten. It may go the same way for teachers, but it could appeal to some who are looking for a place to pull together multiple to-do lists. A benefit of this stripped-down bulleted list is that it focuses your attention the same way making a handwritten list does (pseudo-analog?). And, I kind of like it? It doesn’t integrate with Google or Outlook, and has no calendar or color coding. It is unabashedly just a list.

Article Review #5

Designing for deeper learning in a blended computer science course for middle school students – Grover, Pea & Cooper (2015)

My research skills clearly peaked at the end of our article review period. Of all the papers I read over the past 5 weeks, this one had the most solidly designed study. And (excitingly!), it’s directly applicable to my teaching content. It was also 40 pages long and went very in depth into the details of their experiment, so I’ll do my best to not get lost in the weeds.

Researchers developed a 7-week curriculum for middle school students entitled “Foundations for Advancing Computational Thinking” (FACT), whose goal was to “prepare and motivate middle school learners for future engagement with algorithmic problem solving” (pg.199). Sounds boring, but this is actually very important in building capacity for future computer science work in secondary school and beyond. Algorithmic problem solving (specifically “serial execution, looping constructs, and conditional logic” (pg.201)) is transferable between programming languages and is foundational in the development of computational thinking. The other goals of the study were to change the students’ perception of CS, and to encourage “deeper learning” (pg. 201).
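To make those three constructs concrete, here’s a tiny sketch of my own (Python, not anything from the FACT curriculum) showing serial execution, a looping construct, and conditional logic all at once:

```python
# Illustrative only: the three constructs the paper names as transferable.
# This snippet is mine, not an example from the study.

def count_even(numbers):
    count = 0                  # serial execution: statements run in order
    for n in numbers:          # looping construct: repeat for each item
        if n % 2 == 0:         # conditional logic: branch on a test
            count += 1
    return count

print(count_even([3, 4, 7, 10, 12]))  # prints 3
```

The same three ideas appear in Scratch blocks, Pascal, Java, or any other language; only the syntax around them changes, which is exactly why the researchers treat them as foundational.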

A quick note on this study’s definition of “deeper learning”: this concept is concerned not just with content but also a student’s ability to problem solve, collaborate, communicate, and engage in self-directed learning (pg.204). Deeper learning extends beyond the cognitive domain and works to include important skills from the intrapersonal and interpersonal domains. Researchers chose a “deeper learning” framework because of its focus on the transferability of skills, as students learn in one setting and apply those skills in another.

Transferability of skills was actually built into the assessments used to collect data for the study. During the 7-week course, students learned the basics of algorithmic problem solving using the very kid-friendly Scratch platform. Scratch uses block-based coding that lets students focus on the problem rather than get stuck hunting for syntax errors (*Disclaimer: I’ve had really good luck using Scratch in my own classroom). Usually Scratch is used for game creation, but for this course it was used as a space to test algorithms with a variety of learning goals. At the end of the course students were then given the “preparation for future learning (PFL)” assessment, in which students had to apply the computational thinking knowledge they developed using block-based code to text-based code, specifically Pascal and a “Java-like” language (pg.201).
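To show what that block-to-text transfer looks like, here’s a sketch of my own (the study used Pascal and a “Java-like” language; I’m using Python just for readability, and this is not an actual PFL item). The comments are the Scratch blocks; the code is the same algorithm in text form:

```python
# A Scratch script like:
#   set total to 0
#   repeat 5
#     change total by 2
# expressed as text-based code. The logic (a loop updating a variable)
# is identical; only the syntax changes.

total = 0
for _ in range(5):   # Scratch's "repeat 5" block
    total += 2       # Scratch's "change total by 2" block
print(total)         # prints 10
```

A student who genuinely understands the loop and the variable should be able to make this jump; a student who only memorized where the blocks snap together will struggle, which is roughly what the PFL assessment was probing.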

The FACT course was piloted in two iterations at the same middle school. The first iteration was a more traditional face-to-face course that used online tools, while the second iteration was delivered entirely online through the OpenEdX MOOC platform. Researchers used the feedback from the first iteration to significantly inform the design of the second iteration. Findings were collected through pre & post assessments, PFL, final projects, and interviews.

They did not run a control group (one not exposed to FACT), so the findings for this study can really only be compared between the two iterations or discussed as a whole. Overall, they found that students participating in the MOOC iteration showed similar-to-better understanding of algorithmic structures. Both groups of students also demonstrated their knowledge more effectively in the final project and interview than they did in the post assessment. The separate PFL test left the researchers feeling “cautiously optimistic,” although they felt that the test itself was too hard (pg.222). Students were able to transfer some of their skills to text-based problems, but struggled with loops and variables, which also showed on their post assessments. The open-ended questions on the post assessment also revealed that students gained a better understanding of the breadth of topics in computer science and its opportunities for problem solving and creativity.

At the time of publishing, this study was one of the first to have developed an online introduction to CS course that provided empirically positive results in the learning gains of middle school students (pg.224). We all anecdotally support middle school students building up their computational thinking, but it’s important to have the data. At this age students are going through some serious cognitive development and it’s critical to slip in some analytical reasoning to support their future STEM studies. Let’s get more pre-teens practicing their algorithmic problem solving skills!


Grover, S., Pea, R., & Cooper, S. (2015). Designing for deeper learning in a blended computer science course for middle school students. Computer Science Education, 25(2), 199-237. DOI: 10.1080/08993408.2015.1033142

Article Review #4

Mobile game development: Improving student engagement and motivation in introductory computing courses – Kurkovsky, 2013

I had much better luck this week finding a quality article. In fact, I may have even found a journal I can stick with through the next review assignment (and maybe for nerdy professional reading). I chose this particular article because it sounded like I’d get some affirmation for my current curriculum choices. A little self-serving, I know, but I wanted to poke around the data behind integrating game development into computer science courses because it’s clearly a trend that is picking up speed and has been a hit in my own classroom.

This article started so hopeful. It included a lengthy lit review to support the use of game development to improve student engagement in intro computer science courses at universities. What many studies noted was that game development rarely makes it into the intro courses because building a full computer game takes high-level programming skills. But, the creation of casual mobile games is totally within the capabilities of intro level students. Mobile game development provides an accessible, engaging, and practical application of many computer science concepts. I basically highlighted everything in this section; that’s how excited I was to see all this research affirming my current beliefs about teaching computer science. To sum it up, games are a slick way to teach programming concepts as they allow students to see “the connection between technical material and their everyday lives” (pg.141). It appeals to non-CS majors, women, and minorities (pg.143). Games help students understand that computer science is more than coding; an idea that hopefully gets them in the door and keeps them engaged throughout the semester.

For their study, the researchers created learning modules based on core Java programming concepts with an opportunity to practice and apply that knowledge through the enhancement of a mobile game. Some of these modules included variations on crowd favorites such as Battleship, Connect Four, Space Invaders, and Frogger. Students were not asked to build the games from scratch, but were given an almost functional game so that they could focus on smaller programming objectives while also customizing the interface and/or enhancing the game logic. Honestly, it all sounded awesome; if you have to learn Java then this seems like the way to do it.
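The study’s modules were written in Java; purely to make the setup concrete, here’s a toy sketch of my own in Python of the kind of contained task students got. The game scaffold already exists, and the student fills in one small piece of logic (the hit check here is my hypothetical example, not one of the actual module exercises):

```python
# Toy illustration of the "almost functional game" idea: most of a
# Battleship-style game is provided, and the student completes one
# well-scoped function. Names and setup are mine, not the study's.

def check_shot(ship_cells, shot):
    """Student-completed piece: report 'hit' or 'miss' for one shot."""
    return "hit" if shot in ship_cells else "miss"

ships = {(0, 0), (0, 1), (0, 2)}   # one ship occupying three grid cells
print(check_shot(ships, (0, 1)))   # prints hit
print(check_shot(ships, (3, 4)))   # prints miss
```

The appeal is obvious: the student sees a working game immediately, but the programming objective stays small enough for an intro course.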

The experiment was set up in introductory CS courses at two different universities: one school was more selective and enrolled only STEM majors in the course, while the other was less selective and enrolled a wide variety of majors. At each site, professors were given test groups (with access to the game development features) and control groups. Researchers assessed the effectiveness of the mobile game modules through student grades/completion, a student survey, and two questionnaires bookending the semester.

And then it all went terribly, terribly wrong. Okay, maybe not wrong, but their findings were severely disappointing after the huge build up for game development at the beginning of the article. The researchers referred to their findings as a “mixed bag” (pg.153). Yikes. In the end, the variation between the two universities kind of hurt the study because nothing could conclusively be said for whether the game development features had a positive effect when one student population was so clearly better prepared from the start. They actually saw negative results in student interest from the more selective school; a suggested explanation being that students were anticipating traditionally taught courses and the new modules were jarring (pg.153). Happily, there was a (limited) positive effect in student engagement overall, and the test group did as well as the control group (pg.154).

Regardless of the findings, the researchers remain stalwart in their belief that game development is a positive teaching tool, and hold that more research on the topic must be done. I’m as baffled as they are as to why the study went awry. I’ll admit I got deeply suckered into the lit review section and now want to overlook these particular bad-to-middling findings, but I think this is a “fail forward” moment, as the researchers noted that they would continue testing iterations of their modules. Clearly, there is a plethora of studies supporting game development in CS courses, but the modules that the researchers developed for this study are so similar to those I’m looking at to teach Java next year that I’m still kind of nervous/curious to know why they didn’t see better results. Or maybe we can just blame it on Java…


Kurkovsky, S. (2013). Mobile game development: Improving student engagement and motivation in introductory computing courses. Computer Science Education, 23(2), 138-157.

Article Review #3

A Case Study on the Use of Blended Learning to Encourage Computer Science Students to Study – Pérez-Marín & Pascual-Nieto, 2012

Honestly, I partially chose this article because the title made me laugh: “A Case Study on the Use of Blended Learning to Encourage Computer Science Students to Study.” The researchers get right to business finding ways to get CS students to engage with the material after class. Apparently, the study habits of CS students are so notoriously bad that the authors didn’t feel the need to substantiate the claim their entire study rests on. While I would have liked to see more than one article back up their assertion, it was clear that they saw a trend in their computer science department and wanted to tackle it. I had already mentally committed to this article before I realized that they were gathering data from a class held in the 2007-2008 school year. The paper itself was published in 2012, so I thought we were dealing with more recent applications of blended learning. Still, there may be a valid takeaway.

To test the efficacy of blended learning study tools for university CS students, researchers took 131 students in the second-year course “Operating Systems” and had half of them use a computer program to study, while the other half (the control group) received a print version of the study content. The online study program, “Willow,” was developed by the researchers. Students were able to type in their response, have it compared against pre-loaded answers from the instructor, and then receive immediate feedback (Figure 1). Data was collected through pre/post assessments and a satisfaction questionnaire. The experiment took place near the end of the semester and (weirdly) lasted only as long as a single one-hour study session bookended by the assessments. All students were then allowed to use their study tool of choice for the month leading up to the final exam, at which time they took the satisfaction survey.

Figure 1 – Screenshot of Willow
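The interaction loop described above (type an answer, match it against the instructor’s pre-loaded answers, get immediate feedback) can be sketched in a few lines. This is a deliberately naive sketch of my own, in Python; Willow’s real answer matching was certainly more sophisticated, and every name below is hypothetical:

```python
# Naive sketch of a Willow-style feedback loop: compare a typed answer
# against instructor-loaded reference answers and respond immediately.
# All names and the sample question are my own invention.

REFERENCE_ANSWERS = {
    "What does a scheduler do?": {"allocates cpu time to processes"},
}

def give_feedback(question, student_answer):
    normalized = student_answer.strip().lower()   # crude normalization
    if normalized in REFERENCE_ANSWERS.get(question, set()):
        return "Correct!"
    return "Not quite - compare your answer with the course notes."

print(give_feedback("What does a scheduler do?",
                    "Allocates CPU time to processes"))  # prints Correct!
```

Even this toy version shows why students preferred it to a paper study guide: the feedback arrives instantly, rather than after a self-check against an answer key.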

While the results showed that the computer study group had a higher positive difference between their pre/post assessment scores 75% of the time, it was by a margin that was not statistically significant (Pérez-Marín & Pascual-Nieto, 2012, p.78). The authors weren’t surprised by this finding, as their actual goal was to show that students must study for an exam over the course of several weeks. I struggle to understand why they set up this portion of the experiment this way if they were actually looking to prove an idea that required an extended window of time for data collection. Once the initial study session was over and students were able to choose their study tool, 99% of students used a Willow account, and researchers saw that students were using the program regularly in the weeks leading up to the exam (Pérez-Marín & Pascual-Nieto, 2012, p.80). But then they compared the increased studying anecdotally to the procrastination observed in the past when traditional paper study guides were used; they did not have data to back this up.

The results of the satisfaction questionnaire were unsurprising, especially considering all of the subjects were in the computer science program. They overwhelmingly felt that using a computer to study was good, a positive complement to their classwork, and their preferred method of study for the future. Researchers also took into consideration observable satisfaction during the one-hour study session, which led to my second laugh in this article: “The first reaction observed is that students assigned to the control group complained more than students assigned to the test group” (Pérez-Marín & Pascual-Nieto, 2012, p.76). Computer science students complaining about not getting to use computers: typical.

While I can’t say that this article gave me ideas for my classroom, or helped me make new connections, it’s always good to know what came before you. Articles like this are like being visited by the ghost of technology past. If you don’t understand what the field used to look like, then you won’t fully appreciate how far we’ve come in just a decade. Today, I wouldn’t even think to hand my CS students paper study guides, but clearly that used to be the norm. Blended learning is no longer just a study tool but an active player in daily curriculum. This article may lack the appropriate data to show a positive effect on student scores over time, but their reasoning behind using blended learning tools is solid and similar to our reasons today (student control, flexibility, personalization). Unfortunately, even with the normalization of blended learning tools, it’s been my experience that CS students still slack on the studying.


Pérez-Marín, D., & Pascual-Nieto, I. (2012). A case study on the use of blended learning to encourage computer science students to study. Journal of Science Education and Technology, 21(1), 74-82.

Article Review #2

Connectivism: Learning theory and pedagogical practice for networked information landscapes – Dunaway, 2011

Part of me wonders if I’m drawn to connectivism because it uses language similar to that used in my computer science classes. While I can see where connectivist strategies fit in my own classroom, I’ve been having a hard time envisioning how connectivism is general enough for any classroom. The digital jargon that is inherent to the theory makes it feel a little cold compared to the focus on student experience and contextualized learning in constructivism. But I do think there’s something really interesting here! For this article review I wanted to poke around connectivism a bit more and see how some of the early adopters/developers were filling out this (potentially) new learning theory.

The article I found, “Connectivism: Learning theory and pedagogical practice for networked information landscapes”, was written specifically with librarians and those who work with information instruction in mind. The author, Michelle Dunaway, found a lot of overlap between the networked learning in connectivism and the role of those who teach students how to read, interpret, and analyze information sources. While it wasn’t exactly the K-12 classroom example I was looking for, the relationship between connectivism and information instruction is definitely strong and it was interesting to read about the environment where this learning theory thrives.

To sum up how this article defines connectivism, Dunaway says, “[t]he learning is the network” (2011, p.680). While I find this fairly catchy, it feels impersonal to describe learning without first mentioning the student; it’s as if information comes first and then you apply the student, compared to other learning theories where you start with the student and apply information. Weird priorities, but maybe it’s just this article that pitches it this way. In connectivism, the student learns as they make connections between nodes of information. These nodes all reside in the student’s personal learning network, which contains a wide variety of information sources and tools (Dunaway, 2011, p.676). Because learning rests in the ability to make connections, pattern recognition and the ability to evaluate information sources are highly valued skills.

What made this article stand out over others about connectivism is that it goes beyond just explaining the theory; Dunaway also addresses two important literacies that are nurtured by connectivism (neither of which I had ever heard of). First, metaliteracy: “an overarching and self-referential framework that integrates emerging technologies and unifies multiple literacy types” (Dunaway, 2011, p.679). Apparently there are a lot of 21st century literacies floating around and metaliteracy ropes them all together and highlights their similarities to benefit learning. Second, transliteracy: “the ability to read, write and interact across a range of platforms, tools and media[…]” (Dunaway, 2011, p.679). Not only should students be able to gather information from multiple mediums, but they should know how to move information efficiently from one format to another. Transliteracy focuses on the relationship between users and their digital tools (Dunaway, 2011, p.679). This section of the article challenged my understanding of the term literacy, especially with concepts like metaliteracy where you’re trying to think about being literate in literacies. Transliteracy is easier to wrap my head around, but I also question if it’s an actual literacy or just a skill, or maybe those two things are the same?

Connectivism in the context of research libraries and information instruction makes sense to me, as they are basically in the business of helping students build personal networks of information and matching students with information tools; it’s also a theory whose pieces I can see integrating into my own classroom. But even after reading this article I’m struggling to envision how to sell this new learning theory to the English teacher in the classroom next door whose classroom is only lightly blended. I think the theory is too jargon-heavy at the moment to be generally accessible in the same way some of the past learning theories are. Yet, despite its shortcomings as a potential learning theory, I’m not ready to give up on connectivism; I do think there has been a change in the positioning of information, teacher/student roles, and learning because of the internet and digital tools. Here’s hoping I can form some clearer opinions about it over the course of the semester!


Dunaway, M. (2011). Connectivism: Learning theory and pedagogical practice for networked information landscapes. Reference Services Review, 39(4), 675-685.

Article Review #1

Managing the gap between curriculum based and problem based learning: Deployment of multiple learning strategies in design and delivery of online courses in computer science – Bygholm & Buus, 2009

Thus far in my teaching career I have been running my computer science courses on a blended model. Often I can piece together what I’m looking for with a couple different programs and some choice collaborative “unplugged” activities. I’m constantly poking around the internet for new curriculum, but generally they all have similar patterns: direct instruction through video/slides and then individual practice. The more self-contained the course is online the more likely it is to follow this pattern. Looking for the “why” wasn’t going to be particularly productive, so I broadened my search. For this article review, I sought out studies about online computer science curriculum and some of their potential structures.

The article that caught my eye was “Managing the gap between curriculum based and problem based learning: Deployment of multiple learning strategies in design and delivery of online courses in computer science,” which is a bit of a mouthful if you ask me. Between 2004 and 2006, 40 online computer science courses were jointly developed and delivered by the University of Strathclyde in Scotland and Aalborg University in Denmark. The article was written by the researchers from Aalborg University, who prior to starting the project had a considerable investment in problem based learning and discussed it in detail in the article. They defined problem based learning as “aimed at providing the student with abilities to acquire knowledge appropriate to solve problems within the domain. Focus is on learner experience, participant control, learner self-management and guidance” (Bygholm & Buus, 2009, p.13). In fact, their whole university is so passionate about problem based learning that they have a variation of it called “The Aalborg PBL model,” which uses problems as the starting point for learning, with curriculum assigned as needed to solve the problem or related to the theme (Figure 1) (Bygholm & Buus, 2009, p.17). They eventually decided that their personal version of problem based learning was too “radical” for their more traditional, curriculum based partners over in Scotland, and that used alone it failed to support the project’s need to reach stated learning objectives (Bygholm & Buus, 2009, p.19).

Figure 1: Aalborg University PBL model

The partners at Aalborg University had a clear agenda to get more problem based learning into the online computer science courses. The problem, of course, is that those at the University of Strathclyde were more inclined to instructor-led, curriculum focused learning. I’m not sure why they decided that they were a good match for each other in this project, but there you go. The two schools were speaking different learning languages, and their project ended up being a learning model creole. Their eventual compromise supported both learning strategies by providing opportunity for students to organize their own learning around specific problems within a set module and also be exposed to content through more direct teaching (Figure 2).

Figure 2: Co-designed model for online computer science courses

Within the article there is a brief discussion on the varying definitions of success between the curriculum based and problem based models. This was an issue I hadn’t previously considered. It’s easy to look at the activities used in each model and spot the differences, but it’s a little harder to process that they have entirely different overall learning goals. Success in the curriculum based model involves a “specified quantification in certain methods and techniques” (Bygholm & Buus, 2009, p.18). It is about absorbing a wide breadth of content knowledge, generally through teacher-led instruction. Alternatively, success in problem based learning takes the form of “competencies to solve problems within the [content] area, independent of specific curriculum bites” (Bygholm & Buus, 2009, p.18). Their co-designed learning strategy and included activities seek success from both models by passing control between the teacher and student and providing opportunity for both direct content delivery and hands-on practice. Students would both be exposed to a breadth of necessary content knowledge and learn how to problem solve within the domain. Overall, I think the model they developed together is well constructed, especially for computer science, which even at lower levels requires a lot of content-specific knowledge before you can start building, creating, and solving. The courses I come across online may have a creative aspect so that students are developing their own projects under certain restrictions, but in general we severely lack the collaborative/group component and extended student-led learning.

I also didn’t realize how political course development could be. Not only do individual people have their chosen learning strategies, but entire universities may ascribe to a particular model and want to push that agenda. Even in an article about finding the learning model middle ground, there was clearly a lot of stubbornness during the process, and the authors often came across as smug about their school’s personal use of one model or the other. I guess I hadn’t thought of it as a competition? Maybe I should be less surprised about the repetitive online course structures; clearly it takes a lot to get educators on the same page, or to even mix models outside of their comfort zones.


Bygholm, A., & Buus, L. (2009). Managing the gap between curriculum based and problem based learning: Deployment of multiple learning strategies in design and delivery of online courses in computer science. International Journal of Education and Development using Information and Communication Technology, 5(1), 13-22.