Philosophy of Teaching and Learning

In honor of Computer Science Education Week, I’d like to frame this assignment with a quote from computing pioneer Rear Admiral Grace Murray Hopper (as cited in Engel, 2013):

“One of the most damaging phrases in the language is ‘We’ve always done it this way!’”

Now while Hopper may have been referring to data processing and computing innovations, I don’t think any of us would challenge the applicability of this statement to education. We are creatures of habit, and education (in practice and policy) can be slow to change. There are certainly solid 20th century pedagogical theories that come into play in our classrooms, necessary moments of behaviorism and cognitivism. And many of us find that our modern teaching styles align well with constructivism and its role in active student learning (Ally, 2008, p.30). But I also want to keep an eye on the horizon for new, innovative discourse about education. It’s important to play around with new pedagogical ideas, throw them against the wall, and see what sticks, remembering that part of our job as academics is to promote progress in our field. As I move forward in my career, I believe I should be striving for flexibility of thought in everything from my pedagogy to the layout of classroom seating.

Being open to change has been especially important in my realm of Career & Technical Education. Preparing students for ever-evolving industries is a moving target that requires reflection and the constant revision of content. The assessment model that fits neatly into CTE, and most appeals to me, is Fink’s (2003) educative assessment, specifically the development of forward-looking assessments. With contextualized and authentic assessment strategies, students put their skills and knowledge to use in ways that mirror how they would use them in the real world (Fink, 2003, p.84). Committing to the educative model helps make my curriculum relevant and applicable to the modern work environment.

I hope that my willingness to try new things is an attribute that rubs off on my students as well: that they will jump right into a new programming language, or say “okay” when I suddenly enroll them in Canvas with 10 days left in the semester. It’s all gravy, but only because we’ve built a class culture where they feel comfortable stepping out of routine. A strong understanding of the class’s situational factors certainly helps achieve this goal, as it’s essential to developing a curriculum and environment tailored to their needs and personalities (Fink, 2003, p.68). If I weren’t meeting their needs in this way, then I would have no right to ask them to jump into something challenging.

I wholeheartedly believe that blending my classes was the best teaching decision I’ve made to date. Implementing online education tools in my classroom more accurately reflects the possibilities of modern learning, even if it’s on a small scale. I love the flexibility of having course shells hosted in an LMS, both for me and for my students. Going blended has also recently pushed me to try more self-paced lesson plans, in which I’ve tried to give more opportunities for student control over learning and self-reflection (U.S. Department of Education, 2010, p.xvi). Students have become more internally motivated and engaged with the content as I’ve given up some control.

I’ll wrap this up with another Grace Hopper quote that I may like even more than the first one; remembering her time teaching students in a classroom, she said, “They come to me, you know, and say, ‘Do you think we can do this?’ I say, ‘Try it.’ And I back ’em up” (as cited in Engel, 2013). Yes, this is exactly the student attitude we want to foster in our own classrooms: active, engaged, and curious. But I think the real takeaway from this interaction is that, sometimes, students just need someone to “back ’em up.”

 

References

Ally, M. (2008). Foundations of educational theory for online learning. In T. Anderson & F. Elloumi (Eds.), The theory and practice of online learning (2nd ed., pp. 15–44). Athabasca, AB, Canada: Athabasca University.

Engel, K. (2013). Admiral “Amazing Grace” Hopper, pioneering computer programmer. Amazing Women in History. Retrieved from http://www.amazingwomeninhistory.com/amazing-grace-hopper-computer-programmer/

Fink, L. D. (2003). Creating significant learning experiences: An integrated approach to designing college courses. San Francisco, CA: Jossey-Bass.

U.S. Department of Education, Office of Planning, Evaluation, and Policy Development. (2010). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Retrieved from https://www2.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf

Tool Review #3 – Remind

Disclaimer: I’ve been using Remind for about a year in my individual classes, but there’s been a big change in how we’re using it this semester at our school that I wanted to review.

Remind solved two huge problems for me: 1. I really do not like talking to parents on the phone; 2. I wanted two-way communication with students while still protecting my privacy. Last year, I had all of my students sign up and sent home handouts for parents who wanted to join. Students and parents who texted the class code to a specific number would be added to my Remind roster (organized by class). When sending a message I could push it out to the whole list of students and parents, or select an individual. You have the option of opening two-way communication for whole classes, but not for individual students, so you have to decide whether your class is mature enough not to abuse the privilege (e.g., spamming your message inbox). I decided to keep my two-way communication open and haven’t had an issue thus far. It’s been the easiest way to get a message out to students after school since they rarely check their school email and may not be getting notifications from Google Classroom. Remind pops up just like a text message. It’s super slick to hit both students and parents with the same messages so everybody is in the loop.

Teachers can also decide how connected they want to be (respect the work/life balance!). You can turn off two-way communication. You can have the app on your phone but turn off notifications. You can have it push to your work email so you don’t get messages at home. Or, just check it in your browser while you’re at work. Our staff members all use Remind differently; it’s important to find a way to make it work naturally in your own class or it’ll just be another tool you sign up for and never use.

So, I started this year all ready to use Remind, codes posted on my website, all my students signed up. It worked great per usual. Then a month or so in, we signed a school-wide contract with Remind. At first it sounded really good; Remind would take all of the school registration information and make accounts for students and their parents/guardians. Then, using our class rosters, it would auto-enroll them and they would populate our teacher lists. But it’s never really that easy. What worked so beautifully on the classroom scale was a nightmare to roll out for the general school population. It was the worst for those of us already using Remind because the tool itself has no flexibility between how it manages individual accounts (and the collected data) and those attached to a contracted school account. I had to merge accounts and start the student phone registration process again (because school-wide Remind only grabbed their school emails, which were useless). Students and parents also had to merge their accounts, and it took hours out of instruction to get everything sorted out. So painful.

And yet, I’d still recommend it to all the phone-shy, hyper-connected teachers out there. There is some clear work to be done on their contract transitions, but for individual teachers it’s a life saver.

Tool Review #2 – Bubbl.us

When concept/mind mapping, I find that flexibility and customization features make or break the tool. Especially when you’re trying to express an individual brainstorm (versus a collaborative one), you want to create something that, in flow, color, and functionality, reflects how those ideas are connected in your own head. Bubbl was uninspiring on this front. I used Bubbl.us to create a concept map to break down the necessary components of an app I’m building for ED 659. Remember the browser game my middle school coding kids are making? I’m developing the app version as an end-of-semester present for them.

I like the idea of mind mapping tools in theory but rarely in practice. My brain doesn’t naturally map ideas in round webs, which is generally the template shape. I like linear progression, hierarchy, and lists. Lucky for me, Bubbl offered both “tree” and “grid” templates, which were helpful for what I was trying to convey. I realized too far into the project that a storyboarding tool would have been more appropriate for what I was trying to do, but I made it work by putting numbers in the bubbles to indicate sequence and “listing” components underneath their parent bubble.

Bubbl doesn’t bring much to the table in terms of features. While you can create hyperlinks and attach files, there is no way to embed photos or video, or to add comments, as you can in MindMeister. There are also few aesthetic customization options beyond the color of the bubbles. It was limiting to the point of not being fun to use.

Overall, I think there are stronger/more interesting mind mapping tools out there. While the minimal interface may be a place for younger students to start trying digital mind mapping, I don’t think it would be enough to keep their attention for long. Older students would be better off with more complex mind mapping tools, or even Prezi. Of all the points in the creation process, I think brainstorming is one of the most exciting. When you’re scribbling and drawing lines out on a whiteboard it’s easy to see that energy, but it’s much harder to recreate digitally, which is where flexibility and customization really come into play.

Tool Review #1 – WorkFlowy

There are multiple pieces of my life all converging into chaos right now (I’m sure I’m not alone), so I chose to try WorkFlowy to see if I could get a little more organized during this assignment. This tool is clean and dirt simple to use, though I’m so used to the colorful, sparkling interfaces of other tools that WorkFlowy is also a bit boring. In a nutshell — you make a bulleted list. And that’s it. You can check out my full list on the left.

There were only a handful of features to try out. Zooming allows you to choose one section to focus on at a time, which is helpful if you’re working with a giant list. I’m someone who likes to cross things off a list instead of having them disappear, so I really liked the strikethrough for completing tasks. Notes brought a little more flexibility to the tool so you weren’t limited to only creating new bullets. 

As you can see in the screenshot, I wanted to insert a photo but wasn’t able to. Part of me wanted WorkFlowy to become a holding pen for the details of those bullet points: embedded media, links with the photo/headline, etc. Nope. So, I learned to be content with the black and white bare bones list.

One feature that I found useful, but felt a little clunky, was tagging bullet points with hashtags. You could then filter your list so you would only see items with a specific tag. My tag #now indicates to me things that need to be completed ASAP. It works, and it’s in line with the limited functionality of the tool, but it’s nothing to write home about compared to the search/sort features of other tools.
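
Just to illustrate the mechanics of that feature (this snippet is entirely my own and has nothing to do with WorkFlowy’s actual code), the tag filter boils down to showing only the bullets whose text contains a given hashtag:

```java
// Illustrative sketch only: how a hashtag filter works in principle,
// not anything from WorkFlowy itself. The sample bullets and the #now
// tag are made up to mirror how I use the feature.
import java.util.List;

public class TagFilter {
    public static void main(String[] args) {
        List<String> bullets = List.of(
            "Write tool review #1 #now",
            "Grade programming quizzes #school",
            "Email parents about the field trip #now #school",
            "Plan spring break trip"
        );

        // Show only the bullets tagged #now, the way the in-app filter does.
        bullets.stream()
               .filter(item -> item.contains("#now"))
               .forEach(System.out::println);
    }
}
```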

When it comes down to it, you could do all of this in Google Docs, but the beauty of this tool is how weirdly simple it is in a world that is chock-full of busy digital media. It’s a better organized version of my phone’s Notes app, which is great because I use Notes all the time but it often becomes disjointed. I’m not sure it would be at all interesting to students; I see it being created then quickly forgotten. It may go the same way for teachers, but it could appeal to some who are looking for a place to pull together multiple to-do lists. A benefit of this stripped-down bulleted list is that it focuses your attention the same way making a handwritten list does (pseudo-analog?). And, I kind of like it? It doesn’t integrate with Google or Outlook, and it has no calendar or color coding. It is unabashedly just a list.

Article Review #5

Designing for deeper learning in a blended computer science course for middle school students – Grover, Pea & Cooper (2015)

My research skills clearly peaked at the end of our article review period. Of all the papers I read over the past 5 weeks, this one had the most solidly designed study. And (excitingly!), it’s directly applicable to my teaching content. It was also 40 pages long and went very in depth into the details of their experiment, so I’ll do my best to not get lost in the weeds.

Researchers developed a 7-week curriculum for middle school students entitled “Foundations for Advancing Computational Thinking” (FACT), whose goal was to “prepare and motivate middle school learners for future engagement with algorithmic problem solving” (p.199). Sounds boring, but this is actually very important in building capacity for future computer science work in secondary school and beyond. Algorithmic problem solving (specifically “serial execution, looping constructs, and conditional logic” (p.201)) is transferable between programming languages and is foundational in the development of computational thinking. The other goals of the study were to change the students’ perception of CS and to encourage “deeper learning” (p.201).

A quick note on this study’s definition of “deeper learning”: this concept is concerned not just with content but also with a student’s ability to problem solve, collaborate, communicate, and engage in self-directed learning (p.204). Deeper learning extends beyond the cognitive domain and works to include important skills from the intrapersonal and interpersonal domains. The researchers chose a “deeper learning” framework because of its focus on the transferability of skills: students learn in one setting and are able to apply those skills in another.

Transferability of skills was actually built into the assessments used to collect data for the study. During the 7-week course, students learned the basics of algorithmic problem solving using the very kid-friendly Scratch platform. Scratch uses block-based coding that allows students to focus on the problem and not get stuck looking for syntax errors (*Disclaimer: I’ve had really good luck using Scratch in my own classroom). Usually Scratch is used for game creation, but for this course it was used as a space to test algorithms with a variety of learning goals. At the end of the course, students were given the “preparation for future learning” (PFL) assessment, in which they had to apply the computational thinking knowledge they developed using block-based code to text-based code, specifically Pascal and a “Java-like” language (p.201).
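
To make that transfer step concrete, here is a rough sketch of the kind of text-based reading the PFL items would demand. It’s my own illustrative example, not an item from the study: the same serial execution, looping, and conditional logic a student might snap together out of Scratch blocks, written in Java-style syntax.

```java
// Illustrative only: not an actual PFL item from Grover, Pea & Cooper.
// The same logic a student might build with Scratch blocks
// (set a variable, repeat ten times, check a condition),
// written out in text-based, Java-style syntax.
public class CountEvens {
    public static void main(String[] args) {
        int evens = 0;                    // serial execution: statements run in order
        for (int i = 1; i <= 10; i++) {   // looping construct
            if (i % 2 == 0) {             // conditional logic
                evens = evens + 1;
            }
        }
        System.out.println("Even numbers between 1 and 10: " + evens);
    }
}
```

A student who has only ever dragged a “repeat” block and an “if” block still has to trace this the same way: follow the loop, apply the condition, and track the variable.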

The FACT course was piloted in two iterations at the same middle school. The first iteration was a more traditional face-to-face course that used online tools, while the second iteration was delivered entirely online through the OpenEdX MOOC platform. Researchers used the feedback from the first iteration to significantly inform the design of the second. Data were collected through pre- and post-assessments, the PFL assessment, final projects, and interviews.

They did not run a control group (one not exposed to FACT), so the findings for this study can really only be compared between the two iterations or discussed as a whole. Overall, they found that students participating in the MOOC iteration showed a similar or better understanding of algorithmic structures. Both groups of students also demonstrated their knowledge more effectively in the final project and interview than they did on the post-assessment. The separate PFL test left the researchers feeling “cautiously optimistic,” although they felt that the test itself was too hard (p.222). Students were able to transfer some of their skills to text-based problems, but struggled with loops and variables, which also showed on their post-assessments. The open-ended questions on the post-assessment also revealed that students gained a better understanding of the breadth of topics in computer science and its opportunities for problem solving and creativity.

At the time of publishing, this study was one of the first to have developed an online introduction to CS course that provided empirically positive results in the learning gains of middle school students (p.224). We all anecdotally support middle school students building up their computational thinking, but it’s important to have the data. At this age students are going through some serious cognitive development, and it’s critical to slip in some analytical reasoning to support their future STEM studies. Let’s get more pre-teens practicing their algorithmic problem solving skills!

Reference

Grover, S., Pea, R., & Cooper, S. (2015). Designing for deeper learning in a blended computer science course for middle school students. Computer Science Education, 25(2), 199–237. https://doi.org/10.1080/08993408.2015.1033142

Article Review #4

Mobile game development: Improving student engagement and motivation in introductory computing courses – Kurkovsky, 2013

I had much better luck this week finding a quality article. In fact, I may have even found a journal I can stick with through the next review assignment (and maybe for nerdy professional reading). I chose this particular article because it sounded like I’d get some affirmation for my current curriculum choices. A little self-serving, I know, but I wanted to poke around the data behind integrating game development into computer science courses because it’s clearly a trend that is picking up speed and has been a hit in my own classroom.

This article started out so hopeful. It included a lengthy lit review to support the use of game development to improve student engagement in intro computer science courses at universities. What many studies noted was that game development rarely makes it into the intro courses because building a full computer game takes high-level programming skills. But the creation of casual mobile games is totally within the capabilities of intro-level students. Mobile game development provides an accessible, engaging, and practical application of many computer science concepts. I basically highlighted everything in this section; that’s how excited I was to see all this research affirming my current beliefs about teaching computer science. To sum it up, games are a slick way to teach programming concepts because they allow students to see “the connection between technical material and their everyday lives” (p.141). They appeal to non-CS majors, women, and minorities (p.143). Games help students understand that computer science is more than coding, an idea which hopefully gets them in the door and keeps them engaged throughout the semester.

For their study, the researchers created learning modules based on core Java programming concepts with an opportunity to practice and apply that knowledge through the enhancement of a mobile game. Some of these modules included variations on crowd favorites such as Battleship, Connect Four, Space Invaders, and Frogger. Students were not asked to build the games from scratch, but were given an almost functional game so that they could focus on smaller programming objectives while also customizing the interface and/or enhancing the game logic. Honestly, it all sounded awesome; if you have to learn Java then this seems like the way to do it.
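
To give a sense of what that might look like, here’s a hypothetical sketch in the spirit of the article’s description; the paper doesn’t reproduce the actual course materials, so the game, class name, and method here are all my own invention. The scaffolding is handed to the student, and they complete one small, focused piece of the game logic.

```java
// Hypothetical module sketch, NOT code from Kurkovsky's study.
// The surrounding scaffolding (board, input handling) is given;
// the student's job is to finish one focused method like checkShot().
import java.util.Scanner;

public class MiniBattleship {
    // 0 = empty water, 1 = ship segment
    static int[][] board = {
        {0, 1, 1, 0},
        {0, 0, 0, 0},
        {1, 0, 0, 0},
        {1, 0, 0, 1},
    };

    // Student task: report "hit", "miss", or "invalid" for a shot.
    static String checkShot(int row, int col) {
        if (row < 0 || row >= board.length || col < 0 || col >= board[0].length) {
            return "invalid"; // guard against out-of-bounds shots
        }
        return board[row][col] == 1 ? "hit" : "miss";
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.print("Enter row and column (0-3): ");
        int row = in.nextInt();
        int col = in.nextInt();
        System.out.println(checkShot(row, col));
    }
}
```

The appeal is obvious: the student works with arrays, conditionals, and input handling, but the payoff is a playable game rather than a drill.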

The experiment was set up in introductory CS courses at two different universities: one school was more selective and only had STEM majors in the course, while the other was less selective and had a wide variety of majors. At each site, professors were given test groups (with access to the game development features) and control groups (without). Researchers assessed the effectiveness of the mobile game modules through student grades/completion, a student survey, and two questionnaires bookending the semester.

And then it all went terribly, terribly wrong. Okay, maybe not wrong, but their findings were severely disappointing after the huge build-up for game development at the beginning of the article. The researchers referred to their findings as a “mixed bag” (p.153). Yikes. In the end, the variation between the two universities kind of hurt the study because nothing could conclusively be said about whether the game development features had a positive effect when one student population was so clearly better prepared from the start. They actually saw negative results in student interest from the more selective school, a suggested explanation being that students were anticipating traditionally taught courses and the new modules were jarring (p.153). Happily, there was a (limited) positive effect on student engagement overall, and the test group did as well as the control group (p.154).

Regardless of the findings, the researchers remain stalwart in their belief that game development is a positive teaching tool, and hold that more research on the topic must be done. I’m as baffled as they are as to why the study went awry. I’ll admit I got deeply suckered into the lit review section and now want to overlook these particular bad-to-middling findings, but I think this is a “fail forward” moment, as the researchers noted that they would continue testing iterations of their modules. Clearly, there is a plethora of studies supporting game development in CS courses, but the modules that the researchers developed for this study are so similar to those I’m looking at to teach Java next year that I’m still kind of nervous/curious to know why they didn’t see better results. Or maybe we can just blame it on Java…

Reference

Kurkovsky, S. (2013). Mobile game development: Improving student engagement and motivation in introductory computing courses. Computer Science Education, 23(2), 138–157. https://doi.org/10.1080/08993408.2013.777236

Article Review #3

A Case Study on the Use of Blended Learning to Encourage Computer Science Students to Study – Pérez-Marín & Pascual-Nieto, 2012

Honestly, I partially chose this article because the title made me laugh: “A Case Study on the Use of Blended Learning to Encourage Computer Science Students to Study.” The researchers get right down to business finding ways to get CS students to engage with the material after class. Apparently, the study habits of CS students are so notoriously bad that the authors didn’t feel the need to substantiate the claim that their entire study rests on. While I would have liked to see more than one article back up their assertion, it was clear that they saw a trend in their computer science department and wanted to tackle it. I had already mentally committed to this article before I realized that they were gathering data from a class held in the 2007-2008 school year. The paper itself was published in 2012, so I thought we were dealing with more recent applications of blended learning. Still, there may be a valid takeaway.

To test the efficacy of blended learning study tools for university CS students, the researchers took 131 students in the second-year course “Operating Systems,” let half of them use a computer program to study, and gave the other half (the control group) a print version of the study content. The online study program, “Willow,” was developed by the researchers. Students could type in a response, have it compared against pre-loaded answers from the instructor, and then receive immediate feedback (Figure 1). Data were collected through pre/post assessments and a satisfaction questionnaire. The experiment took place near the end of the semester and (weirdly) lasted only as long as a one-hour study session bookended by the assessments. All students were then allowed to use their study tool of choice for the month leading up to the final exam, at which point they took the satisfaction survey.

Figure 1 – Screenshot of Willow
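
The paper doesn’t detail how Willow’s matching works internally, so take this as a minimal sketch of the general idea only, with made-up reference answers and thresholds: a typed response is compared against instructor-loaded answers and the student gets immediate feedback.

```java
// Minimal sketch of the general idea behind a tool like Willow
// (typed answer compared against instructor-loaded references).
// This is NOT the researchers' actual matching algorithm; the
// reference answers, overlap score, and thresholds are invented.
import java.util.List;

public class StudyChecker {
    // Reference answers the instructor pre-loads for one question.
    static final List<String> REFERENCE_ANSWERS = List.of(
        "a deadlock occurs when processes wait on each other's resources",
        "processes block forever waiting for resources held by other processes"
    );

    // Very rough overlap score: fraction of reference words found in the response.
    static double overlap(String response, String reference) {
        String[] refWords = reference.toLowerCase().split("\\s+");
        String normalized = " " + response.toLowerCase() + " ";
        int matches = 0;
        for (String word : refWords) {
            if (normalized.contains(" " + word + " ")) matches++;
        }
        return (double) matches / refWords.length;
    }

    static String feedback(String response) {
        double best = 0;
        for (String ref : REFERENCE_ANSWERS) {
            best = Math.max(best, overlap(response, ref));
        }
        if (best > 0.7) return "Looks good!";
        if (best > 0.4) return "Partially right; review the missing pieces.";
        return "Not quite; reread this section and try again.";
    }

    public static void main(String[] args) {
        System.out.println(feedback(
            "A deadlock happens when processes wait on each other's resources forever"));
    }
}
```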

While the results showed that the computer study group had a higher positive difference between their pre/post assessment scores 75% of the time, it was by a margin that was not statistically significant (Pérez-Marín & Pascual-Nieto, 2012, p.78). The authors weren’t surprised by this finding, as their actual goal was to show that students must study for an exam over the course of several weeks. I struggle to understand why they set up this portion of the experiment this way if they were actually looking to prove an idea that required an extended window of time for data collection. Once the initial study session was over and students were able to choose their study tool, 99% of students used a Willow account and researchers saw that students were using the program regularly in the weeks leading up to the exam (Pérez-Marín & Pascual-Nieto, 2012, p.80). But they then compared the increased studying only anecdotally to the procrastination observed in past semesters when traditional paper study guides were used; they did not have data to back this up.

The results of the satisfaction questionnaire were unsurprising, especially considering all of the subjects were in the computer science program. They overwhelmingly felt that using a computer to study was good, a positive complement to their classwork, and their preferred method of study for the future. Researchers also took into consideration observable satisfaction during the one-hour study session, which led to my second laugh in this article: “The first reaction observed is that students assigned to the control group complained more than students assigned to the test group” (Pérez-Marín & Pascual-Nieto, 2012, p.76). Computer science students complaining about not getting to use computers: typical.

While I can’t say that this article gave me ideas for my classroom, or helped me make new connections, it’s always good to know what came before you. Articles like this are like being visited by the ghost of technology past. If you don’t understand what the field used to look like then you won’t fully appreciate how far we’ve come in just a decade. Today, I wouldn’t even think to hand my CS students paper study guides, but clearly that used to be the norm. Blended learning is no longer just a study tool but an active player in daily curriculum. This article may lack the appropriate data to show a positive effect on student scores over time, but the authors’ reasoning behind using blended learning tools is solid and similar to our reasons today (student control, flexibility, personalization). Unfortunately, even with the normalization of blended learning tools, it’s been my experience that CS students still slack on the studying.

References

Pérez-Marín, D., & Pascual-Nieto, I. (2012). A case study on the use of blended learning to encourage computer science students to study. Journal of Science Education and Technology, 21(1), 74–82. Retrieved from http://www.jstor.org/stable/41413286

Article Review #2

Connectivism: Learning theory and pedagogical practice for networked information landscapes – Dunaway, 2011

Part of me wonders if I’m drawn to connectivism because it uses language similar to that used in my computer science classes. While I can see where connectivist strategies fit in my own classroom, I’ve been having a hard time envisioning how connectivism is general enough for any classroom. The digital jargon that is inherent to the theory makes it feel a little cold compared to constructivism’s focus on student experience and contextualized learning. But I do think there’s something really interesting here! For this article review I wanted to poke around connectivism a bit more and see how some of the early adopters/developers were filling out this (potentially) new learning theory.

The article I found, “Connectivism: Learning theory and pedagogical practice for networked information landscapes”, was written specifically with librarians and those who work with information instruction in mind. The author, Michelle Dunaway, found a lot of overlap between the networked learning in connectivism and the role of those who teach students how to read, interpret, and analyze information sources. While it wasn’t exactly the K-12 classroom example I was looking for, the relationship between connectivism and information instruction is definitely strong and it was interesting to read about the environment where this learning theory thrives.

To sum up how this article defines connectivism, Dunaway says, “[t]he learning is the network” (2011, p.680). While I find this fairly catchy, it feels impersonal to describe learning without first mentioning the student; it’s as if information comes first and then you apply the student, compared to other learning theories where you start with the student and apply information. Weird priorities, but maybe it’s just this article that pitches it this way. In connectivism, the student learns as they make connections between nodes of information. These nodes all reside in the student’s personal learning network, which contains a wide variety of information sources and tools (Dunaway, 2011, p.676). Because learning rests in the ability to make connections, pattern recognition and the ability to evaluate information sources are highly valued skills.

What made this article stand out over others about connectivism is that it goes beyond just explaining the theory; Dunaway also addresses two important literacies that are nurtured by connectivism (neither of which I had ever heard of). First, metaliteracy: “an overarching and self-referential framework that integrates emerging technologies and unifies multiple literacy types” (Dunaway, 2011, p.679). Apparently there are a lot of 21st century literacies floating around and metaliteracy ropes them all together and highlights their similarities to benefit learning. Second, transliteracy: “the ability to read, write and interact across a range of platforms, tools and media[…]” (Dunaway, 2011, p.679). Not only should students be able to gather information from multiple mediums, but they should know how to move information efficiently from one format to another. Transliteracy focuses on the relationship between users and their digital tools (Dunaway, 2011, p.679). This section of the article challenged my understanding of the term literacy, especially with concepts like metaliteracy where you’re trying to think about being literate in literacies. Transliteracy is easier to wrap my head around, but I also question if it’s an actual literacy or just a skill, or maybe those two things are the same?

Connectivism in the context of research libraries and information instruction makes sense to me, as librarians are basically in the business of helping students build personal networks of information and matching students with information tools; it’s also a theory I can see integrating pieces of into my own classroom. But even after reading this article I’m struggling to envision how to sell this new learning theory to the English teacher next door whose classroom is only lightly blended. I think the theory is too jargon-heavy at the moment to be generally accessible in the way some past learning theories are. Yet, despite its shortcomings as a potential learning theory, I’m not ready to give up on connectivism; I do think there has been a change in the positioning of information, teacher/student roles, and learning because of the internet and digital tools. Here’s hoping I can form some clearer opinions about it over the course of the semester!

References

Dunaway, M. (2011). Connectivism: Learning theory and pedagogical practice for networked information landscapes. Reference Services Review, 39(4), 675-685. https://doi.org/10.1108/00907321111186686

Web Album

My images are hosted on Google Photos:

ED 659 Web Album – Sunset Paddle

All of the images were taken with an Olympus E-M10 Mark II with a 40-150mm lens. Other than resizing and cropping no editing has been done to the photos.

Print Format

Feather

Good grief, I paddled around this stinking feather forever. I wanted to practice some close up shots with the bigger lens, and as you can see I got incredibly lucky with the light. I have rule of thirds working in my favor, and while the reflection is nice I think it’s the details on the feather that are really interesting.

I have about 20 versions of this photo as I tried to manually set the ISO, f/stop, and shutter speed. The light was killing me on manual and I wasn’t quite skilled enough to get dark water and lit up feather on my own. So I’ll confess now that I used the auto setting for this one, but I just want you to know that I tried.

Aperture: f/5.6

Shutter Speed: 1/250

ISO: 200

Blue

Again, I was practicing my close up shots. I love the color and lines in this one. Composition could be better as it doesn’t make great use of rule of thirds. I tried to focus on the top band of the shell, and it’s leaning toward a bokeh style (but the focus caught a little too much of the rock!).

Had to up the ISO and reel back the shutter speed on this one compared to the feather photo as I had moved over to the shady side of the lake. Since we were out at sunset my biggest challenge was adjusting to the ever-changing amount of light. I wanted to keep the colors cool and a bit dark.

Aperture: f/5.6

Shutter Speed: 1/80

ISO: 500

Monitor Format

Swans

Meet our friendly neighborhood honkers! We have some very curious swans on the lake who manage to swim just close enough for pictures. I’m glad I was using my bigger lens for this project as I was able to get some nice details on the swans. The lake edge creates a nice 1/3 line, and I kind of like the unusual placement of the swans on the top left side.

There was a lot of light hitting the trees and the swans when I was trying to set up for this picture. I think f/9 ended up being the smallest aperture (largest f-number) I used for the whole project. I have a lot of washed-out versions of this picture. It was chilly out there and I wanted that reflected in the cool tones in the photos!

Aperture: f/9

Shutter Speed: 1/250

ISO: 200

Paddle

This one is my favorite, and not just because it’s of my husband. I generally shy away from action shots and pictures of people. Let’s be honest, landscapes and architecture are a little more forgiving for the amateur photographer. This photo started vertical and I cropped it into a horizontal close-up. I tried to take all photos in this series from water level, which gave this shot a particularly interesting angle.

He was paddling very, very slowly so I could get this picture. Had he been going full speed I would have needed a faster shutter speed. Again, I was going for darker tones so it kind of worked out to use a lower ISO and still be able to get some detail clarity.

Aperture: f/5

Shutter Speed: 1/200

ISO: 200

Web Format

Creek

Upon reflection, this photo wasn’t the best choice for the Web format because now that it’s super small it has lost some of its depth. I wanted the dark foreground, with the creek leading toward the lit-up trees in the background, to create contrast. Symmetry was also working in my favor on this one.

I guess the aperture is leaning toward smaller here (a higher f-number than in my other pictures), and I think that helped keep the foreground darker and create depth. A low ISO number kept the plants in the foreground sharp without lighting them up. This was a weird one with the contrasting light and I had to try quite a few different combinations to find something that worked.

Aperture: f/7.1

Shutter Speed: 1/320

ISO: 200

Symmetry

If you don’t take a reflection picture, were you even really on a lake? I did my best to not create any extra ripples so that the reflection of the trees was as clear as possible in the water. The colors were awesome and the reflection just emphasizes that point. My goal was symmetry so it was also important to get the shoreline straight!

Honestly, I can’t remember why I set the shutter speed so high. I was struggling to get a darker version of this picture for a while. Instead of bright yellow I wanted the trees to be more orange and to be able to see variation in that color. Aperture and ISO are both middling, so I bet the fast shutter speed was just to reduce the amount of light coming in.

Aperture: f/4.5

Shutter Speed: 1/640

ISO: 500

 

Article Review #1

Managing the gap between curriculum based and problem based learning: Deployment of multiple learning strategies in design and delivery of online courses in computer science – Bygholm & Buus, 2009

Thus far in my teaching career I have been running my computer science courses on a blended model. Often I can piece together what I’m looking for with a couple of different programs and some choice collaborative “unplugged” activities. I’m constantly poking around the internet for new curricula, but generally they all follow similar patterns: direct instruction through video/slides and then individual practice. The more self-contained the online course is, the more likely it is to follow this pattern. Looking for the “why” wasn’t going to be particularly productive, so I broadened my search. For this article review, I sought out studies about online computer science curricula and some of their potential structures.

The article that caught my eye was “Managing the gap between curriculum based and problem based learning: Deployment of multiple learning strategies in design and delivery of online courses in computer science,” which is a bit of a mouthful if you ask me. Between 2004 and 2006, 40 online computer science courses were jointly developed and delivered by the University of Strathclyde in Scotland and Aalborg University in Denmark. The article was written by the researchers from Aalborg University, who prior to starting the project had a considerable investment in problem based learning and discussed it in detail in the article. They defined problem based learning as “aimed at providing the student with abilities to acquire knowledge appropriate to solve problems within the domain. Focus is on learner experience, participant control, learner self-management and guidance” (Bygholm & Buus, 2009, p.13). In fact, their whole university is so passionate about problem based learning that they have a variation of it called “The Aalborg PBL model,” which uses problems as the starting point for learning, with curriculum assigned as needed to solve the problem or explore its theme (Figure 1) (Bygholm & Buus, 2009, p.17). They eventually decided that their personal version of problem based learning was too “radical” for their more traditional, curriculum based partners over in Scotland, and that used alone it failed to support the project’s need to reach stated learning objectives (Bygholm & Buus, 2009, p.19).

Figure 1: Aalborg University PBL model

The partners at Aalborg University had a clear agenda to get more problem based learning into the online computer science courses. The problem, of course, is that those at the University of Strathclyde were more inclined toward instructor-led, curriculum-focused learning. I’m not sure why they decided that they were a good match for each other in this project, but there you go. The two schools were speaking different learning languages, and their project ended up being a learning model creole. Their eventual compromise supported both learning strategies by providing opportunities for students to organize their own learning around specific problems within a set module while also being exposed to content through more direct teaching (Figure 2).

Figure 2: Co-designed model for online computer science courses

Within the article there is a brief discussion of the varying definitions of success between the curriculum based and problem based models. This was an issue I hadn’t previously considered. It’s easy to look at the activities used in each model and spot the differences, but it’s a little harder to process that they have entirely different overall learning goals. Success in the curriculum based model involves a “specified quantification in certain methods and techniques” (Bygholm & Buus, 2009, p.18). It is about absorbing a wide breadth of content knowledge, generally through teacher-led instruction. Alternatively, success in problem based learning takes the form of “competencies to solve problems within the [content] area, independent of specific curriculum bites” (Bygholm & Buus, 2009, p.18). Their co-designed learning strategy and its included activities seek success from both models by passing control between the teacher and student and providing opportunity for both direct content delivery and hands-on practice. Students would be exposed to a breadth of necessary content knowledge and also learn how to problem solve within the domain. Overall, I think the model they developed together is well constructed, especially for computer science, where even at lower levels there is a lot of content-specific knowledge to absorb before you can start building, creating, and solving. The courses I come across online may have a creative aspect, with students developing their own projects under certain restrictions, but in general they severely lack the collaborative/group component and extended student-led learning.

I also didn’t realize how political course development could be. Not only do individual people have their chosen learning strategies, but entire universities may subscribe to a particular model and want to push that agenda. Even in an article about finding the learning model middle ground, there was clearly a lot of stubbornness during the process, and the authors often came across as smug about their school’s use of one model over the other. I guess I hadn’t thought of it as a competition? Maybe I should be less surprised about the repetitive online course structures; clearly it takes a lot to get educators on the same page, or to even mix models outside of their comfort zones.

References

Bygholm, A., & Buus, L. (2009). Managing the gap between curriculum based and problem based learning: Deployment of multiple learning strategies in design and delivery of online courses in computer science. International Journal of Education and Development using Information and Communication Technology, 5(1), 13-22. Retrieved from http://search.proquest.com.proxy.consortiumlibrary.org/docview/886577964?accountid=14473