Article Review #5

Designing for deeper learning in a blended computer science course for middle school students – Grover, Pea & Cooper (2015)

My research skills clearly peaked at the end of our article review period. Of all the papers I read over the past 5 weeks, this one had the most solidly designed study. And (excitingly!) it’s directly applicable to my teaching content. It was also 40 pages long and went very deep into the details of their experiment, so I’ll do my best not to get lost in the weeds.

Researchers developed a 7-week curriculum for middle school students titled “Foundations for Advancing Computational Thinking” (FACT), whose goal was to “prepare and motivate middle school learners for future engagement with algorithmic problem solving” (p. 199). Sounds boring, but this is actually very important in building capacity for future computer science work in secondary school and beyond. Algorithmic problem solving (specifically “serial execution, looping constructs, and conditional logic” (p. 201)) is transferable between programming languages and is foundational in the development of computational thinking. The other goals of the study were to change the students’ perception of CS and to encourage “deeper learning” (p. 201).

A quick note on this study’s definition of “deeper learning”: the concept is concerned not just with content but also with a student’s ability to problem solve, collaborate, communicate, and engage in self-directed learning (p. 204). Deeper learning extends beyond the cognitive domain to include important skills from the intrapersonal and interpersonal domains. The researchers chose a “deeper learning” framework because of its focus on the transferability of skills: students learn something in one setting and are able to apply it in another.

Transferability of skills was actually built into the assessments used to collect data for the study. During the 7-week course, students learned the basics of algorithmic problem solving using the very kid-friendly Scratch platform. Scratch uses block-based coding that allows students to focus on the problem rather than getting stuck hunting for syntax errors (*Disclaimer: I’ve had really good luck using Scratch in my own classroom). Usually Scratch is used for game creation, but in this course it served as a space to test algorithms against a variety of learning goals. At the end of the course, students were given the “preparation for future learning” (PFL) assessment, in which they had to apply the computational thinking knowledge they developed using block-based code to text-based code, specifically Pascal and a “Java-like” language (p. 201).
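To make the transfer task concrete, here’s a made-up example (not an actual item from the study, which doesn’t reproduce its test questions) of the kind of text-based code a Scratch kid would face on a PFL-style item: a “repeat” block with an “if” inside it, rewritten in Java. The same serial execution, loop, and conditional are all there, just buried under syntax.

```java
// Hypothetical transfer item: a Scratch "repeat 5" block containing an
// "if" block, rewritten as text-based code.
public class TransferItem {
    public static void main(String[] args) {
        int total = 0;
        // Serial execution: statements run top to bottom.
        for (int i = 1; i <= 5; i++) {   // looping construct
            if (i % 2 == 0) {            // conditional logic
                total = total + i;       // runs only when i is even
            }
        }
        System.out.println(total);       // prints 6 (2 + 4)
    }
}
```

A student who understood the blocks but not the syntax would know *what* this does while still tripping over semicolons and braces, which is exactly the gap the PFL assessment was probing.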

The FACT course was piloted in two iterations at the same middle school. The first iteration was a more traditional face-to-face course that used online tools, while the second iteration was delivered entirely online through the OpenEdX MOOC platform. Researchers used the feedback from the first iteration to significantly inform the design of the second iteration. Findings were collected through pre & post assessments, PFL, final projects, and interviews.

They did not run a control group (one not exposed to FACT), so the findings can really only be compared between the two iterations or discussed as a whole. Overall, students in the MOOC iteration showed a similar or better understanding of algorithmic structures. Both groups of students also demonstrated their knowledge more effectively in the final project and interview than they did on the post assessment. The separate PFL test left the researchers feeling “cautiously optimistic,” although they felt the test itself was too hard (p. 222). Students were able to transfer some of their skills to text-based problems but struggled with loops and variables, which also showed on their post assessments. The open-ended questions on the post assessment also revealed that students gained a better understanding of the breadth of topics in computer science and its opportunities for problem solving and creativity.

At the time of publishing, this study was one of the first to have developed an online introduction to CS course that provided empirically positive results in the learning gains of middle school students (p. 224). We all anecdotally support middle school students building up their computational thinking, but it’s important to have the data. At this age students are going through some serious cognitive development and it’s critical to slip in some analytical reasoning to support their future STEM studies. Let’s get more pre-teens practicing their algorithmic problem solving skills!

Reference

Grover, S., Pea, R., & Cooper, S. (2015). Designing for deeper learning in a blended computer science course for middle school students. Computer Science Education, 25(2), 199–237. https://doi.org/10.1080/08993408.2015.1033142

Article Review #4

Mobile game development: Improving student engagement and motivation in introductory computing courses – Kurkovsky, 2013

I had much better luck this week finding a quality article. In fact, I may have even found a journal I can stick with through the next review assignment (and maybe for nerdy professional reading). I chose this particular article because it sounded like I’d get some affirmation for my current curriculum choices. A little self-serving, I know, but I wanted to poke around the data behind integrating game development into computer science courses because it’s clearly a trend that is picking up speed and has been a hit in my own classroom.

This article started off so hopeful. It included a lengthy lit review supporting the use of game development to improve student engagement in intro computer science courses at universities. What many studies noted was that game development rarely makes it into intro courses because building a full computer game takes high-level programming skills. The creation of casual mobile games, however, is totally within the capabilities of intro-level students. Mobile game development provides an accessible, engaging, and practical application of many computer science concepts. I basically highlighted everything in this section; that’s how excited I was to see all this research affirming my current beliefs about teaching computer science. To sum it up, games are a slick way to teach programming concepts because they let students see “the connection between technical material and their everyday lives” (p. 141). Game development also appeals to non-CS majors, women, and minorities (p. 143). Games help students understand that computer science is more than coding, an idea which hopefully gets them in the door and keeps them engaged throughout the semester.

For their study, the researchers created learning modules based on core Java programming concepts with an opportunity to practice and apply that knowledge through the enhancement of a mobile game. Some of these modules included variations on crowd favorites such as Battleship, Connect Four, Space Invaders, and Frogger. Students were not asked to build the games from scratch, but were given an almost functional game so that they could focus on smaller programming objectives while also customizing the interface and/or enhancing the game logic. Honestly, it all sounded awesome; if you have to learn Java then this seems like the way to do it.
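To picture what “almost functional” might look like, here’s a hypothetical sketch (the class and method names are my invention, not taken from the paper): students receive a working Connect Four board with piece-dropping already implemented, and their assignment is to enhance the game logic, say, by adding horizontal win detection.

```java
// Hypothetical module exercise in the spirit of the paper's Connect Four
// variant: the scaffold is provided, students fill in one focused piece.
public class ConnectFour {
    static final int ROWS = 6, COLS = 7;
    int[][] board = new int[ROWS][COLS]; // 0 = empty, 1 or 2 = player piece

    // Provided, already working: drop a piece into a column (gravity applies).
    boolean drop(int col, int player) {
        for (int r = ROWS - 1; r >= 0; r--) {
            if (board[r][col] == 0) {
                board[r][col] = player;
                return true;
            }
        }
        return false; // column is full
    }

    // The students' task: detect four of the player's pieces in a row.
    boolean hasHorizontalWin(int player) {
        for (int r = 0; r < ROWS; r++) {
            int run = 0; // length of the current streak in this row
            for (int c = 0; c < COLS; c++) {
                run = (board[r][c] == player) ? run + 1 : 0;
                if (run == 4) return true;
            }
        }
        return false;
    }
}
```

The appeal of this design is that students exercise one concept (2D arrays and loops, here) without having to build the game loop, graphics, or input handling first.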

The experiment was set up in introductory CS courses at two different universities: one school was more selective and had only STEM majors in the course, while the other was less selective and had a wide variety of majors. At each site, professors were assigned test groups (with access to the game development features) and control groups. Researchers assessed the effectiveness of the mobile game modules through student grades/completion, a student survey, and two questionnaires bookending the semester.

And then it all went terribly, terribly wrong. Okay, maybe not wrong, but their findings were severely disappointing after the huge buildup for game development at the beginning of the article. The researchers referred to their findings as a “mixed bag” (p. 153). Yikes. In the end, the variation between the two universities hurt the study because nothing conclusive could be said about whether the game development features had a positive effect when one student population was so clearly better prepared from the start. They actually saw negative results in student interest at the more selective school; a suggested explanation was that students were anticipating traditionally taught courses and the new modules were jarring (p. 153). Happily, there was a (limited) positive effect on student engagement overall, and the test group did as well as the control group (p. 154).

Regardless of the findings, the researchers remain stalwart in their belief that game development is a positive teaching tool, and hold that more research on the topic must be done. I’m as baffled as they are as to why the study went awry. I’ll admit I got deeply suckered into the lit review section and now want to overlook these particular bad-to-middling findings, but I think this is a “fail forward” moment, as the researchers noted that they would continue testing iterations of their modules. Clearly, there is a plethora of studies supporting game development in CS courses, but the modules the researchers developed for this study are so similar to those I’m looking at to teach Java next year that I’m still kind of nervous/curious to know why they didn’t see better results. Or maybe we can just blame it on Java…

Reference

Kurkovsky, S. (2013). Mobile game development: Improving student engagement and motivation in introductory computing courses. Computer Science Education, 23(2), 138–157. https://doi.org/10.1080/08993408.2013.777236

Article Review #3

A Case Study on the Use of Blended Learning to Encourage Computer Science Students to Study – Pérez-Marín & Pascual-Nieto, 2012

Honestly, I partially chose this article because the title made me laugh: “A Case Study on the Use of Blended Learning to Encourage Computer Science Students to Study.” The researchers get right down to business finding ways to get CS students to engage with the material after class. Apparently, the study habits of CS students are so notoriously bad that the authors didn’t feel the need to substantiate the claim their entire study rests on. While I would have liked to see more than one article back up their assertion, it was clear that they saw a trend in their computer science department and wanted to tackle it. I had already mentally committed to this article before I realized that they were gathering data from a class held in the 2007–2008 school year. The paper itself was published in 2012, so I thought we were dealing with more recent applications of blended learning. Still, there may be a valid takeaway.

To test the efficacy of blended learning study tools for university CS students, researchers took 131 students in the second-year course “Operating Systems” and had half of them use a computer program to study, while the other half (the control group) received a print version of the study content. The online study program, “Willow,” was developed by the researchers. Students were able to type in a response, have it compared against answers pre-loaded by the instructor, and receive immediate feedback (Figure 1). Data was collected through pre/post assessments and a satisfaction questionnaire. The experiment took place near the end of the semester and (weirdly) lasted only as long as a one-hour study session bookended by the assessments. All students were then allowed to use their study tool of choice for the month leading up to the final exam, at which time they took the satisfaction survey.
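The article doesn’t spell out Willow’s matching algorithm, so this is purely my guess at the simplest version of the idea: score a typed free-text answer by its keyword overlap with the instructor’s pre-loaded answer, and pick a feedback message from the score. All names here are invented for illustration.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of a Willow-style immediate-feedback check (my assumption of the
// mechanism, not the actual system): keyword overlap against a model answer.
public class FeedbackSketch {
    private static Set<String> words(String s) {
        // Lowercase and split on non-word characters to get a bag of keywords.
        return new HashSet<>(Arrays.asList(s.toLowerCase().split("\\W+")));
    }

    // Fraction of the instructor's keywords present in the student's answer.
    public static double score(String student, String instructor) {
        Set<String> want = words(instructor);
        Set<String> got = words(student);
        int hits = 0;
        for (String w : want) {
            if (got.contains(w)) hits++;
        }
        return (double) hits / want.size();
    }

    public static void main(String[] args) {
        String key = "a deadlock occurs when processes wait on each other";
        String answer = "deadlock happens when two processes wait on each other";
        // Immediate feedback based on a crude overlap threshold.
        System.out.println(score(answer, key) >= 0.6
                ? "Good match!" : "Review this topic.");
    }
}
```

Even something this crude beats a paper study guide on one axis: the student finds out *right now* whether they hit the key ideas, which is presumably where the observed engagement came from.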

Figure 1 – Screenshot of Willow

While the results showed that the computer study group had a higher positive difference between their pre/post assessment scores 75% of the time, the margin was not statistically significant (Pérez-Marín & Pascual-Nieto, 2012, p. 78). The authors weren’t surprised by this finding, as their actual goal was to show that students must study for an exam over the course of several weeks. I struggle to understand why they set up this portion of the experiment this way if they were actually looking to prove an idea that required an extended window of time for data collection. Once the initial study session was over and students were able to choose their study tool, 99% of students used a Willow account, and researchers saw that students were using the program regularly in the weeks leading up to the exam (Pérez-Marín & Pascual-Nieto, 2012, p. 80). But then they compared the increased studying anecdotally to the procrastination observed in the past when traditional paper study guides were used; they did not have data to back this up.

The results of the satisfaction questionnaire were unsurprising, especially considering all of the subjects were in the computer science program. They overwhelmingly felt that using a computer to study was good, a positive complement to their classwork, and their preferred method of study for the future. Researchers also took into consideration observable satisfaction during the hour-long study session, which led to my second laugh in this article: “The first reaction observed is that students assigned to the control group complained more than students assigned to the test group” (Pérez-Marín & Pascual-Nieto, 2012, p. 76). Computer science students complaining about not getting to use computers: typical.

While I can’t say that this article gave me ideas for my classroom, or helped me make new connections, it’s always good to know what came before you. Articles like this are like being visited by the ghost of technology past. If you don’t understand what the field used to look like, then you won’t fully appreciate how far we’ve come in just a decade. Today, I wouldn’t even think to hand my CS students paper study guides, but clearly that used to be the norm. Blended learning is no longer just a study tool but an active player in daily curriculum. This article may lack the appropriate data to show a positive effect on student scores over time, but the reasoning behind using blended learning tools is solid and similar to our reasons today (student control, flexibility, personalization). Unfortunately, even with the normalization of blended learning tools, it’s been my experience that CS students still slack on the studying.

References

Pérez-Marín, D., & Pascual-Nieto, I. (2012). A case study on the use of blended learning to encourage computer science students to study. Journal of Science Education and Technology, 21(1), 74–82. http://www.jstor.org/stable/41413286

Article Review #2

Connectivism: Learning theory and pedagogical practice for networked information landscapes – Dunaway, 2011

Part of me wonders if I’m drawn to connectivism because it uses language similar to that used in my computer science classes. While I can see where connectivist strategies fit in my own classroom, I’ve been having a hard time envisioning how connectivism is general enough for any classroom. The digital jargon that is inherent to the theory makes it feel a little cold compared to constructivism’s focus on student experience and contextualized learning. But I do think there’s something really interesting here! For this article review I wanted to poke around connectivism a bit more and see how some of the early adopters/developers were filling out this (potentially) new learning theory.

The article I found, “Connectivism: Learning theory and pedagogical practice for networked information landscapes”, was written specifically with librarians and those who work with information instruction in mind. The author, Michelle Dunaway, found a lot of overlap between the networked learning in connectivism and the role of those who teach students how to read, interpret, and analyze information sources. While it wasn’t exactly the K-12 classroom example I was looking for, the relationship between connectivism and information instruction is definitely strong and it was interesting to read about the environment where this learning theory thrives.

To sum up how this article defines connectivism, Dunaway says, “[t]he learning is the network” (2011, p. 680). While I find this fairly catchy, it feels impersonal to describe learning without first mentioning the student; it’s as if information comes first and then you apply the student, whereas other learning theories start with the student and apply information. Weird priorities, but maybe it’s just this article that pitches it this way. In connectivism, the student learns as they make connections between nodes of information. These nodes all reside in the student’s personal learning network, which contains a wide variety of information sources and tools (Dunaway, 2011, p. 676). Because learning rests in the ability to make connections, pattern recognition and the ability to evaluate information sources are highly valued skills.

What made this article stand out over others about connectivism is that it goes beyond just explaining the theory; Dunaway also addresses two important literacies that are nurtured by connectivism (neither of which I had ever heard of). First, metaliteracy: “an overarching and self-referential framework that integrates emerging technologies and unifies multiple literacy types” (Dunaway, 2011, p. 679). Apparently there are a lot of 21st-century literacies floating around, and metaliteracy ropes them all together and highlights their similarities to benefit learning. Second, transliteracy: “the ability to read, write and interact across a range of platforms, tools and media[…]” (Dunaway, 2011, p. 679). Not only should students be able to gather information from multiple mediums, but they should know how to move information efficiently from one format to another. Transliteracy focuses on the relationship between users and their digital tools (Dunaway, 2011, p. 679). This section of the article challenged my understanding of the term literacy, especially with concepts like metaliteracy, where you’re trying to think about being literate in literacies. Transliteracy is easier to wrap my head around, but I also question whether it’s an actual literacy or just a skill, or maybe those two things are the same?

Connectivism in the context of research libraries and information instruction makes sense to me, as they are basically in the business of helping students build personal networks of information and matching students with information tools; it’s also a theory I can see integrating pieces of into my own classroom. But even after reading this article I’m struggling to envision how to sell this new learning theory to the English teacher next door whose classroom is only lightly blended. I think the theory is too jargon-heavy at the moment to be generally accessible in the same way some past learning theories are. Yet, despite its shortcomings as a potential learning theory, I’m not ready to give up on connectivism; I do think there has been a change in the positioning of information, teacher/student roles, and learning because of the internet and digital tools. Here’s hoping I can form some clearer opinions about it over the course of the semester!

References

Dunaway, M. (2011). Connectivism: Learning theory and pedagogical practice for networked information landscapes. Reference Services Review, 39(4), 675-685. https://doi.org/10.1108/00907321111186686

Web Album

My images are hosted on Google Photos:

ED 659 Web Album – Sunset Paddle

All of the images were taken with an Olympus E-M10 Mark II with a 40-150mm lens. Other than resizing and cropping no editing has been done to the photos.

Print Format

Feather

Good grief, I paddled around this stinking feather forever. I wanted to practice some close up shots with the bigger lens, and as you can see I got incredibly lucky with the light. I have rule of thirds working in my favor, and while the reflection is nice I think it’s the details on the feather that are really interesting.

I have about 20 versions of this photo as I tried to manually set the ISO, f/stop, and shutter speed. The light was killing me on manual and I wasn’t quite skilled enough to get dark water and lit up feather on my own. So I’ll confess now that I used the auto setting for this one, but I just want you to know that I tried.

Aperture: f/5.6

Shutter Speed: 1/250

ISO: 200

Blue

Again, I was practicing my close up shots. I love the color and lines in this one. Composition could be better as it doesn’t make great use of rule of thirds. I tried to focus on the top band of the shell, and it’s leaning toward a bokeh style (but the focus caught a little too much of the rock!).

Had to up the ISO and reel back the shutter speed on this one compared to the feather photo as I had moved over to the shady side of the lake. Since we were out at sunset my biggest challenge was adjusting to the ever-changing amount of light. I wanted to keep the colors cool and a bit dark.

Aperture: f/5.6

Shutter Speed: 1/80

ISO: 500

Monitor Format

Swans

Meet our friendly neighborhood honkers! We have some very curious swans on the lake who manage to swim just close enough for pictures. I’m glad I was using my bigger lens for this project as I was able to get some nice details on the swans. The lake edge creates a nice 1/3 line, and I kind of like the unusual placement of the swans on the top left side.

There was a lot of light hitting the trees and the swans when I was trying to set up for this picture. I think f/9 ended up being the smallest aperture (the largest f-number) I used for the whole project. I have a lot of washed-out versions of this picture. It was chilly out there and I wanted that reflected in the cool tones of the photos!

Aperture: f/9

Shutter Speed: 1/250

ISO: 200

Paddle

This one is my favorite, and not just because it’s of my husband. I generally shy away from action shots and pictures of people. Let’s be honest, landscapes and architecture are a little more forgiving for the amateur photographer. This photo started vertical and I cropped it into a horizontal close-up. I tried to take all photos in this series from water level, which gave this shot a particularly interesting angle.

He was paddling very, very slowly so I could get this picture. Had he been going full speed I would have needed a faster shutter speed. Again, I was going for darker tones so it kind of worked out to use a lower ISO and still be able to get some detail clarity.

Aperture: f/5

Shutter Speed: 1/200

ISO: 200

Web Format

Creek

Upon reflection, this photo wasn’t the best choice for the Web format because now that it’s super small it’s lost some of its depth. I wanted to create contrast between the dark foreground and the creek leading toward the lit-up trees in the background. Symmetry was also working in my favor on this one.

I guess the aperture is leaning toward smaller (a higher f-number than most of my other pictures) in this case, and I think that helped keep the foreground darker and create depth. A low ISO number kept the plants in the foreground sharp without lighting them up. This was a weird one with the contrasting light, and I had to try quite a few different combinations to find something that worked.

Aperture: f/7.1

Shutter Speed: 1/320

ISO: 200

Symmetry

If you don’t take a reflection picture, were you even really on a lake? I did my best to not create any extra ripples so that the reflection of the trees was as clear as possible in the water. The colors were awesome and the reflection just emphasizes that point. My goal was symmetry so it was also important to get the shoreline straight!

Honestly, I can’t remember why I set the shutter speed so high. I was struggling to get a darker version of this picture for a while. Instead of bright yellow I wanted the trees to be more orange, with visible variation in that color. The aperture and ISO are both middling, so I bet the high shutter speed was just there to reduce the amount of light coming in.

Aperture: f/4.5

Shutter Speed: 1/640

ISO: 500
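That hunch checks out with a quick back-of-the-envelope exposure-value calculation. The formula is the standard one (N is the f-number, t the shutter speed in seconds); the comparison to the feather shot is my own arithmetic:

```latex
\mathrm{EV} = \log_2\!\frac{N^2}{t}, \qquad
\mathrm{EV}_{\text{symmetry}} = \log_2\!\bigl(4.5^2 \times 640\bigr) \approx 13.7, \qquad
\mathrm{EV}_{\text{feather}} = \log_2\!\bigl(5.6^2 \times 250\bigr) \approx 12.9
```

A higher EV means the settings admit less light, so this shot let in roughly 0.7 stops less than the feather shot; the fast shutter was doing the work of offsetting both the wider f/4.5 aperture and the extra ~1.3 stops of sensitivity from ISO 500.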

 

Article Review #1

Managing the gap between curriculum based and problem based learning: Deployment of multiple learning strategies in design and delivery of online courses in computer science – Bygholm & Buus, 2009

Thus far in my teaching career I have been running my computer science courses on a blended model. Often I can piece together what I’m looking for with a couple of different programs and some choice collaborative “unplugged” activities. I’m constantly poking around the internet for new curriculum, but generally it all follows a similar pattern: direct instruction through video/slides and then individual practice. The more self-contained the online course is, the more likely it is to follow this pattern. Looking for the “why” wasn’t going to be particularly productive, so I broadened my search. For this article review, I sought out studies about online computer science curriculum and some of its potential structures.

The article that caught my eye was “Managing the gap between curriculum based and problem based learning: Deployment of multiple learning strategies in design and delivery of online courses in computer science,” which is a bit of a mouthful if you ask me. Between 2004 and 2006, 40 online computer science courses were jointly developed and delivered by the University of Strathclyde in Scotland and Aalborg University in Denmark. The article was written by the researchers from Aalborg University, who had a considerable investment in problem based learning prior to starting the project and discussed it in detail in the article. They defined problem based learning as “aimed at providing the student with abilities to acquire knowledge appropriate to solve problems within the domain. Focus is on learner experience, participant control, learner self-management and guidance” (Bygholm & Buus, 2009, p. 13). In fact, their whole university is so passionate about problem based learning that they have a variation of it called “The Aalborg PBL model,” which uses problems as the starting point for learning, with curriculum assigned as needed to solve the problem or related to its theme (Figure 1) (Bygholm & Buus, 2009, p. 17). They eventually decided that their own version of problem based learning was too “radical” for their more traditional, curriculum based partners over in Scotland, and that used alone it failed to support the project’s need to reach stated learning objectives (Bygholm & Buus, 2009, p. 19).

Figure 1: Aalborg University PBL model

The partners at Aalborg University had a clear agenda to get more problem based learning into the online computer science courses. The problem, of course, is that those at the University of Strathclyde were more inclined toward instructor-led, curriculum focused learning. I’m not sure why they decided that they were a good match for each other in this project, but there you go. The two schools were speaking different learning languages, and their project ended up being a learning model creole. Their eventual compromise supported both learning strategies by providing opportunities for students to organize their own learning around specific problems within a set module while also being exposed to content through more direct teaching (Figure 2).

Figure 2: Co-designed model for online computer science courses

Within the article there is a brief discussion of the varying definitions of success between the curriculum based and problem based models. This was an issue I hadn’t previously considered. It’s easy to look at the activities used in each model and spot the differences, but it’s a little harder to process that they have entirely different overall learning goals. Success in the curriculum based model involves a “specified quantification in certain methods and techniques” (Bygholm & Buus, 2009, p. 18). It is about absorbing a wide breadth of content knowledge, generally through teacher-led instruction. Alternatively, success in problem based learning takes the form of “competencies to solve problems within the [content] area, independent of specific curriculum bites” (Bygholm & Buus, 2009, p. 18). Their co-designed learning strategy and its included activities seek success from both models by passing control between teacher and student and providing opportunities for both direct content delivery and hands-on practice. Students would be exposed to a breadth of necessary content knowledge and also learn how to problem solve within the domain. Overall, I think the model they developed together is well constructed, especially for computer science, where even at lower levels there is a lot of content-specific knowledge to absorb before you can start building, creating, and solving. The courses I come across online may have a creative aspect so that students develop their own projects under certain restrictions, but in general they severely lack the collaborative/group component and extended student-led learning.

I also didn’t realize how political course development could be. Not only do individual people have their chosen learning strategies, but entire universities may subscribe to a particular model and want to push that agenda. Even in an article about finding the learning model middle ground, there was clearly a lot of stubbornness during the process, and the authors often came across as smug about their school’s use of one model over the other. I guess I hadn’t thought of it as a competition? Maybe I should be less surprised about the repetitive online course structures; clearly it takes a lot to get educators on the same page, or to even mix models outside of their comfort zones.

References:

Bygholm, A., & Buus, L. (2009). Managing the gap between curriculum based and problem based learning: Deployment of multiple learning strategies in design and delivery of online courses in computer science. International Journal of Education and Development using Information and Communication Technology, 5(1), 13-22. Retrieved from http://search.proquest.com.proxy.consortiumlibrary.org/docview/886577964?accountid=14473

Not-So-Final-Project Part 1: Sharing & Remixing with Scratch

Almost a year ago to the day, I was really nervous about teaching my first middle school computer class. Not only was it a new age group for me, but the subject was also a little beyond my wheelhouse. Luckily, I stumbled upon some great curriculum. MIT’s Scratch provided simple block programming and, even better, they knew how to make it work in the classroom. So, I borrowed lessons from Scratch’s “Creative Computing” curriculum and reworked them into a blended format. The images below are screenshots from my Moodle course. **All the little creatures on the assignments are Scratch sprites. In total there are 6 units whose projects and coding concepts get progressively more difficult. Integrated into the unit themes, students are also given opportunities to de-bug code, complete group work, and give constructive feedback.

For my “not-so-final project,” I added five assignments to the unit “Scratch Surprise & Sharing/Remixing” and revised one assignment that was already in place. This unit immediately follows the creation of the students’ Scratch accounts. My goal with the additional assignments was to address an issue that I’d run into last year. Students had a tendency to overlook the little details as they wrapped up a project, but two of those details were fundamental to engaging on Scratch: sharing your project and giving credit when you remix or borrow. I purposely placed the new sharing and remixing assignments at the beginning of the semester, when I first introduce Scratch, so that students would be aligning with the Scratch “Community Guidelines” from the get-go. Not only did I want to introduce and reinforce a procedure for students to appropriately share and remix projects, but I also wanted to help them meet the norms of this digital community.

Unit Overview

[Screenshot of the unit overview in Moodle]

Scratch Surprise – This lesson was already in place, but I went back through to add images and match the language to the assignments that follow. The new sharing assignments use this project as a base, so I needed to clarify exactly what I wanted to see in this project before moving them to the next step.

[Screenshot of the Moodle assignment]

Sharing on Scratch – Before writing up this assignment, I didn’t know that Scratch projects fell under a Share-Alike license. So I got to drop a little Creative Commons knowledge, but not too much because, you know, attention span. For me, knowing that the projects had this license made it all the more important to encourage students to share their work because it’s what will allow the Scratch community to grow. I was so excited, I had to highlight it.

[Screenshot of the Moodle assignment]

Sharing Reflection – I think the big question on this reflection is number 2. In the “Sharing on Scratch” assignment I basically told them that other people are going to mess with their precious projects that they spent hours and hours working on. That can be a tough pill to swallow and instead of pretending that it’s all cool (which I usually do), it’s important to let them vent their fears/hesitations about loss of ownership.

[Screenshot of the Moodle assignment]

How to Remix and Give Credit – Since everything shared on Scratch has a Share-Alike license, you’re able to remix any project you find on the site. Even though the license doesn’t require attribution, it’s part of the Community Guidelines that when you remix or borrow from another Scratcher, or pull media from elsewhere on the web, you give appropriate credit. The note about pirated music was actually a late addition to the assignment. After talking with the other middle school teachers this week, it was clear that we needed to be consistent between classes on how students use online media for their classwork. As for the second video about how to give credit, I recorded a screencast with a sample project that remixes someone else’s work and uses royalty-free music.

[Screenshot: link to the YouTube video]

[Screenshot: link to my screencast video]

My First Remix – This project tries to scaffold the three skills that the students just learned: how to build; how to share; and how to remix.

[Screenshot of the Moodle assignment]

Online Media – I’ve never used the “Choice” assignment before, so I’m curious to see how it plays with the group. My goal is to use the results for a bigger class discussion about copyright, ownership, and maybe a little IP. They’ll each answer individually on the Moodle assignment. Then I’ll put them in their respective result groups to develop some arguments for their choice. When they have something ready to share, we can tackle the question from a couple of different angles to test their choices (What if you buy the music and then distribute it? What if a million other people have already shared it?). We should be able to loop back around to the “no pirated music” topic, so they’ll know it’s not just me crushing their pirate-y dreams.
[Screenshot of the Moodle assignment]