Sunday, July 16, 2017

Parking Lot: Engagement 'equates' to Learning?


Source: KQED News - How to Determine if Student Engagement is Leading to Learning 

The article deals with a question that I think most educators would be able to answer - engagement does not necessarily mean that learning takes place. It is a myth that was debunked long, long ago.

Hence, while technology may engage the learner, it does not necessarily mean that the learner has learnt effectively through its use.

Of course, we need to first unpack what 'engagement' means (in the educational context). I think what's described in the 'hidden curriculum' clearly lists the elements that need to be present and interact "in" the learner. The outcome extends to the level of motivation they have to learn and progress in their education.

The article adds that engagement should also translate into deeper learning (initiated by the learner). To evaluate the degree of success, students must be able to demonstrate the understanding and skills they are expected to have acquired through the 'engaging' activities.

Saturday, June 03, 2017

Timed assessment - what does it try to find out?

This heading is eye-catching, especially to a Maths teacher!
It came from the post at edutopia: Tips for Tackling Timed Tests and Math Anxiety

It starts by explaining the drawbacks of "timed assessment" and then puts forward some suggestions.

Indeed, "timed assessment" is something we have been subjected to all this while - not just in Mathematics, but in other subjects too. It goes back to the rationale of such assessments - and of course, these days, more discussion revolves around the purpose of assessment, which has evolved from Assessment of Learning to Assessment for Learning.

Would you associate "Timed Assessment" more with "AoL" or "AfL"?

Timed assessment is widely used for placement purposes, though some "major assessments" include other components like coursework or core assignments to be completed over a given period of time (e.g. those involving some research or experiment to be carried out).

Instead of focusing on the concerns arising from timed assessment, perhaps we can look at its purpose from another perspective?

There is no doubt that timed assessment stresses candidates, as they are expected to demonstrate what they know within a limited time; psychologically, it creates tension in how the brain processes information. One's heart pumps faster as the end time draws closer - haven't we all experienced that before?

While I have not read up more (in academic/research papers) on this area, based on experience and reflection over the years, I think timed assessment has its merits. One of its "uses" is to help us determine how well one has understood the concepts and is able to apply that understanding fluently, which is also a demonstration of the degree of mastery (or proficiency)! It comes with two core elements: Understanding/Mastery + Effort
(with the assumption that one needs to put in effort for what's learnt to stay!)

In other words, students could have understood what went on in class, and may even be able to articulate deeper understanding. However, one who lacks practice, or has not had enough rounds of rehearsal, might still be able to retain and 'retrieve' the necessary knowledge and skills to process the task in a timed assessment - but would need more time to do so compared to someone of the same capacity who has put in the effort to internalise those knowledge and skills.

Given that 'limiting' factor which adds to the tension/stress, this student would not be able to complete the task as quickly as his peer who is of the same ability but has put in the effort to practise, rehearse and internalise.

That, I think, based on experience, accounts for most instances when students claim they do not have enough time to complete the paper!

一分耕耘,一分收获 (you reap what you sow)
This is not about finding out who knows and who doesn't - any form of assessment can do that.
I think...
It is about differentiating who learns better and shows the potential to extend their learning further.

Sunday, May 21, 2017

Feedback Strategy - Praise-Question-Polish (PQP)

One of the key takeaways from the recent webinar is the "PQP" strategy to provide feedback.
(once again, thanks to ETD colleagues who organised the virtual event, and thanks to Dr Tay for her sharing.)

Well, PQP does not only apply to students, i.e. to guide them in giving constructive/useful feedback; it is also useful for teachers in framing our feedback to our students. Personally, I will make a more conscious effort to adopt this strategy and role-model it.

Mindset change:

Oops! Having said that, I must clarify that this does not mean I do not provide feedback to my students. Of course, some of us would say: what kind of feedback do you expect Maths teachers to give? Isn't it a tick or a cross, or at most, circling the step where the error occurred? Well, yes, that's one form of feedback commonly practised by many of us, especially in our beginning years as teachers. Isn't "wordy" feedback more relevant to other disciplines such as languages or the humanities?

Not true.
I came to learn that there's a place for this in the subject (as I polished my craft over the years)!
事在人为 (it all depends on human effort)!

There is a place for less conventional exercises! For example, alternative assessments or performance tasks that come with rubrics! While writing qualitative feedback is something I was not used to, I think it is one way students would benefit more (rather than trying to 'interpret' the quality of their work from a tick or a mark).

Based on my humble experience, the easiest way to give feedback is to make reference to the rubrics - not by lifting the entire description, but by contextualising (or customising) it to the work students have submitted. Students will learn through the feedback and make the relevant changes/improvements/enhancements to better a piece of work of a similar nature. Personally, I observed this change in quality in the viva voce assignment (as compared to the practice task). Undeniably, it takes time to give feedback, and tasks of this scale are not assigned on a regular basis (like once a month). My point here is: given a relevant amount of feedback, students will respond and make progress. It's worth the time and effort.

My Personal Reflection:

The next question is: by making reference to the rubrics, is it enough?
You see, I'm trying to make a connection between what I have been doing all this while and the PQP strategy. I think, in my current practice, what's lacking (though not totally missing) is the first "P" - Praise. Being very focused on identifying what has gone wrong or is incomplete, and on targeting improvement, "Question" and "Polish" are two aspects I'm familiar with; the first "P" - Praise - is the area I will need to work on.

What we could do: 

Generic praise (such as "excellent" or "well done") without elaboration is not useful, though it makes the day of the learner who receives it. Praise needs to be targeted so that it affirms and spurs the learner to do even better (hopefully).

Similarly, when it comes to getting students to give peer feedback, as teachers, we need to provide scaffolds so that learners are really giving useful feedback to their peers, directed at the context (i.e. related to the task - the disciplinary knowledge and skills involved).

"Good job" or "Awesome" without elaboration is not very helpful, though it motivates the learner.
In fact, before getting students to provide peer feedback, we should ask the following questions:
  1. Do students know the purpose/intent of feedback?
  2. Do they know what areas they should specifically look out for when giving feedback (e.g. the strategy, or the aesthetic aspects of the piece of work)?
  3. Do they know what makes feedback useful to the recipient?
If these are not clear, they are just going through the motions of giving what they think is feedback.

In my opinion, to make peer feedback productive, it is necessary to discuss the three questions above with those new to giving feedback. With a clear rationale, the PQP strategy would make more sense to the students. On the other hand, stem questions would be necessary and very useful to help them kickstart this peer evaluation process. Though it seems very guided, it takes time for one to become familiar with the vocabulary before adopting it. Last but not least, the teacher needs to model its use so that it becomes something the students are familiar with before they take on a similar role (of giving feedback).

Indeed, I think we should seek clarity on the following before embarking on it and getting others (i.e. students) involved:
  1. Why Praise?
  2. Why Question?
  3. Why Polish?
As mentioned before, "praise" is not prominent in my practice. However, I recognise that it helps point out to learners what they have done correctly so that they know they are heading in the right direction. It could be an approach or a strategy they adopted, or even a decision that is sound and grounded in good reasons. It is a form of feedback. As such, we should elaborate so that the message (feedback) is reinforced. The outcome? One continues to do well in the area highlighted, or one could re-strategise/re-focus to work on areas that need more attention. It is also a means to motivate the learner.

To "question" seems obvious, doesn't it? It seems to be a forte of most teachers, but sometimes we might "mix it up" with "polish". To question is about asking "why" - to seek understanding, to probe for clarity, to ask for elaboration. It is quite natural that we might already have passed judgement on the answer/working (subconsciously).
  • Does this sound like a "fault-finding" stage? To some extent, yes - with the intent to surface the errors/issues that one might have overlooked so that they can be addressed. So, we need to be mindful of the words and tone used at this stage when giving feedback. And it is important to keep in mind that the feedback is targeted at the work and not the person.
  • Indeed, it is important that the recipient of the feedback is clear about the rationale, so that he/she receives the feedback with an open mind and does not attempt to defend for the sake of defending.
To "polish" is about giving suggestions for improvement. I guess where we usually come from is correcting the error made, pointing out the gaps, or suggesting what could be added to make the solution more complete. On another note, I think it could also be about sharing a different perspective so that the learner gains a fuller picture of the subject matter he/she has been looking into. Of course, this "alternative" may also come in at the "Question" stage. All in all, the purpose of this stage is to enhance the outcome. It could be a case of correcting a misconception, or extending the breadth or depth of the matter under discussion.

Point to Ponder:

Indeed, after crafting the previous section (of this post), a thought popped up - shouldn't we give the learner some wait time between the "Question" and "Polish" stages? Having posed the question, shouldn't we give the learner time to think, process and respond? If we "polish" almost immediately after the "questions", are we depriving them of learning by handing them our 'prescription'? I wonder.

Well, I guess it all depends on how rigorous the learning activity requires the peer feedback to be.
In fact, I think it would sometimes be more effective to "re-organise and customise" the strategy into:
  • "Praise" followed by "Question"
  • "Praise" followed by "Polish" 

Scaffolds:

It is not difficult to find useful stem questions to aid the crafting of feedback at each of the three stages (PQP).

Here are some examples...

Praise
  • I like the part where...
  • I like the way you explained...
  • I like the order you used in your writing because...
  • I think the ending was...
Question
  • I was confused with...
  • I didn't understand...
  • Why did you leave out...?
  • Why did you include...?
Polish
  • Don't forget to add...
  • Are your paragraphs in the best order...?
  • Could you add more to this part...?
  • Have you tried...?
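(A small aside of my own, not from the webinar: if one wanted to reuse these stems in a digital template, they could be kept as a simple lookup. The TypeScript sketch below is purely illustrative - it merely packages a few of the example stems listed above so the same scaffold can be generated anywhere.)

```typescript
// Illustrative only: a few of the PQP stem questions from the lists above,
// kept as a lookup so the same scaffold can be reused across templates.
const PQP_STEMS: Record<string, string[]> = {
  Praise: ["I like the part where...", "I like the way you explained..."],
  Question: ["I was confused with...", "Why did you include...?"],
  Polish: ["Could you add more to this part...?", "Have you tried...?"],
};

// Render the scaffold as plain text, ready to paste into any template.
function pqpScaffold(): string {
  return Object.entries(PQP_STEMS)
    .map(([stage, stems]) =>
      stage + ":\n" + stems.map(s => "  - " + s).join("\n"))
    .join("\n\n");
}

console.log(pqpScaffold());
```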

Technologies for PQP: 

To facilitate documentation so that learners can refer back and learn (in the process of giving/receiving feedback), we can consider the following tools (note: I've only selected those that I think are common and easy to use):

(i) Padlet(s)

As suggested in the example shared by Dr Tay, Padlet could be used to support the facilitation of the three stages of PQP. In the illustration, one padlet was divided into 3 sections.

One suggestion to enhance the use of the technology:
If one is expecting several responses at each stage, it would be good to have 3 different padlets, one serving each stage, so that the feedback does not get messy.

I would probably use this when first introducing the PQP strategy to students, to see how they provide feedback (at each of the stages) on a common problem. From the padlet posts, they would learn (from their peers) how to provide relevant feedback on the task presented.


Based on experience, the stickies may overlap one another - so it might be a bit challenging to manage. In addition, it would be much neater to set the posting layout to be organised in a grid or linear fashion.


One could also activate the "Comment" feature for the padlet to allow learners to respond to the feedback given.

An illustration of how it could look:

The assumption: Re-classification of stickies is not necessary.


(ii) GoogleForm/ Sheets


GoogleForm is one of the most convenient tools, and it can be 'recycled' to serve several exercises if what's shown in the form is generic enough.
  • GoogleForm is easy to set up. Broadly, the structure is to collect each piece of feedback together with the problem number and PQP stage it refers to, with the responses landing in a spreadsheet.

What I like is that the feedback can be sorted by problem number (or even colour-coded).
In other words, we can use the same form in a classroom where students provide feedback on different sets of answers/presentations at the same time.
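(To make the sorting concrete - my own sketch, not from the webinar: suppose the linked response sheet has the columns Timestamp | Problem | Stage | Feedback; the sheet name and column order here are assumptions. A short Google Apps Script, typed below as TypeScript, could then group the feedback by problem number for review.)

```typescript
// Sketch only (assumed sheet name and column order): read the Form responses
// from the linked response sheet and group the PQP feedback by problem number.
function feedbackByProblem(): Record<string, string[]> {
  const rows = SpreadsheetApp.getActiveSpreadsheet()
    .getSheetByName("Form Responses 1")! // default name of the linked sheet
    .getDataRange()
    .getValues()
    .slice(1); // skip the header row

  const grouped: Record<string, string[]> = {};
  for (const [, problem, stage, feedback] of rows) {
    const key = String(problem);
    (grouped[key] = grouped[key] || []).push("[" + stage + "] " + feedback);
  }
  return grouped;
}
```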


The only drawback is that the learner cannot respond to the feedback in the spreadsheet unless he/she has the right to edit it. Again, depending on the maturity of the class, the teacher may choose whether or not to grant the edit right.

(iii) Google Slides or Doc

If Google Slides is used to collate the work - for example, in a language class where students work in pairs to annotate the visual text on an assigned slide - then at the end of the activity, the pairs can be assigned to provide feedback to one or two other groups.

We can simply insert the PQP 'structure' into the speaker notes of each slide so that students can insert their feedback for their peers there.
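(Again, my own aside rather than anything from the webinar: instead of pasting the structure slide by slide, a small Apps Script - sketched in TypeScript below, with a placeholder template - could pre-fill the speaker notes of every slide in the deck.)

```typescript
// Hypothetical sketch: write a PQP scaffold into the speaker notes of every
// slide in the active presentation, so each pair finds the same structure
// waiting when it is their turn to give feedback.
const PQP_TEMPLATE = "Praise:\n- \n\nQuestion:\n- \n\nPolish:\n- ";

function insertPqpIntoSpeakerNotes(): void {
  for (const slide of SlidesApp.getActivePresentation().getSlides()) {
    slide.getNotesPage().getSpeakerNotesShape().getText().setText(PQP_TEMPLATE);
  }
}
```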




Reflection: Learning the "HOW" and "WHAT" learners are thinking

Technology is a powerful tool that enables us to "see" and "document" what's going on in our minds, hence creating a much richer learning experience, for self and others!

Well, the above seems too broad and generic, especially the second half of the statement. "Isn't it a given?" some of us might think. Well, what assumption do we make if we say it's a given? That the first half of the statement is TRUE for the learners!

Episodes of learning activities flashed across my mind while attending (and mentally preparing for) the recent webinar. Indeed, I value this opportunity, as I hardly had a chance during this assessment period to sit down, think about T&L more thoroughly, and make good connections across my practices.

Sometimes we tend to claim (and therefore generate excuses) that, due to the nature of the subject discipline, there is little room for technology to come into the picture as often as it could. Well, it's often a "yes" if we look at what's expected of students in the syllabus document. E.g. they are expected to carry out proofs in Mathematics: the application of knowledge from various units, and the writing to present their train of thought - all to be written on paper. On the other hand, how do we know the students are not regurgitating what they memorised?
  • Drawing from my personal experience, sad to say, that's how I cleared my Physics at "O" Levels, and I went through the same when I did Calculus in my university days! Glad that I joined the teaching profession; over the years, I have gained better insights into how learners learn and how teachers could do things differently to enable learning. I believe that if I had learnt (or been shown how to learn) the abstract topics differently, or if my teachers had approached teaching differently, my learning experience and perspectives would be rather different today. The teacher/lecturer had taught; but I had not learnt well.
  • From another perspective, this experience has helped me better understand the struggles of learners in situations similar to mine. It helped me recognise and appreciate how important it is for teachers to possess the necessary and relevant knowledge and skills to bring about positive experiences for the learner.
So, how do we know how well students have learnt? By their doing the questions on paper correctly - is that enough? How do we assess the thinking behind a set of solutions? Many very experienced educators would share that the design of the assessment items must be clear: which aspect of learning are we trying to assess? That is the clarity of the intent and design of assessment items. The next step is: how often, and at what junctures, should we carry out the 'checks'? How tedious is the process? How big a group can we reach out and cater to (using traditional methods, without technology) in a timely manner? The entire purpose of formative assessment is timely intervention.
  • To conclude this paragraph: it's about the type of assessment item (to draw out what we want to know about our learner) and the timeliness of our response to the need.
What are some ways that enable us to find out how learners think, or to probe their thoughts? This is one challenge I notice students who have just joined the Sec 1 classes face. Not because they are not thinking - they have already proven themselves after braving the Primary 6 Mathematics problem sums! They are great problem solvers who apply heuristics that would surprise many of us! The next question is: how well can they articulate how they think? Or are they simply applying some "sure-work" method that they learnt?

Let's refer to the Singapore Mathematics Framework (syllabus document, p14) - which aspect(s) do we try to develop/inculcate in our students? We are good in some areas outlined in the 'pentagon' (in particular, the skills, concepts and processes). Metacognition is an area that we should work harder on; and some aspects of "processes" could be further strengthened by riding on the affordances of technology!

Making thinking visible is one approach that enables us to "see" how students think and how they process data/information, and it definitely enables us to diagnose misconceptions. It does not stop there! Because we can "see" the misconceptions, we are able to surface the difficult points (Three Point Framework; Yang & Ricks, 2012) which we, as educators, might not otherwise be aware of! There are many generic thinking routines that we can pick and choose from. One way of implementing them is to make the routine "really" visible - flag it each time it is used. Another way is to embed it more seamlessly in the way the problem/question is put across - which I think is more 'natural'.

To probe how they think, we often get them to elaborate on the thinking process, which includes articulating:
  • What key words do they identify - which frame the direction or the way they will approach the question/problem?
  • What info/data would they use or need to solve the question/problem?
  • What strategies would they consider? How do they decide which to use if there is more than one way? (What are the criteria?)
  • How do they articulate the steps clearly - where the concepts are explained?
  • How would they check that the answer is reasonable?
Well, doesn't the above sound like Polya's 4 steps to problem solving?

How do we do this in a less elaborate but deliberate manner during lessons, and at the same time reach out to as many learners in the class as possible?

I guess that's where technology makes its entrance and its presence felt in this learning ecosystem.
Its key is accessibility. It's not about physical accessibility (no doubt, students are well provided with technological devices in Singapore schools); rather, it provides a channel for teachers to access learners' thinking far faster and more easily. With the appropriate choice of tools, used in a well-thought-through manner, we are able to "see" how students think!

One of my favourite examples is getting students to explain, via viva voce, how they manage a problem (notice that I use 'manage' instead of 'solve'?): They
  • are required to articulate the background (concepts);
  • describe how they dissect the question (to understand what's given and what's required);
  • list down the strategies they can apply; and
  • articulate what considerations they need to take into account to choose a method;
  • and eventually, how they would check the reasonableness of their proposed solution or answer!
Of course, we don't expect students to be able to do all the above on the first attempt. It requires them to rehearse the same process over and over again, and to improve with the feedback given along the way. That's where we help them to see their potential!

Students do not know and do not carry out the above steps naturally; it's not something they are born with. Nevertheless, quite often, as teachers, we assume that all learners are quick to pick this up, forgetting that they need time to hone their skills and that feedback is necessary for them to make positive progress. Scaffolding is a must to bring students through all these stages; and the guidance given in their first attempt would be far more than in subsequent attempts. Rubrics help. But we must explain the rubrics and give illustrations so that students become aware of the expectations and polish their craft along the way.



Over the years, error analysis has been introduced into mathematics classrooms. It serves dual purposes. Indeed, it can be used as a strategy for differentiated learning, though I did not notice this when I first introduced it to my students.

The original intent, which remains one of the primary intents, is for students to identify and explain what's wrong with a piece of working (which is usually the result of a misconception). Thinking more deeply, what is the underlying assumption behind this?
  • In order to identify the error, the assumption is that students already know the concept well enough to see or identify the step that went wrong. There is another group who can identify the error but are unable to explain the misconception - simply because, through their own experiences, they might have made enough similar mistakes (or had similar mistakes pointed out to them before) to spot the mistake very quickly. It is possible that this latter group could make the same mistake themselves, because they have yet to understand the concept behind the operation. To differentiate these two groups (on who really understands what's going on) would require them to explain the thinking behind the error. This is the first step in developing their metacognition. The ability to articulate demonstrates how well they know the concepts (as concepts in isolation, as well as in their links to others).
  • On the other hand, error-analysis exercises would quickly filter out those who are still grappling with the basics or still functioning at the operational level. The likelihood is that the given working reflects the very misconception these students already hold, so they would not be able to surface any mistake in the working at all! Hence their conclusion is "nothing is wrong" - but I guess that is precisely the worrying sign!
  • We also have rote procedural learners who would probably re-attempt the entire question using the 'correct' method. To surface the error in the given working, they "spot" the difference and try to describe the mistake. They are unlikely to be able to explain the thinking behind the mistake (i.e. the misconception), but they are able to tell, operationally, what has gone wrong.
Indeed, error analysis is a very powerful strategy to differentiate learners - as they interact and exchange observations during discussions. On another note, for this to play out well in a classroom setting, it is necessary to maintain a safe learning environment and to cultivate an open mindset among the learners.


How does technology come in to support the above?

It can be as simple as getting students to provide a brief explanation to support/justify their answers. No sophisticated technology is required to facilitate the above processes - a blog (with comments) or a Google Form/Spreadsheet (embedded in a blog) would do.





Saturday, May 20, 2017

Webinar: Assessment for Learning (AfL) - what does it encompass?

Attended a webinar (organised by ETD) yesterday and was glad to have one solid hour of learning and engagement :)

At the start of the webinar, a poll was conducted among the participants - to indicate which of the following they thought described assessment for learning most appropriately:
  • Thermometer 
  • Stethoscope
  • Mirror 
  • GPS (Global Positioning System)
Dr Tay spoke about the big picture (to set the context) before sharing a number of classroom examples to illustrate how one could formatively assess learners' understanding (in particular, by tapping on the affordances of technology). Indeed, the overarching idea when designing an AfL learning experience is to ask the following THREE questions:
  • Where are the learners?
  • How best to get there?
  • Where do the learners need to be?
This is accompanied by two important principles: Find the gap; Fix the gap

Based on the way the questions are phrased, there is no doubt that GPS seemed to be the most appropriate "answer".

Personally, I think all (4 items) are relevant and contribute to the big picture, especially if we talk about "effective" assessment for learning.

It was obvious that GPS best describes the "moves" involved:
To begin with the end in mind (where the learners need to be eventually), we need to know our learners' strengths and gaps: what they know, what they do not know, what their learning challenges/difficulties are, and who they are (which would involve other areas like learning styles). Only then can we identify the most appropriate/suitable approach to integrate into our lesson/assessment design so that they benefit from the process.

How about the rest (thermometer, mirror and stethoscope)? Can we do without them?
  • Thermometer - it's about measurement. Indeed, I would equate it with our sensing: to sense the readiness of students to move on, or to sense if they are stuck at a difficult point. It also means being sensitive and responsive to students' needs, which requires us to be flexible. For example, do we insist on completing what's planned for the 60-min lesson - 'die-die' (at all costs) must finish - or should we "detour" or change path because our sensing tells us that students need additional support or scaffolds that we might not have anticipated? Otherwise, will we end up in the situation of "we have taught; but have the students learnt?" This is essential to alert us to where the learners are.
  • Stethoscope - an item I would associate with diagnosis, which I think is an important step. If the diagnosis is inaccurate, we would plan and implement something irrelevant or inappropriate, and the good intent would never lead to a fruitful or productive outcome. Diagnosis does not necessarily refer only to what's immediate; sometimes it surfaces other underlying issues which may or may not be something we can address immediately. We need to 对症下药 (prescribe the right remedy for the ailment), or else it would be a case of 前功尽弃 (all prior effort wasted).
  • Mirror - a close association with reflection (on practices). I would regard this as a 'given': teachers regularly draw on their past experiences, and review and reflect on their practices to hone their skills. The reflection also means they can best match approaches/strategies to what is needed (as surfaced through the diagnosis).
Hence, I think all of these complement each other and, when synergised, would create a
1 + 1 + 1 + 1 > 4 impact.