At the 15th Annual Durham Blackboard Users’ Conference, I attended a session discussing the shared experience of using Turnitin in contrast to Blackboard’s own SafeAssign. While much frustration was vented at Turnitin’s unreliability during submission periods, the key question was whether SafeAssign should be used instead, on the assumption that it is more stable for students to use.
The initial focus was on their functionality as plagiarism checkers, so let’s clarify what they both offer. Firstly, they are not plagiarism checkers, as they are promoted, but rather check for text similarity between a student’s paper and the resources in their databases. Indeed, this key sentence appears in a Turnitin guide for students: “Turnitin does not detect or determine plagiarism—it just detects matching text to help instructors determine if plagiarism has occurred.” (http://bit.ly/14IEaXT). Academic judgement is therefore required to distinguish accidental from deliberate plagiarism.
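To make the distinction concrete, here is a minimal, purely illustrative sketch of what “text similarity” means at its simplest: comparing overlapping word sequences (“shingles”) between two texts. This is not how Turnitin or SafeAssign actually work internally (their methods and databases are proprietary and vastly larger); it simply shows why a similarity score is evidence of matching text, not a verdict of plagiarism.

```python
# Toy text-similarity check using word 5-gram "shingles".
# Illustrative only: a high score flags matching text for human review;
# it cannot by itself establish plagiarism.

def shingles(text, n=5):
    """Return the set of overlapping n-word sequences in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=5):
    """Jaccard similarity between the shingle sets of two texts (0.0-1.0)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

source = "the quick brown fox jumps over the lazy dog near the river bank"
paper = "the quick brown fox jumps over the lazy dog near the old barn"

print(f"similarity: {similarity(source, paper):.2f}")
```

Even a high score here only tells an instructor that the texts overlap heavily; deciding whether that overlap is quotation, common phrasing, or copying remains an academic judgement.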
A second consideration is the database behind the “originality check”. One service has a huge student paper archive and refers to over 130 million articles, while the other has fewer student papers and references the ProQuest ABI/Inform database with over 1,100 titles, or 2.6 million articles. So while there are other factors, Turnitin appears to be the better resourced, but it is let down by its instability at critical periods. This really illustrates the institutional dilemma posed by the Clash.
Yet such a decision should be made in the wider context of electronic management of assessment (EMA), not in isolation. The Durham discussion identified QuickMarks as Turnitin’s key advantage over SafeAssign, despite features such as delegated marking and video feedback being available in Blackboard. So really the Durham debate moved beyond ‘plagiarism’, and this is the key point I raised at the meeting. It is vital to look at what institutions are trying to achieve with electronic submission right through to feedback. There is no perfect system yet, so it requires compromise and agreement on how to manage the assessment process within the module, department and institution. Even then, we are all assuming that there are no access problems, not just for submission but also for providing feedback. These services must realise that they have moved beyond ‘plagiarism checking’ to being part of a bigger picture. This is why reliability of access for all is critical; random system failure undermines e-assessment across the education sector.