A core method, included in every one of these models by default, is relations(). You can write this method quite easily by hand. The great thing about the relations() method is that it turns relations into attributes of the model itself.
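An empty relations() method, as generated by Yii's default model scaffolding, might look something like this (a sketch; the class and table names are invented):

```php
<?php
// A freshly generated Yii 1.x Active Record model: relations() is
// present but empty until you fill in your relational rules by hand.
class Person extends CActiveRecord
{
    /**
     * @return array relational rules.
     */
    public function relations()
    {
        // NOTE: you may need to adjust the relation name and the related
        // class name for the relations automatically generated below.
        return array(
        );
    }
}
```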
That is to say, attributes of the related tables can be accessed directly from the model in which they are declared. Sometimes you might refer to the two attributes separately, for example when you simply want to echo the content in a view somewhere. This is a little inefficient if you are doing it a lot throughout your site, because you need to keep re-entering the same code correctly over and over again, and avoiding this is a main reason for going with an MVC framework like Yii in the first place.
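Assuming, for illustration, a Person model with firstName and lastName columns (invented names), such a view snippet might look like this:

```php
<?php
// In a view that has been passed a $model of class Person:
// echo the two underlying attributes separately, formatting them
// by hand each time you need them.
echo CHtml::encode($model->lastName) . ', ' . CHtml::encode($model->firstName);
?>
```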
But the real trouble comes if you want to use the attributes in a context where the method you are invoking expects a single attribute, as is the case, for example, with the Yii linking method CHtml::link. The way around this is to create a new compound attribute in your model. This means taking the two underlying attributes in your table and combining them into a single attribute, already arranged the way you want, that you can then invoke in other methods; in this case, by adding a compound attribute to the Person model.
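A sketch of such a compound attribute, again assuming invented firstName/lastName columns: in Yii 1.x a getter method automatically behaves like a read-only attribute of the model.

```php
<?php
class Person extends CActiveRecord
{
    // ... other model code ...

    /**
     * Compound attribute: combines the two underlying columns into a
     * single, pre-formatted value. Because Yii maps getXxx() methods to
     * attributes, this can be accessed as $model->fullName.
     */
    public function getFullName()
    {
        return $this->lastName . ', ' . $this->firstName;
    }
}
```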
Note: there is a much more efficient way of encoding this last example using the same principle; see this post. As we are building our workflow manager, we are discovering that we develop a more intuitive interface if some terms are always hyperlinked and point to a standard presentation of the relational information. One example of this might be the names of people associated with the workflow (editors, authors, copyeditors, production assistants).
One way of doing this in Yii would be to modify the views associated with each table in the site so that every time a name is called, you get a link. This is contrary to the spirit of the MVC model, however, since it means you are using a view to present logic about the underlying model. And it is also prone to error, since it means you (a) need to find every possible invocation of the name in all your various views and (b) must not make an error as you enter the same code over and over again in all these different views.
The better approach is to add this functionality to the underlying data model that supplies the information to the entire site in the first place—that is, to the model for the database table that is providing the name information and the page you want to link to in the end. This post shows you how to use a similar method, first making compound attributes and then wrapping them in a link.
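A sketch of what that might look like in the Person model (the route 'person/view' and the attribute names here are assumptions, not the original post's code):

```php
<?php
class Person extends CActiveRecord
{
    // ... other model code ...

    /**
     * Compound attribute that wraps the person's last name in a link to
     * their profile page. Accessible as $model->lastNameLink.
     */
    public function getLastNameLink()
    {
        return CHtml::link(
            CHtml::encode($this->lastName),          // link text
            array('person/view', 'id' => $this->id)  // route plus GET parameters
        );
    }
}
```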
Once this has been added to my Person model, I can add a link to the person profile in any view by just invoking the method: from now on, the LastNameLink functions as an attribute of the Person model and can be used in exactly the same way actual direct, table-based attributes can be invoked. This code will produce a link to the person's profile page. A common set of arguments is as follows; the comments explain what each array means:
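For illustration, a CHtml::link() call with its two common argument arrays might look like this (all of the values here are invented):

```php
<?php
echo CHtml::link(
    'Profile',                 // link text (the label the user sees)
    array(                     // URL array: route plus GET parameters
        'person/view',         //   controllerID/actionID
        'id' => 42,            //   appended to the URL as ?id=42
    ),
    array(                     // HTML options array: rendered as tag attributes
        'class' => 'profile-link',
        'title' => 'View this person',
    )
);
```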
Is there some method or class somewhere that stores this information?
And if so, how do I get at it? If you create new users you might devote a considerable amount of time trying to get them into the admin class, all to no avail. There are a couple of choices here. One is to change the code to allow some other determinant there. Moodle 2 takes a different approach.
But while this introduces great flexibility, it can be quite a cumbersome system to use at first. There is an important difference between Moodle and Blackboard/WebCT short-answer questions that instructors should be aware of, namely that Moodle short-answer questions allow only one answer field. When questions with several blanks are imported into Moodle, each question is converted into a form in which there is a single blank that has four possible correct answers.
There are various ways of asking the same kinds of questions in Moodle. The easiest, when you are dealing with imported questions, is to ask for a single quality in each answer. So instead of one question asking for part of speech, person, tense, and number, you might have four different questions: one for part of speech, another for person, a third for tense, and a fourth for number.
A second way of asking this kind of question in Moodle is to use the embedded answer (Cloze) question type. These are harder to write, but are arguably closer to the paper equivalent of the same type of question. A different case is the essay or translation prompt, for example: "Write a modern English translation of one of the following passages in Old English in the space provided below." The point of this format is to provide the student with a choice of topics. If students all write their essays or translations at the same time, you can build your choice of topics by hand and write them into a single question.
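For the parsing question discussed above, an embedded-answer (Cloze) item might look something like this (Moodle Cloze syntax; the word and expected answers are invented for illustration):

```
Parse the Old English word "lufode".
Part of speech: {1:SHORTANSWER:=verb}
Person: {1:SHORTANSWER:=third}
Tense: {1:SHORTANSWER:=past}
Number: {1:SHORTANSWER:=singular}
```

Each {…} field is graded as a separate blank, so a single question can ask for all four qualities at once.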
The challenge comes if you want to be able to allow your students to write the test asynchronously, as is common with learning management software. In such cases you want to be able to draw your essay topics or translation passages randomly from a test bank. All the basic elements you would need to do this are available in Moodle, in both the 1.x and 2.x series.

I recently had a discussion with the head of a humanities organisation who wanted to move a website. The website was written using ColdFusion, a proprietary suite of server-based software that is used by developers for writing and publishing interactive web sites (Adobe n.d.).
After some discussion of the pros and cons of moving the site, we turned to the question of the software.

Head of Humanities Organisation: We'd also like to change the software.

Me: I'm not sure that is wise unless you really have to: it will mean hiring somebody to port everything and you are likely to introduce new problems.

Me: ColdFusion is software that runs on a server. You don't need it on your computer.
You just need it on the server. Your techies handle that.

I might be exaggerating here—I can't remember if the person really said they used a Mac. But the underlying confusion we faced in the conversation was very real: the person I was talking to did not seem to understand at all the distinction between a personal computer and a network server—the basic technology by which web pages are published and read.
This is not an isolated problem. In the last few years, I have been involved with a humanities organisation that distributes e-mail by cc: list to its thirty-odd participants because some members believe their email system can't access listservs. I have had discussions with a scholar working on a very time-consuming web-based research project who was intent on inventing a custom method for indicating accents because they thought Unicode was too esoteric. I have helped another scholar who wrote an entire edition in a proprietary word-processor format and needed to recover the significance of the various coloured fonts and typefaces he had used.
And I have attended presentations by more than one project that intended to do all their development and archiving in layout-oriented HTML. These examples all involve basic technological misunderstandings by people actively interested in pursuing digital projects of one kind or another. When you move outside this relatively small subgroup of humanities scholars, the level of technological awareness gets correspondingly lower. We all have colleagues who do not understand the difference between a blog and a mailing list, who don't know how web-pages are composed or published, who can't insert foreign characters into a word-processor document, and who are unable to backup or take other basic precautions concerning the security of their data.
Until very recently, this technological illiteracy has been excusable: humanities researchers and students, quite properly, concerned themselves primarily with their disciplinary work. The early Humanities Computing experts were working on topics, such as statistical analysis, the production of concordances, and building the back-ends for dictionaries, that were of no real interest to those who intended simply to access the final results of this work.
Even after the personal computer replaced the typewriter, there was no real need for humanities scholars to understand technical details beyond such basics as turning a computer on and off and starting up their word-processor. The principal format for exchange and storage of scholarly information remained paper, and in the few areas where paper was superseded—such as in the use of email to replace the memo—the technology involved was so widely used, so robust, and above all so useful and so well supported that there was no need to learn anything about it: if your email and word-processor weren't set up at the store when you bought a computer, you could expect this work to be done for you by the technicians at your place of employment or over the phone by the Help Desk at your Internet Service Provider. Nothing about humanities scholars' use of the technology required special treatment or distinguished them from the University President or a lawyer in a one-person law office.

In the last half-decade, this situation has changed dramatically.
The principal exchange format for humanities research is no longer paper but the digital byte—albeit admittedly as represented in PDF and word-processor formats which are intended ultimately for printing or uses similar to those for which we print documents. State agencies are beginning to require open digital access to publicly funded research. At humanities conferences, an increasing number of sessions focus on digital project reports and the application of digital methods to disciplinary problems.
And as Peter Robinson has recently argued, it is rare to discover a new major humanities project that does not include a significant digital component as part of its plans (Robinson). Indeed, some of the most interesting and exciting work in many fields is taking advantage of technology such as GPS, digital imaging, gaming, social networking, and multimedia digital libraries that was unheard of or still very experimental less than a decade ago. That humanists are heavily engaged with technology should come, of course, as no real surprise. Humanities computing as a discipline can trace its origins back to the relatively early days of the computer, and a surprising number of the developments that led to the revolution in digital communication over the last decade were led by people with backgrounds in humanities research.
The XML specification (XML is the computer language that underlies all sophisticated web-based applications, from your bank statement to Facebook) was edited under the supervision of C. M. Sperberg-McQueen, who has a PhD in comparative literature. Michael Everson, the current registrar and a co-author of the Unicode standard for the representation of characters for use with computers, has an M.A. Just as importantly, the second generation of interactive web technology (the so-called "Web 2.0") is closely bound up with traditionally humanistic activities.
The Wikipedia has turned the writing of dusty old encyclopedias into a hobby much like ham radio. The social networking site Second Life has seen the construction of virtual representations of museums and libraries. Placing images of a manuscript library or museum's holdings on the web is a sure way of increasing in-person traffic at the institution. The newest field for the study of such phenomena, Information Studies, is also one of the oldest: almost without exception, departments of Information Studies are housed in and are extensions of traditional library science programmes.
The result of this technological revolution is that very few active humanists can now truthfully say that they have absolutely no reason to understand the technology underlying their work. Whether we are board members of an academic society, working on a research project that is considering the pros and cons of on-line publication, instructors who need to publish lecture notes to the web, researchers who are searching JSTOR for secondary literature in our discipline, or the head of a humanities organisation that wants to move its web-site, we are all increasingly involved in circumstances that require us to make basic technological decisions.
Is this software better than that? What are the long-term archival implications of storing digital information in format x vs. format y? Will users be able to make appropriate use of our digitally-published data?
How do we ensure the quality of crowd-sourced contributions? Are we sure that the technology we are using will not become obsolete in an unacceptably short period of time?
Will on-line publication destroy our journal's subscriber base? The problem is that these are not always questions that we can "leave to the techies." And while the computer skills of our students are often over-rated, it is possible to train them to carry out many day-to-day technological tasks. But such assistance is only as good as the scholar who requests it.
If the scholar who hires a student or asks for advice from their university's technical services does not know in broad terms what they want or what the minimum technological standards of their discipline are, they are likely to receive advice and help that is at best substandard and perhaps even counter-productive. Humanities researchers work on a time-scale and with archival standards far beyond those of the average client needing assistance with the average web-site or multimedia presentation.
We all know of important print research in our disciplines that is still cited decades after the date of original publication. Not a few scholarly debates in the historical sciences have hinged on questions of whether a presentation of material adequately represents the "original" medium, function, or intention. Unless he or she has special training, a technician asked by a scholar to "build a website" for an editorial project may very well not understand the extent to which such questions require the use of different approaches to the composition, storage, and publication of data than those required to design and publish the athletic department's fall football schedule.
Even if your technical assistant is able to come up with a responsible solution for your request without direction from somebody who knows the current standards for Digital Humanities research in your discipline, the problem remains that such advice almost certainly would be reactive: the technician would be responding to your perhaps naive request for assistance, not thinking of new disciplinary questions that you might be able to ask if you knew more about the existing options.
Might you be able to ask different questions by employing new or novel technology like GPS, serious gaming, or social networking? Can technology help you or your users see your results in a different way? Are there ways that your project could be integrated with other projects looking at similar types of material or using different technologies? Would your work benefit from distribution in some of the new publication styles like blogs or wikis?
These are questions that require a strong grounding in the original humanistic discipline and a more-than-passing knowledge of current technology and digital genres. Many of us have students who know more than we do about on-line search engines; while we might hire such students to assist us in the compilation of our bibliographies, we would not let them set our research agendas or determine the contours of a project we hire them to work on.
Handing technological design of a major humanities research project over to a non-specialist university IT department or a student whose only claim to expertise is that they are better than you at instant messaging is no more responsible. Fortunately, our home humanistic disciplines have had to deal with this kind of problem before. Many graduate, and even some undergraduate, departments require students to take courses in research methods, bibliography, or theory as part of their regular degree programmes.
The goal of such courses is not necessarily to turn such students into librarians, textual scholars, or theorists—though I suppose we wouldn't complain if some of them discovered a previously unknown interest. Rather, it is to ensure that students have a background in such fundamental areas sufficient to allow them to conduct their own research without making basic mistakes or suffering unnecessary delays while they discover by trial-and-error things that might far more efficiently be taught to them upfront in the lecture hall.
In the case of technology, I believe we have now reached the stage where we need to be giving our students a similar grounding. We do not need to produce IT specialists—though it is true that a well-trained and knowledgeable Digital Humanities graduate has a combination of technological skills and experience with real-world problems and concepts that are very easily transferable to the private sector. But we do need to produce graduates who understand the technological world in which we now live—and, more importantly, how this technology can help them do better work in their home discipline.
The precise details of such an understanding will vary from discipline to discipline.
Working as an Anglo-Saxonist and a textual critic in an English department, I will no doubt consider different skills and knowledge to be essential than I would if I were a church historian or theologian. But in its basic outlines such an orientation to the Digital Humanities probably need not vary too much from humanities department to humanities department. We simply should no longer be graduating students who do not know the basic history and nature of web technologies, what a database is and how it is designed and used, the importance of keeping content and processing distinct from each other, and the archival and maintenance issues involved in the development of robust digital standards like Unicode and the TEI Guidelines.
Such students should be able to discuss the practical differences and similarities of print vs. digital publication. Not all humanists need to become Digital Humanists. Indeed, in attending conferences in the last few years and observing the increasingly diverging interests and research questions pursued by those who identify themselves as "Digital Humanists" and those who define themselves primarily as traditional domain specialists, I am beginning to wonder if we are not seeing the beginnings of a split between "experimentalists" and "theorists" similar to that which exists today in some of the natural sciences.
But just as theoretical and experimental scientists need to maintain some awareness of what each branch of their common larger discipline is doing if the field as a whole is to progress, so too must there remain an interaction between the traditional humanistic and digital humanistic domains if our larger fields are also going to continue to make the best use of the new tools and technologies available to us.
As humanists, we are, unavoidably, making increasing use of digital media in our research and dissemination. If this work is to take the best advantage of these new tools and rhetorics—and not inadvertently harm our work by naively adopting techniques that are already known to represent poor practice—we need to start treating a basic knowledge of relevant digital technology and rhetorics as a core research skill, in much the same way we currently treat bibliography and research methods.
Evertype.
Robinson, Peter.
Sperberg-McQueen, C. M. Sperberg-McQueen Home Page.
Wikipedia contributors.

I have recently started using plagiarism detection software, not so much for the ability to detect plagiarism as for the essay submission- and grading-management capabilities it offers. Years ago I moved all my examinations and tests from paper to course management software (WebCT originally, now Blackboard, and soon Moodle).
I discovered in my first year using that software that simply delivering and correcting my tests on-line—that is, without printing, collecting, and handing back paper—saved me a surprising amount of time. I long wondered whether I could capture the same kind of efficiencies by automating my essay administration. Here too, I thought that I spent a lot of time handling paper rather than engaging with content. In this case, however, I was not sure I would be able to gain the same kind of time-saving. While I was sure that I could streamline my workflow, I was afraid that marking on screen might prove much less efficient than pen and paper—to the point perhaps of actually hurting the quality and speed of my essay-grading.
My experience this semester has been that my fears about lack of efficiency in the intellectual aspects of my correction were largely unfounded, and that my hopes for improving my administrative efficiency closely reflected the actual possibilities. While marking on screen is slower than marking with a pencil (a paper that used to take me 20 minutes to mark now takes 24 to 25 minutes), the difference is both smaller than I originally feared and more than compensated for by the administrative time-savings, again including the class time freed up from the need to collect and redistribute papers.
Not everything such software flags as unoriginal is plagiarism, of course. Obvious examples include quotations from works under discussion and bibliographic entries. It is also quite common to see the occasional short phrase or clause flagged in otherwise original work, especially at the beginning of paragraphs or in passages introducing or linking quotations. Using plagiarism detection software gave me the opportunity of checking how well I had been doing catching plagiarists the old-fashioned way, when I was marking by hand.
I caught two plagiarists this semester. But neither of them had particularly high unoriginality scores: in both cases, I discovered the plagiarism after something in their essays seemed strange to me and caused me to go through the originality reports turnitin provides on each essay more carefully. None of the others showed the same kind of suspicious content that had led me to suspect the two I caught. Even though it turns out that I apparently can still rely on my ability to discover plagiarism intuitively, there are two things about plagiarism detection software that do mark an improvement over previous methods of identifying such problems by hand.
The first is how quickly such software lets instructors test their hunches. In the two cases I caught this semester, confirming my hunch took less than a minute: I simply clicked on the originality report and compared the highlighted passages until I discovered a couple that were clearly copied by the students without acknowledgement in ways that went beyond reasonable use, unconscious error, or unrealised intellectual debt.
In the past it has often taken me hours to make a reasonable case against even quite obvious examples of plagiarism. The second improvement brought on by plagiarism detection software lies in the type of misuse of sources it uncovers. In the old days, my students used to plagiarise with a shovel; these students were plagiarising with a scalpel.
This is where my title comes in. It is of course entirely possible that students always have plagiarised in this way and that I and many of my colleagues simply have missed it because it is so hard to spot by hand. But I think that the plagiarism turnitin caught in these two essays this semester actually may represent a new kind of problem involving the misappropriation of sources in student work—a problem that has different origins, and may even involve more examples of honest mistake, than we used to see when students had to go to the library to steal their material.
Having interviewed a number of students in the course of the semester, I am in fact fairly firmly convinced that what turnitin found is a symptom of new problems in genre and research methodology that are particular to the current generation of students—students who are undergoing their intellectual maturation as young adults in a digital culture that is quite different from that of even five years ago. In the old days, you had to positively decide to plagiarise an essay by buying one off a friend or going to the library and actually typing out text that you were planning to present as your own.
The first thing to realise about how our students approach our assignments has to do with genre. For most pre-digital university instructors, the essay is self-evidently the way one engages with humanistic intellectual problems. It is what we were taught in school and practiced at university.
But more importantly, it was almost exclusively how argument and debate were conducted in the larger society. The important issues of the day were discussed in magazines and newspapers by journalists whose opinion pieces were also more-or-less similar to the type of work students were asked to do at the university: reasoned, original, and polished pieces of writing in which a single author demonstrated his or her competence by the focussed selection of argument and supporting evidence.
For most contemporary students, however, the essay is neither the only nor the most obviously appropriate way of engaging with the world of ideas, politics, and culture. Far more common, certainly numerically and, increasingly, in influence, is the blog—and making a good blog can often involve skills that are anathema to the traditional essay. While it is possible to publish essays using blog software, the point of blogs, increasingly, is less to digest facts and arguments than to accumulate and react to them.
The skill an accomplished blogger brings to this type of material lies in the ability to select and organise these quotations. Professional examples include the various Barack Obama tributes that were a defining feature of the Democratic Primary in the U.S. The real evidence of the evolving distinction between the essay and the blog as methods of argumentation and literary engagement, however, can be seen in the blogs that newspapers are increasingly asking their traditional opinion columnists to write.
It is no longer enough to write essays about the news, though the continued existence and popularity of the on-line and paper newspaper column shows that there is still an important role for this kind of work. Newspapers (and presumably their readers) also now want columnists to document the process by which they gather the material they write about—creating a second channel in which they accumulate and react to facts and opinions alongside their more traditional essays. Among the older journalists, an example of this is Nicholas Kristof at the New York Times, who supplements his column with a blog and other interactive material about the subjects he feels most passionate about.
In his column he digests evidence and makes arguments; in his blog he accumulates the raw material he uses to write his columns and presents it to others as part of a process of sharing his outrage. In the case of our students, the problem this generic difference between the blog and the essay causes is magnified by the way they conduct their research.
On the basis of my interviews, it appears to me that most of my first year students now conduct their research and compile their notes primarily by searching the Internet, and, when they find an interesting site, copying and pasting large sections of verbatim quotation into their word processor. Often they include the URL of this material with the quotations; but because you can always find the source of a passage you are quoting from the Internet, it is easy for them to get sloppy. Once this accumulation of material is complete, they then start to add their own contribution to the collection, moving the passages they have collected around and interspersing them with their opinions, arguments, and transitions.
This is, of course, how bloggers, not essayists, work. Unfortunately, since we are asking them to write essays, the result if they are not careful is something that belongs to neither genre: it is not a good blog, because it is not spontaneous, dynamic, or interactive enough; and it is not a good traditional essay, because it is more pastiche than an original piece of writing that takes its reader in a new direction. The best students working this way do in the end manage to overcome the generic mismatch between their method of research and their ultimate output, producing something that is more controlled and intellectually original than a blog.
But less good students, or good students working under mid- or end-of-term pressure, are almost unavoidably leaving themselves open to producing work that is, in a traditional sense at least, plagiarised—by forgetting to distinguish, perhaps even losing track of the distinction, between their own comments and opinions and those of others, or by collecting and responding exclusively to passages mentioned in the work of others rather than finding new and original passages that support their particular arguments.
This is still plagiarism: it is no more acceptable to misrepresent the words and ideas of others as your own in the blogging world than it is in the world of the traditional essay. In preventing it, instructors will need to take into account the now quite different ways of working and understanding intellectual argument that the current generation of students bring with them into the classroom. The first thing to do is realise the difference between the essay and the blog.
When you write an essay, your reader is interested in your ability to digest facts and arguments and set your own argumentative agenda. Even when, as is more normal and probably better, essays do engage with previous arguments and topics that are of some debate, the expectation is that the essayist will digest this evidence and these opinions and shape the result in ways that point the reader in new directions—not primarily to new sources, but rather to new claims and ideas that are not yet part of the current discourse.
The second thing to realise is just how dangerous the approach many students take to note-taking is in terms of inviting charges of plagiarism. In a world of Google, where text is data that can be found, aggregated, copied, and reworked with the greatest of ease, it is of course very tempting to take notes by quotation.
When people worked with paper, pens, and typewriters, quotation was more difficult and time-consuming: when you had to type out quotations by hand, writing summaries and notes was far quicker. Nowadays, it is much easier and less time-consuming to quote something than it is to take notes: when you find an interesting point in an on-line source, it takes far fewer keystrokes and less intellectual effort to highlight, copy, and paste the actual verbatim text of the source into a file than it does to turn to the keyboard and compose a summary statement or note. And if you are used to reading blogs, you know that this method can be used to summarise even quite long and complex arguments.
There are two problems, however. The first is that this method encourages you to write like a blogger rather than an essayist: your notes are set up in a way that makes it easier to write around your quotations (linking, organising, and responding to them) than to digest what they are saying and produce a new argument that takes your reader in unexpected directions. The second problem is that it is almost inevitable that you will end up accidentally incorporating the words and ideas of your sources in your essay without acknowledgement.
Once you add your own material to this collection of quotations in the file that will eventually become your essay, you will discover that it is almost impossible to remember or distinguish between what you have added and what you got from somebody else. One way of solving this is to change the way you take notes, doing less quoting and more summarising. Doing this might even help you improve the originality of your essays by forcing you to internalise your evidence and arguments. But cutting and pasting from digital sources is so easy that you are unlikely ever to stop doing it completely—and even if you do, you are very likely to run into trouble again the moment you face the pressure of multiple competing deadlines.
A better approach is to develop protocols and practices that help you reduce the chances that your research method will cause you to commit unintentional plagiarism. Perhaps the single most important thing you can do in this regard is to establish a barrier between your research and your essay. So when you come to write an essay, create two or more files: one for the copying and pasting you do as part of your research (or, even better, one file for each source from which you copy and paste or make notes), and, most importantly, a separate file for writing your essay. In maintaining this separate file for your essays, you should establish a rule that nothing in this file is to be copied directly from an outside source.
If you find something interesting in your research, you should copy this material into a research file; only if you decide to use it in your essay should you copy it from your research file into your essay file. In other words, your essay file is focussed on your work: in that file, the words and ideas of others appear only when you need them to support your already existing arguments. An even stricter way of doing this is to establish a rule that nothing is ever pasted into your essay file: if you want to quote a passage in your text, you can decide that you will only type it out by hand.
This has the advantage of discouraging you from over-quoting or building your essay around the words of others—something that is fine in a blog, but bad in an essay. If this rule sounds too austere and difficult to enforce, at least make it a rule that you paste nothing into your essay before you have composed the surrounding material. Another thing you could try is finding digital tools that will make your current copy-and-paste approach to note-taking more valuable and less dangerous. In the pre-digital era, students often took notes on note cards or in small notebooks.
They would read a source in the library with a note card or notebook in front of them. They would begin by writing basic bibliographic information on this card or notebook. Then, when they read something interesting, they would write a note on the card or in the notebook, quoting the source if they thought the wording was particularly noteworthy or apt. By the time they came to write their essays, they would have stacks of cards or a series of notebooks, one dedicated to each work or idea.
There are several ways of replicating and improving on this method digitally. One way is to use new word-processor files for each source: every time you discover a new source, start a new file in your word-processor, recording the basic information you need to find the source again (URL, title, author, etc.). Then start pasting in your quotations and making your notes in this file. When you are finished, you give your file a descriptive name that will help you remember where it came from and save it.
But other tools exist that allow you to implement this basic method more easily. Citation managers such as Endnote or Refworks, for example, tie notes to bibliographic entries. If you decide to try one of these, you start your entry for a new source by recording its bibliographic information. There is no problem with naming files (your notes are all stored under the relevant bibliographic entry in a single database) or with moving between sources (you call up each source by its bibliographic reference), and in most cases you will be able to use a built-in search function to find passages in your notes if you forget which particular work you read them in.
Bibliographic databases and citation managers are great if all your notes revolve around material from text-based sources. But what if you also need to record observations, evidence, interviews, and the like that cannot easily be tied to specific references? In this case, the best tool may be a private wiki—for example at PbWiki or, if you are computer literate and have access to a server, a private installation of MediaWiki, the software that runs Wikipedia.
We tend to think of wikis as being primarily media for the new type of writing that characterises collaborative web applications like Wikipedia or Facebook. In actual fact, however, wikis have a surprising amount in common with the notebooks or stacks of note cards students used to bring with them to the library. Unlike an entry in citation management software, wiki pages are largely free-form space on which you can record arbitrary types of information—a recipe, an image (more accurately, a link to an image rather than the image itself), pasted text, bibliographic information, tables of numerical data, and your own annotations and comments on any of the above.
As with an index card, you can return to your entry whenever you want in order to add or erase things (though a wiki entry, unlike an index card, preserves all your original material as well), or let others comment on it. And as with note cards, you can shuffle and arrange entries in various different ways depending on your needs—using the category feature, you can create groupings that collect all the pages you want to use in a given essay, or that refer to a specific source, or involve a particular topic.
However you decide to solve this problem, the most important thing is to avoid the habit which is most likely to lead you into unintentionally plagiarising from your sources: starting an essay by copying and pasting large passages of direct quotation into the file that you ultimately intend to submit to your instructor. In an essay, unlike a blog, the point is to hear what you have to say.
In the year-end papers, I found a surprisingly large number of papers with plagiarised passages in them: five or six out of sixty, with perhaps one or two doubtful cases. The larger number of hits comes from the ability turnitin gives me to check my hunches more easily and quickly. The pattern I describe above of writing between large quotations and paraphrases still seems to hold true, however—as does the age or generational difference: my senior students are not nearly as likely to write essays like this.
You should then be presented with a configuration screen. In Repositories we will place the address of our SVN repository; in Working Copies we will put the directory on our local machine. A small dialogue will then appear, followed by a working-directory dialogue. Browse to an appropriate directory or create a new folder for the repository.
Click on OK. Normally Oxygen will then pop up on your screen with the file loaded. The ultimate goal will be to have a synoptic oversight and index that will allow students to click on major events, persons, or cultural artefacts and then see how they fit in with other milestones. The following is a list of typographical conventions to use when transcribing medieval manuscripts in my classes.
Deletion may be by any method (underlining, punctum delens, erasure, overwriting, etc.). You should indicate the precise method of deletion by a note at the end of your transcription. The deleted text is recorded whenever possible. If deleted text cannot be recovered, it is replaced by colons. Insertion is distinguished from overwriting, i.e. the addition of new strokes to text already on the page. Such an addition may involve the conversion of one letter to another (for example, by the addition of an ascender) or the addition of new text in the place of a previous erasure. The overwritten text is treated as a deletion.
Text preceded by a single vertical bar has been added at the end of a manuscript line. Text followed by a single vertical bar has been added at the beginning of a manuscript line. When damaged text is unclear or illegible, additional symbols are used. If you type the greater-than and less-than signs directly, your text will not appear, as the browser will interpret it as an HTML tag.
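One way around this, sketched here with an invented sample string, is to convert the reserved characters to HTML entities before the transcription reaches the browser. Python's standard html module does exactly this conversion:

```python
import html

# A transcription snippet that uses angle brackets (invented example text).
raw = "nu scylun <hergan> hefaenricaes uard"

# Replace <, >, and & with their HTML entities so a browser displays them
# literally instead of parsing them as markup.
escaped = html.escape(raw)
print(escaped)  # nu scylun &lt;hergan&gt; hefaenricaes uard
```

Any templating or publishing system that escapes output for you accomplishes the same thing; the point is simply that angle brackets must never reach the browser raw.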
The number of colons used corresponds roughly to the number of letters the transcriber believes are missing. Note that colons are used for text that was in the manuscript but is now physically missing due to erasure or other damage. They are not used to indicate text that has not been copied into the manuscript but appears in other versions. The last decade or so has proven to be a heady time for editors of digital editions. The questions they have asked have ranged from the nature of the editorial enterprise to issues of academic economics and politics; from problems of textual theory to questions of mise-en-page and navigation: What is an Edition?
What kinds of objects can it contain? How should it be used? Must it be critical? Must it have a reading text? How should it be organised and displayed? Can intellectual responsibility be shared among editors and users? Can it be shared across generations of editors and users? While some of these questions clearly are related to earlier debates in print theory and practice, others involve aspects of the production of editions not relevant to or largely taken for granted by previous generations of print-based editors.
The answers that have developed to these questions at times have involved radical departures from earlier norms 1. The flexibility inherent to the electronic medium, for example, has encouraged editors to produce editions that users can manipulate interactively, displaying or suppressing different types of readings, annotation, and editorial approaches, or even navigate in rudimentary three-dimensional virtual reality. The relatively low production, storage, and publication costs associated with digital publication, similarly, have encouraged the development of the archive as the de facto standard of the genre: users of digital editions now expect to have access to all the evidence used by the editors in the construction of their texts (assuming, indeed, that editors actually have provided some kind of mediated text): full text transcriptions, high-quality facsimiles of all known witnesses, and tools for building alternate views of the underlying data.
There have been editions that radically decenter the reading text (e.g. Robinson), and editions that force users to consult their material using an editorially imposed conceit (Reed-Kline). Much of the impetus behind this theoretical and practical experimentation has come from developments in the wider field of textual and editorial scholarship, particularly the work of the book historians, new philologists, and social textual critics who came into prominence in the decade preceding the publication of the earliest modern digital editorial projects.
Despite significant differences in emphasis and detail, these approaches are united by two main characteristics: a broad interest in the editorial representation of variance as a fundamental feature of textual production, transmission, and reception; and opposition to earlier, intentionalist, approaches that privileged the reconstruction of a hypothetical, usually single, authorial text over the many actual texts used and developed by historical authors, scribes, publishers, readers, and scholars.
Working largely before the revolution in Humanities Computing brought on by the development of structural markup languages and popularity of the Internet, these scholars nevertheless often expressed themselves in technological terms, calling for changes in the way editions were printed and organised (see, for example, the call for a loose leaf edition of Chaucer in Pearsall) or pointing to the then largely incipient promise of the new digital media for representing texts as multiforms (e.g. McGann; Shillingsburg). A second, complementary, impetus for this experimentation has been the sense that digital editorial practice is, or ought to be, fundamentally different from and even opposed to that of print. This view is found to a greater or lesser extent in both early speculative accounts of the coming revolution (e.g. McGann; the essays collected in Finneran and in Landow and Delaney) and subsequent, more sober and experienced discussions of whether digital practice has lived up to its initial promise (e.g. Robinson; Karlsson and Malm). It is characterised by a sense both that many intellectual conventions found in print editions are at their root primarily technological in origin, and that the new digital media offer what is in effect a tabula rasa upon which digital editors can develop new and better editorial approaches and conventions to accommodate the problems raised by the textual theorists of the 1980s and 1990s.
Of course in some cases, this sense that digital practice is different from print is justified. Technological advances in our ability to produce, manipulate, and store images cheaply, likewise, have significantly changed what editors and users expect editions to tell them about the primary sources. The ability to present research interactively has opened up rhetorical possibilities for the representation of textual scholarship difficult or impossible to realise in the printed codex. But the sense that digital practice is fundamentally different from print has also at times been more reactionary than revolutionary.
If digital theorists have been quick to recognise the ways in which some aspects of print editorial theory and practice have been influenced by the technological limitations of the printed page, they have also at times been too quick to see other, more intellectually significant aspects of print practice as technological quirks. The development of the critical edition over this period has been as much an intellectual as a technological process. While the limitations of the printed page have undoubtedly dictated the form of many features of the traditional critical edition, centuries of refinement—by trial-and-error as well as outright invention—also have produced conventions that transcend the specific medium for which they were developed.
In such cases, digital editors may be able to improve upon these conventions by recognising the often unexpressed underlying theory and taking advantage of the superior flexibility and interactivity of the digital medium to improve their representation. Perhaps no area of traditional print editorial practice has come in for more practical and theoretical criticism than the provision of synthetic, stereotypically eclectic, reading texts 3. Of course this criticism is not solely the result of developments in the digital medium: suspicion of claims to definitiveness and privilege is, after all, perhaps the most characteristic feature of post-structuralist literary theory.
It is the case, however, that digital editors have taken to avoiding the critical text with a gusto that far outstrips that of their print colleagues. It is still not unusual to find a print edition with some kind of critical text; the provision of similarly critical texts in digital editions is far less common. More commonly, as in the early ground-breaking editions of the Canterbury Tales Project (CTP), the intention of the guide text is, at best, to provide readers with some way of organising the diversity without making any direct claim to authority (Robinson n.d.): We began… work [on the CTP] with the intention of trying to recreate a better reading text of the Canterbury Tales.
As the work progressed, our aims have changed. Rather than trying to create a better reading text, we now see our aim as helping readers to read these many texts. Thus from what we provide, readers can read the transcripts, examine the manuscripts behind the transcripts, see what different readings are available at any one word, and determine the significance of a particular reading occurring in a particular group of manuscripts.
Perhaps this aim is less grand than making a definitive text; but it may also be more useful. There are some exceptions to this general tendency, in the form of digital editions that are focussed around the provision of editorially mediated critical texts. But even here I think it is fair to say that the provision of a synthetic critical text is not what most digital editors consider to be the really interesting thing about their projects.
What distinguishes the computer from the codex and makes digital editing such an exciting enterprise is precisely the ability the new medium gives us for collecting, cataloguing, and navigating massive amounts of raw information: transcriptions of every witness, collations of every textual difference, facsimiles of every page of every primary source. Even when the ultimate goal is the production of a critically mediated text, the ability to archive remains distracting 4. In some areas of study, this emphasis on collection over synthesis is perhaps not a bad thing.
Texts like Piers Plowman and the Canterbury Tales have such complex textual histories that they rarely have been archived in any form useful to the average scholar; in such cases, indeed, the historical tendency—seen from our post-structuralist perspective—has been towards over-synthesis. Their textual histories, too, have tended to be too complex for easy presentation in print format (e.g. Manley and Rickert). The area in which I work, Old English textual studies, has not suffered from this tendency in recent memory, however. Editions of Old English texts historically have tended to be under- rather than over-determined, even in print (Sisam; Lapidge). In most cases, this is excused by the paucity of surviving witnesses. Even when there is more primary material, Anglo-Saxon editors work in a culture that resists attempts at textual synthesis or interpretation, preferring parallel-text or single-witness manuscript editions whenever feasible and limiting editorial interpretation to the expansion of abbreviations, word-division, and metrical layout, or, in student editions, the occasional normalisation of unusual linguistic and orthographic features (Sisam). One result of this is that print practice in Anglo-Saxon studies over the last century or so has anticipated to a great extent many of the aspects that in other periods distinguish digital editions from their print predecessors.
The poem also has been well studied. Semi-diplomatic transcriptions of all known witnesses were published in the 1930s (Dobbie) 5. Facsimiles of the earliest manuscripts of the poem, dating from the mid-eighth century, have been available from various sources since the beginning of the twentieth century (e.g. Dobiache-Rojdestvensky) and were supplemented in the early 1990s by a complete collection of high-quality black-and-white photos of all witnesses by Fred C. Robinson and E. G. Stanley. The poem has been at the centre of most debates about the nature of textual transmission in Anglo-Saxon England since at least the middle of the last century. Taken together, the result of this activity has been the development of an editorial form and history that resembles contemporary digital practice in everything but its medium of production and dissemination (Ore; Robinson). The last century has seen the publication of a couple of student editions of the poem.
The closest thing to a standard edition for most of this time has been a parallel text edition of the Hymn by Elliot Van Kirk Dobbie. Unfortunately, in dividing this text into Northumbrian and West-Saxon dialectal recensions, Dobbie produced an edition that ignored his own previous and never renounced work demonstrating that such dialectal divisions were less important than other distinctions that cut across dialectal lines (Dobbie) 6.
What these readers want—and certainly what I want when I consult an edition of a work I am studying for reasons other than its textual history—is a text that is accurate, readable, and hopefully based on clearly defined and well-explained criteria. They want, in other words, to be able to take advantage of the expert knowledge of those responsible for putting together the text they are consulting.
But they will not—except in extreme cases I suspect—actually want to duplicate the effort required to put together a top-quality edition. This is because, as we shall see, the dissemination of expert knowledge is something that print-based editors are generally very good at. At a conceptual level, the approaches print editors have developed over the last several hundred years to the arrangement of editorial and bibliographic information in the critical edition form an almost textbook example of the parsimonious organisation of information about texts and witnesses.
While there are technological and conventional limitations to the way this information can be used and presented in codex form, digital scholars would be hard pressed to come up with a theoretically more sophisticated or efficient organisation for the underlying data. Demonstrating the efficiency of traditional print practice requires us to make a brief excursion into questions of relational database theory and design 7. In designing a relational database, the goal is to generate a set of relationship schemas that allow us to store information without unnecessary redundancy but in a form that is easily retrievable (Silberschatz, Korth, and Sudarshan). The relational model organises information into two-dimensional tables, each row of which represents a relationship among associated bits of information.
Complex data commonly requires the use of more than one set of relations or tables. The key thing is to avoid complex redundancies: in a well designed relational database, no piece of information that logically follows from any other should appear more than once 8. The process used to eliminate redundancies and dependencies is known as normalisation. When data has been organised so that it is free of all such inefficiencies, it is usually said to be in third normal form. How one goes about doing this can be best seen through an example.
The following is an invoice from a hypothetical book store (adapted from Krishna, 32). Describing the information in this case in relational terms is a three-step process. In the first step, parentheses are used to indicate information that can occur more than once on a single invoice. The second step involves extracting fields that contain repeating information and placing them in a separate table. The final step involves removing functional dependencies within these two tables.
At this point the data is said to be in third normal form: we have four sets of relations, none of which can be broken down any further. The normalisation process becomes interesting when one applies it to the type of information editors commonly collect about textual witnesses. From the point of view of the database designer, this sheet has what are essentially fields for the manuscript sigil, date, scribe, location, and, of course, the text of the poem in the witness itself, something that can be seen, on analogy with our book store invoice, as itself a repeating set of largely implicit information: manuscript forms, normalised readings, grammatical and lexical information, metrical position, relationship to canonical referencing systems, and the like.
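The bookstore example can be sketched concretely. Since the original invoice figure is not reproduced here, the four tables below are an illustrative guess (table names, columns, and sample data are all invented), expressed in SQLite: each fact is stored exactly once, and the printed invoice is reassembled by joining the tables back together.

```python
import sqlite3

# A minimal sketch of the bookstore invoice in third normal form.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT, address TEXT);
CREATE TABLE book     (isbn TEXT PRIMARY KEY, title TEXT, price REAL);
CREATE TABLE invoice  (invoice_no INTEGER PRIMARY KEY, invoice_date TEXT,
                       customer_id INTEGER REFERENCES customer);
-- one row per line item; repeating information lives in its own table
CREATE TABLE invoice_line (invoice_no INTEGER REFERENCES invoice,
                           isbn TEXT REFERENCES book,
                           quantity INTEGER,
                           PRIMARY KEY (invoice_no, isbn));
""")

con.execute("INSERT INTO customer VALUES (1, 'A. Reader', '12 High St')")
con.execute("INSERT INTO book VALUES ('0-00-000000-0', 'Beowulf', 9.99)")
con.execute("INSERT INTO invoice VALUES (100, '2008-01-15', 1)")
con.execute("INSERT INTO invoice_line VALUES (100, '0-00-000000-0', 2)")

# Reconstruct one line of the printed invoice by joining the four relations.
row = con.execute("""
    SELECT c.name, b.title, l.quantity * b.price
    FROM invoice i
    JOIN customer c     ON c.customer_id = i.customer_id
    JOIN invoice_line l ON l.invoice_no  = i.invoice_no
    JOIN book b         ON b.isbn        = l.isbn
""").fetchone()
print(row)
```

Note that the customer's name and the book's price appear in the database only once, however many invoices or line items refer to them; that is the redundancy-elimination the normalisation process is after.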
As with the invoice from our hypothetical bookstore, it is possible to place this data in normal form. The first step, once again, is to extract the relevant relations from the manuscript sheet and, in this case, the often unstated expert knowledge an editor typically brings to his or her task. This leads at the very least to the following set of relations 10:
At this point, we have organised our data in its most efficient format. Of course in real life, there would be many more tables, and even then it would be probably impossible—and certainly not cost effective—to treat all editorial knowledge about a given text as normalisable data. What is significant about this arrangement, however, is the extent to which our final set of tables reflects the traditional arrangements of information in a stereotypical print edition: a section up front with bibliographic and other information about the text and associated witnesses; a section in the middle relating manuscript readings to editorially privileged forms; and a section at the end containing abstract lexical and grammatical information about words in the text.
Moreover, although familiarity and the use of narrative can obscure this fact in practice, much of the information contained in these traditional sections of a print edition actually is in implicitly tabular form: in structural terms, a glossary entry is best understood as the functional equivalent of a highly structured list or table row, with information presented in a fixed order from entry to entry. Bibliographical discussions, too, often consist of what are, in effect, highly structured lists that can easily be converted to tabular format: one cell for shelf-mark, another for related bibliography, provenance, contents, and the like. This analogy between the traditional arrangement of editorial matter in print editions and normalised data in a relational database seems to break down, however, in one key location: the representation of the abstract text.
For while it is possible to see how the other sections of a print critical edition might be rendered in tabular form, the critical text itself—the place where editors present an actual reading as a result of their efforts—is not usually presented in anything resembling the non-hierarchical, tabular form a relational model would lead us to expect. In fact, the essential point of the editorial text—and indeed the reason it comes in for criticism from post-structuralists—is that it eliminates non-hierarchical choice.
In constructing a reading text, print editors impose order on the mass of textual evidence by privileging individual readings at each collation point. All other forms—the material that would make up the Text table in a relational database—are either hidden from the reader or relegated, and even then usually only as a sample, to small type at the bottom of the page in the critical apparatus. Although it is the defining feature of the print critical edition, the critical text itself would appear to be the only part that is not directly part of the underlying, and extremely efficient, relational data model developed by print editors through the centuries.
But this does not invalidate my larger argument, because we build databases precisely in order to acquire this ability to select and organise data. In computer database management systems, views are built by querying the underlying data and building new relations that contain one or more answers from the results. If this understanding of the critical text and its relationship to the data model underlying print critical practice is correct, then digital editors can almost certainly improve upon it. One obvious place to start might seem to lie in formalising and automating the process by which print editors process and query the data upon which their editions are based.
Where, for economic and technological reasons, print editions tend to offer readers only a single critical approach and text, digital editions could now offer readers a series of possible approaches and texts built according to various selection criteria. In this approach, users would read texts either by building their own textual queries, or by selecting pre-made queries that build views by dynamically modelling the decisions of others—a Kane-Donaldson view of Piers Plowman, perhaps, or a Gabler reading text view of Ulysses. This is an area of research we should pursue, even though, in actual practice, we are still a long way from being able to build anything but the simplest of texts in this manner.
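The correspondence between reading texts and database views can be made concrete with a minimal relational sketch. This is only an illustration of the principle, not a workable edition architecture: the sigla, readings, and editorial choices below are all invented, and the in-memory SQLite database stands in for whatever storage a real edition would use:

```python
import sqlite3

# Readings stored relationally: one row per witness per collation point.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE reading (point INTEGER, witness TEXT, form TEXT);
INSERT INTO reading VALUES
  (1, 'A', 'whan'), (1, 'B', 'when'),
  (2, 'A', 'shoures'), (2, 'B', 'schowres');

-- A single-witness reading text is a trivial view over the data.
CREATE VIEW text_of_A AS
  SELECT point, form FROM reading WHERE witness = 'A';

-- A pre-made "editorial" view instead joins against a table of choices,
-- modelling one (hypothetical) editor's decision at each collation point.
CREATE TABLE choice (point INTEGER, witness TEXT);
INSERT INTO choice VALUES (1, 'A'), (2, 'B');
CREATE VIEW edited_text AS
  SELECT r.point, r.form FROM reading r
  JOIN choice c ON r.point = c.point AND r.witness = c.witness;
""")

witness_text = [f for _, f in conn.execute(
    "SELECT point, form FROM text_of_A ORDER BY point")]
editorial_text = [f for _, f in conn.execute(
    "SELECT point, form FROM edited_text ORDER BY point")]
print(witness_text)    # → ['whan', 'shoures']
print(editorial_text)  # → ['whan', 'schowres']
```

Swapping in a different `choice` table yields a different reading text from the same evidence, which is the relational analogue of offering a Kane-Donaldson view alongside another editor's view.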
Unfortunately, such conceptually simple tasks are still at the extreme outer limits of what it is currently possible, let alone economically reasonable, to do. Going beyond this and learning to automate higher-level critical decisions involving cultural, historical, or literary distinctions, is beyond the realm of current database design and artificial intelligence even for people working in fields vastly better funded than textual scholarship.
Thus, while it would be a fairly trivial process to generate a reading text based on a single witness from an underlying relational database, automatically building a best-text edition—that is to say, an edition in which a single witness is singled out automatically for reproduction on the basis of some higher-level criteria—is still beyond our current capabilities. Automating other distinctions of the type made every day by human editors—distinguishing between good and bad scribes, assessing lectio difficilior against lectio facilior—remains similarly out of reach. Yet while we are still far from being able truly to automate our digital textual editions, we do need to find some way of incorporating expert knowledge into digital editions that are becoming ever more complex.
The more evidence we cram into our digital editions, the harder it becomes for readers to make anything of them.
No two witnesses to any text are equally reliable, authentic, or useful for all purposes at all times. In some cases, it is possible to use hierarchical and object-oriented data models to encode these human judgements so that they can be applied dynamically (see note 14 above). In other cases, digital editors, like their print predecessors, will simply have to build the critical texts of their editions the old-fashioned way, by hand, or run the risk of failing to pass on the expert knowledge they have built up over years of scholarly engagement with the primary sources.
It is here, however, that digital editors can improve most on traditional print practice, both theoretically and practically. For if critical reading texts are, conceptually understood, the equivalent of query-derived database views, then there is no reason why readers of critical editions should not be able to entertain multiple views of the underlying data. Critical texts, in other words—as post-structuralist theory has told us all along—really are neither right nor wrong: they are simply views of a textual history, constructed according to different, more or less explicit, selection criteria.
In the print world, economic necessity and technological rigidity imposed constraints on the number of different views editors could reasonably present to their readers—and encouraged them, in the days before post-structuralism, to see the production of a single definitive critical text as the primary purpose of their editions.
Digital editors, on the other hand, have the advantage of a medium that much more easily allows the inclusion of multiple critical views, a technology in which the relationship between views and data is widely known and accepted, and a theoretical climate that encourages an attention to variance. If we are still far from the stage at which we can produce critical views of our data using dynamic searches, we are able even now to hard-code such views into our editions in unobtrusive and user-friendly ways.
By taking advantage of the superior flexibility inherent in our technology and the existence of a formal theory that now explains conceptually what print editors appear to have discovered by experience and tradition, we can improve upon print editorial practice by extending it to the point that it begins to subvert the very claims to definitiveness we now find so suspicious.