Distributed translation experiment, conclusions
A few lessons I learned from my distributed translation experiment:
1. Don’t worry about volunteers showing up. Initially nobody seemed interested in participating, but after a while ten to twenty people turned up, which was more than enough for my purposes. I had advertised the experiment in four places: this blog, the Dutch forum at Distributed Proofreaders, a chatty general-purpose Usenet group, and a mailing list for (non-literary) professional translators. OK, so do worry, a lot. :) The thing is, if you’ve made something interesting, people will come and take a peek.
2. Don’t just dabble. I set up the site as minimally as possible using the very simple UseMod wiki. UseMod is great because it is so small; you can easily modify it if your needs are simple. Unfortunately, spammers found the site rather quickly and began hitting it heavily. If I had used more mature software, such as MediaWiki, I could probably have turned on all kinds of anti-spam measures that were not available to me, and that would have been too much work to develop myself. I could still have switched to MediaWiki, but that seemed like too much work for a simple experiment. In hindsight, switching would have allowed me to keep the experiment running, so it’s a pity I chose not to take that path.
3. Don’t underestimate your volunteers. I had assumed that the level of quality would be fairly high, but perhaps a little too consistent; to remedy this I had planned to slip in a few bad translations myself (remember, the experiment was to measure differences in consistency). That turned out to be unnecessary: the quality of the submitted translations was both high and varied.
4. Let your volunteers find things out for themselves. I had planned a translation dictionary, but nobody used the pages I set up for it. Don’t provide your volunteers with the things you think they will need; provide them with what they actually need.
Looking at other translation projects:
5. There is more than one way to skin a cat. My experiment was set up to find out what happens when different volunteers tackle one paragraph at a time. That idea was borrowed from Distributed Proofreaders, where volunteers work on one page at a time. My fear was that you cannot slowly build a literary translation when every translated paragraph ends up in a different style (Wikipedia syndrome). My hope was that you could solve this problem by having post-processors smooth out the differences.
Harry auf Deutsch worked this way: volunteers would each be assigned a small batch of pages; chapter managers would then iron out the differences chapter-wide, and a book manager would do something similar for the entire book.
I have since seen another distributed translation project that takes a radically different approach. Although volunteers there are still free to tackle a work one paragraph at a time, in practice they work on much larger chunks, sometimes even entire novels at a time. The difference is that they deliberately limit the quality level each pass aims for. The first volunteer (or set of volunteers) uses software to generate a machine translation. The second volunteer produces a rough human translation from the machine output. The third cleans up that rough translation a bit.
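The staged-quality workflow described above can be sketched as a simple pipeline: each pass takes the previous pass's output and raises the quality level a notch. This is purely my own illustration of the idea; the stage names, data model, and placeholder transformations are hypothetical, not that project's actual code.

```python
# A minimal sketch of a staged-quality translation pipeline, assuming
# three passes: machine translation, a rough human pass, and a cleanup
# pass. The stages here are dummy stand-ins for the real human work.

def run_pipeline(source_text, stages):
    """Apply each volunteer stage in order, recording (level, text) pairs."""
    text = source_text
    history = []
    for level, stage in stages:
        text = stage(text)
        history.append((level, text))
    return history

# Hypothetical stand-ins for the three passes described in the post:
stages = [
    ("machine", lambda t: f"[MT] {t}"),                        # software draft
    ("rough",   lambda t: t.replace("[MT]", "[rough]")),       # rough human pass
    ("clean",   lambda t: t.replace("[rough]", "").strip()),   # cleanup pass
]

history = run_pipeline("Il pleut des cordes.", stages)
for level, text in history:
    print(level, "->", text)
```

The point of the design is that no single volunteer is asked to produce a finished translation; each pass only has to improve on the one before it.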