Wikifunctions is a new site that has been added to the list of sites operated by the WMF. I can definitely see uses for it in automating updates on Wikipedia and in bots (and as a reference for programmers), but the stated goal is to translate Wikipedia articles into more languages by writing them as code carrying a lot of linguistic information. I have mixed feelings about this: I don't like the existing programs that automatically generate articles (see the Cebuano and Dutch Wikipedias), and I worry that the system will be too complicated for average people.
It isn't just "a good bit of work"; it's an unreasonably large amount of work, like draining the ocean with a bucket. I'm talking about tagging hundreds of subtle distinctions in each sentence, where failing to tag any one of them will produce nonsense in at least some language.
I did consider it. And it's blatantly less work overall, and easier to distribute among multiple translators.
For example: if I'm translating some genitive construction from Portuguese to Latin, I don't need to care which side of English's esoteric "of" vs. "'s" distinction it falls on, or whether I'd be expected to use の/no in Japanese in that situation, or to tag "hey, this is inalienable!" for the sake of Nahuatl. I need to deal with the oddities of exactly two languages: source and target.
Under the proposed system, though? Enjoy tagging a single word [jap-no][eng-of][lat-gen][nah-inal]. And that's for only four languages.
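To make that concrete, here's a sketch of what one tagged word looks like under such a scheme. The bracket tags come from my own example above; the data structure is my assumption, not anything Wikifunctions actually uses. Every language that grammaticalizes a distinction the others don't adds one more tag a human must fill in on every affected word:

```python
# Hypothetical representation of a single tagged word. Each entry
# exists solely because some target language draws a distinction
# that the other languages don't.
tagged_word = {
    "lemma": "sea",
    "tags": {
        "jap": "no",    # Japanese: use の/no here
        "eng": "of",    # English: "of", not "'s"
        "lat": "gen",   # Latin: genitive case
        "nah": "inal",  # Nahuatl: inalienable possession
    },
}

# The tagging burden grows with every supported language, per word:
print(len(tagged_word["tags"]))  # -> 4
```

Pairwise translation never pays this cost: the translator only handles the distinctions of the one target language in front of them.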
(inb4: this shit depends on meaning, so no, code can't handle it. At most, code can convert sea[lat-gen] to "maris", but it won't "magically" know whether to use the genitive or the ablative, or whether English would use "of" or "'s".)
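A minimal sketch of the point in that parenthetical (hypothetical table and function names, not real Wikifunctions code): given a tag, the surface form follows mechanically, but nothing in the code can choose the tag, because that choice depends on meaning.

```python
# Hypothetical morphological lookup for Latin "mare" (sea).
LATIN_FORMS = {
    ("sea", "lat-gen"): "maris",  # genitive
    ("sea", "lat-abl"): "mari",   # ablative
}

def realize(lemma, tag):
    """Mechanical step: tag in, surface form out."""
    return LATIN_FORMS[(lemma, tag)]

print(realize("sea", "lat-gen"))  # -> maris

# The hard step has no mechanical answer: "the depth of the sea"
# wants lat-gen, "carried off by the sea" wants lat-abl. Only the
# meaning of the sentence, i.e. a human tagger, can pick between them.
```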
False dichotomy.
If you're eager to assume (i.e. to make shit up and take it as true), please do not waste my time.
Source: you made it up.
Okay... I stopped reading here. If your low-hanging-fruit example is three closely related languages, then it's blatantly clear that you're ignorant of the sheer scale of the problem.