If You’re Happy and You Know It (biblical Hebrew songs, cont’d)

So, mostly what I’ve been doing is supporting my faculty colleagues in their transition from Blackboard to our new Moodle learning management system.

But, partly what I’ve been doing is continuing with the biblical Hebrew resources in my series, “A Foundation for Biblical Hebrew,” a series that uses communicative learning tools as a supplement to an elementary biblical Hebrew curriculum.

This is “If You’re Happy and You Know It.” Some points I had to work through, and on which I welcome feedback:

  • I decided that being happy and knowing it was best expressed with perfect verbs joined by we-gam.
  • I decided to use the masculine plural pronoun suffixes; sorry, but there’s just no room in the song for a more up-to-date solution to the problem of gender inclusivity. In English, I usually use the feminine singular as the “representative human” (“each student must see to her own work”).
  • “Let your lives show it”: going with the jussive here, naturally, verb-subject.
  • For the commands, I abandon personal pronouns: “clap a palm”; “stomp a foot”; etc. Again, only so much room in the scansion. This—leaving pronominal suffixes off of body parts where they are the objects of verbs—accords well enough with biblical usage (Psa 47:1; cf. Isa 37:22; but Ezek 6:11).
  • Main learning points: body parts, the masculine plural imperative, the second-person masculine plural endings -tem (perfect afformative) and -kem (pronominal suffix); the conditional particle ʾim.

Feedback encouraged, as always.

[If You’re Happy and You Know It (biblical Hebrew songs, cont’d) was written by G. Brooke Lester for Anumma.com and was originally posted on 2011/09/12. Except as noted, it is © 2011 G. Brooke Lester and licensed for re-use only under CC BY-NC-ND 3.0.]

MultiMarkdown and Me

MultiMarkdown: All I Never Knew I Wanted:

When I write, I want to write text files that are ready to be published either as word processing files or to the Web, with full formatting, while still already human-readable simply as text. And I didn’t even know how badly I wanted that until I discovered that it’s possible with Markdown. This is probably easier to show first, then tell.
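The original post embedded example files at this point; as a stand-in, here is a minimal Markdown source file with hypothetical content, showing the kind of plain text I mean:

```markdown
# A Sample Heading

This is a paragraph with *emphasized* and **strong** text.

- A bulleted list item
- Another item in the same list
```

Run through a Markdown processor, the asterisks become emphasis and strong markup in HTML, or italics and bold in *.rtf; read as plain text, the file is already perfectly legible.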

As you can see, the *.txt file is human-readable, and I get the same formatting results whether I publish to *.rtf (for word processing) or to HTML (for web publishing as you’re reading it now). This is the point.

Results Explained:

These examples illustrate the gist of it. As a writer, this is what I gain from MultiMarkdown:

I get to create a human-readable document that can nonetheless be exported to the Web as HTML. Have you ever seen a page of text that is marked up for HTML (that is, for web viewing)? It’s a blizzard of tags that make the actual content unreadable. (You can see an example if you select, in your browser, View: Source or Page Source.) But with MultiMarkdown (or just Markdown: see below), I have a document that is prepared for the web, but which is also totally readable in plain text.

I get to create a human-readable document that can nonetheless be exported to a word processor as *.rtf (RTF). Have you ever seen a page of text that is marked up as *.rtf, for opening in Word or another word processor? It’s even worse than with HTML. (You can see an example if you take the RTF file linked above, change the suffix from *.rtf to *.txt, and open it in Apple’s TextEdit or in Microsoft Notepad.) But with MultiMarkdown, I have a document that is prepared for export as *.rtf to almost any word processor, yet is still totally readable in plain text.

I get to write this file just once, and archive it as a single file, no matter whether I use it for word processing or web publishing. The same file, written in MultiMarkdown, can be exported as an *.rtf document, easily read in almost any word processor, or as HTML, easily read by any browser or pasted into a blog post or web site.

I get to compose this file in plain text, in any application that suits my stage in the writing process (collecting ideas, outlining, drafting, editing, publishing). It doesn’t feel like I am writing “markup,” it feels as much as possible like I am simply writing. The beauty of Markdown is that most of it derives from email conventions: a line of white space between paragraphs, or asterisks surrounding a word or phrase to mark emphasis, or two asterisks for strong text. There are multiple ways (see below on Gruber’s Markdown) to write Web links that are wonderfully readable, completely unlike HTML web link markup.
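For instance, Markdown’s reference-style links (one of the link forms Gruber provides) keep the URL out of the sentence entirely; the sentence and label below are placeholders:

```markdown
See the [Markdown syntax page][md] for details.

[md]: http://daringfireball.net/projects/markdown/syntax
```

Even with the link in it, the sentence reads naturally as plain text, and the URL sits out of the way at the bottom.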

I get to be sure that it will be readable in twenty years, without a word processor or web browser to render the formatting. Do you have any old files that you cannot read anymore because they only exist in an obsolete format like “AppleWorks”? The stuff I wrote during my master’s work can only be opened as plain text, and the text is entirely buried in obsolete markup and code. But the stuff I write today in Markdown is already human-readable in plain text, and will remain human-readable for as long as we have plain text.

This is the beauty of MultiMarkdown: plain text files, easily readable to the human eye, but already marked up for headers, sub-headers, ordered or unordered lists, emphasis, and footnotes…whether for word processing via *.rtf or for web publishing via HTML. Yeah, it’s the writer’s holy grail.

What is MultiMarkdown?

John Gruber developed Markdown with the web-publishing end in view. Markdown allows almost any formatting one will need for most purposes: emphasis (usually italics), strong text (usually bold), paragraphing, lists, block quotes, hyperlinks to the web, and more. However, Gruber’s Markdown exporter only exports as HTML, because web publishing is what Gruber had in mind.

Fletcher Penney developed MultiMarkdown as a supplement to, or extension of, Gruber’s Markdown. It accomplishes two things:

  • It exports Markdown as *.rtf rather than only as HTML. (It also exports to OPML, LaTeX, and other formats that you may or may not know about or be interested in.)
  • It adds syntax for things like bibliography, footnotes, tables, and more.

So, MultiMarkdown incorporates all the features of Gruber’s Markdown, and extends the idea beyond web publishing to word processing. Note that you do have to install Fletcher’s MultiMarkdown script and support package in order to export MultiMarkdown plain text files as HTML, *.rtf, or other file formats.
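As a sketch of that added syntax (exact details vary by MultiMarkdown version, and the content here is made up), a footnote and a simple table look like this:

```markdown
Here is a claim that needs support.[^note]

[^note]: The footnote text goes here.

| Consonant | Name  |
|-----------|-------|
| א         | aleph |
| ב         | bet   |
```

Like the rest of Markdown, both remain perfectly readable in the plain-text source, while exporting as a proper footnote and table.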

My Workflow

I like this way of working because I often don’t know where doodling, note-taking, and outlining might leave off and “writing” begin. I am learning to write in MultiMarkdown all the time, in every stage, because any of that stuff may, at some point, become part of the written piece. Composed in Markdown, anything I write is legible while I play around with it, and it won’t require additional formatting for word processors or for the Web once that writing sits in the final, published piece.

For example, this blog post was

  • begun as a note in NotationalVelocity,
  • moved into OmniOutliner while I played with structure and began some drafting,
  • imported via OPML into Scrivener for continued drafting and editing. From Scrivener I can compile it as HTML (as for this post in WordPress), or as *.rtf for word processing. I save it in Scrivener, but also compiled as plain text (*.txt) for archiving.

At any of these stages I can compose freely in MultiMarkdown, working in whatever tool suits my present location and purposes, knowing that the result will be a human-readable plain text file formatted for word processing or for the Web.

What do you think? It can sound complicated, and there is a bit of a front-end learning curve (not much, for anyone who already habitually writes in “email style” paragraphing), but once learned, it is all simplicity itself. Can MultiMarkdown do for you what it does for me?

[MultiMarkdown and Me was written by G. Brooke Lester for Anumma.com and was originally posted on 2011/05/02. Except as noted, it is © 2011 G. Brooke Lester and licensed for re-use only under CC BY-NC-ND 3.0.]

Day in the Life of the Digital Humanities 2011

Along with everything else in life that you’ve been missing, the Day in the Life of Digital Humanities (“Day of DH”) 2011 came and went a couple of weeks back. What are the “Digital Humanities,” you ask? You could settle for me telling you that it’s humanities accomplished digitally, or you could ask the Wikipedia about it; or best of all, you could simply hear the explanations offered by those who have self-identified over the last three years as working in “digital humanities.” Here are just a few:

Digital Humanities is the application of humanities methodologies and theories to modern technology research. -Andy Keenan, University of Alberta, Canada

Under the digital humanities rubric, I would include topics like open access to materials, intellectual property rights, tool development, digital libraries, data mining, born-digital preservation, multimedia publication, visualization, GIS, digital reconstruction, study of the impact of technology on numerous fields, technology for teaching and learning, sustainability models, and many others. -Brett Bobley, NEH, United States

I think digital humanities, like social media, is an idea that will increasingly become invisible as new methods and platforms move from being widely used to being ubiquitous. For now, digital humanities defines the overlap between humanities research and digital tools. But the humanities are the study of cultural life, and our cultural life will soon be inextricably bound up with digital media. -Ed Finn, Stanford University, USA

On the Day of Digital Humanities, hundreds of folks who see their work in this way agreed to write a blog post about what they were doing that day, March 18, 2011. (This was the day that I became aware of the term, “digital humanities,” because the Day nosed its way onto my Twitter feed, whereupon I followed the tag #dayofdh for the rest of that day and the next.)

You will be excited to know that I’ve saved the best news: Because the fine folks at Day of DH have made the RSS feeds for the blog posts available as an OPML file (or, to translate, “Because blah blah the internet is cool”), I have been able to place the blog posts on my public NetVibes page! And you have a whole year to peruse them before Day of DH 2012!

[Day in the Life of the Digital Humanities 2011 was written by G. Brooke Lester for Anumma.com and was originally posted on 2011/04/05. Except as noted, it is © 2011 G. Brooke Lester and licensed for re-use only under CC BY-NC-ND 3.0.]

Closed Captioning for User-Generated Video (via ProfHacker)

[Changed title, but not URL, to reflect distinction between subtitles and closed-captioning.]

Yesterday, ProfHacker posted a blog entry about how to produce closed-captioning for your videos using the site Universal Subtitles. As ProfHacker points out, when you have created the subtitles, they exist only at the Universal Subtitles web site; but, you can download the subtitles as a file and upload that file to your video on YouTube. ProfHacker shows the process, step by step.
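The downloaded file is typically in a plain-text subtitle format such as SubRip (*.srt), which YouTube accepts for captions; a hypothetical two-cue fragment looks like this (each cue is an index, a start and end timecode, and the caption text):

```
1
00:00:01,000 --> 00:00:04,000
Shalom! Welcome to the lesson.

2
00:00:04,500 --> 00:00:08,000
Today we sing in biblical Hebrew.
```

Because the format is plain text, you can also touch up timing or wording in any text editor before uploading.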

Embedded below is my first effort at closed captioning. The main glitch is that my videos often already have subtitles of varying kinds, because they are often language-learning videos. And, you cannot (I think) change where the closed-captioning sits: it is always at the bottom of the screen. Now, if your already-existing subtitles are YouTube “annotations,” you can always go into YouTube and move them around. But, if your subtitles were created with the video itself (as in iMovie or whatever), then you would have to actually go back and re-edit the video and upload the revised version (which would have a new URL on YouTube).

The take-away on this for me is that, when I produce subtitles in my videos (that is, subtitles that are not closed-captioning), I will want to keep them at the top or sides of the screen, so that there is room reserved at the bottom for closed-captioning. As you can imagine, the screen “real estate” will really be filling up at that point.

This is my video on how to sing Happy Birthday in Hebrew. In the few places where my subtitles and my closed-captioning collide, I have not tried to fix it (yet). Obviously, you will need to click the “cc” (closed captioning) button at the bottom of the video screen.

What experience do you have with closed captioning, whether needing it or producing it? What issues should I know about as I continue to closed-caption my videos?

[Subtitles for User-Generated Video was written by G. Brooke Lester for Anumma.com and was originally posted on 2011/03/11. Except as noted, it is © 2011 G. Brooke Lester and licensed for re-use only under CC BY-NC-ND 3.0.]

Podcast Ideas in Hebrew Bible

Not very long ago, Chris Heard canvassed his readers for suggestions about short podcasts on topics in Hebrew Bible: you can see the results for yourself. Mark Goodacre’s NT Pod continues to be well-received (and no surprise).

I have a lecture series in progress, geared toward my introductory students and designed to accompany a traditional course in Hebrew Bible. Each lecture is a podcast episode comprising a pair of 25–30-minute halves. The podcasts are slide-enhanced, and in *.m4a format, playable by iTunes, iPod, or QuickTime, and with some help from Blackboard can be viewed on a web browser as well. In their current revision, I consider them still in “beta,” and I don’t plan to publish them to public directories until I’ve done some clean-up on them.

I would like, though, to plan a different kind of series, more after the pattern being laid down by Mark and Chris: 5–12-minute episodes, audio only, on manageable critical issues in biblical studies. I wouldn’t begin until Spring 2010, but I would like to begin thinking of ideas. Chris got good results on his query, so I am asking the same: what topics would you like to see addressed in such a format? Some ideas I already like are:

  • What are Old Testament Pseudepigrapha?
  • What is Apocalyptic?
  • Emergence of Israel in the Land, in four parts: chronology, rapid conquest model, gradual infiltration model, revolt model
  • DtrH and Redaction Criticism
  • Walls of Jericho
  • Finkelstein’s “Low Chronology”
  • “Satan” in the OT
  • YHWH, El, and Baal
  • YHWH and “his Asherah”
  • Who is Job’s “redeemer”?
  • What is the Exile?
  • ?

The audience would be about the same as that (apparently) envisioned by Mark: the intellectually curious layperson or the scholar working outside his own field of expertise.

What would you like to see in a series of short podcasts in the academic study of the Hebrew Bible?

Rosetta Stone Arabic

I finally broke down and purchased the Arabic language module from Rosetta Stone. Anybody out there already have experience with Rosetta Stone language software?

I had long considered taking Arabic in a structured way from an accredited institution, whether brick-and-mortar or at a distance, but two factors have so far conspired against me:

  • My own teaching schedule is very tight, and almost always conflicts with whatever is available.
  • Institutions of learning may as well throw up electrified fences with ground-glass ramparts if they refuse to keep contact information up to date. Nobody can tell whether our web sites were published last week, last year, or ten years ago, and when you send emails to the contact people on our web sites, those emails drop into our Big Black Hole. (I don’t mean my own school when I say “our” and “we”: I’m talking about a culture-spanning problem.) No schools want my tuition money badly enough to keep their contact information up to date or answer an email, so Rosetta Stone gets it. And I get a six-month money-back return policy, in the bargain.

Let me know if you’ve had any experience with Rosetta Stone. It will be a few days before I dig in, but monkeying around with the thing at the store was more fun than I’ve had with language since I puzzled out my first liver omen.

(Obligatory disclaimer: I don’t work for Rosetta Stone, and they haven’t given me anything in exchange for writing about their software. They did throw in an extra headset-and-mike, which was pretty cool.)

Moltmann! Live on Web! Today!

It’s all true. Jürgen Moltmann is delivering our convocation address this morning, and this afternoon he will have a round-table discussion with our theology faculty members Anne Joh, Nancy Bedford, and Stephen Ray.

Both events are to be webcast at the URL http://www.garrett.edu/convocation. Viewers will require Apple’s QuickTime Player (Mac/Windows).

Convocation is at 11:00 a.m. Central Time: tune in as early as 10:00 a.m.

Round-table discussion is at 1:30 p.m. Central Time: tune in as early as 1:00 p.m.