Categories
Conducting Music Technology

Going paperless as a conductor: iPad + forScore

One of the advantages of being half classical musician, half tech nerd is that I’ve actively enjoyed being forced to grapple with new technologies as a result of the pandemic. In order to keep doing some version of what we do, musicians have adapted to make use of video-conferencing, audio recording, and pretty much anything else we can get our hands on. I’m now the proud owner of a fancy webcam, various peripherals, and a ring light apparently designed for make-up tutorials (a potential side-hustle I will consider carefully).

Even before the pandemic, though, I had been thinking about going ‘paperless’, or something approaching it. Environmental reasons aside, I live in London now, and most of my scores are stowed in an attic. I don’t own a printer; keeping it stocked with ink (expensive) or paper (wasteful) would be a pain.

For most of my life, going paperless hasn’t been a viable option. The technology and hardware either haven’t existed or haven’t been cost-effective. Now, it seems, the tide may have turned. In this blog, I’m going to detail my experiences with going paperless, and how it’s turned out.

Tablet: iPad Pro 12.9in

I was advised by friends to go for the biggest screen possible – anything smaller than A4 doesn’t allow you to display enough of a score to work from. This led to the eventual purchase of a 12.9in iPad Pro 2020 – together with the most expensive pencil I have ever bought, the Apple Pencil 2. I decided it was worth doing it properly – and after all, as a professional tool, at least part of it will be claimable against my taxes this year.

Despite being a dyed-in-the-wool Windows advocate, I have to admit that Apple make a really good product. It’s quick, sturdy, and it looks professional, especially in the natty case I purchased for it. Apple fans tend to say ‘it just works!’ and, even though my customary response is ‘where’s the fun in that?’, it does indeed just work.

Score-reading app: forScore

There are now a handful of apps for managing and viewing your scores, and a fair amount of variety between them. I’m indebted to the Scoring Notes blog for the thorough review of forScore which convinced me it was the one to go for.

forScore is available to download for a one-off payment of £19.99. It’s a powerful bit of software with a lot going on under the hood, though you don’t need to mess around for too long to figure out its basic functionality.

There’s still a degree of orientation required, and you have to get used to tapping the right part of the screen for what you need, for example to bring up the menu. In other words, it needs a little investment of time to ‘learn’ the software. For the first two or three rehearsals using it, I brought along hard copies just in case I couldn’t negotiate the app quickly enough, but it wasn’t long before I was happily zipping through my digital scores.

There’s no lag between page turns, which was something I had initially worried about – they respond instantaneously to a touch on the relevant side of the screen, in the same manner as Amazon’s Kindle. I’ve found I’m able to turn a page much more quickly – and with a more economical gesture – than when using a physical score, though this is a tradeoff for only being able to view one page of a score at a time.

It’s interesting the difference that this makes. As a conductor, you want to be able to absorb the salient points of a score at a glance, rather than spending all your time with your head down. Arguably, the two-page open layout of a regular physical score would be more useful in this regard. But it’s possible, with practice, to flick rapidly back and forth while conducting, due to the speed of the page-turns.

forScore has a wealth of other features including an onscreen keyboard and a metronome, which I haven’t used a great deal, but are nice to have.

Mark-up view in forScore

Markings

Remember that expensive pencil? Well, it does more than clip to the side of the tablet looking pretty (and charging via induction). forScore’s integration with the Apple Pencil is rather clever, and I’ve quickly grown accustomed to using it for markings.

It’s easy to reach for it, and as soon as you start marking the score, the software puts you into marking mode. This works well, and you can double tap on the Pencil to turn it into an eraser, which, with a little practice, is reasonably intuitive.

My only problem here was not always remembering to exit marking mode (by tapping the ‘Done’ button) after replacing the Pencil. When I then went to turn the page, I ended up jabbing fruitlessly at a corner before realising the software was still in mark-up mode. It turns out there is a feature buried in Settings which fixes this by automatically exiting mark-up mode after a short delay.

I’ve enjoyed marking up my scores in this new environment. I’m not a big colour-coder, but the potential is there, and it’s reassuring to think that you can scribble all over it and erase it later if you go overboard.

Changing the annotation settings

Scores

forScore is reasonably good at importing scores from cloud-based services such as Dropbox (which I use) and Google Drive. You can then edit their title and composer information in the metadata as you please.

Here I’ll admit to a tiny bit of frustration. The integration with cloud services such as Dropbox isn’t two-way, and I’d prefer it if my markings on a score could be synchronised to the cloud-saved file. As it is, you have to manually export the score back in order to do this (unless there’s something I’m missing), which is too fiddly to do regularly. As such, I have ended up with two digital copies of a piece: one unadulterated but on the cloud, accessible anywhere on any computer; and one beautifully marked-up, but accessible only on my iPad.

The other quibble concerns the Labels you are able to add to scores, helping you organise them in the digital library. It’s nice being able to give things ‘Tags’, ‘Genres’, and ‘Labels’, but it’s not clear how each is supposed to differ, because each field is customisable and can hold anything you’d like. In practice I find myself getting confused trying to remember whether I’ve decided that ‘Canticle’ or ‘Sacred’ is a Genre or a Tag, and as such I haven’t really made use of this function.

Conclusions

First, the pros. I can travel light, with one tablet instead of multiple scores. All the music I need for multiple projects is accessible in one place, with all my markings, backed up on the cloud. The device is robust, and using it is a pleasure. I make more markings, and spend more time with my scores, because they’re always right there, just a click away.

That said, it’s not without its drawbacks. One obvious thing that I haven’t mentioned is that in order to make use of it, you need to possess a pdf or scan of the score. This is all very well with music in the public domain, which these days is available on IMSLP or CPDL – but contemporary music is a different story. Publishers have been wary of digital downloads, perhaps waiting for an app which can control permissions, like Amazon’s Kindle. It would be great, for example, to be able to have heavy books such as choral warhorse Carols for Choirs or my Bärenreiter B Minor Mass available in pdf form.

And one more important warning: remember that the iPad itself, while not exactly heavy, is still weighty enough to slide off an insufficiently robust music stand. It’s enough to give you Black Mirror-style cracked-screen nightmares.

These caveats aside, I’m very glad I took the leap. I now find it difficult to imagine my life without the iPad as my primary score-machine. It looks good, it feels good to use, and it does pretty much everything I need it to. I don’t have to worry about printing a lot of music for a one-off gig. Summoning a score I need at the touch of a button – well, it feels like the future.

Also, I can amuse myself by playing its little onscreen keyboard for hours on end. Myself, mind – I doubt anyone else is amused…

Categories
Music Technology

An A.I. attempts to rewrite Thomas Tallis

Jukebox is a type of neural net – a network of artificial nodes which is ‘trained’ on a body of data, and can then use what it has learned to generate new material. These artificial intelligence networks have been used to create unique images, poetry, scripts, and music. Essentially, they work from one data-point to the next and try to work out what letter, pixel, or note should come next, based on their training. I first encountered them on the wonderful AI Weirdness blog, which is a rabbit-hole of the hilarious and surreal things that can now be done with this technology.

What makes Jukebox different from many of the varieties of generative music that have come before is that it’s trained not on symbolic datasets – for example MIDI files which encode digital musical instructions into code – but actual audio. Not only that, but it has also been conditioned to recognise the shape of words, meaning it can – sort of – generate these sounds too.

This means that you can feed it an audio sample, give it a few parameters such as a genre or artist to emulate, specify the words, and then ask it to predict what should come next. It bases these choices on what it has learned from the 1.2 million real songs that formed its ‘training’ dataset.
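The ‘predict what comes next’ idea is easier to feel with a toy. The sketch below is a first-order character-level Markov chain – my own deliberately crude illustration, sharing nothing with Jukebox’s actual architecture beyond the principle of sampling each next element based on what has followed similar elements before:

```python
import random
from collections import defaultdict

# Toy next-element model: for each character, remember which characters
# have followed it, then generate by repeatedly sampling a plausible
# successor. Jukebox does this over compressed audio with a huge neural
# net; the chain below is the same principle at cartoon scale.

def train(corpus):
    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)
    return follows

def continue_text(follows, seed, length, rng=None):
    rng = rng or random.Random(0)
    out = list(seed)
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:          # dead end: nothing ever followed this char
            break
        out.append(rng.choice(choices))
    return "".join(out)

model = train("if ye love me keep my commandments")
print(continue_text(model, "ke", 10))
```

Every character it emits is one it has genuinely seen follow the previous character – which is exactly why, like Jukebox, the output stays locally plausible while drifting globally.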

The results, as one might expect, vary wildly in quality. On the aforementioned blog, Janelle Shane posts some creations which are exciting and not a little horrifying – for example, a pastiche Frank Sinatra Christmas song which should belong to an album entitled ‘Music from the Uncanny Valley’.

Most of the results that have so far been posted by researchers have the flavour of I’m Sorry I Haven’t A Clue’s ‘One Song to the Tune of Another’ (see here if you need a description of this very complicated game). Thus you can get the AI to do Queen in the style of Nirvana, for example.

Inevitably, a large majority of its training data is non-classical in nature, but I still thought it would be interesting to prompt it with some choral music, to see what it would come up with. The results are surprisingly impressive, though naturally very odd.


Jukebox was primed with about twelve seconds of a recording of the classic Thomas Tallis banger ‘If ye love me’, and given the full lyrics. Now, it has a limited dataset of genres and artists to use as a template, and the closest I could find were ‘Classical’ for the genre and, yes, ‘Mormon Tabernacle Choir’ for the artist. Already the mind boggles.

It had three goes at generating 40 more seconds of the piece, transforming the input through a process of ‘upsampling’ at three different levels. Let’s have a listen to what it came up with after some four hours of labour:

1. If ye love meh

The neural net takes over on the last syllable of ‘commandments’, and in each sample it has a different idea of what chord should follow. Here, it plays it safe and repeats the chord, which works. It’s cool that it makes the phrase lengths broadly ‘vocal’ in nature, and simulates breaths before them too, presumably learning to ape the opening of the prompt.

Some extraneous, non-vocal sounds start to appear in the middle, including at one point what sounds like a train passing, or perhaps a snare drum. I wonder if that’s due to it using the Mormon Tabernacle Choir, with their often quite elaborate arrangements, as a model. For all it knows, the piece starts a cappella and then becomes instrumental. It could also be misinterpreting the acoustic reverb as ‘new sounds’ in their own right, and trying to work out what they could mean.

It also mostly stays in key until the very end – a normally unremarkable thing that I point out only because it’s not a given in the other samples…

2. If ye…love…meeee….

This one’s ‘-ment’ chord is actually a cool choice – A minor rather than the original F major. Afterwards, however, it goes off the rails a little earlier than the previous one. I like the little cymbal ‘ting’ after the second phrase. The choir’s vocal production becomes very slurred, and the AI forgets the key, if it ever knew what that was in the first place. The end becomes rather worrying and distorted, and the harmony is bizarre.

Presumably, because it isn’t given any information about what harmony actually is, it doesn’t know the rules except by what it’s heard before. It must base its moment-to-moment choices about what audio to generate on what it knows usually follows similar stretches of audio. However, I can’t imagine there are many examples in the dataset of a progression like the one at the end of this excerpt. How did Jukebox come up with it?

3. If ye love me, keep in the same key..?

Uh. Pretty out-there choice of a continuation chord on ‘commandments’, but it recovers pretty successfully and sticks the landing. The words also feel a little more present in this one, and it stays in a key and sort of in tune longer than the others, at least until a demonic final entry before the file mercifully ends. There’s some intriguing parallelism in the middle, during the extension of a word that I think might be ‘you’. And it remembers to stay a cappella throughout, which the other two didn’t manage. Probably the most successful.


What’s impressive is that, in all three of its goes, the AI learns that the phrases are preceded by breaths, and apes the length of the first phrase for most of the following ones, varying them subtly but plausibly. But the overall effect of the continuations (if one can ignore the ghostly distortion of the voices) is of someone dreaming a conclusion to a piece of which they only remember the opening. Like dreams, they lose coherence and stop making sense at various points. Still, given that the vast majority of its training is on popular music and other styles, it does a pretty creditable if slightly meandering job.

For me, the results of this are roughly equal parts disturbing, exciting, and hilarious. Disturbing, because the distorted voices end up sounding like something from a horror film. Exciting, because the computer isn’t bound by our conception of harmony or structure – it dreams up new combinations that we might never have thought of. Insomuch as it has worked out the rules, it’s done so by simply listening to a lot of music, like an alien tuning in from another planet and trying to understand how our music works.

As a tool for inspiring creativity, it has limitless potential, because it can always surprise us with its choices. It won’t be long before it gets better at understanding different genres and is able to produce highly competent pastiches – the musical equivalent of these non-existent people.

In the meantime it’s more likely to make me giggle than reflect on the mysteries of human existence. But it won’t be long. I, for one, welcome our new robotic musical overlords.

Categories
Choirs Creativity Technology

Making a ‘Virtual Choir’ video with free* software: Part 3 – Video

In this three-part series of posts, I’ll take you through why and how to make one of those charming multi-screen, multi-track musical videos, based on my own experiences. I’ve used software that’s freely available online [though see update below!], and I’m very much coming at this from the perspective of an amateur video editor, in the hope that my tribulations might make life easier for anyone contemplating putting one of these together.

Click here for Part 1 & Part 2

[Update, March 2021: I’ve recently done a couple more of these videos, and decided to return to these posts, to see if they can be made more helpful, in the light of my more recent experiences. Most importantly, I’ve downgraded the headline from ‘free’ to ‘free*’. It’s definitely possible to do this with freely available software – but I’ve found that spending a little money on professional editing software makes the process roughly 10 times easier and more enjoyable.]

We’ve got our audio. Now it’s time to put the video together.

Step 3: Transcoding the video

This sounds fancy, but it’s really just the process of making sure all the videos you’ve been sent will play nicely with each other. Different phones produce different kinds of files, and film at different frame-rates. Handbrake will put them all into a format that Premiere/Lightworks can handle.

NB Phone cameras generally use variable frame rate (VFR) to make files smaller. Many video editing programmes don’t like that, as it makes things much harder to line up – that’s why we’re ‘transcoding’ the videos to use a constant frame rate (CFR).

  • Add the file to Handbrake when prompted
  • From the presets, select ‘Production Standard’
  • On the ‘Video’ tab, make sure you’ve selected ‘Constant Frame Rate’, and specified a frame rate to work at. 30fps is fine for our purposes. It should be the same for all the video files in the project
  • Press ‘Encode’ to generate the new file, and give it a new name so you know it’s the version you’re going to use
  • Do this for all the videos you’ve been sent
Transcoding in Handbrake

Step 4: Assembling the video

  • Create your project in Premiere Pro/Lightworks (or use the preexisting conducting video project) and add all the newly-transcoded videos, each with their own Video and Audio track
  • To create that split-screen effect, select each clip, make each video smaller using Scale, then change its position along the X and Y axes using Position (DVE in Lightworks; see the Note below)
  • Soon enough, you’ll have a screen full of videos. Now, in the EDIT tab, you can line them up with each other by using the audio of each track, and lining up the ‘clap’ waveform, just as you did when lining up the audio
  • This done, you can mute all the audio tracks and import the one you’re actually going to use – the mixdown from Cubase we made in Part 2
  • Export the edit
  • You could leave it like this, but if you want to add transitions and fades-in etc, rather than use the same project, create a new project and import the video you just made. This reduces the burden on the computer processor
  • That’s it!

Note: The Grid

There’s some maths to be done here – work out by what factor you need to make each clip smaller in order for them to fit into the grid. In the end I used a 7×7 grid to accommodate my 27 participants. I suspect there’s a more elegant solution out there. I could have used a 6×6 grid, of course, but then my conducting video would have been off-centre, and I couldn’t allow myself not to be the centre of attention!

This is greatly complicated by the fact that not everyone will have sent you a video of the same size. A video of dimensions 1920×1080 will need a different scale factor applied to it than one which is in 640×480 to end up the same size on screen. Get out the calculator if you can be bothered, or you can eyeball it if you’re feeling lucky.
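For the curious, the arithmetic looks something like this. The helper below is purely illustrative – the names are mine, and it assumes a 1920×1080 sequence with Premiere’s default behaviour of positioning a clip by its centre point:

```python
# Work out the Scale percentage and Position values for one cell of the
# grid. Assumes a 1920x1080 sequence, and that (as in Premiere) a clip
# is positioned by its centre point.

CANVAS_W, CANVAS_H = 1920, 1080

def cell_transform(row, col, grid=7, clip_w=1920):
    """Return (scale_percent, x, y) placing a clip in cell (row, col)."""
    cell_w = CANVAS_W / grid
    # Scale is relative to the clip's own width: a 1920-wide clip in a
    # 7x7 grid needs ~14.3%, a 640-wide clip needs three times that.
    scale = 100 * cell_w / clip_w
    x = cell_w * (col + 0.5)
    y = (CANVAS_H / grid) * (row + 0.5)
    return round(scale, 1), round(x), round(y)

print(cell_transform(0, 0))                  # top-left cell, full-HD clip
print(cell_transform(3, 3, clip_w=640))      # centre cell, smaller clip
```

The 640-wide clip comes out at 42.9% against the full-HD clip’s 14.3% – three times larger, matching the 1920:640 ratio, which is the ‘different scale factor’ mentioned above.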

I got sent a couple of portrait videos. Rather than asking their senders to re-record in landscape, I simply cropped and scaled them to look landscape, in a rather trial-and-error process.

In my first videos, I just nestled the videos up next to each other with no gap in between – I felt it looked neater than separating them. However, subsequently I experimented with ‘feathering’ the edges of each individual video, which helps make them look more uniform (see here for an example).

Premiere Pro has an effect called ‘Edge Feather’ which is supposed to do this, but for reasons best known to itself, it didn’t work in the largest video I’ve made (circa 40 participants). I hit upon the (very fiddly) solution of using an online picture editor to create a 7×7 grid, blurring the edges, then overlaying it on top of the other videos. Here is the result. In hindsight, it might have been wiser to create the grid before importing any of the videos.

Note: The Background

By default, your background will be black, but this makes the videos show up very starkly and will highlight any inconsistencies in the way they are filmed. Instead, I used the colour-picker tool to lift an off-white colour from the background wall of one of the videos, and created a background ‘matte’ from it, to go behind all the videos. I like the ‘clean’ effect it gives the final video.

Assembly in Premiere Pro

Final Thoughts

There are probably a number of ways I’ve made this more complicated than it has to be. I have, though, generated a work-flow that seems to get the results I’m after. As I’ve mentioned, you can take bits of it that you like, and incorporate them into your own way of doing it – let me know what you come up with!

I’m probably going to end up making more of these, and I’m keen to refine the process. I think it’s worth conductors dabbling – these formats are not going away. Judge for yourself below..!

Categories
Choirs Creativity Technology

Making a ‘Virtual Choir’ video with free* software: Part 2 – Audio

In this three-part series of posts, I’ll take you through why and how to make one of those charming multi-screen, multi-track musical videos, based on my own experiences. I’ve used software that’s freely available online [though see update below!], and I’m very much coming at this from the perspective of an amateur video editor, in the hope that my tribulations might make life easier for anyone contemplating putting one of these together.

Click here for Part 1 and here for Part 3

[Update, March 2021: I’ve recently done a couple more of these videos, and decided to return to these posts, to see if they can be made more helpful, in the light of my more recent experiences. Most importantly, I’ve downgraded the headline from ‘free’ to ‘free*’. It’s definitely possible to do this with freely available software – but I’ve found that spending a little money on professional editing software makes the process roughly 10 times easier and more enjoyable.]

We’ve looked at why we might want to have a go at a split-screen music video. Now let’s look at one way of actually doing it.

Note: This is just one of a thousand different ways you could approach this. I’m not claiming this is the best way – just the one that worked for me, which I mostly figured out as I went along.

Further note: I’m going to address this to the moderately tech-savvy. This is purely a guide to what I did – take all or none of it. It presupposes using YouTube tutorials to get the basics of the software, so I’m not going to cover these in the guide.

What you’ll need

This is the most basic version of the equipment you’ll need to put this together.

  • A reasonably well-specced computer
    • There’s no getting away from this, I’m afraid – video editing eats processing power for breakfast. You’ll need a reasonable amount of RAM and a decent CPU. If you’re using a MacBook, you’ve probably already got this. If not, check your system specs – I reckon 4-8 GB of RAM and a reasonably modern processor should do it, together with enough space on the hard drive for quite a few videos!
  • Audio editing software
    • I used Cubase, which is available as a free trial. If you need longer, it’s not too expensive to buy, or you could try Audacity, which is rather more fiddly, but free for life
  • Video editing software
    • Adobe Premiere Pro. It has a really good introductory tutorial built in. I initially used it on a free trial, but subsequently decided it was worth the money to purchase a subscription for now (~£20 per month)
    • There’s also Lightworks, which is free and does the same sorts of things, and Shotcut, which is also well-specced. However, I have found that these free editors become unstable after a certain number of tracks are added. A little investment in the software prevents a multitude of headaches down the line
  • Handbrake
    • This helps us make sure all the video files submitted to us can be edited by the software, by converting them all into the same format
  • Time

Step 1: Create the Guide

You could simply make your performers record audio and video at the same time. However, this can be a little overwhelming – it’s a lot of pressure to think about both the visual and the audio at the same time when you are recording yourself, and it makes editing and controlling the audio trickier.

We’re going to record the audio and video components of the video separately, then put them together afterwards. This means that the performers can focus entirely on getting their performance right, then, having done so, can effectively mime the video. This allows for a more engaging presentation.

Creating the Guide

The performers need a guide recording to perform along to. It can be as simple as a metronome, but the more the performers feel like they’re performing with others, the better, and some have used preexisting recordings for this purpose, grafting their own voices or instruments on top of it.

I don’t find either of these solutions particularly gratifying. Using a metronome can lead to a rather mechanical performance, and singing along to someone else’s recording doesn’t allow the freedom of your own interpretation.

Note: in fact, some of the pieces I chose needed a flexible tempo, which a metronome would make impossible, and there weren’t any extant recordings to use.

Here’s how I made my guide recording:

  • Using my phone, I took a video of myself, clapping on the fourth beat of a metronome – beep, beep, beep, clap – followed by me conducting the piece to camera.
  • I then recorded myself playing the choral parts/accompaniment on the piano into the audio software (Cubase), while watching the video I had just made (making sure to clap along at the beginning), then exported this as a .wav file
  • In the video editor (Premiere Pro), I lined up my new piano recording with the video, by lining up where the two ‘clap’ waveforms were on the audio tracks – they’re pretty easy to spot. I then exported this video
  • Watching the new video, I repeated the piano recording process, except this time recording myself singing the vocal parts, always lining them up using the clap

Make a rough mix of the voices and piano in the audio editor by adjusting the track levels on the mixer until you’re happy. Then add it as the audio to your conducting video.

For a recent video, I actually made four different versions of this ‘guide’ mix, each one emphasising a particular voice-part by putting it forward in the mix, and the others back. This was a lot more work, but the singers found it helpful to have a strong lead on their part to sing along with.

After exporting, this left me with a video of me clapping, then conducting an invisible ensemble of piano and singers. By following myself conducting, instead of using a metronome, I was able to allow for breaths and a slightly more organic performance. It also forces you to learn whether you’re easy to follow or not!

Note: I asked friends to supply the voice-parts I couldn’t sing. If there’s no one around and you don’t feel like doing it, why not engage some professional singers to lay down guide tracks for a few bob – they’ll appreciate the work.

Send it to the Performers

Send the video to the group, along with detailed instructions as to how to contribute – everything from positioning the recording device, to warming up beforehand, and clapping with the guide. I based my guidelines on the excellent list available here (geared towards the a cappella tradition but mostly applicable).

Experience suggests the following problems are most common (and need highlighting in the instructions!): the orientation of the videos (I prefer landscape, but everyone has to do the same or it looks messy); forgetting to clap in the audio/video/both.

Each participant records audio (with headphones in) and then video separately (no headphones), and sends you both files. Use a service that permits the transfer of large files, such as WeTransfer, wesendit, iCloud, Google Drive, etc.

Step 2: Assembling the Audio

Lining up clap waveforms in Cubase

As you receive the audio files, import them into Cubase, and line them up with the guide recording using the clap.

NB You might need to make sure they’re in a format Cubase can read – for example, it doesn’t like Apple’s m4a format, so I used this website to convert those to wav.

Hopefully this should mean they’re vaguely together with each other – you can make micro-adjustments if not. You can trim ‘rogue’ moments out, add some reverb to distance the sound a little, and use the Mixer to get the balance right between parts. Play about until you’re happy, then export to a single file. Remember to leave in the ‘clap’ so that you can synchronise it to the video in the next stage.
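If you’d rather automate the nudging than do it by eye, the underlying idea is cross-correlation: slide one recording against the other and keep the offset where the claps overlap best. Below is a toy pure-Python sketch of that principle (real audio would be large sample arrays read from the .wav files) – my own illustration, not a Cubase feature:

```python
# Slide `take` against `guide` and return the offset (in samples) that
# makes their claps overlap best. A negative result means the take's
# clap arrives late, so shift that track earlier by the same amount.

def best_offset(guide, take, max_shift):
    def score(shift):
        # Dot product of the overlapping region at this shift.
        return sum(g * take[i - shift]
                   for i, g in enumerate(guide)
                   if 0 <= i - shift < len(take))
    return max(range(-max_shift, max_shift + 1), key=score)

# A 'clap' is just a short spike; here the take's clap is 3 samples late.
guide = [0, 0, 9, 1, 0, 0, 0, 0, 0, 0]
take  = [0, 0, 0, 0, 0, 9, 1, 0, 0, 0]
print(best_offset(guide, take, 5))   # -3: shift the take 3 samples earlier
```

At 44.1kHz a real clap search window would be tens of thousands of samples, so in practice you would use a proper audio library for this – the sketch just shows why a sharp clap makes alignment easy: the spike produces one unmistakable peak in the correlation.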

If you can access plugins, I’ve found the following invaluable, used on the whole mix: DeEsser (to de-emphasise those sibilants); Limiter (to prevent the audio from getting too loud and distorting); EQ (taking off highs and lows creates a bit of distance); Reverb. The latter presents an interesting challenge: you want it to sound like the listener is hearing a choir at a normal distance (10 metres or so), but must reconcile this with the fact that the singers’ faces are right up by the screen, which psychologically suggests a more intimate sound.

If you’re just making an audio-only virtual performance, you can stop there. If hubris hasn’t yet got the better of you, though, the final stage is video. Hold on to your hats (and spare a thought for your poor computer).

Next week:

Making a ‘Virtual Choir’ video with free software: Part 3 – Video

Categories
Choirs Creativity Technology

Making a ‘Virtual Choir’ video with free* software: Part 1 – Why

In this three-part series of posts, I’ll take you through why and how to make one of those charming multi-screen, multi-track musical videos, based on my own experiences. I’ve used software that’s freely available online [though see note below!], and I’m very much coming at this from the perspective of an amateur video editor, in the hope that my tribulations might make life easier for anyone contemplating putting one of these together.

Click here for Part 2: Audio and here for Part 3: Video

[Update, March 2021: I’ve recently done a couple more of these videos, and decided to return to these posts, to see if they can be made more helpful, in the light of my more recent experiences. Most importantly, I’ve downgraded the headline from ‘free’ to ‘free*’. It’s definitely possible to do this with freely available software – but I’ve found that spending a little money on professional editing software makes the process roughly 10 times easier and more enjoyable.]

You can’t escape them, it seems. Open your social media account of choice and there they are: serried ranks of faces, at once charming and somehow alien, singing directly at you. It seems almost magical, like they’re under a spell.

These kinds of videos aren’t new, but Covid-induced lockdowns have prompted a remarkable surge in interest in this quintessentially 21st-century form of performance. But how hard are they to produce? Do you need to hire a professional video editor, or can it be done by anyone with a bit of time on their hands and a taste for masochism? More importantly, is this a bandwagon worth jumping on?

I’m going to try and answer those questions over three posts, from the perspective of a musical professional but a technological amateur. My hope is that it will be a helpful resource to anyone thinking about doing this over the next few months, or beyond.

Let’s begin with the philosophical, before moving on to the technical.

Why make a video?

First things first: ‘because everyone else is doing it’ probably isn’t a good enough reason. I’ve joked about the bandwagon, but ultimately I think it’s only worth doing if it satisfies certain criteria: Will my ensemble enjoy it? Will it serve our mission or purpose? Does this format suit us? Let’s address them in order.

Will we enjoy it?

If you’re working with an amateur ensemble, you presumably want this to be an enjoyable venture, or at least not an actively disagreeable one. The difficulties for the amateur contributor are not inconsiderable and shouldn’t be underestimated: 1) no one is at their peak of technical or vocal health during lockdown; 2) not everyone has the same technology, or aptitude for it; 3) there’s nowhere to hide and no safety in numbers; and 4) hearing/seeing yourself alone can be a very disheartening experience – even for professionals! Not to mention that everyone is adapting to different demands on their time and energy.

My solution has been to be upfront about these difficulties – to stress that the final product will be worth it, and that no one is being judged on their performance. As I’ve said numerous times, I wouldn’t want anyone judging me on the current state of my lockdown-lapsed breath control! I’ve encouraged members of my ensembles to just give it a go, and promised that most things can be fixed in the edit.

Additionally, I’ve put in the caveat that if we as an ensemble don’t think it represents us as we wish to come across, we won’t release it publicly. Which brings us to the next criterion: what does it do for us?

What does it do for us?

Ultimately, once things are out there in the public domain, it’s pretty hard to close the box. You’ve got to be fairly sure that what you do put out there is going to reflect positively on the group.

There have been some terrific videos, which will certainly have long-lasting reputational benefits for those ensembles. This one is effective. And this one, below, from my old choir, is really slick and shows the group at home in their core repertoire. But it’s probably fair to say that not all the groups that have put videos out there will want them to stay there forever. So take a moment to weigh reputational benefit against risk.

Perhaps there’s a particular repertoire that’s under-recorded that your group specialises in. There might be a unique interpretation you can bring to bear, or a piece that says something about your identity as a group, or about the current situation. I think these are the most compelling reasons (and incidentally, I think they apply to commercial CD recordings too).

Ultimately, the way I’ve framed it to my groups is this: we’ll challenge ourselves to have a go. If we think it represents us well or is of value in some way, we’ll release it. If not, at the very least it’s generated something that we can keep and share internally, a memento of a bizarre year.

There’s a solution to the current situation which is group-shaped, by which I mean there’s no one-size-fits-all approach. Each of my main choirs has come at it differently, and come up with approaches to addressing it which suit them. Some will involve remote recordings, but not all. There’s no shame in not doing these, and they’re not right for all situations.

Next steps

Now we know why we’re doing this and what we hope to get out of it. Next comes the fun part.

Next week:

Making a ‘Virtual Choir’ video with free software: Part 2 – Audio