Best practices...

I’m new to Synthesizer V (with the Solaria voicebank). I used Vocaloid years ago, but nothing for voice since.

What’s the best way to come up to speed on the terminology, technology, and available resources? I’ve tried searching this forum (the resources category in particular), but haven’t come up with much. It seems lots of people are getting their info from somewhere, but I can’t figure out where.

I love how easy it is to work with SV compared to my old Vocaloid (v3 - Avanna was the last one I had, I think). It works really well inside Cubase as a VST, and the results, with very little effort, are just amazing. I’m getting good results on day one (for a beginner!), but I just want to make sure I’m on the right track and using best practices for programming it.

Documentation isn’t their strong suit at the moment.

There’s an official video series here: Synthesizer V Studio: A First Glance - YouTube

New features get announced, but the documentation of how to use a feature can be non-existent. For example, I think the way I found out about how to use Tone Shift was in the comments section of YouTube.


Yeah, I’ve watched the videos, but they’re not much help when the software shown in them isn’t in English (or at least in a language using the Latin alphabet, where you can usually decipher words even outside your native language). I’ll go back and check the comments sections though. Never thought of that.

It sadly seems to be the way everything goes these days. Technical manuals seem to be virtually extinct. I guess that’s what happens when you can roll out an update in next to no time and it changes everything. LOL. Maybe a Wiki would be a good thing.

Plus, the official forum and website don’t seem to be the primary sources of information. The fact that the website seems more intent on being clever than usable doesn’t help.

And announcements seem to land on Twitter and Facebook long before they appear on the website, then get cut-and-pasted onto the forum not by the admins, but by other users.

Finally, English appears not to be the primary market. And supporting documentation in multiple languages is entirely non-trivial.

So as a small company, the focus appears to be more on growing the technology than growing the user base in a more traditional way.


There isn’t really much you can do wrong as far as “best practices” go. tbh as long as you open each menu and read through the options you’ll have a pretty good grasp of the software. If you’re not sure what something does, click on it and see what happens.

Off the top of my head, some things that wouldn’t be evident from just clicking through the UI:

  1. The phoneme list isn’t present anywhere in the UI. You can either display it with the script that came bundled with Solaria, or check the C:\Program Files\Synthesizer V Studio Pro\clf-data\english-arpabet-phones.txt file (adjust the path for your installation directory).
  2. “Instant mode” applies auto-pitch tuning to your notes automatically as you work, but these pitch bends don’t appear in the pitch deviation panel, which can be confusing if you don’t know what the feature does ahead of time
  3. When entering notes, you can use - to extend a phoneme across two or more notes, and + to advance to the next syllable in a word.
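If you’d rather pull that phoneme list into a script than open the file by hand, here’s a minimal Python sketch. The path is the default Pro install location quoted above, and the one-entry-per-line plain-text layout is my assumption, so verify both against your own copy:

```python
from pathlib import Path

# Default Pro install path from the tip above -- adjust for your own setup.
PHONE_FILE = Path(r"C:\Program Files\Synthesizer V Studio Pro"
                  r"\clf-data\english-arpabet-phones.txt")

def load_phonemes(path: Path) -> list[str]:
    """Return the non-empty lines of the phoneme list, or [] if the file is absent.

    Assumes a plain-text file with one entry per line; the exact format
    isn't officially documented, so check it against your own copy.
    """
    if not path.is_file():
        return []
    return [line.strip()
            for line in path.read_text(encoding="utf-8").splitlines()
            if line.strip()]

if __name__ == "__main__":
    entries = load_phonemes(PHONE_FILE)
    if entries:
        print(f"{len(entries)} phoneme entries, e.g. {entries[:5]}")
    else:
        print("Phoneme file not found -- adjust PHONE_FILE for your install.")
```

Nothing fancy, but it’s handy if you want the list in a cheat sheet or to sanity-check a phoneme spelling without launching the editor.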

I’m sure there are more things, but the vast majority of features will be clear from the name, or by simply trying them out and seeing what they do.

All that said: Dreamtonics, please give us some proper documentation. Especially with the latest AI updates being increasingly esoteric and unpredictable, the software is steadily becoming less intuitive.


Any idea how to use the Expression Groups? How to create them and assign them? I just have one entry “Default”, and seemingly no way to add new ones.

Expression Groups do not apply to AI voices.

Standard voices are created by recording many individual samples representing each sound and transition between sounds, at various pitches, and the Expression Group option lets you choose which set will be used. For example, these are Genbu’s options: [screenshot of Genbu’s Expression Group dropdown omitted]

AI voices use a machine-generated voice profile to generate sound instead of discrete source samples, so there is no discrete list to choose from.


Thanks for the info. This was driving me nuts. Now I can just ignore and move on. :smile: