Hello, I am new here.
I am trying to copy real voice parameters (timing and pitch) to Rikka's voice, hoping to achieve some realism.
I wanted to know how "real" Synth V can be. It is not "classic" tuning, and maybe it's a little like cheating, but it is interesting for me and maybe for someone else too. This way I can learn tuning from real voices.
I use Lua scripts to import data from Praat, a free phonetic analysis program. In the future I'd like to automate the process as much as possible. There is still much work to do.
I will be glad to hear what you think about it.
Synthesizer V Studio Pro 1.3.0
Voice database Koharu Rikka AI ver.100 Japanese
So far I have two shortened Japanese songs from Hello!Project on YouTube:
(links to original songs in video description)
Koharu Rikka AI - Hatsukoi Cider (Yofuu Runo voice copy) 小春六花AI 「初恋サイダー」（豫風瑠乃のボイスコピー）
Koharu Rikka AI - Hikkosenai Kimochi (Oda Sakura voice copy) 小春六花AI「引越せない気持ち」（小田さくらのボイスコピー）
I don't think there is really a problem with using this method; we can learn a lot from real-life vocals and still have areas that can be improved. The vocals in these two videos are very good, but I think they still need to be tuned specifically for the voicebank.
Thank you for creating and trying.
(Translated by translator)
Thank you for listening and commenting. I am trying different things. I hope my next attempt will be better.
I like Synth V. This is what I have really wanted to do since I first listened to a Vocaloid. Dreamtonics is the best for allowing us to use scripts, and the vocal quality is the best too.
Do you manually segment the song in Praat using a TextGrid and then import the timing and syllable information using Lua? That would make sense.
Are you also exporting a Pitch Listing and using that to build a Pitch Curve in SynthV? I would expect that to be really challenging!
Yes, I start in Praat, labeling the timing (start/end of the vowel in each syllable, adding lyrics) with a TextGrid and analyzing the pitch. Then I compute a median pitch for the notes and export to SynthV, and then manually tweek the phoneme lengths.
I also transfer the relative pitch, after subtracting the note's absolute pitch and simplifying the contour.
That is the 2nd try, in Hatsukoi Cider.
At first I tried to copy the pitch contour manually by tweeking the notes' pitch transitions and vibrato (in Hikkosenai Kimochi). It was very time consuming, and the result was not good.
I thought about doing it with a program, but I cannot figure out how the pitch curves are affected by the parameters. Some splines, probably, but too complicated for me.
For this purpose the relative pitch is a little unfortunate.
I can provide project files if someone is interested.
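To make the median-pitch step concrete, here is a minimal Lua sketch. The function name, the sample data, and the assumption that each voiced frame of the Praat pitch listing is a "time  F0" line (with `--undefined--` for unvoiced frames) are my own illustrations, not the actual script:

```lua
-- Compute the median F0 (Hz) of one note from Praat pitch-listing lines.
-- Voiced frames look like "0.10  438.2"; unvoiced read "--undefined--".
function medianPitch(lines, noteStart, noteEnd)
    local f0s = {}
    for _, line in ipairs(lines) do
        local t, f0 = line:match("^%s*([%d%.]+)%s+([%d%.]+)")
        if t then
            t, f0 = tonumber(t), tonumber(f0)
            if t >= noteStart and t <= noteEnd then
                f0s[#f0s + 1] = f0
            end
        end
    end
    if #f0s == 0 then return nil end  -- no voiced frames in this note
    table.sort(f0s)
    local mid = math.floor(#f0s / 2)
    if #f0s % 2 == 1 then
        return f0s[mid + 1]
    else
        return (f0s[mid] + f0s[mid + 1]) / 2
    end
end

-- Made-up frames for a note spanning 0.10 s to 0.20 s:
local sample = {
    "0.10  438.2",
    "0.12  440.1",
    "0.14  --undefined--",
    "0.16  441.5",
}
print(medianPitch(sample, 0.10, 0.20))  -- 440.1 (median of the 3 voiced frames)
```

The unvoiced `--undefined--` lines simply fail the numeric pattern match and are skipped, so only voiced frames enter the median.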
Thank you for the comment. I see you have maybe thought about it too.
It is very interesting for me and I definitely will continue trying.
If you set the default Pitch Duration Left and Duration Right of the voice to zero, you wouldn’t have to worry about transitions. You would also want to set the Vibrato Depth to zero, so all the pitch variation would come from Praat data.
In that case, the Pitch Curve in SynthV would completely drive the pitch variation. The Pitch Curve can be +/- 1200 cents, where 100 cents is equal to a half step.
Instead of dealing in terms of pitch units (that is, frequency), you'll need to convert frequency to half steps:
-- Convert frequency (Hz) to half steps relative to A4 (440 Hz)
function frequencyToHalfSteps( frq )
    return 12 * math.log(frq / 440) / math.log(2)
end

-- Convert half steps relative to A4 back to frequency (Hz)
function halfStepsToFrequency( halfSteps )
    return 440 * 2 ^ (halfSteps / 12)
end
So subtracting the note's base pitch (in half steps) from the measured pitch (in half steps) and multiplying by 100 should give you the same units that SynthV uses.
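As a small, self-contained Lua example of that conversion (the sample frequencies are made up; A4 = 440 Hz is the reference pitch):

```lua
-- Deviation of a measured frequency from a note's base frequency, in cents
-- (100 cents = one half step, matching SynthV's Pitch Curve units).
function centsFromBase(measuredHz, baseHz)
    local measured = 12 * math.log(measuredHz / 440) / math.log(2)
    local base = 12 * math.log(baseHz / 440) / math.log(2)
    return (measured - base) * 100
end

-- A note written at A4 (440 Hz) but sung at about 466.16 Hz (B-flat 4)
-- gives a Pitch Curve value of roughly +100 cents (one half step sharp).
print(centsFromBase(466.16, 440))
```

The 440 Hz reference cancels out in the subtraction, so any consistent reference would give the same deviation.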
Or were you asking something else?
By the way, "tweaking" is probably the word you want; "tweeking" is slang for the odd behavior of methamphetamine users.
Thank you. I wanted to control the pitch not with the Pitch Curve but with the note parameters (Duration/Depth Left/Right, Vibrato…), so I wanted an equation to compute the pitch from those. It was a bad idea, I know now.
Even when I set Depth and Duration to zero, there are some ripples at the adjacent notes, and the control points of the Pitch Curve are very sensitive to manual shifting. Curves from adjacent notes barely meet at the same point. I know it cannot be helped; it is not meant for this purpose.
Sorry for the "tweeking". I am not used to math, but I like to learn new words.
I believe you’ve got the option to choose between linear interpolation and spline interpolation on the control curves. So if you didn’t mind putting in lots and lots of control points, you could maybe, sort of, kind of get around the problem of the ripples.
Of course, I’m sure there would be other artifacts as a result.
I had totally forgotten about the different types of interpolation. I use only splines. I will perhaps try linear. Thanks.
Hi, I know this is super late, but is there a way to get something similar without scripting? I wouldn't mind missing out on the pitch copying, but I'd like a way to plot out the notes faster without having to chart them word by word.
If I understand correctly: you want to use the pitch quantization script to convert the melody from the voice track, but you don't have Studio Pro with scripting?
For Studio Basic, the only option I see is to use an external script to create an SVP project file with the resulting notes.
I can try to rewrite the script for the Lua command line.
It is possible, but is it what you want?
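For placing notes in a generated SVP file, the main unit conversion such an external script would need is from seconds to SynthV's internal ticks. A hedged sketch, assuming the value of `SV.QUARTER` (705,600,000 blicks per quarter note) documented in the SynthV scripting API and a constant tempo:

```lua
-- SynthV positions notes in "blicks"; the scripting API documents
-- SV.QUARTER = 705600000 blicks per quarter note.
local BLICKS_PER_QUARTER = 705600000

-- Convert a time in seconds to blicks, assuming a constant tempo in BPM.
function secondsToBlicks(seconds, bpm)
    local quarters = seconds * bpm / 60
    return math.floor(quarters * BLICKS_PER_QUARTER + 0.5)
end

-- At 120 BPM, 0.5 s is exactly one quarter note: 705600000 blicks.
print(secondsToBlicks(0.5, 120))
```

The rest of the SVP file is JSON, so the converted onsets and durations would go into the note entries of the generated project; the exact JSON layout is easiest to see by saving a small project from SynthV and copying its structure.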
Yeah, that's what I'm looking for. I'm sure a lot of Studio Basic guys would love to be able to use your method of tuning as well.
I saw it on YouTube, but you posted it here, too! I wanted to use this method last time, but I couldn’t try it because there was an error in the script. Do you happen to know how to solve this error?
This means it cannot open the pitch file. It is expected to be in the project file's directory and named
If so, maybe try using Roman letters for the names.
What is your OS?
I will try it later; it would be only a one-time import, with no chance to use other scripts.
And are the names and locations as described?
Yes, the folder and the name were set as described in the YouTube video description. Is it because my voicebank is not an AI one? I'm currently using an AI Lite voicebank…