Overall it seems to me that manual animation of faces is the only way to get a decent result. It is definitely amazing that something like this can be automated, but to me a2f is a proof of concept/novelty. So it is very important to allow custom MetaHuman heads into a2f, to preview what the result looks like.

Also, when I was transferring the animation to blend shapes (ideally this would not be a necessary operation each time), one side of the face looked different; the animation was not the same, and there was a significant discrepancy. Even adding an additive layer did not produce anything usable. Blindly exporting an animation that worked OK on the default a2f head onto a MetaHuman, the result was not really usable: the mouth was too wide open and the top teeth were too exposed throughout. Furthermore, I did not find a way to preview the animation on my custom mesh. I tried to export the face skeletal mesh for a MetaHuman into Omniverse using the connector for UE 5.1, but the skin texture did not work despite trying different export settings.

I actually think the whole approach to this software is off. I don't want to start off in Audio2Face using some generic male avatar head… what if my character is female? It would be way better to be able to import your custom character, whatever that is (Character Creator, iClone, MetaHuman, DazStudio, Poser, a custom mesh, etc.), and then begin the animation process with the character that will ultimately be using the animation, so you don't waste time tweaking the animation on some random generic proxy character. You would then know exactly what you would be getting: the final resulting facial animation and lipsync should be exactly what you see inside Audio2Face.

You need to make it so that the animation Audio2Face generates is exactly the same when it is applied to a custom character, and in this case to the Epic Games MetaHuman characters. If I had to tweak the animation a tiny bit here and there I would be okay with that, but the results I got were not usable at all, and I ended up having to animate the entire thing by hand… which defeats the whole point of using this software.

The blendshape solve basically destroys all the great AI lipsync that Audio2Face generates. After I exported from Audio2Face and imported into Unreal, the entire animation was way off… the mouth never even closes. I basically had to redo the entire animation by hand, creating blend shape poses for all the phoneme shapes and then re-animating the entire thing. So yeah, there is definitely room for improvement.

Bump.
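For anyone hitting the same symptoms, one cheap diagnostic before redoing everything by hand is to inspect and post-process the exported weight curves directly. The sketch below is a minimal illustration under stated assumptions, not an official workflow: it pretends the export can be read as a JSON file mapping curve names to per-frame weights (the real a2f export format may differ), and the ARKit-style curve names, gain values, and file names are all hypothetical placeholders.

```python
# Sanity pass over exported blendshape weight curves. Everything here is
# illustrative: the JSON layout {"curveName": [w0, w1, ...]}, the
# ARKit-style curve names, and the gain values are assumptions, not
# Audio2Face's documented export format.
import json

# Per-curve gain corrections (hypothetical values, tune by eye).
GAIN = {
    "jawOpen": 0.6,           # tame "mouth too wide open"
    "mouthUpperUpLeft": 0.7,  # tame "top teeth too exposed"
    "mouthUpperUpRight": 0.7,
}

def load_curves(path):
    """Read {"curveName": [w0, w1, ...], ...} from disk."""
    with open(path) as f:
        return json.load(f)

def report_closure(curves, curve="jawOpen", tol=0.05):
    """Flag the 'mouth never closes' symptom: the jaw curve should
    return near zero somewhere in the clip."""
    weights = curves.get(curve, [])
    if not weights:
        return
    floor = min(weights)
    print(f"{curve}: minimum weight over clip = {floor:.3f}")
    if floor > tol:
        print(f"  warning: never drops below {tol}; consider "
              "subtracting this floor before retargeting")

def report_asymmetry(curves, tol=0.1):
    """Quantify left/right discrepancy by comparing paired curves
    (names assumed to end in 'Left'/'Right')."""
    for name, left in curves.items():
        if not name.endswith("Left"):
            continue
        right = curves.get(name[:-4] + "Right")
        if not right:
            continue
        n = min(len(left), len(right))
        if n == 0:
            continue
        diff = max(abs(l - r) for l, r in zip(left[:n], right[:n]))
        if diff > tol:
            print(f"{name[:-4]}: max L/R difference = {diff:.3f}")

def rescale(curves, gain):
    """Apply per-curve gains, leaving unlisted curves untouched."""
    return {name: [w * gain.get(name, 1.0) for w in ws]
            for name, ws in curves.items()}

if __name__ == "__main__":
    curves = load_curves("a2f_export.json")  # hypothetical filename
    report_closure(curves)
    report_asymmetry(curves)
    with open("a2f_export_fixed.json", "w") as f:
        json.dump(rescale(curves, GAIN), f, indent=2)
```

Scaling curves like this can mask the "too wide open" and "teeth too exposed" symptoms, and the closure and asymmetry checks at least make the drift measurable, but it is a band-aid: a correct retarget of the solve onto the custom head is still the real fix.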