(Replying to PARENT post)
(I guess it might take a few years for the performance to get there)
(Replying to PARENT post)
I'm a bit disappointed though that they didn't also include results for a synthetic source video with "impossible" poses (e.g. joints bending backwards, stretching, separating from the body or performing full rotations). That would have been pretty interesting (though perhaps a bit unsettling) to see.
(Replying to PARENT post)
Using AI to transform anyone into a professional dancer might mean using AI to process live webcam video of someone dancing and then giving them feedback for improvement. In a word: coaching.
This, however, uses AI to produce composite videos of people dancing.
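To make the coaching idea concrete, here is a purely hypothetical sketch: the joint names, coordinates, and the 10-degree tolerance are all invented for illustration, and a real system would pull keypoints from a pose estimator rather than hand-typed dictionaries.

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint b, formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    # Clamp to acos's domain to guard against floating-point drift.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def feedback(student, coach, triples, tol=10.0):
    """Report joints whose angle deviates from the coach's by more than tol."""
    notes = []
    for a, b, c in triples:
        diff = (joint_angle(student[a], student[b], student[c])
                - joint_angle(coach[a], coach[b], coach[c]))
        if abs(diff) > tol:
            notes.append((b, round(diff, 1)))
    return notes
```

Of course, a real coach also compares timing and dynamics, not just per-frame geometry; this only flags static angle differences.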
(Replying to PARENT post)
Maybe I'm in the minority, but I think if we take this idea and walk with it, it has the potential to trivialize actual accomplishment. Maybe I'm overthinking it.
(Replying to PARENT post)
- Mimic a target's body motions (this link)
- Mimic a target's facial expressions (deepfakes)
- Mimic a target's voice (Lyrebird AI, etc)
Related video on digital animation puppeteering:
https://www.youtube.com/watch?v=YiOByO8J7xg&t=2s&list=LLI462...
It's not perfect by any means, but we're seeing a new age of CGI. Once perfected, I wonder how the entertainment industry will change as a result (faster rendering times, less time to make scenes, puppeteering, no need for expensive famous actors or stunt doubles, digital identity copyrights, etc.)
(Replying to PARENT post)
I appreciate that the detected poses and motions create clear pictures of what different parts of the body are doing. Particularly for ballet, if I had access to this technology (in a way that was user-friendly), I'd love to see the difference between ballet styles (Vaganova, Cecchetti, ABT, etc.). I think it would be much clearer, from a student's perspective, to see the stylistic differences in lines, shapes and movement.
This AI reminds me of Happy Feet, where they took Savion Glover's movement and choreography and applied it to the animated penguin. It doesn't seem too far-fetched. And lastly, for those who say this seems unnatural--dancing is unnatural to the body, hence the training and years put into it. So having AI applied to it will only make it look more unnatural.
Artistically, this can be debated (as it has been), but in the search for 'real-life application,' I'd love to get my hands on this as a teaching tool.
Sorry for the long post--this is my first time on this site--my boyfriend sent this to me & warned me that if I blabbed too long, this post would not be successful.
(Replying to PARENT post)
"(...) allows anyone to portray themselves as a world-class ballerina (...)"
Moreover, after AlphaGo took Go away from us, I started to wonder "what is left" for humans, and I believe that we are centuries away from having machines that achieve a world-class dancing level. My reasoning is that in things like Go, image or speech recognition, it is easier to "encode" the information for the ML to actually learn. Encoding the movements of professional dancers, on the other hand, is already quite difficult. Consider, for example, that in the video linked here the whole human body is mapped to ~20 points. Sure, this may be enough to portray someone as a dancer. But good luck making a dancing robot.
So maybe I'll quit my programming career to become a dancer; it's less likely to be a job that the machines will take away ;-)
edit: grammar
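One consequence of the ~20-keypoint representation worth noting: transferring motion between two people of different heights requires normalizing the source pose into the target's frame. If I recall, the paper uses a global pose normalization based on ankle positions and heights; the sketch below is a heavy simplification of that idea (a single linear map, with all coordinate values made up for illustration — the real method is more involved).

```python
def normalize_pose(source_pts, s_ankle_y, s_head_y, t_ankle_y, t_head_y):
    """Rescale/translate source keypoints so the subject's height and floor
    level match the target's frame. Image y grows downward, so the ankle
    has the larger y value. Assumed, simplified version of the paper's
    global pose normalization."""
    scale = (t_ankle_y - t_head_y) / (s_ankle_y - s_head_y)
    return [(x * scale, t_ankle_y + (y - s_ankle_y) * scale)
            for x, y in source_pts]
```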
(Replying to PARENT post)
Like competitive sports, art is all about the display of human ability under constraints. This is why, even in the age of photography, we still value hand-painted canvases. Such techniques are simply going to make people more discerning between real effort and automated means of generating the same outcome.
Rather than thinking AI-assisted style transfers are the end of art, we should see these as new tools for artists to do even more interesting stuff. See this upcoming tool for example: https://runwayml.com/
(Replying to PARENT post)
I wonder if seeing yourself dance like this might speed up learning to actually dance like this...
(Replying to PARENT post)
I mean, who cares what you look like in some video? When you actually meet people, they'll know that it's bullshit.
Now, if you could manage it in meatspace, that would be cool!
(Replying to PARENT post)
> the team based their algorithm on the pix2pixHD architecture developed by NVIDIA researchers
Is it just me, or is NVIDIA trying very hard to take credit for this UC Berkeley paper? (They're almost taking credit for PyTorch as well.) Sure, this kind of work wouldn't be possible without their hardware, but by that logic Intel could take credit for most of science in the last few decades.
(Replying to PARENT post)
And just like auto-tuned voices, it will come off as janky and fake.