We now use the improved pose-to-video based on diffusion models.
We start with a paragraph in German and translate it into German Sign Language:
Das Alte Museum wurde 1830 als erstes öffentliches Museum in Berlin eröffnet.
Im Obergeschoss können Sie bei einem großartigen Ausblick über den Lustgarten später mehr über die Geschichte des Museums und seinen Architekten Karl Friedrich Schinkel erfahren.

(English: "The Altes Museum opened in 1830 as the first public museum in Berlin. On the upper floor, with a magnificent view over the Lustgarten, you can later learn more about the history of the museum and its architect Karl Friedrich Schinkel.")
The simple glossing gives:

The current system gives:

We choose to focus on one issue: visual inconsistency between signs.
After adding pose anonymization in 0072c52, the output is:
dgs-example.mp4
We note that the database lookup time was 9 seconds; this was optimized to 1-2 seconds and could be improved further.
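As a hedged sketch of where such a speedup can come from: repeated glosses in a paragraph mean many identical queries, so indexing and caching the lookup helps. The `lookup_sign` name and the `signs(gloss, pose_path)` schema below are assumptions, not the project's actual code.

```python
import functools
import sqlite3

conn = sqlite3.connect("lexicon.db")
# An index on the gloss column turns each lookup into a B-tree search
# instead of a full table scan (assumed schema: signs(gloss, pose_path)).
conn.execute("CREATE INDEX IF NOT EXISTS idx_gloss ON signs (gloss)")

@functools.lru_cache(maxsize=4096)
def lookup_sign(gloss: str) -> str | None:
    """Return the pose file path for a gloss, caching repeated queries."""
    row = conn.execute(
        "SELECT pose_path FROM signs WHERE gloss = ?", (gloss,)
    ).fetchone()
    return row[0] if row else None
```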
We recognize that sentences should be split. This affects both the database search (search only within one sentence at a time) and the video (lower and raise the hands at sentence boundaries instead of cropping). Possibly, generate every sentence independently, then join them.
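A minimal sketch of the split-then-join idea, assuming a hypothetical `translate_sentence` function that returns a pose sequence as a `(frames, keypoints, dims)` numpy array, and a known rest pose (both are assumptions):

```python
import re
import numpy as np

def split_sentences(text: str) -> list[str]:
    # Naive boundary detection on ., !, ?; a real system would use a
    # proper sentence segmenter.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def join_poses(chunks: list[np.ndarray], rest_pose: np.ndarray, pad: int = 10) -> np.ndarray:
    """Concatenate per-sentence pose sequences, lowering the hands to a
    rest pose between sentences instead of cropping mid-motion."""
    rest = np.repeat(rest_pose[None], pad, axis=0)
    parts = []
    for chunk in chunks:
        parts.extend([chunk, rest])
    return np.concatenate(parts[:-1], axis=0)  # drop the trailing rest

# Each sentence is searched and generated independently, then joined:
# sentences = split_sentences(paragraph)
# video_pose = join_poses([translate_sentence(s) for s in sentences], rest_pose)
```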
The system mainly fails here on numbers (1830) and named entities (Karl Friedrich Schinkel), which, with some modifications, it could spell out.
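One plausible fix is a fingerspelling fallback for out-of-vocabulary tokens, reusing the assumed `lookup_sign` from the sketch above and assuming the lexicon also contains single letters of the manual alphabet:

```python
def poses_for_token(token: str) -> list[str]:
    """Return pose paths for a token, falling back to spelling it out
    letter by letter when no whole-sign entry exists."""
    path = lookup_sign(token.upper())
    if path is not None:
        return [path]
    # Fall back to the manual alphabet: one sign per character.
    # Digits could analogously map to number signs ("1830" -> 1, 8, 3, 0).
    return [p for ch in token.upper() if (p := lookup_sign(ch)) is not None]
```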
It also did not make sentence boundaries clear and basically ignored the punctuation (also fixable).
In my opinion, perhaps the biggest problem to address is that the signing is performed in spoken-language word order. It is comprehensible, but not really sign language.
The smoothing between signs is too simplistic (easily seen in the skeleton video) and can be fixed.
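For example, replacing linear interpolation between the last frame of one sign and the first frame of the next with an ease-in-out (smoothstep) curve is a cheap improvement. A sketch, with assumed `(keypoints, dims)` pose frames:

```python
import numpy as np

def smooth_transition(last: np.ndarray, first: np.ndarray, n: int = 8) -> np.ndarray:
    """Interpolate n frames between two poses with an ease-in-out curve
    (smoothstep), so the hands accelerate and decelerate naturally
    instead of moving at constant velocity."""
    t = np.linspace(0.0, 1.0, n)[:, None, None]      # (n, 1, 1)
    t = t * t * (3.0 - 2.0 * t)                      # smoothstep easing
    return (1.0 - t) * last[None] + t * first[None]  # (n, keypoints, dims)
```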
The video quality is not the best. The generated interpreter would have some artifacts even if the pose sequence were perfect. This is not easily fixable.
The video is quite slow. Further work can be done to make the signing faster and tighter, decreasing the number of frames.
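One low-effort option, sketched below under the assumption that near-static holds account for much of the slack, is to drop frames whose mean keypoint displacement falls under a threshold:

```python
import numpy as np

def drop_static_frames(pose: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Keep a frame only if it moved more than `threshold` (mean keypoint
    displacement) since the last kept frame, compressing holds and pauses."""
    kept = [0]
    for i in range(1, len(pose)):
        if np.linalg.norm(pose[i] - pose[kept[-1]], axis=-1).mean() > threshold:
            kept.append(i)
    return pose[kept]
```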