Added a Character Expression System #323


Open
wants to merge 5 commits into main

Conversation

amadeus737

Leverages Unity's Timeline API to create custom sequencing components that define a character's mood as a function of time. The mood then drives other sub-systems that control the character's eye textures, mouth textures, and animations. Additional features like eye blinking and lip-syncing are also supported and are ultimately determined by the character's mood. Finally, the system supports localization: for example, the lip-syncing mouth textures used on a character change with the active language. Combined, I refer to the whole thing as a Character Expression System.
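To make the Timeline integration concrete, here is a minimal sketch of a custom clip behaviour for a mood track. All names here (`Mood`, `ExpressionController`, `SetMood`) are illustrative assumptions, not the PR's actual types; see the forum post for the real implementation.

```csharp
using UnityEngine;
using UnityEngine.Playables;

// Hypothetical mood set; the actual moods in the PR may differ.
public enum Mood { Neutral, Happy, Sad, Angry }

// Minimal stand-in for the component that swaps eye/mouth textures and animations.
public class ExpressionController : MonoBehaviour
{
    public void SetMood(Mood mood, float weight)
    {
        // A real implementation would select eye/mouth textures, blink timing,
        // and the phoneme texture set for the active language based on the mood.
    }
}

// A Timeline behaviour that pushes its clip's mood onto the bound controller.
public class MoodBehaviour : PlayableBehaviour
{
    public Mood mood;

    public override void ProcessFrame(Playable playable, FrameData info, object playerData)
    {
        // playerData is whatever object the mood track is bound to in the Timeline.
        if (playerData is ExpressionController controller)
            controller.SetMood(mood, info.weight); // clip weight allows blending between moods
    }
}
```

In the Timeline API, a behaviour like this would be wrapped in a `PlayableAsset` clip and placed on a `TrackAsset`-derived mood track, so designers can key moods over time in the Timeline editor.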

Forum link: https://forum.unity.com/threads/localized-phoneme-mood-pose-expression-system.1044784/#post-6760660

What will this PR bring to the project for everyone: more nuance in character expressions in dialogue / cutscene sections.

Why are these changes necessary: the current implementation has the same animation for every dialogue sequence. Since there will be many character interactions in the game, this will eliminate the repetitive animation and give much more flexibility in designing cutscene / dialogue sequences.

How did you implement them: Please refer to the forum post linked above for a detailed description of implementation.

@CLAassistant

CLAassistant commented Jan 25, 2021

CLA assistant check
All committers have signed the CLA.

@ciro-unity ciro-unity added the enhancement New feature or request label Jan 25, 2021
Extended system to support multiple actors. Please reference the forum post in the original commit to understand the changes.
…ow restored when exiting play mode

Now caches edit-mode mainTexture settings for the eye and mouth materials and the blendshape properties, and restores these settings after exiting play mode.
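The cache-and-restore step above could be sketched roughly as follows. This is an assumption about the approach, not the PR's code: a hypothetical editor utility that snapshots each material's `mainTexture` before entering play mode and restores it afterwards, so runtime texture swaps don't persist into the saved assets.

```csharp
#if UNITY_EDITOR
using System.Collections.Generic;
using UnityEditor;
using UnityEngine;

// Sketch: snapshot material mainTextures on entering play mode, restore on exit.
// Class and method names are illustrative, not from the PR.
[InitializeOnLoad]
public static class MaterialTextureCache
{
    static readonly Dictionary<Material, Texture> cache = new Dictionary<Material, Texture>();

    static MaterialTextureCache()
    {
        EditorApplication.playModeStateChanged += OnPlayModeChanged;
    }

    static void OnPlayModeChanged(PlayModeStateChange state)
    {
        if (state == PlayModeStateChange.ExitingEditMode)
        {
            // Record the edit-mode texture of every material in the open scene.
            cache.Clear();
            foreach (var renderer in Object.FindObjectsOfType<Renderer>())
                foreach (var mat in renderer.sharedMaterials)
                    if (mat != null && !cache.ContainsKey(mat))
                        cache[mat] = mat.mainTexture;
        }
        else if (state == PlayModeStateChange.EnteredEditMode)
        {
            // Put the recorded textures back so play-mode swaps don't stick.
            foreach (var pair in cache)
                if (pair.Key != null)
                    pair.Key.mainTexture = pair.Value;
            cache.Clear();
        }
    }
}
#endif
```

The same pattern would extend to blendshape weights: snapshot `SkinnedMeshRenderer.GetBlendShapeWeight` values on `ExitingEditMode` and reapply them on `EnteredEditMode`.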
@ciro-unity ciro-unity added the on hold Depends on functionality we haven't decided on yet, or is beyond scope. Will be revisited later. label Jan 31, 2021