Author Topic: head.md3 enhancement theory  (Read 7767 times)
fromhell
Administrator
« on: October 30, 2009, 06:48:11 PM »

The average number of frames in a head.md3 is 1.

Elite Force single player had a client-side feature that faked lipsync from the volume of the current sample of the sound playing in the entity's local voice channel, switching head skins for each volume level.  In addition, there's eye blinking and a 'meanface'. These are all extra skins though, and skins can get very messy to work with when you have lots of them.  The cleaner approach would be Deus Ex-style frames, one frame for each volume level; that would also let the teeth close, and interpolation could make it look interesting. Half-Life does it with a single mouth bone that gets moved according to volume.

- 1 frame for neutral idle
- 1 frame for blinking, or dead
- 1 frame for MEANFACE (pain, or aggression)
- 1 frame for OUCHFACE (O_O)
- 7 frames for mouth opening
- Eyes looking at the target (if visible), using a frame offset for eye direction
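
A rough sketch of how the frame table and the volume-to-mouth-frame mapping could look (frame names/order and the normalized 0..1 volume value are placeholders, nothing here exists yet):
Code:
/* Sketch only: frame layout and the normalized volume input are
   placeholders, not an existing OpenArena convention. */
typedef enum {
        HEADFRAME_IDLE,         // neutral idle
        HEADFRAME_BLINK,        // blinking, also usable for dead
        HEADFRAME_MEAN,         // MEANFACE (pain / aggression)
        HEADFRAME_OUCH,         // OUCHFACE (O_O)
        HEADFRAME_MOUTH_FIRST,  // 7 mouth frames, closed ... fully open
        HEADFRAME_MOUTH_LAST = HEADFRAME_MOUTH_FIRST + 6,
        HEADFRAME_COUNT
} headFrame_t;

#define NUM_MOUTH_FRAMES 7

/* Map a voice volume in [0,1] onto frame/oldframe/backlerp so the
   renderer interpolates between adjacent mouth shapes. */
static void PickMouthFrame( float volume, int *frame, int *oldframe, float *backlerp ) {
        float pos;
        int   lower;

        if ( volume < 0.0f ) volume = 0.0f;
        if ( volume > 1.0f ) volume = 1.0f;

        pos   = volume * ( NUM_MOUTH_FRAMES - 1 );   // 0..6, fractional
        lower = (int)pos;
        if ( lower > NUM_MOUTH_FRAMES - 2 ) {
                lower = NUM_MOUTH_FRAMES - 2;        // keep lower+1 valid
        }

        *oldframe = HEADFRAME_MOUTH_FIRST + lower;
        *frame    = HEADFRAME_MOUTH_FIRST + lower + 1;
        *backlerp = 1.0f - ( pos - lower );          // 1.0 = fully at oldframe
}
backlerp there follows the refEntity_t convention (1.0 = fully at oldframe), so the renderer would blend between adjacent mouth shapes instead of snapping.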

These features wouldn't be available at lodbias 1, for detail/performance/memory reasons.

I think this can only be implemented engine-side, at least the part that passes the voice channel volume to cgame; cgame would handle the rest.
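
For example a new syscall could be added (trap_S_GetVoiceAmplitude is made up, nothing like it exists in the stock engine):
Code:
// Hypothetical engine export -- not in stock Q3/ioquake3.
// Would return the current amplitude of whatever is playing on the
// entity's voice channel, 0 (silent) to 255.
int trap_S_GetVoiceAmplitude( int entityNum );

// cgame side, while building the head refEntity_t:
static float CG_VoiceVolume( int entityNum ) {
        return (float)trap_S_GetVoiceAmplitude( entityNum ) / 255.0f;
}
cgame would poll that once per rendered head, feed the result into the mouth-frame picker above, and skip the whole thing at lodbias 1.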

Does Q3 have a 'voice channel' like Q1 and Q2 did (CHAN_VOICE)? Overlapping pain sounds could be trouble there.
Logged

andrewj
Member
« Reply #1 on: October 30, 2009, 09:35:04 PM »

7 mouth frames seems like overkill; perhaps 3 is enough (closed, mid-way, open) for normal speech, with maybe a wide-open frame for pain or shouting.
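
i.e. something along these lines (frame names and thresholds are just guesses, and it assumes the usual cgame/q_shared types):
Code:
// Pick one of 3 mouth frames from a 0..1 volume, with a 4th
// wide-open frame reserved for pain/shout sounds.
typedef enum {
        MOUTH_CLOSED,
        MOUTH_MID,
        MOUTH_OPEN,
        MOUTH_WIDE              // pain / shouting only
} mouthFrame_t;

static mouthFrame_t PickMouth( float volume, qboolean shouting ) {
        if ( shouting )       return MOUTH_WIDE;
        if ( volume < 0.15f ) return MOUTH_CLOSED;
        if ( volume < 0.5f )  return MOUTH_MID;
        return MOUTH_OPEN;
}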

Ideally you'd want frames for lip-spreading vowels (like English /i/ or Japanese /u/) but that would require very sophisticated analysis of the sound file.

An alternative to real-time analysis is to have a separate program do the analysis ahead of time and write the results into a file that accompanies each sound file.  I think that method could be handled entirely within cgame.
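
For example, the external tool could write one mouth-frame index per 50 ms of audio into a small sidecar file next to the sound (the .lip name and raw-byte format here are made up), and cgame could load it with the usual trap_FS_* calls and index into it by playback time:
Code:
// Made-up sidecar format: raw bytes, one mouth frame index (0-6)
// per 50 ms of the accompanying sound file, e.g. sound/foo.lip.
#define LIP_INTERVAL_MS 50
#define MAX_LIP_BYTES   4096

typedef struct {
        byte data[MAX_LIP_BYTES];
        int  numSamples;
} lipTable_t;

static qboolean CG_LoadLipTable( const char *lipPath, lipTable_t *table ) {
        fileHandle_t f;
        int len = trap_FS_FOpenFile( lipPath, &f, FS_READ );

        if ( len <= 0 ) {
                return qfalse;          // no sidecar file for this sound
        }
        if ( len > MAX_LIP_BYTES ) {
                len = MAX_LIP_BYTES;
        }
        trap_FS_Read( table->data, len, f );
        trap_FS_FCloseFile( f );
        table->numSamples = len;
        return qtrue;
}

// msec since the voice sound started playing -> mouth frame index
static int CG_LipFrame( const lipTable_t *table, int playMsec ) {
        int idx = playMsec / LIP_INTERVAL_MS;
        if ( idx < 0 || idx >= table->numSamples ) {
                return 0;               // closed mouth outside the data
        }
        return table->data[idx];
}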

Q3 does have CHAN_VOICE:
Code:
// sound channels
// channel 0 never willingly overrides
// other channels will allways override a playing sound on that channel
typedef enum {
        CHAN_AUTO,
        CHAN_LOCAL,             // menu sounds, etc
        CHAN_WEAPON,
        CHAN_VOICE,
        CHAN_ITEM,
        CHAN_BODY,
        CHAN_LOCAL_SOUND,       // chat messages, etc
        CHAN_ANNOUNCER          // announcer voices, etc
} soundChannel_t;
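
As far as I know cgame already starts pain and taunt sounds on that channel, i.e. something like:
Code:
// Entity-relative voice sound, the way cgame event code starts
// pain/taunt sounds (NULL origin = follow the entity):
static void CG_StartVoiceSound( int entityNum, sfxHandle_t sfx ) {
        trap_S_StartSound( NULL, entityNum, CHAN_VOICE, sfx );
}
So overlapping pain sounds shouldn't really be a problem: per the comment above, a new sound on CHAN_VOICE just overrides whatever was already playing there, and the mouth would simply track whichever sound is current.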
Logged