This section describes how to analyze the facial expression of detected faces.
You must first detect faces by setting up a face detection task. For information about how to do this, see Detect Faces.
To analyze facial expression
Create a new configuration to send to Media Server with the process action, or open an existing configuration that you want to modify.
In the [Analysis] section, add a new analysis task by setting the AnalysisEngineN parameter. You can give the task any name, for example:

[Analysis]
AnalysisEngine0=FaceDetect
AnalysisEngine1=FaceState
Create a new configuration section to contain the task settings, and set the following parameters:

Type
    The analysis engine to use. Set this parameter to FaceState.

Input
    The track that contains the detected faces that you want to analyze. Set this parameter to the ResultWithSource output track from your face detection task. For example, if your face detection task is named FaceDetect, set this parameter to FaceDetect.ResultWithSource.
For example:

[FaceState]
Type=FaceState
Input=FaceDetect.ResultWithSource
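For reference, a complete configuration that chains the two tasks might look like the following sketch. The FaceDetect section here is an assumption shown only to illustrate the chaining; see Detect Faces for the parameters that your face detection task actually requires.

```ini
[Analysis]
AnalysisEngine0=FaceDetect
AnalysisEngine1=FaceState

; Assumed face detection task - configure as described in Detect Faces.
[FaceDetect]
Type=FaceDetect

; Facial expression analysis task, reading the detection task's output track.
[FaceState]
Type=FaceState
Input=FaceDetect.ResultWithSource
```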
For more information about these parameters, refer to the Media Server Reference.
Save and close the configuration file. HPE recommends that you save your configuration files in the location specified by the ConfigDirectory parameter.
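After saving the configuration, you run it by sending the process action to Media Server. The following sketch builds such an action request; the host, port, source file, and configuration name are assumptions to substitute for your own installation.

```python
from urllib.parse import urlencode

# Assumed Media Server host and ACI port.
MEDIA_SERVER = "http://localhost:14000"

# Assumed source file and saved configuration name (without extension).
params = urlencode({
    "source": "video.mp4",
    "configname": "facestate",
})

# Build the process action URL.
url = f"{MEDIA_SERVER}/action=process&{params}"
print(url)

# To send the request, you could use, for example:
#   urllib.request.urlopen(url)
```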