Training MCCNN for face PAD
===========================

This section describes the multi-channel face PAD network (MCCNN) described in the publication. It is **strongly recommended** to read the publication for a better understanding of the described work-flow.

.. note::

   For the experiments discussed in this section, the WMCA dataset needs to be downloaded and installed on your system. Please refer to the :ref:`bob.pad.face.baselines` section in the documentation of the ``bob.pad.face`` package for more details on how to run the face PAD experiments and set up the databases.

Reproducing the experiments with MCCNN consists of four main stages, each of which is described below.

Preprocessing data
------------------

The dataloader for training MCCNN assumes the data is already preprocessed. The preprocessing can be done with the ``spoof.py`` script from the ``bob.pad.face`` package. The preprocessed files are stored in the location given by the ``--sub-directory`` argument. Each ``.hdf5`` file in the preprocessed folder contains a FrameContainer in which each frame is a multi-channel image with dimensions ``NUM_CHANNELSxHxW``.

.. code-block:: sh

   ./bin/spoof.py \
   wmca-all \
   mccnn \
   --execute-only preprocessing \
   --sub-directory --grid idiap

After this stage, the preprocessed files will be available in the ``preprocessed/`` folder under the chosen sub-directory.

Training MCCNN
--------------

Once the preprocessing is done, the next step is to train the MCCNN architecture. All the parameters required to train MCCNN are defined in the configuration file ``wmca_mccnn.py``. The ``wmca_mccnn.py`` file should contain at least the network definition and the dataset class to be used for training. It can also define the transforms, the number of channels in MCCNN, and training parameters such as the number of epochs, the learning rate, and so on. Once the configuration file is defined, the network can be trained with the following code:
.. code-block:: sh

   ./bin/train_mccnn.py \   # script used for MCCNN training
   /wmca_mccnn.py \         # configuration file defining the MCCNN network, database, and training parameters
   -vv                      # set verbosity level

People at Idiap can benefit from the GPU cluster, running the training as follows:

.. code-block:: sh

   jman submit --queue gpu \                 # submit to the GPU queue (Idiap only)
   --name \                                  # define the name of the job (Idiap only)
   --log-dir /logs/ \                        # substitute the path to save the logs to (Idiap only)
   --environment="PYTHONUNBUFFERED=1" -- \
   ./bin/train_mccnn.py \                    # script used for MCCNN training
   /wmca_mccnn.py \                          # configuration file defining the MCCNN network, database, and training parameters
   --use-gpu \                               # enable the GPU mode
   -vv                                       # set verbosity level

For more detailed documentation of the functionality available in the training script, run the following command:

.. code-block:: sh

   ./bin/train_mccnn.py --help   # note: remove ./bin/ if buildout is not used

Please inspect the corresponding configuration file, ``wmca_mccnn.py`` for example, for more details on how to define the database, the network architecture, and the training parameters. The protocols and channels used in the experiments can also be easily configured there.

Running experiments with the trained model
------------------------------------------

The trained model file can be used with ``MCCNNExtractor`` to run PAD experiments with the ``spoof.py`` script. A dummy algorithm is added to forward the scalar values computed by the network as the final scores. For the **grandtest** protocol this can be done as follows:

.. code-block:: sh

   ./bin/spoof.py \
   wmca-all \
   mccnn \
   --protocol grandtest \
   --sub-directory -vv

Evaluating results
------------------

To evaluate the models, run the following command:

.. code-block:: sh

   ./bin/scoring.py \
   -df /grandtest/scores/scores-dev \
   -ef /grandtest/scores/scores-eval

Using pretrained models
=======================
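As a rough sketch of how pretrained weights could be restored before running an experiment, assuming a standard PyTorch checkpoint: the checkpoint layout, the ``state_dict`` key, and the helper name below are illustrative placeholders, not the package's actual API.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: load pretrained weights into a network before running
# a PAD experiment.  The real network definition and checkpoint file come
# from the package; this only shows the generic PyTorch mechanics.

def load_pretrained(model, path):
    checkpoint = torch.load(path, map_location="cpu")
    # Checkpoints may store the weights directly, or under a "state_dict" key.
    if isinstance(checkpoint, dict) and "state_dict" in checkpoint:
        checkpoint = checkpoint["state_dict"]
    model.load_state_dict(checkpoint)
    model.eval()  # inference mode: fixes dropout and batch-norm behaviour
    return model
```

For example, ``load_pretrained(net, "model.pth")`` would restore weights previously saved with ``torch.save``.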
.. warning::

   The training of the models has some randomness associated with it, even with all the seeds set. The variations can arise from the platform, the version of PyTorch, the non-deterministic nature of GPU operations, and so on. See the *Reproducibility* notes in the PyTorch documentation on how to achieve the best possible reproducibility.

If you wish to reproduce the exact results reported in the paper, we suggest you use the pretrained models shipped with the package.
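The seeding mentioned above can be illustrated with a minimal stdlib-only sketch; a complete setup for these experiments would additionally seed NumPy and PyTorch (e.g. ``torch.manual_seed`` and ``torch.cuda.manual_seed_all``), and GPU kernels can still behave non-deterministically.

```python
import os
import random

def set_seeds(seed):
    """Fix the sources of randomness that are under our control.

    Illustrative only: a real configuration would also seed numpy and
    torch; non-deterministic GPU operations are why results can still
    vary across platforms even with all seeds set.
    """
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)

set_seeds(42)
first = [random.random() for _ in range(3)]
set_seeds(42)
second = [random.random() for _ in range(3)]
assert first == second  # re-seeding reproduces the same sequence
```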