Refactoring tests : #227

Closed

@huguesdevimeux

Description

I was thinking about some test enhancements:

Testing videos

I think two things should be tested: the video's metadata and the video's hash.
For the metadata, I propose using ffprobe (installed by default with FFmpeg):
ffprobe -v error -select_streams v:0 -show_entries stream=width,height,nb_frames,duration,avg_frame_rate,codec_name -print_format json test1.mp4
Output :

{
   "programs":[

   ],
   "streams":[
      {
         "codec_name":"h264",
         "width":854,
         "height":480,
         "avg_frame_rate":"15/1",
         "duration":"2.000000",
         "nb_frames":"30"
      }
   ]
}
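As a sketch of how such a metadata check could look in a test (assuming ffprobe is on the PATH; the function names are illustrative, not existing manim code):

```python
import json
import subprocess

FIELDS = "width,height,nb_frames,duration,avg_frame_rate,codec_name"

def probe_video(path):
    """Run ffprobe and return the metadata of the first video stream."""
    cmd = [
        "ffprobe", "-v", "error", "-select_streams", "v:0",
        "-show_entries", f"stream={FIELDS}",
        "-print_format", "json", path,
    ]
    out = subprocess.run(cmd, capture_output=True, check=True).stdout
    return json.loads(out)["streams"][0]

def metadata_mismatches(expected, actual):
    """Return {field: (expected, actual)} for every differing entry,
    so a failing test reports exactly which field diverged."""
    return {
        key: (expected[key], actual.get(key))
        for key in expected
        if actual.get(key) != expected[key]
    }
```

Comparing field by field rather than comparing whole dicts is what makes debugging easier: the assertion message can name the offending field directly.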

For the video hash: FFmpeg has built-in functionality to compute it:
ffmpeg -i test1.mp4 -f hash -hash md5 hash.md5 -format_whitelist hash -loglevel error
This outputs the hash to a file. Note that the hash is independent of the container format.
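A minimal sketch of wrapping this in a helper, assuming ffmpeg is on the PATH (FFmpeg's hash muxer writes a single line like "MD5=d41d8cd9..." to the output file):

```python
import os
import subprocess
import tempfile

def parse_hash_line(line):
    """Extract the hex digest from FFmpeg's 'ALGO=digest' output line."""
    return line.strip().split("=", 1)[1]

def video_hash(path, algorithm="md5"):
    """Hash the decoded streams of a video with FFmpeg's 'hash' muxer.

    Because the hash is computed on the decoded data, it does not depend
    on the container format of `path`.
    """
    with tempfile.NamedTemporaryFile(suffix=".txt", delete=False) as tmp:
        out_file = tmp.name
    try:
        subprocess.run(
            ["ffmpeg", "-y", "-loglevel", "error", "-i", path,
             "-f", "hash", "-hash", algorithm, out_file],
            check=True,
        )
        with open(out_file) as f:
            return parse_hash_line(f.read())
    finally:
        os.remove(out_file)
```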

I know the hash alone would be sufficient, since the metadata is effectively covered by it, but running both tests will make debugging easier.

My question is: for the metadata, I deliberately kept only these six fields. Do you think that's enough? What else should we test?

Tests directories

I'm thinking of refactoring the whole test architecture into something like this:

tests/
    conftest.py
    test_graphical_units/
        pre_rendered_frames/
            ... Here will be the content of tests_data (i.e. the pre-rendered frames)
        test_creation.py
        test_geometry.py
        ...
    test_logging/
        expected_logs/
            ... Pretty self-explanatory
        ... Here will be the tests of what manim logs.
    test_CLI/
        ... Here all the CLI flags will be tested. We will use the video tests here (see above).
        ... In fact, these are more end-to-end tests.
        pre_rendered_data/
            ... The information concerning the pre-rendered videos (i.e. their hash and metadata). NOTE: no videos will be there, only hashes and JSON.
    test_CONFIG/
        ... Here we will test the config. The idea is to use custom configs both from the CLI and from the config file.
    utils/
        graph_unit_tests.py # The current testing_utils.py. Contains all the functions used to test graphical units.
        logging_tests.py # Will contain functions used to test logs (e.g. functions to get the log, to get the pre-generated logs, etc.)
        CLI_tests.py # Same as above, but for the CLI tests.
        videos_tests.py # Will contain all the functions used to test a video (e.g. comparing hash and metadata)
        dev_utils.py # Not sure about this. Meant to contain all the functions used to set new tests up.

Do you agree with this?
I would particularly like opinions on the fact that no videos will be stored, only hashes. One disadvantage is that, with no visual evidence of a mismatch, debugging could be harder (although re-rendering the scene on the master branch is enough to recover the original video).

EDIT: Added test_CONFIG folder

Labels: help wanted, testing