Design for PTL test execution report in JSON format


#1

Hi All,

Please review the new design for “PTL test execution report in JSON format” posted at: https://pbspro.atlassian.net/wiki/spaces/PD/pages/629014541/PTL+test+execution+report+in+JSON+format

Please let me know of any comments or suggestions.

Thanks,
Sarita


#2

Thanks Sarita,

What will be the contents of test_conf? Can you add an example?


#3

Hi Sarita,

It seems the key-value pairs in the JSON are constant. Just a quick question: if I want to add any additional information about a test case, say in string format, which of the keys mentioned in the JSON format can I use?


#4

Thanks for your inputs @kjakkali and @sujatapatnaik52, I have updated the document with your comments incorporated.


#5

@All,

Please review the updated design document. Below are the changes made, based on the comments received, along with some additional ones:

  1. Updated the terms with string specifics and angle brackets (<>) around values
  2. Added an explanation of the string specifics of the JSON file
  3. Added multiple test cases & test suites to the template
  4. Added an example file
  5. testparam was repeated as test_conf, hence testparam was removed
  6. Added a total test summary subsection and its fields
  7. Replaced ‘measurements’ with ‘test_run_info’, since ‘measurements’ was a misnomer for use cases where the test gives out information about the test execution in string format
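To make the discussion easier to follow, below is a minimal sketch of how a report with the fields mentioned in this thread (run_id, test_conf, result summary, measurements) might be assembled and serialized. All field names and values here are illustrative only; the design document on the wiki holds the authoritative template.

```python
import json

# Illustrative sketch only: field names follow the terms used in this
# thread; the wiki design document is the authoritative template.
report = {
    "run_id": "1535700906",              # epoch-based run identifier
    "test_conf": {"param1": "value1"},   # test configuration parameters
    "test_summary": {
        "result_summary": {
            "total_run": 5,
            "succeeded": 3,
            "failed": 1,
            "errored": 1,
            "skipped": 0,
            "timedout": 0,
        },
        "tests_with_failures": ["TestSuiteA.test_t2"],
    },
    "testsuites": {
        "TestSuiteA": {
            "testcases": {
                "test_t1": {
                    "status": "PASS",
                    "tags": ["smoke"],
                    "measurements": [{"custom_fieldname1": "value1"}],
                },
            },
        },
    },
}

# Serialize the structure exactly as a report file would be written
print(json.dumps(report, indent=2))
```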

#6

@saritakh : Thanks. Updated design looks good to me.


#7

Thanks @saritakh. The changes look good.


#8

Hi Sarita,
Inside the test summary we have a result summary; I suggest having total_run, succeeded, failed, errored, etc. as separate keys instead of a single string.

I also have a few questions:

  1. Does “tests with failures” mean failed, errored, and timed-out tests, or only failed tests?
  2. In “test_run_info”, can a test writer append custom values per test case? For example:
     def test_t1(self):
         # test steps
         self.test_run_info.append({custom: value})
  3. How will the requirements info be fetched per test case?

#9

Hi @saritakh, thanks for working on the design. It looks good to me. I have a few minor comments:

- typo: “dictinary” in a few places
- tags can be specified at the test suite level as well, right? I have seen a few tests like that. How will they be represented in the JSON format?

  • I agree with Vishwa on having separate keys for total_run, pass, failed, error, skipped, and timeout.
  • Can you add examples for test_run_info? I am not completely clear on how to use it.

Also, the JSON file will not contain the actual output of the test but only test info, correct?


#10

@saritakh
Thanks for posting the design document. Following are my comments.

  1. Can we make the epoch time for run_id human-readable, something like 083120181546_run? We could also have one more field like total_time_in_epoch purely to store the overall time needed in seconds.
  2. Instead of having status as a string, can we have a sub-dictionary for it, something like:
     "test_results_by_type": {
         "run": 5,
         "succeeded": 3,
         "failed": 1
     }
     Also, can this summary be created per test suite as well?
  3. In the test_summary dictionary, can we have test_start_duration and test_end_duration instead of just the overall test_duration? That would be more helpful; otherwise we need to parse each individual test case's result dictionary to achieve the same.
  4. It would be great to provide an example for the test_run_info dictionary. Please also provide an example for the requirements dictionary.

Thanks,
Suresh


#11

@All,

I have updated the design document based on review comments and new ideas. Please review it and let me know of any comments.

Below are the updates done:

  1. Updated the field explanations in the template instead of keeping a separate list
  2. Added a result_summary dictionary for the count of tests
  3. Updated the tests_with_failures explanation to mean tests that failed or errored out
  4. Replaced “test_run_info” back with “measurements”, since a new method is defined to add additional information to the JSON report
  5. Updated the new method that sets the “measurements” list of dictionaries
  6. Added a new method to add additional test execution result information as a dictionary

#12

Thanks @vishwaks for your inputs, please find my replies below:

  1. Updated the field name explanation for tests_with_failures accordingly
  2. Below is a sample usage of the set_test_measurements() method:
     def test_t1(self):
         # test steps
         self.set_test_measurements({custom_fieldname1: value1,
                                     custom_fieldname2: value2})
  3. Requirements information, i.e. cluster information, would be fetched from the @requirements decorator specified on the test case. Refer to this link for more details: https://pbspro.atlassian.net/wiki/spaces/PD/pages/458883073/PP-1281+New+decorator+in+PTL+using+which+user+can+provide+cluster+information+required+for+a+test.
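For illustration, here is a self-contained stand-in for how a decorator like @requirements could attach cluster information to a test case so that report generation can read it per test. The real PTL decorator and its parameter names are defined in the PP-1281 design linked above; the num_moms parameter below is purely a placeholder.

```python
import functools

# Illustrative stand-in for PTL's @requirements decorator (see PP-1281);
# the real decorator lives in PTL itself. The idea: attach the supplied
# cluster requirements to the test function as metadata that the JSON
# report generator can later fetch per test case.
def requirements(**reqs):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        wrapper.requirements = reqs  # metadata the reporter can read
        return wrapper
    return decorator

@requirements(num_moms=2)  # parameter name is a placeholder, not PTL's
def test_t1():
    pass  # test steps would go here

print(test_t1.requirements)  # the reporter reads this per test case
```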

Please let me know if you need any clarifications.


#13

Thanks @anamika for your inputs, please find my replies:

  • Updated the typos
  • Ultimately the tags end up getting applied to the test cases when specified at the test suite level, so all the tests in that suite will have the same value for the “tags” field
  • The updated methods should clarify how the data gets added to the JSON report

Please let me know if you need any clarifications.


#14

@sujatapatnaik52,

The new method PBSTestSuite.add_to_test_run_data() satisfies the requirement you mentioned. One can now create a dictionary in test code, keeping in mind the hierarchy of the JSON report template, fill it with whatever string-format data needs to be added to the JSON report, and pass it to this method.
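A minimal sketch of the behavior described above, assuming the method simply merges a caller-built dictionary into the data destined for the report. This is not the actual PTL implementation; the class and attribute names here are illustrative.

```python
# Minimal sketch (not the actual PTL implementation) of how a method
# like add_to_test_run_data() could accept a caller-built dictionary
# that follows the report hierarchy and merge it into the report data.
class ReportingSuite:
    def __init__(self):
        self.test_run_data = {}  # extra data destined for the JSON report

    def add_to_test_run_data(self, data):
        if not isinstance(data, dict):
            raise TypeError("add_to_test_run_data() expects a dictionary")
        self.test_run_data.update(data)

# A test could build a dict with whatever string data it wants reported:
suite = ReportingSuite()
suite.add_to_test_run_data({"scheduler_version": "19.1.1",
                            "comment": "ran against a 2-node cluster"})
print(suite.test_run_data)
```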

Please let me know if you need any clarifications.


#15

Thanks @suresht for your comments,
I just now updated the document and missed your points by a few minutes. As I see it, the new changes satisfy your second comment.
Regarding comments 1, 3 & 4, I will get back to you on them as early as possible.


#16

Thanks for the information.

So the requirements section will be populated by the requirements decorator. In that case I have a question: in the future the requirements decorator might get more parameters and support for multiple logical operators (=, !=, <, >, etc.). Will the logical operators also be populated in the requirements section of the JSON? If not, I suggest adding them.


#17

Thanks for making the changes Sarita.

Interface: PBSTestSuite.set_test_measurements()

–> What attributes is it going to set? Do you have the list, or does it take them as an argument? A test writer might want to access that data inside the test as well. Do we need another helper method like get_test_measurement?

Interface: PBSTestSuite.add_to_test_run_data()

–> Examples, please. Also specify whether this takes any arguments.


#18

@All,
I have updated the design document with below, please review and let me know of any comments:

  • Updated method 3 name as “add_additional_data_to_report()” instead of “add_test_run_info()”
  • Added sample test code and updated the sample json report accordingly

#19

Thanks @suresht for your inputs, please find my replies below and let me know your views:

  1. I think epoch time in a readable format is not necessary, since the epoch time is just taken as a run identifier in order to differentiate between two test run reports. I do not think this data is needed anywhere else, so I prefer to keep it as is.
  2. Already updated as a sub-dictionary. I don't think it is necessary to create a summary at the test suite level; if added, it would be duplicate data, since each test case gives out its result separately. Of course, in cases where the failure numbers are high in a test run, the test suites should be run individually in order to debug.
  3. I think test start time and test end time belong more to the test logs than to the test report. The test duration is specified for the whole test run, and the start & end times are given for individual test cases in order to search the test logs in case of a failed test case.
  4. I have added an example of both methods being called in test case code.


#20

Thanks @anamika, please find my replies below:

Interface: PBSTestSuite.set_test_measurements()
–> What attributes is it going to set? Do you have the list, or does it take them as an argument? A test writer might want to access that data inside the test as well. Do we need another helper method like get_test_measurement?

Sarita ==>
Updated the details: the method accepts a dictionary. The dictionary can contain any key-value pairs, and it is completely up to the test writer what type of pairs to fill it with.
I don't think we need a helper method to get this data, since it is already part of the array and dictionary, which is locally accessible.

Interface: PBSTestSuite.add_to_test_run_data()
–> examples please. Also specify if this takes any arguments.

Sarita ==>
Updated the method name to add_additional_data_to_report(), along with details that it accepts a dictionary.
I have added an example of the method usage in test code.
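To make the two interfaces concrete, here is a self-contained sketch of how they might behave given the descriptions in this thread. The class and method bodies are illustrative only, not PTL's actual implementation; both methods accept a dictionary from the test writer, as stated above.

```python
# Self-contained sketch of the two helpers as described in this thread;
# names come from the design discussion, bodies are illustrative and
# not the actual PTL code.
class PBSTestSuiteSketch:
    def __init__(self):
        self.measurements = []     # per-test measurement dictionaries
        self.additional_data = {}  # extra data merged into the report

    def set_test_measurements(self, mdict=None):
        # Append a dict of arbitrary key-value pairs chosen by the writer
        self.measurements.append(dict(mdict or {}))

    def add_additional_data_to_report(self, datadict=None):
        # Record extra execution data to be added to the JSON report
        self.additional_data.update(datadict or {})

suite = PBSTestSuiteSketch()
suite.set_test_measurements({"jobs_submitted": 100,
                             "throughput": "9.5 jobs/sec"})
suite.add_additional_data_to_report({"platform": "linux"})
print(suite.measurements)
print(suite.additional_data)
```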