Decisions
| Topic | ADQRS BEST PRACTICE |
| --- | --- |
| Choice of Standards for Development | … |
| Dataset Structure and Naming Conventions | … |
| PARCATy/PARAM/PARAMCD | … |
| AVAL/AVALC | … |
| Analysis and Baseline Flags | … |
| Treatment Variables | … |
| Total and Subscale Scores | … |
| Imputations | … |
Tabled Items
- How much manipulation can be done on AVAL for records coming from SDTM before a new parameter should be used instead of copying --TEST/--TESTCD into PARAM/PARAMCD? For example, if SDTM collects only character values (<=15 minutes, 15-30 minutes, etc.) but numeric codes are required for analysis, can those codes go into AVAL? Which value would be imputed: the character or the numeric? Does this change if collected scores are transformed or reversed for analysis?
- PARCAT1 holds the instrument name, and PARCATy holds the subscale name(s) associated with the item. How should items contributing to multiple subscale calculations be modeled?
- Recommendation has been made to publish these best practices in a more formal format than this Wiki page. Will revisit in 2023, once more supplements have been published and this is felt to be more stable.
- What should be included in an ADQRS supplement when an instrument such as SF-36 has been scored by an outside vendor in SDTM? Do we need to develop a full ADQRS supplement, or only a Readme file? Is it acceptable/safe to assume the vendor is handling missing values appropriately?
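The AVAL/AVALC question above can be made concrete with a minimal sketch. Nothing here is from the Best Practices themselves: the category-to-code mapping (`CODE_MAP`) and the `derive_aval` helper are invented for illustration, assuming the collected character result is kept unchanged in AVALC while a numeric analysis code goes in AVAL.

```python
# Hypothetical sketch only -- the numeric codes below are invented for
# illustration and would in practice come from the SAP or scoring manual.
CODE_MAP = {
    "<=15 minutes": 1,
    "15-30 minutes": 2,
    ">30 minutes": 3,
}

def derive_aval(qsorres):
    """Return ADaM-style AVALC/AVAL for one collected SDTM QS result."""
    return {
        "AVALC": qsorres,               # collected character value, unchanged
        "AVAL": CODE_MAP.get(qsorres),  # assumed numeric analysis code
    }

row = derive_aval("15-30 minutes")
# row == {"AVALC": "15-30 minutes", "AVAL": 2}
```

Under this modeling, the open question becomes whether an imputed record should carry an imputed AVALC, an imputed AVAL, or both.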
...
Tabled Items
- Need to decide whether ASEQ should be required and, if so, whether it should be unique across all ADQRS datasets in case they are later combined.
- Need to develop list of expected variables for all ADQRS datasets.
- How to handle data sent from a vendor with all scores already computed, so there are no separate SDTM/ADaM datasets.
- Do we need to define baseline in our Best Practices?
- Should we recommend that each questionnaire be stored in a separate dataset, or leave that up to the sponsor to decide?
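As a hypothetical illustration of the ASEQ tabled item, the sketch below numbers records uniquely within subject across several combined record lists. The `assign_aseq` helper and the record layout are assumptions, not an agreed convention; a real derivation would also sort records by a defined key order before numbering.

```python
from collections import defaultdict
from itertools import chain

def assign_aseq(*datasets):
    """Assign ASEQ 1..n per USUBJID across all input datasets combined,
    so the sequence stays unique if the datasets are later stacked."""
    counters = defaultdict(int)
    out = []
    for rec in chain(*datasets):
        counters[rec["USUBJID"]] += 1
        out.append({**rec, "ASEQ": counters[rec["USUBJID"]]})
    return out

# Two invented ADQRS record lists for the same subject:
adqs1 = [{"USUBJID": "001", "PARAMCD": "ITEM1"}]
adqs2 = [{"USUBJID": "001", "PARAMCD": "TOTSCORE"}]
combined = assign_aseq(adqs1, adqs2)
# ASEQ values for subject 001: 1, then 2
```

The design choice at stake is exactly the one tabled: numbering within each dataset is simpler, but only cross-dataset numbering like this keeps ASEQ unique after combining.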