Comments on ‘ICRP, 201x. Diagnostic Reference Levels in Medical Imaging’
p. 11, line 13 – there is a typo: “from at”
Because some of this terminology is relatively new, it took me several reads to understand the subtle differences between Ka,r, Ka,e and Ka,i. I think some additional clarification is required.
State that Ka,r is a quantity displayed on fluoroscopy / interventional equipment.
Include a simple relationship between Ka,e and Ka,i. For example, my understanding is that Ka,e would be measured using a dosimeter (ion chamber or TLD, but presumably not a lead-backed solid state device) placed on the patient, and Ka,i would be measured using a dosimeter placed at the same location (i.e. the same focus-to-surface distance) without the patient in the beam. Does Ka,e = Ka,i × backscatter factor?
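To make the relationship I have in mind concrete, a minimal numerical sketch follows. The backscatter factor used here is an illustrative placeholder only; real BSFs depend on kVp, filtration and field size.

```python
# Illustrative relationship between incident air kerma (Ka,i) and
# entrance-surface air kerma (Ka,e). The backscatter factor (BSF) is a
# placeholder value, not reference data.

def entrance_surface_air_kerma(ka_i_mGy: float, bsf: float) -> float:
    """Ka,e = Ka,i x BSF (my understanding of the proposed relationship)."""
    return ka_i_mGy * bsf

ka_i = 2.0   # mGy, measured free-in-air at the patient entrance plane
bsf = 1.35   # illustrative backscatter factor for a diagnostic beam
ka_e = entrance_surface_air_kerma(ka_i, bsf)
print(round(ka_e, 2))  # 2.7
```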
p. 17, line 13 – SSDE. I think that AAPM Report 220 (2014): Use of water equivalent diameter for SSDE calculation should also be referenced.
p. 19, line 31 – there is a typo: “Values DRL quantities” should be “Values of DRL quantities.”
p.20, line 2 – there are no Australian sources listed here e.g. ARPANSA, ANZSNM. I don’t know if we want to push for an inclusion…
p.26 – Use of AD and median values (also covered in Section 2.6.2). The median of the distribution of national data seems to be proposed as both an “achievable dose” and an investigation level for poor image quality. Perhaps I have misinterpreted this, but I find it confusing. I’m concerned that setting too many guideline values over-simplifies the optimisation process, and I wouldn’t want this advice to be taken too literally, e.g. “try to reduce your dose to the AD but don’t go below it.”
It is quite possible that in a survey of healthcare facilities, the range in data is as much a reflection of the age of the equipment / technology as it is of the technique / protocol, or even a reflection of varying population size. Reducing doses to the AD could be completely inappropriate for some equipment and may result in non-diagnostic image quality (the ICRP document does stress the importance of image quality in Paragraphs 110, 111 and 120 in Chapter 2). When doing an audit at national level, you would need to be VERY prescriptive in defining a standard exam (particularly for more complex exams such as CT) and a standard sized patient.
p.27, line 17 – Based on my own recent experience of CT dose audit, I don’t believe it is a task that should be undertaken by an “administrator.” I have no problem with an administrator facilitating data collection but I think it is essential that analysis and subsequent action should be undertaken by someone trained in the physics of medical imaging and should ideally be a multi-disciplinary approach.
p. 32, line 31 and p. 43, line 4 – “The accuracy of DRL quantity data produced by and transferred from x-ray systems should be periodically verified by a medical physicist.” Yes, I completely agree. My question is: should it be corrected? I would expect that the accuracy would typically be verified during physics QC measurements. However, the tolerance levels are not particularly strict. For example, in EU RP 162, suspension levels are:
Radiographic KAP meter accuracy > ± 25% overall uncertainty
Fluoroscopy KAP meter accuracy > 35% deviation between measured and indicated values
Accuracy of CTDIvol > 20% deviation between measured and indicated values
Taking the above information to extremes, you could find that 2 fluoro units differ in the displayed KAP value for a certain examination by 70%, but if corrected, the KAPs would be the same.
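The worst-case arithmetic behind this claim can be checked in a few lines. This is a sketch assuming the EU RP 162 fluoroscopy suspension level of ±35% and an invented KAP value:

```python
# Worst-case spread between two fluoroscopy units whose KAP displays are
# both just within a +/-35% suspension level, per the argument above.

true_kap = 10.0                 # Gy.cm^2, invented "true" KAP for the exam
unit_a = true_kap * (1 + 0.35)  # display reads 35% high
unit_b = true_kap * (1 - 0.35)  # display reads 35% low

spread = (unit_a - unit_b) / true_kap  # difference relative to the true value
print(f"{spread:.0%}")  # 70%
```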
These tolerance levels will vary from country to country (as will testing frequency). Vendor tolerance levels will also be different.
Paragraph 135 – See comment above re: accuracy of KAP meters.
Paragraph 136 – It is implied that in the absence of a KAP meter for radiography, Ka,e is the simplest approach. I’m not sure I agree. I think the use of TLDs for patient dosimetry is slightly outdated. Ka,e can be calculated, but in addition to a measure of x-ray tube output (for which a table of typical results is provided), you also need backscatter correction factors. Should a reference be provided for these or is empirical determination recommended? I remember using NRPB Report 186 for these factors, but they were dependent upon exam, projection, filtration and kVp. Furthermore, the focus-to-skin distance is required. Would this be measured or estimated? I would argue that Ka,i is simpler, but less indicative of the actual patient dose.
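The calculation route I have in mind could be sketched as follows. Every numerical value below is an illustrative placeholder, not reference data; in practice the output would come from local QC measurements and the BSF from published tables (e.g. NRPB R186):

```python
# Sketch of calculating Ka,e from measured tube output, as discussed above.
# All numerical values are illustrative placeholders.

def ka_e_from_output(output_uGy_per_mAs_at_1m: float, mAs: float,
                     fsd_m: float, bsf: float) -> float:
    """Ka,e (uGy) = output at 1 m x mAs x inverse-square correction x BSF."""
    ka_i = output_uGy_per_mAs_at_1m * mAs * (1.0 / fsd_m) ** 2  # free-in-air
    return ka_i * bsf

# e.g. 80 uGy/mAs at 1 m, 20 mAs, FSD 0.9 m, BSF 1.35 (all assumed values)
print(round(ka_e_from_output(80.0, 20.0, 0.9, 1.35), 1))
```

This makes the point above explicit: the result is sensitive to the FSD (inverse-square) and to the chosen BSF, both of which must be measured, estimated or looked up.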
Section 3.5 Mammography
Paragraph 145 – I think the information is slightly outdated; it does not reflect digital mammography systems, which may operate up to 38 kV (even higher for tomo and contrast-enhanced, but I don’t think these need to be considered at this stage). Furthermore, tungsten is the most common target material for digital mammo, typically used with Rh, Al or Ag.
Paragraph 146 – This statement is true IF you are comparing the same views e.g. CC or MLO view in contact mode. Assessment mammography may additionally use magnification or spot views, which would be associated with a different dose.
Paragraph 151 – I think that the Wu and Boone methods of DG calculation should also be referenced as these are used in the USA and Australia.
Further comments: it is paramount to define the view (e.g. CC, MLO) when setting a mammography DRL. Options would be the DG for the MLO view or the DG for a standard 4-view screening examination. I would opt for the MLO view.
Just as general radiology requires standard-sized patients, mammography requires a standard breast thickness to be defined e.g. the UK chose 55 mm. This then impacts on what data is included in the dose audit e.g. breasts of 50 – 60 mm compressed thickness in the MLO view. The wider the range of thicknesses included, the wider the range of breast composition (i.e. glandularity), which also affects dose.
Note that DG can be extracted from the DICOM header for integrated DR systems. This makes large scale dose audit much simpler. DG calculation is not particularly straightforward for large numbers of patients.
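For concreteness, the structure of a DG calculation (the Dance formulation, DG = K × g × c × s, used in the UK and Europe) can be sketched as below. The factor values are placeholders only; real g, c and s factors are tabulated against HVL, compressed breast thickness, glandularity and target/filter combination.

```python
# Structure of the Dance method for mean glandular dose (DG = K x g x c x s),
# as referenced above. Factor values below are illustrative placeholders;
# the tabulated Dance factors must be used in practice.

def mean_glandular_dose(k_mGy: float, g: float, c: float, s: float) -> float:
    """DG (mGy) from incident air kerma K and the Dance g, c, s factors."""
    return k_mGy * g * c * s

dg = mean_glandular_dose(k_mGy=8.0, g=0.2, c=1.0, s=1.0)  # placeholder inputs
print(round(dg, 2))
```

Each patient needs her own K, thickness-dependent factors and (estimated) glandularity, which is why doing this for large numbers of patients is not straightforward, and why extracting DG directly from the DICOM header is so much simpler.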
p. 67, line 32 – there is a typo: “patints”
My overall impression of this chapter is that analysis could be complicated! So many options are included e.g. complexity analysis, comparison of median, 10th, 25th and 75th percentile of local data to corresponding percentiles of “benchmark” data – would these even be available?! I wasn’t aware that national DRLs were published at this level of detail.
As the Chapter points out, there are so many factors that contribute to KAP and CAK during interventional procedures. Although optimisation is obviously important, we need to consider what we are trying to achieve. I liked the IPEM 88 approach of setting warning and investigation levels at 2x and 3x the median value.
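The IPEM 88 style approach mentioned above could be applied to audit data along these lines. This is a sketch: the 2× and 3× multipliers are the IPEM 88 levels as I recall them, and the KAP values are invented for illustration.

```python
# Sketch of the IPEM Report 88 style approach described above: warning and
# investigation levels set at 2x and 3x the median of the audit distribution.
# The KAP values are invented for illustration.
from statistics import median

kap_values = [12.0, 18.5, 22.0, 9.8, 30.1, 15.4, 27.3, 60.0]  # Gy.cm^2

med = median(kap_values)
warning_level = 2 * med
investigation_level = 3 * med

for kap in kap_values:
    if kap >= investigation_level:
        print(f"{kap} Gy.cm^2: investigate")
    elif kap >= warning_level:
        print(f"{kap} Gy.cm^2: warning")
```

The appeal of this approach is that it flags outliers for review without implying that every procedure should be driven down towards the median.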
I feel that there are so many variables involved in these procedures that it will be difficult to draw meaningful conclusions from a dose audit, and you would be making speculations based on physics principles. I would suspect that operator experience and expertise could be one of the largest sources of variation, but even the equipment itself is becoming increasingly complex e.g. flat panel detectors replacing image intensifiers, changes in focal spot size with field size selection, additional filtration etc… I think that regular QC is probably a better indicator of poor equipment performance than dose audit.
I’m not sure that DR needs its own section. I would say that the process of setting DRLs for radiographic equipment with DR detectors is exactly the same as it would have been for film, but obviously the two image receptor technologies should not be included in the same audit (I suspect that we need to differentiate between film, CR and DR). For DR systems with a KAP meter, it may be possible to extract KAP from the DICOM header to enable dose audit to be carried out quickly on a large scale.
Paragraph 273 – I think there is a discrepancy between the number of patients included in a survey recommended here and in other chapters. I thought that 30 patients were also required for interventional fluoroscopy (not just diagnostic fluoroscopy) and also for CT.
Section 7.1.3 Corrective action. I wonder if more guidance is required regarding the term “exceeded.” E.g. by how much, when comparing local to national and local to local over time.
p. 119, line 18 (point 16). This information re: breast thickness should be presented in the mammography section (3.5).
I’m not entirely sure what the rationale is for setting DRLs for different breast thicknesses, given that there is no requirement to set DRLs for different patient weights in general radiology (other than paediatric). I thought the very definition of DRLs was for “standard examinations for standard-sized patients.”
I appreciate that within a certain breast thickness range, there will additionally be a range of breast compositions and also that there are a range of kV, target / filter combinations available on mammography units. However, AEC calibration and phantom measurements during QC should cover this to a certain extent. It is very common to calculate DG for a range of phantom thicknesses and tolerance levels already exist.
Jenny Diffey, PhD
Senior Medical Physics Specialist
Hunter New England Imaging
Further Comments on ‘ICRP, 201x. Diagnostic Reference Levels in Medical Imaging’
The document is basically sound, aside from a few typos (as noted by Jenny Diffey). The only major comment I have is that the status of Achievable Doses (AD) is a little unclear.
There seem to be mixed messages – on page 103 it is noted that if the DRL is exceeded, an optimisation strategy is recommended. On pages 48 and 49, the report seems to say that this is not enough, and comparing median doses with the AD is the best way to achieve optimisation.
It seems akin to setting a speed limit of 60 kph and then saying “but 50 kph would be even better”. So is it OK to go at 60 kph? If not, why not make 50 kph the limit? And so on.
Hence I think there needs to be a clearer description of the respective roles of ADs and DRLs in the optimisation process.
Health Technology Management Unit
Perth, Western Australia