When you search for 'ceftidoren' on Wikipedia, you get a surprise: there is no entry. A search on the web, however, tells us that Spectracef (ceftidoren pivoxil) still exists. Gee, now the drug is spelled 'cefditoren', and looking through the list of retrieved articles, the spelling of this cephalosporin apparently goes both ways: both the "d-t" and the "t-d" versions are in widespread use. Clinicaltrials.gov lists a trial of ceftidoren ("t-d") vs. levofloxacin in acute exacerbations of chronic bronchitis (AECB) [1]; levofloxacin itself is often misspelled as levafloxacin, which is understandable given its brand name, Levaquin. Fortunately, Wikipedia has an article on cefditoren ("d-t"). Let's hope this beta-lactam never comes up at a spelling bee.
The reason this blog is about ceft*d…(ah, forget it, let's just call it Spectracef here) has to do with an interesting analysis provided by an FDA reviewer some 15 years ago [2]. By applying small modifications to the eligibility, validity and evaluation criteria of an AECB trial, he showed just how much the results were affected: in this case, they varied by as much as 30% (see Table).
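To get a feel for how this happens, here is a minimal sketch in Python using entirely made-up patient records (nothing here comes from the actual CEF 97-003 data set). The assumption built into the toy model is that outcome is correlated with how long patients were dosed and when they were assessed; once that holds, simply redefining who counts as 'clinically evaluable' moves the observed cure rate.

```python
import random

random.seed(1)

# 300 hypothetical AECB patients. Cure probability is deliberately tied to
# days of therapy taken and to the timing of the test-of-cure (TOC) visit,
# so that evaluability criteria select for better or worse outcomes.
patients = []
for _ in range(300):
    days_dosed = random.randint(3, 10)
    toc_day = random.randint(5, 30)
    p_cure = 0.40 + 0.05 * days_dosed - 0.005 * toc_day
    patients.append({
        "days_dosed": days_dosed,
        "toc_day": toc_day,
        "cured": random.random() < p_cure,
    })

def cure_rate(min_days_dosed, toc_window):
    """Cure rate among patients meeting one set of evaluability criteria."""
    lo, hi = toc_window
    evaluable = [p for p in patients
                 if p["days_dosed"] >= min_days_dosed
                 and lo <= p["toc_day"] <= hi]
    cures = sum(p["cured"] for p in evaluable)
    return cures / len(evaluable), len(evaluable)

# Two plausible-looking but different definitions of 'evaluable':
for label, min_days, window in [("strict", 7, (7, 14)),
                                ("lenient", 3, (5, 28))]:
    rate, n = cure_rate(min_days, window)
    print(f"{label:7s} criteria: n={n:3d}, cure rate = {rate:.1%}")
```

The exact numbers are noise, but the direction is not: stricter dosing and visit-window requirements systematically enrich the evaluable population with patients who do well.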
This is more variability than most of us would have predicted. How can one justify lumping results across various AECB studies when outcomes are so sensitive to even minor differences in design? Can we trust a meta-analysis of trials when there is no way to standardize criteria across them or to compensate for their effects on outcome? The FDA Medical Officer commented on the small differences between the Avelox and Spectracef trials despite their seemingly very similar, guideline-driven designs.
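One way to see the problem is through the standard heterogeneity statistics a meta-analyst would compute. The sketch below uses invented cure counts for five hypothetical AECB trials (none of these figures come from the Spectracef or Avelox reviews); if differing criteria push per-trial cure rates apart, Cochran's Q and I² flag the pooled estimate as untrustworthy.

```python
import math

# (cures, evaluable patients) for five hypothetical AECB trials whose
# success rates differ mainly because of differing evaluability criteria.
trials = [(160, 200), (150, 220), (170, 195), (120, 210), (140, 180)]

effects, weights = [], []
for cures, n in trials:
    p = cures / n
    var = p * (1 - p) / n          # variance of the observed proportion
    effects.append(p)
    weights.append(1 / var)        # inverse-variance weight

pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
Q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(trials) - 1
I2 = max(0.0, (Q - df) / Q)        # share of variability beyond chance

print(f"pooled cure rate = {pooled:.1%}")
print(f"Cochran's Q = {Q:.1f} on {df} df, I^2 = {I2:.0%}")
```

With spreads like these, I² lands above 90%: nearly all of the between-trial variation exceeds what chance allows, and a single pooled number papers over exactly the design effects the reviewer uncovered.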
Small changes to study design may have unintended consequences, but design elements can also be tweaked, or outright manipulated, to produce desired results. Marketing, ever concerned about impressions, would probably not be happy with a trial showing efficacy rates in the 50% range – even if non-inferiority was demonstrated – when competitors have published success rates of, say, 80% for the same indication. The Spectracef CEF 97-003 study was a high-quality registration trial, yet it demonstrates that even in a well-controlled, prospective, blinded trial (comparing two doses of Spectracef with cefuroxime) with rigorously defined design features, a favorable numerical outcome can be pre-programmed.
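For the record, a non-inferiority claim says nothing about absolute performance. Here is a minimal sketch of the usual two-proportion test, with invented counts and an assumed 10% margin (the actual margin in any given trial may differ): both arms can sit near 55% and still satisfy the statistics.

```python
import math

def noninferior(cures_new, n_new, cures_ref, n_ref, margin=-0.10, z=1.96):
    """True if the lower 95% CI bound of (p_new - p_ref) stays above the margin."""
    p_new, p_ref = cures_new / n_new, cures_ref / n_ref
    diff = p_new - p_ref
    se = math.sqrt(p_new * (1 - p_new) / n_new +
                   p_ref * (1 - p_ref) / n_ref)
    lower = diff - z * se
    print(f"diff = {diff:+.1%}, lower 95% bound = {lower:+.1%}, "
          f"margin = {margin:+.0%}")
    return lower > margin

# Both arms around 55%: non-inferiority is met, yet neither number would
# look good next to a published 80% success rate.
print("non-inferior:", noninferior(165, 300, 160, 300))
```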
Publications cannot provide such fine detail; only a diligent FDA reviewer can perform such detailed confirmatory subset analyses. If raw data sets were publicly available for reanalysis, it would be easier to run independent sensitivity tests and 'level the playing field' for a meta-analysis. But we are not there yet. Fortunately, in the case of the Spectracef AECB trial, a comparator arm of patients subjected to identical conditions of treatment and analysis provided an internal reference point that validated the results.
This was not the case in many of the LAIV studies we discussed recently [3]. Historical controls simply cannot make up for the lack of a randomized, contemporaneous cohort, no matter what fancy statistical footwork is applied. Studies using historical controls are tantamount to comparing the performance of a shiny new car on a freshly paved road to that of old gas guzzlers, bikes and trucks, all from a different era and on different road conditions. Nothing more.
The regulatory agencies currently face a dilemma: they recognize that RCTs cannot be done to study drug efficacy in purely multidrug-resistant (MDR) infections. Trials using historical controls in patients with CRE or ESBL infections, or with rarer infections caused by Pseudomonas or Acinetobacter, have been suggested as feasible alternatives; we may take up this topic in another blog. It is well known that historically 'controlled' trials always favor the new drug; from a regulatory perspective, the results of such 'trials' are almost impossible to interpret.
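A toy simulation makes the point. All numbers below are invented, and the 'new drug' has zero real effect; it beats the historical control anyway, simply because background care improved between eras.

```python
import random

random.seed(7)

def cures(p_cure, n):
    """Number of cures among n patients with per-patient cure probability p_cure."""
    return sum(random.random() < p_cure for _ in range(n))

era_2005_cure = 0.60   # historical cohort, older supportive care
era_2020_cure = 0.72   # same disease today, better background care
n = 250

hist_controls = cures(era_2005_cure, n)   # dug out of old records
new_drug_arm = cures(era_2020_cure, n)    # the new drug adds nothing here

print(f"historical control cure rate: {hist_controls / n:.1%}")
print(f"'new drug' cure rate:         {new_drug_arm / n:.1%}")
# The gap is pure era effect, yet it would read as drug benefit.
```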
References:
[1] https://clinicaltrials.gov/ct2/results?term=ceftidoren&Search=Search
[2] http://www.accessdata.fda.gov/drugsatfda_docs/nda/2001/21-222_Spectracef.cfm
[3] https://allphasepharma.com/dir/2016/07/07/2596/is-laiv-dead-or-just-on-laiv-support/