
Group sequential designs for pragmatic clinical trials with early outcomes: methods and guidance on planning and implementation

Abstract

Background

Group sequential designs are one of the most widely used methods for adaptive design in randomized clinical trials. In settings where early outcomes are available, they can give large gains in efficiency compared to a fixed design. However, these methods remain underused and are applied predominantly in therapeutic areas where there is expertise and experience in implementation. One barrier to their greater use is the need to undertake simulation studies at the planning stage, which require considerable knowledge, coding experience and additional costs. Based on some modest assumptions about the likely patterns of recruitment and the correlation structure of the outcomes, some simplified analytic expressions are presented that negate the need to undertake simulations.

Methods

A model for longitudinal outcomes with an assumed approximate multivariate normal distribution and three contrasting simple recruitment models, based on fixed, increasing and declining rates, are described. For assumed uniform and exponential correlation models, analytic expressions for the variance of the treatment effect, and for the effect of the early outcomes on reducing this variance at the primary outcome time-point, are presented. Expressions for the minimum and maximum values show how the correlations and the timing of the early outcomes affect design efficiency.

Results

Simulations showed how patterns of information accrual varied between correlation and recruitment models, leading to some general guidance for planning a trial. Using a previously reported group sequential trial as an exemplar, it is shown how the analytic expressions given here could have been used as a quick and flexible planning tool, avoiding the need for comprehensive simulation studies based on individual participant data.

Conclusions

The analytic expressions described can be used routinely at the planning stage of a putative trial, based on some modest assumptions about the likely number of outcomes and when they might occur, and the expected recruitment patterns. Numerical simulations showed that these models behaved sensibly and allowed a range of design options to be explored in a way that would have been difficult and time-consuming if the previously described method of simulating individual trial participant data had been used.

Peer Review reports

Background

Group sequential designs (GSD) are one of the most widely used methodologies for adaptive design in randomized clinical trials [1]. In GSD, investigators collect data and undertake sequential analyses with the option to either reject the null hypothesis, stop the study for futility or continue recruitment at an interim look, before reaching the planned sample size [2]. Despite the self-evident gains in efficiency that GSD and other adaptive designs offer due to the possibility of stopping early, the perception in much of the statistical community is that they are still underused and, where they are used, they are applied only within niche therapeutic areas where there is expertise and experience in implementation (e.g. in pharmaceutical trials testing drugs in oncology) [1]. There has been much discussion of the reasons why this is the case and how the barriers to uptake should be overcome, in particular the lack of knowledge, experience, statistical expertise and opportunity within the clinical trials community, outside of specialist teams [3]. A recent publication showed that GSD are feasible and are expected to be much more efficient than fixed sample size designs for pragmatic clinical trials, an application area where adaptive designs are rarely used [4]. Pragmatic trials typically test complex interventions (e.g. surgery, training, cognitive behavioural therapy) in routine clinical practice and are characterised by relatively large sample sizes and long follow-up periods [5, 6]. In such settings, GSD that use data not only from the final (primary) study outcome but also from early outcomes at interim analyses to inform stopping decisions have particular appeal, due in large part to the use of patient-reported outcome measures (PROMs) that show strong associations between early and final outcomes [4]. This approach is exemplified by the START:REACTS trial, which used this methodology to assess a novel intervention for repair of rotator cuff tendon tears [7]. The initial design and planning of this study, which was based on simulating individual trial participant data from a multivariate normal distribution [8], under an assumed model for study recruitment patterns [9], in order to assess likely information accumulation during the proposed trial, is a very general and highly effective method. However, such simulations are complex and time-consuming to set up and implement and therefore provide an additional barrier, among many others previously identified [3], to the wider use of GSD, particularly for trialists and statisticians who are not specialists in this area. If we are willing to make some modest assumptions about the distribution of the outcomes, the likely correlation structure and the recruitment patterns we might expect, then we can obtain relatively simple analytic expressions for information accrual throughout a trial. This allows us to explore a range of options for the timing and number of interim analyses in a routine way, without the need for simulating individual participant data, and as such makes the methods much more accessible to potential non-expert users. In order to do this, we propose a number of recruitment models and two contrasting correlation models for the temporal sequence of outcomes observed for individual study participants.
The recruitment and correlation models together provide expressions for the variance of the treatment effect estimate and a natural means to distinguish, and make explicit, the contributions to the information fraction of the early and primary outcome data at an interim analysis. Previous work has discussed the timing of follow-up measurements for a single early outcome, using a simple linear model for the decay in the correlation between the final and early outcomes over time [10]. The models we develop here allow us to explore this issue in the general case of more than one early endpoint, using an information adaptive group sequential approach to improve decision making at interim analyses. More generally, others have also promoted using information from prognostic baseline covariates (e.g. from baseline scores, comorbidities and patient demographics) in addition to early outcomes to inform interim decision making [11, 12]. Our focus is on stopping for treatment efficacy or futility, so we do not consider other adaptations that might be made to the trial design (e.g. sample size re-estimation) or, more generally, questions about inference and how to obtain unbiased estimates of treatment effects from group sequential trials that stop early [13,14,15]. Additionally, given the overwhelming predominance of continuous outcomes in pragmatic trials of complex interventions, we do not address binary or time-to-event outcomes. Although the motivation for the work comes from our own experience of pragmatic trials, the methodological approaches described here are relevant much more widely to GSD in any application area where the issues and design features we highlight are important.

We structure the paper as follows. In “Longitudinal outcomes” section, we describe a model for longitudinal outcomes with an assumed approximate multivariate normal distribution. “Recruitment and follow-up models” section develops three contrasting simple models for recruitment of participants into a clinical trial, and “Correlation models” section develops the models from “Longitudinal outcomes” and “Recruitment and follow-up models” sections for a uniform and an exponential correlation model. “Numerical examples” section provides some numerical examples to illustrate the models. The paper concludes in “Discussion” section with a discussion, including notes on the availability of software for implementation of the methods described.

Longitudinal outcomes

A group sequential trial

Consider a two-arm randomized controlled trial where participants are randomized to either a treatment or a control arm, followed up, assessed and the outcome observed at a sequence of s occasions at time-points \(d_{1},\ldots ,d_{s}\), ordered such that \(d_{s}>\dots >d_{1}\). In such a setting, the primary interest of the trial is often to estimate the effect of the treatment on the study outcome at time-point \(d_{s}\), the primary or final study outcome time-point. At some time t during the study, the total number of participants with data at follow-up occasion r (\(r = 1,\dots ,s\)) is \(N0_{r}+N1_{r}\), where \(N0_{r}\) is the number in the control arm and \(N1_{r}\) is the number in the treatment arm. Due to the ordering of the follow-up occasions, prior to the completion of trial follow-up, assuming data are complete, the numbers of participants with outcome data are structured such that \(N0_{1} \ge N0_{2} \ge \dots \ge N0_{s-1} \ge N0_{s}\) and \(N1_{1} \ge N1_{2} \ge \dots \ge N1_{s-1} \ge N1_{s}\). For instance, if the primary study outcome time-point is at 12 months after recruitment, with early outcomes at 3 and 6 months, then at all times prior to completion of follow-up we would expect to have more 3 month data than 6 month data, and more 3 and 6 month data than 12 month data.

If the full study sample of N participants is recruited over a period of time of length \(\text {T}_{\text {R}}\) (the recruitment period) and the final outcome is observed at time \(d_{s}\) (after recruitment), then study follow-up is complete, and the trial ends, at time \(d_{s}+\text {T}_{\text {R}}\). Importantly in this setting, there is a period of time between primary outcome data first becoming available for analysis and the end of recruitment (\(d_{s}<t<\text {T}_{\text {R}}\)). Within this so-called window of opportunity, there is the possibility of undertaking interim analyses, potentially stopping the study early for either treatment futility or efficacy. In such settings, if the interim analyses use final outcome data only, then the opportunities for stopping are likely to be very limited, as trial recruitment will often have been completed before there are sufficient final outcome data available for informed stopping decisions to be made [4, 8]. However, if the early outcomes for trial participants (at occasions \(d_{r}; r=1,\ldots ,s-1\)) are correlated with their final outcomes (at \(d_{s}\)), then a group sequential analysis [2] which uses information from both the early and final outcomes to estimate the treatment effect at \(d_{s}\) is likely to lead to considerable gains in statistical power and also to make early stopping feasible [10, 16].

A number of authors have investigated this setting [8, 10, 17] and, more generally, the use of group sequential analysis for longitudinal data [18, 19]. The simplest possible setting is, for instance, the double-regression method described by Engel and Walstra [17], in which there is a final (long-term) and a single early (short-term or surrogate) endpoint that are correlated for individuals at the two time-points. The main motivation for using information from the early outcomes, in addition to the final outcomes, in a clinical setting is that it allows us to conduct the trial in a more efficient manner, by potentially reaching a conclusive result more quickly and limiting patient exposure to ineffective or unsafe treatments if the study eventually provides less support for the efficacy of the intervention under test. Stallard [16], for instance, showed that using early outcome data, in the setting of a seamless phase II/III clinical trial with treatment selection, resulted in an increase in statistical power when these data are correlated with the final outcome. A general approach in the setting of a sequential clinical trial, with a number of interim analyses, with a single long-term and potentially many short-term endpoints for a two-arm trial was first proposed by Galbraith and Marschner [10] and discussed further by Parsons et al. for a clinical trial in shoulder surgery [8], including extensive simulations for a prospective sample size calculation, and for surgical trials in general [4]. These authors rely in all cases on the independent increments argument, based on an asymptotic joint multivariate normal distribution for the sequential test statistics, for construction of valid group sequential designs where longitudinal models are used, e.g. linear mixed-effects and generalized least squares models [2, 20]. Due to the nature of the applications described, the focus here is purely on using early outcome data to inform decision making. More generally, others have described approaches in settings where baseline (prognostic) covariates are available in addition to, or in preference to, early outcomes [12, 21].

Data model

Let \(y_{ijr}\) be the outcome for the \(i^{th}\) of N participants \((i = 1,\ldots ,N)\), at follow-up occasion r \((r = 1,\dots ,s)\), recruited into treatment arm j (0 = control and 1 = treatment) of the group sequential trial. We assume hereafter independence between the trial participants and that the distribution of outcomes \((y_{ij1},\dots ,y_{ijs})\) is multivariate normal, with mean \((\mu _{j1},\dots ,\mu _{js})\) and \(s \times s\) covariance matrix

$$\begin{aligned} \Sigma = \left( \begin{array}{cccc} \sigma ^{2}_{1} &{} \sigma _{1} \sigma _{2} \rho _{12} &{} \dots &{} \sigma _{1} \sigma _{s} \rho _{1s}\\ \sigma _{2} \sigma _{1} \rho _{21} &{} \sigma ^{2}_{2} &{} \dots &{} \sigma _{2} \sigma _{s} \rho _{2s} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \sigma _{s} \sigma _{1} \rho _{s1} &{} \sigma _{s} \sigma _{2} \rho _{s2} &{} \dots &{} \sigma ^{2}_{s} \end{array}\right) , \end{aligned}$$
(1)

where \(\sigma _{r}\) is the standard deviation of the outcome at occasion r and \(\rho _{rr^{\prime }}\) is the correlation between outcomes at occasions \(r=1,\dots ,s\) and \(r^{\prime }=1,\dots ,s\). Note also that \(\Sigma\) can be expressed as \(\Sigma = \text{S}^{1/2} \text{R} \text{S}^{1/2}\), for correlation matrix \(\text {R}\) and (diagonal) variance matrix \(\text {S}\).

Expressed as a linear longitudinal model with correlated errors, under the assumption of multivariate normality (MVN), the vector of outcomes \({\textbf {y}}_{i}\), for participant i, has distribution \({\textbf {y}}_{i} \sim \text {MVN}(X_{i} \beta , \Sigma _{i})\), where \(\Sigma _{i}\) is the \(r \times r\) covariance matrix of \({\textbf {y}}_{i}\), for the r observed outcomes for participant i, characterised by covariance parameters \(\sigma _{r}\) and \(\rho _{rr^{\prime }}\). \(X_{i}\) is a \(r \times 2s\) design matrix and \(\varvec{\beta }\) is a \(2s \times 1\) vector of unknown model parameters, of which for practical purposes the most important is \(\beta _{s}\), the effect of the treatment on the study outcome at time-point \(d_{s}\), the primary study endpoint.

The maximum likelihood estimate for \(\varvec{\beta }\), under the multivariate normal assumption, for known \(\Sigma\), is the generalized least squares estimator [22]

$$\begin{aligned} \varvec{\beta } = \Bigg ( \sum \limits _{i=1}^N X_{i}^{\prime } \Sigma ^{-1}_{i} X_{i} \Bigg )^{-1} \Bigg ( \sum \limits _{i=1}^N X_{i}^{\prime } \Sigma ^{-1}_{i} y_{i} \Bigg ), \end{aligned}$$
(2)

with variance given by

$$\begin{aligned} \text {var}(\varvec{\beta }) = \Bigg ( \sum \limits _{i=1}^N X_{i}^{\prime } \Sigma ^{-1}_{i} X_{i} \Bigg )^{-1}. \end{aligned}$$
(3)

Estimates of the model parameters \(\varvec{\beta }\) and their variances \(\text {var}(\varvec{\beta })\), and hence information, follow naturally given \(\Sigma\), which is obtained from estimates of \(\varvec{\rho }\) and \(\varvec{\sigma }\). The covariance parameters could, in principle, be fixed at known or expected values but are generally estimated from accumulating data as a trial progresses. For instance, Galbraith and Marschner [10] use mixed-effects models for analysis of correlated data to estimate \(\varvec{\rho }\) and \(\varvec{\sigma }\). In practice this can be implemented, for example, by fitting separate fixed effects for each study outcome time \(d_{r}\) with an unstructured error covariance using the function lme in the R [23] package nlme. However, for practical reasons during trial planning and monitoring we prefer to use the generalized least squares model function gls in R package nlme, which, unlike the mixed-effects model, provides explicit estimates of the covariance parameters [24]. Either the mixed-effects or generalized least squares formulation delivers stable and unbiased estimates of model parameters [4], under an assumed multivariate normal distribution with a general covariance structure, common follow-up occasions for each individual and missing outcomes that are assumed to be a consequence of the truncated follow-up duration.
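
To make the estimation step concrete, the sketch below shows one way such a generalized least squares model could be fitted in R with gls from nlme, using an unstructured correlation matrix and occasion-specific variances. The data set dat and its columns (id, occasion, treat, outcome) are hypothetical placeholders, not code from any trial described here.

```r
# A minimal sketch (assumed data layout): generalized least squares fit for
# longitudinal trial data in long format, with columns
#   id        participant identifier
#   occasion  follow-up occasion (factor with levels for d_1, ..., d_s)
#   treat     randomized arm, coded 0 = control, 1 = treatment
#   outcome   observed outcome value
library(nlme)

fit <- gls(outcome ~ occasion + occasion:treat,           # occasion means and per-occasion treatment effects
           data        = dat,
           correlation = corSymm(form = ~ 1 | id),        # unstructured within-participant correlation R
           weights     = varIdent(form = ~ 1 | occasion), # occasion-specific standard deviations sigma_r
           na.action   = na.omit)

summary(fit)                 # fixed effects, including the treatment effect at the final occasion
fit$modelStruct$corStruct    # estimated correlation parameters rho
fit$modelStruct$varStruct    # estimated relative standard deviations
vcov(fit)                    # var(beta), from which the information 1/var(beta_s) follows
```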

Trial planning and monitoring

The primary interest of the clinical trial is to estimate \(\beta _{s}\) and its variance \(\text {var}(\beta _{s})\). Easily interpretable explicit expressions for \(\text {var}(\beta _{s})\) do not exist for general s and general covariance structure \(\Sigma\). However, expressions for \(\text {var}(\beta _{s})\) can be obtained directly for the most simple cases, under the structured data assumptions of “Longitudinal outcomes” section, where there are one (\(s=2\)) and two (\(s=3\)) early outcomes [4]. For example, for the simplest possible case \(s=2\),

$$\begin{aligned} \text {var}(\beta _{2}) = \sigma _{2}^2\Bigg [\frac{(N0_{2}+N1_{2})(1-\rho ^{2}_{12})}{N0_{2}N1_{2}} + \frac{(N0_{1}+N1_{1})\rho ^{2}_{12}}{N0_{1}N1_{1}}\Bigg ]. \end{aligned}$$
(4)

Of particular practical importance when planning an information adaptive group sequential study is to understand how information on the treatment effect at a time t, \(\text {I}(t)=1/\text {var}(\beta _{s}(t))\), is expected to accumulate during recruitment and follow-up. Typically, pre-set expected information thresholds are used to trigger interim analyses, and to construct lower and upper stopping boundaries at the interim analyses, with stopping decisions being made based on estimates of \(\beta _{s}\) and \(\text {var}(\beta _{s})\) [8]. Clearly, the information at some time t during recruitment depends on the covariance parameters \(\rho =\rho _{12},\dots ,\rho _{rr^{\prime }}\) and \(\sigma _{s}\), and the numbers of participants (\(N0_{r}\) and \(N1_{r}\)) with data at each follow-up occasion r.
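
As a quick illustration of expression (4), the short sketch below evaluates \(\text {var}(\beta _{2})\) and the corresponding information \(\text {I}(t)=1/\text {var}(\beta _{2})\) for illustrative (hypothetical) sample sizes; the function name var_beta2 is ours, not from any package.

```r
# Variance of the treatment effect at the final outcome for s = 2 (expression (4)),
# given per-arm sample sizes at the early (occasion 1) and final (occasion 2) outcomes.
var_beta2 <- function(N0, N1, sigma2, rho12) {
  # N0, N1: length-2 vectors of control/treatment numbers at occasions 1 and 2
  sigma2^2 * ( (N0[2] + N1[2]) * (1 - rho12^2) / (N0[2] * N1[2]) +
               (N0[1] + N1[1]) * rho12^2       / (N0[1] * N1[1]) )
}

# Illustrative numbers: 60 per arm with early data, 30 per arm with final data
v <- var_beta2(N0 = c(60, 30), N1 = c(60, 30), sigma2 = 12, rho12 = 0.5)
1 / v   # information I(t) at this interim time-point
```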

In order to plan how a trial might be implemented and when interim analyses should take place, we need to understand how information is likely to accumulate as the study recruits. To do this we need to make some a priori assumptions about both the likely pattern of recruitment and the correlation structure between the early and final outcomes. In the most general setting we might imagine, including complex models of recruitment and accrual of study data and unstructured correlations between outcomes, simulation methods may be the only way to proceed at the planning stage [8]. Such an approach is hard to implement, time-consuming and often provides little or no insight into the general principles at play and how they might guide us when we make future modifications to the design or when planning later studies. However, if we are willing to make some reasonable assumptions about the likely patterns of recruitment and the structure of the correlations, then we can obtain explicit analytic expressions for \(\text {var}(\beta _{s})\) much more quickly and simply, and use these as a means to plan the study.

When planning a trial we assume that a fixed number of early outcomes are available throughout the study (for instance, a primary outcome at 12 months, with two early outcomes at 3 and 6 months) and that, in principle, the timings of the early outcomes could be changed (for instance, to 4 and 8 months). Typically, the exact correlations between early and primary outcomes are unknown. However, we can speculate on a plausible correlation structure as a means to understand how information might be gathered as follow-up accrues. Two widely used correlation models for longitudinal data are described in “Correlation models” section. At the design stage, for some arbitrarily selected time-point t within recruitment, we will generally not know the exact number of participants recruited or the numbers of participants (\(N0_{r}\) and \(N1_{r}\)) with data at each follow-up occasion r. We describe simple models for predicting recruitment in “Recruitment and follow-up models” section. Given correlation and recruitment models, together with an estimate of \(\sigma _{s}\) (e.g. from previously reported trials or pilot data), we can predict how \(\text {var}(\beta _{s})\), and therefore information, will vary during study follow-up and use this to motivate our choice of the number and timings of early outcome assessments and interim analyses.

When monitoring a study, often more important than the information itself is the information fraction or information time \(\tau (t)\) at an interim analysis at time t, defined as \(\tau (t)=\text {I}(t)/\text {I}\), where \(\text {I}(t)\) and \(\text {I}\) are the information levels at time t and the study end, respectively [25]. Knowing the information fraction \(\tau (t)\) allows us to determine lower and upper boundaries (for planned futility and efficacy stopping) and boundary crossing probabilities at an interim analysis, for some given boundary crossing probabilities under the null hypothesis, based on the canonical joint distribution properties of group sequential trials [2]. Boundaries and probabilities can, for instance, be calculated using relevant functions from the gsDesign package in R [26]. An example of how these might be implemented in practice is provided in “Numerical examples” section, using as an exemplar the START:REACTS study of a sub-acromial spacer for tears affecting rotator cuff tendons [7, 27].

Recruitment and follow-up models

Provided the data are structured as in “A group sequential trial” section and are complete, consistent with what we would assume when planning and undertaking sample size calculations in a conventional trial design based on a single primary endpoint, we can write a general expression for the number of participants providing outcome data from follow-up occasion r at time t as \(N_{r}(t,d_{r})=k g_{r}(t,d_{r})\), where k is a constant depending on the planned sample size N and recruitment period \(\text {T}_{\text {R}}\) only and \(g_{r}(t,d_{r})\) is some function of t and the follow-up time-point \(d_{r}\), measured in the same units as t. For notational convenience, we define \(r=0\) to be the recruitment occasion, so that \(g_{0}(t,d_{0})\) is the value of the function \(g_{r}(t,d_{r})\) when \(r=0\), that is at the time-point when recruitment occurs and \(d_{r}=0\), such that \(N_{0}(t,d_{0})=k g_{0}(t,d_{0})\) is the number of participants recruited by time t. For \(d_{r}<t\le d_{r}+\text {T}_{\text {R}}\), the number of participants is \(N_{r}(t,d_{r})=k g_{r}(t,d_{r})\); at \(t<d_{r}\), prior to outcome data becoming available, \(N_{r}(t,d_{r})=0\); and at \(t>d_{r}+\text {T}_{\text {R}}\), when data collection has been completed for outcome r, \(N_{r}(t,d_{r})=N\). We also note that \(n_{rr^{\prime }}(t)=N_{r}(t,d_{r})/N_{r^{\prime }}(t,d_{r^{\prime }})\), the ratio of the number of study participants with outcome data from follow-up occasion \(d_{r}\) to study participants with outcome data from follow-up occasion \(d_{r^{\prime }}\) at time t, is equal to \(g_{r}(t,d_{r})/g_{r^{\prime }}(t,d_{r^{\prime }})\). Introducing a weight \(0<\phi <1\), which allows for unequal group sizes, gives intervention group sizes of \(N0_{r}(t,d_{r})=\phi N_{r}(t,d_{r})\) and \(N1_{r}(t)=(1-\phi ) N_{r}(t,d_{r})\).

Fixed rate

In the simplest possible setting, setting \(k=N/\text {T}_{\text {R}}\) and \(g_{r}(t,d_{r})=(t-d_{r})\) in the expression \(N_{r}(t,d_{r})=k g_{r}(t,d_{r})\) leads to a model with a fixed rate of recruitment (\(\lambda _{\text {f}}\)) and follow-up data accrual, where \(\lambda _{\text {f}}=N/\text {T}_{\text {R}}\) participants are recruited into the study per unit of study time t (e.g. per day if \(\text {T}_{\text {R}}\) is measured in days). The total number of participants recruited into the study by time t for the fixed rate model is given by \(N_{0}(t)=Nt/\text {T}_{\text {R}}\); Fig. 1a shows total recruitment and follow-up data accrual curves for this model.

Linearly increasing rate

Setting \(k=N/\{\text {T}_{\text {R}}(\text {T}_{\text {R}}+1)\}\) and \(g_{r}(t,d_{r})=(t-d_{r})((t-d_{r})+1)\) in the expression \(N_{r}(t,d_{r})=k g_{r}(t,d_{r})\) leads to a model with an increasing rate of recruitment given by \(\lambda _{\text {i}}(t)=2Nt/\{\text {T}_{\text {R}}(\text {T}_{\text {R}}+1)\}\). In this model the mean rate of recruitment across the whole recruitment period is \(N/\text {T}_{\text {R}}\), the same as the fixed rate model \(\lambda _{\text {f}}\), with the starting rate (at \(t=1\)) given by \(\lambda _{\text {i}}(1)=2N/\text {T}_{\text {R}}(\text {T}_{\text {R}}+1)\) and the end rate (at \(t=\text {T}_{\text {R}}\)) by \(\lambda _{\text {i}}(\text {T}_{\text {R}})=2N/(\text {T}_{\text {R}}+1)\), noting that \(\lambda _{\text {i}}(1)<\lambda _{\text {f}}\) and \(\lambda _{\text {i}}(\text {T}_{\text {R}})>\lambda _{\text {f}}\). The total number of participants recruited into the study by time t for the increasing rate model is given by \(N_{0}(t)=Nt(t+1)/\{\text {T}_{\text {R}}(\text {T}_{\text {R}}+1)\}\); Fig. 1b shows total recruitment and follow-up data accrual curves for this model.

Linearly decreasing rate

Formulated deliberately as a contrast to the model of “Linearly increasing rate” section, setting \(k=N/\{\text {T}_{\text {R}}(\text {T}_{\text {R}}+1)\}\) and \(g_{r}(t,d_{r})=(t-d_{r})(2\text {T}_{\text {R}}-(t-d_{r})+1)\) leads to a model with a decreasing rate of recruitment given by \(\lambda _{\text {d}}(t)=2N(\text {T}_{\text {R}}-t+1)/\{\text {T}_{\text {R}}(\text {T}_{\text {R}}+1)\}\). In this model the mean rate of recruitment across the whole recruitment period is also \(N/\text {T}_{\text {R}}\), the same as the fixed rate model \(\lambda _{\text {f}}\), with the starting rate (at \(t=1\)) given by \(\lambda _{\text {d}}(1)=2N/\{(\text {T}_{\text {R}}+1)\}\) and the end rate (at \(t=\text {T}_{\text {R}}\)) by \(\lambda _{\text {d}}(\text {T}_{\text {R}})=2N/\{\text {T}_{\text {R}}(\text {T}_{\text {R}}+1)\}\), noting that, in a reversal of the relationship for the increasing rate model, \(\lambda _{\text {d}}(1)>\lambda _{\text {f}}\) and \(\lambda _{\text {d}}(\text {T}_{\text {R}})<\lambda _{\text {f}}\). The total number of participants recruited by time t for the decreasing rate model is given by \(N_{0}(t)=Nt(2\text {T}_{\text {R}}-t+1)/\{\text {T}_{\text {R}}(\text {T}_{\text {R}}+1)\}\); Fig. 1c shows total recruitment and follow-up data accrual curves for this model.
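
The three recruitment models can be collected into a single helper that returns \(N_{r}(t,d_{r})\), truncated at zero before \(d_{r}\) and capped at N once follow-up for occasion r is complete. A minimal sketch (the function name n_accrued is hypothetical) follows.

```r
# N_r(t, d_r) = k * g_r(t, d_r) for the fixed, increasing and decreasing rate models.
# N: planned sample size; TR: recruitment period; d: follow-up time of occasion r
# (d = 0 gives the number recruited). Time t is in the same units as TR and d.
n_accrued <- function(t, d, N, TR, model = c("fixed", "increasing", "decreasing")) {
  model <- match.arg(model)
  u <- pmax(0, t - d)                          # time since outcome r data started to accrue
  g <- switch(model,
              fixed      = u,
              increasing = u * (u + 1),
              decreasing = u * (2 * TR - u + 1))
  k <- if (model == "fixed") N / TR else N / (TR * (TR + 1))
  n <- pmin(N, k * g)
  ifelse(t - d >= TR, N, n)                    # data collection complete after d + TR
}

# Example: participants with final (d = 12) outcome data at t = 18 months,
# for N = 188 recruited over TR = 24 months
n_accrued(t = 18, d = 12, N = 188, TR = 24, model = "fixed")
```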

Fig. 1

Total recruitment and follow-up accrual curves for the primary and all early follow-up endpoints at times \(d_{1},d_{2},\dots ,d_{r},\dots ,d_{s}\) for (a) the fixed rate recruitment model, (b) the linearly increasing rate recruitment model and (c) the linearly decreasing rate recruitment model. Annotation shows the number of participants recruited \(N_{0}\) and the numbers with follow-up data \(N_{1},N_{2},\dots ,N_{r},\dots ,N_{s}\) at time t, set such that \(N_{0}\) is the same for each setting

Correlation models

We consider two common single-parameter correlation models; the uniform and exponential models [22], the latter also known as the first-order autoregressive (AR1) model. These models offer contrasting views of the likely correlation structure between early and final outcomes. We choose to use \(\alpha\) and \(\gamma\) as the parameters for the uniform and exponential models, in the following descriptions, to reflect the fact that they have quite different interpretations.

Uniform

The uniform correlation model is the natural basis for a random-effects model, which we can motivate in our setting by thinking of the covariance structure of the data as a consequence of random variation in (unobserved) subject-specific characteristics of participants in a clinical trial. The uniform correlation model is widely seen in trials using PROMs, where participants are asked to assess their own status or functional abilities [4]. It assumes the correlations between measurements are constant regardless of how far apart in time they are, with measurements on a unit (participant in a trial) at time-points \(r=1,\dots ,s\) and \(r^{\prime }=1,\dots ,s\) given by \(\rho _{rr^{\prime }}=\alpha\) when \(r \ne r^{\prime }\) and \(\rho _{rr^{\prime }}=1\) when \(r=r^{\prime }\).

Expression for \(\text {var}(\beta _{s})\)

For the uniform correlation model, assuming that the numbers of participants with outcome data are structured in the manner described in “Longitudinal outcomes” and “Recruitment and follow-up models” sections, the variance of the treatment effect on the study outcome at time-point s (the primary study endpoint) is given by

$$\begin{aligned} \text {var}(\beta ^{\text {unif}}_{s}) = \sigma _{s}^2\Bigg [\frac{N0_{1}+N1_{1}}{N0_{1}N1_{1}} + \sum \limits _{m=1}^{s-1}\frac{\det (\text {R}_{m+1})}{\det (\text {R}_{m}) }\Bigg (\frac{N0_{m+1}+N1_{m+1}}{N0_{m+1}N1_{m+1}}-\frac{N0_{m}+N1_{m}}{N0_{m}N1_{m}}\Bigg )\Bigg ], \end{aligned}$$

where \(\det (\text {R}_{m})=(1-\alpha )^{m-1}(1+(m-1)\alpha )\) is the determinant of the \(m \times m\) correlation matrix \(\text {R}_{m}\) (see Appendix A1 for details). Therefore we can additionally write this as follows;

$$\begin{aligned} \text {var}(\beta ^{\text {unif}}_{s}) ={} & {} \sigma _{s}^2\Bigg [\frac{N0_{1}+N1_{1}}{N0_{1}N1_{1}} + \nonumber \\{} & {} \sum \limits _{m=1}^{s-1}\frac{(1-\alpha )(1+m\alpha )}{(1+(m-1)\alpha ) }\Bigg (\frac{N0_{m+1}+N1_{m+1}}{N0_{m+1}N1_{m+1}}-\frac{N0_{m}+N1_{m}}{N0_{m}N1_{m}}\Bigg )\Bigg ]. \end{aligned}$$
(5)

For the simplest possible case where \(s=2\), \(\det (\text {R}_{1})=1\) and \(\det (\text {R}_{2})=(1-\alpha )(1+\alpha )=1-\alpha ^{2}\) and then, as we would hope, expressions (4) and (5) are equal (i.e. \(\text {var}(\beta _{2}) = \text {var}(\beta ^{\text {unif}}_{2})\)) when \(\alpha =\rho _{12}\). At the extremes of \(\alpha\), we note that when \(\alpha =1\), then \(\text {var}(\beta ^{\text {unif}}_{s}) = \sigma _{s}^2 (N0_{1}+N1_{1})/N0_{1}N1_{1}\) and only the data (information) from the first early outcome (at \(r=1\)) are needed, whereas when \(\alpha =0\), then \(\text {var}(\beta ^{\text {unif}}_{s}) = \sigma _{s}^2 (N0_{s}+N1_{s})/N0_{s}N1_{s}\) and data at earlier times provide no information.
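
Expression (5) is simple to evaluate directly; the sketch below (a hypothetical helper, var_beta_unif) computes \(\text {var}(\beta ^{\text {unif}}_{s})\) from per-arm sample sizes at each occasion, \(\sigma _{s}\) and \(\alpha\), and reproduces expression (4) when \(s=2\).

```r
# Variance of the treatment effect at the primary outcome under the uniform
# correlation model (expression (5)).
# N0, N1: length-s vectors of control/treatment numbers with data at occasions 1..s
var_beta_unif <- function(N0, N1, sigma_s, alpha) {
  s <- length(N0)
  h <- (N0 + N1) / (N0 * N1)               # (N0_r + N1_r) / (N0_r N1_r) for r = 1..s
  m <- seq_len(s - 1)
  w <- (1 - alpha) * (1 + m * alpha) / (1 + (m - 1) * alpha)  # det(R_{m+1}) / det(R_m)
  sigma_s^2 * (h[1] + sum(w * (h[m + 1] - h[m])))
}

# s = 2 check: agrees with expression (4) when alpha = rho12
var_beta_unif(N0 = c(60, 30), N1 = c(60, 30), sigma_s = 12, alpha = 0.5)
```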

Recruitment models and \(\text {var}(\beta _{s})\)

Substituting the expressions for \(N0_{r}\) and \(N1_{r}\), from “Recruitment and follow-up models” section, into Eq. (5) gives the following expression for the variance of the treatment effect on the primary outcome at time t, where we require \(d_{s}<t\le \text {T}_{\text {R}}\);

$$\begin{aligned} \text {var}(\beta ^{\text {unif}}_{s}(t)) ={} & {} \frac{\sigma _{s}^2}{k\phi (1-\phi )} \Bigg [\frac{1}{g_{1}(t,d_{1})} +\nonumber \\{} & {} \sum \limits _{m=1}^{s-1}\frac{(1-\alpha )(1+m\alpha )}{(1+(m-1)\alpha ) }\Bigg (\frac{1}{g_{m+1}(t,d_{m+1})}-\frac{1}{g_{m}(t,d_{m})}\Bigg )\Bigg ]. \end{aligned}$$
(6)

When the correlation between early and primary outcomes is zero (\(\alpha =0\)), then \(\det (\text {R}_{m+1})=\det (\text {R}_{m})=1\) for all m and, noting that

$$\begin{aligned} \sum \limits _{m=1}^{s-1}\Bigg (\frac{1}{g_{m+1}(t,d_{m+1})}-\frac{1}{g_{m}(t,d_{m})}\Bigg )=\frac{1}{g_{s}(t,d_{s})}-\frac{1}{g_{1}(t,d_{1})}, \end{aligned}$$

the variance when there is no correlation, \(\text {var}(\beta ^{\text {0}}_{s})\), is given by

$$\begin{aligned} \text {var}(\beta ^{\text {0}}_{s}(t)) =\frac{\sigma _{s}^2}{k\phi (1-\phi )g_{s}(t,d_{s})}, \end{aligned}$$
(7)

where we note that \(k g_{s}(t,d_{s})\) is equal to \(N_{s}\), the number of study participants with primary endpoint data at time t. We can construct a measure \(\text {V}_{s}(t)\) of the relative effect of the early outcomes, due to their correlation with the primary outcome, on reducing the variance of the primary outcome by dividing \(\text {var}(\beta _{s}(t))\) by \(\text {var}(\beta ^{\text {0}}_{s}(t))\). For the uniform model this is

$$\begin{aligned} \text {V}^{\text {unif}}_{s}(t)=n_{s1}(t) + \sum \limits _{m=1}^{s-1} \frac{(1-\alpha )(1+m\alpha )}{(1+(m-1)\alpha )}(n_{s(m+1)}(t)-n_{sm}(t)). \end{aligned}$$
(8)

Here, \(n_{s1}(t)\le \text {V}^{\text {unif}}_{s}(t)\le 1\), with the lower constraint (giving the maximum possible gain from the early outcomes) occurring at \(\alpha =1\), where information on the primary outcome (at \(d_{s}\)) comes entirely from the first early outcome (at \(d_{1}\)), and the upper constraint occurring when \(\alpha =0\) and there is no information from any of the early outcomes. More generally, for values of the correlation parameter between these limits, \(\text {V}^{\text {unif}}_{s}(t)\) varies as a function of both t and \(d_{r}\) \((r = 1,\dots ,s)\), and their relative spacings.
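
Because expression (8) depends only on the ratios \(n_{sr}(t)\) and \(\alpha\), it can be computed with a few lines of code; the following sketch (hypothetical helper V_unif) illustrates this.

```r
# Relative effect V^unif_s(t) of the early outcomes under the uniform correlation
# model (expression (8)).
# n_s: length-s vector of ratios n_{s r}(t) = N_s(t) / N_r(t) for r = 1..s (so n_s[s] = 1)
V_unif <- function(n_s, alpha) {
  s <- length(n_s)
  m <- seq_len(s - 1)
  w <- (1 - alpha) * (1 + m * alpha) / (1 + (m - 1) * alpha)
  n_s[1] + sum(w * (n_s[m + 1] - n_s[m]))
}

# Example with two early outcomes (s = 3): half as many final as first-early observations
V_unif(n_s = c(0.5, 0.75, 1), alpha = 0.5)
```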

Minimum and maximum of \(\text {V}^{\text {unif}}_{s}(t)\)

For a given value of \(\alpha\), the minimum of \(\text {V}^{\text {unif}}_{s}(t)\) occurs trivially in the uniform correlation model when we maximize the data available for each intermediate early outcome by moving them all towards the earliest outcome at \(d_{1}\). In this setting \(d_{m}\rightarrow d_{1}\) and functions \(g_{m}(t,d_{m})\rightarrow g_{1}(t,d_{1})\) and consequently \(n_{sm}(t)\rightarrow n_{s1}(t)\) for all \(m = 2,\dots ,s-1\), and then from expression (8), noting that \(n_{ss}=1\), the minimum of \(\text {V}^{\text {unif}}_{s}(t)\) is given by

$$\begin{aligned} \min _{d_{2},\ldots ,d_{s-1}} \Big ( \text {V}^{\text {unif}}_{s}(t) \Big )=n_{s1}(t) + \frac{(1-\alpha )(1+(s-1)\alpha )}{(1+(s-2)\alpha )}(1-n_{s1}(t)). \end{aligned}$$
(9)

The maximum occurs when all the intermediate early outcomes are moved towards the final outcome at \(d_{s}\). In this setting \(d_{m}\rightarrow d_{s}\) and functions \(g_{m}(t,d_{m})\rightarrow g_{s}(t,d_{s})\) and consequently \(n_{sm}(t)\rightarrow 1\) for all \(m = 2,\dots ,s-1\), and then from expression (8) the maximum of \(\text {V}^{\text {unif}}_{s}(t)\) is given by

$$\begin{aligned} \max _{d_{2},\ldots ,d_{s-1}} \Big ( \text {V}^{\text {unif}}_{s}(t) \Big )=n_{s1}(t) + (1-\alpha ^2)(1-n_{s1}(t)). \end{aligned}$$
(10)

The terms \(n_{rr^{\prime }}\) in expression (8) represent the effects of the changing sample size, under the different recruitment models, and as such are independent of the correlation parameter \(\alpha\). The effect of the correlation \(\alpha\) on \(\text {V}^{\text {unif}}_{s}(t)\) is fixed, irrespective of the spacing or differences between the early endpoints.

Exponential

The exponential model, in contrast to the uniform model, assumes that the correlation between pairs of measurements on the same subject declines towards zero as the time separation between them increases. This model is widely used for longitudinal outcomes [22] and evidence from our own work suggests that it is a useful working assumption for modelling the association between serial measurements of PROMs in many large pragmatic clinical trials [4]. In the exponential model the correlation between a pair of measurements on a unit (participant in a trial) at time-points r and \(r^{\prime }\) tends towards zero as the time between measurements increases, \(\rho _{rr^{\prime }}=\gamma ^{|d_{r}-d_{r^{\prime }}|}\) [22], where the \(d_{r}\) are increasing ordered times that indicate the relative times of assessment. The parameter \(\gamma\) expresses the strength of association for unit separation (i.e. where \(|d_{r}-d_{r^{\prime }}|=1\)) and, for the applications discussed here, for ease of interpretation is such that \(0\le \gamma <1\).

Expression for \(\text {var}(\beta _{s})\)

For an assumed exponential model and a known, or assumed, value of \(\gamma\), then for data structured in the manner described in “Longitudinal outcomes” and “Recruitment and follow-up models” sections, \(\text {var}(\beta ^{\text {exp}}_{s})\) is given, after some algebraic manipulation (see Appendix A2 for details), for \(s\ge 3\), by

$$\begin{aligned} \text {var}(\beta ^{\text {exp}}_{s}) ={} & {} \sigma _{s}^2\Bigg [\frac{(N0_{s}+N1_{s})\{1-\gamma ^{2(d_{s}-d_{s-1})}\}}{N0_{s}N1_{s}} + \nonumber \\{} & {} \sum \limits _{m=1}^{s-2} \frac{(N0_{s-m}+N1_{s-m})\{1-\gamma ^{2(d_{s-m}-d_{s-m-1})}\}\gamma ^{2(d_{s}-d_{s-m})}}{N0_{s-m}N1_{s-m}} + \nonumber \\{} & {} \frac{(N0_{1}+N1_{1})\gamma ^{2(d_{s}-d_{1})}}{N0_{1}N1_{1}}\Bigg ]. \end{aligned}$$
(11)

In this setting, without loss of generality, we can select the timings of the follow-up assessments \(d_{r}\) such that in all settings \(d_{1}=1\) and \(d_{s}=s\), so that, for instance, if \(d_{1}=1,d_{2}=3,d_{3}=7/2\) and \(d_{4}=4\) then this could represent assessments at 1, 3, 3.5 and 4 years or 4, 12, 14 and 16 months, depending on whether the base unit of time is 1 year or 4 months. Clearly, though, the correlation parameter \(\gamma\) will generally differ depending on whether we are considering the former or latter setting. In the most general case, similar arguments can be applied if we wish to plan follow-up assessments such that \(d_{s}\) is not a multiple of \(d_{1}\). For instance, if assessments are planned for 4, 12, 18 and 22 months, then setting \(d_{1}=1,d_{2}=7/3,d_{3}=10/3\) and \(d_{4}=4\) ensures that, as we would expect given the relative distances, correlations between the 18 and 22 month assessments \(\gamma ^{(d_{4}-d_{3})}=\gamma ^{2/3}\) are the square root of those between the 4 and 12 month assessments \(\gamma ^{(d_{2}-d_{1})}=\gamma ^{4/3}\), for a given value of \(\gamma\). If the assessments are equally spaced in our model (i.e. when \(d_{r}=r\) for \(r=1,\dots ,s\)) then \((d_{s}-d_{s-1})=\cdots =(d_{2}-d_{1})=1\), \((d_{s}-d_{s-2})=\cdots =(d_{3}-d_{1})=2\), \(\ldots\) , \((d_{s}-d_{1})=s-1\), and then

$$\begin{aligned} \text {var}(\beta ^{\text {exp}}_{s}) ={} & {} \sigma _{s}^2\Bigg [\frac{(N0_{s}+N1_{s})(1-\gamma ^{2})}{N0_{s}N1_{s}} +\nonumber \\{} & {} (1-\gamma ^{2})\sum \limits _{m=1}^{s-2} \frac{(N0_{s-m}+N1_{s-m})\gamma ^{2m}}{N0_{s-m}N1_{s-m}} + \frac{(N0_{1}+N1_{1})\gamma ^{2(s-1)}}{N0_{1}N1_{1}}\Bigg ]. \end{aligned}$$
(12)

For the case of a single early and a final outcome, \(\text {var}(\beta ^{\text {exp}}_{2})\) is given by dropping the middle term in the square brackets and setting \(s=2\), in which case, as we might expect, \(\text {var}(\beta _{2}) = \text {var}(\beta ^{\text {unif}}_{2})= \text {var}(\beta ^{\text {exp}}_{2})\), if \(\alpha =\gamma\).
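
Expression (11) can likewise be coded directly for general, not necessarily equally spaced, follow-up times; the sketch below (hypothetical helper var_beta_exp) assumes \(s\ge 3\).

```r
# Variance of the treatment effect at the primary outcome under the exponential
# (AR1-type) correlation model (expression (11)), for s >= 3.
# N0, N1: length-s vectors of per-arm numbers at occasions 1..s; d: follow-up times d_1..d_s
var_beta_exp <- function(N0, N1, d, sigma_s, gamma) {
  s <- length(d)
  h <- (N0 + N1) / (N0 * N1)
  mid <- 0
  for (m in 1:(s - 2)) {
    mid <- mid + h[s - m] * (1 - gamma^(2 * (d[s - m] - d[s - m - 1]))) *
                 gamma^(2 * (d[s] - d[s - m]))
  }
  sigma_s^2 * ( h[s] * (1 - gamma^(2 * (d[s] - d[s - 1]))) + mid +
                h[1] * gamma^(2 * (d[s] - d[1])) )
}

# Example: early outcomes at 3 and 6 months, primary at 12 months, coded as d = (1, 2, 4)
var_beta_exp(N0 = c(60, 45, 30), N1 = c(60, 45, 30), d = c(1, 2, 4),
             sigma_s = 12, gamma = 0.5)
```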

Recruitment models and \(\text {var}(\beta _{s})\)

Substituting the expressions for \(N0_{r}\) and \(N1_{r}\), from “Recruitment and follow-up models” section, into Eq. (11) gives the following expression for the variance of the treatment effect on the primary outcome at time t;

$$\begin{aligned} \text {var}(\beta ^{\text {exp}}_{s}(t)) ={} & {} \frac{\sigma _{s}^2}{k\phi (1-\phi )} \Bigg [\frac{\{1-\gamma ^{2(d_{s}-d_{s-1})}\}}{g_{s}(t,d_{s})} + \nonumber \\{} & {} \sum \limits _{m=1}^{s-2} \frac{\{1-\gamma ^{2(d_{s-m}-d_{s-m-1})}\}\gamma ^{2(d_{s}-d_{s-m})}}{g_{s-m}(t,d_{s-m})} + \frac{\gamma ^{2(d_{s}-d_{1})}}{g_{1}(t,d_{1})}\Bigg ]. \end{aligned}$$
(13)

Noting that the variance when there is no correlation (\(\gamma =0\)) between early and primary outcomes, \(\text {var}(\beta ^{\text {0}}_{s}(t))\), is given by expression (7), the effect of the correlation, due to the early outcomes, on reducing the variance of the primary outcome for the exponential model is given by

$$\begin{aligned} \text {V}^{\text {exp}}_{s}(t)={} & {} 1-\gamma ^{2(d_{s}-d_{s-1})} + \nonumber \\{} & {} \sum \limits _{m=1}^{s-2}n_{s(s-m)}(t) (1-\gamma ^{2(d_{s-m}-d_{s-m-1})})\gamma ^{2(d_{s}-d_{s-m})} + n_{s1}(t)\gamma ^{2(d_{s}-d_{1})}. \end{aligned}$$
(14)

Here \(n_{s1}(t)\le \text {V}^{\text {exp}}_{s}(t)\le 1\), with the lower constraint occurring when \(\gamma =1\) and information on the primary outcome (at \(d_{s}\)) comes entirely from the first early outcome (at \(d_{1}\)), and the upper constraint occurring when \(\gamma =0\) and there is no information from any of the early outcomes. More generally, for values of the correlation parameter between these limits, \(\text {V}^{\text {exp}}_{s}(t)\) varies as a function of both t and \(d_{r}\) \((r = 1,\dots ,s)\), and their relative spacings.
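
As for the uniform model, expression (14) needs only the ratios \(n_{sr}(t)\), the follow-up times and \(\gamma\); a short sketch (hypothetical helper V_exp) follows.

```r
# Relative effect V^exp_s(t) of the early outcomes under the exponential correlation
# model (expression (14)), for s >= 3.
# n_s: length-s vector of ratios n_{s r}(t) (so n_s[s] = 1); d: follow-up times d_1..d_s
V_exp <- function(n_s, d, gamma) {
  s <- length(d)
  mid <- 0
  for (m in 1:(s - 2)) {
    mid <- mid + n_s[s - m] * (1 - gamma^(2 * (d[s - m] - d[s - m - 1]))) *
                 gamma^(2 * (d[s] - d[s - m]))
  }
  1 - gamma^(2 * (d[s] - d[s - 1])) + mid + n_s[1] * gamma^(2 * (d[s] - d[1]))
}

# Example using the ratios implied by the var_beta_exp() illustration above (30/60, 30/45, 30/30)
V_exp(n_s = c(0.5, 2/3, 1), d = c(1, 2, 4), gamma = 0.5)
```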

Minimum and maximum of \(\text {V}^{\text {exp}}_{s}(t)\)

The maximum of \(\text {V}^{\text {exp}}_{s}(t)\), for known \(\gamma\), occurs when \(d_{s-1} \rightarrow d_{1}\) and, as a consequence, all the intermediate terms also move towards \(d_{1}\); i.e. \(d_{s-1} \rightarrow d_{s-2} \rightarrow \dots \rightarrow d_{2} \rightarrow d_{1}\), and is given by

$$\begin{aligned} \max _{d_{2},\ldots ,d_{s-1}} \Big ( \text {V}^{\text {exp}}_{s}(t) \Big )=n_{s1}(t) + (1-\gamma ^{2(d_{s}-d_{1})})(1-n_{s1}(t)). \end{aligned}$$
(15)

In contrast to the uniform correlation model, there is no simple expression for the minimum of \(\text {V}^{\text {exp}}_{s}(t)\). As we move intermediate outcomes towards the first outcome, \(d_{m}\rightarrow d_{1}\), then functions \(g_{m}(t,d_{m})\rightarrow g_{1}(t,d_{1})\) for all \(m = 2,\dots ,s-1\) and we have more data available which, all other things being equal, will minimise \(\text {V}^{\text {exp}}_{s}(t)\). However, when we increase the amount of data available by moving intermediate outcomes towards the earliest outcome, we also increase the distances \(d_{s}-d_{s-m}\) which, from expression (14), clearly act to increase \(\text {V}^{\text {exp}}_{s}(t)\) as the terms \(\gamma ^{2(d_{s}-d_{s-m})} \rightarrow 0\).

The settings of \(d_{2},\ldots ,d_{s-1}\) that minimise \(\text {V}^{\text {exp}}_{s}(t)\) will vary with the correlation parameter \(\gamma\) and s. In general, minima of \(\text {V}^{\text {exp}}_{s}(t)\) can be obtained numerically using linearly constrained optimization methods; e.g. using function constrOptim in R, with gradients set to be the derivatives \(\partial \text {V}^{\text {exp}}_{s} / \partial d_{m}\), which are relatively easy to calculate (see Appendix A3 for details) [23].
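
The sketch below illustrates one way such a constrained minimisation could be set up with constrOptim in R, for \(s=4\) under the fixed rate recruitment model, using the V_exp helper defined above. For simplicity it uses numerical (Nelder-Mead) optimisation rather than the analytic gradients of Appendix A3, and all object names are hypothetical.

```r
# Minimise V^exp_s(t) over the intermediate outcome times d_2, ..., d_{s-1}
# (d_1 and d_s fixed), with the fixed-rate recruitment model giving
# n_{s r}(t) = (t - d_s) / (t - d_r).
V_exp_obj <- function(d_mid, d1, ds, t, gamma) {
  d   <- c(d1, d_mid, ds)
  n_s <- (t - ds) / (t - d)                  # fixed-rate recruitment model ratios
  V_exp(n_s, d, gamma)                       # V_exp() as defined above
}

d1 <- 1; ds <- 2; t <- 3; gamma <- 0.5
eps <- 1e-3
# Linear constraints d1 < d_2 < d_3 < ds, written as ui %*% theta >= ci
ui <- rbind(c( 1, 0),
            c(-1, 1),
            c( 0,-1))
ci <- c(d1 + eps, eps, -(ds - eps))

opt <- constrOptim(theta = c(1.3, 1.7), f = V_exp_obj, grad = NULL,
                   ui = ui, ci = ci,
                   d1 = d1, ds = ds, t = t, gamma = gamma)
opt$par    # timings of the two intermediate early outcomes that minimise V^exp_s(t)
opt$value  # the corresponding minimum value
```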

Information

The information fraction when the correlation between the intermediate early outcomes and the final outcome is zero (\(\tau \text {0}\)), at time t, is given by the information at time t divided by the information at the study end, that is \(\tau \text {0}(t)=\text {var}(\beta ^{\text {0}}_{s}(t=d_{s}+T_{R}))/\text {var}(\beta ^{\text {0}}_{s}(t))\) [25]; noting that the study end is at \(t=d_{s}+T_{R}\). From expression (7), this can be written more simply as \(\tau \text {0}(t)=N_{s}(t,d_{s})/N\), the proportion of participants with final outcome data at time t, or alternatively as \(\tau \text {0}(t)=g_{s}(t,d_{s})/g_{s}(d_{s}+\text {T}_{\text {R}},d_{s})\).

From previously, the relative effect of the early outcomes, due to their correlation with the primary outcome, on reducing the variance of the primary outcome is \(\text {V}_{s}(t)=\text {var}(\beta _{s}(t))/\text {var}(\beta ^{\text {0}}_{s}(t))\). Therefore, the information fraction \(\tau\) at time t, for the full longitudinal model including the contribution of the early outcomes, is given by

$$\begin{aligned} \tau (t)=\tau \text {0}(t)/\text {V}_{s}(t). \end{aligned}$$
(16)

This allows us to make explicit the distinction between information that arises directly from observation of the final outcome (\(\tau \text {0}\)) and information that comes from the early outcomes (\(\text {V}_{s}\)) at time t. It also makes clear that \(1/\text {V}_{s}(t)\) is the proportional increase in the information fraction \(\tau\) at time t due to the early outcomes. For instance, if \(\text {V}_{s}(t)=0.8\), then we have 1.25 times as much information at time t as we would have had if the early outcomes were uncorrelated with the primary outcome.

Numerical examples

Uniform correlation model

To understand the properties of the uniform correlation model in the setting described, we calculate \(\text {V}^{\text {unif}}_{s}\) for typical values of \(s=2,3,4,5,6\) for the recruitment models of “Recruitment and follow-up models” section. Without loss of generality, we set \(d_{s}=2\) and \(d_{1}=1\) and arbitrarily set the recruitment period \(\text {T}_{\text {R}}\) to be a fixed multiple of \(d_{s}\) such that the information fractions \(\tau \text {0}(t_{w})\) when \(\alpha =0\) (i.e. the proportion of participants with final outcome data), at three equally spaced interim analyses (\(w=1,2,3\)), are \(\tau \text {0}(t_{1})=0.15\), \(\tau \text {0}(t_{2})=0.30\) and \(\tau \text {0}(t_{3})=0.45\), which we nominally refer to hereafter as early, mid and late. The actual timings of the interim analyses, relative to \(\text {T}_{\text {R}}\), will depend on the selected recruitment model; see Appendix A4 for details. We do this to allow simple comparisons between the recruitment models and values of s at each of the interim analyses.

Plots showing the difference in minimum and maximum values and the empirical distribution of \(\text {V}^{\text {unif}}_{s}\) for equal group sizes (\(\phi =0.5\)), with varying \(d_{2},\dots ,d_{s-1}\), for correlations in the range \(0\le \alpha < 1\) and the decreasing, fixed and increasing rate recruitment models are available in the supplementary files (see Additional file 1, Figs. S1 to S9). The pattern of differences between recruitment models and interim analyses is consistent across values of \(\alpha\), with values of \(\text {V}^{\text {unif}}_{s}\) decreasing monotonically with increasing \(\alpha\); the larger the correlation, the greater the information available from the early outcomes. Picking a typical value of \(\alpha =0.5\) for illustrative purposes, Table 1 shows the effects of s, recruitment model and timing of interim analysis on \(\text {V}^{\text {unif}}_{s}\), including minimum and maximum values of \(\text {V}^{\text {unif}}_{s}\) and also, as a means of comparison, the equal spacing model where \(d_{r}=1+(r-1)/(s-1)\) for all \(r=1,\ldots ,s\), which we denote by \(\tilde{\text {V}}^{\text {unif}}_{s}\).

Table 1 and expressions (8), (9) and (10) show that variation in \(\text {V}^{\text {unif}}_{s}\), for some fixed s, is due solely to variation in \(n_{s1}\), the ratio of the number of study participants with final outcome data to study participants with first early outcome data at time t. The value of \(n_{s1}\) depends on both the time of the interim analysis and the recruitment model. As \(n_{s1}\rightarrow 1\), when all participants have final outcome data, then also \(\text {V}^{\text {unif}}_{s}\rightarrow 1\) and there is no additional information available from the early outcomes. This is apparent in the trend for larger values of \(\text {V}^{\text {unif}}_{s}\), for all settings, as we move from early to mid to late interim analyses and \(n_{s1}\) increases in value. As we increase the number of early outcomes for fixed \(\alpha\), by increasing s, the term \(\text {D}=(1-\alpha )(1+(s-1)\alpha )/(1+(s-2)\alpha )\) in expression (8) decreases in value towards a limit of \((1-\alpha )\) as \(s\rightarrow \infty\). Values of D in Table 1, for \(\alpha =0.5\), decrease with increasing s rapidly initially (e.g. from \(s=2\) to \(s=3\)), but more slowly later (e.g. from \(s=5\) to \(s=6\)). This suggests that there is little to be gained in increasing the information fraction by increasing s much beyond the values we use here.

Table 1 Minimum and maximum values of \(\text {V}^{\text {unif}}_{s}\) and values for the equal spacing model \(\tilde{\text {V}}^{\text {unif}}_{s}\), for \(\alpha =0.5\) for early (\(t_{1}\)), mid (\(t_{2}\)) and late (\(t_{3}\)) interim analyses, where \(\tau \text {0}_{1}=0.15\), \(\tau \text {0}_{2}=0.30\) and \(\tau \text {0}_{3}=0.45\) respectively, for s from two to six for the fixed, increasing and decreasing rate recruitment models. Also shown are values of \(\text {D}=(1-\alpha )(1+(s-1)\alpha )/(1+(s-2)\alpha )\) and \(n_{s1}\) for each setting

Exponential correlation model

The correlation between the primary outcome at \(d_{s}\) and the first outcome at \(d_{1}\) is given by \(\gamma ^{d_{s}-d_{1}}\), and the parameter \(\gamma\) can be set so that the known or expected correlation between \(d_{1}\) and \(d_{s}\) is \(\rho _{1s}\). Thus, as a means to provide consistency between the uniform and exponential correlation models, we set \(\gamma =\alpha ^{1/(d_{s}-d_{1})}\), where \(\alpha\) is the reference correlation from the uniform model. As \(d_{s}=2\) and \(d_{1}=1\) for all s, then \(\gamma =\alpha\). Using these parametrisations for \(d_{r}\) and \(\gamma\), we note that replacing \(\gamma\) in expression (15) by \(\alpha\), with \(d_{s}=2\) and \(d_{1}=1\), results in \(\max (\text {V}^{\text {unif}}_{s})=\max (\text {V}^{\text {exp}}_{s})\).

Plots showing the difference in minimum and maximum values and the empirical distribution of \(\text {V}^{\text {exp}}_{s}\) for equal group sizes (\(\phi =0.5\)), with varying \(d_{2},\dots ,d_{s-1}\), for correlations in the range \(0\le \gamma < 1\) and the decreasing, fixed and increasing rate recruitment models are available in the supplementary files (see Additional file 1, Figs. S10 to S18). The pattern of differences between recruitment models and interim analyses is consistent across values of \(\gamma\), with values of \(\text {V}^{\text {exp}}_{s}\) decreasing monotonically with increasing \(\gamma\); the larger the correlation, the greater the information available from the early outcomes. Therefore, as for the uniform model, we select a typical value of \(\gamma =0.5\) to illustrate in Table 2 the effects of s, recruitment model and timing of interim analysis on \(\text {V}^{\text {exp}}_{s}\).

Table 2 shows minimum and maximum values of \(\text {V}^{\text {exp}}_{s}\) and also values for the equal spacing model. The values of \(\tilde{\text {V}}^{\text {exp}}_{s}\) in Table 2 are always smaller than the values of \(\tilde{\text {V}}^{\text {unif}}_{s}\) in Table 1. This is due to the settings we assume here to force \(\max (\text {V}^{\text {unif}}_{s})=\max (\text {V}^{\text {exp}}_{s})\), which make the correlations in the exponential model stronger. For instance, for the model where \(s=3\), the correlation between the early outcome at \(d_{2}\) and the final outcome at \(d_{3}\) is given by \(\alpha\) in the uniform model and \(\sqrt{\alpha }\) (i.e. \(\alpha ^{d_{3}-d_{2}}=\alpha ^{2-1.5}\)) in the exponential model. The observed decreases in \(\text {V}^{\text {exp}}_{s}\) as we move from \(s=2\) to \(s=6\), all else being equal, are of a similar magnitude to those observed for the uniform model, implying some gains in information with increasing numbers of early outcomes, but with these gains shrinking as s increases. Of particular note for the exponential model is that, for this selected correlation \(\gamma =0.5\), \(\tilde{\text {V}}^{\text {exp}}_{s}\) is very close to \(\min (\text {V}^{\text {exp}}_{s})\), across recruitment models and interim analyses. This suggests that for a moderate correlation, having equally spaced outcomes is very close to the best possible design choice. As correlations become larger and \(\gamma\) approaches one, then clearly the spacing of the outcomes is unimportant. Conversely, as the correlation becomes smaller, the spacing of the outcomes has a larger relative impact on \(\text {V}^{\text {exp}}_{s}\), for the recruitment models we explore here, suggesting that moving the early outcomes to be nearer to the final outcome provides more information.

Table 2 Minimum and maximum values of \(\text {V}^{\text {exp}}_{s}\) and values for the equal spacing model \(\tilde{\text {V}}^{\text {exp}}_{s}\), for \(\gamma =0.5\) for early (\(t_{1}\)), mid (\(t_{2}\)) and late (\(t_{3}\)) interim analyses, where \(\tau \text {0}_{1}=0.15\), \(\tau \text {0}_{2}=0.30\) and \(\tau \text {0}_{3}=0.45\) respectively, for s from two to six for the fixed, increasing and decreasing rate recruitment models. Also shown are values of \(n_{s1}\) for each setting

START:REACTS clinical trial

The START:REACTS study was a double-blind, group-sequential, randomised controlled trial for rotator cuff tendon (shoulder) tears, comparing arthroscopic debridement of the subacromial space with biceps tenotomy (control group) with the same procedure but including insertion of a sub-acromial spacer balloon (treatment group) [7, 27]. At the planning stage, individual participant data were simulated, for 10000 trials, and the models described in “Data model” section were fitted in order to estimate treatment effects, test statistics and information for each simulated trial; details of the simulations and how they were performed are reported by Parsons et al. [8]. The results of the simulations for START:REACTS indicated that for 90% power, a minimum of \(N=188\) participants were required, with the projected numbers of participants providing outcome data and trial information at the interim analyses shown in Table 3[a]. The required information at the study end was given by \(\text {I}=N/(4\sigma _{s}^2)=188/(4\times 144)=0.326\). Table 3[b] shows the observed numbers of participants providing data and the information and the estimated test statistic (\(Z=\beta _{s}/\text {sd}(\beta _{s})\)) at the first interim analysis, when the study was stopped for futility.

Table 3 START:REACTS study planning and observed sample data. Numbers of participants providing outcome data at 3, 6 and 12 months, information (\(\text {I}=1/\text {var}(\beta _{s})\)) and test statistic (\(Z=\beta _{s}/\text {sd}(\beta _{s})\)) boundaries in [a] the expected (planned) study design based on extensive simulations and [b] observed in the trial itself. Note that the trial was stopped at the first interim analysis for futility, as the test statistic fell below the lower boundary

The observed correlations between outcomes were larger than expected at the first interim analysis, when the trial was stopped; \(\alpha \approx 0.75\), rather than the \(\alpha =0.5\) assumed during planning. Although the observed numbers of participants providing outcome data were reasonably close to the expected numbers, this was more by chance than by design, as the number of recruiting sites used for the trial was actually 24, not the planned 15, and the pattern of site initiations was quite different from the plan.

The expectation in the original study design was that the interim analyses would occur when approximately 25% and 35% of the trial participants had final outcome data; that is, when \(\tau \text {0}(t_{1})=0.25\) and \(\tau \text {0}(t_{2})=0.35\). For the uniform correlation model, setting \(d_{r}\) with \(r=1,2,3\) to reflect the spacing of the outcomes at 3, 6 and 12 months (e.g. we can pick \(d_{1}=1\), \(d_{2}=2\) and \(d_{3}=4\)) and \(T_{R}=2d_{3}\) (i.e. the recruitment period is 24 months and the final outcome is at 12 months) allows us to calculate \(\text {V}^{\text {unif}}_{3}(t)\) at \(t=t_{1}\) and \(t=t_{2}\). From expression (8) in “Recruitment models and \(\text {var}(\beta _{s})\)” section, setting \(\alpha =0.5\), these are \(\text {V}^{\text {unif}}_{3}(t_{1})=0.808\) and \(\text {V}^{\text {unif}}_{3}(t_{2})=0.836\) for \(t_{1}=6\) (18 months) and \(t_{2}=6.8\) (20.4 months), the times when \(\tau \text {0}(t_{1})=0.25\) and \(\tau \text {0}(t_{2})=0.35\), for these formulations of \(d_{r}\). From expression (16), in “Information” section, the information fractions at these interim analyses are \(\tau (t_{1})=0.309\) and \(\tau (t_{2})=0.419\). The information fractions allow us to calculate boundaries and probabilities (power), using for instance functions gsBound and gsProbability from the R package gsDesign [26], for selected values of the overall trial sample size N. Table 4[a] shows the expected numbers of participants providing data and the information and test statistic boundaries at the first and second interim analyses for the fixed rate model for \(N=188\). Power for this fixed recruitment rate model is 90.6% for a treatment difference of 6 and \(\sigma _{3}=12\), the same as for the actual START:REACTS trial.
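
For readers who wish to reproduce calculations of this kind, the sketch below shows how boundaries and sample size inflation could be obtained with the gsDesign package for interim analyses at the information fractions computed above. It assumes gsDesign's default Hwang-Shih-DeCani error spending functions and a two-arm comparison with equal allocation; the spending functions actually used for START:REACTS may differ, so the output is illustrative rather than a reconstruction of Table 4.

```r
# A minimal sketch (assumed design choices, not the trial's exact specification):
# boundaries and sample sizes for a design with interim analyses at the
# information fractions tau(t1) = 0.309 and tau(t2) = 0.419.
library(gsDesign)

# Fixed-design sample size for a difference of 6 with sigma = 12, two-sided alpha = 0.05, 90% power
n_fix <- 4 * 12^2 * ((qnorm(0.975) + qnorm(0.90)) / 6)^2   # about 168 participants

des <- gsDesign(k         = 3,                   # two interim analyses plus the final analysis
                test.type = 4,                   # efficacy and non-binding futility boundaries
                alpha     = 0.025, beta = 0.10,
                timing    = c(0.309, 0.419, 1),  # information fractions at each analysis
                n.fix     = n_fix)               # default Hwang-Shih-DeCani spending assumed

des$upper$bound   # efficacy (upper) z-value boundaries at each analysis
des$lower$bound   # futility (lower) z-value boundaries at each analysis
ceiling(des$n.I)  # participant numbers at each analysis, inflated relative to n_fix
```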

We can repeat the above calculation quite simply for the decreasing rate recruitment model, recruiting over the same length of time with outcomes at the same three time-points (3, 6 and 12 months) and, for the same choice of covariance parameters, we get \(\text {V}^{\text {unif}}_{3}(t_{1})=0.786\) and \(\text {V}^{\text {unif}}_{3}(t_{2})=0.820\) for \(t_{1}=5.13\) (15.4 months) and \(t_{2}=5.64\) (16.9 months), with information fractions at these interim analyses of \(\tau (t_{1})=0.318\) and \(\tau (t_{2})=0.427\). Table 4[b] shows the expected numbers of participants providing data and the information and test statistic boundaries at the first and second interim analyses for the decreasing recruitment model with \(N=188\). Power for the decreasing recruitment rate model is 90.7%.

Table 4 Numbers of participants providing outcome data at 3, 6 and 12 months, information (\(\text {I}=1/\text {var}(\beta _{s})\)) and test statistic (\(Z=\beta _{s}/\text {sd}(\beta _{s})\)) boundaries for putative START:REACTS trial designs where the sample size is \(N=188\) for [a] the expected fixed rate and [b] a decreasing rate of recruitment

The final trial reported a strong effect in favour of the control group of -4\(\cdot\)2 (95% CI -8\(\cdot\)2 to -0\(\cdot\)26), rather than the expected effect in favour of the treatment group [7]. If an outcome of this magnitude in favour of the control group had been anticipated, then we would have had the probabilities \(p_{1}\) and \(p_{2}\) of stopping for futility at the interim analyses shown in Fig. 2a and b, as functions of \(\alpha\) and the first analysis time-point \(t_{1}\). At the lower extreme of the \(t_{1}\) values shown, \(t_{1}=4.5\) (13.5 months), there would have been very little outcome data for the fixed rate model (\(N_{3}=11.8\)) and a consequently modest value of \(p_{1}=0.446\), with \(p_{2}=0.525\) (\(N_{3}=65.8\)), assuming no correlation (\(\alpha =0\)). Increasing the correlation to \(\alpha =0.8\) would give a considerably larger first interim analysis futility stopping probability, \(p_{1}=0.581\). A later first interim analysis at 19.5 months (\(t_{1}=6.5\)) would provide much more data (\(N_{3}=58.8\)) and a larger probability of stopping, \(p_{1}=0.716\) with \(\alpha =0\) (\(p_{1}=0.820\) at \(\alpha =0.8\)).

Fig. 2

Contour plots showing futility stopping probabilities for a treatment difference of -4 (in favour of the control group) for the fixed rate recruitment and the uniform correlation model at (a) the first interim (\(p_{1}\)) and (b) the second interim analysis (\(p_{2}\)) as functions of the correlation \(\alpha\) and the timing of the first interim analysis (\(t_{1}\); in 3 month base units, so that for instance \(t_{1}=5\) equates to 15 months)
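The futility stopping probabilities of the kind plotted in Fig. 2 can be approximated with the same machinery: under a true difference of -4 in favour of the control group, the probability of first crossing the lower boundary at each look is returned by gsProbability. In the sketch below the boundaries are the illustrative ones from the previous sketch and the information values are placeholders; to reproduce the surfaces in Fig. 2 both would be recomputed from expressions (8) and (16) for each choice of \(t_{1}\) and \(\alpha\).

```r
library(gsDesign)

# Illustrative boundaries, as in the previous sketch
des <- gsDesign(k = 3, test.type = 4, alpha = 0.025, beta = 0.1,
                timing = c(0.309, 0.419), sfu = sfLDOF, sfl = sfLDOF)

# Placeholder information at the two interims and the final analysis; in practice
# these would be recomputed from the recruitment and correlation models for each
# choice of t1 and alpha
info <- c(0.309, 0.419, 1) * 0.326

# Probabilities of first crossing the lower (futility) boundary at each look
# when the true treatment difference is -4 (favouring control)
fut <- gsProbability(k = 3, theta = -4, n.I = info,
                     a = des$lower$bound, b = des$upper$bound)
fut$lower$prob[1:2]   # rough analogues of p1 and p2
```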

Discussion

The numerical examples of “Numerical examples” section show how patterns of information accrual vary with the correlation and recruitment models and provide some general guidance for planning a group sequential trial with early outcome data. For given numbers of participants providing early and final outcome data, the stronger the correlation between the early and final outcomes the bigger the gain in information (reduction in variance) at an interim time-point t. Expressions (8) and (14) show \(\text {V}_{s}(t)\), the relative effect of the early outcomes on the variance of the treatment effect for the primary outcome, to be monotone in the correlation parameters (\(0\le \alpha <1\) and \(0\le \gamma <1\)) under our model constraints, with larger correlations giving smaller variances and therefore greater information gains. Hence, if a number of suitable outcome measures are available then one should choose the one with the strongest correlations between successive time-points, assuming that the measures are equally responsive to change and have similar variances. Making recommendations about the number and, particularly, the timing of early outcomes is more complex. The models (Tables 1 and 2) show that there was generally little change in \(\text {V}_{s}\) for \(s>5\), suggesting that for the models tested there was little to be gained by having more than four early outcome time-points. For the exponential model, equally spaced early outcomes, for known \(\gamma\), proved to be a sensible, and often almost optimal, choice to give the greatest information gain (i.e. values of \(\tilde{\text {V}}^{\text {exp}}_{s}\) are close to \(\min (\text {V}^{\text {exp}}_{s})\)). For smaller values of \(\gamma\) there was some value, in the exponential model, in moving the early outcomes to be nearer to the final outcome. However, one might argue that such a design is not sensible for such small values of \(\gamma\), as little would be gained in practice by waiting to collect early outcomes at such a late time-point. For the time-invariant uniform correlation model, maximum information gains are made (trivially) when the early outcomes occur as early as possible, simply because more data will be available as more trial participants will have been followed-up. Clearly, it is not sensible to assume that the uniform model must apply for any spacing between outcome time-points. Thus, it is worth emphasising that the conclusions presented in Tables 1 and 2 are strongly dependent on the model assumptions, in the sense that we have assumed that changing the spacings of the early outcomes is possible for some fixed value of the correlation parameter. For instance, if there was known to be an equal correlation of magnitude \(\alpha\) between four equally-spaced outcome time-points, would this correlation model still be appropriate if the first two outcomes and the last two outcomes were moved to be almost coincident? It seems highly unlikely. This being the case, we would caution against using numerical experiments (such as those of “Uniform correlation model” and “Exponential correlation model” sections) solely as a means to make decisions on the spacing of outcome time-points. However, we do think that within reasonable limits it would be worthwhile to explore the likely information gains that alternative spacing models may offer, although such decisions may depend on the specific application area and other trial constraints, such as when participant follow-up would routinely be available.
One aspect of the numerical simulation studies that is clear is that the decreasing recruitment rate model is preferable, in terms of information gain, to either the increasing or fixed rate models. This is because a larger proportion of the study participants are available to provide early outcome data under the decreasing rate model. In many instances there is little one can do about the likely pattern of recruitment into a trial, but in settings where it is possible then clearly it could be advantageous to plan to recruit a large fraction of the target sample size into the trial as early as possible.

The START:REACTS trial provides a real example of a study using an information-based approach for a group sequential trial with early outcome data [7]. The study was originally planned on the basis of a large study that simulated individual participant data [8]. What the results of “START:REACTS clinical trial” section show is that the trial could have been planned, with little effort and without the need for time-consuming coding and simulation, based on the models described here. If that had been done, then, as Tables 3 and 4 show, the final design would have been almost exactly equivalent to that used in the original study, at considerably less effort and cost. Additionally, a range of other options (e.g. changing the number and timing of interim analyses) could have been explored without the considerable extra work and expense that would have been necessary if we were simulating outcomes for individual participants. An advantage of using the modelling approaches outlined here, rather than simulation, is that the models are fully specified and explicit, and therefore easily checked and replicated by others. In a simulation study, by contrast, much depends on the availability of the code, the clarity and correctness of the code, and the assumptions made by those developing the code, which are often not explicitly stated. For these, and many other, reasons we would strongly recommend that those wishing to use the group sequential designs described here use one of the selection of models in “Correlation models” section. Clearly, if one believes that the trial setting is of a completely different type, or does not approximate to one of the settings described here, then simulation may be the only option to determine appropriate trial sample sizes.

This study has some limitations. The choices of fixed, decreasing and increasing recruitment rate models (“Recruitment and follow-up models” section) were essentially arbitrary and used mainly as a means of showing a range of contrasting options. The fixed rate model might represent, for instance, a situation where participants were identified and recruited into a trial at a fixed rate across one or more recruitment centres. The increasing rate model might arise naturally if the number of centres was likely to increase during recruitment and each centre recruited at the same fixed rate, resulting in an overall recruitment rate that followed the profile seen in Fig. 1b. The decreasing rate model (Fig. 1c) provides the sort of recruitment profile that might be expected where there is an existing pool of participants who are available to enter a trial quickly, resulting in a rapid rise that is followed by a slowing rate of accrual once the pool is exhausted and we have to rely solely on new (incident) cases of a condition being identified. All of these scenarios are amongst the many we have observed in our own trials experience, although we fully accept that the settings we present may not cover every feasible option that trialists using these methods may wish to consider. However, we believe it would be reasonably easy to suggest and implement a range of other sensible models, provided they followed the broad structures and properties we outline in “Recruitment and follow-up models” section. Similarly, although it would obviously be possible to consider more complex correlation models than those described in “Correlation models” section, we chose the uniform and exponential correlation models mainly because they are very widely used for longitudinal outcomes, are often good approximations to observed data and lead to simple analytic expressions for \(\text {var}(\beta _{s})\), and as such allow us to illustrate some key ideas about the methodological approach described here [4, 22]. In practice, if we wanted to assume that outcomes followed an exponential correlation model based on limited data, then we could reason as follows. For instance, consider a study that is being planned with four outcomes at 3, 6, 12 and 18 months, with the last as the final (primary) outcome and the others as early outcomes. Data from another study suggest that the correlation between outcomes at 3 and 12 months is approximately \(\rho _{3m,12m}=0.5\), and therefore, by noting that \(\gamma =\rho _{rr^{\prime }}^{1/{|d_{r}-d_{r^{\prime }}|}}\), we can write \(\gamma =0.5^{1/{3}}\), on setting \(d_{1}=1\), \(d_{2}=2\), \(d_{3}=4\) and \(d_{4}=6\) to model the outcome spacings. In this model, \(\rho _{3m,6m}=0.79\), \(\rho _{3m,12m}=0.50\), \(\rho _{3m,18m}=0.31\), \(\rho _{6m,12m}=0.63\), \(\rho _{6m,18m}=0.40\) and \(\rho _{12m,18m}=0.63\). The straightforward expressions for \(\text {var}(\beta _{s})\) for the exponential and uniform correlation models are due to the fact that general expressions are available for \(\text {R}^{-1}_{s}\); see Appendix A1 and A2. Therefore, if similar general expressions were available for alternative correlation models, then in principle we believe it would be possible to provide analytic expressions for such models.
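As a check on the arithmetic in the example above, the implied correlation matrix for this exponential model can be reproduced in a few lines of R; the spacings d and the anchor correlation of 0.5 between the 3 and 12 month outcomes are the assumptions stated in the text.

```r
# Outcome spacings for the 3, 6, 12 and 18 month outcomes
d <- c(1, 2, 4, 6)

# Anchor the model on rho(3m, 12m) = 0.5, so gamma = 0.5^(1/|d1 - d3|)
gamma <- 0.5^(1 / abs(d[1] - d[3]))

# Exponential correlation model: rho_rr' = gamma^|d_r - d_r'|
R <- gamma^abs(outer(d, d, "-"))
dimnames(R) <- list(c("3m", "6m", "12m", "18m"), c("3m", "6m", "12m", "18m"))
round(R, 2)   # reproduces the correlations 0.79, 0.50, 0.31, 0.63, 0.40 and 0.63 quoted above
```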

Currently, those who wish to use the methodology reported here will need to implement the methods themselves. However, work is ongoing to develop a package of R functions [23] to implement the models in an application that will make it straightforward for the user to explore all the design options described here in a simple and interactive manner.

Conclusions

We have developed models for information accrual during recruitment into a group sequential clinical trial that uses early outcomes to augment the information available from the trial primary outcome measure, for use when making decisions about whether to stop prior to the completion of recruitment [4, 8]. The analytic expressions provided in “Correlation models” section are based on some simple, but we believe realistic and useful, models of recruitment into the study and the serial correlation between the early and final outcome measures collected during participant follow-up. Although in general the correlations may be unknown at the planning stage, we can speculate on the likely correlation structure, in an analogous way to what we might do for variances in a conventional trial. At some arbitrarily selected point during recruitment we will not (in general) know the exact number of participants recruited or the numbers of participants (\(N0_{r}\) and \(N1_{r}\)) providing data at each follow-up occasion r. However, we can speculate on the likely recruitment rates and therefore the likely numbers of participants providing follow-up data at any point during the trial. Given the above, we can predict how \(\text {var}(\beta _{s})\), and hence information, will vary during the study and use this to motivate the number and timing of the interim analyses. The models provide analytic expressions for information accrual that can be routinely used at the planning stage of a putative trial, based on some modest assumptions about the likely number of outcomes, when they might occur and the expected recruitment patterns. Numerical simulations show that these models behave sensibly (i.e. in the manner that we would expect) and allow us to explore a range of design options in a way that would have been much more difficult and time-consuming if we had to use the previously described method of simulating individual trial participant data.

Availability of data and materials

Not applicable, as no data or materials were used in this research.

Code availability

The code used to generate the numerical examples is available on request from the corresponding author (NP). Work is ongoing to develop a package of R functions to allow simple implementation of the models described.

References

  1. Hatfield I, Allison A, Flight L, Julious SA, Dimairo M. Adaptive designs undertaken in clinical research: a review of registered clinical trials. Trials. 2016;17(1):150.

  2. Jennison C, Turnbull BW. Group sequential methods with applications to clinical trials. Boca Raton: Chapman and Hall; 2000.


  3. Dimairo M, Boote J, Julious SA, Nicholl JP, Todd S. Missing steps in a staircase: a qualitative study of the perspectives of key stakeholders on the use of adaptive designs in confirmatory trials. Trials. 2015;16:430.

  4. Parsons N, Stallard N, Parsons H, Haque A, Underwood M, Mason J, et al. Group sequential designs in pragmatic trials: feasibility and assessment of utility using data from a number of recent surgical RCTs. BMC Med Res Methodol. 2022;22(1). https://doi.org/10.1186/s12874-022-01734-2.

  5. Roland M, Torgerson DJ. What are pragmatic trials? BMJ. 1998;316:285.

  6. Ford I, Norrie J. Pragmatic trials. N Engl J Med. 2016;375(5):454–63. https://doi.org/10.1056/NEJMra1510059.

  7. Metcalfe A, Parsons H, Parsons N, Brown J, Fox J, Gemperle Mannion E, et al. Subacromial balloon spacer for irreparable rotator cuff tears of the shoulder (START:REACTS): a group-sequential, double-blind, multicentre randomised controlled trial. Lancet. 2022;399:1954–63.

  8. Parsons N, Stallard N, Parsons H, Wells P, Underwood M, Mason J, et al. An adaptive two-arm clinical trial using early endpoints to inform decision making: design for a study of sub-acromial spacers for repair of rotator cuff tendon tears. Trials. 2019;20(1):694. https://doi.org/10.1186/s13063-019-3708-6.

  9. Barnard KD, Dent L, Cook A. A systematic review of models to predict recruitment to multicentre clinical trials. BMC Med Res Methodol. 2010;10:63. https://doi.org/10.1186/1471-2288-10-63.

  10. Galbraith S, Marschner IC. Interim analysis of continuous long-term endpoints in clinical trials with longitudinal outcomes. Stat Med. 2003;22(11):1787–805.

  11. Jennison C, Turnbull BW. Group-sequential analysis incorporating covariate information. J Am Stat Assoc. 1997;92:1330–41.


  12. Van Lancker K, Vandebosch A, Vansteelandt S. Improving interim decisions in randomized trials by exploiting information on short-term endpoints and prognostic baseline covariates. Pharm Stat. 2020;19:583–601.

  13. Viele K, McGlothlin A, Broglio K. Interpretation of clinical trials that stopped early. JAMA. 2016;315(15):1646–7.

  14. Liu A, Hall WJ. Unbiased estimation following a group sequential test. Biometrika. 1999;86(1):71–8.

  15. Todd S, Whitehead J, Facey KM. Point and interval estimation following a sequential clinical trial. Biometrika. 1996;83(2):453–61.

  16. Stallard N. A confirmatory seamless phase II/III clinical trial design incorporating short-term endpoint information. Stat Med. 2010;29:959–71. https://doi.org/10.1002/sim.3863.

  17. Engel B, Walstra P. Increasing precision or reducing expense in regression experiments by using information from a concomitant variable. Biometrics. 1991;47(1):13–20. https://doi.org/10.2307/2532491.

  18. Spiessens B, Lesaffre E, Verbeke G, Kim K, DeMets DL. An overview of group sequential methods in longitudinal clinical trials. Stat Methods Med Res. 2000;9(5):497–515.

  19. Spiessens B, Lesaffre E, Verbeke G. A comparison of group sequential methods for binary longitudinal data. Stat Med. 2003;22(4):501–15.

  20. Kim K, Tsiatis AA. Independent increments in group sequential tests: a review. Stat Oper Res Trans. 2020;44(2):223–64.

  21. Qian T, Rosenblum M, Qiu H. Improving power in group sequential, randomized trials by adjusting for prognostic baseline variables and short-term outcomes. Johns Hopkins University, Dept. of Biostatistics Working Papers, Working Paper 285. 2016.

  22. Diggle P, Heagerty P, Liang KY, Zeger SL. Analysis of longitudinal data. 2nd ed. Oxford Statistical Science Series. Oxford: Oxford University Press; 2002.

  23. R Core Team. R: A Language and Environment for Statistical Computing. Vienna; 2022. https://www.R-project.org/. Accessed 3 Nov 2023.

  24. Pinheiro JC, Bates DM. Mixed-Effects Models in S and S-PLUS. Statistics and Computing. New York: Springer; 2009.

  25. Lan KKG, Reboussin DM, DeMets DL. Information and information fractions for design and sequential monitoring of clinical trials. Commun Stat Theory Methods. 1994;23(2):403–20.

  26. Anderson K. gsDesign: Group Sequential Design. 2021. https://CRAN.R-project.org/package=gsDesign. Accessed 3 Nov 2023.

  27. Metcalfe A, Gemperle Mannion E, Parsons H, Brown J, Parsons N, Little J, et al. Protocol for a randomised controlled trial of Subacromial spacer for Tears Affecting Rotator cuff Tendons: a Randomised, Efficient, Adaptive Clinical Trial in Surgery (START:REACTS). BMJ Open. 2020;10(5):e036829. https://doi.org/10.1136/bmjopen-2020-036829.


Funding

NP, JB and NS were supported by a Medical Research Council (MRC) research grant (Grant number: MR/W021013/1; Statistical methods for interrupted clinical trials) during the conduct of this research. The work reported here was made possible by funding from the Efficacy and Mechanism Evaluation (EME) Programme, an MRC and NIHR partnership. The funders had no role in the design of the study, collection, analysis and interpretation of data, or in writing the manuscript.

Author information

Authors and Affiliations

Authors

Contributions

NP conceived and undertook the research with discussion and feedback from NS and JB. NP, JB and NS discussed and commented on the manuscript in draft, and NP finalised the manuscript and prepared the figures and tables for the submission.

Corresponding author

Correspondence to Nick R. Parsons.

Ethics declarations

Ethics approval and consent to participate

Not applicable, as no patient data were used in this research.

Consent for publication

Not applicable, as no patient data were used in this research.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Figure S1.

The feasible region of \(\mathrm V_{\mathrm s}^{\mathrm{unif}}\) for s = 2 (shaded areas), bounded above by the maximum and below by the minimum, for correlations in the range 0 ≤ α < 1 and equal group sizes (ϕ = 0.5) for the decreasing, fixed and increasing rate recruitment models, with lines for the setting where the time-points are given by dr = 1+(r−1)/(s−1) (r = 1, 2) for [a] early (τ01 = 0.15), [b] mid (τ02 = 0.30) and [c] late (τ03 = 0.45) interim analyses.
Figure S2. The feasible region of \(\mathrm V_{\mathrm s}^{\mathrm{unif}}\) for s = 3 (shaded areas), bounded above by the maximum and below by the minimum, for correlations in the range 0 ≤ α < 1 and equal group sizes (ϕ = 0.5) for the decreasing, fixed and increasing rate recruitment models, with lines for the setting where the time-points are given by dr = 1+(r−1)/(s−1) (r = 1, 2, 3; i.e. equal spacing) for [a] early (τ01 = 0.15), [b] mid (τ02 = 0.30) and [c] late (τ03 = 0.45) interim analyses.
Figure S3. The empirical distribution (nsim = 10000) of Δ\(\mathrm V_{\mathrm s}^{\mathrm{unif}}\), the difference from the median value of \(\mathrm V_{\mathrm s}^{\mathrm{unif}}\), with varying 1 < dr < 2 (r = 2) for s = 3, with shading showing quantiles 0-5%, 5-25%, 25-50%, 50-75%, 75-95% and 95-100%, for correlations in the range 0 ≤ α < 1 and equal group sizes (ϕ = 0.5) for [a] early (τ01 = 0.15), [b] mid (τ02 = 0.30) and [c] late (τ03 = 0.45) interim analyses, for the (i) increasing, (ii) fixed and (iii) decreasing rate recruitment models.
Figure S4. The feasible region of \(\mathrm V_{\mathrm s}^{\mathrm{unif}}\) for s = 4 (shaded areas), bounded above by the maximum and below by the minimum, for correlations in the range 0 ≤ α < 1 and equal group sizes (ϕ = 0.5) for the decreasing, fixed and increasing rate recruitment models, with lines for the setting where the time-points are given by dr = 1+(r−1)/(s−1) (r = 1, 2, 3, 4; i.e. equal spacing) for [a] early (τ01 = 0.15), [b] mid (τ02 = 0.30) and [c] late (τ03 = 0.45) interim analyses.
Figure S5. The empirical distribution (nsim = 10000) of Δ\(\mathrm V_{\mathrm s}^{\mathrm{unif}}\), the difference from the median value of \(\mathrm V_{\mathrm s}^{\mathrm{unif}}\), with varying 1 < dr < 2 (r = 2, 3) for s = 4, with shading showing quantiles 0-5%, 5-25%, 25-50%, 50-75%, 75-95% and 95-100%, for correlations in the range 0 ≤ α < 1 and equal group sizes (ϕ = 0.5) for [a] early (τ01 = 0.15), [b] mid (τ02 = 0.30) and [c] late (τ03 = 0.45) interim analyses, for the (i) increasing, (ii) fixed and (iii) decreasing rate recruitment models.
Figure S6. The feasible region of \(\mathrm V_{\mathrm s}^{\mathrm{unif}}\) for s = 5 (shaded areas), bounded above by the maximum and below by the minimum, for correlations in the range 0 ≤ α < 1 and equal group sizes (ϕ = 0.5) for the decreasing, fixed and increasing rate recruitment models, with lines for the setting where the time-points are given by dr = 1+(r−1)/(s−1) (r = 1, 2, 3, 4, 5; i.e. equal spacing) for [a] early (τ01 = 0.15), [b] mid (τ02 = 0.30) and [c] late (τ03 = 0.45) interim analyses.
Figure S7. The empirical distribution (nsim = 10000) of Δ\(\mathrm V_{\mathrm s}^{\mathrm{unif}}\), the difference from the median value of \(\mathrm V_{\mathrm s}^{\mathrm{unif}}\), with varying 1 < dr < 2 (r = 2, 3, 4) for s = 5, with shading showing quantiles 0-5%, 5-25%, 25-50%, 50-75%, 75-95% and 95-100%, for correlations in the range 0 ≤ α < 1 and equal group sizes (ϕ = 0.5) for [a] early (τ01 = 0.15), [b] mid (τ02 = 0.30) and [c] late (τ03 = 0.45) interim analyses, for the (i) increasing, (ii) fixed and (iii) decreasing rate recruitment models.
Figure S8. The feasible region of \(\mathrm V_{\mathrm s}^{\mathrm{unif}}\) for s = 6 (shaded areas), bounded above by the maximum and below by the minimum, for correlations in the range 0 ≤ α < 1 and equal group sizes (ϕ = 0.5) for the decreasing, fixed and increasing rate recruitment models, with lines for the setting where the time-points are given by dr = 1+(r−1)/(s−1) (r = 1, 2, 3, 4, 5, 6; i.e. equal spacing) for [a] early (τ01 = 0.15), [b] mid (τ02 = 0.30) and [c] late (τ03 = 0.45) interim analyses.
Figure S9. The empirical distribution (nsim = 10000) of Δ\(\mathrm V_{\mathrm s}^{\mathrm{unif}}\), the difference from the median value of \(\mathrm V_{\mathrm s}^{\mathrm{unif}}\), with varying 1 < dr < 2 (r = 2, 3, 4, 5) for s = 6, with shading showing quantiles 0-5%, 5-25%, 25-50%, 50-75%, 75-95% and 95-100%, for correlations in the range 0 ≤ α < 1 and equal group sizes (ϕ = 0.5) for [a] early (τ01 = 0.15), [b] mid (τ02 = 0.30) and [c] late (τ03 = 0.45) interim analyses, for the (i) increasing, (ii) fixed and (iii) decreasing rate recruitment models.
Figure S10. The feasible region of \(\mathrm V_{\mathrm s}^{\mathrm{exp}}\) for s = 2 (shaded areas), bounded above by the maximum and below by the minimum, for correlations in the range 0 ≤ γ < 1 and equal group sizes (ϕ = 0.5) for the decreasing, fixed and increasing rate recruitment models, with lines for the setting where the time-points are given by dr = 1+(r−1)/(s−1) (r = 1, 2) for [a] early (τ01 = 0.15), [b] mid (τ02 = 0.30) and [c] late (τ03 = 0.45) interim analyses.
Figure S11. The feasible region of \(\mathrm V_{\mathrm s}^{\mathrm{exp}}\) for s = 3 (shaded areas), bounded above by the maximum and below by the minimum, for correlations in the range 0 ≤ γ < 1 and equal group sizes (ϕ = 0.5) for the decreasing, fixed and increasing rate recruitment models, with lines for the setting where the time-points are given by dr = 1+(r−1)/(s−1) (r = 1, 2, 3; i.e. equal spacing) for [a] early (τ01 = 0.15), [b] mid (τ02 = 0.30) and [c] late (τ03 = 0.45) interim analyses.
Figure S12. The empirical distribution (nsim = 10000) of Δ\(\mathrm V_{\mathrm s}^{\mathrm{exp}}\), the difference from the median value of \(\mathrm V_{\mathrm s}^{\mathrm{exp}}\), with varying 1 < dr < 2 (r = 2) for s = 3, with shading showing quantiles 0-5%, 5-25%, 25-50%, 50-75%, 75-95% and 95-100%, for correlations in the range 0 ≤ γ < 1 and equal group sizes (ϕ = 0.5) for [a] early (τ01 = 0.15), [b] mid (τ02 = 0.30) and [c] late (τ03 = 0.45) interim analyses, for the (i) increasing, (ii) fixed and (iii) decreasing rate recruitment models.
Figure S13. The feasible region of \(\mathrm V_{\mathrm s}^{\mathrm{exp}}\) for s = 4 (shaded areas), bounded above by the maximum and below by the minimum, for correlations in the range 0 ≤ γ < 1 and equal group sizes (ϕ = 0.5) for the decreasing, fixed and increasing rate recruitment models, with lines for the setting where the time-points are given by dr = 1+(r−1)/(s−1) (r = 1, 2, 3, 4; i.e. equal spacing) for [a] early (τ01 = 0.15), [b] mid (τ02 = 0.30) and [c] late (τ03 = 0.45) interim analyses.
Figure S14. The empirical distribution (nsim = 10000) of Δ\(\mathrm V_{\mathrm s}^{\mathrm{exp}}\), the difference from the median value of \(\mathrm V_{\mathrm s}^{\mathrm{exp}}\), with varying 1 < dr < 2 (r = 2, 3) for s = 4, with shading showing quantiles 0-5%, 5-25%, 25-50%, 50-75%, 75-95% and 95-100%, for correlations in the range 0 ≤ γ < 1 and equal group sizes (ϕ = 0.5) for [a] early (τ01 = 0.15), [b] mid (τ02 = 0.30) and [c] late (τ03 = 0.45) interim analyses, for the (i) increasing, (ii) fixed and (iii) decreasing rate recruitment models.
Figure S15. The feasible region of \(\mathrm V_{\mathrm s}^{\mathrm{exp}}\) for s = 5 (shaded areas), bounded above by the maximum and below by the minimum, for correlations in the range 0 ≤ γ < 1 and equal group sizes (ϕ = 0.5) for the decreasing, fixed and increasing rate recruitment models, with lines for the setting where the time-points are given by dr = 1+(r−1)/(s−1) (r = 1, 2, 3, 4, 5; i.e. equal spacing) for [a] early (τ01 = 0.15), [b] mid (τ02 = 0.30) and [c] late (τ03 = 0.45) interim analyses.
Figure S16. The empirical distribution (nsim = 10000) of Δ\(\mathrm V_{\mathrm s}^{\mathrm{exp}}\), the difference from the median value of \(\mathrm V_{\mathrm s}^{\mathrm{exp}}\), with varying 1 < dr < 2 (r = 2, 3, 4) for s = 5, with shading showing quantiles 0-5%, 5-25%, 25-50%, 50-75%, 75-95% and 95-100%, for correlations in the range 0 ≤ γ < 1 and equal group sizes (ϕ = 0.5) for [a] early (τ01 = 0.15), [b] mid (τ02 = 0.30) and [c] late (τ03 = 0.45) interim analyses, for the (i) increasing, (ii) fixed and (iii) decreasing rate recruitment models.
Figure S17. The feasible region of \(\mathrm V_{\mathrm s}^{\mathrm{exp}}\) for s = 6 (shaded areas), bounded above by the maximum and below by the minimum, for correlations in the range 0 ≤ γ < 1 and equal group sizes (ϕ = 0.5) for the decreasing, fixed and increasing rate recruitment models, with lines for the setting where the time-points are given by dr = 1+(r−1)/(s−1) (r = 1, 2, 3, 4, 5, 6; i.e. equal spacing) for [a] early (τ01 = 0.15), [b] mid (τ02 = 0.30) and [c] late (τ03 = 0.45) interim analyses.
Figure S18. The empirical distribution (nsim = 10000) of Δ\(\mathrm V_{\mathrm s}^{\mathrm{exp}}\), the difference from the median value of \(\mathrm V_{\mathrm s}^{\mathrm{exp}}\), with varying 1 < dr < 2 (r = 2, 3, 4, 5) for s = 6, with shading showing quantiles 0-5%, 5-25%, 25-50%, 50-75%, 75-95% and 95-100%, for correlations in the range 0 ≤ γ < 1 and equal group sizes (ϕ = 0.5) for [a] early (τ01 = 0.15), [b] mid (τ02 = 0.30) and [c] late (τ03 = 0.45) interim analyses, for the (i) increasing, (ii) fixed and (iii) decreasing rate recruitment models.

Additional file 1: Appendix A.

A.1 Uniform correlation model. A.2 Exponential correlation model. A.3 Partial derivatives of \(\mathrm V_{\mathrm s}^{\mathrm{exp}}\). A.4 Recruitment and follow-up models. Table A1 Times (t1, t2 and t3) for early τ0(t1) = 0.15, mid τ0(t2) = 0.30 and late τ0(t3) = 0.45 interim analyses, for the increasing, fixed and decreasing rate recruitment models.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Parsons, N.R., Basu, J. & Stallard, N. Group sequential designs for pragmatic clinical trials with early outcomes: methods and guidance for planning and implementation. BMC Med Res Methodol 24, 42 (2024). https://doi.org/10.1186/s12874-024-02174-w


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s12874-024-02174-w

Keywords