Insulin Resistance in Diabetes: The Promise of Using Induced Pluripotent Stem Cell Technology

…information about the disease, these models cannot recapitulate all the aspects of this complex disease as seen in each individual. Patient-specific iPSCs can overcome the hurdles faced with the classical mouse models for studying IR. iPSC technology can generate cells genetically identical to IR individuals, which can help in distinguishing between genetic and acquired defects in insulin sensitivity. Combining genome-editing and iPSC technologies may provide important information about the inherited factors underlying the development of different forms of IR. Further studies are required to fill the gaps in understanding the pathogenesis of IR and diabetes.

4/10 relevant

Preprints.org

Quantification of Differential Information using Matrix Pencil

…information between two classes rather than modelling individual classes separately. The classes are viewed in a common frame of reference in which one class has constant variance, unlike the other class, which has unequal variance along its basis vectors, capturing the differential information of one class over the other. This, when mathematically formulated, leads to the solution of a Matrix Pencil equation. The theory of binary classification is extended to a multi-class scenario. This is borne out by illustrative examples on the classification of the MNIST database.

7/10 relevant

arXiv
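The "common frame of reference" construction in the abstract above amounts to a generalized eigenvalue problem on the two class covariances; a minimal sketch on synthetic data (an illustration of the idea, not the authors' implementation):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Synthetic two-class data: class A roughly isotropic, class B anisotropic
X_a = rng.normal(size=(500, 3))
X_b = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.3])

S_a = np.cov(X_a, rowvar=False)
S_b = np.cov(X_b, rowvar=False)

# Solving the pencil S_b v = lambda S_a v puts both classes in a common
# frame where class A has unit variance along every basis vector, while
# class B's variances (the eigenvalues) carry the differential information.
evals, evecs = eigh(S_b, S_a)
print(evals)  # ascending; far from 1 where class B differs most from A
```

Eigenvalues far above or below 1 flag the directions in which the second class carries information the first does not.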

Areas of areas generate the shuffle algebra

…information added by subsequent levels is equivalent to iterated areas. On the way to this main result, we characterize (homogeneous) generating sets of the shuffle algebra. We finally discuss compatibility of the area operator with discrete integration and stochastic integration, and conclude with some results on the linear span of the areas of areas.

4/10 relevant

arXiv

The Holographic Nature of Null Infinity

…information about massless excitations can be obtained from an infinitesimal neighbourhood of the past boundary of future null infinity and does not require observations over all of future null infinity. Moreover, all information about the state that can be obtained through observations near a cut of future null infinity can also be obtained from observations near any earlier cut, although the converse is not true. We provide independent arguments for these two assertions. Similar statements hold for past null infinity. These statements have immediate implications for the information paradox, since they suggest that the fine-grained von Neumann entropy of the state defined on a segment $(-\infty,u)$ of future null infinity is independent of $u$. This is very different from the oft-discussed Page curve that this entropy is sometimes expected to obey. We contrast our results with recent discussions of the Page curve in the context of black hole evaporation, and also discuss the relation of our results to other proposals for holography in flat space.

5/10 relevant

arXiv

Value of Information Analysis via Active Learning and Knowledge Sharing in Error-Controlled Adaptive Kriging

…information to better characterize the involved uncertainties is among the decision alternatives. Value of information (VoI) analysis is a mathematical decision framework that quantifies the expected potential benefits of new data and assists with the optimal allocation of resources for information collection. However, a primary challenge facing VoI analysis is the very high computational cost of the underlying Bayesian inference, especially for equality-type information. This paper proposes the first surrogate-based framework for VoI analysis. Instead of modeling the limit-state functions describing the events of interest for decision making, which is commonly pursued in surrogate-model-based reliability methods, the proposed framework models system responses. This approach affords sharing equality-type information from observations among surrogate models to update the likelihoods of multiple events of interest. Moreover, two knowledge-sharing schemes, called model sharing and training-point sharing, are proposed to most effectively take advantage of the knowledge offered by costly model evaluations. Both schemes are integrated with an error-rate-based adaptive training approach to efficiently generate accurate Kriging surrogate models. The proposed VoI analysis framework is applied to an optimal decision-making problem involving load testing of a truss bridge. While state-of-the-art methods based on importance sampling and adaptive Kriging Monte Carlo simulation are unable to solve this problem, the proposed method is shown to offer accurate and robust estimates of VoI with a limited number of model evaluations. The proposed method therefore facilitates the application of VoI to complex decision problems.

8/10 relevant

arXiv
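The core VoI idea the abstract builds on can be shown in a toy preposterior calculation; all numbers and the repair/do-nothing decision below are hypothetical, and this is plain Monte Carlo rather than the paper's Kriging-based framework:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy structural decision with hypothetical numbers (not from the paper):
# repair a component at cost 2.0, or do nothing and lose 10.0 if the
# uncertain capacity theta falls below the applied load.
theta = rng.normal(loc=5.0, scale=1.0, size=100_000)  # prior samples
load = 4.0

def loss(action, th):
    if action == 0:                  # do nothing
        return np.where(th < load, 10.0, 0.0)
    return np.full_like(th, 2.0)     # repair

# Prior-optimal action: minimize expected loss under the prior
prior_optimal = min(loss(a, theta).mean() for a in (0, 1))

# Perfect information: choose the best action for each realization
perfect = np.minimum(loss(0, theta), loss(1, theta)).mean()

evpi = prior_optimal - perfect  # expected value of perfect information
print(evpi)
```

The expected value of (perfect) information is the gap between deciding once under the prior and deciding after the uncertainty is resolved; the paper's contribution is making the analogous computation tractable for expensive models and partial, equality-type data.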

Inference of Polygenic Factors Associated with Breast Cancer Gene Interaction Networks from Discrete Data Utilizing Poisson Multivariate Mutual Information

…information theoretic, but a major obstacle has been the discrete nature of such data, which is well described as a multivariate Poisson process. In fact, despite mutual information being a generally well-regarded approach for developing networks of association in the data science of complex systems across many disciplines, until now a good method to accurately and efficiently compute entropies from such processes has been lacking. Nonparametric methods such as the popular k-nearest neighbors (KNN) methods are slow to converge and thus require unrealistic amounts of data. We use the causation entropy (CSE) principle, together with the associated greedy search algorithm optimal CSE (oCSE), as a network inference method to deduce the actual structure, with the multivariate Poisson estimator developed here as the core computational engine. We show that the Poisson version of oCSE outperforms both the Kraskov-Stögbauer-Grassberger (KSG) oCSE method (a KNN method for estimating the entropy) and the Gaussian oCSE method on synthetic data. We present results for a breast cancer gene expression data set.

7/10 relevant

arXiv
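The "core computational engine" here is entropy estimation for Poisson counts; in the univariate case the entropy can simply be summed from the pmf, which is the kind of quantity the paper's multivariate estimator generalizes (a sketch of the idea, not the paper's method):

```python
import math

def poisson_entropy(lam, tol=1e-12):
    """Entropy (in nats) of a Poisson(lam) variable by direct summation
    of -p_k * log p_k, truncated once the tail mass falls below tol."""
    h, k, p_cum = 0.0, 0, 0.0
    while p_cum < 1.0 - tol:
        log_p = -lam + k * math.log(lam) - math.lgamma(k + 1)
        p = math.exp(log_p)
        h -= p * log_p
        p_cum += p
        k += 1
    return h

# For moderate lam this approaches the Gaussian value 0.5*log(2*pi*e*lam),
# which nonparametric KNN estimators recover only slowly from samples.
h4 = poisson_entropy(4.0)
print(h4)
```

Mutual information then follows as I(X;Y) = H(X) + H(Y) - H(X,Y); the hard part, which the paper addresses, is the joint entropy of correlated multivariate counts.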


Integrating smFRET, SAXS and NMR data to infer structural ensembles of an intrinsically-disordered protein

…information provided by SAXS and chemical shifts, respectively, with information on specific inter-residue distances from PRE and smFRET. Our integrative modelling approach and quantitative polymer-physics-based characterization of the experimentally restrained ensembles could be used to implement a rigorous taxonomy for the description and classification of IDPs as heteropolymers.

4/10 relevant

bioRxiv

A Bayesian phase 2 model based adaptive design to optimise antivenom dosing: application to a dose-finding trial for a novel Russell's viper antivenom in Myanmar

…information from clinical studies to infer the relationship between dose and efficacy or dose and toxicity. Antivenom dose-finding studies usually recruit too few patients (e.g. fewer than 20) relative to clinically significant event rates (e.g. 5%). Model-based adaptive dose-finding studies make efficient use of accrued patient data by using information across dosing levels, and converge rapidly to the contextually defined 'optimal dose'. Adequate sample sizes for adaptive dose-finding trials can be determined by simulation studies. We propose a model-based, Bayesian phase 2 type, adaptive clinical trial design for the characterisation of optimal initial antivenom doses in contexts where both efficacy and toxicity are measured as binary endpoints. This design is illustrated in the context of dose-finding for Daboia siamensis (Eastern Russell's viper) envenoming in Myanmar. The design formalises the optimal initial dose of antivenom as the dose closest to that giving a pre-specified desired efficacy, but resulting in less than a pre-specified maximum toxicity. For Russell's viper, efficacy is defined as the restoration of blood coagulability within six hours, and toxicity is defined as anaphylaxis. Comprehensive simulation studies compared the expected behaviour of the model-based design to a simpler rule-based design (a modified '3+3' design). The model-based design can identify the optimal dose after fewer patients than the rule-based design. Open-source code for the simulations can be used to calculate sample sizes under a priori beliefs about efficacy and toxicity. Antivenom dose-finding trials would benefit from using standard model-based adaptive designs. Dose-finding trials where rare events (e.g. 5% occurrence) are of clinical importance necessitate larger sample sizes than current practice. We will apply the model-based design to determine a safe and efficacious dose for a novel lyophilised antivenom to treat Daboia siamensis envenoming in Myanmar.

4/10 relevant

bioRxiv
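The adaptive logic described above (update a dose-efficacy model after each patient, then assign the dose closest to the target) can be sketched with a grid posterior over a logistic curve; the dose grid, parameter ranges, and true curve below are hypothetical, and the toxicity constraint of the actual design is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical dose grid and logistic dose-efficacy model; the numbers
# and the model form are illustrative, not the trial's actual design.
doses = np.array([40.0, 80.0, 120.0, 160.0, 200.0])
target_eff = 0.9

# Grid posterior over logistic parameters (a, b), flat prior
a_grid = np.linspace(-6.0, 2.0, 81)
b_grid = np.linspace(0.005, 0.08, 76)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
log_post = np.zeros_like(A)

def p_eff(dose):
    return 1.0 / (1.0 + np.exp(-(A + B * dose)))

def update(dose, efficacious):
    """Fold one patient's binary efficacy outcome into the posterior."""
    global log_post
    p = p_eff(dose)
    log_post = log_post + np.log(p if efficacious else 1.0 - p)

def next_dose():
    """Adaptive rule: dose whose posterior-mean efficacy is closest
    to the target."""
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    mean_eff = np.array([(p_eff(d) * post).sum() for d in doses])
    return doses[int(np.argmin(np.abs(mean_eff - target_eff)))]

# Simulate 30 patients under an assumed true efficacy curve
for _ in range(30):
    d = next_dose()
    update(d, rng.random() < 1.0 / (1.0 + np.exp(-(-4.0 + 0.03 * d))))

d_final = next_dose()
print(d_final)
```

Because every patient's outcome updates the same curve, information is shared across dosing levels, which is what lets the model-based design converge faster than a rule-based '3+3' scheme.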

Age of Information in a Decentralized Network of Parallel Queues with Routing and Packets Losses

…Information in a network of multiple sources and parallel servers/queues with buffering capabilities, preemption in service, and losses in served packets. The servers do not communicate with each other, and the packets are dispatched through the servers according to a predefined probabilistic routing. Using the Stochastic Hybrid System (SHS) method, we derive the average Age of Information of a system of two parallel servers (with and without buffering capabilities) and compare the result with that of a single queue. We show that known results on packet delay in queueing theory do not hold for the Age of Information. Unfortunately, the complexity of computing the Age of Information with the SHS method grows rapidly with the number of queues. We therefore provide an upper bound on the average Age of Information in a parallel-server system of an arbitrary number of M/M/1/(N+1) queues, and show its tightness in various regimes. This upper bound yields a tight approximation of the Age of Information with very low complexity.

8/10 relevant

arXiv
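The claim that delay results do not carry over to age can be seen already in the single M/M/1 FCFS queue, whose average age has the known closed form of Kaul, Yates and Gruteser (2012); a quick numerical comparison:

```python
def aoi_mm1(lam, mu):
    """Average Age of Information of an M/M/1 FCFS queue
    (Kaul, Yates and Gruteser, 2012)."""
    rho = lam / mu
    return (1.0 / mu) * (1.0 + 1.0 / rho + rho ** 2 / (1.0 - rho))

def delay_mm1(lam, mu):
    """Mean system delay of the same queue, for contrast."""
    return 1.0 / (mu - lam)

# Delay grows monotonically with load, but age is minimized at an
# interior load (around rho = 0.53): delay intuition does not transfer.
for rho in (0.1, 0.5, 0.9):
    print(rho, aoi_mm1(rho, 1.0), delay_mm1(rho, 1.0))
```

At low load the queue is fresh but updates are rare; at high load updates are frequent but stale, so the age-optimal operating point sits in between, unlike the delay-optimal one.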