Automation of Active **Space** Selection for Multireference Methods via Machine Learning on Chemical Bond Dissociation

In the complete active **space** self-consistent field (CASSCF) method one performs a full configuration interaction calculation in an active **space** consisting of active electrons and active orbitals. However, CASSCF and its variants require the selection of these active **spaces**. This choice is not black-box; it requires significant experience and testing by the user, and thus active **space** methods are not considered particularly user-friendly and are employed only by a minority of quantum chemists. Our goal is to popularize these methods by making it easier to make good active **space** choices. We present a machine learning protocol that performs an automated selection of active **spaces** for chemical bond dissociation calculations of main group diatomic molecules. The protocol shows high prediction performance for a given target system as long as a properly correlated system is chosen for training. Good active **spaces** are correctly predicted with a considerably better success rate than a random guess (larger than 80% precision for most systems studied). Our automated machine learning protocol shows that a “black-box” mode is possible for facilitating and accelerating large-scale calculations on multireference systems where single-reference methods such as KS-DFT cannot be applied.

7/10 relevant

chemRxiv

Accurate Multi-Objective Design in a **Space** of Millions of Transition Metal Complexes with Neural-Network-Driven Efficient Global Optimization

…**space** that contains the optimal trade-off between multiple design criteria. We demonstrate this approach for the simultaneous optimization of redox potential and solubility in candidate M(II)/M(III) redox couples for redox flow batteries from a **space** of 2.8M transition metal complexes designed for stability in practical RFB applications. We employ latent-distance-based UQ with a multi-task ANN to enable model generalization that surpasses that of a GP. With this approach, ANN prediction and EI scoring of the full 2.8M complex **space** is achieved in minutes. Starting from ca. 100 representative points, EGO improves both properties by 3-4 standard deviations in only five generations. Analysis of lookahead errors confirms rapid ANN model improvement during the EGO process, achieving suitable accuracy for predictive design in the **space** of transition metal complexes. The ANN-driven EI approach achieves at least 500-fold acceleration over random search, identifying a Pareto-optimal design in around five weeks instead of fifty years.
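The EI (expected improvement) scoring mentioned in this abstract has a standard closed form in Bayesian optimization, sketched below. This is a generic illustration, not the paper's implementation: the candidate names and the (mean, std) values are made-up stand-ins for the ANN surrogate's predictions.

```python
import math

def expected_improvement(mu, sigma, best):
    """Closed-form EI for maximization, given a surrogate's predictive
    mean `mu` and standard deviation `sigma`, and the best value
    observed so far. Assumes a Gaussian predictive distribution."""
    if sigma <= 0.0:
        # No predictive uncertainty: improvement is deterministic.
        return max(mu - best, 0.0)
    z = (mu - best) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (mu - best) * cdf + sigma * pdf

# Rank hypothetical candidates by EI; in the paper's setting the ANN
# surrogate would supply (mu, sigma) for each of the 2.8M complexes.
candidates = {"A": (1.2, 0.3), "B": (0.9, 0.8), "C": (1.4, 0.05)}
best_observed = 1.0
scores = {name: expected_improvement(m, s, best_observed)
          for name, (m, s) in candidates.items()}
```

Note that candidate "B" scores above zero despite a mean below the incumbent: EI rewards high uncertainty, which is what drives the exploration in efficient global optimization.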

8/10 relevant

chemRxiv

Combining Automated Microfluidic Experimentation with Machine Learning for Efficient Polymerization Design

…**spaces**. In this work, we aim to present a new methodology for studying such complex reactions using machine-learning-assisted automated microchemical reactors. A custom-designed, rapidly prototyped microreactor is used in conjunction with in situ infrared thermography and efficient, high-speed experimentation to map the reaction **space** for a zirconocene polymerization catalyst. Chemical waste was decreased by two orders of magnitude and catalytic discovery was performed in one hour. Here we show that efficient microfluidic technology can be coupled with machine learning algorithms to obtain high-fidelity datasets on a complex chemical reaction.

4/10 relevant

chemRxiv

Solving connectivity problems parameterized by treedepth in single-exponential time and polynomial **space**

…**space**. Nevertheless, this has remained open for connectivity problems. In the present work, we close this knowledge gap by applying the Cut\&Count technique to graphs of small treedepth. While the general idea is unchanged, we have to design novel procedures for counting consistently cut solution candidates using only polynomial **space**. Concretely, we obtain time $\mathcal{O}^*(3^d)$ and polynomial **space** for Connected Vertex Cover, Feedback Vertex Set, and Steiner Tree on graphs of treedepth $d$. Similarly, we obtain time $\mathcal{O}^*(4^d)$ and polynomial **space** for Connected Dominating Set and Connected Odd Cycle Transversal.

9/10 relevant

arXiv

Simplification of Indoor **Space** Footprints

…**spaces**. The method simplifies polygons in an iterative manner. The simplification is segment-wise and takes account of intrusion, extrusion, offset, and corner portions of 2D structures, preserving their dominant frame.
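The abstract's segment-wise rules (intrusion, extrusion, offset, corner) are not spelled out here, but the iterative flavor of footprint simplification can be illustrated with the classic Visvalingam-Whyatt scheme, which repeatedly drops the vertex whose removal changes the shape least. This is a generic sketch of iterative polygon simplification, not the paper's algorithm.

```python
def triangle_area(a, b, c):
    """Area of the triangle spanned by three 2D points."""
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def simplify_polygon(points, min_area):
    """Iteratively remove the vertex contributing the smallest triangle
    area (Visvalingam-Whyatt) until every remaining vertex contributes
    at least `min_area`. `points` is a closed ring given without the
    repeated last point; at least a triangle is always kept."""
    pts = list(points)
    while len(pts) > 3:
        n = len(pts)
        areas = [triangle_area(pts[i - 1], pts[i], pts[(i + 1) % n])
                 for i in range(n)]
        i = min(range(n), key=areas.__getitem__)
        if areas[i] >= min_area:
            break  # every vertex is significant enough; stop
        del pts[i]
    return pts

# A square footprint with one tiny notch vertex; the notch goes first.
ring = [(0, 0), (4, 0), (4, 4), (2, 4.05), (0, 4)]
simplified = simplify_polygon(ring, min_area=0.5)
```

The dominant rectangular frame survives because its corner vertices each contribute a large area, while the near-collinear notch vertex contributes almost none.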

6/10 relevant

arXiv

Memory is one representation not many: Evidence against wormholes in memory

…**space**. Alternatively, if search is constrained to one static landscape, then moving between distant locations necessarily means traveling through the intermediate **space**. To distinguish between these two scenarios, we had people name all the countries they could think of (a verbal fluency task) in three different conditions. When people were free to retrieve countries in whatever fashion they liked, they relied on at least three dimensions: predominantly on spatial distances on the map, and to a lesser extent on phonetic distance and country frequency in the media. However, when people were asked to retrieve countries either by the letters of the alphabet or along country borders, people’s retrieval sequences deviated from the “free” default, consistent with the instructed strategy. This shift in retrieval patterns affected neither the number of retrieved countries nor their distribution, but it did lead to increases in retrieval times. These increases in retrieval time scaled with the extent to which the retrieval strategy disagreed with the default, supporting the notion of a static rather than a dynamic landscape. We conclude that when people are searching for countries, irrespective of what guides their search, they are largely searching the same underlying memory landscape.

6/10 relevant

PsyArXiv

Functions of bounded mean oscillation and quasiconformal mappings on **spaces** of homogeneous type

…**space** BMO and the theory of quasiconformal mappings on **spaces** of homogeneous type $\widetilde{X} :=(X,\rho,\mu)$. The connection is that the logarithm of the generalised Jacobian of an $\eta$-quasisymmetric mapping $f: \widetilde{X} \rightarrow \widetilde{X}$ is always in $\text{BMO}(\widetilde{X})$. In the course of proving this result, we first show that on $\widetilde{X}$, the logarithm of a reverse-H\"{o}lder weight $w$ is in $\text{BMO}(\widetilde{X})$, and that the above-mentioned connection holds on a metric measure **space** $\widehat{X} :=(X,d,\mu)$. Furthermore, we construct a large class of **spaces** $(X,\rho,\mu)$ to which our results apply. Among the key ingredients of the proofs are suitable generalisations to $(X,\rho,\mu)$, from the Euclidean or metric measure **space** settings, of the Calder\'{o}n--Zygmund decomposition, the Vitali Covering Theorem and the Radon--Nikodym Theorem, and of the result of Heinonen and Koskela which shows that the volume derivative is a reverse-H\"{o}lder weight.
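For readers unfamiliar with the notation, the BMO condition invoked above has the standard form below, written on the metric measure space $\widehat{X} = (X,d,\mu)$ with the supremum taken over balls $B$. This is the textbook definition, not a formula quoted from the paper.

```latex
\[
  \|f\|_{\mathrm{BMO}(\widehat{X})}
  \;=\; \sup_{B \subseteq X} \frac{1}{\mu(B)} \int_{B} \lvert f - f_{B} \rvert \, d\mu ,
  \qquad
  f_{B} \;=\; \frac{1}{\mu(B)} \int_{B} f \, d\mu ,
\]
```

and $f \in \mathrm{BMO}(\widehat{X})$ precisely when this supremum is finite. A reverse-Hölder weight $w$ has $\log w$ of bounded mean oscillation in exactly this sense.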

9/10 relevant

arXiv

SimEx: Express Prediction of Inter-dataset Similarity by a Fleet of Autoencoders

…**space** from a model performing a certain task, or fine-tuning a pretrained model with different datasets and evaluating the performance changes therefrom. However, these practices suffer from shallow comparisons, task-specific biases, or the extensive time and computation required to perform the comparisons. We present SimEx, a new method for early prediction of inter-dataset similarity using a set of pretrained autoencoders, each of which is dedicated to reconstructing a specific part of known data. Specifically, our method feeds unknown data samples into those pretrained autoencoders and evaluates the difference between the reconstructed output samples and their original inputs. Our intuition is that the more similar the unknown data samples are to the part of known data an autoencoder was trained with, the better the chance that this autoencoder can make use of its trained knowledge, reconstructing output samples closer to the originals. We demonstrate that our method achieves more than a 10x speed-up in predicting inter-dataset similarity compared to common similarity-estimating practices. We also demonstrate that the inter-dataset similarity estimated by our method is well correlated with common practices and outperforms baseline approaches that compare at the sample or embedding level, without newly training anything at comparison time.
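The scoring loop this abstract describes (feed unknown samples through each pretrained autoencoder, compare reconstructions to inputs, rank by error) can be sketched as below. The `reconstruct` interface and the mean-squared-error choice are assumptions for illustration, not details from the paper.

```python
def mean_reconstruction_error(autoencoder, samples):
    """Mean squared error between samples and their reconstructions.
    `autoencoder` is any object exposing reconstruct(sample) -> sample;
    samples are plain lists of floats for this sketch."""
    total = 0.0
    for x in samples:
        x_hat = autoencoder.reconstruct(x)
        total += sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)
    return total / len(samples)

def rank_known_datasets(autoencoders, unknown_samples):
    """Score the unknown data against each known dataset's dedicated
    autoencoder. Lower reconstruction error suggests the unknown data
    is closer to that autoencoder's training distribution; returns
    dataset names ordered most-similar first."""
    errors = {name: mean_reconstruction_error(ae, unknown_samples)
              for name, ae in autoencoders.items()}
    return sorted(errors, key=errors.get)
```

The intuition in the abstract maps directly onto the ordering: an autoencoder trained on data resembling the unknown samples reconstructs them well and therefore ranks first.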

4/10 relevant

arXiv

Spherical functions and local densities on the **space** of $p$-adic quaternion hermitian matrices

…**space** $X$ of quaternion hermitian forms of size $n$ on a ${\mathfrak p}$-adic field with odd residual characteristic, and define typical spherical functions $\omega(x;s)$ on $X$ and give their induction formula on sizes by using local densities of quaternion hermitian forms. We then give a functional equation of the spherical functions with respect to $S_n$ and determine the explicit formulas of $\omega(x;s)$. On the other hand, we define the spherical transform on the Schwartz **space** ${\mathcal S}(K\backslash X)$ based on $\omega(x; s)$ and study the Hecke module structure of ${\mathcal S}(K \backslash X)$.

7/10 relevant

arXiv

On the linear structure of cones

…**spaces**, which do not seem to provide natural interpretations of continuous data types such as the real line, Ehrhard et al. introduced a model of probabilistic higher-order computation based on (positive) cones and a class of totally monotone functions that they called "stable". Crubill{\'e} then proved that this model is a conservative extension of the earlier probabilistic coherence **space** model. We continue these investigations by showing that the category of cones with linear and Scott-continuous functions is a model of intuitionistic linear logic. To define the tensor product, we use the special adjoint functor theorem, and we prove that this operation is an extension of the standard tensor product of probabilistic coherence **spaces**. We also show that the latter are dense in cones, allowing us to lift the main properties of the tensor product of probabilistic coherence **spaces** to general cones. Lastly, we define in the same way an exponential of cones and extend measurability to these new operations.

5/10 relevant

arXiv