Automation of (Macro)molecular Properties Using a Bootstrapping Swarm Artificial **Neural Network** Method: Databases for Machine Learning

**neural network** method as a machine learning approach. In this method, a (macro)molecular structure is represented by a so-called descriptor vector, which then serves as the input to a so-called bootstrapping swarm artificial **neural network** (BSANN) for training the **neural network**. In this study, we aim to develop an efficient approach for performing the training of an artificial **neural network** using either experimental or quantum mechanics data. In particular, we aim to create different user-friendly, online accessible databases of well-selected experimental (or quantum mechanics) results that can be used as proofs of the concept. Furthermore, with the artificial **neural network** optimized on the training data, we can predict properties and their statistical errors for new molecules using the plugins provided by that web service. Four databases are accessible through the web-based service: a database of 642 small organic molecules with known experimental hydration free energies, a database of 1475 experimental pKa values of ionizable groups in 192 proteins, a database of 2693 mutants in 14 proteins with known experimental values of changes in the Gibbs free energy, and a database of 7101 quantum mechanics heat-of-formation calculations. All the data are prepared and optimized in advance using the AMBER force field in the CHARMM macromolecular simulation program. BSANN, the code that performs the optimization and prediction, is written in the Python programming language. The descriptor vectors of the small molecules are based on the Coulomb matrix and sum-over-bonds properties, and for the macromolecular systems they take into account the chemical-physical fingerprints of the region in the vicinity of each amino acid.
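As a rough sketch of the Coulomb-matrix descriptor mentioned in the abstract (not the paper's BSANN code; the water geometry and the row-norm sorting convention below are illustrative assumptions):

```python
import math

def coulomb_matrix(charges, coords):
    """Coulomb matrix: M[i][i] = 0.5 * Z_i**2.4, M[i][j] = Z_i*Z_j / |R_i - R_j|."""
    n = len(charges)
    m = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                m[i][j] = 0.5 * charges[i] ** 2.4
            else:
                m[i][j] = charges[i] * charges[j] / math.dist(coords[i], coords[j])
    return m

def descriptor(charges, coords):
    """Flatten the matrix with rows sorted by norm, a common permutation-invariance trick."""
    m = coulomb_matrix(charges, coords)
    rows = sorted(m, key=lambda r: -math.hypot(*r))
    return [v for row in rows for v in row]

# Water as a toy molecule: O at the origin, two H atoms ~0.96 Angstrom away
# (hypothetical geometry, for illustration only).
charges = [8, 1, 1]
coords = [(0.0, 0.0, 0.0), (0.96, 0.0, 0.0), (-0.24, 0.93, 0.0)]
vec = descriptor(charges, coords)  # 9-dimensional descriptor for a 3-atom molecule
```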

8/10 relevant

bioRxiv

Characterizing **neural** coding performance for populations of sensory neurons: comparing a weighted spike distance metrics to other analytical methods.

**neural** population. For stimuli to be discriminated reliably, differences in population responses must be accurately decoded by downstream **networks**. Several methods of comparing the patterns of responses and their differences have been used by neurophysiologists to characterize the accuracy of the sensory responses studied. Among the most widely used analyses, we note methods based on Euclidean distances or on spike metric distances such as the one proposed by van Rossum. Methods based on artificial **neural networks** and machine learning (such as self-organizing maps) have also gained popularity for recognizing and/or classifying specific input patterns. In this brief report, we first compare these three strategies using datasets from three different sensory systems. We show that the input-weighting procedure inherent to artificial **neural networks** allows the extraction of the information most relevant to the discrimination task, and thus that the method performs particularly well. To combine the ease of use and rapidity of methods such as spike metric distances with the advantage of weighting the inputs, we propose a measure based on geometric distances where each dimension is weighted proportionally to how informative it is. In each dimension, the overlap between the distributions of responses to the two stimuli is quantified using the Kullback-Leibler divergence measure. We show that this Kullback-Leibler-weighted spike train distance (KLW distance) performs as well as or better than the artificial **neural network** we tested and outperforms the more traditional spike distance metrics. We applied information-theoretic analysis to leaky-integrate-and-fire model neuron responses and compared their encoding accuracy with the discrimination accuracy quantified through these distance metrics, showing a high degree of correlation between the results of the two approaches for quantifying coding performance. We argue that our proposed measure provides the flexibility and ease of use sought by neurophysiologists while extracting the relevant information more effectively than more traditional methods.
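A minimal sketch of the Kullback-Leibler-weighting idea described above, assuming Gaussian response distributions per bin so the KL divergence has a closed form (the paper's own estimator may differ):

```python
import math

def gaussian_kl(m1, s1, m2, s2):
    """KL divergence KL(N(m1, s1^2) || N(m2, s2^2)) between two 1-D Gaussians."""
    return math.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2 * s2 ** 2) - 0.5

def klw_weights(resp_a, resp_b):
    """Per-dimension symmetrized KL between responses to two stimuli.
    resp_a, resp_b: lists of per-trial response vectors (trials x bins)."""
    nbins = len(resp_a[0])
    weights = []
    for d in range(nbins):
        xa = [r[d] for r in resp_a]
        xb = [r[d] for r in resp_b]
        ma, mb = sum(xa) / len(xa), sum(xb) / len(xb)
        sa = math.sqrt(sum((x - ma) ** 2 for x in xa) / len(xa)) + 1e-6
        sb = math.sqrt(sum((x - mb) ** 2 for x in xb) / len(xb)) + 1e-6
        weights.append(gaussian_kl(ma, sa, mb, sb) + gaussian_kl(mb, sb, ma, sa))
    return weights

def klw_distance(x, y, weights):
    """Geometric distance with each dimension scaled by how informative it is."""
    return math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, y)))

# Toy data: bin 0 discriminates the two stimuli, bin 1 carries no information.
resp_a = [[5.0, 2.0], [6.0, 3.0], [5.0, 2.0]]
resp_b = [[1.0, 2.0], [0.0, 3.0], [1.0, 2.0]]
w = klw_weights(resp_a, resp_b)       # w[0] large, w[1] ~ 0
d = klw_distance(resp_a[0], resp_b[0], w)
```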

4/10 relevant

bioRxiv

A Simple yet Effective Baseline for Robust Deep Learning with Noisy Labels

**neural networks** have shown their capacity to memorize training data, even with noisy labels, which hurts generalization performance. To mitigate this issue, we propose a simple but effective baseline that is robust to noisy labels, even with severe noise. Our objective involves a variance regularization term that implicitly penalizes the Jacobian norm of the **neural network** on the whole training set (including the noisy-labeled data), which encourages generalization and prevents overfitting to the corrupted labels. Experiments on both synthetically generated incorrect labels and realistic large-scale noisy datasets demonstrate that our approach achieves state-of-the-art performance with a high tolerance to severe noise.
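An illustrative sketch of penalizing a Jacobian norm directly, via finite differences along random directions (the paper instead uses a variance regularization term that penalizes it implicitly; the toy linear model below is an assumption, not the authors' setup):

```python
import math
import random

def jacobian_norm_penalty(f, x, eps=1e-4, n_probes=8, rng=None):
    """Monte-Carlo estimate of the squared Jacobian norm of a scalar model f at x:
    averages (directional derivative)^2 over random unit directions, which for
    scalar f equals ||grad f(x)||^2 / dim(x) in expectation."""
    rng = rng or random.Random(0)
    d = len(x)
    fx = f(x)
    total = 0.0
    for _ in range(n_probes):
        v = [rng.gauss(0, 1) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in v))
        v = [c / norm for c in v]                     # random unit direction
        xp = [a + eps * c for a, c in zip(x, v)]
        diff = (f(xp) - fx) / eps                     # finite-difference slope
        total += diff * diff
    return total / n_probes

# Toy scalar model f(x) = w . x, so ||grad f||^2 = 1 + 4 = 5 everywhere;
# a training loss would then be: task_loss + lam * penalty.
w = [1.0, 2.0]
f = lambda x: sum(wi * xi for wi, xi in zip(w, x))
penalty = jacobian_norm_penalty(f, [0.3, -0.7])
```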

6/10 relevant

arXiv

Post-Disturbance Dynamic Frequency Features Prediction Based on Convolutional **Neural Network**

**neural network** (CNN). The operation data before and immediately after the disturbance are used to construct the input tensor of the CNN, with the dynamic frequency features of the power system after the disturbance as the output. Operation data of the power system, such as the unbalanced power of generators, have spatial distribution characteristics. The electrical distance is presented to describe the spatial correlation of power system nodes, and the t-SNE dimensionality reduction algorithm is used to map the high-dimensional distance information of the nodes onto a 2-D plane, thereby constructing a CNN input tensor that reflects the spatial distribution of the node operation data on the 2-D plane. A CNN with a deep **network** structure and local connectivity is adopted, and the **network** parameters are trained using the backpropagation gradient descent algorithm. Case study results on an improved IEEE 39-node system and an actual power grid in the USA show that the proposed method can predict the lowest frequency of the power system after a disturbance accurately and quickly.
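A minimal sketch of the final rasterization step, i.e., scattering per-node features onto a 2-D grid once low-dimensional node coordinates (e.g., from t-SNE) are available; the node layout and power values below are hypothetical:

```python
def rasterize(coords_2d, values, grid=8):
    """Map per-node features onto a grid x grid image based on 2-D positions,
    so spatially correlated nodes land in nearby pixels of the CNN input."""
    xs = [p[0] for p in coords_2d]
    ys = [p[1] for p in coords_2d]
    xmin, xmax = min(xs), max(xs)
    ymin, ymax = min(ys), max(ys)
    img = [[0.0] * grid for _ in range(grid)]
    for (x, y), v in zip(coords_2d, values):
        i = min(int((y - ymin) / (ymax - ymin + 1e-12) * grid), grid - 1)
        j = min(int((x - xmin) / (xmax - xmin + 1e-12) * grid), grid - 1)
        img[i][j] += v  # accumulate if several nodes share a pixel
    return img

# Hypothetical 4-node layout (stand-in for a t-SNE embedding) with
# per-node unbalanced power as the feature channel.
coords = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
power = [0.5, -0.2, 0.1, 0.8]
img = rasterize(coords, power, grid=2)
```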

5/10 relevant

arXiv

Recognition of Handwritten Digit using Convolutional **Neural Network** in Python with Tensorflow and Comparison of Performance for Various Hidden Layers

**Neural Network** (ANN), deep learning has brought a dramatic twist to the field of machine learning by making it more like Artificial Intelligence (AI). Deep learning is used remarkably in a vast range of fields because of its diverse applications, such as surveillance, health, medicine, sports, robotics, and drones. In deep learning, the Convolutional **Neural Network** (CNN) is at the center of spectacular advances that mix Artificial **Neural Networks** (ANN) with up-to-date deep learning strategies. It has been used broadly in pattern recognition, sentence classification, speech recognition, face recognition, text categorization, document analysis, scene recognition, and handwritten digit recognition. The goal of this paper is to observe the variation in the accuracy of a CNN classifying handwritten digits for various numbers of hidden layers and epochs, and to compare the resulting accuracies. For this performance evaluation of the CNN, we performed our experiment using the Modified National Institute of Standards and Technology (MNIST) dataset. Further, the **network** is trained using stochastic gradient descent and the backpropagation algorithm.

8/10 relevant

Preprints.org

Testing the robustness of attribution methods for convolutional **neural networks** in MRI-based Alzheimer's disease classification

**neural network** (CNN) with identical training settings in order to separate structural MRI data of patients with Alzheimer's disease and healthy controls. Afterwards, we produced attribution maps for each subject in the test data and quantitatively compared them across models and attribution methods. We show that visual comparison is not sufficient and that some widely used attribution methods produce highly inconsistent outcomes.

7/10 relevant

arXiv

Biometric Face Presentation Attack Detection with Multi-Channel Convolutional **Neural Network**

**Neural Network** based approach for presentation attack detection (PAD). We also introduce the new Wide Multi-Channel presentation Attack (WMCA) database for face PAD, which contains a wide variety of 2D and 3D presentation attacks for both impersonation and obfuscation attacks. Data from different channels such as color, depth, near-infrared, and thermal are available to advance the research in face PAD. The proposed method was compared with feature-based approaches and found to outperform the baselines, achieving an ACER of 0.3% on the introduced dataset. The database and the software to reproduce the results are made available publicly.

7/10 relevant

arXiv

PgNN: Physics-guided **Neural Network** for Fourier Ptychographic Microscopy

**neural network** (PgNN), where the reconstructed image in the complex domain is treated as the learnable parameters of the **neural network**. Since the optimal parameters of the PgNN can be derived by minimizing the difference between the model-generated images and the real captured angle-varied images of the same scene, the proposed PgNN gets rid of the massive training data required by traditional supervised methods. Applying the alternate updating mechanism and total variation regularization, PgNN can flexibly reconstruct images with improved performance. In addition, the Zernike mode is incorporated to compensate for optical aberrations, enhancing the robustness of FP reconstructions. As a demonstration, we show that our method can reconstruct images with smooth performance and detailed information in both simulated and experimental datasets. In particular, when validated on an extension of a high-defocus, high-exposure tissue section dataset, PgNN outperforms traditional FP methods with fewer artifacts and more distinguishable structures.

7/10 relevant

arXiv

Timage -- A Robust Time Series Classification Pipeline

**Neural Networks** and a 2D representation of time series known as Recurrence Plots. In order to utilize the research done in the area of image classification, where Deep **Neural Networks** have achieved very good results, we use a Residual **Neural Network** architecture known as ResNet. As preprocessing of time series is a major part of every time series classification pipeline, the proposed method simplifies this step and requires only a few parameters. For the first time, we propose a method for multi-time-series classification: training a single **network** to classify all datasets in the archive. We are among the first to evaluate the method on the latest 2018 release of the UCR archive, a well-established time series classification benchmarking dataset.
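A minimal sketch of a recurrence plot, the 2-D time-series representation mentioned above, using a simple scalar-difference threshold (real pipelines often compare delay-embedded state vectors instead of raw samples):

```python
def recurrence_plot(series, eps):
    """Binary recurrence plot: R[i][j] = 1 if |x_i - x_j| <= eps, else 0.
    The resulting n x n matrix can be fed to an image classifier such as ResNet."""
    n = len(series)
    return [[1 if abs(series[i] - series[j]) <= eps else 0 for j in range(n)]
            for i in range(n)]

# Toy series: samples 0 and 2 are close, as are samples 1 and 3,
# so the plot shows a checkerboard-like recurrence structure.
ts = [0.0, 0.9, 0.1, 1.0]
rp = recurrence_plot(ts, eps=0.2)
```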

5/10 relevant

arXiv

Density Encoding Enables Resource-Efficient Randomly Connected **Neural Networks**

**neural networks** known as Random Vector Functional Link (RVFL) **networks**, since their simple design and extremely fast training time make them very attractive for solving many applied classification tasks. We propose to represent input features via the density-based encoding known in the area of stochastic computing, and to use the binding and bundling operations from the area of hyperdimensional computing to obtain the activations of the hidden neurons. Using a collection of 121 real-world datasets from the UCI Machine Learning Repository, we empirically show that the proposed approach demonstrates higher average accuracy than the conventional RVFL. We also demonstrate that it is possible to represent the readout matrix using only integers in a limited range with minimal loss of accuracy. In this case, the proposed approach operates only on small n-bit integers, which results in a computationally efficient architecture. Finally, through hardware FPGA implementations, we show that such an approach consumes approximately eleven times less energy than the conventional RVFL.
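An illustrative sketch of density encoding combined with binding and bundling to form hidden activations, using bipolar hyperdimensional vectors (the paper's exact encoding, dimensionality, and readout details may differ):

```python
import random

N = 64  # dimensionality of the bipolar hyperdimensional vectors (an assumption)

def density_encode(value, n=N):
    """Density/thermometer code: the fraction of +1 components is
    proportional to the (already normalized) input value in [0, 1]."""
    k = round(max(0.0, min(1.0, value)) * n)
    return [1] * k + [-1] * (n - k)

def bind(a, b):
    """Binding = elementwise multiplication of bipolar vectors."""
    return [x * y for x, y in zip(a, b)]

def bundle(vectors):
    """Bundling = elementwise majority vote (sign of the component sum)."""
    return [1 if sum(col) >= 0 else -1 for col in zip(*vectors)]

rng = random.Random(42)

def random_key():
    """Random bipolar key per feature, playing the role of RVFL's random weights."""
    return [rng.choice((-1, 1)) for _ in range(N)]

# Toy, already-normalized input: each feature is density-encoded, bound to its
# key, and all bound vectors are bundled into one hidden-layer activation vector.
features = [0.2, 0.8, 0.5]
keys = [random_key() for _ in features]
hidden = bundle([bind(density_encode(f), k) for f, k in zip(features, keys)])
```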

10/10 relevant

arXiv