  1. First, a network was trained with Gaussian noise injection (Sec. 5.3) and subsequently tested using the delta network GRU formulation given in Sec. 3. A second network was trained directly on the delta …

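    To make the two ingredients of item 1 concrete, here is a minimal sketch (not the paper's code) of Gaussian noise injection at training time and a thresholded delta update at test time; the noise scale, threshold, and function names are illustrative assumptions.

```python
import numpy as np

def noisy_train_input(x, sigma=0.1):
    # Gaussian noise injection during training (sigma is an assumed value).
    return x + np.random.normal(0.0, sigma, size=x.shape)

def delta_stream(x_seq, threshold=0.05):
    # Delta-network-style gating at test time: an input component is
    # propagated only when it differs from the last transmitted value
    # by more than `threshold`; sub-threshold changes are suppressed.
    x_hat = np.zeros_like(x_seq[0])  # last transmitted value
    deltas = []
    for x in x_seq:
        d = np.where(np.abs(x - x_hat) > threshold, x - x_hat, 0.0)
        x_hat = x_hat + d
        deltas.append(d)
    return deltas
```

    Training under input noise makes the network tolerant of the small reconstruction errors that sub-threshold suppression introduces at test time, which is presumably why the two techniques are paired.
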
  2. This paper presents OptNet, a network architecture that integrates optimization problems (here, specifically in the form of quadratic programs) as individual layers in larger end-to-end trainable …

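    As a rough illustration of item 2, the sketch below wires a quadratic program into a PyTorch module via the qpth package (the OptNet authors' reference implementation); the PSD parameterization of Q and the constraint shapes are my assumptions, not the paper's exact layer.

```python
import torch
from qpth.qp import QPFunction

class QPLayer(torch.nn.Module):
    # Solves z* = argmin_z 0.5 z^T Q z + p^T z  s.t.  G z <= h, taking the
    # previous layer's activations as p; gradients flow back through the
    # QP's KKT conditions, so the layer is end-to-end trainable.
    def __init__(self, n):
        super().__init__()
        self.L = torch.nn.Parameter(torch.tril(torch.rand(n, n)))
        self.G = torch.nn.Parameter(torch.randn(n // 2, n))
        self.h = torch.nn.Parameter(torch.ones(n // 2))

    def forward(self, p):
        n = p.shape[-1]
        Q = self.L @ self.L.t() + 1e-3 * torch.eye(n)  # keep Q positive definite
        e = torch.empty(0)  # no equality constraints
        return QPFunction(verbose=False)(Q, p, self.G, self.h, e, e)
```
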
  3. Shallow-Deep Networks: Understanding and Mitigating Network Overthinking

    For prediction transparency, we propose the Shallow-Deep Network (SDN), a generic modification to off-the-shelf DNNs that introduces internal classifiers. We apply SDN to four modern architectures, …

  4. Adaptive Smoothing Gradient Learning for Spiking Neural Networks

    Here, we propose a methodology in which training a prototype neural network gradually evolves into training an SNN by fusing a learnable relaxation degree into the network with random spike noise.

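    The "learnable relaxation degree" in item 4 is in the spirit of surrogate-gradient training for SNNs; below is a generic sketch (the sigmoid surrogate and all names are my choices, not necessarily the paper's formulation) where the forward pass emits a hard spike and the backward pass uses a relaxed sigmoid whose sharpness alpha can itself be learned.

```python
import torch

class RelaxedSpike(torch.autograd.Function):
    # Forward: hard Heaviside spike. Backward: gradient of the relaxed
    # surrogate sigmoid(alpha * v), so alpha controls how "spiking" the
    # effective training-time nonlinearity is.
    @staticmethod
    def forward(ctx, v, alpha):
        ctx.save_for_backward(v, alpha)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        v, alpha = ctx.saved_tensors
        s = torch.sigmoid(alpha * v)
        grad_v = grad_out * alpha * s * (1 - s)
        grad_alpha = (grad_out * v * s * (1 - s)).sum()  # lets alpha be learned
        return grad_v, grad_alpha
```

    Letting alpha grow from small to large gradually morphs the smooth prototype network into a spiking one, which matches the gradual-evolution idea in the snippet.
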
  5. One of the main appeals of neural network-based models is that a single model architecture can often be used to solve a variety of related tasks. However, many recent advances are based on special …

  6. Analogously, to mitigate network overthinking, we propose two SDN-based heuristics: the confidence-based early exits (Section 5.1) and network confusion analysis (Section 5.2).

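    Item 6's confidence-based early exit can be sketched in a few lines (the threshold tau, the head placement, and the batch-size-1 simplification are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

def early_exit_forward(blocks, heads, x, tau=0.9):
    # SDN-style inference: after each block, an internal classifier
    # predicts; if its max softmax probability exceeds tau, stop early
    # instead of "overthinking" in the remaining layers.
    for block, head in zip(blocks, heads):
        x = block(x)
        logits = head(x.flatten(1))
        conf, pred = F.softmax(logits, dim=1).max(dim=1)
        if conf.item() >= tau:  # batch size 1 assumed for clarity
            return pred, conf
    return pred, conf  # deepest head answers if no earlier exit fires
```
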
  7. We proved that any target network with width n, depth L, and inputs in R^d can be approximated by a network with width O(d), where the number of parameters increases by only a factor of L over the …

  8. Network Morphism - PMLR

    We present a systematic study on how to morph a well-trained neural network to a new one so that its network function can be completely preserved. We define this as network morphism in this research.

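    A classic function-preserving morph, of which item 8's network morphism is a generalization, is Net2Net-style widening; the sketch below (a two-layer MLP with random unit duplication, both illustrative choices) widens a hidden layer without changing the network function.

```python
import numpy as np

def widen_hidden_layer(W1, W2, new_width):
    # Function-preserving widening: duplicate randomly chosen hidden
    # units (copying their incoming weights, so the copies compute the
    # same activation) and split each duplicated unit's outgoing weight
    # across its copies, leaving every downstream pre-activation intact.
    old_width = W1.shape[0]              # W1: (hidden, in), W2: (out, hidden)
    idx = np.random.randint(0, old_width, size=new_width - old_width)
    W1_new = np.vstack([W1, W1[idx]])
    counts = np.bincount(idx, minlength=old_width) + 1
    W2_split = W2 / counts               # divide outgoing weight mass
    W2_new = np.hstack([W2_split, W2_split[:, idx]])
    return W1_new, W2_new
```

    Because each copy has the same incoming weights as its source, it produces the same activation, and dividing the outgoing weights by the duplication count keeps the summed contribution unchanged for any elementwise activation.
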
  9. For a single network (depth 4, width 16), Figure 7 indicates that this distribution does not significantly change during training, although there appears to be a slight skew towards larger regions, in …

  10. Our empirical results show that IPM-MPNNs can lead to reduced solving times compared to a state-of-the-art LP solver with time constraints and competing neural network-based approaches; see Figure …