DARTS
swMATH: 36213
MaRDI QID: Q51915
FDO: Q51915
Author name not available
Official website: https://arxiv.org/abs/1806.09055
Source code repository: https://github.com/quark0/darts
Cited In (75)
- Betty
- HardCoReNAS
- PLiNIO
- ACNet
- OMLT
- AutoSTG
- GraphNAS
- Res2Net
- Puzzle-CAM
- Symbolic DNN-Tuner
- BET
- How can machine learning and optimization help each other better?
- Computer vision. Algorithms and applications
- MobileNetV2
- Traditional and accelerated gradient descent for neural architecture search
- ESAE: Evolutionary Strategy-Based Architecture Evolution
- Efficient Evolutionary Deep Neural Architecture Search (NAS) by Noisy Network Morphism Mutation
- Pruning deep convolutional neural networks architectures with evolution strategy
- Symbolic DNN-tuner
- Meta-learning PINN loss functions
- NetPyNE
- Automated Reinforcement Learning (AutoRL): A Survey and Open Problems
- One-stage tree: end-to-end tree builder and pruner
- FDMRP
- GRADIENT
- Caltech-UCSD Birds
- DeepArchitect
- AutoKeras
- OpenQEMIST
- MiniGrid
- DENSER
- learn2learn
- BOML
- Dragonfly
- ThiNet
- VMAF
- ProBO
- Automated deep abstractions for stochastic chemical reaction networks
- A decomposable Winograd method for N-D convolution acceleration in video analysis
- SSN: learning sparse switchable normalization via SparsestMax
- BinaryConnect
- FiLM
- TopicRNN
- Net2Net
- BenchOpt
- AutoAugment
- EfficientNet
- MobileNets
- ShuffleNet
- FedNAS
- ReCirq
- gplearn
- AutoSlim
- AutoFormer
- AttnGAN
- Auto-DeepLab
- DADA
- FBNetV2
- Dist-GAN
- AdversarialNAS
- AutoGAN
- BossNAS
- MEAL
- MnasNet
- LR-GAN
- EnlightenGAN
- MGAN
- ProxylessNAS
- PSGAN
- SNAS
- StackGAN
- TransGAN
- TADAM
- HAWQ
- Architecture self-attention mechanism: nonlinear optimization for neural architecture search
This page was built for software: DARTS