Pages that link to "Item:Q5125679"
From MaRDI portal
The following pages link to Accelerated Distributed Nesterov Gradient Descent (Q5125679):
Displaying 15 items.
- Projected subgradient based distributed convex optimization with transmission noises (Q2073071)
- An accelerated distributed gradient method with local memory (Q2097691)
- Distributed adaptive Newton methods with global superlinear convergence (Q2123229)
- Blended dynamics approach to distributed optimization: sum convexity and convergence rate (Q2139400)
- Zeroth-order algorithms for stochastic distributed nonconvex optimization (Q2151863)
- Predefined-time optimization for distributed resource allocation (Q2205512)
- Accelerated gradient boosting (Q2425242)
- Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization (Q2696917)
- On the convergence of exact distributed generalisation and acceleration algorithm for convex optimisation (Q5026632)
- Hybrid online learning control in networked multiagent systems: A survey (Q5222713)
- A dual approach for optimal algorithms in distributed optimization over networks (Q5859014)
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs (Q6060563)
- Distributed optimization under edge agreements: a continuous-time algorithm (Q6131483)
- ET-PDA: an event-triggered parameter distributed accelerated algorithm for economic dispatch problems (Q6152344)
- Recent theoretical advances in decentralized distributed convex optimization (Q6354638)