Pages that link to "Item:Q5256808"
The following pages link to Gradient‐free method for distributed multi‐agent optimization via push‐sum algorithms (Q5256808):
Displaying 15 items.
- Approximate dual averaging method for multiagent saddle-point problems with stochastic subgradients (Q1717855)
- Asynchronous gossip-based gradient-free method for multiagent optimization (Q1724545)
- Zeroth-order algorithms for stochastic distributed nonconvex optimization (Q2151863)
- Gradient-free distributed optimization with exact convergence (Q2165968)
- Cooperative convex optimization with subgradient delays using push-sum distributed dual averaging (Q2230849)
- An improved distributed gradient-push algorithm for bandwidth resource allocation over wireless local area network (Q2278902)
- Incremental gradient-free method for nonsmooth distributed optimization (Q2411165)
- Strong consistency of random gradient‐free algorithms for distributed optimization (Q5346596)
- A gradient‐free distributed optimization method for convex sum of nonconvex cost functions (Q6069332)
- Optimal consensus for uncertain high‐order multi‐agent systems by output feedback (Q6071456)
- Differentially private distributed online learning over time‐varying digraphs via dual averaging (Q6085173)
- Composite optimization with coupling constraints via dual proximal gradient method with applications to asynchronous networks (Q6085458)
- Resilient consensus‐based distributed optimization under deception attacks (Q6089837)
- Distributed continuous‐time constrained convex optimization with general time‐varying cost functions (Q6089867)
- Federated learning for minimizing nonsmooth convex loss functions (Q6112869)