Computation and optimization methods for multiresource queues (Q1385770)

From MaRDI portal
Language: English
Label: Computation and optimization methods for multiresource queues
Description: scientific article

    Statements

    Title: Computation and optimization methods for multiresource queues (English)
    Publication date: 1 November 1998
    The operation of a large class of complex cybernetic systems that process and transmit information is described with fair accuracy by models of multistream queueing systems. The need for multistream models emerges with particular clarity in digital integrated-service queueing networks, where a single network transmits different kinds of information: speech, video, fax, etc. Yet models of the stochastic processes described by multistream queues rest on several assumptions, the main one being fundamental to classical queueing theory: a customer from any of the streams occupies at most one server (resource) at any time. In many real queueing systems this assumption is not satisfied. For such systems \textit{K. J. Omahen} [J. Assoc. Comput. Machin. 24, 646-663 (1977; Zbl 0401.68014)] introduced the notion of multiresource queues.

    A multiresource queue (MRQ) is a multiserver system in which customers of different types request a random number of servers simultaneously. MRQ models are doubly relevant: on the one hand they generalize classical queueing models, and on the other hand they describe the operation of various complex cybernetic systems fairly precisely. Two types of MRQ are distinguished, depending on the strategy by which an arriving customer captures servers and on the fate of a server that has completed its ``share'' of serving the customer [see \textit{L. A. Ponomarenko} and the author, Elektron. Modelirov. 11, No. 2, 67-70 (1989) and \textit{Y. de Serre} and \textit{L. G. Mason}, IEEE Trans. Commun. 36, No. 6, 675-684 (1988)]:
    1) The servers requested by one customer start processing simultaneously, and a server that has completed its share of the processing is blocked until the last server in the group finishes its work.
    2) The servers requested by one customer do not necessarily start at the same time; each server is assigned an individual task in serving the customer, and once that task is completed the server becomes available to other customers in the queue.
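    As an illustration of the difference between the two server-capture strategies, the following is a minimal simulation sketch, not taken from the paper under review: it assumes Poisson arrivals, exponentially distributed individual tasks, a resource requirement drawn uniformly from 1..max_request, and a pure loss discipline (a customer that cannot seize enough servers is rejected rather than queued). All parameter names (lam, mu, n_servers, max_request) are hypothetical choices made only for this example.

```python
import heapq
import random


def simulate_mrq(strategy, lam=1.0, mu=1.0, n_servers=10,
                 max_request=3, horizon=10_000.0, seed=1):
    """Estimate the blocking probability of a multiresource loss queue.

    strategy 1: the servers captured by a customer all start together and
                are all held until the longest of their tasks finishes.
    strategy 2: each captured server works on its own task and is released
                as soon as that task is done.
    """
    rng = random.Random(seed)
    t, free = 0.0, n_servers
    releases = []            # min-heap of (release time, servers freed)
    arrivals = blocked = 0

    while t < horizon:
        t += rng.expovariate(lam)                 # next Poisson arrival
        while releases and releases[0][0] <= t:   # free servers done by time t
            _, k = heapq.heappop(releases)
            free += k
        arrivals += 1
        need = rng.randint(1, max_request)        # random resource demand
        if need > free:
            blocked += 1                          # loss discipline: reject
            continue
        free -= need
        tasks = [rng.expovariate(mu) for _ in range(need)]
        if strategy == 1:
            # all servers in the group stay blocked until the last task ends
            heapq.heappush(releases, (t + max(tasks), need))
        else:
            # each server is released individually when its own task ends
            for s in tasks:
                heapq.heappush(releases, (t + s, 1))
    return blocked / arrivals


if __name__ == "__main__":
    for strat in (1, 2):
        print(f"type {strat}: blocking probability ~ {simulate_mrq(strat):.3f}")
```

    Under these assumptions the type-1 discipline holds every captured server until the longest task in the group completes, so for the same load it typically produces a higher blocking probability than the type-2 discipline, in which servers return to the pool one by one.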
    Keywords: multistream models; queueing theory; random number of servers