The distribution of processing and communications activity between components to minimize the resources used and the time spent waiting by any single component.
A technique for distributing the user load evenly among identical servlet objects distributed across several computers running Netscape Application Server.
The process of dividing work done by each available processor into approximately equal amounts.
In routing, the ability of a router to distribute traffic over all its network ports that are the same distance from the destination address. Good load-balancing algorithms use both line speed and reliability information. Load balancing increases the utilization of network segments, thus increasing effective network bandwidth.
Spreading user requests among available servers within a cluster of servers, based on a variety of algorithms.
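The "variety of algorithms" mentioned in the entry above can be illustrated with a minimal sketch of two common ones, round-robin and least-connections; the server names are hypothetical and the bookkeeping is deliberately simplified.

```python
import itertools

# Hypothetical server pool; names are illustrative only.
servers = ["app1", "app2", "app3"]

# Round-robin: hand out servers in a fixed rotating order.
_rotation = itertools.cycle(servers)

def round_robin():
    return next(_rotation)

# Least-connections: pick the server with the fewest active requests.
active = {s: 0 for s in servers}

def least_connections():
    choice = min(active, key=active.get)
    active[choice] += 1   # caller would decrement when the request ends
    return choice
```

Round-robin needs no load information at all; least-connections trades that simplicity for awareness of how busy each server currently is.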
The process by which load (number of requests, number of users, etc.) is spread throughout a network so that no individual device becomes overwhelmed by too much traffic, causing it to fail. Load balancing also involves redirection in the case of server or device failure to allow for failover and promote fault tolerance.
The process whereby multiple service units are used equally. For example, if two communications lines are available between two points, each carries half of the traffic load.
Automatic balancing of requests among replicated servers to ensure that no server is overloaded.
Distributing network traffic evenly over two or more servers to provide better response times and reduce server overload.
Distribution of calls among terminating Gateways based on the Priorities and Weights assigned by the Buyer.
A technique used by some web hosts to scale the performance of a web server by distributing its client requests across multiple servers within the cluster. Each server can specify the load percentage that it will handle, or the load can be equally distributed across all the servers. If a server fails, the load from that server is automatically redistributed among the remaining servers.
The process of distributing the demands by client computers for network services across multiple servers in order to optimize performance by fully utilizing the capacity of all available servers.
Distributing processing and communications activity across multiple systems or network devices so that no single device is overwhelmed.
A method of distributing traffic across the physical network ports in a link aggregate. Unicast and multicast traffic is distributed across all ports in the aggregate; broadcast traffic is always sent out the first port.
Enables the even distribution of data and/or processing packets across available network resources. For example, load balancing may distribute the incoming packets evenly to all servers, or redirect the packets to the next available server.
A hardware and/or software means of distributing requests to multiple computers to satisfy user requests in a timely, transparent way without overloading any computer on the network.
A feature in which HTTP requests are distributed among application Web servers so that no single server is overloaded.
The process of balancing contacts between multiple sites, queues, or agent groups.
Shifting workloads among available processors and systems to increase efficiency.
The process of dividing network traffic between parallel paths or devices in order to handle more transactions without overloading a specific resource.
A mechanism that enables balancing traffic between different servers. All traffic is directed to a single IP, but the load-balancer smartly divides the traffic between the different servers.
A design in which a PBX or central office switch allows each network group to share traffic and communication paths.
A feature in which HTTP requests are distributed among origin servers so that no single server is overloaded.
The process of dividing a single server's workload between two or more servers to improve efficiency, and ultimately serve users faster.
Optimizing performance and scalability by deploying WebSphere Portal Server and other requisite software across multiple servers.
Load balancing is the act of distributing the load of a single web site or other service to multiple physical servers. It offers lower cost and higher performance and reliability than a single large enterprise-scale server. A load-balanced server set provides redundancy and practically unlimited scalability. If one server goes down, there is no noticeable effect on end users and no downtime. A load-balanced set can consist of as few as two servers, or of thousands of servers. The term load balancing refers to front-end (i.e. incoming traffic) distribution only; load balancing does not include back-end functionality such as data replication or mirroring; that further service is known as clustering.
Techniques which aim to spread tasks among the processors in a parallel processing system in order to avoid some processors standing idle while others have tasks queueing for execution. Load balancing may be performed by (a) heavily loaded processors sending tasks to other processors; (b) idle processors requesting work from others; (c) some centralized task distribution mechanism; or (d) some combination of these options.
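Option (c) above, a centralized task distribution mechanism from which idle processors pull their next task, can be sketched with a shared queue; this is an illustrative toy using threads as stand-ins for processors, not a production scheduler.

```python
import queue
import threading

# Centralized distribution: one shared queue of tasks. No worker sits
# idle while tasks remain, because each idle worker pulls the next one.
tasks = queue.Queue()
for i in range(20):
    tasks.put(i)

results = []
lock = threading.Lock()

def worker():
    while True:
        try:
            t = tasks.get_nowait()
        except queue.Empty:
            return                    # no work left; worker exits
        with lock:
            results.append(t * t)     # stand-in for real processing

threads = [threading.Thread(target=worker) for _ in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()
```

Options (a) and (b) differ only in who initiates the transfer (busy sender vs. idle requester); the shared-queue form sidesteps that by making all work globally visible.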
Distributing data across a network of servers in order to ensure that a single Web server does not get overloaded with work, thereby affecting performance.
An optimization strategy that aims at evenly distributing the workload among processors.
Management of workload through distributing requests evenly across available server machines. ISL Advanced network primarily distributes load based on a request's geographical origin to achieve faster operation.
Load balancing is a feature that is integrated into catalog solutions in order to prevent system crashes and ensure trouble-free access to the system when a large number of users access an electronic marketplace at the same time.
A technique for scaling performance by distributing load among multiple servers. Network Load Balancing distributes load for networked client/server applications in the form of client requests that it partitions across multiple cluster hosts.
When a server cluster shares the information requests equally over all of its active nodes. This can be done either statically, by tying clients directly to different back-end servers, or dynamically by having each client tied to a different back-end server controlled by software or a hardware device. The Network Load Balancing feature of Windows 2000 Advanced Server provides load balancing for HTTP services.
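The static and dynamic assignment styles described in this entry might be sketched as follows; the back-end names are hypothetical, with hashing standing in for "tying clients directly to different back-end servers" and a load table standing in for the controlling software or device.

```python
import hashlib

backends = ["srv-a", "srv-b", "srv-c"]

# Static: hash the client's identity, so the same client always lands
# on the same back-end server, with no runtime coordination needed.
def static_assign(client_ip):
    h = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return backends[h % len(backends)]

# Dynamic: a balancer tracks live load and sends each new client to
# whichever server currently carries the least.
load = {b: 0 for b in backends}

def dynamic_assign():
    b = min(load, key=load.get)
    load[b] += 1
    return b
```

The static scheme is trivially cheap but cannot react to a hot server; the dynamic scheme adapts but requires a component that sees every connection.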
A technique for distributing the user load evenly among multiple servers in a cluster. Also see sticky load balancing.
This is used to distribute data across two or more servers to ensure that a single web server is not overloaded with traffic, which would degrade network performance.
Popular Web services can become too busy to run from a single computer, and administrators may choose to distribute the document collection and processing across several networked computers. To reduce the Server Load that numerous users place on critical resources, the server may be configured to perform automatic balancing between available computers. By passing off requests to alternating machines, the server can significantly improve response time (often transparently).
A feature by which client connections are distributed evenly among multiple listeners, dispatchers, instances, and nodes so that no single component is overloaded. Oracle Network Services support client load balancing and connection load balancing.
The ability of processors to schedule themselves to ensure that all are kept busy while instruction streams are available.
Load balancing is dividing the amount of work that a server has to do between two or more servers so that more work gets done in the same amount of time.
Directing new connections to the node with the most available resources. In a LAT network, load balancing is performed by the terminal server, which routes connection requests to the service with the highest rating.
Measure of how evenly the work load is distributed among an application's processes. When an application is perfectly balanced, all processes share the total work load and complete at the same time.
The distribution of tasks amongst multiple processors such that all processors finish their tasks at approximately the same time, thus reducing idle time in any of the processors.
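One common way to approximate "all processors finish their tasks at approximately the same time" is the longest-processing-time-first greedy heuristic; this sketch assumes task durations are known up front, and is a heuristic rather than an optimal partitioner.

```python
import heapq

def partition(task_times, n_procs):
    """Assign each task (largest first) to the currently least-loaded
    processor, so all processors finish at roughly the same time."""
    heap = [(0, p) for p in range(n_procs)]       # (load, processor id)
    assignment = {p: [] for p in range(n_procs)}
    for t in sorted(task_times, reverse=True):
        load, p = heapq.heappop(heap)             # least-loaded processor
        assignment[p].append(t)
        heapq.heappush(heap, (load + t, p))
    return assignment

plan = partition([7, 5, 4, 3, 2, 2, 1], 2)
```

With the sample durations above, both processors end up with a total load of 12, so neither sits idle waiting for the other.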
The practice of splitting communication into two (or more) routes. By balancing the traffic on each route, communication is made faster and more reliable.
In routing, the ability of the router to distribute traffic over all its network ports that are the same distance from the destination address. It increases the use of network segments, which increases the effective network bandwidth.
Balancing traffic between two or more destinations.
The ability to redistribute load (read/write requests) to an alternate path between server and storage device; load balancing helps to maintain high-performance networking.
Dividing the load of a single website or service over several web servers.
Load balancing is a technique used to distribute the load of a system among the available servers or machines, so that all the servers get their fair share of incoming requests.
A feature of Inn-Reach software that balances the lending and borrowing of individual institutions so that no institution bears a greater burden of lending.
All machines are load balanced so that no one machine is heavily used while the rest have light loads.
The process of involving multiple computers in serving a common processing task to divide the work and, therefore, to balance the load between or among them.
A technique that distributes network traffic along parallel paths in order to maximize the available network bandwidth while providing redundancy.
A technique used for scaling the performance of a server-based program by distributing client requests across multiple servers.
Process that installs all next-hop destinations for an active route in the forwarding table. You can use load balancing across multiple paths between routers. The behavior of load balancing varies according to the version of the Internet Protocol ASIC in the router. Also called per-packet load balancing.
Fine-tuning of a computer system, network or disk subsystem in order to more evenly distribute the data and/or processing across available resources. For example, in clustering, load balancing might distribute the incoming transactions evenly to all servers, or it might redirect them to the next available server.
A technique used by Windows Clustering to scale the performance of a server-based program (such as a Web server) by distributing its client requests across multiple servers within the cluster. Each host can specify the load percentage that it will handle, or the load can be equally distributed across all the hosts. If a host fails, Windows Clustering dynamically redistributes the load among the remaining hosts. See also: cluster; host
Distributing processing and communications activity evenly across a computer network so that no single device is overwhelmed. Load balancing is especially important for networks where it is difficult to predict the number of requests that will be issued to a server. Busy Web sites typically employ two or more Web servers in a load balancing scheme. If one server starts to get swamped, requests are forwarded to another server with more capacity. Load balancing can also refer to the communications channels themselves.
In computing, load balancing is a technique (usually performed by load balancers) to spread work between many computers, processes, hard disks or other resources in order to get optimal resource utilization and decrease computing time.