
Chapter 1: Introduction to Multi-threading

Multi-threading is a term used to describe an operating system's ability to run multiple threads of execution, apparently simultaneously, within a single process.

1.1 Multi-threading and the Operating System

On most modern operating systems, many different users can run programs (processes) at what appears to be the same time. If the operating system is hosted by a hardware system with a single Central Processing Unit (CPU) then these processes are not running at exactly the same time - they just appear to be. The operating system allocates the CPU and other system resources to the processes that need them and (preemptively) switches these resources from one running program to another. This switching of resources gives the appearance that the programs are running at the same time.

In a very similar way, many modern operating systems enable multiple threads to exist within each process. Again, the operating system handles the allocation of the CPU and various resources to individual threads, and switches these allocations from one running thread to another. Again, this gives the appearance that the threads are running at the same time.
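This model of multiple threads existing within one process can be sketched using Python's threading module. This is purely an illustration of the concept, not the COBOL syntax this manual covers; the names `worker`, `results` and the thread counts are arbitrary:

```python
import threading

# Both threads live inside this single process and share its memory;
# the runtime and operating system interleave their execution.
results = []

def worker(name, count):
    # Each thread runs the same code but records its own identity.
    for i in range(count):
        results.append((name, i))

t1 = threading.Thread(target=worker, args=("thread-1", 3))
t2 = threading.Thread(target=worker, args=("thread-2", 3))
t1.start()
t2.start()
t1.join()
t2.join()

# All six items arrive, but their interleaving is up to the scheduler.
print(len(results))  # 6
```

Running this repeatedly can produce different interleavings of the two threads' entries in `results`, which is exactly the "appearance of running at the same time" described above.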

If processes and threads are handled in a similar way by the operating system, what distinguishes them from each other?

All processes have distinct address spaces, open files and other system resources; this enables each process to run with little or no effect on other processes running at the same time. However, this separation of resources has a run-time cost: creating a process requires the operating system to allocate a new address space and duplicate other system resources, and switching the CPU from one process to another requires a full context switch, including the process's address-space mappings.

Because threads run within a single process, all of the threads associated with a process share the same data and code address space, the same open files and most other resources. Each thread, however, has its own register set and stack space. Creating a thread therefore does not require extensive system memory. When the operating system finds it has to switch execution from one thread to another within the same process, it generally need only switch CPU ownership and register sets. This is why threads are often called lightweight processes.
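The split between shared process data and per-thread stack data can be demonstrated with a short sketch, again using Python's threading module for illustration only. `shared_counter`, `local_total` and the loop counts are made-up names for this example:

```python
import threading

shared_counter = {"value": 0}   # shared: visible to every thread in the process
lock = threading.Lock()

def worker():
    local_total = 0             # per-thread: lives on this thread's own stack
    for _ in range(1000):
        local_total += 1
    with lock:                  # updates to shared data must be coordinated
        shared_counter["value"] += local_total

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared_counter["value"])  # 4000
```

Each thread's `local_total` is private to that thread, while `shared_counter` is the same object in every thread, so the four contributions accumulate in one place.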

1.2 Multi-threading and Multi-processors

As you saw in the previous section, although most modern operating systems enable many different users to run programs (processes) apparently simultaneously, if the operating system is hosted by a hardware system with a single CPU the processes are not running at exactly the same time. The situation is slightly different if the operating system supports and is running on a hardware system that has multiple CPUs (usually known as multi-processor systems). Multi-processor systems enable multiple processes to run at exactly the same time, with one processor executing one process while another processor executes a different process. Such operating system and hardware combinations can provide dramatic improvements in system throughput for many types of applications.

In general, only the operating system can take advantage of multi-processor hardware; only it can determine how and where an application process and its threads are scheduled for execution. However, some operating systems can schedule different threads within a process to be executed on different physical processors, while other operating systems schedule the entire process (and all threads within it) to a single processor. So, while multi-process applications that run under an operating system that takes advantage of multi-processor hardware will benefit from faster system throughput, such benefits can be variable, depending on the operating system hosting the application.

1.3 Multi-threading and the Application

A multi-threaded application is an application whose architecture takes advantage of the multi-threading provided by the operating system. Such applications assign specific jobs to individual threads within the process and the threads communicate, using various means, to synchronize their actions. For example, a data processing application might be designed so that the graphical user interface is completely handled by one thread, while the actual work of the application is performed by another thread. This architecture would enable the complete separation of the business logic from the user-interface logic within the application.

Another interesting use for multi-threading within an application is in the server side of a client/server application. The server could be designed so that for each service request submitted to a main controller/communications thread, a separate service thread is created to process that request and communicate the results back to the controller. The controller would then communicate these results back to the client. The main benefit of such a design is that the server is not tied up acting on one client's request to the extent that it cannot respond to other clients' requests quickly.
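The controller/service-thread design just described can be sketched as follows. This is a hypothetical illustration in Python's threading module, not a server implementation; `handle_request`, the request identifiers and the response strings are all invented for the example:

```python
import threading
import queue

# Service threads hand results back to the controller through a
# thread-safe queue, one simple way for threads to communicate.
results = queue.Queue()

def handle_request(request_id):
    # Do the work for one client's request, then pass the result
    # back to the controller thread.
    results.put((request_id, f"response-{request_id}"))

# Controller: spawn one service thread per incoming request, so no
# single request blocks the others.
requests = [1, 2, 3]
workers = [threading.Thread(target=handle_request, args=(r,)) for r in requests]
for w in workers:
    w.start()
for w in workers:
    w.join()

# Controller collects the results it would send back to the clients.
responses = sorted(results.get() for _ in requests)
print(responses)
```

Because each request runs in its own thread, a slow request delays only its own service thread; the controller remains free to accept and dispatch further requests.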

Of course, client-server applications that have immediate (or very fast) server response have been written without multi-threading for years. This is usually accomplished by a main controller/communications process creating separate service processes for each client request. So why does an application like this need multi-threading? The main answer is: efficient use of system resources.

The creation of a thread requires very little operating system resource (memory, context swap overhead and so on) and the sharing of files and other resources is simplified. The creation of a process requires more system resources and it tends to be much more difficult to communicate and share files between processes. Multi-threading enables you to make the best use of your existing hardware resources and also enables simple resource sharing.

There are, of course, disadvantages in using multi-threaded applications. Each thread must be aware of the resources that it might be sharing with other threads in the same process. You must remember that threads execute at the same time and that if two threads write to the same data item, without any form of execution synchronization, the data item is likely to be corrupted.

For example, consider the following code fragments:

Fragment executed by Thread 1:

Working-Storage Section.
01 group-item.
   05 field-1  pic x.
   05 field-2  pic x.
Procedure Division.
    move 'A' to field-1
    move 'B' to field-2
    display group-item

Fragment executed by Thread 2:

Working-Storage Section.
01 group-item.
   05 field-1  pic x.
   05 field-2  pic x.
Procedure Division.
    move 'C' to field-2
    move 'D' to field-1
    display group-item

Consider the following possible order of thread execution:

Processing step   Thread 1 execution      Thread 2 execution
      1           move 'A' to field-1
      2                                   move 'C' to field-2
      3           move 'B' to field-2
      4                                   move 'D' to field-1
      5           display group-item
      6                                   display group-item

In this example, neither thread will display its expected result. The intended result of Thread 1 was to display 'AB', while the intended result of Thread 2 was to display 'DC'. However, the actual result is that both threads display 'DB'. If you are creating multi-threaded applications, you must synchronize execution between threads, and avoid and resolve data contention between threads, as described in the chapter Synchronizing Execution and Resolving Contention.
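One way to prevent this kind of interleaving is to make each thread's moves and display execute as a single, uninterruptible unit under a mutual-exclusion lock. The sketch below models the two COBOL fragments in Python's threading module purely for illustration; the COBOL synchronization syntax itself is the subject of the later chapter:

```python
import threading

group_item = {"field-1": " ", "field-2": " "}
lock = threading.Lock()
displayed = []

def thread_1():
    with lock:  # the two moves and the display now run as one unit
        group_item["field-1"] = "A"
        group_item["field-2"] = "B"
        displayed.append(group_item["field-1"] + group_item["field-2"])

def thread_2():
    with lock:
        group_item["field-2"] = "C"
        group_item["field-1"] = "D"
        displayed.append(group_item["field-1"] + group_item["field-2"])

t1 = threading.Thread(target=thread_1)
t2 = threading.Thread(target=thread_2)
t1.start()
t2.start()
t1.join()
t2.join()

print(sorted(displayed))  # ['AB', 'DC'], whichever thread runs first
```

With the lock in place, the threads may still run in either order, but each one displays its intended result: 'AB' from Thread 1 and 'DC' from Thread 2, never the corrupted 'DB'.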

Copyright © 2000 MERANT International Limited. All rights reserved.
This document and the proprietary marks and names used herein are protected by international law.
